Pricing at Seaplane: What Is a Global Compute Unit?
Cloud billing is awful.
This is hardly news. Actually, it’s a bit of a meme. The simple fact that multiple third-party services exist solely to help users parse their cloud bills is a testament to how user-unfriendly cloud billing has become.
The fractured nature of cloud offerings, where every little piece of performance and functionality is a discrete line item, requires a lot of DIY to use and expertise to understand. This is especially true when it comes time to scale. Billing is just as fractured and opaque as the services themselves, hiding (intentionally or otherwise) an incredibly high total cost of ownership. It’s difficult enough estimating costs on a monthly basis. When you wrap in the staffing and operational needs that come with building and operating a bespoke cloud deployment, it becomes nearly impossible to know how much any provider really costs day to day, month to month, and year to year.
When starting Seaplane, we knew we’d need to approach billing differently than the established players. This naturally poses a challenge because we work directly with those established players to make them more approachable and consumable for your average developer. Our problem was twofold:
- We treat cloud and edge resources as utilities, but we’re still being billed on the backend for them as complex, multi-layered services.
- We cannot pass that complexity off to our customers.
Luckily, we have a few key advantages over the large public cloud providers of the world, namely the benefit of being able to study those providers’ work and listen to what the market does, and doesn’t, like about it.
The result is a new unit of measure for billing: Global Compute Units™.
Why Global Compute Units?
Seaplane is the product of user research, and our billing structure reflects the hundreds of conversations we’ve had since Seaplane was just an idea on the back of a napkin. Over the course of those sessions we learned a lot about the average cloud experience, and one thing became very clear: billing is broken, but change is scary.
| “Cloud costs are getting insane and we don’t know what we’ll be charged for or what the bill will be. All we know is it’ll be something big that will get us a lot of credit card points.” - DevOps Engineer |
Walking away from those conversations we sketched out the rough shape of how we wanted to price Seaplane products. As the old saying goes, we wanted to “put our money where our mouth is” and have our pricing reflect our mission of simplifying the cloud. To do that we needed to follow a few guiding principles:
- Our pricing needs to be as simple as our product. The central thrust of Seaplane is the belief that creators should ship apps, not infrastructure. If we wanted to be the solution to cloud complexity then we needed to tackle every facet of that complexity — from billing to implementation to total cost of ownership.
- Users should (actually) only pay for what they use. Ostensibly, many clouds follow this model, but in practice it’s rarely so simple. We wanted our metering to be as close as possible to a true “pay for what you use” model.
- Using Seaplane should be cost-effective for any team, regardless of size. Similar to our “pay for what you use” ethos, we wanted Seaplane to not only offer a better, more streamlined cloud experience, but a more cost-effective experience overall. We wanted to offer additional features (balanced against the complexity of offering more features), but we didn’t want to force users into upgrades they didn’t need just to access the services they did.
- Billing should be better, but still familiar. There’s a reason the save icon is still a floppy disk. The teams we spoke to almost universally reached for familiar metrics and milestones when describing hypothetical pricing, and those touchpoints exist for a reason. We wanted our model to be intuitive.
What are Global Compute Units?
A Global Compute Unit (GCU) is the way we express the combination of hardware and services that makes the Seaplane platform available across zones, regions, and clouds.
Here’s what’s included in a GCU:
- Continuously optimized app placement and delivery via Seaplane Autopilot
- Autoscaling of deployed cores, memory, and bandwidth to balance performance, availability, and cost
- Global load balancing with edge ingress and app delivery for a constantly optimized end user experience
- Self-healing deployments that autonomously route around infrastructure outages
- Dynamic, blended infrastructure deployments that minimize risk
- Automated and continuous egress cost reduction
- A range of hardware platforms tuned by workload and cost requirements
The GCU model rhymes with the vCPU + memory per millisecond model, but it abstracts away the strictly defined hardware component, which on its own neglects the many ancillary services and the work required to make that hardware useful as you scale out. Instead, our focus is on the outcomes achieved by the machines plus all the additional services supporting that hardware. In this instance, that outcome is highly available, resilient compute capacity that gets your app where it needs to be: as near your users as possible.
Instead of divorcing the most basic unit of compute from the essential services a user needs to run a multi-region or multi-cloud deployment (or worse, breaking out essential services purely to obscure the total cost) we wanted to consider those services as inherent to the Seaplane offering. This provides a better experience for our users and yours, and makes it much easier for any given team to estimate their total cost.
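To make that contrast concrete, here’s a minimal sketch in Python, using entirely hypothetical rates and line items (not Seaplane’s or any provider’s actual prices), of how an itemized, per-service bill compares to a single metered quantity:

```python
# Hypothetical, illustrative numbers only -- not real Seaplane or cloud pricing.

# A traditional bill itemizes every service needed to run a multi-region app.
itemized_bill = {
    "vcpu_hours":            1_000 * 0.04,   # compute cores
    "memory_gb_hours":       4_000 * 0.005,  # memory billed separately
    "load_balancer_hours":     720 * 0.025,  # global load balancing
    "cross_region_gb":         300 * 0.02,   # replication traffic
    "monitoring":               15.00,       # health checks, metrics
}
traditional_total = sum(itemized_bill.values())

# With a GCU, those services are inherent to the unit of compute:
# one metered quantity, one rate.
gcu_hours = 1_000
hypothetical_gcu_rate = 0.06
gcu_total = gcu_hours * hypothetical_gcu_rate

print(f"Itemized estimate: ${traditional_total:.2f} across {len(itemized_bill)} line items")
print(f"GCU estimate:      ${gcu_total:.2f} across 1 line item")
```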

All that said, every application is different! Performance needs can vary wildly from industry to industry, company to company, and deployment to deployment. We don’t want to force someone looking for a bicycle to buy a Ferrari, so how do we account for those differences?
How should I fly with Seaplane?
A tiered system is the natural way to offer different levels of performance on a workload (not company!) level. Many clouds offer something like a tier system, though often with a lot more complexity. Because we’re using very precise metering where every deployment’s cores and memory autoscale with demand, we don’t need to nickel-and-dime every little performance boost. The GCU model also abstracts away the hardware, so we’re not presenting customers with a menu of memory options and leaving it to them to determine how that might translate into actual performance.
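As a rough illustration of what that metering means in practice, here’s a small sketch with a made-up demand curve and a made-up per-core-hour rate, comparing provisioning for peak demand against paying only for the cores that were actually running:

```python
# Hypothetical illustration of per-core-hour metering under autoscaling.
# The rate and the demand curve are invented for the example.
rate_per_core_hour = 0.05

# Cores actually needed over a 24-hour day: quiet overnight, a midday spike.
demand = [2] * 8 + [6] * 4 + [12] * 4 + [6] * 4 + [2] * 4  # 24 hourly samples

# Fixed provisioning must cover the peak all day long.
fixed_cost = max(demand) * 24 * rate_per_core_hour

# Autoscaled metering only bills the cores that were actually running.
autoscaled_cost = sum(demand) * rate_per_core_hour

print(f"Provisioned for peak:  ${fixed_cost:.2f}")
print(f"Metered per core-hour: ${autoscaled_cost:.2f}")
```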
The value of Seaplane lies in the automation and optimization baked into the platform. We wanted to make sure our customers’ apps consistently ran on the best hardware for their specific performance and budgetary needs at that specific moment in time.
That means there’s a lot of movement throughout the day as demand shifts, and we didn’t want to limit a customer’s deployment options arbitrarily. It doesn’t make sense to lock customers into resources they don’t need. Conversely, it also doesn’t make sense to lock customers out of resources that might be less expensive but no less performant.
We also wanted to avoid the costly annoyance of forcing our customers to guess at their hardware needs. Over provisioning and under provisioning both surfaced as common pain points for many of the developers, product teams, and executives we talked to over the course of our research.
| “We all know we don’t pay for what we use, we pay for what we forget to turn off.” - Cloud Engineer |
The solution to the problem of varying performance requirements, then, is broad buckets defined first by performance output and made more tangible with hardware ranges. Some apps might run the gamut of any tier’s hardware range, some might hover on the high or low end with occasional spikes in either direction. Regardless, providing a range allows us the room we need to optimize for cost and performance.
Importantly, you don’t have to buy into the same tier for every workload. If you have heavier workloads that demand top performance, you can run those on a higher, more performant tier while running more lightweight services on a lower tier. This was key for adhering to our goal of cost effectiveness for all. No artificial upsells.
Here’s how we broke out our tiers.
Coach Class:
Best for smaller, lightweight applications or lower priority projects. This tier has all the services included in a GCU, and is optimized for cost rather than high availability or performance. Charges are per compute core, per hour.
- No charge for provisioned concurrency, routing, or global load balancing
- Cost-optimized hardware platforms
- Up to 8 cores and 32 GB memory per container
- No edge deployments
Business Class:
Best for most standard applications, this tier has all the services included in a GCU and balances both cost and performance. Charges are per compute core, per hour.
- No charge for provisioned concurrency, routing, or global load balancing
- Higher performance hardware suitable for standard applications
- Up to 32 cores and up to 128 GB memory per container
- Automatic upgrades to edge deployments where availability allows
First Class:
Best for larger, heavier apps and high priority projects, First Class prioritizes availability and performance and is most cost efficient for demanding applications. Charges are per compute core, per hour.
- No charge for provisioned concurrency, routing, or global load balancing
- Best performance hardware with priority access to global resources
- Up to 128 cores (more available upon request) and up to 512 GB memory per container
- Deployed as close to the user as possible with priority access to edge deployments
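To see how per-core-hour billing plays out across tiers, and how workloads can mix tiers, here’s a small sketch; the per-tier rates and workload names below are placeholders invented for illustration, not published Seaplane prices:

```python
# Hypothetical per-core-hour rates for each tier (illustrative only).
RATES = {"coach": 0.03, "business": 0.06, "first": 0.12}

def workload_cost(tier: str, core_hours: float) -> float:
    """Estimate a single workload's monthly compute charge."""
    return RATES[tier] * core_hours

# You don't have to put every workload on the same tier:
workloads = [
    ("batch-reports",   "coach",    2_000),  # lightweight, cost-optimized
    ("web-frontend",    "business", 5_000),  # standard app
    ("realtime-engine", "first",    1_500),  # heavy, latency-sensitive
]

total = sum(workload_cost(tier, hours) for _, tier, hours in workloads)
for name, tier, hours in workloads:
    print(f"{name:16s} {tier:9s} {hours:>6,} core-hours  ${workload_cost(tier, hours):.2f}")
print(f"Estimated monthly compute total: ${total:.2f}")
```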
So, between the GCU metric and this basic tiering system we’ve covered most of our bases. However, there is always more functionality that is less broadly applicable (and therefore shouldn’t be wrapped into the base offering, where it would inflate cost) but no less critical for specific use cases.
What do we do about these floating features?
What about upgrades?
This was one of the trickiest parts of designing our pricing model. How do we account for nonstandard functionality without building a nightmare maze of options and upgrades?
First, we wanted to price additional functionality using the same GCU-per-hour charge that applies to your tier. If your workload is flying Coach Class, then you’ll be paying less than another workload flying First Class, but (and this is a big but!) the performance discrepancy will remain. We thought this was the best way to keep pricing consistent and intuitive, while also preserving each tier’s performance level.
Despite our best efforts, not everything could be priced in GCUs, so we allowed room for fixed fees and bandwidth charges where appropriate.
Here’s a look at some of the additional functionality that may be purchased, ad hoc, to support your specific application needs. We made a serious effort to make all the upgrades available at all levels of performance with only one exception due to technical requirements.

What about egress?
Egress costs are the bane of so many organizations that we knew we’d need to do everything we could to keep them low.
| “The hardest thing for us is egress costs. It doesn’t make sense for us to spin up infrastructure for those other regions yet so we just pass the data and absorb the costs." - Senior Product Manager |
While we did find that many providers upcharge on egress to prevent users from leaving their platform, there is a very real reason egressing data costs more than ingressing data. While our goal is to eventually eliminate egress costs altogether (be the change you want to see), the cold, hard reality is that we cannot absorb all the costs for all of our customers.
We can, however, severely minimize them.
First, we offer our customers 250 GB per month of free egress. This serves our goal of cost efficiency for all in a very concrete way.
Second, the Seaplane platform is always and automatically optimizing for your workload needs, and egress is no exception. Through Seaplane’s Autopilot function we can strategically deploy across locations while controlling for egress. As customer volume grows, we automatically lower the price and pass those savings along.
In sum, we charge for egress per GB of bandwidth while maintaining a cost ceiling comparable to other providers. In practice, costs swing closer to zero based on continuously optimized delivery using our edge network.
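As a concrete sketch of how that works out, here’s the arithmetic with the 250 GB monthly allowance factored in; the per-GB rate below is a placeholder, not a published Seaplane price:

```python
# Hypothetical egress estimate: 250 GB/month free (per the allowance above),
# then a per-GB rate. The rate here is a placeholder, not a published price.
FREE_EGRESS_GB = 250
rate_per_gb = 0.08  # illustrative ceiling; optimization pushes the effective rate down

def monthly_egress_cost(egress_gb: float, effective_rate: float = rate_per_gb) -> float:
    billable_gb = max(0.0, egress_gb - FREE_EGRESS_GB)
    return billable_gb * effective_rate

print(monthly_egress_cost(200))    # fully covered by the free allowance -> 0.0
print(monthly_egress_cost(1_250))  # 1,000 billable GB at the ceiling rate -> 80.0
```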
Where do I sign up?
If all this pontificating on pricing has moved you to try Seaplane, you can contact us at contact@seaplane.io for early access or subscribe to our newsletter for updates (and more pontificating).
We’re also offering a free Seaplane boarding pass worth up to $500 for qualifying projects, so it can't hurt to reach out!
Seaplane wouldn’t be possible without the many incredible developers, product teams, and executives who participated in our ongoing user research program. If you’d like to provide some insight and help us bring joy back to developers, we’d love to talk to you about your experience using the cloud! You can schedule time with the user research team here.