
Containers arose partly as a reaction to VM sprawl and poor utilization, and partly as a better way to deploy microservices onto shared machines. Today, there's no doubt they are the de facto standard for application deployment.

Outside hyperscale datacenters, containers initially gained popularity with mainstream developers as an easy way to package software and its dependencies in a reproducible and easily portable fashion. These days, though, using containers as a unit of deployment and scaling has hit the mainstream, with over half of enterprises now using them for this purpose and another third experimenting or planning to. Enterprises consistently see improvements in deployment speed, operational scaling, and efficiency when running microservice-based workloads in containers.

While containers are undeniably a major step forward, they tend to make the resource sprawl problem worse; we've swapped one problem for an even trickier one. To run our containers, we need to deploy a cluster: essentially a software-defined "big container machine" assembled from slices of several virtual or physical machines.

Several container orchestration and cluster management stacks have emerged, but Kubernetes is far and away the dominant one. Yet the resource sprawl pattern repeats. We'll say more in another post, but due to Kubernetes' tremendous complexity, best practice is to run many small clusters, so that when things go wrong the blast radius is limited to a small set of containers.

Given the small size of nodes within typical clusters, it's not hard to squint and see one larger machine hidden underneath, though in practice we'd end up sharing the underlying machines among several clusters, our own or others'. In essence, clusters are the new virtual machines: they have complex quirks of their own but share all the same operational and management headaches, and the same tendency to sprawl.

Serverless container platforms have risen in popularity by taking away the explicit cluster problem, but in practice they still leave us with the larger problems of choosing zones and regions to deploy in, building multi-region active-active architectures, and figuring out how to spend across regions to best serve your users. This extends to supporting services, such as object storage and databases, too. The sketch below makes that burden concrete.
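Here is a minimal sketch, in TypeScript with hypothetical region endpoints, of the kind of region-selection logic that lands on application teams once clusters are abstracted away but geography is not: probe each candidate region and route to the lowest-latency reachable one. A real deployment would also need failover, probe caching, and data-locality rules on top of this.

```typescript
// Hypothetical candidate regions; the hostnames are placeholders, not a real API.
type Region = { name: string; healthUrl: string };

const REGIONS: Region[] = [
  { name: "us-east", healthUrl: "https://us-east.example.com/healthz" },
  { name: "eu-west", healthUrl: "https://eu-west.example.com/healthz" },
  { name: "ap-south", healthUrl: "https://ap-south.example.com/healthz" },
];

// Measure round-trip time to one region; treat failures as unreachable.
async function probe(region: Region): Promise<number> {
  const start = Date.now();
  try {
    const res = await fetch(region.healthUrl, { method: "HEAD" });
    return res.ok ? Date.now() - start : Number.POSITIVE_INFINITY;
  } catch {
    return Number.POSITIVE_INFINITY;
  }
}

// Pick the lowest-latency reachable region for this client.
// (If every probe fails, this falls back to the first region.)
async function pickRegion(regions: Region[]): Promise<Region> {
  const latencies = await Promise.all(regions.map(probe));
  const best = latencies.indexOf(Math.min(...latencies));
  return regions[best];
}

pickRegion(REGIONS).then((r) => console.log(`routing to ${r.name}`));
```

And this only covers routing; keeping state consistent across those regions is a separate, harder problem.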

In fact, common wisdom suggests accepting a certain amount of downtime from local zonal failures rather than paying for multi-region storage, replication complexity, and the poor utilization of geo-distributed resources. The thinking is that we can live with worse performance and lower availability to avoid doubling or tripling our infrastructure costs.
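To put rough, illustrative numbers on that tradeoff (not figures from any specific provider): a single zone offering 99.9% availability implies roughly 8.8 hours of downtime per year. Running active-active across two independently failing zones raises theoretical availability to 1 − 0.001² = 99.9999%, about 32 seconds per year, but at close to twice the infrastructure cost before replication overhead is even counted.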

This is the essential structural problem in cloud-native development today: an expensive and complex mismatch between our need to deploy global applications and the cloud infrastructure available to build them on. With the clouds, essentially everything is possible but nothing is easy or cheap. Each misstep inflates your cloud bill and degrades your user experience through poor latencies and other deployment-induced performance problems.

One global zone 

Clouds are, of course, inherently fragmented physical infrastructure that took amazing engineering and astonishing budgets to build. But exposing that infrastructure directly has caused spiraling costs, complexity, and waste higher up the stack. Cloud vendors are justifiably proud of their infrastructure, but forcing application developers to paper over its cracks is the wrong approach.

Seaplane is our solution. We give you a single global zone to run your containers, and we do the heavy lifting of figuring out where and when to run them. Containers are scheduled around the world to meet your availability and performance targets, with our intelligent AutoPilot automatically scaling resources up and down so you only pay for what you really need.

You can think of Seaplane as a CDN for your containers: always on, but always optimized.