When Google announced its Kubernetes Docker-management system on Tuesday, it wasn’t open sourcing its cloud computing “secret weapon” so much as it was open sourcing its viewpoint on how applications should be built and deployed. Google, the cloud computing provider, will always be compared with Amazon Web Services, but pushing technologies reminiscent of Google’s own vaunted Omega system could become a strong point of distinction and a major draw for developers.
This has been Google’s approach to cloud computing all along, if you’ll recall. The promise of App Engine, its now 6-year-old platform-as-a-service offering, is being able to deploy web applications on Google’s infrastructure and being confident they’ll keep running and scale as needed with little handholding. Not long after Google opened up its Compute Engine infrastructure-as-a-service offering, it announced a feature called live migration, which promises to move running virtual machines from one physical host to another when Google performs scheduled maintenance, so customers’ workloads keep running through the maintenance window. It also offers container-optimized VMs.
Kubernetes seems to split these approaches down the middle. Docker and its container-based approach already appear to have stolen some of the thunder from early PaaS efforts by giving developers an easy way both to build applications and to deploy them across various environments (Google, in fact, now also supports Docker within App Engine). Google was already using containers heavily internally, all managed by Omega, which moves them from place to place and ensures its services keep running. It wasn’t too big a leap (one could argue it was a no-brainer, in fact) to essentially rebuild a less-sophisticated version of Google’s system specifically for Docker containers.
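To make the model concrete, here is a rough sketch of the declarative style Kubernetes encourages, written against the project’s Python client. The names, image, replica count and resource figures are assumptions made up for illustration, not details from Google’s announcement; the point is simply that you describe how many copies of a container you want and the cluster keeps them running.

```python
# Illustrative sketch (not from Google's announcement): describe the desired
# state of a containerized service and let the cluster manager maintain it.
# Assumes the `kubernetes` Python client is installed and a cluster is reachable.
from kubernetes import client, config

config.load_kube_config()  # read credentials from the local kubeconfig file

deployment = client.V1Deployment(
    api_version="apps/v1",
    kind="Deployment",
    metadata=client.V1ObjectMeta(name="hello-web"),
    spec=client.V1DeploymentSpec(
        replicas=3,  # "keep three copies of this container running"
        selector=client.V1LabelSelector(match_labels={"app": "hello-web"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "hello-web"}),
            spec=client.V1PodSpec(containers=[
                client.V1Container(
                    name="web",
                    image="nginx:1.25",  # any Docker image would do here
                    resources=client.V1ResourceRequirements(
                        requests={"cpu": "250m", "memory": "128Mi"}
                    ),
                )
            ]),
        ),
    ),
)

# The scheduler picks which machines run the containers; if one dies,
# a replacement is started elsewhere to get back to three replicas.
client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)
```

The developer never names a machine; the resource requests and replica count are all the system needs to pack containers onto whatever capacity is available, which is the same bet Omega makes inside Google.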
In theory, developers, operations staff and Google should all be happy. Google’s cloud customers get to build applications and cloud-based systems that run like Google’s do, and they don’t have to give up too much control, abandon the tools they like or do too much heavy lifting to get there. Google gets to prove the awesomeness of its cloud (and its engineering smarts) by getting those customers doing things the way Google thinks they should be done. And aside from higher resilience to server failures, the Docker-plus-Kubernetes combination should also result in lower bills, because better resource utilization means fewer Compute Engine instances are required.
Open sourcing Kubernetes is the icing on the cake — albeit some very critical icing. If someone can fork it to run on environments other than Google Compute Engine (which is what the code Google released is built for), Kubernetes acts as a stick with which to beat the still bigger and badder AWS. Companies fear being locked into a cloud platform, but a Kubernetes that can run on virtual machines, bare metal or even (gasp!) AWS would mean Google’s approach to computing now travels well – something that can’t presently be said about AWS’s approach to computing.
All of this ties nicely into one of the undying themes of our Structure conference, which kicks off a week from today (on June 18) in San Francisco. That theme, which I have written about very recently, is the osmosis of web infrastructure through the corporate filters and into the mainstream. It’s not just Google that doesn’t particularly care about server virtualization or the idea of individual machines at all, but a growing number of large web companies as well. Some very smart folks from Google (Urs Hölzle), Facebook (Jay Parikh), Twitter (Raffi Krikorian) and Airbnb (Mike Curtis) will be presenting at Structure and speaking about how they design systems capable of functioning at web scale.
Twitter and Airbnb will likely mention Mesos, an open source technology originally created at the University of California, Berkeley, that was inspired by Google’s cluster-management systems and currently underpins infrastructure at those companies as well as many others. Mesos also supports Docker — eBay, in fact, has published a lengthy blog post detailing its Docker-on-Mesos environment — and can turn any collection of Linux servers into a pool of shared resources. Here’s a handy presentation illustrating some Mesos deployments at various companies.
Florian Leibert, a former engineer at Twitter and Airbnb, and founder and CEO of a startup called Mesosphere, has referred to the Mesos stack as being like a PaaS for your data center. Or, in the case of Airbnb and HubSpot, for your AWS resources. Like Google’s Omega and its offspring Kubernetes, Mesos and a related technology called Marathon (or, in Twitter’s case, a system called Aurora) simplify the deployment of applications and services and automate the process of ensuring each has the resources it needs to run.
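The workflow looks strikingly similar on the Mesos side. Here is a minimal sketch of handing Marathon a long-running Docker service through its REST API; the Marathon address, resource figures and image are assumptions made up for the example, not details from any of the companies above.

```python
# Illustrative sketch: ask Marathon (running on top of a Mesos cluster) to keep
# a Docker container running. The Marathon address and resource figures below
# are made-up assumptions for the example.
import requests

app = {
    "id": "/hello-web",
    "instances": 3,   # keep three copies running somewhere in the cluster
    "cpus": 0.25,     # fractional CPUs per instance
    "mem": 128,       # MB of memory per instance
    "container": {
        "type": "DOCKER",
        "docker": {"image": "nginx:1.25"},
    },
}

resp = requests.post("http://marathon.example.com:8080/v2/apps", json=app, timeout=10)
resp.raise_for_status()
print("Submitted; Marathon and Mesos will place and supervise the instances.")
```

As with the Kubernetes sketch above, the request names no servers; the scheduler decides placement and restarts instances that fail, which is exactly the “PaaS for your data center” feel Leibert describes.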
With Kubernetes, it seems like Google is (once again) wisely trying to position itself as the cloud provider that will let its users actually operate like a cloud provider. As Docker, Mesos and similar approaches rise above the early-adopter set and into the mainstream, Google wants to be the cloud provider that can stand apart from the crowd and say with sincerity that it was built for this kind of computing.
