Cloud Standards, Transparency And Data Mobility

I was on a panel recently talking about the role of infrastructure and “The Cloud” in online gaming (and I’m talking “fun” games, like Farmville, not online gambling). One of the questions was “What do you think about cloud interoperability and standards?”

To which I asked, “What do you mean?” “Well, what do you think about API standards and the like?” To which I replied, “Completely uninteresting.”

Now I know that at first read, it sounds like I’m saying to forget “standards” and to forget “interoperability”, but I’m not. It’s just that most of the current conversations about it are uninteresting.

Uninteresting in the sense that I’m not convinced there is even customer pain here. I’m not convinced that having to tool around different APIs that currently accomplish little more than provisioning is that difficult (remember, the great thing about these APIs is generally how little code it takes to work with them).

In the case of virtualisation, many use libvirt, and that’s how interoperability generally happens in programming: it comes in the form of a library or middleware created by producers and real users, not design by committee. I expect to see more of these types of projects emerge.
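To make the libvirt point concrete, here is a minimal sketch of interoperability-as-a-library applied to cloud provisioning: one thin library wraps each provider’s native API behind a single interface, so callers never touch provider-specific calls. The provider names and method names here are invented for illustration, not any real vendor API.

```python
class CloudDriver:
    """Common interface that every provider adapter implements."""
    def provision(self, image_id: str) -> str:
        raise NotImplementedError


class AcmeCloudDriver(CloudDriver):
    # Hypothetical provider; imagine its native API call is start_instance(ami).
    def provision(self, image_id: str) -> str:
        return f"acme-instance-for-{image_id}"


class ExampleCloudDriver(CloudDriver):
    # Another hypothetical provider; imagine its native call is create_vm(image).
    def provision(self, image_id: str) -> str:
        return f"example-vm-for-{image_id}"


# The "library" part: one registry, one entry point, many back-ends.
DRIVERS = {"acme": AcmeCloudDriver(), "example": ExampleCloudDriver()}


def provision(provider: str, image_id: str) -> str:
    """Callers code against one function; the adapter hides each API."""
    return DRIVERS[provider].provision(image_id)
```

This is the shape libvirt takes for hypervisors: the interoperability layer emerges from working code shared by real users, and a new provider is supported by adding one adapter rather than by waiting on a committee.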

Beyond the fact that one’s application shouldn’t have to be aware that it’s no longer in your datacenter and is now “in the cloud”, I’m not even sure how most of the current standardisation discussions (many seem focused on provisioning APIs or on things like “trust” and “integrity”) would enable start-ups, tool-vendor adoption, ISV adoption and an “ecosystem” to emerge in the grand scheme of things. I don’t think these are the main problems limiting adoption.

So what are the real problems where interoperability and standardisation matter? I think they are data mobility and transparency.

Data mobility?

Let’s only talk about mobility at the VM level. If I create an AMI at Amazon Web Services and push it into S3, I can use that AMI to provision new systems on EC2, but for the life of me, I can’t find the ability to export that AMI as a complete bootable image so that I can run it on a local system capable of booting other Xen images. The same goes for Joyent Accelerators. We don’t make this easy to do. We should.

Now this is where I think things get good: standardised data exchanges describing what a “cloud” is doing and whether it has the capacity to accomplish what a customer needs it to. As I’ve said before: “The hallmark of this ‘Cloud Computing’ needs to be complete transparency and instrumentability. While making certain that applications just work, the interesting aspects of future APIs aren’t provisioning and self-management of machine images; they’re about stating policies and being able to make decisions that matter to my business.”

The power of this is that it would enable customers to get the best price at the best times, to know that they’re moving an application workload somewhere that can actually run it, and it is a prerequisite for the computing equivalent of the energy spot market.
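The spot-market idea above can be sketched in a few lines: suppose each transparent provider published its current price and free capacity, and a customer’s policy simply said “cheapest provider that can actually run my workload”. Everything here — the provider names, the published fields, the numbers — is an assumption for illustration, not a real transparency API.

```python
# Hypothetical data that transparent providers might publish.
providers = [
    {"name": "cloud-a", "price_per_hour": 0.12, "free_cores": 64},
    {"name": "cloud-b", "price_per_hour": 0.08, "free_cores": 4},
    {"name": "cloud-c", "price_per_hour": 0.10, "free_cores": 128},
]


def choose_provider(offers, cores_needed):
    """Apply the policy: cheapest offer with enough free capacity."""
    viable = [o for o in offers if o["free_cores"] >= cores_needed]
    if not viable:
        return None  # no provider can accomplish the workload right now
    return min(viable, key=lambda o: o["price_per_hour"])


best = choose_provider(providers, cores_needed=32)
```

Note that the cheapest provider overall loses here because it lacks capacity — which is exactly why the interesting data exchange is about what a cloud can accomplish, not just what it charges.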


Steve Tuck leads a team dedicated to helping companies achieve scale through the use of Joyent’s cloud offering, working with many top companies in media, entertainment, and gaming. He is responsible for the strategy, operations and management of Joyent’s public cloud. Prior to joining Joyent, Steve spent 9 years at Dell, where he rapidly expanded the channel business, developed the Web 2.0 offering, and led Dell’s involvement in the launch of the Facebook developer program. Steve grew up in the Bay Area and attended the University of Wisconsin–Madison, where he earned bachelor’s degrees in Economics and Political Science.