
Three Thoughts on Cloud for 2014

Right around the end of December people like to post their predictions for the coming year. Me, I’m kinda swamped until about a week into January. By then I’ve had a chance to catch up a bit, both on sleep and on reading that fell to the side during the holidays. I’ve also had a chance to think about what I saw in 2013 and how that might play out in the next twelve months.

Seriously, 2013 was all kinds of exciting. For those of us who have been doing “cloud” since long before it was called cloud (scale-out, horizontal scale, distributed computing, grid, utility, etc.), what we’ve seen happen recently is pretty awesome. There are a lot of great environments to work in today, both “public” and “on-prem” deployments. There are also too many cool services to count, from monitoring to provisioning to automation. I’m a big believer that what defines big-C Cloud is that it’s all about being agile and easy. From that point of view, 2013 was the year that I really saw everything come together.

So what’s next? I’m not going to try for any “bold” predictions. These are mostly things I’ve been observing lately, and a way to give myself some entertainment 11 months from now. As I look at the landscape I’ve got (at least) three thoughts about what I expect is going to happen this year, and I’ll be interested to know what others think.

Nothing will be single data-center

Maybe this one seems obvious to folks reading my blog, but it’s an idea I see met with surprising skepticism on a daily basis. The way I see it, almost no one runs a mission-critical system in a single data-center today. At the very least there’s some kind of Disaster Recovery option with hot-spare application infrastructure and replicated data available at a moment’s notice.

Five years ago this was something exotic. High-end enterprises had the resources and experience, but it hadn’t trickled down. More to the point, getting that kind of hardware online was expensive and unrealistic for everyone else. Today, however, I can go into Amazon and spin up instances in two availability zones, or better still two separate regions, without much effort. We’re living in the future.
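To make that concrete, here’s roughly what spinning up an instance in each of two regions looks like in Python with the boto library. This is a sketch, not production code: the AMI IDs are placeholders (images are region-specific, so you’d look up the right one per region), and a real deployment would layer data replication and failover on top.

```python
# Sketch: one small instance in each of two AWS regions via boto.
# Assumes AWS credentials are already configured; AMI IDs are hypothetical.
import boto.ec2

regions = {
    'us-east-1': 'ami-xxxxxxxx',  # placeholder image ID for this region
    'eu-west-1': 'ami-yyyyyyyy',  # placeholder image ID for this region
}

for region, ami in regions.items():
    conn = boto.ec2.connect_to_region(region)
    reservation = conn.run_instances(ami, instance_type='m1.small')
    print('%s: %s' % (region, reservation.instances[0].id))
```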

The thing is, increasingly it’s about a lot more than Disaster Recovery. Applications need to be active in multiple places at once. Since there are so many data-centers available around the world, it only makes sense to distribute application logic to give users lower-latency access. We did this a long time ago with CDNs and other caches; now the application and data tiers are catching up.

At re:Invent a few months ago this was a theme I heard time and time again, both from developers and from the teams working on AWS. The thing is that while the infrastructure is there, the software is still growing up. True, there are ways to get active-active deployments today, but these kinds of Geo-Distributed architectures are still challenging. Even Amazon’s own services are still largely focused on operation within a single region (I’m looking at you, CloudFormation).

So what’s changing in 2014? The infrastructure for Geo-Distribution is available, from Amazon to Google to SoftLayer and others, and that’s pushing software to make it usable. Is this a shameless plug for my company? Maybe, but the way I think about it, I work on a distributed system because it’s what I’m passionate about and it’s where I think we’re headed. I expect this year to be the year when everyone gets behind that vision, and when anyone building a serious application is going to demand Geo-Distribution as a baseline.

Google Compute Engine will hit its stride

I first started playing with Google’s Compute Engine back in November 2012, in its early stages. The tools were nascent (at best), but the system as a whole showed a lot of promise. I was lucky enough to get that sneak peek thanks to some great people over at Google who wanted early developer feedback. They gave us access to a large number of servers. The result (including high-larious demo hijinks caused by the building wifi failing) is available as a Google Developer Video.

Most of us use software and services from Google in one capacity or another on a daily basis. Most of us have also experienced heartbreak at least once when a Google project that was “in beta” for some extended period suddenly got cancelled. Remember Google Wave? If not, stop reading this and go read about Wave’s formal underpinnings. Seriously, the system behind the app was really something. So when GCE started allowing access as a “beta,” I was curious to see where it was heading.

Broadly speaking, there are at least three camps in terms of cloud infrastructure. First, there are infrastructure providers focused on service-oriented offerings like Amazon. For the record, I am fiercely in love with how easy they’ve made it to get up & running whether I just want to test out an idea or crank on something at scale. Second, there are providers like Rackspace or SoftLayer who I see more as hardware providers with good distributed access. Third are the interface providers like Pivotal (or the open source projects like OpenStack or Eucalyptus) who are more focused on the service & API layers than specific infrastructure.

Where does GCE fall? From what I’ve seen so far, Google is still trying to sort that one out. They have a really solid infrastructure: fast, stable systems with good networking. They are layering management tools starting with flexible (if slightly verbose) command-line interfaces that seem focused squarely on infrastructure automation. They’re also exploring services that bump up the stack a few levels but are tied to fairly specific use cases (e.g., the schema and access requirements to make an F1 application scale). If pressed, I’d say that Google is starting with something in between raw infrastructure and bootstrap services, and seeing where the development community takes it.
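To give a sense of that verbosity, here’s a rough sketch of creating an instance through the GCE v1 API with the google-api-python-client library. Treat it as illustrative only: the project, zone, and image names are placeholders, and the point is just how explicitly everything gets spelled out.

```python
# Sketch: creating a GCE instance through the v1 API. Illustrative only;
# project, zone, and image names below are placeholders.
from oauth2client.client import GoogleCredentials
from googleapiclient.discovery import build

credentials = GoogleCredentials.get_application_default()
compute = build('compute', 'v1', credentials=credentials)

project, zone = 'my-project', 'us-central1-a'  # hypothetical project/zone
body = {
    'name': 'demo-instance',
    'machineType': 'zones/%s/machineTypes/n1-standard-1' % zone,
    'disks': [{
        'boot': True,
        'autoDelete': True,
        # Placeholder image name; you'd pick a current one from the catalog.
        'initializeParams': {
            'sourceImage': 'projects/debian-cloud/global/images/debian-7-wheezy-vYYYYMMDD',
        },
    }],
    'networkInterfaces': [{
        'network': 'global/networks/default',
        'accessConfigs': [{'type': 'ONE_TO_ONE_NAT', 'name': 'External NAT'}],
    }],
}

operation = compute.instances().insert(project=project, zone=zone, body=body).execute()
print(operation['name'])  # an async Operation you can poll for completion
```

Nothing wrong with that level of control, mind you; it just says a lot about where on the stack Google is starting.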

In any case, that seems to be the point. Google is doing one of the things that they do best: they’re creating a developer tool and seeing what developers will do with it. About a month ago I was psyched to see the “beta” label removed from GCE. Are developers going to drop familiar platforms? No, of course not. What I expect, however, is that in 2014 developers are going to run GCE through its paces, and that’s going to help Google give GCE the clear identity it needs to emerge as a differentiated, developer-focused platform.

Consistency will matter

This one may be less obvious, but bear with me a minute. Historically, distributed data was the realm of the high-end enterprise (see thought one, above), where relational databases are de rigueur. Solutions like Tandem NonStop or Oracle RAC were designed to provide parallelism and scale-out behavior for transactionally consistent data management. That’s all well and good, but starting about eight or nine years ago, requirements for distribution started pushing down into more general use.

At this point two things happened. First, there were an increasing number of developers unwilling (or unable) to buy & deploy the high-end solutions. Second, there was a realization that even if you bought the licenses, these might not be the right solutions for Cloud-style deployments. This led to rapid iteration using the tools already available (hello, BerkeleyDB) and basic insight into the problems that had to be solved on a very short timeline. The result was a new way of approaching data management that traded off global consistency for scale-out behavior.

Eventually the term “NoSQL” stuck, but the point here isn’t really about the language. It’s about an assumption that the right way (possibly the only way) to make a data management system scale out is to punt on consistency and let the application sort things out as needed. For some applications this is a perfectly reasonable trade-off. No doubt several years ago this was a pragmatic response by developers in need of solutions to real problems. I mean, that’s what we developers do best.
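What does “let the application sort things out” actually mean in practice? Here’s a toy sketch (mine, not any particular store’s API) of the kind of merge logic that ends up living in application code when replicas diverge:

```python
# Toy sketch of application-level conflict resolution: given divergent
# values read from replicas, naively pick the one with the latest
# timestamp (last-write-wins). The losing update is silently dropped --
# exactly the kind of subtle logic every application ends up re-inventing.
def resolve(replica_values):
    """replica_values: list of (timestamp, value) pairs read from replicas."""
    return max(replica_values, key=lambda tv: tv[0])[1]

print(resolve([(1388534400, 'draft-2'), (1388534475, 'draft-3')]))  # 'draft-3'
```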

The thing is, there’s nothing about SQL or Transactions or ACID that can’t scale. The tools and architectural assumptions in place at the time didn’t scale, true, but as I’ve written about elsewhere, the programming models scale just fine if you’re willing to re-think the architecture. Every day I talk with developers or operators who are frustrated with having to choose between strong consistency models and scale. It’s not that every application needs strong consistency, but a lot of them do, and even if you could get away with limited consistency, most developers (in my experience) would prefer to work with a consistent system as long as it doesn’t limit what they can do.
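For contrast, here’s the programming model developers keep telling me they want, sketched with Python’s built-in sqlite3 module. The engine is just a stand-in; the point is the model: state a multi-step change once and let the system keep it atomic.

```python
# The transactional model: both updates commit together or neither does.
# sqlite3 is only a stand-in here; the point is the programming model.
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE accounts (id TEXT PRIMARY KEY, balance INTEGER)')
conn.executemany('INSERT INTO accounts VALUES (?, ?)',
                 [('alice', 100), ('bob', 50)])

with conn:  # commits on success, rolls back if anything inside raises
    conn.execute("UPDATE accounts SET balance = balance - 25 WHERE id = 'alice'")
    conn.execute("UPDATE accounts SET balance = balance + 25 WHERE id = 'bob'")
```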

This is the trend I expect to see in 2014. Developers now have good options for building out systems at scale that don’t force them to re-invent consistency at the application layer, and increasingly they will demand them. Over the next year we’ll see the growth of new systems in this mold. We’ll also see increased effort in the NoSQL community to provide forms of consistency out of the box for developers. I won’t go so far as to say that these different world-views are going to merge into some common ground, but I do think that’s the direction we’re heading, and I think it’s a good thing all around.

Happy 2014

Ok, yeah, there’s my short-list. Like I said at the start, I’m curious to know what anyone reading this thinks. Obviously this list isn’t everything on my mind, but it’s definitely what’s on the top of my stack. The high-order bit is that it’s really cool what we have at our disposal as developers, and we’re using those things to push requirements in crazy new ways. I’m wicked excited to see what the next year brings, and if anything I’ve said here pans out it’s going to be a fun year in the Cloud. Cheers.