Monday, January 28, 2008

It's the labor, baby...

I'm getting ready to go back to work on Wednesday, so I decided today (while Owen is at school and Mia has Emery) to get caught up on some of the blog chatter out there. First, read Nick Carr's interview with GRIDToday. Damn it, I wish this were the sentiment he communicated in "The Big Switch", not the "it's all going to hell" tone the book actually conveyed.

Second, Google Alerts, as always, is an excellent source, and I found an interesting contrarian viewpoint about cloud computing from Robin Harris (apparently a storage marketing consultant). Robin argues that two myths are propelling "cloud computing" as a buzz phrase, and that private data centers will never go away in any real quantity.

Daniel Lemire responds with a short-but-sweet post that points out the main problem with Robin's thinking: he assumes that hardware is the issue, and ignores the cost of labor required to support that hardware. (Daniel also makes a point about latency being the real issue in making cloud computing work, not bandwidth, but I won't address that argument here, especially with Cisco's announcement today.)

The cost of labor, combined with genuine economies of scale, is the real core of the economics of cloud computing. Take this quote from Nick Carr's GRIDToday interview:
If you look at the big trends in big-company IT right now, you see this move toward a much more consolidated, networked, virtualized infrastructure; a fairly rapid shift of compressing the number of datacenters you run, the number of computers you run. Ultimately … if you can virtualize your own IT infrastructure and make it much more efficient by consolidating it, at some point it becomes natural to start to think about how you can gain even more advantages and more cost savings by beginning to consolidate across companies rather than just within companies.
Where does labor come into play in that quote? Well, consider "compressing the number of datacenters you run", and add to that the announcement that the Google datacenter in Lenoir, North Carolina will hire a mere 200 workers (up to four times as many as the announced Microsoft and Yahoo data centers). This is a datacenter that will handle traffic for millions of people and organizations worldwide. If, as Robin implies, corporations will take advantage of the same clustering, storage and network technologies that the Googles and Microsofts of the world leverage, then certainly the labor required to support those data centers will go down.

The rub here is that, once corporations experience these new economies of scale, they will begin to look for ways to push the savings as far as possible. Now the "consolidat[ion] across companies rather than just within companies" takes hold, and companies begin to shut down their own datacenters and rely on the compute utility grid. It's already happening with small business, as Nick, I, and many others have pointed out. Check out Don McAskill's SmugMug blog if you don't believe me. Or GigaOM's coverage of Standout Jobs. It may take decades, as Nick notes, but big business will eventually catch on. (Certainly those startups that turn into big businesses using the cloud will drive some of these economics.)

One more objection to Robin's post. To argue that "Networks are cheap" is a fallacy, he notes that networks still lag in speed behind processors, memory, bus speeds, etc. Unfortunately, that misses the point entirely. All that is needed are network speeds that get to the point where functions complete in a time that is acceptable for human users and economically viable for system communications. That threshold is independent of the network's speed relative to other components. For example, my choice of Google Analytics to monitor blog traffic depends solely on my satisfaction with the speed of the conversation. I don't care how fast Google's hardware is, and all evidence seems to point to the fact that their individual systems and storage aren't exceptionally fast at all.

2 comments:

Daniel Lemire said...

Cisco's new switch has impressive bandwidth, but it does not necessarily help latency. See one of the comments:

That's a bit like saying we could run twice as much car traffic by doubling the freeway lanes, when the freeway entrances/exits and feeder routes actually dictate the throughput.

James Urquhart said...

The highway/ramp/feeder analogy is certainly a good one, but surely you agree that the three combine to establish traffic throughput; adding feeder and on-ramp capacity does nothing if the freeway is the bottleneck. Greatly increasing the freeway capacity in this way allows (and, I would argue, motivates) those that are building ramps and feeders to increase capacity to match.

I appreciate you pointing this out, however. It really is a great analogy.