I always start with Nick Carr, and today he did not disappoint. It seems that IBM has posited that a single (distributed) computer could be built to run the entire Internet and expand as needed to meet demand. Of course, this would require the use of Blue Gene, an IBM technology, but man does it feed right into Nick's vision of the World Wide Computer. To Nick's credit, he seems skeptical--I know I am. Still, it is a worthy thought experiment: how would one design distributed computing to be more efficient if one had control over the entire architecture, from chip to system software? (Er, come to think of it, I could imagine Apple designing a compute cloud...)
I then came across an interesting breakdown of cloud computing by John M Willis, who appears to contribute to redmonk. He breaks down the cloud according to "capacity-on-demand" options, and is one of the few to include a "turn your own capacity into a utility" component. Unfortunately, he needs a little education on these particular options, but I did my best to set him straight. (I appreciate his kind response to my comment.) If you are trying to understand how to break down the "capacity-on-demand" market, this post (along with the comments) is an excellent starting place.
Next on the list was a GigaOm post by Nitin Borwankar laying out his concept of "Data Property Rights" and expressing some skepticism about the "data portability" movement. At first I was concerned that he was going to make an argument that reinforced certain cloud lock-in principles, but he actually makes a lot of sense. I still want to see Data Portability as an element of his basic rights list, but he is correct when he says that if the other elements are handled correctly, data portability becomes a largely moot issue (though I would argue it remains a "last resort" property right).
Dana Blankenhorn at ZDNet/open-source covers a concept being put forth by Etelos, a company I find difficult to describe, but that seems to be an "application-on-demand" company (interesting concept). "Opportunity computing," as described by Etelos CEO Danny Kolke, is the complete set of software and infrastructure required to meet a market opportunity at a moment's notice. “Opportunity computing is really a superset of utility computing,” Kolke notes. Blankenhorn adds,
"It’s when you look at the tools Kolke is talking about that you begin to get the picture. He’s combining advertising, applications, the cash register, and all the relationships which go into those elements in his model."

In other words, it seems like prebuilt ecommerce, CRM, and other applications that can be quickly customized and deployed as needed, to the hosting solution of your choice. My experience with this kind of thing is that it is impossible to satisfy all of the people all of the time, but I'm fascinated by the concept. Sort of Platform as a Service with a twist.
Finally, the denial. The blog "pupwhines" remains true to its name as its author whimpers about how Nick "has figured out that companies can write their own code and then run it in an outsourced data center." Those of you who have been following utility/cloud computing know that this misses the point entirely. It's not outsourcing capacity that is new, but the way it is outsourced--no contracts for labor, no work-order charges for capacity changes, etc. In other words, you just pay for the compute time.
With SLAuto, it gets even more interesting as you would just tell the cloud "run this software at these service levels", and the who, what, where and how would be completely hidden from you. To equate that with the old IBM/Accenture/{Insert Indian company here} mode of outsourcing is like comparing your electric utility to renting generators from your neighbors. (OK, not a great analogy, but you get the picture.)
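To make the SLAuto idea a bit more concrete, here is a minimal sketch of what such a declarative request might look like. Everything in it (the SLAutoRequest and ServiceLevelTargets names, the fields, the submit call) is hypothetical and purely illustrative; the point is simply that the consumer declares service-level targets and an artifact to run, and says nothing about machines or locations.

    # Hypothetical sketch only: names and fields are illustrative, not a real API.
    # The consumer states *what* service levels to meet; the cloud decides
    # who, what, where, and how.
    from dataclasses import dataclass

    @dataclass
    class ServiceLevelTargets:
        max_response_ms: int     # e.g., 95th-percentile response-time target
        min_availability: float  # e.g., 0.999 uptime
        min_throughput_rps: int  # requests per second to sustain

    @dataclass
    class SLAutoRequest:
        application_image: str        # the software to run, as an opaque artifact
        targets: ServiceLevelTargets  # the only thing the consumer specifies

    def submit(request: SLAutoRequest) -> str:
        # In a real SLAuto cloud, placement, scaling, and failover decisions
        # would happen behind this call; the caller never sees servers or sites.
        print(f"Run {request.application_image} to meet {request.targets}")
        return "accepted"

    submit(SLAutoRequest(
        application_image="orders-service-1.4.2",
        targets=ServiceLevelTargets(250, 0.999, 500),
    ))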
Another interesting data point for measuring the booming interest in utility and cloud computing is the fact that my Google Alerts emails for both terms have grown from one or two links a day to five or more links each and every day. People are talking about this stuff because the economics are so compelling it's impossible not to. Just remember to think before you jump on in.
4 comments:
Ok now, maybe, I am a little more "Educated". You're not going to like this. I see 3Tera and Collage as apples and oranges. Collage looks like a provisioning system on steroids and maybe the only true orchestration system I have ever seen (based on your web site stuff). However, and I will admit I am probably splitting hairs, Collage does not fall into what I would call a cloud. Utility computing yes, cloud no. IMHO, in a cloud, hardware and physical location are completely out of the picture.
Actually, most of what IBM has done so far with their cloud initiatives is very much based on a solution similar to yours. IBM uses the acquired Think Dynamics software (now called TPM) internally to provision dev, test, and demo systems. However, it's nothing like what you guys are doing with power management and utility services.
In the end, can Collage provide the same SLAs as 3Tera? I am certain it can.
johnmwillis.com
Hmmm. I guess I'm a little confused...
You seem to imply that 3TERA is either (a) hardware independent or (b) able to completely hide physical infrastructure from software.
If you mean (a), then that is easily countered by a trip to their web site. 3TERA has supported hardware platforms, just like anyone else (x86 primarily, though with a hint of SPARC coming; IDE/SATA drives; GigE).
If you mean (b), then all I have to say is that it's easy to "virtualize" shared network and security infrastructure if you build your own software versions of everything. However, the hardware products that exist in these spaces are popular for a reason--they meet the performance and cost characteristics required by large enterprise systems.
Cassatt, on the other hand, has been built from the ground up to have exactly zero effect on the execution stack of the software it manages, and near zero effect on the network and security architectures an enterprise would choose to employ. (Again, there are some hardware/OS platform choices that unfortunately have to be made, but we aim to meet the vast majority of the market requirement.) The way we manage the software stack allows applications to be migrated with ease between Cassatt environments--much like (but not exactly like) 3TERA's story for their environments.
That said, I won't say 3TERA's approach is better or worse than Cassatt's; the purpose of my blog is not to hawk Cassatt, but to open IT's eyes to the economics and strategies that utility/cloud computing provides. In both cases, the important thing to note is that utility/cloud computing affects EVERY aspect of how IT deploys and operates software, and will likely affect many aspects of how they develop/purchase it. Choosing 3TERA won't be any more or less painful than choosing Cassatt or IBM. The long-term competition here is which approach leads to more flexibility and ease in where and how you buy compute capacity. I would argue the results of that contest are far from decided.
If you'd like a demo of what we are doing and are in the San Jose area, please let me know. I'd be happy to continue this conversation with our platform right there to play with. Let me know: james dot urquhart at cassatt dot com.
James,
Your post led me to read the other articles you reference including the one by 'pupwhines'.
Aren't you being a bit dismissive of his views? I thought he had analysed some of the issues quite well.
You don't address his point about latency, which I would have thought is fairly fundamental. In fact, he is one of the few people I've read that even mentions latency when discussing "The Cloud".
What's your take? Is his point about latency valid?
Sure, his point about badly designed distributed systems being untenable is well taken. Anyone who tries to join relational tables over the Internet today is nuts. But then, that shows a distinct bias towards a specific data architecture, doesn't it?
Look, the so-called laws of physics are changing every day.
- Cisco has a new switch line that changes the rules of throughput. (Caveat: It's version 1 of a new software platform, so there may be kinks in the near term.)
- MapReduce and Hadoop are changing the way distributed data is processed, and are one likely answer to pupwhines' concerns (see the sketch after this list).
- SaaS vendors are having to face integration issues already, and are offering compelling products to address this. See force.com.
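For readers who haven't seen the pattern, here is a toy, single-process sketch of the MapReduce idea mentioned above (a simple word count over a few log lines). It only shows the shape of the computation; real Hadoop jobs run the same map and reduce phases across many machines and move the code to where the data lives, rather than shipping the data around.

    # Toy, in-process illustration of the MapReduce pattern (not Hadoop itself).
    from collections import defaultdict

    def map_phase(records):
        # Emit (key, value) pairs: here, one (word, 1) per word in each record.
        for record in records:
            for word in record.split():
                yield word.lower(), 1

    def reduce_phase(pairs):
        # Group by key, then combine values: here, summing counts per word.
        grouped = defaultdict(int)
        for key, value in pairs:
            grouped[key] += value
        return dict(grouped)

    logs = ["utility computing", "cloud computing", "utility cloud"]
    print(reduce_phase(map_phase(logs)))
    # -> {'utility': 2, 'computing': 2, 'cloud': 2}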
Not to mention, are any SaaS vendors allowing a two-table join with another vendor's environment? No, mostly because it isn't needed.
It's fine to say "only people who have written code, designed databases, administered servers or engineered networks at some time in their careers will get to write about IT's past, present and future." But real engineers and administrators are showing the world that highly distributed systems can be done, and can operate at tremendous scale. Is pupwhines saying that Google engineers don't understand what they are doing? I don't think so, but they are doing exactly the things that pupwhines claims won't (can't?) be done. (As are Microsoft, Yahoo, Amazon, eBay, etc., etc., etc.)
His quick dismissal of cloud computing as a fairy tale is what I am dismissive of. In the short term, his arguments about latency (and privacy/security) hold true for many existing applications--thus my recommendation that existing data centers convert their own capacity into a utility/cloud rather than go to Amazon or another provider. But I stand by my argument that these problems will be addressed, and the world of capacity as a utility will come to pass for most IT environments.