Tuesday, November 06, 2007

Beating the Utility Computing Lockdown, Part 2

Well, not long after I posted part 1 of this series, Bert noted that he agreed with my assessment of lock-in, then proceeded to explain how his grid platform (a competitor to my employer's) was the answer.

Now, Bert is just having fun cross-promoting on a blog with ties to a competitor, but I think it's only fair to note that no one has a platform that avoids vendor lock-in in utility computing today. The best that someone like 3TERA (or even Cassatt) can do is give you some leverage between the organizations that are utilizing their platform; however, to get the portability he speaks of, you have to lock your servers (and possibly load balancers, storage, etc.) into that platform. (Besides, as I understand it, 3TERA is really only portable at the "data center" level, not the individual server level. I suppose you could define a bunch of really small "data centers" for each application component, but in a SOA world, that just seems cumbersome to me.)

Again, what is needed is a truly open, portable, ubiquitous standard for defining virtual "components" and their operation-level configurations, one that can be ported and run across a wide variety of virtualization, hardware, and automation platforms. (Bert, I've been working on Cassatt--are you willing to push 3TERA to submit, cooperate on, and/or agree to such a standard in the near future?) As I said once before, I believe the file system is the perfect place to start, as you can always PXE boot a properly defined image on any compatible physical or virtual machine, regardless of the vendor. (This is true for every platform except Windows--c'mon, Redmond, get with the program!) However, I think the community will have the final say here, and the Open Virtual Machine Format (OVF) is a hell of a start. (It still lacks any tracking of operation-level configurations, such as "safe" CPU and memory utilization thresholds, SNMP traps to monitor for heartbeats, etc.)
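To make the PXE point concrete, here is a minimal sketch, in Python, of the kind of glue involved: it writes a per-machine PXELINUX config entry that boots a box from a captured file system exported over NFS. This is one plausible shape for the idea, not any vendor's actual tooling; every path, address, and image name below is hypothetical.

# Sketch: generate a PXELINUX config entry that boots a machine from a
# captured file-system image exported over NFS. Paths and addresses are
# hypothetical; adapt to your own TFTP/NFS layout.

TEMPLATE = """DEFAULT captured
LABEL captured
  KERNEL {kernel}
  APPEND initrd={initrd} root=/dev/nfs nfsroot={nfs_server}:{image_root} ro
"""

def write_pxe_entry(mac, kernel, initrd, nfs_server, image_root,
                    tftp_root="/var/lib/tftpboot"):
    """Write a per-machine pxelinux.cfg file keyed by MAC address.

    PXELINUX looks for a file named 01-<mac, dash-separated> under
    pxelinux.cfg/ before falling back to 'default'.
    """
    name = "01-" + mac.lower().replace(":", "-")
    path = f"{tftp_root}/pxelinux.cfg/{name}"
    with open(path, "w") as f:
        f.write(TEMPLATE.format(kernel=kernel, initrd=initrd,
                                nfs_server=nfs_server, image_root=image_root))
    return path

if __name__ == "__main__":
    # Boot the (hypothetical) captured image 'app01' on one physical box.
    print(write_pxe_entry("00:16:3E:12:34:56",
                          kernel="images/app01/vmlinuz",
                          initrd="images/app01/initrd.img",
                          nfs_server="10.0.0.1",
                          image_root="/exports/images/app01"))

The only per-machine state is a tiny text file keyed by MAC address; the image itself stays vendor-neutral on the boot server.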

Unfortunately, those standards aren't baked yet. So here's what you can do now to avoid vendor lock-in with a capacity provider tomorrow: begin with a utility computing platform that you can use in your existing environment today. Ideally, that platform:
  1. Does not require you to modify the execution stack of your application and server images, e.g.:
    • no agentry of any kind that isn't already baked into the OS,
    • no requirement to run on virtualization if that isn't appropriate or cost effective.
  2. Uses a server/application/whatever imaging format that is open enough to "uncapture" or translate to a different format by hand if necessary. Again, I like our approach of just capturing a sample server file system and "generalizing" it for replication as needed; it's reversible if you know your OS well (see the sketch after this list).
  3. Is supported by a community or business that is committed to supporting open standards wherever appropriate and will provide a transition path from any proprietary approach to the open approach when it is available.
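As a rough illustration of point 2, here is a hedged sketch of what "capturing and generalizing" a file system could look like. To be clear, this is not Cassatt's actual mechanism, just one plausible implementation: after copying the root file system (say, with rsync), move the handful of host-specific files into a holding area so the image can be replicated, keeping what you removed so the step stays reversible.

# Sketch: "generalize" a captured root file system by setting aside
# host-specific files. An illustration only, not any vendor's actual
# implementation; the file list below is Linux-flavored and incomplete.
import shutil
from pathlib import Path

# Files that tie an image to one particular host (hypothetical, partial list).
HOST_SPECIFIC = [
    "etc/hostname",
    "etc/ssh/ssh_host_rsa_key",
    "etc/ssh/ssh_host_rsa_key.pub",
    "etc/udev/rules.d/70-persistent-net.rules",
]

def generalize(image_root: str, holding_area: str) -> None:
    """Move host-specific files out of a captured image into a holding area.

    Because the files are moved rather than deleted, the operation is
    reversible: restoring them "uncaptures" the image back to the original
    host's identity.
    """
    root, held = Path(image_root), Path(holding_area)
    for rel in HOST_SPECIFIC:
        src = root / rel
        if src.exists():
            dst = held / rel
            dst.parent.mkdir(parents=True, exist_ok=True)
            shutil.move(str(src), str(dst))

# Usage (paths hypothetical):
#   rsync -aHx / /exports/images/app01/   # capture the live file system
#   generalize("/exports/images/app01", "/exports/identities/app01")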

I used to be concerned that customers would ask why they should convert their own infrastructure into a utility (if their goal was to use utility computing technology to reduce their infrastructure footprint). I now feel comfortable that the answer is simply that there is no safe alternative for large enterprises at this time. Leaving aside the issue of security (e.g., can you trust your most sensitive data to S3?) and the fact that there is little or no automation available to actually reduce your cost of operations in such an environment, there are many risks to consider with respect to how deeply you are willing to commit to a nascent marketplace today.

I encourage all of you to get started with the basic concepts of utility computing. Next, I want to talk about ways to cost-justify this activity with your business, and a little about the relationship between utility computing and data center efficiency.

2 comments:

Bert said...

Sorry for the cross-vendor post. I try not to do that, but it was late and I didn't remember you were with Cassatt. On the standardization issue, I'd welcome the chance to work on a true standard. Unfortunately, DMTF isn't the right organization, as it doesn't provide for open discussion or voting. IEEE would be a better venue.

Anonymous said...

I've spoken at several IT service seminars and, like you, I like to preach about ITIL management. This is where good data center automation begins. Many companies, in an attempt to save a few dollars, will overlook the importance of ITIL and generally see their networks compromised at some point and in some fashion.

In general, I recommend that companies begin by looking for a company that can provide them with an ITIL service. Outsourcing this aspect of database administration is both cost-effective and smart, as it allows any company's IT team to stay focused on immediate in-house needs. I also warn companies against overlooking this important step and remind them that they need to start here if they are hoping to have a solid runbook automation process.