Monday, October 16, 2006

Two important links...

I have two new links to trumpet:

Thursday, October 12, 2006

The Datacenter is Dead! (Or Just Mutating Badly!)

I used to be a member of an object-oriented programming user group in St. Paul, MN, run by a professor at St. Thomas University (if I recall correctly--I can't even remember his name). This man was a tireless organizer of what was then a critical forum for fostering Minnesota software development expertise. He was also a frequent speaker to the group, and one of his talks always comes immediately to mind when I remember the "good old days".

This computer science professor stood in front of a highly attentive audience one evening and declared "data is dead!"

His point was that if we modified our models of how computers store data persistently to use an "always executing" approach, the databases that manage the storage and retrieval of data from block-based storage would become obsolete. (I think of "always executing" systems much like your cell phone or Palm device today: when you turn on the device, applications remain in the state they were in when you last shut it off.)
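
To make the idea concrete, here is a minimal sketch of my own (nothing the professor actually presented): object state that simply persists across runs, with Python's standard shelve module standing in for an "always executing" runtime, and no database schema or SQL in sight.

```python
import shelve

class Counter:
    """Toy application state we would like to survive a 'power off'."""
    def __init__(self, value=0):
        self.value = value

    def increment(self):
        self.value += 1

# Each run picks up exactly where the last one left off -- no schema,
# no SQL, no explicit mapping of fields to tables.
with shelve.open("app_state") as state:
    counter = state.get("counter", Counter())
    counter.increment()
    state["counter"] = counter      # the whole object is persisted as-is
    print("runs so far:", counter.value)
```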

It's funny to remember how much we thought objects were going to replace everything, given the intense dependency we have on relational databases today. But his arguments forced us to really think about the relationship between the RDBMS and object-oriented applications. One result of years of this thinking, for example, is Hibernate.

Jonathan Schwartz, my beloved leader in a former life, recently blogged about the future of the datacenter, contending that the days of large, centralized computing facilities are numbered. In other words, "the datacenter is dead".

His contention is that the push towards edge and peer computing with "fail in place" architectures will make central facilities tended by technology priests obsolete. Ultimately, his point is that we should reexamine current enterprise architectures given the growing ubiquity of these new technologies.

I have to say, I think he makes a good argument...up to a point. My problem is that he seems to ignore two things:
  • Data has to live somewhere (i.e. data is certainly not dead)
  • People expect predictable service levels from shared services--and the more critical those service levels are, the more important it is that they can be guaranteed.
Rather, I think that the days of the company-owned datacenter are beginning to wane, and that the future lies in a combination of edge computing and commercial computing utilities that offer service delivery at guaranteed service levels.

The good news, I think, is that such a vision can be reached in baby steps away from the static, siloed, humans-as-service-level-managers approach of today's IT shops.

As you may have guessed from my previous blogs:
  • I believe the first of these steps is to shed dependencies between software services and infrastructure components.
  • Following that, we need to begin turning monitors into meters, capturing usage data both for real-time correction of service level violations and for analysis of usage and incident trends (see the sketch after this list).
  • Finally, we need the automation tools that guarantee these service levels to operate across organization boundaries, allowing businesses to drive the behavior (and associated cost) of their services wherever they may run in an open computing capacity marketplace.
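
To illustrate the metering step, here is a minimal sketch with invented names (ServiceMeter and its methods are mine, not any particular product's): a single stream of usage samples feeds both immediate reaction to a service level violation and later trend analysis.

```python
import statistics
import time
from dataclasses import dataclass, field

@dataclass
class ServiceMeter:
    service: str
    max_response_ms: float                    # the service level objective
    samples: list = field(default_factory=list)

    def record(self, response_ms: float) -> None:
        """Capture a usage sample and react immediately if the objective is violated."""
        self.samples.append((time.time(), response_ms))
        if response_ms > self.max_response_ms:
            self.correct()

    def correct(self) -> None:
        # Placeholder for real-time correction, e.g. asking an
        # automation layer for more capacity.
        print(f"[{self.service}] service level violated -- requesting more capacity")

    def trend_report(self) -> float:
        """Offline analysis over the same metering data."""
        return statistics.mean(ms for _, ms in self.samples)

meter = ServiceMeter(service="order-entry", max_response_ms=250.0)
for observed in (120.0, 180.0, 410.0, 150.0):
    meter.record(observed)
print("average response:", meter.trend_report(), "ms")
```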
The cool thing to think about, though, is how SLAs apply to edge devices. Can we guarantee that necessary processing will occur both in backend data and service utilities and in our edge and interface devices? How about in a peer network environment, especially one where a single organization does not own or manage all of the computing capacity running the service?

No, neither data nor the datacenter is dead; they are just evolving quickly enough that they may soon be unrecognizable...

Thursday, October 05, 2006

InfoWorld: IT's Virtual Asset Economy

Check out:

http://www.infoworld.com/article/06/10/04/41OPcurve_1.html

Hmmmm… Service Level Automation, anyone?
“When money is distributed to managers for IT-related purchases, that capital goes to IT with the investor’s minimum requirements attached. Ideally, those requirements will be expressed in terms that are accessible to the investors…”

Great concept. Almost a "commons" (in the 18th-century farming sense) for computing resources. Certainly many similarities to commodity market models as well (e.g. options, trading, etc.).
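
To play with the analogy (this is purely my own toy illustration, not anything from the InfoWorld piece), imagine a shared pool of compute capacity that business units draw against, with every draw metered so the "investors" can see usage in terms they understand:

```python
class CapacityCommons:
    """A shared pool of compute capacity drawn on by several business units."""

    def __init__(self, total_cpu_hours: float):
        self.available = total_cpu_hours
        self.usage = {}                      # business unit -> hours consumed

    def draw(self, unit: str, cpu_hours: float) -> bool:
        """Grant capacity if the commons can cover it, and meter the draw."""
        if cpu_hours > self.available:
            return False                     # would overgraze the commons
        self.available -= cpu_hours
        self.usage[unit] = self.usage.get(unit, 0.0) + cpu_hours
        return True

commons = CapacityCommons(total_cpu_hours=1000.0)
commons.draw("marketing", 300.0)
commons.draw("order-entry", 450.0)
print(commons.usage, "remaining:", commons.available)
```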

Tuesday, October 03, 2006

Service Virtualization defined

The biggest issue I've had with server virtualization vendors has nothing to do with the applicability of their products. I believe firmly that hardware virtualization is key to truly optimizing capacity usage and costs in a datacenter environment. No, the problem I have is with the rather insane notion that solving your hardware utilization problems solves your IT problems. In other words, that as long as you can manipulate servers, your costs will be minimized and compliance with service levels will be a no-brainer.

That's crazy.

My argument starts with the observation that it's not server utilization service levels that businesses care about; what really matters is the quality and availability of the services that run the business. From a business perspective--from the view of the CEO and CFO--it's not how many servers you use and how you use them, it's how many orders you gather and how cheaply you gather them.

So, this focus by the VM companies on hardware and manipulating servers (virtual or not) falls short of meeting the goals of the business. Look closely at what VMware, Virtual Iron and even XenSource are offering:
  • Virtual Servers. This is the core of their value proposition, and it's by far the most valuable tool they deliver. As we established earlier, this is needed technology.
  • Virtual Server Management. VirtualCenter, Virtualization Manager, etc. provide key tools for managing virtual servers.
  • Automation. Tools to provision, expand, move and replace servers based on current observed conditions.
What's missing from all this? I'll give you a hint: where's the word "service" above?

Virtual machine technologies have no concept of a service, or even an application. They barely have the concept of an OS. This is by design; by focusing on the hardware virtualization problem, they take on a fairly simple, well-bounded problem--just make some software behave exactly like the hardware platform it emulates. That way, they can cover existing software installations with minimal effort, and don't have to worry about the vagaries of application, network and storage configuration. All of that can happen outside of a simple virtualization wrapper targeted at making virtual servers work in physical reality.

The true holy grail for IT, in my opinion, is service virtualization. I will define service virtualization as technology that decouples a set of functionality (a web service, an application, etc.) from any of the computing resources required to execute that functionality, regardless of whether those resources are themselves hardware or software. What we ultimately want to do is to optimize the delivery of this functionality by whatever metric is deemed important by the business.
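
Here is a minimal sketch of that definition, with all of the names invented for illustration: the service is described only by what it does and the service level it must meet, while a separate placement step decides which pool of resources (physical, virtual, or a remote utility) actually runs it, optimized here by cost.

```python
from dataclasses import dataclass

@dataclass
class ServiceSpec:
    name: str
    functionality: str           # e.g. a web service endpoint
    target_availability: float   # the business-facing service level

@dataclass
class ResourcePool:
    name: str                    # "onsite-vm-farm", "compute-utility", ...
    availability: float
    cost_per_hour: float

def place(service: ServiceSpec, pools: list) -> ResourcePool:
    """Bind the service to whichever pool meets its service level most cheaply."""
    eligible = [p for p in pools if p.availability >= service.target_availability]
    if not eligible:
        raise RuntimeError(f"no pool can meet the service level for {service.name}")
    return min(eligible, key=lambda p: p.cost_per_hour)

svc = ServiceSpec("order-capture", "POST /orders", target_availability=0.999)
pools = [ResourcePool("onsite-vm-farm", 0.999, 6.0),
         ResourcePool("compute-utility", 0.9999, 4.5)]
print("run", svc.name, "on", place(svc, pools).name)
```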

Thus, I applaud the VM companies for clearly validating policy-based automation, the decoupling of software from physical hardware, and automated response to server load and failure. However, I caution each of you to consider closely whether automating server management is enough, or whether service virtualization is the better path.