Friday, September 08, 2006

Decoupling software from hardware: a story

Imagine two companies merging into one—two production datacenters, two sets of architecture standards, and two unique installed software baselines.

In any modern corporate merger, one key element of success is using economies of scale to boost the value of each company's technology assets. To achieve this, the combined organization's applications must be deployed into the most cost-effective operations environment possible.

How would you handle this in your company today? My guess is that your applications remain tightly coupled to the hardware systems they run on, so moving applications also means moving hardware. An efficiency such as consolidating the combined company's IT into one datacenter becomes insanely expensive: not only do you have to deal with the logistics of physically moving hardware from one location to another, you also have to:

  • Find real estate, power, air conditioning capacity, and expertise in the surviving datacenter to support the relocated hardware.

  • Configure the surviving datacenter's networks and storage, as well as the relocated software payloads (including the application, any middleware platform, and the operating system), so that the applications actually work.

  • Begin the arduous process of consolidating the software stacks of both the indigenous and relocated applications, including porting applications to approved platforms, resolving functional overlap between applications and integrating functionality to meet business process needs.

The unfortunate truth is that you would put together a large-scale project plan involving thousands or millions of dollars for logistics, datacenter capacity, and human resources. The capital and operational expenditures would be overwhelming, and the plan would take months—possibly years—to implement. What an incredible burden on the success of the merger.

Now, consider a model where both companies' software is decoupled from the computing resources it runs on. In the ideal case, the application and data images from the decommissioned datacenter would simply be moved into the surviving datacenter and executed on available computing capacity there. Because the unit being relocated is an application (or even an individual service), these components can simply be deployed on the OS/middleware stacks already supported in the surviving datacenter. The business owners of the relocated applications don't care what platform an application runs on, as long as it runs.

If OSes or middleware need to be migrated as well, for technical or business reasons, they too can be delivered as portable software payload images that can simply be allocated to computing resources as needed. In fact, each layer of any software stack can be managed and delivered as a separate, portable "building block" that does not need extensive installation or configuration to operate on completely new computing resources.
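To make the "building block" idea a bit more concrete, here is a minimal sketch in Python; the names (PayloadImage, ComputeResource, the billing-app example) are invented for illustration, not drawn from any particular product. The point is simply that each layer is a self-describing, portable image that binds to any compute resource already providing the layers beneath it, rather than being installed and configured by hand on a specific box.

```python
from dataclasses import dataclass, field

@dataclass
class PayloadImage:
    """A portable software payload: an OS, middleware, or application image."""
    name: str
    layer: str                                    # "os", "middleware", or "application"
    requires: list = field(default_factory=list)  # layers this image must sit on

@dataclass
class ComputeResource:
    """A generic host with spare capacity in the surviving datacenter."""
    hostname: str
    stack: list = field(default_factory=list)

    def deploy(self, image):
        """Bind an image to this host if the layers it needs are already present."""
        provided = {img.layer for img in self.stack}
        missing = [layer for layer in image.requires if layer not in provided]
        if missing:
            raise RuntimeError(f"{image.name} needs {missing} on {self.hostname}")
        self.stack.append(image)                  # no per-host install or config step

# A relocated application lands on whatever host already runs an approved stack.
host = ComputeResource("surviving-dc-node-17")
host.deploy(PayloadImage("approved-os", "os"))
host.deploy(PayloadImage("approved-app-server", "middleware", requires=["os"]))
host.deploy(PayloadImage("billing-app", "application", requires=["os", "middleware"]))
```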

If additional capacity is needed to support the relocated applications, only the number of systems required to provide that capacity would need to be moved; only a fraction of the decommissioned datacenter's hardware would have to be shipped, and the rest could be sold or discarded. In fact, since both the surviving and decommissioned datacenters would have carried their own excess capacity for load spikes, failover, and so on, it is possible to reduce the total number of computing resources in the combined datacenter just by consolidating that excess capacity. (Excess capacity includes unused CPU time, memory, storage, and network bandwidth.)
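As a rough illustration of that last point, here is a back-of-the-envelope sketch in Python with made-up numbers, assuming the two sites' load peaks do not coincide. Pooling the headroom the two datacenters each kept for spikes and failover lowers the total server count.

```python
import math

def servers_for(peak_load, per_server=100, headroom=0.30):
    """Servers needed to carry peak_load plus a headroom fraction for spikes/failover."""
    return math.ceil(peak_load * (1 + headroom) / per_server)

# Hypothetical peak loads, in arbitrary capacity units.
dc_a_peak = 4200
dc_b_peak = 3100
combined_peak = 6500   # assumed: the two sites' spikes don't happen at the same time

separate     = servers_for(dc_a_peak) + servers_for(dc_b_peak)
consolidated = servers_for(combined_peak)

print(f"run as two datacenters: {separate} servers")      # 96
print(f"consolidated:           {consolidated} servers")  # 85
print(f"capacity freed:         {separate - consolidated} servers")
```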

Granted, this ideal is far from being realized (though all of the pieces exist in one form or another—more on that later), and there are still fixed costs associated with decommissioning a datacenter; the cost of supporting the relocated software in its new home doesn't go away either. But think about how much money would be saved in the scenario above, and how much time and pain would be removed from a datacenter consolidation effort. This is just one example of the advantages of decoupling software from hardware, operating systems, and middleware.

Next I'll talk about what technologies are contributing to decoupling software from hardware today. I'll also connect all of this to the subject of this blog, service level automation. Stay tuned...
