Friday, September 01, 2006

Loosening the Bonds Between Software and Hardware

Why is software still so tightly coupled with hardware?

The days when the applications running on a computer provided their own system control software are long gone.

We have introduced operating systems to allow applications to be built without specific knowledge of the hardware they will run on.

We have created software layers to separate the application from the operating system in the quest for write once, run anywhere.

Our operating systems have gotten more intelligent about delaying the coupling with hardware until the last second. Case in point: most *NIX installations can be moved from one compatible bare-metal server to another and booted without modification.

Lately, we have even introduced a software layer between the hardware and the operating system to allow any operating system to be decoupled from its hardware. (Though now the OS is coupled to virtual hardware instead.) This is critical in a non-portable OS like Windows.

Yet, despite all of these advances, most of us use the following deployment model:
  • Take a new server (or remove an existing server from user access)
  • Install an OS (if it's not there already)
  • Install any libraries and/or containers required for the application(s) (if not there already)
  • Install/upgrade the application(s)
  • Test like crazy
  • Put the server in the data center (or allow user access again)
Once this deployment is completed, that server is forever hosting that application or its future upgrades. Almost never do we actually repurpose a server for another application, because the cost of repeating the steps listed above is so high.
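The deployment model above can be sketched as a script. This is a hypothetical illustration, not a real provisioning tool: the server name and the six steps simply echo what an operator (or, someday, an automation tool) would do at each stage.

```shell
#!/bin/sh
# Sketch of the manual deployment model described in this post.
# Server names are illustrative assumptions; each step just prints
# what an operator would actually carry out by hand.
deploy() {
  server="$1"
  echo "1. remove ${server} from user access"
  echo "2. install OS (if it's not there already)"
  echo "3. install required libraries/containers (if not there already)"
  echo "4. install/upgrade the application(s)"
  echo "5. test like crazy"
  echo "6. return ${server} to user access"
}

deploy "app-server-01"
```

Notice that nothing in the script is reusable across applications: every step bakes application-specific decisions into that one physical box, which is exactly why repurposing is so rare.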

Now, to be fair, virtual server technology is allowing us to be more flexible in how we use physical hardware, as we can move OS/container/application stacks from one machine to another. But I want you to note that this requires that each physical server involved be loaded with a hypervisor for that VM technology; in other words, the physical server remains tightly coupled to the VM environment, and can only be used for VM hosting.
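To make the coupling concrete, here is a minimal sketch of what moving a VM between hosts looks like, assuming libvirt's `virsh` CLI and hypothetical domain and host names. The command is only printed, not executed, since this sketch has no real hypervisor to talk to; the point is the precondition it implies, namely that both the source and destination hosts must already be running the hypervisor stack.

```shell
#!/bin/sh
# Sketch of VM live migration, assuming libvirt's virsh CLI.
# Domain and host names are hypothetical illustrations.
migrate_cmd() {
  domain="$1"
  dest_host="$2"
  # Print (rather than run) the live-migration command.
  echo "virsh migrate --live ${domain} qemu+ssh://${dest_host}/system"
}

migrate_cmd "webapp-vm" "server-02"
```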

(I've always wondered why we are working so hard to move our systems to commodity physical hardware, but are just fine with deploying those same systems on proprietary virtual hardware, but never mind.)

Ideally, I would like to see us have the capability to move software stacks across physical resources without making those physical resources dedicated to *any* single software technology. In the Linux (and other Unix flavor) world, this is actually somewhat possible, but we are a long way from this in the Windows world. It's going to take standardized HAL (Hardware Abstraction Layer) and power management interfaces, OSes that can adjust to varying hardware configurations, and will ultimately require operations automation tools to take full advantage.

Stay tuned for a vision of what the world could be like if we achieved this level of decoupling...
