Wednesday, December 05, 2007

How fluid is your software?

I come from a software development background, and I can never quite get the itch to build the perfect architecture out of my system. That's partly why it is so hard for me to blog about power, even though it is an absolutely legitimate topic, and a problem that needs to be attacked from many fronts. However, power is not a software issue; it is a hardware and facilities issue, and my heart just isn't in it when it comes to pontificating.

All is not lost, however, as Cassatt still plays in the utility computing infrastructure world, and I get plenty of exposure to dynamic provisioning, service level automation (SLAuto) and the future of capacity on demand. And to that end, I've been giving a lot of thought to the question of what, if any, software architecture decisions should be made with utility computing in mind.

While a good infrastructure platform won't require wholesale changes to your software architecture (and none at all, if you are willing to live with consequences that will become obvious later in this discussion), the very concept of making software mobile--in the changing capacity sense, not the wireless device sense--must lead all software engineers and architects to contemplate what happens when their applications are moved from one server to another, or even one capacity provider to another. There are a myriad of issues to be considered, and I aim to cover just a few of them here.

The term I want to introduce to describe the ability of application components to migrate easily is "fluidity". The definition of the term fluidity includes "the ability of a substance to flow", and I don't think it's much of a stretch to apply the term to software deployments. We talk about static and dynamic deployments today, and a fluid software system is simply one that can be moved easily without breaking the functionality of the system.

An ideally fluid system, in my opinion, would be one that could be moved in its entirety or in pieces from one provider to another without interruption. As far as I know, nobody does this. (3TERA claims they can move a data center, but as I understand it you must stop the applications to execute the move.) However, for purely academic reasons, let's analyze what it would take to do this:

  1. Software must be decoupled from physical hardware. There are currently two ways to do this that I know of:
    • Run the application on a virtual server platform
    • Boot the file system dynamically (via PXE or similar) on the appropriate capacity
  2. Software must be loosely coupled from "external" dependencies. This means all software must be deployable without hard-coded references to the "external" systems on which it depends. External systems could be other software processes on the same box, but the most critical elements to manage here are software processes running on other servers, such as services, data conduits, BPMs, etc.
  3. Software must always be able to find "external" dependencies. Loose coupling, as most of you know, is sometimes easier said than done, especially in a networked environment. Critical here is that the software can locate, access and negotiate communication with the external dependencies. Service registries, DNS and CMDB systems are all tools that can be used to help systems maintain or reestablish contact with "external" dependencies.
  4. System management and monitoring must "travel" with the software. It's not appropriate for a fluid environment to become a Schrödinger's box, where the state of the system is unknown until you can reestablish measurement of its function. I think this may be one of the hardest requirements to meet, but at first blush I see two approaches:
    • Keep management systems aware of how to locate and monitor applications as they move from one host to another.
    • Allow monitoring systems to dynamically "rediscover" systems when they move, if necessary. (Systems maintaining the same IP address, for instance, may not need to be rediscovered.)
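To make points 2 and 3 concrete, here is a minimal sketch of what "no hard-coded references" might look like in practice. The function, variable names and the environment-variable convention are all hypothetical, not taken from any particular product: the idea is simply that a dependency is resolved by stable name at startup (or on reconnect), with an operator override available so the service can be repointed after a move.

```python
import os
import socket

def resolve_dependency(service_name, default_port):
    """Resolve an external dependency by stable name rather than by a
    baked-in IP address, so the deployment survives a migration."""
    # Hypothetical convention: an environment variable like
    # BILLING_SERVICE_ADDR may override the lookup, letting an operator
    # repoint the dependency after it moves to another provider.
    override = os.environ.get(
        service_name.upper().replace("-", "_") + "_ADDR")
    if override:
        host, _, port = override.partition(":")
        return host, int(port or default_port)
    # Otherwise fall back to DNS, which the infrastructure platform
    # keeps current as workloads migrate.
    return socket.gethostbyname(service_name), default_port
```

The same name-based lookup could just as easily be backed by a service registry or CMDB; the point is that the binding happens at run time, not at deployment time.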
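The second approach under point 4 -- dynamic "rediscovery" -- can also be sketched briefly. This is an illustrative toy, not a real monitoring product: the class and method names are invented, and the probe and resolver are pluggable so the idea stands on its own. The monitor keys on a stable service name rather than an IP address, and when a probe fails it re-resolves the name before declaring an outage, so a workload that merely migrated is rediscovered instead of reported dead.

```python
import socket

class MigrationAwareMonitor:
    """Toy monitor that tracks a service by name and re-resolves the
    name when a probe fails, on the theory that the workload may have
    moved rather than died."""

    def __init__(self, service_name, port, resolver=socket.gethostbyname):
        self.service_name = service_name
        self.port = port
        self.resolver = resolver
        self.address = resolver(service_name)

    def probe(self, connect=None):
        """Return (status, address): 'up', 'rediscovered', or 'down'."""
        connect = connect or self._tcp_connect
        if connect(self.address, self.port):
            return "up", self.address
        # Probe failed: re-resolve before declaring an outage, in case
        # the application migrated and the name now points elsewhere.
        new_address = self.resolver(self.service_name)
        if new_address != self.address and connect(new_address, self.port):
            self.address = new_address
            return "rediscovered", new_address
        return "down", self.address

    def _tcp_connect(self, host, port):
        try:
            with socket.create_connection((host, port), timeout=2):
                return True
        except OSError:
            return False
```

Note that this only works if something (DNS, a registry) is updated as part of the move -- which is exactly the kind of coordination between infrastructure and management tooling that fluidity seems to demand.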

This is just a really rough first cut of this stuff, but I wanted to put this out there partly to keep writing, and partly to get feedback from those of you with insights (or "incites") into the concept of software fluidity.

In future posts I'll try to cover what products, services and (perhaps most importantly) standards are important to software fluidity today. I also want to explore whether "standard" SOA and BPM architectures actually allow for fluidity. I suspect they generally do, but I would not be surprised to find some interesting implications when moving from static SOA to fluid SOA, for instance.

Respond. Let me know what you think.

1 comment:

swardley said...

Hi James - I totally agree with you (but then of course I would when it comes to portability especially in the XaaS world).

After messing around with terms like fungitility, I settled on an old English word, Patration, to describe "the freedom and portability to move from one service provider to another without hindrance or boundaries"

I nicked the idea from Robert Lefkowitz. I would highly recommend the use of archaic words :-)