This is one of the unforeseen effects of "paying for what you use", and I have to say it's an effect that should scare the heck out of most enterprise IT departments. Although I would argue that part of that fear should come from the exposure of lousy coding in most custom applications, the worst part is the lack of control most organizations will have over the lousy coding in the packaged applications they purchased and installed. Suddenly, algorithms matter again in all phases of software development, not just the computing-intensive steps.
The worst offenders here will probably be the user interface components: Java Swing, AJAX and even browser applications themselves. To the extent that these are hosted from centralized computing resources (and even most desktops fall into this category in some visionaries' eyes), the incredible amount of constant cycling, polling and unnecessary redrawing will become painfully obvious over the next 10 years or so.
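To make the waste concrete, here is a minimal sketch (in Java, since Swing is the example above) contrasting a polling UI update with an event-driven one. The `StatusSource` interface and the timings are purely illustrative assumptions, not from any particular framework.

```java
import javax.swing.*;

// Illustrative only: contrasts a UI that wakes up constantly with one that
// reacts to changes. In a metered environment, the first style burns cycles
// (and money) even when nothing on the screen needs to change.
public class PollingVsEvents {

    // Hypothetical data source, included only to keep the sketch self-contained.
    interface StatusSource {
        String read();
        void onChange(java.util.function.Consumer<String> listener);
    }

    // Wasteful: fire 20 times a second and redraw whether or not anything changed.
    static void pollingStyle(JLabel label, StatusSource source) {
        new Timer(50, e -> label.setText(source.read())).start();
    }

    // Cheaper: update the label only when the underlying value actually changes.
    static void eventStyle(JLabel label, StatusSource source) {
        source.onChange(value -> SwingUtilities.invokeLater(() -> label.setText(value)));
    }
}
```

Both versions look identical to the person sitting in front of the screen; only the cycle count behind them differs.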
I have always been a strong proponent of not over-engineering applications. If you could meet the business's ongoing service levels with an architecture that cost "just enough" to implement, you were golden in my book. However, utility computing changes the mathematics here significantly, and that key phrase of "meet the business's ongoing service levels" comes much more into play. Ongoing service levels now include optimizing the cost of executing the software itself; something that could be masked in an underutilized, siloed-stack world.
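A back-of-the-envelope sketch of what I mean, with entirely made-up numbers: the same nightly job written two ways, priced per CPU-hour the way a utility provider would meter it.

```java
// Hypothetical rates and run times, chosen only to illustrate the point:
// in a siloed stack both versions "cost" the same sunk hardware; on a
// metered bill the sloppy one costs three times as much, month after month.
public class MeteredCost {
    public static void main(String[] args) {
        double pricePerCpuHour = 0.10;   // assumed utility rate
        double efficientCpuHours = 10;   // well-written nightly job
        double sloppyCpuHours = 30;      // same job, three times the cycles
        int runsPerMonth = 30;

        System.out.printf("Efficient version: $%.2f per month%n",
                efficientCpuHours * pricePerCpuHour * runsPerMonth);
        System.out.printf("Sloppy version:    $%.2f per month%n",
                sloppyCpuHours * pricePerCpuHour * runsPerMonth);
    }
}
```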
The performance/optimization guys must be loving this, because they now have a product that should see an immediate increase in demand. If you are building a new business application today, you had better be:
- Building for a service-based, highly distributed, utility infrastructure world, and
- Making sure your software is as cheap to run as possible.
That second point itself implies a few key things. Your software had better be:
- as standards-based as possible--so that any computing provider can successfully deploy, integrate and monitor your application;
- as simple to install, migrate and upgrade remotely as possible--to allow for cheap deployment into a competitive computing market;
- as efficient to execute as possible--each function should take as few cycles as possible to do its job (see the sketch after this list).
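As an example of that last point, here is a small sketch of the kind of "same result, fewer cycles" choice I have in mind; the data and method names are illustrative, not from any real application.

```java
import java.util.*;

// Both methods deduplicate a list and return the same elements; only the
// number of cycles spent getting there differs.
public class FewerCycles {

    // O(n^2): every element is compared against every element already kept.
    static List<String> dedupNaive(List<String> input) {
        List<String> out = new ArrayList<>();
        for (String s : input) {
            if (!out.contains(s)) {
                out.add(s);
            }
        }
        return out;
    }

    // Roughly O(n): a hash-backed set makes the membership test cheap,
    // and LinkedHashSet preserves the original order.
    static List<String> dedupWithSet(List<String> input) {
        return new ArrayList<>(new LinkedHashSet<>(input));
    }
}
```

On a siloed, underutilized server nobody notices the difference; on a metered one, the naive version shows up on the invoice as the inputs grow.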
The cost dynamics will be interesting to watch, especially their effects on the agile, SOA, and ITIL movements. I will keep careful tabs on this, and will share my ongoing thoughts in future posts.
2 comments:
Utility computing services provide the ability to add resources with a linear increase in cost. Without utility computing your costs increase geometrically as you start to scale beyond a few dozen servers. Therefore, I'd argue that the economic impact of bad code has actually been lessened by utility computing.
What has changed is that this cost is now visible as a single line item on a bill.
This will make it far easier for entrepreneurs to make the trade-off between time to market and operational expense.
Don't misinterpret the meaning of my post. Yes, utility computing definitely provides cost benefits over siloed systems for all software.
However, it will most certainly be more expensive to host a badly written application in a utility computing environment than a well written one. Once applications are moved to a utility computing environment, the drive will be to make those applications run more efficiently.
I'm just warning people to think that way before they make the switch.