I promised to go over the options one has when considering how to evolve from a typical, statically managed server environment to a utility computing model. I've thought a lot about this, and I see essentially two options:
- Deployment directly into a third party capacity utility
- Adoption of utility computing technologies in your own data center
As a quick refresher from the other two parts of this series, I want to note that this is not an easy decision by any means. Each approach has advantages for some classes of applications and services, and disadvantages for others.
For those starting from scratch, with no data center resources of their own to depreciate, option 1 probably sounds like the way to go. If you can get the service levels you want without buying the servers necessary to run them--which leads to needing people to operate them, which leads to management systems to coordinate the people, and so on--and you can get those service levels at a cost that beats owning your own infrastructure, then by all means take a look at managed hosting providers, such as Amazon (yeah, I'm starting to treat them as a special case of this category), Rackspace, etc. Most of the biggies are offering some sort of "capacity on demand" model, although most (though not all) are focused on giving you access to servers that you still have to provision and operate yourself.
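To make the "provision and operate yourself" point concrete, here is a minimal sketch of what the do-it-yourself path looks like, using Amazon's boto3 Python SDK purely as an illustration; the region, AMI ID, instance type, and key name are placeholders, and the real work starts after the instance boots.

```python
import boto3  # assuming the AWS SDK for Python (boto3); all identifiers below are placeholders

ec2 = boto3.resource("ec2", region_name="us-east-1")

# "Capacity on demand" in practice: you still ask for each server yourself...
instances = ec2.create_instances(
    ImageId="ami-12345678",    # placeholder image ID
    InstanceType="m1.large",   # placeholder instance type
    MinCount=1,
    MaxCount=1,
    KeyName="my-ops-key",      # placeholder key pair
)

# ...and you still own everything that happens after boot: configuration,
# patching, monitoring, and eventually remembering to turn it off.
for inst in instances:
    print(inst.id, inst.state["Name"])
```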
Just be aware that when you choose your vendor, you choose your poison. The lock-in issues I have described in my previous posts are very real, and can end up being very costly. There are no standards for server payload, application, or data portability between different vendors of utility computing services. Once you buy into your capacity choice, factor in that a failure to deliver service on that vendor's part may result in a costly redeployment and retesting of your entire stack, at your expense!
For this reason, I think anyone with an existing IT infrastructure that is interested in gaining the benefits of capacity as a utility should start with option 2. I also think option 2 applies to "green field" build-outs with big security and privacy concerns. This approach has the following benefits for such organizations (assuming you choose the right platform):
- Existing infrastructure can be utilized to deliver the utility. Little or no additional hardware is required.
- Applications can be run unmodified, though you may need to address minor start-up and shutdown scripting issues when you capture your software images (see the sketch after this list).
- Projects can be converted one or two at a time, allowing iterative approaches to addressing technical and cultural issues as they arise. (Don't minimize the cultural issues--utility computing touches every aspect of your IT organization.)
- Data remains on your premises, allowing existing security and privacy policies to work with minimal changes.
- Anyone with a reasonable background in system administration, software deployment and/or enterprise architecture can get the ball rolling.
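On the start-up and shutdown scripting point from the second bullet, here is a minimal, hypothetical sketch of the kind of hook a utility platform typically needs when it activates or retires a captured image. The paths and commands are placeholders for whatever your application already uses; the point is simply that start and stop need to be clean, scriptable operations.

```python
#!/usr/bin/env python3
"""Hypothetical start/stop hook for a captured application image."""
import subprocess
import sys

APP_START = ["/opt/myapp/bin/startup.sh"]   # placeholder start command
APP_STOP = ["/opt/myapp/bin/shutdown.sh"]   # placeholder stop command


def run(cmd):
    # Return the command's exit code so the platform knows whether
    # activation or retirement actually succeeded.
    return subprocess.run(cmd).returncode


def main():
    action = sys.argv[1] if len(sys.argv) > 1 else ""
    if action == "start":
        sys.exit(run(APP_START))
    elif action == "stop":
        sys.exit(run(APP_STOP))
    else:
        print("usage: apphook {start|stop}", file=sys.stderr)
        sys.exit(2)


if __name__ == "__main__":
    main()
```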
I've been personally involved in a few of these projects in the last couple of years, and I can tell you that the work to move an application to Amazon and then build the infrastructure to monitor and automate management of those applications is at least as much as it takes to convert one's own infrastructure to a platform that already provides that monitoring and automation. You may sound cool at the water cooler talking about EC2 and S3, but you've done little to actually reduce the operations costs of a complex software environment.
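To give a feel for that "infrastructure to monitor and automate" work, here is a deliberately minimal sketch of the babysitting code you end up writing yourself on raw on-demand servers; the health endpoint and the replace_server() hook are hypothetical stand-ins for your own tooling.

```python
"""Sketch of hand-rolled monitoring/recovery plumbing (hypothetical endpoint and hooks)."""
import time
import urllib.request

HEALTH_URL = "http://app.example.internal/health"   # placeholder endpoint
CHECK_INTERVAL = 60  # seconds between checks


def check_health(url=HEALTH_URL, timeout=5):
    """Return True if the application answers its health check."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except OSError:
        return False


def replace_server():
    # Placeholder: terminate the sick instance, launch a fresh one from your
    # image, re-register it with the load balancer, update monitoring...
    print("replacing unhealthy server (left as an exercise, which is the point)")


def watch():
    """Poll forever, recovering whenever the health check fails."""
    while True:
        if not check_health():
            replace_server()
        time.sleep(CHECK_INTERVAL)
```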
If you are intimidated now by the amount of work and thought that must go into adopting utility computing, I don't blame you. It's not as easy as it sounds. Don't let any vendor tell you otherwise. However, there are ways to ease into the effort.
One way is to find a problem that you must address immediately in your existing environment, with a quick ROI, and solve it with a solution that introduces some basic utility computing concepts. One of these, perhaps the most compelling financially today, is power. Others have covered the economics here in depth, but let me just note that applying automated management policies to server power is a no-brainer in a cyclical usage environment. Dev/test labs, grid computing farms and large web application environments are excellent candidates for turning off unneeded capacity without killing the availability of those applications.
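As an illustration of what such a policy amounts to, here is a minimal sketch of an automated power policy for a cyclical-usage environment. The Server class is a hypothetical stand-in for whatever handle your management platform or IPMI/iLO tooling exposes; the schedule and the idle test are the policy knobs you would actually tune.

```python
"""Minimal power-policy sketch for cyclical workloads (all server hooks are placeholders)."""
import datetime

BUSINESS_HOURS = range(7, 20)   # 07:00-19:59 local time; tune to your usage cycle


class Server:
    """Placeholder handle for a managed server."""
    def __init__(self, name):
        self.name = name
        self.powered_on = True

    def is_idle(self):
        return True  # placeholder: check load, sessions, or job queues here

    def power_on(self):
        self.powered_on = True
        print(f"powering on {self.name}")

    def power_off(self):
        self.powered_on = False
        print(f"powering off {self.name}")


def in_business_hours(now=None):
    now = now or datetime.datetime.now()
    return now.weekday() < 5 and now.hour in BUSINESS_HOURS


def apply_power_policy(inventory):
    for server in inventory:
        if in_business_hours():
            if not server.powered_on:
                server.power_on()      # capacity back before users arrive
        elif server.powered_on and server.is_idle():
            server.power_off()         # park unneeded capacity off-hours


if __name__ == "__main__":
    apply_power_policy([Server("devtest-01"), Server("devtest-02")])
```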
I realize it might sound like I'm tooting Cassatt's horn here, but I am telling you, as a field technologist with real experience getting utility computing going in some of the most dynamic and forward-thinking data centers in the country, that this approach is a win-win for the CxOs of your company as well as the grunts on the ground. If you don't like power management as a starter approach, however, there are many others: data center migration, middleware license management, hardware failover and disaster recovery are just a few that can show real ROI in the short term while getting your IT department on the road to capacity as a utility today. All of these can be handled by a variety of vendors, though Cassatt certainly gives you one of the best paths directly from a starting approach to a complete capacity-as-a-utility platform.
One final note for those who may think I've ignored the options for third-party utility computing beyond "HaaS" (Hardware as a Service) vendors. I realize that moving to SaaS, FaaS, PaaS, or WaaS (Whatever as a Service) can also give you many advantages over owning your own infrastructure, and I certainly applaud those who find ways to trim cost while increasing service through these approaches.
However, the vendor lock-in story is just as sticky in these cases, especially when it comes to the extremely valuable data generated by SaaS applications. Just be sure to push any vendor you select to support standards for porting that data/service/application/whatever to another provider if required. They won't like it, but if enough prospective customers balk at lock-in, they'll find innovative ways to assure your continued ownership of your data, probably while still making it more expensive for you to move than to stay put. Still, that's better than not having any control over your data at all...
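If it helps make the "keep a way out" point concrete, here is a hypothetical sketch of the sort of scheduled export you would want to be able to run against any SaaS vendor you pick. The endpoint, token, and payload format are placeholders, precisely because no portability standard exists to dictate them.

```python
"""Hypothetical periodic export of SaaS data to a neutral format on storage you own."""
import json
import urllib.request
from datetime import date

EXPORT_URL = "https://vendor.example.com/api/export"   # placeholder endpoint
API_TOKEN = "replace-me"                               # placeholder credential


def export_snapshot(path=None):
    """Pull a full export and write it to a dated JSON file you control."""
    path = path or f"saas-export-{date.today().isoformat()}.json"
    req = urllib.request.Request(
        EXPORT_URL,
        headers={"Authorization": f"Bearer {API_TOKEN}"},
    )
    with urllib.request.urlopen(req, timeout=30) as resp:
        data = json.load(resp)          # assumes the vendor can emit JSON at all
    with open(path, "w") as out:
        json.dump(data, out, indent=2)  # a copy on storage you own
    return path


if __name__ == "__main__":
    print("wrote", export_snapshot())
```

The real test is not whether the script works today, but whether your vendor will commit to keeping an export path like this available and complete.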
4 comments:
A most excellent and well written article. Totally agree with you.
Your question to me is when do I think it is reasonable for a customer to go on-premise vs. off-premise.
I think before offerings like Collage and 3Tera, the lines were clear. All mom-and-pops should go off-premise and all others should stay home.
My experience has always been with E5k customers, and it will be very interesting to see how their brick-and-mortar data centers react to the "utility" movement. All things being equal, I think Carr is correct and computing will eventually go the way of electricity. However, enterprises like the Federal Reserve run underground, hidden, on-premise DR sites. There is a lot of IP/Coin out there that may never go the Carr way. However, if they all do, I am sure the Fed Res will be the last ones to turn out the lights.
Botchagalupe,
Excellent. Then we agree.
James
There is a third option, which I have been encouraging my large enterprise customers to take to 'test the water': low-risk deployments, either piloting with a small group of users or using a low-risk solution (web/mail filtering) for everyone.
Cloud Computing, by its very nature (massive scalability, significant accounting infrastructure, interoperable services), is extremely difficult to emulate internally.