I remember the seventies, when gas prices skyrocketed (the first time) and there were suddenly all these tiny cars on the road. One member of my mom's congregation even showed up one Sunday with this crazy little car that ran on a motorcycle engine. It was made by some new car company called Honda, and it was one of the first years that Civics were sold in America.
As a nation, we clamored to change our lifestyles--ditching heavy steel muscle cars for sporty (or utilitarian) little "economy cars". Our approach to solving the energy crisis was to increase the efficiency with which our cars consumed energy. Note, however, that it was not (by a long shot) to reduce the amount of driving we did.
Now flash forward to today, and take a look at the current energy crisis in America's (and the world's) data centers. Electricity is expensive, and growing more so (except for those lucky enough to have subsidised power). Add to that concerns about global climate change, and you've got company after company scrambling to be "green".
Again, however, note that the target is not to do less computing than we did before. In fact, if anything, the demand is increasing for information technology and business automation. I believe pushing the automation envelope is going to take more computing power than we know.
So, like the automobile vendors of the seventies, today's systems vendors are working hard to release "energy efficient" models of servers, laptops and desktops. They do this ostensibly to give us all a good feeling about what good stewards of our tiny planet we are, but in reality it's all about saving money. None of this changes our worst behaviour, however: our tendency to leave as much capacity running as possible at all times, "just in case".
Of course, the server that uses the least amount of power is the one that is turned off. That's where Service Level Automation comes into the picture. As I've noted in the past, one of the key aspects of a good Service Level Automation platform is the capability to shut down anything that isn't serving an immediate business need. Traditionally, I've talked about this in relation to scale-out applications--your SLA platform should shut down the servers such applications don't need to meet current demand. Now, however, I want to talk about three use cases where SLA can reduce the day-to-day power consumption of every application in the data center (a rough sketch of such a policy follows the list).
- Job-specific management. OK, think of every server you've touched in the last six months. How many of those served a short-term purpose (e.g. getting a software release out the door), but then sat unused for days at a time? I remember going days or even weeks between placing builds on staging servers in my previous life. Service Level Automation should be able to detect unused software payloads and shut down that equipment until it is needed again by that or any other payload.
- Time-specific management. Almost every data center (especially development and test labs) has systems that are hit hard during some portion of the day, then remain idle for the remainder. SLA should provide the capability not only to schedule system shutdowns, but to actually look at the status of systems to determine which are the best candidates for shutdown. In other words, go beyond automating "blind" scheduled events to delivering intelligent management of system power cycles.
- Power emergency management. One of the great benefits of living in the San Francisco Bay Area is the incredible ingenuity of our power utility in encouraging companies to conserve power and "be good neighbors" in a power emergency. PG&E offers rebates to companies willing to join Demand Response programs, in which they agree to voluntarily reduce electric consumption to help the utility avoid the infamous "rolling blackout".
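To make those three cases concrete, here is a minimal sketch of what such a shutdown policy might look like. It is purely illustrative: the Server fields, the demand_response_active() hook, the power_down() call, and the thresholds are all hypothetical stand-ins for whatever a real SLA platform actually exposes.

```python
from dataclasses import dataclass

# Hypothetical view of a managed server; a real SLA platform would
# populate this from its own inventory and monitoring data.
@dataclass
class Server:
    name: str
    cpu_util_pct: float           # recent average CPU utilization
    idle_hours: float             # hours since the payload last did useful work
    in_shutdown_window: bool      # true during an agreed low-demand window

IDLE_HOURS_THRESHOLD = 24         # illustrative: a full day of no useful work
CPU_IDLE_THRESHOLD = 5.0          # illustrative: "effectively idle"

def demand_response_active() -> bool:
    """Hypothetical hook: would query the utility's Demand Response signal."""
    return False

def should_power_down(server: Server) -> bool:
    # Job-specific: the payload has sat unused past the threshold.
    if server.idle_hours >= IDLE_HOURS_THRESHOLD:
        return True
    # Time-specific: a scheduled window, AND the box is actually idle.
    if server.in_shutdown_window and server.cpu_util_pct < CPU_IDLE_THRESHOLD:
        return True
    # Power emergency: shed anything that isn't busy right now.
    if demand_response_active() and server.cpu_util_pct < CPU_IDLE_THRESHOLD:
        return True
    return False

def power_down(server: Server) -> None:
    """Hypothetical hook: the SLA platform would park the payload and
    power the hardware off (e.g. via its power controller)."""
    print(f"Powering down {server.name}")

def policy_loop(servers: list[Server]) -> None:
    for server in servers:
        if should_power_down(server):
            power_down(server)
```

The point is less the specific checks than the shape of the loop: the platform continuously asks whether a payload is doing useful work and powers hardware off when it isn't, rather than waiting for a human to remember.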
The Silicon Valley Leadership Group has recently been hosting a series of events around "Energy Efficient Data Centers", one of which targeted how SLA could deliver on all three of the above. The response was tremendous--so much so that my employer has asked me to join a team building a simple targeted solution to these problems based on our already innovative SLA platform. I can't say much more right now, but I certainly will communicate all that I can as soon as I can.
By the way, the first lesson I've learned from all of this is that power measuring capability varies widely from data center to data center. Some companies can't tell you anything more than their monthly bill; others can show you power consumption over time at the individual server level. Part of the issue is that there are no "simple" power metering solutions at the server level...power controllers (e.g. iLO2) are just now starting to give management systems access to the power measurement tools on Intel and AMD boards. MPDUs have some good features, but they vary widely from vendor to vendor.
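To give a flavor of what I mean by "access to the power measurement tools", here's a rough sketch of the kind of per-server polling I'd like to be able to do everywhere. It assumes the management controller answers the IPMI DCMI power reading command and that ipmitool is installed on the polling host; the hostname and credentials are placeholders, and plenty of existing controllers won't answer this at all, which is exactly the problem.

```python
import re
import subprocess

def read_power_watts(host: str, user: str, password: str) -> float | None:
    """Poll a BMC for its instantaneous power draw via IPMI/DCMI.

    Assumes the BMC supports the DCMI "power reading" command; many
    controllers do not, and the output format can vary by vendor.
    """
    cmd = [
        "ipmitool", "-I", "lanplus",
        "-H", host, "-U", user, "-P", password,
        "dcmi", "power", "reading",
    ]
    try:
        output = subprocess.run(
            cmd, capture_output=True, text=True, check=True
        ).stdout
    except (subprocess.CalledProcessError, FileNotFoundError):
        return None  # no DCMI support, bad credentials, or no ipmitool

    match = re.search(r"Instantaneous power reading:\s+(\d+)\s+Watts", output)
    return float(match.group(1)) if match else None

if __name__ == "__main__":
    watts = read_power_watts("bmc-hostname", "admin", "password")
    print(f"Current draw: {watts} W" if watts is not None
          else "No reading available")
```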
You can't control what you can't measure, so get on board, system vendors! Give us the tools we need to measure and manage those beautifully efficient next-generation servers. Heck, give us the tools we need to measure and manage all those older systems we have out there now. That would be far greener than just squeezing another milliamp out of a MIP.