Tuesday, August 28, 2007

Links - 08/28/2007

Don't Worry, It's Safe to Power off that Server and Power It on Again (Vinay Pai): Vinay posts on one of the biggest myths in data center operations: the "Mean Time Between Failure" myth. In short, if you ran 1000 servers for 3 years, there is a 0.06% chance that any power supply would fail. From this, he notes that dual power supplies are an inefficient solution (from a green standpoint) to a decidedly minor problem. Remember this point for some of my future posts--this myth is busted, and knowing it opens you up to some very quick and simple power efficiency practices.
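
The arithmetic behind claims like this is easy to rerun for your own fleet. Below is a minimal sketch, assuming the standard exponential failure model; the MTBF figure is purely illustrative (mine, not Vinay's), so plug in your vendor's numbers:

```python
# Back-of-envelope reliability math under an exponential failure model.
# The MTBF below is illustrative only -- substitute your vendor's figure.
import math

def p_failure(hours, mtbf_hours):
    """Probability a single unit fails within `hours`, assuming a
    constant failure rate (the usual exponential/MTBF model)."""
    return 1.0 - math.exp(-hours / mtbf_hours)

MTBF = 1_000_000            # hypothetical power supply MTBF, in hours
three_years = 3 * 8766      # hours in three years (8766 = 365.25 * 24)

per_unit = p_failure(three_years, MTBF)
print(f"per-unit failure chance over 3 years: {per_unit:.2%}")
print(f"expected failures in a 1000-server fleet: {1000 * per_unit:.1f}")
```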

Lowering Barriers to Entry: Open Source and the Enterprise (The Future of Software: Stephen O’Grady): Stephen, of Red Monk fame, argues that the real value of open source software is not its price or code quality, but the ease with which it can be introduced into an enterprise. According to Stephen, open source puts the power of software acquisition into the hands of developers and architects. Would you agree? Is there an equivalent possibility for open source hardware? SaaS? Utility computing? Or will those drive the pendulum back to central management control?

Scalability != concurrency (Ted Leung on the Air): Given my past as an enterprise software development junkie, this article is particularly interesting to me. A little debate is breaking out about the shortcomings of Java in a highly concurrent hardware model, and there seem to be a few upstart languages making a name for themselves. I was not aware of Erlang, but you can bet I will spend some time reading about it now (despite its apparent shortcomings--see below). For those interested in utility computing and SLAuto, remember the goal is to deliver the functionality required by the business using the most cost effective resource set necessary to do so. Software is a big part of the problem here, as inefficient programs (or even VMs, it seems) can negate the gains from better hardware technologies. Keep an eye on this, and if you are writing software to run in a SaaS / utility computing world, consider the importance of concurrency to gaining cost effective scale.
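
The concurrency point is easier to see in code. Here is a toy sketch of the share-nothing, message-passing style Erlang is built around, written in Python purely for illustration (real Python would need processes to sidestep the GIL; Erlang expresses this model natively):

```python
# Toy share-nothing concurrency: workers own their state and interact
# only through message queues, so there are no shared locks to contend on.
import threading
import queue

def worker(inbox, outbox):
    handled = 0                      # state private to this worker
    while True:
        msg = inbox.get()
        if msg is None:              # shutdown sentinel
            break
        handled += 1
        outbox.put((msg, handled))

inbox, outbox = queue.Queue(), queue.Queue()
t = threading.Thread(target=worker, args=(inbox, outbox))
t.start()

for job in ["render", "index", "bill"]:
    inbox.put(job)                   # all interaction is via messages
inbox.put(None)
t.join()

while not outbox.empty():
    print(outbox.get())
```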

http://www.russellbeattie.com/blog/java-needs-an-overhaul (Russell Beattie): This is the article that triggered Ted's comments above. An interesting breakdown from one developer's point of view, with a brief overview of Erlang's shortcomings as well.

Thursday, August 23, 2007

Links - 08/23/2007

AWS and Web 2.0 Mapping (WeoGeo: Paul Bissett): Paul, founder of WeoGeo, a company focused on geospatial solutions, made a comment in this post that I thought needed sharing:

Mapping, particularly quantitative mapping like GIS, and AWS go together like peanut butter and jelly (I have 3 small kids who have been out of school all summer, so this was the first analogy that came to mind). The utility computing of EC2 and the large web-addressable disk storage of S3 provide opportunities for developing and sharing of mapping products that previously were cost prohibitive.

A solar-powered data center saves energy its own way (SearchDataCenter.com: Mark Fontecchio): Frankly, I think this may actually be a wave of the future. Time to buy cheap real estate in the California desert, folks. (No access to the grid required! Drill a well for water, dig a septic tank, and Internet access is your only utility need.) Given the success AISO.net is having here, I would be surprised if more small to medium-sized data centers don't pop up where the sun shines almost every day.

Hell, with the spaceports going up in New Mexico, that state's deserts might not be a bad place to place your bet either.

Wednesday, August 22, 2007

Business doesn't ask for utility computing, part 2

Bob Warfield (of the stealth-mode company SmoothSpan) called me out on the admittedly flippant argument I put out for IT ownership of infrastructure strategy and architecture. I made this argument specifically in the face of business units (and even hosting clients) who are extremely resistant to sharing "their" servers with anyone. Bob's response is insightful, probably exactly how BUs will react, and deserves a careful reply.

IT is a service organization. (Translation: You work for us, and we're more mission critical than you are. You are replaceable by VAR/SIs and by SaaS. Be careful when getting uppity with us.)

Damn straight. We are a service organization, and as such the sole justification for the money we cost our enterprise is meeting your functional and service level requirements in the most cost effective way possible.

However, your statement does not explain why you shouldn't share servers. If we demonstrate conclusively that we can better meet your service levels at a lower cost with virtualization and/or utility computing, clearly it is in the financial interest of the company for you to pursue the concept further. In the same vein, if you can prove that you can get the business functionality you need more cheaply through a SaaS solution, we should help you make that happen.

You have not always met your SLA's and delivered your projects on time and on budget. In fact, there is at least one major nightmare project on everyone's mind at any time. (Hey, it's software, what else is new, it wasn't our fault, part of it is business' fault because of how they spec'd their requirements and then failed to deliver, yada, yada. But, fair or not, IT gets the blame. IT has more glass on its house than anyone.)

We completely agree--IT has often failed to deliver (or been party to delivery failures). However, because we are focusing on infrastructure issues, let's let the SOA guys describe how they will mitigate software delivery failures.

There are two key forms of project failures for IT infrastructure:

  1. Failing to acquire, install, configure and provision hardware in a timely fashion
  2. Failing to meet agreed-upon SLAs when operating that hardware and its software payloads.

Assuming we physically receive the hardware in a timely fashion, we then must use automation to greatly reduce the cost of getting new systems up and running quickly. Whether or not systems are shared by business units, this need is there.

In fact, because we are utilizing resource pooling in a utility computing model, it will often be possible to provision your software without requiring you to wait for any associated hardware. Want to get a quick beta marketing program up and running in a matter of hours? Give us the binaries and we will find capacity for it today. We'll determine the need to add additional capacity to the system later, once we see trends in overall infrastructure demand.

As far as service levels go, responses to violations have to be automated. No more waiting for someone to answer a pager call--if a server dies in the middle of the night, SLAuto systems must quickly restart or replace the failed system. With automation involved in meeting your SLA needs on shared systems, we aim to remove the dependency on "human time" and the limits of siloed resources, which were killing us before.
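
To make that concrete, the responder is little more than a measure/analyze/respond loop. A cartoon sketch follows; every function name here is a hypothetical placeholder for real monitoring and provisioning hooks, not any particular product's API:

```python
# Cartoon SLAuto responder: measure, analyze against policy, respond.
# measure() and respond() are hypothetical stand-ins for real hooks.
import time

SLA = {"max_response_ms": 500, "min_healthy_nodes": 2}

def measure(service):
    """Placeholder: pull current metrics from your monitoring system."""
    return {"response_ms": 620, "healthy_nodes": 1}

def respond(service, metrics):
    """Placeholder: restart or provision capacity automatically."""
    print(f"provisioning replacement capacity for {service}: {metrics}")

def slauto_loop(service, interval_s=60):
    while True:
        m = measure(service)
        # Analyze: compare measurements against the SLA policy.
        if (m["response_ms"] > SLA["max_response_ms"]
                or m["healthy_nodes"] < SLA["min_healthy_nodes"]):
            respond(service, m)      # no pager, no waiting for a human
        time.sleep(interval_s)

# slauto_loop("web-tier")  # would run indefinitely, watching the service
```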

BTW, our new friend (insert name of Enterprise Software Sales Guy) has told us all about these topics, so we're knowledgeable too, and we think you ought to listen to us. (Very dangerous game for the Enterprise Sales guy, but if IT already shut him down, this is exactly how they'll play it in many cases because they have nothing to lose.)

Again, unless Mr. Enterprise Software Sales Guy was selling you something that manages infrastructure (in which case, why do you care?), what he is selling doesn't impact the decision of why or why not to share servers with other business units in an IT utility. If he is telling you it does matter, he'd better be able to demonstrate how his product will beat the ROI we are projecting from utility computing. Oh, and that is a BIG number.

We still remember those times when you put the needs and requirements of your own organization ahead of our business needs. You wouldn't choose the app we wanted because it didn't fit your "standards". The app you did choose stinks and our competitors, who use the app we wanted, are now running rings around us. (Yep, it happens. I've seen IT frequently choose an inferior or even unacceptable app because they didn't care and had the power to ram it down the business' throats. When it blew up, IT either lost credibility or the business suffered depending on how the culture worked. This happens at all kinds of companies, large and small, successful or not.)

In my earlier example, I made it clear that you have the right to push as much as you want for functionality and aesthetics. Applications are the point where both originate, and we fully support your demands that we not make your application decisions for you (but hope we can make them with you). However, architecture is a different story, and infrastructure architecture is about as far removed from functionality and aesthetics as you can get (except in relation to service levels, but we already covered that). Again, if we deliver the functionality you want at the service levels you require in the most cost efficient way reasonably possible, then you shouldn't care.

Oh, and by the way, we take full responsibility for reporting to you the SLA compliance levels and associated costs of infrastructure as we move forward, so you can determine on your own whether we are really achieving our goal. Of course, that might come in the form of a bill...

The core of Phil's comment boils down to the following:

You won't wean the business from sticking their nose into IT's business so long as these cultural factors persist. Earning the right to be a trusted deliverer carries with it the responsibility to be trustworthy.

Have you been trustworthy?

If not, even if it wasn't your fault, consider a more consensus oriented approach. After all, the speeches described above boil down to "do it because I say so". I try to avoid that with my kids where possible, and it is possible almost 100% of the time.

To that I reply "mea culpa". I was stating my case in a much less friendly tone than I would in real life to make my point. You are right that all business relationships must (ideally) be consensus driven. However, in the end the cost savings driven by a multi-tenant approach (be it utility computing, virtualization or SaaS) can't be achieved if each business unit demands owning its own server.

One last thing: it has been my experience in the last year or two that business units are much more open to sharing infrastructure within a company. As long as data security is maintained, business application owners are most concerned about the same things IT is--delivering required functionality at required service levels for as little cost as possible.

Sharing infrastructure with another company, however, is an entirely different story.

Tuesday, August 21, 2007

Links - 08/21/2007

MaaS - Money as a Service (Roman Stanek's Push-Button Thinking): The analogy of banking to software usage is a good one. As Roman says:
"[A]s nobody would keep their money at home stuffed in a mattress anymore, I don't expect users to go through the pains of installs, upgrades, re-installs and maintenance of complex software products. "
Sure enough. However, note that there are always some entities that keep their own cash handy: banks, for one, not to mention government treasuries. In the same vein, I think there will always be certain non-IT organizations that will maintain their own data centers, such as financial firms with proprietary IT that enable competitive advantage, as well as law enforcement and national defense.

Millions of Square Feet (RackLabs: Lew Moorman): This is an interesting example of how computing needs translate directly into physical space. I'm interested in knowing what RackSpace's / RackLabs' view of the business model for utility computing is, but at the very least we can see that giant compute farms are most definitely in our future.

Tech's own data centers are their green showrooms (InfoWorld: Robert Mullins, IDG News Service): This article covers the "eat your own dog food" approach that both Sun and Fujitsu are taking in terms of energy efficient computing. It is interesting to me, however, that none of the solutions described simply turn unused equipment off...

Monday, August 20, 2007

Business doesn't ask for utility computing, either...

Call for more EA collaboration (Enterprise Architecture: From Incite comes Insight...: James McGovern) and
SOA and EDA: SOA-selling battle goes on in blogosphere (SOA and EDA: Jack van Hoof): Interesting discussion regarding Jack's post, "SOA and EDA: Business doesn't ask for SOA". There seems to be a little bit of backlash to the argument that no one should have to sell SOA to the business. However, James puts it wonderfully when he presents the following observation:
Imagine finding a carpenter with thirty years of experience and having him ask you whether it is OK if he uses a nailer instead of the trusty hammer and nail. Wouldn't this feel absurd?
Absolutely. IT architecture is actually very rarely a business issue. This is as true in infrastructure as it is in software. Which is why arguments from the business that "I don't want to share my server with anyone" shouldn't hold a lick of weight with IT. If you encounter that kind of resistance in your world, just fire back the following:

"As long as I am meeting your service levels, how I deliver them is not your concern. Like the relationship between home builder and client, we are responsible for delivering the product you pay for to required building codes (meaning IT technology governance, not business "want to haves") and contractual quality specifications (SLAs).

Feel free to "drive by the property" occasionally to see our progress (and comment on aesthetic and feature completeness concerns), but trust our professional experience to design and build the required infrastructure. As a cost center, believe that it is in our interest to drive down costs, passing the savings on to you."

This argument would probably hold true for the hosting-client relationship as well...

Friday, August 17, 2007

Plumbers are plumbers, dude...

Allan Leinwand, a venture partner with Panorama Capital, founder of Vyatta, and the former CTO of Digital Island posted an interesting article about what it will take for today's telecom service providers to become major players in the Internet of the future. As Allan puts it:
If there’s one thing that service providers denounce, it’s being classified as the plumbers and pipe fitters of the Internet, destined to move bits between co-location facilities. With the software-as-a-service (SaaS) and Web 2.0 revolutions in full swing, service providers are pounding the table, insisting that they have evolved beyond the mundane task of moving bits to become “service provider 2.0” companies.
Allan goes on to demonstrate that the true advantage that these SPs have over startups is their understanding of scale, though he is less than certain that they will be able to take advantage of the opportunity.

I believe the telecom providers have never moved beyond being the plumbers--though they are innovative plumbers that have figured out all kinds of ways to charge you for every turn of a faucet. Doubt me? Just look at the Web 1.0 world. Every single Internet access provider I have used has offered me a "home page" of their making, with supposedly advanced services for accessing mail, news, search and other key features of the early Internet. And in every case, I quickly replaced their tired page with either my My Yahoo page or Google. Not a single one offered anything innovative enough to make me see them as leading edge in the Web content space.

The same will be true for SaaS (Software as a Service), FaaS (Frameworks as a Service) and PaaS (Platform as a Service). They may be great at scaling network architectures, pretty damn good at scaling computing infrastructures (making one or more Bells a player in the compute capacity space), but they haven't got a clue how to provide the art that makes Internet content compelling. I've worked with telecoms and Internet access providers in the past, and I wouldn't trust them to create an ERP package, social networking site or even an online photo album that would hold a candle to Salesforce.com, Facebook or Flickr respectively.

It all comes down to the layering that Isabel Wang points out some major players are evangelizing these days. To quote Isabel:

Amazon and Microsoft made me realize that Internet infrastructure solutions should be - will be - delivered in 4 layers:

(a) Data centers/physical servers/virtualization software

(b) Utility computing fabric comprised of large pools of servers across multiple facilities

(c) Application frameworks, such as Amazon's web services APIs

(d) Shared services, such as identity management and social networking

Damn straight. Think about the implications of the above. To expand on those definitions a little bit, if you want to cover all of the bases in the Web 3.0 world, you have to deliver:
  • servers (physical and virtual) with supporting network, storage, power, cooling, etc. systems
  • automation and management intelligence to deliver service levels in an optimal fashion (insert SLAuto here) on that infrastructure
  • some killer APIs/frameworks/GUIs to allow customers to apply the infrastructure to their needs
  • all of those core capabilities that customers will require but will not want to build/support themselves (such as the things that Isabel notes, but also there is some SLAuto here as well)

The SPs that Allan references are great at Isabel's layer (a), and have a head start on delivering (b). However, when you move to (c), all of a sudden most service providers fall down. Even the wireless guys rely on Java / Microsoft / Nokia / etc. to provide this interface on their networks. Today, there is no telecom, hosting provider or other Internet service provider that comes even close to handling (d).

Is anyone handling all four layers? Sure, the software companies that know how to scale: Google, Amazon, Microsoft, Salesforce.com, eBay, etc. These guys worked from the top down to build their businesses: they wanted to provide applications to the masses, so they had to build (d), (c) and (b) in order to keep the required (a) manageable. Some (soon most?) of these guys are working to make a buck off of the rest of us with their technology.

It took startups--quickly growing startups, mind you--to work through the pain of being dominant Web 3.0 pioneers. However, even they don't own the entire infrastructure stack needed to do truly dynamic web computing, and they are really still pretty damn primitive. (For example, while many of these vendors have internal automation to make their own lives easier, they offer their customers little or no automation for their own use.)

Telecoms will always own the networks that in turn make the rest of our online lives possible. They may also acquire companies that know how to do the software infrastructure side a bit better--identity infrastructure especially seems like a good telecom business; after all, what is a phone number other than a numeric user ID? But they will not likely be the owners of the social networks of the future. They probably will never be the dominant capacity providers in the utility computing world. However, owning the network is a powerful position to have.

Network neutrality, anyone?

Update: You should read the comments on Allan's article as well. Lots of very smart people saying much the same as my long post, but in far fewer words. :)

Thursday, August 16, 2007

Links - 08/16/2007

Convergence of Virtualization and Green Data Center Trends Could Be Perfect Timing for Microsoft (ITBusinessEdge: Kachina Dunn): Kachina notes that Microsoft may gain the most from the way the virtualization market is setting up. Combine that with the comments that Chris Kanaracus made about Microsoft's "compute cloud" strategy, and you begin to get the feeling that Mr. Softy may out-engineer its rivals in utility computing technology. I hope not, because (like VMWare) they are still obsessed with locking people into their platform. We need that utility computing portability standard!

Ozzie Reveals More Details of Cloud Development Platform (RedmondDeveloper: Chris Kanaracus): Another good breakdown of Microsoft's "PaaS" (Platform as a Service) play. Ray's comments at the end of the article give me the feeling that Google, Amazon, Salesforce.com and others are not free and clear of MSFT's influence yet.
"We believe we are the only company with the platform DNA that's necessary to viably deliver this highly leveragable platform approach to services. And we're certainly one of the few companies that has the financial capacity to capitalize on this sea change, this services transformation."

Grid computing: Term may fade, but features will live on (ComputerWorld: Barbara DePompa): Barbara discusses the view of many that the term "grid computing" may go extinct in the face of virtualization and utility computing. My own opinion is that "grid" has actually had its definition narrowed back to its roots: grid computing platforms provide resource allocation to job-based computing processes, like batch data processing, image rendering and HPC. Utility computing is the term that applies to all "computing on demand" applications, including grid applications.

Automation and going green in the data center (NetworkWorld: NewDataCenter): Someone at NetworkWorld saw the light at Next Generation Data Center in San Francisco earlier this month. I just wanted to welcome them aboard, and invite them to explore the importance of SLAuto to both automation and green practices.

Wednesday, August 15, 2007

Links - 08/15/2007

VMware, Xen and the hardware business (RoughType: Nicholas Carr): Nicholas reports on the big buzz in utility computing and virtualization circles today: the purchase of XenSource by Citrix, coming quickly on the heels of the "irrational exuberance" around the VMWare IPO, and the effect each will have on the bare metal market. He argues that true utility computing will drive consolidation not within the enterprise, but across enterprises and even across industries. My comment in response was:

I agree wholeheartedly that virtualization is going to permanently change the application-to-server ratio in most companies. The days when you bought an email application and a dedicated piece of hardware to run it are fading fast. Furthermore, utility computing (especially SaaS) will probably replace much of the need to even deploy virtual machines within an enterprise.

(I have noted that virtualization != utility computing, at least as far as service level automation is concerned.)

However, I would offer that there are certain organizations with applications that may never be willing to have infrastructure exposed to other enterprises, even if they will benefit from the use of utility computing practices. For example, even if the security hurdles can be overcome, is there any way that *politically* the defense and intelligence communities would ever be allowed to host their data / applications in a third party "utility"? Will banks be willing to put customer account applications into EC2/S3--on infrastructure shared with everyone from other banks to the hacker community? I don't think so.

Utility computing will not be a "one size fits all" world, as you well understand. There will be a combination of SaaS vendors, HaaS vendors, boutique capacity and/or service providers and--yes--private data centers (with many utility computing capabilities) that will complete the vision. All, however, will allow customers to optimize the cost of computing within each of those domains.

None of that derails your thesis, however; the market for individual servers will most certainly be "consolidated", thanks largely to the successes of VMWare and XenSource/Citrix.

Open Source System Management Suites: A Viable Alternative? (SearchDataCenter.com: Audrey Rasmussen): If you are wondering whether there are any open source technologies that could be incorporated into a SLAuto solution, this class of open source management tools may be a good place to start. No, there are no policy engines integrated into these tools, so the "analyze" capabilities are severely limited, but the "measure" and "respond" functions are fairly well represented. A complete solution, however, appears to be available only commercially right now.
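
For a feel of what that missing "analyze" layer amounts to, here is a sketch of a tiny policy engine that could sit between an open source monitor (measure) and its action hooks (respond). Metric names, thresholds and actions are all hypothetical:

```python
# Tiny "analyze" layer: declarative policies evaluated against whatever
# measurements the monitoring tool collected. All names are hypothetical.
POLICIES = [
    (lambda m: m["cpu_pct"] > 90,        "add_capacity"),
    (lambda m: m["free_disk_gb"] < 10,   "expand_storage"),
    (lambda m: not m["process_alive"],   "restart_service"),
]

def analyze(measurements):
    """Return the respond-actions whose policy predicates fire."""
    return [action for check, action in POLICIES if check(measurements)]

# Example: a measurement snapshot fed in from the monitoring suite.
print(analyze({"cpu_pct": 95, "free_disk_gb": 40, "process_alive": True}))
# -> ['add_capacity']
```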

Scaling on Amazon EC2 with RightScale (SynthaSite): Similarly, there are some interesting management tools for EC2 appearing on the market. How long until someone offers SLAuto for Amazon's web services, I wonder.
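
Part of what makes SLAuto on EC2 plausible is that the "respond" step is already just an API call. A sketch using the boto3 client (which long postdates this post; the AMI ID is a placeholder):

```python
# Sketch: capacity on EC2 is an API call away, which is exactly what an
# SLAuto respond step needs. Uses boto3; the AMI ID is a placeholder.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

def add_capacity(ami_id, count=1):
    """Launch `count` more instances when service levels demand it."""
    resp = ec2.run_instances(
        ImageId=ami_id,           # placeholder image ID
        InstanceType="m1.small",  # the 2007-era instance type
        MinCount=count,
        MaxCount=count,
    )
    return [i["InstanceId"] for i in resp["Instances"]]

def shed_capacity(instance_ids):
    """Terminate instances when demand falls back off."""
    ec2.terminate_instances(InstanceIds=instance_ids)
```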

Tuesday, August 14, 2007

Links - 08/14/2007

What I Learned at NGDC: Technology is Ready, Users are Not (GridToday: Derrick Harris): I worked the Next Generation Data Center show (which was piggybacked on LinuxWorld this year), and have also spent the last 18 months trying to convince system administrators to adopt NGDC practices. What Derrick says here generally rings true, though he is perhaps more skeptical than I would be at this point. Of course, if we could make it easier to get started in the NGDC world...stay tuned.

In search of the green data center (statesman.com: Brian Bergstein [AP]): If there is one statement that clearly defines the resistance of most system administrators to simple energy savings practices, such as--oh, I don't know--turning off unused servers, it is the following:
"There are probably two key metrics for the IT guy: 'no downtime' — if the boss's e-mail doesn't work, he hears about it right away — and 'no security breaches on my watch,' " says Eric Birch, executive vice president of Degree Controls Inc., which sells a system that increases electronics cooling efficiency. "They normally do not know, don't care and aren't measured by their electric bill."
Man, how true, how true. Guess what, folks: in environments like dev/test labs, where "no downtime" is not the same as "no loss of productivity", it's time to change our view. SLAuto can be used to automate the power state of servers based on a variety of policies, including schedules, utility events and even--oh, I don't know--extended disuse.
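
A disuse policy really is that simple. A toy sketch, where the idle threshold is hypothetical and power_off() stands in for an IPMI or PDU control call:

```python
# Toy disuse policy for a dev/test lab: power off long-idle machines.
# The threshold is hypothetical; power_off() stands in for IPMI/PDU control.
import datetime

IDLE_LIMIT = datetime.timedelta(days=7)

def power_off(host):
    print(f"powering down {host} (idle beyond {IDLE_LIMIT.days} days)")

def enforce_disuse_policy(last_activity):
    now = datetime.datetime.now()
    for host, last_seen in last_activity.items():
        if now - last_seen > IDLE_LIMIT:
            power_off(host)

enforce_disuse_policy({
    "lab-db-3":  datetime.datetime(2007, 7, 1),  # long idle -> powered off
    "lab-web-1": datetime.datetime.now(),        # active -> left alone
})
```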

This article also mentions a variety of good technologies to look at if you are building a power-efficient data center.

SaaS invades enterprise software markets (ZDNet: Phil Wainewright): One of the reasons SaaS vendors are making some headway (though not on all fronts, according to the article) is the fact that ERP apps require lots of expensive infrastructure and operational support. In fact, these inefficiently coded processing hogs make up a huge part of many IT department operations budgets, and probably register a significant impact on the Facilities budget as well. SaaS is one form of utility computing that mitigates those costs for businesses that don't want to be in the IT operations business. Stating the obvious for many, I am sure, but one question that comes out of the practice is: how will these SaaS users protect themselves from failing infrastructure that they don't own?


EPA report gives data centers little guidance (SearchCIO.com): The early reviews are in, and this report seems far from a home run. However, it is a start, and even the debate over its effectiveness can only make us more aware of power consumption issues. What is really interesting is the view of several interviewees that software is a big part of the problem:

"Boergermann said the software industry should share in the responsibility in reducing power consumption. For example, many business applications require huge amounts of server processor capacity to run simple tasks, which cause servers to consume more energy. Boergermann said he has one application that manages his bank's property appraisals. He said just scrolling through a window causes the application to use 100% of its server's capacity. He said an application shouldn't hog so much energy for such a simple task...

...Gartner's Kumar, who said he generally admires the decades-long energy efficiency efforts by leading hardware vendors, especially HP and IBM, also wondered why the Microsofts, Oracles and SAPs of the computing industry have gotten off easily."

Monday, August 13, 2007

Data X.0, and why you need SLAuto now

Data 2.0: How the Web disrupts our relational database world (GigaOM: The Future of Software): I'm a little annoyed that there is a claim here that distributed data is only rev 2 of database technology, but otherwise this is an important trend to keep tabs on if you are interested in enterprise software architectures. The following statement from the article says it best:
Relational databases are to software what mainframes are to networked hardware: the monolithic beast at the core that needs magic incantations from high priests to run, and consumes unsuspecting junior engineers for breakfast.
The truth is we have depended on giant centralized relational stores for a very long time now, but we've had the luxury of building databases with a basically "one-owner" model for the data they contain. However, as the web (and, I guess, social networking software) blows away that concept, and the data that drives our applications becomes necessarily distributed both intra- and inter-organization, we are being forced to expand the technology around data management.

What does this have to do with SLAuto? Well, how are you going to handle the increased management complexity that all of this brings? How are you going to monitor the health and performance of dozens or hundreds (or thousands!) of distinct data sources that make up your key applications and services? Worse yet, how are you going to scale the infrastructure to handle varied--and likely unpredictable--demand?

I would argue that you need some level of SLAuto in place before creating or depending on a distributed data infrastructure. I admit this is something I haven't put a lot of thought into, but its sheer complexity calls out for intelligent management of scale and failure recovery. If you start exploding your data infrastructure without some level of automation around it, you are doomed to doing the same tasks manually. With that many more "atoms" to manage, the effect on productivity may well be overwhelming.
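
Even the "measure" half of that job changes at this scale: a serial sweep of hundreds of data sources never finishes. A sketch of the parallel health check such a system needs, with check_health() as a placeholder for a real probe (test query latency, replication lag, and so on):

```python
# Health-checking hundreds of data sources has to happen in parallel.
# check_health() is a placeholder for a real probe against each source.
from concurrent.futures import ThreadPoolExecutor

def check_health(source):
    return (source, "ok")   # imagine a timed test query here

sources = [f"shard-{i:04d}" for i in range(500)]

with ThreadPoolExecutor(max_workers=50) as pool:
    results = dict(pool.map(check_health, sources))

unhealthy = [s for s, status in results.items() if status != "ok"]
print(f"{len(unhealthy)} of {len(sources)} sources need attention")
```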

I guarantee that Yahoo, Google and Amazon have the management and monitoring in place to keep these distributed data "ecosystems" running smoothly. In fact, the article even notes that Yahoo is using "Hadoop" for "the massive data mining of Webserver logs." I'm not sure, but I bet there are some management tasks that depend on this data mining.

Don't wait for the complexity to overwhelm you--implement Service Level Automation.

Saturday, August 11, 2007

Links - 08/11/2007

Man vs machine, or, from SLA to SLAuto (Isabel Wang): Isabel was kind enough to provide her comments on SLAuto, and--no surprise--she gets it. In fact, her analogy at the bottom of this post is a wonderful one, and I hope she's OK if I use it (with appropriate credit, of course :):

"You don't need no SLAuto, you say, because you've got great customer service reps and data center techs? Well... 10 years ago I used to know people who prided themselves on their ability to dish out web space manually. They could charge credit cards and create customer folders faster than anyone else! Then competitors started using auto-provisioning tools and they went out of business. History will repeat itself."

Is the Tap Dry? (CXOtoday, India: Tahirih Gaur): Tahirih describes India's biggest apparent obstacles to utility computing: storage and inefficient management of outsourced IT. (Does anyone else see an irony in that? James McGovern?) She notes that many companies (banks, for instance) have a problem with storing sensitive data on disks shared with competitors. She also quotes a Gartner statistic that 80% of all outsourcing deals are renegotiated within 3 years. I've posted on this before, and Nicholas Carr is writing extensively about it, but make no mistake: the move to utility computing is even more of a cultural shift than a technical one. My employer is betting that a large number of organizations will not be comfortable outsourcing their utility computing entirely, and will instead want to create a utility within their own infrastructure.

Utility computing's elusive definition (CNet news.com: David Becker): In searching for more coverage on utility computing, I came across this 2003 article covering a panel discussion on the topic at that year's Comdex. My first reaction in reading it is that the more things change, the more they stay the same: every issue presented here remains true today (give or take the marketing names of the vendor products).

I also love the proposition by Tony Siress (then Senior Director of Advanced Services for Sun) of transportation as a better analogy of utility computing than electricity:

Siress maintained that transportation is a better analogy, considering how people employ a combination of owned, leased and rented cars along with taxis to meet their changing transit needs. "Taxi cabs are a good example of a fully outsourced piece of infrastructure, and they're the right approach in some situations," he said. "The trick is understanding the mix of approaches that delivers the highest value and the least risk to you."

This actually highlights something that I have trouble remembering sometimes: utility computing isn't a one-size-fits-all approach. Not every application is appropriate for managed hosting, nor does every one require a private IT utility. Some "trips" (analogous to either transactions or functions?) require multiple "modes of transportation": a little SaaS, a little hosted virtual server capacity, even a few bare metal servers in a closet thrown in for good measure. The challenge for SLAuto is to provide policy across all of these, or at least provide the building blocks to do so.

Functionality (or "service flow") is the electricity in utility computing; hardware and software are just the generators and transformers. The network is the power line, and SLAuto is the demand management system.

Thursday, August 09, 2007

Links - 08/09/2007

Technology companies tout greener credentials, but significant improvements are well off (Associated Press via Technology Review): This article is a layman's explanation about the energy consumption imposed by data centers, and the reasons behind that consumption. There are some interesting statistics here (most of which are recycled--pardon the pun), but the article is light on possible solutions. Remember, the entire purpose of SLAuto is to deliver the required service levels for your business using the most cost (and energy?) effective resources necessary to do so.

"Why is Amazon Web Services partnering with NaviSite?" (Isabel Wang): An overview of several interesting trends around utility computing in the managed hosting market. My comments:
  1. NaviSite is providing an interesting service here, and one I think will need to evolve to an automation model eventually. Today it looks like your same old monitoring-only "management" environment, but with a few interesting hooks--who knows?
  2. I love the PlanetWide Media story, if only because it is one of the first examples of a "Web 2.0" infrastructure mashup that I've seen. Why not Web 3.0? Because I would bet right now that PlanetWide Media is economically locked in to LayeredTech for the foreseeable future (i.e. the cost of moving their software would negate the benefit of moving it).
  3. Hmmm. Perhaps the future goes so far as to divide mindshare into recombinable building blocks. (OK, sorry, the BS meter pegged on that one...)
  4. Isabel wraps up with a comment about the long tail of computing itself, which I believe is the real next revolution in IT that will drive new businesses and perhaps even industries. I believe Nicholas Carr agrees, but we shall have to wait and see.

Green data center scuttlebutt from NGDC conference (Server Specs: SearchDataCenter.com): The interesting part of this video to me is the description of the GDC panel discussion as "sniping". I agree, and that's why I left early. Nothing interesting came out of the discussion other than the fact that each major vendor is still months or even years away from actually reducing the energy burden of the data center market (reinforcing the AP article above). I say watch for solutions that may be a little more short term.

Wednesday, August 08, 2007

"Web 3.0" and Infrastructure

In "What is Web 3.0?" Nicholas Carr breaks down various early definitions of Web 3.0 for the reader. In the end, he offers the following:
Web 3.0 involves the disintegration of digital data and software into modular components that, through the use of simple tools, can be reintegrated into new applications or functions on the fly by either machines or people.
Great definition, but again it leaves the importance of infrastructure out of the equation. (Wait, wasn't Carr the one who pointed out that it all starts with infrastructure? What happened, Nicholas?) I would modify his statement to read
Web 3.0 involves the disintegration of digital data, software and infrastructure into modular components that, through the use of simple tools, can be reintegrated into new applications or functions on the fly by either machines or people.
To get a sense of how this technology would affect infrastructure, picture a world in which every aspect of the infrastructure stack--from application server to operating system to bare metal server to network fabric to shared storage--can be assembled as necessary to meet the service level needs of an application (or even an application function). Need a J2EE service to run at four nines of uptime? Choose from a smorgasbord of app server vendors running on a selection of Hardware as a Service vendors, with access to any number of supporting services from a variety of Software as a Service vendors--or let a service level automation tool (whether an appliance, a software product or a SaaS offering) do it for you. Ideally, let the SLAuto determine the most cost-effective way to deliver your service at the service levels you require.
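
Strip away the vendor catalogs and that selection step reduces to a constrained cheapest-fit. A toy sketch; the vendors, uptime figures and prices are invented for illustration:

```python
# Toy SLAuto selection: cheapest offering that still meets the SLA.
# Vendors, uptimes and prices below are invented for illustration.
OFFERS = [
    {"vendor": "HaaS-A", "uptime": 0.999,  "usd_per_hour": 0.08},
    {"vendor": "HaaS-B", "uptime": 0.9999, "usd_per_hour": 0.22},
    {"vendor": "HaaS-C", "uptime": 0.9999, "usd_per_hour": 0.17},
]

def cheapest_meeting_sla(required_uptime):
    viable = [o for o in OFFERS if o["uptime"] >= required_uptime]
    return min(viable, key=lambda o: o["usd_per_hour"]) if viable else None

print(cheapest_meeting_sla(0.9999))  # four nines -> HaaS-C at $0.17/hr
```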

To be fair, this is a ways down the road, but then so is anything Web 3.0.

Links - 08/08/2007

As you can see in the upper right hand column of my blogspot page, I have added myself to the findtechblogs.com realm. Seems like a very cool service, and it's already given Ken Oestreich some visibility. (See below.)

The CMDB - An anemic answer for a deeper crisis (Fountnhead: Ken Oestreich): Ken is, of course, a colleague, so I may be a little biased, but I love what he is saying here. Basically, if you are keeping a CMDB, you are keeping manual records of the state of your data center. If you automate with a system that tracks its actions, then you have an automated way to keep those records. I think he glosses over the legal requirements for a CMDB a bit, but other than that, you need to read this. I learned of this post via Google Alerts and findtechblogs.com.

Nirvanix To Challenge Amazon S3 (TechCrunch): A San Diego startup is daring to challenge Amazon's S3 dominance of the nascent Storage-as-a-Service market. Judging from the comments, these guys have their work cut out for them.

Green Grid lays out 2007 roadmap (Between the Lines: ZDNet): This was the only interesting piece of news to come out of the Green Data Center panel at LinuxWorld, IMHO. When combined with the EPA study, it looks like a trickle of real science being injected into the "green" hype. It's still all talk at this point, but I expect to see some useful tools and guidance from both sources. I hope that optimizing to service levels is one of the key criteria, though.

Tuesday, August 07, 2007

Links - 08/07/2007

Spent the morning at home waiting to resolve jury duty (I'm free! I'm free!), and the afternoon at LinuxWorld. More on that later.

Scratching itches in the cloud (O'Reilly Radar): O'Reilly and Sriram Krishna describe three key problems being introduced by "the cloud": the inherent difficulty of working with the software itself (whether it's open or closed source), the potential for "data lock-in", and the barrier to entry of building a competitive site. Again, I think this argues for standards around utility computing portability--not just for data, as O'Reilly suggests here, but also for server images and application images. See the earlier discussion documented here and here.

Web Services war is over: Time to REST (The Future of Software): Puh-lease... Those of us with real experience in developing highly scalable distributed applications have always known that WS-* was more vendor opportunity than great architecture, but I'll believe REST has won when I see Google, PG&E and others convert their web services from SOAP-RPC/JAX-WS/whatever. Amazon is a force to be reckoned with, but there is a lot of war left to be fought. It reminds me of the COM+ (now .NET) vs. J2EE battles fought in the late nineties--any winner there yet? The good news is that none of this matters to SLAuto users, assuming their service level monitors at the service component level understand both REST and WS-*.

Service Must Be Job #1 In The Data Center (InformationWeek): Gee, does that mean that it all comes down to meeting service expectations...as defined in service level agreements...which can, in forward thinking organizations, lead to policies that can automate resource allocation...which, in turn, leads to reliable measurement of resource consumption? Believe me when I tell you that this is exactly why I am passionate about this subject. Best quote in the article:

As eBay's Smart put it: "The next generation data center has to be about changing the nature of the data center and its relationship to the business. Understanding these relationships allows you to go to your boss and say that you know the cost of delivering the value of this service."

Amen.

Monday, August 06, 2007

Links - 08/06/2007

In the interest of increasing my post frequency while not increasing the workload it imposes, I thought I'd take a hint from some of the more popular bloggers in my blogspace. Starting today, I will try to show everyone what I am reading online, and why I think it has importance (if any) to Service Level Automation and utility computing.

This will not replace my longer posts covering key topics in SLAuto or utility computing. I simply intend it to replace my tendency to put an interesting post or article aside saying "I'll blog on that some day", never to return.

Here goes day 1:

Sometimes 69 million > 143 million (Isabel Wang): I've talked before about the great consolidation of computing capacity that is coming our way. (Not a complete consolidation, mind you--there will always be private data centers for highly secure applications, and I believe there will be dozens of "boutique" capacity providers.) Isabel is covering Amazon's new payment service here, but her last paragraph on the winning providers supplying framework and application layers in addition to pure hardware capacity is right on the money.

EPA sends final report on data center energy efficiency to Congress (SearchDataCenter.com): It should be obvious why this is important to SLAuto/utility computing/whatever. The chart from the executive summary says it all: we are on the hook as an industry to improve our practices as they relate to both utilization and power management. Have you thought about what that will take in your data center?

Virtualization users say, 'Better management tools, please' (SearchServerVirtualization.com): Another survey reinforcing what we've been saying all along: if you jump on the virtualization bandwagon, that's great, but be prepared for a management nightmare. While I would agree that improving the "virtualization awareness" of traditional management tools might help, I would argue that you still have too much volume to handle without automation. In this context, when you see "management", I recommend that you think automation.

Microsoft Building the Ultimate Spyware System (Jek Hui): Of the many posts covering Ray Ozzie's overview of Microsoft's utility computing play, I chose this one a) for the headline, and b) for the generally succinct coverage of Ray's comments. Believe me, if Microsoft can really overcome the "innovator's dilemma" and make this work, they will knock two or three of the utility computing competitors out of the race. So far, it's Google vs. Amazon (see above), but I like Microsoft's chances here. Will the service be sensitive to the needs and privacy of its users? I know where I'd put my money...