Thursday, January 25, 2007

Greasing the skids...Simplifying Datacenter Migration

Here are a couple of fun buzzwords that have created all kinds of interesting headaches in IT of late: "rationalization" and "consolidation". I'm not talking about servers here...I've covered that somewhat earlier. Instead, I'm talking about datacenter rationalization and consolidation.

This is a huge trend amongst Fortune 500 companies. In my work, I keep hearing VPs of Operations/Infrastructure and the like saying things like "we are consolidating from [some large number of] datacenters to [some small number, usually 2 or 3] datacenters." In the course of these migrations, they are rationalizing the need for each application that they must migrate from one datacenter to another.

The cost of these migrations can be staggering. "Fork-lifting" servers from one site to another incurs costs in packaging, shipping and replacing damaged goods (hardware in this case). Copying an installation from one datacenter to another involves the same issues: packaging (how to capture the application at the source site and unpack it at the destination site), shipping (costs around bandwidth use or physical shipping to move the application package between sites) and repair of damaged goods (fixing apps that "break" in the new infrastructure).

What if something could "grease the skids" of these moves--reduce the cost and pain of migrating code from one datacenter to another?

One approach is to package your software payloads as images that are portable across hardware, network, and storage implementations. Now the cost of packaging the application is taken care of, the cost of shipping the package stays the same or gets cheaper, and the odds of the software failing to run are greatly reduced, because it is already prepared for the differing conditions of a new set of infrastructure.
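To make that concrete, here is a minimal sketch in Python of what such packaging could look like. The manifest format, site description, and field names are invented for illustration, not any particular product's API: the idea is simply that the application bits travel with a manifest of their infrastructure bindings, and those bindings are re-resolved against the destination site rather than baked into the image.

```python
import io
import json
import tarfile

# Illustrative only: the manifest fields and site layout below are made up.

def package_payload(app_dir: str, manifest: dict, out_path: str) -> None:
    """Bundle the application bits with a manifest that records its
    infrastructure bindings as parameters rather than hard-coded values."""
    with tarfile.open(out_path, "w:gz") as bundle:
        bundle.add(app_dir, arcname="payload")
        data = json.dumps(manifest, indent=2).encode()
        info = tarfile.TarInfo("manifest.json")
        info.size = len(data)
        bundle.addfile(info, io.BytesIO(data))

def rebind(manifest: dict, site: dict) -> dict:
    """At the destination datacenter, resolve each binding against the new
    site's network and storage facts before the image boots for the first time."""
    return {
        "hostname": site["hostname_prefix"] + manifest["app_name"],
        "vlan": site["vlan_for"][manifest["network_tier"]],
        "storage_mount": site["nas_export_for"][manifest["storage_class"]],
    }

# The same manifest, re-bound against a hypothetical destination site.
manifest = {"app_name": "billing", "network_tier": "internal", "storage_class": "bulk"}
destination = {
    "hostname_prefix": "dc2-",
    "vlan_for": {"internal": 210, "dmz": 220},
    "nas_export_for": {"bulk": "nas2:/exports/bulk"},
}
print(rebind(manifest, destination))
```

The point of the sketch is the separation of concerns: the payload never learns the source site's addresses, so "unpacking" at the destination is a lookup, not a repair job.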

Admittedly, the solution here is more related to decoupling software from hardware than to Service Level Automation per se. But a good Service Level Automation environment will act as an enabler for this kind of imaging, as it too has to solve the problem of creating generic "golden" images that can boot on a variety of hardware using a variety of network and storage configurations. In fact, I have run into several customers in the last couple of months who have a) recognized this advantage and b) rushed to get a POC going to prove it out.

Of course, if you can easily move software images between datacenters, simpler disaster recovery can't be far behind...

Monday, January 22, 2007

NAS overtaking SAN for automated server virtualization

Storage leader EMC, which also happens to own the VMware franchise, has made an interesting observation: NAS is overtaking SAN as the storage platform of choice for virtual server environments. This seems especially true for automated VI3 environments, where namespace and connectivity issues make NAS much simpler than SAN to configure and access in a highly dynamic environment.

(As an aside, I also love the quote in the article where EMC Corp. vice president of technology alliances Chuck Hollis points out that "To be honest, we're not seeing a whole lot of high performance stuff being put on VMware." Don't be fooled: most large datacenters will always have applications that cannot be virtualized without a penalty.)

I would have to say that EMC's observation aligns with my own, as it has been clear for some time that NAS offers some advantages over SAN for application storage in Cassatt environments. It boils down to accessibility: SAN requires special interface cards, and very few (if any) of today's SAN switches are remotely configurable by an automation environment. There are cool vendors out there (see 3PAR and DataCore, for example) with tools that increase the dynamic nature of SANs, but NAS tends to rule here.

The article also notes some reasons why performance is overrated in the SAN vs. NAS comparison. Low-end (e.g. workgroup-class) NAS devices may suffer from limitations imposed by network bandwidth, but TOE NICs and multi-NIC high-end NAS configurations are "widening the highway", allowing NAS performance to catch up to, and even surpass, SAN. Cost/performance numbers are still something to consider, but I expect that the only apps still using Fibre Channel SAN in five years will be extremely high-I/O applications, such as OLTP apps.

Let me give you a quick reason why all of this is important: multitenancy. The Software as a Service (SaaS) and managed hosting provider spaces have embraced the concept of one infrastructure supporting a large number of unique, individual clients. However, to achieve this, one needs to be able to "virtually" isolate each client from the others for both security and data integrity reasons.

To achieve this isolation, it is necessary to uniquely assign each customer two things: network access and (you guessed it) storage. Managed hosting providers and SaaS vendors are looking for tools that will allow them to dynamically assign a server (and thus, its hosted software) to specific VLANs and LUNs/namespaces. This will be a key focus for automation vendors in the next 2 years or so.
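As a purely illustrative sketch of that bookkeeping (the VLAN pool, export paths, and class names below are invented, not any vendor's API), the assignment logic looks roughly like this:

```python
# Illustrative only: assigns each tenant a dedicated VLAN and NAS namespace
# from simple pools. A real provider would drive switch and filer APIs here.
class TenantIsolation:
    def __init__(self, vlan_pool, nas_root="filer1:/exports"):
        self.free_vlans = list(vlan_pool)
        self.nas_root = nas_root
        self.assignments = {}

    def onboard(self, tenant: str) -> dict:
        """Give a new tenant its own VLAN and storage namespace."""
        if tenant in self.assignments:
            return self.assignments[tenant]
        if not self.free_vlans:
            raise RuntimeError("VLAN pool exhausted")
        binding = {
            "vlan": self.free_vlans.pop(0),
            "storage": f"{self.nas_root}/{tenant}",
        }
        self.assignments[tenant] = binding
        return binding

    def assign_server(self, tenant: str, server: str) -> dict:
        """A server inherits its tenant's isolation when it is repurposed."""
        binding = self.onboard(tenant)
        return {"server": server, **binding}

pool = TenantIsolation(vlan_pool=range(100, 110))
print(pool.assign_server("acme", "blade-07"))
print(pool.assign_server("globex", "blade-12"))
```

The interesting part for automation vendors is the last method: repurposing a server has to carry the tenant's network and storage identity along with the software payload, every time.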

What do you think? How do you plan to address storage in your automated data center?

Thursday, January 18, 2007

Tsunami of Automation

It's been fun over the last couple of months watching the scramble by various infrastructure vendors to show they play in the automation space. For instance, there are application server vendors, managed hosting providers, and even SaaS management providers trying to move their marketing message into the enterprise service level automation and utility computing spaces. Still, in every case, the focus seems to be on just one piece of the puzzle: servers and middleware, provisioning and customer service, or virtualized applications.

It seems to me that to achieve service levels for an application, each part of the application's infrastructure, from the app itself down to the electricity it consumes, needs to be measured and adjusted as needed to meet demand. I guarantee that if you do less, you will need to integrate your "policy-based" tools with other "policy-based" tools ad nauseam. And it will take you years to get there. (Note: we need standards here...)
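To illustrate what I mean by measuring and adjusting every layer, here is a toy control loop in Python. The metric names, SLA targets, and scaling rule are all hypothetical stand-ins, not a real product's policy engine:

```python
import random  # stands in for real instrumentation in this sketch

# Hypothetical SLA targets at several layers of one application's stack.
SLA = {
    "app_response_ms": 250,       # application level
    "jvm_heap_pct": 80,           # middleware level
    "cpu_pct": 75,                # OS / server level
    "power_watts_per_node": 350,  # facility level
}

def measure(metric: str) -> float:
    """Placeholder for real monitoring; returns a fake reading."""
    return random.uniform(0, SLA[metric] * 1.3)

def control_loop(capacity: int, min_nodes: int = 2, max_nodes: int = 20) -> int:
    """One pass of a service-level loop: breach any target -> add capacity;
    comfortably under every target -> return capacity to the shared pool."""
    readings = {m: measure(m) for m in SLA}
    if any(value > SLA[m] for m, value in readings.items()):
        return min(capacity + 1, max_nodes)
    if all(value < 0.5 * SLA[m] for m, value in readings.items()):
        return max(capacity - 1, min_nodes)
    return capacity

nodes = 4
for _ in range(5):
    nodes = control_loop(nodes)
    print("allocated nodes:", nodes)
```

The loop is trivial on purpose; the hard part is that every layer in that dictionary has to be instrumented and adjustable, which is exactly where single-layer "policy-based" tools fall short.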

Nonetheless, it's good to see all of the market validation going on right now. And I encourage you to read about these vendors and others talking about utility computing, QoS, and automation. There are a lot of cool ideas here, waiting to work together...

Thursday, January 04, 2007

IDC recognizes Service Level Automation!!!

Big victory today for the growing number of voices trumpeting the benefits of Service Level Automation: IDC has defined a product category specifically named Service Level Automation in its recent paper, Utility Computing: A Current Analysis of the Evolution. There are two vendors in the category today. (I'll let you guess who--one has the initial C, the other IBM...)

What is really cool about this is that it validates the need for systems that focus not on infrastructure automation per se (e.g. automating deployment processes, automating server creation, etc.), but on the needs of the business and its applications and services. Sure, server virtualization, metering and billing, and so on are still important in a utility computing environment, but the concept is not complete unless something is monitoring your quality of service and making adjustments as necessary to maintain compliance at minimal cost.

Of course, every so-called "policy-based" automation solution, no matter how single-product focused, will probably now claim this title. But since you've been reading this blog, you know better than to fall into that trap...right? :)

Ken Oestreich starts blogging

Ken is the product marketing director at Cassatt, and I have enjoyed the privilege of working with him on a number of projects. His new blog, titled Fountainhead, is another excellent source of information about service level automation, virtualization, and creating an operating system for the entire datacenter. I have always enjoyed Ken's ruminations on the datacenter market, and I look forward to all that he has to say. Welcome to the conversation, Ken!

Tuesday, January 02, 2007

Real quick: I came across this interesting post discussing the issues around SOA and virtualization "convergence". In summary, Todd notes that the world of the software infrastructure surrounding SOA is changing rapidly in response to the highly dynamic nature of Web Services infrastructure. I responded with the following comment:

I’m fascinated by this concept of the coupling between software (especially web services) and infrastructure (including servers, networks and storage). In fact, Cassatt has done a tremendous amount of thinking about how Service Level Automation and service oriented infrastructure apply to web services, especially in the changing world of the software infrastructure used to host those services. (The hardware evolution is also fascinating, but is tangential to the conversation here.)
Dynamically changing the number of physical, virtual or even application servers hosting the service certainly addresses the sticky performance issues surrounding web services, but it does nothing to address the *efficiency* issues, especially with regards to how resources can be pooled to meet the demand of a number of applications and services at the same time. Think “how can I deliver the required service levels for my applications and services using the minimum resources required to do so”.
This is what I am addressing on my blog. I hope you will check it out and comment at will on what you see there. I’m glad to see such interesting discussion about service oriented infrastructure. It is certainly a problem that will be addressed dramatically in the next 5-10 years.
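To put a number on the efficiency point I was making in that comment, here is a deliberately over-simplified sketch (invented demand curves and capacity figures) comparing peak-sized silos against one shared pool that is re-divided as demand shifts:

```python
import math

# Hypothetical per-service demand over one day (requests/sec at 4-hour intervals).
demand = {
    "web":     [200, 150, 400, 900, 700, 300],
    "billing": [ 50, 300, 100,  80, 250, 400],
    "reports": [ 10, 500,  30,  40,  20,  20],
}
CAPACITY_PER_SERVER = 300  # requests/sec one server can handle (invented figure)

# Silo sizing: every service gets enough servers for its own peak, all day.
silo_servers = sum(math.ceil(max(curve) / CAPACITY_PER_SERVER) for curve in demand.values())

# Pooled sizing: the pool only has to cover the peak of the *combined* load,
# because servers are repurposed between services as their demand shifts.
combined_peak = max(sum(point) for point in zip(*demand.values()))
pooled_servers = math.ceil(combined_peak / CAPACITY_PER_SERVER)

print(f"siloed: {silo_servers} servers")
print(f"pooled: {pooled_servers} servers")
```

With these made-up curves the pool needs 4 servers where the silos need 7, simply because the three services do not peak at the same time. That delta, multiplied across a datacenter, is the efficiency argument.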

Service Level Automation in 2007

It's the first day back in the office after a superb, work-free holiday break, so I thought I'd get my (our?) motor revving again with some "predictions" about the virtualization and SLA spaces in 2007.


  • Service Level Automation will expand from the server-centric approach of today to a variety of granularities: the application level, the middleware level, the OS level, the cluster level, and so on. This is crucial when it comes to truly optimizing both hardware and software usage, including license utilization.

  • The incentive for larger organizations to move to a true utility computing infrastructure will grow tremendously as initiatives are announced throughout the Fortune 500.

  • Successful SLA and utility computing implementations will continue to appear at both commercial and government customers. Unsuccessful implementations will also appear, due either to poorly planned solutions (the "I can build it" syndrome) or poorly planned projects (the "I can convert my entire datacenter at once" syndrome).

  • The winners in the utility computing and service level automation space will be defined by successful implementations, strong partnerships and innovation that continues to disrupt the traditional tightly coupled, "silo"-based approach IT uses today.

  • Utility computing will appear as a system integrator specialty with increasing frequency over 2007. In fact, specialist boutique firms will start appearing in large cities around the United States and Western Europe, and will be quite profitable.

Let me know what you think. I've only had a few hours to think about this so far this year, so I'm sure more will occur to me over the next few days.