Monday, December 08, 2008

The Wisdom of Clouds is moving!!!

Finally! It's been a long time coming, but the "morphing" of The Wisdom of Clouds I've hinted at a couple of times in the last month is finally here. Dan Farber and Margaret Kane, the good editors at CNET, have agreed to publish this blog (with a slight name change) on the CNET Blog Network. Henceforth the blog will be titled "The Wisdom of the Clouds", and located at http://news.cnet.com/the-wisdom-of-clouds. Please go there and subscribe today.

CNET is one of the most respected IT news sources, and with about 15 million unique visitors a month this is a huge opportunity to broaden the cloud computing discussion to the mainstream IT community. The other members of the CNET Blog Network include such thought leaders as Matt Asay, Gordon Haff and Peter N. Glaskowsky, and I am humbled to be listed among them.

However, I also want to recognize and thank each of you for helping to make The Wisdom of Clouds what it is today. At the beginning of 2008, I had a little over 120 subscribers. This last week saw a record 948 subscribers, with over 200 of you reading each new post within 24 hours of it hitting the feeds, and about 50 more reading the same on the blog pages themselves. It has been tremendously enriching to see the uptake in interest, and I am grateful to each of you for your interest, attention and feedback. Thank you.

Unfortunately, this transition will not be without its inconveniences. As you may have guessed, I will no longer be publishing to this site; for now http://blog.jamesurquhart.com will become an archive site for the two years or so of posts that I've written since early in my Cassatt days. I will frequently reference back to those posts initially, but all new material will appear at CNET. If you want to follow where the conversation goes from here, it is important that you go to the CNET URL and subscribe.

I will probably continue to publish my del.icio.us bookmarks to the existing feed for a while, but I want to consolidate that traffic with the article publications over time. Stay tuned for how that will work out. I won't be bookmarking my own posts as a rule, so subscribe to the new feed.

Please let me know if you have any problems or concerns with the transition, and I hope that each and every one of you will continue to be a part of my own education about the cloud and its consequences. As always, I can be reached at jurquhart at (ignore this) yahoo dot com.

Again, thank you all, and I'll see you on The Wisdom of the Clouds.

Saturday, December 06, 2008

The Two Faces of Cloud Computing

One of the fun aspects of a nascent cloud computing market is that there are "veins" of innovative thinking to be mined from all of the hype. Each of us discovers these independently, though the velocity of recognition increases greatly as "asymmetrical follow" patterns take hold. Those "really big ideas" of cloud computing usually start as a great observation by one or a few independent bloggers. If you are observant, and pay attention to patterns in terminology and concepts, you can get a jump on the opportunities and intellectual advances triggered by a new "really big idea".

One of these memes that I have been noticing more and more in the last week is that of the two-faceted cloud; the concept that cloud computing is beginning to address two different market needs, that of large scale web applications (the so-called "Web 2.0" market), and that of traditional data center computing (the so-called "Enterprise" market). As I'll try to explain, this is a "reasonably big idea" (or perhaps "reasonably big observation" is a more accurate portrayal).

I first noticed the meme when I was made aware of a Forrester report titled "There Are Two Types Of Compute Clouds: Server Clouds And Scale-Out Clouds Serve Very Different Customer Needs", written by analyst Frank E. Gillett. The abstract gives the best summary of the concept that I've found to date:
"Cloud computing is a confusing topic for vendor strategists. One reason? Most of us confuse two fundamentally different types of compute clouds as one. Server clouds support the needs of traditional business apps while scale-out clouds are designed for massive, many-machine workloads such as Web sites or grid compute applications. Scale-out clouds differ from server clouds in five key ways: 1) much larger workloads; 2) loosely coupled software architecture; 3) fault tolerance in software, not hardware; 4) simple state management; and 5) server virtualization is for provisioning flexibility — not machine sharing. Strategists must update their server virtualization plans to embrace the evolution to server cloud, while developing a separate strategy to compete in the arena for scale-out clouds."
Get it? There are two plans of attack for an enterprise looking to leverage the cloud:
  • How do you move existing load to the IaaS, PaaS, and SaaS providers?
  • How do you leverage the new extremely large scale infrastructures used by the Googles and Amazons of the world to create new competitive advantage?
Around then I started seeing references to other posts that suggested the same thing; that there are two customers for the cloud: those that need to achieve higher scale at lower costs than possible before, and those that want to eliminate data center capital in favor of a "pay-as-you-go" model.

I'm not sure how revolutionary this observation is (obviously many people noticed it before it clicked with me), but it is important. Where is it most obvious? In my opinion, the three PaaS members of the "big four" are good examples:
  • Google is the sole Scale-out vendor on the list...for now. I hear rumors that Microsoft may explore this as well, but for now it is not Mr. Softy's focus.
  • Microsoft's focus is, on the other hand, the enterprise. By choosing a .NET centric platform, Azure, complete with Enterprise Service Bus and other integration-centric technologies, they have firmly targeted the corporate database applications that run so much of our economy today.
  • Salesforce.com is perhaps the most interesting in that they chose to compete for enterprises with force.com and Sites, but through a "move all your stuff here" approach. Great for the Salesforce.com users, but perhaps a disadvantage to those wishing to build stand-alone systems, much less those wishing to integrate with their on-premises SAP instances.
The point here, I guess, is that comparisons between Scale-out and Enterprise clouds, while sometimes tempting (especially in the Google vs. Microsoft case), are rather useless. They serve different purposes, often for completely different audiences, and enterprise IT organizations would do better to focus their efforts on the specific facet of cloud computing that applies to a given project. If you are a budding PaaS vendor, understand the distinction, and focus on the technologies required to meet your market's demand. Don't try to be "all cloud to all people".

Except, possibly, if you are Microsoft...

Monday, December 01, 2008

The enterprise "barrier-to-exit" to cloud computing

An interesting discussion ensued on Twitter this weekend between myself and George Reese of Valtira. George--who recently published some thought-provoking posts on O'Reilly Broadcast about cloud security, and is writing a book on cloud computing--argued strongly that the benefits gained from moving to the cloud outweigh any additional costs that may ensue. In fact, in one tweet he noted:
"IT is a barrier to getting things done for most businesses; the Cloud reduces or eliminates that barrier."
I reacted strongly to that statement; I don't buy that IT is that bad in all cases (though some certainly is), nor do I buy that simply eliminating a barrier to getting something done makes it worthwhile. Besides, the barrier being removed isn't strictly financial, it is corporate IT policy. I can build a kick-butt home entertainment system for my house for $50,000; that doesn't mean it's the right thing to do.

However, as the conversation unfolded, it became clear that George and I were coming at the problem from two different angles. George was talking about many SMB organizations, which really can't justify the cost of building their own IT infrastructure, but have been faced with a choice of doing just that, turning to (expensive and often rigid) managed hosting, or putting a server in a colo space somewhere (and maintaining that server). Not very happy choices.

Enter the cloud. Now these same businesses can simply grab capacity on demand, start and stop billing at their leisure and get truly world-class power, virtualization and networking infrastructure without having to put an ounce of thought into it. Yeah, it costs more than simply running a server would cost, but when you add the infrastructure/managed hosting fees/colo leases, cloud almost always looks like the better deal. At least that's what George claims his numbers show, and I'm willing to accept that. It makes sense to me.

I, on the other hand, was thinking of medium to large enterprises which already own significant data center infrastructure, and already have sunk costs in power, cooling and assorted infrastructure. When looking at this class of business, these sunk costs must be added to server acquisition and operation costs when rationalizing against the costs of gaining the same services from the cloud. In this case, these investments often tip the balance, and it becomes much cheaper to use existing infrastructure (though with some automation) to deliver fixed capacity loads. As I discussed recently, the cloud generally only gets interesting for loads that are not running 24X7.
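To make that sunk-cost arithmetic concrete, here is a back-of-the-envelope sketch in Python. The dollar figures are hypothetical round numbers of my own (roughly in line with the per-server costs I walk through in my elasticity post), not anyone's actual pricing; the point is only the shape of the math.

# Break-even between already-owned ("sunk cost") capacity and on-demand cloud.
# All figures are hypothetical round numbers, not vendor quotes.

OWNED_COST_PER_SERVER_YEAR = 1600.0  # amortized hardware + power + cooling + space
CLOUD_RATE_PER_HOUR = 0.80           # e.g., a larger on-demand instance
HOURS_PER_YEAR = 24 * 365            # 8,760

break_even_hours = OWNED_COST_PER_SERVER_YEAR / CLOUD_RATE_PER_HOUR
utilization = break_even_hours / HOURS_PER_YEAR

print("Break-even at %.0f hours/year (about %.0f%% utilization)"
      % (break_even_hours, utilization * 100))
# -> Break-even at 2000 hours/year (about 23% utilization)
# A 24x7 load blows well past the break-even point, so the sunk-cost gear wins;
# a load that runs only a few hundred hours a year is cheaper to rent.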

(George actually notes a class of applications that sadly are also good candidates, though they shouldn't necessarily be: applications that IT just can't or won't get to on behalf of a business unit. George claims his business makes good money meeting the needs of marketing organizations that have this problem. Just make sure the ROI is really worth it before taking this option, however.)

This existing investment in infrastructure therefore acts almost as a "barrier-to-exit" for these enterprises when considering moving to the cloud. It seems to me highly ironic, and perhaps somewhat unique, that certain trails in the cloud computing market will be blazed not by organizations with multiple data centers and thousands upon thousands of servers, but by the little mom-and-pop shop that used to own a couple of servers in a colo somewhere that finally shut them down and turned to Amazon. How cool is that?

The good news, as I hinted at earlier, is that there is technology that can be rationalized financially--through capital equipment and energy savings--which in turn can "grease the skids" for cloud adoption in the future. Ask the guys at 3tera. They'll tell you that their cloud infrastructure allows an enterprise to optimize infrastructure usage while enabling workload portability (though not portability of running workloads) between cloud providers running their stuff. VMWare introduced their vCloud initiative specifically to make enterprises aware of the work they are doing to allow workload portability across data centers running their stuff. Cisco (my employer) is addressing the problem. In fact, there are several great products out there that can give you cloud technology in your enterprise data center that will open the door to cloud adoption now (with things like cloudbursting) and in the future.

If you aren't considering how to "cloud enable" your entire infrastructure today, you ought to be getting nervous. Your competitors probably are looking closely at these technologies, and when the time is right, their barrier-to-exit will be lower than yours. Then, the true costs of moving an existing data center infrastructure to the cloud will become painfully obvious.

Many thanks to George for the excellent discussion. Twitter is becoming a great venue for cloud discussions.

Saturday, November 29, 2008

What is the value of IT convenience?

RPath's Billy Marshall wrote a post that is closely related to a topic I have been thinking about a lot lately. Namely, Billy points out that the effect of server virtualization hasn't been to satisfy the demand on IT resources, but simply to accelerate that demand through simplifying resource allocation. Billy gives a very clear example of what he means:
"Over the past 2 weeks, I have had a number of very interesting conversations with partners, prospects, customers, and analysts that lead me to believe that a virtual machine tsunami is building which might soon swamp the legacy, horizontal system management approaches. Here is what I have heard:

Two separate prospects told me that they have quickly consumed every available bit of capacity on their VMware server farms. As soon as they add more capacity, it disappears under the weight of an ever pressing demand of new VMs. They are scrambling to figure out how they manage the pending VM sprawl. They are also scrambling to understand how they are going to lower their VMware bill via an Amazon EC2 capability for some portion of the runtime instances.

Two prominent analysts proclaimed to me that the percentage of new servers running a hypervisor as the primary boot option will quickly approach 90% by 2012. With all of these systems sporting a hypervisor as the on-ramp for applications built as virtual machines, the number of virtual machines is going to explode. The hypervisor takes the friction out of the deployment process, which in turn escalates the number of VMs to be managed."
The world of Infrastructure as a Service isn't really any different:
"Amazon EC2 demand continues to skyrocket. It seems that business units are quickly sidestepping those IT departments that have not yet found a way to say “yes” to requests for new capacity due to capital spending constraints and high friction processes for getting applications into production (i.e. the legacy approach of provisioning servers with a general purpose OS and then attempting to install/configure the app to work on the production implementation which is no doubt different than the development environment). I heard a rumor that a new datacenter in Oregon was underway to support this burgeoning EC2 demand. I also saw our most recent EC2 bill, and I nearly hit the roof. Turns out when you provide frictionless capacity via the hypervisor, virtual machine deployment, and variable cost payment, demand explodes. Trust me."
Billy isn't the only person I've heard comment about their EC2 bill lately. Justin Mason commented on my post, "Do Your Cloud Applications Need to be Elastic?":
"[W]e also have inelastic parts of the infrastructure that could be hosted elsewhere at a colo for less cost, and personally, I would probably have done this given the choice; but mgmt were happier just to use EC2 as widely as possible, despite the additional costs, since it keeps things simpler."
In each case, management chooses to pay more for convenience.

I think these examples demonstrate an important decision point for IT organizations, especially during these times of financial strife. What is the value of IT convenience? When is it wise to choose to pay more dollars (or euros, or yen, or whatever) to gain some level of simplicity or focus or comfort? In the case of virtualization, is it always wise to leverage positive economic changes to expand service coverage? In the case of cloud computing, is it always wise to accept relatively high price points per CPU hour over managing your own cheaper compute loads?

I think there are no simple answers, but there are some elements that I would consider if the choice was mine:
  • Do I already have the infrastructure and labor skills I need to do it just as well or better than the cloud? If I were to simply apply some automation to what I already have, would it deliver the elasticity/reliability/agility I want without committing a monthly portion of my corporate revenues to an outside entity?

  • Is virtualization and/or the cloud the only way to get the agility I need to meet my objectives? The answer here is often "yes" for virtualization, but is it as frequently for cloud computing?

  • Do I have the luxury of cash flow that allows for me to spend up a little for someone else to worry about problems that I would have to handle otherwise? Of course, this is the same question that applies to outsourcing, managed hosting, etc.

One of the reasons you've seen a backlash against some aspects of cloud computing, or even a rising voice for the "it's the same thing we tried before" argument, is that much of the marketing hype out there is starting to ignore the fact that cloud computing costs money; it costs enough to provide a profit to the vendor. Yes, it is true that many (most?) IT organizations have lacked the ability to deliver the same efficiencies as the best cloud players, but that can change and change quickly if those same organizations were to look to automation software and infrastructure to provide that efficiency.

My advice to you: if you already own data centers, and if you want convenience on a budget, balance the cost of Amazon/GoGrid/Mosso/whoever with the value delivered by Arjuna/3TERA/Cassatt/Enomaly/etc./etc./etc., including controlling your virtualization sprawl and preparing you for using the cloud in innovative ways. Consider making your storage and networking virtualization friendly.

Sometimes convenience starts at home.

Friday, November 28, 2008

Two! Two! Two! Two great Overcast podcasts for your enjoyment

It's been a busy week or so for Geva Perry and me, as we recorded a joint episode with John Willis's CloudCafe podcast, and had a fabulous discussion with Greg Ness of Archimedius.net. Both podcasts are available from the Overcast blog.

The discussion with John focused on definitions in the cloud computing space, and some of the misconceptions that people have about the cloud, what it can and can't do for you, and what all that crazy terminology refers to. John is an exceptionally comfortable host, and his questions drove a deep conversation about what the cloud is, various components of cloud computing, and adjunct terms like "cloudbursting". It was a lot of fun to do, and I am grateful for John's invitation to do this.

Greg Ness demonstrated his uniquely deep understanding of what network security entails in a virtualized data center, and how automation is the linchpin of protecting that infrastructure. Topics ranged from this year's DNS exploit and the pace at which systems are getting patched to address it, to the reasons why the static network we all knew and loved is DOA in a cloud (or even just a virtualized) world. I really admire Greg, and find his ability to articulate difficult concepts with the help of historical insight very appealing. I very much appreciate his taking time out of his busy day to join us.

We are busy lining up more great guests for future podcasts, so stay tuned--or better yet, subscribe to Overcast at the Overcast blog.

Monday, November 24, 2008

Is IBM the ultimate authority on cloud computing?

There was an interesting announcement today from IBM regarding their new "Resilient Cloud" seal of approval--a marketing program targeted at cloud providers, and at customers of the cloud. The idea is simple, if I am reading this right:
  • IBM gets all of the world's cloud vendors to pay them a services fee to submit their services to a series of tests that validate (or not) whether the cloud is resilient, secure and scalable. Should the vendor's offering pass, they get to put a "Resilient Cloud" logo on their web pages, etc.

  • Customers looking for resilient, secure and scalable cloud infrastructure can then select from the pool of "Resilient Cloud" offerings to build their specific cloud-based solutions. Oh, and they can hire IBM services to help them distinguish when to go outside for their cloud infrastructure, and when to convert their existing infrastructure. I'm sure IBM will give a balanced analysis as to the technology options here...

I'm sorry, but I'm a bit disappointed with this. IBM has been facing a very stiff "innovator's dilemma" when it comes to cloud computing, as noted by GigaOm's Stacey Higginbotham:
"IBM has been pretty quiet about its cloud efforts. In part because it didn’t want to hack off large customers buying a ton of IBM servers by competing with them. The computing giant hasn’t been pushing its own cloud business until a half-hearted announcement at the end of July, about a month and half after a company exec had told me IBM didn’t really want to advertise its cloud services."
She goes on to note, however, that IBM has some great things in the works, including a research project in China that shows great promise. That's welcome news, and I look forward to IBM being a major player on the cloud computing stage again. However, this announcement is just an attempt at making IBM the "godfather" of the cloud market, and that's not interesting in the least.

Still, I bet if you want to be an IBM strategic partner, you'd better get on board with the program. Amazon, are you going to pay the fee? Microsoft? Google? Salesforce.com? Anyone?

Friday, November 21, 2008

Do Your Cloud Applications Need To Be Elastic?

I got to spend a few hours at Sys-Con's Cloud Computing Expo yesterday, and I have to say it was most certainly an intellectually stimulating day. Not only was just about every US cloud startup represented in one way or another, but the day also included an unusual conference session and a meetup of CloudCamp fans.

While listening in on a session, I overheard one participant ask how the cloud would scale their application if they couldn't replicate it. This triggered a strong response in me, as I really feel for those that confuse autonomic infrastructures with magic applied to scaling unscalable applications. Let me be clear, the cloud can't scale your application (much, at least) if you didn't design it to be scaled. Period.

However, that caused me to ask myself whether or not an application had to be horizontally scalable in order to gain economically while running in an Infrastructure as a Service (IaaS) cloud. The answer, I think, is that it depends.

Chris FlexFleck of Citrix wrote up a pretty decent two-part explanation of this on his blog a few weeks ago. He starts out with some basic costs of acquiring and running 5 quad-core servers--either on-premises (amortized over 3 years at 5%) or in a colocation data center--against the cost of running equivalent "high CPU" servers 24X7 on Amazon's EC2. The short short of his initial post is that it is much more expensive to run full time on EC2 than it is to run on-premises or in the colo facility.

How much more expensive?
  • On-premises: $7800/year
  • Colocation: $13,800/year
  • Amazon EC2: $35,040/year
I tend to believe this reflects the truth, even if it's not 100% accurate. First, while you may think "ah, Amazon...that's 10¢ a CPU hour", in point of fact most production applications that you read about in the cloud-o-sphere are using the larger instances. Chris is right to use high CPU instances in his comparison at 80¢/CPU hour. Second, while it's tempting to think in terms of upfront costs, your accounting department will in fact spread the capital costs out over several years, usually 3 years for a server.
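For the curious, the EC2 number falls straight out of the hourly rate. Here is a minimal Python sketch using Chris's assumptions (5 servers, high-CPU instances at 80¢ per instance-hour running around the clock; the on-premises and colo figures are simply his amortized totals):

# Rough annual cost comparison for a constant 5-server workload (Chris's assumptions).

SERVERS = 5
EC2_HIGH_CPU_RATE = 0.80   # $ per instance-hour
HOURS_PER_YEAR = 24 * 365  # 8,760

costs = {
    "On-premises": 7800.0,       # amortized figure from part one of Chris's post
    "Colocation": 13800.0,       # ditto
    "Amazon EC2 (24x7)": SERVERS * EC2_HIGH_CPU_RATE * HOURS_PER_YEAR,  # = $35,040
}

for label, annual in costs.items():
    print("%-18s $%s/year" % (label, format(annual, ",.0f")))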

In the second part of his analysis, however, Chris notes that the cost of the same Amazon instances varies based on the amount of time they are actually used, as opposed to the physical infrastructure that must be paid for whether it is used or not (with the possible exception of power and AC costs). This comes into play in a big way if the same instances are used judiciously for varying workloads, such as the hybrid fixed/cloud approach he uses as an example.

In other words, if you have an elastic load, and you plan for "standard" variances on-premises while letting "excessive" spikes in load trigger instances on EC2, you suddenly have a very compelling case relative to buying enough physical infrastructure to handle excessive peaks yourself. As Chris notes:
"To put some simple numbers to it based on the original example, let's assume that the constant workload is roughly equal to 5 Quadcore server capacity. The variable workload on the other hand peaks at 160% of the base requirement, however it is required only about 400 hours per year, which could translate to 12 hours a day for the month of December or 33 hours per month for peak loads such as test or batch loads. The cost for a premise only solution for this situation comes to roughly 2X or $ 15,600 per year assuming existing space and a 20% factor of safety above peak load. If on the other hand you were able to utilize a Cloud for only the peak loads the incremental cost would be only $1,000. ( Based on Amazon EC2 )
Premise Only
$ 15,600 Annual cost ( 2 x 7,800 from Part 1 )
Premise Plus Cloud
$ 7,800 Annual cost from Part 1
$ 1,000 Cloud EC2 - ( 400 x .8 x 3 )
$ 8,800 Annual Cost Premise Plus Cloud "
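Restating Chris's hybrid math as a quick sketch, extending the one above (again, his assumptions; he rounds the burst cost up to $1,000, which gives his $8,800 total):

# Hybrid scenario: keep the steady 5-server base load on owned gear, and rent
# the 160% peak (3 extra high-CPU instances) from EC2 for ~400 hours a year.

BASE_ON_PREMISES = 7800.0   # annual cost of the owned base capacity (part one)
PEAK_INSTANCES = 3          # extra capacity needed only at peak
PEAK_HOURS_PER_YEAR = 400
EC2_HIGH_CPU_RATE = 0.80    # $ per instance-hour

premise_only = 2 * BASE_ON_PREMISES                                        # ~$15,600
burst_to_cloud = PEAK_INSTANCES * PEAK_HOURS_PER_YEAR * EC2_HIGH_CPU_RATE  # = $960
premise_plus_cloud = BASE_ON_PREMISES + burst_to_cloud                     # ~$8,760

print("Premise only:       $%s/year" % format(premise_only, ",.0f"))
print("Premise plus cloud: $%s/year" % format(premise_plus_cloud, ",.0f"))
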
The lesson of our story? Using the cloud makes the most sense when you have an elastic load. I would postulate that another option would be a load that is not powered on at full strength 100% of the time. Some examples might include:
  • Dev/test lab server instances
  • Scale-out applications, especially web application architectures
  • Seasonal load applications, such as personal income tax processing systems or retail accounting systems
On the other hand, you probably would not use Infrastructure as a Service today for:
  • That little accounting application that has to run at all times, but has at most 20 concurrent users
  • The MS Exchange server for your 10 person company. (Microsoft's multi-tenant Exchange online offering is different--I'm talking hosting your own instance in EC2)
  • Your network monitoring infrastructure
Now, the managed hosting guys will probably jump down my throat with counterarguments about the level of service provided by (at least their) hosting clouds, but my experience is that all of these clouds actually treat self-service as self-service, and that there really is very little difference between do-it-yourself on-premises and do-it-yourself in the cloud.

What would change these economics to the point that it would make sense to run any or all of your applications in an IaaS cloud? Well, I personally think you need to see a real commodity market for compute and storage capacity before you see the pricing that reflects economies in favor of running fixed loads in the cloud. There have been a wide variety of posts about what it would take [pdf] to establish a cloud market in the past, so I won't go back over that subject here. However, if you are considering "moving my data center to the cloud", please keep these simple economics in mind.

Thursday, November 20, 2008

Reuven Cohen Invents The "Unsession"

Gotta luv the Ruv. One of the highlights of this week's Sys-Con Cloud Computing Expo was Reuven's session on World-Wide Cloud Computing, "presented" to a packed room filled with some of the most knowledgeable cloud computing fans you'll ever see--from vendors, SIs, customers, you name it.

Reuven got up front, showed a total of two slides (to introduce himself, because if you're Ruv, it takes two slides to properly introduce yourself. :-) ), then kicked off a totally "unconference"-like hour-long session. The best way I can think of to describe it is that he a) went straight to the question and answer period, and b) asked questions of the audience, not the other way around. Now, he may just have been lazy, but I think he took advantage of the right sized room with the right subject matter interest and expertise at the right time to shake things up.

The result was an absolutely fascinating and wide ranging discussion about what it would take to deliver a "world wide cloud", a dream that many of us have had for a while, but that has been a particular focus of Reuven's. I can't recount all aspects of the discussion here, quite obviously, but I thought I would share the list of subjects covered that I noted during the talk:
  • federation
  • firewall configuration
  • data encryption
  • Wide Area Network optimization
  • latency
  • trust
  • transparency
  • the community's role in driving cloud specifications
  • interoperability
  • data portability
  • data ownership
  • metadata/policy/configuration/schema ownership
  • cloud brokerages
  • compliance
  • Payment Card Industry
  • Physical to Virtual and Physical to Cloud
  • reliability
  • SLA metadata
  • data integrity
  • identity
  • revocable certificates (see Nirvanix)
  • content delivery networks (and Amazon's announcement)
  • storage
Now, I'm not sure that we solved anything in the discussion, but everyone walked away learning something new that afternoon.

Got a session to present to a room of 100 or less? Not sure how to capture attention in a set of slides? The heck with it, pull a "Reuven" and turn the tables. If you have an audience eager to give as well as take, you could end up enlightening yourself as much as you enlighten everyone else.

Thanks, Ruv, and keep stirring things up.

Monday, November 17, 2008

Amazon launches CloudFront Content Delivery Service

Quick note before I go to bed. Amazon just announced their previously discussed content delivery network service, CloudFront, tonight. Jeff Barr lays it out for you on the AWS blog, and Werner Vogels adds his vision for the service. To their credit, they are pushing the concept that the way the service is designed, it can do much more than traditional content delivery services; potentially acting as a caching and routing mechanism for applications distributed across EC2 "availability zones".

I think Thorsten von Eicken of RightScale gives the most honest assessment of the service tonight. He praises its simplicity of use, noting that his product supports all CloudFront functionality today. He also points out that CloudFront is a "'minimum viable product' offering" at this time, with several restrictions and some features that leave a lot to be desired. That being said, both Amazon and RightScale are clear that this is a necessary service for Amazon to offer, and that it is indeed useful today.

More when I've had a chance to evaluate it, but congrats again to the Amazon team for staying a few steps ahead.

Update: Stacey Higginbotham adds some excellent insight from the GigaOm crew on CloudFront's effect on the overall CDN market. The short short is that Amazon's "pay-as-you-go" pricing severely undercuts the major CDN vendors for small and medium businesses.

Friday, November 14, 2008

Why the Choice of Cloud Computing Type May Depend On Who's Buying

Thanks to Ron K. Jeffries' Cloudy Thinking blog, I was directed to Redmonk's Stephen O'Grady (who I now subscribe to directly) and his excellent post titled Cloud Types: Fabric vs Instance. Stephen makes an excellent observation about the nature of Infrastructure as a Service (called increasingly "Utility Computing" by Tim O'Reilly followers) and Platform as a Service (that one remains consistent). His observation is this:
"...Tim seems to feel that they are aspects of the types, while I’m of the opinion that they instead define the type. For example, by Tim’s definition, one characteristic of Utility Computing style clouds is virtual machine instances, where my definitions rather centers on that.

Here’s how I typically break down cloud computing styles:

Fabric

Description: A problematic term, perhaps, because a few of the vendors employ it towards different ends, but I use it because it’s descriptive. Rather than deploy to virtualized instances, developers building on this style cloud platform write instead to a fabric. The fabric’s role is to abstract the underlying physical and logical architecture from the developer, and - typically - to assume the burden of scaling.
Example: Google App Engine

Instance

Description: Instance style clouds are characterized by, well, instances. Unlike the fabric cloud, there is little to no abstraction present within instance based clouds: they generally recreate - virtually - a typical physical infrastructure composed of instances that include memory, processing cycles, and so on. The lack of abstraction can offer developers more control, but this control is typically offered at the cost of transparent scaling.
Example: Amazon EC2"

I love that distinction. First, for those struggling to see how Amazon/GoGrid/Flexiscale/etc. relates to Google/Microsoft/Salesforce.com/Intuit/etc., it delineates a very clear difference. If you are reserving servers on which to run applications, it is IaaS. If you are running your application free of care about which and how many resources are consumed, then it is PaaS. Easy.
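To make the distinction concrete, here are two minimal, illustrative Python snippets showing what you actually write against each style. Both are sketches, not production code: the AMI ID is a placeholder, the instance-style example assumes the boto library and AWS credentials are configured, and the fabric-style example assumes the (circa 2008) App Engine Python SDK.

# Instance style (IaaS): you ask for servers, then own everything that runs on them.
import boto

conn = boto.connect_ec2()  # credentials come from your environment/config
reservation = conn.run_instances('ami-12345678',          # placeholder image ID
                                 min_count=1, max_count=1,
                                 instance_type='m1.small')
print("Started instance %s" % reservation.instances[0].id)

# Fabric style (PaaS): you write request handlers; the platform decides where,
# and on how many machines, they run.
from google.appengine.ext import webapp
from google.appengine.ext.webapp.util import run_wsgi_app

class MainPage(webapp.RequestHandler):
    def get(self):
        self.response.out.write('Hello from the fabric')

if __name__ == '__main__':
    run_wsgi_app(webapp.WSGIApplication([('/', MainPage)]))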

However, I am even more excited by a thought that occurred to me as I read the post. One of the things that this particular distinction points out is the likelihood that the buyers of each type would be different classes of enterprise IT professionals.

It's not black and white, but I would be willing to bet heavily that:
  • The preponderance of interest in IaaS is from those whose primary concern is system administration; those with complex application profiles, who want to tweak scalability themselves, and who want the freedom to determine how data and code get stored, accessed and acted upon.

  • The preponderance of interest in PaaS is from those whose primary concern is application development; those with a functional orientation, who want to be more concerned about creating application experiences than worrying about how to architect for deployment in a web environment (or whatever the framework provides).

In other words, server jockeys choose instances, while code jockeys choose fabric.

Now, the question quickly becomes, if developers can get the functionality and scalability/reliability/availability required from PaaS, without hiring the system administrators, why would any enterprise choose IaaS unless they were innovating at the architecture level? On the other hand, if all you want to do is add capacity to existing functionality, or you require an unusual or even innovative architecture, or you need to guarantee that certain security and continuity precautions are in place, why would you ever choose PaaS?

This, in turn, boils right back down to the PaaS spectrum I spoke of recently. Choose your cloud type based on your true need, but also take into account the skill set you will require. Don't focus on a single brand just because it's cool to your peers. Pick IaaS if you want to tweak infrastructure, otherwise by all means find the PaaS platform that best suits you. You'll probably save in the long run.

Now, I've clearly suppressed the fact that developers probably still want some portability...though I must note that choosing a programming language alone limits function portability. (Perhaps that's OK if the productivity gains outweigh the likelihood of having to port.) Also, the things that system administrators are doing in the enterprise are extremely important, like managing security, data integrity and continuity. There are no guarantees that any of the existing PaaS platforms can help you with any of that.

Something to think about, anyway. What do you think? Will developers lean towards PaaS, while system administrators lean towards IaaS? Who will win the right to choose within the enterprise?

Wednesday, November 12, 2008

In Cloud Computing, a Good Network Gives You Control...

There is a simple little truth that is often overlooked when people discuss "moving to the cloud". It is a truth that is so obvious, it is obscure; so basic, it gets lost in the noise. That truth is simply this: you may move all of your servers to the cloud, you may even move all of your storage to the cloud, but an enterprise will always have a network presence in its own facilities. The network is ubiquitous, not only in the commodity sense, but also in the physical sense.

To date, most IT folks have had a relatively static view of networks. They've relied on networking equipment, software and related services to ensure the reliability of TCP/IP and UDP packets moving from physical place to physical place. Yeah, there has been a fair measure of security features thrown in, and some pretty cool management to monitor that reliability, but the core effort of networks to date has been to reduce the risk of lost or undeliverable network packets--and "static" was very "in".

However, the cloud introduces a factor that scares the bejeezus out of most IT administrators: a dynamic world that gives the appearance of a complete lack of control. How does IT control the security of their data and communications between their own facilities, the Internet and third party cloud providers? How do they secure the performance of systems running over the Internet? Is it possible to have any view into the health and stability of a cloud vendor's own infrastructure in a way meaningful to the Network Operations Centers we all know and love?

When it comes to infrastructure, I have been arguing that the network must take more of a role in the automation and administration of public, private and hybrid clouds. However, let me add that I now think enterprises should look at the network as a point of control over the cloud. Not necessarily to own all of that control--services such as RightScale and CohesiveFT, or cloud infrastructures such as Cassatt or 3TERA have a critical role to play in orchestration and delivery of application services.

However, their control of your resources relies entirely on the network as well, and you will likely have federated and/or siloed sets of those infrastructure management systems scattered across your disparate "cloud" environment. The network remains the single point of entry into your "cloud", and as such should play a key role in coordinating the monitoring and management activities of the various components that make up that "cloud".

Greg Ness outlined some of this in his excellent post on Infrastructure 2.0 (and this recent one on cloud computing in the recession), a theme picked up by Chris Hoff and others. All of these bloggers are sounding a clarion call to the network vendors, both large and small, that have a stake in the future of enterprise IT. Support dynamic infrastructures--securely--or die. I only add that I don't believe it's enough to make dynamic infrastructure work; I think it is critical to make sure the enterprise feels they are in control of "their own" cloud environment, whether or not it contains third party services, runs in dozens of data centers, or changes at a rate too quick for human decision makers to manage.

What are some of the ways that the network can give you control over a dynamic infrastructure? Here's my "off the top of my head" list of some of the ways:
  • There needs to be a consistent way to discover and evaluate new VMs, bare metal deployments, storage allocations, etc. and the network can play a key role here.

  • There also needs to be consistent monitoring and auditing capabilities that work across disparate cloud providers. This doesn't necessarily have to be provided by the network infrastructure, but network-aware system management tools seem as logical a place to start as any.
  • Networks should take an active role in virtualization, providing services and features to enable things like over WAN VM migration, IP address portability and discovery of required services and infrastructure during and after VM migration. Where your servers run should be dependent on your needs, not your network's abilities.

  • At times the network should act like the human nervous system and take action before the "brain" of the cloud is even aware something is wrong. This can take the form of agreed upon delegation of responsibility in failure and over-utilization situations, with likely advancement to an automated predictive modelling approach once some comfort is reached with the symbiotic relationship between the network and management infrastructures.

Believe me, I know I have much to learn here. I can tell you that my soon-to-be employer, Cisco, is all over it, and has some brilliant ideas. Arista Networks is a startup with a pretty good pedigree that is also aggressively looking at cloud-enabled networks. I can only assume that F5, Nortel, Extreme and others are also seriously evaluating how they can remain competitive in such a rapidly changing architecture. What exactly this will look like is fuzzy at this point, but the next few months are going to be monster.

In the meantime, ask yourself not what you can do to advance your network, but what your network can do to advance you...

Two Announcements to Pay Attention To This Week

I know I promised a post on how the network fits into cloud computing, but after a wonderful first part of my week spending time first catching up on reading, then one-on-one with my 4-year-old son, I'm finally digging into what's happened in the last two days in the cloud-o-sphere. While the network post remains important to me, there were several announcements that caught my eye, and I thought I'd run through two of them quickly and give you a sense of why they matter.

The first announcement came from Replicate Technologies, Rich Miller's young company, which is focusing initially on virtualization configuration analysis. The Replicate Datacenter Analyzer (RDA) is a powerful analysis and management tool for evaluating the configuration and deployment of virtual machines in an enterprise data center environment. But it goes beyond just evaluating the VMs themselves, to evaluating the server, network and storage configuration required to support things like vMotion.

Sound boring, and perhaps not cloud related? Well, if you read Rich's blog in depth, you may find that he has a very interesting longer term objective. Building on the success of RDA, Replicate aims to become a core element of a virtualized data center operations platform, eventually including hybrid cloud configurations, etc. While initially focused on individual servers, one excellent vision that Rich has is to manage the relationships between VMs in such a tool, so that operations taken on one server will take into account the dependencies on other servers. Very cool.

Watch the introductory video for the fastest impression of what Replicate has here. If you manage virtual machines, sit up and take notice.

The other announcement that caught my eye was the new positioning and features introduced by my alma mater, Cassatt Corporation, this week. I've often argued that Cassatt is an excellent example of a private cloud infrastructure, and now they are actively promoting themselves as such (although they use the term "internal cloud").

It's about freaking time. With a mature, "burned in", relatively technology agnostic platform that has perhaps the easiest policy management user experience ever (though not necessarily the prettiest), Cassatt has always been one of my favorite infrastructure plays (though I admit some bias). They support an incredible array of hardware, virtualization and OS platforms, and provide the rare ability to manage not only virtual machines, but also bare metal systems. You get automated power management, resource optimization, image management, and dynamic provisioning. For the latter, not only is server provisioning automated, but also network provisioning--such that deploying an image on a server triggers Cassatt to reprogram the ports that the target server is connected to so that they sit on the correct VLAN for the software about to be booted.

The announcement talks a lot about Cassatt's monitoring capabilities, and a service they provide around application profiling. I haven't been briefed about these, but given their experience with server power management (a very "profiling focused" activity) I believe they could probably have some unique value-add there. What I remember from six months ago was that they introduced improved dynamic load allocation capabilities that could use just about any digital metric (technical or business oriented) to set upper and lower performance thresholds for scaling. Thus, you could use CPU utilization, transaction rates, user sessions or even market activity to determine the need for more or less servers for an application. Not too many others break away from the easy CPU/memory utilization stats to drive scale.

Now, having said all those nice things, I have to take Cassatt to task for one thing. Throughout the press release, Cassatt talks about Amazon- and Google-like infrastructure. However, Cassatt is doing nothing to replicate the APIs of either Amazon (which would be logical) or Google (which would make no sense at all). In other words, as announced, Cassatt is building on their own proprietary protocols and interfaces, with no ties to any external clouds or alternative cloud platforms. This is not a very "commodity cloud computing" friendly approach, and obviously I would like to see that changed. And, the truth is, none of their direct competitors are doing so, either (with the possible exception of the academic research project, EUCALYPTUS).

The short short of it is if you are looking at building a private cloud, don't overlook Cassatt.

There was another announcement from Hyperic that I want to comment on, but I'm due to chat with a Hyperic executive soon, so I'll reserve that post for later. The fall of 2008 remains a heady time for cloud computing, so expect many more of these types of posts in the coming weeks.

Monday, November 10, 2008

Change, Take Two

As those of you that follow me on Twitter already know, last Friday (11/7) was my last day at Alfresco. Although my time there was very short, it was one of the most valuable experiences of my almost 20-year career. I learned more about the state of enterprise Java applications, the importance of ECM to almost every business, and the great advantage that open source has in the enterprise software market. Not bad for just short of six months. The company and its product are both incredible, and I owe a lot to the amazing field team at Alfresco, especially Matt Asay, Scott Davis, Luis Sala and Peter Monks.

Why am I moving on so soon, then? Well, I'm happy to say that I'm taking on the role of Marketing Manager/Technology Evangelist for Data Center Virtualization at Cisco Systems, Inc. (including Cloud Computing and Cisco's Data Center 3.0 strategy, at least as it relates to virtualization). This is a dream job in many ways, as Cisco is still in the formative stages of its cloud computing strategy (for both service providers and end users), and I have the chance to be part of what will most likely be the most important producer of cloud and virtualization enabled networking software, services and equipment.

This is also an opportunity in which both my passion for blogging and my passion for cloud computing come directly into play. I am excited to work with both Doug Gourlay and Peter Linkin, who are incredibly smart people with a vision I can really get my head around.

Lest you fret that I'm going to go "all Cisco" here on The Wisdom of Clouds, let me assure you that this was pretty much the first point of negotiation when they approached me about the position. The Wisdom of Clouds (or whatever it morphs into--more on that later in the week) will remain my independent blog, in which I will endeavor to provide the same humble insight into the state of the cloud market and technology that I always have. Cisco specific posts will largely be confined to the Data Center group's excellent blog, where I will become a regular contributor.

In the end, what sold me on joining Cisco was the excellent opportunity to explore the "uncharted territory" that is the network's role in cloud computing. More on that in my next post.

Thursday, November 06, 2008

A Quick Guide To The "Big Four" Cloud Offerings

We live in interesting times--in fact, historic times. From the highs of seeing the election of a presidential candidate inspire millions to see opportunity where they saw none before, to the lows of experiencing first hand financial pressures that we previously only glimpsed when our parents or grandparents told us tales of hardship and conservation.

For me, in the context of this blog, the explosion of the cloud into mainstream information technology has been undeniably exciting and overwhelming. In the last several weeks, we have seen key announcements and rumors revealing the goals and aspirations of current cloud superstars, as well as the well executed introduction of a new major player. As the dust settles from this frenzy, it becomes clear that the near term cloud mindshare will be dominated by four major players.

(There are, of course, many smaller companies addressing various aspects of cloud computing, and at times competing directly against one or more of these four. However, these are the ones that have the most mindshare right now, and are backed by some of the most trusted names in web and enterprise computing.)

Below is a table that lists these key players, and compares their offerings from the perspective of four core defining aspects of clouds. As this is a comparison of apples to oranges to grapefruit to perhaps pastrami, it is not meant to be a ranking of the participants, nor a judgement of when to choose one over the other. Instead, what I hope to do here is to give a working sysadmin's glimpse into what these four clouds are about, and why they are each unique approaches to enterprise cloud computing in their own right. Details about each comment can be found in the text below the table.

Without further ado:

Provider"On-Premises" OptionDevelopment TechnologyPortabilityReliability
Amazon AWSBring Your OwnWhatever you can get to run in an AMICode is easy to move. The rest...?Reliability by Spreading The Wealth
Google App EngineSure...for your dev environment* Hip coders love Python, right?Are you kidding?Trust the Magic
Microsoft AzureAbsolutely! The future is hybrid.* If you love MSFT, you already know it. (.NET)You mean between Microsoft products, right?Promises, promises
Salesforce.com (force.com)Absolutely not! The future is pure cloudIt's new, it's improved, it's APEX!Heh. That's funny...We Got Magic, Too.

Amazon AWS
  • Amazon does not now provide, nor have they shown any interest in providing in the future, an "on-premises" option for their cloud. However, the EUCALYPTUS research project is an example of an open source private cloud being built to Amazon's API specifications for EC2 and S3. Whether this will ever see commercial success (or support from Amazon) is yet to be determined.
  • Amazon pretty much provides servers and storage, with a few integration and data services thrown in for good measure. The servers are standard OSes, with full root/Administrator access, etc. In theory, you should be able to get any code running on an Amazon machine image (AMI) that you could run on a physical server with the same OS, bar certain hardware and security requirements.
  • As the AMIs are standard OS images, moving applications off of Amazon should be easy. However, moving data or messaging off of S3/SimpleDB and SQS will probably take a little more work. Still simple, but there is no standard packaging of data for migration between cloud providers today.
  • Reliability in AWS is primarily provided by replicating AMIs and data across geographically distributed "availability zones". The idea there is to isolate outages to a zone, so by cloning services and data between zones, one should always have at least one instance handy should others go down.
Google App Engine
  • Google provides an open source development kit that allows developers to create App Engine apps and test them on local systems before deploying them into Google's cloud. There is no true replica of App Engine itself that can be used in a private cloud, nor are there plans for one that I know of. To be frank, I'm not sure why you would want one.
  • Google's SDK started with open-sourced, but Google-specific, APIs in the Python scripting language. Other languages have been promised, but this is what you need to know today.
  • Given the unique nature of the Python APIs, the Big Table-based data architecture and the lack of partners exploring clones of the environment, portability is not an option at this point. Nor, it seems, is Google encouraging it, though they are always quick to point out that data itself can be retrieved from Google at will, via APIs. Mapping to a new infrastructure, though, is on the customer's dime.
  • For high scale-dependent web applications, Google App Engine is the winner hands down. They know how to replicate services, provide redundant architecture under the covers and secure their perimeter. All the customer has to do is deploy their software and trust Google to do their thing.
Microsoft Azure
  • Microsoft makes a point of defining their cloud platform in terms of a hybrid public/private infrastructure. Their mantra of "Software-plus-Service" is an homage to having parts of your enterprise systems run in house, and other parts running in Azure. In many ways, Microsoft is letting the market decide for them how much of the future is "pure cloud", and how much isn't.
  • The first platform that Microsoft supports is understandably their own, .NET. If you already use .NET, you've hit the cloud computing jackpot with Azure. If not, you can learn it, or wait for the additional languages/platforms promised in the coming months and years.
  • Microsoft wants to make portability extremely simple...within its own product line. Like the others, there are ways to get your data via APIs, but there is no simple mechanism to port between Azure and other cloud platforms.
  • At this point, we can only guess at the reliability of the Microsoft cloud. Will it match the relatively solid record of the current Live properties, or will it run like a Microsoft operating system...
Salesforce.com (force.com and Sites)
  • Marc Benioff was adamant at their Dreamforce conference this week that SF is going to kill the idea of on-premises software. It is an ambitious goal; one that smart people like Geva Perry think is going to happen anyway. However, I'm not so sure. The long and the short of it, however, is that you can forget any "on-premises" version of Force.com or Sites in the foreseeable future.
  • Force.com operates with a custom data model, a custom user interface framework, and a custom domain-specific programming language. While new to most developers, it appears to be extremely easy to learn, and cuts out a lot of the "down and dirty" stuff when it comes to programming against the SF applications and data model.
  • Again, while you can go and get your data programmatically at will, there are no simple mechanisms for doing so, nor is there anywhere to move APEX code. Portability is not really an option here.
  • As in the Google case, SF hides so much of the underlying infrastructure that you just have to trust they can handle your application for you. In SF's case, however, they reportedly rely on vertical scaling, so there may be limits to how high they can scale.

Overcast: Conversations on Cloud Computing #2

Geva and I completed the second of our series of podcasts on Tuesday. This is a shortened conversation, as we only had a few days between recording numbers 1 and 2, but it covers some key announcements that were made in that time. Specifically, the show covers:
  • Salesforce.com's Force.com Sites and the new relationship between Salesforce and Amazon Web Services. I wrote about this earlier.
  • The announcement of cooperation between RightScale and the open source EUCALYPTUS project from the University of California Santa Barbara. You can read more about this on Geva's blog here and here.
  • Also mentions of CohesiveFT, Elastra, Google App Engine, Microsoft and more...

Tuesday, November 04, 2008

Salesforce.com Announces They Mean Business

I had some business to take care of in downtown San Francisco this morning, and on my way to my destination, I strolled past Moscone Center, the site of this year's Dreamforce conference. The news coming out of that conference had piqued my interest a day earlier--I'll get to that in a minute--but when I saw the graphics and catch phrase of the conference, I had to laugh. Not in mockery, mind you; it was just ironic.

There, spanning the vast entrances of both Moscone North and South was nothing but blue skies and fluffy white...wait for it...clouds. In other words, the single theme of the conference visuals was, I can only assume, cloud computing. Not CRM, not "making your business better", but an implementation mechanism; a way of doing IT. That's the irony, in my mind; that in this amazing month or so of cloud computing history, one of the companies most aggressively associating themselves with cloud computing was a CRM company, not a compute capacity or storage provider.

Except, Salesforce.com was already blurring the lines between PaaS and SaaS, even as they opened the door to their partners and customers taking advantage of IaaS where it makes sense. Even before Marc Benioff's keynote yesterday, it was clear that force.com was far more than a way to simply customize the core CRM offering. Granted, most applications launched there took advantage of Salesforce.com data or services in one way or another, but there was clear evidence that the SF gang were targeting a PaaS platform that stood alone, even as it provided the easiest way to draw customers into the CRM application.

The core of the new announcement, Sites, appears to be simply an extension of this. The idea behind Sites is to provide a web site framework that allows customers to address both Intranet and Internet applications without needing to run any infrastructure on-premises. Of course, if you find the built-in SF integration makes adopting the CRM platform easier, then SF would be happy to help. Their goal, you see, is summed up in the conference catch phrase: "The End of Software". (Of course, let's just ignore the fact that force.com is a software development platform, any way you cut it.)

Skeptical that you can get what you need from a single PaaS offering? Here's where the genius part of the day's announcements comes in: simply utilize Amazon for the computing and storage needs that force.com was unable to provide. Heck, yeah.

Allow me to observe something important here. First, note that Salesforce does not have an existing packaged software model; thus, there is no incentive whatsoever to offer an on-premises alternative. Touché, Microsoft. Second, note that Salesforce.com has no problem whatsoever with partnering with someone who does something better than them. En garde, Google. Finally, pay attention to the fact that Salesforce.com is expanding its business offerings in a way that both serves existing customers in increasingly powerful ways and invites new, non-CRM customers to use productive tools that just happen to include integration with the core offering. PaaS as a marketing hook, not necessarily a business model in and of itself. (If it succeeds on its own, that's icing on the cake.)

In a three-week period that has seen some of the most revolutionary cloud computing announcements, Salesforce.com managed not only to keep themselves relevant, but also to make a grab for significant cloud mindshare. Fluffy, white, cloud mindshare.

Monday, November 03, 2008

Overcast: Conversations on Cloud Computing

I'm happy to announce that Geva Perry and I have started a new podcast, titled "Overcast: Conversations on Cloud Computing". The show covers cloud computing, virtualization, application infrastructures and related topics, and we will be inviting a number of prominent guests in coming weeks to inform us and our listeners about both the truly revolutionary and the simply evolutionary aspects of the cloud. We are also both skeptical enough to ask some tough questions about the reality of the cloud from time to time. We hope you'll enjoy the discussion, and provide plenty of feedback.

Geva has some additional details and insights on his blog. Visit the podcast blog at http://overcast.typepad.com, which provides a synopsis of the discussion, links to related sites, etc. Subscribe, either via RSS or in iTunes, and if you want to participate in a future show, contact me at james dot urquhart at yahoo dot com.

Here is the summary of Show #1:

We discuss recent news in cloud computing.

We also discuss:

  • The potential roles of IBM, HP and Sun in cloud computing
  • The recent debate between Tim O'Reilly and Nick Carr on the Network Effect in cloud computing
  • The notion of vendor "vision lock-in" -- announcements aimed at making potential customers take pause before selecting a cloud vendor
  • and much more...

Friday, October 31, 2008

Microsoft Azure May Be Too Good To Ignore

There is an interesting twist coming out of the events of last week that I think warrants explicit notice. By "connecting the dots" on a series of observations, one can come to the conclusion that Microsoft is now well in advance of any of the other major software systems vendors when it comes to establishing the platform of tomorrow. And, yes, though most of the Oslo/Azure world is not open source, it just may be better.

Start with the Azure and Oslo announcements, demonstrations and presentations from PDC 2008. As I noted earlier, Microsoft now has the most advanced PaaS offering on the market, much more enterprise friendly than Google AppEngine. (AppEngine wins out for extreme scale web applications, however--hands down.)

I watched the high level overview of Oslo given by Douglas Purdy, a product unit manager working on next generation tools at Microsoft. It was riveting. The thought and engineering that went into the models and tools demonstrated in that 70+ minutes were revolutionary to me. Meta-programming of generalized models with the capability to underwrite both domain-specific and generalized programming languages. If that sentence doesn't make sense, watch the video.

Here's the thing. Oslo goes beyond the Java/C# debates. It creates a platform in which complex systems can be described by a series of fairly simple models, assembled together as needed to meet the task at hand. Text tools for the low level stuff, visual tools for the high level assembly, etc. It really was an eye opening view of what Microsoft has been working on now for a few years.

Now take a look at Azure, and the "All-Enterprise" infrastructure that will be available there. Identity services (OpenID, no less), an Enterprise Service Bus, relational and unstructured database services--the list goes on. If you take the time to learn .NET, you can get an amazing experience where the development tools just flow into the deployment models, which just flow into the operational advantages, whether on-premises or in the cloud.

Yeah, Azure is PaaS and Microsoft-centric to start, but may just work as advertised. Note as well that all this functionality required little or no open source. As James Governor notes:
"...[C]ustomers always vote with their feet, and they tend vote for something somewhat proprietary - see Salesforce APEX and iPhone apps for example."
and Dion Almaer (who happens to be a Google developer) notes:
"We can’t just be Open, we have to be better!" [Emphasis his]
Let it be known that Microsoft may have actually thrown down the gauntlet and started growing their "Tribe", in which case the rest of the industry will need to decide quickly how to respond.

I still maintain that Azure is only interesting to the existing Microsoft development pool. However, if they have great success in creating and hosting enterprise applications, IBM and HP (and Amazon and Google) are going to have a tough time convincing others that their option is the better one. If Oslo provides a technology that revolutionizes development, it seals the fate of many if not all PaaS platforms (i.e. "adapt or die"). This would mean that any power law effects that may exist would go in Microsoft's favor. Azure FTW.

Postlude: All of this was triggered by a Nick Carr observation of a Jack Schofield story. I highly recommend reading both, as well as Governor's and Almaer's posts.

Tuesday, October 28, 2008

Why I Think CohesiveFT's VPN-Cubed Matters

You may have seen some news about CohesiveFT's new product today--in large part thanks to the excellent online marketing push they made in the days preceding the announcement. (I had a great conversation with Patrick Kerpan, their CTO.) Normally, I would get a little suspicious about how big a deal such an announcement really is, but I have to say this one may be for real. And so do others, like Krishnan Subramanian of CloudAve.

CohesiveFT's VPN-Cubed is targeting what I call "the last great frontier of the cloud", networking. Specifically, it is focusing on a key problem--data security and control--in a unique way. The idea is that VPN-Cubed gives you software that allows you to create a VPN of sorts that is under your personal control, regardless of where the endpoints reside, on or off the cloud. Think of it as creating a private cloud network, capable of tying systems together across a plethora of cloud providers, as well as your own network.

The use case architecture is really very simple.


Diagram courtesy of CohesiveFT

VPNCubed Manager VMs are run in the network infrastructure that you wish to add to your cloud VPN. The manager then acts as a VPN gateway for the other VMs in that network, which can then communicate with other systems on the VPN via virtual NICs assigned to the VPN. I'll stop there, because networking is not my thing, but I will say it is important to note that this is a portable VPN infrastructure, which you can run on any compatible cloud, and CohesiveFT's business is to create images that will run on as many clouds as possible.
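To make that topology concrete, here is a minimal, purely illustrative Python sketch of the idea as I understand it. The class, network and VM names are all hypothetical and have nothing to do with CohesiveFT's actual implementation; the point is simply that each network contributes one manager, the managers peer with each other, and every VM's overlay traffic hops through its local manager:

    # Purely illustrative model of the overlay topology described above.
    # All names are hypothetical; this is not CohesiveFT code.

    class OverlayManager:
        """One manager VM per network; acts as the overlay VPN gateway."""
        def __init__(self, network_name):
            self.network = network_name
            self.peers = []      # managers in other networks/clouds
            self.local_vms = []  # VMs whose overlay NIC points at this manager

        def __repr__(self):
            return "manager(%s)" % self.network

        def peer_with(self, other):
            self.peers.append(other)
            other.peers.append(self)

        def path_to(self, src_vm, dst_vm):
            """Overlay traffic always passes through the local manager(s)."""
            if dst_vm in self.local_vms:
                return [src_vm, self, dst_vm]
            for peer in self.peers:
                if dst_vm in peer.local_vms:
                    return [src_vm, self, peer, dst_vm]
            raise ValueError("destination is not on the overlay")

    # Tie your own network and a cloud provider's network into one overlay.
    onprem = OverlayManager("corporate-lan")
    cloud = OverlayManager("ec2-us-east")
    onprem.peer_with(cloud)
    onprem.local_vms.append("db-server")
    cloud.local_vms.append("web-server")
    print(cloud.path_to("web-server", "db-server"))

In other words, the managers are the only pieces that need to know about each other; the VMs behind them just see one private network, wherever they happen to run.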

Patrick made a point of using the word "control" a lot in our conversation. I think this is where VPN-Cubed is a game changer. It is one of the first products I've seen that targets isolating your stuff in someone else's cloud, protecting access and encryption in a way that leaves you in command--assuming it works as advertised...and I have no reason to suspect otherwise.

Now, will this work with PaaS? No. SaaS? No. But if you are managing your applications in the cloud, even a hybrid cloud, and are concerned about network security, VPN-Cubed is worth a look.

What are the negatives here? Well, first I think VPN is a feature of a larger cloud network story. This is the first and only of its kind in the market, but I have a feeling other network vendors looking at this problem will address it in a more comprehensive solution.

Still, CohesiveFT has something here: it's simple, it is entirely under your control, and it serves a big immediate need. I think we'll see a lot more about this product as word gets out.

Monday, October 27, 2008

Even Microsoft is Cautious About the Legal State of the Cloud, and More

Tucked into a backwater paragraph of this interesting interview with Microsoft corporate VP Amitabh Srivastava is a note about the prioritization and pacing of Azure's rollout across the various data centers Microsoft owns worldwide:
"Also, for now, Azure services will be running in a single Microsoft data center (the Quincy, Wash. facility). Sometime next year, Microsoft will expand that to other U.S. data centers and eventually move overseas, though that brings with it its own set of geopolitical issues that Srivastava said that the company would just as soon wait to tackle."
No kidding. Let's not even get into the unique legal challenges that Microsoft faces in the EU (perhaps especially because they are proposing a Windows-only cloud offering?). Just figuring out how to lay out the technical and business policies around data storage and code execution will be a thrill for the be-all, end-all PaaS offering that is Azure.

(On a side note, perhaps it presents a unique opportunity for regulation-aware infrastructure?)

There was one positive note in this interview, however. Apparently Microsoft has non-.NET code running internally on Azure, and will offer those services sometime next year. Furthermore, services must meet a template today, but template-independent services are currently on the roadmap. Perhaps a move from PaaS to IaaS is also in store?

Microsoft chooses the Azure PaaS to the Clouds

The Microsoft PDC2008 keynote presentation just concluded, and the team in Redmond announced Azure, a very full featured cloud PaaS allowing for almost the entire .NET stack to run in Microsoft's data centers, or on-premises at your organization. (The keynote will be available on-demand on the Microsoft PDC site.)

I find myself impressed, underwhelmed and, in fact, a little disappointed. Here's how that breaks down:

Impressed
  • This is clearly the most full featured PaaS out there. Service frameworks, a service bus, identity, database services (both relational and unstructured), a full featured IDE integration. No one else is doing this much--not even Google.

  • I love the focus on hybrid implementations (both on-premises and "in the cloud"). Software plus Services can really pay off here, as you look at the demonstrations given in the keynote.

  • The identity stuff is a key differentiator. Not your Live account, but whatever federated identity you are using.

Underwhelmed
  • They used an opportunity to announce a revolutionary change to Microsoft's technology and business to demonstrate how all the things people have already been doing in .NET can be shoehorned into the cloud. Product recalls? Really?

  • It started to sound like they would dig deep into architecture and radical new opportunities, but in the end they just showed off an awful lot of "gluing" existing products together. *Yawn*

Disappointed
  • It's PaaS. There is no Amazon-killer, no opportunity for the masses to leverage Microsoft data centers, no ability to deploy "raw" Windows applications into the cloud. Just a tool to force adoption of full-scale .NET development and Microsoft products. Good for Microsoft, but will it win any converts?

  • I wanted more from Ray. I wanted a peek into a future that I never considered; an understanding of where it was that Microsoft's DNA was going to advance the science of the cloud, rather than just provide Microsoft's spin on it. With the possible exception of identity, I'm not sure I saw any of that.

So, a good announcement overall, but pretty much within the bounds of expectations, perhaps even falling short in a couple of places. I can't wait to see how the market reacts to all of this.

By the way, Azure is only open to PDC2008 participants at first. The floodgates will slowly be opened over the next several months--in fact, no upper bound was given.

Friday, October 24, 2008

Is Amazon in Danger of Becoming the Walmart of the Cloud?

Update: Serious misspelling of Walmart throughout the post initially. If you are going to lean an argument heavily on the controversial actions of any entity, spell their name right. Mea culpa. Thanks to Thorsten von Eicken for the heads up.

Also, check out Thorsten's comment below. Perhaps all is not as bleak as I paint it here for established partners...I'm not entirely convinced this is true for the smaller independent projects, however.


I grew up in the great state of Iowa. After attending college in St. Paul, Minnesota, I returned to my home state where I worked as a computer support technician for Cornell College, a small liberal arts college in Mount Vernon, Iowa. It was a great gig, with plenty of funny stories. Ask me over drinks sometime.

While in Mount Vernon, there was a great controversy brewing--well, nation wide, really--amongst the rural towns and farm villages struggling to survive. You see, the tradition of the family farm was being devastated, and local downtowns were disappearing. Amidst this traumatic upheaval appeared a great beast, threatening to suck what little life was left out of small town retail businesses.

The threat, in one word, was Walmart.

Walmart is, and was, a brilliant company, and their success in retail is astounding. In a Walmart, one can find almost any household item one needs under a single roof, including in many cases groceries and other basic staples. Their purchasing power drives prices so low that there is almost no way they can be undercut. If you have a Walmart in your area, you might find it the most logical place to go for just about anything you need for your home.

That, though, was the problem in rural America. If a Walmart showed up in your area, all the local household goods stores, clothing stores, electronics stores and so on were instantly the higher price, lower selection option. Mom and Pop just couldn't compete, and downtown businesses disappeared almost overnight. The great lifestyle that rural Americans led with such pride was an innocent bystander to the pursuit of volume discounts.

Many of the farm towns in Iowa were on the lookout then, circa 1990, for any sign that Walmart might be moving in. (They still are, I guess.) When a store was proposed just outside of Cedar Rapids, on the road to Mount Vernon, all heck broke loose. There was strong lobbying on both sides, and businesses went on a media campaign to paint Walmart as a community killer. The local business community remained in conflict and turmoil for years on end while the store's location and development were negotiated.

(The concern about Walmart stores in the countryside is controversial. I will concede that not everyone objects to their building stores in rural areas. However, all of the retailers I knew in Mount Vernon did.)

If I remember correctly, Walmart backed off, but it's been a long time. (Even now, they haven't given up entirely.)

While I admire Amazon and the Amazon Web Services team immensely, I worry that their quest to be the ultimate cloud computing provider might force them into a similar role on the Internet that Walmart played in rural America. As they pursue the drive to bring more and better functionality to those that buy their capacity, the one-time book retailer is finding themselves adding more and more features, expanding their coverage farther and farther afield from just core storage, network and compute capacity--pushing into the market territory of entrepreneurs who seized the opportunity to earn an income off the AWS community.

This week, Amazon may have crossed an invisible line.

With the announcement that they are adding not just a monitoring API, not just a monitoring console, but an actual interactive management user interface, with load balancing and automated scaling services, Amazon is for the first time creeping into the territory held firm by the partners that have benefited from Amazon's amazing story. The Sun is expanding into the path of its satellites, so to speak.

The list of the potentially endangered includes innovative little projects like ElasticFox, Ylastic and Firefox S3, as well as major cloud players such as RightScale, Hyperic and EUCALYPTUS. These guys cut their teeth on Amazon's platform, and have built decent businesses/projects serving the AWS community.

Not that they all go away, mind you. RightScale and Hyperic, for example, support multiple clouds, and can even provide their services across disparate clouds. EUCALYPTUS was designed with multiple cloud simulations in mind. Furthermore, exactly what Amazon will and won't do for these erstwhile partners remains unclear. It's possible that this may work out well for everyone involved. Not likely, in my opinion, but possible.

Sure, these small shops can stay in business, but they now have to watch Amazon with a wary eye (if they weren't already doing that). There is no doubt that their market has been penetrated, and they have to be concerned about Amazon doing to them what Microsoft did to Netscape.

Or Walmart did to rural America.

Thursday, October 23, 2008

Amazon Enhances "The Proto-Cloud"

Big news today, as you've probably already seen. Amazon has announced a series of steps to greatly enhance the "production" nature of its already leading edge cloud computing services, including (quoted directly from Jeff Barr's post on the AWS blog):
  • Linux on Amazon EC2 is now in full production. The beta label is gone.
  • There's now an SLA (Service Level Agreement) for EC2.
  • Microsoft Windows is now available in beta form on EC2.
  • Microsoft SQL Server is now available in beta form on EC2.
  • We plan to release an interactive AWS management console.
  • We plan to release new load balancing, automatic scaling, and cloud monitoring services.
There is some great coverage of the announcement already in the blog-o-sphere, so I won't repeat the basics here. Suffice to say:
  • Removing the beta label removes a barrier to S3/EC2 adoption for the most conservative of organizations.
  • The SLA is interestingly organized to allow for pockets of outages while still promoting global up-time. Make no mistake, though: some automation is required to make sure your systems find the working Amazon infrastructure when specific Availability Zones fail (a rough sketch of that kind of fallback logic follows this list).
  • Oh, wait, they took care of that as well...along with automatic scaling and load balancing.
  • Microsoft is quickly becoming a first class player in AWS, which removes yet another barrier for M$FT happy organizations.
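Until those managed services arrive, the do-it-yourself version of the Availability Zone fallback mentioned above looks roughly like the following minimal sketch, written against the boto Python library. The AMI id and zone names are placeholders, and it assumes your AWS credentials are already configured in the environment:

    import boto
    from boto.exception import EC2ResponseError

    ec2 = boto.connect_ec2()  # picks up AWS credentials from the environment

    # Try the preferred Availability Zone first, then fall back to the others.
    zones = ['us-east-1a', 'us-east-1b', 'us-east-1c']
    instance = None
    for zone in zones:
        try:
            reservation = ec2.run_instances('ami-12345678',
                                            instance_type='m1.small',
                                            placement=zone)
            instance = reservation.instances[0]
            print('launched %s in %s' % (instance.id, zone))
            break
        except EC2ResponseError:
            print('zone %s unavailable, trying the next one' % zone)

    if instance is None:
        raise RuntimeError('no Availability Zone could satisfy the request')

Nothing fancy, but it is exactly the kind of plumbing every serious AWS customer has had to write for themselves so far.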
Instead, let me focus in this post on how all of this enhances Amazon's status as the "reference platform" for infrastructure as a service (IaaS). In another post, I want to express my concern that Amazon runs the danger of becoming the "Walmart" of cloud computing.

First, why is it that Amazon is leading the way so aggressively in terms of feature sets and service offerings for cloud computing? Why does every other cloud provider seem to be catching up to the services being offered by Amazon at any given time?
The answer in all cases is because Amazon has become the default standard for IaaS feature definition--this despite having no user interface of their own (besides command line and REST), and using "special" Linux images (the core Amazon Machine Images) that don't provide root access, etc. The reason for the success in setting the standard here is simple: from the beginning, Amazon has focused on prioritizing feature delivery based on barriers to adoption of AWS, rather than on building the very best of any given feature.

Here's how I see it (a short code sketch of the SimpleDB and SQS pieces follows the list):
  • In the beginning, there was storage and network access. Enter S3.
  • Then there were virtual servers to do computational tasks. Enter EC2, but with only one server size.
  • Then there were significant complaints that the server size wasn't big enough to handle real world tasks. Enter additional server types (e.g. "Large") and associated pricing
  • Then there was the need for "queryable" data storage. Enter SimpleDB.
  • Somewhere in the preceding time frame, the need for messaging services was identified as a barrier. Enter Amazon Simple Queue Service.
  • Now people were beginning to do serious tasks with EC2/S3/etc., so the issues of geographic placement of data and workloads became more of a concern. (This placement was both for geographic fail over, and to address regulatory concerns.) Enter Availability Zones.
  • Soon after that, delivering content and data between the zones became a serious concern (especially with all of the web start ups leveraging EC2/S3/etc.) Enter the announced AWS Content Delivery Service
  • Throw in there various partnership announcements, including support for MySQL and Oracle.
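As promised above, here is a small sketch of what the SimpleDB and SQS pieces of that timeline look like from code, again using the boto Python library. The domain, queue, and item names are made up for illustration, and AWS credentials are assumed to be configured:

    import boto
    from boto.sqs.message import Message

    # SimpleDB: schema-less, "queryable" storage.
    sdb = boto.connect_sdb()
    domain = sdb.create_domain('jobs')                    # hypothetical domain
    domain.put_attributes('job-1', {'status': 'queued'})
    print(domain.get_attributes('job-1'))

    # SQS: simple message passing between components.
    sqs = boto.connect_sqs()
    queue = sqs.create_queue('work-items')                # hypothetical queue
    msg = Message()
    msg.set_body('process job-1')
    queue.write(msg)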
By this point, hundreds of companies had "production" applications or jobs running on Amazon infrastructure, and it became time to decide how serious this was. In my not-so-humble opinion, the floundering economy, its effects on the Amazon retail business, and the predictions that cloud computing could benefit from a weakened economy all fed into the decision that it was time to remove the training wheels and leave "beta" status for good. Add an official SLA, remove the "beta" label, and "BAM!", you suddenly have a new "production" business to offset the retail side of the house.

Given that everyone else was playing catchup to these features as they came out (mostly because competitors didn't realize what they needed to do next, as they didn't have the customer base to draw from), it is not surprising that Amazon now looks like they are miles ahead of any competitor when it comes to number of customers and (for cloud computing services) probably revenue.

How do you keep the competitors playing catchup? Add more features. How do you select which features to address next? Check with the customer base to see what their biggest concerns are. This time, the low hanging fruit was the management interface, monitoring, and automation. Oh, and that little Windows platform-thingy.

Now, I find it curious that they've pre-announced the monitoring and management stuff today. Amazon isn't really in the habit of announcing a feature before they go private-beta. However, I think there is some concern that they were becoming the "command-line lover's cloud", and had to show some interest in competing with the likes of VirtualCenter in the mind's eye of system administrators. So, to undercut some perceived competitive advantages from folks like GoGrid and Slicehost, they tell their prospects and customers "just give us a second here and we will do you right".

I think the AWS team has been brilliant, both in terms of marketing and in terms of technology planning and development. They remain the dominant team, in my opinion, though there are certainly plenty of viable alternatives out there that you should not be shy about using, both in conjunction with and in place of Amazon. Jeff Barr, Werner Vogels and others have proven that a business model that so many other IT organizations failed at miserably could be done extremely well. I just hope they don't get too far ahead of themselves...as I'll discuss separately.