Tuesday, December 18, 2007

Some more skepticism about Amazon WS as a business plan

I've been interested in Don MacAskill's review of Amazon's SimpleDB in light of SmugMug's future plans. He is very positive that he can use the service for what it is intended for: infinitely scalable storage and retrieval of small, structured data sets. His post is a good one if you want a clearer idea of what you should and shouldn't do with SimpleDB.
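
(Just to make "small, structured data sets" concrete, here is a toy, in-memory sketch of the data model SimpleDB exposes: domains full of named items, each a bag of multi-valued string attributes. The class and method names below are mine, not Amazon's API, so treat it as an illustration rather than working SimpleDB code.)

```python
# Hypothetical, in-memory illustration of the SimpleDB data model:
# domains contain named items, and each item is a bag of
# multi-valued string attributes. Names are mine, not Amazon's API.
from collections import defaultdict

class ToyStructuredStore:
    def __init__(self):
        # domain -> item name -> attribute -> set of string values
        self.domains = defaultdict(lambda: defaultdict(lambda: defaultdict(set)))

    def put_attributes(self, domain, item, attrs):
        for key, value in attrs.items():
            self.domains[domain][item][key].add(str(value))

    def get_attributes(self, domain, item):
        return {k: sorted(v) for k, v in self.domains[domain][item].items()}

    def query(self, domain, key, value):
        """Return names of items whose attribute `key` contains `value`."""
        return [name for name, attrs in self.domains[domain].items()
                if value in attrs.get(key, set())]

store = ToyStructuredStore()
store.put_attributes("photos", "img-001", {"owner": "don", "tag": "sunset"})
store.put_attributes("photos", "img-001", {"tag": "beach"})   # multi-valued attribute
store.put_attributes("photos", "img-002", {"owner": "don", "tag": "sunset"})
print(store.get_attributes("photos", "img-001"))
print(store.query("photos", "tag", "sunset"))
```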

However, I worry for Don. As a growing number of voices have been pointing out, committing your business growth to Amazon, especially as a startup, may not be a great idea. Kevin Burton, founder and CEO of Spinn3r and Tailrank, notes that this depends on your processing and bandwidth profiles, but a large number of services would do better to buy capacity from, say, a traditional managed hosting facility.

Burton uses the term "vendor lock-in" a few times, which certainly echoes comments that Simon and I have been making recently. But Burton brings up an additional point about bandwidth costs that I think has to be carefully considered before you jump on the Amazon-as-professional-savior bandwagon. He notes that for his bandwidth-intensive business, Amazon would cost 3X what it currently costs Spinn3r to access the net.
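
To see why the bandwidth line item dominates for a business like Spinn3r, here's a back-of-the-envelope comparison. Every rate below is a placeholder I made up for illustration--Burton doesn't publish his actual numbers--but the shape of the math is the point: metered per-GB pricing versus a committed per-Mbps transit deal.

```python
# Back-of-the-envelope bandwidth comparison. All figures are
# illustrative placeholders, not Amazon's or any host's actual rates.
monthly_transfer_gb = 50_000          # a bandwidth-hungry crawler/aggregator

metered_rate_per_gb = 0.15            # hypothetical per-GB cloud transfer price
cloud_cost = monthly_transfer_gb * metered_rate_per_gb

# Managed hosting is often priced per Mbps of committed transit.
# Rough conversion: 1 Mbps sustained is roughly 320 GB per month.
committed_mbps = monthly_transfer_gb / 320
flat_rate_per_mbps = 15.0             # hypothetical $/Mbps/month commit
hosting_cost = committed_mbps * flat_rate_per_mbps

print(f"Metered cloud bandwidth: ${cloud_cost:,.0f}/month")
print(f"Committed transit:       ${hosting_cost:,.0f}/month")
print(f"Ratio: {cloud_cost / hosting_cost:.1f}x")
```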

Burton goes on to suggest an alternative that he would love to see happen: bare-metal capacity as a service. Similar to managed hosting, the idea would be for system vendors to lease systems at a cost somewhat above what it would take to buy the system outright, but broken down over 2-3 years. Since the creditworthiness of most startups is an issue, lease-default concerns could be mitigated by keeping the systems on the vendor's premises; failure to pay would result in blocked access to the systems, for both the customer and their customers.
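
The economics Burton is describing are easy to sketch. The figures below are hypothetical, but they show how a vendor could price a leased box: recover the hardware cost over the term, add a margin, and keep the machine on its own floor as collateral.

```python
# Rough sketch of the "bare metal as a service" pricing Burton describes:
# the vendor recovers the hardware cost over the lease term plus a margin,
# and keeps the box on its own premises. All numbers are hypothetical.
server_price = 4_000.0        # up-front purchase price of one server
lease_months = 36             # 2-3 year term
vendor_margin = 0.25          # premium over straight amortization

monthly_lease = server_price * (1 + vendor_margin) / lease_months
print(f"Monthly lease per server: ${monthly_lease:.2f}")
print(f"Total paid over the term: ${monthly_lease * lease_months:,.2f} "
      f"vs. ${server_price:,.2f} to buy outright")
```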

I like this concept as a hybrid between the "cloud" concepts and traditional server ownership. Startups can get the capacity they need without committing capital that could be used to hire expertise instead. On the negative side, however, this does nothing to reduce operational costs at the server level, other than eliminating rack-and-stack costs. And Burton says nothing about how such an operation would charge for bandwidth, one of his key concerns about Amazon.

There have been a few other voices that have countered Kevin, and I think they should definitely be heard as this debate grows. Jay at thecapacity points out the following:
[B]usiness necessitates an alternate reality and if expediency, simplicity and accuracy mean vendor constraint, so be it.
I agree with this, but I think it is critical that businesses choose to be locked in with open eyes, and with a "disaster recovery" plan should something go horribly wrong. Remember, it wasn't that long ago that Amazon lost a few servers accidentally.

(Jay seems to agree with this, as he ends his post with:
When companies talk about outsourcing these components, or letting a vendor’s software product dictate their business & IT processes… I always check to make sure my lightsaber is close.
This is in reference to Marc Hedlund's post, “Jedi’s build their own lightsabers”.)

Nitin Borwankar, a strong proponent of Amazon SimpleDB, commented on Kevin's post that SimpleDB is a long-tail play, and that the head of the data world would probably want to run on its own servers. This is an incredibly interesting statement, as it seems to suggest that even though SimpleDB scales almost infinitely from a technical perspective, it doesn't do so from a business perspective.

On a side note, it's been a while since I spoke about complexity theory and computing, but let me just say that this tension between "Get'r done" and "ye kanna take our freedom!" is exactly the kind of tension you want in a complex system. As long as utility/cloud computing stays at the phase change between these two needs, we will see fabulous innovation that allows computing technologies to remain a vibrant and ever-innovating ecosphere.

I love it.

Friday, December 14, 2007

Lessons in schizophrenia from Sun's customers

From Don MacAskill (via Robert Scoble) and Jonathan Schwartz comes fascinating insight into the complete dichotomy that is the IT world today. What I find especially interesting is the range of personalities, from the paranoid (e.g., "anything Sun does in the Intel/Linux/Windows space is bad for SPARC/Solaris") to the zealot (e.g., "screw what got you here, open everything up and do it at low/zero margin").

I'm not surprised that the CTO audience (as represented by MacAskill) was more eager to push Sun into new technologies than the CIOs. First, Sun has always been a company by engineers, for engineers, to engineer. Its sales success has come from selling to a technical audience, not a business audience. (Contrast this with IBM, or even Microsoft at the department level.) Second, CIOs are always struggling to keep up with the cost of implementing new technologies, while CTOs are being pushed to discover and implement them--in part to keep their own technical staff's skills relevant in the modern marketplace.

That's not to say that CIOs aren't technology conscious or that CTOs don't care about the bottom line--I grossly exaggerated to make a point, after all--but the tendencies Jonathan describes aren't surprising in this light.

What I find especially fascinating, however, is that even though both the business and technical cases are made for utility/cloud computing, it's the grunts who are blocking implementation in even the most forward-thinking data centers. Again, utility computing touches everything and everyone, and that is scary as hell, even to a hard-core techie.

Thursday, December 13, 2007

"The techno-utility complex" and cloud lock-in

OK, so in the process of commenting on Nick Carr's post, "The techno-utility complex", I came up with a term I like: cloud lock-in. This goes back to my earlier conversation about vendor lock-in in the capacity-on-demand world--aka cloud computing. I like the term because it reinforces the truth: there is no single compute cloud, and the early leaders in the space don't want there to be one. Rather, they are hell-bent on collecting your dimes every hour, and on making it damn expensive for you to move your revenue stream elsewhere.

My advice stands: if you are greenfield, and data security and access are less of an issue for you, go for EC2, the managed hosting "cloud bank", or the coming offerings from Google or Microsoft. However, if you want to take a more conservative approach to gaining the economic benefits of utility computing, make your own cloud first.
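
For those who do jump in, one cheap insurance policy against cloud lock-in is to keep your own thin abstraction between your application and the vendor's storage and compute APIs, with at least one implementation you fully control. A minimal sketch, with hypothetical interface and class names of my own:

```python
# One way to keep a "disaster recovery" option against cloud lock-in:
# code against a thin storage interface of your own and keep at least
# two implementations alive. Interface and class names are hypothetical.
from abc import ABC, abstractmethod
import os

class BlobStore(ABC):
    @abstractmethod
    def put(self, key: str, data: bytes) -> None: ...
    @abstractmethod
    def get(self, key: str) -> bytes: ...

class LocalBlobStore(BlobStore):
    """Backed by your own disks -- the fallback you control."""
    def __init__(self, root: str):
        self.root = root
        os.makedirs(root, exist_ok=True)

    def put(self, key: str, data: bytes) -> None:
        with open(os.path.join(self.root, key), "wb") as f:
            f.write(data)

    def get(self, key: str) -> bytes:
        with open(os.path.join(self.root, key), "rb") as f:
            return f.read()

# A CloudBlobStore subclass would wrap whichever vendor SDK you use today;
# the application only ever sees BlobStore, so switching providers becomes
# a configuration change rather than a rewrite.
def make_store(provider: str) -> BlobStore:
    if provider == "local":
        return LocalBlobStore("/var/data/blobs")
    raise ValueError(f"no adapter configured for {provider!r}")
```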

Wednesday, December 12, 2007

Software fluidity and system security

I came across this fascinating conversation tonight between Rich Miller (whom I've exchanged blogs with before) and Greg Ness regarding the intense relationship between network integrity, system security and VM-based software fluidity. What caught my attention about this conversation is the depth at which Rich, Greg and others have thought about this branch of system security--what Greg refers to as Virtsec. I learned a lot by reading both authors' posts.

I know nothing about network security or virtualization security, frankly, but I know a little about network virtualization and the issues that users have in replicating secure computing architectures across pooled computing resources. Rich makes a point in the discussion that I want to address from a purely philosophical point of view:

Consider this: It's not only network security, but also network integrity that must be maintained when supporting the group migration of VMs. If one wants to move an N-tier application using VMware's VMotion, one wants a management framework that permits movement only when the requirements of the VM "flock" making up the application are met by the network that underpins the secondary (destination) location. By that, I mean:

  • First, the assemblage of VMs needs to arrive intact.

    If, because of a change in the underpinning network, a migration "flight plan" no longer results in a successful move by all the piece parts, that's trouble. If disaster strikes, you don't want to find that out when invoking the data center's business continuity procedure. All the VMs that take off from the primary location need to land at the secondary.


  • Second, the assemblage's internal connections as well as connections external to the "flock" must continue to be as resilient in their new location as they were in their original home.

    If the use of VMotion for an N-tier application results in a new instance of the application that ostensibly runs as intended, but is susceptible to an undetected, single point of network failure in its new environment, someone in the IT group's network management team will be looking for a new job.
Here is exactly where I believe application architectures become critical to the problem of software fluidity. In a well-contained multi-tier application (a very turn-of-the-millennium concept) it is valid to treat the migration of the "flock" as a network integrity problem. However, in the modern world of SOA, BPM and application virtualization, application integrity suddenly becomes a dynamic discovery issue that is only partly dependent on network access.

In other words, I believe most modern enterprise software systems can't rely on the "infrastructure" to keep their components running when they are moved around the cloud. It's not good enough to say "gee, if I get these running on one set of VMs, I shouldn't have to worry about what happens if those VMs get moved". Rich hints strongly at understanding this, so I don't mean to accuse him of "not getting it". However, I wonder what Replicate Technologies is prepared to tell its clients about how they need to review their application architectures to work in such a highly dynamic environment. I'd love to hear it from Rich.
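
To put a little flesh on what "dynamic discovery" means at the application level, here is a rough sketch: the component resolves its dependencies by logical name at startup and re-resolves when a call fails, instead of baking in hostnames that stop being true the moment a VM is migrated. The registry here is just a dictionary standing in for DNS, a service registry or a CMDB, and all the names and addresses are hypothetical.

```python
# Sketch of application-level dynamic discovery: resolve dependencies
# through a registry at startup and re-resolve on failure, rather than
# hard-coding hosts that stop being valid once a VM is migrated.
# The registry below is a stand-in for DNS, a service registry, or a CMDB.
import socket

REGISTRY = {
    "billing-service": [("10.0.1.15", 8080), ("10.0.2.15", 8080)],
    "customer-db":     [("10.0.1.20", 5432)],
}

def resolve(name):
    """Ask the registry for the current endpoints of a logical dependency."""
    endpoints = REGISTRY.get(name)
    if not endpoints:
        raise LookupError(f"no endpoints registered for {name}")
    return endpoints

def call_with_rediscovery(name, attempts=2):
    """Try each known endpoint; if all fail, re-resolve and try again."""
    for _ in range(attempts):
        for host, port in resolve(name):
            try:
                with socket.create_connection((host, port), timeout=2):
                    return f"connected to {name} at {host}:{port}"
            except OSError:
                continue   # this endpoint may have moved along with its VM
        # In a real system the registry itself would be refreshed here.
    raise ConnectionError(f"could not reach {name} after rediscovery")
```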

Also, from Greg, I'd be interested in knowing whether he's thought beyond the effects of virtsec on network security to its effects on application security. At the very least, I think an increasing dependency on dynamic discovery of required resources (e.g., services, data sources, EAI, etc.) means an increased need for virtsec to be application aware as well as network aware. I apologize if I'm missing a virtsec 101 concept here, as I haven't yet read all that Greg has written about the subject, but I'm disturbed that the little I've read so far seems to assume that VMs can be managed purely as servers, a common mistake when considering end-to-end Service Level Automation (SLAuto) needs.

I intend to keep one eye on this discussion, as virtsec is clearly a key element of software fluidity in a SOA/BPM/VM world. It is even more critical in a true utility/cloud computing world, where your software would ideally move undetected between capacity offered by entirely disparate vendors. It's no good chasing the cheapest capacity in the world if it costs you your reputation as a security-conscious IT organization.

By the way, Rich, I dropped the virtual football after your last post in our earlier conversation...I just found a blog entry I wrote about six months ago during that early discussion about VM portability standards that I never posted. Aaaargh... My apologies, because I liked what you were saying, though I was concerned about non-VM portability and application awareness at the time as well. I continue to follow your work.

Monday, December 10, 2007

User Experience and Fluidity (er, Patration...)

First, thanks to Simon for reminding me about his seminal post defining a new key industry term, patration, to describe what I call software fluidity. Is he serious? Only your use of the term in everyday life will tell... [Insert cheezy smiley face here.]

Second, if you haven't run across it yet, check out the debate between Robert Scoble/Nick Carr and Michael Krigsman/the Enterprise Irregulars about the need for enterprise software vendors to learn from the "drive to sexiness" of consumer software. My personal opinion? I've worked in enterprise software for years, and I still don't understand why engineers take no pride in making something amazing to install, learn and use. All the effort goes into command-line tools and cool functions; little goes into the human experience. How has Apple remained relevant all these years? A focus on sexiness, without losing sight of functionality.

I agree with Nick, sexiness and functionality/stability are not mutually exclusive--except in the eyes of most enterprise software vendors...

Wednesday, December 05, 2007

Oracle makes DBs fluid(?)

Well, Oracle is making a play at making databases portable via virtualization. This was a problem in the pure VMware world, as no one was comfortable running their production databases in a VM. I'm not saying that problem is instantly solved by any means, but "certified by Oracle" is a hell of a pitch...

http://www.networkworld.com/community/node/21849

How fluid is your software?

I come from a software development background, and I can never quite get the itch to build the perfect architecture out of my system. That's partly why it is so hard for me to blog about power, even though it is an absolutely legitimate topic, and a problem that needs to be attacked from many fronts. However, power is not a software issue, it is a hardware and facilities issue, and my heart just isn't there when it comes to pontificating.

All is not lost, however, as Cassatt still plays in the utility computing infrastructure world, and I get plenty of exposure to dynamic provisioning, service level automation (SLAuto) and the future of capacity on demand. And to that end, I've been giving a lot of thought to the question of what, if any, software architecture decisions should be made with utility computing in mind.

While a good infrastructure platform won't require wholesale changes to your software architecture (and none at all, if you are willing to live with consequences that will become obvious later in this discussion), the very concept of making software mobile--in the changing-capacity sense, not the wireless-device sense--must lead all software engineers and architects to contemplate what happens when their applications are moved from one server to another, or even from one capacity provider to another. There are myriad issues to be considered, and I aim to cover just a few of them here.

The term I want to introduce to describe the ability of application components to migrate easily is "fluidity". The definition of fluidity includes "the ability of a substance to flow", and I don't think it's much of a stretch to apply the term to software deployments. We talk about static and dynamic deployments today, and a fluid software system is simply one that can be moved easily without breaking the functionality of the system.

An ideally fluid system, in my opinion, would be one that could be moved in its entirety or in pieces from one provider to another without interruption. As far as I know, nobody does this. (3TERA claims they can move a data center, but as I understand it you must stop the applications to execute the move.) However, for purely academic reasons, let's analyze what it would take to do this:


  1. Software must be decoupled from physical hardware. There are currently two ways to do this that I know of:
    • Run the application on a virtual server platform
    • Boot the file system dynamically (via PXE or similar) on the appropriate capacity
  2. Software must be loosely coupled from "external" dependencies. This means all software must be deployable without hard-coded references to the "external" systems on which it is dependent. External systems could be other software processes on the same box, but the most critical elements to manage here are software processes running on other servers, such as services, data conduits, BPMs, etc.
  3. Software must always be able to find "external" dependencies. Loose coupling, as most of you know, is sometimes easier said than done, especially in a networked environment. Critical here is that the software can locate, access and negotiate communication with the external dependencies. Service registries, DNS and CMDB systems are all tools that can be used to help systems maintain or reestablish contact with "external" dependencies.
  4. System management and monitoring must "travel" with the software. It's not appropriate for a fluid environment to become a Schrödinger's box, where the state of the system becomes unknown until you can reestablish measurement of its function. I think this may be one of the hardest requirements to meet, but at first blush I see two approaches (a rough sketch of the second follows this list):
    • Keep management systems aware of how to locate and monitor systems as they move from one location to another.
    • Allow monitoring systems to dynamically "rediscover" systems when they move, if necessary. (Systems maintaining the same IP address, for instance, may not need to be rediscovered.)
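
Here is the rough sketch of the second approach I promised above: workloads announce themselves by a stable logical name plus their current address, and the monitor keys its checks on the name, so a move updates an address instead of creating a blind spot. All names and numbers below are hypothetical.

```python
# Sketch of requirement 4: monitoring that "travels" with the software.
# Each workload reports a heartbeat under a stable logical name along with
# its current address; the monitor keys everything on the name, so a move
# just updates the address rather than leaving a blind spot.
import time

class TravelingMonitor:
    def __init__(self):
        self.targets = {}   # logical name -> (host, port, last_heartbeat)

    def heartbeat(self, name, host, port):
        """Called by the workload itself (or its agent) wherever it lands."""
        self.targets[name] = (host, port, time.time())

    def stale(self, max_age_seconds=60):
        """Workloads we have lost sight of -- possibly mid-migration."""
        now = time.time()
        return [name for name, (_, _, seen) in self.targets.items()
                if now - seen > max_age_seconds]

monitor = TravelingMonitor()
monitor.heartbeat("order-service", "10.0.1.15", 8080)   # before the move
monitor.heartbeat("order-service", "10.0.9.40", 8080)   # after a VMotion/PXE move
print(monitor.targets["order-service"][:2])
print(monitor.stale())
```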

This is just a really rough first cut of this stuff, but I wanted to put this out there partly to keep writing, and partly to get feedback from those of you with insights (or "incites") into the concept of software fluidity.

In future posts I'll try to cover what products, services and (perhaps most importantly) standards are important to software fluidity today. I also want to explore whether "standard" SOA and BPM architectures actually allow for fluidity. I suspect they generally do, but I would not be surprised to find some interesting implications when moving from static SOA to fluid SOA, for instance.

Respond. Let me know what you think.

Off-topic: It's official, rock beats paper and scissors

Having some fun with Google Trends, I ran the following comparison:

[Google Trends query comparing "rock", "paper" and "scissors"]

which yielded the following result:

[Google Trends chart of search volume over time]

where blue is rock, red is paper and yellow is scissors.
As I read this, rock has consistently beaten paper and scissors for four years running.
Of course, my assertion is based on Google's observation that Trends can predict the future.