Friday, February 29, 2008

Fun with Simon

Simon Wardley created a couple of posts this week that make for good smiles. The first is his maturity model for cloud computing:



This one I agree with. Very funny, but funny because it reflects truth.

The second is a post on open source computing. I completely disagree with the concept that open source can keep up with closed source in terms of innovation (Anne Zelenka makes a great argument here), and that closed source is bad for ducks (see Simon's post).

However, I do believe that standardization spreads faster with open source than with closed source. For what it's worth, I would also like to see a major utility computing platform release its technology to open source. (Well, at least the components that are required for portability.) I just wonder why any of them would without pressure from the market.

My equations would reflect the "Schrödinger's Cat" aspects of closed source products prior to the introduction of accepted standards:

open source == kindness to ducks
closed source == ambivalence towards ducks; could go either way
:-)

FriendFeed

I just wanted to let everyone know that my entire time-wasting life--er, online research--can be found at http://friendfeed.com/jamesurquhart. I love this site, but wonder how the heck they are going to make any money. There are no ads or anything.

If you are on FriendFeed, subscribe to my feed. Several other big-name bloggers are also there, which makes it very cool for understanding what they are reading or commenting on.

Wednesday, February 27, 2008

Enterprise Architecture, Business Continuity and Integrating the Cloud

(Update: Throughout the original version of this post, I had misspelled Mr. Vambenepe's name. This is now corrected.)

William Vambenepe, a product architect at Oracle focusing on enterprise management of applications and middleware, pointed me to a blog by David Linthicum on IntelligentEnterprise that makes the case for why enterprise architects must plan for SaaS. In a very high level, but well reasoned post, Linthicum highlights why SaaS systems should be considered a part of enterprise architectures, not tangential to them.

As Vambenepe points out, perhaps the most interesting observation from Linthicum is the following:

Third, get in the mindset of SaaS-delivered systems being enterprise applications, knowing they have to be managed as such. In many instances, enterprise architects are in a state of denial when it comes to SaaS, despite the fact that these SaaS-delivered systems are becoming mission-critical. If you don't believe that, just see what happens if Salesforce.com has an outage.

I don't want to simply repeat Vambenepe's excellent analysis, and I absolutely agree with him. So let me just add something about SLAuto.

Take a look at Vambenepe's immediate response:

I very much agree with this view and the resulting requirements for us vendors of IT management tools.

Now add the comments from Microsoft's Gabriel Morgan that I discussed a couple of weeks ago:

Take for example Microsoft Word. Product Features such as Import/Export, Mail Merge, Rich Editing, HTML support, Charts and Graphs and Templates are the types of features that Customer 1.0 values most in a product. SaaS Products are much different because Customer 2.0 demands it. Not only must a product include traditional product features, it must also include operational features such as Configure Service, Manage Service SLA, Manage Add-On Features, Monitor Service Usage Statistics, Self-Service Incident Resolution as well.

Gabriel's point boiled down to the following equation:

Service Offering = (Product Features) + (Operational Features)

This is entirely in agreement with Linthicum and Vambenepe.

As I am wont to do, let me push "Operational Features" as far as I think they can go.

In the end, what customers want from any service--software, infrastructure or otherwise--is control over the balance of quality, cost and time-to-market. Quality is measured through specific metrics, typically called service level metrics. Service level agreements (SLAs) are commitments to maintain service level metrics within commonly agreed boundaries and rules. Ultimately, all of these "operational features" are about allowing the end user to either

  1. define the service level metrics and/or their boundaries (e.g. define the SLA), or
  2. define how the system should respond if a metric fails to meet the SLA.

Item "2" is SLAuto.

I would argue that what you don't want is a closed-loop SLAuto offering from any of your vendors. In fact, I propose right here, right now, that a standard (and, I am sure Simon Wardley would argue, open source) protocol or set of protocols be defined for the following (a rough sketch follows the list):
  1. Defining service level metrics (probably already exists?)
  2. Defining SLA bounds and rules (may also exist?)
  3. Defining alerts or complex events that indicate that an SLA was violated
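
To make this concrete, here is a minimal sketch of what I have in mind, written in Python purely for illustration. The class names, fields and thresholds are my own invention, not any existing standard or product API:

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

# 1. A service level metric definition (names and fields are hypothetical).
@dataclass
class ServiceLevelMetric:
    name: str    # e.g. "checkout_response_time"
    unit: str    # e.g. "ms"

# 2. An SLA bound and rule over that metric.
@dataclass
class SLABound:
    metric: ServiceLevelMetric
    max_value: float       # the upper limit the provider commits to
    window_minutes: int    # the evaluation window for the commitment

# 3. An alert/event indicating the SLA was violated.
@dataclass
class SLAViolation:
    bound: SLABound
    observed_value: float
    occurred_at: datetime

def check(bound: SLABound, observed_value: float) -> Optional[SLAViolation]:
    """Emit a violation event if an observed value breaks the bound."""
    if observed_value > bound.max_value:
        return SLAViolation(bound, observed_value, datetime.utcnow())
    return None
```

The point is not these particular fields; it is that if definitions like these were standardized, any provider could emit them and any customer-controlled SLAuto engine could consume them.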

Vendors could then use these protocols to build Operational Features that support a distributed SLAuto fabric, where the ultimate control over what to do in severe SLA violations can be controlled and managed outside of any individual provider's infrastructure, preferably at a site of the customer's choosing. This "customer advocate" SLAuto system would then coordinate with all of the customer's other business systems' individual SLAuto to become the automated enforcer of business continuity. In the end, that is the most fundamental role of IT, whether it is distributed or centralized, in any modern, information driven business.

"Nice, James," you say. "Very pretty 'pie-in-the-sky' stuff, but none of it exists today. So what are we supposed to do now?"

Implement SLAuto internally in your own data centers with your existing systems, that's what. Integrate SLAuto for SaaS as you come to understand the Operational Feature APIs from your vendors; those vendors, your SLAuto vendor and/or your systems talent can then develop interfaces into your own SLAuto infrastructure.

Evolve towards nirvana, don't try to reach it by taking tabs of vendor acid.

If you want more advice on how to do all of this, drop me a line (james dot urquhart at cassatt dot com) or comment below.

Tuesday, February 26, 2008

Jonathan Schwartz hints at a MySQL cloud

Robert Scoble interviewed Jonathan Schwartz today using his ultracool Nokia N95/Qik personal broadcasting package. During that interview, Jonathan made an interesting non-announcement. It seems, he notes, a natural fit that a data center expert like Sun could leverage their new highly scalable database environment, MySQL, to build a MySQL cloud service.

I think this would be awesome; not only because it forces Oracle to consider getting into the same market (thus potentially creating a competitive commodity database service market), but also because it opens all kinds of possibilities for add-on capabilities that might not be economically feasible to develop in a traditional enterprise sales model.

Here's my only suggestion to my former boss, Mr. Schwartz. Buy Endeca. Not because they own the ecommerce search for most combination "bricks-and-mortar"/online retailers (they do), but because the technology has been developed in such a way that it can be used as tag-based search for just about any data source. (They don't present it that way, but I got an in-depth demo, and that is what it is.) What I imagine would be a competitive advantage for Sun/MySQL would be a cost-per-byte data source with both SQL and tag-based or unstructured querying. Buy Endeca!

OK, soon we will have capacity, storage and databases in the cloud. Who wants to be first in the "System Management as a Service" game?

Monday, February 25, 2008

Comments on Paul Wallis: Cloud Computing

Paul Wallis has an excellent post tying the history of prior utility/cloud/grid computing attempts to the current hype. I've been trying to comment for a while, but haven't been able to get comment submission to work until today. This is a reworking of that response, in case it doesn't get through moderation for some reason.

Let me just say that, contrary to how Paul's description of my position may sound to others, I am not blindly "pro-cloud". In fact, I firmly recommend that existing enterprise data centers and applications think hard before going "outside" to a commercial capacity-on-demand provider. In most cases, it would actually be better for such enterprises to convert their own infrastructure to a utility computing model first, while the necessary technologies and businesses mature.

I also define the cloud broadly, to include SaaS, PaaS (e.g. force.com) and HaaS (e.g. Amazon, Mosso, etc.). SaaS is clearly in play today, HaaS is being experimented with, but PaaS may be the most interesting facet of the cloud in the long term.

That being said, Paul provides very valuable information in this post, and I for one very much appreciate the work put into it. It is very true that bandwidth is something to be nervous about (especially when Amazon charges as much as it does for bandwidth), and I have had some interesting discussions (such as the one Paul references) about how data integration will happen over the cloud. Finally, cloud lock-in is indeed something to be concerned about; as in, what happens if my first-choice provider sucks? Can I move my applications, data, etc. to someone else cheaply enough that it doesn't put me out of business? Simon Wardley has a good post on that today.

Update: Er, two seconds and I could have confirmed the spelling of Paul's last name. Sorry, Paul!

HPC in the Cloud

Check out Blue Collar Computing. High Performance Computing is one area that should really benefit from utility computing models. Imagine gaining access to the world's most powerful computers (with reasonable assistance from experts on programming and deploying on those systems) at a price made reasonable by paying only your "share" of resource usage costs.

Cool to see someone try this business model out for real.

Wednesday, February 20, 2008

Data Goes SLAuto at Oracle

Thanks to Steve Jones, check out this presentation from David Chappell, Oracle VP and CTO of SOA, titled "Next-Generation Grid Enabled SOA". (A shorter written article can be found at SOA Magazine's site.) Chappell outlines the work Oracle is doing to turn the traditional model of application scalability on its head; instead of keeping database resources fixed and scaling the applications/services horizontally, scale the database (using a cool complex adaptive systems approach) and alleviate much of the need to scale apps and services (except for CPU-bound services). For someone like me, that's mind blowing.

Add to that the fact that the data management functions are relatively homogeneous (though the infrastructure may not be) and aware of their resource utilization, and you can see why they are claiming a certain amount of hardware-metric based SLAuto.

(Hardware-metric based SLAuto is based on measurements of hardware components, such as CPU utilization, memory utilization and so on. Software-based SLAuto usually uses business metrics such as transaction rates, active accounts, etc. to make scaling decisions.)
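
A minimal, purely illustrative sketch of the difference follows; the metric names and thresholds are invented, not taken from any product:

```python
# Hypothetical SLAuto scaling policies; thresholds are invented for illustration.

def hardware_metric_decision(cpu_util: float, mem_util: float) -> str:
    """Hardware-metric SLAuto: scale on resource utilization."""
    if cpu_util > 0.80 or mem_util > 0.85:
        return "add capacity"
    if cpu_util < 0.20 and mem_util < 0.30:
        return "reclaim capacity"
    return "hold"

def business_metric_decision(txns_per_sec: float, active_accounts: int) -> str:
    """Software/business-metric SLAuto: scale on business activity."""
    if txns_per_sec > 500 or active_accounts > 10000:
        return "add capacity"
    if txns_per_sec < 50:
        return "reclaim capacity"
    return "hold"

# The same environment can look idle to one policy and overloaded to the other.
print(hardware_metric_decision(cpu_util=0.35, mem_util=0.40))            # hold
print(business_metric_decision(txns_per_sec=800, active_accounts=2000))  # add capacity
```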

The catch? Well, everything must be written to use the "Data Grid" if it's to take advantage of these capabilities. Legacy applications need not apply. (Could be the deal killer for David's "Not your MOM's Bus" concept.)

It seems to me that if Oracle wants this approach to catch on, it should open source a reference implementation as soon as possible. I'm not an expert on the most recent data processing approaches, but it would seem to me that MapReduce approaches would be complementary to the Data Grid. However, Hadoop implementations would generally only be integrated with a data grid if there were an open source alternative; otherwise, MySQL will continue to be the first choice. Open source would also speed up integration between the data grid and infrastructure automation such as Cassatt and its competitors.

Dave hints at a URL for more info on the Oracle site, but I can't find it. If anyone tracks it down, I would appreciate any help I can get.

Government Data and the Greater Cloud

These days, so much is being made of cloud computing from the "capacity-on-demand" perspective, that I thought I'd take the time to review an interesting "service" that I consider an element of the "cloud system" in the larger sense. It's not a Web Service as such, but a site that performs a valuable service that can be utilized in business or government applications.

Jon Udell introduced me to EveryBlock with his discussion of how EveryBlock is exposing the value of free access to government data. This public information site heavily processes data readily available from public agencies, allowing any user (or other system) to query the data in ways not originally intended by the agency. Jon has an excellent example of these types of queries, but I played around with the site a little, and I can see that this is an excellent resource. Imagine what this service can do for the legal, construction and journalism industries.

Of course, Jon notes that the government data isn't nearly as *digitally* available as it needs to be. I hate to "me too" his post, but I have to concur. These agencies aren't *good* cloud citizens until they expose their public data via a (hopefully simple) API.

(Caveat: I am still trying to determine if EveryBlock has an API, but their site HTML looks parsable enough.)
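
For what it's worth, even without an API a determined integrator could fall back to screen scraping. A rough sketch of what I mean follows; the URL and the page structure here are entirely hypothetical, since I haven't confirmed how EveryBlock actually marks up its pages:

```python
# Purely illustrative scraping fallback; the URL and HTML structure are hypothetical.
import urllib.request
from html.parser import HTMLParser

class ListItemParser(HTMLParser):
    """Collect the text of list items from a (hypothetical) public-records page."""
    def __init__(self):
        super().__init__()
        self.in_item = False
        self.items = []

    def handle_starttag(self, tag, attrs):
        if tag == "li":
            self.in_item = True

    def handle_endtag(self, tag):
        if tag == "li":
            self.in_item = False

    def handle_data(self, data):
        if self.in_item and data.strip():
            self.items.append(data.strip())

html = urllib.request.urlopen("http://example.com/neighborhood/permits/").read().decode("utf-8")
parser = ListItemParser()
parser.feed(html)
print(parser.items)
```

Scraping is brittle, of course, which is exactly why a simple, published API would make these agencies (and the sites that front them) better cloud citizens.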

Many of the "cloud computing" definitions I've seen lately have had to do with how easy it is to move resources around between "capacity-on-demand" vendors. I want to submit that the "cloud system" is everything that is available to support computing via the Internet, including the key services that are ready to be integrated into other applications. Google Maps for instance. Or Flickr. Just as long as they remain decoupled from the clients that use them, any electronically accessible Internet property has a potential to contribute to the cloud system.

Friday, February 15, 2008

A Day of Storage and the Cloud

My reading began this morning with Nick's coverage of the Amazon S3/EC2/AWS outage. Perhaps most interesting to me, though, were the comments. A variety of people responded to note that we are perhaps holding the cloud to impossibly high standards, while others noted that this was supposed to be a distributed service, and an extended downtime like this indicates a certain lack of redundancy. I find this fascinating, in light of the recent discussion of cloud lock-in. Not surprising, just fascinating.

Let me explain.

While I remain extremely concerned about the proprietary operational approaches taken by most "capacity-on-demand" providers--many based on open source platforms, ironically--I think it is important to acknowledge that:
  1. 100% uptime is unreasonable for any platform in its infancy, including S3
  2. Not everyone will be negatively impacted as much by a three hour outage as some
  3. SLAs should be set if service is extremely critical to a business. Ironically, Amazon has limits on who and what they will provide SLAs for.
  4. Even with a three hour outage, Amazon S3 is probably the best service of its kind...for now.

That last point is critical, as Nick put up another post later in the day highlighting EMC's plans to enter the cloud storage market--in a big way. The competitors to Amazon are coming, and that fact may change the equation for how much leeway Amazon has in the future.

Assuming it is not super onerous to copy data from one provider to another--storage may in fact be the earliest of the commodity cloud components if this is true--an alternative approach will make it that much simpler for an unsatisfied customer to make a move. This, in turn, will make some who will tolerate an outage now, well, less tolerant.

I anxiously await Amazon's explanation for the glitch.

By the way, Robert Scoble certainly believes Amazon has won the cloud market in its entirety already. He is way off, of course. Do you know how much datacenter capacity there is in corporate America alone? There is no way one company, spending a fraction of what Microsoft, Google and Yahoo are spending on new data centers, will create a barrier to entry that high. Amazon is a typical first entrant, a la Netscape. Hopefully the market is different enough, though, that they can build a survivor.

Thursday, February 14, 2008

Latency: Obstacle to cloud computing, or opportunity?

I was challenged in the comments to my Cloud Computing Heats Up post regarding my criticism of pupwhines' post that in turn criticized cloud computing. The anonymous author of the comment thought I was too hard on pupwhines, and wanted to know what my response specifically was to the challenge latency presents to distributed computing. I responded there, but I want to expand a little bit on the topic, as it is indeed important to understand, and backs my contention that there will need to be some software architectural changes made to leverage the cloud system.

(Quick note: I've alluded to this before, but I strongly believe there is no one cloud, but a bunch of siloed clouds today with *some* limited integration between them. More of a frontal system, really.)

Latency is an issue in most IT application environments today. There is no question that "traditional" tiered application design scales well at the processing layer, but has real issues at the data layer. There is simply no easy way to manage a traditional relational database architecture over a widely distributed environment. Pupwhines' contention that joining a table between two SaaS vendor implementations would be a disaster is right on. In modern technology terms, it would be insane.

However, this is the disruptive aspect of cloud computing: the architectures you know and love are no longer necessarily best practices in a world where your functionality and capacity are:
  1. not necessarily your own,
  2. not necessarily integrated, and
  3. splayed out across this 5.1×10⁸ km² rock we live on
There are new technical advances being made today in the companies that already rely on cloud principles (think Google, Amazon, Microsoft, etc.). These advances will change the way you design and deploy software, but they will enable a world where proximity of data means less and less.

In fact, you probably already leverage one of these technical changes: increased bandwidth. Indulge me in some autobiographical narrative to illustrate.

Back in the late '90s, while I was a Senior Principal Consultant with Forte Software, Inc., the legendary(?) distributed application development platform vendor, one of my key roles was advising clients on how to best architect for high performance, high scalability and high availability. Forte was an early service-oriented architecture platform, but it ran on the 10Mb/100Mb networks of the time. Thus, the rules for message passing between components (UI<->service or service<->service) were (in order of priority):
  1. Send as few messages over the network as possible
  2. Send the smallest messages possible

Thus, it was better to send large messages once than many small messages, but you wanted to optimize each message as much as possible.
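
To illustrate the rule with a toy sketch (the service calls here are invented for the example, not Forte APIs): a chatty client pays the network latency tax once per item, while a batched client pays it once per request.

```python
# Invented example services; not Forte APIs. Illustrates "few, well-packed messages".

def fetch_orders_chatty(remote_service, order_ids):
    # One network round trip per order: N small messages, N latency penalties.
    return [remote_service.call("get_order", oid) for oid in order_ids]

def fetch_orders_batched(remote_service, order_ids):
    # One round trip carrying a larger payload: fewer messages (rule 1),
    # while still keeping the payload trimmed to what is needed (rule 2).
    return remote_service.call("get_orders", order_ids)
```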

To this end, the best practice was to create data services and to actually deploy those services directly on the database server hardware. It was more important to process the relational-to-object mapping of data according to need in a timely fashion--thus avoiding unnecessary network traffic--than to divide processing responsibility so that no custom application components ran on the RDBMS hardware.

Fast forward to the 1G/10G networks of today. From what I am seeing, it is actually considered bad practice to do what I described above. While at Sun, I actually got admonished by a (very competent) manager for suggesting that the way around Sun Access Manager's horrible performance was to deploy the identity server and database on the same box (with our custom login and registration UIs deployed on separate, horizontally scalable servers). Pure architectural heresy. He was right in many ways: doing so would have locked the business logic tier into an architecture that couldn't scale horizontally, but that wasn't his point. "We don't deploy our software on our database servers" was the gist of his argument.

So, faster networks have already changed the so-called "laws of physics" that software architects must design around. Given this, it seems easy to postulate that additional advances in network bandwidth will open additional opportunities for architectural change. In fact, it already has; check out Gigaspaces for a cool (though controversial) alternative to horizontally replicated service architectures.

Will bandwidth really grow at a rate that will make a difference to the current IT generation(s)? Many postulate it has to, even if the core network operators resist. As I noted in my response, Cisco's new Nexus 7000 series is a sign of times to come. Does anyone deny that 40G and 100G networks have the opportunity to change the laws of physics? (Disclaimer: I know just enough about networking to be dangerous, so I may be overstating the case...but change is still clearly on the horizon.)

Even if network bandwidth doesn't change at all, or any additional bandwidth is chewed up by demand at existing rates, there are other software architectural advances that will revolutionize certain kinds of computing. I spoke in my response about MapReduce and its open source implementation, Hadoop. For processing large, distributed data loads, this architecture is eliminating boundaries created by traditional scaleout RDBMS-based approaches. Google has used this approach to tie data from every one of its properties (including acquired properties, such as Blogger) into a single user identity and profile. Talk about a distributed join problem...
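
For readers who haven't looked at MapReduce yet, here is a toy, single-process sketch of the programming model in Python. This is not Hadoop code; Hadoop runs the same two phases across many machines and a distributed filesystem:

```python
from collections import defaultdict

# Toy, in-memory word count in the MapReduce style.

def map_phase(doc_id, text):
    # Emit (key, value) pairs for each word in a document.
    for word in text.split():
        yield (word.lower(), 1)

def reduce_phase(word, counts):
    # Combine all values emitted for a given key.
    return (word, sum(counts))

documents = {
    "doc1": "the cloud is the future",
    "doc2": "the future is distributed",
}

# Shuffle/sort: group intermediate values by key.
grouped = defaultdict(list)
for doc_id, text in documents.items():
    for key, value in map_phase(doc_id, text):
        grouped[key].append(value)

results = [reduce_phase(word, counts) for word, counts in grouped.items()]
print(sorted(results))
```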

(Another quick note: hats off to Google and Yahoo respectively for their work in this space. I know from my past life at Sun what a pain in the whatever this is, and I love the seamlessness I experience on these sites.)

One other major advancement is the increasing sophistication of business integration technologies, from traditional application integration (force.com, boomi.com, BizTalk, Lombardi, etc.) to data integration options (Informatica, Business Objects, etc.) to subscription-based data propagation techniques. These integration options allow one to go back to some of the basics I spoke of before: do as much processing as possible on SaaS vendor A's infrastructure before sharing the relevant data with vendor B. Not as perfect as a join in many cases, but in a service-oriented world, a common, required approach for most.

Perhaps the most important point I want to make today, however, is that today--in the modern IT era--many of these technologies are either future tech or not what was used to build existing applications. Given that, what does an existing datacenter do? Stick to my recommendation: convert your own datacenter into a utility/cloud today, and begin to leverage the maturing compute grid/cloud computing ecosystem as it and your applications mature.

Tuesday, February 12, 2008

Green Aware Programming

monkchips writes about "green aware programming", as coined by Christopher O'Connor, vice president of strategy and market management at Tivoli. I responded and pointed out that "green = cheap" in the utility computing world.

Sunday, February 10, 2008

Analyzing the Green opportunity

I just want to quickly bring Ken Oestreich's analysis of the Green Grid meeting in San Francisco (Day 1 and Day 2), and its aftermath to your attention. Pay special attention to the aftermath post, as it is one of the most well thought out statements of the status and opportunity for the Green Grid organization I have seen.

Ken really knows his stuff with respect to the Green Data Center movement, so if you have any interest in the subject at all, subscribe to his blog. His earlier analysis of DC energy efficiency metrics is an all time classic on the subject.

Friday, February 08, 2008

Off Topic: Blog domain change

With increased interest in Service Level Automation in the Datacenter, I decided to take the advice of many a more successful blogger than me, and register my own domain name for the blog. I debated quite a bit about which name to select, but in the end I wanted a domain that would travel with me regardless of where my career might take me in the future. (No plans to leave Cassatt, but you never know where the future might take you...)

To that end, this blog will now officially reside at http://blog.jamesurquhart.com. The original URL, http://servicelevelautomation.blogspot.com, will continue to operate, but will redirect you to the new site. All feeds should also continue to work unchanged.

Please let me know if you have problems by emailing me at james dot urquhart at cassatt dot com.

Thursday, February 07, 2008

The importance of operations to online services customers

I hadn't caught up on Gabriel Morgan's blog in a while, so I'm a week or so late in seeing his interesting post on the importance of operations features in a SaaS product offering. Gabriel works at Microsoft on the team that is looking at the Software plus Services offerings introduced by Ray Ozzie a few months ago. According to Gabriel, being a software product company, Microsoft has occasionally been slow to learn a key lesson in the online services game:

In the traditional packaged software business, product features define what a product is but Customer 2.0 expects to have direct access to operational features within the Service Offering itself.

Take for example Microsoft Word. Product Features such as Import/Export, Mail Merge, Rich Editing, HTML support, Charts and Graphs and Templates are the types of features that Customer 1.0 values most in a product. SaaS Products are much different because Customer 2.0 demands it. Not only must a product include traditional product features, it must also include operational features such as Configure Service, Manage Service SLA, Manage Add-On Features, Monitor Service Usage Statistics, Self-Service Incident Resolution as well. In traditional packaged software products, these features were either supported manually, didn't exist or were change requests to a supporting IT department.

In other words "Service Offering = (Product Features) + (Operational Features)".

Wow. What a simple way to state something I've been concerned about for some time now: as you move your enterprise into the cloud, will your service providers (be it SaaS, HaaS, PaaS or others) provide you with the tools and data you need to successfully operate your business? How will you be able to interact with both the service provider's software and personnel to make sure those operations run a) according to your wishes, and b) with no negative impact on your business?

Gabriel goes on:

Guess who builds and supports these Operational Features? Your friendly neighborhood IT department in conjunction with the Operations and Service Offering product group. This raises the quality bar for your traditional IT shop.

Heck, yeah. And guess what? Should a business do something crazy--oh, say, select SaaS products from more than one vendor to integrate into their varied business processes--they will need not only to build solid operational ties with each vendor, but also to integrate those operational features across vendors. Think about that.

How best to do that? You shouldn't be surprised when I tell you that a key element of the solution is SLAuto under the control of the business. Managing SaaS systems to business-defined service levels will be a critical role of IT in the cloud-scape of tomorrow.

Wednesday, February 06, 2008

Cloud computing heats up

Today's reading has been especially interesting, as it has become clear that a) "cloud computing" is a concept that more and more IT people are beginning to understand and dissect, and b) there is the corresponding denial that comes with any disruptive change. Let me walk you through my reading to demonstrate.

I always start with Nick Carr, and today he did not disappoint. It seems that IBM has posited that a single (distributed) computer could be built that could run the entire Internet, and expand as needed to meet demand. Of course, this would require the use of Blue Gene, an IBM technology, but man does it feed right into Nick's vision of the World Wide Computer. To Nick's credit, he seems skeptical--I know I am. However, it is a worthy thought experiment to think how one would design distributed computing to be more efficient if one had control over the entire architecture from chip to system software. (Er, come to think of it, I could imagine Apple designing a compute cloud...)

I then came across an interesting breakdown of cloud computing by John M Willis, who appears to contribute to redmonk. He breaks down the cloud according to "capacity-on-demand" options, and is one of the few to include a "turn your own capacity into a utility" component. Unfortunately, he needs a little education on these particular options, but I did my best to set him straight. (I appreciate his kind response to my comment.) If you are trying to understand how to break down the "capacity-on-demand" market, this post (along with the comments) is an excellent starting place.

Next on the list was a GigaOm post by Nitin Borwankar laying out his concept of "Data Property Rights" and expressing some skepticism about the "data portability" movement. At first I was concerned that he was going to make an argument that reinforced certain cloud lock-in principles, but he actually makes a lot of sense. I still want to see Data Portability as an element of his basic rights list, but he is correct when he says that if the other elements are handled correctly, data portability will be a largely moot issue (though I would argue it remains a "last resort" property right).

Dana Blankenhorn at ZDNet/open-source covers a concept being put forth by Etelos, a company I find difficult to describe, but that seems to be an "application-on-demand" company (interesting concept). "Opportunity computing", as Etelos CEO Danny Kolke describes it, is the complete set of software and infrastructure required to meet a market opportunity on a moment's notice. "Opportunity computing is really a superset of utility computing," Kolke notes. Blankenhorn adds,

"It’s when you look at the tools Kolke is talking about that you begin to get the picture. He’s combining advertising, applications, the cash register, and all the relationships which go into those elements in his model. "

In other words, it seems like prebuilt ecommerce, CRM and other applications that can quickly be customized and deployed as needed, to the hosting solution of your choice. My experience with this kind of thing is that it is impossible to satisfy all of the people, all of the time, but I'm fascinated by the concept. Sort of Platform as a Service with a twist.

Finally, the denial. The blog "pupwhines" remains true to its name as its author whimpers about how Nick "has figured out that companies can write their own code and then run it in an outsourced data center." Those of you who have been following utility/cloud computing know that this misses the point entirely. It's not outsourcing capacity that is new, but the way it is outsourced--no contracts for labor, no work-order charges for capacity changes, etc. In other words, just pay for the compute time.

With SLAuto, it gets even more interesting as you would just tell the cloud "run this software at these service levels", and the who, what, where and how would be completely hidden from you. To equate that with the old IBM/Accenture/{Insert Indian company here} mode of outsourcing is like comparing your electric utility to renting generators from your neighbors. (OK, not a great analogy, but you get the picture.)

Another interesting data point for measuring the booming interest in utility and cloud computing is the fact that my Google Alerts emails for both terms have grown from one or two links a day to five or more links each and every day. People are talking about this stuff because the economics are so compelling it's impossible not to. Just remember to think before you jump on in.

Sunday, February 03, 2008

Off topic: Hey, my wife is an entrepreneur!!!

This is just too much fun.

Mia has been amazing throughout Emery's pregnancy, birth and early weeks. Her focus and determination have really made the addition of a new "startup" much easier than I anticipated (though it's early yet :). She has seen to it that Owen knows he is still her favorite little boy, and I love the time we have been able to spend together between my family leave and the late nights.

She faces the difficult choice of returning to school in the next few weeks, and I just wanted to say publicly that I am so proud of all she is doing. She is the true entrepreneur in our family.