I have to admit I find myself growing more impressed with the MapReduce (and related algorithm) community every day. I spent the better part of an hour watching Stu Hood of Rackspace/Mailtrust discuss MapReduce, Mailtrust's use of it for daily log processing, and how it compares to SQL. I'm a MapReduce newbie, so I was happy to find Stu's overview clear, careful, and pitched at a level I could grasp.
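For anyone as new to this as I am, the core idea behind that log-processing use case fits in a few lines. This is just a toy single-machine sketch of the map/shuffle/reduce phases (the log format and field positions are made up for illustration; real Hadoop distributes each phase across many nodes):

```python
from collections import defaultdict

def map_phase(line):
    # Emit (key, value) pairs; here, (status_code, 1) per log line.
    # Assumes a made-up log format where the status code is the last field.
    yield line.split()[-1], 1

def shuffle(pairs):
    # Group all values by key, as the framework does between phases.
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(key, values):
    # Aggregate the grouped values; here, a simple sum.
    return key, sum(values)

logs = [
    "GET /index.html 200",
    "GET /missing 404",
    "POST /login 200",
]

pairs = [pair for line in logs for pair in map_phase(line)]
counts = dict(reduce_phase(k, v) for k, v in shuffle(pairs).items())
print(counts)  # {'200': 2, '404': 1}
```

The SQL analogy Stu draws maps roughly onto this: the map phase plays the role of SELECT/WHERE, the shuffle of GROUP BY, and the reduce of an aggregate like COUNT or SUM.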
His overview of Hadoop (an open source implementation of a MapReduce framework) was equally enlightening, and I learned that Hadoop is more than just the framework; it includes a distributed file system as well. This is where I think SLAuto starts to become important: it will be critical not only to monitor which systems in a Hadoop cluster are alive at any time (thus providing access to their storage), but also to correct failures by remounting disks on additional nodes, provisioning new nodes to meet increased data loads, and so on. Granted, I know just enough to be dangerous here, but I would bet that I could sell the value of SLAuto in a MapReduce environment.
Another interesting overview of the MapReduce space comes from Greg Linden. (Damn, now I've mentioned Greg twice in a row...my groupie tendencies are really showing these days! :-) Greg points us to notes taken at the Hadoop Summit by James Hamilton, an architect on the Windows Live Platform Services team. I haven't read through them all yet, but I like the breakdown of many of the big projects getting a lot of coverage among techies these days: Yahoo's PIG and HBase, as well as Microsoft's DRYAD. Missing is CouchDB, but I plan to watch Jan Lehnardt's talks on that as soon as I get a moment.
Again, the reason MapReduce is being covered in a blog about Service Level Automation and utility computing is that as soon as I see "tens of thousands of nodes", I also see "no way human beings can meet the SLAs without automation". At least not without significant costs compared to automating. System provisioning, monitoring, autonomic scaling, and fail-resistance are not built into Hadoop; they are simply easy to support. Something else is needed to provide SLAuto support at the infrastructure layers.