Google Wave is to the Internet what Lost is to TV

I remember when the TV show Lost first came on the air. People loved it. People who hadn't even seen it loved it and spewed their praise. Years went by and that excitement dwindled, but you still had the die-hards screaming for more. Eventually, not wanting to be entirely left out, I sat down and watched an episode. I just didn't get it. I mean I really didn't get it. I wanted to like it, but I ultimately had no clue what was going on.

Google Wave is kind of like that. I got my invite last week and hopped on, extremely excited. But there were no people to interact with. It was like that big desert island in Lost: there were supposed to be people somewhere on the island, but you didn't know who they were or what their intentions were. I just kind of wandered through the mass of with:public waves, trying to catch one that could hold my interest, but found nothing.

Unlike Lost, though, I think Google Wave just needs critical mass to succeed. I remember that when I was an undergrad at Purdue, I did the math on GNUnet and determined that for it to succeed as a pure P2P network, its content would have to be seeded. I went as far as concluding that all major P2P networks must have been seeded at some point.

Fortunately, Google has the ability to seed Wave with plenty of interesting content and people, more than enough for it to succeed. It will be interesting to see whether Google treats this as another plaything and discards it, as it did with Dodgeball, Notebook, and several other products, or lets it play outside the sandbox with the other big kids like Search, Mail, and Maps.

I am bmatheny on Google Wave.


Migrating from Akamai to CloudFront

There seems to be some confusion about basic usage of S3 and CloudFront and how they are related. There are also some gotchas with the services that may not be obvious at first glance. I recently moved some data from Akamai to S3/CloudFront and had to 'translate' concepts for former Akamai users. Below are some of the items I addressed.

Concepts

  • S3 is a simple key/value store with an HTTP interface.
  • A key is just a string but you can think of it as the filename.
  • The value associated with a key is referred to as an object.
  • The top level container for objects on S3 is referred to as a bucket.
  • CloudFront is a CDN that uses S3 as an origin server.
  • CloudFront has the notion of a distribution. A distribution is simply an S3 bucket that can be served via CloudFront.
  • A CloudFront distribution is associated with a single S3 bucket.
  • CloudFront is a CDN; S3 is more like an HTTP-based file server.
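
The bucket/key/distribution mapping above can be sketched in a few lines of Python. The bucket name and distribution domain here are made up for illustration; only the URL shapes are the point:

```python
# Rough sketch of how S3 and CloudFront address the same object.
# BUCKET and DISTRIBUTION are hypothetical names, not real endpoints.

BUCKET = "example-assets"                  # top-level container for objects
DISTRIBUTION = "d1234abcd.cloudfront.net"  # distribution backed by that bucket

def s3_url(key):
    """An S3 object is addressed by its bucket plus its key (the 'filename')."""
    return "http://%s.s3.amazonaws.com/%s" % (BUCKET, key)

def cloudfront_url(key):
    """CloudFront serves the same key, but from edge caches with S3 as the origin."""
    return "http://%s/%s" % (DISTRIBUTION, key)
```

The same key string appears in both URLs; only the host changes depending on whether you hit the origin store or the CDN in front of it.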

Gotchas

  • CloudFront offers no purge mechanism like Akamai does.
  • You will need to use different keys (filenames) for CloudFront objects that change, since cached copies cannot be expired on demand.
  • S3 has no native notion of users or roles. An access key and a secret key are used for authentication, so if a user who had those keys leaves your organization, you will need to reset them.
  • Because the URI for an object on S3/CloudFront refers to a key, the string must be an exact match. This also means there is no native handling of double slashes; if you are prone to referencing files with a double slash, this can be a problem.
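
Two of these gotchas end up being handled in your own tooling rather than in AWS. A minimal sketch (the paths are hypothetical) of embedding a version in the key, which is the usual stand-in for a purge, and of collapsing double slashes before a URL is ever generated:

```python
import re

def versioned_key(path, version):
    """Embed a version in the key so a changed file gets a brand-new key,
    side-stepping the lack of a purge mechanism.
    e.g. 'css/site.css' at version 2 becomes 'css/site.v2.css'."""
    base, dot, ext = path.rpartition(".")
    if not dot:  # no extension: just append the version
        return "%s.v%d" % (path, version)
    return "%s.v%d.%s" % (base, version, ext)

def normalize_key(key):
    """Collapse accidental double slashes client-side. S3 will not do this
    for you, because the key is just an opaque string: 'img//logo.png' and
    'img/logo.png' are two different keys."""
    return re.sub(r"/{2,}", "/", key)
```

Bumping the version number publishes the change immediately under a fresh key, and the old object can simply be left to expire.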

Best Practices

  • Create separate CNAMEs for your CloudFront distribution and your S3 bucket if the objects in them will be consumed by a web browser. This keeps you flexible if you ever want to point a CNAME at a different CloudFront distribution.
  • Enable logging if you are publishing static assets, particularly JS/CSS files that are subject to change. The logs will help you determine whether content is still in use.
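
As a rough illustration of the first point (the hostnames are hypothetical), generating asset URLs through your own CNAME means a later repoint is a DNS change rather than a code change:

```python
# Hypothetical: reference assets through your own CNAME rather than the raw
# CloudFront domain. If the distribution ever changes, only DNS moves.
ASSET_HOST = "assets.example.com"  # CNAME -> d1234abcd.cloudfront.net (made up)

def asset_url(key):
    """Build a browser-facing URL that never exposes the distribution domain."""
    return "http://%s/%s" % (ASSET_HOST, key)
```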


Really, the only additional thing we needed, besides some user education, was a notion of actual users with roles. We needed regular users tied to a particular bucket with basic rights (upload, download, etc.) and admin users that could create new buckets/distributions. We decided to go with Bucket Explorer for this. Bucket Explorer essentially sits on top of S3 and provides an Explorer-like interface for users. In addition, it allows you to create BE users with their own roles (admin, user), usernames, and passwords. The nice thing about this is that I don't have to hand out the access and secret keys to a large number of users.


Back at it

Back at the end of July I ended my tenure as VP of Product at Compendium Blogware. I had a wonderful experience with that company and had the good fortune of working with a really talented, smart group of engineers. Despite that, as we neared the end of our first OEM integration, I found myself looking for a new challenge. Compendium had gotten to the point of being on auto-pilot from a product perspective and the big challenges that I originally joined for had been tackled.

Towards the middle of July I was contacted by a recruiter for ChaCha, a local Q&A/search service. ChaCha was looking for a replacement for its recently departed VP of Operations, which didn't sound like a good fit for me, but I was interested in the scale and the problem space and agreed to talk with them. After several rounds of interviews I ended up accepting a position as VP of Engineering and started on August 10th.

Since joining ChaCha I have also taken on the operations, QA, and helpdesk teams, as well as some PM and UI responsibilities. The issues and problems I expected to tackle are as challenging as I anticipated, but of a completely different nature. ChaCha has an excellent team, and although I'm still the "new guy" I've been able to make (from my perspective) a lot of positive changes, particularly with respect to team structure and product development processes.

So, now that I'm not at a blogging company I'll be getting back to my trusty blogger blog and writing again. Looking forward to it.