Smart and Gets Things Done Isn't Enough (redux)

I've recently had the pleasure of helping some clients with staffing needs. I haven't been helping from a recruiting perspective but more from a candidate vetting and team building perspective. I have long been from the Spolsky school of thought on hiring. Candidates should be smart and get things done.

Joel's premise is very simple. When recruiting, don't worry about years of experience, educational background, publications, and so on. Instead, ask yourself two key questions: is the candidate smart, and can they get things done? If so, hire; if not, pass. For a town like Indianapolis this can be a great way to hire, especially if you are using a technology stack that most candidates don't have experience in.

The thinking behind this premise is that if someone is smart, learning a new technology, language, or framework is trivial. That's the easy part. I've personally found this to be very true and have often hired people with little to no experience in the technologies employed at a company. But smarts alone aren't enough. If you're a PhD with loads of publications who can't code your way out of a paper bag or, worse, just can't produce, then you don't meet the second criterion. Hire people who get things done.

Recently I've been helping a client hire a senior systems engineer. As I started talking to potential candidates I realized that I needed to add a third criterion: understands the fundamentals. I look at it this way. If I were looking for a doctor, smart and gets things done would only get them so far. If they can't diagnose, based on experience and an understanding of some fundamentals, I don't want them as my doctor. There are just some things that doctors need to know.

Granted, I'm not hiring anyone who will be in a life and death situation. However, there are some things, in my opinion, that people need to know. It's not a lot, but it's important. If you're a systems engineer and you can't tell me the difference between a switch and a hub, or describe some basic RAID levels and when you'd use them, those seem like basic fundamentals to me. If you're a software engineer and you can't give me a basic rundown of the differences between a hash table and a binary tree, those are fundamentals.
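For the hash table versus binary tree question, the kind of answer I'm looking for boils down to a trade-off, sketched below in Python. This is only a sketch; I use a sorted list with bisect to stand in for a balanced tree.

```python
import bisect

# Hash table: Python's dict. Average O(1) insert and lookup, but the keys
# have no useful ordering.
ht = {}
for word in ["pear", "apple", "mango"]:
    ht[word] = len(word)
assert "apple" in ht  # constant-time membership test

# Binary search tree: O(log n) lookup, but the keys stay sorted, so you
# get in-order traversal and range queries for free. A sorted list plus
# bisect stands in for a balanced tree here.
keys = []
for word in ["pear", "apple", "mango"]:
    bisect.insort(keys, word)
assert keys == ["apple", "mango", "pear"]  # in-order traversal

# A range query ("every key between 'a' and 'n'") is natural on a tree
# and awkward on a hash table:
lo = bisect.bisect_left(keys, "a")
hi = bisect.bisect_right(keys, "n")
assert keys[lo:hi] == ["apple", "mango"]
```

A candidate who can articulate that trade-off (constant-time lookup versus ordered access) has the fundamental I'm after.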

Can people learn the fundamentals? Sure. Of course. Absolutely. And a smart person will pick them up very quickly. However, while they're learning those fundamentals they're likely making mistakes, some that could be very costly to recover from. This isn't a new idea but my point is this. Knowing the fundamentals gives you a broader base for sound decision making, and great employees make sound decisions.


8 Apps for Mac Developers

I write software on my Mac. Java, Ruby, PHP, Groovy, whatever. Sometimes it's running in a VM on my Mac, sometimes it's running natively. When I first switched to a Mac 3 years ago, I was coming from 7 years of being a Linux desktop guy who was used to WindowMaker. Keystrokes and shortcuts for everything, the shell and VIM, obnoxious customizations; that's the life for me. Below is my short list of apps that help me achieve Mac zen.

Alfred App is a functional replacement for Spotlight and mostly for Quicksilver as well. In a keystroke you can launch an app, open a file, calculate some numbers, manipulate iTunes, go to a web page, or any number of other things. Alfred learns from your usage patterns and gets better (quickly) at knowing what you intend to do. Love it. Pay for the Powerpack, it's cheap and supports the project.


SizeUp allows you to entirely manage your windows with only keyboard shortcuts. Maximize, move between monitors, split the screen, center, etc. It's magical.

Quicksilver Triggers
So why do I still have my trusty Quicksilver around if I have the amazing Alfred? Triggers! I like to be able to fire up an app via shortcuts. For example, the terminal console comes up automatically when I type 'alt + c'. Very nice.

zsh and oh-my-zsh
I switched to zsh after almost 10 years as a bash user and I'm still finding all kinds of useful things it does for you. The oh-my-zsh add-on adds a bunch of nice functionality: themes, plugins, etc. You can find my fork of oh-my-zsh here. Some of my favorite zsh features:

  • Intelligent tab completion. Say you have the following files in a folder: servs.csv, servs-1.csv, servs-2.csv, ..., servs-n.csv. Just type -2 (or -n), hit tab, and zsh will complete the file name for you. No typing out the full servs- prefix like in bash.
  • Share history across sessions. If you ever have long running shell sessions, being able to hop back and forth between them and have a shared history is extremely nice.
  • Global variables. I find myself often doing something like: find . | grep -v svn or ls -la | grep -v svn. Just do alias -g XC='|grep -v svn'. Now you can run find . XC and be done with it.

Growl and Hardware Growler
Growl is great. It can annoy me about all kinds of things: failed unit tests, coworkers bothering me, etc. HardwareGrowler is a nice add-on that lets me know about all things hardware related: network card status, AirPort status, USB device changes, etc. Again, no sexy screenshot here, but I find it pretty useful.

Skim PDF Reader and Note Taker
I read a lot of mostly technical and legal documents as PDF files. OS X comes with a couple of options for reading PDFs (Preview is built in, Adobe Reader is free), but I dislike both. Reader in particular is a memory hog: it takes up lots of disk space, is slow, has a terrible update mechanism that likes to take over my computer, and is missing lots of features (unless you upgrade to a professional version). I'm just not a fan. Skim is a free PDF reader and note taker for OS X. I especially like Skim because it's lightweight (like Preview) but feature rich (note taking, bookmarks, highlighting). This screenshot shows 2 different types of notes (inline and anchored) and highlighting.

Homebrew is a package manager for OS X. I switched to it from MacPorts because, frankly, Port sucked and often had missing or out-of-date packages. Once you've installed brew you'll instantly have access to all of the nice command line tools you've become accustomed to: GNU coreutils (ls, df, etc.), memcached, links, and so on. Just brew install foo or brew search foo. That's it.

Okay, so Spaces isn't exactly new, but it's essential to my workflow (which may or may not be fairly specific to me). Think of Spaces as a collection of desktops. Each desktop can have its own apps running on it. I have 4 spaces (each accessible via a keyboard shortcut, alt-1 through alt-4) that each contain specific windows.

  • Space 1: Instant messenger, IRC, email
  • Space 2: Web browser
  • Space 3: Console windows, IDE, debuggers, VMware, etc
  • Space 4: Multimedia, iTunes, etc
This allows me to switch between essentially different work environments and helps me minimize distractions. IM windows stay on the IM space while I'm doing development work, etc. SizeUp also helps me out here by allowing me to very easily move windows between spaces.

That's it for now.


What's next?

My last day at ChaCha was on 10/15/2010. Since I left I've done a bunch of traveling, seen some family, done some volunteer work and generally tried to relax a bit. No one seems to care about that though, everyone just wants to know what's next. Well..


Google Wave is to the Internet what Lost is to TV

I remember when the TV show Lost first came on the air. People loved it. People who hadn't even seen it loved it and spewed their praise. Years went by and that excitement dwindled but you still had the die hards, screaming for more. Eventually, not wanting to be entirely left out, I sat down and watched an episode. I just didn't get it. I mean I really didn't get it. I wanted to like it but I ultimately just had no clue what was going on.

Google Wave is kind of like that. I got my invite last week and hopped on, extremely excited. But there were no people to interact with. It was like that big desert island in Lost. There were supposed to be some people somewhere on the island, but you don't know who they are or what their intentions are. I just kind of wandered through the mass of with:public waves trying to catch one that could hold my interest, but found nothing.

Unlike Lost though, I think Google Wave just needs critical mass to succeed. I remember when I was an undergrad at Purdue I did the math on GNUnet and determined that in order for it to succeed as a straight P2P network it would require content to be seeded. I went as far as determining that all major P2P networks must have been seeded at some point.

Fortunately Google has the ability to seed Wave with plenty of interesting content and people, more than enough for it to succeed. It will be interesting to see whether Google treats this as another plaything and discards it, as it did with Dodgeball, Notebook, and several other products, or lets it play outside the sandbox with all the other big kids like Search, Mail, Maps, etc.

I am bmatheny on Google Wave.


Migrating from Akamai to CloudFront

There seems to be some confusion on basic usage of S3 and CloudFront, and how they are related. There are also some gotchas when it comes to using the services that may not be obvious at first glance. I recently moved some data from Akamai to S3/CloudFront and had to 'translate' concepts for former Akamai users. Below are some of the items I addressed.


The Basics

  • S3 is a simple key/value store with an HTTP interface.
  • A key is just a string but you can think of it as the filename.
  • The value associated with a key is referred to as an object.
  • The top level container for objects on S3 is referred to as a bucket.
  • CloudFront is a CDN that uses S3 as an origin server.
  • CloudFront has the notion of a distribution. A distribution is simply an S3 bucket that can be served via CloudFront.
  • A CloudFront distribution is associated with a single S3 bucket.
  • CloudFront is a CDN; S3 is more like an HTTP-based file server.
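Those relationships can be modeled in a few lines of Python. This is just a toy illustration of the data model, not a real S3 client; the function names are mine.

```python
# Toy model of the S3 data model: buckets map keys (plain strings)
# to objects. Illustration only; put_object/get_object are made-up
# names, not a real client library.
buckets = {}

def put_object(bucket, key, data):
    buckets.setdefault(bucket, {})[key] = data

def get_object(bucket, key):
    # S3 matches the key string exactly; there is no path normalization.
    return buckets.get(bucket, {}).get(key)

put_object("my-assets", "css/site.css", b"body { margin: 0 }")
assert get_object("my-assets", "css/site.css") == b"body { margin: 0 }"

# The "directory" in the key is an illusion; a key is one flat string.
assert get_object("my-assets", "css") is None
```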


Gotchas

  • CloudFront offers no purge mechanism like Akamai does.
  • You will need to use different keys (filenames) for CloudFront objects that change, since you can't force old copies out of the edge caches.
  • S3 has no real notion of users or roles, natively. A single access key and secret key are used for authentication, so if a user who had these keys leaves your organization, you will need to reset them.
  • Because the URI for an object on S3/CloudFront refers to a key, the string must be an exact match. This also means that there is no native way to handle double slashes. If you are prone to referencing files with a double slash, this can be a problem.
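The last two gotchas can be demonstrated the same way. Keys are matched byte for byte, so a double slash simply misses; and since you can't purge, the usual workaround is to bake a version into the key itself. The versioned_key helper below is hypothetical, just one way to do it:

```python
import hashlib

# Keys are matched byte for byte, so these are two different keys:
store = {"img/logo.png": b"<png bytes>"}
assert store.get("img//logo.png") is None  # double slash: not found

# No purge mechanism means changed content needs a new key. One common
# approach: embed a short content hash in the filename. This helper is
# hypothetical, not part of any AWS tooling.
def versioned_key(name, content):
    digest = hashlib.md5(content).hexdigest()[:8]
    base, dot, ext = name.rpartition(".")
    if dot:
        return "%s-%s.%s" % (base, digest, ext)
    return "%s-%s" % (name, digest)

key = versioned_key("css/site.css", b"body { margin: 0 }")
assert key.startswith("css/site-") and key.endswith(".css")
```

When the CSS changes, the hash (and therefore the key) changes, so edge caches fetch the new object without any purge.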

Best Practices

  • Create separate CNAMEs for your CloudFront distribution and your S3 bucket if the objects contained in them will be consumed by a web browser. This will keep you flexible if you ever want to point a CNAME at a different CloudFront distribution.
  • Enable logging if you are publishing static assets, in particular JS/CSS that are subject to change. This will help you determine whether content is still in use.


Really the only additional thing we needed, besides some user education, was the notion of actual users with roles. We needed regular users who were tied to a particular bucket and had basic rights (upload, download, etc.) and admin users who could create new buckets/distributions, and so on. We decided to go with Bucket Explorer for this. Bucket Explorer essentially sits on top of S3 and provides an Explorer-like interface for users. In addition, it allows you to create BE users with their own roles (admin, user), usernames, and passwords. The nice thing about this is that I don't have to hand out the access and secret key to a large number of users.