Migrating from Akamai to CloudFront

There seems to be some confusion about basic usage of S3 and CloudFront and how the two services relate. There are also some gotchas that may not be obvious at first glance. I recently moved some data from Akamai to S3/CloudFront and had to 'translate' concepts for former Akamai users. Below are some of the items I addressed.

Basic Concepts
  • S3 is a simple key/value store with an HTTP interface.
  • A key is just a string but you can think of it as the filename.
  • The value associated with a key is referred to as an object.
  • The top level container for objects on S3 is referred to as a bucket.
  • CloudFront is a CDN that uses S3 as an origin server.
  • CloudFront has the notion of a distribution. A distribution is essentially an S3 bucket exposed through CloudFront's edge network under its own domain name.
  • A CloudFront distribution is associated with a single S3 bucket.
  • CloudFront is a CDN; S3 is more like an HTTP-based file server.
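The relationship between buckets, keys, and distributions can be sketched as plain URL construction. A minimal sketch; the bucket name and distribution domain below are hypothetical, and the S3 URL uses the virtual-hosted style:

```python
def s3_url(bucket: str, key: str) -> str:
    # Virtual-hosted-style S3 URL: the bucket is part of the hostname,
    # and the key is everything after the first slash.
    return f"https://{bucket}.s3.amazonaws.com/{key}"

def cloudfront_url(distribution_domain: str, key: str) -> str:
    # A CloudFront distribution serves the same keys as its origin
    # bucket, just from a different (edge-backed) hostname.
    return f"https://{distribution_domain}/{key}"

# The same key, reachable via either service:
print(s3_url("my-bucket", "css/site.css"))
print(cloudfront_url("d1234.cloudfront.net", "css/site.css"))
```

Note that the key is the same in both URLs; only the hostname changes, which is what makes the CNAME advice below workable.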

Gotchas
  • CloudFront offers no purge mechanism the way Akamai does.
  • You will need to use different keys (filenames) for CloudFront objects that change, since cached copies will not expire on demand.
  • S3 natively has no real notion of users or roles. An access key and secret key are used for authentication. If a user who had these keys leaves your organization, you will need to reset them.
  • Because the URI for an object on S3/CloudFront refers to a key, the string must match exactly. This also means there is no native handling of double slashes: "css//site.css" and "css/site.css" are different keys. If you are prone to referencing files with a double slash, this can be a problem.
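Both the no-purge and exact-match gotchas can be handled at publish time by cleaning up the key and embedding a short content hash in it, so changed content automatically gets a fresh key. A minimal sketch; the helper names are mine, not an AWS API:

```python
import hashlib

def normalize_key(path: str) -> str:
    # S3 keys are exact strings: "css//site.css" and "css/site.css"
    # name different objects, so collapse accidental double slashes.
    while "//" in path:
        path = path.replace("//", "/")
    return path.lstrip("/")

def versioned_key(path: str, content: bytes) -> str:
    # Since you cannot purge CloudFront's cache, embed a short content
    # hash in the filename; changed content then lives at a new key.
    path = normalize_key(path)
    digest = hashlib.md5(content).hexdigest()[:8]
    base, dot, ext = path.rpartition(".")
    if dot:
        return f"{base}.{digest}.{ext}"
    return f"{path}.{digest}"

# e.g. css/site.css -> css/site.<hash>.css
print(versioned_key("css//site.css", b"body { margin: 0; }"))
```

The HTML referencing the asset is regenerated with the new key on each deploy, so stale cached copies simply stop being requested.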

Best Practices

  • Create separate CNAMEs for your CloudFront distribution and your S3 bucket if the objects contained in them will be consumed by a web browser. This keeps you flexible if you ever want to point a CNAME at a different CloudFront distribution.
  • Enable logging if you are publishing static assets, in particular JS/CSS that are subject to change. This will help you determine whether content is still in use.
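For the logging tip, determining whether an asset is still in use comes down to scanning the access logs for the keys that were actually requested. A rough sketch, assuming Amazon's server-access-log request field looks like "GET /key HTTP/1.1"; the regex and function name are mine:

```python
import re

# Capture the key from the request field of an access-log line,
# e.g. "GET /css/site.css HTTP/1.1" -> css/site.css
REQUEST_RE = re.compile(r'"GET /([^ ?"]+)')

def requested_keys(log_lines):
    """Return the set of keys that were actually requested."""
    keys = set()
    for line in log_lines:
        match = REQUEST_RE.search(line)
        if match:
            keys.add(match.group(1))
    return keys

line = '... "GET /js/app.js HTTP/1.1" 200 ...'
print(requested_keys([line]))  # the keys seen in the logs
```

Diffing this set against the keys currently in the bucket tells you which assets can safely be retired.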

User Management
Really, the only additional thing we needed, besides some user education, was the notion of actual users with roles. We needed regular users tied to a particular bucket with basic rights (upload, download, etc.) and admin users that could create new buckets, distributions, and so on. We decided to go with Bucket Explorer for this. Bucket Explorer sits on top of S3 and provides an Explorer-like interface for users. In addition, it allows you to create Bucket Explorer users with their own roles (admin, user), usernames, and passwords. The nice thing about this is that I don't have to hand out the access and secret keys to a large number of users.
