This week we re-launched labs.eweek.com to serve as a home base for the eWEEK Labs team, and as a place to test out some of the products and services we cover in a more real-world setting than we get within the confines of our San Francisco lab.
To that end, we've set up shop in the cloud. We intend to move around among cloud services from time to time, but, as with so many of the products we encounter, we've begun our cloudy adventures at Amazon's EC2 service.
I spun up a fresh copy of WordPress 3.1 atop Ubuntu 10.04, created a child theme based on that project's default Twenty Ten theme, and imported pointers to our reviews and analysis pieces from eWEEK.com, as well as the past few years of blog posts from our assorted Labs blogs.
Over at AWS, I opted for the service's smallest instance size, t1.micro, figuring that if and when we melted our server with overuse, the experience would be educational. After a few quick tests with the open source load testing tool siege, I could tell that it wouldn't take a whole lot to overmatch the t1.micro instance type, so I took a few steps to buttress the site.
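For anyone who wants to try the same thing, a typical siege invocation looks like the sketch below. The URL, concurrency level, and duration here are placeholders for illustration, not the actual figures from my tests, and the command is printed as a dry run so you can inspect it before pointing it at a server you own.

```shell
# Hypothetical siege load test -- all values below are placeholders.
URL="http://example.com/"   # never aim siege at a site you don't control
CONCURRENT=25               # -c: number of simulated concurrent users
DURATION="1M"               # -t: run length (1M = one minute)

# -b runs in benchmark mode (no delay between requests).
# Printed here as a dry run; drop the echo to actually run it.
echo "siege -b -c ${CONCURRENT} -t ${DURATION} ${URL}"
```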
I thought we might get along OK for a while with this configuration, but the first big test would come on newsletter day, when a significant number of visitors would hit the site within a small window. Wednesday morning, when those newsletter visitors started hitting the site, it didn't take long for our shiny new WordPress site to crash.
When the site stopped responding, I stopped the instance at Amazon and started it back up at a larger size. I hadn't set up my computer at home with the EC2 API tools, but they're available in the Ubuntu repositories, so it was easy to take care of that. Eric Hammond has a great how-to on embiggening EC2 instances.
I ratcheted the instance up to c1.medium and associated the larger instance with the site's elastic IP address, which took care of the outage. Then I set about figuring out how to ratchet it back down. I'd been prepared for our site to have trouble at the micro instance size, but I expected it to fare better than it did.
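The stop/resize/start/re-associate sequence works out to a handful of EC2 API tools commands, sketched below. The instance ID and elastic IP are made-up placeholders, and each command is printed as a dry run rather than executed; drop the echo (and have your AWS credentials configured) to do it for real.

```shell
# Sketch of resizing an EBS-backed instance with the classic EC2 API tools.
INSTANCE="i-12345678"        # hypothetical instance ID
ELASTIC_IP="203.0.113.10"    # hypothetical elastic IP

# Printed as a dry run; remove the echo to execute.
echo ec2-stop-instances "$INSTANCE"                 # stop (EBS root volume persists)
echo ec2-modify-instance-attribute "$INSTANCE" --instance-type c1.medium  # resize while stopped
echo ec2-start-instances "$INSTANCE"                # boot at the new size
echo ec2-associate-address -i "$INSTANCE" "$ELASTIC_IP"  # re-attach the elastic IP, dropped on stop
```

Note that the elastic IP has to be re-associated after the stop/start cycle, which is why the site goes dark briefly during the resize.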
I turned next to our server logs and noticed that our newsletter included a broken link (something I should have caught earlier), which was leading to a lot of 404 errors. The WP Super Cache plugin doesn't cache 404s by default, but I figured out how to configure it to do so. We'll see what effect the more expansive caching policy has on the site. I'm sure there are plenty of tweaks we'll have to make along the way, before packing up the site and trying something new.
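Spotting the broken link came down to tallying 404s by path in the access log. The awk one-liner below shows the idea; since I can't reproduce our real logs here, it runs against a tiny inlined sample in Apache's combined log format, where field 9 is the status code and field 7 is the requested path.

```shell
# Build a small sample access log (stand-in for the real server log).
cat > /tmp/access.log <<'EOF'
10.0.0.1 - - [16/Mar/2011:09:01:02 -0800] "GET /broken-newsletter-link HTTP/1.1" 404 512
10.0.0.2 - - [16/Mar/2011:09:01:03 -0800] "GET /broken-newsletter-link HTTP/1.1" 404 512
10.0.0.3 - - [16/Mar/2011:09:01:04 -0800] "GET / HTTP/1.1" 200 8192
EOF

# Print 404'd paths, then count and rank them, worst offender first.
awk '$9 == 404 { print $7 }' /tmp/access.log | sort | uniq -c | sort -rn
```

On the sample data this prints a single line showing two hits on /broken-newsletter-link, which is exactly the shape of report that makes a bad newsletter link jump out.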