Back in April, I blogged about how my eWEEK Labs comrades and I had thrown up this WordPress blog on an Amazon Web Services EC2 instance in search of cloud testing adventure. Fortunately for us thrill-seekers, we got a big old lump of adventure early on, when our newsletter distribution sent a concentrated burst of traffic at our newly-minted site, melting our “can we get away with this” micro-sized EC2 instance.
After a bit of instance resizing and some tweaks to our caching and CDN configuration, we made it through the next few months without any further meltdowns. However, I had a long list of “ought to dos” for nailing down the system administration setup for the site that I never quite got around to, including:
- Establishing some decent version control for our WordPress code and regular database backups
- Setting up site monitoring, so we wouldn’t have to rely on tsking from the IT department sys admin who’d previously tended the site (without incident) to find out when something went wrong
- Providing for some semi-automatic scaling, in anticipation of a future explosion of site traffic and overall fame
- Setting up some sane processes for applying system updates to the Ubuntu server running on our EC2 instance
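The database backup item, for instance, can be as simple as a nightly mysqldump piped through gzip into a dated file. Here’s a minimal sketch of that kind of job; the database name (wpblog), the credentials file, and the backup directory are placeholder assumptions, not details from our actual setup:

```shell
#!/bin/sh
# Nightly dump of the WordPress database to a dated, compressed file.
# wpblog, ~/.wp-db.cnf, and the backup directory are all hypothetical.
BACKUP_DIR="${HOME:-/tmp}/wp-backups"
OUTFILE="${BACKUP_DIR}/wpblog-$(date +%Y-%m-%d).sql.gz"

mkdir -p "${BACKUP_DIR}"
if command -v mysqldump >/dev/null 2>&1; then
  # --defaults-extra-file keeps the DB password out of the process list
  mysqldump --defaults-extra-file="${HOME:-/tmp}/.wp-db.cnf" wpblog \
    | gzip > "${OUTFILE}"
fi
```

A crontab entry along the lines of `15 3 * * * /home/ubuntu/wp-backup.sh` (path hypothetical) would run it nightly; copying the dumps off the instance, to S3 for example, guards against losing the EC2 instance itself.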
I recently reviewed a relatively new Platform as a Service product, called PHP Fog, which supports PHP and MySQL, and which, as a PaaS, offered me a solution for sweeping some of my untended-to sys admin chores under a rug:
- Version control via Git is rolled into the product
- PHP Fog cloud accounts come with an associated New Relic account for keeping tabs on performance
- Apps on PHP Fog are set up behind a load balancer, with a separate db instance and support for manually spawning new PHP-serving instances
- PaaS means never having to update the OS
Having enjoyed my review experience with PHP Fog (and noting its relative low cost among comparable solutions), I migrated our blog over to a “silver cloud” account and, in preparation for our newsletter distribution day (today), bumped up our instance count to two instances.
First thing this morning, I checked out our site to find that even in the relatively hands-off administration world of PaaS, there remained plenty of room for adventure. I visited a bunch of different pages on our site, and on many of those page loads, the site was serving up a gzipped file instead of the expected Web page. The caching plugin we use for the site, wp-super-cache, includes a compression option that doesn’t behave well with every Web host.
I hadn’t noticed this issue during our tests, but I figured I’d resolve it easily enough by disabling the compression feature. Most WordPress plugins store their settings in MySQL, but wp-super-cache stores its settings in a configuration file. The plugin is written to be configured through the WordPress admin interface, but on PHP Fog, files modified through the version control system don’t automatically sync up with files modified through the app itself, so it’s important to make these changes through version control. PHP Fog isn’t alone in this: I’ve also experienced this file modification mismatch with Red Hat’s OpenShift Express, although the Red Hat service provides an option for downloading files changed through the app for adding back into version control.
Anyway, I pulled down our site code on my home computer with Git, turned off the compression, and found that the errors persisted. Figuring that the error might be tied to too much traffic, I spawned another pair of instances to help cover the load (PHP Fog lets you spin these up and down, and prorates the cost per day). I later noticed that some of my instances still had compression turned on, and some had it turned off–the config file change hadn’t made it to all of my instances.
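The change itself is a one-line flip in wp-super-cache’s configuration file. A minimal sketch of that workflow, assuming a local clone of the app repository (the sed pattern follows wp-super-cache’s convention of storing the setting as a PHP variable in wp-content/wp-cache-config.php; the commit message and branch name are arbitrary):

```shell
# Run inside a local clone of the app repository.
CONFIG="wp-content/wp-cache-config.php"
if [ -f "$CONFIG" ]; then
  # Flip wp-super-cache's compression flag from on (1) to off (0).
  sed -i 's/^\$cache_compression = 1;/$cache_compression = 0;/' "$CONFIG"
  # Commit and push; PHP Fog redeploys the app from the pushed code.
  git commit -am "Disable wp-super-cache compression"
  git push origin master
fi
```

As I found, though, pushing the change doesn’t guarantee that every running instance picks it up.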
Intending to get our instances back in sync, I ratcheted our instance count back down to one, which did away with our gzip-serving problem, replacing it with PHP Fog’s “we’re currently starting your server” page.
As a PHP Fog support rep (the company has very friendly, responsive support) explained to me through the handy live chat function on the company’s site, when I scaled my instances up and back down again, the operation triggered some sort of race condition that left our site out of commission for about an hour and a half. The rep added that the PHP Fog team was addressing the issue, and that it shouldn’t happen again.
The up-and-down scaling issue was a definite bug–that sort of scaling is a central part of the PHP Fog product. The file syncing issues I knew about before deploying, and I’m interested to see how the company addresses them–its documentation references future distributed file system functionality. The wp-super-cache compression snafu is on me, for lack of sufficient testing. That option is switched off by default because of the issues it can cause, and I had it enabled because it worked fine in my previous, straight IaaS configuration.
I might have been more upset by these snags if I hadn’t been in search of testing adventure in the first place — it would, after all, have been much simpler to leave the site in the capable hands of our company sys admin. All told, I’m planning on sticking with PHP Fog for a while longer, and trying out a handful of other potential WordPress homes with a test domain. I recently toyed with OpenShift Express and WordPress, and I’m intent on getting the open source blogging app running on Microsoft’s Azure.