Ten years ago, when the 9/11 terrorist attacks hit the eastern United States, storage and security concerns about business data weren’t that different from what we have today. Large enterprises housed their data in mainframes and array clusters, and they had archive tapes carted away to mysterious locations. Midrange businesses also did this but on a much smaller scale. Small businesses generally kept everything in PCs or burned their records onto CDs, something that’s rarely done now.
There are three main differences between then and now: Storing data was much slower and the work more tedious because network bandwidth was far more limited; a lot more archive data was stored on physical tape; and today’s physical devices hold an order of magnitude more data than they did in 2001.
Another differentiator is that physical records-protection providers, such as Iron Mountain, were much busier 10 years ago because the Internet was only starting to be used as the conveyor for stored data files.
Now, with broadband connectivity, private networks and solid-state disks as standards, the much faster movement of data from one physical server to on-site or off-site arrays or virtual tape libraries has led to the most important IT phenomenon since the Internet itself: the cloud. Subscription-based cloud services and virtualized storage are by far the most important results of connectivity and capacity upgrades in the last decade.
Rise of the Cloud Storage Option
All the major storage providers and telcos now offer some sort of cloud-storage option, and dozens of start-ups have come into play with inexpensive, and often unlimited, cloud-storage services aimed at consumers, prosumers and solo businesses. If you want to keep your data within sight, companies such as Seagate, Iomega and Buffalo offer terabytes’ worth of storage in inexpensive toaster- and pocket-size devices.
On the enterprise side, entire backup data centers are the status quo. These redundant data centers were once highly expensive investments that, in concept, mirrored a company’s IT operations.
“The whole concept of data center backup and redundancy has been shifted since 9/11,” Eric Lundquist, eWEEK’s editorial leader, wrote in 2009. “I’d argue this has happened as more and more research, money and innovation have moved into this sector.
“However, unless that redundancy is consistently tested, you risk the very real danger of out-of-sync data and out-of-gas electrical backup generators. Backup is now a much more real-time fluid concept that meshes nicely with new developments in cloud computing and virtualization.”
Thanks to all this innovation, we’ve been exposed to plenty of opportunities to keep our data safe. However, the real question is this: Since the events of 9/11 shocked us in so many ways, have we really learned to pay attention to the effective retention of business data? Nearly 3,000 people lost their lives in that infamous terrorist attack, but what’s reported less often is that dozens of organizations lost all or most of their data in those New York City buildings, which housed several data centers.
Not Enough DR in Place
Analysts have reported that 30 percent to 40 percent of all IT shops — and nearly half of SMBs — either have no disaster-recovery system or don’t know how to use it correctly. Even if a shop does have a DR apparatus in place and tests it occasionally, there are many examples of such systems not performing according to plan. If such a disaster were to occur, many of those companies would be out of business.
Cloud-based backup replication, mirroring and disaster recovery are prime examples of how improved bandwidth and storage capacity can help a business survive a disaster. These services are becoming available from vendors such as Acronis, Commvault, Hewlett-Packard, IBM, Nasuni, Seagate i365 and Symantec.
An enterprise doesn’t have to purchase hardware or software for this purpose. It just needs to subscribe to one of these Web services, make sure all critical data is covered, and test it once a quarter or so.
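The periodic test the paragraph above recommends can be automated in a few lines. Below is a minimal, hypothetical sketch in Python: it "restores" a backup copy to a scratch directory and confirms the restored file's checksum matches the live source. The file names are invented for illustration, and the `shutil.copy2` calls stand in for whatever backup and restore operations a real cloud-backup service would expose.

```python
import hashlib
import shutil
import tempfile
from pathlib import Path

def sha256sum(path: Path) -> str:
    """Return the SHA-256 hex digest of a file's contents."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_restore(source: Path, backup: Path) -> bool:
    """Restore the backup to a scratch directory and confirm the
    restored copy's checksum matches the live source file."""
    with tempfile.TemporaryDirectory() as scratch:
        restored = Path(scratch) / source.name
        shutil.copy2(backup, restored)  # stand-in for a real restore call
        return sha256sum(restored) == sha256sum(source)

# Demo: back up a file, then run the quarterly restore check.
with tempfile.TemporaryDirectory() as d:
    live = Path(d) / "critical.db"      # hypothetical critical data file
    live.write_bytes(b"customer records")
    backup = Path(d) / "critical.db.bak"
    shutil.copy2(live, backup)          # stand-in for the backup job
    print(verify_restore(live, backup)) # True when the backup is intact
```

Run on a schedule, a check like this catches silently corrupted or out-of-sync backups before a disaster does, which is exactly the failure mode the analysts cited above warn about.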
On that shocking morning of Sept. 11, 2001, these types of distributed recovery services, then handled primarily over private networks, were deployed mostly by large and high-transaction enterprises. If that event has taught IT professionals anything at all, it ought to be that all organizations need to be prepared for other disastrous events that could happen at any time, at any location — and without warning.