Opinion: Data storage is growing faster than total bandwidth. Does this mean we're at the point of overlapping backups?
A year ago, when I was testing 10-Gigabit Ethernet switches at the University of Hawaii, several things became clear. The first was that those switches were really fast. They really can pump data at their rated speeds, and sometimes that means three or four parallel 10-gig channels. That's a lot of data.
At the time, I wondered what all that bandwidth might be used for. Sure, the backbone of a big ISP might need that, but that's a limited market. And it seemed that the number of enterprises that might need multiple 10-gig pipes was even more limited. But I guess that every time I wonder whether there's such a thing as too much bandwidth, I should just slap myself and think about something else. There's never too much bandwidth. It's kind of like how you can't be too thin or too rich (not that I'd know about either of those).
But as I talk with IT managers, it's clear that if there's one thing they need, it's faster access to storage, especially backup storage. Unless companies are willing to invest in truly vast backup facilities, they face a difficult choice: spend a fortune to back everything up, or spend less and back up less often. The reason? Backup technology is still slow.
Yes, it's true that Fibre Channel runs at 2Gbps these days. But when you're backing up a large enterprise, you still need a lot of 2-gig pipes. More important, you need to be able to write to your backup media at high speed, and that's hard to do. Right now, the best alternative is to back up your disks to more disks, but that's expensive, and it can make off-site storage tough. Tapes are more portable, but they're slow.
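Some back-of-the-envelope arithmetic shows the scale of the problem. Here's a minimal sketch; the dataset size and the efficiency figure are illustrative assumptions, not measurements from any particular product:

```python
def backup_hours(data_tb, link_gbps, efficiency=1.0):
    """Hours to move data_tb terabytes over a link_gbps pipe.

    efficiency models the gap between a link's rated speed and its
    effective throughput (1.0 = the link runs at full rated speed,
    which real adapters rarely achieve).
    """
    bits = data_tb * 1e12 * 8                      # terabytes -> bits
    seconds = bits / (link_gbps * 1e9 * efficiency)
    return seconds / 3600


# A hypothetical 100TB enterprise over a single 2Gbps Fibre Channel pipe:
print(f"{backup_hours(100, 2):.1f} hours")   # roughly 111 hours -- days, not a weekend

# The same data over a 10Gig link, assuming (optimistically) full line rate:
print(f"{backup_hours(100, 10):.1f} hours")  # roughly 22 hours
```

The second number is why the bigger pipe matters; the `efficiency` knob is why it isn't the whole story.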
As the amount of data stored in the enterprise grows, so does the backup problem. On one hand, the data becomes more valuable; on the other, it becomes harder to protect. If backups could be faster, the process might be more palatable. That's where 10Gig comes in.
A bigger pipe will help move the information needed for backups faster, making the whole process faster. But there are some gotchas here. For one thing, just making the pipe bigger doesn't solve much by itself. It's got to be bigger the whole way from the disk heads to the backup medium. There have been improvements here, but more are needed. For example, you can now buy 10Gig server adapters. The problem is that while they do support a 10Gig data rate, their effective throughput isn't nearly that fast.
The same is true at the other end. While there are a few storage devices that can handle the 10Gig data rate, getting the information saved to the backup medium isn't necessarily all that fast. So where does that leave us?
Right now, it means you have to plan your really big backups for days when the company is shut down, which means your weekends are toast. And it means you still need really fast backup storage just for incremental backups. But in the long run, it means we're running out of options as data storage grows faster than total bandwidth. And if your backups take much longer, you could find yourself having to start the next incremental before you've finished the last one.
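That overlap point can even be sketched out. The figures below (20TB of data, 2 percent weekly growth, a single 2Gbps pipe, a 48-hour weekend window) are purely hypothetical assumptions for illustration:

```python
def hours_to_copy(data_tb, link_gbps):
    """Hours to move data_tb terabytes at the link's full rated speed."""
    return data_tb * 1e12 * 8 / (link_gbps * 1e9) / 3600


def weeks_until_overlap(data_tb, weekly_growth, link_gbps, window_hours,
                        max_weeks=520):
    """First week in which the full backup no longer fits the window.

    Data compounds at weekly_growth per week while the pipe stays the
    same size; past this week, the next backup must start before the
    last one finishes. Returns None if it never happens within max_weeks.
    """
    for week in range(max_weeks):
        if hours_to_copy(data_tb, link_gbps) > window_hours:
            return week
        data_tb *= 1 + weekly_growth
    return None


# 20TB today, growing 2% a week, over 2Gbps, with a 48-hour weekend window:
print(weeks_until_overlap(20, 0.02, 2, 48))  # 39 -- under a year out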