1. Defining Big Traffic
2. What Is Causing Big Traffic?
Big Traffic stems from IT advances instituted in only the last few years: widespread adoption of virtualization and scale-out systems; long-distance live migrations; data replication and backup; and cutting-edge applications specifically written for WAN-based distribution, such as Hadoop, MapReduce, MongoDB and Cassandra.
3. Are There Any Parameters?
The volume of Big Traffic is growing with no end in sight; Forrester Research predicts that machine-generated application data will grow 50 percent annually for at least the next several years. Furthermore, according to a report sponsored by storage giant EMC, the digital universe will grow 44-fold between 2009 and 2020. That's a serious amount of data to move.
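These projections can be sanity-checked with simple compound-growth arithmetic. The figures (44x over 2009-2020, 50 percent annual growth) come from the reports cited above; the calculation itself is just an illustration:

```python
# Compound-growth sanity check for the projections above.
# Figures from the cited reports: 44x growth of the digital universe
# between 2009 and 2020, and 50 percent annual growth of
# machine-generated application data.

years = 2020 - 2009  # 11-year window in the EMC-sponsored report

# Implied compound annual growth rate (CAGR) behind a 44x increase
implied_cagr = 44 ** (1 / years) - 1
print(f"44x over {years} years implies ~{implied_cagr:.0%} annual growth")

# What 50 percent annual growth compounds to over the same window
growth_50x = 1.5 ** years
print(f"50% annual growth over {years} years is a ~{growth_50x:.0f}x increase")
```

In other words, the two projections are roughly consistent: sustained growth anywhere near 50 percent a year produces an expansion on the order of the 44-fold figure.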
4. Who Is Most Affected by the Increase in Big Traffic?
5. Who Faces the Biggest Big Traffic Challenges?
The rise of Big Traffic poses major challenges for enterprise data centers and their management, which must solve problems of latency and inconsistent throughput in data movement while keeping basic storage and data-access functionality up and running across the system. These issues directly affect business continuity, disaster recovery and mission-critical operations.
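The article doesn't quantify the latency problem, but the classic bandwidth-delay arithmetic sketches why WAN latency can cap throughput regardless of raw link speed. The window size and round-trip time below are illustrative assumptions, not figures from the source:

```python
# Illustrative only: why latency limits throughput on long-distance links.
# A sender with a fixed TCP window can keep at most one window of data
# "in flight" per round trip, so throughput <= window_size / RTT.

window_bytes = 64 * 1024   # assumed 64 KB TCP window (no window scaling)
rtt_seconds = 0.080        # assumed 80 ms coast-to-coast round-trip time

max_throughput_bps = window_bytes * 8 / rtt_seconds
print(f"Max throughput: {max_throughput_bps / 1e6:.2f} Mbit/s")
# Even on a 10 Gbit/s link, this connection cannot exceed that rate
# until the window is widened or the latency is reduced.
```

This is one reason WAN optimization focuses on latency and protocol tuning rather than simply buying more bandwidth.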
6. How to Optimize an Existing System for Big Traffic
7. How Realistic Are Current Solutions?
8. Ramifications of Optimizing a Network
When an enterprise optimizes its networks for speed and security, CIOs and CTOs need to know in advance that some of these changes could adversely affect their networks and data centers. For example, new software introduced into a legacy system may not coexist well with older software or older versions, which can cause system crashes.
9. General Guidelines for Optimization
What criteria should an enterprise weigh to ensure that a selected optimization solution has the smallest possible operational footprint on IT as a whole? Key factors include ease of deployment, full transparency to network probes, compatibility with existing systems (testing will be required here) and low power consumption.
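One way to apply these guidelines when comparing vendors is a simple weighted scorecard. The criteria come from the list above; the weights and the candidate's scores are hypothetical:

```python
# Hypothetical scorecard for comparing optimization solutions against
# the criteria above. Weights and 1-5 scores are illustrative only.

criteria_weights = {
    "ease_of_deployment": 0.30,
    "transparency_to_network_probes": 0.25,
    "compatibility_with_existing_systems": 0.30,
    "power_consumption": 0.15,
}

def weighted_score(scores):
    """Return the weighted average of per-criterion scores (1-5 scale)."""
    return sum(criteria_weights[c] * s for c, s in scores.items())

candidate = {
    "ease_of_deployment": 4,
    "transparency_to_network_probes": 5,
    "compatibility_with_existing_systems": 3,  # pending lab testing
    "power_consumption": 4,
}
print(f"Weighted score: {weighted_score(candidate):.2f} / 5")
```

Weighting compatibility and deployment most heavily reflects the article's emphasis on testing against existing systems; an enterprise would adjust the weights to its own priorities.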
10. How Does the Future Look?
Expect a continued increase in data and data movement, the transformation of data centers from static islands of compute and storage resources into unified resource pools, greater use of virtualization as a cost- and labor-saving tool, and more use of cloud services to take the load off in-house data centers.