Defining Big Traffic
Big Traffic is server-to-server traffic traversing the WAN links that connect data centers. It differs from most Big Data workloads, which typically originate as user-to-machine or machine-to-machine traffic.
What Is Causing Big Traffic?
Big Traffic stems from IT advances instituted only in the last few years: widespread adoption of virtualization and scale-out systems; long-distance live migrations; data replication and backup; and cutting-edge applications written specifically for WAN-based distribution, such as Hadoop, MapReduce, MongoDB and Cassandra.
Are There Any Parameters?
The volume of Big Traffic is growing with no end in sight: Forrester Research predicts that machine-generated application data will grow 50 percent annually for at least the next several years. Furthermore, according to a report sponsored by storage giant EMC, the digital universe will grow 44-fold between 2009 and 2020. That is a serious amount of data to move.
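The compound arithmetic behind those projections is easy to check. A quick sketch, using only the figures cited above (the five-year horizon is an illustrative reading of "the next several years"):

```python
# Forrester: ~50% annual growth in machine-generated application data.
annual_growth = 1.50
years = 5  # illustrative horizon for "the next several years"
factor = annual_growth ** years
print(f"50% annual growth over {years} years: {factor:.1f}x")  # ~7.6x

# EMC-sponsored report: 44-fold growth of the digital universe, 2009-2020.
# The implied compound annual growth rate (CAGR) over those 11 years:
cagr = 44 ** (1 / 11) - 1
print(f"Implied CAGR for 44x over 11 years: {cagr:.0%}")  # ~41%
```

Both projections therefore describe broadly similar growth rates, compounded over different horizons.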
Who Is Most Affected by the Increase in Big Traffic?
Large organizations are most affected. Within them, the impact falls hardest on line-of-business staff, CIOs, CTOs, storage administrators, disaster recovery and server staff, networking administrators and data center managers.
Who Faces the Biggest Big Traffic Challenges?
The rise of Big Traffic poses major challenges for enterprise data centers and their management, which must solve problems of latency and inconsistent throughput in data movement while also keeping basic storage and data-access functionality up and running across the system. These problems directly affect business continuity, disaster recovery and mission-critical operations.
How to Optimize an Existing System for Big Traffic
Possible solutions include purchasing more bandwidth; deploying data deduplication, compression and thin-provisioning software; using conventional WAN optimization or application acceleration solutions; and exploring new technologies built specifically for this type of traffic.
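To illustrate why deduplication and compression reduce WAN transfer volume, here is a minimal sketch. The fixed-size chunking, SHA-256 hashing and zlib compression are illustrative choices only, not a reference to any particular product:

```python
import hashlib
import zlib

def dedupe_and_compress(data: bytes, chunk_size: int = 4096):
    """Split data into fixed-size chunks, 'send' each unique chunk once
    (compressed), and reference repeated chunks by hash."""
    store = {}   # chunk hash -> compressed chunk (transferred once)
    refs = []    # ordered hashes needed to reconstruct the data
    for i in range(0, len(data), chunk_size):
        chunk = data[i:i + chunk_size]
        digest = hashlib.sha256(chunk).hexdigest()
        if digest not in store:
            store[digest] = zlib.compress(chunk)
        refs.append(digest)
    wire_bytes = sum(len(c) for c in store.values())
    return store, refs, wire_bytes

# A highly redundant payload, e.g. a replicated database page:
payload = b"replicated database page" * 10_000
store, refs, wire_bytes = dedupe_and_compress(payload)
print(f"original: {len(payload)} bytes, on the wire: {wire_bytes} bytes")
```

Real WAN optimization appliances apply far more sophisticated variants of this idea (variable-length chunking, byte-level delta encoding, persistent caches), but the savings come from the same principle: never send the same bytes across the link twice.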
How Realistic Are Current Solutions?
Some of these potential solutions are simply impossible to implement in certain cases: if the data centers are far apart, for example, optimization may never work as well as needed, and the service provider may not be able to provision bandwidth between the sites fast enough.
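The distance constraint can be made concrete: a single TCP flow's throughput is bounded by its window size divided by the round-trip time, so latency alone can cap transfer rates no matter how much bandwidth is purchased. A rough illustration, with hypothetical link and window figures:

```python
# Single-flow TCP throughput is capped at window / RTT, regardless of
# link capacity. Hypothetical figures for two distant data centers:
window_bytes = 64 * 1024   # classic 64 KiB TCP window (no window scaling)
rtt_seconds = 0.080        # ~80 ms round trip on a long-haul WAN link

max_throughput_bps = window_bytes * 8 / rtt_seconds
print(f"Ceiling per flow: {max_throughput_bps / 1e6:.1f} Mbit/s")  # ~6.6 Mbit/s
```

Even on a 1 Gbit/s link, this flow cannot exceed that ceiling; raising it requires techniques such as TCP window scaling, parallel streams or WAN optimization, not more bandwidth.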
Ramifications of Optimizing a Network
When an enterprise optimizes its networks for speed and security, CIOs and CTOs need to know in advance that some of the changes could adversely affect their networks and data centers. Any time new software is introduced into a legacy system, for example, it may not coexist well with older software or older versions of software, which can cause system crashes.
General Guidelines for Optimization
What criteria should an enterprise apply to ensure that a selected optimization solution has the smallest possible operational footprint on IT as a whole? Key factors include ease of deployment, full transparency to network probes, compatibility with existing systems (testing will be required here) and low power consumption.
How Does the Future Look?
Expect a continued increase in data and data movement; the transformation of data centers from static islands of compute and storage into unified resource pools; more use of virtualization as a cost- and labor-saving tool; and more use of cloud services to take load off in-house data centers.