TECH ANALYSIS: Cisco OTV Extends Layer 2 Between Data Centers

 
 
By Cameron Sturdevant  |  Posted 2010-02-10
Cisco is turning on a new feature in the Nexus 7000 platform that extends Layer 2 networking between geographically separated data centers. Working with VMware, whose VMotion technology requires Layer 2 connectivity, and with NetApp, Cisco seeks to extend the benefits of Layer 2 interconnect across multiple sites.

Cisco is introducing Overlay Transport Virtualization (OTV) to extend Layer 2 networking between data centers.

I had a chance to see the newly announced OTV feature of the Nexus 7000 platform in action at the Cisco campus on announcement day, Feb. 9. Working with engineers from VMware and NetApp (OTV is not limited to technology provided by these partner companies), I saw virtual machines move from one data center to another while still running. VMotion requires Layer 2 connectivity, and providing that connectivity between data centers without OTV would previously have required a Herculean effort.

With OTV, Cisco extends Layer 2 functionality between data centers while containing failures such as broadcast storms inside the physical domain where they occur, rather than propagating the storm's debilitating effects to the other data centers.
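The containment idea is easier to see in sketch form. The Python below is my own illustration, not NX-OS code: it captures two documented OTV behaviors that keep a local Layer 2 meltdown local, namely that spanning-tree BPDUs are never forwarded across the overlay and that unknown-unicast traffic is not flooded between sites. The function and table names are hypothetical.

```python
# Sketch only (not Cisco code): the per-frame decision an OTV edge device makes
# before letting anything leave the local data center across the overlay.
# Broadcast/multicast handling via the transport's multicast groups is omitted.

STP_BPDU_DST = "01:80:c2:00:00:00"   # destination MAC used by spanning tree

def crosses_overlay(dst_mac, remote_macs):
    """Return True only if this frame should be carried to another site."""
    if dst_mac == STP_BPDU_DST:
        return False                 # each site keeps its own spanning tree
    if dst_mac not in remote_macs:
        return False                 # unknown unicast is dropped, not flooded
    return True                      # MAC learned at a remote site: forward it

remote_macs = {"00:50:56:aa:bb:cc"}  # advertised by the other site's edge device
print(crosses_overlay("01:80:c2:00:00:00", remote_macs))  # False (BPDU stays home)
print(crosses_overlay("00:0c:29:12:34:56", remote_macs))  # False (unknown MAC)
print(crosses_overlay("00:50:56:aa:bb:cc", remote_macs))  # True
```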

The fundamental technology I saw in action is made possible by putting a Nexus 7000 at the edge of each data center and then enabling and configuring the OTV feature. A Cisco ACE 4400 Series Global Site Selector, basically a DNS server on steroids, was also brought into play, and IP was used to instantiate a DCI (Data Center Interconnect) between the data centers. As currently implemented, as many as 32 data centers can be interconnected in a single OTV domain.
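To make the site selector's job concrete, here is a conceptual Python stand-in for DNS-based site selection. It is not the GSS's actual software, and the names and addresses are invented; the point is simply that name lookups answer with the data center currently hosting the workload, and that the answer changes when VMotion moves the VM.

```python
# Conceptual model of DNS-based site selection (the "DNS server on steroids").
# Hypothetical names and addresses; not the ACE Global Site Selector's code.

class SiteSelector:
    def __init__(self):
        # application name -> address answered to clients for that application
        self.records = {}

    def register(self, app_fqdn, site_vip):
        self.records[app_fqdn] = site_vip

    def resolve(self, app_fqdn):
        """Answer a client lookup with the data center currently hosting the app."""
        return self.records[app_fqdn]

    def on_vmotion(self, app_fqdn, new_site_vip):
        """Called when a workload moves; new lookups now land at the new site."""
        self.records[app_fqdn] = new_site_vip


gss = SiteSelector()
gss.register("oltp.example.com", "198.51.100.10")   # San Jose data center
print(gss.resolve("oltp.example.com"))              # 198.51.100.10

gss.on_vmotion("oltp.example.com", "203.0.113.10")  # VM migrates to Sacramento
print(gss.resolve("oltp.example.com"))              # 203.0.113.10
```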

Before getting into the ins and outs of Cisco OTV, it's important to be clear that DCI was possible prior to the introduction of the new feature set. The measure that data center managers will need to use when evaluating OTV is how much labor and network implementation effort (if any) will be eliminated relative to the cost of putting Cisco's hardware in every data center. 

During the proof-of-concept demonstration, the two chief examples used to make the case for Cisco OTV were VMware VMotion and Microsoft Exchange running in a clustered environment. The demonstration itself showed a virtual machine running a very heavy OLTP (online transaction processing) benchmark migrate from a pretend data center in San Jose, Calif., to a pretend data center in Sacramento. In fact, the physical infrastructure supporting the demonstration sat in the room next door, with a WAN simulator emulating the cloud through which the IP overlay was created.

Representatives from NetApp were also on hand; the company is part of a jointly enabled and supported offering with Cisco and VMware. NetApp uses its storage hardware plus FlexCache software to handle the changes in usage patterns that come with shifting a virtual machine from one data center to another.

As is the tendency in data center advances, there is a lot of "working on the engine while the car is moving down the freeway" in OTV. The technology demonstration showed how an additional data center could be added with four lines of instruction in about 5 minutes. Of course, the part I was interested in was the work and planning required prior to the magic 5 minutes. 

I plan on returning to Cisco in the near future to use the (very large, very expensive, very hard to move) test rig to more fully investigate the amount of work needed to line up all the moving parts that make OTV work. I suspect, however, based on my time with Cisco, VMware and NetApp, that the product will largely live up to the vendors' claims of simplified setup.

The old saw is that when Cisco is involved, every problem is solved with IP. And that is the case with OTV. Basically, the underlying technology encapsulates Ethernet frames inside IP packets and sends them out through the Nexus 7000 at the network edge. Routing is based on MAC addresses, which are tracked and managed by each of the participating edge devices. The Global Site Selector keeps track of where applications were running and where they are currently located if VMotion pushed the workload to a different data center.
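A stripped-down sketch makes the encapsulation step clearer. The Python below illustrates the MAC-in-IP idea only; it is not Cisco's wire format or control plane, and the class and field names are invented. Each edge device keeps a table of which remote edge device a given MAC address lives behind, and frames bound for a remote MAC get wrapped in an IP packet addressed to that edge.

```python
# Simplified illustration of OTV's core trick: carry Ethernet frames inside IP
# packets between edge devices, and forward based on MAC addresses each edge
# has learned. This is not Cisco's actual encapsulation format.

from dataclasses import dataclass

@dataclass
class EthernetFrame:
    dst_mac: str
    src_mac: str
    payload: bytes

@dataclass
class OverlayPacket:          # the "IP envelope" sent across the DCI
    outer_src_ip: str         # local edge device (Nexus 7000) join interface
    outer_dst_ip: str         # remote edge device that advertised the MAC
    inner_frame: EthernetFrame

class EdgeDevice:
    def __init__(self, join_ip):
        self.join_ip = join_ip
        self.mac_table = {}   # MAC address -> IP of the edge device behind it

    def learn(self, mac, remote_edge_ip):
        """Remote MACs are advertised by their edge device, not discovered by flooding."""
        self.mac_table[mac] = remote_edge_ip

    def encapsulate(self, frame):
        remote_ip = self.mac_table.get(frame.dst_mac)
        if remote_ip is None:
            return None       # unknown destination: keep it local, don't flood
        return OverlayPacket(self.join_ip, remote_ip, frame)

sj_edge = EdgeDevice("192.0.2.1")                    # San Jose edge device
sj_edge.learn("00:50:56:aa:bb:cc", "192.0.2.2")      # VM now lives behind Sacramento
pkt = sj_edge.encapsulate(
    EthernetFrame("00:50:56:aa:bb:cc", "00:50:56:11:22:33", b"app data"))
print(pkt.outer_dst_ip)                              # 192.0.2.2
```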

For clients with existing connections, this means that state is maintained and firewalls and load balancers are kept happy, with the addition of a bit of latency as traffic is directed to where the virtual machine is currently located. New connections are directed to the virtual machine with no additional overhead.
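As a rough mental model (mine, not Cisco's documented mechanics), the split works like this: established flows keep entering at the site where their state lives and pay one extra trip across the overlay, while new flows get the workload's current location from the site selector and land there directly. The Python below is a conceptual sketch with invented names.

```python
# Conceptual sketch of how existing vs. new client connections behave after a
# VM moves between data centers. Names and structures are hypothetical.

CURRENT_SITE = {"oltp-vm": "sacramento"}          # updated after VMotion

flow_table = {                                    # established connections keep
    ("198.51.100.77", 52311): "san-jose",         # their original entry site
}

def entry_site_for(client_ip, client_port, vm="oltp-vm"):
    """Return (entry site, whether an extra hop across the overlay is needed)."""
    flow = (client_ip, client_port)
    if flow in flow_table:
        # Existing connection: firewall and load-balancer state lives at the
        # original site, so traffic enters there and crosses the overlay to
        # reach the VM; a bit of extra latency, but the session survives.
        entry = flow_table[flow]
        return entry, entry != CURRENT_SITE[vm]
    # New connection: the site selector's answer points at the current site.
    flow_table[flow] = CURRENT_SITE[vm]
    return CURRENT_SITE[vm], False

print(entry_site_for("198.51.100.77", 52311))   # ('san-jose', True)  extra hop
print(entry_site_for("203.0.113.5", 40001))     # ('sacramento', False) direct
```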

Layer 2 DCI is about to get a whole lot more interesting, even if it is a technology that only a mother could love.


 
 
 
 
Cameron Sturdevant is the executive editor of Enterprise Networking Planet. Prior to ENP, Cameron was a technical analyst at PCWeek Labs, starting in 1997. Cameron finished up as the eWEEK Labs Technical Director in 2012. Before his extensive labs tenure Cameron paid his IT dues working in technical support and sales engineering at a software publishing firm. Cameron also spent two years with a database development firm, integrating applications with mainframe legacy programs. Cameron's areas of expertise include virtual and physical IT infrastructure, cloud computing, enterprise networking and mobility. In addition to reviews, Cameron has covered monolithic enterprise management systems throughout their lifecycles, providing the eWEEK reader with all-important history and context. Cameron takes special care in cultivating his IT manager contacts, to ensure that his analysis is grounded in real-world concerns. Follow Cameron on Twitter at csturdevant, or reach him by email at cameron.sturdevant@quinstreet.com.
 
 
 
 
 
 
 
