Cisco is turning on a new feature in the Nexus 7000 platform to enable the extension of Layer 2 networking between geographically separated data centers. Working with VMware, whose VMotion technology requires Layer 2 connectivity, and NetApp, Cisco seeks to extend the benefits of Layer 2 interconnect.
Cisco is introducing Overlay Transport Virtualization (OTV) to extend Layer 2 networking
between data centers.
I had a chance to see the newly announced OTV feature of the Nexus 7000 platform in action at the
Cisco campus on announcement day, Feb. 9. Working with engineers from VMware
and NetApp (OTV is not limited to technology provided by these partner
companies), I saw virtual machines move from one data center to another while
still running. VMotion requires Layer 2 connectivity, which previously would
have required a Herculean effort to implement without OTV.
With OTV, Cisco is extending Layer 2 functionality while containing failures such as
broadcast storms inside the physical domain where they originate, without propagating the
debilitating effects of the storm to the other data centers.
The fundamental technology that I saw in action is made possible by putting a Nexus
7000 at the edge of each data center and then enabling and configuring the OTV
feature. A Cisco ACE 4400 Series Global Site
Selector (basically a DNS server on steroids) was called into service, using IP to
instantiate a DCI (Data Center Interconnect)
between data centers. As
currently implemented, there can be as many as 32 data centers interconnected
in a single OTV domain.
Before getting into the ins and outs of Cisco OTV, it's important to be clear that DCI
was possible prior to the introduction of the new feature set. The measure that
data center managers will need to use when evaluating OTV is how much labor and
network implementation effort (if any) will be eliminated relative to the cost
of putting Cisco's hardware in every data center.
In the proof-of-concept demonstration, the two chief examples used to demonstrate
the need for Cisco OTV were VMware VMotion and Microsoft Exchange running in a
clustered environment. The actual demonstration showed a virtual machine
running a very heavy OLTP (online transaction processing) benchmark migrate
from a pretend data center in San Jose, Calif., to a pretend data center in
Sacramento. In fact, the physical infrastructure supporting the demonstration
was in the room next door, using a WAN simulator to emulate the cloud through
which the IP overlay was created.
Engineers from NetApp were also on hand and are part of a jointly enabled and supported
offering with Cisco and VMware. NetApp uses its hardware plus FlexCache
software to support changes in usage patterns as reflected in the shifting of a
virtual machine from one data center to another.
As is the tendency in data center advances, there is a lot of "working on the
engine while the car is moving down the freeway" in OTV. The technology
demonstration showed how an additional data center could be added with four
lines of instruction in about 5 minutes. Of course, the part I was interested
in was the work and planning required prior to the magic 5 minutes.
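Cisco did not publish the exact commands used in the demonstration, but the handful of lines shown resembled a standard NX-OS OTV setup along the following lines (a sketch only; the interface names, VLAN range and multicast groups here are my own illustrative assumptions):

```
feature otv
interface Overlay1
  otv join-interface Ethernet1/1
  otv control-group 239.1.1.1
  otv data-group 232.1.1.0/28
  otv extend-vlan 100-150
  no shutdown
```

The join interface carries the encapsulated traffic toward the WAN, while the control and data groups let the edge devices discover one another and exchange MAC reachability information.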
I plan on returning to Cisco in the near future to use the (very large, very
expensive, very hard to move) test rig to more fully investigate the amount of
work needed to line up all the moving parts that make OTV work. I suspect,
however, based on my time with Cisco, VMware and NetApp, that the product will
generally live up to their claims of simplified setup.
The old saw is that when Cisco is involved, every problem is solved with IP. And
that is the case with OTV. Basically, the underlying technology encapsulates
Ethernet frames inside IP packets and sends them
out through the Nexus 7000 at the network edge. Routing is based on MAC
addresses, which are tracked and managed by each of the participating edge
devices. The Global Site Selector keeps track of where applications were
running and where they are currently located if VMotion pushed the workload to
a different data center.
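The MAC-based routing described above can be sketched conceptually in a few lines of Python (this is illustrative only, not Cisco's implementation; the table entries, addresses and toy header format are assumptions):

```python
# Conceptual sketch of OTV-style "MAC routing": an edge device looks up the
# destination MAC in a table learned from the other edge devices, then wraps
# the Ethernet frame in a packet addressed to the remote edge's IP.
import struct

# MAC table learned via the control plane: MAC address -> remote edge IP
# (addresses are illustrative, from the documentation range 192.0.2.0/24)
mac_table = {
    "00:1b:21:3c:9d:f8": "192.0.2.10",   # edge device at site A (assumed)
    "00:1b:21:3c:9d:f9": "192.0.2.20",   # edge device at site B (assumed)
}

def encapsulate(frame: bytes, dst_mac: str) -> tuple[str, bytes]:
    """Pick the remote edge for dst_mac and wrap the frame for IP transport."""
    edge_ip = mac_table[dst_mac]             # MAC-based forwarding decision
    header = struct.pack("!H", len(frame))   # toy 2-byte length header
    return edge_ip, header + frame           # hand off to the IP overlay

edge_ip, packet = encapsulate(b"\x00" * 64, "00:1b:21:3c:9d:f9")
# The overlay delivers the packet to edge_ip; the receiving edge strips the
# header and forwards the original frame onto its local Layer 2 segment.
```

When a virtual machine moves, only the table entry changes; hosts on either side keep using the same MAC and IP addresses, which is what keeps existing connections alive.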
For clients with existing connections, this means that state is maintained and
firewalls and load balancers are kept happy with the addition of a bit of
latency as traffic is directed to where the virtual machine is currently
located. New connections are directed to the virtual machine with no additional
latency. Layer 2 DCI is about to get a whole lot more
interesting, even if it is a technology that only a mother could love.