It's been a little more than 20 years since I designed my first from-scratch cable plant, but one basic fact hasn't changed, no matter how much the physical media and the wire protocols have evolved. That eternal truth is really quite simple: The boss and the users don't care about the network until it ceases to work. Then they care a lot.
The good news is that the physical and link layers are much easier to deal with than they were in my Wild West days. There's no more weighing the pros and cons of Token Ring compared with Ethernet, no more coaxial cables running from one workstation to the next, no more IBM Type 1 shielded twisted-pair cables to wrestle with, and available bandwidth to the desktop is 100 times what we had to work with in those days. But the bad news is that we may need to rethink the way we connect desktops to the network if VDI (virtual desktop infrastructure) ceases to be a novelty and becomes part of the norm.
That's because, as a rule, desktops connect through a single port on a single access switch. Switches are reliable enough that such a connection strategy works, and modern cabling plants can withstand most physical stresses. But latency can prove to be a silent killer, turning the best-planned VDI deployment into a nightmare, no matter what the underlying cause.
The best way to avoid latency surprises is to know what the network is already tasked with before you add more traffic to it. Logs and traffic analysis are useful ways to acquire that knowledge, but in my book there's really no substitute for spending some time in front of the blinking lights.
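If you want numbers to go with the blinking lights, one low-tech way to establish a baseline is to sample an interface's byte counters twice and work out the average throughput between samples. The sketch below does exactly that against counter lines in the format of Linux's /proc/net/dev; the interface name, counter values, and 10-second interval are illustrative, not from any real network.

```python
# Sketch: estimate link utilization from two samples of interface byte
# counters (format mirrors Linux's /proc/net/dev). Interface names and
# counter values below are synthetic, for illustration only.

def parse_counters(sample: str) -> dict:
    """Map interface name -> (rx_bytes, tx_bytes)."""
    counters = {}
    for line in sample.strip().splitlines():
        if ":" not in line:
            continue  # skip header lines
        name, data = line.split(":", 1)
        fields = data.split()
        # In /proc/net/dev, field 0 is received bytes, field 8 is
        # transmitted bytes.
        counters[name.strip()] = (int(fields[0]), int(fields[8]))
    return counters

def mbps(before: dict, after: dict, iface: str, seconds: float):
    """Average receive/transmit rate in megabits per second."""
    rx0, tx0 = before[iface]
    rx1, tx1 = after[iface]
    to_mbps = lambda delta: delta * 8 / seconds / 1_000_000
    return to_mbps(rx1 - rx0), to_mbps(tx1 - tx0)

# Two synthetic samples, nominally taken 10 seconds apart.
SAMPLE_T0 = "eth0: 1000000 0 0 0 0 0 0 0 2000000 0 0 0 0 0 0 0"
SAMPLE_T1 = "eth0: 26000000 0 0 0 0 0 0 0 4500000 0 0 0 0 0 0 0"

rx_rate, tx_rate = mbps(parse_counters(SAMPLE_T0),
                        parse_counters(SAMPLE_T1), "eth0", 10.0)
print(f"eth0: {rx_rate:.1f} Mb/s in, {tx_rate:.1f} Mb/s out")
```

Run against real samples over a business day and you'll know how much headroom actually exists before the first VDI session ever hits the wire.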
"Follow the flashing LEDs" will only get you so far, of course, if your patch panels look like an octopus wrestling a bowl of spaghetti. I can't stand a messy panel on the best of days, but when I'm trying to figure out a network problem in a wiring closet where cables are tangled, unlabeled and routed without any thought to how someone might troubleshoot, my mood oscillates between "Jesus wept" and the version of Ezekiel 25:17 featured in the movie Pulp Fiction.
There's only so much one can do to beef up the network at the point of the access switch; running multiple cables to the desktop is cost-prohibitive in many environments, so there's always going to be a single point of failure at that level. What can be accomplished more easily and economically is to mesh access switch connections to the distribution layer, providing each switch's connected devices with multiple paths to the core network and associated resources.
Load balancing and link trunking are old hat when connecting core networks and distribution networks, but it's time to push these technologies out to the connections between access networks and distribution networks. Used in a meshed environment, they can guard against almost any contingency that doesn't involve a backhoe punching into a building's data connections.
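It's worth remembering that a trunked link doesn't split a single flow across members; switches hash each flow onto one member link so packets arrive in order. As a rough illustration (not any vendor's actual hash algorithm), the sketch below maps a flow's 4-tuple onto one link of a four-member bundle, the way an aggregated uplink between an access and a distribution switch might distribute desktop sessions.

```python
# Sketch: hash-based flow distribution across an aggregated link bundle.
# Real switches use vendor-specific hash inputs; this is an illustration.

from dataclasses import dataclass
import zlib

@dataclass(frozen=True)
class Flow:
    src_ip: str
    dst_ip: str
    src_port: int
    dst_port: int

def pick_member(flow: Flow, num_links: int) -> int:
    """Deterministically map a flow to one member of the bundle.

    Hashing the whole 4-tuple keeps every packet of a given flow on
    the same link (preserving packet ordering) while spreading
    distinct flows across the members.
    """
    key = f"{flow.src_ip}|{flow.dst_ip}|{flow.src_port}|{flow.dst_port}"
    return zlib.crc32(key.encode()) % num_links

# Hypothetical desktop sessions heading for the same broker.
flows = [
    Flow("10.0.1.10", "10.0.9.5", 51234, 3389),
    Flow("10.0.1.11", "10.0.9.5", 51980, 3389),
    Flow("10.0.1.10", "10.0.9.20", 40000, 443),
]
for f in flows:
    print(f"{f.src_ip}:{f.src_port} -> link {pick_member(f, 4)}")
```

The practical upshot: one heavy flow can still saturate a single member link, so aggregation buys resilience and aggregate capacity, not a faster path for any individual session.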
Although techniques such as Teradici's PCoIP (PC over IP) exist to wring efficiency out of the connection at a higher level than basic network connectivity, they won't help in an environment that's poorly designed or where the network is frequently saturated.
In obsessing over the last few hundred feet, I'm focusing on a small piece of the much larger puzzle of making VDI technology work for business. However, it's a piece that's very easy to overlook, and one that's not terribly difficult to fix in the early stages of a project.