How do you securely manage devices that are located at customer sites or branch offices? Hopefully the answer doesn’t include driving there every time something goes down.
If it does, and you’ve got enough offices and devices to take advantage of the price and architecture, then Lantronix’s ManageLinx “virtual device network” could help you out of a pickle.
ManageLinx establishes secure machine-to-machine remote access to just about any IP-enabled equipment. In many cases this works through firewalls, although in my testing I found it not to be as easy as it might sound. A complete solution includes the DSM (Device Service Manager) and multiple DSCes (Device Service Controllers).
At a high level, each DSC establishes a secure SSH-based tunnel back to the DSM, where a directory service resides that transparently connects DSCes. A technician at one location can quickly and easily gain access to remote equipment over the Internet via the secure DSC-to-DSM-to-DSC tunnel.
ManageLinx is primarily geared to the remote product service market. No one just sells a piece of office or manufacturing equipment anymore; now everything from network copiers to closed circuit video surveillance systems to industrial monitoring and control equipment comes with the option to purchase a proactive service agreement. To distinguish themselves in today’s rough economy, equipment manufacturers need to monitor customer product performance, diagnose failures, trigger corrective workflows and perform service repairs.
Aberdeen Group research shows that RPS (remote product services), or “smart services,” reduce service calls by 30 percent or more. At an average cost of $209 per call, an organization with just 50 technicians each making three calls a day can save $2.3 million a year. Service organizations can not only increase asset uptime and decrease mean time to repair, but also grow service revenue and profitability by offering proactive monitoring and corrective service while decreasing the number of truck rolls.
Yet there are uses for the ManageLinx solution beyond service contracts on office equipment. Large enterprises can use the system to support remote servers and network devices in branch offices. Consultants can evolve to become managed service providers who keep an eye on client systems remotely.
The applications for such a solution encompass remote monitoring and service of medical equipment, security systems, inventory control systems, B2B data sharing, and home automation.
The system comprises, at a minimum, two DSCes and one DSM. The DSM is a 1U proxy connection point, directory service, and Web-based management system. Essentially a Linux server, the DSM is the brains of the operation, configuring and monitoring DSCes, setting up automated device discovery on remote networks, keeping track of how to connect to each DSC, and building secure SSH tunnels over TCP/IP.
The DSC is a smaller box that can be bolted to a wall or under a desk, and it can be powered either by its own power supply or via Power over Ethernet.
Once configured, DSCes find a way out onto the Internet and start communicating with the DSM. Between them is an OpenSSH v4.3/4.4 tunnel, configured by default with 2048-bit RSA public-key authentication and 128-bit AES encryption; other encryption algorithms, such as 3DES, Blowfish and Arcfour, are also supported. A DSC can act as a Device Controller, providing remote access to devices, or as a Host Controller, serving as an entry point to the remote network. In other words, a technician uses the Host Controller to tunnel to the Device Controller and gain access to devices.
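The DSM’s role as a meeting point for two outbound tunnels is essentially that of a TCP relay: each endpoint dials out to it, and it shuttles bytes between them. As a rough illustration of that proxying idea only (not Lantronix’s actual implementation, which is proprietary and rides inside SSH), here is a minimal single-connection TCP relay in Python:

```python
import socket
import threading

def pump(src, dst):
    # Copy bytes from src to dst until src closes, then close dst.
    try:
        while True:
            data = src.recv(4096)
            if not data:
                break
            dst.sendall(data)
    except OSError:
        pass  # peer closed abruptly; just stop relaying
    finally:
        dst.close()

def serve_proxy(listen_port, target_host, target_port):
    # Accept a single client on listen_port and relay its traffic,
    # in both directions, to target_host:target_port.
    listener = socket.socket()
    listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    listener.bind(("127.0.0.1", listen_port))
    listener.listen(1)
    client, _ = listener.accept()
    upstream = socket.create_connection((target_host, target_port))
    threading.Thread(target=pump, args=(client, upstream), daemon=True).start()
    pump(upstream, client)
```

The key difference in a DSC-style deployment is that both sides dial out to the relay, which is what lets traffic cross firewalls that permit only outbound connections.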
Devices are given VIP (virtual IP) addresses, and access control lists can be built to restrict or permit access to them.
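Conceptually, the VIP mapping plus an access control list amounts to a lookup table gated by a permission check. The sketch below is purely illustrative; the addresses, role names and function are invented for this example and are not Lantronix’s API:

```python
# Hypothetical illustration of a VIP table gated by an access control list.
# Addresses, roles, and names here are invented for this sketch.
VIP_TABLE = {
    "10.99.0.5": ("192.168.1.20", 3389),  # VIP -> (real IP, port) of the device
}
ACL = {
    "10.99.0.5": {"service-tech", "admin"},  # roles allowed to reach this VIP
}

def resolve(vip, role):
    """Return the real (host, port) behind a VIP if the role is permitted."""
    if role not in ACL.get(vip, set()):
        raise PermissionError(f"role {role!r} may not access {vip}")
    return VIP_TABLE[vip]
```

In this model, permitting or restricting access is just a matter of editing the ACL entry for a given VIP.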
Lantronix hosted the DSM and shipped me two DSCes. I logged into the DSM over the Internet. This can be done via HTTP or HTTPS, but in my case only HTTP was enabled, so I used it, even though it seems silly to offer the option of accessing the entire control center of a secure remote access solution over an unencrypted channel.
I quickly and easily configured one DSC for the branch office and one for the data center, and then saved these configuration bootstraps to a USB key. Although the process was easy, it could have been easier. The management GUI is at times confusing and flaky: after configuring settings, you click “save profile” and then click “cancel” to return to the previous screen, and there were many times when I was unsure whether I was in configure or monitor mode.
I plugged the DSCes’ Ethernet cables into two separate networks that were firewalled from each other with moderate security, as is common in a test lab, and then powered the units on. A sequence of flashing lights told me to insert the USB key containing the bootstrap file; another sequence indicated that each unit had read the bootstrap and was contacting the DSM. Finally the lights turned all green, indicating the system was ready to go. Even a non-technical user at a branch location could handle that.
I returned to my DSM Web console and saw that the two DSCes were now green, indicating that everything was running smoothly. I configured the branch office unit (the Device Controller) to allow access to a workstation on that LAN, and I configured the data center unit (the Host Controller) with a VIP address pointing to that workstation’s IP address. I saved the configurations and quickly opened an RDP session to the remote workstation through my newly created tunnel. It worked like a charm.
I tried a multitude of devices, ranging from Windows and Linux workstations to test equipment such as the BreakingPoint Systems BP-1K, a CyberPower PDU and an APC NetBotz 420 environmental monitoring system. I used different protocols, including VNC, RDP, Telnet, HTTP and HTTPS, to access the devices. Everything worked smoothly, and the VIP addresses made each device appear as if it were directly connected to my local network.
However, when I tried to view streaming video from a closed circuit security camera, it was pretty much the kiss of death. ManageLinx, which tunnels through the DSM as a proxy, is simply not built to support streaming video. The solution works best in applications where ease of deployment outweighs the need for high throughput. It was designed for remote service monitoring, configuration, and maintenance, not for all-out remote access.
Now that I knew what it could do, I started playing around with different network configurations. I found I needed a fairly permissive firewall policy for the DSC to establish contact with the DSM quickly and easily; it all revolves around having port 22 open. Administrators can configure the devices to try multiple other ports in the hope of finding one open (and chances are one will be).
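That fallback behavior amounts to walking a list of candidate ports until an outbound TCP connection succeeds. Here is a simple sketch of that probing logic; the candidate list is an assumption for illustration, not the DSC’s actual order:

```python
import socket

def find_open_port(host, candidate_ports, timeout=2.0):
    """Return the first port on host that accepts a TCP connection, else None."""
    for port in candidate_ports:
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return port
        except OSError:
            continue  # filtered or closed; try the next candidate
    return None

# Example: probe SSH first, then common ports a firewall is likely to allow out.
# find_open_port("dsm.example.com", [22, 443, 80, 8080])
```

From the firewall administrator’s point of view, of course, this is exactly why outbound policies matter: a device that hunts for any open port will usually find one.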
However, there were situations where I had to reconfigure the firewall at the branch office, which makes plug-and-play deployment by a non-technical user there unlikely. To be fair, using the DSC behind a properly configured firewall will create a remote access solution that is secure across many layers of the OSI model.
In the right circumstances, Lantronix can provide a powerful and comprehensive secure remote device management solution by coupling additional products with ManageLinx. SecureLinx Branch Office Manager is an IP KVM with per-port power management and a built-in 10/100 unmanaged Ethernet switch. SecureLinx Spider is a small KVM-over-IP device that puts console access directly on the LAN with no client software. Deploying a combination of DSCes and SecureLinx products would allow remote management of up to several hundred devices from a central location.
The trial kit, which includes one DSM and five DSCes, starts at $18,000. Each additional DSC is $1,000.
Matthew D. Sarrel is executive director of Sarrel Group, an IT test lab, editorial services, and consulting firm in New York City.