IT departments are charged with ensuring the security and availability of company applications and data. Delivering on this mandate can be difficult enough on closely managed, company-owned machines under the direct control of IT.
However, administrators now face, with increasing frequency, the additional wrinkle of supporting PCs over which ultimate control lies outside the company, as a growing number of employees look to use their own personal PCs on the job.
Two key factors that are playing into this loss of control by IT departments are trends toward consumerization and mobilization in computing. The range of computing product options marketed to individual users has expanded, and the ease with which these systems can move between home and work has increased. As a result, IT departments are faced with supporting or tolerating systems that their users have brought in from home.
What’s more, some companies actively pursue employee-owned notebook schemes as a means of boosting productivity and reducing support costs among workers who are technology-savvy enough to shape and maintain their own desktop environments.
Companies always have the option of banning the use of company data or applications on systems brought from home (with the inevitable exception of users with enough clout to bend the rules), but there are plenty of situations in which the line between authorized and forbidden systems can't be drawn so clearly. For instance, companies in many cases work with contractors or partners who own their own machines, or with telecommuting workers who live outside the reach of IT support staff.
Employee-owned or -controlled notebook schemes aren’t appropriate for every company, industry or employee. However, regardless of your company’s policy toward user-controlled systems, it makes sense to develop a set of strategies for ensuring sufficient levels of data security and application accessibility for user systems that fall outside of a strict system management regime.
It's not an easy problem to tackle, chiefly because under a user-controlled desktop or notebook scheme, users hold administrative rights over their machines and are free to install arbitrary applications and drivers, some of which could be malicious or harmful in purpose or practice.
Based on eWEEK Labs tests and research, I’ve identified two primary approaches to the problem of working with user-controlled desktop and notebook systems. The first approach involves reaching a sort of compromise between user control and management policy on a given machine. The second involves carving out for IT an isolated and closely managed environment within an otherwise unmanaged system. Both routes have their drawbacks, benefits and prospects for improvement as the technologies on which they depend continue to mature.
Approach One: A Negotiated Settlement
The first approach to managing what are essentially unmanaged systems should be a familiar one because it's the approach that most home users, as well as a striking number of corporate shops, employ for their Windows-based desktops and notebooks: The user gets administrative rights on his or her machine, and IT administrators layer on policies and products intended to prevent damage or instability caused by malware and unpatched bugs and vulnerabilities.
As a matter of policy, companies can mandate the use of anti-virus applications and frequent system patching. They also can direct users to divide their system administration and daily computing tasks into separate administrator and limited-rights accounts. On Windows Vista, the UAC (User Account Control) feature automates administration/daily use rights separation by limiting the privileges of administrative-rights users by default, and by requiring confirmation for operations that require elevated rights.
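The rights-separation idea above can be sketched in a few lines of code. The function and task names below are hypothetical illustrations, not part of any shipping management product; the sketch simply shows an administrative task refusing to run from a limited-rights account, which is the behavior UAC automates on Vista.

```python
import ctypes
import os
import sys


def running_with_admin_rights() -> bool:
    """Return True if the current process has administrative privileges."""
    if sys.platform == "win32":
        # Windows: IsUserAnAdmin() returns nonzero for elevated processes.
        return bool(ctypes.windll.shell32.IsUserAnAdmin())
    # POSIX systems: an effective UID of 0 means root.
    return os.geteuid() == 0


def require_elevation(task_name: str) -> None:
    """Refuse to run an administrative task from a limited-rights account.

    'task_name' is a hypothetical label for whatever maintenance job
    (patching, driver installation) the policy reserves for admins.
    """
    if not running_with_admin_rights():
        raise PermissionError(
            f"'{task_name}' requires an administrative account; "
            "re-run it from an elevated session."
        )
```

In day-to-day use, the limited account calls `require_elevation("install system patch")` and is turned away, which is the moment UAC would instead prompt for consent or credentials.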
Administrators can add another layer of management to this scheme by employing NAC (network access control) to confirm and enforce user compliance with these policies by conditioning access to corporate network resources on their satisfaction.
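In essence, a NAC policy engine reduces endpoint posture to an access decision. The sketch below is a simplified, hypothetical compliance check, not the logic of any particular NAC product: it gates access on the policies just described (anti-virus running, recent patching, firewall enabled).

```python
from dataclasses import dataclass


@dataclass
class PostureReport:
    """Self-reported endpoint state gathered by a NAC agent (simplified)."""
    antivirus_running: bool
    days_since_last_patch: int
    firewall_enabled: bool


def access_decision(report: PostureReport, max_patch_age_days: int = 30) -> str:
    """Return 'allow' for compliant endpoints, 'quarantine' otherwise.

    A quarantined machine would typically be shunted to a remediation
    VLAN until it is brought back into conformance.
    """
    compliant = (
        report.antivirus_running
        and report.firewall_enabled
        and report.days_since_last_patch <= max_patch_age_days
    )
    return "allow" if compliant else "quarantine"
```

For example, a freshly patched machine with its anti-virus disabled would still land in quarantine, since every condition must hold before corporate resources are reachable.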
Moving a step further, administrators can employ an application whitelisting product with a large database of known-good applications, such as Bit9's Parity, to preserve their users' freedom to control their computing environments and to install the software of their choice, drawn from a pool of vetted applications. What's more, an application whitelisting product gives administrators the option of removing from the whitelist applications known to conflict with key company software.
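At its core, this kind of whitelisting means comparing a cryptographic hash of each binary against a database of vetted digests before allowing it to run. The sketch below illustrates the idea only; a product like Parity maintains a vastly larger database and hooks execution at the kernel level. The single entry in `APPROVED_HASHES` is just a placeholder (it is the SHA-256 digest of an empty file).

```python
import hashlib
from pathlib import Path

# Hypothetical whitelist of SHA-256 digests for vetted executables.
# The lone entry here is the digest of a zero-byte file, used purely
# as a placeholder for illustration.
APPROVED_HASHES = {
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}


def sha256_of(path: Path) -> str:
    """Compute the SHA-256 digest of a file, reading it in chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()


def may_execute(path: Path) -> bool:
    """Allow execution only if the binary's digest is on the whitelist."""
    return sha256_of(path) in APPROVED_HASHES
```

Pulling an application off the whitelist is then just a matter of deleting its digest from the database, which is what makes it straightforward for administrators to retire software known to conflict with key company applications.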
While this strategy for dealing with user-controlled systems should be mostly familiar to administrators and users, there are drawbacks to sharing control over a desktop or notebook in this way. For instance, administrators can mandate security baselines and enforce those baselines through NAC, but unless users can be counted on to keep their systems in order, IT can find itself stuck on a treadmill of bringing quarantined systems back into conformance.
More importantly, the fact that ultimate control over the host operating system lies in the users’ hands must result in a trust gap of sorts, as users’ actions can lead to security issues that could potentially evade the detection of company-mandated anti-virus software.
What's more, focusing your management measures at the client OS level can lead to restricted platform options, and platform flexibility (the freedom for users to opt for Mac OS X or Linux over Windows, for instance) is often a significant driver for user-controlled system strategies.
Finally, with every mandated management layer that's added to a user-controlled machine, particularly as we move into the realm of whitelisting, the machine moves farther from being user-controlled, and those management layers may prove difficult to keep in place.
Looking forward, I expect to see application whitelisting and privilege management technologies, including those that ship by default with Windows, mature to the point where today’s all-or-nothing, superuser-versus-limited-rights state of affairs will give way to allow for broad user control within a generously sized, but closely vetted, range of operations and installable applications.
Approach Two: A Separate Peace
Since trying to enforce good policy on machines that lie outside the control of IT is such a tricky proposition, a simpler way to install controls involves situating a tightly controlled desktop environment within the employee’s machine through desktop virtualization.
The most mature means of providing users with desktop environments that are segregated from their hardware involve SBC (server-based computing) products such as Microsoft Terminal Services and Citrix Systems’ XenApp (formerly known as Presentation Server). These products enable administrators to deliver managed desktop environments or individual applications to their users.
In addition to traditional server-based computing, companies can deliver managed desktop sessions hosted from individual virtual machines running in the data center atop hypervisor products such as VMware ESX Server or Citrix XenServer. Users can then access the hosted sessions through a remote desktop technology such as VNC (Virtual Network Computing) or Microsoft's RDP (Remote Desktop Protocol).
This approach offers more flexibility than server-based computing because VM-based desktops can be treated the same as typical desktops, in terms of the sorts of applications to which they can play host. However, SBC and VDI (virtual desktop infrastructure) share the same significant downside: Both strategies rely on continuous network connectivity to keep user desktops accessible.
For the many situations in which stable network connectivity cannot be relied upon, client-side desktop virtualization options, such as VMware's ACE, enable IT departments to deploy virtual computing environments that run atop a Type 2 hypervisor, which is itself hosted under the user's client operating system.
Over the past few years, the range of Type 2 hypervisor options has expanded such that most client operating systems, including Windows, Mac OS X, Linux and Solaris, can be outfitted to host an x86-based guest environment. The SBC and VDI approaches to desktop virtualization are also cross-platform friendly, as remote desktop clients are available for most client operating systems as well.
Client-side virtualization products place an added hardware resource burden on desktops and notebooks, however. In particular, RAM requirements for machines that host virtual desktop instances are greater. Similarly, not all applications run happily in a virtualized hardware environment, a limitation most likely to materialize for graphics-intensive applications.
Finally, just as with the nonvirtualized user-controlled system approach I laid out above, the fact that both SBC/VDI and client-side virtualization run under a host operating system makes it difficult to exorcise issues of trust and security when that host is managed outside the domain of company administrators.
Looking forward, I expect to see support for much stronger isolation between multiple operating environments running on a single client machine improve as Type 1 hypervisors begin to ship on notebooks and desktops. Citrix and VMware have both discussed plans for embedding "bare-metal" hypervisors in future notebooks, which should help resolve issues around deploying trusted, closely managed guest environments alongside user-controlled environments.
Executive Editor Jason Brooks can be reached at [email protected].