The outsourced nature of the cloud, and the loss of control that comes with it, means that sensitive data must be carefully monitored to ensure it is always protected. But how do you monitor a database server when the underlying hardware moves every day, or even over the course of a day, often without your knowledge? To further complicate things, how do you ensure that your cloud computing vendor’s database administrators and system administrators aren’t abusing their privileges by inappropriately copying or viewing confidential records?
These are just some of the obstacles an enterprise must overcome when deploying a secure database platform in a cloud computing environment. These obstacles alone may prevent some organizations from moving away from their on-premises approach. What follows are three of the most critical architectural issues you’ll need to resolve as you move applications with sensitive data to the more flexible computing model of the cloud.
Issue No. 1: Monitoring a constantly changing environment
Virtualization and cloud computing lend greater flexibility and efficiency by letting you move servers and add or remove resources as needed, maximizing the use of your systems and reducing expenses. This often means that the database servers housing your sensitive data are constantly being provisioned and deprovisioned, with each of these instances representing a potential target for hackers.
The dynamic nature of a cloud infrastructure makes monitoring data access much more difficult. And if the information in those applications is subject to regulations such as the Payment Card Industry Data Security Standard (PCI DSS) or the Health Insurance Portability and Accountability Act (HIPAA), you must be able to demonstrate that it is secure.
When considering solutions to monitor activity on these dynamic database servers, the key is to find a methodology that can be deployed on new database servers easily and without management involvement. That almost certainly requires a distributed model in which each instance in the cloud has a sensor or agent running locally. This software must lend itself to automatic provisioning along with the database software, without requiring intrusive system management.
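As a rough illustration of what such self-provisioning might look like, the sketch below shows a bootstrap step baked into a database server image: when a new instance comes up, a lightweight agent registers itself with a central monitoring console. The console URL, payload fields and endpoint are hypothetical stand-ins, not any particular vendor's API.

```python
# Hypothetical bootstrap hook, run once at instance startup alongside the
# database software. Assumes a central console exposing an HTTP registration
# endpoint (CONSOLE_URL is a placeholder, not a real service).
import json
import socket
import urllib.request

CONSOLE_URL = "https://monitor.example.com/agents/register"  # assumed endpoint

def register_agent() -> None:
    payload = {
        "hostname": socket.gethostname(),
        "db_engine": "postgresql",   # discovered, or baked into the image
        "agent_version": "1.0.0",
    }
    req = urllib.request.Request(
        CONSOLE_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    # In production this call would authenticate (e.g., mutual TLS) so a
    # rogue host cannot impersonate a legitimate sensor.
    with urllib.request.urlopen(req, timeout=10) as resp:
        print("registered, console replied:", resp.status)

if __name__ == "__main__":
    register_agent()
```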
In a multitenancy environment, it will not always be possible to reboot whenever you need to install, upgrade or update the agents, and the cloud vendor may place limits on installing software that requires certain privileges. The right architecture will let you see exactly where your databases are hosted at any point in time, and it will let you centrally log all activity and flag suspicious events across all servers, wherever they reside.
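One simple way to maintain that live map is periodic heartbeats from each sensor, with the console flagging any sensor that goes quiet. The following console-side sketch is an assumption about how such tracking could work, not a description of any specific product; the window length and host names are illustrative.

```python
# Minimal sketch of console-side inventory tracking: each agent heartbeat
# updates a registry, and agents that miss their window are flagged.
import time

HEARTBEAT_WINDOW = 60.0  # seconds an agent may stay silent (assumed policy)
registry: dict[str, float] = {}  # hostname -> last heartbeat timestamp

def record_heartbeat(hostname: str) -> None:
    registry[hostname] = time.time()

def find_silent_agents() -> list[str]:
    """Return hosts whose sensors have stopped reporting - a possible sign
    of deprovisioning, failure or tampering that should raise an alert."""
    now = time.time()
    return [h for h, last in registry.items() if now - last > HEARTBEAT_WINDOW]

record_heartbeat("db-eu-west-17")
print(find_silent_agents())  # [] while the agent stays within its window
```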
Issue No. 2: Working in a WAN
Many current database activity monitoring solutions utilize a network sniffing model to identify malicious queries, an approach that is simply not feasible in cloud environments where the network is essentially the entire Internet.
Adding a local agent that sends all traffic to a remote server for processing doesn’t work well with these models either, for reasons outlined later. Instead, you’ll need to find a solution that is designed for distributed processing where the local sensor is able to analyze traffic autonomously.
Keep in mind that the cloud computing resources you are procuring are likely to be on a WAN, and network bandwidth and network latency will make off-host processing inefficient. The very concept of cloud computing (where are those servers, anyway?) likely prevents you from being able to colocate a server close to your databases, which means the time and resources spent sending every transaction to a remote server for analysis will inhibit network performance and prevent timely interruption of malicious activity.
A better approach when securing databases in cloud computing is to utilize a distributed monitoring solution based on “smart” agents so that, once a security policy is set for a monitored database, that agent or sensor is able to implement the necessary protection and alerting locally. This will prevent the network from becoming the gating factor for performance.
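To make the local-analysis idea concrete, here is a hedged sketch of how a smart agent might evaluate intercepted statements against a locally cached policy, shipping only violations over the WAN. The policy format, rule names and row threshold are illustrative assumptions, and the regex match is a deliberate simplification of real statement analysis.

```python
# Illustrative local policy check inside a sensor: the full statement stream
# is analyzed on the host, and only violations leave the machine.
import re
from dataclasses import dataclass

@dataclass
class Rule:
    name: str
    pattern: str   # regex matched against the statement (a simplification)
    max_rows: int  # alert if more rows are returned than this threshold

POLICY = [  # cached locally after being pushed from the management console
    Rule("bulk-card-read", r"\bcredit_cards\b", max_rows=100),
]

def evaluate(statement: str, rows_returned: int) -> list[str]:
    """Return the names of violated rules; an empty list means no alert."""
    alerts = []
    for rule in POLICY:
        if re.search(rule.pattern, statement, re.IGNORECASE) \
                and rows_returned > rule.max_rows:
            alerts.append(rule.name)
    return alerts

# Only this small alert record, not the raw traffic, is sent to the console.
print(evaluate("SELECT * FROM credit_cards", rows_returned=50000))
```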
For remote management of distributed data centers, you’ll also want to test the WAN capabilities of your chosen software. It should encrypt all traffic between the management console and sensors in order to limit exposure of sensitive data. Performance can also be enhanced through various compression techniques so that policy updates and alerts are efficiently transmitted.
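As a sketch of what sensor-to-console transport might involve, the snippet below compresses an alert payload and sends it over a TLS connection using only the Python standard library. The host name, port and framing scheme are placeholders, not a real protocol.

```python
# Hedged sketch: compress an alert and ship it over an encrypted channel.
import json
import socket
import ssl
import zlib

CONSOLE_HOST = "monitor.example.com"  # placeholder address
CONSOLE_PORT = 8443                   # placeholder port

def send_alert(alert: dict) -> None:
    payload = zlib.compress(json.dumps(alert).encode("utf-8"))
    context = ssl.create_default_context()  # verifies the console's cert
    with socket.create_connection((CONSOLE_HOST, CONSOLE_PORT)) as raw:
        with context.wrap_socket(raw, server_hostname=CONSOLE_HOST) as tls:
            # Simple length-prefixed frame so the console knows where the
            # compressed payload ends.
            tls.sendall(len(payload).to_bytes(4, "big") + payload)

send_alert({"rule": "bulk-card-read", "host": "db-eu-west-17", "rows": 50000})
```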
Issue No. 3: Who has privileged access to your data?
One of the most difficult elements to monitor in any database implementation is the activity of privileged users. DBAs and system administrators have many ways to access and copy sensitive information, often without being detected (or in ways that can easily be covered up). In cloud computing environments, unknown personnel at unknown sites hold these access privileges. Add the fact that you cannot possibly conduct the same level of background checks on third parties as you do on your own staff, and it is easy to see why protecting against insider threats can be difficult.
One way to resolve this is through separation of duties: ensure that the activities of privileged third parties are monitored by your own staff, and that the pieces of the solution on the cloud side of the network cannot be defeated without raising alerts. You’ll also need the ability to closely monitor individual data assets (for example, a credit card table), regardless of the method used to access them.
Sophisticated privileged users can create new views, insert stored procedures into a database or generate triggers that compromise information without the SQL command itself looking suspicious. Look for a system that knows when data is being accessed in violation of policy, without relying solely on query analytics.
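One way such a system could avoid relying on query text alone is to resolve every statement down to the base tables it ultimately touches, so a read through an innocently named view or procedure still flags the protected asset. The sketch below shows the idea; the catalog contents and object names are hypothetical.

```python
# Sketch of base-table lineage resolution: derived objects (views, etc.) are
# expanded recursively so access through any alias hits the protected table.
from typing import Optional

SENSITIVE_TABLES = {"credit_cards"}

# Hypothetical catalog mapping derived objects to what they reference.
CATALOG = {
    "v_customer_summary": {"customers", "v_payments"},
    "v_payments": {"credit_cards"},
}

def base_tables(obj: str, seen: Optional[set] = None) -> set:
    """Expand an object into the set of base tables behind it."""
    seen = seen or set()
    if obj in seen:          # guard against circular definitions
        return set()
    seen.add(obj)
    refs = CATALOG.get(obj)
    if refs is None:         # not in the catalog: it is a base table itself
        return {obj}
    out = set()
    for ref in refs:
        out |= base_tables(ref, seen)
    return out

def touches_sensitive(obj: str) -> bool:
    return bool(base_tables(obj) & SENSITIVE_TABLES)

# A SELECT on the harmless-looking view still flags the credit card table.
print(touches_sensitive("v_customer_summary"))  # True
```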
Look carefully before you leap
The complexity of monitoring databases in a cloud architecture may lead some to conclude that it is simply not worth moving away from dedicated systems, or at least not yet. However, most enterprises will likely determine that it is only a matter of time before they deploy applications with sensitive data on one of these models. Leading organizations have already begun to do so, and the tools are now catching up with the customer requirements driven by the issues raised here.
If your business would benefit from deploying databases in the cloud, security should not prevent you from moving forward. Just make sure your security methodologies adequately address these special cases.
Slavik Markovich is co-founder and CTO of Sentrigo. Slavik has over 13 years of experience in infrastructure, security and software development. Previously, Slavik was vice president of R&D and chief architect at DB@net, a leading IT architecture consultancy, and led projects for clients such as Orange, Comverse, Actimize and Oracle. In addition, Slavik held positions at several IT consulting companies. Slavik is a renowned authority on Oracle and Java/JavaEE technologies, and has contributed to open-source projects such as Spring Framework Toplink integration (later incorporated by Oracle). He is a regular speaker at industry conferences. He holds a BS degree in Computer Science. He can be reached at info@sentrigo.com.