Data breaches once again played a prominent role in this past year’s media coverage. While many organizations make it seem like an unavoidable fact of life, the reality is that there are plenty of things enterprises can do right now to help prevent unauthorized access to their systems and data.
In this eWEEK Data Points article, Maria Colgan, Master Product Manager for Oracle Database, shares with readers some best practices on how organizations can better secure their data at the database level.
Data Point No. 1: Understand your environment – review your system configuration.
Many data breaches occur because of simple misconfiguration errors – database settings that unintentionally elevate the risk of breach by lowering the security posture of the system. These can be as simple as inadequate password policies or as complex as poorly configured network encryption routines. Some database vendors provide tools to assess the security of their databases. For many database platforms, the Center for Internet Security (CIS) publishes benchmark checklists that can be used to assess a system’s configuration. The United States Department of Defense (DOD) Defense Information Systems Agency (DISA) also offers Security Technical Implementation Guides (STIGs) that provide even more detail on recommendations to lock down your systems.
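To illustrate the kind of check these benchmarks describe, here is a minimal sketch of a CIS-style configuration assessment. The setting names and baseline values are hypothetical, not taken from any actual CIS benchmark, STIG, or vendor tool:

```python
# Hypothetical baseline of database settings, CIS-benchmark style.
# Setting names and thresholds are illustrative assumptions only.
BASELINE = {
    "password_min_length": lambda v: v >= 12,         # adequate password policy
    "failed_login_lockout": lambda v: v <= 5,         # lock accounts after few failures
    "network_encryption": lambda v: v == "required",  # force encrypted connections
}

def assess(config: dict) -> list[str]:
    """Return a finding for every setting that is missing or fails the baseline."""
    findings = []
    for setting, check in BASELINE.items():
        value = config.get(setting)
        if value is None or not check(value):
            findings.append(f"{setting}: current value {value!r} fails baseline")
    return findings

# A system with a weak password policy and no lockout setting at all:
findings = assess({"password_min_length": 8, "network_encryption": "required"})
```

Real assessment tools cover hundreds of such checks; the value of a published checklist is that someone else has already decided what "secure" looks like for each setting.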
Data Point No. 2: Lock the back door – encrypt all your data.
Data is among the most valuable assets of a business, and encryption is a critical step in ensuring that it remains secure. Although many people still believe encryption has an impact on performance, the advent of cloud computing and the latest encryption technologies often means that is no longer the case. All of the major database vendors provide some form of database encryption, usually referring to the feature as something like Transparent Data Encryption or Native Database Encryption. This type of encryption helps protect against attacks that try to circumvent the database’s access control mechanisms by preventing attackers from accessing and reading your data through the operating system, backup copies, or on the storage array, keeping the database secure from these out-of-band attacks.
To make sure that all your sensitive data is encrypted, you should consider encrypting your entire application database. Oracle has seen several cases in which organizations have tried to encrypt just a few columns they painstakingly identified as sensitive, only to learn that sensitive data was also in several other tables and columns. Don’t get stuck in a continuous loop of analyzing which data is sensitive and targeting only those few data elements – the cost of the continuous analysis and re-implementation of encryption is outweighed by the benefit of simply encrypting all of the application data.
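A quick sketch shows why column-by-column sensitivity analysis is fragile: even a simple pattern scan often turns up card-number-like values in columns nobody flagged. The table and column names below are made up for illustration:

```python
# Illustrative scan for card-number-like values leaking into "non-sensitive"
# columns. Column names and sample rows are hypothetical.
import re

CARD_PATTERN = re.compile(r"\b\d{4}[- ]?\d{4}[- ]?\d{4}[- ]?\d{4}\b")

rows = {
    "orders.card_number": "4111-1111-1111-1111",   # the column everyone encrypts
    "support.ticket_note": "Customer read card 4111 1111 1111 1111 aloud",
    "orders.ship_city": "Portland",                # genuinely non-sensitive
}

# The free-text support note leaks a card number even though it was never
# identified as a sensitive column - exactly the failure mode described above.
leaks = [col for col, value in rows.items() if CARD_PATTERN.search(value)]
```

Encrypting the whole application database sidesteps this classification problem entirely.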
Data Point No. 3: Build a wall around it – Use a database firewall.
While encryption is an excellent first step to ensure your data stays safe, another crucial step is to make sure that your database doesn’t get accessed by unauthorized parties. Properly configured, a database firewall monitors database activity, pre-emptively detecting and even blocking unauthorized access, application bypass and SQL injection attacks.
When configuring your database firewall, you should define policies that help you easily identify anomalous activity. In most cases, database workloads are repetitive – with a well-defined group of application servers and clients using the same consistent set of programs to access the database. Different database firewall vendors offer their own unique paradigms for policy development, but almost all of them have some way to identify exceptions to normal client activity. In some cases, this profiling of normal activity can be as fine grained as identifying normal SQL activity for a database, so that the database firewall can even block SQL Injection attacks.
Data Point No. 4: Monitor everything – audit your database.
Auditing databases regularly is one of the best ways to minimize the risk of your data being exposed to external threats or unauthorized access. After all, one of the biggest issues in security is lack of visibility. Organizations don’t know what they don’t know, and unless regular audit processes are in place, there is no way to identify where vulnerabilities lie and where misconfigurations are leaving sensitive data unprotected.
Remember that a network-based monitor can only see the commands that traverse the network. If your database allows direct local connections that aren’t routed over the network, the database firewall may not see them. A network-based monitor will also frequently miss SQL that is hex-encoded or that is dynamically created using your database’s procedural language.
A good practice is to have all data definition and data control language (DDL and DCL) audited – especially changes to user profiles or privileges and creation or modification of stored procedures. If you have done the work of identifying sensitive data objects, you should also consider auditing access to those objects – especially if the access occurs outside of normal application activity.
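The auditing rule above can be expressed as a simple filter over an audit trail. The event shape, object names, and account names below are assumptions for illustration, not any vendor’s audit format:

```python
# Hedged sketch of the auditing policy described above: flag all DDL/DCL,
# plus access to sensitive objects outside normal application activity.
DDL_DCL_VERBS = {"CREATE", "ALTER", "DROP", "GRANT", "REVOKE"}
SENSITIVE_OBJECTS = {"hr.salaries"}        # hypothetical sensitive table
APP_USERS = {"app_service"}                # the normal application account

def needs_review(event: dict) -> bool:
    verb = event["sql"].split()[0].upper()
    if verb in DDL_DCL_VERBS:
        return True   # all DDL and DCL gets audited
    if event.get("object") in SENSITIVE_OBJECTS and event["user"] not in APP_USERS:
        return True   # sensitive access outside normal application activity
    return False

# A privilege grant by an individual user is always flagged:
flagged = needs_review({"user": "jsmith", "sql": "GRANT DBA TO jsmith"})
# The application reading its own sensitive table is routine:
quiet = needs_review({"user": "app_service",
                      "sql": "SELECT * FROM hr.salaries",
                      "object": "hr.salaries"})
```

The point of the policy is the asymmetry: privilege changes are rare enough to review every one, while sensitive-data reads only matter when they come from an unexpected account.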
Data Point No. 5: Limit what they see – define strict access rules.
While leveraging data is crucial for many business functions across an organization, it doesn’t mean that everyone should have access to all the data. There are many ways in which an organization can restrict access to sensitive data without impacting the work of its employees. Strive to restrict your users and administrators to only the privileges required for their business function. The first step is to determine which data is needed by each business function and then set strict rules on who gets to access specific business data sets. This is one of the critical tools to help prevent internal malicious actors from misusing data.
If your database supports it, use access control mechanisms to separate the duties of database and system administration from managing the data within the database. At a minimum, you should audit access to data by privileged users. You may be able to avoid granting the database’s default administration role – which is usually far more privileged than required for day-to-day administration – and instead create less-powerful roles that are tailored for the work an administrator performs.
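One way to keep tailored roles honest is to periodically compare what an administrator has been granted against what their function actually requires. The function names and privilege strings here are hypothetical:

```python
# Illustrative least-privilege check. Role names and privilege strings
# are made-up examples, not any database's real privilege model.
REQUIRED = {
    "backup_admin": {"BACKUP DATABASE", "VIEW STATUS"},
}

def excess_privileges(role: str, granted: set[str]) -> set[str]:
    """Privileges granted beyond what the role's duties require."""
    return granted - REQUIRED.get(role, set())

# A backup administrator who has somehow accumulated broad read access:
extra = excess_privileges(
    "backup_admin",
    {"BACKUP DATABASE", "VIEW STATUS", "SELECT ANY TABLE"},
)
```

Run regularly, a check like this catches the privilege creep that turns a narrowly scoped role back into a default-admin equivalent.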
Data Point No. 6: Make it harmless – mask your data.
Sometimes, application developers and administrators need a test environment as they build, maintain, and deploy business applications. In many cases, testing and development will require data sets that are equivalent in size and complexity to production, resulting in many organizations cloning the production database to create these lower-level environments. When that happens, the security risk inherent in the production database is suddenly multiplied as there are now two (or perhaps five) copies of the data. Reduce risk by masking the data – replacing sensitive data with artificially generated or scrambled data that has no inherent sensitivity. The industry term for this is static data masking.
There is another type of masking – dynamic data masking. Some databases provide this feature (vendors may refer to it as dynamic data masking, data redaction or online data transformation). What you are looking for is the ability to change the presentation of data based upon a security policy, without modifying the underlying data. Use dynamic data masking to control the proliferation of sensitive data elements, and to reduce the chance of malicious or accidental disclosure of sensitive data elements. The difference between static and dynamic masking is that static data masking is destructive – it actually changes the data. Dynamic data masking is non-destructive, with no change made to the underlying data. For example, in most cases someone accessing a credit card number should not see the entire number – just the last four digits. This is where dynamic data masking comes into play.
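The contrast between the two techniques can be sketched directly. This is purely illustrative – real masking tools preserve referential integrity, checksums and formats far more carefully:

```python
# Static masking is destructive: the stored value is replaced with a
# format-preserving fake and the original is gone. Dynamic masking is
# presentation-only: the stored value is untouched, the viewer just
# sees all but the last four digits redacted.
import random

def static_mask(card: str) -> str:
    """Replace every digit with a random one, keeping the format."""
    return "".join(random.choice("0123456789") if c.isdigit() else c
                   for c in card)

def dynamic_mask(card: str) -> str:
    """Redact the presentation, showing only the last four digits."""
    total = sum(c.isdigit() for c in card)
    out, seen = [], 0
    for c in card:
        if c.isdigit():
            seen += 1
            out.append(c if seen > total - 4 else "*")
        else:
            out.append(c)
    return "".join(out)

fake = static_mask("4111-1111-1111-1234")     # a harmless test-data value
masked = dynamic_mask("4111-1111-1111-1234")  # "****-****-****-1234"
```

Static masking is the right tool for cloned test environments; dynamic masking is the right tool for production screens where most users only need the last four digits.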
Data Point No. 7: Autonomous options for securing data.
Given how important it is to keep up with patching, autonomous systems are a critical tool in protecting data. The vast majority (85 percent) of security breaches today attack known vulnerabilities that have available patches. Often those patches have been out for months or even years but haven’t been applied, because it is never convenient to bring a system down. Using machine learning, autonomous systems can constantly scan for threats and anomalies and apply patches automatically with minimal downtime. The emergence of self-driving, self-securing and self-repairing technologies offers organizations a smarter way to handle the avalanche of constant patching and re-patching required, especially when cybersecurity talent can be scarce. By implementing autonomous technologies, IT leaders will be free to establish more comprehensive risk awareness and prevention strategies, and that’s important if you want to secure the crown jewels.