Oracle has improved the software significantly to emphasize automation.
Database 11g is the cornerstone of Oracle's dynamically allocated computing grids and should garner the attention of database managers with its improved management, recovery and table compression capabilities. Oracle Database 11g, released Aug. 20, also takes much of the guesswork out of advanced database tuning.
I tested Database 11g Enterprise Edition for Linux x86 32-bit systems. The 64-bit Linux edition, the Linux version most likely to be used by Oracle's enterprise customers in a production environment, and all other platforms are expected by the end of the year, according to company officials.
Pricing for Oracle Database 11g is the same as it was for 10g: For example, the Standard Edition One costs $149 per named user or $4,995 per processor, and the Enterprise Edition costs $800 per named user or $40,000 per processor.
Focus on automation
Three years in the making, Oracle Database 11g has a slew of new and improved features that focus on automatically improving the performance of the database, queries, memory and storage usage.
One of these is a "what-if" tool called Database Replay. During testing, this tool allowed me to capture an actual database workload on a production system and replay it on a test system.
Database Replay is not a feature to be toyed with lightly, however, as getting the best results may require restarting the production database. When I walked through the setup wizard, I had to acknowledge several stern (and appropriately so) warnings confirming that the system was ready to capture the workload without causing disruption.
Database 11g was able to take the captured workload and transform it into what are called Replay Files. The wizard also created metadata files needed to process the workload. I was able to play back the captured workload on a test system.
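Under the hood, the capture side of this process is driven by the DBMS_WORKLOAD_CAPTURE package; the wizard generates the equivalent calls. A minimal command-line sketch, with the directory and capture names my own invention, might look like this:

```sql
-- Illustrative only: directory path and capture name are hypothetical.
CREATE DIRECTORY capture_dir AS '/u01/app/oracle/capture';

BEGIN
  -- Start recording the production workload into the directory
  DBMS_WORKLOAD_CAPTURE.START_CAPTURE(
    name     => 'peak_workload',
    dir      => 'CAPTURE_DIR',
    duration => 3600);  -- stop automatically after one hour
END;
/

-- Or end the capture manually once enough has been recorded
BEGIN
  DBMS_WORKLOAD_CAPTURE.FINISH_CAPTURE;
END;
/
```

On the test system, the captured files are preprocessed (DBMS_WORKLOAD_REPLAY handles this) before the workload can be played back.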
The ability to test and work with actual workloads is among the most accurate methods to predict how application and system changes will impact real-life performance.
Along those lines, the SPA (SQL Performance Analyzer) enables database administrators to predict the impact that system changes will have on such factors as the SQL execution plan. SPA can be used to anticipate performance changes due to a database upgrade, tuning, schema changes, statistics gathering, database parameter changes and even operating system or hardware changes.
I created an STS (SQL Tuning Set) to hold the workload information, including the execution plans, binds and statistics on execution.
I then used a Guided Workflow to define the sequence of steps for executing two trial SPA tests. The tuning sets included SQL statements and execution statistics, along with the execution context.
The STS I created ran against a test human resources database. After running the STS, I applied the patch and replayed the tuning set with the patch enabled. In my test, the SPA showed an improvement in the execution of most of the SQL, along with a regression in the performance of some statements.
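The before-and-after trials can also be scripted through the DBMS_SQLPA package; the tuning set and execution names below are hypothetical:

```sql
-- Illustrative SPA run: one trial before the change, one after,
-- then a comparison of the two.
DECLARE
  tname VARCHAR2(64);
BEGIN
  tname := DBMS_SQLPA.CREATE_ANALYSIS_TASK(sqlset_name => 'hr_workload_sts');

  -- First trial: execute the workload as-is
  DBMS_SQLPA.EXECUTE_ANALYSIS_TASK(task_name => tname,
      execution_type => 'TEST EXECUTE', execution_name => 'before_change');

  -- ... apply the patch or parameter change here, then run the second trial ...
  DBMS_SQLPA.EXECUTE_ANALYSIS_TASK(task_name => tname,
      execution_type => 'TEST EXECUTE', execution_name => 'after_change');

  -- Compare the trials; the report flags improved and regressed SQL
  DBMS_SQLPA.EXECUTE_ANALYSIS_TASK(task_name => tname,
      execution_type => 'COMPARE PERFORMANCE');
END;
/
```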
The SQL Tuning Advisor in Database 11g was able to use the information found in the Guided Workflow and make specific suggestions for tuning the SQL text.
The new SQL Query Result Cache improves application performance by caching SQL query and Oracle PL/SQL function results in memory. Until the data in the database object is modified, the cached query results are used. The performance improvement depends on the underlying data remaining fairly static. In general, though, using the optional Result Cache yields the most benefit when used for frequently executed SQL queries and PL/SQL functions.
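Enabling the cache for a given query is as simple as a hint; the table and columns here are made up for illustration:

```sql
-- The RESULT_CACHE hint asks Oracle to keep this query's result set
-- in memory; it is served from the cache until DML on the underlying
-- table invalidates it.
SELECT /*+ RESULT_CACHE */ department_id, AVG(salary)
FROM   employees
GROUP  BY department_id;
```

The RESULT_CACHE_MODE initialization parameter controls whether caching is hint-driven (MANUAL) or applied to all eligible queries (FORCE).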
Extended (or multicolumn) statistics, new in Oracle Database 11g, can help determine if there is a relationship between two or more columns in a table. I used the extended statistics capability to add a set of statistics that showed the relationship between customers, states and countries, all data stored in a single table. The CBO (Cost-Based Optimizer) can use these statistics to reveal multicolumn relationships in the table.
During my tests, I was able to see the CBO calculate the correct selectivity of the single-column predicates, even columns to which a function, such as UPPER(lname), had been applied. As a result, the CBO was able to determine the selectivity and cardinality of the column data and could use the extended statistics to correlate columns.
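Column groups like the one I tested are created through DBMS_STATS; a sketch with hypothetical schema and table names:

```sql
-- Illustrative: define a column group on state and country, then
-- gather statistics so the optimizer can see the correlation.
DECLARE
  ext_name VARCHAR2(32);
BEGIN
  ext_name := DBMS_STATS.CREATE_EXTENDED_STATS(
                ownname   => 'SH',
                tabname   => 'CUSTOMERS',
                extension => '(state, country)');

  -- A regular statistics gather now also populates the extension
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname => 'SH', tabname => 'CUSTOMERS',
    method_opt => 'FOR ALL COLUMNS SIZE AUTO');
END;
/
```

The same mechanism accepts an expression, such as (UPPER(lname)), which is how the optimizer handled my function-based predicate.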
During testing, I set up Database 11g's new flashback data archive to track and store all transactional changes to records, and the database has the ability to retain this information for the lifetime of the record.
Additional storage is needed to hold the archived transaction records, however, and DBAs should be mindful of enabling flashback archiving only on tables deemed to require this extra monitoring capability.
When I configured my tests, I specified a storage quota of 10MB and a retention period of one year for the transaction archive. During tests, I was able to change data such as the salaries of various employees and then use the flashback data archive to restore values that I later determined to be incorrect.
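A setup along the lines of what I describe above looks like this in SQL; the archive, tablespace and table names are hypothetical:

```sql
-- Create an archive with a 10MB quota and one-year retention
CREATE FLASHBACK ARCHIVE hr_archive
  TABLESPACE users
  QUOTA 10M
  RETENTION 1 YEAR;

-- Track all changes to the employees table in that archive
ALTER TABLE employees FLASHBACK ARCHIVE hr_archive;

-- Later, read a salary as it stood before a bad update
SELECT salary
FROM   employees AS OF TIMESTAMP (SYSTIMESTAMP - INTERVAL '1' DAY)
WHERE  employee_id = 100;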
This is certainly an area that will be of interest to forensic data investigators. In general, the amount of metadata created and stored in Database 11g will make it increasingly difficult for insiders to fake records or clean up change trails. Only the most diligent and obsessive wrongdoers will be able to eradicate the tampering evidence in Database 11g.
Oracle has beefed up security in Database 11g. I used Transparent Data Encryption to scramble data stored on disk. Working through the Web-based Enterprise Manager Database Control, I used the feature to encrypt individual columns. My tests showed that using the encrypted data (in my case, made-up credit limits) wasn't that much slower than using unencrypted data.
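The same column encryption can be applied from the command line; the password, table and column names here are placeholders:

```sql
-- The master key lives in an external wallet, which must be set up
-- and open before columns can be encrypted.
ALTER SYSTEM SET ENCRYPTION KEY IDENTIFIED BY "wallet_password";

-- Transparently encrypt the credit_limit column on disk
ALTER TABLE customers MODIFY (credit_limit ENCRYPT USING 'AES192');
```

Queries against the column are unchanged; encryption and decryption happen transparently below the SQL layer.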
A rather complicated key storage and management procedure is used to meet regulatory requirements. In a nutshell: Each table with encrypted columns has a single key, which is itself encrypted with the database master key and stored in the data dictionary alongside the table. No keys are stored in the clear, and the master key is kept in a security module external to the database.
The important thing for DBAs and IT security staff is that the master database key must be secured, and its location must remain known so that it can be used when it's time to decrypt the data. I point this out because database master keys change (or should change) over time. DBAs must put a plan in place now for ensuring that future staff will have access to the keys, and therefore the data, when needed.
The Enterprise Manager Database Control management and monitoring tool has been improved in Oracle Database 11g, enhancing security and providing more detailed reporting. First introduced in Oracle Database 10g, Enterprise Manager uses a tabbed browsing interface to show performance history, alerts, host configuration, patch workflow and table space contents, and serves as an entry point for most administrative tasks.
Oracle has also turned its attention to storage: Table compression is used aggressively in Database 11g to reduce storage requirements.
During tests, I created two tables and compressed one to see the difference in size. I also compared the time needed to access the compressed data with the time needed for the uncompressed data. The results were quite favorable, although I will continue to test this feature to get an idea of what data is most suited for compression and will report on that as results become available.
The tables I used for testing contained just more than 900,000 sales records. The uncompressed table was approximately 36MB and the compressed table was 19MB, just over half the size of the uncompressed data. While there was about a 47 percent reduction in size, access time to perform operations on the two tables differed by only approximately 1 to 2 percent.
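A comparison along these lines can be sketched as follows; the table names are hypothetical, and the COMPRESS FOR ALL OPERATIONS clause is the 11g option that extends compression to ordinary inserts and updates rather than only bulk loads:

```sql
-- Build a compressed copy of an existing table
CREATE TABLE sales_comp COMPRESS FOR ALL OPERATIONS
  AS SELECT * FROM sales;

-- Compare the segment sizes of the two tables, in megabytes
SELECT segment_name, bytes/1024/1024 AS mb
FROM   user_segments
WHERE  segment_name IN ('SALES', 'SALES_COMP');
```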