Oracle on April 6 added a specialized integration module to its big data product and services lineup.
Oracle’s Data Integrator for Big Data enables customers to move from merely stockpiling data to making business decisions based on it, streamlines their Hadoop development, and enhances data transparency and governance across the organization, said Jeff Pollock, Oracle vice president of product management.
Generating value from big data requires the right tools to move and prepare data so that new insights can be discovered efficiently; too much chaff in the data slows the entire process. To act on those insights, new data must integrate securely with existing data, infrastructure, applications, and processes.
Data Integrator for Big Data also gives customers access to a broader range of data types from on-premises and cloud sources, helps deliver higher performance as data volumes grow, and enriches data quality for business decisions and regulatory compliance, he said.
The new module is designed to run natively across an entire Hadoop cluster, without requiring proprietary code to be installed or a separate server to be run.
“With this new product, Oracle has made it possible for our customers to be big data ETL developers without having to learn Scala, Pig or Oozie code,” Pollock said. “In fact, Oracle is the only vendor that can automatically generate Spark, Hive and Pig transformations from a single mapping which allows our customers to focus on business value and the overall architecture rather than multiple programming languages.”
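To make the single-mapping claim concrete, here is a minimal sketch of the kind of transformation such a tool generates, hand-written as a Spark job in Scala. The table and column names (orders, amount, region) are hypothetical illustrations, not Oracle-generated output; the comments show the HiveQL a developer would otherwise have to write separately for the Hive engine.

```scala
// A minimal sketch of one logical mapping (filter, then aggregate) written
// as a Spark job. All table/column names here are hypothetical examples.
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

object OrderSummary {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("order-summary")
      .enableHiveSupport() // read source tables registered in the Hive metastore
      .getOrCreate()

    // Logical mapping: keep orders above a threshold, sum revenue per region.
    // Oracle's claim is that the same mapping can also be emitted as HiveQL
    // or Pig Latin, so a developer writes it once rather than per engine.
    spark.table("orders")
      .filter(col("amount") > 100)
      .groupBy("region")
      .agg(sum("amount").as("revenue"))
      .write
      .mode("overwrite")
      .saveAsTable("order_revenue_by_region")

    // Equivalent HiveQL that would otherwise be hand-written for Hive:
    //   INSERT OVERWRITE TABLE order_revenue_by_region
    //   SELECT region, SUM(amount) AS revenue
    //   FROM orders WHERE amount > 100 GROUP BY region;

    spark.stop()
  }
}
```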
Oracle Data Integrator for Big Data, Oracle Data Integrator 12c, Oracle GoldenGate for Big Data, and Oracle GoldenGate 12c are part of Oracle’s data integration portfolio, which includes data services, data federation, metadata management, data quality, bulk data movement, and real-time replication.