Syncsort Delivers Native Mainframe Hadoop, Spark Data
Yet, while Yogurtcu said Syncsort expects MapReduce to remain the prevalent compute framework in production, the high level of interest should translate into more Spark deployments, mostly running on Hadoop. Apache Spark is an open-source data processing engine built for speed, ease of use and sophisticated analytics. It is designed to handle both batch processing and newer workloads such as streaming, interactive queries and machine learning.

Spark and Hadoop are not competitors, as Hadoop does things that Spark doesn't. While many Hadoop vendors and users are replacing the MapReduce computation framework with Spark, the Hadoop ecosystem as a whole also includes the HDFS storage system and NoSQL key-value stores such as HBase. Spark doesn't do storage; it works with existing storage systems.

Last September, Syncsort announced the integration of the "Intelligent Execution" capabilities of its DMX data integration product suite with Apache Spark. Intelligent Execution enables users to visually design data transformations once and then run them anywhere—across Hadoop, MapReduce, Spark, Linux, Windows or Unix, on-premises or in the cloud, the company said.

"The second part of our announcement is about making access to mainframe data as simple as possible," Yogurtcu told eWEEK. "Two of our big insurance customers had hundreds and hundreds of tables that they needed to transform data from. So we are shipping a tool called Data Funnel, and with that you can access 800 to 1,000 tables at once. It parallelizes data access and brings in all of these tables in parallel. Access to large volumes of data at once is the second part of our announcement. This is to increase productivity and improve development time."

With the new Data Funnel, users can take hundreds of tables and, in one step, load them into the Hadoop Distributed File System (HDFS), she said.
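Syncsort has not published Data Funnel's internals, but the pattern Yogurtcu describes—fetching many database tables concurrently and landing each one in a target file system in a single pass—can be sketched in a few lines of Python. Everything here is illustrative: `fetch_table`, `TARGET_DIR` and the CSV landing format are hypothetical stand-ins, with a local directory standing in for HDFS.

```python
# Illustrative sketch of parallel table ingestion; not Syncsort's actual code.
import csv
import os
from concurrent.futures import ThreadPoolExecutor

TARGET_DIR = "staging"  # hypothetical; a real pipeline would target an HDFS path

def fetch_table(name):
    # Placeholder for a real database read; returns a header row plus data rows.
    return [["id", "value"], ["1", "a"], ["2", "b"]]

def ingest_table(name):
    # Land one table as a file in the staging directory.
    rows = fetch_table(name)
    path = os.path.join(TARGET_DIR, f"{name}.csv")
    with open(path, "w", newline="") as f:
        csv.writer(f).writerows(rows)
    return path

def ingest_all(table_names, workers=8):
    os.makedirs(TARGET_DIR, exist_ok=True)
    # Each table runs on its own worker, so hundreds of tables can be
    # landed in one pass instead of one at a time.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(ingest_table, table_names))

paths = ingest_all([f"table_{i}" for i in range(10)])
```

The key point of the design is that each table is an independent unit of work, so ingestion parallelizes trivially across a worker pool; a production tool would add per-table error handling, type mapping from the source schema, and a writer for HDFS rather than the local file system.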
In addition, with new support for Fujitsu NetCOBOL, Syncsort supports both IBM z Systems and Fujitsu mainframes. This move comes in response to strong demand in the Asia Pacific and Central and Eastern Europe, Middle East and Africa (CEMEA) markets, the company said. "Syncsort continues to leverage their mainframe and big data expertise to solve complex technology issues that prevent organizations from leveraging Hadoop and Spark to store, process and analyze their mainframe data," said George Gilbert, lead big data analyst at Wikibon, in a statement. "Syncsort's new features don't require hard-to-find skills that companies don't want to spend money and time to acquire."
To help mainframe users facing challenges getting data into Hadoop, Syncsort introduced its new high-speed DMX Data Funnel, which enables users to ingest hundreds of database tables at once.