The new "runner" will let developers target a Dataflow pipeline for execution on a Spark cluster, Cloudera said.
Google announced Cloud Dataflow last June as a managed service designed to help companies ingest and analyze huge data sets both in batch processing and in real-time streaming mode.
In December, the company released a Cloud Dataflow Software Development Kit (SDK)
into the open-source community to encourage software developers to write applications that integrate easily with the managed service as well as with other execution environments.
One of the results of that move is a version of Cloud Dataflow that runs on Cloudera's distribution of the open-source Apache Spark engine for large-scale data processing. The new Dataflow "runner" announced Jan. 20 by Cloudera
will allow developers to target a Dataflow pipeline for execution on a cloud-hosted or on-premises Spark cluster as well as on Google's managed service.
One of the most compelling aspects of Cloud Dataflow is its support for pipeline logic that can execute both in batch and streaming mode, Josh Wills, senior director of data science at Cloudera, said in the company's blog post
announcing the new development.
Cloud Dataflow's streaming capabilities are more advanced than those available with Spark Streaming, while its batch execution engine optimizes the performance of pipelines that do not process streaming data, Wills said.
Cloud Dataflow combines several major technologies that Google has used internally for years for large-scale data processing, including MapReduce, the FlumeJava batch-processing engine and the MillWheel stream-processing engine. "Dataflow is a synthesis of our investments" in data processing technologies, said Eric Schmidt, a product manager with Google's Cloud Platform team. "From a developer's perspective, it is a programming model and a managed service," he said.
The Cloud Dataflow SDK that Google released last December gives developers a way to write big data applications that combine batch and stream processing capabilities without the need for separate programming models or separate infrastructures for running them.
"What they would have to do previously is run a different SDK" for each mode, Schmidt said. "You would either have a set of users doing a static MapReduce batch job, or you would have another camp [doing streaming analytics]," he said. "We wanted to merge both batch and stream and have one combined service infrastructure" for running both, he said.
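The unified model Schmidt describes — one piece of pipeline logic that can run against either a bounded batch or an arriving stream — can be sketched in plain Java. This is an illustration of the idea only, not the Dataflow SDK API; the class and method names here are hypothetical:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Function;
import java.util.stream.Stream;

public class UnifiedPipeline {
    // The pipeline logic: written once, independent of execution mode.
    static Function<String, Integer> wordCount = line -> line.split("\\s+").length;

    // Batch mode: the logic is applied to a complete, bounded data set.
    static List<Integer> runBatch(List<String> lines) {
        List<Integer> out = new ArrayList<>();
        for (String line : lines) out.add(wordCount.apply(line));
        return out;
    }

    // Streaming mode: the same logic is applied to elements as they arrive.
    static void runStreaming(Stream<String> arriving, List<Integer> sink) {
        arriving.forEach(line -> sink.add(wordCount.apply(line)));
    }

    public static void main(String[] args) {
        List<String> data = List.of("hello world", "one two three");

        // Same logic, batch mode:
        System.out.println(runBatch(data));   // prints [2, 3]

        // Same logic, streaming mode:
        List<Integer> sink = new ArrayList<>();
        runStreaming(data.stream(), sink);
        System.out.println(sink);             // prints [2, 3]
    }
}
```

In the real SDK, the developer writes transforms once and the service decides how to execute them; the point of the sketch is only that no separate programming model is needed for the two modes.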
Google released the SDK into the open-source community in December to ensure that Dataflow is ported to other execution environments as well, he said. The Cloudera Apache Spark announcement is one example of the direction that Google has in mind for Dataflow, he said.
One of the key questions when Google first announced Dataflow was whether developers using the programming model would be locked into Google infrastructure for running their pipelines. "Our strategy has been to extend the SDK to open source so they can extend it to other environments," Schmidt said.
With Tuesday's announcement, Cloud Dataflow now can run on Google's infrastructure, a Spark cluster or a local machine, he said.
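The pluggable-runner idea behind that flexibility can be sketched as follows — again purely illustrative, with hypothetical names rather than the Dataflow SDK's actual runner classes: the pipeline is defined once, and a runner chosen at launch time decides where it executes.

```java
import java.util.List;

public class RunnerDemo {
    interface PipelineRunner { String run(List<String> pipelineInput); }

    // Stand-in for running a pipeline on a developer's local machine.
    static class LocalRunner implements PipelineRunner {
        public String run(List<String> input) { return "local:" + input.size(); }
    }

    // Stand-ins for remote targets; real runners would submit the pipeline
    // to Google's managed service or to a Spark cluster.
    static class ManagedServiceRunner implements PipelineRunner {
        public String run(List<String> input) { return "managed:" + input.size(); }
    }
    static class SparkClusterRunner implements PipelineRunner {
        public String run(List<String> input) { return "spark:" + input.size(); }
    }

    // Select a runner by name, e.g. from a command-line option.
    static PipelineRunner forName(String name) {
        if (name.equals("local"))   return new LocalRunner();
        if (name.equals("managed")) return new ManagedServiceRunner();
        if (name.equals("spark"))   return new SparkClusterRunner();
        throw new IllegalArgumentException("unknown runner: " + name);
    }

    public static void main(String[] args) {
        List<String> input = List.of("a", "b", "c");
        // The same pipeline input runs unchanged under any runner.
        System.out.println(forName("local").run(input));   // prints local:3
        System.out.println(forName("spark").run(input));   // prints spark:3
    }
}
```

The design choice matters for the lock-in question raised below: because the execution target is a launch-time option rather than part of the pipeline code, the same pipeline can move between environments.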
Google's moves are designed to better position the company in the emerging market for services and technologies that can help enterprises extract business value from massive data sets. Over the years, many companies have gotten better at harvesting all kinds of data from transactional systems, clickstreams, system logs, machine sensors, mobile devices and other sources. But they have struggled to extract value from that data, both because of the limitations of traditional database management technologies and because of the complexity involved in building a data processing infrastructure for big data sets.