Google has proposed that Dataflow, its technology for writing large-scale data processing programs, be considered for inclusion as an Apache Software Foundation Incubator project.
The goal is to foster more collaborative development and governance around the technology so it can be used to build data pipelines that are portable across multiple execution engines, both on-premises and in the cloud. As part of the proposal, Google wants its Dataflow programming model, the Dataflow software development kit (SDK) and the associated “runners” to be bundled under a single ASF incubating project.
Supporting Google in its proposal to the Apache Software Foundation are a slew of other technology companies, including PayPal, Cloudera, Talend and Data Artisans. Any code considered for inclusion in the Apache Software Foundation first has to go through a mandatory incubation period, during which issues such as copyright licensing and the project's future direction are settled.
“We believe this proposal is a step towards the ability to define one data pipeline for multiple processing needs, without tradeoffs, which can be run in a number of runtimes, on-premise, in the cloud, or locally,” Google Software Engineer Frances Perry and Product Manager James Malone wrote Jan. 20.
Cloud Dataflow, Google's managed data processing service based on the technology, will continue as usual and will not be affected by the proposal to move the SDK, programming model and other components to the ASF, Perry and Malone said.
Google designed its Dataflow technology to help developers write enterprise applications or data pipelines that can run on different big data engines such as Apache Spark, Apache Flink and Google's own Cloud Dataflow. It consists of a set of software development kits that Google says can be used to define data processing jobs over large data sets in both streaming and batch mode. Dataflow is especially well suited to high-volume computation, according to Google.
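For illustration, here is a minimal sketch of what a batch pipeline written with the open-source Dataflow SDK for Java might look like. The package and class names follow the 1.x SDK; the file paths, the WordCountSketch class name and the word-count logic are placeholders rather than code from Google's samples.

import com.google.cloud.dataflow.sdk.Pipeline;
import com.google.cloud.dataflow.sdk.io.TextIO;
import com.google.cloud.dataflow.sdk.options.PipelineOptions;
import com.google.cloud.dataflow.sdk.options.PipelineOptionsFactory;
import com.google.cloud.dataflow.sdk.transforms.Count;
import com.google.cloud.dataflow.sdk.transforms.DoFn;
import com.google.cloud.dataflow.sdk.transforms.ParDo;
import com.google.cloud.dataflow.sdk.values.KV;

public class WordCountSketch {

  // Splits each input line into individual words.
  static class ExtractWordsFn extends DoFn<String, String> {
    @Override
    public void processElement(ProcessContext c) {
      for (String word : c.element().split("[^a-zA-Z']+")) {
        if (!word.isEmpty()) {
          c.output(word);
        }
      }
    }
  }

  // Formats each (word, count) pair as a line of text.
  static class FormatCountsFn extends DoFn<KV<String, Long>, String> {
    @Override
    public void processElement(ProcessContext c) {
      c.output(c.element().getKey() + ": " + c.element().getValue());
    }
  }

  public static void main(String[] args) {
    // The execution engine is chosen through the options, not in the pipeline code.
    PipelineOptions options = PipelineOptionsFactory.fromArgs(args).create();
    Pipeline p = Pipeline.create(options);

    p.apply(TextIO.Read.from("/tmp/input.txt"))      // read lines of text
     .apply(ParDo.of(new ExtractWordsFn()))          // split lines into words
     .apply(Count.<String>perElement())              // count occurrences of each word
     .apply(ParDo.of(new FormatCountsFn()))          // format (word, count) pairs
     .apply(TextIO.Write.to("/tmp/word-counts"));    // write output files

    p.run();
  }
}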
In December 2014, the company released its Dataflow SDK to the open-source community in a bid to spur more development activity around the technology. Another reason to open-source the SDK was to quell concerns that using Dataflow would lock people into Google’s technology and infrastructure.
Since then, the SDK has been used to create what Google describes as pluggable runners that let Dataflow pipelines execute on Apache Spark, Apache Flink and Google's hosted Cloud Dataflow engines.
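Because the choice of engine lives in the pipeline's options rather than in its transforms, switching runners does not require rewriting the pipeline itself. The sketch below shows roughly how a runner might be selected in code; DirectPipelineRunner and DataflowPipelineRunner ship with the SDK, while the Flink and Spark runner class names in the comments refer to the community-built flink-dataflow and spark-dataflow projects and are assumptions here.

import com.google.cloud.dataflow.sdk.Pipeline;
import com.google.cloud.dataflow.sdk.options.PipelineOptions;
import com.google.cloud.dataflow.sdk.options.PipelineOptionsFactory;
import com.google.cloud.dataflow.sdk.runners.DirectPipelineRunner;

public class RunnerSelectionSketch {
  public static void main(String[] args) {
    PipelineOptions options = PipelineOptionsFactory.fromArgs(args).create();

    // Pick an execution engine without touching the pipeline's business logic.
    options.setRunner(DirectPipelineRunner.class);        // local, in-process execution
    // options.setRunner(FlinkPipelineRunner.class);      // Apache Flink via flink-dataflow
    // options.setRunner(SparkPipelineRunner.class);      // Apache Spark via spark-dataflow
    // options.setRunner(DataflowPipelineRunner.class);   // Google's managed Cloud Dataflow service

    Pipeline p = Pipeline.create(options);
    // ... apply the same transforms as in the word-count sketch ...
    p.run();
  }
}

In practice the runner is often passed on the command line (for example, via a --runner flag parsed by PipelineOptionsFactory.fromArgs), so the same compiled pipeline can be redeployed on a different engine without code changes.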
Bringing the Dataflow programming model and SDKs into the ASF should yield several benefits, Perry and Malone wrote in their blog post. For instance, the technology lets developers focus on the application or data pipeline itself rather than on the underlying big data engine on which it will run. Dataflow also enables pipelines that are portable across engines, so users do not have to scrap their business logic or rewrite everything from scratch each time they move to a new engine, the two Googlers said.
Google has previously described Dataflow as a combination of several technologies it has used and tested internally for years, including MapReduce, its MillWheel stream processing engine and its FlumeJava batch processing framework.