VMware announced a new open-source project known as Serengeti, which is an effort to virtualize the Apache Hadoop platform.
VMware has announced a new open-source
project, dubbed Serengeti, to enable enterprises to quickly
deploy, manage and scale Apache Hadoop in virtual and cloud environments.
In addition, VMware announced
enhancements to Spring for Apache Hadoop as well as integrations with vSphere
and Apache Hadoop to deploy a highly available Hadoop platform. VMware is
working with the Hadoop community to contribute extensions that will make key
components virtualization-aware to support elastic scaling and further
improve Hadoop performance in virtual environments.
"Apache Hadoop has the potential to transform business by allowing enterprises to harness very large amounts of data for competitive advantage," Jerry Chen, vice president of Cloud and Application Services at VMware, said in a statement. "It represents one dimension of a sweeping change that is taking place in applications, and enterprises are looking for ways to incorporate these new technologies into their portfolios. VMware is working with the Apache Hadoop community to allow enterprise IT to deploy and manage Hadoop easily in their virtual and cloud environments."
"Apache Hadoop is emerging as the de facto standard for big data processing; however, deployment and operational complexity, the need for dedicated hardware, and concerns about security and service-level assurance prevent many enterprises from leveraging the power of Hadoop," VMware said in a press release. By decoupling Hadoop nodes from the underlying physical infrastructure, VMware can bring the benefits of cloud infrastructure (rapid deployment, high availability, optimal resource utilization, elasticity and secure multi-tenancy) to Hadoop, the company said.
In a blog post about Serengeti, Richard McDougall, CTO for application
infrastructure at VMware, said:
"Hadoop gives the ability to store massive amounts of data in a reliable data store, and MapReduce provides a data-parallel programming framework to compute against that data. We have observed that the majority of our customers are using many of the higher-level ecosystem tools, which utilize the power of the underlying data-parallel Hadoop platform through familiar data-access methods, such as Hive for query access, or Pig for script-based data processing."
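For readers unfamiliar with the model McDougall describes, the classic word-count job gives a feel for the MapReduce programming framework. The sketch below uses the standard Hadoop Java API; the input and output paths are placeholders, not anything specific to VMware's announcement.

    import java.io.IOException;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.Reducer;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

    public class WordCount {

      // Mapper: emit (word, 1) for every whitespace-delimited token.
      public static class TokenMapper extends Mapper<Object, Text, Text, IntWritable> {
        private static final IntWritable ONE = new IntWritable(1);
        private final Text word = new Text();

        protected void map(Object key, Text value, Context context)
            throws IOException, InterruptedException {
          for (String token : value.toString().split("\\s+")) {
            if (!token.isEmpty()) {
              word.set(token);
              context.write(word, ONE);
            }
          }
        }
      }

      // Reducer: sum the per-word counts produced by all mappers.
      public static class SumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
        protected void reduce(Text key, Iterable<IntWritable> values, Context context)
            throws IOException, InterruptedException {
          int sum = 0;
          for (IntWritable v : values) {
            sum += v.get();
          }
          context.write(key, new IntWritable(sum));
        }
      }

      public static void main(String[] args) throws Exception {
        Job job = new Job(new Configuration(), "word count");
        job.setJarByClass(WordCount.class);
        job.setMapperClass(TokenMapper.class);
        job.setCombinerClass(SumReducer.class); // safe here: summing is associative
        job.setReducerClass(SumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));   // e.g. an HDFS input directory
        FileOutputFormat.setOutputPath(job, new Path(args[1])); // must not already exist
        System.exit(job.waitForCompletion(true) ? 0 : 1);
      }
    }

Tools like Hive and Pig generate jobs of exactly this shape from higher-level queries and scripts, which is why McDougall points to them as the common on-ramp.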
Available for free download under
the Apache 2.0 license, Serengeti is a one-click deployment toolkit that
allows enterprises to leverage the VMware vSphere platform to deploy a highly
available Apache Hadoop cluster in minutes, including common Hadoop components
like Apache Pig and Apache Hive. By using Serengeti to run Hadoop on
VMware vSphere, enterprises can easily leverage the high-availability, fault-tolerance
and live migration capabilities of the world's most trusted, widely deployed virtualization platform to improve the availability and manageability of Hadoop clusters.
"Hadoop must become friendly with the technologies and practices of enterprise IT if it is to become a first-class citizen within enterprise IT infrastructure," Tony Baer, principal analyst at OVUM, said in a statement. "The resource-intensive nature of large big data clusters makes virtualization an important piece that Hadoop must accommodate. VMware's involvement with the Apache Hadoop project and its new Serengeti Apache project are critical moves that could provide enterprises the flexibility that they will need when it comes to prototyping and deploying Hadoop."
VMware is working with the leading
Apache Hadoop distribution vendors, including Cloudera, Greenplum, Hortonworks,
IBM and MapR, to support a wide range of distributions.
To further simplify and speed
enterprise use of Hadoop, VMware is working with the Hadoop community to
contribute changes to the Hadoop Distributed File System (HDFS) and Hadoop
MapReduce projects to make them virtualization-aware, so that data and
compute jobs can be optimally distributed across a virtual infrastructure.
These changes will enable enterprises to achieve a more elastic, secure and
highly available Hadoop cluster.
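To give a sense of what "virtualization-aware" means in practice, Hadoop already lets deployers plug a custom topology resolver into HDFS and MapReduce through the org.apache.hadoop.net.DNSToSwitchMapping interface (wired in via the topology.node.switch.mapping.impl property in Hadoop 1.x). The sketch below adds a node-group layer so the schedulers can tell which virtual machines share a physical host; the hostname convention is an illustrative assumption, not code from VMware's contribution.

    import java.util.ArrayList;
    import java.util.List;
    import org.apache.hadoop.net.DNSToSwitchMapping;

    // Illustrative resolver that maps each Hadoop node to a /rack/node-group
    // path, so HDFS and MapReduce can tell which VMs share a physical host.
    // Targets the single-method Hadoop 1.x interface; the hostname convention
    // below is a made-up assumption for this sketch.
    public class NodeGroupAwareMapping implements DNSToSwitchMapping {

      public List<String> resolve(List<String> names) {
        List<String> paths = new ArrayList<String>(names.size());
        for (String name : names) {
          // Assume VM hostnames such as "hadoop-vm3.host07.rack2.example.com"
          // and turn them into "/rack2/host07" network paths.
          String[] parts = name.split("\\.");
          String nodeGroup = parts.length > 1 ? parts[1] : "default-host";
          String rack = parts.length > 2 ? parts[2] : "default-rack";
          paths.add("/" + rack + "/" + nodeGroup);
        }
        return paths;
      }
    }

With host-level placement visible, HDFS can avoid putting multiple replicas of a block on VMs that share one physical machine, which is the kind of distribution decision the contributed extensions are meant to inform.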
In his post, McDougall addressed from several perspectives how and why Hadoop would benefit from virtualization:
"A full big-data platform typically consists of the Hadoop distributed file system and core MapReduce, HBase, Pig, Hive, Sqoop and a big-SQL database using traditional SQL or distributed SQL (like Greenplum DB) for more regularly accessed semi-structured data. A good strategy is to architect a common shared platform, on which all of the big-data technologies can reside. By virtualizing, all hardware nodes can be common, eliminating the need for special hardware for master services (the NameNode), so that if multiple clusters are deployed, you no longer need to provision special servers for each of the master services."
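As an illustration of those familiar data-access methods, a client application can query such a platform through Hive's SQL-like interface while Hive compiles the query into MapReduce jobs underneath. The sketch below uses the HiveServer1-era JDBC driver; the host, database and table names are placeholder assumptions.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class HiveQueryExample {
      public static void main(String[] args) throws Exception {
        // Register the HiveServer1-era JDBC driver; host and port are placeholders.
        Class.forName("org.apache.hadoop.hive.jdbc.HiveDriver");
        Connection conn =
            DriverManager.getConnection("jdbc:hive://hive.example.com:10000/default", "", "");
        Statement stmt = conn.createStatement();
        // Hive compiles this SQL-like query into MapReduce jobs behind the
        // scenes; the "clicks" table is a hypothetical example.
        ResultSet rs = stmt.executeQuery("SELECT page, COUNT(1) FROM clicks GROUP BY page");
        while (rs.next()) {
          System.out.println(rs.getString(1) + "\t" + rs.getLong(2));
        }
        rs.close();
        stmt.close();
        conn.close();
      }
    }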
VMware also announced updates to Spring
for Apache Hadoop, an open-source project first launched in February
2012 to make it easy for enterprise developers to build distributed processing
solutions with Hadoop. These updates allow Spring developers to easily build
enterprise applications that integrate with the HBase database, the Cascading
library and Hadoop security. Spring for Apache Hadoop is free
to download and available under the open-source Apache 2.0 license.
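To show what the HBase integration looks like from the developer's side, here is a minimal sketch that reads a single row with Spring for Apache Hadoop's HbaseTemplate; the table name, column family and ZooKeeper host are illustrative assumptions rather than code from the project.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Result;
    import org.apache.hadoop.hbase.util.Bytes;
    import org.springframework.data.hadoop.hbase.HbaseTemplate;
    import org.springframework.data.hadoop.hbase.RowMapper;

    public class UserLookup {
      public static void main(String[] args) {
        // Point the HBase client at the cluster; the ZooKeeper host is a placeholder.
        Configuration config = HBaseConfiguration.create();
        config.set("hbase.zookeeper.quorum", "zk.example.com");

        HbaseTemplate template = new HbaseTemplate(config);

        // Read one row from a hypothetical "users" table and map its
        // "info:email" cell to a String. The template manages the HBase
        // table handle, so the application code stays small.
        String email = template.get("users", "user-42", new RowMapper<String>() {
          public String mapRow(Result result, int rowNum) throws Exception {
            return Bytes.toString(
                result.getValue(Bytes.toBytes("info"), Bytes.toBytes("email")));
          }
        });
        System.out.println("email = " + email);
      }
    }

The template-and-callback style mirrors Spring's JdbcTemplate, which is the point of the project: Hadoop data stores get the same programming model enterprise Java developers already use.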
Together, these projects and
contributions are designed to help accelerate Hadoop adoption and enable
enterprises to leverage big data analytics applications, such as Cetas, to
obtain real-time, intelligent insight into large quantities of data. VMware acquired Cetas in April, and the Cetas
analytics service is available at www.cetas.net.
Darryl K. Taft covers the development tools and developer-related issues beat from his office in Baltimore. He has more than 10 years of experience in the business and is always looking for the next scoop. Taft is a member of the Association for Computing Machinery (ACM) and was named 'one of the most active middleware reporters in the world' by The Middleware Co. He also has his own card in the 'Who's Who in Enterprise Java' deck.