1. Data Snapshots to Protect Data
Without snapshots of the data in a storage cluster, an accidental file deletion is propagated to every replica, typically all three copies, and the data is gone for good. The same applies to application-level data corruption. Many Hadoop users have lost valuable data this way. With snapshots, users can recover data to a point in time, just as they would in any enterprise-class file system.
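Later Apache Hadoop releases (2.1 and up) added exactly this kind of snapshot support to HDFS. The sketch below, which assumes a directory an administrator has already marked snapshottable with hdfs dfsadmin -allowSnapshot, shows one way a point-in-time copy can be taken and a deleted file pulled back out of it; the paths and snapshot name are hypothetical.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.FileUtil;
import org.apache.hadoop.fs.Path;

public class SnapshotRecovery {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);

        // Hypothetical directory an admin has made snapshottable with:
        //   hdfs dfsadmin -allowSnapshot /data/critical
        Path dir = new Path("/data/critical");

        // Take a point-in-time snapshot before a risky batch run.
        fs.createSnapshot(dir, "before-batch-run");

        // ... a job later corrupts or deletes /data/critical/report.csv ...

        // Recover the file by copying it back out of the read-only
        // .snapshot directory HDFS exposes under the snapshot root.
        Path snapshotCopy = new Path(dir, ".snapshot/before-batch-run/report.csv");
        FileUtil.copy(fs, snapshotCopy, fs, new Path(dir, "report.csv"),
                      false /* keep the snapshot copy */, conf);

        fs.close();
    }
}
```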
2. Data Mirroring for Backup
3. Rolling Software Upgrades
4. Multi-tenancy for Application Sharing
5. Performance and Scale
6. High Availability
High availability with automated stateful failover and self-healing is essential in every enterprise environment. As businesses roll out Hadoop, it should be held to the same standards as other enterprise applications. With automated stateful failover and high availability, enterprises can eliminate single points of failure and protect users against both planned and unplanned downtime.
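In Apache Hadoop this requirement eventually took shape as NameNode high availability: clients address a logical nameservice rather than a single host, and a failover proxy provider silently reroutes calls to whichever NameNode is active. The sketch below illustrates the client side of that arrangement; the cluster name and host names are hypothetical.

```java
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;

public class HaClient {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();

        // Logical nameservice: clients never name a specific NameNode host,
        // so an automated failover is invisible to them.
        conf.set("fs.defaultFS", "hdfs://mycluster");
        conf.set("dfs.nameservices", "mycluster");
        conf.set("dfs.ha.namenodes.mycluster", "nn1,nn2");
        conf.set("dfs.namenode.rpc-address.mycluster.nn1", "namenode1.example.com:8020");
        conf.set("dfs.namenode.rpc-address.mycluster.nn2", "namenode2.example.com:8020");

        // Proxy provider that routes calls to the currently active
        // NameNode and retries transparently on failover.
        conf.set("dfs.client.failover.proxy.provider.mycluster",
                 "org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider");

        FileSystem fs = FileSystem.get(URI.create("hdfs://mycluster"), conf);
        System.out.println("Connected to: " + fs.getUri());
        fs.close();
    }
}
```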
7. Ability to Integrate Into Existing Environments
The limitations of the Hadoop Distributed File System force wholesale changes to existing applications and extensive development of new ones. Enterprise-grade Hadoop requires full random read/write support and direct access over NFS (Network File System) to simplify development and ensure business users can reach the information they need directly.
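To make the point concrete: once a cluster is exported over NFS and mounted (here at a hypothetical /mnt/cluster), any existing program can read and write cluster data with ordinary file APIs and no Hadoop client library at all, as in this sketch.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardOpenOption;
import java.util.Arrays;
import java.util.List;

public class NfsDirectAccess {
    public static void main(String[] args) throws IOException {
        // Hypothetical mount point where the cluster's NFS export is attached.
        Path report = Paths.get("/mnt/cluster/sales/daily-report.csv");

        // Ordinary file I/O: no Hadoop client, no special connector.
        List<String> lines = Files.readAllLines(report);
        System.out.println("Rows in cluster file: " + lines.size());

        // Appending works the same way it would on any local file, which is
        // the random read/write behavior the article is asking for.
        Files.write(report, Arrays.asList("region-emea,1830.00"),
                    StandardOpenOption.APPEND);
    }
}
```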
8. Ease of Use
With mainstream adoption comes the need for tools that don't require specialized skills or programmer services. New Hadoop offerings must be simple to operate and make it easy to explore patterns in the data. That means direct access through standard protocols from existing tools, without modifying applications or deploying special connectors or clients.
9. Simplified Data Management
With clusters constantly expanding well into the petabyte range, simplifying how data is managed is critical. The next generation of Hadoop distributions should include volume management, making it easy to apply policies across directories and their contents without managing individual files. These policies cover data protection, retention, snapshots and access privileges.
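Stock Apache Hadoop has no volume abstraction, but directory-level quotas and default ACLs approximate the same idea: set policy once at a directory and let every file underneath inherit it. A rough sketch, assuming an HDFS cluster with ACLs enabled (dfs.namenode.acls.enabled=true) and a hypothetical /projects/analytics directory:

```java
import java.util.Arrays;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.permission.AclEntry;
import org.apache.hadoop.fs.permission.AclEntryScope;
import org.apache.hadoop.fs.permission.AclEntryType;
import org.apache.hadoop.fs.permission.FsAction;
import org.apache.hadoop.hdfs.DistributedFileSystem;
import org.apache.hadoop.hdfs.protocol.HdfsConstants;

public class DirectoryPolicy {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        DistributedFileSystem fs = (DistributedFileSystem) FileSystem.get(conf);

        Path project = new Path("/projects/analytics"); // hypothetical directory

        // Space quota: cap the whole subtree at 10 TB of raw storage, so the
        // policy covers every file underneath without touching each one.
        fs.setQuota(project, HdfsConstants.QUOTA_DONT_SET,
                    10L * 1024 * 1024 * 1024 * 1024);

        // Default ACL: every file created under the directory inherits
        // read access for the (hypothetical) "auditors" group.
        AclEntry auditors = new AclEntry.Builder()
            .setScope(AclEntryScope.DEFAULT)
            .setType(AclEntryType.GROUP)
            .setName("auditors")
            .setPermission(FsAction.READ_EXECUTE)
            .build();
        fs.modifyAclEntries(project, Arrays.asList(auditors));

        fs.close();
    }
}
```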
10. Ease of Development
Simply put, the limitations of the Hadoop Distributed File System force wide-scale changes to existing applications and extensive development of new ones. A next-generation storage services layer offering full random read/write support and direct NFS access dramatically simplifies development: existing applications and workflows can be kept as they are, and only the specific steps that need parallel processing have to be converted to the MapReduce framework.
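As a rough illustration of converting only the parallel step, the sketch below is a minimal MapReduce job that counts records per key while everything else in a workflow stays ordinary file processing. The class names and CSV input format are hypothetical, not taken from the article; the job would run with hadoop jar count-by-key.jar CountByKey /input /output.

```java
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class CountByKey {

    // Map: emit (first CSV column, 1) for each input line.
    public static class KeyMapper
            extends Mapper<LongWritable, Text, Text, LongWritable> {
        private static final LongWritable ONE = new LongWritable(1);
        private final Text outKey = new Text();

        @Override
        protected void map(LongWritable offset, Text line, Context ctx)
                throws IOException, InterruptedException {
            outKey.set(line.toString().split(",", 2)[0]);
            ctx.write(outKey, ONE);
        }
    }

    // Reduce: sum the counts for each key, in parallel across the cluster.
    public static class SumReducer
            extends Reducer<Text, LongWritable, Text, LongWritable> {
        @Override
        protected void reduce(Text key, Iterable<LongWritable> counts, Context ctx)
                throws IOException, InterruptedException {
            long sum = 0;
            for (LongWritable c : counts) sum += c.get();
            ctx.write(key, new LongWritable(sum));
        }
    }

    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "count-by-key");
        job.setJarByClass(CountByKey.class);
        job.setMapperClass(KeyMapper.class);
        job.setCombinerClass(SumReducer.class);
        job.setReducerClass(SumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(LongWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
```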