Birst to Help Midsize Enterprises Wrangle Hadoop, Big Data

Birst, a software-as-a-service (SaaS) business intelligence and analytics provider, has announced support for Hadoop to help enterprises, particularly midsize organizations, wrangle big data.

With more and more organizations looking to Apache Hadoop to wrangle big data, the need for tools that make Hadoop more palatable for business users has grown, and Birst has burst onto the scene to help.

Birst, a business analytics software provider, has announced support for Apache Hadoop, combining the massive scale of Hadoop datasets with Birst's multi-dimensional database. Birst is focused on making Hadoop more business-friendly, particularly for midsize organizations that have fewer resources than large enterprises.

With the combination of Birst and Hadoop, business users can now aggregate and visualize big data, such as Website interactions, social media and cloud traffic, quickly and easily, the company said. Typically, this would have required extensive extract, transform and load (ETL) processes and considerable effort, which has prevented midsize organizations from making big data actionable.

Birst has branched out to extend its data warehouse technology and business analytics capabilities with big data integration. Birst has traditionally integrated and optimized structured information from SAP, Salesforce, cloud sources and relational databases; now it is extending the same flexibility to big data, the company said. Many organizations recognize the value big data has to offer, but, except for very large enterprises that can manage the complexity, it remains out of reach for most. Birst has lowered the adoption barrier by giving users the capability to treat big data like any other data set.

"Business analytics is changing as the volume of data from online Web interactions skyrockets and customers increasingly want to browse, query or merge transactional data with interaction data," Rick Spickelmier, CTO of Birst, said in a statement. "Data in Hadoop is not well-suited for business intelligence, and making it actionable takes a lot of work. Birst's automated multi-dimensional database allows organizations to quickly and easily take big data and make sense of it."

Birst provides access to data stored in Hadoop and equips the business analyst with the power to discover new relationships and patterns in data without locking them into manual ETL processes. With Birst's agile BI solution and Hadoop's massive store, business users can now:

• Obtain high-level analytic insight on massive amounts of data. Birst creates multi-dimensional models from subsets of Hadoop data and allows business users to browse, query or visualize big data.
• Seamlessly choose between real-time access to Hadoop data and integrating Hadoop data with other data sources, including SAP, Salesforce, operational and financial information, into automatically created multi-dimensional data sets.
• Tap into the power of massive scale from petabytes of data using Hadoop's distributed file system to report on extremely large data volumes.
• Deliver insights to a broad set of individuals in a readily consumable manner via dashboards, reports, ad-hoc queries and mobile delivery, all of which can be modified quickly and easily.

"Information managers must fundamentally rethink their approach to data by planning for all the dimensions of information management," Mark Beyer, research vice president at Gartner, said in a statement. "The business's demand for access to the vast resources of big data gives information managers an opportunity to alter the way the enterprise uses information."

Birst's support for Hadoop is included in the Birst business analytics platform at no additional charge and will be generally available in 30 days.

Apache Hadoop serves as a foundation of cloud computing and is at the epicenter of big data solutions, Apache Software Foundation officials said. Hadoop enables data-intensive distributed applications to work with thousands of nodes and exabytes of data. Hadoop also enables organizations to more efficiently and cost-effectively store, process, manage and analyze the growing volumes of data being created and collected every day. And it connects thousands of servers to process and analyze data at supercomputing speed.
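The MapReduce model behind Hadoop's distributed processing can be illustrated with a minimal sketch. The Python example below is a hypothetical Hadoop Streaming-style mapper and reducer that counts page views per URL from web-interaction log lines; the log format and function names are illustrative assumptions, not Birst's implementation, and the two phases are run in-process here rather than across a cluster.

```python
# Minimal sketch of Hadoop's MapReduce model (hypothetical example,
# not Birst's implementation): count page views per URL from
# web-interaction log lines of the form "timestamp url user_id".
from itertools import groupby
from operator import itemgetter


def mapper(lines):
    """Map phase: emit a (url, 1) pair for each well-formed log line."""
    for line in lines:
        fields = line.split()
        if len(fields) >= 2:
            yield fields[1], 1


def reducer(pairs):
    """Reduce phase: sum the counts for each URL. Input must be sorted
    by key, as Hadoop's shuffle/sort step guarantees on a real cluster."""
    for url, group in groupby(pairs, key=itemgetter(0)):
        yield url, sum(count for _, count in group)


if __name__ == "__main__":
    logs = [
        "2012-01-01T10:00 /home alice",
        "2012-01-01T10:01 /products bob",
        "2012-01-01T10:02 /home carol",
    ]
    # Simulate Hadoop's shuffle/sort between the map and reduce phases.
    shuffled = sorted(mapper(logs), key=itemgetter(0))
    for url, total in reducer(shuffled):
        print(url, total)
```

On a real Hadoop cluster, the mapper and reducer would run as separate processes on many nodes, with the framework handling the shuffle, sort and fault tolerance that this single-process sketch only simulates.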

Apache Hadoop Vice President Arun Murthy, who used to run the nearly 50,000-node Hadoop configuration at Yahoo before leaving to co-found Hortonworks, said Hadoop 1.0 is a major step for Hadoop, but there is still additional work to be done to make Hadoop even more enterprise-friendly. Some of this work is being done under the Hadoop MapReduce next-generation effort, he said. Results from this effort are expected to land in the next major release of Hadoop, which is due sometime in the middle of 2012, Murthy said.