In-Memory Databases Driving Big Data Efficiency: 10 Reasons Why

By Chris Preimesberger  |  Posted 2013-02-15
Dealing With the Half Life of Data Value


Businesses need to move fast. As soon as data enters an organization, its half-life begins: data is most valuable in real time, and its value diminishes until it becomes obsolete and irrelevant. With in-memory IT, data can be processed and stored in real time, yielding insights that can be acted on immediately rather than 24 hours or more later.


With big data workloads trending upward across all vertical segments, enterprises are searching for the most efficient ways to manage the processing, storage and retrieval of data, both in batches and in real time. In response to demand from IT managers, new technology is coming to market to ease big data integration and enable faster analysis for business decisions. One key strategy uses the in-memory database, which enables faster processing of high-volume data. By adopting in-memory database technology, real-time applications in verticals such as financial services, digital advertising, telecom and the mobile Web can gain a number of benefits. The latest big data applications are possible mainly because in-memory databases can be anywhere from 10 to 100 times faster than conventional databases installed on hard drives. Leading in-memory database offerings include SAP HANA, Oracle TimesTen, ScaleOut, Birst, SAS, Terracotta and Sweden's Starcounter. For this slide show, eWEEK spoke with Eric Frenkiel, CEO and co-founder of MemSQL, to discuss this trend. MemSQL positions itself as the world's fastest database.
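The speed gap between memory-resident and disk-backed storage is easy to observe firsthand. The sketch below, which is illustrative and not drawn from any vendor mentioned in the article, uses SQLite (which supports both an in-memory mode and a file-backed mode) to time the same insert workload against both; the table name, row count, and per-row commits are assumptions chosen to make the disk's sync cost visible.

```python
import os
import sqlite3
import tempfile
import time

def time_inserts(conn, n=500):
    """Insert n rows, committing each one so file-backed DBs must sync to disk."""
    cur = conn.cursor()
    cur.execute("CREATE TABLE events (id INTEGER PRIMARY KEY, payload TEXT)")
    start = time.perf_counter()
    for i in range(n):
        cur.execute("INSERT INTO events (payload) VALUES (?)", (f"event-{i}",))
        conn.commit()  # forces a durable write on the file-backed connection
    return time.perf_counter() - start

# Same engine, same workload; only the storage medium differs.
mem = sqlite3.connect(":memory:")
disk_path = os.path.join(tempfile.mkdtemp(), "events.db")
disk = sqlite3.connect(disk_path)

t_mem = time_inserts(mem)
t_disk = time_inserts(disk)
print(f"in-memory: {t_mem:.4f}s  on-disk: {t_disk:.4f}s")
```

The exact ratio depends heavily on hardware, filesystem caching, and commit frequency, so this demonstrates the direction of the effect rather than the 10x-100x figure cited above.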

