Why Enterprises Should Embrace Machine Data Analytics

In the Third Age of Data, organizations are beginning to realize the promise of large-scale machine data and how to store it.

First Age of Data: Content Mostly Originated With Humans

Many familiar names in IT rose to prominence during the First Age of Data. Traditional IT infrastructure was designed around data predominantly created by humans, from email and documents to business transactions, databases and records. Twenty to 30 years ago, data volume was driven primarily by business processes in the form of online transactions, such as a bank generating customer statements or similar documents. Those transactions were conducted on mainframes, stored in traditional databases and transferred across storage area networks and related infrastructure until the results landed in your mailbox.

Second Age of Data: Explosion in Content

Then a Second Age of Data arose, still human-centric but driven less by processes and more by an explosion in content: office documents, streaming audio/video, digital imaging and photography, email and websites, to name a few. The addition of this variety of file types, formats and sizes on top of traditional data volume soon led to a huge increase in storage requirements. Pioneering vendors from the previous transactional age were subsequently displaced by scale-out companies that could meet the demand for scale.

Third Age of Data: Handling the Rising Tide of Machine Data

Now, in the Third Age of Data, industries are faced with a rising tide of data being generated from machines—sensor data, imaging, data capture, logging or monitoring and more. This hyperscale growth in machine-generated data provides a wealth of opportunities for enterprises to find new insights from complex systems.

New Parameters for High-Volume Data Storage

At its core, the challenge of machine data in the Third Age of Data involves dealing with new parameters: a previously unheard-of volume and variety, now compounded by the speed and frequency with which machines generate large-scale unstructured data. This combination of volume, variety and velocity creates a "data multiplier effect" that can translate into orders-of-magnitude inflation in the scale of the data being collected. In the next three slides, we explore these three challenges of the Third Age of Data in turn.
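To make the "data multiplier effect" concrete, here is a back-of-the-envelope sketch of how source count, record size and sampling rate compound into daily ingest. All figures (10,000 sensors, 512-byte readings, 10 readings per second) are hypothetical, chosen only to illustrate the arithmetic.

```python
# Illustrative model of the "data multiplier effect": volume (number of
# sources) x variety (average record size) x velocity (sampling rate)
# multiply into total daily ingest. All inputs below are hypothetical.

def daily_ingest_bytes(sources: int, bytes_per_record: int,
                       records_per_sec: float) -> float:
    """Estimate bytes generated per day by a fleet of machine-data sources."""
    seconds_per_day = 24 * 60 * 60
    return sources * bytes_per_record * records_per_sec * seconds_per_day

# Example: 10,000 sensors emitting 512-byte readings, 10 times per second.
total = daily_ingest_bytes(10_000, 512, 10)
print(f"{total / 1e12:.1f} TB/day")  # roughly 4.4 TB per day
```

Doubling any one factor doubles the total, which is why modest growth on each axis at once produces the orders-of-magnitude inflation described above.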

Challenge 1: Machine Data Volume

Embedded sensors in automobiles and roadways supply information about location, speed, direction and operation, allowing for everything from better traffic management to vehicle monitoring, routing and entertainment. Similarly, network packet, traffic, and call and log monitoring provide insight into service operations and security, keeping IT data centers or telecommunications networks safe and sound. The scale of this sensor and packet data is massive and growing every year.

Challenge 2: Machine Data Variety

In some industries, companies rely on constant tiny measurements from ground- or equipment-mounted sensors and devices, as well as huge and complex satellite imagery, weather models, geospatial data and more. Similarly, many companies have a mix of data sources (different machine systems, human-generated content and others), with corresponding differences in data size and type. In either case, systems and storage optimized for one end of the spectrum may not readily handle the other.

Challenge 3: Machine Data Velocity

This may be the most challenging aspect of the Third Age of Data. A good baseball analogy would be third base, known as the "hot corner" for the many hard-hit balls the area receives. Sensors, satellites, networked systems and connected vehicles all have one thing in common: they never sleep. These machines typically operate on the basis of continual measurement—24 hours a day, seven days a week, 365 days a year—constantly streaming data for processing and storage. Moreover, the flood of data can quickly spike. In life sciences, for example, large-scale systems or teams rapidly generate tens of millions of files—or multi-terabyte models—in just a few hours. Keeping up with that data load, and more importantly, understanding its constant ebb and flow, is an equally massive challenge.
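The life-sciences spike above implies a sustained file-creation rate that metadata-heavy storage systems must absorb. A minimal sketch of that arithmetic, using assumed figures of 30 million files over three hours (the source says only "tens of millions" and "a few hours"):

```python
# Hypothetical velocity arithmetic: tens of millions of files in a few
# hours translates into thousands of file creations per second that
# storage metadata services must sustain. Inputs below are assumptions.

def required_files_per_second(total_files: int, hours: float) -> float:
    """Average file-creation rate a storage system must sustain."""
    return total_files / (hours * 3600)

rate = required_files_per_second(30_000_000, 3)  # 30M files in 3 hours
print(f"{rate:,.0f} files/second")  # about 2,778 files per second
```

The average alone understates the problem: because the flood spikes rather than arriving evenly, peak rates can be several times higher still.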

Insight Must Be Gained at Scale Because Scale Gets Higher All the Time

The modern combination of flash-first hybrid storage appliances and real-time analytics has a profound impact on managing the onslaught of machine data in this Third Age of Data. It establishes new levels of performance, scalability, efficiency and reliability, and delivers the data-aware visibility that's crucial to making accurate decisions and assessments in moments. The age of machine data holds tremendous promise and opportunity across a broad range of industries, but only for those that have the ability to gain insight at scale.
