The Sushi Principle: Why Data Is Better Raw Than 'Cooked'

Organizations often need to use data in a different way than originally planned. For flexibility and accuracy, they need fast access to data in its raw state.

How Systems Are Currently Set Up

Most data stacks are made up of three components: an ETL (extract, transform and load) system, databases and a business intelligence layer. Here is how most companies build them: They buy those three pieces, hire a database architect, a database administrator, a project manager and a dozen engineers. They then decide on data schema(s) and wait months while the engineers put everything together. Finally, they spend years finding bugs and fixing errors. Efficient? Not very.

Where Data Comes From and Where It's Stored

Raw data comes from servers, production databases, Internet of things (IoT) sensors and devices, and other periodic measuring or reporting devices. Most businesses store this data in data warehouses, data marts or so-called data lakes (a huge central pool of data). The classic data warehouse splits data into facts about the world, dimensions that describe the facts, and aggregates or metadata that summarize the facts.
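The fact/dimension/aggregate split described above can be sketched with a few hypothetical in-memory tables (a minimal illustration; all names and values are made up):

```python
# Fact table: one row per observed event (e.g. a sale).
facts = [
    {"sale_id": 1, "product_id": 10, "amount": 25.0},
    {"sale_id": 2, "product_id": 10, "amount": 30.0},
    {"sale_id": 3, "product_id": 11, "amount": 12.5},
]

# Dimension table: descriptive attributes, keyed by product_id.
products = {
    10: {"name": "widget", "category": "hardware"},
    11: {"name": "gizmo", "category": "hardware"},
}

# Aggregate: total sales per product, summarized from the facts.
totals = {}
for row in facts:
    totals[row["product_id"]] = totals.get(row["product_id"], 0.0) + row["amount"]

print(totals)  # {10: 55.0, 11: 12.5}
```

The facts stay raw and append-only; the dimensions and aggregates are derived views of them, which is the distinction the rest of the article turns on.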

Data Warehouses Contain 'Cooked' Data

Before they store data, businesses "cook" it by cleaning, compressing, de-duplicating, normalizing, filtering and summarizing it, all in the name of optimizing access to the data. They normalize information out of fact tables into dimension tables and build bitmap and B-tree indexes, often many per table. They also pre-compute aggregates, building online analytical processing (OLAP) cubes, materialized views and other summary tables.
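The cost of this "cooking" can be seen in a tiny sketch of a pre-computed aggregate (a crude, hypothetical materialized view): every write must update both the raw rows and the summary, and any missed update silently corrupts the summary.

```python
raw_rows = []
daily_total = {}  # pre-computed aggregate: day -> sum of amounts

def insert(day, amount):
    """Write the raw row AND keep the aggregate in sync by hand."""
    raw_rows.append({"day": day, "amount": amount})
    # If this second step is ever skipped or double-applied,
    # the "cooked" view drifts away from the raw truth.
    daily_total[day] = daily_total.get(day, 0.0) + amount

insert("2015-06-01", 10.0)
insert("2015-06-01", 5.0)
insert("2015-06-02", 7.0)
print(daily_total)  # {'2015-06-01': 15.0, '2015-06-02': 7.0}
```

Reads against `daily_total` are fast, but correctness now depends on every write path remembering the bookkeeping, which is why cooked stores are error-prone and hard to maintain.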

So Why Do We Cook Data?

Most cooking of data is actually an application-level optimization. Normalizing data, building indexes and pre-computing aggregates make data warehouses fast and compact, but they also make data warehouses error-prone, difficult to maintain and difficult to understand. With today's distributed architectures and faster processors, it's now more efficient—and accurate—to pull information from raw data than it is to depend on summary tables that pull from many different systems and apply obscure logic.

The Sushi Principle, Step 1: Don't Pre-Process

Don't pre-process data when you pull it in, store it or query it. Use a well-tested pipeline and keep it simple. Resist the temptation to add business logic: you'll end up with accuracy concerns, lose the path back to the original data source, and risk breaking the pipeline and losing data.
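A minimal sketch of such a pipeline, under the assumption that ingestion adds only transport metadata and never transforms or drops a record (the function and field names are illustrative, not a real API):

```python
import json
import time

def ingest(raw_line, sink):
    """Append the record untouched; wrap it rather than transform it."""
    sink.append({
        "received_at": time.time(),  # metadata about transport only
        "payload": raw_line,         # original record preserved exactly
    })

store = []
for line in ['{"user": 1, "action": "click"}', "not even valid json"]:
    ingest(line, store)  # malformed input is kept too, never dropped

# The original source data is always recoverable from the payload field.
print(json.loads(store[0]["payload"])["action"])  # click
```

Because no business logic runs at ingest time, nothing here can disagree with the source system, and a bad record is a row to inspect later rather than a pipeline failure.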

The Sushi Principle, Step 2: Store All Data

Store all your data with all its raw details, partitioned and sorted for fast analysis. The data can live in different silos, but a centralized management system is a must.
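One way to picture "raw but partitioned and sorted" is a sketch that buckets events by date and orders them by timestamp inside each bucket; the date-prefix partition key is an illustrative assumption, not a prescribed scheme:

```python
from collections import defaultdict

events = [
    {"ts": "2015-06-02T09:00:00", "user": 2, "action": "view"},
    {"ts": "2015-06-01T12:30:00", "user": 1, "action": "click"},
    {"ts": "2015-06-01T08:15:00", "user": 3, "action": "view"},
]

partitions = defaultdict(list)
for e in events:
    partitions[e["ts"][:10]].append(e)  # partition key: the date prefix

for day in partitions:
    partitions[day].sort(key=lambda e: e["ts"])  # sort within each partition

print(sorted(partitions))  # ['2015-06-01', '2015-06-02']
```

Every raw field survives intact; only the physical layout changes, so a query over one day can skip the other partitions entirely without any information having been cooked away.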

The Sushi Principle, Step 3: Summarize and Sample

Summarize and sample at query time for direct access to complete, accurate and fresh data. This guarantees that you will have the most up-to-date information possible.
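Query-time summarizing and sampling can be sketched as follows (a minimal illustration with synthetic rows; the exact/approximate trade-off shown is an assumption about how such a query might be served):

```python
import random

# Synthetic raw rows: 1,000 events split across two regions.
raw = [{"region": "east" if i % 2 else "west", "amount": float(i)}
       for i in range(1000)]

def total_by_region(rows):
    """Summarize at query time, straight from whatever rows are passed in."""
    out = {}
    for r in rows:
        out[r["region"]] = out.get(r["region"], 0.0) + r["amount"]
    return out

# Exact answer computed directly from the raw data: always fresh.
exact = total_by_region(raw)

# Or a fast approximation from a random sample, scaled back up.
random.seed(0)
sample = random.sample(raw, 100)
approx = {k: v * len(raw) / len(sample)
          for k, v in total_by_region(sample).items()}
```

Because the summary is computed when the question is asked, it reflects every row ingested so far; the sampled variant trades a little accuracy for speed without maintaining any pre-built aggregate.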

Use Cases: Facebook and LinkedIn Were Early Adopters

At first, these two social networking giants built data lakes before figuring out how to make the data ready to use. They were among the first companies to realize that raw data had more potential value than cooked data. They pulled data into data lakes with as little pre-processing as possible, then developed processes that analyzed the raw data in response to queries, delivering fresh, customized, "ready-to-eat" information instead of pre-built, black-box summaries and aggregates.
