The Sushi Principle: Why Data Is Better Raw Than ‘Cooked’
How Systems Are Currently Set Up
Most data systems are built from three components: an ETL (extract, transform and load) system, databases and a business intelligence layer. Here is how most companies build IT systems: they buy those three components, hire a database architect, a database administrator, a project manager and a dozen engineers. They then decide on data schema(s) and wait months while the engineers put everything together. Finally, they spend years finding bugs and fixing errors. Efficient? Not very.
Where Data Comes From and Where It’s Stored
Raw data comes from servers, production databases, Internet of Things sensors and devices, and other periodic measuring or reporting devices. Most businesses store this data in data warehouses, data marts or so-called data lakes (huge central pools of data). The classic data warehouse splits data into facts about the world, dimensions that describe those facts, and aggregates or metadata that summarize the facts.
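The fact/dimension/aggregate split above can be sketched with plain Python structures. This is a minimal illustration only; the table and column names (products, sales, revenue by category) are invented for the example, not taken from any particular warehouse product.

```python
# Dimension table: descriptive attributes, keyed by an ID.
dim_product = {
    1: {"name": "espresso", "category": "beverage"},
    2: {"name": "bagel",    "category": "food"},
}

# Fact table: individual events about the world, referencing the dimension.
fact_sales = [
    {"product_id": 1, "qty": 2, "price": 3.00},
    {"product_id": 1, "qty": 1, "price": 3.00},
    {"product_id": 2, "qty": 3, "price": 2.50},
]

# Aggregate: a pre-computed summary of the facts.
agg_revenue_by_category = {}
for row in fact_sales:
    cat = dim_product[row["product_id"]]["category"]
    agg_revenue_by_category[cat] = (
        agg_revenue_by_category.get(cat, 0.0) + row["qty"] * row["price"]
    )

print(agg_revenue_by_category)  # → {'beverage': 9.0, 'food': 7.5}
```

The aggregate is redundant with the fact table; that redundancy is exactly what later sections call “cooking.”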
Data Warehouses Contain ‘Cooked’ Data
Before they store data, businesses “cook” it by cleaning, compressing, de-duplicating, normalizing, filtering and summarizing it, all in the name of optimizing access. Many businesses move normalized information from fact tables into dimension tables and build bitmap and B-tree indexes, often several per table. They also pre-compute aggregates, building online analytical processing (OLAP) cubes, materialized views and other summaries.
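A hedged sketch of what such a “cooking” pipeline does on ingest. The event shape and field names are illustrative; the point is that every step (cleaning, de-duplicating, summarizing) discards information from the raw feed before it is ever stored.

```python
raw_events = [
    {"user": "a", "page": "/home", "ms": 120},
    {"user": "a", "page": "/home", "ms": 120},   # exact duplicate
    {"user": "b", "page": "/buy",  "ms": 340},
    {"user": "c", "page": "/home", "ms": None},  # malformed measurement
]

# Clean: drop malformed rows.
cleaned = [e for e in raw_events if e["ms"] is not None]

# De-duplicate: keep only unique events (order is not preserved here).
deduped = [dict(t) for t in {tuple(sorted(e.items())) for e in cleaned}]

# Summarize: pre-compute an aggregate, as a materialized view would.
view_hits_by_page = {}
for e in deduped:
    view_hits_by_page[e["page"]] = view_hits_by_page.get(e["page"], 0) + 1
```

After these steps the warehouse holds two rows and one summary; the duplicate, the malformed row and the original ordering are gone for good.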
So Why Do We Cook Data?
Most cooking of data is actually an application-level optimization. Normalizing data, building indexes and pre-computing aggregates make data warehouses fast and compact, but they also make data warehouses error-prone, difficult to maintain and difficult to understand. With today’s distributed architectures and faster processors, it’s now more efficient—and accurate—to pull information from raw data than it is to depend on summary tables that pull from many different systems and apply obscure logic.
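The staleness problem with summary tables can be shown in a few lines. This is only a toy contrast under invented numbers: a summary computed at refresh time versus the same answer recomputed from raw data after new records arrive.

```python
# Raw order values at the moment the summary table was last refreshed.
raw_orders = [120, 340, 95]
summary = {"total": sum(raw_orders)}  # the pre-built, "cooked" answer

# New raw data arrives after the refresh.
raw_orders.append(210)

stale = summary["total"]  # 555: the cooked answer lags behind reality
fresh = sum(raw_orders)   # 765: recomputed directly from raw data
```

With cheap distributed compute, recomputing `fresh` on demand is often affordable, and it never disagrees with the underlying data.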
The Sushi Principle, Step 1: Don’t Pre-Process
Don’t pre-process data when you pull it in, store it or query it. Use a well-tested pipeline and keep it simple. Resist the temptation to add business logic, or you’ll end up with accuracy concerns, you won’t be able to get back to the original data source, and you’re likely to break the pipeline and lose data.
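A minimal sketch of such an ingest step, under the assumptions above: the record is stored byte-for-byte as received, with only a thin envelope (arrival time, source) added so the original can always be recovered. The function and field names are hypothetical.

```python
import time

def ingest(raw_record: bytes, source: str, log: list) -> None:
    """Append a raw record untouched; add only a minimal envelope."""
    log.append({
        "received_at": time.time(),  # when it arrived
        "source": source,            # where it came from
        "payload": raw_record,       # the original bytes, unmodified
    })

log = []
ingest(b'{"user": "a", "action": "click"}', "web-server-1", log)
```

Because no business logic runs here, a bug downstream can always be fixed by re-reading `payload`; nothing was thrown away on the way in.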
The Sushi Principle, Step 2: Store All Data
The Sushi Principle, Step 3: Summarize and Sample
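The text gives no detail for this step, so the following is only a guess at what query-time “summarize and sample” might look like: rather than pre-building aggregates, answer a query by sampling the raw store and summarizing the sample on demand. All names and numbers are invented.

```python
import random

random.seed(0)  # deterministic toy data for the sketch
raw_events = [{"ms": random.randint(50, 500)} for _ in range(100_000)]

def approx_mean_latency(events, k=1_000):
    """Estimate mean latency by summarizing a random sample of raw events."""
    sample = random.sample(events, k)        # sample the raw data
    return sum(e["ms"] for e in sample) / k  # summarize on demand
```

The estimate is always computed from current raw data, so it can’t drift out of sync with the source the way a pre-built summary table can.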
Use Cases: Facebook and LinkedIn Were Early Adopters
At first, these two social networking giants built data lakes before figuring out how to make the data ready to use. They were among the first companies to realize that raw data had more potential value than cooked data. They pulled data into their data lakes with as little pre-processing as possible, then developed processes that analyzed raw data in response to queries, delivering fresh, customized, “ready-to-eat” information instead of pre-built, black-box summaries and aggregates.