The Sushi Principle: Why Data Is Better Raw Than 'Cooked'

 
 
By Chris Preimesberger | Posted 2015-10-06

Organizations often need to use data in a different way than originally planned. For flexibility and accuracy, they need fast access to data in its raw state.

How Systems Are Currently Set Up

Most data infrastructures consist of three components: an ETL (extract, transform and load) system, databases and a business intelligence layer. Here is how most companies build these systems: They buy the three components noted above and hire a database architect, a database administrator, a project manager and a dozen engineers. They then decide on data schema(s) and wait months while the engineers put everything together. Finally, they spend years finding bugs and fixing errors. Efficient? Not very.

Where Data Comes From and Where It's Stored

Raw data comes from servers, production databases, Internet of things sensors and devices, and other periodic measuring or reporting devices. Most businesses store this data in data warehouses, data marts or so-called data lakes (huge central pools of data). The classic data warehouse splits data into facts about the world, dimensions that describe the facts, and aggregates or metadata that summarize the facts.

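To make that split concrete, here is a minimal Python sketch of a fact table, a dimension table and an aggregate built from them; the table names, fields and values are invented for illustration and are not from the article.

    # Facts: events that happened, keyed to dimensions by IDs (illustrative data).
    fact_orders = [
        {"date": "2015-10-06", "product_id": 1, "units": 3},
        {"date": "2015-10-06", "product_id": 2, "units": 1},
    ]

    # Dimension: descriptive attributes for the keys used in the facts.
    dim_product = {
        1: {"name": "Widget", "category": "Hardware"},
        2: {"name": "Gizmo", "category": "Software"},
    }

    # Aggregate: a summary of the facts, here total units sold per category.
    units_by_category = {}
    for order in fact_orders:
        category = dim_product[order["product_id"]]["category"]
        units_by_category[category] = units_by_category.get(category, 0) + order["units"]

    print(units_by_category)  # {'Hardware': 3, 'Software': 1}
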
Data Warehouses Contain 'Cooked' Data

Before they store data, businesses "cook" it by cleaning, compressing, de-duplicating, normalizing, filtering and summarizing it, all in the name of optimizing access to the data. Many businesses move normalized information from fact tables into dimension tables, and they build bitmap and B-tree indexes, often many per table. They also pre-compute aggregates, building online analytical processing (OLAP) cubes, materialized views and other summary structures.

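As one small illustration of what this kind of "cooking" looks like, here is a Python sketch of a bitmap index pre-built over a single column; the rows and values are hypothetical, and a real warehouse would build such structures inside the database engine rather than in application code.

    # Rows of a hypothetical table; the index below is built before any query runs.
    rows = [
        {"id": 0, "country": "US"},
        {"id": 1, "country": "DE"},
        {"id": 2, "country": "US"},
    ]

    # Bitmap index: one bit vector per distinct value; bit i is set when row i matches.
    bitmap = {}
    for i, row in enumerate(rows):
        bitmap[row["country"]] = bitmap.get(row["country"], 0) | (1 << i)

    # A query such as "which rows have country == 'US'?" is answered from the index alone.
    matches = [i for i in range(len(rows)) if (bitmap["US"] >> i) & 1]
    print(matches)  # [0, 2]
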
So Why Do We Cook Data?

Most cooking of data is actually an application-level optimization. Normalizing data, building indexes and pre-computing aggregates make data warehouses fast and compact, but they also make data warehouses error-prone, difficult to maintain and difficult to understand. With today's distributed architectures and faster processors, it's now more efficient, and more accurate, to pull information from raw data than it is to depend on summary tables that pull from many different systems and apply obscure logic.

The Sushi Principle, Step 1: Don't Pre-Process

Don't pre-process data when you pull it in, store it or query it. Use a well-tested pipeline and keep it simple. Resist the temptation to add business logic: you'll end up with accuracy concerns, you won't be able to get back to the original data source, and you're likely to break the pipeline and lose data.

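A bare-bones ingest step in this spirit might look like the following Python sketch, which appends each event exactly as received and adds nothing but an arrival timestamp; the file name and event shape are assumptions made for the example.

    import json
    from datetime import datetime, timezone

    def ingest(raw_line, out_path="events.raw.jsonl"):
        """Append the event exactly as received; no cleaning, filtering or business logic."""
        record = {
            "received_at": datetime.now(timezone.utc).isoformat(),
            "raw": raw_line.rstrip("\n"),  # keep the original payload untouched
        }
        with open(out_path, "a", encoding="utf-8") as f:
            f.write(json.dumps(record) + "\n")

    # Events pass straight through to storage in their raw form.
    ingest('{"user": "alice", "action": "click", "ts": "2015-10-06T12:00:00Z"}')
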
The Sushi Principle, Step 2: Store All Data

Store all your data with all its raw details, partitioned and sorted for fast analysis. The data can live in different silos, but a centralized management system is a must.

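Here is a minimal Python sketch of that idea, writing raw events into one directory per day and sorting each partition by timestamp; the directory layout and field names are assumptions for illustration, not a prescribed format.

    import json
    from collections import defaultdict
    from pathlib import Path

    events = [
        {"ts": "2015-10-06T12:05:00Z", "user": "bob", "action": "view"},
        {"ts": "2015-10-06T12:01:00Z", "user": "alice", "action": "click"},
        {"ts": "2015-10-07T09:30:00Z", "user": "alice", "action": "purchase"},
    ]

    # Group raw events into one partition per day (the YYYY-MM-DD prefix of "ts").
    partitions = defaultdict(list)
    for event in events:
        partitions[event["ts"][:10]].append(event)

    # Write each partition sorted by timestamp so date-range scans stay fast.
    for day, rows in partitions.items():
        path = Path("raw") / f"date={day}" / "events.jsonl"
        path.parent.mkdir(parents=True, exist_ok=True)
        with path.open("w", encoding="utf-8") as f:
            for row in sorted(rows, key=lambda r: r["ts"]):
                f.write(json.dumps(row) + "\n")
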
The Sushi Principle, Step 3: Summarize and Sample

Summarize and sample at query time for direct access to complete, accurate and fresh data. This guarantees that you will have the most up-to-date information possible.

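Continuing the hypothetical layout from the previous sketch, the following Python sketch scans a raw partition at query time and counts actions, with an optional sampling rate for very large data sets.

    import json
    import random
    from collections import Counter
    from pathlib import Path

    def count_actions(day, sample_rate=1.0):
        """Scan the raw partition for `day` and count actions, optionally sampling rows."""
        counts = Counter()
        path = Path("raw") / f"date={day}" / "events.jsonl"
        with path.open(encoding="utf-8") as f:
            for line in f:
                if random.random() <= sample_rate:  # sample at query time, not at ingest
                    counts[json.loads(line)["action"]] += 1
        return counts

    # Every query reads the freshest raw data; nothing was pre-aggregated.
    print(count_actions("2015-10-06"))                   # exact counts
    print(count_actions("2015-10-06", sample_rate=0.1))  # counts over a ~10% sample
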
Use Cases: Facebook and LinkedIn Were Early Adopters

At first, these two social networking giants built data lakes before figuring out how to make the data ready to use. They were among the first companies to realize that raw data had more potential value than data that had been cooked. They pulled data into their data lakes with as little pre-processing as possible, then developed processes that analyzed the raw data in response to queries, delivering fresh, customized, "ready-to-eat" information instead of just pulling up pre-built, black-box summaries and aggregates.
 

Imagine somebody proposing this project: "We want to build a critical piece of infrastructure. The project will use old, outdated technology. We'll buy three different expensive commercial technologies off the shelf and stitch them together ourselves. The team will write business logic in obscure scripting languages, won't comment the code and won't test anything."

All too often, that's the reality beneath the shiny surface of a proposal to build mission-critical business intelligence systems. Staff members spend huge amounts of time discussing which data to use and how, which pipelines to build, what business logic to apply to data in the pipeline, how to compress and index the data, and how to name things. No wonder the resulting system is typically delivered late, and largely untested.

All this processing of data, before a single query is written, "cooks" the data by pre-processing it in a way that limits how it can be used. Organizations often need to use data in a completely different way than originally planned. For maximum flexibility and accuracy, they need fast access to data in its raw state. This eWEEK slide show, using insight from behavioral analytics-at-scale provider Interana, explains this point of view.