How to Navigate the Fragmented Data Landscape on AWS
Since launching in early 2006, Amazon Web Services (AWS) has grown into an expansive range of offerings covering virtually every area of infrastructure and software, from storage, networking and computing to applications and software developer tools. Companies re-platforming to AWS face a diverse and complicated set of options, so many that they may suffer a "tyranny of choice" in each area of their IT strategy. Data is an essential asset for every company, and AWS provides many options for storing, managing and analyzing it. In this eWEEK slide show, using industry information from analytics provider Dremio, we explain how to navigate all of this. While the discussion is specific to AWS, the same can be said of Microsoft Azure and Google Cloud: there are many choices to make, and none is particularly easy.
AWS Has Two Main Types of Storage: File Systems and Databases
Companies generate many types of data. File systems are used for storing files, and databases are used for storing application data. AWS provides multiple options for each. In addition, companies run specific applications, such as email, whose data is managed by an equivalent AWS service.
For File Systems, AWS Has Three Main Offerings: EBS, EFS and S3
While each AWS compute instance comes with a small amount of local storage, these three offerings are designed to separate compute from storage and to remain available even as compute comes and goes through elastic operations. There are many differences among the three, but a few highlights: S3 (object storage) is the lowest in cost, EBS (block storage) is the highest in performance, and EFS (a network file system) sits between the two. S3 and EFS are accessible by many compute instances at once, but data on an EBS volume is accessible by only one instance at a time.
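To make that distinction concrete, here is a minimal boto3 (Python) sketch contrasting the two access models. The bucket name, instance ID and availability zone are hypothetical placeholders, not values from this article.

```python
import boto3

s3 = boto3.client("s3", region_name="us-east-1")

# S3: any number of instances (or external clients) can read this object.
s3.upload_file("report.csv", "my-example-bucket", "data/report.csv")

ec2 = boto3.client("ec2", region_name="us-east-1")

# EBS: a block volume that attaches to a single instance at a time.
volume = ec2.create_volume(
    AvailabilityZone="us-east-1a",
    Size=100,          # GiB
    VolumeType="gp3",  # general-purpose SSD
)
ec2.attach_volume(
    VolumeId=volume["VolumeId"],
    InstanceId="i-0123456789abcdef0",  # hypothetical instance ID
    Device="/dev/sdf",
)
```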
Databases Have More Than 10 Options, Including RDBMS and NoSQL
For relational databases, AWS offers RDS, a managed service for Oracle, Microsoft SQL Server, MySQL, PostgreSQL and MariaDB. It also provides its own relational database, Aurora, in two compatibility versions, one for MySQL and one for PostgreSQL. For NoSQL, AWS offers its own DynamoDB (a key-value and document store) and Neptune (a graph database), as well as Amazon Elasticsearch Service for search workloads.
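As a small illustration of the NoSQL side, the following boto3 sketch writes and reads an item in DynamoDB. The table name and key attribute are hypothetical and assume a table already keyed on customer_id.

```python
import boto3

dynamodb = boto3.resource("dynamodb", region_name="us-east-1")
table = dynamodb.Table("Customers")  # hypothetical table keyed on "customer_id"

# Beyond the key, DynamoDB is schemaless: attributes can vary per item.
table.put_item(Item={"customer_id": "c-1001", "name": "Acme Corp", "tier": "gold"})

response = table.get_item(Key={"customer_id": "c-1001"})
print(response.get("Item"))
```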
In Data Analysis, AWS Has Three Primary Offerings
The AWS service for data warehousing is called Redshift, a term from astronomy, but also a double entendre for moving away from Oracle. For Hadoop workloads, there’s EMR. And for analytics on the data lake (S3), AWS now provides Athena, which is based on Presto, as well as Redshift Spectrum, which allows you to use your Redshift infrastructure to also query your data in S3.
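As an example of the data lake path, here is a hedged boto3 sketch that runs a SQL query through Athena. The database, table and results bucket are hypothetical; Athena runs queries asynchronously and writes results to S3.

```python
import boto3

athena = boto3.client("athena", region_name="us-east-1")

query = athena.start_query_execution(
    QueryString="SELECT region, SUM(amount) FROM sales GROUP BY region",
    QueryExecutionContext={"Database": "analytics_db"},
    ResultConfiguration={"OutputLocation": "s3://my-athena-results/"},
)

# Queries run asynchronously; poll until the state is SUCCEEDED,
# then fetch rows with get_query_results or read the CSV written to S3.
status = athena.get_query_execution(QueryExecutionId=query["QueryExecutionId"])
print(status["QueryExecution"]["Status"]["State"])
```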
AWS Also Provides Tools for Data Movement
ETL (extract, transform, load) workloads are addressed through AWS Glue, a product with features familiar to users of other ETL tools, plus integrations with most of the AWS services we have covered. For more elaborate work, there's AWS Data Pipeline, which orchestrates multistep custom scripts across the services and is aimed more at software developers as its end users.
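For a flavor of how these services are driven programmatically, here is a minimal boto3 sketch that starts a Glue ETL job and checks its state. The job name and argument are hypothetical; the job itself (typically a PySpark script) would be defined separately.

```python
import boto3

glue = boto3.client("glue", region_name="us-east-1")

# Kick off a previously defined Glue job (e.g., one that loads S3 data
# into Redshift). Arguments are passed through to the job script.
run = glue.start_job_run(
    JobName="sales-to-redshift",                 # hypothetical job name
    Arguments={"--target_table": "fact_sales"},  # hypothetical parameter
)

state = glue.get_job_run(JobName="sales-to-redshift", RunId=run["JobRunId"])
print(state["JobRun"]["JobRunState"])  # e.g., RUNNING, SUCCEEDED, FAILED
```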
Yes, There Are a Lot of Options
These options all have tradeoffs, and each has a "sweet spot" in terms of functionality, cost, security and performance. Choosing the best option is a significant undertaking. Your team will need a good understanding of your application's needs, and you'll need to develop a sense for the underlying cost model as those needs evolve. The right decision today may need to be revisited down the road.
There’s No Obvious Analytics Standout
Analytics is where the data from many of these services comes together for visualization, reporting, predictive models and more, and three major factors make this choice especially important. The first is data volume: analytics workloads involve far greater volumes of data, which drives the ongoing cost of the service. A difference of just a few percentage points can have a very significant impact on the monthly bill.
A Key Analytics Factor: Schema Diversity
As data is brought together, various schemas need to be accommodated in the final analytical store, and each service has very different requirements in this area. Redshift is a traditional relational database: a comprehensive schema must be designed and maintained to use the service effectively. EMR takes a schema-on-read approach, so the data is easy to ingest, and the work of making sense of the various structures of data is deferred to the time of access through jobs such as MapReduce. Athena uses S3 for storage, so the data is likewise easy to ingest, but relational structures need to be defined at query time using the Presto engine.
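To illustrate schema-on-read with Athena, the following hedged sketch declares a relational structure over data that already sits in S3; nothing is loaded or moved. The database, table, columns and S3 locations are hypothetical.

```python
import boto3

athena = boto3.client("athena", region_name="us-east-1")

# DDL that overlays a table definition on existing CSV files in S3.
ddl = """
CREATE EXTERNAL TABLE IF NOT EXISTS analytics_db.web_logs (
    request_time string,
    url          string,
    status_code  int
)
ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
LOCATION 's3://my-example-bucket/logs/'
"""

athena.start_query_execution(
    QueryString=ddl,
    ResultConfiguration={"OutputLocation": "s3://my-athena-results/"},
)
```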
Performance Requirements Can Be a Litmus Test for Some Apps
Companies have many types of needs when it comes to analytics: from canned reports, to interactive BI and visualization, to machine learning and predictive models. The performance capabilities of these AWS services are dramatically different. For interactive needs, Redshift is likely the only option that meets most organizations' requirements. For canned reports, Athena or Redshift Spectrum are good options. For complex analytics or data preparation, EMR may be the best option.
Most Companies Will Need to Use a Mix of All Three Analytics Services
Among Redshift, EMR and Athena, there is no single option that is best for all analytics workloads. Redshift has the best performance and the highest cost, but it also requires more careful planning and management and is best suited to relational data. EMR is a distribution of Hadoop designed for AWS and its various services. Athena has the appeal of providing access to raw data in S3, but its performance isn't well-suited to certain workloads (e.g., interactive BI), and costs can be unpredictable. Companies will likely need all three, and they will need to rely on AWS Glue and AWS Data Pipeline to move data between the services.
What AWS Really Needs: A Service That Simplifies This Complexity
The primary end user for AWS is the software developer, but the typical end user for data analytics is not someone who will write code or edit scripts. Instead, it is the data consumer: users of BI and data science tools. Data consumers expect several key things when accessing data: SQL as the primary interface, so they can work with their favorite tools; self-service features, such as a data catalog for searching datasets; the ability to define a logical model that captures the business meaning of the data; data lineage capabilities; and fast, interactive access, which is required for interactive exploration and analysis.