Duration

3 days

Audience

Employees of federal, state, and local governments, and of businesses working with the government.

Prerequisites

  • Familiarity with big data technologies, including Apache Hadoop and HDFS
  • Knowledge of big data technologies such as Pig, Hive, and MapReduce is helpful but not required
  • Working knowledge of core AWS services and public cloud implementation
  • Completion of the AWS Essentials course, or equivalent experience
  • Basic understanding of data warehousing, relational database systems, and database design

Course Description

Learn how to build big data solutions on AWS and apply best practices for security and cost-effectiveness.

This course introduces you to cloud-based big data solutions and Amazon Elastic MapReduce (EMR), the Amazon Web Services (AWS) big data platform. We show you how to use Amazon EMR to process data with the broad ecosystem of Hadoop tools such as Pig and Hive. We also teach you how to create big data environments, work with Amazon DynamoDB and Amazon Redshift, take advantage of Amazon Kinesis, and apply best practices to design big data environments for security and cost-effectiveness.
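
As a taste of what the Amazon EMR portion covers, here is a minimal sketch of launching a small EMR cluster with Hive and Pig installed, using the boto3 Python SDK. This is not course lab material; the cluster name, EMR release label, instance types, S3 log bucket, and IAM role names are illustrative assumptions.

  import boto3

  emr = boto3.client("emr", region_name="us-east-1")

  response = emr.run_job_flow(
      Name="bigdata-course-sandbox",          # hypothetical cluster name
      ReleaseLabel="emr-6.15.0",              # assumed EMR release; use a current one
      LogUri="s3://my-emr-logs/",             # hypothetical S3 bucket for cluster logs
      Applications=[{"Name": "Hive"}, {"Name": "Pig"}],
      Instances={
          "MasterInstanceType": "m5.xlarge",  # instance types are illustrative
          "SlaveInstanceType": "m5.xlarge",
          "InstanceCount": 3,                 # 1 master node + 2 core nodes
          "KeepJobFlowAliveWhenNoSteps": True,
          "TerminationProtected": False,
      },
      JobFlowRole="EMR_EC2_DefaultRole",      # default EC2 instance profile for EMR
      ServiceRole="EMR_DefaultRole",          # default EMR service role
      VisibleToAllUsers=True,
  )
  print("Cluster ID:", response["JobFlowId"])

The course walks through the equivalent choices (machine image or release, instance types, storage, and security settings) in its labs.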

What You’ll Learn

  • Apache Hadoop in the context of Amazon EMR
  • The architecture of an Amazon EMR cluster
  • How to launch an Amazon EMR cluster using an appropriate Amazon Machine Image and Amazon EC2 instance types
  • Appropriate AWS data storage options for use with Amazon EMR
  • How to ingest, transfer, and compress data for use with Amazon EMR
  • How to use common programming frameworks available for Amazon EMR, including Hive, Pig, and Streaming (a brief Hive-step sketch follows this list)
  • How to work with Amazon Redshift to implement a big data solution
  • How to leverage big data visualization software
  • Appropriate security options for Amazon EMR and your data
  • How to perform in-memory data analysis with Spark and Shark on Amazon EMR
  • Options for managing your Amazon EMR environment cost-effectively
  • The benefits of using Amazon Kinesis for big data
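
As referenced in the Hive, Pig, and Streaming item above, here is a minimal sketch of submitting a Hive step to an already-running Amazon EMR cluster with the boto3 Python SDK. It assumes a cluster ID and a Hive script already uploaded to S3; the step name, script path, and cluster ID below are placeholders, not course lab assets.

  import boto3

  emr = boto3.client("emr", region_name="us-east-1")

  # command-runner.jar executes the Hive script on the cluster.
  step = {
      "Name": "ad-analytics-hive-query",      # hypothetical step name
      "ActionOnFailure": "CONTINUE",          # keep the cluster alive if the step fails
      "HadoopJarStep": {
          "Jar": "command-runner.jar",
          "Args": ["hive-script", "--run-hive-script",
                   "--args", "-f", "s3://my-bucket/queries/clicks.q"],  # placeholder script path
      },
  }

  response = emr.add_job_flow_steps(
      JobFlowId="j-XXXXXXXXXXXXX",            # placeholder cluster ID
      Steps=[step],
  )
  print("Step IDs:", response["StepIds"])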

Who Needs to Attend

This course is intended for partners and customers responsible for implementing big data environments, including:

  • Data scientists
  • Data analysts
  • Enterprise and big data solution architects

Course Outline

  1. Overview of Big Data and Apache Hadoop
  2. Benefits of Amazon EMR
  3. Amazon EMR Architecture
  4. Using Amazon EMR
  5. Launching and Using an Amazon EMR Cluster
  6. High-Level Apache Hadoop Programming Frameworks
  7. Using Hive for Advertising Analytics
  8. Other Apache Hadoop Programming Frameworks
  9. Using Streaming for Life Sciences Analytics
  10. Overview: Spark and Shark for In-Memory Analytics
  11. Using Spark and Shark for In-Memory Analytics
  12. Managing Amazon EMR Costs
  13. Overview of Amazon EMR Security
  14. Exploring Amazon EMR Security
  15. Data Ingestion, Transfer, and Compression
  16. Using Amazon Kinesis for Real-Time Big Data Processing
  17. AWS Data Storage Options
  18. Using DynamoDB with Amazon EMR
  19. Overview: Amazon Redshift and Big Data
  20. Using Amazon Redshift for Big Data
  21. Visualizing and Orchestrating Big Data
  22. Using Tableau Desktop or Jaspersoft BI to Visualize Big Data

Labs

This course allows you to test new skills and apply your knowledge to your working environment through a variety of practical exercises.