About this Course

Learn how to tackle big data problems with your own Hadoop clusters! In this course, you’ll deploy Hadoop clusters in the cloud and use them to gain insights from large datasets.

Course Cost
Free
Timeline
Approx. 3 weeks
Skill Level
Intermediate

Included in Product

Rich Learning Content

Interactive Quizzes

Taught by Industry Pros

Self-Paced Learning

Student Support Community

Join the Path to Greatness

This free course is your first step towards a new career with the Intro to Programming Nanodegree Program.

Free Course

Deploying a Hadoop Cluster

Enhance your skill set and boost your hirability through innovative, independent learning.


Course Leads

Mat Leonard

Instructor

Prerequisites and Requirements

This course is intended for students with some experience with Hadoop and MapReduce, Python, and bash commands.

You’ll need to be able to work with HDFS and write MapReduce programs. You can learn about both in our Intro to Hadoop and MapReduce course.

The MapReduce programs in this course are written in Python. It is possible to use Java or other languages, but we suggest Python, at the level of our Intro to Computer Science course.
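
For orientation only (this example is illustrative and not taken from the course materials), here is a minimal sketch of what a Python word-count job can look like with Hadoop Streaming, which passes data to your scripts over standard input and output:

  # mapper.py -- read raw text from stdin, emit one "word<TAB>1" pair per word
  import sys

  for line in sys.stdin:
      for word in line.strip().split():
          print(f"{word}\t1")

  # reducer.py -- Hadoop Streaming sorts mapper output by key, so counts for the
  # same word arrive on consecutive lines and can be summed with a running total
  import sys

  current_word, current_count = None, 0
  for line in sys.stdin:
      word, count = line.rstrip("\n").split("\t", 1)
      if word == current_word:
          current_count += int(count)
      else:
          if current_word is not None:
              print(f"{current_word}\t{current_count}")
          current_word, current_count = word, int(count)
  if current_word is not None:
      print(f"{current_word}\t{current_count}")

A pair of scripts like this can be sanity-checked locally with a simple shell pipeline (cat input.txt | python3 mapper.py | sort | python3 reducer.py) before being submitted to a cluster.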

You’ll also be using remote cloud machines, so you’ll need to know these bash commands:

  • ssh
  • scp
  • cat
  • head/tail

You’ll also need to be able to work in an editor such as vim or nano. You can learn about these in our Linux Command Line Basics course.

See the Technology Requirements for using Udacity.

Why Take This Course

Using massive datasets to guide decisions is becoming more and more important for modern businesses. Hadoop and MapReduce are fundamental tools for working with big data. By knowing how to deploy your own Hadoop clusters, you’ll be able to start exploring big data on your own.

What do I get?

  • Instructor videos
  • Learn by doing exercises
  • Taught by industry professionals