Real World Hadoop - Hands on Enterprise Distributed Storage.

Master the art of manipulating files within an enterprise distributed storage platform with hands-on Hadoop HDFS.

Udemy course | Rating: 4.5 (14 reviews) | 287 students enrolled | Language: English
Learn how to navigate the HDFS file system.
If you want to build an HDFS stack, simply run a single command on your desktop, go for a coffee, and come back to a running distributed environment ready for cluster deployment.
Quickly build an environment where Cloudera and HDFS software can be installed.
Automate the installation of software across multiple virtual machines.
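
As a small taste of that navigation, here is a minimal sketch using the standard hdfs shell (all paths are illustrative, not taken from the course itself):

    # List the root of the distributed file system
    hdfs dfs -ls /
    # Create a working directory for your exercises
    hdfs dfs -mkdir -p /user/training/demo
    # Check how much space a directory tree consumes
    hdfs dfs -du -h /user/training
    # Print a file stored in HDFS straight to the terminal
    hdfs dfs -cat /user/training/demo/sample.txt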

We will be manipulating the HDFS file system, but why are enterprises interested in HDFS in the first place?

The Hadoop Distributed File System (HDFS) is a distributed file system designed to run on commodity hardware. It has many similarities with existing distributed file systems. However, the differences from other distributed file systems are significant.

HDFS is highly fault-tolerant and is designed to be deployed on low-cost hardware. It provides high-throughput access to application data and is suitable for applications that have large data sets. HDFS relaxes a few POSIX requirements to enable streaming access to file system data. HDFS is part of the Apache Hadoop Core project.

Hardware failure is the norm rather than the exception. An HDFS instance may consist of hundreds or thousands of server machines, each storing part of the file system's data. The fact that there are a huge number of components and that each component has a non-trivial probability of failure means that some component of HDFS is always non-functional. Therefore, detection of faults and quick, automatic recovery from them is a core architectural goal of HDFS.
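
That fault detection is something you can watch from the command line; a hedged sketch of the standard health checks (real HDFS tools, though the output depends entirely on your cluster):

    # Summarise live/dead DataNodes and remaining capacity
    hdfs dfsadmin -report
    # Walk the namespace and report missing or under-replicated blocks
    hdfs fsck /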

Applications that run on HDFS have large data sets. A typical file in HDFS is gigabytes to terabytes in size. Thus, HDFS is tuned to support large files. It should provide high aggregate data bandwidth and scale to hundreds of nodes in a single cluster. It should support tens of millions of files in a single instance.
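
That large-file tuning shows up directly in the configuration; a minimal sketch of inspecting and overriding it (the 256 MB figure is only an example):

    # Show this cluster's default block size and replication factor
    hdfs getconf -confKey dfs.blocksize
    hdfs getconf -confKey dfs.replication
    # Upload one file with a non-default block size (268435456 bytes = 256 MB)
    hdfs dfs -D dfs.blocksize=268435456 -put bigfile.dat /user/training/demo/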

A computation requested by an application is much more efficient if it is executed near the data it operates on. This is especially true when the size of the data set is huge. Executing near the data minimizes network congestion and increases the overall throughput of the system. The assumption is that it is often better to migrate the computation closer to where the data is located rather than moving the data to where the application is running. HDFS provides interfaces for applications to move themselves closer to where the data is located.
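
Data locality is also observable from the shell; a sketch of listing which DataNodes hold each block of a file (the path is illustrative):

    # Show the block list and the machines holding each replica
    hdfs fsck /user/training/demo/bigfile.dat -files -blocks -locations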


Here is a curriculum reflecting the current state of my Cloudera courses.

My Hadoop courses are based on Vagrant, so that you can practice on, and destroy, your virtual environment before applying the installation to real servers/VMs.
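
The practice-and-destroy loop looks roughly like this (the machine name is illustrative and depends on your Vagrantfile):

    # Bring the virtual cluster up from the Vagrantfile in this directory
    vagrant up
    # Log in to one of the machines and experiment freely
    vagrant ssh node1
    # Tear everything down once it is broken or finished with...
    vagrant destroy -f
    # ...and rebuild a clean environment from scratch
    vagrant up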


For those with little or no knowledge of the Hadoop ecosystem, start here. Udemy course : Big Data Intro for IT Administrators, Devs and Consultants


I would first practice with Vagrant so that you can carve out a virtual environment on your local desktop. You don't want to corrupt your physical servers if you do not understand the steps or make a mistake. Udemy course : Real World Vagrant For Distributed Computing


I would then, on the virtual servers, deploy Cloudera Manager plus its agents. Agents are the guys that sit on all the slave nodes, ready to deploy your Hadoop services. Udemy course : Real World Vagrant - Automate a Cloudera Manager Build
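
On each node, the agent only needs to know where the Cloudera Manager server lives; a hedged sketch of the usual wiring (the hostname is a placeholder, the paths are the standard agent locations):

    # Point the agent at the Cloudera Manager server host
    sudo sed -i 's/^server_host=.*/server_host=cm-master.example.com/' \
        /etc/cloudera-scm-agent/config.ini
    # Restart the agent so it heartbeats to the new server
    sudo service cloudera-scm-agent restart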


Then deploy the Hadoop services across your cluster (via the Cloudera Manager installed in the previous step). We look at the logic behind the placement of master and slave services. Udemy course : Real World Hadoop - Deploying Hadoop with Cloudera Manager
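
As an illustration of that placement logic, a typical small-cluster layout might look like this (node names and counts are just an example, not the course's prescribed topology):

    node1 (master): NameNode, YARN ResourceManager, Cloudera Manager server
    node2 (master): SecondaryNameNode, Job History Server
    node3-node5 (slaves): DataNode + YARN NodeManager on each node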


If you want to play around with HDFS commands (hands-on distributed file manipulation), this is the course for you. Udemy course : Real World Hadoop - Hands on Enterprise Distributed Storage
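
The flavour of that file manipulation, in a minimal sketch (paths and the replication factor are illustrative):

    # Copy a local file into HDFS, and back out again
    hdfs dfs -put report.csv /user/training/demo/
    hdfs dfs -get /user/training/demo/report.csv ./report_copy.csv
    # Change the replication factor of the stored copy (example value: 2)
    hdfs dfs -setrep 2 /user/training/demo/report.csv
    # Move the file into an archive directory, then remove that directory
    hdfs dfs -mkdir -p /user/training/archive
    hdfs dfs -mv /user/training/demo/report.csv /user/training/archive/
    hdfs dfs -rm -r /user/training/archive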


You can also automate the deployment of the Hadoop services via Python (using the Cloudera Manager Python API). This is an advanced step, though, so make sure you understand how to manually deploy the Hadoop services first. Udemy course : Real World Hadoop - Automating Hadoop install with Python!
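
A hedged sketch of what driving Cloudera Manager from Python can look like, assuming the cm-api client package and a placeholder hostname (admin/admin is only the out-of-the-box default login):

    # Requires the cm-api package (pip install cm-api); host and
    # credentials below are placeholders for your own cluster.
    from cm_api.api_client import ApiResource

    api = ApiResource('cm-master.example.com', username='admin', password='admin')

    # Walk every cluster Cloudera Manager knows about and list its services
    for cluster in api.get_all_clusters():
        print(cluster.name)
        for service in cluster.get_all_services():
            print('  %s: %s' % (service.name, service.serviceState))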


There is also the upgrade step. Once you have a running cluster, how do you upgrade to a newer Hadoop cluster (both for Cloudera Manager and the Hadoop services)? Udemy course : Real World Hadoop - Upgrade Cloudera and Hadoop hands on
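
The Cloudera Manager half of that upgrade boils down to package updates plus a restart; a minimal sketch, assuming a RHEL/CentOS host with a yum-based install (back up the Cloudera Manager database before attempting this):

    # Stop the Cloudera Manager server before touching its packages
    sudo service cloudera-scm-server stop
    # Pull in the newer packages from the configured Cloudera repository
    sudo yum update -y cloudera-manager-server cloudera-manager-daemons cloudera-manager-agent
    # Start the server again and let it upgrade its database schema
    sudo service cloudera-scm-server start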


Real World Hadoop - Hands on Enterprise Distributed Storage.
$89.99 per course