Hadoop Distributed File System

HDFS was based on a paper Google published about their Google File System. The Hadoop Distributed File System (HDFS) is a Java-based file system that provides scalable and reliable data storage, designed to span large clusters of commodity servers.

HDFS runs on top of the existing file systems on each node in a Hadoop cluster. It is not POSIX compliant. It is designed to tolerate a high component-failure rate through replication of the data.

Hadoop works best with very large files. The larger the file, the less time Hadoop spends seeking for the next data location on disk, and the more time it runs at the limit of the bandwidth of your disks. Seeks are generally expensive operations that are useful when you only need to analyze a small subset of your dataset. Since Hadoop is designed to run over your entire dataset, it is best to minimize seeks by using large files. Hadoop is designed for streaming or sequential data access rather than random access. Sequential data access means fewer seeks, since Hadoop only seeks to the beginning of each block and then reads sequentially from there. Hadoop uses blocks to store a file or parts of a file.
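The seek-versus-transfer trade-off above can be made concrete with some rough arithmetic. The seek time and transfer rate below are assumed, illustrative figures, not measurements from any particular disk:

```python
# Rough illustration of why large blocks favor sequential access:
# the fixed seek cost shrinks as a fraction of total time as blocks grow.

SEEK_TIME_S = 0.010          # ~10 ms average seek (assumed)
TRANSFER_RATE_MBPS = 100.0   # ~100 MB/s sustained transfer (assumed)

def seek_overhead(block_size_mb: float) -> float:
    """Fraction of total time spent seeking when reading one block."""
    transfer_time = block_size_mb / TRANSFER_RATE_MBPS
    return SEEK_TIME_S / (SEEK_TIME_S + transfer_time)

for size in (1, 64, 128):
    print(f"{size:4d} MB block: {seek_overhead(size):.1%} of time spent seeking")
```

With these assumed numbers, a 1 MB block spends half its time seeking, while a 128 MB block spends under 1%, which is the intuition behind Hadoop's large default block size.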

A Hadoop block is a file on the underlying filesystem. Since the underlying filesystem stores files as blocks, one Hadoop block may consist of many blocks in the underlying file system. Blocks are large: they defaulted to 64 megabytes each in early Hadoop versions, and most systems now run with block sizes of 128 megabytes or larger.

Blocks have several advantages:

Firstly, they are fixed in size. This makes it easy to calculate how many can fit on a disk.
Secondly, because a file can be made up of blocks spread over multiple nodes, it can be larger than any single disk in the cluster.
Thirdly, HDFS blocks don't waste space: if a file is not an even multiple of the block size, the block containing the remainder does not occupy the space of an entire block.
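How a file maps onto blocks, including the partial last block, can be sketched as follows. The 128 MB block size is used here as the typical default; the function name is illustrative, not part of any Hadoop API:

```python
# Sketch of how a file of a given size maps onto HDFS blocks.

BLOCK_SIZE = 128 * 1024 * 1024  # 128 MB, the common default

def block_layout(file_size: int) -> list[int]:
    """Return the size in bytes of each HDFS block the file occupies."""
    if file_size == 0:
        return []
    full_blocks = file_size // BLOCK_SIZE
    remainder = file_size % BLOCK_SIZE
    sizes = [BLOCK_SIZE] * full_blocks
    if remainder:
        # The last block only occupies the leftover bytes, not a full block.
        sizes.append(remainder)
    return sizes

# A 300 MB file -> two full 128 MB blocks plus one 44 MB block.
layout = block_layout(300 * 1024 * 1024)
print(len(layout), layout[-1] // (1024 * 1024))
```

Note that the 300 MB file consumes 300 MB of storage (per replica), not 384 MB, because the 44 MB remainder block does not pad out to a full block.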

What HDFS Does

HDFS was designed to be a scalable, fault-tolerant, distributed storage system that works closely with MapReduce.
These features ensure that Hadoop clusters are highly functional and highly available.

How HDFS Works

An HDFS cluster is comprised of a NameNode, which manages the cluster metadata, and DataNodes, which store the data. Files and directories are represented on the NameNode by inodes. Inodes record attributes like permissions, modification and access times, and namespace and disk space quotas. The file content is split into large blocks (typically 128 megabytes), and each block of the file is independently replicated at multiple DataNodes. The blocks are stored on the local file system of the DataNodes. The NameNode actively monitors the number of replicas of each block. When a replica of a block is lost due to a DataNode failure or disk failure, the NameNode creates another replica of the block. The NameNode maintains the namespace tree and the mapping of blocks to DataNodes, holding the entire namespace image in RAM.
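The NameNode's replica bookkeeping can be sketched as a toy simulation: when a DataNode fails, any block whose replica count drops below the target is re-replicated on surviving nodes. The class, method, and node names here are made up for illustration and do not reflect Hadoop's actual implementation:

```python
# Toy sketch of NameNode replica monitoring and re-replication.

REPLICATION = 3  # target number of replicas per block

class NameNode:
    def __init__(self, datanodes):
        self.datanodes = set(datanodes)
        self.block_locations = {}  # block id -> set of DataNodes holding a replica

    def add_block(self, block, nodes):
        self.block_locations[block] = set(nodes)

    def datanode_failed(self, dead_node):
        """Drop the failed node and restore replication for affected blocks."""
        self.datanodes.discard(dead_node)
        for block, holders in self.block_locations.items():
            holders.discard(dead_node)
            # Re-replicate on live nodes that do not already hold this block.
            candidates = sorted(self.datanodes - holders)
            while len(holders) < REPLICATION and candidates:
                holders.add(candidates.pop(0))

nn = NameNode(["dn1", "dn2", "dn3", "dn4"])
nn.add_block("blk_1", ["dn1", "dn2", "dn3"])
nn.datanode_failed("dn2")
print(sorted(nn.block_locations["blk_1"]))  # ['dn1', 'dn3', 'dn4']
```

The real NameNode additionally weighs rack placement and node load when choosing replica targets; this sketch just fills the replica count from whatever live nodes remain.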

The NameNode does not directly send requests to DataNodes. It sends instructions to the DataNodes by replying to heartbeats sent by those DataNodes. The instructions include commands to: replicate blocks to other nodes, remove local block replicas, re-register and send an immediate block report, or shut down the node.
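The heartbeat-reply mechanism above can be sketched as a toy exchange: the NameNode queues commands per DataNode and piggybacks them on its reply to that node's next heartbeat. The class and command names here are illustrative and are not Hadoop's actual wire protocol:

```python
# Toy sketch of the heartbeat exchange: the NameNode never contacts a
# DataNode directly; commands ride back on heartbeat replies.

from collections import defaultdict

class NameNode:
    def __init__(self):
        self.pending = defaultdict(list)  # datanode -> queued commands

    def queue_command(self, datanode, command):
        self.pending[datanode].append(command)

    def handle_heartbeat(self, datanode):
        """Reply to a heartbeat with whatever commands are queued for that node."""
        commands, self.pending[datanode] = self.pending[datanode], []
        return commands

nn = NameNode()
nn.queue_command("dn1", ("REPLICATE", "blk_7", "dn3"))
nn.queue_command("dn1", ("INVALIDATE", "blk_2"))
print(nn.handle_heartbeat("dn1"))  # both queued commands, in order
print(nn.handle_heartbeat("dn1"))  # [] -- nothing pending now
```

This pull-based design means a DataNode behind a firewall or under load never has to accept inbound connections from the NameNode; it simply learns its work on its own heartbeat schedule.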

A typical HDFS cluster has many DataNodes. DataNodes store the blocks of data, and blocks from different files can be stored on the same DataNode. When a client requests a file, the client finds out from the NameNode which DataNodes store the blocks that make up that file, and the client reads the blocks directly from the individual DataNodes. Each DataNode also reports to the NameNode periodically with the list of blocks it stores. DataNodes do not require expensive enterprise hardware or replication at the hardware layer: they are designed to run on commodity hardware, and replication is provided at the software layer.

An example of HDFS

Think of a file that contains the phone numbers for everyone in the United States; the people with a last name starting with A might be stored on server 1, B on server 2, and so on. In a Hadoop world, pieces of this phonebook would be stored across the cluster, and to reconstruct the entire phonebook, your program would need the blocks from every server in the cluster. To achieve availability as components fail, HDFS replicates these smaller pieces onto two additional servers by default. (This redundancy can be increased or decreased on a per-file basis or for a whole environment; for example, a development Hadoop cluster typically doesn't need any data redundancy.) This redundancy offers multiple benefits, the most obvious being higher availability.

Data Flow:

Reading data from HDFS
1) The client calls open() on DistributedFileSystem.
2) DistributedFileSystem calls the NameNode through RPC.
3) The NameNode returns to DistributedFileSystem the addresses of the DataNodes that have copies of the blocks.
4) DistributedFileSystem returns an FSDataInputStream to the client for reading data from the DataNodes (the wrapped DFSInputStream, which has stored the DataNode addresses, connects to the closest DataNode).
5) When the end of a block is reached, DFSInputStream closes the connection to that DataNode.
6) It then selects the best DataNode for the next block; when the end of that block is reached, it again closes the connection.
This repeats until the client has finished reading data from the DataNodes.
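The read path above can be walked through as a toy simulation: the client asks the "NameNode" for block locations, then streams each block from the closest DataNode, one connection per block. The block IDs, node names, and distances are made up for illustration:

```python
# Toy simulation of the HDFS read path: locate blocks via the namenode,
# then read each block directly from the closest datanode.

BLOCK_MAP = {  # block id -> {datanode: distance from the client}
    "blk_0": {"dn1": 1, "dn2": 3, "dn3": 2},
    "blk_1": {"dn2": 2, "dn3": 1, "dn4": 3},
}
BLOCK_DATA = {("dn1", "blk_0"): b"hello ", ("dn3", "blk_1"): b"world"}

def namenode_get_locations(block):
    # Steps 2-3: the RPC reply lists datanodes, sorted by proximity.
    return sorted(BLOCK_MAP[block], key=BLOCK_MAP[block].get)

def read_file(blocks):
    data = b""
    for block in blocks:  # steps 4-6: one datanode connection per block
        closest = namenode_get_locations(block)[0]
        data += BLOCK_DATA[(closest, block)]  # read block, then "close" connection
    return data

print(read_file(["blk_0", "blk_1"]))  # b'hello world'
```

The key property the simulation preserves is that file data never flows through the NameNode: it only answers the location query, and the bytes move directly between client and DataNodes.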


Writing data to HDFS
1) The client creates the file by calling create() on DistributedFileSystem, which makes an RPC call to the NameNode to create the new file in the filesystem namespace.
2) DistributedFileSystem returns an FSDataOutputStream to the client for writing data.
3) As the client writes data, DFSOutputStream splits it into packets, which it writes to an internal queue called the data queue. The data queue is consumed by the DataStreamer, which is responsible for asking the NameNode to allocate new blocks by picking a list of suitable DataNodes to store the replicas.
4) DFSOutputStream also maintains an internal queue of packets that are waiting for acknowledgment from the DataNodes (the ack queue).
5) When the client has finished writing data, it calls close() on the stream.
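The write path above can likewise be sketched as a toy pipeline: the stream splits data into packets (the data queue), each packet flows down the replica pipeline, and a packet sits on the ack queue until every DataNode has acknowledged it. Packet size, node names, and structure are illustrative, not Hadoop's actual code:

```python
# Toy version of the HDFS write path: data queue -> replica pipeline -> ack queue.

from collections import deque

PACKET_SIZE = 4                    # bytes per packet (illustrative; real packets are ~64 KB)
PIPELINE = ["dn1", "dn2", "dn3"]   # replica datanodes chosen by the namenode

def write(data: bytes):
    data_queue = deque(data[i:i + PACKET_SIZE]
                       for i in range(0, len(data), PACKET_SIZE))
    ack_queue = deque()
    stored = {dn: b"" for dn in PIPELINE}

    while data_queue:
        packet = data_queue.popleft()
        ack_queue.append(packet)       # wait here until all replicas ack
        for dn in PIPELINE:            # packet flows down the replica pipeline
            stored[dn] += packet
        ack_queue.popleft()            # all replicas acked; drop from ack queue

    return stored                      # close(): every packet flushed and acked

replicas = write(b"hello hdfs")
print(replicas["dn3"])  # b'hello hdfs'
```

Separating the data queue from the ack queue is what lets the real client keep streaming new packets while earlier ones are still in flight; on a pipeline failure, unacknowledged packets can be pushed back onto the data queue and resent.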

