Contribute a Desired Amount of DataNode Storage to a Hadoop Cluster

Shubham Jangid
Oct 30, 2020

--

Step 1 : — Attach one extra hard disk to the machine alongside the OS disk

Here I use VMware to perform the task.
Give the new disk a size of 10 GiB.
Now start the virtual machine.

Step 2 : — To list all connected hard disks, run the command fdisk -l

Here we have two connected hard disks.
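As a quick check, assuming a typical Linux guest where the OS disk is /dev/sda and the new disk shows up as /dev/sdb:

fdisk -l        # lists every attached disk, its size, and its partitions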

Step 3 : — To create a new partition, open the newly added hard disk with fdisk

Press n to create a new partition

Step 4 : — Press p to make it a primary partition

If you do not want to specify a partition number, just press the ENTER key to accept the default

Step 5 : — Give the starting sector for the partition

If you do not want to specify a starting sector, just press the ENTER key to accept the default

Step 6 : — Give the size of the partition

+5G sets the partition size to 5 GiB

+100M sets the partition size to 100 MiB

Here I give the partition a size of 8 GiB and press w to write and save the partition table
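As a sketch, the whole interactive session from Steps 3 to 6 looks like this, assuming the new disk is /dev/sdb:

fdisk /dev/sdb
# answer the prompts inside fdisk as follows:
#   n        -> new partition
#   p        -> primary partition
#   <ENTER>  -> accept the default partition number
#   <ENTER>  -> accept the default first sector
#   +8G      -> make the partition 8 GiB
#   w        -> write the partition table and exit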

Step 7 : — Format the created partition with the ext4 filesystem
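For example, assuming the new partition came up as /dev/sdb1:

mkfs.ext4 /dev/sdb1    # create an ext4 filesystem on the new partition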

Step 8 : — Mount the partition on a directory named /data_node, because we will use this directory as the DataNode storage in the Hadoop cluster
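A minimal sketch, again assuming the partition is /dev/sdb1:

mkdir /data_node               # directory that will hold the DataNode blocks
mount /dev/sdb1 /data_node     # mount the 8 GiB partition on it
df -h /data_node               # confirm roughly 8 GiB is available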

Step 9 : — Add an entry for the mount in the /etc/fstab file so the partition is mounted automatically at every boot
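One way to add the entry, assuming the same /dev/sdb1 and /data_node names as above:

echo '/dev/sdb1  /data_node  ext4  defaults  0  0' >> /etc/fstab
mount -a      # re-mounts everything listed in fstab; an error here means the entry is wrong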

Step 10 : — Download and install the JDK and Hadoop on both the master and the slave

jdk
hadoop
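The exact package names depend on the versions you download; as a sketch, with hypothetical RPM filenames:

rpm -ivh jdk-<version>-linux-x64.rpm      # install the JDK first
rpm -ivh hadoop-<version>.x86_64.rpm      # then install Hadoop
java -version                             # verify both installs
hadoop version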

Configure the Master for the Hadoop Cluster

Step 11 : — Create a directory for the NameNode
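For example, using /name_node as the NameNode directory (the name is my choice here; any path works):

mkdir /name_node    # metadata directory for the NameNode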

Step 12 : — Configure the /etc/hadoop/hdfs-site.xml file
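A minimal hdfs-site.xml for the master, assuming Hadoop 1.x (which uses dfs.name.dir; Hadoop 2.x/3.x call it dfs.namenode.name.dir) and the /name_node directory from Step 11:

<configuration>
  <!-- directory where the NameNode keeps its metadata -->
  <property>
    <name>dfs.name.dir</name>
    <value>/name_node</value>
  </property>
</configuration>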

Step 13 : — Configure the /etc/hadoop/core-site.xml file
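And a matching core-site.xml on the master. The port (9001 here) is my assumption; any free port works, and Hadoop 2.x/3.x use fs.defaultFS instead of fs.default.name:

<configuration>
  <!-- listen on all interfaces so the slave can reach the NameNode -->
  <property>
    <name>fs.default.name</name>
    <value>hdfs://0.0.0.0:9001</value>
  </property>
</configuration>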

Step 14 : — Format the NameNode
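Assuming Hadoop 1.x (newer releases use hdfs namenode -format instead):

hadoop namenode -format    # initialise the NameNode metadata directory (run once)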

Step 15 : — Start the NameNode service
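For example:

hadoop-daemon.sh start namenode
jps                         # the NameNode process should appear in the list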

Configure the DataNode, i.e. the slave

Step 16 : — Configure /etc/hadoop/hdfs-site.xml
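On the slave, point dfs.data.dir (dfs.datanode.data.dir on Hadoop 2.x/3.x) at the /data_node mount from Step 8:

<configuration>
  <!-- the 8 GiB partition mounted in Step 8 -->
  <property>
    <name>dfs.data.dir</name>
    <value>/data_node</value>
  </property>
</configuration>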

Step 17 : — Configure /etc/hadoop/core-site.xml

Here we give the master's IP address so the DataNode can connect to the NameNode.
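For example, with MASTER_IP standing in for the real address of the master node and the same port assumed in Step 13:

<configuration>
  <!-- MASTER_IP is a placeholder for the NameNode's IP address -->
  <property>
    <name>fs.default.name</name>
    <value>hdfs://MASTER_IP:9001</value>
  </property>
</configuration>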

Step 18 : — Start the DataNode service
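For example:

hadoop-daemon.sh start datanode
jps                         # the DataNode process should appear in the list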

Final Output

Here we check that the slave has shared 8 GiB of storage with the cluster.
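One way to verify it, assuming the Hadoop 1.x CLI (newer releases use hdfs dfsadmin -report):

hadoop dfsadmin -report     # shows the configured capacity each DataNode contributes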

Thank you
