How To Provide Elastic Storage To A Hadoop Slave With LVM (Logical Volume Management)? BigData

Today I'll tell you about one more important and awesome concept: LVM (Logical Volume Management).

What is LVM?

  1. LVM allows for very flexible disk space management.
  2. It provides features like the ability to add disk space to a logical volume and its filesystem while that filesystem is mounted and active.
  3. It allows the collection of multiple physical hard drives and partitions into a single volume group, which can then be divided into logical volumes.
  4. The volume manager also allows reducing the amount of disk space allocated to a logical volume, but there are a couple of requirements: first, the volume must be unmounted; second, the filesystem itself must be reduced in size before the volume on which it resides can be reduced.
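The whole workflow we'll walk through below can be sketched as a short script. The device names /dev/sdc and /dev/sdd, the volume group and LV names, and the /nn mount point all come from the steps in this article; the `run` helper only prints each command, so this sketch is safe to try without root — on a real system you would execute the commands themselves as root.

```shell
#!/bin/sh
# Dry-run sketch of the LVM workflow in this article. run() only prints the
# command; on a real system, replace its body with "$@" and run as root.
run() { echo "+ $*"; }

run pvcreate /dev/sdc /dev/sdd                            # disks -> physical volumes
run vgcreate hadoop_namenode /dev/sdc /dev/sdd            # pool PVs into a volume group
run lvcreate --size 40G --name hadoop_lv hadoop_namenode  # carve out a logical volume
run mkfs.ext4 /dev/hadoop_namenode/hadoop_lv              # put an ext4 filesystem on it
run mount /dev/hadoop_namenode/hadoop_lv /nn              # attach it to the slave directory
```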

So Let's Get Started

I love to work in steps, so let's do this in steps as well!

Steps:-

  • First we add our storage disks to the system; here the disks are 30 GiB and 25 GiB

  • Now we create a Physical Volume on each disk

cmd:- pvcreate /dev/sdc (converts the disk into a Physical Volume)

  • Apply the same command on the second disk to create its PV

Now let's get info about the Physical Volumes:

cmd:- pvdisplay

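pvdisplay prints a verbose block per volume; a compact alternative (a standard LVM tool, not specific to this article) is pvs with a chosen column list. The command is built as a string here so the snippet runs anywhere, even without root or real disks:

```shell
# pvs prints one row per physical volume; -o picks the columns to show.
pvs_cmd="pvs -o pv_name,pv_size,vg_name"
echo "$pvs_cmd"   # run as root on a real system to see the table
```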

  • After that we create a Volume Group

cmd:- vgcreate hadoop_namenode (name of vg) /dev/sdc /dev/sdd (names of pvs)

To check info of the vg:

cmd:- vgdisplay hadoop_namenode (name of vg)

  • Now after this we create our main Logical Volume

cmd:- lvcreate --size 40G (size of lv) --name hadoop_lv (name of lv) hadoop_namenode (name of vg)

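If you would rather give the LV every byte the volume group has instead of a fixed 40G, lvcreate also accepts --extents with a percentage; 100%FREE uses all remaining space in the group. A sketch with the same names as above (printed here, not executed):

```shell
# Alternative to --size 40G: allocate all free extents in the volume group.
lv_cmd="lvcreate --extents 100%FREE --name hadoop_lv hadoop_namenode"
echo "$lv_cmd"   # run as root on a real system
```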

  • Now our Logical Volume is created; next we have to format it and mount it on our Hadoop slave's storage directory to provide elasticity

First we format it:

cmd:- mkfs.ext4 /dev/hadoop_namenode/hadoop_lv

This is my slave node directory, /nn

The current size of the /nn folder is 40 KB

Now we mount our LV on /nn

cmd:- mount /dev/hadoop_namenode/hadoop_lv /nn

Now our /nn has become 40 GiB; we can verify with:

cmd:- df -h

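One thing to keep in mind: a mount made with the mount command disappears at reboot. To make /nn come back automatically, the usual approach is an /etc/fstab entry. The line below follows the standard fstab field order (device, mount point, filesystem type, options, dump flag, fsck order) using this article's names; it is only printed here, not written to the file:

```shell
# Standard fstab fields: device  mountpoint  fstype  options  dump  pass
fstab_line="/dev/hadoop_namenode/hadoop_lv /nn ext4 defaults 0 0"
echo "$fstab_line"   # verify it, then append it to /etc/fstab yourself
```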

Now for the main thing: changing the size of the slave node on the go.

To perform this we extend the LV:

cmd:- lvextend --size +5G /dev/hadoop_namenode/hadoop_lv

We also have to grow the filesystem so it can use the new 5 GiB; resize2fs does this online, no reformatting needed:

cmd:- resize2fs /dev/hadoop_namenode/hadoop_lv

Now it has become 45 GiB in size. That's it!

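The extend-then-resize dance above can also be done in one step: lvextend's -r (--resizefs) flag resizes the filesystem right after growing the LV, so the separate resize2fs call isn't needed. A printed sketch using the same LV path as in this article:

```shell
# One-step grow: -r (--resizefs) runs the filesystem resize for you.
grow_cmd="lvextend --resizefs --size +5G /dev/hadoop_namenode/hadoop_lv"
echo "$grow_cmd"   # run as root on a real system
```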

Thank you! Make sure to like, share, comment and follow.