How to extend a Linux file system after increasing the EBS volume on an EC2 instance

Hello folks…

Today, one of our instances running on AWS EC2 ran out of disk space and we had to extend its file system without downtime. Luckily, AWS recently introduced the Modify Volume feature for EBS volumes, which makes the process simple.

Now let's see how to extend a Linux file system after increasing the EBS volume on an EC2 instance. This process should work on any Linux machine.

Before the upgrade, we had 8 GB on the EC2 instance as /dev/xvda1. Below is the output of df -hT:

[root@ip-172-31-19-10 ~]# df -hT
Filesystem     Type      Size  Used Avail Use% Mounted on
devtmpfs       devtmpfs  472M     0  472M   0% /dev
tmpfs          tmpfs     493M     0  493M   0% /dev/shm
tmpfs          tmpfs     493M   13M  480M   3% /run
tmpfs          tmpfs     493M     0  493M   0% /sys/fs/cgroup
/dev/xvda1     xfs       8.0G  904M  7.2G  12% /
tmpfs          tmpfs      99M     0   99M   0% /run/user/1000

Now, let's run the lsblk command and verify the same.

[root@ip-172-31-19-10 ~]# lsblk
NAME    MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
xvda    202:0    0  8G  0 disk
└─xvda1 202:1    0   8G  0 part /

Now, let's go to the AWS EC2 console.

  • On the Elastic Block Store sub-menu, click on Volumes
  • Select the volume you want to extend
  • Once selected, click on Actions and then click on Modify Volume
  • On the screen, enter the new size (12 GB in this example) and click Modify. The same can be done from the AWS CLI; see the sketch right after this list.
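
If you prefer the command line, here is a minimal sketch using the AWS CLI, assuming the CLI is already configured and using a placeholder volume ID (replace it with your own):

# Resize the EBS volume to 12 GiB (the volume ID below is a placeholder)
aws ec2 modify-volume --volume-id vol-0123456789abcdef0 --size 12

# Optionally, watch the modification progress until it reports "optimizing" or "completed"
aws ec2 describe-volumes-modifications --volume-ids vol-0123456789abcdef0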

After the EBS volume has been extended on the AWS side, let's run lsblk again.

[root@ip-172-31-19-10 ~]# lsblk
NAME    MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
xvda    202:0    0  12G  0 disk
└─xvda1 202:1    0   8G  0 part /

You can see that xvda is now 12G, but xvda1 still shows only 8G.

Now, let's run growpart on the disk to grow partition 1.

[root@ip-172-31-19-10 ~]# growpart /dev/xvda 1
CHANGED: partition=1 start=2048 old: size=16775168 end=16777216 new: size=25163743 end=25165791
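
Note: growpart comes from the cloud-utils tooling and may not be installed on every machine. A hedged sketch of installing it (package names vary by distribution):

# On Amazon Linux / RHEL / CentOS
sudo yum install -y cloud-utils-growpart

# On Debian / Ubuntu
sudo apt-get install -y cloud-guest-utils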

Now, let's run lsblk again and verify whether xvda1 has been extended.

[root@ip-172-31-19-10 ~]# lsblk
NAME    MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
xvda    202:0    0  12G  0 disk
└─xvda1 202:1    0  12G  0 part /

Here you can see that xvda1 is now 12G.

Now, if you run df -hT, it will still show 8G for the file system. Yes, we have one more step pending, and that is resizing the file system.

[root@ip-172-31-19-10 ~]# df -hT
Filesystem     Type      Size  Used Avail Use% Mounted on
devtmpfs       devtmpfs  472M     0  472M   0% /dev
tmpfs          tmpfs     493M     0  493M   0% /dev/shm
tmpfs          tmpfs     493M   13M  480M   3% /run
tmpfs          tmpfs     493M     0  493M   0% /sys/fs/cgroup
/dev/xvda1     xfs       8.0G  904M  7.2G  12% /
tmpfs          tmpfs      99M     0   99M   0% /run/user/1000

So, let's run xfs_growfs on / to grow the file system.

[root@ip-172-31-19-10 ~]# xfs_growfs /
meta-data=/dev/xvda1             isize=512    agcount=4, agsize=524224 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=0 spinodes=0
data     =                       bsize=4096   blocks=2096896, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal               bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
data blocks changed from 2096896 to 3145467
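
xfs_growfs is used here because the root file system is xfs, as shown in the Type column of df -hT. If your root file system were ext4 instead, the equivalent step would use resize2fs; a minimal sketch, assuming the same /dev/xvda1 partition:

# Assumption: the root partition is formatted as ext4 rather than xfs
# resize2fs grows an ext2/ext3/ext4 file system to fill its partition
sudo resize2fs /dev/xvda1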

Now, let's run df -hT and voila, /dev/xvda1, i.e. /, is showing 12G now.

[root@ip-172-31-19-10 ~]# df -hT
Filesystem     Type      Size  Used Avail Use% Mounted on
devtmpfs       devtmpfs  472M     0  472M   0% /dev
tmpfs          tmpfs     493M     0  493M   0% /dev/shm
tmpfs          tmpfs     493M   13M  480M   3% /run
tmpfs          tmpfs     493M     0  493M   0% /sys/fs/cgroup
/dev/xvda1     xfs        12G  905M   12G   8% /
tmpfs          tmpfs      99M     0   99M   0% /run/user/1000
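
To recap, once the volume has been resized in the EBS console (or via the CLI), the in-instance procedure boils down to a couple of commands. A minimal sketch, assuming the same /dev/xvda disk, partition 1 and xfs root file system as above:

# Grow partition 1 of /dev/xvda to use the newly added space
sudo growpart /dev/xvda 1

# Grow the xfs file system mounted on / to fill the partition
sudo xfs_growfs /

# Verify the new size
df -hT /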

Hope this post was helpful 🙂
