Sooner or later we all run out of space. Today I am going to demo how to add new
storage to a Linux VM. First we will look at how to do this on a local VM with VirtualBox and Vagrant,
then in AWS.
1. Adding a new volume locally.
2. Splitting the disk into partitions.
3. Spinning up an AWS EC2 instance and adding a new volume manually.
4. Attaching a new volume with the AWS CLI.
So, assuming you have Vagrant and VirtualBox installed, let’s spin up a new VM:
vagrant init ubuntu/trusty64 && vagrant up && vagrant ssh
You can pick a newer version of Ubuntu of course, Xenial or Zesty, or even any other Linux distro; I have the ubuntu/trusty64 Vagrant box already downloaded, so I will be using that one.
First let’s check what we have already got there with the ‘list block devices’ command:
vagrant@sensuclient:~$ lsblk
NAME   MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda      8:0    0  40G  0 disk
`-sda1   8:1    0  40G  0 part /
vagrant@sensuclient:~$
Now let’s exit the VM and stop it:
vagrant halt
==> sensuclient: Attempting graceful shutdown of VM...
Then we need to go to VirtualBox and add a new disk as shown below:
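If you prefer the command line over the GUI, the same can be scripted with VBoxManage. Here is a rough sketch; the VM name and the storage controller name are assumptions, so check yours first with ‘VBoxManage list vms’ and ‘VBoxManage showvminfo <vm>’:

# Sketch: create a 1 GB disk image and attach it to the halted VM.
# "sensuclient_default" and "SATAController" are assumptions --
# check the real names with 'VBoxManage list vms' and 'VBoxManage showvminfo'.
VBoxManage createmedium disk --filename newdisk.vdi --size 1024
VBoxManage storageattach "sensuclient_default" \
  --storagectl "SATAController" \
  --port 1 --device 0 --type hdd --medium newdisk.vdi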
Once that is done, we can start the VM and check the devices again:
vagrant up && vagrant ssh
vagrant@sensuclient:~$ sudo lsblk -f
NAME   FSTYPE LABEL           MOUNTPOINT
sda
`-sda1 ext4   cloudimg-rootfs /
sdb
As you can see, a new disk, ‘sdb’, has been added to the list.
Next we need to create a filesystem:
vagrant@sensuclient:~$ sudo mkfs -t ext4 /dev/sdb
mke2fs 1.42.9 (4-Feb-2014)
/dev/sdb is entire device, not just one partition!
Proceed anyway? (y,n) y
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
65808 inodes, 262880 blocks
13144 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=272629760
9 block groups
32768 blocks per group, 32768 fragments per group
7312 inodes per group
Superblock backups stored on blocks:
	32768, 98304, 163840, 229376

Allocating group tables: done
Writing inode tables: done
Creating journal (8192 blocks): done
Writing superblocks and filesystem accounting information: done
Now if we list the devices, we will see its FS type populated:
vagrant@sensuclient:~$ sudo lsblk -f
NAME   FSTYPE LABEL           MOUNTPOINT
sda
`-sda1 ext4   cloudimg-rootfs /
sdb    ext4
Now the disk is ready to be mounted:
sudo mkdir /newvolume
sudo mount /dev/sdb /newvolume/
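We can double-check the new mount straight away:

# Show the mounted filesystem and its size
df -h /newvolume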
The last thing is to add it to fstab, so the disk is still mounted when we restart the VM:
vagrant@sensuclient:~$ sudo cat /etc/fstab
LABEL=cloudimg-rootfs   /            ext4   defaults   0 0
/dev/sdb                /newvolume   ext4   defaults   0 0
Here is what the columns of fstab mean:
1 – device name (or label/UUID)
2 – mount point
3 – filesystem type
4 – mount options (‘defaults’ here)
5 – dump backup option – 0 means don’t back up
6 – fsck check order – 0 means don’t check
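One caveat worth mentioning: device names like /dev/sdb are not guaranteed to stay the same across reboots, so in fstab it is often safer to refer to the filesystem by its UUID. A sketch, where the UUID is a made-up placeholder – get the real one with blkid:

# Print the real UUID of the new filesystem
sudo blkid /dev/sdb
# Then reference it in /etc/fstab instead of the device name
# (the UUID below is a made-up placeholder):
UUID=0a3407de-014b-458b-b5c1-848e92a327a3 /newvolume ext4 defaults 0 0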
2. Splitting the disk into partitions.
Now let’s split our disk into partitions using GNU Parted – a partition manipulation program.
vagrant@sensuclient:~$ sudo parted -l
Model: ATA VBOX HARDDISK (scsi)
Disk /dev/sda: 42.9GB
Sector size (logical/physical): 512B/512B
Partition Table: msdos

Number  Start   End     Size    Type     File system  Flags
 1      1049kB  42.9GB  42.9GB  primary  ext4         boot

Model: ATA VBOX HARDDISK (scsi)
Disk /dev/sdb: 1077MB
Sector size (logical/physical): 512B/512B
Partition Table: loop

Number  Start  End     Size    File system  Flags
 1      0.00B  1077MB  1077MB  ext4
We can use device IDs or names; let’s use IDs. First we need to find the ID:
vagrant@sensuclient:~$ file /dev/disk/by-id/*
/dev/disk/by-id/ata-VBOX_HARDDISK_VB652794d8-c5261a54: symbolic link to `../../sdb'
/dev/disk/by-id/ata-VBOX_HARDDISK_VB685622b2-c8a29bf2: symbolic link to `../../sda'
/dev/disk/by-id/ata-VBOX_HARDDISK_VB685622b2-c8a29bf2-part1: symbolic link to `../../sda1'
It is ‘/dev/disk/by-id/ata-VBOX_HARDDISK_VB652794d8-c5261a54’ for our newly added ‘sdb’ disk.
The first thing we will do is label it with a partition table type; we are going to use a GUID Partition Table (GPT):
vagrant@sensuclient:~$ sudo parted /dev/disk/by-id/ata-VBOX_HARDDISK_VB652794d8-c5261a54 mklabel gpt
Warning: The existing disk label on /dev/sdb will be destroyed and all data on this disk will be lost. Do you want to continue?
Yes/No? yes
Now let’s split it into two parts:
vagrant@sensuclient:~$ sudo parted -a opt /dev/disk/by-id/ata-VBOX_HARDDISK_VB652794d8-c5261a54 mkpart primary 0% 50%
Information: You may need to update /etc/fstab.

vagrant@sensuclient:~$ sudo parted -a opt /dev/disk/by-id/ata-VBOX_HARDDISK_VB652794d8-c5261a54 mkpart primary 50% 100%
Information: You may need to update /etc/fstab.
You can delete partitions if something goes wrong; in the example above I named both ‘primary’ (with GPT, this is just a partition name, not a type), so let’s fix it:
vagrant@sensuclient:~$ sudo parted -a opt /dev/disk/by-id/ata-VBOX_HARDDISK_VB652794d8-c5261a54 rm 1
vagrant@sensuclient:~$ sudo parted -a opt /dev/disk/by-id/ata-VBOX_HARDDISK_VB652794d8-c5261a54 rm 2
And split again:
vagrant@sensuclient:~$ sudo parted -a opt /dev/disk/by-id/ata-VBOX_HARDDISK_VB652794d8-c5261a54 mkpart primary 0% 50%
Information: You may need to update /etc/fstab.

vagrant@sensuclient:~$ sudo parted -a opt /dev/disk/by-id/ata-VBOX_HARDDISK_VB652794d8-c5261a54 mkpart secondary 50% 100%
Information: You may need to update /etc/fstab.
Let’s view it again:
vagrant@sensuclient:~$ sudo parted -l
Model: ATA VBOX HARDDISK (scsi)
Disk /dev/sda: 42.9GB
Sector size (logical/physical): 512B/512B
Partition Table: msdos

Number  Start   End     Size    Type     File system  Flags
 1      1049kB  42.9GB  42.9GB  primary  ext4         boot

Model: ATA VBOX HARDDISK (scsi)
Disk /dev/sdb: 1077MB
Sector size (logical/physical): 512B/512B
Partition Table: gpt

Number  Start   End     Size   File system  Name       Flags
 1      1049kB  538MB   537MB               primary
 2      538MB   1076MB  538MB               secondary
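Because we passed ‘-a opt’, parted aligned the partitions for us; if you want to double-check, parted has an align-check command. A quick sketch:

# Verify each partition starts on an optimally aligned boundary;
# parted answers e.g. "1 aligned" if everything is fine.
sudo parted /dev/sdb align-check opt 1
sudo parted /dev/sdb align-check opt 2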
Next, we need to create a filesystem on each partition:
vagrant@sensuclient:~$ sudo mkfs -t ext4 /dev/sdb1
mke2fs 1.42.9 (4-Feb-2014)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
32768 inodes, 131072 blocks
6553 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=134217728
4 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
	32768, 98304

Allocating group tables: done
Writing inode tables: done
Creating journal (4096 blocks): done
Writing superblocks and filesystem accounting information: done

vagrant@sensuclient:~$ sudo mkfs -t ext4 /dev/sdb2
mke2fs 1.42.9 (4-Feb-2014)
warning: 256 blocks unused.

Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
32832 inodes, 131072 blocks
6553 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=134217728
4 block groups
32768 blocks per group, 32768 fragments per group
8208 inodes per group
Superblock backups stored on blocks:
	32768, 98304

Allocating group tables: done
Writing inode tables: done
Creating journal (4096 blocks): done
Writing superblocks and filesystem accounting information: done
Our partitions are ready:
vagrant@sensuclient:~$ sudo lsblk -f
NAME   FSTYPE LABEL           MOUNTPOINT
sda
`-sda1 ext4   cloudimg-rootfs /
sdb
|-sdb1 ext4
`-sdb2 ext4
vagrant@sensuclient:~$
We can now mount them, but first let’s create the mount point directories:
vagrant@sensuclient:~$ sudo mkdir /newvol_part1 /newvol_part2
vagrant@sensuclient:~$ sudo mount /dev/sdb1 /newvol_part1
vagrant@sensuclient:~$ sudo mount /dev/sdb2 /newvol_part2
And finally, test that it is working:
vagrant@sensuclient:~$ sudo echo test2 > /newvol_part2/testfile
-bash: /newvol_part2/testfile: Permission denied
Why permission denied, when I am running with sudo? Well, because apart from the echo command
there is also a redirection involved, and the redirection is performed by the shell, which is not running as sudo.
We can fix it either by using ‘tee’, or by running the whole thing as an argument to a shell executed via sudo:
echo test | sudo tee /newvol_part1/testfile
echo test2 | sudo tee /newvol_part2/testfile
sudo sh -c "echo 'test' > /newvol_part1/testfile2"
I personally prefer the ‘tee’ way.
vagrant@sensuclient:~$ cat /newvol_part*/testfile
test
test2
The test is positive; let’s move to AWS now.
3. Spinning up an AWS EC2 instance and adding a new volume manually.
So, assuming we have access to the AWS console and an SSH key pair to connect with,
we will need to do the following steps:
1. Spin up an EC2 instance of type t2.micro and configure its security group so that we can connect to port 22 for SSH.
2. Connect to it, run lsblk and make sure we only have a single disk.
3. Go to the EBS section and create a volume.
4. Attach the volume to our EC2 instance; the cool thing is we can do this while the instance is running, unlike VirtualBox, where we had to stop the VM.
5. Connect to it again, run lsblk and make sure we can see the second disk.
All the steps are demonstrated in the gif below:
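By the way, steps 3 and 4 do not strictly require the console; volume creation can be scripted with the AWS CLI too. A sketch of step 3, assuming configured credentials – the availability zone here is an assumption and must match the one your instance runs in:

# Sketch: create a 1 GiB gp2 EBS volume.
# The availability zone is an assumption -- it must match your instance.
aws ec2 create-volume \
  --availability-zone eu-west-2a \
  --size 1 \
  --volume-type gp2

Attaching it from the CLI is exactly what section 4 below covers.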
So once we have done what is listed above, the rest is exactly the same as what we did earlier with Ubuntu on the local VM.
First, list the partitions:
[ec2-user@ip-172-31-8-44 ~]$ sudo parted -l
Model: Xen Virtual Block Device (xvd)
Disk /dev/xvda: 8590MB
Sector size (logical/physical): 512B/512B
Partition Table: gpt

Number  Start   End     Size    File system  Name                 Flags
128     1049kB  2097kB  1049kB               BIOS Boot Partition  bios_grub
 1      2097kB  8590MB  8588MB  ext4         Linux

Error: /dev/xvdf: unrecognised disk label
[ec2-user@ip-172-31-8-44 ~]$
Then label it, split it into partitions, and list again:
[ec2-user@ip-172-31-8-44 ~]$ sudo parted /dev/xvdf mklabel gpt
Information: You may need to update /etc/fstab.

[ec2-user@ip-172-31-8-44 ~]$ sudo parted -a opt /dev/xvdf mkpart primary 0% 50%
Information: You may need to update /etc/fstab.

[ec2-user@ip-172-31-8-44 ~]$ sudo parted -a opt /dev/xvdf mkpart secondary 50% 100%
Information: You may need to update /etc/fstab.

[ec2-user@ip-172-31-8-44 ~]$ sudo parted -l
Model: Xen Virtual Block Device (xvd)
Disk /dev/xvda: 8590MB
Sector size (logical/physical): 512B/512B
Partition Table: gpt

Number  Start   End     Size    File system  Name                 Flags
128     1049kB  2097kB  1049kB               BIOS Boot Partition  bios_grub
 1      2097kB  8590MB  8588MB  ext4         Linux

Model: Xen Virtual Block Device (xvd)
Disk /dev/xvdf: 1074MB
Sector size (logical/physical): 512B/512B
Partition Table: gpt

Number  Start   End     Size   File system  Name       Flags
 1      1049kB  537MB   536MB               primary
 2      537MB   1073MB  536MB               secondary
Next, create a filesystem on each partition:
[ec2-user@ip-172-31-8-44 ~]$ sudo mkfs -t ext4 /dev/xvdf1
mke2fs 1.42.12 (29-Aug-2014)
Creating filesystem with 523264 1k blocks and 131072 inodes
Filesystem UUID: c8ea4c26-ba32-4400-8147-6228ed8a3e3a
Superblock backups stored on blocks:
	8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409

Allocating group tables: done
Writing inode tables: done
Creating journal (8192 blocks): done
Writing superblocks and filesystem accounting information: done

[ec2-user@ip-172-31-8-44 ~]$ sudo mkfs -t ext4 /dev/xvdf2
mke2fs 1.42.12 (29-Aug-2014)
Creating filesystem with 523264 1k blocks and 131072 inodes
Filesystem UUID: e7b2ca1f-f876-4b8f-803f-697a2b8f7d50
Superblock backups stored on blocks:
	8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409

Allocating group tables: done
Writing inode tables: done
Creating journal (8192 blocks): done
Writing superblocks and filesystem accounting information: done
Check our disks:
[ec2-user@ip-172-31-8-44 ~]$ lsblk -f
NAME    FSTYPE LABEL UUID                                 MOUNTPOINT
xvda
└─xvda1 ext4   /     df24ba12-defa-4725-9a31-2ff9b332ae90 /
xvdf
├─xvdf2 ext4         e7b2ca1f-f876-4b8f-803f-697a2b8f7d50
└─xvdf1 ext4         c8ea4c26-ba32-4400-8147-6228ed8a3e3a
[ec2-user@ip-172-31-8-44 ~]$
And finally, mount and test the filesystems:
[ec2-user@ip-172-31-8-44 ~]$ sudo mkdir /newvol_part1 /newvol_part2
[ec2-user@ip-172-31-8-44 ~]$ sudo mount /dev/xvdf1 /newvol_part1
[ec2-user@ip-172-31-8-44 ~]$ sudo mount /dev/xvdf2 /newvol_part2
[ec2-user@ip-172-31-8-44 ~]$ echo test | sudo tee /newvol_part1/testfile
test
[ec2-user@ip-172-31-8-44 ~]$ echo test | sudo tee /newvol_part2/testfile
test
[ec2-user@ip-172-31-8-44 ~]$ cat /newvol_part*/testfile
test
test
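If we wanted these mounts to survive a reboot, we would add them to /etc/fstab just like on the local VM. On EC2 it is worth using the UUIDs printed by mkfs above plus the ‘nofail’ option, so the instance can still boot if the volume happens to be detached. A sketch (skipped here, since we are about to detach the volume anyway):

# /etc/fstab entries using the UUIDs from the mkfs output above;
# 'nofail' lets the instance boot even when the volume is missing.
UUID=c8ea4c26-ba32-4400-8147-6228ed8a3e3a /newvol_part1 ext4 defaults,nofail 0 2
UUID=e7b2ca1f-f876-4b8f-803f-697a2b8f7d50 /newvol_part2 ext4 defaults,nofail 0 2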
Finally, let’s detach the volume, but first unmount the filesystems:
[ec2-user@ip-172-31-8-44 ~]$ sudo umount /dev/xvdf1
[ec2-user@ip-172-31-8-44 ~]$ sudo umount /dev/xvdf2
[ec2-user@ip-172-31-8-44 ~]$ lsblk
NAME    MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
xvda    202:0    0   8G  0 disk
└─xvda1 202:1    0   8G  0 part /
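I am detaching the volume from the console here, but for reference the same can be done from the CLI once credentials and a region are configured (which we do in the next section):

# Sketch: detach the EBS volume without touching the console.
# Assumes AWS credentials and region are configured (see the next section).
aws ec2 detach-volume --volume-id vol-0c0677b3d644f0f0d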
4. Attaching a new volume with the AWS CLI.
With the volume detached, let’s look at how we can attach it back using the AWS CLI:
[ec2-user@ip-172-31-8-44 ~]$ aws ec2 attach-volume \
    --volume-id vol-0c0677b3d644f0f0d \
    --instance-id i-0f9063a259cec8f08 \
    --device /dev/xvdf

You must specify a region. You can also configure your region by running "aws configure".
Oh, that didn’t quite work; let’s configure the region:
[ec2-user@ip-172-31-8-44 ~]$ aws configure
AWS Access Key ID [****************]:
AWS Secret Access Key [****************]:
Default region name [None]: eu-west-2
Default output format [None]:
And now try again:
[ec2-user@ip-172-31-8-44 ~]$ aws ec2 attach-volume \
    --volume-id vol-0c0677b3d644f0f0d \
    --instance-id i-0f9063a259cec8f08 \
    --device /dev/xvdf
{
    "AttachTime": "2017-12-02T16:37:26.855Z",
    "InstanceId": "i-0f9063a259cec8f08",
    "VolumeId": "vol-0c0677b3d644f0f0d",
    "State": "attaching",
    "Device": "/dev/xvdf"
}
[ec2-user@ip-172-31-8-44 ~]$ lsblk
NAME    MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
xvda    202:0    0   8G  0 disk
└─xvda1 202:1    0   8G  0 part /
xvdf    202:80   0   1G  0 disk
[ec2-user@ip-172-31-8-44 ~]$
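Apart from lsblk, the attachment state can also be confirmed from the AWS side; a quick sketch using describe-volumes:

# Should print "attached" once the volume is fully attached
aws ec2 describe-volumes \
  --volume-ids vol-0c0677b3d644f0f0d \
  --query 'Volumes[0].Attachments[0].State'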
That is it! Next time I hope to get my hands dirty with LVM.