LVM

The Logical Volume Manager


There are quite a few ways to add disks to a Linux system. What we are interested in here is adding disk space and making efficient use of it. The approach is that of a "bunch of disks": we join disks or partitions into a volume group and create logical volumes on top of it. You can think of a logical volume (LV) as a partition that spans multiple disks, which in turn allows you to have mount points (e.g. /home, /srv) or subdirectories the size of multiple disks.

This can be achieved with LVM, the Logical Volume Manager. Additional advantages:

LVM also allows you to achieve some performance gain by 'striping' the data: files are spread out over multiple disks, so they can be read faster.

--------------

We start from a system with

		hda	hda1	/
			hda2	swap

		hdc	hdc1	/home		500 MB
			hdc2	unallocated	500 MB

		sda	sda1			1 GB
		sdb				1 GB

So that's two IDE disks, plus an additional two SCSI disks. This is a VMware simulation and the disk sizes have been kept small; it's a proof-of-concept thing.

You can see the disks with fdisk -l. Create a partition table on one of the disks (/dev/sda). Don't format it yet (see Dealing with Disks).
Leave /dev/sdb alone: no partition table, no filesystem, just the bare disk (see further).

	# install logical volume manager
	apt-get install lvm2		#(suggests dmsetup : device mapper)

What we're trying to achieve is something like this :
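A rough sketch of the layout (drawn after the fact; sizes follow the example below and boxes are not to scale):

```
	+---------------------------------------------------+
	|           Volume group: storage_1                 |
	|  +----------------------+  +------------------+   |
	|  |   LV_1  (~1.7 GB)    |  |  LV_2  (792 MB)  |   |
	|  +----------------------+  +------------------+   |
	+---------------------------------------------------+
	|  sda1 (1 GB)  |  sdb (1 GB)  |  hdc2 (500 MB)     |
	+---------------------------------------------------+
```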

The main idea here is that we use a volume group (VG) to join several disks/partitions together. The total available space is then divided again into logical volumes, which can be mounted on mount points so they're accessible as subdirectories of /. Note that you could create just one LV, occupying the complete VG, for /srv only, or multiple LVs regardless of the underlying disk sizes. Note also that you can create multiple volume groups. Very important: there is no relation between the LVs and the underlying partitions.

LVM can work with partitions (e.g. one partition spanning the entire disk) or with an unpartitioned disk (i.e. working directly on the disk, without a partition table). The LVM HOWTO recommends using partitions: a disk without a partition table will appear as unused space on a system without LVM, so you may not be aware there is actually data on it. If there is at least a partition, other systems will see it as a partition (of an unknown type).

Because this is an exercise, we try three approaches:

  1. a disk without partitions (sdb)
  2. a disk with 1 partition covering the entire disk (sda => sda1)
  3. a disk with 2 partitions (hdc), one of which will be included in the LVM-configuration. (hdc2)

Just to see if it works, we combine IDE and SCSI disks in the same volume group / logical volume.

Initialize the disks and create a volume group

Initialize the physical volumes (i.e. disks, partitions, ...):

		lvmtest:~# pvcreate /dev/sda1
		  Physical volume "/dev/sda1" successfully created
		lvmtest:~# pvcreate /dev/sdb
		  Physical volume "/dev/sdb" successfully created
		lvmtest:~# pvcreate /dev/hdc2
		  Physical volume "/dev/hdc2" successfully created

Join them in a volume group (arbitrarily called storage_1):

		vgcreate storage_1 /dev/sda1 /dev/sdb /dev/hdc2
		  Volume group "storage_1" successfully created

You add additional physical volumes to an existing group with vgextend volume_group_name /dev/name, where /dev/name is the name of a disk or partition.

Creating logical volumes

The most straightforward is a simple ("linear") LV with a size in MB. So, first we create a 1750 MB LV:

		lvmtest:~# lvcreate -L1750 -n LV_1 storage_1
		  /dev/hdb: open failed: Read-only file system
		  Rounding up size to full physical extent 1.71 GB
		  Logical volume "LV_1" created
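The "rounding up" message is because LVM allocates space in physical extents (PE), 4 MB each by default, so a request is rounded up to the next multiple of the extent size. A quick sketch of the arithmetic (the 4 MB value is the default; check yours with vgdisplay):

```shell
REQUESTED_MB=1750
PE_SIZE_MB=4    # default physical extent size; verify with 'vgdisplay'
# round up to a whole number of extents
EXTENTS=$(( (REQUESTED_MB + PE_SIZE_MB - 1) / PE_SIZE_MB ))
ACTUAL_MB=$(( EXTENTS * PE_SIZE_MB ))
echo "${EXTENTS} extents = ${ACTUAL_MB} MB"
```

That gives 438 extents = 1752 MB, i.e. the 1.71 GB reported above.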

As a result, you'll have a device /dev/storage_1/LV_1 that represents an amount of usable disk space, similar to a partition (not yet formatted).
To create a second LV, we check how much space is left and use that to create it:

		lvmtest:~# vgdisplay |grep "Free"
		  Free  PE / Size       198 / 792.00 MB
		lvmtest:~# lvcreate -L792 -n LV_2 storage_1
		  /dev/hdb: open failed: Read-only file system
		  Logical volume "LV_2" created
		lvmtest:~#
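The free-space figure from vgdisplay is reported in physical extents as well: 198 free extents at the default 4 MB each is exactly the 792 MB we gave LV_2. A minimal check of that arithmetic:

```shell
FREE_PE=198     # 'Free PE' as reported by vgdisplay
PE_SIZE_MB=4    # default physical extent size
FREE_MB=$(( FREE_PE * PE_SIZE_MB ))
echo "${FREE_MB} MB free"
```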
Lastly, create filesystems on the logical volumes:

		mkfs -t ext2 /dev/storage_1/LV_1
		mkfs -t ext2 /dev/storage_1/LV_2

Now mount them (or edit fstab for a permanent configuration) and voilà, you've added 1.7 + 0.8 GB of storage to your system (composed of two 1 GB disks plus a 500 MB leftover). If, instead of two LVs, we had created just one and mounted it on /srv, you would have added a contiguous space of 2.5 GB to your system, while no disk in the computer is actually larger than 1 GB.

		lvmtest:~# df -h
		Filesystem            Size  Used Avail Use% Mounted on
		/dev/hda1             721M  248M  435M  37% /
		/dev/hdc1             496M  2.3M  468M   1% /home
		/dev/mapper/storage_1-LV_1
		                      2.5G   68M  2.3G   3% /srv
		lvmtest:~#
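For a permanent configuration, the LVs can be listed in /etc/fstab like any other block device. A sketch (the /srv2 mount point for LV_2 is hypothetical, chosen here for illustration):

```
# /etc/fstab
/dev/storage_1/LV_1   /srv    ext2   defaults   0   2
/dev/storage_1/LV_2   /srv2   ext2   defaults   0   2
```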

LVM is intended to be flexible: you can add and remove disks and partitions, add and remove logical volumes and volume groups, resize LVs, and even move your entire LVM constellation to another system. These recipes give step-by-step instructions on how to do all that without data loss:

LVM HOWTO : common tasks
LVM HOWTO : recipes

Conclusions

This was an exercise. On a production system, keep the following in mind:

Don't mount / or /boot on a logical volume, and be careful with other directories. LVM needs to be loaded before the files on an LV are available, so boot files and other files the system needs before LVM is started have no business there. It *is* possible to boot off an LV, but it requires tweaking the boot image.

There is no straight one-to-one mapping of partitions to directories (mount points), so data recovery scenarios that consist of simply moving a disk to a new machine and mounting it there are out of the question (compare RAID). Also: when one disk fails, the whole LV that depends on it fails. Countermeasures: (hardware) RAID and backups.

Although LVM abstracts away the physical disks, you can still figure out where (on which physical disk) the bits of a particular file are located (see mapping Logical Volumes to Physical Volumes). Figuring that out is not going to be easy, especially if you've implemented striping. What's more, the physical disk does not carry a filesystem (the filesystem is created on the logical volume), and possibly no partition table either. There is no way you're going to simply retrieve a file from such a disk should anything go wrong with the system.

Implemented on a redundant disk system (RAID, ...) and with a decent backup system in place (which goes without saying for production servers), LVM can be helpful for managing disk space in a flexible manner.


Koen Noens
October 2007