RAID

Software RAID on Linux (Debian 4)


There are quite a few methods to add additional disks to a Linux system. It is even possible to join multiple disks into one logical volume. What we are interested in now is to add disk space and make use of it efficiently, while avoiding the problem that the failure of one disk makes all data inaccessible (as is the case with LVM).

The technology that offers such a feature is called a RAID system - a Redundant Array of Independent Disks. In a RAID, multiple disks are joined into an 'array' and the data is distributed over all the disks in the array. The way the data is distributed defines the RAID level : in a 'mirror' (RAID 1), one disk is an exact copy of the other disk. In a RAID-5, the data is not duplicated, but parity information is added so that if a disk fails, its data can be reconstructed from the parity and the remaining data on the other disks.
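That parity is essentially an XOR over the data blocks. A quick shell illustration of the principle (just arithmetic, nothing to do with the actual setup) :

	## RAID-5 parity in a nutshell : parity = A XOR B, stored on a third disk.
	## If the disk holding A fails, A is recomputed as parity XOR B.
	A=$((0xA5)) ; B=$((0x3C))
	P=$(( A ^ B ))
	echo "recovered A : $(( P ^ B ))"	# prints 165, i.e. 0xA5 again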

RAIDs can be created by hardware (RAID controllers) or through software. A hardware RAID presents the resulting LUNs (Logical Units) as disks to the operating system. Using a hardware RAID is therefore not very different from using plain disks - once the RAID is configured. A software RAID is created by software that runs inside the operating system. This means the OS needs to be up and running before the RAID volumes become available. As a consequence, you can not boot off a software RAID (you can, but it complicates things - better to just boot off an ordinary disk, or a hardware RAID), and trouble with the OS or the RAID software will make the data on the disks unavailable.

You can, however, move a software RAID from one machine to another. You can also do this with a hardware RAID, provided the hardware is compatible. Keep in mind that a RAID is never a substitute for backups : a RAID will help your data survive a disk failure and might help to improve disk I/O performance, but you will still need backups to survive other hardware and system failures.


This is an exercise in setting up a software RAID on a linux system. We're using a VMware virtual machine with 3 IDE drives and 3 SCSI drives :
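The device names below are the ones used in the rest of this exercise (yours may differ, depending on your setup) :

	## list the disks the kernel sees
	fdisk -l | grep '^Disk /dev/'
	# /dev/hda                      : first IDE disk, holds the OS
	# /dev/hdc, /dev/hdd            : 2 unused IDE disks
	# /dev/sda, /dev/sdb, /dev/sdc  : 3 unused SCSI disks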

Note that this is just an exercise, to see how things work. On a production system, you wouldn't build a RAID on an IDE master + slave pair, because it degrades performance : IDE controllers are not very good at handling multiple devices on the same channel. This, and other noteworthy considerations, are very well explained in the Linux Software RAID HowTo. It's recommended reading. We limit ourselves here to a "quick start guide to software RAID".

Getting started

We start by installing Linux (Debian Etch - but the concept applies to other distributions as well). While installing, we only partition the first IDE disk so that we get a bootable system. The other disks remain unconfigured for now. You could use the Debian installer to set up volumes for RAID, but it can just as well be done afterwards.

Then, install mdadm ('multiple device admin') :

 apt-get install mdadm 

The questions that are asked during the setup are of minor importance because we'll review the configuration files afterwards, but for reference : we will not have the root filesystem (/) on an array, but we do want to start arrays automatically. The latter will be handled either by the kernel (if you give the partitions the type 'Linux raid autodetect'), or by mdadm itself (see below).

Check that your system is now capable of using software RAID. If the file /proc/mdstat exists (it won't list any arrays yet, but you can check with cat /proc/mdstat), you're good to go. Also have a look at /etc/mdadm/mdadm.conf and 'man mdadm.conf'.
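On a system without arrays it will look something like this (the 'Personalities' line depends on which raid modules are loaded) :

	raidtest:~# cat /proc/mdstat
	Personalities :
	unused devices: <none>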

Creating RAID arrays

Linux software RAIDs are composed of partitions, so you need to partition the disks. Partitions in a RAID array need to be the same size. If you have disks of the same size, you can create a primary partition that covers each disk completely. With disks of different sizes, you need to create partitions of the same size, and you lose the extra space on the larger disk.

1-

Use fdisk to create partitions (re. dealing with disks). Create a partition on each of the 2 unused IDE disks, and on the (unused) SCSI disks. Remember that partitions that go into one array need to be the same size. Set the partition type to fd ('Linux raid autodetect') by using the fdisk command 't'.
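An fdisk session for one of the disks would roughly look like this (repeat for each disk; pick sizes that match across the array) :

	fdisk /dev/hdc
	# n  : new partition - primary, number 1, accept the default start/end
	# t  : change the partition type - enter 'fd' (Linux raid autodetect)
	# p  : print the partition table to double-check
	# w  : write the changes to disk and quit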

Have a look in the /dev directory for /dev/md0, /dev/md1 and the like. If they're not there, you need to create them, eg like this :

			mknod /dev/md0 b 9 0	 # create /dev/md0
			mknod /dev/md1 b 9 1	 # create /dev/md1
	

See man mknod. This creates a block device (b). The major number 9 indicates md ('multiple device') devices, and the minor numbers are assigned sequentially.
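You can check the device nodes with ls; the output should look something like this (timestamps left out) :

	raidtest:~# ls -l /dev/md0 /dev/md1
	brw-rw---- 1 root disk 9, 0 ... /dev/md0
	brw-rw---- 1 root disk 9, 1 ... /dev/md1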

2-


		## create arrays
		#
		# raid 1 using hdc1 and hdd1
		mdadm --create /dev/md0 --chunk=4 --level=1 --raid-devices=2 /dev/hdc1 /dev/hdd1

		# raid 5 using sda1 sdb1 sdc1
		mdadm --create /dev/md1 --chunk=4 --level=5 --raid-devices=3 /dev/sda1 /dev/sdb1 /dev/sdc1

		## create filesystems - ext2 can be optimized for use on RAID, so check the man page
		mkfs -t ext2 /dev/md0
		mkfs -t ext2 /dev/md1

		## create additional mount points and edit /etc/fstab, eg
		/dev/md0	/srv		ext2	defaults	0 0 
		/dev/md1	/srv/store	ext2	defaults	0 0
	
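To take the arrays into use and keep an eye on them while they synchronise (a sketch, using the mount points from the fstab entries above) :

	## create the mount points and mount everything listed in /etc/fstab
	mkdir -p /srv
	mount /dev/md0
	mkdir -p /srv/store
	mount /dev/md1

	## watch the initial build / resync of the arrays
	cat /proc/mdstat
	mdadm --detail /dev/md0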

If you've partitioned the physical disks with type "Linux raid autodetect", the (Debian) defaults will start the RAID arrays during system startup. If not, you would probably (I didn't check) have to make sure that :

/etc/default/mdadm would need an entry like ' AUTOSTART=true '

/etc/mdadm/mdadm.conf would need a DEVICE line and an ARRAY definition, eg for the raid-1 array described here :

	DEVICE /dev/hdc1 /dev/hdd1
	ARRAY /dev/md0 level=raid1 num-devices=2 UUID=218b9181:d083dcd5:aef900d0:9688b0e3 devices=/dev/hdc1,/dev/hdd1
	

The command mdadm --examine --scan (run after mdadm --create) produces output that can be used to create the ARRAY entry for /etc/mdadm/mdadm.conf. You need to add the 'devices=' parameter, and the devices mentioned therein also need to be present in the DEVICE line.
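One way to do that (check the result with an editor before you reboot) :

	## append the ARRAY definitions to the config file, then edit it
	## to add a DEVICE line and the 'devices=' parameters
	mdadm --examine --scan >> /etc/mdadm/mdadm.conf
	vim /etc/mdadm/mdadm.conf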

Managing disk space

Some RAID configurations, such as RAID-5, are suitable for adding disks (although you will have to 'grow' the filesystem afterwards), as sketched below. Another, very convenient way of managing disk space is using LVM on top of arrays. You can use arrays as physical volumes for a volume group, in which you can then create logical volumes that you mount on your directories. This combines the advantages of redundant disks (and other RAID features such as 'hot spare' disks) with those of logical volumes (spanning disks, partition and directory sizes independent of disk sizes, ...)
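Growing the raid-5 array with an extra disk would go roughly like this (a sketch : /dev/sdd1 is a hypothetical extra partition of the same size, and raid-5 reshaping needs a reasonably recent kernel) :

	## add a 4th device to the raid-5 array and grow it
	mdadm --add /dev/md1 /dev/sdd1			# added as a spare first
	mdadm --grow /dev/md1 --raid-devices=4		# reshape the array over 4 devices

	## when the reshape has finished (watch /proc/mdstat), grow the filesystem
	## (unmount an ext2 filesystem first - it can not be resized while mounted)
	umount /srv/store
	e2fsck -f /dev/md1
	resize2fs /dev/md1
	mount /srv/store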

Using arrays is similar to using real disks in LVM :

	## joining 2 arrays in a logical group and create logical volume(s) :

	raidtest:~# pvcreate /dev/md0
	  Physical volume "/dev/md0" successfully created
	raidtest:~# pvcreate /dev/md1
	  Physical volume "/dev/md1" successfully created

	raidtest:~# vgcreate vg01 /dev/md0 /dev/md1
	  Volume group "vg01" successfully created

	raidtest:~# vgdisplay | grep "Free"
	  Free  PE / Size       764 / 2.98 GB
 
	raidtest:~# lvcreate -l764 -n lv01 vg01
	  Logical volume "lv01" created
	
	## create a filesystem and set it up for mounting
	raidtest:~# mkfs -t ext2 /dev/vg01/lv01
	raidtest:~# vim /etc/fstab
	raidtest:~# mount -a

	## result (after copying some files to /srv to see if things work)
	raidtest:~# df -h
	Filesystem            Size  Used Avail Use% Mounted on
	/dev/hda1             714M  264M  412M  39% /

	/dev/mapper/vg01-lv01
	                      3.0G  225M  2.6G   8% /srv

	
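And if /srv runs out of space later on, the volume group and the logical volume can be extended the same way, eg with an additional array (/dev/md2 is hypothetical here) :

	## add another array to the volume group
	pvcreate /dev/md2
	vgextend vg01 /dev/md2

	## enlarge the logical volume with the free extents that became available
	vgdisplay | grep "Free"
	lvextend -l +764 /dev/vg01/lv01		# 764 : the number of free PE reported above

	## grow the filesystem (ext2 can not be resized while mounted)
	umount /srv
	e2fsck -f /dev/vg01/lv01
	resize2fs /dev/vg01/lv01
	mount /srv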

Koen Noens
October 2007