Creating a new RAID 6 drive & attaching it to a Xen VM


I have four 2TB drives in my NAS. Each is split into two partitions: 500GB and the remaining ~1.5TB. The 1.5TB partitions were used in a RAID5 for media – music, backups of DVDs, etc. The 500GB partitions were supposed to be for crucial documents – stuff that warranted double protection against disk failure.

Except that for the past year, I never got around to creating the RAID6. That changed today.

Since the partitioning was already done, it was surprisingly simple to get it up and running and attached to the fileserver VM.

A single command joined the partitions into the RAID array: mdadm --create /dev/md125 --chunk=512 --level=6 --raid-devices=4 --spare-devices=0 /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1. It automatically started syncing.
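
The initial sync takes a while on partitions this size, but it's easy to keep an eye on – something like this, using the same array name:

# Watch the rebuild progress and estimated finish time
cat /proc/mdstat
# Or pull the full state of the array
mdadm --detail /dev/md125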

Next was pvcreate /dev/md125 to prepare the volume for LVM. After that, vgcreate mirrordata /dev/md125 took care of the volume group. Finally, lvcreate -l100%VG -n lvm0 mirrordata created the actual logical volume. Note the -l100%VG – it's the first time (I think) I've used it. Since I wanted the entire volume group used, it was easier than trial and error to get the proper size; I tried -L1000G, but got complaints that it was 4MB too short.

Which was annoying, because I liked the nice round number.
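
For anyone hitting the same rounding problem: LVM allocates in extents (4MB by default), so sizing by extents instead of bytes sidesteps it entirely. A rough sketch, using the same VG name:

# Check how many extents the volume group has free
vgdisplay mirrordata | grep Free
# Allocate by extent count (-l) instead of bytes (-L) –
# 100%FREE grabs whatever is left, no trial and error needed
lvcreate -l 100%FREE -n lvm0 mirrordata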

Anyway. Rounding off the whole thing was formatting the volume and attaching it to the VM. Formatting was just mkfs.ext4 /dev/mirrordata/lvm0, and attaching it was done with xm block-attach 1 phy:/dev/mirrordata/lvm0 /dev/xvdc w. The key thing I originally overlooked was the phy: prefix – without it, xm complained “Error: Block device must have physical details specified”. Took me a while to realise what it wanted, but it’s all working now.

Filesystem                     Size  Used Avail Use% Mounted on
/dev/mapper/vg_helium-lv_root   19G  2.3G   16G  13% /
tmpfs                          498M     0  498M   0% /dev/shm
/dev/xvda1                     485M   65M  395M  15% /boot
/dev/xvdb                      4.0T  2.4T  1.4T  64% /home
/dev/xvdc                      985G  200M  935G   1% /home/kyl191/mirrordata
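
One caveat worth noting: xm block-attach doesn’t survive a VM reboot. To make the disk permanent, it should also go in the domU’s config on the dom0, plus the guest’s fstab – roughly like this (the config filename is just a guess at where yours lives):

# dom0: in the domU config (e.g. /etc/xen/fileserver.cfg),
# add this entry to the existing disk = [ ... ] list:
#     'phy:/dev/mirrordata/lvm0,xvdc,w',

# Inside the VM: mount it now, and on every boot via fstab
mkdir -p /home/kyl191/mirrordata
mount /dev/xvdc /home/kyl191/mirrordata
echo '/dev/xvdc /home/kyl191/mirrordata ext4 defaults 0 2' >> /etc/fstab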

=)
