
How to set up an encrypted USB-disk software-RAID-1 on Debian GNU/Linux using mdadm and cryptsetup

This is what I set up for backups recently, using a cheap USB enclosure which can house 2 SATA disks and presents them as 2 USB mass-storage devices to my system (using only one USB cable). Without any further introduction, here goes the HOWTO:

First, create one big partition of exactly the same size on each of the two disks (/dev/sdc and /dev/sdd in my case). The cfdisk details are omitted here.

  $ cfdisk /dev/sdc
  $ cfdisk /dev/sdd
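
To double-check that both partitions really ended up exactly the same size, you can compare their sizes in bytes (an optional sanity check, not part of my original steps; blockdev is in util-linux):

  $ blockdev --getsize64 /dev/sdc1
  $ blockdev --getsize64 /dev/sdd1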

Then, create a new RAID array using the mdadm utility:

  $ mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdc1 /dev/sdd1

The array is named md0 and consists of the two devices (--raid-devices=2) /dev/sdc1 and /dev/sdd1. It's a RAID-1 array (--level=1), i.e. data is simply mirrored on both disks, so if one of them fails you don't lose data. After creation the array will be synchronized so that both disks contain the same data (this process takes a long time). You can watch the current status via:

  $ cat /proc/mdstat
  Personalities : [raid1]
  md0 : active raid1 sdd1[1] sdc1[0]
        1465135869 blocks super 1.1 [2/2] [UU]
        [>....................]  resync =  0.0% (70016/1465135869) finish=2440.6min speed=10002K/sec
  unused devices: <none>

Some more info is also available from mdadm:

  $ mdadm --detail --scan
  ARRAY /dev/md0 metadata=1.01 name=foobar:0 UUID=1234578:1234578:1234578:1234578

  $ mdadm --detail /dev/md0
  /dev/md0:
          Version : 1.01
    Creation Time : Sat Feb  6 23:58:51 2010
       Raid Level : raid1
       Array Size : 1465135869 (1397.26 GiB 1500.30 GB)
    Used Dev Size : 1465135869 (1397.26 GiB 1500.30 GB)
     Raid Devices : 2
    Total Devices : 2
      Persistence : Superblock is persistent
      Update Time : Sun Feb  7 00:03:21 2010
            State : active, resyncing
   Active Devices : 2
  Working Devices : 2
   Failed Devices : 0
    Spare Devices : 0
   Rebuild Status : 0% complete
             Name : foobar:0  (local to host foobar)
             UUID : 1234578:1234578:1234578:1234578
           Events : 1
      Number   Major   Minor   RaidDevice State
         0       8       33        0      active sync   /dev/sdc1
         1       8       49        1      active sync   /dev/sdd1

Next, you'll want to create a big partition on the RAID device (cfdisk details omitted)...

  $ cfdisk /dev/md0

...and then encrypt all the (future) data on the device using dm-crypt+LUKS and cryptsetup:

  $ cryptsetup --verbose --verify-passphrase luksFormat /dev/md0p1
  Enter your desired passphrase here (twice)
  $ cryptsetup luksOpen /dev/md0p1 myraid
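
If you want to double-check the result, cryptsetup can dump the LUKS header (cipher, key slots, etc.) of the formatted device:

  $ cryptsetup luksDump /dev/md0p1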

After opening the encrypted container with cryptsetup luksOpen you can create a filesystem on it (ext3 in my case; -m 0 disables the blocks reserved for root, which is fine on a pure backup volume):

  $ mkfs.ext3 -j -m 0 /dev/mapper/myraid

That's about it. In the future you can access the RAID data using the steps below.

Starting the RAID and mounting the drive:

  $ mdadm --assemble /dev/md0 /dev/sdc1 /dev/sdd1
  $ cryptsetup luksOpen /dev/md0p1 myraid
  $ mount -t ext3 /dev/mapper/myraid /mnt

Shutting down the RAID:

  $ umount /mnt
  $ cryptsetup luksClose myraid
  $ mdadm --stop /dev/md0
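
Optionally, you can record the array in /etc/mdadm/mdadm.conf once, so that it can later be assembled without remembering the device names (standard mdadm usage, though not part of my original setup):

  $ mdadm --detail --scan >> /etc/mdadm/mdadm.conf
  $ mdadm --assemble --scan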

That's all. Performance is shitty due to all the data being shoved out over one USB cable (and USB itself being too slow for these amounts of data), but I don't care too much about that as this setup is meant for backups, not performance-critical stuff.

Update 04/2011: Thanks to Bohdan Zograf, there's now a Belarusian translation of this article!

Note to self: Missing lvm2 and cryptsetup packages lead to non-working initrd very, very soon

I recently almost died from a heart attack because, after a really horrible crash (don't ask), Debian unstable on my laptop wouldn't boot anymore. The system hung at "Waiting for root filesystem...", and I was in panic mode as I feared I had lost all my data (and, as usual, my backups were waaay too old).

At first I suspected that something had actually been erased or mangled by the crash, either at the dm-crypt layer, the LVM layer, or the ext3 filesystem on top of those. After several hours of messing with live CDs, cryptsetup, LVM commands (such as pvscan, pvs, vgchange, vgs, vgck), and finally fsck, I still had not managed to boot my laptop.

I finally was able to boot by changing the initrd from initrd.img-2.6.30-2-686 to initrd.img-2.6.30-2-686.bak in the GRUB2 menu (at boot-time), at which point it was clear that something was wrong with my current initrd. A bit of debugging and some initrd comparisons revealed the cause:

Both the cryptsetup and lvm2 packages were no longer installed on my laptop, which made all update-initramfs invocations (e.g. upon kernel package updates) create initrds which did not contain the proper dm-crypt and LVM support. Hence, no booting for me. I only noticed because of the crash, as I usually don't reboot the laptop very often (two or three times per year, maybe).
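
For the record, here's how you can check whether an initrd actually contains the cryptsetup and lvm bits, and how to fix things if it doesn't (a sketch; lsinitramfs ships with Debian's initramfs-tools, and the initrd filename is the one from my system):

  $ lsinitramfs /boot/initrd.img-2.6.30-2-686 | grep -E 'cryptsetup|lvm'
  $ aptitude install cryptsetup lvm2
  $ update-initramfs -u -k all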

Now, as to why those packages were removed I have absolutely no idea. I did not remove them knowingly, so I suspect some dist-upgrade did it and I didn't notice (but I do carefully check which packages dist-upgrade tries to remove, usually)...

Resizing ext3-on-LVM-on-dmcrypt file systems, moving disk space from one LV to another

Back in 2008 I wrote a small article about resizing LVM physical volumes. Recently I had to do something similar, but slightly more complicated. My /usr logical volume (LV) was getting full on my laptop disk, so I wanted to shrink another LV and move some of that space over to /usr. Here's one way you can do that.

Requirements: a live CD containing all required utilities (cryptsetup, LVM tools, resize2fs); I used grml.

Important: If you plan to perform any of these steps, make sure you have recent backups! I take no responsibility for any data loss you might experience. You have been warned!

First, shut down the laptop and boot using the live CD. Then, open the dm-crypt device (/dev/hda3 in my case) by entering your passphrase:

  $ cryptsetup luksOpen /dev/hda3 foo

Activate all (newly available) LVM volume groups in that encrypted device:

  $ vgchange -a y

(If the volume group doesn't show up right away, you may also need a vgscan and/or lvscan first.)

Check how much free space we have for putting into our /usr LV:

  $ vgdisplay | grep Free
  Free  PE / Size       0 / 0   

OK, so we have none. Thus, we need to shrink another LV (/home, in my case) and put that newly freed space into the /usr LV. In order to do that, we have to check the current size of the /home LV:

  $ mount -t ext3 /dev/vg-whole/lv-home /mnt
  $ df --block-size=1M | grep -C 1 /mnt
  $ umount /mnt

(If you know how to find out the size of an ext3 file system without mounting it, please let me know.) Update: See the comments for suggestions.
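
For what it's worth, one approach along those lines is to read the ext3 superblock with tune2fs; the block count multiplied by the block size gives the filesystem size (a sketch, not what I originally did):

  $ tune2fs -l /dev/vg-whole/lv-home | grep -Ei 'block (count|size)'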

Write down the total number of 1M blocks on the file system (116857 in my case); we'll need that later. Now run 'fsck' on the /home LVM logical volume, which is required for the 'resize2fs' step afterwards. This will take quite a while.

  $ fsck -f /dev/vg-whole/lv-home

The next step is resizing the ext3 file system in the /home LVM logical volume, making it 1 GB smaller than before (of course you must have >= 1 GB of free space on /home for this to work). We use bash arithmetic expansion to do the math.

Note: I'm not so sure about the sizes here; in my first attempt something went wrong and resize2fs said "filesystem too small" or the like. The units should actually match (df --block-size=1M and the M suffix of resize2fs both mean MiB); the more likely trap is that df reports the size of the filesystem, not of the LV, while lvreduce below cuts exactly 1 GiB off the LV, so if those two starting sizes differ, the filesystem can end up larger than the shrunken LV. A more forgiving sequence is sketched below. Please leave a comment if you know more!

  $ resize2fs /dev/vg-whole/lv-home $((116857-1024))M

Then, we can safely reduce the LV itself. Note: order is very important here, you must shrink the ext3 filesystem first, and then shrink the LV! Doing it the other way around will destroy your filesystem!

  $ lvreduce -L -1G /dev/vg-whole/lv-home
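
If you want to play it safer (a sketch, not what I originally ran; 110000M is just an example with a generous margin), shrink the filesystem clearly below the target size first, reduce the LV, and then let resize2fs grow the filesystem to exactly fill the smaller LV:

  $ resize2fs /dev/vg-whole/lv-home 110000M
  $ lvreduce -L -1G /dev/vg-whole/lv-home
  $ resize2fs /dev/vg-whole/lv-home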

Now that we have 1 GB of free space to spend on LVs, we assign that space to the /usr LVM logical volume like this:

  $ lvextend -L +1G /dev/vg-whole/lv-usr

As usual, we then run 'fsck' on the filesystem in order to be able to use 'resize2fs' to resize it to the biggest possible size (that's the default if resize2fs gets no parameters):

  $ fsck -f /dev/vg-whole/lv-usr
  $ resize2fs /dev/vg-whole/lv-usr

That's it. You can now shut down the live CD system and boot into the normal OS with the new space allocations:

  $ vgchange -a n
  $ cryptsetup luksClose foo
  $ halt

Resizing a dm-crypt / LVM / ext3 partition

I've bought a new hard drive for my laptop recently, because I finally got fed up with my constantly-full disk. Having to browse around in $HOME looking for stuff which can be safely deleted just because I want to run fetchmail (and that would fill up my disk) just sucks. So, after getting a cheapo 160 GB 2.5" disk (the old one was 80 GB), I had to move all my data to the new disk.

As I didn't want to re-install from scratch, I started by dd'ing the whole disk over to the new one (using a live CD and an external USB hard-drive enclosure). This took pretty long, but otherwise went fine.
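
For reference, the copying itself is a single dd invocation (a sketch; I'm assuming the old disk shows up as /dev/hda and the new one, in the USB enclosure, as /dev/sda; double-check with fdisk -l before running anything like this):

  $ dd if=/dev/hda of=/dev/sda bs=1M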

The new disk then contained all my partitions (hda1-hda3) and also GRUB in the MBR etc., as expected, but was still only 80 GB in size, of course. So the first step is to enlarge the hda3 partition, which is a dm-crypt volume that contains various LVM logical volumes (for /home, /usr, /var, swap, etc.), each of them using the ext3 filesystem (except for the swap volume, of course).

0. Perform backups, boot from a live CD

Important: If you plan to perform any of these steps, make sure you have recent backups! I take no responsibility for any data loss you might experience. You have been warned!

First off, you should boot from a live CD which has all the tools you'll need, including cryptsetup, LVM tools, resize2fs, etc. You can use the nice grml live CD for instance.

1. Resize partition

This sounds scary (and it is!), but the way I enlarged the encrypted hda3 partition was by deleting it via fdisk. Issue the "p" command in fdisk and write down the exact start cylinder of hda3. Then delete hda3. Now create a new hda3 partition which starts at exactly the same cylinder as the old one but is larger; in my case it got ca. 80 GB of additional space.

Your data will still be there if you don't screw up, and the partition is bigger now. Using something like gparted will likely not work as expected, as the partition is encrypted!
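
For reference, the fdisk session looks roughly like this (a sketch of the interactive commands; answer the cylinder prompts according to your own disk):

  $ fdisk /dev/hda
  Command (m for help): p   # note the exact start cylinder of hda3
  Command (m for help): d   # delete partition 3
  Command (m for help): n   # recreate hda3: same start cylinder, larger end
  Command (m for help): w   # write the partition table and exit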

2. Resize dm-crypt volume

Nothing to be done, it seems dm-crypt automatically adapts and notices that the partition is bigger. Just "open" the encrypted volume using cryptsetup now:

  $ cryptsetup luksOpen /dev/hda3 foo

3. Resize LVM physical volume

Next step is to tell LVM about the new space. We first resize the LVM physical volume on the foo "partition" to use up all newly-available space.

  $ pvresize /dev/mapper/foo
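
You can verify that the physical volume actually grew (an optional check, not in my original notes):

  $ pvdisplay /dev/mapper/foo | grep 'PV Size'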

4. Resize LVM logical volume

Now we can pump the new space into any of the logical volumes (or into multiple ones). I only increased one logical volume, my /home:

  $ lvresize -L +74G /dev/vg-whole/lv-home

5. Resize ext3 filesystem

The final step is to resize the ext3 filesystem on the lv-home logical volume, after running the obligatory fsck (use -f to force the full check that resize2fs expects). I first used ext2resize, but that failed horribly:

  $ fsck -f /dev/vg-whole/lv-home
  $ ext2resize /dev/vg-whole/lv-home
  error: Invalid argument: seeking to 3258921205760

This seems to be a known bug; ext2resize apparently cannot handle large disks or something, and as I found out a few minutes later, it's pretty much deprecated anyway. The better solution is to use resize2fs:

  $ fsck -f /dev/vg-whole/lv-home
  $ resize2fs /dev/vg-whole/lv-home

That's it. We can now reboot the system from disk and enjoy ca. 80 GB of additional hard drive space. Yay!

RAID5 + dm-crypt + LVM + ext3 Debian install and benchmarks

OK, so I've set up a RAID5 at home because I'm getting tired of failed disk drives and data losses.

Some notes:

  • The system consists of 3 x 300 GB IDE drives in software RAID5 (standard Linux kernel and mdadm), thus ca. 600 GB usable storage space.
  • I've used the stock Debian installer to set up all of this, no custom hacks or anything needed.
  • Each drive is on an extra IDE bus/controller (1x onboard/internal, 2x on a PCI IDE controller card), as broken IDE disks (lacking hot-swap capabilities) often take down the whole IDE bus with them; it's not a good idea to put two disks on one IDE bus.
  • The software stack is: RAID5 at the bottom, dm-crypt on top of that to encrypt the whole RAID, and LVM on top of that to partition the system into /, /usr, /var, /tmp, /home, and swap.
  • /boot is on an extra 1 GB partition (replicated on each drive), as GRUB doesn't work on RAIDed disks and I want to use GRUB, not LILO. GRUB is also installed in the MBR of each drive, so if one of them fails, the other two can still come up (see the sketch after this list).
  • I installed and configured smartmontools to check the status of the drives, and hddtemp to check their temperature.
  • Stability tests so far: while the system is running, pull out one of the IDE drives (yes, they're not hot-swappable, and that's usually not such a good idea). The system survived without data loss. Time for rebuilding the array: ca. 1 hour. Second test: while the system is running, pull the power plug. The system survived that, too.
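
As for installing GRUB into the MBR of each drive (mentioned above), this boils down to running grub-install once per disk. A sketch; the device names are assumptions based on my one-drive-per-IDE-bus setup:

  $ grub-install /dev/hda
  $ grub-install /dev/hdc
  $ grub-install /dev/hde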

Some stats from bonnie++ if anybody cares:

Version  1.03       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
bonsai           2G 26727  72 39426  19 16690   7 28725  65 34164   7 215.3   0
                    ------Sequential Create------ --------Random Create--------
                    -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16 +++++ +++ +++++ +++ +++++ +++ +++++ +++ +++++ +++ +++++ +++
bonsai,2G,26727,72,39426,19,16690,7,28725,65,34164,7,215.3,0,16,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++

(Now, if I only knew what all those figures mean ;-)

No, neither the software RAID5, nor the dm-crypt layer, nor LVM causes any noticeable performance degradation (subjectively, that is; I don't care enough to actually measure anything). The CPU is idling all the time.

Power consumption is rather high (partly due to the mainboard and CPU, but also because of the disks + fans) and the system is pretty loud, both of which suck in the long run. For later, I plan an ultra-silent, ultra-low-power RAID5 with 2.5" disks attached via USB to a (silent, low-power) NSLU2.
