Sunday, April 27, 2014

Setting Up a ZFS File System on Ubuntu 13.04

Or How I Stopped Worrying and Learned to Love ZFS

The Sad Story

Something terrible happened a while back. I decided to upgrade to Ubuntu 13.04 (I did say a while back, right?) on my home desktop and did a pretty poor job remounting my disks after the reinstall. I guess I hit the reset button at the wrong time and BAM! There goes my main drive. Some very sad warnings about superblock corruption, much googling, and a last-ditch attempt to sudo fsck.ext4 my way out of said corrupt superblocks ended only in tears. In a fit of sadness and anger I relayed to a friend of mine the troubles I was going through, to which he replied, “You’re screwed buddy, run ZFS instead.” (I may have cleaned up the language a bit.)

So here we are, installing and running ZFS. I better start by explaining my home setup and how I got into this mess in the first place. Previously, I had three physical disks on my computer:

  • One 80 GB hard drive that houses my OS and swap partition, formatted as ext4.
  • One 160 GB hard drive that houses my media, formatted as ext4.
  • One 320 GB hard drive that houses my home directory (with a couple VMs in there for good measure), formatted as ext4.

It was unfortunately the 320 GB drive that decided to check out on me, but thankfully I did have a relatively recent backup (3 months old) that I could use. The whole experience, though, has made me a little wary about not having proper fault tolerance on my machine, and so I’ve decided to try out ZFS. I also placed an order for two 2 TB drives. Along with an old 10,000 RPM Raptor drive, I’m going to completely swap out my three hard drives above.

The Solution

I found a particularly helpful article which I mainly followed to set up ZFS on my own computer.

We start with a fresh install of Ubuntu 13.04 on my Raptor drive. After that we install the “Native ZFS for Linux” PPA:

$ sudo add-apt-repository ppa:zfs-native/stable
$ sudo apt-get update
$ sudo apt-get install ubuntu-zfs

Next I needed to set up a ZFS storage pool. To do this properly I needed a list of all of my available hard drives:

$ ls -l /dev/disk/by-id/

This command returns the following output about my system drives:

total 0
lrwxrwxrwx 1 root root 9 Oct 10 21:49 ata-WDC_WD2002FAEX-007BA0_WD-WCAY01715319 -> ../../sdb
lrwxrwxrwx 1 root root 9 Oct 10 21:49 ata-WDC_WD2002FAEX-007BA0_WD-WCAY01780593 -> ../../sdc
lrwxrwxrwx 1 root root 9 Oct 10 21:49 ata-WDC_WD360GD-00FNA0_WD-WMAH91506266 -> ../../sda
lrwxrwxrwx 1 root root 10 Oct 10 20:49 ata-WDC_WD360GD-00FNA0_WD-WMAH91506266-part1 -> ../../sda1
lrwxrwxrwx 1 root root 10 Oct 10 21:49 ata-WDC_WD360GD-00FNA0_WD-WMAH91506266-part2 -> ../../sda2
lrwxrwxrwx 1 root root 10 Oct 10 20:49 ata-WDC_WD360GD-00FNA0_WD-WMAH91506266-part5 -> ../../sda5
lrwxrwxrwx 1 root root 9 Oct 10 21:49 scsi-SATA_WDC_WD2002FAEX-_WD-WCAY01715319 -> ../../sdb
lrwxrwxrwx 1 root root 9 Oct 10 21:49 scsi-SATA_WDC_WD2002FAEX-_WD-WCAY01780593 -> ../../sdc
lrwxrwxrwx 1 root root 9 Oct 10 21:49 scsi-SATA_WDC_WD360GD-00FWD-WMAH91506266 -> ../../sda
lrwxrwxrwx 1 root root 10 Oct 10 20:49 scsi-SATA_WDC_WD360GD-00FWD-WMAH91506266-part1 -> ../../sda1
lrwxrwxrwx 1 root root 10 Oct 10 21:49 scsi-SATA_WDC_WD360GD-00FWD-WMAH91506266-part2 -> ../../sda2
lrwxrwxrwx 1 root root 10 Oct 10 20:49 scsi-SATA_WDC_WD360GD-00FWD-WMAH91506266-part5 -> ../../sda5
lrwxrwxrwx 1 root root 9 Oct 10 21:49 wwn-0x50014ee20909213c -> ../../sdc
lrwxrwxrwx 1 root root 9 Oct 10 21:49 wwn-0x50014ee2b39c83f4 -> ../../sdb

Using a combination of this output and the system programs “Disks” and “gparted”, I figured out that the two newly installed 2 TB hard drives are:

scsi-SATA_WDC_WD2002FAEX-_WD-WCAY01715319
scsi-SATA_WDC_WD2002FAEX-_WD-WCAY01780593
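
As an aside, if you know the drive model, this identification step can be partly scripted. The sketch below is just an illustration (not something from the article I followed): it takes an `ls /dev/disk/by-id/` style listing on stdin, keeps the entries matching a model string, and drops the `-partN` partition links so only whole-disk names remain.

```shell
# Illustrative helper: filter a by-id listing (on stdin) down to
# whole-disk entries for a given model string, dropping partition links.
filter_whole_disks() {
  grep "$1" | grep -v -- '-part'
}

# Example usage on a live system (prints the two WD2002FAEX entries above):
#   ls /dev/disk/by-id/ | filter_whole_disks WD2002FAEX
```

Still double-check the result against “Disks” or gparted before handing any names to zpool, since a typo here means building a pool on the wrong drive.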

I then used the command:

$ sudo zpool create -f tank mirror /dev/disk/by-id/scsi-SATA_WDC_WD2002FAEX-_WD-WCAY01715319 /dev/disk/by-id/scsi-SATA_WDC_WD2002FAEX-_WD-WCAY01780593

to generate a storage pool named tank in mirror mode, giving it the disk IDs of the two new disks. The -f flag was used in this case to force the mirroring to occur. What happened was that I had already installed Ubuntu 13.04 on one of my 2 TB drives by accident (it was mounted as /dev/sda). I ended up reloading the OS on the Raptor drive, but didn’t remove the previous install from the 2 TB drive. If your drives are cleanly formatted, you may not need this flag.

Also, as an aside, the -f flag is needed if you want to make a mirrored pool with two hard drives of different sizes. I initially did this with my 160 GB and 320 GB drives to play around with ZFS. This works as well, but the total size of your mirrored pool will be the size of the smaller of your two drives. In my case, the size of the mirrored pool using my old drives was 160 GB. There are some complex ways around this, but they involve careful partitioning of your drives, and buying matching drives seemed easier.
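
That capacity rule is simple enough to write down. The snippet below is just a toy illustration of the arithmetic, not anything the ZFS tools do for you: the usable size of a two-disk mirror is the smaller of the two disk sizes.

```shell
# Toy sketch of the mirror-capacity rule: a two-disk mirror's usable
# size is the smaller of the two disks (sizes in GB for illustration).
mirror_capacity() {
  if [ "$1" -lt "$2" ]; then echo "$1"; else echo "$2"; fi
}

mirror_capacity 160 320   # prints 160, matching my old-drive experiment
```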

Once your ZFS pool is all set up, you can run the command:

$ sudo zpool status

to get information about the health of your pool. It looks like we made our mirrored pool here with no problems.

  pool: tank
 state: ONLINE
  scan: none requested
config:

    NAME                                           STATE     READ WRITE CKSUM
    tank                                           ONLINE       0     0     0
      mirror-0                                     ONLINE       0     0     0
        scsi-SATA_WDC_WD2002FAEX-_WD-WCAY01715319  ONLINE       0     0     0
        scsi-SATA_WDC_WD2002FAEX-_WD-WCAY01780593  ONLINE       0     0     0

errors: No known data errors

Finally, we can use the following command:

$ sudo zfs list

to check the available storage capacity and mount point of your ZFS pool. 1.78 TB of mirrored storage. Looks good to me.

NAME  USED    AVAIL   REFER  MOUNTPOINT
tank  108K    1.78T     30K  /tank

Now that we have a ZFS pool, we should do something with it. It is possible to use ZFS as your root directory, but the setup looks a little more involved than I’m used to. Instead, I’m going to use my ZFS pool by mounting it as my home directory. First we make a filesystem under tank called “home”. This is done using the command:

$ sudo zfs create tank/home

Checking the list again, we now have:

$ sudo zfs list

NAME        USED  AVAIL  REFER  MOUNTPOINT
tank        108K  1.78T    30K  /tank
tank/home   30.5K 1.78T  30.5K  /tank/home

Next we’ll want to copy all of the files in the current home directory (on my OS hard drive) over to the ZFS filesystem, under /tank/home. Just to be extra sure that we copy everything, I first log out of my account and then use Ctrl+Alt+F1 to log back in to console mode. From here we do a complete recursive copy of my /home directory to /tank/home by using the command:

$ sudo cp -a /home/. /tank/home/
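
Before switching mountpoints, it’s worth sanity-checking that the copy actually matches. A plain `diff -r` between the two trees will do; the sketch below demonstrates the idea on throwaway temp directories rather than the real /home and /tank/home, so it can be run anywhere.

```shell
# Sanity-check sketch: after `cp -a SRC/. DST/`, `diff -r` should find
# no differences between the two trees.
verify_copy() {
  if diff -r "$1" "$2" > /dev/null; then
    echo "copy verified"
  else
    echo "trees differ" >&2
  fi
}

# Demo on throwaway directories (substitute /home and /tank/home in practice):
src=$(mktemp -d); dst=$(mktemp -d)
echo "hello" > "$src/file.txt"
mkdir "$src/sub" && echo "world" > "$src/sub/nested.txt"
cp -a "$src/." "$dst/"
verify_copy "$src" "$dst"   # prints "copy verified"
rm -rf "$src" "$dst"
```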

Finally, we set the mountpoint of the filesystem to be the /home directory:

$ sudo zfs set mountpoint=/home tank/home

This should give you a warning about not being able to mount because /home is not empty. Thanks. OK then, we’ll do an overlay mount using the command:

$ sudo zfs mount -vO -a

followed by the command:

$ sudo zfs mount

tank                            /tank
tank/home                       /home

to check that the ZFS filesystems were mounted in the correct spots. Once done, exit the console and press Ctrl+Alt+F7 to get back to the graphical login. You’re now ready to enjoy all the benefits of a ZFS filesystem!


Written with StackEdit.
