Monday, May 5, 2014

Moving My ZFS Pool to Ubuntu 14.04 LTS

Ubuntu 14.04 LTS is out! I’ve been waiting for the latest LTS release for a while now. Finally, I can synchronize the Ubuntu versions on my desktop and laptop. The way things are currently set up on my desktop, I have a single Raptor drive that I use for the OS and swap partition, and a pair of 2 TB hard drives set up as a ZFS mirrored pool, which I mount as my /home folder. Having my /home directory on a separate drive from my OS has made OS updates easy in the past. However, this is the first time I’ve done an Ubuntu update that also involves carrying my new ZFS pool across.

I found this website to be an awesome reference for a lot of what I am doing below.

First, we try to export my ZFS pool using the command:

$ sudo zpool export -f tank

but this resulted in a warning telling me that I couldn’t unmount the pool because it was currently in use. As it turns out, when you set things up so that the ZFS pool is your /home directory, you run into issues like this: you can’t unmount the drive, because you’re using it whenever you’re logged in. In the end I decided to throw caution to the wind, do the clean install of Ubuntu 14.04 on my Raptor drive, and worry about importing the pool later.
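
If you’d rather figure out exactly what’s holding the pool busy before giving up on the export, something like fuser run against the mountpoint should list the offending processes (this is purely a diagnostic step, nothing below depends on it):

$ sudo fuser -vm /home

In my case it would have just confirmed the obvious: my own login session.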

Setting up ZFS on your newly installed Ubuntu box is the same as before:

$ sudo add-apt-repository ppa:zfs-native/stable
$ sudo apt-get update
$ sudo apt-get install ubuntu-zfs

After that, it’s just a matter of importing the ZFS pool.

$ sudo zpool import -f tank
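
As a side note, if you’re not sure what the pool is called, or whether ZFS can even see the disks, running the import command with no pool name should just list any importable pools and their state without actually importing anything:

$ sudo zpool import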

The -f flag forces the import even though the pool is flagged as potentially in use by another system. Remember how we didn’t actually export the ZFS pool? If you followed my previous article about setting up the ZFS pool with its mountpoint at /home, you should also get a warning about not being able to mount the pool because /home is not empty. As before, we’ll do an overlay mount using the command:

$ sudo zfs mount -vO -a

followed by the command:

$ sudo zfs mount

tank                            /tank
tank/home                       /home

to make sure that the ZFS file system was mounted in the correct spot. Huzzah! We’re back in business!
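
For extra peace of mind, a quick

$ sudo zpool status

should also show both disks of the mirror ONLINE after the import, just as they were before the upgrade.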


Written with StackEdit.

Sunday, April 27, 2014

Setting Up a ZFS File System on Ubuntu 13.04

Or How I Stopped Worrying and Learned to Love ZFS

The Sad Story

Something terrible happened a while back. I decided to upgrade to Ubuntu 13.04 (I did say a while back, right?) on my home desktop and did a pretty poor job remounting my disks after the reinstall. I guess I hit the reset button at the wrong time and BAM! There goes my main drive. Some very sad warnings about superblock corruption, much googling, and a last-ditch attempt to sudo fsck.ext4 my way out of said corrupt superblocks ended only in tears. In a fit of sadness and anger I relayed to a friend of mine the troubles I was going through, to which he replied, “You’re screwed buddy, run ZFS instead.” (I may have cleaned up the language a bit.)

So here we are, installing and running ZFS. I better start by explaining my home setup and how I got into this mess in the first place. Previously, I had three physical disks on my computer:

  • One 80 GB hard drive that houses my OS and SWAP partition, formatted as ext4.
  • One 160 GB hard drive that houses my media, formatted as ext4.
  • One 320 GB hard drive that houses my home directory (with a couple VMs in there for good measure), formatted as ext4.

It was unfortunately the 320 GB drive that decided to check out on me, but thankfully I did have a relatively recent backup (3 months old) that I could use. The whole experience has made me a little wary of not having proper fault tolerance on my machine, and so I’ve decided to try out ZFS. I also placed an order for two 2 TB drives. Along with an old 10,000 RPM Raptor drive, I’m going to completely swap out the three hard drives above.

The Solution

This was a particularly helpful article, which I found and mainly followed to set up ZFS on my own computer.

We start with a fresh install of Ubuntu 13.04 on my Raptor drive. After that we install the “Native ZFS for Linux” PPA:

$ sudo add-apt-repository ppa:zfs-native/stable
$ sudo apt-get update
$ sudo apt-get install ubuntu-zfs

Next I needed to set up a ZFS storage pool. To do this properly I needed a list of all of my available hard drives:

$ ls -l /dev/disk/by-id/

This command returns the following output about my system drives:

total 0
lrwxrwxrwx 1 root root 9 Oct 10 21:49 ata-WDC_WD2002FAEX-007BA0_WD-WCAY01715319 -> ../../sdb
lrwxrwxrwx 1 root root 9 Oct 10 21:49 ata-WDC_WD2002FAEX-007BA0_WD-WCAY01780593 -> ../../sdc
lrwxrwxrwx 1 root root 9 Oct 10 21:49 ata-WDC_WD360GD-00FNA0_WD-WMAH91506266 -> ../../sda
lrwxrwxrwx 1 root root 10 Oct 10 20:49 ata-WDC_WD360GD-00FNA0_WD-WMAH91506266-part1 -> ../../sda1
lrwxrwxrwx 1 root root 10 Oct 10 21:49 ata-WDC_WD360GD-00FNA0_WD-WMAH91506266-part2 -> ../../sda2
lrwxrwxrwx 1 root root 10 Oct 10 20:49 ata-WDC_WD360GD-00FNA0_WD-WMAH91506266-part5 -> ../../sda5
lrwxrwxrwx 1 root root 9 Oct 10 21:49 scsi-SATA_WDC_WD2002FAEX-_WD-WCAY01715319 -> ../../sdb
lrwxrwxrwx 1 root root 9 Oct 10 21:49 scsi-SATA_WDC_WD2002FAEX-_WD-WCAY01780593 -> ../../sdc
lrwxrwxrwx 1 root root 9 Oct 10 21:49 scsi-SATA_WDC_WD360GD-00FWD-WMAH91506266 -> ../../sda
lrwxrwxrwx 1 root root 10 Oct 10 20:49 scsi-SATA_WDC_WD360GD-00FWD-WMAH91506266-part1 -> ../../sda1
lrwxrwxrwx 1 root root 10 Oct 10 21:49 scsi-SATA_WDC_WD360GD-00FWD-WMAH91506266-part2 -> ../../sda2
lrwxrwxrwx 1 root root 10 Oct 10 20:49 scsi-SATA_WDC_WD360GD-00FWD-WMAH91506266-part5 -> ../../sda5
lrwxrwxrwx 1 root root 9 Oct 10 21:49 wwn-0x50014ee20909213c -> ../../sdc
lrwxrwxrwx 1 root root 9 Oct 10 21:49 wwn-0x50014ee2b39c83f4 -> ../../sdb

Using a combination of this output and the system programs “Disks” and “gparted”, I figured out that the two newly installed 2 TB hard drives are:

scsi-SATA_WDC_WD2002FAEX-_WD-WCAY01715319
scsi-SATA_WDC_WD2002FAEX-_WD-WCAY01780593

I then used the command:

$ sudo zpool create -f tank mirror /dev/disk/by-id/scsi-SATA_WDC_WD2002FAEX-_WD-WCAY01715319 /dev/disk/by-id/scsi-SATA_WDC_WD2002FAEX-_WD-WCAY01780593

to generate a storage pool named tank in mirror mode, giving it the disk IDs of the two new disks. The -f flag was needed in this case to force the pool creation. What happened was that I had already installed Ubuntu 13.04 on one of my 2 TB drives by accident (it was mounted as /dev/sda). I ended up reloading the OS on the Raptor drive, but didn’t remove the previous install from that 2 TB drive. If your drives are cleanly formatted, you may not need to use this flag.
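
If you’d rather clean the disk up front than force your way past the warning, wiping the leftover filesystem signatures should also work. Something like the following, where /dev/sdX stands in for whichever disk has the stale install on it (double check the device name first, since this is destructive):

$ sudo wipefs -a /dev/sdX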

Also, as an aside, the -f flag is needed if you want to make a mirrored pool out of two hard drives of different sizes. I initially did this with my 160 GB and 320 GB drives to play around with ZFS. This works as well, but the total size of your mirrored pool will be that of the smaller of your two drives. In my case, the mirrored pool made from my old drives came out to 160 GB. There are some complex ways around this, but they involve careful partitioning of your drives, and buying matching drives seemed easier.
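
If you want to try that mismatched-drive experiment yourself, it looks just like the create command above, only with two differently sized disks (the pool name and disk IDs here are placeholders, not my actual drives):

$ sudo zpool create -f smallpool mirror /dev/disk/by-id/<160GB-disk-id> /dev/disk/by-id/<320GB-disk-id>

zfs list should then report a pool roughly the size of the smaller disk.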

Once your ZFS pool is all set up, you can run the command:

$ sudo zpool status

to get information about the health of your pool. It looks like we made our mirrored pool here with no problems.

  pool: tank
 state: ONLINE
  scan: none requested
config:

    NAME                                           STATE     READ WRITE CKSUM
    tank                                           ONLINE       0     0     0
      mirror-0                                     ONLINE       0     0     0
        scsi-SATA_WDC_WD2002FAEX-_WD-WCAY01715319  ONLINE       0     0     0
        scsi-SATA_WDC_WD2002FAEX-_WD-WCAY01780593  ONLINE       0     0     0

errors: No known data errors

Finally, we can use the following command:

$ sudo zfs list

to check the available storage capacity and mount point of your ZFS pool. 1.78 TB of mirrored storage. Looks good to me.

NAME  USED    AVAIL   REFER  MOUNTPOINT
tank  108K    1.78T     30K  /tank

Now that we have a ZFS pool, we should do something with it. It is possible to use ZFS as your root directory, but the setup looks a little more involved than I’m used to. Instead, I’m going to use my ZFS pool by mounting it as my home directory. First, we make a filesystem under tank called “home”. This is done using the command:

$ sudo zfs create tank/home

Checking the list again, we now have:

$ sudo zfs list

NAME        USED  AVAIL  REFER  MOUNTPOINT
tank        108K  1.78T    30K  /tank
tank/home   30.5K 1.78T  30.5K  /home

Next we’ll want to copy all of the files in the current home directory (on my OS hard drive) over to the ZFS filesystem, under /tank/home. Just to be extra sure that we copy everything, I first log off of my account and then use Ctrl+Alt+F1 to log back in at the console. From here we do a complete recursive copy of my /home directory to /tank/home using the command:

$ sudo cp -a /home/. /tank/home/
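
If you’re paranoid about the copy (I was), a dry-run rsync is a reasonable way to double check it. If the cp caught everything, it should report next to nothing left to transfer:

$ sudo rsync -aAXvn /home/. /tank/home/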

Finally, we set the mountpoint of the filesystem to the /home directory:

$ sudo zfs set mountpoint=/home tank/home

This should give you a warning about not being able to mount because /home is not empty. That’s fine; we’ll just do an overlay mount using the command:

$ sudo zfs mount -vO -a

followed by the command:

$ sudo zfs mount

tank                            /tank
tank/home                       /home

to check that the ZFS file system was mounted in the correct spot. Once that’s done, exit the console and press Ctrl+Alt+F7 to get back to the graphical login. You’re now ready to enjoy all the benefits of a ZFS filesystem!
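
One last thought: since the whole reason for the mirror is fault tolerance, it’s probably worth scrubbing the pool periodically so ZFS can catch and repair silent corruption early. A root cron entry along these lines should do it (the schedule and the path to zpool are just my suggestion; adjust them for your system):

$ sudo crontab -e
# run a scrub at 3am on the first of every month
0 3 1 * * /sbin/zpool scrub tank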


Written with StackEdit.

Sunday, April 6, 2014

... But will it vim?

Now that I’ve found a great new editor to write blog posts in, I need to know if there is a way to integrate my favorite text editor. SPOILER ALERT: it’s vim!

The answer to the question is a resounding YES! It is shockingly both possible and fairly easy!

Ctrl + Shift + J brings up the web console in Chrome. From there, all you have to do is copy and paste the following code into the console and you’re off to the races¹!

var ace = {}             // minimal shim around the page's own require/define
ace.require = require
ace.define = define
ace.require(["ace/lib/net"], function(acenet) {
    // load the Ace vim keybindings from the CDN...
    acenet.loadScript("//cdnjs.cloudflare.com/ajax/libs/ace/1.1.01/keybinding-vim.js", function() {
        // ...then grab the StackEdit editor instance and switch it to vim mode
        e = document.querySelector(".ace_editor").env.editor
        ace.require(["ace/keyboard/vim"], function(acevim) {
            e.setKeyboardHandler(acevim.handler);
        })
    })
})

However, you’ll have to do that each time you open up StackEdit in the browser. The better way to do this is to wrap the above code in a StackEdit extension. Just copy the code below into the StackEdit Settings -> Extensions -> UserCustom field, and now you’re really off to the races!

userCustom.onReady = function() {
    var ace = {}
    ace.require = require
    ace.define = define
    ace.require(["ace/lib/net"], function(acenet) {
        acenet.loadScript("//cdnjs.cloudflare.com/ajax/libs/ace/1.1.01/keybinding-vim.js", function() {
            e = document.querySelector(".ace_editor").env.editor
            ace.require(["ace/keyboard/vim"], function(acevim) {
                e.setKeyboardHandler(acevim.handler);
            });
        });
    });
    window.ace = ace;
};

Special thanks to this issue ticket on GitHub that outlined the solution.


Written with StackEdit.


  1. If you love vim as much as I do, you’ll also have to exclude the StackEdit URL in the Vimium extension for Chrome before vim bindings will start working in StackEdit.

Wednesday, March 12, 2014

... and then I found StackEdit

As you can see there hasn’t been much activity on my blog for the last few months. Some of that is due to getting acclimated to my new job. I’ve also been struggling with finding a good editor which easily allows me to express code and other exciting stuff without too much HTML overhead.

I’m not sure exactly how I ended up running into StackEdit, but it is just absolutely awesome. I’ve always enjoyed GitHub’s version of Markdown, as well as the flavor of Markdown used over at Stack Overflow, and I was pleasantly surprised by StackEdit’s usability. It also syncs with Dropbox and Google Drive, and can publish *.md files directly to Blogger.

Alright, enough with the promotional marketing. The real question is, “Will this tool make it easier for me to blog?” Maybe, we’ll have to see.


Written with StackEdit.

Thursday, October 3, 2013

Installing Windows 8.1 64-bit Guest on Ubuntu 13.04 64-bit using VirtualBox 4.2.10

In the process of trying to install a copy of Windows 8.1 64-bit as a guest on my Ubuntu 13.04 64-bit machine, I ran into a problem. After mounting the *.iso and starting up the VM, instead of being greeted with a pretty Windows install GUI, I saw this:

Your PC needs to restart.
Please hold down the power button.
Error Code: 0x000000C4
Parameters
0x0000000000000091
0x000000000000000F
0xFFFFF80213D5DA80
0x0000000000000000

I found this very disappointing. A little searching showed that there was already a solution for running a Windows 8.1 guest on a Windows host, and somewhat surprisingly it works on an Ubuntu host as well!

First we need to run the vboxmanage command and have it list all of the VMs. As you can see, I’ve named my VM “Tomato8”:

$ vboxmanage list vms
"Tomato8" {89fc5582-b49a-432f-be6f-534f4876f302}

Now using the exact name of the VM that you want to install Windows 8.1 on, run the following command:

$ vboxmanage setextradata "Tomato8" VBoxInternal/CPUM/CMPXCHG16B 1

After that you should be ready to install Windows 8.1! It seems that for 64-bit versions of Windows 8.1 your CPU needs to support the CMPXCHG16B instruction.
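
If you want to double check that the setting stuck, vboxmanage can read the value back, and the cx16 flag in /proc/cpuinfo is the kernel’s name for CMPXCHG16B support on the host:

$ vboxmanage getextradata "Tomato8" VBoxInternal/CPUM/CMPXCHG16B
$ grep -c cx16 /proc/cpuinfo

A non-zero count from grep means your CPU has the instruction.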


Written with StackEdit.