nas build, part 1: software/OS plans

See part 0 for the previous in the series.


These posts are not going to be focused on the physical build, as that’s been readily covered (and better) by others.

I want to focus on the software side of the build, where I am (and everyone is, basically) doing something unique.

My goal has been the following:

  • Boot off a (ideally static/read-only) image…

  • …imaged to two thumbdrives (current & next)…

    • rollbacks become “just use the previous thumbdrive, not the newly-imaged one”
  • …derived from a virtual machine.

    • virtual machine can be checkpointed, backed up and speculatively modified

As such, I have a simple Gentoo system on a 12GB virtual disk, with five 1GB “drives” playing a placeholder role for the ZFS array in the real machine. I chose 12GB because, while it’s easy to get 32GB thumbdrives, I wanted something comfortably below even a 16GB size, so that the image could safely be (re)imaged onto such a thumbdrive without worrying about any sizing or boundary issues.
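
For reference, creating those virtual disks with VBoxManage looks roughly like the following (the filenames here are just illustrative, not the ones I actually used):

    # a sketch: a 12GB (12288 MB) system disk for the VM
    VBoxManage createmedium disk --filename earth-system.vdi --size 12288
    # plus five 1GB placeholder "array" disks
    for i in 1 2 3 4 5; do
        VBoxManage createmedium disk --filename earth-zfs-$i.vdi --size 1024
    done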

One thing that makes this setup work very well is the use of UUIDs, rather than device names, in both the grub command line and /etc/fstab. Instead of listing the (virtual disk) /dev/sda in /etc/fstab and having to know that, on boot, the thumbdrive is going to be reported as /dev/sdf, /etc/fstab simply has the UUID (or LABEL) of the drive/partition, and the mounter just figures it out.

    # /dev/sda2 /boot       ext2        defaults    0 0
    UUID=563893f3-c262-4032-84ac-be12fddff66b   /boot       ext2        defaults    0 0
    # /dev/sda3 /       ext4        noatime     0 0
    UUID=489dd7ad-a5e5-4727-8a9c-b11cca382038   /   ext4    noatime 0 0
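
(The UUIDs themselves come from the filesystems and are easiest to look up with blkid; the output line below is illustrative, reconstructed from the values above.)

    blkid /dev/sda2
    # /dev/sda2: UUID="563893f3-c262-4032-84ac-be12fddff66b" TYPE="ext2"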

Similarly, grub entries have root=UUID=[....], and devices are scanned to find the one that matches.

    echo    'Loading Linux 4.14.32-gentoo ...'
    linux   /vmlinuz-4.14.32-gentoo root=UUID=489dd7ad-a5e5-4727-8a9c-b11cca382038 ro init=/usr/lib/systemd/systemd
    echo    'Loading initial ramdisk ...'
    initrd  /initramfs-4.14.32-gentoo.img
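
For what it’s worth, grub-mkconfig emits root=UUID=… entries by default (as long as GRUB_DISABLE_LINUX_UUID isn’t set to true in /etc/default/grub), so regenerating the config is just the usual:

    # regenerate grub.cfg; UUID-based root= is the default behaviour
    grub-mkconfig -o /boot/grub/grub.cfg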

The drives (both virtual and real) are in a ZFS raid-z2 configuration, as /dev/sda-sde. A primary pool, zfs_data, is mounted at /data, with a child dataset for /home; this is a cleaner version of the previous btrfs (and, before that, mdadm/RAID10) configuration, where /data/home was bind-mounted as /home.
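
A minimal sketch of that pool/dataset layout (the names here just mirror the description above; options like ashift and compression are omitted):

    # create the raid-z2 pool, mounted at /data
    zpool create -m /data zfs_data raidz2 /dev/sda /dev/sdb /dev/sdc /dev/sdd /dev/sde
    # child dataset for home directories, mounted at /home
    zfs create -o mountpoint=/home zfs_data/home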

As packages were installed, anything that wants to touch /var/{lib,log} has been symlinked to /data/system/var/{...} on the RAID array.
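
The pattern, using a hypothetical samba directory as the example:

    # move the package's state onto the array and leave a symlink behind
    mv /var/lib/samba /data/system/var/lib/samba
    ln -s /data/system/var/lib/samba /var/lib/samba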

While I’d still like to have something approaching a read-only boot device, in practice I’ve found that a read/write boot and OS drive is great: I can tweak configurations and even install packages “in situ”, and then just make a record of the changes I need to apply to the virtual machine the next time I spin it up for a batch of changes.

I have a file on the NAS storage (/data/system/ChangeLog) which I use not only to record the details of tweaks to the virtual machine, but also to record changes I’ve made to the real machine that need to be reapplied to the virtual machine. I also use it for general TODOs and for notes/reminders about the process, like the sequence/details for installing a new kernel + zfs modules + dracut + grub, or a checklist of things to look for after package installs/upgrades (new entries in /var/{lib,log}, systemd unit installs, &c.).
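
The shape of that kernel sequence, sketched with the usual Gentoo package names (the real ChangeLog entry carries the actual details):

    # after installing and building the new kernel sources:
    emerge -av @module-rebuild            # rebuild the out-of-tree zfs modules against the new kernel
    dracut --kver 4.14.32-gentoo          # regenerate the initramfs for that kernel version
    grub-mkconfig -o /boot/grub/grub.cfg  # pick up the new kernel/initramfs in grub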

When I want to create an image of the boot device to copy onto a thumb drive, I do the following:

  1. VirtualBox clone the machine, selecting the “current state” option;

  2. Once cloned, remove the VirtualBox machine clone, electing to “retain files” (this prevents the next step from complaining about duplicate identifiers);

  3. Identify the 12GB disk’s .vdi file and convert it to a raw image, e.g. VBoxManage clonemedium disk earth-2018-04-15-disk2.vdi earth-2018-04-15.img --format=raw;

  4. pv earth-2018-04-15.img > /dev/sdg (or whatever the USB drive is; one could probably use /dev/disk/by-label/{thumb-drive-device-label} or something, too.)
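
For that last step, a quick way to double-check which device node the thumbdrive actually got (so the image doesn’t get written over the wrong disk):

    # list block devices with size/model/transport; the thumbdrive shows up with TRAN=usb
    lsblk -o NAME,SIZE,MODEL,TRAN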

nas build, part 0: introduction

Sometime in – h*ck I don’t know, 2006? – I last built my personal computer.

At the time, it was a combination Serious Desktop Workhorse and File Server.

In time, I’ve replaced the former with a ${day_job}-provided machine, relegated my personal computing to a dedicated VM (plus devices) isolated from those work concerns, and the machine I had built has become a very unbalanced NAS server. It had far too much compute power (and thus baseline energy consumption) for the menial task it was left with. As well, the size of the RAID array was great at the time, but in an era of 4K video and Time Machine backups, it’s limited.

So, I decided to replace it with a proper NAS device.

My goals were the following:

  • order-of-magnitude storage increase: the current box has 1.8TB; shooting for 18-24TB

  • a bit more redundancy: the current box is RAID10, and initially had a recurring issue with /dev/hdc that thankfully resolved after a few replacements. Shipping from Newegg to VT is ~2 days if a drive does fail, but I’d rather have two-drive-failure capability, especially at these drive sizes / rebuild times.

  • serious power reduction: the previous box (old i7) drew 140W idle, and I was shooting for ~40W;

  • simplicity in OS management: the machine had been running linux-4.4.1 forever, and grub-0.99, because it’s too fragile to change and I’m a scaredy-pants

As such, the plan became:

  • 5-6 drives of 6±2 TB each, in a RAID5, RAID6, or RAID-Z2 configuration

  • explicitly lower-power CPU, limited memory

  • (eventually-)read-only thumb-drive boot, OS managed via virtual machine


I want to take a moment as early as possible to recognize Brian Moses’ DIY NAS builds, which have been a significant guide for this build; by that I mean I stole his plans on the hardware side, with some minor tweaks.

I wound up with the following hardware/costs:

    component     description                                    cost
    case          SilverStone DS380B                              150
    psu           Corsair SF450                                    85
    motherboard   ASRock Z270M-ITX                                130
    cpu           Intel Core i5-7600T                             255
    memory        Ballistix DDR4 2666 (2×8GB)                     185
    drives (a)    ×3 WD Red 8TB NAS – WD80EFZX                    725
    drives (b)    ×3 Seagate IronWolf 8TB NAS – ST8000VN0022      725
    total                                                        2255

The SilverStone because of the hot-swappable drive bays.

The ASRock because of its six native SATA 6Gb/s ports.

The Core i5-7600T because of the balance of high benchmarking scores and modern features with a TDP of only 35W.


I decided to stick with Gentoo as the OS, because I love it and more importantly I’m comfortable with it.

In advance of building the new server (“earth”, to complement fire (the firewall), air (the wifi), and water (my personal VM)), I decided to at least upgrade the software side of my current server (phoenix) with a thumb-drive OS build.

So, this OS build is not only going to be the basis for the next server, but it is going to take over the current server’s OS.

I’ve leveraged UUID- and LABEL-based configuration in grub and /etc/fstab in order to have the OS image work in both the virtualized and real environments.

In particular:

In reality, I have four 1TB drives in a btrfs RAID10 configuration.

In the virtualized environment, I have four 1GB “drives” in the same configuration.

Both are mounted as “/data”, but in /etc/fstab, it’s:

    LABEL="DATA" /data btrfs defaults,noatime,compress=lzo 0 0
    /data/home /home none bind 0 0

So no matter which is booting, the same thing is mounted.
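
(A label like “DATA” can be set when the filesystem is created, or after the fact; the device paths below are illustrative:)

    # at mkfs time:
    mkfs.btrfs -L DATA -d raid10 -m raid10 /dev/sdb /dev/sdc /dev/sdd /dev/sde
    # or on an already-mounted filesystem:
    btrfs filesystem label /data DATA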

For the boot disks, it’s all:

    # /dev/sda2 /boot ext2 defaults 0 0
    UUID=563893f3-c262-4032-84ac-be12fddff66b /boot ext2 defaults 0 0
    # /dev/sda3 / ext4 noatime 0 0
    UUID=489dd7ad-a5e5-4727-8a9c-b11cca382038 / ext4 noatime 0 0

So that no matter where the image is booted (virtual, thumbdrive, whatever) the mounts work fine.


See part 1 for the next in the series.