FreeBSD: ZFS pool using GPT labels
From this point on, I will never again create a ZFS pool without using GPT labels. Why? Because it makes life easier down the road - especially during operations such as adding or replacing disks.
While this post focuses on FreeBSD, the same technique should work on any Unix system using GPT and ZFS.
Overview
The “common” way of creating a ZFS pool involves something along the lines of:
zpool create tank mirror ada0 ada1
This will create a new zpool named tank using devices /dev/ada0 and /dev/ada1 in a mirrored configuration. So far so good.
There are two main problems with this:
- The disk identifiers (e.g. ada0) can change at any time, especially when adding, removing or changing disks on the host (illustrated below).
- Identifying the physical disk is… tricky.
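To see how FreeBSD currently enumerates the disks, camcontrol can be used (purely illustrative; the output depends on your controller and drives):
camcontrol devlist
The adaN names in that listing are handed out in the order the devices attach, which is exactly why they make poor long-term identifiers.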
GPT (GUID Partition Table) allows labelling individual partitions. Once a label has been assigned to a partition, it shows up as /dev/gpt/<label>. Therefore, a particular partition can be accessed by its label instead of by device name, device driver or other means.
Using a GPT label also makes it possible to physically label the disk with the same name, which makes identifying the disk that needs replacement very easy. Without this 1-to-1 labelling, identifying the physical disk can be difficult, as one would need to know which physical port the disk is connected to.
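Once labels exist, they are easy to inspect. A quick way to list them (nothing here is specific to this setup):
ls /dev/gpt
glabel status
glabel status additionally shows which underlying partition each label maps to.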
Creating a ZFS pool using GPT labels
Creating a ZFS pool which uses GPT labels is a simple process. However, it requires manually creating the GPT, the ZFS partition and the corresponding label for every device that will be part of the ZFS pool.
From here on, we assume that a mirror pool is created using two 512GB SATA SSDs, which show up as /dev/ada0 and /dev/ada1.
For every device to be added to the pool, we first create a GPT:
gpart create -s GPT /dev/ada0
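If the disk already carries a partition table from a previous life, gpart create will refuse to overwrite it. In that case the old table can be wiped first (destructive - double-check you picked the right disk; this step is optional and not part of the original procedure):
gpart destroy -F /dev/ada0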
Next, we create the ZFS partition. At this point it is worth considering making the partition slightly smaller than the actual capacity available on the disk. In my specific case the SSDs report 477GB of usable space. Imagine having to replace a 512GB SATA SSD with a 512GB SATA SSD from a different manufacturer. It might happen that the replacement disk has a little less usable space (e.g. 475GB instead of 477GB). In that case, you'll quickly find yourself in a world of pain. Leaving a bit of empty space easily mitigates issues like this.
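To check how much space a disk actually reports before settling on a partition size, diskinfo is handy (a quick sanity check, not a required step):
diskinfo -v /dev/ada0
The mediasize lines show the raw capacity in bytes and in sectors.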
I’ll create partitions that are 470GB in size using 4k sectors:
gpart add -t freebsd-zfs -a 4k -s 470g ada0
Then, add a label to the newly created ZFS partition (in this example we choose the label disk0):
gpart modify -l disk0 -i 1 ada0
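To verify that the label took effect, gpart can print the partition table with labels instead of partition types:
gpart show -l ada0
The partition should now also be available as /dev/gpt/disk0.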
Repeat this procedure for the remaining disk(s) using a different label for each.
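For the second disk in this example, the whole sequence would look something like this (assuming it shows up as /dev/ada1 and we pick the label disk1):
gpart create -s GPT /dev/ada1
gpart add -t freebsd-zfs -a 4k -s 470g ada1
gpart modify -l disk1 -i 1 ada1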
After each disk has been prepared, the ZFS pool can be created. Here, we create a two-way mirror named ssd.
zpool create ssd mirror /dev/gpt/disk0 /dev/gpt/disk1
If everything succeeded and the pool has been created, zpool status should show the pool with the corresponding GPT labels:
pool: ssd
state: ONLINE
config:
NAME STATE READ WRITE CKSUM
ssd ONLINE 0 0 0
mirror-0 ONLINE 0 0 0
gpt/disk0 ONLINE 0 0 0
gpt/disk1 ONLINE 0 0 0
errors: No known data errors
And there you have it - a ZFS pool using GPT labels! The resulting ZFS pool is more resilient to disks changing their physical locations and to other disks being added or removed on the same host.
Replacing disks is as easy as using zpool status to figure out which disk needs replacement and looking for the same label on the physical disk (or drive bay).
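As a rough sketch of such a replacement (assuming the failed disk was gpt/disk0, the new drive shows up as /dev/ada2 and we give it the label disk2; adjust the names to your setup):
gpart create -s GPT /dev/ada2
gpart add -t freebsd-zfs -a 4k -s 470g ada2
gpart modify -l disk2 -i 1 ada2
zpool replace ssd gpt/disk0 gpt/disk2
Afterwards, zpool status should show the pool resilvering onto gpt/disk2.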