|Name||The name of the pool.|
|Size||The total size of the pool, equal to the sum of the sizes of all top-level virtual devices.|
|Alloc||The amount of physical space allocated to all datasets and internal metadata. Note that this amount differs from the amount of disk space as reported at the file system level.|
|Free||The amount of unallocated space in the pool.|
|Frag||The amount of fragmentation in the pool.|
|Capacity||The amount of disk space used, expressed as a percentage of the total disk space.|
|Health||The current health status of the pool.|
|Altroot||The alternate root of the pool, if one exists.|
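The fields above correspond to the columns printed by `zpool list` on the console, so you can verify the same information from a shell. This is only a sketch; "tank" is an example pool name:

```shell
# Show SIZE, ALLOC, FREE, FRAG, CAP, HEALTH and ALTROOT for one pool.
# Omit the pool name to list every pool on the system.
zpool list tank

# For health details beyond the one-word status, use:
zpool status tank
```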
To create a Zpool, enter a Name and select the Virtual Devices you would like to use.
“Root” and “Mount point” are optional.
|Root||Unless this option is specified, the name of the Zpool is the same as the one you entered under “Name”. On the screenshot the Zpool is called “tank”; here you can change it to a different name.|
|Mount point||Here you can change the mount point of your Zpool. The default is /mnt/<Name>, or /mnt/<Root> if you set “Root”.|
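On the command line, the mount point corresponds to the `-m` option of `zpool create`. A rough sketch of what the GUI does behind the scenes (the pool name "tank" and the device names are only examples):

```shell
# Create a pool named "tank" with an explicit mount point.
# Without -m, the pool would mount at /tank by default.
zpool create -m /mnt/tank tank raidz da0 da1 da2
```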
On this page you can create Zpools out of your Virtual Devices (these must have been configured already).
The screenshots below show a RAIDZ + Hot Spare. Because FreeBSD's ZFS implementation does not yet actually use Hot Spares, a failed drive will NOT be automatically replaced by the Hot Spare. If automatic replacement were supported, a Hot Spare assigned to multiple pools would be claimed by whichever pool needed it first. According to some qualified sources, this limitation is actually a blessing in disguise as far as data safety is concerned.
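Since the spare is not engaged automatically, a failed drive has to be swapped in by hand. A minimal sketch, assuming an existing pool "tank" and example device names da1 (failed) and da3 (spare):

```shell
# Assign a Hot Spare to an existing pool:
zpool add tank spare da3

# When a drive fails, replace it with the spare manually:
zpool replace tank da1 da3
```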
We also see here that in this case I forgot to set up a Virtual Device first!