I believe someone from the devs (or someone with experience with thin provisioning in ZFS) should shed some light on this feature...
As far as I understand: thin provisioning (sparse volumes) is the ability to "fake" the actual space of the filesystem.
Example: you have a 4x1TB raidZ1 --> the real usable space is <=3TB.
You use thin provisioning to trick the OS into believing that your filesystem is actually 10TB.
Once the occupied space gets close to the actual 3TB, you can add another disk to the array and you won't need to do anything else (no need to grow the filesystem).
Up to this point, this is what I understand (I may be wrong - plus I don't even use ZFS).
A step-by-step example by someone who knows how to use it, showing the actual implementation of adding an extra disk to a full raidz that uses thin provisioning, would be a great thing - both for users to understand how to use it and for translators who want to create an easy-to-understand translation (this is why I need it).
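For illustration, a sparse zvol can be created from the CLI roughly like this - the pool name tank and the sizes here are hypothetical, not taken from the thread:

```shell
# Hypothetical pool "tank" with about 3TB of usable space.
# A regular 10TB zvol would fail outright, because its full size is
# reserved at creation time. The -s flag makes it sparse: the 10TB
# volsize is advertised to clients, but pool space is only consumed
# as data is actually written.
zfs create -s -V 10T tank/thin

# Inspect what was really reserved versus what is advertised:
zfs get volsize,refreservation,used tank/thin
```

Whether this is a good idea depends on monitoring: if the pool really fills up while clients still believe free space remains, their writes will fail.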
This is the old XigmaNAS forum in read-only mode;
it will be taken offline by the end of March 2021!
I would like to ask users and admins to rewrite/take over important posts from here into the fresh new main forum!
It is not possible for us to export from here and import into the main forum!
Thin provisioning (zfs sparse volumes)
- ChriZathens
- Forum Moderator

- Posts: 758
- Joined: 23 Jun 2012 09:14
- Location: Athens, Greece
- Contact:
- Status: Offline
Thin provisioning (zfs sparse volumes)
My Nas
- Case: Fractal Design Define R2
- M/B: Supermicro x9scl-f
- CPU: Intel Celeron G1620
- RAM: 16GB DDR3 ECC (2 x Kingston KVR1333D3E9S/8G)
- PSU: Chieftec 850w 80+ modular
- Storage: 8x2TB HDDs in a RaidZ2 array ~ 10.1 TB usable disk space
- O/S: XigmaNAS 11.2.0.4.6625 -amd64 embedded
- Extra H/W: Dell Perc H310 SAS controller, crossflashed to LSI 9211-8i IT mode, 8GB Innodisk D150SV SATADOM for O/S
Backup Nas: U-NAS NSC-400, Gigabyte MB10-DS4 (4x4TB Seagate Exos disks in RaidZ configuration - 32GB RAM)
- raulfg3
- Site Admin

- Posts: 4865
- Joined: 22 Jun 2012 22:13
- Location: Madrid (ESPAÑA)
- Contact:
- Status: Offline
Re: Thin provisioning (zfs sparse volumes)
Sorry, I don't understand this as well as you do - I can only share links that I found on Google:
http://www.cuddletech.com/blog/pivot/entry.php?id=729
http://joyent.com/blog/thin-provisioning-in-zfs
12.1.0.4 - Ingva (revision 7743) on SUPERMICRO X8SIL-F 8GB of ECC RAM, 11x3TB disk in 1 vdev = Vpool = 32TB Raw size , so 29TB usable size (I Have other NAS as Backup)
Wiki
Last changes
HP T510
- Onichan
- Advanced User

- Posts: 238
- Joined: 04 Jul 2012 21:41
- Status: Offline
Re: Thin provisioning (zfs sparse volumes)
From what I understand, thin provisioning is just having the space used by a volume/LUN/whatever be the actual size of the stored data, even though the volume may be larger. From my personal testing and what I have read, ZFS uses thin provisioning by default and there actually isn't any way to force thick provisioning, but I didn't see anything about it in the WebGUI. What you are more specifically asking about is over-provisioning. http://serverfault.com/questions/319331 ... run-out-of talks about over-provisioning zvols, so it may be possible via the CLI.
Pools/datasets are easily growable, as are zvols, so why do you need over-provisioning when you can just grow everything as needed? Also, you can grow the disks in Windows while it is running once you grow the volumes, so there shouldn't be any issues with growing.
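The grow-as-needed approach described above would look roughly like this on the CLI (pool and zvol names hypothetical):

```shell
# After the pool has been expanded, enlarge the zvol to match.
# volsize can be increased online; shrinking is generally unsafe.
zfs set volsize=4T tank/myvol

# If the zvol backs an iSCSI extent, the initiator side then rescans
# the disk, and the partition/filesystem is extended on the client
# (e.g. Windows Disk Management -> Extend Volume).
```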
- daoyama
- Developer

- Posts: 394
- Joined: 25 Aug 2012 09:28
- Location: Japan
- Status: Offline
Re: Thin provisioning (zfs sparse volumes)
Onichan wrote: it may be possible via CLI
This is not true. You can change zvols from the WebGUI of 9.1.0.1.344.
Also, you can set a guaranteed size for a dataset from the WebGUI of 9.1.0.1.344.
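On the CLI, a guaranteed size for a dataset corresponds to a reservation; a sketch with a hypothetical dataset name:

```shell
# reservation guarantees space for the dataset and everything below
# it (children, snapshots); it is deducted from the pool's AVAIL
# immediately, even before any data is written:
zfs set reservation=50G tank/mydata

# refreservation guarantees space only for the data the dataset
# itself references, not for its snapshots:
zfs set refreservation=50G tank/mydata
```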
NAS4Free 10.2.0.2.2115 (x64-embedded), 10.2.0.2.2258 (arm), 10.2.0.2.2258(dom0)
GIGABYTE 5YASV-RH, Celeron E3400 (Dual 2.6GHz), ECC 8GB, Intel ET/CT/82566DM (on-board), ZFS mirror (2TBx2)
ASRock E350M1/USB3, 16GB, Realtek 8111E (on-board), ZFS mirror (2TBx2)
MSI MS-9666, Core i7-860(Quad 2.8GHz/HT), 32GB, Mellanox ConnectX-2 EN/Intel 82578DM (on-board), ZFS mirror (3TBx2+L2ARC/ZIL:SSD128GB)
Develop/test environment:
VirtualBox 512MB VM, ESXi 512MB-8GB VM, Raspberry Pi, Pi2, ODROID-C1
- ChriZathens
- Forum Moderator

- Posts: 758
- Joined: 23 Jun 2012 09:14
- Location: Athens, Greece
- Contact:
- Status: Offline
Re: Thin provisioning (zfs sparse volumes)
Well, the truth is that most people don't know about sparse volumes.
I believe that a use case demonstrating what we can accomplish with this feature would be much appreciated by all of us!
- Onichan
- Advanced User

- Posts: 238
- Joined: 04 Jul 2012 21:41
- Status: Offline
Re: Thin provisioning (zfs sparse volumes)
I am on 9.1.0.1.431 and tried to make a thin-provisioned ZFS volume. When I clicked Apply settings, it gave an error about not being able to find a dataset. So I created a dataset with the same name and then created the volume, and that seemed to work. The issue is that when I go to create an iSCSI extent, it doesn't show the thin-provisioned volume. I created a non-thin-provisioned volume and it did show up.
- daoyama
- Developer

- Posts: 394
- Joined: 25 Aug 2012 09:28
- Location: Japan
- Status: Offline
Re: Thin provisioning (zfs sparse volumes)
Onichan wrote: So I created a dataset with the same name and then created the volume and that seemed to work.
You can't create a dataset and a volume with the same name. This is a limitation of ZFS, not NAS4Free.
However, it seems the WebGUI does not report an error about this
(all children are created under the pool).
To use the correct dataset, you must synchronize the pool from Disks|ZFS|Configuration|Synchronize until the bug is fixed.
- sodalimon
- Starter

- Posts: 25
- Joined: 26 Feb 2013 07:17
- Status: Offline
Re: Thin provisioning (zfs sparse volumes)
Well, actually it's quite simple and expected, but somehow it stays out of sight.
(It took a while for me to figure it out too, since there is more than one parameter involved.)
When one creates a zvolume, it allocates space within the pool...
When one creates a zvolume with the -s parameter (sparse), it does not allocate space within the pool...
I see it like the reservation system of datasets.
Example:
nas: tank # zfs list tank
NAME USED AVAIL REFER MOUNTPOINT
tank 22.8G 271G 32K /mnt/tank
nas: tank # zfs create -V 150G -o compression=off -o dedup=off -o sync=standard tank/nosparse
nas: tank # zfs list tank
NAME USED AVAIL REFER MOUNTPOINT
tank 171G 122G 32K /mnt/tank
nas: tank # zfs create -V 150G -s -o compression=off -o dedup=off -o sync=standard tank/sparse
nas: tank # zfs list tank
NAME USED AVAIL REFER MOUNTPOINT
tank 171G 122G 32K /mnt/tank
When creating a non-sparse volume, space is reserved for the volume and deducted from the pool's usable capacity... So it is guaranteed...
Additionally, we can also increase the refreservation (I assume "ref" means reference) amount:
After increasing the (virtual) volume size of course...
nas: tank # zfs set volsize=200G tank/nosparse
nas: tank # zfs set refreservation=200G tank/nosparse
nas: tank # zfs list tank
NAME USED AVAIL REFER MOUNTPOINT
tank 216G 77.1G 32K /mnt/tank
And decrease it too...
(here we don't need to decrease the volume size, but we could do that as well)
nas: tank # zfs set refreservation=100G tank/nosparse
nas: tank # zfs list tank
NAME USED AVAIL REFER MOUNTPOINT
tank 116G 177G 32K /mnt/tank
Besides...
Consider, setting an iSCSI target for your volume.
You share a virtual disk on network with a predetermined space...
You increase the pool.
The pool has space but the zvol is still the same in size.
You increase it...
You have to increase it every time the pool is grown...
In other words, one has to deal with every single volume (child) of a pool upon adding devices...
Or...
You thin provision (sparse) on creation.
When the pool gets larger, well, the volume is already there
So, having the ability to expand a pool is slightly something else in my opinion (kind of irrelevant, but not altogether)...
On the other hand...
Pre-allocating space for a file system (before meeting ZFS) was something desirable for filesystems like NTFS (less fragmentation, probably slightly better performance, etc.).
I wonder how that goes with ZFS?
And when using NTFS over iSCSI over a zvolume over ZFS?
By the way, the GUI does not alert/show an error in the process...
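Since the GUI stays silent, one way to tell the two kinds of zvol apart afterwards is to compare their properties (volume names taken from the example above):

```shell
# A non-sparse zvol shows a refreservation close to its volsize,
# while a sparse one shows refreservation = none and a small USED:
zfs get volsize,refreservation,used tank/nosparse tank/sparse
```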