Please also read the information at the bottom of this wiki page.
On this page we see an overview of the datasets and can create or delete them:
Warnings, Hazards, and Best Practices When Using Dedup on ZFS Datasets
This is a FreeBSD & ZFS issue, not a XigmaNAS one. It is unknown at this time whether it is a bug, but please consider the information below carefully before deciding whether to use the dedup option on your ZFS datasets. Deduplication is a great feature if you know what you're doing and take the precautions below.
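Before anything else, it helps to know where dedup is actually enabled. The commands below are standard OpenZFS/FreeBSD commands; the pool name `tank` and dataset `tank/mydata` are hypothetical placeholders, and the `command -v` guard is only there so the sketch is harmless on a machine without ZFS:

```shell
# Sketch: check which datasets have dedup enabled (hypothetical names).
POOL="tank"

if command -v zfs >/dev/null 2>&1; then
    # Show the dedup property for the pool and every dataset under it.
    zfs get -r dedup "$POOL"
    # Note: turning dedup off only affects NEW writes;
    # blocks already written stay in the dedup table.
    zfs set dedup=off "$POOL/mydata"
fi
```

Keep in mind that `zfs set dedup=off` does not undo existing deduplication, so the deletion hazard described below still applies to data written while dedup was on.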
It has come to our attention that deleting a dataset that uses deduplication without first deleting all snapshots, clones, and files within it will cause the XigmaNAS system to become unresponsive and freeze. During the deletion of such a dataset, the system eventually consumes all available RAM and swap space; at that point it crashes and requires a hard reset. If this happens, you risk damaging the zpool and being unable to import it again (importing triggers the same memory-exhaustion condition).
Deleting a Dataset with Dedup Enabled
To prevent the above from happening you need to make sure you:

1. Delete all files within the dataset.
2. Destroy all snapshots of the dataset.
3. Destroy all clones of the dataset.

Once you make sure that those 3 steps are taken care of, you should be able to delete the dataset without any issue.
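The teardown described above can be sketched with standard OpenZFS commands. The pool name `tank`, dataset `tank/dedup_ds`, and mountpoint under `/mnt` are hypothetical placeholders; the `command -v` guard only keeps the sketch harmless on systems without ZFS. These commands are destructive, so double-check the names before running anything like this:

```shell
# Sketch: safely tear down a dedup-enabled dataset (hypothetical names).
POOL="tank"
DS="$POOL/dedup_ds"

if command -v zfs >/dev/null 2>&1; then
    # Step 1: remove all files inside the dataset's mountpoint first.
    rm -rf "/mnt/$DS/"*

    # Step 2: destroy every snapshot of the dataset.
    zfs list -H -t snapshot -o name -r "$DS" | xargs -n 1 zfs destroy

    # Step 3: clones must go too; find them by checking the "origin"
    # property, e.g.:  zfs list -o name,origin
    # zfs destroy <clone>

    # Only now destroy the (empty) dataset itself.
    zfs destroy "$DS"
fi
```

Working through files and snapshots first keeps each destroy operation small, instead of forcing one huge dedup-table walk that can exhaust RAM.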
It has also been noted that deduplication causes a major increase in memory usage. To use this feature with reasonable performance and less risk of compounded issues, you need a minimum of 16GB of RAM per 1TB of space (32GB per TB recommended, per sources).
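You can estimate how much RAM your dedup table (DDT) actually needs before things get tight. Both commands below are standard OpenZFS tools (`zpool status -D` prints a DDT summary; `zdb -DD` prints detailed histograms); `tank` is a hypothetical pool name, and the rough figure of a few hundred bytes of core memory per DDT entry is a commonly cited estimate, not an exact constant:

```shell
# Sketch: inspect the dedup table to estimate RAM requirements.
POOL="tank"

if command -v zpool >/dev/null 2>&1; then
    # Summary: dedup ratio plus counts/sizes of DDT entries.
    zpool status -D "$POOL"
    # Detailed DDT histograms; multiply the total entry count by
    # roughly 320 bytes per in-core entry to approximate RAM needed
    # to hold the whole table in memory.
    zdb -DD "$POOL"
fi
```

If the DDT does not fit in RAM, every dedup write (and every mass delete) has to page table entries in from disk, which is exactly the condition that leads to the freezes described above.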
Deduplication is not for the everyday home user; in fact, it is really not recommended for … ANYONE.
If you have read and understood everything above and you decide to use deduplication, do so at your own risk!
Be aware that if you use DEDUP and then come asking for support, WE CAN'T HELP YOU! So be smart! Home users: don't use DEDUP!