I recently encountered CryptoLocker on several Windows systems; it was unpleasant. I thought that by putting the data on an iSCSI drive backed by an N4F ZFS volume, I could use zfs rollback to recover quickly from future data losses. I tested this scenario with a Windows 10 system whose D: drive is the N4F iSCSI target: copy files to it, shut the system down, then roll back to an earlier snapshot to see what happens.
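A minimal sketch of the intended workflow, assuming the zvol miniPool/iSCSI01 backs the Windows D: drive (the snapshot name here is illustrative; sanoid normally creates snapshots on a schedule):

```shell
vol="miniPool/iSCSI01"                          # illustrative zvol name
snap="$vol@manual_$(date +%Y-%m-%d_%H:%M:%S)"   # illustrative snapshot name

# Take a point-in-time snapshot of the zvol (sanoid automates this):
zfs snapshot "$snap"

# After ransomware hits: shut down the Windows initiator, then discard
# everything written to the volume since the snapshot:
zfs rollback "$snap"
```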
Version 10.3.0.3 - Pilingitam (revision 2942)
nas4free: ~# zfs list
NAME USED AVAIL REFER MOUNTPOINT
miniPool 1.49T 3.65T 1.35T /mnt/miniPool
miniPool/iSCSI01 148G 3.75T 46.5G -
nas4free: ~#
nas4free: ~# zfs list -t snapshot
NAME USED AVAIL REFER MOUNTPOINT
miniPool/iSCSI01@autosnap_2016-09-13_23:09:34_daily 0 - 44.5G -
miniPool/iSCSI01@autosnap_2016-09-13_23:09:34_monthly 0 - 44.5G -
miniPool/iSCSI01@autosnap_2016-09-14_00:00:00_daily 0 - 44.5G -
miniPool/iSCSI01@autosnap_2016-09-14_01:00:00_hourly 0 - 44.5G -
miniPool/iSCSI01@autosnap_2016-09-14_02:00:00_hourly 0 - 44.5G -
miniPool/iSCSI01@autosnap_2016-09-14_03:00:00_hourly 0 - 44.5G -
miniPool/iSCSI01@autosnap_2016-09-14_04:00:00_hourly 0 - 44.5G -
miniPool/iSCSI01@autosnap_2016-09-14_05:00:00_hourly 0 - 44.5G -
miniPool/iSCSI01@autosnap_2016-09-14_06:00:00_hourly 0 - 44.5G -
miniPool/iSCSI01@autosnap_2016-09-14_07:00:01_hourly 0 - 44.5G -
miniPool/iSCSI01@autosnap_2016-09-14_08:00:00_hourly 0 - 44.5G -
miniPool/iSCSI01@autosnap_2016-09-14_09:00:00_hourly 8.23M - 46.4G -
miniPool/iSCSI01@autosnap_2016-09-14_10:00:00_hourly 0 - 46.5G -
miniPool/iSCSI01@autosnap_2016-09-14_11:00:00_hourly 0 - 46.5G -
miniPool/iSCSI01@autosnap_2016-09-14_12:00:00_hourly 0 - 46.5G -
miniPool/iSCSI01@autosnap_2016-09-14_13:00:00_hourly 0 - 46.5G -
nas4free: ~# zfs rollback -r miniPool/iSCSI01@autosnap_2016-09-14_08:00:00_hourly
cannot rollback 'miniPool/iSCSI01': dataset is busy
nas4free: ~# zfs list -t snapshot
NAME USED AVAIL REFER MOUNTPOINT
miniPool/iSCSI01@autosnap_2016-09-13_23:09:34_daily 0 - 44.5G -
miniPool/iSCSI01@autosnap_2016-09-13_23:09:34_monthly 0 - 44.5G -
miniPool/iSCSI01@autosnap_2016-09-14_00:00:00_daily 0 - 44.5G -
miniPool/iSCSI01@autosnap_2016-09-14_01:00:00_hourly 0 - 44.5G -
miniPool/iSCSI01@autosnap_2016-09-14_02:00:00_hourly 0 - 44.5G -
miniPool/iSCSI01@autosnap_2016-09-14_03:00:00_hourly 0 - 44.5G -
miniPool/iSCSI01@autosnap_2016-09-14_04:00:00_hourly 0 - 44.5G -
miniPool/iSCSI01@autosnap_2016-09-14_05:00:00_hourly 0 - 44.5G -
miniPool/iSCSI01@autosnap_2016-09-14_06:00:00_hourly 0 - 44.5G -
miniPool/iSCSI01@autosnap_2016-09-14_07:00:01_hourly 0 - 44.5G -
miniPool/iSCSI01@autosnap_2016-09-14_08:00:00_hourly 0 - 44.5G -
The rollback to miniPool/iSCSI01@autosnap_2016-09-14_08:00:00_hourly destroyed the more recent snapshots, as expected with -r, but then failed to complete the actual rollback.
This is problematic on a couple of levels. First, the more recent snapshots are gone; second, I can't figure out why the dataset is busy.
Any insight would be appreciated.
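One caveat worth spelling out: zfs rollback refuses to cross newer snapshots unless -r is given, and -r destroys every snapshot more recent than the target before attempting the rollback itself, which is why those snapshots are gone even though the rollback failed. A way to preview what -r would destroy (dataset and snapshot names taken from the listing above):

```shell
# List this zvol's snapshots oldest-first, then print everything after
# the intended rollback target -- i.e. the snapshots -r would destroy:
zfs list -H -r -t snapshot -o name -s creation miniPool/iSCSI01 |
  sed -n '/@autosnap_2016-09-14_08:00:00_hourly$/,$p' | tail -n +2
```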
This is the old XigmaNAS forum in read-only mode;
it will be taken offline by the end of March 2021!
I ask users and admins to rewrite/take over important posts from here into the fresh new main forum!
It is not possible for us to export from here and import into the main forum!
sanoid on N4F
laterdaze
Re: sanoid on N4F
Well, I now know that the dataset really is busy. There are only a couple of places in the FreeBSD code where the string "cannot rollback 'miniPool/iSCSI01': dataset is busy" could be generated. It seems one of the ZFS-related ioctl calls returns EBUSY, resulting in the message. Without using dtrace or instrumenting BSD with debugging printouts, I will never "deduce" what's making it busy. In the "Managing ZFS File Systems in Oracle® Solaris 11.2" documentation I found this:
A ZFS volume as an iSCSI target is managed just like any other ZFS dataset, except that you cannot rename the dataset, roll back a volume snapshot, or export the pool while the ZFS volumes are shared as iSCSI LUNs. You will see messages similar to the following:
# zfs rename tank/volumes/v2 tank/volumes/v1
cannot rename 'tank/volumes/v2': dataset is busy
# zpool export tank
cannot export 'tank': pool is busy
Sounds like my situation. I then disabled the iSCSI target at Services|iSCSI Target|Settings, which allowed the rollback to proceed and complete successfully. I re-enabled the iSCSI target, booted the test system, and indeed the D: drive had been rolled back.
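For anyone scripting this, the GUI steps have a likely command-line equivalent. This is a sketch under the assumption that the target is served by FreeBSD's ctld daemon; older N4F configurations use istgt instead, so the service name may differ:

```shell
# Stop the iSCSI target so nothing holds the zvol open:
service ctld stop

# With the LUN released, the rollback is no longer blocked (EBUSY):
zfs rollback -r miniPool/iSCSI01@autosnap_2016-09-14_08:00:00_hourly

# Re-share the (now rolled-back) volume to the initiator:
service ctld start
```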