sanoid on N4F
Posted: 14 Sep 2016 23:22
I recently encountered CryptoLocker on several Windows systems, and it was unpleasant. I thought that by keeping data on an iSCSI drive backed by an N4F ZFS volume, I could use zfs rollback to recover quickly from future data losses. I was testing such a scenario with a Win10 system where D: is the N4F iSCSI drive: copy files to it, shut the system down, then roll back to an earlier snapshot to see what happens.
Version 10.3.0.3 - Pilingitam (revision 2942)
nas4free: ~# zfs list
NAME USED AVAIL REFER MOUNTPOINT
miniPool 1.49T 3.65T 1.35T /mnt/miniPool
miniPool/iSCSI01 148G 3.75T 46.5G -
nas4free: ~#
nas4free: ~# zfs list -t snapshot
NAME USED AVAIL REFER MOUNTPOINT
miniPool/iSCSI01@autosnap_2016-09-13_23:09:34_daily 0 - 44.5G -
miniPool/iSCSI01@autosnap_2016-09-13_23:09:34_monthly 0 - 44.5G -
miniPool/iSCSI01@autosnap_2016-09-14_00:00:00_daily 0 - 44.5G -
miniPool/iSCSI01@autosnap_2016-09-14_01:00:00_hourly 0 - 44.5G -
miniPool/iSCSI01@autosnap_2016-09-14_02:00:00_hourly 0 - 44.5G -
miniPool/iSCSI01@autosnap_2016-09-14_03:00:00_hourly 0 - 44.5G -
miniPool/iSCSI01@autosnap_2016-09-14_04:00:00_hourly 0 - 44.5G -
miniPool/iSCSI01@autosnap_2016-09-14_05:00:00_hourly 0 - 44.5G -
miniPool/iSCSI01@autosnap_2016-09-14_06:00:00_hourly 0 - 44.5G -
miniPool/iSCSI01@autosnap_2016-09-14_07:00:01_hourly 0 - 44.5G -
miniPool/iSCSI01@autosnap_2016-09-14_08:00:00_hourly 0 - 44.5G -
miniPool/iSCSI01@autosnap_2016-09-14_09:00:00_hourly 8.23M - 46.4G -
miniPool/iSCSI01@autosnap_2016-09-14_10:00:00_hourly 0 - 46.5G -
miniPool/iSCSI01@autosnap_2016-09-14_11:00:00_hourly 0 - 46.5G -
miniPool/iSCSI01@autosnap_2016-09-14_12:00:00_hourly 0 - 46.5G -
miniPool/iSCSI01@autosnap_2016-09-14_13:00:00_hourly 0 - 46.5G -
nas4free: ~# zfs rollback -r miniPool/iSCSI01@autosnap_2016-09-14_08:00:00_hourly
cannot rollback 'miniPool/iSCSI01': dataset is busy
nas4free: ~# zfs list -t snapshot
NAME USED AVAIL REFER MOUNTPOINT
miniPool/iSCSI01@autosnap_2016-09-13_23:09:34_daily 0 - 44.5G -
miniPool/iSCSI01@autosnap_2016-09-13_23:09:34_monthly 0 - 44.5G -
miniPool/iSCSI01@autosnap_2016-09-14_00:00:00_daily 0 - 44.5G -
miniPool/iSCSI01@autosnap_2016-09-14_01:00:00_hourly 0 - 44.5G -
miniPool/iSCSI01@autosnap_2016-09-14_02:00:00_hourly 0 - 44.5G -
miniPool/iSCSI01@autosnap_2016-09-14_03:00:00_hourly 0 - 44.5G -
miniPool/iSCSI01@autosnap_2016-09-14_04:00:00_hourly 0 - 44.5G -
miniPool/iSCSI01@autosnap_2016-09-14_05:00:00_hourly 0 - 44.5G -
miniPool/iSCSI01@autosnap_2016-09-14_06:00:00_hourly 0 - 44.5G -
miniPool/iSCSI01@autosnap_2016-09-14_07:00:01_hourly 0 - 44.5G -
miniPool/iSCSI01@autosnap_2016-09-14_08:00:00_hourly 0 - 44.5G -
The rollback to miniPool/iSCSI01@autosnap_2016-09-14_08:00:00_hourly destroyed the more recent snapshots, as expected with -r, but then couldn't complete the actual rollback.
This is problematic on a couple of levels: first, the more recent snapshots are gone; second, I can't figure out why the dataset is busy.
Any insight would be appreciated.
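For anyone trying to narrow this down, here is a sketch of the checks I'd expect to be relevant (assuming the iSCSI target is what is holding the zvol open — I haven't confirmed that on this N4F build; the commands themselves are standard ZFS/FreeBSD):

```shell
# Check for user-created holds on the rollback target (sanoid doesn't set any,
# so this should come back empty)
zfs holds miniPool/iSCSI01@autosnap_2016-09-14_08:00:00_hourly

# See whether any userland process has the zvol device node open
# (an in-kernel iSCSI target won't show up here, which is why the GUI step below)
fstat /dev/zvol/miniPool/iSCSI01

# If the iSCSI extent is the culprit, detach the extent/target in the N4F web GUI
# (or stop the iSCSI service) first, then retry:
zfs rollback -r miniPool/iSCSI01@autosnap_2016-09-14_08:00:00_hourly
```

If that works, the "dataset is busy" error would just mean a zvol exported over iSCSI can't be rolled back while the target has it open.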