Until a file is changed or deleted, a snapshot takes minimal space, but as more files are changed or deleted, the snapshot grows. This feature often catches novice ZFS users unawares. In a home setting, one may have a regular backup process but also want an unscheduled backup to an external drive to take off site occasionally. The snapshot associated with the offsite backup will keep growing and must be retained if the offsite copy is to be refreshed incrementally - without it, the only option is a fresh full send each time.
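The space a snapshot pins can be watched growing over time; for example (pool and dataset names here are just placeholders):

```shell
# list snapshots under a dataset, with the space each one pins
# and the amount of data it references (pool1/source is a placeholder)
zfs list -r -t snapshot -o name,used,refer pool1/source
```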
ZFS bookmarks provide a way to avoid this problem. An article here:
https://utcc.utoronto.ca/~cks/space/blo ... ksWhatFor
describes how bookmarks work. Essentially, a bookmark records a transaction group: a 64-bit integer that ZFS increments whenever it writes. Naming a bookmark in the send command tells ZFS to ignore all blocks born in earlier transaction groups when building the stream. Bookmarks contain no data and so cannot grow behind your back. Because they contain no data, they cannot be used to create clones, but they can serve as the origin of an incremental "send" stream; the data that would have been in the snapshot already exists on the target device, so it is not needed at the source.
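The transaction group a bookmark records can be read back through the createtxg property; a quick illustration, again with placeholder names:

```shell
# create a bookmark from an existing snapshot; '#' denotes a bookmark
zfs bookmark pool1/source@old pool1/source#old
# list bookmarks and show the transaction group each one records
zfs list -r -t bookmark -o name,createtxg pool1/source
```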
Web searches have turned up no examples of how to use bookmarks. This note sets out the results of some of my experiments in the hope that the information may be useful to others.
The classic way to use zfs snapshots to send data to an external drive appears to be:
Code: Select all
# first time
zfs snapshot -r pool1/source@old
zfs send -R pool1/source@old | zfs receive -Fv pool2/target
# all subsequent times
zfs snapshot -r pool1/source@new
zfs send -R -i pool1/source@old pool1/source@new | zfs receive -Fv pool2/target
zfs destroy -r pool1/source@old
zfs rename -r pool1/source@new pool1/source@old
zfs destroy -r pool2/target@old
zfs rename -r pool2/target@new pool2/target@old
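After each cycle it is worth confirming that only the @old snapshot remains on each side; something like:

```shell
# both sides should now show only @old snapshots
zfs list -r -t snapshot -o name pool1/source pool2/target
```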
To back up a single filesystem, the following steps work for me:
Code: Select all
# first time
zfs snapshot pool1/source@old
zfs send pool1/source@old | zfs receive -Fv pool2/target
# don't get confused: as this is the first time through
# the new snapshot is called "old"
zfs bookmark pool1/source@old pool1/source#old # the hash denotes a bookmark
zfs destroy pool1/source@old
# all subsequent times
zfs snapshot pool1/source@new
zfs send -i pool1/source#old pool1/source@new | zfs receive -Fv pool2/target
zfs destroy pool1/source#old
zfs bookmark pool1/source@new pool1/source#old
zfs destroy pool1/source@new
zfs destroy pool2/target@old
zfs rename pool2/target@new pool2/target@old
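The "subsequent times" steps above can be wrapped in a small script. This is only a sketch with hard-coded placeholder dataset names and no locking; run as root, and it stops at the first failure:

```shell
#!/bin/sh
# Sketch: incremental backup of one filesystem using a bookmark as origin.
# SRC and DST are placeholders for the real dataset names.
set -e
SRC=pool1/source
DST=pool2/target

zfs snapshot "$SRC@new"
zfs send -i "$SRC#old" "$SRC@new" | zfs receive -Fv "$DST"
# roll the bookmark forward and drop the snapshots we no longer need
zfs destroy "$SRC#old"
zfs bookmark "$SRC@new" "$SRC#old"
zfs destroy "$SRC@new"
zfs destroy "$DST@old"
zfs rename "$DST@new" "$DST@old"
```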
Experiments show that mounting does not behave as might be expected. Running ls -lR on the target filesystem shows the directories of the descendant filesystems as empty, while zfs list shows the filesystems to have the expected contents and to be "mounted". In the output below, zfs mount fails, claiming that the filesystem is already mounted; only when the filesystem is unmounted and remounted do its contents become visible.
Code: Select all
nas4free: sshuser# ls -alR /mnt/pool2/target
total 33828
drwxr-xr-x 4 root wheel 7 Mar 16 15:19 .
drwxrwxrwx 3 root wheel 3 Mar 16 15:18 ..
drwxr-xr-x 2 root wheel 2 Mar 16 15:18 even
-rw-r--r-- 1 root wheel 10485760 Mar 16 15:18 file0
-rw-r--r-- 1 root wheel 11534336 Mar 16 15:18 file00
-rw-r--r-- 1 root wheel 12582912 Mar 16 15:19 file000
drwxr-xr-x 2 root wheel 2 Mar 16 15:18 odd
/mnt/pool2/target/even:
total 1
drwxr-xr-x 2 root wheel 2 Mar 16 15:18 .
drwxr-xr-x 4 root wheel 7 Mar 16 15:19 ..
/mnt/pool2/target/odd:
total 1
drwxr-xr-x 2 root wheel 2 Mar 16 15:18 .
drwxr-xr-x 4 root wheel 7 Mar 16 15:19 ..
nas4free: sshuser# zfs mount pool2/target/odd
cannot mount 'pool2/target/odd': filesystem already mounted
nas4free: sshuser# zfs unmount pool2/target/odd
nas4free: sshuser# zfs mount pool2/target/odd
nas4free: sshuser# ls -alR /mnt/pool2/target
total 33828
drwxr-xr-x 4 root wheel 7 Mar 16 15:19 .
drwxrwxrwx 3 root wheel 3 Mar 16 15:18 ..
drwxr-xr-x 2 root wheel 2 Mar 16 15:18 even
-rw-r--r-- 1 root wheel 10485760 Mar 16 15:18 file0
-rw-r--r-- 1 root wheel 11534336 Mar 16 15:18 file00
-rw-r--r-- 1 root wheel 12582912 Mar 16 15:19 file000
drwxr-xr-x 4 root wheel 5 Mar 16 15:19 odd
/mnt/pool2/target/even:
total 1
drwxr-xr-x 2 root wheel 2 Mar 16 15:18 .
drwxr-xr-x 4 root wheel 7 Mar 16 15:19 ..
/mnt/pool2/target/odd:
total 33819
drwxr-xr-x 4 root wheel 5 Mar 16 15:19 .
drwxr-xr-x 4 root wheel 7 Mar 16 15:19 ..
-rw-r--r-- 1 root wheel 34603008 Mar 16 15:19 file33
drwxr-xr-x 2 root wheel 2 Mar 16 15:18 notodd
drwxr-xr-x 2 root wheel 2 Mar 16 15:18 odd
/mnt/pool2/target/odd/notodd:
total 1
drwxr-xr-x 2 root wheel 2 Mar 16 15:18 .
drwxr-xr-x 4 root wheel 5 Mar 16 15:19 ..
/mnt/pool2/target/odd/odd:
total 1
drwxr-xr-x 2 root wheel 2 Mar 16 15:18 .
drwxr-xr-x 4 root wheel 5 Mar 16 15:19 ..
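One workaround for this is to unmount all the received descendants (deepest first) and remount them in one go. A rough sketch, assuming default mountpoints under /mnt:

```shell
# unmount descendants deepest-first (reverse name order lists children
# before their parents); ignore filesystems that are not mounted
for fs in $(zfs list -rH -o name -S name pool2/target); do
    zfs unmount "$fs" 2>/dev/null || true
done
# remount every filesystem that has a mountpoint
zfs mount -a
```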
