Bookmarks and external backup

CheshireCat
NewUser
Posts: 11
Joined: 02 Mar 2013 16:12
Location: Cheshire, UK
Status: Offline


#1

Post by CheshireCat »

ZFS snapshots preserve the data blocks present at the moment the snapshot is taken, and so provide a mechanism for copying a filesystem frozen at that point. Incremental send makes updating the copy efficient, as only the altered blocks are transferred.

Until a file is changed or deleted, a snapshot takes minimal space, but as more files are changed or deleted, the snapshot grows. This feature often catches novice ZFS users unawares. In a domestic environment, one may have a regular backup process but occasionally need an unscheduled backup to an external drive to take off site. The snapshot associated with the off-site backup will continue growing and must be retained if the off-site copy is to be refreshed incrementally - if it is not retained, a full copy must be sent each time.
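This growth can be watched directly with zfs list. A quick check (pool and snapshot names here are illustrative):

```shell
# Show how much space each snapshot pins down.
# The USED column grows as files under pool1/source are
# overwritten or deleted after the snapshot was taken.
zfs list -t snapshot -o name,used,creation -r pool1/source
```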

ZFS bookmarks provide a way to avoid this problem. An article here:
https://utcc.utoronto.ca/~cks/space/blo ... ksWhatFor
describes how bookmarks work. Essentially, a bookmark records a transaction group number, a 64-bit integer that ZFS increments whenever it commits changes. Using a bookmark in the send command tells ZFS to ignore all blocks born before that number when building the stream to send. Bookmarks contain no data and cannot grow behind your back. Because they contain no data, they cannot be used to create clones, but they can be used to generate a "send" stream: the information that would have been in the snapshot already exists on the target device, so it is not needed.
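The basic mechanics look like this (dataset names are illustrative). A bookmark is created from an existing snapshot; the hash denotes a bookmark, the at-sign a snapshot:

```shell
# Create a bookmark from a snapshot; '#' marks bookmarks, '@' snapshots
zfs snapshot pool1/source@mark
zfs bookmark pool1/source@mark pool1/source#mark

# Bookmarks consume no space on the source
zfs list -t bookmark -r pool1/source

# The snapshot can now be destroyed; the bookmark remains usable
# as the FROM side of a later incremental send:
#   zfs send -i pool1/source#mark pool1/source@later
zfs destroy pool1/source@mark
```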

Web searches have turned up no examples of how to use bookmarks. This note sets out the results of some of my experiments in the hope that the information may be useful to others.

The classic way to use zfs snapshots to send data to an external drive appears to be:

Code: Select all

# first time
zfs snapshot -r  pool1/source@old
zfs send -R  pool1/source@old | zfs receive -Fv  pool2/target

# all subsequent times
zfs snapshot -r  pool1/source@new
# -R on the send and -r on the renames are needed so that
# descendant filesystems are carried along too
zfs send -R -i pool1/source@old  pool1/source@new | zfs receive -Fv  pool2/target
zfs destroy -r pool1/source@old
zfs rename -r pool1/source@new  pool1/source@old
zfs destroy -r pool2/target@old
zfs rename -r pool2/target@new  pool2/target@old
This process cannot be replicated exactly with bookmarks, as zfs bookmark has no recursive option.

To back up a single filesystem, the following steps work for me:

Code: Select all

# first time
zfs snapshot pool1/source@old
zfs send pool1/source@old | zfs receive -Fv pool2/target
# don't get confused: as this is the first time through,
# the new snapshot is called "old"
zfs bookmark pool1/source@old pool1/source#old  # the hash denotes a bookmark
zfs destroy pool1/source@old                    # the bookmark replaces the snapshot

# all subsequent times
zfs snapshot pool1/source@new
zfs send -i pool1/source#old pool1/source@new | zfs receive -Fv pool2/target
zfs destroy pool1/source#old                    # retire the old bookmark...
zfs bookmark pool1/source@new pool1/source#old  # ...and replace it
zfs destroy pool1/source@new
zfs destroy pool2/target@old                    # the target keeps real snapshots
zfs rename pool2/target@new pool2/target@old
Fortunately it is possible to implement a recursive solution by using zfs list to walk the filesystem tree and generate the necessary information. The operation is not atomic, so it might not be suitable for data-centre use, but in a domestic setting it does the job well enough. Code to recurse through a filesystem is appended.
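A minimal sketch of such a recursive wrapper, applying the single-filesystem steps above to each dataset returned by zfs list (the SRC/DST names and the source-to-target mapping are assumptions; error handling is omitted):

```shell
#!/bin/sh
# Sketch: bookmark-based backup applied to each descendant filesystem.
SRC=pool1/source
DST=pool2/target

zfs list -r -H -o name -t filesystem "$SRC" | while read -r fs; do
    # Map pool1/source/a/b -> pool2/target/a/b
    tgt="$DST${fs#$SRC}"
    zfs snapshot "$fs@new"
    if zfs list -t bookmark "$fs#old" >/dev/null 2>&1; then
        # incremental send from the existing bookmark
        zfs send -i "$fs#old" "$fs@new" | zfs receive -Fv "$tgt"
        zfs destroy "$fs#old"
        zfs destroy "$tgt@old" 2>/dev/null
    else
        # first pass for this filesystem: full send
        zfs send "$fs@new" | zfs receive -Fv "$tgt"
    fi
    zfs bookmark "$fs@new" "$fs#old"
    zfs destroy "$fs@new"
    zfs rename "$tgt@new" "$tgt@old"
done
```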

Experiments show that the received descendant filesystems do not behave as the "mounted" property suggests. Running ls -lR on the target filesystem shows the directories of the descendant filesystems as empty, while zfs list shows those filesystems to have the expected contents and to be "mounted". In the output below, zfs mount fails, claiming that the filesystem is already mounted; once the filesystem is unmounted and remounted, its contents become visible.

Code: Select all

nas4free: sshuser# ls -alR /mnt/pool2/target
total 33828
drwxr-xr-x  4 root  wheel         7 Mar 16 15:19 .
drwxrwxrwx  3 root  wheel         3 Mar 16 15:18 ..
drwxr-xr-x  2 root  wheel         2 Mar 16 15:18 even
-rw-r--r--  1 root  wheel  10485760 Mar 16 15:18 file0
-rw-r--r--  1 root  wheel  11534336 Mar 16 15:18 file00
-rw-r--r--  1 root  wheel  12582912 Mar 16 15:19 file000
drwxr-xr-x  2 root  wheel         2 Mar 16 15:18 odd

/mnt/pool2/target/even:
total 1
drwxr-xr-x  2 root  wheel  2 Mar 16 15:18 .
drwxr-xr-x  4 root  wheel  7 Mar 16 15:19 ..

/mnt/pool2/target/odd:
total 1
drwxr-xr-x  2 root  wheel  2 Mar 16 15:18 .
drwxr-xr-x  4 root  wheel  7 Mar 16 15:19 ..
nas4free: sshuser# zfs mount pool2/target/odd
cannot mount 'pool2/target/odd': filesystem already mounted
nas4free: sshuser# zfs unmount pool2/target/odd
nas4free: sshuser# zfs mount pool2/target/odd
nas4free: sshuser# ls -alR /mnt/pool2/target
total 33828
drwxr-xr-x  4 root  wheel         7 Mar 16 15:19 .
drwxrwxrwx  3 root  wheel         3 Mar 16 15:18 ..
drwxr-xr-x  2 root  wheel         2 Mar 16 15:18 even
-rw-r--r--  1 root  wheel  10485760 Mar 16 15:18 file0
-rw-r--r--  1 root  wheel  11534336 Mar 16 15:18 file00
-rw-r--r--  1 root  wheel  12582912 Mar 16 15:19 file000
drwxr-xr-x  4 root  wheel         5 Mar 16 15:19 odd

/mnt/pool2/target/even:
total 1
drwxr-xr-x  2 root  wheel  2 Mar 16 15:18 .
drwxr-xr-x  4 root  wheel  7 Mar 16 15:19 ..

/mnt/pool2/target/odd:
total 33819
drwxr-xr-x  4 root  wheel         5 Mar 16 15:19 .
drwxr-xr-x  4 root  wheel         7 Mar 16 15:19 ..
-rw-r--r--  1 root  wheel  34603008 Mar 16 15:19 file33
drwxr-xr-x  2 root  wheel         2 Mar 16 15:18 notodd
drwxr-xr-x  2 root  wheel         2 Mar 16 15:18 odd

/mnt/pool2/target/odd/notodd:
total 1
drwxr-xr-x  2 root  wheel  2 Mar 16 15:18 .
drwxr-xr-x  4 root  wheel  5 Mar 16 15:19 ..

/mnt/pool2/target/odd/odd:
total 1
drwxr-xr-x  2 root  wheel  2 Mar 16 15:18 .
drwxr-xr-x  4 root  wheel  5 Mar 16 15:19 ..
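The same unmount/remount workaround can be applied to every descendant of the target in one go (the dataset name is illustrative):

```shell
# Remount each descendant of the target so its contents become visible.
# 'tail -n +2' skips the top-level dataset itself.
zfs list -r -H -o name -t filesystem pool2/target | tail -n +2 | while read -r fs; do
    zfs unmount "$fs" 2>/dev/null
    zfs mount "$fs"
done
```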
I hope this helps someone.
NAS4Free 11.2.0.4.6195
HP Microserver N40L 1.5GHz
8GB ECC RAM
2*4TB WD Red as ZFS mirror; 2*6TB WD Red as ZFS mirror
Intel NIC
