This is the old XigmaNAS forum in read-only mode;
it will be taken offline by the end of March 2021!
We would like to ask users and admins to rewrite/move important posts from here to the new main forum!
It is not possible for us to export from here and import into the main forum!
Helpful scripts: Backup,Snapshot,Standby,Scrub,CheckPools...
-
reggaeman
- NewUser
- Posts: 10
- Joined: 26 Oct 2014 11:33
- Status: Offline
Re: Helpful scripts: Backup,Snapshot,Standby,Scrub,CheckPool
Thank you for the reply.
About the snapshot format:
I already checked that (viewtopic.php?f=70&t=2197#p11201), but even if I set the parameter in Services|CIFS/SMB|Share|Edit to %Y_%m_%d_%H%M_autosnap_type%S, the snapshot folder is always named like, for example, 20141104_2058_autosnap_type01.
It seems the snapshot name comes from the script itself, not from the Services|CIFS/SMB|Share|Edit - Shadow Copy format setting.
-
xuesheng
- Starter
- Posts: 57
- Joined: 23 Jun 2012 10:56
- Status: Offline
Re: Helpful scripts: Backup,Snapshot,Standby,Scrub,CheckPool
Yes. The message I quoted explains how to change the Samba settings to make them work with the snapshot names created by Fritz's script. I think earlier versions of the script package used a different snapshot name format which was not very compatible with Samba's Shadow Copy support.
You can edit the scripts to change the format used for the snapshot names, but you will need to check the scripts carefully to see whether any other changes are required to make them work with the new snapshot name format.
You'll probably need to update the "Shadow Copy format" settings for the CIFS/SMB service too (though I think there are some problems with Shadow Copy support in the version of Samba currently used by NAS4Free).
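For reference, the relevant Samba Shadow Copy settings usually look something like this sketch (the share name, path, and exact shadow:format string are assumptions here; the format must match the snapshot names the script actually creates):

```
[data]
    path = /mnt/tank/data
    vfs objects = shadow_copy2
    shadow:snapdir = .zfs/snapshot
    shadow:format = %Y%m%d_%H%M_autosnap_type01
```

With shadow_copy2, shadow:format is a strftime-style pattern, so literal text such as a fixed suffix can be mixed with the date tokens.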
-
reggaeman
- NewUser
- Posts: 10
- Joined: 26 Oct 2014 11:33
- Status: Offline
Re: Helpful scripts: Backup,Snapshot,Standby,Scrub,CheckPool
I made some changes but did not find anything.
If anyone can help me with this, I would appreciate it.
Thanks!
-
Onichan
- Advanced User
- Posts: 238
- Joined: 04 Jul 2012 21:41
- Status: Offline
Re: Helpful scripts: Backup,Snapshot,Standby,Scrub,CheckPool
reggaeman wrote: I made some changes but did not find anything. If anyone can help me with this, I would appreciate it.
If you just want to change the date format, you should only need to change the DATE_FORMAT_SNAP_NAME variable in common/commonSnapFcts.sh (line 11). I haven't tested it, but that looks like it should be right. I don't know whether the script will still recognize your old-format snapshots after the change, so it might not auto-delete them.
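To illustrate the idea (a sketch only: the variable name is taken from the post above, but the suffix and composition are assumptions, not the script's actual code), a strftime-style format expands into a snapshot name like this:

```shell
#!/bin/sh
# Sketch: expand a date format variable into a snapshot name.
# The "_hourly" suffix is an illustrative assumption; check
# commonSnapFcts.sh for how the real name is composed.
DATE_FORMAT_SNAP_NAME="%Y%m%d_%H%M"
SNAP_NAME="$(date +"$DATE_FORMAT_SNAP_NAME")_hourly"
echo "$SNAP_NAME"
```

Any tool that later parses snapshot names (auto-deletion, Samba Shadow Copy) has to agree on the same pattern, which is why other parts of the scripts may need matching changes.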
-
reggaeman
- NewUser
- Posts: 10
- Joined: 26 Oct 2014 11:33
- Status: Offline
Re: Helpful scripts: Backup,Snapshot,Standby,Scrub,CheckPool
I made the changes in common/commonSnapFcts.sh; I checked it and it looks OK.
I also did a rollback with two different snapshots (the old name format and the new one) and both worked.
Thanks again for the help!
-
karlandtanya
- Starter
- Posts: 49
- Joined: 23 Jan 2014 15:31
- Location: nelson twp, OH, USA
- Status: Offline
Re: Helpful scripts: Backup,Snapshot,Standby,Scrub,CheckPools...
Question regarding the backup script:
I'm using scripts_NAS4Free-2.0-beta1, and see this usage in backupData.sh:
Code: Select all
Usage: backupData.sh [-s user@host[,path2privatekey]] [-b maxRollbck] [-c compression[,...]] fsSource[,...] fsDest
...
-s user@host[,path2privatekey]: Specify a remote host on which the destination filesystem is located
...
I have two nearly identical servers, currently backing up from one to the other with rsync. /root/.ssh is already set up so that this happens automatically using public and private keys.
Each server has a filesystem at /mnt/data.
The primary:
Code: Select all
gizmonas: ~ # zfs list
NAME USED AVAIL REFER MOUNTPOINT
data 5.29T 1.63T 5.23T /mnt/data
And the backup:
Code: Select all
gizmobutt: ~ # zfs list
NAME USED AVAIL REFER MOUNTPOINT
data 5.23T 1.68T 5.23T /mnt/data
I would like to use ZFS replication instead, and have already added the cron entry on the primary for snapshots (it works beautifully as far as I can tell, and thanks!). I expect that after the first replication the backup will get its snapshots as it receives them from the primary. Perhaps there is a benefit to taking snapshots on the backup as well, but I'm not sure why I would do that. One step at a time; right now the goal is replication.
From the help included in the script, it looks like I would run something like this from the primary (gizmonas):
Code: Select all
/path/to/backupData.sh -s root@gizmobutt,/root/.ssh/id_rsa data data
where fsSource and fsDest both appear as "data", and -s root@gizmobutt,/root/.ssh/id_rsa tells the script that the destination is actually over there on gizmobutt.
Of course I expect it to take a while the first time: it has been rsync all the way up to this point, so from a ZFS point of view there is no previous snapshot to start from on the destination side.
I am a little nervous, as I can see a misunderstanding on my part trashing both filesystems.
Is my command line above the correct way to replicate from the primary to the backup server in this scenario?
Thanks to anyone who responds for your kind advice, and thanks especially to all the folks who develop, support, and distribute this for free!
-
xuesheng
- Starter
- Posts: 57
- Joined: 23 Jun 2012 10:56
- Status: Offline
Re: Helpful scripts: Backup,Snapshot,Standby,Scrub,CheckPools...
karlandtanya wrote: From the help included in the script, it looks like I would run something like this from the primary (gizmonas):
/path/to/backupData.sh -s root@gizmobutt,/root/.ssh/id_rsa data data
I don't think this will do what you want: the script allows more than one source filesystem to be specified, but only one destination. This means that all of the source filesystems will be backed up below the specified destination.
This behaviour is documented in the backupData.sh script:
Code: Select all
# Example:
# "backupData.sh tank/nas_scripts tank_backup" will create a backup of the ZFS fs
# "tank/nas_scripts" (and of all its sub-filesystems) in the ZFS fs "tank_backup".
# I.e. After the backup (at least) an fs "tank_backup/tank/nas_scripts" will exist.
See also viewtopic.php?f=70&t=2197&start=50#p16722
- MoloMuy
- Forum Moderator
- Posts: 175
- Joined: 17 Feb 2013 14:40
- Location: Palma de Mallorca - España
- Status: Offline
Re: Helpful scripts: Backup,Snapshot,Standby,Scrub,CheckPools...
Hi, I am trying to put the NAS into the S3 suspend state with the manageAcpi.sh script.
I run it from cron with this command:
/mnt/RAIDz1/.scripts/manageAcpi.sh -a 19:30,23:00 -n 192.168.1.19+192.168.1.20,600,3
It executes properly, but the system log gives me this:
Date and Time User Event
Jan 12 13:23:40 nas kernel: acpi0: device_suspend failed
Jan 12 13:23:40 nas acpi: 20150112 resumed at 13:23:40
Jan 12 13:23:37 nas acpi: 20150112 suspend at 13:23:37
and the computer clearly does not suspend.
The motherboard is a Gigabyte Technology Co., Ltd. GA-970A-D3.
NAS4Free version 9.2.0.1 - Shigawire (revision 972) embedded.
If I run acpiconf -s 3 over SSH, I get the same error in the system log:
Jan 12 14:49:04 nas kernel: acpi0: device_suspend failed
Any suggestions?
Sorry for my English.
Thanks!
Servidor NAS sobre NAS4Free v11.0.0.4 - Sayyadina (revisión 3460) x64 embedded on a Pendrive 4GB, 32 GB RAM, CPU AMD FX(tm)-4100 Quad-Core, 6x HD WD RED 6TB on 1 x LSI 9211-8i on RaidZ1=32,5 TB on 1 vdev on a Gigabyte GA-970A-D3
- b0ssman
- Forum Moderator
- Posts: 2438
- Joined: 14 Feb 2013 08:34
- Location: Munich, Germany
- Status: Offline
Re: Helpful scripts: Backup,Snapshot,Standby,Scrub,CheckPools...
S3 suspend is not a high priority in FreeBSD development.
Try with the 9.3 version.
Nas4Free 11.1.0.4.4517. Supermicro X10SLL-F, 16gb ECC, i3 4130, IBM M1015 with IT firmware. 4x 3tb WD Red, 4x 2TB Samsung F4, both GEOM AES 256 encrypted.
- MoloMuy
- Forum Moderator
- Posts: 175
- Joined: 17 Feb 2013 14:40
- Location: Palma de Mallorca - España
- Status: Offline
Re: Helpful scripts: Backup,Snapshot,Standby,Scrub,CheckPools...
Hi b0ssman, thanks for the reply.
I've also tested with version NAS4Free-x64-embedded-9.3.0.2.1283.img and it behaves the same.
Thanks.
Servidor NAS sobre NAS4Free v11.0.0.4 - Sayyadina (revisión 3460) x64 embedded on a Pendrive 4GB, 32 GB RAM, CPU AMD FX(tm)-4100 Quad-Core, 6x HD WD RED 6TB on 1 x LSI 9211-8i on RaidZ1=32,5 TB on 1 vdev on a Gigabyte GA-970A-D3
- MoloMuy
- Forum Moderator
- Posts: 175
- Joined: 17 Feb 2013 14:40
- Location: Palma de Mallorca - España
- Status: Offline
Re: Helpful scripts: Backup,Snapshot,Standby,Scrub,CheckPools...
Hello, I'm using the manageAcpi.sh script.
I run the script from cron every 15 minutes:
/mnt/RAIDz-1/.scripts/manageAcpi.sh -p 120 -w 600 -s 600 -a 19:30,21:00 -n 192.168.1.19+192.168.1.20,600,5
The problem is that the NAS shuts down even when there is network traffic. Am I calling it wrongly, or should I add something so that this does not happen?
Sorry for my English.
Many thanks!
Servidor NAS sobre NAS4Free v11.0.0.4 - Sayyadina (revisión 3460) x64 embedded on a Pendrive 4GB, 32 GB RAM, CPU AMD FX(tm)-4100 Quad-Core, 6x HD WD RED 6TB on 1 x LSI 9211-8i on RaidZ1=32,5 TB on 1 vdev on a Gigabyte GA-970A-D3
-
fritz
- experienced User
- Posts: 84
- Joined: 12 Dec 2012 16:40
- Contact:
- Status: Offline
Re: Helpful scripts: Backup,Snapshot,Standby,Scrub,CheckPools...
Hi
manageAcpi.sh should not be run periodically from cron; it runs as an endless loop, so it should be started once from a command script.
fritz
O/S: NAS4Free 11.1.0.4 - Atomics (revision 5017) (Embedded 64bit), installed on 8GB USB flash drive
https://github.com/fritz-hh
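A minimal command/postinit wrapper for this could look like the following sketch (the manageAcpi.sh path and options come from this thread; the start_once guard is an added assumption to avoid launching a second copy):

```shell
#!/bin/sh
# Sketch: start a long-running script once at boot instead of from cron.
# start_once launches the given command in the background unless a
# process matching the pattern is already running.
start_once() {
  pattern=$1; shift
  if pgrep -f "$pattern" >/dev/null 2>&1; then
    RESULT="already running: $pattern"
  else
    "$@" &
    RESULT="started: $pattern"
  fi
  echo "$RESULT"
}

# Real usage would be something like (path from this thread):
#   start_once manageAcpi.sh /mnt/RAIDz-1/.scripts/manageAcpi.sh -p 120 -w 600 -s 600 -a 19:30,21:00
# Demo with a harmless placeholder command:
start_once demo_marker sleep 1
```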
- MoloMuy
- Forum Moderator
- Posts: 175
- Joined: 17 Feb 2013 14:40
- Location: Palma de Mallorca - España
- Status: Offline
Re: Helpful scripts: Backup,Snapshot,Standby,Scrub,CheckPools...
Thanks for replying.
Should all of the helpful scripts be started from a command script, since they run as endless loops?
Many thanks!
Servidor NAS sobre NAS4Free v11.0.0.4 - Sayyadina (revisión 3460) x64 embedded on a Pendrive 4GB, 32 GB RAM, CPU AMD FX(tm)-4100 Quad-Core, 6x HD WD RED 6TB on 1 x LSI 9211-8i on RaidZ1=32,5 TB on 1 vdev on a Gigabyte GA-970A-D3
-
fritz
- experienced User
- Posts: 84
- Joined: 12 Dec 2012 16:40
- Contact:
- Status: Offline
Re: Helpful scripts: Backup,Snapshot,Standby,Scrub,CheckPools...
I am not sure I understand what you mean.
Did you manage to get it working on your nas?
O/S: NAS4Free 11.1.0.4 - Atomics (revision 5017) (Embedded 64bit), installed on 8GB USB flash drive
https://github.com/fritz-hh
- MoloMuy
- Forum Moderator
- Posts: 175
- Joined: 17 Feb 2013 14:40
- Location: Palma de Mallorca - España
- Status: Offline
Re: Helpful scripts: Backup,Snapshot,Standby,Scrub,CheckPools...
It is working for me now.
Thank you very much!
Servidor NAS sobre NAS4Free v11.0.0.4 - Sayyadina (revisión 3460) x64 embedded on a Pendrive 4GB, 32 GB RAM, CPU AMD FX(tm)-4100 Quad-Core, 6x HD WD RED 6TB on 1 x LSI 9211-8i on RaidZ1=32,5 TB on 1 vdev on a Gigabyte GA-970A-D3
-
fritz
- experienced User
- Posts: 84
- Joined: 12 Dec 2012 16:40
- Contact:
- Status: Offline
Re: Helpful scripts: Backup,Snapshot,Standby,Scrub,CheckPools...
Hi all,
v2.0 of the scripts has been released (there are only a few minor changes since v2.0-beta1).
cheers,
fritz
O/S: NAS4Free 11.1.0.4 - Atomics (revision 5017) (Embedded 64bit), installed on 8GB USB flash drive
https://github.com/fritz-hh
- MoloMuy
- Forum Moderator
- Posts: 175
- Joined: 17 Feb 2013 14:40
- Location: Palma de Mallorca - España
- Status: Offline
Re: Helpful scripts: Backup,Snapshot,Standby,Scrub,CheckPools...
Hi, I'm running this command from a command script:
/mnt/RAIDz-1/.scripts/manageAcpi.sh -p 120 -w 600 -s 600 -a 19:30,21:00 -n 192.168.1.19+192.168.1.20,600,5
but my NAS shuts down while there is Samba network traffic.
How should I change the command to get this right?
Thanks!
Servidor NAS sobre NAS4Free v11.0.0.4 - Sayyadina (revisión 3460) x64 embedded on a Pendrive 4GB, 32 GB RAM, CPU AMD FX(tm)-4100 Quad-Core, 6x HD WD RED 6TB on 1 x LSI 9211-8i on RaidZ1=32,5 TB on 1 vdev on a Gigabyte GA-970A-D3
- MoloMuy
- Forum Moderator
- Posts: 175
- Joined: 17 Feb 2013 14:40
- Location: Palma de Mallorca - España
- Status: Offline
Re: Helpful scripts: Backup,Snapshot,Standby,Scrub,CheckPools...
Thanks for the new version and for your work!
Servidor NAS sobre NAS4Free v11.0.0.4 - Sayyadina (revisión 3460) x64 embedded on a Pendrive 4GB, 32 GB RAM, CPU AMD FX(tm)-4100 Quad-Core, 6x HD WD RED 6TB on 1 x LSI 9211-8i on RaidZ1=32,5 TB on 1 vdev on a Gigabyte GA-970A-D3
-
fritz
- experienced User
- Posts: 84
- Joined: 12 Dec 2012 16:40
- Contact:
- Status: Offline
Re: Helpful scripts: Backup,Snapshot,Standby,Scrub,CheckPools...
Your command seems to be OK, but you need to add "&" at the end of the command.
You should of course also make sure that your motherboard supports ACPI S5, and that your BIOS is configured accordingly (activate Wake-on-LAN, ...).
O/S: NAS4Free 11.1.0.4 - Atomics (revision 5017) (Embedded 64bit), installed on 8GB USB flash drive
https://github.com/fritz-hh
- MoloMuy
- Forum Moderator
- Posts: 175
- Joined: 17 Feb 2013 14:40
- Location: Palma de Mallorca - España
- Status: Offline
Re: Helpful scripts: Backup,Snapshot,Standby,Scrub,CheckPools...
Many thanks!
Servidor NAS sobre NAS4Free v11.0.0.4 - Sayyadina (revisión 3460) x64 embedded on a Pendrive 4GB, 32 GB RAM, CPU AMD FX(tm)-4100 Quad-Core, 6x HD WD RED 6TB on 1 x LSI 9211-8i on RaidZ1=32,5 TB on 1 vdev on a Gigabyte GA-970A-D3
- MoloMuy
- Forum Moderator
- Posts: 175
- Joined: 17 Feb 2013 14:40
- Location: Palma de Mallorca - España
- Status: Offline
Re: Helpful scripts: Backup,Snapshot,Standby,Scrub,CheckPools...
The NAS server shuts down and turns on properly via Wake-on-LAN,
but it keeps shutting down when there is network traffic from another IP that I have not added to the command line.
The command line, run from the command scripts:
/mnt/RAIDz-1/.scripts/manageAcpi.sh -p 120 -w 600 -s 600 -a 19:30,21:00 -n 192.168.1.19+192.168.1.20,600,5 &
Many thanks!
Servidor NAS sobre NAS4Free v11.0.0.4 - Sayyadina (revisión 3460) x64 embedded on a Pendrive 4GB, 32 GB RAM, CPU AMD FX(tm)-4100 Quad-Core, 6x HD WD RED 6TB on 1 x LSI 9211-8i on RaidZ1=32,5 TB on 1 vdev on a Gigabyte GA-970A-D3
-
carloskar
- Starter
- Posts: 19
- Joined: 28 Oct 2013 19:56
- Status: Offline
Re: Helpful scripts: Backup,Snapshot,Standby,Scrub,CheckPools...
Hi
I have a NAS that is up and running during the day, and my disks are set to sleep after 20 minutes. But since I run the manageSnapshots.sh script once every hour, the disks spin up, and most of the time the snapshot created is empty (i.e. no changes have been made in the filesystem).
Would it be possible to check, before doing anything to the disks, whether there have been any writes, and only then create a new snapshot? That would avoid creating empty snapshots, but mainly it would avoid spinning up the disks.
(Anyway, thanks for some great scripts; I've been using them since day one of my N4F experiment.)
11.2.0.4 - Omnius (revision 5748)
ASRock E3C226D2I, Intel(R) Celeron(R) CPU G1820 @ 2.70GHz, 8GB DDR3 ECC
ZFS main pool: Samsung SSD 850 EVO 1TB
ZFS backup pool: 3x WD10EFRX 1TB + 1x WD20EFRX 2TB, mirror, synced daily with main pool
-
locslikes
- NewUser
- Posts: 10
- Joined: 30 Mar 2013 01:50
- Status: Offline
Re: Helpful scripts: Backup,Snapshot,Standby,Scrub,CheckPools...
Hi
First off thanks for all who have helped to develop these scripts!
I have a scenario where my NAS pulls data off servers over SSH to keep backups. The problem I'm having is that, as it's acting as a client, the -s switch doesn't seem to work and the NAS goes to sleep even during an SSH connection.
My switches are: manageAcpi.sh -s +300 -c23:00,21:00,5 -m -v &
So basically it switches on at 9:30 via a BIOS setting, then it should start pulling the data off the server via SSH until it's finished, and then sleep. But since it is not the server receiving the SSH connection, it goes to sleep at 11pm, which is the start of the curfew. I could set the curfew to a later time, but the whole point is for it to take as long as it needs and then go to sleep.
Am I missing something, or is it because it's not the server that it's not detecting an incoming SSH connection?
Cheers!
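One workaround would be for a wrapper to also treat outgoing SSH transfers as activity before allowing sleep; a rough sketch (this check is not part of manageAcpi.sh, it is an assumption about what would be needed):

```shell
#!/bin/sh
# Sketch: report whether an outgoing ssh client process is running,
# which a wrapper could poll to postpone suspending the NAS.
nas_busy_with_ssh() {
  # pgrep -x matches the process name exactly ("ssh" is the client;
  # the server side would be "sshd")
  if pgrep -x ssh >/dev/null 2>&1; then
    echo busy
  else
    echo idle
  fi
}
STATE=$(nas_busy_with_ssh)
echo "$STATE"
```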
-
trendco
- Starter
- Posts: 70
- Joined: 20 Jan 2013 18:59
- Status: Offline
Re: Helpful scripts: Backup,Snapshot,Standby,Scrub,CheckPools...
Has anyone got these scripts running on Linux (ZFS on Linux)?
-
carloskar
- Starter
- Posts: 19
- Joined: 28 Oct 2013 19:56
- Status: Offline
Re: Helpful scripts: Backup,Snapshot,Standby,Scrub,CheckPools...
carloskar wrote: Would it be possible to check, before doing anything to the disks, whether there have been any writes, and only then create a new snapshot? That would avoid creating empty snapshots, but mainly it would avoid spinning up the disks.
I have now created a wrapper for the manageSnapshots script that should run every hour. The wrapper is started at postInit and makes sure that manageSnapshots runs once every hour, but only if there has been write activity on the zpool where the filesystem to be snapshotted is located.
For example, my script is located at "/mnt/intvol0/scripts/fritz-2.0/manageSnapshotsWrapper.sh" and I want it to run at two minutes past every hour with the arguments "-h 24 -d 15 -w 8 -m 12 intvol0" passed to manageSnapshots.sh, so I have this as a postInit script:
Code: Select all
/mnt/intvol0/scripts/fritz-2.0/manageSnapshotsWrapper.sh 2 -h 24 -d 15 -w 8 -m 12 intvol0 &
The wrapper script itself:
Code: Select all
#!/bin/sh
################################################################################
# Author: carloskar
#
# This script is a wrapper for the hourly call to fritz's manageSnapshots.sh,
# but instead of calling it hourly it should be called as a postInit script
# and it will make sure to call manageSnapshots.sh once every hour at the
# specified minute.
#
# The purpose of the script is to only call manageSnapshots.sh if there has been
# write activity on the zpool, this is mainly to avoid having the disks spin up
# once every hour but it also avoids creating "empty" snapshots.
#
# The script first copies itself to /var/scripts/ (which is on a RAM-disk at least
# in the embedded install) and then runs the script from that location. This is
# because the script calls itself once every hour could potentially cause a disk
# read and the disk spin up and that is what the script was intended to avoid.
#
# 'zpool iostat' is used to determine if there has been any write operation
# so it obviously can not make a decision if it should create a snapshot or not
# per filesystem, only per pool.
# The usage of zpool iostat was inspired by a script by Milhouse at forums.freenas.org:
# https://forums.freenas.org/index.php?threads/unsure-of-sata-drive-spindown.1053/page-2#post-5605
#
# The script shall be called at postInit.
# Usage:
# manageSnapshotsWrapper.sh minute [args] filesystem
#
# minute : The minute into the hour when the snapshot should be created. For example,
# a 2 will create a snapshot every hour two minutes past the full hour.
# args : arguments to manageSnapshots.sh (excluding the filesystem)
# filesystem : the zfs filesystem, last argument to manageSnapshots.sh and contains
# the zpool to monitor write activity on.
################################################################################
################################################################################
# CONFIGURATION
################################################################################
#
# The directory where fritz's scripts are located
SCRIPTS_DIR="/mnt/intvol0/scripts/fritz-2.0"
#
# The directory where this script is supposed to execute from, prefferably a RAM disk
DESTINATION_DIR="/var/scripts"
#
################################################################################
# Initialization of the script name
readonly SCRIPT_NAME=`basename $0` # The name of this file
# set script path as working directory
cd "`dirname $0`"
# get the source dir that now is the working directory
readonly CURRENT_DIR=`pwd`
# Last argument is always the file system, needed for the log file name.
if [ $# -gt 0 ]; then
eval FILESYSTEM=\${$#}
# Check if the filesystem is available
if ! zfs list "$FILESYSTEM" 1>/dev/null 2>/dev/null; then
echo "ERROR: Unknown file system: \"$FILESYSTEM\""
exit 1
fi
else
echo "ERROR(1): Missing arguments! Need at least 'minute' and 'filesystem'. Args='$@'"
exit 1
fi
# Get the zpool name for the filesystem
POOL=`echo $FILESYSTEM | cut -d'/' -f1`
# Log file is relative the directory where the script was called from,
# so the first call when the script executes from non-volatile memory will write
# the log to non-volatile memory, and when the script is called from the
# RAM-disk then the log is written to the RAM-disk.
LOGFILE="$CURRENT_DIR/log/$SCRIPT_NAME.$FILESYSTEM.log"
readonly TMPFILE="$DESTINATION_DIR/tmp/$SCRIPT_NAME.$FILESYSTEM.tmp"
_log() {
echo "`date +'%Y%m%d_%H%M%S'` $@" >> $LOGFILE
}
if [ "$1" = "wait" ]; then
# Process output from zpool iostat
# Remove first argument to this script, 'wait'. $@ now only contains the arguments to manageSnapshots.sh
shift
# Skip 3 lines of 'zpool iostat' headers...
for H in 1 2 3; do
read HEADER
done
# First output line is the average since system start, ignore it.
read POOL_NAME POOL_USED POOL_AVAIL POOL_OP_READ POOL_OP_WRITE POOL_BW_READ POOL_BW_WRITE
# Second output line is the average for the period. This is the interesting part.
# Wait for 'zpool iostat' to write the second output.
read POOL_NAME POOL_USED POOL_AVAIL POOL_OP_READ POOL_OP_WRITE POOL_BW_READ POOL_BW_WRITE
if [ $POOL_OP_WRITE -gt 0 ]; then
_log "write_ops=$POOL_OP_WRITE, create snapshot."
sh $SCRIPTS_DIR/manageSnapshots.sh $@
else
_log "No writes no snapshot."
fi
else
# The script is starting
_log "------------------------"
_log "Startup, args=$@"
_log "zpool = $POOL"
# The first argument when starting up is the minute when the snapshot should be created.
if [ $# -gt 1 ]; then
# Have at least 'minute' and 'filesystem' arguments.
# $1 is 'minute', verify that it is a number and that it is valid (0-59)
if ! [ "$1" -eq "$1" ] 2> /dev/null; then
_log "ERROR: Arg0 (minute)='$1': not a number."
exit 1
elif ! [ "$1" -ge 0 -a "$1" -le 59 ]; then
_log "ERROR: Arg0 (minute)='$1': not within the valid range, 0-59."
exit 1
fi
eval SNAPSHOT_MIN=$1
# Remove the first argument, 'minute', from the argument list
shift
_log "minute = $SNAPSHOT_MIN"
else
_log "ERROR(2): Missing arguments! Need at least 'minute' and 'filesystem'. Args='$@'"
exit 1
fi
# Make sure the destination directory and its tmp and log subdirectories exist
# (mkdir -p is idempotent, so no existence checks are needed)
mkdir -p "$DESTINATION_DIR/tmp"
mkdir -p "$DESTINATION_DIR/log"
# If the temporary file for this FILESYSTEM already exists then it is probably already running
if [ -e "$TMPFILE" ]; then
_log "ERROR '$TMPFILE' already exists, is the script already running for the current file system?"
exit 1
else
# No tempfile exists so create one.
touch "$TMPFILE"
fi
# Only copy script to destination if it does not already exist
if [ ! -e "$DESTINATION_DIR/$SCRIPT_NAME" ]; then
cp $CURRENT_DIR/$SCRIPT_NAME $DESTINATION_DIR
fi
# Update the log-file path so log are written to RAM-disk from now
LOGFILE="$DESTINATION_DIR/log/$SCRIPT_NAME.$FILESYSTEM.log"
# The main loop
while true; do
# Get the current time and extract the minute and second part
current_time=`date +'%H:%M:%S'`
current_min=`echo $current_time | cut -d':' -f2`
current_sec=`echo $current_time | cut -d':' -f3`
# Calculate the next snapshot time in seconds relative HH:00:00 where HH is the current hour.
if [ $current_min -lt $SNAPSHOT_MIN ]; then
# Current minute is less than the snapshot minute, so the next snapshot minute is during this hour (current_hour+0)
let next=0 > /dev/null
else
# The next snapshot minute is during the next hour, add 60 minutes
let next=60*60 > /dev/null
fi
let next=next+SNAPSHOT_MIN*60 > /dev/null
# Subtract the current MM:SS from the next time to get the number of seconds until next
let period=next-current_min*60-current_sec > /dev/null
if [ $period -eq 0 ]; then
# No use having period of zero seconds, set it to 60 minutes instead
let period=60*60 > /dev/null
fi
_log "Period: $period seconds"
# Call zpool iostat for the pool, with the calculated 'period' and count set to 2.
# Pipe the output to a new instance of this script with the first argument 'wait'.
zpool iostat $POOL $period 2 | "$DESTINATION_DIR/$SCRIPT_NAME" wait "$@"
done
fi
exit 0
11.2.0.4 - Omnius (revision 5748)
ASRock E3C226D2I, Intel(R) Celeron(R) CPU G1820 @ 2.70GHz, 8GB DDR3 ECC
ZFS main pool: Samsung SSD 850 EVO 1TB
ZFS backup pool: 3x WD10EFRX 1TB + 1x WD20EFRX 2TB, mirror, synced daily with main pool
-
dundermiflin
- Starter
- Posts: 27
- Joined: 12 Oct 2013 14:11
- Status: Offline
Re: Helpful scripts: Backup,Snapshot,Standby,Scrub,CheckPools...
I have two problems and I don't know how to fix it :
First is :
checkPools.sh : issue occured during execution
The problem, is the scripts is sending a mail every hour with this ERROR.......I can't fix, but I don't want to stop the script......just ignore this error
Second one :
manageSnapshots.sh : issue occured during execution
First is :
checkPools.sh : issue occured during execution
I don't know hot to fix the 512B -> 4096B withou losing data (I don't have where to put all the data while creating a new 4096B pool)20150331_140000 INFO -------------------------------------
20150331_140001 INFO Starting checking of pools
20150331_140001 ERROR pool: JANGINA
20150331_140001 ERROR state: ONLINE
20150331_140001 ERROR status: One or more devices are configured to use a non-native block size.
20150331_140001 ERROR Expect reduced performance.
20150331_140001 ERROR action: Replace affected devices with devices that support the
20150331_140001 ERROR configured block size, or migrate data to a properly configured
20150331_140001 ERROR pool.
20150331_140001 ERROR scan: scrub repaired 0 in 8h23m with 0 errors on Mon Mar 30 04:13:21 2015
20150331_140001 ERROR config:
20150331_140001 ERROR NAME STATE READ WRITE CKSUM
20150331_140001 ERROR JANGINA ONLINE 0 0 0
20150331_140001 ERROR raidz1-0 ONLINE 0 0 0
20150331_140001 ERROR ada0 ONLINE 0 0 0 block size: 512B configured, 4096B native
20150331_140001 ERROR ada1 ONLINE 0 0 0 block size: 512B configured, 4096B native
20150331_140001 ERROR ada2 ONLINE 0 0 0 block size: 512B configured, 4096B native
20150331_140001 ERROR ada3 ONLINE 0 0 0 block size: 512B configured, 4096B native
20150331_140001 ERROR errors: No known data errors
The problem is that the script sends a mail with this ERROR every hour. I can't fix the underlying issue, but I don't want to stop the script; I just want it to ignore this error.
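One possible workaround (my own sketch, not part of the script package) is to filter the known block-size advisory lines out of the captured pool status before deciding whether to mail. Here the report text is a hard-coded stand-in for what checkPools.sh would capture from `zpool status`:

```shell
#!/bin/sh
# Sketch: drop the known "non-native block size" advisory lines from a
# captured pool status report, so only unexpected lines trigger an alert.
filter_known_warnings() {
    grep -v -e 'non-native block size' \
            -e 'Expect reduced performance' \
            -e 'Replace affected devices' \
            -e 'configured block size, or migrate'
}

# Stand-in for the report checkPools.sh would capture from `zpool status`:
report="status: One or more devices are configured to use a non-native block size.
Expect reduced performance.
errors: No known data errors"

printf '%s\n' "$report" | filter_known_warnings
# Only the "errors: No known data errors" line survives the filter.
```

If the filtered report is empty apart from the expected lines, the wrapper could simply skip sending the mail.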
Second one :
manageSnapshots.sh : issue occured during execution
The snapshot script sends this error recurrently. If I delete the lock, the snapshots work again, but only two or three times; then the error comes back.
20150330_230000 INFO -------------------------------------
20150330_230001 INFO Starting snapshot script for dataset "JANGINA" (depth: -1)
20150330_230001 INFO Keeping up to 24 hourly / 15 daily / 8 weekly / 12 monthly snapshots (<0 = all)
20150330_230001 ERROR Could not start script: Another instance is running or stopped abnormally
20150330_230001 ERROR In the latter case, please delete manually the corresponding lock: "./tmp/locks/manageSnapshots.sh.JANGINA.lock"
20150330_230001 INFO JANGINA: Snapshot creation deactivated (no snapshot created)
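If the script dies between taking and releasing its lock, the lock file never goes away by itself. A hedged sketch (not the package's actual locking code) of clearing a lock only when the process that owns it is gone; it assumes the lock file stores the owning PID, which may not match how manageSnapshots.sh actually writes its locks:

```shell
#!/bin/sh
# Sketch: remove a lock file only if the PID recorded inside it is no
# longer alive; leave it alone while the owning process still runs.
clean_stale_lock() {
    lock="$1"
    [ -f "$lock" ] || return 0
    pid=$(cat "$lock")
    if [ -n "$pid" ] && kill -0 "$pid" 2>/dev/null; then
        echo "lock busy (held by running pid $pid)"
    else
        rm -f "$lock"
        echo "stale lock removed"
    fi
}

# Demo with a throwaway lock recording a PID that cannot exist:
echo 99999999 > /tmp/demo.lock
clean_stale_lock /tmp/demo.lock
```

Run from cron shortly before the snapshot job, something like this would stop the "Another instance is running" loop without hiding genuine overlaps.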
-
neptunus
- experienced User

- Posts: 79
- Joined: 11 Jun 2013 08:50
- Status: Offline
Re: Helpful scripts: Backup,Snapshot,Standby,Scrub,CheckPools...
Fritz and others,
I’m using the scripts: Backup,Snapshot,Standby,Scrub,CheckPools... every day! Very nice ones! Thanks…
When I use backupData.sh to back up data to a remote host within my LAN, the speed is very slow (12.5MB/s). When I test the bandwidth with iperf, I get 110MB/s in both directions.
Adding mbuffer to the backup script might be a good idea:
http://everycity.co.uk/alasdair/2010/07 ... s-receive/
http://blog.bitratchet.com/2013/11/11/u ... -services/
Unfortunately, mbuffer is not installed by default.
There might be other good alternatives for moving the data; does anyone have ideas?
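For reference, the pattern described in those links looks roughly like this; a sketch only, assuming mbuffer is installed on both hosts (dataset names, snapshot name, hostname, and port 9090 are all placeholders):

```sh
# On the receiving host: listen on an arbitrary TCP port and feed
# whatever arrives into zfs receive.
mbuffer -s 128k -m 512M -I 9090 | zfs receive backup/data

# On the sending host: stream the snapshot through mbuffer to the
# receiver, so zfs send is decoupled from network stalls by the buffer.
zfs send main/data@autosnap | mbuffer -s 128k -m 512M -O receiver:9090
```

The buffer smooths out the stop-and-go behaviour of zfs send/recv over the network, which is often what limits it to a fraction of the link speed.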
-
Onichan
- Advanced User

- Posts: 238
- Joined: 04 Jul 2012 21:41
- Status: Offline
Re: Helpful scripts: Backup,Snapshot,Standby,Scrub,CheckPools...
12.5MB/s is exactly the speed of a 100Mb NIC. When you run iperf, are you testing between the backup host and the NAS?
Are you sure the backup host is connected at 1Gb and that whatever is between the backup host and the main NAS is all 1Gb as well?
-
neptunus
- experienced User

- Posts: 79
- Joined: 11 Jun 2013 08:50
- Status: Offline
Re: Helpful scripts: Backup,Snapshot,Standby,Scrub,CheckPools...
Onichan wrote: 12.5MB is the exact speed of 100Mb NIC. When you are using iperf you are doing it between the backup host and NAS?
Yes, it is between my two nas4free machines, tested with iperf in both directions.
Onichan wrote: Are you sure the backup host is connected at 1Gb and that whatever is between the backup host and the main NAS is all 1Gb as well?
Yes, I'm sure, because I get 106MB/s from host A to host B and 108MB/s from host B to host A.
With a simple zfs send/recv test the speed is about ~12.5MB/s (it's a coincidence that this matches the speed of a 100Mbit/s NIC).
-
MRVa
- NewUser

- Posts: 5
- Joined: 20 Jul 2014 19:59
- Status: Offline
Re: Helpful scripts: Backup,Snapshot,Standby,Scrub,CheckPools...
Hello:
I'm a complete *newb* when it comes to FreeBSD, though I have used several generations of FreeNAS and, more recently, NAS4Free. I stayed with the latter because this is just a home server running on my old Athlon 64 desktop, which is maxed out at 4GB of RAM.
I have successfully installed the extended GUI add on 9.2.0.1, and that all works well.
I have tried several times to get the "manageAcpi" script running, without success, though I seem to get close. The machine starts up as expected, and everything else seems to work. I think the script is starting, because I get an email from the system titled:
ManageAcpi.sh: Invalid arguments
The text of the email is:
20151129_064908 INFO -------------------------------------
20151129_064908 ERROR No mandatory arguments should be provided
I suspect I have some sort of syntax error in the command string, but since I don't know much at all about FreeBSD commands, I have no idea what it is. Something simple, I suspect. Whatever it is, it's preventing the script from actually doing its job -- I think. The old Athlon would sleep without difficulty in the Windows world, but not yet in NAS4Free.
Here is my current command string. Do any of you gurus see where I fouled it up? Do I need to add or eliminate spaces somewhere? Did I leave something out?
/mnt/Mushkin/DATA/Scripts/manageAcpi.sh -p 120 -w 300 -a 11:50,12:10 -c 23:00,6:00,3 -n 6:00,23:00,3,30,192.168.0.63 +192.168.0.55 +192.168.0.56 +192.168.0.62
Thanks to all, and thanks for the script. For home use, if there's one small feature that would make NAS4Free more attractive, it would be building this capability into the system. But I'm sure I can get this to work; I'm just hoping somebody can save me a lot of experimentation. It may be something else, but I've checked the accompanying instructions a couple of times. I'm hoping this message will at least help with diagnostics.
Thanks,
Mike
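For what it's worth, the wording "No mandatory arguments should be provided" suggests the script saw leftover positional arguments after option parsing. Whether manageAcpi.sh expects the IPs inside the -n value or as separate "+" tokens is something only its documentation can answer, but a minimal, hypothetical getopts sketch (not the real manageAcpi.sh) shows how tokens like `+192.168.0.55` can end up as stray positional arguments unless they are part of an option's comma-separated value:

```shell
#!/bin/sh
# Hypothetical illustration only: a POSIX getopts loop stops at the first
# token that does not start with '-', leaving it as a positional argument.
parse() {
    OPTIND=1
    while getopts "p:w:a:c:n:" opt; do :; done
    shift $((OPTIND - 1))
    if [ $# -gt 0 ]; then
        echo "ERROR No mandatory arguments should be provided"
    else
        echo "OK"
    fi
}

parse -n 6:00,23:00,3,30,192.168.0.63 +192.168.0.55   # stray token -> ERROR
parse -n 6:00,23:00,3,30,192.168.0.63,+192.168.0.55   # value inside -n -> OK
```

If that is the cause, folding the "+" host entries into the -n argument (or quoting the whole value) may be worth a try.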