Latest News:
2019-02-05: XigmaNAS 12.0.0.4.6412 - BETA released!
2019-01-22: XigmaNAS 11.2.0.4.6400 - released!
We really need "Your" help on XigmaNAS https://translations.launchpad.net/xigmanas translations. Please help today!
Producing and hosting XigmaNAS cost money, please consider a donation to our project so we can continue to offer you the best.
We need your support! eg: PAYPAL
Froze then failed with Error 5 on boot (RootOnZFS)
-
- NewUser
- Posts: 12
- Joined: 15 Jul 2018 21:31
- Status: Offline
Froze then failed with Error 5 on boot (RootOnZFS)
XigmaNAS 11.2.0.4.6026 full, RootOnZFS installed on mirrored SSDs (full specs in the next post - I had forgotten and needed to work out what they were!).
The system was working perfectly and had been up for around two or three weeks, with a few reboots. From memory, I had upgraded to the latest build of XigmaNAS via the GUI from the build that came out soon after the name change from NAS4Free to XigmaNAS (so, I think, I had a fresh RootOnZFS install of the first version that was called XigmaNAS).
A day or so before the problem started, I had enabled ExtendedGUI via the OneButtonInstaller, along with automatic purges of the recycle directories on my various CIFS/SMB shares.
At the time the system froze, I had just added a new library for a large directory of movies to Plex (latest version, recently upgraded via the OBI GUI) and Plex was in the middle of downloading cover art and metadata.
I had a monitor hooked up to the XigmaNas box at the time and saw the screen blank. I waited a while and then rebooted. On boot, the box gets past the boot menu and then halts with the following on the screen:
da5: Serial Number WD-WX41DA40LJ8H
da5: 600.000MB/s transfers
da5: Command Queueing enabled
da5: 5723166MB (11721045168 512 byte sectors)
da5: quirks=0x8<4K>
Trying to mount root from zfs:zroot/ROOT/upgrade-2018-09-22-132504 []...
GEOM_MIRROR: Device mirror/gswap launched (2/2).
random: unblocking device.
Mounting from zfs:zroot/ROOT/upgrade-2018-09-22-132504 failed with error 5; retrying for 3 more seconds
Mounting from zfs:zroot/ROOT/upgrade-2018-09-22-132504 failed with error 5.
Loader variables:
vfs.root.mountfrom=zfs:zroot/ROOT/upgrade-2018-09-22-132504
Manual root filesystem specification:
<fstype>:<device> [options]
Mount <device> using filesystem <fstype>
and with the specified (optional) option list.
e.g. ufs:/dev/da0s1a
zfs:tank
cd9660:/dev/cd0 ro
(which is equivalent to: mount -t cd9660 -o ro /dev/cd0 /)
? List valid disk boot devices
. Yield 1 second (for background tasks)
<empty line>
mountroot>
If I then press enter, I get a panic and it tells me that it is unable to mount root, then reboots and gets back to the same screen.
Does anyone have any ideas why this might have happened or how I can troubleshoot to find out? As it's mirrored RootOnZFS, can I restore a previous snapshot or something, or am I left with having to reinstall from fresh and then restore an old config? [I would prefer not to do that without working out what went wrong, as I would like to make sure it doesn't happen again if possible.]
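For anyone landing on the same mountroot> prompt, one recovery path worth trying before a reinstall is to point the kernel at a different root dataset by hand. This is a sketch only - the dataset name below is a typical RootOnZFS default, not taken from this system, so substitute whatever "?" actually lists:

```shell
# At the mountroot> prompt, first list the devices the kernel can see:
?
# Then try mounting an older/known-good boot environment by name,
# e.g. (hypothetical dataset name - use one that exists on your pool):
zfs:zroot/ROOT/default
# If that boots, the failing boot environment can be inspected afterwards.
```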
Last edited by Ahab on 19 Oct 2018 12:55, edited 1 time in total.
-
- NewUser
- Posts: 12
- Joined: 15 Jul 2018 21:31
- Status: Offline
Re: Froze then failed with Error 5 on boot (RootOnZFS)
Specs:
XigmaNAS 11.2.0.4.6026 full, RootOnZFS installed on mirrored SSDs (2x Kingston A400 120GB)
CPU: AMD Phenom II X6 1100T Black Edition - 3.3GHz Six Core
MOBO: Gigabyte GA-880GA-UD3H
RAM: 16GB 2x Kingston 8GB KVR133D3E9SK2/16G ( 2Rx8 PC3 - 10600E - 9 - 12 - E3 )
HDD Controller: LSI SAS 9211-8i 6Gb/s SATA+SAS, PCIe 2.0 x8 (flashed to IT mode using the firmware version that was recommended for FreeNAS in early 2015 when I got it)
NIC: Dell X3959 Dual-Port Gigabit PCI-E NIC Server Adapter
PSU: Sea Sonic SS-600HT Active PFC F3 (600Watts)
System pool RootOnZFS (mirrored): 2x Kingston A400 120GB running off one of the MoBo controllers
Pool 1: 6x 6TB WD Red WD60EFRX in z2 pool - running off the LSI Controller
Pool 2: 6x various WD Green drives (I know - this is just for backups from the main pool/elsewhere and I intend to replace these) in z2 pool - running off the MoBo controllers
Case: Fractal Design Define R3 with a load of decent replacement fans
Other stuff: the SSDs are in enclosures in PCI slots; 3x disks from Pool 2 are in a 2x 5.25" bay to 3x 3.5" bay converter (with fan). Massive aftermarket CPU cooler - Cooler Master, I think? - with Arctic Silver 3.
Never had any problems with heat. Some of the parts are getting quite old (looking back, I think I bought the MoBo in 2011), but I would prefer not to replace them right now if possible.
- JoseMR
- Hardware & Software Guru
- Posts: 950
- Joined: 16 Apr 2014 04:15
- Location: PR
- Contact:
- Status: Offline
Re: Froze then failed with Error 5 on boot (RootOnZFS)
Sorry for the very late reply, definitely overlooked this.
Please try the latest XigmaNAS RootOnZFS version and retry; if the problem persists, try running Memtest86+ on that system first.
Regards

System: FreeBSD 12.0, MB: Supermicro X8SI6-F, Xeon X3450, 16GB DDR3 ECC RDIMMs.
Devel: XigmaNAS Embedded and Full Latest, VirtualBox, 2 x CPU Cores, 4GB vRAM.
Addons Devel GitHub
JoseMRPubServ Resources
-
- NewUser
- Posts: 12
- Joined: 15 Jul 2018 21:31
- Status: Offline
Re: Froze then failed with Error 5 on boot (RootOnZFS)
Thank you - December and January were kind of crazy, so I only got round to it a week ago. All seems to be running well so far, but S.M.A.R.T. values on a couple of drives in one of my Z2 pools, plus lack of space, inspired me to upgrade the pools. And then upgrade the HBA controller cards. Then upgrade to an external chassis for one of the pools... oh well.
Anyway, is there anything I can do to work out what happened with this or to prevent it happening again? I am guessing not given that I have now written over the system disks with the re-install using the latest XigmaNAS RootOnZFS version.
By the way, if you have views on this one, that would be very much appreciated: viewtopic.php?f=15&t=14376#p89243
-
- Starter
- Posts: 58
- Joined: 19 Nov 2018 11:30
- Status: Offline
Re: Froze then failed with Error 5 on boot (RootOnZFS)
Generally, with the OS on its own drive(s), such a boot failure should not happen.
But if incoming data fills the swap, or if in some way Plex is filling the boot drive rather than the data drives,
"perplexing" problems could ensue.
Reinstall the OS, enlarge the swap, be sure the share is on the data drive(s),
and make sure wherever Plex is putting stuff has enough room.
by using this free advice, you, your heirs, etc
absolve, save, and hold harmless the advisor.
CAVEAT EMPTOR
YMMV
-
- NewUser
- Posts: 12
- Joined: 15 Jul 2018 21:31
- Status: Offline
Re: Froze then failed with Error 5 on boot (RootOnZFS)
was-armandh wrote: ↑02 Feb 2019 15:16
But if incoming data fills the swap or is in some way plex is filling the boot rather than data drives
OK, this is very odd. I have just looked at "System > Advanced > Swap" and swap is not enabled.
"Diagnostics > Information > Swap" just shows:
Information
Device 512-blocks Used Avail Capacity
Does that mean I have no swap? I installed using RootOnZFS defaults, so assumed that I had 2G but unless I am looking at the wrong part of the GUI, it would seem not.
So what do I do? Enable it in System > Advanced for 2048MB? Or something smaller, like 512MB?
Very good, I appreciate the pun!

Last edited by Ahab on 03 Feb 2019 00:15, edited 1 time in total.
-
- NewUser
- Posts: 12
- Joined: 15 Jul 2018 21:31
- Status: Offline
Re: Froze then failed with Error 5 on boot (RootOnZFS)
Also, Status > Processes:
Code: Select all
last pid: 40341; load averages: 0.76, 0.98, 1.13 up 1+12:19:52 21:57:26
31 processes: 1 running, 30 sleeping
Mem: 294M Active, 5280K Inact, 15G Wired, 201M Free
ARC: 12G Total, 7872M MFU, 3365M MRU, 133K Anon, 133M Header, 1057M Other
10G Compressed, 13G Uncompressed, 1.34:1 Ratio
Swap:
PID USERNAME THR PRI NICE SIZE RES STATE C TIME WCPU COMMAND
3344 plex 13 45 15 113M 53348K piperd 0 4:02 0.00% Plex Script Host
3287 plex 19 20 0 203M 66504K uwait 1 3:46 0.00% Plex Media Server
19944 root 1 20 0 178M 129M select 1 2:07 0.00% smbd
19917 root 1 20 0 174M 130M select 0 1:28 0.00% smbd
1892 root 1 20 0 6724K 1156K select 2 0:29 0.00% usbhid-ups
2987 root 1 20 0 13008K 3696K kqread 1 0:17 0.00% lighttpd
3348 plex 11 21 0 27308K 3020K usem 3 0:13 0.00% Plex Tuner Service
2069 root 1 20 0 29448K 3516K select 0 0:08 0.00% nmbd
1894 root 1 20 0 9548K 1332K select 5 0:06 0.00% upsd
1941 root 1 20 0 9540K 0K nanslp 4 0:06 0.00% <upsmon>
1702 root 1 20 0 6452K 888K select 5 0:03 0.00% syslogd
2072 root 1 20 0 161M 126M select 5 0:03 0.00% smbd
20049 root 1 52 0 303M 0K accept 2 0:03 0.00% <php-cgi>
34528 root 1 22 0 303M 8328K piperd 3 0:02 0.00% php-cgi
1486 _dhcp 1 20 0 6548K 940K select 2 0:01 0.00% dhclient
2174 root 2 20 0 121M 87836K select 3 0:00 0.00% smbd
1581 root 1 20 0 9184K 4848K select 5 0:00 0.00% devd
40633 root 1 20 0 10540K 0K nanslp 5 0:00 0.00% <smartd>
2829 root 1 20 0 6468K 0K nanslp 1 0:00 0.00% <cron>
2181 root 1 20 0 119M 87672K select 5 0:00 0.00% smbd
1908 root 1 20 0 9524K 0K nanslp 5 0:00 0.00% <upslog>
2988 root 1 20 0 301M 0K wait 3 0:00 0.00% <php-cgi>
3319 root 1 52 0 7504K 0K pause 4 0:00 0.00% <csh>
3316 root 1 52 0 6956K 0K wait 1 0:00 0.00% <login>
3336 root 1 52 0 7144K 1064K ttyin 2 0:00 0.00% sh
40341 root 1 22 0 7916K 2364K CPU2 2 0:00 0.00% top
3317 root 1 52 0 6408K 692K ttyin 1 0:00 0.00% getty
- JoseMR
- Hardware & Software Guru
- Posts: 950
- Joined: 16 Apr 2014 04:15
- Location: PR
- Contact:
- Status: Offline
Re: Froze then failed with Error 5 on boot (RootOnZFS)
Ahab wrote: ↑02 Feb 2019 22:57
Also, Status > Processes:
Code: Select all
last pid: 40341; load averages: 0.76, 0.98, 1.13 up 1+12:19:52 21:57:26
31 processes: 1 running, 30 sleeping
Mem: 294M Active, 5280K Inact, 15G Wired, 201M Free
ARC: 12G Total, 7872M MFU, 3365M MRU, 133K Anon, 133M Header, 1057M Other
10G Compressed, 13G Uncompressed, 1.34:1 Ratio
Swap:
PID USERNAME THR PRI NICE SIZE RES STATE C TIME WCPU COMMAND
...
Hi Ahab, you don't have to worry about "System > Advanced > Swap" or about creating any swap file/zvol: if you installed RootOnZFS with the defaults, it will have created a native 2GB swap partition on the selected disk and added it to "/etc/fstab", so the "top" output should look like the example below:
Code: Select all
last pid: 17811; load averages: 0.54, 0.22, 0.19 up 9+11:31:25 19:54:53
55 processes: 1 running, 54 sleeping
CPU: 0.1% user, 0.0% nice, 0.1% system, 0.0% interrupt, 99.8% idle
Mem: 46M Active, 604M Inact, 12G Wired, 2474M Free
ARC: 10G Total, 1489M MFU, 8112M MRU, 4132K Anon, 70M Header, 524M Other
9051M Compressed, 12G Uncompressed, 1.40:1 Ratio
Swap: 2048M Total, 2048M Free
PID USERNAME THR PRI NICE SIZE RES STATE C TIME WCPU COMMAND
...
Code: Select all
xigmanas: ~# swapinfo
Device 1K-blocks Used Avail Capacity
/dev/gpt/swap0 2097152 0 2097152 0%
xigmanas: ~#
System: FreeBSD 12.0, MB: Supermicro X8SI6-F, Xeon X3450, 16GB DDR3 ECC RDIMMs.
Devel: XigmaNAS Embedded and Full Latest, VirtualBox, 2 x CPU Cores, 4GB vRAM.
Addons Devel GitHub
JoseMRPubServ Resources
-
- Starter
- Posts: 58
- Joined: 19 Nov 2018 11:30
- Status: Offline
Re: Froze then failed with Error 5 on boot (RootOnZFS)
If the mem check is good I would [reinstall] increase the swap file size.
I remember failures trying to load a "Map" file from the county assessor's office;
the interactive high-res file was way too big for my hardware [at that time].
by using this free advice, you, your heirs, etc
absolve, save, and hold harmless the advisor.
CAVEAT EMPTOR
YMMV
-
- NewUser
- Posts: 12
- Joined: 15 Jul 2018 21:31
- Status: Offline
Re: Froze then failed with Error 5 on boot (RootOnZFS)
This is strange, if I try "swapinfo" in shell, I get:
Code: Select all
xigmanas: ~# swapinfo
Device 1K-blocks Used Avail Capacity
xigmanas: ~#
If I try via Tools > Execute Command, I get:
Code: Select all
$ swapinfo
Device 512-blocks Used Avail Capacity
was-armandh wrote: ↑03 Feb 2019 07:29
if the mem check is good I would [reinstall] increase the swap file size. …
I have got my hands on a pair of matching 8GB sticks, so I now have 32GB RAM in total, and they passed four passes of Memtest86. I have two (soon to be three) Z2 pools of 72TB and 35TB total and run Plex, and might run ownCloud and a few other extensions, and possibly TheBrig/SickBeard/CouchPotato if I can work out how to do it. Is 32GB with the default 2GB swap enough for that (assuming I have a swap, or create one if I don't)?
What about "vfs.zfs.arc_max"? What would be a good starting value to set it to, to leave enough RAM for plex and other extensions?
- JoseMR
- Hardware & Software Guru
- Posts: 950
- Joined: 16 Apr 2014 04:15
- Location: PR
- Contact:
- Status: Offline
Re: Froze then failed with Error 5 on boot (RootOnZFS)
Ahab wrote: ↑07 Feb 2019 01:10
This is strange, if I try "swapinfo" in shell, I get:
Code: Select all
xigmanas: ~# swapinfo
Device 1K-blocks Used Avail Capacity
xigmanas: ~#
If I try via Tools > Execute Command, I get:
Code: Select all
$ swapinfo
Device 512-blocks Used Avail Capacity
Does that mean I have no swap? …
Hi Ahab, it looks like you don't have any swap at all. Please run "gpart show adaX", where X is the ID of the boot drive(s); the resulting output should be as follows:
Code: Select all
xigmanas: ~# gpart show ada0
=> 40 33554352 ada0 GPT (16G)
40 409600 1 efi (200M)
409640 1024 2 freebsd-boot (512K)
410664 7128 - free - (3.5M)
417792 4194304 3 freebsd-swap (2.0G)
4612096 28934144 4 freebsd-zfs (14G)
33546240 8152 - free - (4.0M)
xigmanas: ~#
Code: Select all
xigmanas: ~# cat /etc/fstab
# Device Mountpoint FStype Options Dump Pass#
#
/dev/gpt/swap0 none swap sw 0 0
proc /proc procfs rw 0 0
xigmanas: ~#
Code: Select all
xigmanas: ~# gmirror status
Name Status Components
mirror/swap COMPLETE ada0p3 (ACTIVE)
ada1p3 (ACTIVE)
xigmanas: ~#
As for a system with 32GB RAM running many services, I would set the ZFS ARC max to 28GB and monitor it. In my case I have a 16GB system with a 10GB ZFS ARC max; my RAM sits at ~92% and sometimes jumps to 95/96% momentarily, but fortunately my system hasn't used swap yet.
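As a concrete sketch of the tunable being discussed (the 28G value is the one suggested above; whether it suits any given workload is an assumption to monitor):

```shell
# loader.conf fragment (System > Advanced > loader.conf in the WebGUI,
# corresponding to /boot/loader.conf on a stock FreeBSD install):
# cap the ZFS ARC at 28 GiB on a 32 GiB system, leaving ~4 GiB for services.
vfs.zfs.arc_max="28G"
```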
Regards
System: FreeBSD 12.0, MB: Supermicro X8SI6-F, Xeon X3450, 16GB DDR3 ECC RDIMMs.
Devel: XigmaNAS Embedded and Full Latest, VirtualBox, 2 x CPU Cores, 4GB vRAM.
Addons Devel GitHub
JoseMRPubServ Resources
-
- NewUser
- Posts: 12
- Joined: 15 Jul 2018 21:31
- Status: Offline
Re: Froze then failed with Error 5 on boot (RootOnZFS)
Thank you very much for the quick reply. Strangely enough, "gpart show ada6" and ada7 (the mirrored ZFS boot array) show this on the fourth line:
Code: Select all
8192 4194304 3 freebsd-swap (2.0G)
which suggests that the swap partitions are there at least.
"gmirror status" shows:
Code: Select all
mirror/gswap COMPLETE ada6p2 (ACTIVE)
ada7p2 (ACTIVE)
which I think means that they are active. I used the BIOS Boot Option (my motherboard does not have UEFI) with default settings. Is there a reason why "swapinfo" doesn't show anything?
- JoseMR
- Hardware & Software Guru
- Posts: 950
- Joined: 16 Apr 2014 04:15
- Location: PR
- Contact:
- Status: Offline
Re: Froze then failed with Error 5 on boot (RootOnZFS)
Ahab wrote: ↑07 Feb 2019 12:02
"gmirror status"
Code: Select all
mirror/gswap COMPLETE ada6p2 (ACTIVE)
ada7p2 (ACTIVE)
which I think means that they are active. I used the BIOS Boot Option (my motherboard does not have UEFI) with default settings. Is there a reason why "swapinfo" doesn't show anything?
Hi Ahab, you may be using an early version of RootOnZFS - I noticed this from the "gswap" name (short for GEOM swap). The latest updates to the RootOnZFS platform installer follow the FreeBSD bsdinstall standards/practices for convenience, and "gswap" is now simply called "swap" (/dev/mirror/swap) to avoid confusion and ease further administration.
At this point the easiest way to overcome this is to freshly install RootOnZFS (which may require backing up custom configs and/or reinstalling packages). You can also select the "BIOS+UEFI" boot method, as this covers both GPT/BIOS and GPT/EFI, so you will be future-proof against a sudden motherboard change that supports only UEFI and no reinstall will be needed. As for the swap, you may want to bump it to ~4G during installation as well.
Optionally, if you have a lot of custom configurations/packages and don't have a backup/restore plan or automation for the OS, you can set up "bemanager", export the current boot environment to a file, reinstall the latest XigmaNAS, then import the boot environment back and patch required files manually if you feel comfortable working from the shell, though this may be time consuming and a bit of a hassle.
If none of the above can be performed at the moment, please verify that "/etc/fstab" matches your swap mirror device "/dev/mirror/gswap" and correct it if not. After any correction, type "swapon -a" and re-verify that the swap has finally been added to the system.
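A minimal sketch of that check and correction from the shell, assuming the older "gswap" device name seen earlier in this thread (the exact fstab line follows the standard FreeBSD swap-entry format and is not copied from this system):

```shell
# Check what fstab currently lists for swap:
grep swap /etc/fstab
# The entry should name the actual mirror device, e.g.:
#   /dev/mirror/gswap  none  swap  sw  0  0
# After correcting the line, activate everything in fstab and re-verify:
swapon -a
swapinfo
```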
Regards
System: FreeBSD 12.0, MB: Supermicro X8SI6-F, Xeon X3450, 16GB DDR3 ECC RDIMMs.
Devel: XigmaNAS Embedded and Full Latest, VirtualBox, 2 x CPU Cores, 4GB vRAM.
Addons Devel GitHub
JoseMRPubServ Resources
-
- NewUser
- Posts: 12
- Joined: 15 Jul 2018 21:31
- Status: Offline
Re: Froze then failed with Error 5 on boot (RootOnZFS)
Thank you, yes I think I will just re-install and restore the config + packages. There is definitely something odd going on. With the expanded pools I now have, I have been watching the Memory Usage creep up and up from a fresh boot and the system crashed and froze during a large transfer.
The Memory Usage is currently showing "99% of 31.45GiB" after sitting idle for 1 day, 5 hours, 8 minutes, even though I have vfs.zfs.arc_max set to "28G" in "System > Advanced > loader.conf", so something can't be right.
I am currently on "11.2.0.4 - Omnius (revision 6315)". I did a fresh install after the "Error 5" problem earlier in this thread but maybe it is because I didn't wipe the boot disks first? I will fully wipe them and try again - should I try 11.2.0.4 again or wait for 12.0.0.4.6412 to be released?
- JoseMR
- Hardware & Software Guru
- Posts: 950
- Joined: 16 Apr 2014 04:15
- Location: PR
- Contact:
- Status: Offline
Re: Froze then failed with Error 5 on boot (RootOnZFS)
Ahab wrote: ↑09 Feb 2019 13:16
Thank you, yes I think I will just re-install and restore the config + packages. There is definitely something odd going on. With the expanded pools I now have, I have been watching the Memory Usage creep up and up from a fresh boot, and the system crashed and froze during a large transfer. …
Hi, yeah, definitely something may be eating RAM there even with ZFS ARC max set to 28G out of 32G (about a 4G reserve). This can vary with system tune-up, a specific package, or even a possible memory leak, and it takes some time to troubleshoot/debug properly. You might also try limiting "vfs.zfs.arc_max" to 26G and see how the system behaves with about 6G of RAM in reserve. I personally set mine to 10G out of 16G and my system RAM sits at ~92%+; with the previous ZFS ARC at 12G it was too tight and had hit swap before, but I understand this varies highly per system/setup.
However, if you plan on cleaning the drives of leftovers and reinstalling, you can try the latest 11.2.0.4.6400. At that point you may want to set BIOS+UEFI as the boot option and set the desired swap amount as previously noted - ~4GB should be fine for your system.
Since the ZFS partitioning scheme will be the same for further releases, you can later upgrade to a major version without hassle, since each upgrade is contained in its own boot environment.
Please let us know the status about your system stability and RAM usage, so we can further help you on system tuning.
Regards
System: FreeBSD 12.0, MB: Supermicro X8SI6-F, Xeon X3450, 16GB DDR3 ECC RDIMMs.
Devel: XigmaNAS Embedded and Full Latest, VirtualBox, 2 x CPU Cores, 4GB vRAM.
Addons Devel GitHub
JoseMRPubServ Resources