This is the old XigmaNAS forum in read-only mode; it will be taken offline by the end of March 2021!



We ask users and admins to rewrite or move important posts from here into the new main forum.
It is not possible for us to export from here and import into the main forum.

Memory issues

Error323
NewUser
Posts: 6
Joined: 30 May 2017 20:23
Status: Offline

Memory issues

Post by Error323 »

Hi everyone,

Recently I upgraded my NAS to 11.2.0.4 - Omnius (revision 6400) with a completely fresh install. Everything works great, except that ZFS is extremely memory-hungry, to the point where it starts killing other processes (nfsd, lighttpd, ...). I'm only running it as a pure NAS, with no VMs or anything. Secondly, I've set vfs.zfs.arc_min (2G) and vfs.zfs.arc_max (10G) to values well within my memory bounds. Furthermore, I've added a 4G swap partition as a ZFS volume.


System Information
Hostname 	muis.local
Version 	11.2.0.4 - Omnius (revision 6400)
Compiled 	Monday January 21 22:45:46 CET 2019
Platform OS 	FreeBSD 11.2-RELEASE-p8 #0 r343240M: Mon Jan 21 03:24:40 CET 2019
Platform 	x64-embedded on Intel(R) Core(TM) i3-8100 CPU @ 3.60GHz
System 	Gigabyte Technology Co., Ltd. B360N WIFI-CF
System BIOS 	American Megatrends Inc. Version: F2 04/19/2018
System Time 	Sunday March 17 20:02:24 CET 2019
System Uptime 	8 Hours 45 Minutes 46 Seconds
System Config Change 	Sunday March 17 11:17:41 CET 2019
CPU Frequency 	100MHz
CPU Usage 	
3%	
CPU Core Usage 	
	Core 0: 	7%	Temp: 	29.0	°C	
	Core 1: 	3%	Temp: 	30.0	°C	
	Core 2: 	3%	Temp: 	30.0	°C	
	Core 3: 	1%	Temp: 	29.0	°C	
Memory Usage 	99% of 15.69GiB	
Swap Usage 	
1% of 4.29GB
Device: /dev/zvol/storage/swap | Total: 4.29GB | Used: 36.62MB | Free: 4.25GB
Load Averages
Disk Space Usage 	
storage
23% of 15.94TB
Total: 15.94TB | Alloc: 3.73TB | Free: 12.2TB | State: ONLINE

Now my main question is: how can I keep ZFS's memory consumption in check? It should never start killing other processes, but I cannot find a way to stop that from happening.

Kind regards,
Error

edit: I should note that my previous version from 2017 (I forgot the version; the xml config says 1.7) did not have any issues, but it also never utilized all memory.
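For reference, tunables like these go into System > Advanced > loader.conf. A minimal sketch of the values described in this post; the exact lines are a reconstruction, not the poster's actual configuration:

```shell
# /boot/loader.conf -- ZFS ARC bounds as described above (example values)
vfs.zfs.arc_min="2G"     # never shrink the ARC below 2 GiB
vfs.zfs.arc_max="10G"    # cap the ARC at 10 GiB on this ~16 GiB machine
```

These are boot-time loader tunables, so a reboot is needed for them to take effect.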

User avatar
raulfg3
Site Admin
Posts: 4865
Joined: 22 Jun 2012 22:13
Location: Madrid (ESPAÑA)
Contact:
Status: Offline

Re: Memory issues

Post by raulfg3 »

System > Advanced > loader.conf

configure and enable ZFS variables.


PS: Google a bit what vfs.zfs.arc_max means and what the correct values are for your RAM.
12.1.0.4 - Ingva (revision 7743) on SUPERMICRO X8SIL-F 8GB of ECC RAM, 11x3TB disk in 1 vdev = Vpool = 32TB Raw size , so 29TB usable size (I Have other NAS as Backup)

Wiki
Last changes

HP T510

Error323
NewUser
Posts: 6
Joined: 30 May 2017 20:23
Status: Offline

Re: Memory issues

Post by Error323 »

Hi raulfg3, I do understand what it means (https://www.freebsd.org/doc/handbook/zfs-advanced.html) and believe I have set correct values, but it's not really helping. Are there more useful parameters I should be aware of?

sleid
PowerUser
Posts: 774
Joined: 23 Jun 2012 07:36
Location: FRANCE LIMOUSIN CORREZE
Status: Offline

Re: Memory issues

Post by sleid »

vfs.zfs.arc_max 12G

vfs.zfs.arc_min 1G

vfs.zfs.prefetch_disable 1
12.1.0.4 - Ingva (revision 7852)
FreeBSD 12.1-RELEASE-p12 #0 r368465M: Tue Dec 8 23:25:11 CET 2020
X64-embedded sur Intel(R) Atom(TM) CPU C2750 @ 2.40GHz Boot UEFI
ASRock C2750D4I 2 X 8GB DDR3 ECC
Pool of 2 vdev Raidz1: 3 WDC WD40EFRX + 3 WDC WD40EFRX

User avatar
raulfg3
Site Admin
Posts: 4865
Joined: 22 Jun 2012 22:13
Location: Madrid (ESPAÑA)
Contact:
Status: Offline

Re: Memory issues

Post by raulfg3 »

Error323 wrote:
17 Mar 2019 21:23
Hi raulfg3, I do understand what it means (https://www.freebsd.org/doc/handbook/zfs-advanced.html) and believe I have set correct values, but it's not really helping. Are there more useful parameters I should be aware of?
Please post a screen capture of your values and your RAM.
12.1.0.4 - Ingva (revision 7743) on SUPERMICRO X8SIL-F 8GB of ECC RAM, 11x3TB disk in 1 vdev = Vpool = 32TB Raw size , so 29TB usable size (I Have other NAS as Backup)

Wiki
Last changes

HP T510

Error323
NewUser
Posts: 6
Joined: 30 May 2017 20:23
Status: Offline

Re: Memory issues

Post by Error323 »

loader.png
xigmanas.png
So I added sleid's suggestion, but without success. The WebGUI (lighttpd) was still killed, while swap isn't full yet.

User avatar
ms49434
Developer
Posts: 828
Joined: 03 Sep 2015 18:49
Location: Neuenkirchen-Vörden, Germany - GMT+1
Contact:
Status: Offline

Re: Memory issues

Post by ms49434 »

Error323 wrote:
18 Mar 2019 16:41
loader.png, xigmanas.png

So I added sleid's suggestion, but without success. The WebGUI (lighttpd) was still killed, while swap isn't full yet.
My experience is to start with a low arc_max (25% of total memory is a good start) and monitor the memory consumption of the system over a period of time. Once you are confident that memory consumption has reached its max, you can start to increase arc_max.
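The 25% starting point described above works out like this for a machine with roughly the OP's amount of RAM (the numbers are illustrative, not prescriptive):

```shell
# Hedged sketch: start arc_max at 25% of physical RAM, then raise it
# gradually while watching memory usage over a period of time.
total_gib=16                       # roughly the OP's 16 GiB machine
arc_max_gib=$((total_gib / 4))     # 25% starting point
echo "vfs.zfs.arc_max=\"${arc_max_gib}G\""   # prints: vfs.zfs.arc_max="4G"
```

The idea is to converge on the largest ARC cap that still leaves headroom for services like nfsd and lighttpd.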
1) XigmaNAS 12.1.0.4 amd64-embedded on a Dell T20 running in a VM on ESXi 6.7U3, 22GB out of 32GB ECC RAM, LSI 9300-8i IT mode in passthrough mode. Pool 1: 2x HGST 10TB, mirrored, L2ARC: Samsung 850 Pro; Pool 2: 1x Samsung 860 EVO 1TB, SLOG: Samsung SM883, services: Samba AD, CIFS/SMB, ftp, ctld, rsync, syncthing, zfs snapshots.
2) XigmaNAS 12.1.0.4 amd64-embedded on a Dell T20 running in a VM on ESXi 6.7U3, 8GB out of 32GB ECC RAM, IBM M1215 crossflashed, IT mode, passthrough mode, 2x HGST 10TB , services: rsync.

User avatar
raulfg3
Site Admin
Posts: 4865
Joined: 22 Jun 2012 22:13
Location: Madrid (ESPAÑA)
Contact:
Status: Offline

Re: Memory issues

Post by raulfg3 »

try this and test it 2 days or so:

vfs.zfs.arc_max=8G

vfs.zfs.arc_min=6G

vfs.zfs.prefetch_disable 0 <- i.e. leave prefetch enabled
12.1.0.4 - Ingva (revision 7743) on SUPERMICRO X8SIL-F 8GB of ECC RAM, 11x3TB disk in 1 vdev = Vpool = 32TB Raw size , so 29TB usable size (I Have other NAS as Backup)

Wiki
Last changes

HP T510

Error323
NewUser
Posts: 6
Joined: 30 May 2017 20:23
Status: Offline

Re: Memory issues

Post by Error323 »

Hi,

Thanks for all your suggestions/comments. I've now updated the parameters to raulfg3's suggestion and rebooted the server.

User avatar
Snufkin
Advanced User
Posts: 317
Joined: 01 Jul 2012 11:27
Location: Etc/GMT-3 (BSD style)
Status: Offline

Re: Memory issues

Post by Snufkin »

raulfg3 wrote:
18 Mar 2019 19:59
try this and test it 2 days or so:

vfs.zfs.arc_max=8G

vfs.zfs.arc_min=6G

vfs.zfs.prefetch_disable 0 <- i.e. leave prefetch enabled
I'd recommend paying attention to the sysctl tunable vfs.zfs.arc_free_target.
When it is defined, ZFS monitors the current memory load and keeps a certain amount of it free.

A value that has proved to be trouble-free is 32768 pages (128 MB).

If the XNAS operating system needs more memory immediately, this setting gives the ZFS memory manager extra time to reduce the load to the vfs.zfs.arc_min level. Time is important in this case, because flushing buffers to the disk arrays is a time-consuming operation.
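To sanity-check the recommendation above: vfs.zfs.arc_free_target is counted in pages, and pages on amd64 are 4 KiB, so 32768 pages does indeed come out to 128 MiB. A quick arithmetic check, assuming the 4 KiB page size:

```shell
# vfs.zfs.arc_free_target counts pages; on amd64 a page is 4 KiB,
# so 32768 pages = 128 MiB kept free for the rest of the system.
page_size=4096
pages=32768
free_mib=$((pages * page_size / 1024 / 1024))
echo "${free_mib} MiB kept free"    # prints: 128 MiB kept free
```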
XNAS 11.4.0.4 embedded, ASUS P5B-E, Intel DC E6600, 4 GB DDR2
ZFS 2 x HGST HDN726040ALE614, L2ARC PLEXTOR PX-128M5S

Error323
NewUser
Posts: 6
Joined: 30 May 2017 20:23
Status: Offline

Re: Memory issues

Post by Error323 »

Hi again,

Unfortunately, lighttpd still gets killed and memory consumption is > 99% again (vfs.zfs.arc_free_target: 27797). I really fail to understand why this cannot be controlled; it seems like a big problem in general. Am I really the only one dealing with this?

User avatar
raulfg3
Site Admin
Posts: 4865
Joined: 22 Jun 2012 22:13
Location: Madrid (ESPAÑA)
Contact:
Status: Offline

Re: Memory issues

Post by raulfg3 »

Yes, you won't find posts from other users with lighttpd killed or memory problems.


Try this:

Save your current config.

Do a fresh install on new media, restore your config, and run new tests.


Perhaps your current install has some corrupt files that exhaust RAM.

PS: Remember that RAM is meant to be used, so 99% RAM usage is not a problem by itself; the problem is if it blocks your system. From your screen captures that is doubtful (no swap usage, so it's only RAM usage, which is "normal").
12.1.0.4 - Ingva (revision 7743) on SUPERMICRO X8SIL-F 8GB of ECC RAM, 11x3TB disk in 1 vdev = Vpool = 32TB Raw size , so 29TB usable size (I Have other NAS as Backup)

Wiki
Last changes

HP T510

mbze430
experienced User
Posts: 95
Joined: 20 Nov 2014 05:41
Status: Offline

Re: Memory issues

Post by mbze430 »

I have the SAME exact issue, and it might be causing my whole NAS to freeze up.

I run TWO NAS boxes, one with version 11.2.x.x and the other with 11.1.x.x. The NAS running 11.2.x.x is constantly running up to 98% memory usage even though I have vfs.zfs.arc_max set to 18G. That NAS has 24GB of RAM.

My other NAS, running 11.1.x.x, also has 24GB, with the vfs.zfs.arc_max option unset, and it doesn't get as high; usually around 3/4 of max physical memory.

On the NAS running 11.2.x.x I did a fresh install and then reloaded my configuration file. I have tried different vfs.zfs.arc_max settings; nothing changes, and it always hits 98%.
NAS #1 - 11.2.0.4 - Omnius (revision 6625) - SuperMicro X10SL7-F w/ 24GB ECC - LSI SAS 9207-16i - 2x RAIDZ1 (10x3TB) Pools and 1x (2x4TB) Stripe Pool
NAS #2 - 11.2.0.4 - Omnius (revision 6625) - SuperMicro X10SLM-F w/32GB ECC - LSI SAS 9207-8i (RAID10) - IBM M1015-IT Mode (RAID10)

User avatar
Lee Sharp
Advanced User
Posts: 251
Joined: 13 May 2013 21:12
Contact:
Status: Offline

Re: Memory issues

Post by Lee Sharp »

Running at 99% is not uncommon. I have a brand-new NAS with 64 gigs of RAM, and the only thing running is rsync populating it from the old NAS. Memory is full. The better question is why the services are failing. To start, reset the memory limits to the defaults and see if things change. And just look at stability, not the percentage.

Error323
NewUser
Posts: 6
Joined: 30 May 2017 20:23
Status: Offline

Re: Memory issues

Post by Error323 »

Hi Lee Sharp,

I'm sorry to say the problem is still occurring, to the point where I'm tempted to buy another 16G of RAM. But I don't think that would resolve the issue. I've upgraded nas4free to the latest version and it still starts killing processes like the web UI and nfsd.

Any more advice is greatly appreciated.

Onichan
Advanced User
Posts: 238
Joined: 04 Jul 2012 21:41
Status: Offline

Re: Memory issues

Post by Onichan »

Are you running any scripts or anything that might be reading/copying data on the system other than SMB/NFS? I had a similar issue with a zfs send to a second pool on the local system. The destination wasn't writing as fast as the source could read the data, so the system would suck up a bunch of RAM to buffer it all and then kill my VM. The OS kills processes instead of simply refusing to give processes more RAM, and ZFS does not release RAM nearly quickly enough, so I wouldn't trust that free-target tunable to be useful. I ended up throttling my zfs send using a third-party tool, and that solved my problem.

User avatar
Lee Sharp
Advanced User
Posts: 251
Joined: 13 May 2013 21:12
Contact:
Status: Offline

Re: Memory issues

Post by Lee Sharp »

I found a fix for me and forgot about this thread... You NEED a swap device larger than your physical RAM now. It seems that when it is smaller, memory management can be a bit slow if demand goes up fast, and so processes are killed. I dropped in a 128-gig USB3 stick, formatted it for swap, and never had the problem again. (But memory usage still runs in the 90s.)
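For anyone trying this workaround, adding a dedicated swap device on FreeBSD looks roughly like this. The da1 device name is an assumption for illustration; check your own device list first, and note the next reply's caveat about thumb-drive wear and IOPS:

```shell
# Sketch: turn a spare disk (hypothetically da1) into a labeled swap device.
gpart create -s gpt da1                      # new GPT partition table
gpart add -t freebsd-swap -l extswap da1     # swap partition, labeled "extswap"
swapon /dev/gpt/extswap                      # enable it immediately
# To make it persistent across reboots, add to /etc/fstab:
# /dev/gpt/extswap  none  swap  sw  0  0
```

This destroys any data on the device, so double-check the device name before running it.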

Onichan
Advanced User
Posts: 238
Joined: 04 Jul 2012 21:41
Status: Offline

Re: Memory issues

Post by Onichan »

No, you don't require swap; I've been running without swap for ~6 years. You do need to throttle any commands or scripts you're running that read and transfer or write data, such as zfs send or rsync. Those processes will read data as fast as they can and continue to cache as much as they can in RAM. If you're reading faster than you're transferring or writing the data, it will fill up the RAM and the OOM killer will run, since the ZFS ARC doesn't release RAM quickly.

I use pv (pipe throughput monitor) https://freebsd.pkgs.org/11/freebsd-por ... 6.txz.html to throttle my zfs send by adding "| /mnt/apps/pv -qL 150m |" in the middle of the zfs send.
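Spelled out, the throttled send described above would look roughly like this. The dataset names and the 150 MB/s limit are illustrative, and pv must be installed (here assumed at /mnt/apps/pv, as in the post):

```shell
# Sketch: cap a local zfs send at ~150 MB/s so the read side cannot
# outrun the write side and balloon RAM usage while buffering.
# -q suppresses pv's progress output; -L 150m is the rate limit.
zfs send storage/data@snap \
  | /mnt/apps/pv -qL 150m \
  | zfs receive backup/data
```

The rate limit should be tuned to roughly what the destination pool can sustain.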

If you really want swap, I wouldn't use a thumb drive; even USB3 thumb drives have low IOPS, and they aren't designed for the heavy usage that swap will cause. I would just use a spare SSD.

edewit
NewUser
Posts: 4
Joined: 30 Mar 2020 06:12
Status: Offline

Re: Memory issues

Post by edewit »

I had more or less the same problems. More or less in the sense that I didn't notice processes being killed, AFAIK. What I DID notice is that my memory usage grew steadily to 99% or so, and that it THEN took ages to log in using the WebGUI. Subsequent menu selections each took forever, but eventually, most of the time, it worked. Killing rsync jobs seemed to speed things up. I'm talking about 15-20 minute delays here.

This happened with version 10.x on older hardware, and again now with version 12.x on brand-new hardware. The new system has a Pentium J5005 and 32GB of RAM, with 4x 3TB drives in RAIDZ2. Shutdown commands start to execute but get terminated after a 90-second delay, according to screen messages from the NAS. I use rsync but not much else.

In System > Advanced > loader.conf, only hw.nvme.use_nvd has a green check mark, with a value of 0.

Is this enough info to come up with a thought about what could be going on here?
CPU: Pentium Silver J5005
RAM: 32GB
Mobo: Asrock J5005-ITX
4x 3TB WD Red in Raidz2
PicoPSU-80-WI-32V on 24V
XigmaNAS version 12.1.0.4.7728
Case: iStarUSA S-35 with BEYST SS harddrive cage
