
Memory issues

Error323
NewUser
Posts: 6
Joined: 30 May 2017 20:23
Status: Offline

Memory issues

#1

Post by Error323 » 17 Mar 2019 20:04

Hi everyone,

Recently I upgraded my NAS to 11.2.0.4 - Omnius (revision 6400) with a completely fresh install. Everything works great, except that ZFS is extremely memory hungry, to the point where it starts killing other processes (nfsd, lighttpd, ...). I'm only running it as a pure NAS, with no VMs or anything. Secondly, I've set vfs.zfs.arc_min (2G) and vfs.zfs.arc_max (10G) well within my memory bounds. Furthermore, I've added a 4G swap partition as a ZFS volume.
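For reference, here's a minimal sketch of those settings as they'd appear in System > Advanced > loader.conf, plus one way the swap zvol can be created (the pool name "storage" matches the system information below; the creation flags are an illustration, not my verbatim commands):

Code: Select all

# in System > Advanced > loader.conf:
vfs.zfs.arc_min="2G"
vfs.zfs.arc_max="10G"

# swap zvol creation from the shell (the org.freebsd:swap property lets
# rc.d/zvol enable it at boot; note that swap on a ZFS zvol is known to
# behave badly under memory pressure on FreeBSD):
zfs create -V 4G -o org.freebsd:swap=on storage/swap

Here's the full system information: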

Code: Select all

System Information
Hostname 	muis.local
Version 	11.2.0.4 - Omnius (revision 6400)
Compiled 	Monday January 21 22:45:46 CET 2019
Platform OS 	FreeBSD 11.2-RELEASE-p8 #0 r343240M: Mon Jan 21 03:24:40 CET 2019
Platform 	x64-embedded on Intel(R) Core(TM) i3-8100 CPU @ 3.60GHz
System 	Gigabyte Technology Co., Ltd. B360N WIFI-CF
System BIOS 	American Megatrends Inc. Version: F2 04/19/2018
System Time 	Sunday March 17 20:02:24 CET 2019
System Uptime 	8 Hours 45 Minutes 46 Seconds
System Config Change 	Sunday March 17 11:17:41 CET 2019
CPU Frequency 	100MHz
CPU Usage 	3%
CPU Core Usage
	Core 0: 7%	Temp: 29.0 °C
	Core 1: 3%	Temp: 30.0 °C
	Core 2: 3%	Temp: 30.0 °C
	Core 3: 1%	Temp: 29.0 °C
Memory Usage 	99% of 15.69GiB
Swap Usage 	1% of 4.29GB
	Device: /dev/zvol/storage/swap | Total: 4.29GB | Used: 36.62MB | Free: 4.25GB
Load Averages
Disk Space Usage 	storage: 23% of 15.94TB
	Total: 15.94TB | Alloc: 3.73TB | Free: 12.2TB | State: ONLINE
Now my main question is: how can I keep ZFS's memory consumption in check? It should never start killing other processes, but I just cannot find a way to stop that from happening.

Kind regards,
Error

edit: I should note that my previous version from 2017 (I forgot the exact version; the XML config says 1.7) did not have any issues, but it also never utilized all the memory.

raulfg3
Site Admin
Posts: 4913
Joined: 22 Jun 2012 22:13
Location: Madrid (ESPAÑA)
Contact:
Status: Offline

Re: Memory issues

#2

Post by raulfg3 » 17 Mar 2019 21:06

System > Advanced > loader.conf

Configure and enable the ZFS variables there.


PS: google a bit what vfs.zfs.arc_max means and what the correct values are for your amount of RAM.
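For example, the configured cap can be compared with the live ARC size using standard FreeBSD sysctls:

Code: Select all

# configured ARC cap vs. actual current ARC size (both in bytes)
sysctl vfs.zfs.arc_max
sysctl kstat.zfs.misc.arcstats.size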
12.0.0.4 - BETA (revision 6625)+OBI on SUPERMICRO X8SIL-F 8GB of ECC RAM, 12x3TB disk in 3 vdev in RaidZ1 = 32TB Raw size only 22TB usable


Error323
NewUser
Posts: 6
Joined: 30 May 2017 20:23
Status: Offline

Re: Memory issues

#3

Post by Error323 » 17 Mar 2019 21:23

Hi raulfg3, I do understand what it means (https://www.freebsd.org/doc/handbook/zfs-advanced.html) and believe I have set correct values, but it's not really helping. Are there more useful parameters I should be aware of?

sleid
PowerUser
Posts: 737
Joined: 23 Jun 2012 07:36
Location: FRANCE LIMOUSIN CORREZE
Status: Offline

Re: Memory issues

#4

Post by sleid » 17 Mar 2019 22:21

vfs.zfs.arc_max="12G"
vfs.zfs.arc_min="1G"
vfs.zfs.prefetch_disable="1"
12.0.0.4 - Reticulus (revision 6766)
FreeBSD 12.0-RELEASE-p6 #0 r349200M: Wed Jun 19 20:27:52 CEST 2019
X64-embedded on Intel(R) Atom(TM) CPU C2750 @ 2.40GHz, UEFI boot
ASRock C2750D4I 2 X 8GB DDR3 ECC
Pool of 2 vdev Raidz1: 3 x WDC WD40EFRX + 3 x WDC WD30EZRX

raulfg3
Site Admin
Posts: 4913
Joined: 22 Jun 2012 22:13
Location: Madrid (ESPAÑA)
Contact:
Status: Offline

Re: Memory issues

#5

Post by raulfg3 » 18 Mar 2019 07:14

Error323 wrote:
17 Mar 2019 21:23
Hi raulfg3, I do understand what it means (https://www.freebsd.org/doc/handbook/zfs-advanced.html) and believe I have set correct values, but it's not really helping. Are there more useful parameters I should be aware of?
please post a screen capture of your values and your RAM
12.0.0.4 - BETA (revision 6625)+OBI on SUPERMICRO X8SIL-F 8GB of ECC RAM, 12x3TB disk in 3 vdev in RaidZ1 = 32TB Raw size only 22TB usable


Error323
NewUser
Posts: 6
Joined: 30 May 2017 20:23
Status: Offline

Re: Memory issues

#6

Post by Error323 » 18 Mar 2019 16:41

loader.png
xigmanas.png
So I added sleid's suggestion, but without success. The WebGUI (lighttpd) was still killed, while swap isn't full yet.

ms49434
Developer
Posts: 658
Joined: 03 Sep 2015 18:49
Location: Neuenkirchen-Vörden, Germany - GMT+1
Contact:
Status: Offline

Re: Memory issues

#7

Post by ms49434 » 18 Mar 2019 17:19

Error323 wrote:
18 Mar 2019 16:41
loader.png xigmanas.png

So I added sleid's suggestion, but without success. The WebGUI (lighttpd) was still killed, while swap isn't full yet.
My experience is to start with a low arc_max (25% of total memory is a good start) and monitor the system's memory consumption over a period of time. Once you are confident that memory consumption has reached its maximum, you can start to increase arc_max.
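For the 16GiB machine in this thread, a sketch of that starting point (25% of 16GiB = 4GiB) in System > Advanced > loader.conf would be:

Code: Select all

# conservative starting point: 25% of 16 GiB RAM
vfs.zfs.arc_max="4G"

Then watch kstat.zfs.misc.arcstats.size for a few days before raising it.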
1) XigmaNAS 12.0.0.4 amd64-embedded on a Dell T20 running in a VM on ESXi 6.7U2, 22GB out of 32GB ECC RAM, LSI 9300-8i IT mode in passthrough mode. Pool 1: 2x HGST 10TB, mirrored, SLOG: Samsung 850 Pro, L2ARC: Samsung 850 Pro, Pool 2: 1x Samsung 860 EVO 1TB , services: Samba AD, CIFS/SMB, ftp, ctld, rsync, syncthing, zfs snapshots.
2) XigmaNAS 12.0.0.4 amd64-embedded on a Dell T20 running in a VM on ESXi 6.7U2, 8GB out of 32GB ECC RAM, IBM M1215 crossflashed, IT mode, passthrough mode, 2x HGST 10TB , services: rsync.

raulfg3
Site Admin
Posts: 4913
Joined: 22 Jun 2012 22:13
Location: Madrid (ESPAÑA)
Contact:
Status: Offline

Re: Memory issues

#8

Post by raulfg3 » 18 Mar 2019 19:59

try this and test it for 2 days or so:

vfs.zfs.arc_max="8G"
vfs.zfs.arc_min="6G"
vfs.zfs.prefetch_disable="0" <- not disabled (prefetch stays on)
12.0.0.4 - BETA (revision 6625)+OBI on SUPERMICRO X8SIL-F 8GB of ECC RAM, 12x3TB disk in 3 vdev in RaidZ1 = 32TB Raw size only 22TB usable


Error323
NewUser
Posts: 6
Joined: 30 May 2017 20:23
Status: Offline

Re: Memory issues

#9

Post by Error323 » 19 Mar 2019 08:12

Hi,

Thanks for all your suggestions/comments. I've now updated the parameters to raulfg3's suggestion and rebooted the server.

Snufkin
Advanced User
Posts: 281
Joined: 01 Jul 2012 11:27
Location: Etc/GMT-3 (BSD style)
Status: Offline

Re: Memory issues

#10

Post by Snufkin » 19 Mar 2019 10:28

raulfg3 wrote:
18 Mar 2019 19:59
try this and test it for 2 days or so:

vfs.zfs.arc_max="8G"
vfs.zfs.arc_min="6G"
vfs.zfs.prefetch_disable="0" <- not disabled (prefetch stays on)
I'd recommend paying attention to the sysctl tunable vfs.zfs.arc_free_target.
When it is defined, ZFS monitors the current memory load and keeps a certain amount of memory free.

A value that has proven trouble-free is 32768 pages (128 MB).

If the XNAS operating system needs more memory immediately, this setting gives the ZFS memory manager extra time to reduce the load to the vfs.zfs.arc_min level. Time matters here, because flushing buffers to the disk arrays is a time-consuming operation.
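The tunable is counted in memory pages, so with the usual 4 KiB page size the arithmetic is 32768 pages × 4 KiB = 128 MiB. It is a writable sysctl, so it can also be tried at runtime before persisting it:

Code: Select all

# keep roughly 128 MiB free: 32768 pages * 4 KiB/page = 128 MiB
sysctl vfs.zfs.arc_free_target=32768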
XNAS 11.2.0.4 embedded, ASUS P5B-E, Intel DC E6600, 4 GB DDR2, 2 x HGST HDN726040ALE614, 2 x WDC WD5000AAKS, Ippon Back Power Pro 400

Error323
NewUser
Posts: 6
Joined: 30 May 2017 20:23
Status: Offline

Re: Memory issues

#11

Post by Error323 » 20 Mar 2019 21:04

Hi again,

Unfortunately lighttpd still gets killed and memory consumption is > 99% again (vfs.zfs.arc_free_target: 27797). I really fail to understand why this cannot be controlled; it seems like a big problem in general. Am I really the only one dealing with this?

raulfg3
Site Admin
Posts: 4913
Joined: 22 Jun 2012 22:13
Location: Madrid (ESPAÑA)
Contact:
Status: Offline

Re: Memory issues

#12

Post by raulfg3 » 20 Mar 2019 21:18

Yes, you won't find posts from other users about lighttpd being killed or memory problems.

Try this:

Save your current config.

Do a fresh install on new media, restore your config, and run new tests.

Perhaps your current install has some corrupt files that are exhausting RAM.

PS: Remember that RAM is there to be used, so 99% RAM usage is not a problem by itself; the problem is if it blocks your system. But from your screen captures that's doubtful (no swap usage, so only RAM usage, which is "normal").
12.0.0.4 - BETA (revision 6625)+OBI on SUPERMICRO X8SIL-F 8GB of ECC RAM, 12x3TB disk in 3 vdev in RaidZ1 = 32TB Raw size only 22TB usable


mbze430
experienced User
Posts: 103
Joined: 20 Nov 2014 05:41
Status: Offline

Re: Memory issues

#13

Post by mbze430 » 18 May 2019 07:25

I have the SAME exact issue, and it might be causing my whole NAS to freeze up.

I run TWO NAS boxes, one with version 11.2.x.x and the other with 11.1.x.x. The NAS running 11.2.x.x constantly runs up to 98% memory usage even though I have vfs.zfs.arc_max set to 18G. That NAS has 24GB.

My other NAS, running 11.1.x.x and also with 24GB, has the vfs.zfs.arc_max option turned off, and it doesn't get as high: usually 3/4 of physical memory.

On the NAS running 11.2.x.x I did a fresh install and then reloaded my configuration file. I have tried different vfs.zfs.arc_max settings; nothing changes, it always hits 98%.
NAS #1 - 11.2.0.4 - Omnius (revision 6625) - SuperMicro X10SL7-F w/ 24GB ECC - LSI SAS 9207-16i - 2x RAIDZ1 (10x3TB) Pools and 1x (2x4TB) Stripe Pool
NAS #2 - 11.2.0.4 - Omnius (revision 6625) - SuperMicro X10SLM-F w/32GB ECC - LSI SAS 9207-8i (RAID10) - IBM M1015-IT Mode (RAID10)

Lee Sharp
Advanced User
Posts: 253
Joined: 13 May 2013 21:12
Contact:
Status: Offline

Re: Memory issues

#14

Post by Lee Sharp » 03 Jun 2019 00:50

Running at 99% is not uncommon. I have a brand new NAS with 64 gigs of RAM where the only thing running is rsync populating it from the old NAS, and memory is full. The better question is why services are failing. To start, reset the memory limits to defaults and see if things change. And look at stability, not the percentage.
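After removing the custom entries from System > Advanced > loader.conf and rebooting, the effective values can be confirmed from the shell:

Code: Select all

# verify the ARC limits are back at their defaults after the reboot
sysctl vfs.zfs.arc_max vfs.zfs.arc_min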

Error323
NewUser
Posts: 6
Joined: 30 May 2017 20:23
Status: Offline

Re: Memory issues

#15

Post by Error323 » 22 Jun 2019 16:12

Hi Lee Sharp,

I'm sorry to say the problem is still occurring, to the point where I'm tempted to buy another 16G of RAM. But I don't think that would resolve the issue. I've upgraded nas4free to the latest version and it still starts killing processes like the web UI and nfsd.

Any more advice is greatly appreciated.

Onichan
Advanced User
Posts: 236
Joined: 04 Jul 2012 21:41
Status: Offline

Re: Memory issues

#16

Post by Onichan » 25 Jun 2019 04:12

Are you running any scripts or anything else that might be reading/copying data on the system besides SMB/NFS? I had a similar issue with a zfs send to a 2nd pool on the local system. The destination wasn't writing as fast as the source could read, so the system would suck up a bunch of RAM to buffer it all and then kill my VM. The OS kills processes instead of just not giving processes more RAM. ZFS does not release RAM nearly quickly enough, so I wouldn't trust the arc_free_target tunable to be useful. I ended up throttling my zfs send using a 3rd-party app, and that solved my problem.

Lee Sharp
Advanced User
Posts: 253
Joined: 13 May 2013 21:12
Contact:
Status: Offline

Re: Memory issues

#17

Post by Lee Sharp » 28 Jul 2019 01:45

I found a fix for me and forgot this thread... You NEED a swap device larger than your physical RAM now. It seems that when it is smaller, memory management can be a bit slow if demand rises fast, and so processes get killed. I dropped in a 128GB USB3 stick, formatted it for swap, and have never had the problem again. (But memory still runs in the 90s.)
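For anyone doing the same from the shell rather than the WebGUI, a rough FreeBSD sketch (the device name da1 is an assumption, and this destroys whatever is on it):

Code: Select all

# WARNING: wipes da1 (device name is an example only)
gpart create -s gpt da1
gpart add -t freebsd-swap -l swap0 da1
swapon /dev/gpt/swap0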

Onichan
Advanced User
Posts: 236
Joined: 04 Jul 2012 21:41
Status: Offline

Re: Memory issues

#18

Post by Onichan » 28 Jul 2019 17:02

No, you don't require swap; I've been running without swap for ~6 years. You do need to throttle any commands or scripts you run that read and then transfer or write data, such as zfs send or rsync. Those processes read data as fast as they can and keep caching as much as they can in RAM. If you're reading faster than you're transferring or writing, RAM fills up and the OOM killer runs, since ZFS ARC doesn't release RAM quickly.

I use pv (pipe throughput monitor, https://freebsd.pkgs.org/11/freebsd-por ... 6.txz.html) to throttle my zfs send by adding "| /mnt/apps/pv -qL 150m |" in the middle of the pipeline.
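Spelled out, a throttled local send of that shape looks roughly like this (pool and snapshot names are made up; -L caps the throughput and -q suppresses the progress display):

Code: Select all

# throttle a local zfs send to ~150 MB/s (names are examples)
zfs send storage/data@snap | /mnt/apps/pv -qL 150m | zfs receive backup/data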

If you really want swap, I wouldn't use a thumb drive; even USB3 thumb drives have low IOPS, and they aren't designed for the heavy usage that swap causes. I would just use a spare SSD.
