Memory issues
Error323
NewUser
Posts: 5
Joined: 30 May 2017 20:23

Memory issues

#1

Post by Error323 » 17 Mar 2019 20:04

Hi everyone,

Recently I upgraded my NAS to 11.2.0.4 - Omnius (revision 6400) with a fully new install. Everything works great, except that ZFS is extremely memory hungry, to the point where it starts killing other processes (nfsd, lighttpd, ...). I'm only running it as a pure NAS, with no VMs or anything. Secondly, I've set vfs.zfs.arc_min (2G) and vfs.zfs.arc_max (10G) well within my memory bounds. Furthermore, I've added a swap partition as a ZFS volume (4G).

Code:

System Information
Hostname 	muis.local
Version 	11.2.0.4 - Omnius (revision 6400)
Compiled 	Monday January 21 22:45:46 CET 2019
Platform OS 	FreeBSD 11.2-RELEASE-p8 #0 r343240M: Mon Jan 21 03:24:40 CET 2019
Platform 	x64-embedded on Intel(R) Core(TM) i3-8100 CPU @ 3.60GHz
System 	Gigabyte Technology Co., Ltd. B360N WIFI-CF
System BIOS 	American Megatrends Inc. Version: F2 04/19/2018
System Time 	Sunday March 17 20:02:24 CET 2019
System Uptime 	8 Hours 45 Minutes 46 Seconds
System Config Change 	Sunday March 17 11:17:41 CET 2019
CPU Frequency 	100MHz
CPU Usage 	3%
CPU Core Usage
	Core 0: 	7%	Temp: 	29.0 °C
	Core 1: 	3%	Temp: 	30.0 °C
	Core 2: 	3%	Temp: 	30.0 °C
	Core 3: 	1%	Temp: 	29.0 °C
Memory Usage 	99% of 15.69GiB	
Swap Usage 	
1% of 4.29GB
Device: /dev/zvol/storage/swap | Total: 4.29GB | Used: 36.62MB | Free: 4.25GB
Load Averages
Disk Space Usage 	
storage
23% of 15.94TB
Total: 15.94TB | Alloc: 3.73TB | Free: 12.2TB | State: ONLINE
Now my main question is: how can I keep ZFS's memory consumption in check? It should never start killing other processes, but I just cannot find a way to stop that from happening.

Kind regards,
Error

edit: I should note that my previous version from 2017 (I forgot the version; the xml config says 1.7) did not have any issues, but it also never utilized all memory.

raulfg3
Site Admin
Posts: 4834
Joined: 22 Jun 2012 22:13
Location: Madrid (ESPAÑA)

Re: Memory issues

#2

Post by raulfg3 » 17 Mar 2019 21:06

System > Advanced > loader.conf

Configure and enable the ZFS variables there.


PS: Google a bit what vfs.zfs.arc_max means and what the correct values are for your RAM.
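For reference, loader.conf tunables use a `name="value"` syntax. A minimal sketch, with illustrative values only (not a recommendation from this post), assuming a machine with roughly 16 GiB of RAM:

```
vfs.zfs.arc_min="2G"
vfs.zfs.arc_max="4G"
```

In XigmaNAS these pairs are entered under System > Advanced > loader.conf rather than by editing the file directly.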
12.0.0.4 - BETA (revision 6625)+OBI on SUPERMICRO X8SIL-F 8GB of ECC RAM, 12x3TB disk in 3 vdev in RaidZ1 = 32TB Raw size only 22TB usable


Error323
NewUser
Posts: 5
Joined: 30 May 2017 20:23

Re: Memory issues

#3

Post by Error323 » 17 Mar 2019 21:23

Hi raulfg3, I do understand what it means (https://www.freebsd.org/doc/handbook/zfs-advanced.html) and believe I have set correct values. But it's not really helping. Are there more useful parameters I should be aware of?

sleid
PowerUser
Posts: 724
Joined: 23 Jun 2012 07:36
Location: FRANCE LIMOUSIN CORREZE

Re: Memory issues

#4

Post by sleid » 17 Mar 2019 22:21

vfs.zfs.arc_max=12G

vfs.zfs.arc_min=1G

vfs.zfs.prefetch_disable=1
12.0.0.4 - BETA (revision 6625)
FreeBSD 12.0-RELEASE-p3 #0 r345740M: Sat Mar 30 23:51:25 CET 2019
X64-embedded sur Intel(R) Atom(TM) CPU C2550 @ 2.40GHz Boot UEFI
ASRock C2550D4I 2 X 8GB DDR3 ECC
Pool of 2 vdev Raidz1: 3 x WDC WD40EFRX + 3 x WDC WD30EZRX

raulfg3
Site Admin
Posts: 4834
Joined: 22 Jun 2012 22:13
Location: Madrid (ESPAÑA)

Re: Memory issues

#5

Post by raulfg3 » 18 Mar 2019 07:14

Error323 wrote:
17 Mar 2019 21:23
Hi raulfg3, I indeed understand what it means (https://www.freebsd.org/doc/handbook/zfs-advanced.html) and believe to have set correct values. But it's not really helping. Are there more useful parameters I should be aware of?
Please post a screen capture of your values and your RAM.

Error323
NewUser
Posts: 5
Joined: 30 May 2017 20:23

Re: Memory issues

#6

Post by Error323 » 18 Mar 2019 16:41

loader.png
xigmanas.png
So I added sleid's suggestion, but without success. The WebGUI (lighttpd) was still killed, while swap isn't full yet.

ms49434
Developer
Posts: 584
Joined: 03 Sep 2015 18:49
Location: Neuenkirchen-Vörden, Germany - GMT+1

Re: Memory issues

#7

Post by ms49434 » 18 Mar 2019 17:19

Error323 wrote:
18 Mar 2019 16:41
loader.pngxigmanas.png

So I added the suggestion by sleid, but without success. Webui (lighttpd) was still killed, whilest swap isn't full yet.
My experience is to start with a low arc_max (25% of total memory is a good start) and monitor the memory consumption of the system over a period of time. Once you are confident that memory consumption has reached its maximum, you can start to increase arc_max.
1) XigmaNAS 11.2.0.4 amd64-embedded on a Dell T20 running in a VM on ESXi 6.7, 22GB out of 32GB ECC RAM, LSI 9300-8i IT mode in passthrough mode. Pool 1: 2x HGST 10TB, mirrored, SLOG: Samsung 850 Pro, L2ARC: Samsung 850 Pro, Pool 2: 1x Samsung 860 EVO 1TB , services: Samba AD, CIFS/SMB, ftp, ctld, rsync, syncthing, zfs snapshots.
2) XigmaNAS 11.2.0.4 amd64-embedded on a Dell T20 running in a VM on ESXi 6.7, 8GB out of 32GB ECC RAM, IBM M1215 crossflashed, IT mode, passthrough mode, 1x HGST 10TB , services: rsync.
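The "start at 25% of RAM" rule of thumb above can be sketched as shell arithmetic; ram_gib here is an assumed input (the original poster's box has roughly 16 GiB):

```shell
# Rule-of-thumb starting point for vfs.zfs.arc_max: 25% of total RAM.
# ram_gib is an assumed input, not something read from the system.
ram_gib=16
arc_max_start="$((ram_gib / 4))G"
echo "$arc_max_start"   # prints 4G
```

From that starting value, the advice above is to watch actual memory use for a while before raising the limit.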

raulfg3
Site Admin
Posts: 4834
Joined: 22 Jun 2012 22:13
Location: Madrid (ESPAÑA)

Re: Memory issues

#8

Post by raulfg3 » 18 Mar 2019 19:59

Try this and test it for 2 days or so:

vfs.zfs.arc_max=8G

vfs.zfs.arc_min=6G

vfs.zfs.prefetch_disable=0 <- not disabled

Error323
NewUser
Posts: 5
Joined: 30 May 2017 20:23

Re: Memory issues

#9

Post by Error323 » 19 Mar 2019 08:12

Hi,

Thanks for all your suggestions/comments. I've now updated the parameters to raulfg3's suggestion and rebooted the server.

Snufkin
Advanced User
Posts: 279
Joined: 01 Jul 2012 11:27
Location: Etc/GMT-3 (BSD style)

Re: Memory issues

#10

Post by Snufkin » 19 Mar 2019 10:28

raulfg3 wrote:
18 Mar 2019 19:59
try this and test it 2 days or so:

vfs.zfs.arc_max=8G

vfs.zfs.arc_min=6G

vfs.zfs.prefetch_disable 0 <- Not Disable
I'd recommend paying attention to the sysctl tunable vfs.zfs.arc_free_target.
When it is defined, ZFS monitors the current memory load and keeps a certain amount of memory free.

A value that has proved to be trouble-free is 32768 pages (128 MB).

If the XNAS operating system needs more memory immediately, this setting gives the ZFS memory manager extra time to reduce the load to the vfs.zfs.arc_min level. Time is important in this case, because flushing buffers to the disk arrays is a time-consuming operation.
XNAS 11.2.0.4 embedded, ASUS P5B-E, Intel DC E6600, 4 GB DDR2, 2 x HGST HDN726040ALE614, 2 x WDC WD5000AAKS, Ippon Back Power Pro 400
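As a sanity check on the numbers above (assuming the 4 KiB page size of amd64, since vfs.zfs.arc_free_target is counted in pages), the page-to-MiB conversion can be done in the shell:

```shell
# Convert an arc_free_target page count to MiB, assuming 4096-byte pages.
echo "$((32768 * 4096 / 1024 / 1024))"   # the trouble-free value -> prints 128
echo "$((27797 * 4096 / 1024 / 1024))"   # a value reported later in this thread -> prints 108
```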

Error323
NewUser
Posts: 5
Joined: 30 May 2017 20:23

Re: Memory issues

#11

Post by Error323 » 20 Mar 2019 21:04

Hi again,

Unfortunately lighttpd still gets killed and memory consumption is > 99% again. vfs.zfs.arc_free_target: 27797. I really fail to understand why this cannot be controlled; it seems like a big problem in general. Am I really the only one dealing with this?

raulfg3
Site Admin
Posts: 4834
Joined: 22 Jun 2012 22:13
Location: Madrid (ESPAÑA)

Re: Memory issues

#12

Post by raulfg3 » 20 Mar 2019 21:18

Yes, you can't find posts from other users with lighttpd killed or memory problems.


Try this:

Save your current config.

Do a fresh install on new media, restore your config, and do new tests.


Perhaps your current install has some corrupt files that exhaust RAM.

PS: Remember that RAM is there to be used, so 99% RAM usage is not a problem by itself; the problem is if it blocks your system. But from your screen captures that is doubtful (no swap usage, so only RAM usage, which is "normal").

mbze430
experienced User
Posts: 101
Joined: 20 Nov 2014 05:41

Re: Memory issues

#13

Post by mbze430 » 18 May 2019 07:25

I have the SAME exact issue, and it might be causing my whole NAS to freeze up.

I run TWO NAS boxes, one with version 11.2.x.x and the other with 11.1.x.x. The NAS running 11.2.x.x constantly runs up to 98% memory usage even though I have vfs.zfs.arc_max set to 18G.
That NAS has 24GB.

My other NAS running 11.1.x.x also has 24GB, with the vfs.zfs.arc_max option turned off, and it doesn't get as high: usually 3/4 of the physical maximum.

On the NAS running 11.2.x.x I did a fresh install and then reloaded my configuration file. I have tried different vfs.zfs.arc_max settings; nothing changes, and it always hits 98%.
NAS #1 - 11.2.0.4 - Omnius (revision 6625) - SuperMicro X10SL7-F w/ 24GB ECC - LSI SAS 9207-16i - 2x RAIDZ1 (10x3TB) Pools and 1x (2x4TB) Stripe Pool
NAS #2 - 11.2.0.4 - Omnius (revision 6625) - SuperMicro X10SLM-F w/32GB ECC - LSI SAS 9207-8i (RAID10) - IBM M1015-IT Mode (RAID10)
