This is the old XigmaNAS forum, in read-only mode;
it will be taken offline by the end of March 2021!



We'd like to ask users and admins to rewrite or move important posts from here into the fresh new main forum!
It is not possible for us to export from here and import into the main forum!

Server freezing, stuck process when using zfs send

Parkcomm
Advanced User
Posts: 384
Joined: 21 Sep 2012 12:58
Location: Australia
Status: Offline

Re: Server freezing, stuck process when using zfs send

Post by Parkcomm »

erico - are you familiar with sysutils/zfs-stats? http://www.freshports.org/sysutils/zfs-stats
It'll run happily from within a jail, so you can install it persistently.
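A minimal install sketch (the jail name "transmission" matches the examples in this thread - substitute your own):

```shell
# Install zfs-stats inside a jail so it persists across reboots of the
# embedded host; "transmission" is this thread's jail name, not a default.
jexec transmission pkg install -y zfs-stats

# Then run the full report from the host:
jexec transmission /usr/local/bin/zfs-stats -a
```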

It might show some useful stats: L2ARC header size, ARC size, kernel memory. Here's mine - you can see I rebooted over the weekend, so the L2ARC is not warmed up yet:

Code: Select all

jexec transmission /usr/local/bin/zfs-stats -a

------------------------------------------------------------------------
ZFS Subsystem Report				Mon Nov  9 08:50:38 2015
------------------------------------------------------------------------

System Information:

	Kernel Version:				1002000 (osreldate)
	Hardware Platform:			amd64
	Processor Architecture:			amd64

	ZFS Storage pool Version:		5000
	ZFS Filesystem Version:			5

FreeBSD 10.2-RELEASE-p5 #0 r288904M: Tue Oct 6 07:09:12 CEST 2015 root
 8:50AM  up 17:04, 0 users, load averages: 0.59, 1.15, 1.15

------------------------------------------------------------------------

System Memory:

	1.03%	102.12	MiB Active,	5.78%	572.67	MiB Inact
	89.71%	8.69	GiB Wired,	0.04%	4.42	MiB Cache
	3.43%	340.50	MiB Free,	0.00%	4.00	KiB Gap

	Real Installed:				10.00	GiB
	Real Available:			99.49%	9.95	GiB
	Real Managed:			97.31%	9.68	GiB

	Logical Total:				10.00	GiB
	Logical Used:			91.04%	9.10	GiB
	Logical Free:			8.96%	917.59	MiB

Kernel Memory:					337.28	MiB
	Data:				90.24%	304.37	MiB
	Text:				9.76%	32.91	MiB

Kernel Memory Map:				9.00	GiB
	Size:				69.43%	6.25	GiB
	Free:				30.57%	2.75	GiB

------------------------------------------------------------------------

ARC Summary: (HEALTHY)
	Memory Throttle Count:			0

ARC Misc:
	Deleted:				1.17m
	Recycle Misses:				735.18k
	Mutex Misses:				806
	Evict Skips:				43.94m

ARC Size:				88.93%	7.11	GiB
	Target Size: (Adaptive)		88.93%	7.11	GiB
	Min Size (Hard Limit):		50.00%	4.00	GiB
	Max Size (High Water):		2:1	8.00	GiB

ARC Size Breakdown:
	Recently Used Cache Size:	93.75%	6.67	GiB
	Frequently Used Cache Size:	6.25%	455.41	MiB

ARC Hash Breakdown:
	Elements Max:				753.91k
	Elements Current:		99.86%	752.87k
	Collisions:				541.88k
	Chain Max:				6
	Chains:					106.78k

------------------------------------------------------------------------

ARC Efficiency:					32.04m
	Cache Hit Ratio:		86.14%	27.60m
	Cache Miss Ratio:		13.86%	4.44m
	Actual Hit Ratio:		76.84%	24.61m

	Data Demand Efficiency:		96.24%	4.65m
	Data Prefetch Efficiency:	11.49%	1.06m

	CACHE HITS BY CACHE LIST:
	  Anonymously Used:		5.05%	1.39m
	  Most Recently Used:		9.31%	2.57m
	  Most Frequently Used:		79.88%	22.04m
	  Most Recently Used Ghost:	1.25%	346.09k
	  Most Frequently Used Ghost:	4.50%	1.24m

	CACHE HITS BY DATA TYPE:
	  Demand Data:			16.21%	4.47m
	  Prefetch Data:		0.44%	122.11k
	  Demand Metadata:		72.85%	20.10m
	  Prefetch Metadata:		10.49%	2.90m

	CACHE MISSES BY DATA TYPE:
	  Demand Data:			3.94%	174.72k
	  Prefetch Data:		21.19%	940.65k
	  Demand Metadata:		65.02%	2.89m
	  Prefetch Metadata:		9.86%	437.65k

------------------------------------------------------------------------

L2 ARC Summary: (HEALTHY)
	Passed Headroom:			3.87m
	Tried Lock Failures:			31.00k
	IO In Progress:				190
	Low Memory Aborts:			19
	Free on Write:				14.70k
	Writes While Full:			2.90k
	R/W Clashes:				48
	Bad Checksums:				0
	IO Errors:				0
	SPA Mismatch:				158.93k

L2 ARC Size: (Adaptive)				49.72	GiB
	Header Size:			0.17%	87.81	MiB

L2 ARC Breakdown:				4.44m
	Hit Ratio:			26.14%	1.16m
	Miss Ratio:			73.86%	3.28m
	Feeds:					64.61k

L2 ARC Buffer:
	Bytes Scanned:				4.47	TiB
	Buffer Iterations:			64.61k
	List Iterations:			4.10m
	NULL List Iterations:			28.22k

L2 ARC Writes:
	Writes Sent:			100.00%	19.38k

------------------------------------------------------------------------

File-Level Prefetch: (HEALTHY)

DMU Efficiency:					50.60m
	Hit Ratio:			44.14%	22.33m
	Miss Ratio:			55.86%	28.26m

	Colinear:				28.26m
	  Hit Ratio:			0.01%	3.98k
	  Miss Ratio:			99.99%	28.26m

	Stride:					21.10m
	  Hit Ratio:			99.95%	21.09m
	  Miss Ratio:			0.05%	10.33k

DMU Misc:
	Reclaim:				28.26m
	  Successes:			0.41%	116.96k
	  Failures:			99.59%	28.14m

	Streams:				1.24m
	  +Resets:			0.20%	2.48k
	  -Resets:			99.80%	1.24m
	  Bogus:				0

------------------------------------------------------------------------

VDEV cache is disabled

------------------------------------------------------------------------

ZFS Tunables (sysctl):
	kern.maxusers                           972
	vm.kmem_size                            9663676416
	vm.kmem_size_scale                      1
	vm.kmem_size_min                        0
	vm.kmem_size_max                        1319413950874
	vfs.zfs.trim.max_interval               1
	vfs.zfs.trim.timeout                    30
	vfs.zfs.trim.txg_delay                  32
	vfs.zfs.trim.enabled                    1
	vfs.zfs.vol.unmap_enabled               1
	vfs.zfs.vol.mode                        1
	vfs.zfs.version.zpl                     5
	vfs.zfs.version.spa                     5000
	vfs.zfs.version.acl                     1
	vfs.zfs.version.ioctl                   4
	vfs.zfs.debug                           0
	vfs.zfs.super_owner                     0
	vfs.zfs.sync_pass_rewrite               2
	vfs.zfs.sync_pass_dont_compress         5
	vfs.zfs.sync_pass_deferred_free         2
	vfs.zfs.zio.exclude_metadata            0
	vfs.zfs.zio.use_uma                     1
	vfs.zfs.cache_flush_disable             0
	vfs.zfs.zil_replay_disable              0
	vfs.zfs.min_auto_ashift                 9
	vfs.zfs.max_auto_ashift                 13
	vfs.zfs.vdev.trim_max_pending           10000
	vfs.zfs.vdev.bio_delete_disable         0
	vfs.zfs.vdev.bio_flush_disable          0
	vfs.zfs.vdev.write_gap_limit            4096
	vfs.zfs.vdev.read_gap_limit             32768
	vfs.zfs.vdev.aggregation_limit          131072
	vfs.zfs.vdev.trim_max_active            64
	vfs.zfs.vdev.trim_min_active            1
	vfs.zfs.vdev.scrub_max_active           2
	vfs.zfs.vdev.scrub_min_active           1
	vfs.zfs.vdev.async_write_max_active     10
	vfs.zfs.vdev.async_write_min_active     1
	vfs.zfs.vdev.async_read_max_active      3
	vfs.zfs.vdev.async_read_min_active      1
	vfs.zfs.vdev.sync_write_max_active      10
	vfs.zfs.vdev.sync_write_min_active      10
	vfs.zfs.vdev.sync_read_max_active       10
	vfs.zfs.vdev.sync_read_min_active       10
	vfs.zfs.vdev.max_active                 1000
	vfs.zfs.vdev.async_write_active_max_dirty_percent 60
	vfs.zfs.vdev.async_write_active_min_dirty_percent 30
	vfs.zfs.vdev.mirror.non_rotating_seek_inc 1
	vfs.zfs.vdev.mirror.non_rotating_inc    0
	vfs.zfs.vdev.mirror.rotating_seek_offset 1048576
	vfs.zfs.vdev.mirror.rotating_seek_inc   5
	vfs.zfs.vdev.mirror.rotating_inc        0
	vfs.zfs.vdev.trim_on_init               1
	vfs.zfs.vdev.cache.bshift               16
	vfs.zfs.vdev.cache.size                 0
	vfs.zfs.vdev.cache.max                  16384
	vfs.zfs.vdev.metaslabs_per_vdev         200
	vfs.zfs.txg.timeout                     5
	vfs.zfs.space_map_blksz                 4096
	vfs.zfs.spa_slop_shift                  5
	vfs.zfs.spa_asize_inflation             24
	vfs.zfs.deadman_enabled                 1
	vfs.zfs.deadman_checktime_ms            5000
	vfs.zfs.deadman_synctime_ms             1000000
	vfs.zfs.recover                         0
	vfs.zfs.spa_load_verify_data            1
	vfs.zfs.spa_load_verify_metadata        1
	vfs.zfs.spa_load_verify_maxinflight     10000
	vfs.zfs.check_hostid                    1
	vfs.zfs.mg_fragmentation_threshold      85
	vfs.zfs.mg_noalloc_threshold            0
	vfs.zfs.condense_pct                    200
	vfs.zfs.metaslab.bias_enabled           1
	vfs.zfs.metaslab.lba_weighting_enabled  1
	vfs.zfs.metaslab.fragmentation_factor_enabled 1
	vfs.zfs.metaslab.preload_enabled        1
	vfs.zfs.metaslab.preload_limit          3
	vfs.zfs.metaslab.unload_delay           8
	vfs.zfs.metaslab.load_pct               50
	vfs.zfs.metaslab.min_alloc_size         33554432
	vfs.zfs.metaslab.df_free_pct            4
	vfs.zfs.metaslab.df_alloc_threshold     131072
	vfs.zfs.metaslab.debug_unload           0
	vfs.zfs.metaslab.debug_load             0
	vfs.zfs.metaslab.fragmentation_threshold 70
	vfs.zfs.metaslab.gang_bang              16777217
	vfs.zfs.free_max_blocks                 -1
	vfs.zfs.no_scrub_prefetch               0
	vfs.zfs.no_scrub_io                     0
	vfs.zfs.resilver_min_time_ms            3000
	vfs.zfs.free_min_time_ms                1000
	vfs.zfs.scan_min_time_ms                1000
	vfs.zfs.scan_idle                       50
	vfs.zfs.scrub_delay                     4
	vfs.zfs.resilver_delay                  2
	vfs.zfs.top_maxinflight                 32
	vfs.zfs.zfetch.array_rd_sz              1048576
	vfs.zfs.zfetch.block_cap                256
	vfs.zfs.zfetch.min_sec_reap             2
	vfs.zfs.zfetch.max_streams              8
	vfs.zfs.prefetch_disable                0
	vfs.zfs.delay_scale                     500000
	vfs.zfs.delay_min_dirty_percent         60
	vfs.zfs.dirty_data_sync                 67108864
	vfs.zfs.dirty_data_max_percent          10
	vfs.zfs.dirty_data_max_max              4294967296
	vfs.zfs.dirty_data_max                  1068311347
	vfs.zfs.max_recordsize                  1048576
	vfs.zfs.mdcomp_disable                  0
	vfs.zfs.nopwrite_enabled                1
	vfs.zfs.dedup.prefetch                  1
	vfs.zfs.l2c_only_size                   47187566080
	vfs.zfs.mfu_ghost_data_lsize            4388028416
	vfs.zfs.mfu_ghost_metadata_lsize        923637248
	vfs.zfs.mfu_ghost_size                  5311665664
	vfs.zfs.mfu_data_lsize                  1004780544
	vfs.zfs.mfu_metadata_lsize              161688576
	vfs.zfs.mfu_size                        1299322880
	vfs.zfs.mru_ghost_data_lsize            1807180288
	vfs.zfs.mru_ghost_metadata_lsize        516296704
	vfs.zfs.mru_ghost_size                  2323476992
	vfs.zfs.mru_data_lsize                  4616723456
	vfs.zfs.mru_metadata_lsize              595258880
	vfs.zfs.mru_size                        5313526272
	vfs.zfs.anon_data_lsize                 0
	vfs.zfs.anon_metadata_lsize             0
	vfs.zfs.anon_size                       3194880
	vfs.zfs.l2arc_norw                      1
	vfs.zfs.l2arc_feed_again                1
	vfs.zfs.l2arc_noprefetch                1
	vfs.zfs.l2arc_feed_min_ms               200
	vfs.zfs.l2arc_feed_secs                 1
	vfs.zfs.l2arc_headroom                  2
	vfs.zfs.l2arc_write_boost               8388608
	vfs.zfs.l2arc_write_max                 8388608
	vfs.zfs.arc_meta_limit                  2147483648
	vfs.zfs.arc_free_target                 17644
	vfs.zfs.arc_shrink_shift                5
	vfs.zfs.arc_average_blocksize           8192
	vfs.zfs.arc_min                         4294967296
	vfs.zfs.arc_max                         8589934592

------------------------------------------------------------------------
NAS4Free Embedded 10.2.0.2 - Prester (revision 2003), HP N40L Microserver (AMD Turion) with modified BIOS, ZFS Mirror 4 x WD Red + L2ARC 128M Apple SSD, 10G ECC Ram, Intel 1G CT NIC + inbuilt broadcom

erico.bettoni
Experienced User
Posts: 140
Joined: 25 Jun 2012 22:36
Location: São Paulo - Brasil
Status: Offline

Re: Server freezing, stuck process when using zfs send

Post by erico.bettoni »

Not yet, but I will get to it.
I'll install it in one of my jails and report back.

Parkcomm
Advanced User
Posts: 384
Joined: 21 Sep 2012 12:58
Location: Australia
Status: Offline

Re: Server freezing, stuck process when using zfs send

Post by Parkcomm »

Why not set up a five-minute cron job, so you have the most recent results from just before a crash? (System|Advanced|Cron)

First, in the terminal:

Code: Select all

touch /mnt/Perm/zfs-stats.old /mnt/Perm/zfs-stats.now
Then, in the cron page:

Code: Select all

cp /mnt/Perm/zfs-stats.now /mnt/Perm/zfs-stats.old && jexec transmission /usr/local/bin/zfs-stats -a > /mnt/Perm/zfs-stats.now
This means that when you get a crash you have the most recent stats - and the one before that, so you can see what changed (to the nearest interval, as defined by your cron setup).

In this case you'll need to change "Perm" to your own directory - it has to be on your data partition to survive a reboot (tmp is no good) -
and "transmission" to your own jail name.

If you don't use jails, just run

Code: Select all

pkg install zfs-stats
It takes two minutes, but the package won't survive a reboot, and you can drop the "jexec transmission" prefix.
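The two commands above can also live in one small script called from cron (the paths and jail name are this thread's examples, not defaults - adjust both):

```shell
#!/bin/sh
# Keep the two most recent zfs-stats reports so a crash leaves a
# before/after pair to compare.
STATS_DIR=/mnt/Perm            # must be on a data pool, not tmp
NOW="$STATS_DIR/zfs-stats.now"
OLD="$STATS_DIR/zfs-stats.old"

# Rotate the previous report, then capture a fresh one from inside the jail.
cp "$NOW" "$OLD" && jexec transmission /usr/local/bin/zfs-stats -a > "$NOW"
```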

erico.bettoni
Experienced User
Posts: 140
Joined: 25 Jun 2012 22:36
Location: São Paulo - Brasil
Status: Offline

Re: Server freezing, stuck process when using zfs send

Post by erico.bettoni »

Here is the output right after a crash.
Just to clarify one thing: the system itself doesn't crash - only the process reading files from the tank gets stuck.

Code: Select all


------------------------------------------------------------------------
ZFS Subsystem Report				Mon Nov  9 14:57:36 2015
------------------------------------------------------------------------

System Information:

	Kernel Version:				1002000 (osreldate)
	Hardware Platform:			amd64
	Processor Architecture:			amd64

	ZFS Storage pool Version:		5000
	ZFS Filesystem Version:			5

FreeBSD 10.2-RELEASE-p7 #0 r290499M: Sat Nov 7 18:14:14 CET 2015 root
 2:57PM  up  1:26, 2 users, load averages: 0.41, 0.25, 0.19

------------------------------------------------------------------------

System Memory:

	0.62%	198.42	MiB Active,	1.92%	612.31	MiB Inact
	68.77%	21.38	GiB Wired,	0.00%	0 Cache
	28.68%	8.92	GiB Free,	0.00%	4.00	KiB Gap

	Real Installed:				32.00	GiB
	Real Available:			99.75%	31.92	GiB
	Real Managed:			97.39%	31.09	GiB

	Logical Total:				32.00	GiB
	Logical Used:			70.27%	22.49	GiB
	Logical Free:			29.73%	9.51	GiB

Kernel Memory:					288.58	MiB
	Data:				88.75%	256.12	MiB
	Text:				11.25%	32.46	MiB

Kernel Memory Map:				31.09	GiB
	Size:				40.07%	12.46	GiB
	Free:				59.93%	18.63	GiB

------------------------------------------------------------------------

ARC Summary: (HEALTHY)
	Memory Throttle Count:			0

ARC Misc:
	Deleted:				46
	Recycle Misses:				110
	Mutex Misses:				0
	Evict Skips:				3

ARC Size:				100.00%	12.00	GiB
	Target Size: (Adaptive)		100.00%	12.00	GiB
	Min Size (Hard Limit):		50.00%	6.00	GiB
	Max Size (High Water):		2:1	12.00	GiB

ARC Size Breakdown:
	Recently Used Cache Size:	74.39%	8.93	GiB
	Frequently Used Cache Size:	25.61%	3.07	GiB

ARC Hash Breakdown:
	Elements Max:				107.99k
	Elements Current:		100.00%	107.98k
	Collisions:				3.17k
	Chain Max:				2
	Chains:					1.31k

------------------------------------------------------------------------

ARC Efficiency:					4.02m
	Cache Hit Ratio:		96.98%	3.90m
	Cache Miss Ratio:		3.02%	121.49k
	Actual Hit Ratio:		96.98%	3.90m

	Data Demand Efficiency:		97.48%	3.92m

	CACHE HITS BY CACHE LIST:
	  Most Recently Used:		37.10%	1.45m
	  Most Frequently Used:		62.90%	2.45m
	  Most Recently Used Ghost:	0.00%	0
	  Most Frequently Used Ghost:	0.00%	0

	CACHE HITS BY DATA TYPE:
	  Demand Data:			97.94%	3.82m
	  Prefetch Data:		0.00%	0
	  Demand Metadata:		2.06%	80.46k
	  Prefetch Metadata:		0.00%	0

	CACHE MISSES BY DATA TYPE:
	  Demand Data:			81.28%	98.75k
	  Prefetch Data:		0.00%	0
	  Demand Metadata:		18.66%	22.68k
	  Prefetch Metadata:		0.06%	69

------------------------------------------------------------------------

L2 ARC Summary: (HEALTHY)
	Passed Headroom:			171.33k
	Tried Lock Failures:			140
	IO In Progress:				0
	Low Memory Aborts:			0
	Free on Write:				2
	Writes While Full:			300
	R/W Clashes:				0
	Bad Checksums:				0
	IO Errors:				0
	SPA Mismatch:				10.61k

L2 ARC Size: (Adaptive)				10.53	GiB
	Header Size:			0.00%	0

L2 ARC Breakdown:				121.46k
	Hit Ratio:			0.00%	0
	Miss Ratio:			100.00%	121.46k
	Feeds:					5.47k

L2 ARC Buffer:
	Bytes Scanned:				479.06	GiB
	Buffer Iterations:			5.47k
	List Iterations:			348.14k
	NULL List Iterations:			863

L2 ARC Writes:
	Writes Sent:			100.00%	4.72k

------------------------------------------------------------------------


------------------------------------------------------------------------

VDEV cache is disabled

------------------------------------------------------------------------

ZFS Tunables (sysctl):
	kern.maxusers                           2378
	vm.kmem_size                            33378516992
	vm.kmem_size_scale                      1
	vm.kmem_size_min                        0
	vm.kmem_size_max                        1319413950874
	vfs.zfs.trim.max_interval               1
	vfs.zfs.trim.timeout                    30
	vfs.zfs.trim.txg_delay                  32
	vfs.zfs.trim.enabled                    1
	vfs.zfs.vol.unmap_enabled               1
	vfs.zfs.vol.mode                        1
	vfs.zfs.version.zpl                     5
	vfs.zfs.version.spa                     5000
	vfs.zfs.version.acl                     1
	vfs.zfs.version.ioctl                   4
	vfs.zfs.debug                           0
	vfs.zfs.super_owner                     0
	vfs.zfs.sync_pass_rewrite               2
	vfs.zfs.sync_pass_dont_compress         5
	vfs.zfs.sync_pass_deferred_free         2
	vfs.zfs.zio.exclude_metadata            0
	vfs.zfs.zio.use_uma                     1
	vfs.zfs.cache_flush_disable             0
	vfs.zfs.zil_replay_disable              0
	vfs.zfs.min_auto_ashift                 9
	vfs.zfs.max_auto_ashift                 13
	vfs.zfs.vdev.trim_max_pending           10000
	vfs.zfs.vdev.bio_delete_disable         0
	vfs.zfs.vdev.bio_flush_disable          0
	vfs.zfs.vdev.write_gap_limit            4096
	vfs.zfs.vdev.read_gap_limit             32768
	vfs.zfs.vdev.aggregation_limit          131072
	vfs.zfs.vdev.trim_max_active            64
	vfs.zfs.vdev.trim_min_active            1
	vfs.zfs.vdev.scrub_max_active           2
	vfs.zfs.vdev.scrub_min_active           1
	vfs.zfs.vdev.async_write_max_active     10
	vfs.zfs.vdev.async_write_min_active     1
	vfs.zfs.vdev.async_read_max_active      3
	vfs.zfs.vdev.async_read_min_active      1
	vfs.zfs.vdev.sync_write_max_active      10
	vfs.zfs.vdev.sync_write_min_active      10
	vfs.zfs.vdev.sync_read_max_active       10
	vfs.zfs.vdev.sync_read_min_active       10
	vfs.zfs.vdev.max_active                 1000
	vfs.zfs.vdev.async_write_active_max_dirty_percent 60
	vfs.zfs.vdev.async_write_active_min_dirty_percent 30
	vfs.zfs.vdev.mirror.non_rotating_seek_inc 1
	vfs.zfs.vdev.mirror.non_rotating_inc    0
	vfs.zfs.vdev.mirror.rotating_seek_offset 1048576
	vfs.zfs.vdev.mirror.rotating_seek_inc   5
	vfs.zfs.vdev.mirror.rotating_inc        0
	vfs.zfs.vdev.trim_on_init               1
	vfs.zfs.vdev.cache.bshift               16
	vfs.zfs.vdev.cache.size                 0
	vfs.zfs.vdev.cache.max                  16384
	vfs.zfs.vdev.metaslabs_per_vdev         200
	vfs.zfs.txg.timeout                     5
	vfs.zfs.space_map_blksz                 4096
	vfs.zfs.spa_slop_shift                  5
	vfs.zfs.spa_asize_inflation             24
	vfs.zfs.deadman_enabled                 1
	vfs.zfs.deadman_checktime_ms            5000
	vfs.zfs.deadman_synctime_ms             1000000
	vfs.zfs.recover                         0
	vfs.zfs.spa_load_verify_data            1
	vfs.zfs.spa_load_verify_metadata        1
	vfs.zfs.spa_load_verify_maxinflight     10000
	vfs.zfs.check_hostid                    1
	vfs.zfs.mg_fragmentation_threshold      85
	vfs.zfs.mg_noalloc_threshold            0
	vfs.zfs.condense_pct                    200
	vfs.zfs.metaslab.bias_enabled           1
	vfs.zfs.metaslab.lba_weighting_enabled  1
	vfs.zfs.metaslab.fragmentation_factor_enabled 1
	vfs.zfs.metaslab.preload_enabled        1
	vfs.zfs.metaslab.preload_limit          3
	vfs.zfs.metaslab.unload_delay           8
	vfs.zfs.metaslab.load_pct               50
	vfs.zfs.metaslab.min_alloc_size         33554432
	vfs.zfs.metaslab.df_free_pct            4
	vfs.zfs.metaslab.df_alloc_threshold     131072
	vfs.zfs.metaslab.debug_unload           0
	vfs.zfs.metaslab.debug_load             0
	vfs.zfs.metaslab.fragmentation_threshold 70
	vfs.zfs.metaslab.gang_bang              16777217
	vfs.zfs.free_max_blocks                 -1
	vfs.zfs.no_scrub_prefetch               0
	vfs.zfs.no_scrub_io                     0
	vfs.zfs.resilver_min_time_ms            3000
	vfs.zfs.free_min_time_ms                1000
	vfs.zfs.scan_min_time_ms                1000
	vfs.zfs.scan_idle                       50
	vfs.zfs.scrub_delay                     4
	vfs.zfs.resilver_delay                  2
	vfs.zfs.top_maxinflight                 32
	vfs.zfs.zfetch.array_rd_sz              1048576
	vfs.zfs.zfetch.block_cap                256
	vfs.zfs.zfetch.min_sec_reap             2
	vfs.zfs.zfetch.max_streams              8
	vfs.zfs.prefetch_disable                1
	vfs.zfs.delay_scale                     500000
	vfs.zfs.delay_min_dirty_percent         60
	vfs.zfs.dirty_data_sync                 67108864
	vfs.zfs.dirty_data_max_percent          10
	vfs.zfs.dirty_data_max_max              4294967296
	vfs.zfs.dirty_data_max                  3427388211
	vfs.zfs.max_recordsize                  1048576
	vfs.zfs.mdcomp_disable                  0
	vfs.zfs.nopwrite_enabled                1
	vfs.zfs.dedup.prefetch                  1
	vfs.zfs.l2c_only_size                   0
	vfs.zfs.mfu_ghost_data_lsize            24414208
	vfs.zfs.mfu_ghost_metadata_lsize        589312
	vfs.zfs.mfu_ghost_size                  25003520
	vfs.zfs.mfu_data_lsize                  3221051904
	vfs.zfs.mfu_metadata_lsize              5565440
	vfs.zfs.mfu_size                        3227078144
	vfs.zfs.mru_ghost_data_lsize            197090816
	vfs.zfs.mru_ghost_metadata_lsize        1821696
	vfs.zfs.mru_ghost_size                  198912512
	vfs.zfs.mru_data_lsize                  9499830272
	vfs.zfs.mru_metadata_lsize              7433728
	vfs.zfs.mru_size                        9585528320
	vfs.zfs.anon_data_lsize                 0
	vfs.zfs.anon_metadata_lsize             0
	vfs.zfs.anon_size                       52736
	vfs.zfs.l2arc_norw                      1
	vfs.zfs.l2arc_feed_again                1
	vfs.zfs.l2arc_noprefetch                1
	vfs.zfs.l2arc_feed_min_ms               200
	vfs.zfs.l2arc_feed_secs                 1
	vfs.zfs.l2arc_headroom                  2
	vfs.zfs.l2arc_write_boost               8388608
	vfs.zfs.l2arc_write_max                 8388608
	vfs.zfs.arc_meta_limit                  3221225472
	vfs.zfs.arc_free_target                 56540
	vfs.zfs.arc_shrink_shift                5
	vfs.zfs.arc_average_blocksize           8192
	vfs.zfs.arc_min                         6442450944
	vfs.zfs.arc_max                         12884901888

------------------------------------------------------------------------


erico.bettoni
Experienced User
Posts: 140
Joined: 25 Jun 2012 22:36
Location: São Paulo - Brasil
Status: Offline

Re: Server freezing, stuck process when using zfs send

Post by erico.bettoni »

Bmillett wrote:I have 2 servers at each of my clients' locations. They mirror each other each night (and some at noon) using zfs send/receive.
It is this copy process that causes the machines to "break" quickly.
Because I have 2 machines, I will move everything to one and format the 2nd, then move everything to the 2nd, format the 1st, and then get the syncing back up again with both on 9.3.
Very disappointing that this is happening. It's comforting to see that I'm not the only one. My hardware setup is as simple as possible so it "runs forever".
I hope 9.3 will just run and run. Like a good Unix should!!
Were you able to test 9.3?

Parkcomm
Advanced User
Posts: 384
Joined: 21 Sep 2012 12:58
Location: Australia
Status: Offline

Re: Server freezing, stuck process when using zfs send

Post by Parkcomm »

I can't see anything weird, except that kmem is at the max - did you try tuning vm.kmem_size?

Also, unrelated:

if you set vfs.zfs.debug=1 you might get a clue from the console messages
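On the command line that's simply (a runtime sysctl, so it resets on reboot):

```shell
# Enable verbose ZFS debug messages (they land on the console / in dmesg).
sysctl vfs.zfs.debug=1
# ...reproduce the stuck zfs send, then turn it off again - it is noisy:
sysctl vfs.zfs.debug=0
```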

erico.bettoni
Experienced User
Posts: 140
Joined: 25 Jun 2012 22:36
Location: São Paulo - Brasil
Status: Offline

Re: Server freezing, stuck process when using zfs send

Post by erico.bettoni »

Parkcomm wrote:I can't see anything weird, except that kmem is at the max - did you try tuning vm.kmem_size?

Also, unrelated:

if you set vfs.zfs.debug=1 you might get a clue from the console messages
I was tuning it, but stopped, as daoyama and others told me it's better to leave it alone on FreeBSD 10.
Do you think it should be higher? On FreeBSD 9 I used to set it to 1.5x the size of the RAM.
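For reference, the old FreeBSD 9-era tuning would have looked something like this in /boot/loader.conf (values are illustrative for a 32 GiB box like the one above; per the advice in this thread, leave them unset on FreeBSD 10, which sizes kmem automatically):

```shell
# /boot/loader.conf - FreeBSD 9 rule of thumb: vm.kmem_size ~ 1.5x RAM
# (48G here for a 32 GiB machine). Illustrative only; do not set on 10.x.
vm.kmem_size="48G"
vm.kmem_size_max="48G"
```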

Parkcomm
Advanced User
Posts: 384
Joined: 21 Sep 2012 12:58
Location: Australia
Status: Offline

Re: Server freezing, stuck process when using zfs send

Post by Parkcomm »

At this point I'm guessing - I used to tune it on N4F 9 as well, but I don't on N4F 10 and I haven't seen a problem. I think I tuned it to 1 GiB less than RAM.

erico.bettoni
Experienced User
Posts: 140
Joined: 25 Jun 2012 22:36
Location: São Paulo - Brasil
Status: Offline

Re: Server freezing, stuck process when using zfs send

Post by erico.bettoni »

Reporting back on this thread!

After the update to 10.3 the problem is gone.
I've also created a zvol for swap and disabled the swap partition on the USB stick.

No more crashes, no more memory allocation errors - everything is fine, even with several VMs running and ZFS allowed to use as much memory as it wants. Memory use is above 90% and still no crash!
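For anyone wanting to reproduce the swap change, a sketch along the lines of the FreeBSD Handbook's ZFS-swap recipe (the pool name "tank" and the 4G size are assumptions, not from the post):

```shell
# Create a zvol and tag it as swap; /etc/rc.d/zvol picks up the
# org.freebsd:swap=on property and activates the device at boot.
zfs create -V 4G -o org.freebsd:swap=on -o checksum=off \
    -o compression=off -o sync=always -o primarycache=metadata tank/swap

# Enable it immediately, then disable swap on the USB stick as desired.
swapon /dev/zvol/tank/swap
```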

Post Reply

Return to “ZFS (only!)”