poor ZFS read performance

rs232
Starter
Posts: 59
Joined: 25 Jun 2012 13:48
Status: Offline

poor ZFS read performance

Post by rs232 »

I'm testing ZFS after migrating from FreeNAS 8.2 to NAS4Free 9; I haven't upgraded the ZFS pool version yet.
The pool is made of 4x2TB disks in raidz1. NAS4Free is running in a VM with 4GB of RAM on VMware ESXi 5.0.
I have enabled the ZFS kernel tuning and set it to 4GB.

I'm testing over CIFS from/to my laptop on a gigabit LAN with jumbo frames (9K) enabled.


I have to say the write performance is fantastic: I get a staggering 70-80 MByte/sec.

The read performance, though, is rather disappointing: 5-10 MByte/sec.
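
For reference, a quick way to confirm that the 4GB tuning actually took effect is to read back the ARC sysctls (a sketch; these OID names are the ones FreeBSD 9's ZFS uses):

Code: Select all

# ARC size limits ZFS is actually running with (values in bytes);
# they should reflect the 4GB kernel tuning mentioned above
sysctl vfs.zfs.arc_max vfs.zfs.arc_min
# Physical memory visible to the VM, for comparison
sysctl -n hw.physmem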

raulfg3
Site Admin
Posts: 4865
Joined: 22 Jun 2012 22:13
Location: Madrid (ESPAÑA)
Contact:
Status: Offline

Re: poor ZFS read performance

Post by raulfg3 »

Sorry, but at the moment there are no posts in the new forum that can help you; you'll need to read the old FreeNAS forum, where you may find something useful:

http://sourceforge.net/apps/phpbb/freen ... m.php?f=62
http://sourceforge.net/apps/phpbb/freen ... m.php?f=45
http://sourceforge.net/apps/phpbb/freen ... m.php?f=46

At the moment I'm trying to save the most important posts from the old FreeNAS forum, because SourceForge will delete it on September 1st. It's an enormous job, so if you find a post that helps you, please consider writing a new post in the appropriate subforum to help others with the same problems. Any help is welcome.


E.g. this post: http://sourceforge.net/apps/phpbb/freen ... 62&t=12008
or this other one: http://sourceforge.net/apps/phpbb/freen ... =62&t=5827

This is one I consider important and have copied to the new forum: viewtopic.php?f=75&t=206&p=465#p465
12.1.0.4 - Ingva (revision 7743) on SUPERMICRO X8SIL-F, 8GB of ECC RAM, 11x3TB disks in 1 vdev = 32TB raw size, so 29TB usable (I have another NAS as backup)

Wiki
Last changes

HP T510

rs232
Starter
Posts: 59
Joined: 25 Jun 2012 13:48
Status: Offline

Re: poor ZFS read performance

Post by rs232 »

I've found posts from people with the very same problem, but no answers/solutions.

Regarding the old forum, I suggest you make a full copy using site-mirroring software such as Offline Explorer or similar.

rs232
Starter
Posts: 59
Joined: 25 Jun 2012 13:48
Status: Offline

Re: poor ZFS read performance

Post by rs232 »

This is really bothering me now...

Does anybody know where the ZFS parameters are stored in NAS4Free? I looked in /etc/rc.conf but couldn't find anything ZFS-related.

viper4444
NewUser
Posts: 8
Joined: 01 Jul 2012 18:15
Status: Offline

Re: poor ZFS read performance

Post by viper4444 »

I'm having the same problem, but not to this extent (see my new post). You can see the ZFS parameters with 'zfs get' and 'sysctl vfs.zfs' on the command line.
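
For example (a sketch; 'raid5' is assumed to be the pool name, as it appears later in this thread):

Code: Select all

# Pool/dataset properties that affect read behaviour
zfs get recordsize,compression,atime,primarycache raid5
# All kernel-level ZFS tunables (ARC sizing, prefetch, ...)
sysctl vfs.zfs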

raulfg3
Site Admin
Posts: 4865
Joined: 22 Jun 2012 22:13
Location: Madrid (ESPAÑA)
Contact:
Status: Offline

Re: poor ZFS read performance

Post by raulfg3 »

12.1.0.4 - Ingva (revision 7743) on SUPERMICRO X8SIL-F, 8GB of ECC RAM, 11x3TB disks in 1 vdev = 32TB raw size, so 29TB usable (I have another NAS as backup)

Wiki
Last changes

HP T510

rs232
Starter
Posts: 59
Joined: 25 Jun 2012 13:48
Status: Offline

Re: poor ZFS read performance

Post by rs232 »

There's something I really don't understand; please see the data below:

Reading from ZFS over CIFS (5-15 MB/sec)

Code: Select all

nas4free:~# zpool iostat 1
               capacity     operations    bandwidth
pool        alloc   free   read  write   read  write
raid5       4.35T  2.90T      5      0   760K      0
raid5       4.35T  2.90T    115      0  14.5M      0
raid5       4.35T  2.90T    232      0  29.1M      0
raid5       4.35T  2.90T    148    115  18.6M   330K
raid5       4.35T  2.90T    173      0  21.7M      0
raid5       4.35T  2.90T    200      0  25.1M      0
raid5       4.35T  2.90T    200      0  25.0M      0
raid5       4.35T  2.90T    173      0  21.7M      0
raid5       4.35T  2.90T    175      7  21.9M  16.8K
raid5       4.35T  2.90T    179    132  22.4M   338K
raid5       4.35T  2.90T    216      0  27.1M      0
raid5       4.35T  2.90T    188      0  23.5M      0
raid5       4.35T  2.90T    214      0  26.9M      0
raid5       4.35T  2.90T    113    109  14.2M   290K
raid5       4.35T  2.90T    125      0  15.7M      0
raid5       4.35T  2.90T    250      0  31.3M      0
raid5       4.35T  2.90T    216      0  27.1M      0
raid5       4.35T  2.90T    196      0  24.5M      0
raid5       4.35T  2.90T    193      0  24.1M      0
raid5       4.35T  2.90T    197      0  24.6M      0
raid5       4.35T  2.90T    214      0  26.9M      0
raid5       4.35T  2.90T    189      0  23.6M      0
raid5       4.35T  2.90T    204      0  25.6M      0
raid5       4.35T  2.90T    144    105  18.1M   267K
raid5       4.35T  2.90T     85      0  10.6M      0
raid5       4.35T  2.90T    161      0  20.2M      0
raid5       4.35T  2.90T    140      0  17.6M      0
raid5       4.35T  2.90T    111      0  14.0M      0
raid5       4.35T  2.90T      0    266      0  1.79M
raid5       4.35T  2.90T     91      0  11.4M      0
raid5       4.35T  2.90T    160      0  19.9M      0
raid5       4.35T  2.90T     70      0  8.79M      0
raid5       4.35T  2.90T    112      0  14.1M      0
raid5       4.35T  2.90T      0    226      0  1.36M
Writing to ZFS over CIFS (70-80 MB/sec)

Code: Select all

nas4free:~# zpool iostat 1
               capacity     operations    bandwidth
pool        alloc   free   read  write   read  write
raid5       4.35T  2.90T      2      2  6.44K   380K
raid5       4.35T  2.90T     49     76   115K  9.53M
raid5       4.35T  2.90T      7    406  18.3K  50.5M
raid5       4.35T  2.90T     48     74   116K  9.05M
raid5       4.35T  2.90T     29  1.57K  73.8K  9.09M
raid5       4.35T  2.90T     48     81   112K  9.91M
raid5       4.35T  2.90T     51     82   122K  10.3M
raid5       4.35T  2.90T     21  1.50K  50.5K  9.26M
raid5       4.35T  2.90T     48     70   113K  8.79M
raid5       4.35T  2.90T     28  1.37K  69.3K  11.0M
raid5       4.35T  2.90T     47     64   116K  8.05M
raid5       4.35T  2.90T     35  1.19K  86.6K  12.0M
raid5       4.35T  2.90T     34     57  80.7K  7.18M
raid5       4.35T  2.90T     48     81   114K  9.91M
raid5       4.35T  2.90T     25  1.20K  62.4K  9.00M
raid5       4.35T  2.90T     50    109   121K  10.2M
raid5       4.35T  2.90T     24  1.12K  56.9K  10.1M
raid5       4.35T  2.90T     37  1.09K  88.1K  10.9M
raid5       4.35T  2.90T     40     65  93.6K  8.17M
raid5       4.35T  2.90T     29  1.01K  69.3K  8.31M
raid5       4.35T  2.90T     50     80   112K  10.0M
raid5       4.35T  2.90T     27  1.01K  62.9K  8.38M
raid5       4.35T  2.90T     44   1003   105K  11.2M
raid5       4.35T  2.90T     33     70  80.2K  8.55M
raid5       4.35T  2.90T     27  1.13K  64.9K  10.2M
raid5       4.35T  2.90T     33    996  80.7K  10.1M
raid5       4.35T  2.90T     43    800   102K  11.8M
raid5       4.35T  2.90T     34    181  80.2K  8.54M
raid5       4.35T  2.90T     28    941  67.8K  8.52M
raid5       4.35T  2.90T     50    581   119K  9.52M
raid5       4.35T  2.90T     32    383  79.7K  8.84M
raid5       4.35T  2.90T     27    963  67.3K  8.95M
raid5       4.35T  2.90T     36   1011  90.1K  11.5M
raid5       4.35T  2.90T      9    254  21.6K  31.9M
raid5       4.35T  2.90T     33    909  78.7K  10.8M
As you can see, the read operations are very low compared to the writes: a maximum of roughly 250 vs 1,500! The throughput shows more or less the same ratio (about 10 MB/s vs 70 MB/s).
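
In case it helps narrow this down, per-device statistics would show whether a single disk in the raidz1 is holding the reads back (a sketch using standard FreeBSD tools; the device-name pattern is an assumption and may differ under ESXi):

Code: Select all

# Per-vdev / per-disk breakdown of the pool while a CIFS read is running
zpool iostat -v raid5 1
# Per-device latency and %busy; adjust the regex to the actual disk names
gstat -f '^(ada|da)[0-9]'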


Raw write performance:

Code: Select all

nas4free:/mnt/raid5/tmp# dd if=/dev/zero of=testfile bs=1024 count=500000
500000+0 records in
500000+0 records out
512000000 bytes transferred in 14.490316 secs (35333943 bytes/sec)
Raw read performance:

Code: Select all

nas4free:/mnt/raid5/tmp# dd if=testfile of=/dev/zero bs=1024 count=500000
500000+0 records in
500000+0 records out
512000000 bytes transferred in 3.731848 secs (137197442 bytes/sec)
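
As a side note, dd with bs=1024 issues very small I/Os and tends to understate raw throughput; a larger block size and a file bigger than RAM give a more representative number (a sketch; the path and sizes are arbitrary):

Code: Select all

# Sequential write then read with 1MB blocks (~8GB, i.e. larger than the
# 4GB of RAM so the read-back cannot be served entirely from the ARC)
dd if=/dev/zero of=/mnt/raid5/tmp/bigfile bs=1m count=8000
dd if=/mnt/raid5/tmp/bigfile of=/dev/null bs=1m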

Code: Select all

nas4free:/mnt/raid5/tmp# sysctl kstat
kstat.zfs.misc.xuio_stats.onloan_read_buf: 0
kstat.zfs.misc.xuio_stats.onloan_write_buf: 0
kstat.zfs.misc.xuio_stats.read_buf_copied: 0
kstat.zfs.misc.xuio_stats.read_buf_nocopy: 0
kstat.zfs.misc.xuio_stats.write_buf_copied: 0
kstat.zfs.misc.xuio_stats.write_buf_nocopy: 0
kstat.zfs.misc.zfetchstats.hits: 84268951
kstat.zfs.misc.zfetchstats.misses: 4876043
kstat.zfs.misc.zfetchstats.colinear_hits: 3344
kstat.zfs.misc.zfetchstats.colinear_misses: 4872699
kstat.zfs.misc.zfetchstats.stride_hits: 83731844
kstat.zfs.misc.zfetchstats.stride_misses: 34622
kstat.zfs.misc.zfetchstats.reclaim_successes: 155518
kstat.zfs.misc.zfetchstats.reclaim_failures: 4717181
kstat.zfs.misc.zfetchstats.streams_resets: 658
kstat.zfs.misc.zfetchstats.streams_noresets: 536924
kstat.zfs.misc.zfetchstats.bogus_streams: 0
kstat.zfs.misc.arcstats.hits: 16150200
kstat.zfs.misc.arcstats.misses: 1054665
kstat.zfs.misc.arcstats.demand_data_hits: 9195015
kstat.zfs.misc.arcstats.demand_data_misses: 121428
kstat.zfs.misc.arcstats.demand_metadata_hits: 6190037
kstat.zfs.misc.arcstats.demand_metadata_misses: 231357
kstat.zfs.misc.arcstats.prefetch_data_hits: 98626
kstat.zfs.misc.arcstats.prefetch_data_misses: 664995
kstat.zfs.misc.arcstats.prefetch_metadata_hits: 666522
kstat.zfs.misc.arcstats.prefetch_metadata_misses: 36885
kstat.zfs.misc.arcstats.mru_hits: 2137744
kstat.zfs.misc.arcstats.mru_ghost_hits: 101408
kstat.zfs.misc.arcstats.mfu_hits: 13247308
kstat.zfs.misc.arcstats.mfu_ghost_hits: 132333
kstat.zfs.misc.arcstats.allocated: 1350027
kstat.zfs.misc.arcstats.deleted: 848432
kstat.zfs.misc.arcstats.stolen: 885683
kstat.zfs.misc.arcstats.recycle_miss: 65853
kstat.zfs.misc.arcstats.mutex_miss: 21
kstat.zfs.misc.arcstats.evict_skip: 43588
kstat.zfs.misc.arcstats.evict_l2_cached: 0
kstat.zfs.misc.arcstats.evict_l2_eligible: 74125959680
kstat.zfs.misc.arcstats.evict_l2_ineligible: 36573444096
kstat.zfs.misc.arcstats.hash_elements: 114879
kstat.zfs.misc.arcstats.hash_elements_max: 128835
kstat.zfs.misc.arcstats.hash_collisions: 2431813
kstat.zfs.misc.arcstats.hash_chains: 33664
kstat.zfs.misc.arcstats.hash_chain_max: 11
kstat.zfs.misc.arcstats.p: 1137793536
kstat.zfs.misc.arcstats.c: 1610612736
kstat.zfs.misc.arcstats.c_min: 1610612736
kstat.zfs.misc.arcstats.c_max: 1610612736
kstat.zfs.misc.arcstats.size: 1166190064
kstat.zfs.misc.arcstats.hdr_size: 28496208
kstat.zfs.misc.arcstats.data_size: 1058932224
kstat.zfs.misc.arcstats.other_size: 78761632
kstat.zfs.misc.arcstats.l2_hits: 0
kstat.zfs.misc.arcstats.l2_misses: 0
kstat.zfs.misc.arcstats.l2_feeds: 0
kstat.zfs.misc.arcstats.l2_rw_clash: 0
kstat.zfs.misc.arcstats.l2_read_bytes: 0
kstat.zfs.misc.arcstats.l2_write_bytes: 0
kstat.zfs.misc.arcstats.l2_writes_sent: 0
kstat.zfs.misc.arcstats.l2_writes_done: 0
kstat.zfs.misc.arcstats.l2_writes_error: 0
kstat.zfs.misc.arcstats.l2_writes_hdr_miss: 0
kstat.zfs.misc.arcstats.l2_evict_lock_retry: 0
kstat.zfs.misc.arcstats.l2_evict_reading: 0
kstat.zfs.misc.arcstats.l2_free_on_write: 0
kstat.zfs.misc.arcstats.l2_abort_lowmem: 0
kstat.zfs.misc.arcstats.l2_cksum_bad: 0
kstat.zfs.misc.arcstats.l2_io_error: 0
kstat.zfs.misc.arcstats.l2_size: 0
kstat.zfs.misc.arcstats.l2_hdr_size: 0
kstat.zfs.misc.arcstats.memory_throttle_count: 15
kstat.zfs.misc.arcstats.l2_write_trylock_fail: 0
kstat.zfs.misc.arcstats.l2_write_passed_headroom: 0
kstat.zfs.misc.arcstats.l2_write_spa_mismatch: 0
kstat.zfs.misc.arcstats.l2_write_in_l2: 0
kstat.zfs.misc.arcstats.l2_write_io_in_progress: 0
kstat.zfs.misc.arcstats.l2_write_not_cacheable: 296377
kstat.zfs.misc.arcstats.l2_write_full: 0
kstat.zfs.misc.arcstats.l2_write_buffer_iter: 0
kstat.zfs.misc.arcstats.l2_write_pios: 0
kstat.zfs.misc.arcstats.l2_write_buffer_bytes_scanned: 0
kstat.zfs.misc.arcstats.l2_write_buffer_list_iter: 0
kstat.zfs.misc.arcstats.l2_write_buffer_list_null_iter: 0
kstat.zfs.misc.vdev_cache_stats.delegations: 29184
kstat.zfs.misc.vdev_cache_stats.hits: 168349
kstat.zfs.misc.vdev_cache_stats.misses: 170063

raulfg3
Site Admin
Posts: 4865
Joined: 22 Jun 2012 22:13
Location: Madrid (ESPAÑA)
Contact:
Status: Offline

Re: poor ZFS read performance

Post by raulfg3 »

Any news?
12.1.0.4 - Ingva (revision 7743) on SUPERMICRO X8SIL-F, 8GB of ECC RAM, 11x3TB disks in 1 vdev = 32TB raw size, so 29TB usable (I have another NAS as backup)

Wiki
Last changes

HP T510
