zfs reading goes slow

dpo1905
Starter
Posts: 16
Joined: 22 Sep 2015 23:13
Status: Offline


Post by dpo1905 »

Good evening,

I'm on a freshly installed nas4free box, facing slow read speeds after the storage filled up to 78%.

Code: Select all

 nas4free: /mnt# uname -a
FreeBSD nas4free.local 10.2-RELEASE-p2 FreeBSD 10.2-RELEASE-p2 #0 r287260M: Fri Aug 28 18:38:18 CEST 2015     root@dev.nas4free.org:/usr/obj/nas4free/usr/src/sys/NAS4FREE-amd64  amd64

Got a ZFS mirror on two identical brand-new disks, HGST HDN724040ALE640, 4 TB, 7200 rpm:

Code: Select all

 nas4free: /mnt# zpool status
  pool: data4
 state: ONLINE
  scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        data4       ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            ada0    ONLINE       0     0     0
            ada1    ONLINE       0     0     0

errors: No known data errors

Code: Select all

 nas4free: /mnt# zpool list
NAME    SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
data4  3.62T  2.85T   793G         -    36%    78%  1.00x  ONLINE  -

No compression, no deduplication, everything at defaults.
Here is my read speed, tested directly in the console with a 30 GB file:

Code: Select all

 nas4free: divx# dd if=test.mkv of=/dev/null bs=64k
484486+1 records in
484486+1 records out
31751303111 bytes transferred in 293.203255 secs (108291100 bytes/sec)
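As a sanity check on the dd numbers above, here is a minimal POSIX sh + awk sketch (the BYTES and SECS values are copied from the dd output; the helper itself is my own illustration, not part of the original post):

```shell
# Convert dd's "bytes transferred in N secs" figures to MB/s and MiB/s.
# BYTES and SECS come from the dd run above; everything else is illustrative.
BYTES=31751303111
SECS=293.203255
MBPS=$(awk -v b="$BYTES" -v s="$SECS" 'BEGIN { printf "%.1f", (b / s) / 1e6 }')
MIBPS=$(awk -v b="$BYTES" -v s="$SECS" 'BEGIN { printf "%.1f", (b / s) / 1048576 }')
echo "${MBPS} MB/s (${MIBPS} MiB/s)"
```

So the 108291100 bytes/sec dd reports is roughly 108 MB/s, or about 103 MiB/s.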

While top shows no load at all:

Code: Select all

last pid:  2445;  load averages:  0.31,  0.27,  0.20             up 0+00:13:15  00:16:45
39 processes:  1 running, 38 sleeping
CPU:  0.1% user,  0.0% nice,  6.4% system,  0.3% interrupt, 93.2% idle
Mem: 57M Active, 156M Inact, 1813M Wired, 66M Buf, 1591M Free
ARC: 1536M Total, 1524M MRU, 16K Anon, 5921K Header, 7273K Other
Swap: 4096M Total, 4096M Free

  PID USERNAME      THR PRI NICE   SIZE    RES STATE   C   TIME    WCPU COMMAND
 2445 root            1  26    0 12364K  2148K zio->i  2   0:03   8.79% dd

Even gstat shows disk load of only about 40-50%, not more.

Code: Select all

dT: 1.001s  w: 1.000s
 L(q)  ops/s    r/s   kBps   ms/r    w/s   kBps   ms/w   %busy Name
    0      0      0      0    0.0      0      0    0.0    0.0| xmd0
    0      0      0      0    0.0      0      0    0.0    0.0| xmd1
    0      0      0      0    0.0      0      0    0.0    0.0| da0
    0    365    365  46181    1.2      0      0    0.0   43.6| ada0
    0    361    361  46149    1.1      0      0    0.0   40.2| ada1

I wonder why the read speed is SO slow? It was even lower a few hours ago, about 80 MB/s...
When I upload data to the server via an FTP client on the gigabit network, it shows 110 MB/s on WRITE, making the gigabit network the bottleneck, not ZFS.
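For reference, the gigabit ceiling that write figure is bumping into can be sketched with pure shell arithmetic (my own back-of-the-envelope check, not from the original post):

```shell
# Raw gigabit Ethernet ceiling: 1,000,000,000 bits/s over 8 bits per byte.
# Ethernet/IP/TCP framing overhead typically leaves ~110-118 MB/s in practice,
# which is why ~110 MB/s means the network, not the pool, is the limit.
RAW_MBPS=$(( 1000000000 / 8 / 1000000 ))
echo "raw gigabit ceiling: ${RAW_MBPS} MB/s"
```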

Please advise: what should I do to fix the situation and speed up reads?
I can provide any additional information that helps solve the problem.


P.S. I tried "ZFS kernel tune (WebGUI extension) 20121031" from viewtopic.php?f=71&t=1278 . Still the same.

Parkcomm
Advanced User
Posts: 384
Joined: 21 Sep 2012 12:58
Location: Australia
Status: Offline


Post by Parkcomm »

Do old files read faster than new files? (This tests whether fragmentation is causing the slowdown - although I'd expect fragmentation to affect writes first.)
Last edited by Parkcomm on 23 Sep 2015 00:44, edited 3 times in total.
NAS4Free Embedded 10.2.0.2 - Prester (revision 2003), HP N40L Microserver (AMD Turion) with modified BIOS, ZFS Mirror 4 x WD Red + L2ARC 128M Apple SSD, 10G ECC Ram, Intel 1G CT NIC + inbuilt broadcom


Post by dpo1905 »

Thank you for the reply. Yes, I have run a few tests just now and can confirm you are right. The oldest file (written among the first ones, on an empty pool) reads faster than the newest one.

As far as I can find, there is no defragmentation available on ZFS volumes, so what should I do now? Although I am quite new to ZFS - I've been a Windows admin all my life, so maybe I am missing the point...


Post by Parkcomm »

There is no defrag - the official ZFS solution is to get more space! (delete data or buy disks)

That said, if you copy the data off, turn compression on (if the data is compressible) and copy it back, that will undo the fragmentation and get you some free-space headroom.

I have another idea - is prefetch enabled?

Code: Select all

#sysctl vfs.zfs.prefetch_disable
vfs.zfs.prefetch_disable: 0


Post by dpo1905 »

Yes, the only solution I've found so far is "buy more disks". It still seems a pity. I already invested 50% of my HDD space in redundancy - 2x4 TB = only 4 TB - and now I cannot even use all of that, so the original 8 TB = 3 TB in use. The overhead seems disappointing...

There's no compression, and my prefetch is turned off, as recommended for systems under 4 GB of RAM (I've got 3.7 GB usable):

Code: Select all

# sysctl vfs.zfs.prefetch_disable
vfs.zfs.prefetch_disable: 1
I wonder how those QNAPs and other proprietary NAS boxes can operate with such tiny hardware...

b0ssman
Forum Moderator
Posts: 2438
Joined: 14 Feb 2013 08:34
Location: Munich, Germany
Status: Offline


Post by b0ssman »

Then more RAM, with prefetch enabled, is your next upgrade.
Nas4Free 11.1.0.4.4517. Supermicro X10SLL-F, 16gb ECC, i3 4130, IBM M1015 with IT firmware. 4x 3tb WD Red, 4x 2TB Samsung F4, both GEOM AES 256 encrypted.


Post by Parkcomm »

If sequential reads are your main concern - turn prefetch on and see how it goes. It won't cause any disasters; you'll just have less memory for the ARC.

Code: Select all

#sysctl vfs.zfs.prefetch_disable=0
vfs.zfs.prefetch_disable: 1 -> 0
Also - feel free to run your utilisation up higher; you've got a usable system even at a slower read speed. But be careful: if you get into the nineties you can experience massive slowdowns.
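One practical note (my addition, not from the original thread): a sysctl set at the prompt is lost on reboot. On stock FreeBSD the tunable would normally be persisted via /boot/loader.conf; on NAS4Free embedded the equivalent is set through the WebGUI's loader.conf page rather than by editing the file by hand. A sketch:

```shell
# Persisting the tunable (sketch): on stock FreeBSD this line would go into
# /boot/loader.conf; on NAS4Free embedded use the WebGUI's loader.conf page
# instead of editing the file directly.
LINE='vfs.zfs.prefetch_disable="0"'
echo "$LINE"   # append this to /boot/loader.conf to apply at every boot
```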


Post by dpo1905 »

Yes, I'd love to boost sequential reads - the average file size in storage is 2-4 GB, up to 50-100 GB: BD-rip movies, system image backups, Acronis, etc.
I neither store millions of small files there, nor worry about small-file read speed, as that is a matter of seconds anyway.

I've also done some deeper research and found that my HDDs are actually Advanced Format drives with a 4K sector size, so I will have to rebuild my RAID from the very beginning and align it for 4K sectors. As far as I can tell, there's no way to "re-align" an existing ZFS pool.
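For background (my own aside, not from the original post): the alignment ZFS uses is recorded per vdev as "ashift", the log2 of the write alignment; on a live pool it can usually be inspected with something like `zdb -C <pool> | grep ashift`. A tiny sketch of the relationship:

```shell
# ashift is log2 of the alignment ZFS uses: ashift=9 -> 512-byte sectors,
# ashift=12 -> 4K sectors (what a rebuilt, 4K-aligned pool should show).
for ASHIFT in 9 12; do
  echo "ashift=${ASHIFT} => $((1 << ASHIFT))-byte alignment"
done
SECTOR_4K=$((1 << 12))
```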

I will also check and report whether enabling prefetch improves performance, thank you for the idea. I wonder how it affects multi-user performance - I've got 1 "serious" writer, 1 "small" writer and several readers of varying intensity: some take 30 GB at once, some just stream video over wifi.

Unfortunately, my mITX motherboard holds only 2 memory slots, both taken, so I will not be able to upgrade memory in an easy/cheap way.


Post by Parkcomm »

The good news is you can change the prefetch parameter from the terminal, not just at boot. So have a play with it - you are robbing Peter to pay Paul (hurting writes to improve reads). But the sweet spot might be to have it turned off. I find fast reads to be more important than fast writes: incremental backups vs urgent restores.

While I agree that 4K alignment is a good idea, you'll see maybe a 10% speed increase - noticeable but not massive.

I'd pull at least one of those memory sticks and whack in 8G or more.
Last edited by Parkcomm on 24 Sep 2015 00:23, edited 1 time in total.


Post by dpo1905 »

A funny thing about prefetch: I have enabled it, and... disk busy time in gstat dropped to 30% from 40-50%, while the read speed stays the same, 80 MB/s. It's as if there is a bottleneck for reading somewhere, and it's about 80 MB/s.

And even funnier: I started 2 (TWO) file read transfers simultaneously and got 100+ MB/s total, about 50 MB/s per transfer, limited only by the gigabit network. Disk busy % rose accordingly; now it's about 70% on each of ada0 and ada1.

So 80 MB/s seems to be the limit for a single reading process only.

Ordered 8 GB of RAM for tests.
Backing up my array to rebuild it with 4K support.
Will post results.

Now I'm completely lost in assumptions.


Post by Parkcomm »

dpo1905 wrote: A funny thing about prefetch: I have enabled it, and... disk busy time in gstat dropped to 30% from 40-50%, while the read speed stays the same, 80 MB/s. It's as if there is a bottleneck for reading somewhere, and it's about 80 MB/s.

And even funnier: I started 2 (TWO) file read transfers simultaneously and got 100+ MB/s total, about 50 MB/s per transfer, limited only by the gigabit network. Disk busy % rose accordingly; now it's about 70% on each of ada0 and ada1.

So 80 MB/s seems to be the limit for a single reading process only.
It's not CPU bound, it's not disk bound, it's not network bound; disk writes are OK but disk reads are slow. Damn, that's a head-scratcher.

There are a couple of other tools that can shed light on the situation.

One is zfs-mon in sysutils/zfs-stats (I'd guess from the above that it will say you are getting a lot of zfetch misses).
The other is "systat -vmstat", which covers disk and CPU but also interrupts and memory paging.


Post by dpo1905 »

Thanks for doing the head-scratching for me :)

I have the embedded version of nas4free, so I do not have ports to try sysutils/zfs-stats. Yet. That is going to be the next step - to install the full edition on USB3 and investigate more seriously.

I am rebuilding the ZFS storage with 4K alignment and uploading the data back right now.

Regarding interrupts, I've monitored "top -P" today and here is the header output. Please note that %interrupt is always high on CPU 2. It swings between 20% and 50%, but only on that CPU. Files are being uploaded to the NAS, 1 transfer, 100 MB/s via the gigabit network, so I don't see slowdowns here.

Code: Select all

last pid: 17658;  load averages:  2.01,  1.91,  1.75                                       up 0+02:41:56  17:46:23
33 processes:  1 running, 31 sleeping, 1 zombie
CPU 0:  9.4% user,  0.0% nice, 28.0% system,  0.8% interrupt, 61.8% idle
CPU 1:  7.5% user,  0.0% nice, 16.5% system,  0.8% interrupt, 75.2% idle
CPU 2:  0.4% user,  0.0% nice,  3.9% system, 56.3% interrupt, 39.4% idle
CPU 3: 10.6% user,  0.0% nice, 21.3% system,  4.3% interrupt, 63.8% idle
Mem: 17M Active, 214M Inact, 2210M Wired, 78M Buf, 1177M Free
ARC: 1536M Total, 44M MFU, 1383M MRU, 58M Anon, 7672K Header, 43M Other
Swap: 4096M Total, 4096M Free
I use an Intel Celeron Quad-Core J1900 SoC on an ASRock Q1900-ITX motherboard, if that info helps. I tried to build as small and silent a system as possible to compete with proprietary NAS boxes.


Here is the "systat -vmstat" output, although I cannot see anything meaningful in it myself.

Code: Select all

    2 users    Load  1.31  1.28  1.48                  Sep 23 17:52

Mem:KB    REAL            VIRTUAL                       VN PAGER   SWAP PAGER
        Tot   Share      Tot    Share    Free           in   out     in   out
Act  107868   26916  1596564    41792 1194004  count
All  259956   29616  1837948    87144          pages
Proc:                                                            Interrupts
  r   p   d   s   w   Csw  Trp  Sys  Int  Sof  Flt        ioflt 12781 total
  1          39       31k 6853  56k 8842   49 1624    257 cow       1 ehci0 23
                                                     1341 zfod    800 cpu0:timer
17.7%Sys  14.6%Intr  6.5%User  0.0%Nice 61.3%Idle         ozfod  1475 ahci0 256
|    |    |    |    |    |    |    |    |    |           %ozfod     8 xhci0 257
=========+++++++>>>                                       daefr       hdac0 258
                                        11 dtbuf     1408 prcfr  7358 re0 259
Namei     Name-cache   Dir-cache    140755 desvn     1681 totfr       ahci1 260
   Calls    hits   %    hits   %     31728 numvn          react  1082 cpu2:timer
    5589    5567 100       3   0     23271 frevn          pdwak  1073 cpu1:timer
                                                        7 pdpgs   984 cpu3:timer
Disks   da0  ada0  ada1  ada2  ada3 pass0 pass1           intrn
KB/t   0.00   119   119  0.00  0.00  0.00  0.00   2264632 wire
tps       0   703   701     0     0     0     0     25528 act
MB/s   0.00 82.03 81.21  0.00  0.00  0.00  0.00    220324 inact
%busy     0    59    64     0     0     0     0           cache
                                                  1194336 free
                                                    79776 buf


Post by Parkcomm »

On an embedded system you can install zfs-mon in a jail. It works just fine.

Or you can pkg install it - it'll be gone after the next reboot but works just fine in the meantime.

Obviously you were accessing the disks via the network during the "systat -vmstat" - none of those numbers look suspicious to me. See what the local dd test looks like.

If you run top -PHSzIa (per CPU, show threads, show system processes, don't show the idle process, don't show idle processes, expand arguments) you'll see what is causing those spikes.

btw - in the top -P above you have the line

Code: Select all

last pid: 17658;  load averages:  2.01,  1.91,  1.75 
That means your CPU is overloaded - the load average is how many processes are waiting for (or using) the CPU: https://en.wikipedia.org/wiki/Load_(computing) . This is the real-world performance number, not the utilisation numbers below it. So while we don't know exactly what is causing the slowdown, we do know in this case that the performance IS CPU bound.

Note that in your original "top" above this was not the case.


Post by dpo1905 »

Thanks! That will require some research.
Let me clarify a few things, and then I will be off to do the tests as you described.
That means your CPU is overloaded - the load average is how many processes are waiting for the CPU
I have learned from the wiki link you supplied that on some Linux systems load averages above 1 can also be caused by processes waiting for I/O - a slow HDD, etc. I'm not sure whether that applies to FreeBSD, but it could apply to my case, where a fast process just waits for the HDD to become available.

I've also posted my issue to the FreeBSD mailing list and got some response - please see the image attached. There are some good points there, although the person is not the most pleasant :)

Parkcomm wrote: Note in your original "top" above this was not the case.
The original top shows the "dd reading" case; the last top shows the "upload from network" case.
Things got a little messy; I will try to add descriptions as much as possible and stay focused.


Post by dpo1905 »

On an embedded system you can install zfs-mon in a jail. It works just fine.
Or you can pkg install it - it'll be gone after the next reboot but works just fine in the meantime.
I am completely new to nas4free and have almost no serious experience with FreeBSD, just basic router setups.
Thus, I have no idea about jails etc., so if it's not critical, I would like to leave that as a last resort.

And it seems there's no pkg_add on the embedded system:

Code: Select all

 nas4free: shara# pkg_add
pkg_add: Command not found.


Now to the tests. I'm not sure if the gstat output is helpful or applicable, but I decided to include it in case it is useful. I've made some videos, 1-2 MB in size, where you can see the process in motion. I have cut out the middle, so the last seconds of each video show the result.

My current system state:

Code: Select all

nas4free: shara# sysctl vfs.zfs.prefetch_disable
vfs.zfs.prefetch_disable: 1

 nas4free: shara# zpool list
NAME    SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
data   3.62T  2.48T  1.15T         -    34%    68%  1.00x  ONLINE  -
wd2tb  1.81T   357G  1.46T         -     6%    19%  1.00x  ONLINE  -

1st test. I uploaded a single 42 GB file from my Intel SSD via the network to an SMB share on the NAS. Windows showed an upload speed of around 70 MB/s.
Video: https://drive.google.com/file/d/0B6GbEP ... sp=sharing

2nd test. Reading the file back with dd.

Code: Select all

nas4free: shara# dd if=r03.vmdk of=/dev/null bs=128k
343523+0 records in
343523+0 records out
45026246656 bytes transferred in 369.582716 secs (121829958 bytes/sec)
121 MB/s - quite nice!
Video: https://drive.google.com/file/d/0B6GbEP ... sp=sharing

3rd test. Write test. Same location, a 12 GB file.

Code: Select all

nas4free: shara# dd if=/dev/zero of=sometestfile bs=128k count=100000
100000+0 records in
100000+0 records out
13107200000 bytes transferred in 114.887499 secs (114087260 bytes/sec)
Also nice: 114 MB/s for writing.
Video: https://drive.google.com/file/d/0B6GbEP ... sp=sharing


The results are very good, although I have no idea why things got better.

The high CPU states in the 1st test occur only while uploading from the network, so it could be a Samba or network issue, but definitely not ZFS, because the last 2 tests show no problem at all.

So for now it seems the problem is gone, or I can try to fill the ZFS pool further, up to 80%, to see if the speed drops again.


Post by Parkcomm »

dpo1905 wrote:Not sure if that is applicable to freebsd though
It's exactly the same.
dpo1905 wrote:Although the person is not the most pleasant
Yep!

btw - instead of taking screenshots, just cut and paste the URL between the URL tags.


Post by Parkcomm »

I think the guys from the FreeBSD forum made a good point about dd. A simple(ish) tool for testing throughput called iozone can do a multi-threaded test.

Code: Select all

#pkg install iozone
#/usr/local/bin/iozone -i 0 -i 1 -t 2 -s 12000000
The number after -t is the number of threads. This does not solve the other issues, such as clearing the cache. Here is some background: http://www.thegeekstuff.com/2011/05/iozone-examples/

Code: Select all

#/usr/local/bin/iozone -a -g 12G -i 0 -i 1
will show the effect of changing file and record sizes, but your ARC will affect the results significantly. To minimise this, -g 12G makes the test file three times the ARC, so the results are dominated by non-cached data.

You can even generate an Excel file.


Post by dpo1905 »

Thank you very much for your feedback, help and assistance. I will switch the nas4free install to FULL later this week, or maybe next. Then I will be able to install all the necessary tools and post results.


Post by Parkcomm »

You can install the tools on embedded - they will be fine until you reboot.

You can just reinstall after reboot if you need the tool again


Sent from my foam - stupid auto correct.


Post by dpo1905 »

Ouch, my bad! I tried pkg_add and it failed,
but pkg runs just great! I will test and report.


Post by dpo1905 »

Code: Select all

 Run began: Thu Sep 24 13:47:21 2015

        File size set to 12000000 KB
        Command line used: /usr/local/bin/iozone -i 0 -i 1 -t 2 -s 12000000
        Output is in Kbytes/sec
        Time Resolution = 0.000001 seconds.
        Processor cache size set to 1024 Kbytes.
        Processor cache line size set to 32 bytes.
        File stride size set to 17 * record size.
        Throughput test with 2 processes
        Each process writes a 12000000 Kbyte file in 4 Kbyte records

        Children see throughput for  2 initial writers  =  102954.23 KB/sec
        Parent sees throughput for  2 initial writers   =  101157.82 KB/sec
        Min throughput per process                      =   51476.00 KB/sec
        Max throughput per process                      =   51478.23 KB/sec
        Avg throughput per process                      =   51477.12 KB/sec
        Min xfer                                        = 11999480.00 KB


        Children see throughput for  2 rewriters        =   47809.43 KB/sec
        Parent sees throughput for  2 rewriters         =   47730.78 KB/sec
        Min throughput per process                      =   23745.21 KB/sec
        Max throughput per process                      =   24064.21 KB/sec
        Avg throughput per process                      =   23904.71 KB/sec
        Min xfer                                        = 11841280.00 KB

        Children see throughput for  2 readers          =  192039.35 KB/sec
        Parent sees throughput for  2 readers           =  192035.07 KB/sec
        Min throughput per process                      =   95923.60 KB/sec
        Max throughput per process                      =   96115.75 KB/sec
        Avg throughput per process                      =   96019.68 KB/sec
        Min xfer                                        = 11976064.00 KB

        Children see throughput for 2 re-readers        =  205387.08 KB/sec
        Parent sees throughput for 2 re-readers         =  205383.50 KB/sec
        Min throughput per process                      =  102613.77 KB/sec
        Max throughput per process                      =  102773.31 KB/sec
        Avg throughput per process                      =  102693.54 KB/sec
        Min xfer                                        = 11981440.00 KB
Those speeds are even better than I had hoped for! Brilliant!
It seems ZFS really is a MULTI-USER file system: it can give me great performance with TWO processes, but still cannot do it with ONE process...
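To put those figures in the same units as the earlier dd tests, here is a quick conversion sketch (my addition; the two numbers are the "initial writers" and "readers" totals from the run above, and iozone reports binary kilobytes):

```shell
# iozone reports throughput in KB/sec (binary KB); divide by 1024 for MiB/s.
WRITERS_KBPS=102954.23   # "Children see throughput for 2 initial writers"
READERS_KBPS=192039.35   # "Children see throughput for 2 readers"
W=$(awk -v k="$WRITERS_KBPS" 'BEGIN { printf "%.1f", k / 1024 }')
R=$(awk -v k="$READERS_KBPS" 'BEGIN { printf "%.1f", k / 1024 }')
echo "writers: ${W} MiB/s, readers: ${R} MiB/s"
```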




The next command gave a really HUGE output - not sure if you need it here... I'll redirect it to a file and meditate on it myself :)

Code: Select all

#/usr/local/bin/iozone -a -g 12G -i 0 -i 1


Post by dpo1905 »

Another fail of mine:

Code: Select all

CPU 0: 10.9% user,  0.0% nice,  9.3% system,  0.0% interrupt, 79.8% idle
CPU 1:  9.3% user,  0.0% nice, 13.2% system,  0.0% interrupt, 77.5% idle
CPU 2: 10.1% user,  0.0% nice, 10.9% system,  0.0% interrupt, 79.1% idle
CPU 3:  3.5% user,  0.0% nice,  2.7% system,  1.9% interrupt, 91.9% idle
I paid TOO much attention to that information. I interpreted it as the CPU being almost not loaded at all.

While the most important line was:

Code: Select all

last pid: 15227;  load averages:  0.72,  0.79,  0.74  
and according to that, the CPU is near max load.

*nix is so different from Windows... I feel like I'm starting to learn computers from the very beginning.


Post by b0ssman »

No. The load average on 4 CPUs has to be 4 to be at max load - 1 for each CPU.
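That rule of thumb as a small sketch (my addition; the numbers are hard-coded from the earlier top output, whereas on a live FreeBSD box you would read them via `sysctl vm.loadavg` and `sysctl hw.ncpu`):

```shell
# Normalise the 1-minute load average by CPU count: values around 1.00 per
# CPU mean saturation. 0.72 on 4 cores is well under full load.
LOAD1=0.72   # from "load averages: 0.72, 0.79, 0.74"
NCPU=4
PER_CPU=$(awk -v l="$LOAD1" -v n="$NCPU" 'BEGIN { printf "%.2f", l / n }')
echo "per-CPU load: ${PER_CPU} of 1.00"
```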

Sent from my D5803 using Tapatalk


Post by Parkcomm »

So the iozone test is telling me that for large file transfers not dominated by the cache, you are getting as good as can be expected (I should have suggested this at the beginning). Rewrites do look a little slow.

I think you have been tricked by two things - your initial dd speed may have been skewed by the ARC (which is a good thing), and by dd itself. Have you actually noticed a slowdown, other than in the results of the dd test?

To get more succinct output for the I/O test, use

Code: Select all

iozone -a -i 0 -i 1 -r 4096 -s 16000 -g 12000000 
or

Code: Select all

iozone -a -i 0 -i 1 -r 4096 -s 16000 -g 12000000 -b output.xls
if you want to use Excel to analyse the results.

What it says is:
-a auto mode (sweep)
-i 0 do write tests
-i 1 do read tests
-r 4096 the record size (block size) in KB. I/O speeds vary greatly with this number, as you will see in the full test; I have chosen an arbitrary but large value
-s 16000 start with a minimum file size of 16 MB
-g 12000000 grow to a maximum file size of 12 GB

-b Excel format output

You will see, as the test sweeps through the file sizes, that there is a breakpoint where the read or write speed suddenly drops - this is where the cache no longer dominates the result. Here are mine - two mirrored vdevs.

Code: Select all

	Auto Mode
	Record Size 4096 KB
	File size set to 16000 KB
	Using maximum file size of 24000000 kilobytes.
	Command line used: iozone -a -i 0 -i 1 -r 4096 -s 16000 -g 24000000
	Output is in Kbytes/sec
	Time Resolution = 0.000001 seconds.
	Processor cache size set to 1024 Kbytes.
	Processor cache line size set to 32 bytes.
	File stride size set to 17 * record size.
                                   
              KB  reclen   write rewrite    read    reread    read  
           16000    4096 1009950 1047663  1202796  1199074 
           32000    4096 1016132 1054304  1496377  1498167 
           64000    4096 1137715  864364  1453483  1473943 
          128000    4096 1369484  861869   731888  1274346 
          256000    4096 1219791 1173638   892399  1378743  
          512000    4096 1143349  834033   640276  1259204    
         1024000    4096  772002  912864  1160084  1331698  
         2048000    4096  385006  275698  1331229  1500388  
         4096000    4096  246674  239501  1119404  1497334   
         8192000    4096  205881  197258   353372   282935    
        16384000    4096  177941  174650   219707   277979     
I have to use a larger file for the throughput test, because I have a larger cache:

Code: Select all

	File size set to 25165824 KB
	Record Size 4096 KB
	Command line used: /usr/local/bin/iozone -i 0 -i 1 -s 24G -r 4096 -t 2
	Output is in Kbytes/sec
	Time Resolution = 0.000001 seconds.
	Processor cache size set to 1024 Kbytes.
	Processor cache line size set to 32 bytes.
	File stride size set to 17 * record size.
	Throughput test with 2 processes
	Each process writes a 25165824 Kbyte file in 4096 Kbyte records

	Children see throughput for  2 initial writers 	=  188713.90 KB/sec
	Parent sees throughput for  2 initial writers 	=  181854.55 KB/sec
	Min throughput per process 			=   94155.56 KB/sec 
	Max throughput per process 			=   94558.34 KB/sec
	Avg throughput per process 			=   94356.95 KB/sec
	Min xfer 					= 25059328.00 KB

	Children see throughput for  2 rewriters 	=  169270.85 KB/sec
	Parent sees throughput for  2 rewriters 	=  163862.07 KB/sec
	Min throughput per process 			=   84599.45 KB/sec 
	Max throughput per process 			=   84671.41 KB/sec
	Avg throughput per process 			=   84635.43 KB/sec
	Min xfer 					= 25145344.00 KB

	Children see throughput for  2 readers 		=  319606.80 KB/sec
	Parent sees throughput for  2 readers 		=  319588.72 KB/sec
	Min throughput per process 			=  151868.77 KB/sec 
	Max throughput per process 			=  167738.03 KB/sec
	Avg throughput per process 			=  159803.40 KB/sec
	Min xfer 					= 22786048.00 KB

	Children see throughput for 2 re-readers 	=  325103.02 KB/sec
	Parent sees throughput for 2 re-readers 	=  325083.80 KB/sec
	Min throughput per process 			=  142318.36 KB/sec 
	Max throughput per process 			=  182784.66 KB/sec
	Avg throughput per process 			=  162551.51 KB/sec
	Min xfer 					= 19595264.00 KB


Post by dpo1905 »

After several tests, I'm almost sure that ZFS is working as well as it can. Last iozone test:

Code: Select all

 Run began: Fri Oct  2 13:44:28 2015

        File size set to 25165824 KB
        Record Size 4096 KB
        Command line used: /usr/local/bin/iozone -i 0 -i 1 -s 24G -r 4096 -t 2
        Output is in Kbytes/sec
        Time Resolution = 0.000001 seconds.
        Processor cache size set to 1024 Kbytes.
        Processor cache line size set to 32 bytes.
        File stride size set to 17 * record size.
        Throughput test with 2 processes
        Each process writes a 25165824 Kbyte file in 4096 Kbyte records

        Children see throughput for  2 initial writers  =  500179.52 KB/sec
        Parent sees throughput for  2 initial writers   =  499882.23 KB/sec
        Min throughput per process                      =  249836.19 KB/sec
        Max throughput per process                      =  250343.33 KB/sec
        Avg throughput per process                      =  250089.76 KB/sec
        Min xfer                                        = 25116672.00 KB

        Children see throughput for  2 rewriters        =  308990.39 KB/sec
        Parent sees throughput for  2 rewriters         =  308973.19 KB/sec
        Min throughput per process                      =  151569.69 KB/sec
        Max throughput per process                      =  157420.70 KB/sec
        Avg throughput per process                      =  154495.20 KB/sec
        Min xfer                                        = 24231936.00 KB

        Children see throughput for  2 readers          =  457372.28 KB/sec
        Parent sees throughput for  2 readers           =  457310.45 KB/sec
        Min throughput per process                      =  224204.67 KB/sec
        Max throughput per process                      =  233167.61 KB/sec
        Avg throughput per process                      =  228686.14 KB/sec
        Min xfer                                        = 24203264.00 KB

        Children see throughput for 2 re-readers        =  419768.08 KB/sec
        Parent sees throughput for 2 re-readers         =  419719.89 KB/sec
        Min throughput per process                      =  205347.95 KB/sec
        Max throughput per process                      =  214420.12 KB/sec
        Avg throughput per process                      =  209884.04 KB/sec
        Min xfer                                        = 24104960.00 KB



iozone test complete.

Now I have to search further for the bottleneck. I tried both Samba and FTP and can't get read speed above 80 MB/s on a gigabit network. Since both protocols reach almost the same speed, I exclude Samba or FTP settings and will focus on the network card settings. The strangest thing: when I download files from the NAS on 2 separate computers at once, I get a total speed of 110 MB/s, the gigabit limit, as it should be. But when I download from the NAS on a single computer, I hit an 80 MB/s limit.
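One way to narrow down the single-client limit is to take both the disks and the file-sharing protocol out of the picture and test the raw network path with nc and dd. This is just a sketch; the port and the NAS address 192.168.1.10 are placeholders:

Code: Select all

```shell
# On the NAS (server side): listen on TCP port 5001 and discard whatever arrives
nc -l 5001 > /dev/null

# On the client: push 1 GiB of zeros through the wire --
# no disk, ZFS, or SMB/FTP stack involved on the sending side
dd if=/dev/zero bs=64k count=16384 | nc 192.168.1.10 5001
```

dd prints the achieved throughput when it finishes. If this also tops out around 80 MB/s from a single client, the bottleneck is the NIC, driver, or TCP settings rather than ZFS or Samba.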

Any ideas which part of the forum I should turn to for help?


Thank you for your support.

Parkcomm
Advanced User
Advanced User
Posts: 384
Joined: 21 Sep 2012 12:58
Location: Australia
Status: Offline

Re: zfs reading goes slow

Post by Parkcomm »

First, get the right chipset: Intel is probably best, Broadcom second, Realtek not recommended.
Second, turn on jumbo frames.
Third, here is my experience with system tunables: viewtopic.php?f=58&t=6814
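For the jumbo-frame point, a minimal sketch of what that means at the console (on NAS4Free you would normally set the MTU in the WebGUI network settings instead; the interface name em0 is only an example):

Code: Select all

```shell
# List interfaces first with: ifconfig
# Raise the MTU on the NAS interface (the switch and every client
# on the path must support jumbo frames too)
ifconfig em0 mtu 9000

# On plain FreeBSD, make it persistent in /etc/rc.conf, e.g.:
# ifconfig_em0="DHCP mtu 9000"
```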
NAS4Free Embedded 10.2.0.2 - Prester (revision 2003), HP N40L Microserver (AMD Turion) with modified BIOS, ZFS Mirror 4 x WD Red + L2ARC 128M Apple SSD, 10G ECC Ram, Intel 1G CT NIC + inbuilt broadcom

dpo1905
Starter
Starter
Posts: 16
Joined: 22 Sep 2015 23:13
Status: Offline

Re: zfs reading goes slow

Post by dpo1905 »

Yes, I've already read a lot about the Realtek network card not being the best choice for FreeBSD. I will test with an Intel gigabit NIC, although I don't have one in stock right now.
Second turn on jumbo frames
You mean MTU 9000, right? Been there, done that, seen no difference yet. My main Windows 7 machine is not able to connect to NAS4Free after that change, neither the web interface nor the SMB share. Maybe I should restart it, but that's not easy, as it runs several live virtual systems. Another Windows 7 machine can connect and work just fine, though. I'll keep it in mind for future tests.
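A quick way to check whether jumbo frames actually survive the whole path (a failing host or switch port would explain one Windows 7 machine losing connectivity). The host address is a placeholder:

Code: Select all

```shell
# From FreeBSD: send an 8972-byte ICMP payload with Don't Fragment set
# (8972 = 9000 MTU - 20 bytes IP header - 8 bytes ICMP header)
ping -D -s 8972 192.168.1.20

# From Windows (cmd.exe): -f = don't fragment, -l = payload size
# ping -f -l 8972 192.168.1.20
```

If the large ping fails while a normal ping works, some hop (NIC, switch, or the other host) is not passing jumbo frames end to end.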

I've also been playing around with iperf, with the same results: up to 50 MB/s, rarely 60+ MB/s, on ONE transfer. But when I do transfers to 2 computers at once, the speed is OK. I tried iperf clients on both another FreeBSD machine and a Windows machine.
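The "one transfer slow, two transfers fine" pattern can be reproduced from a single client with iperf (version 2) parallel streams; the NAS address is a placeholder:

Code: Select all

```shell
# On the NAS: start an iperf server
iperf -s

# On one client: a single 30-second TCP stream...
iperf -c 192.168.1.10 -t 30

# ...then two parallel streams from the same client,
# mimicking the "two computers at once" case
iperf -c 192.168.1.10 -t 30 -P 2
```

If one stream is slow but -P 2 fills the pipe from a single machine, the limit is per-connection (TCP window size, try iperf -w, or interrupt moderation on the NIC) rather than the pool or the client hardware.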

I'm not sure about the link you supplied. It's about FREEZING, but in my case there is a stable, constant speed. I will compare the values from your last post there with my defaults and see if it makes a difference.

Parkcomm
Advanced User
Advanced User
Posts: 384
Joined: 21 Sep 2012 12:58
Location: Australia
Status: Offline

Re: zfs reading goes slow

Post by Parkcomm »

dpo1905 wrote:You mean MTU 9000
If you have a smart switch in between, it will detect which frames are jumbo and which aren't and convert between them. Otherwise a mixed environment is a pain. Also, MTU 9000 can sometimes cause problems, so before abandoning jumbo frames, try something like 8192.

If the limitation is the NIC or packet/interrupt processing, you should see an increase once both machines support jumbo frames.
dpo1905 wrote:But when I do transfers to 2 computers at once - speed is OK
The simplest explanation for this is that the performance limitation is the client, not the server.
dpo1905 wrote:There's something about FREEZING,
The original poster had a problem with freezing, whereas my problem was throughput, but both had a similar solution.
NAS4Free Embedded 10.2.0.2 - Prester (revision 2003), HP N40L Microserver (AMD Turion) with modified BIOS, ZFS Mirror 4 x WD Red + L2ARC 128M Apple SSD, 10G ECC Ram, Intel 1G CT NIC + inbuilt broadcom
