This is the old XigmaNAS forum in read-only mode;
it will be taken offline by the end of March 2021!



We ask users and admins to rewrite or move important posts from here into the fresh new main forum!
It is not possible for us to export from here and import into the main forum!

samba fast write, slower read

CIFS/SMB network sharing.
kfnas
Starter
Starter
Posts: 65
Joined: 06 Mar 2014 18:41
Status: Offline

samba fast write, slower read

Post by kfnas »

Hello,

Currently I'm testing 9.2.0.1 - Shigawire (revision 943) on an ASRock C2750 with 32 GB ECC RAM and 4 TB Seagate NAS disks. What I have found so far is inconsistent read speed over CIFS to a Windows 7 client with an SSD.

I'm testing right now on both UFS and ZFS standalone mount points.

The best write performance (110 MB/s) I can get is with the Samba send/receive buffers set to 0. But then read performance sits at ~50 MB/s... too slow...

So for better read performance I have to set the buffers to the default (or double it) with AIO enabled. Then read performance is roughly 100 MB/s... but write drops to 40-60 MB/s...

Why?

What does it mean when the rcv/snd buffers are set to 0? Why is write performance so good then?

Can someone post their top-notch Samba settings here? I would like to see full gigabit Ethernet saturation...

00Roush
Starter
Starter
Posts: 64
Joined: 15 Sep 2013 09:27
Status: Offline

Re: samba fast write, slower read

Post by 00Roush »

With NAS4Free I have found the best performance with the Samba snd/rcv buffers at 0. AIO also seems to help performance, but in the past I have had some issues with Samba crashing under certain loads with AIO enabled; I think this has been resolved in the latest version included in N4F. The other key piece for better performance with ZFS is the ZFS kernel tune extension. (viewtopic.php?f=71&t=1278)

So what does setting the snd/rcv buffers do? Currently in N4F, Samba buffer sizes are set automatically, and to my knowledge there is no way to turn this off. Setting them to 0 in the web GUI essentially disables the buffers. My understanding is that these buffers override the default OS network settings for send and receive window sizes; disabling them lets the OS use its autotuned network settings to maximize throughput. In some cases I have seen this make Samba transfer speeds more variable, for example when disk speeds are low: transfers max out for a few seconds and then drop because the disks can't keep up and there is no more memory to buffer incoming data.
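A minimal sketch of how those GUI settings map onto smb.conf, assuming the WebGUI writes them as socket options and AIO parameters (the exact lines N4F generates, and the 16 KB AIO thresholds shown, are assumptions for illustration):

```ini
[global]
    # SO_SNDBUF/SO_RCVBUF left out (GUI value 0) so the OS autotunes window sizes
    socket options = TCP_NODELAY
    # hand reads/writes larger than 16 KB to the AIO subsystem
    aio read size = 16384
    aio write size = 16384
```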

I hadn't really tested the latest N4F version, so I gave it a go, and with the above changes I was able to get 100-110 MB/s read/write. This is with a 3-disk RAIDZ1 array. I wasn't sure if the latest N4F version might behave differently, but it appears to work like the last 9.1 version.

In your case it is a bit difficult to narrow down exactly what the issue might be... We need more info about the disk setup in your N4F server (disks and configuration), and whether raw network/disk speeds on server and client are good. http://n4f.siftusystems.com/index.php/2 ... leshooting steps 1 and 2 outline the tests. Check your client's disk speeds too, using something like ATTO; I know you mentioned an SSD, but sometimes even those give poor write speeds. One other question: which NIC is in the client? I ask because my main computer (Win 7) has an onboard Realtek NIC that used to be considerably slower on the "Balanced" power profile; switching to "High Performance" improved speeds, and later drivers appear to have improved the situation quite a bit.

00Roush

kfnas
Starter
Starter
Posts: 65
Joined: 06 Mar 2014 18:41
Status: Offline

Re: samba fast write, slower read

Post by kfnas »

hello,

My setup is 32 GB ECC RAM on the 8-core Avoton ASRock C2750 with five Seagate NAS 4 TB disks (ST4000VN000) and two Seagate 3 TB ES.2 disks (ST33000650SS)... They can outperform the gigabit Ethernet limit over any L2/L3 protocol (http://www.tomshardware.de/enterprise-h ... 389-7.html), and I have tested them without any load on N4F, on both ZFS standalone and UFS. My recent tests are on UFS, since I wanted to rule out any ZFS performance misconfiguration first.
My clients are W7 and W2K8 Server; everything is attached to an L3 Cisco gigabit switch, with N4F on an LACP link aggregation. I also tested the link aggregation and can say that with simultaneous writes to different disks, almost both gigabit links were saturated... but as I mentioned earlier, sequential write performance is perfect with buffers at 0/0... read is not ;-)

I have tested buffers from the default size of 64240 up to 513920, where performance was worst. If the CIFS buffers are enabled and tuned, the connection starts around 110 MB/s but after 10-15 seconds drops down to 60 in either direction...
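As a rough sanity check on those numbers, the bandwidth-delay product shows why a window around 64 KB is already enough for gigabit on a LAN and why far larger buffers buy nothing (the ~0.5 ms round-trip time is an assumption, not a measurement):

```shell
# bandwidth-delay product: bytes in flight needed to keep a link full
RATE_BITS=1000000000   # 1 Gbit/s
RTT_US=500             # assumed ~0.5 ms LAN round trip
BDP=$(( RATE_BITS / 8 * RTT_US / 1000000 ))
echo "BDP = ${BDP} bytes"   # 62500 bytes, just under the 64240 default
```

On a local network the default window already covers the pipe, so growing the buffers mostly just adds queueing.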

What tools are available to debug:

- Ethernet buffer drops/overflows
- TCP statistics (retransmissions, out-of-order packets)
- kernel queuing
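For reference, those counters can all be read with the stock FreeBSD tools; a sketch of the usual places to look (the command names are standard, but the driver sysctl tree shown is just an example for an Intel igb(4) NIC):

```shell
# TCP statistics: retransmissions, out-of-order segments, window behaviour
netstat -s -p tcp

# per-interface errors and drops
netstat -i -d

# mbuf cluster usage -- exhaustion here shows up as drops under load
netstat -m

# NIC driver counters; "igb.0" is an example name, adjust to your hardware
sysctl dev.igb.0 2>/dev/null
```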

What are the best practices for the FreeBSD IP stack, set up for a modern CPU I/O architecture?

Does it make sense to tune the TCP buffer window in the kernel?
And how can I change the Ethernet driver's software TX buffers? (The hardware ones I cannot, of course.)
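For the kernel side, these are the FreeBSD knobs that govern what Samba's 0/0 setting falls back to; a sketch for inspection only (sysctl names are standard on FreeBSD 9.x, but defaults vary by release, and nothing here is a recommended value):

```shell
# is TCP send/receive buffer autotuning on? (1 = yes, the default)
sysctl net.inet.tcp.sendbuf_auto net.inet.tcp.recvbuf_auto

# ceilings the autotuner can grow a socket buffer to
sysctl net.inet.tcp.sendbuf_max net.inet.tcp.recvbuf_max

# hard upper bound for any socket buffer; must exceed the per-TCP maxima
sysctl kern.ipc.maxsockbuf
```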

Generally speaking, from my extensive buffer-tuning experience on Cisco routers and switches (QoS and scheduling), I would say that piling up buffers at every subsystem (Ethernet HW ring, Ethernet kernel I/O, TCP I/O, system I/O, application/CIFS I/O) does not add up to the best performance... the architecture as a whole needs to be understood, but I have no experience here in the Unix/BSD/Linux environment.

On the Avoton I have tested the W2012R2 hypervisor, NAS4Free, FreeNAS, and Synology DSM 5.0... each system has different out-of-the-box performance.

Out of the box, FreeNAS has the best FTP performance (almost 980 Mbit over one session), but CIFS started at 88 MB/s and dropped down to 55 MB/s... I also found weird ZFS checksum errors, where data on each disk became corrupted after a certain period of time (repairable in a mirror). It then turned out the Ethernet link went down and up after a certain amount of traffic... yes, all of this could be driver-related, since the Avoton is new. Integrated GELI AES-NI 256 encryption showed no performance drop versus unencrypted drives :-)

The fastest out of the box was Synology DSM 5.0... they run it on a Linux kernel with an EXT filesystem. CIFS was top of the line, but with strange drops over time; sadly I have no graphs recorded, but I remember them...

Anyway... FreeNAS is a no-go, since it has no decent VirtualBox or other hypervisor support. NAS4Free is the platform I have to stick with; it just needs good tuning and... some luck.

raulfg3
Site Admin
Site Admin
Posts: 4865
Joined: 22 Jun 2012 22:13
Location: Madrid (ESPAÑA)
Contact:
Status: Offline

Re: samba fast write, slower read

Post by raulfg3 »

12.1.0.4 - Ingva (revision 7743) on a SUPERMICRO X8SIL-F with 8 GB of ECC RAM, 11x3TB disks in 1 vdev = 32 TB raw size, so 29 TB usable (I have another NAS as backup)

Wiki
Last changes

HP T510

00Roush
Starter
Starter
Posts: 64
Joined: 15 Sep 2013 09:27
Status: Offline

Re: samba fast write, slower read

Post by 00Roush »

From what I have seen, NAS4Free already ships with some network tuning out of the box to support speeds up to 10 Gbps. You can certainly make changes if you want, but the defaults appear to allow good performance as-is. I would first test raw network speed with iperf to see whether throughput is as expected; iperf is included in N4F. On the server side (N4F) just run iperf -s. On the client side I recommend iperf -c <server ip> -w 128k -r, which tests send speed and then receive speed using a 128k window. In my experience a 128k window is generally large enough to saturate a gigabit connection on a small local LAN; with Intel NICs I would expect to see above 900 Mbps. Note: testing iperf on the latest N4F 9.2 build, receive speeds did not seem to report correctly with my normal command line, so I added -P 4 to create 4 threads. You might also want to try this if you run into trouble.
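Collected in one place, the test described above looks like this (substitute your server's address for the placeholder; -P 4 is the workaround mentioned for the receive-side quirk):

```shell
# on the N4F box: start the iperf server
iperf -s

# on the client: test send then receive speed with a 128 KB window
iperf -c <server ip> -w 128k -r

# if receive numbers look wrong, retry with 4 parallel threads
iperf -c <server ip> -w 128k -r -P 4
```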

Even though your disks are more than capable of saturating gigabit, I would test in N4F to double-check. Use the link raulfg3 provided and go down to step 2. Also, you never listed how the disks are set up: are they in RAID1/mirror, RAID5/RAIDZ1, or single disks?

You mentioned a hypervisor... are you running bare metal or on top of a hypervisor? A hypervisor could cause some issues. If possible, try bare metal with the simplest configuration you can: basically no changes from the N4F defaults other than setting up a simple 2-disk RAID1 array (UFS) to use as a share.

Let us know how it goes.

Cheers,
00Roush

b0ssman
Forum Moderator
Forum Moderator
Posts: 2438
Joined: 14 Feb 2013 08:34
Location: Munich, Germany
Status: Offline

Re: samba fast write, slower read

Post by b0ssman »

Which of the SATA ports are you using:
the Intel ones or the Marvell ones?
Nas4Free 11.1.0.4.4517. Supermicro X10SLL-F, 16gb ECC, i3 4130, IBM M1015 with IT firmware. 4x 3tb WD Red, 4x 2TB Samsung F4, both GEOM AES 256 encrypted.
