This is the old XigmaNAS forum in read-only mode;
it will be taken offline by the end of March 2021!
We would like to ask users and admins to rewrite/carry over important posts from here into the fresh new main forum!
It is not possible for us to export from here and import into the main forum!
Slower than expected iperf results with Intel nic
-
robdeep
- NewUser

- Posts: 3
- Joined: 19 Oct 2013 23:28
- Status: Offline
Slower than expected iperf results with Intel nic
Hello all,
I'm banging my head against a wall on this. I have an Intel Pro/1000 NIC in my NAS4Free box. The system has an Intel i5 2500 quad-core CPU with 16 GB of RAM and four WD Red 3 TB drives, and I'm running the latest version of NAS4Free x64. The best I've been able to achieve with iperf going to and from the NAS4Free box is about 750 Mbit/s. I've tried all sorts of adjustments, even different cables.
I can get 958 Mbit/s from my MacBook's Thunderbolt Ethernet interface going to my PC (Marvell NIC). All these devices are connected to the same gigabit switch.
I noticed that Nas4Free recognizes the NIC as "em0: <Intel(R) PRO/1000 Legacy Network Connection 1.0.4> port 0xdf00-0xdf3f mem 0xfbdc0000-0xfbddffff,0xfbda0000-0xfbdbffff irq 18 at device 2.0 on pci5" - Is that the right driver?
Any input would be appreciated.
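For anyone trying to reproduce the numbers above, a minimal baseline test with iperf (version 2 syntax) looks like this; the address is just a placeholder for the NAS:

```
# On the NAS4Free box: start an iperf server
iperf -s

# On the client: run a 30-second test, then repeat in the reverse
# direction (-r); replace 192.168.1.10 with the NAS address
iperf -c 192.168.1.10 -t 30 -r
```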
-
robdeep
- NewUser

- Posts: 3
- Joined: 19 Oct 2013 23:28
- Status: Offline
Re: Slower than expected iperf results with Intel nic
My Intel NIC is an 82541PI.
This thread seems strikingly similar to my problem. No resolution that I can see.
http://freebsd.1045724.n5.nabble.com/em ... 45922.html
- b0ssman
- Forum Moderator

- Posts: 2438
- Joined: 14 Feb 2013 08:34
- Location: Munich, Germany
- Status: Offline
Re: Slower than expected iperf results with Intel nic
The NIC is PCI, and the PCI bus reached its limits long ago.
All devices on the bus have to share a theoretical maximum bandwidth of 133 MB/s.
I would assume that is the limiting factor.
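As a rough sanity check on that claim: classic 32-bit/33 MHz PCI moves 4 bytes per clock cycle, and a single saturated gigabit stream alone needs most of that before any PCI protocol overhead is counted. A quick back-of-the-envelope calculation:

```shell
# Classic PCI: 32-bit (4-byte) transfers at 33 MHz, shared by every device on the bus
pci_mbits=$((4 * 33 * 8))   # 1056 Mbit/s theoretical ceiling (~133 MB/s at the exact 33.33 MHz clock)
gige_mbits=1000             # what a saturated gigabit link needs
echo "PCI ceiling: ${pci_mbits} Mbit/s shared; GigE wants ${gige_mbits} Mbit/s"
```

Real-world PCI efficiency is well below the theoretical number, so ~750 Mbit/s through a PCI NIC is plausible even with no other traffic on the bus.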
Nas4Free 11.1.0.4.4517. Supermicro X10SLL-F, 16gb ECC, i3 4130, IBM M1015 with IT firmware. 4x 3tb WD Red, 4x 2TB Samsung F4, both GEOM AES 256 encrypted.
-
robdeep
- NewUser

- Posts: 3
- Joined: 19 Oct 2013 23:28
- Status: Offline
Re: Slower than expected iperf results with Intel nic
Thanks. I'll try an Intel PCIe NIC.
-
GregAndo
- NewUser

- Posts: 5
- Joined: 19 Nov 2013 06:07
- Status: Offline
Re: Slower than expected iperf results with Intel nic
The results you have are very similar to what I have been seeing, but my server is using a BCM5708 chipset (NetXtreme II).
At iperf defaults, I can push data out of the gigabit-connected NAS4Free box to one of my two gigabit-connected Windows 8 machines at ~800 Mbit/s. Pushing in the other direction is much worse, generally ~600 Mbit/s. Based on some pretty thorough testing, the underlying disk subsystem seems fine. This matches the disappointing performance I am seeing in CIFS and iSCSI. Traffic between the Windows 8 machines is virtually a full ~1000 Mbit/s, as expected.
If I copy a file to the NAS4Free box from both Windows 8 machines at the same time, I can achieve 2x 50 MB/s, giving the 100 MB/s I am so craving. Reads work the same way. But using only one source underperforms.
If I run iperf in either direction using larger sliding window sizes, I can achieve results approaching 1000Mbit/s in both directions to the NAS4Free box.
Basically, a 64K sliding window gives me the slower results above, and 128K and above gives near full-speed results. And I get the same kind of performance between the Windows boxes too.
I am getting the feeling that there is a bug in the adaptive sliding window size. I am going to try and investigate further.
In your case, I would be surprised if you have that much competition on your PCI bus. Unless you have something regularly using more than 33MB/s in there somewhere, you should be fine really.
-
00Roush
- Starter

- Posts: 64
- Joined: 15 Sep 2013 09:27
- Status: Offline
Re: Slower than expected iperf results with Intel nic
As was mentioned, the PCI bus is most likely the bottleneck. Some boards have better implementations than others. In my testing I can get around 800 Mbit/s in one direction and 600 Mbit/s in the other with an Intel PRO/1000 MT NIC.
When testing with iperf, you have to realize that the window size set by iperf overrides whatever the OS is set to. Most modern OSes auto-tune window sizes by default, so to get an accurate measurement of maximum network throughput from iperf, the window size typically needs to be set larger. In my experience the iperf window size needs to be set to 64k or larger to sustain full gigabit speeds. The command line I recommend folks use is something like: iperf -c 192.168.0.2 -w 128k -r. This tests with a large enough window size in both directions.
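Spelled out on both ends, the recommended invocation looks like this (the address is just an example):

```
# Server side, on the NAS4Free box: listen with a 128 KB window
iperf -s -w 128k

# Client side: 128 KB window; -r repeats the test in the reverse
# direction so both send and receive paths are measured
iperf -c 192.168.0.2 -w 128k -r
```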
If iperf and disk tests show good results but CIFS performance is not quite matching up, there is a good chance the CIFS settings need to be changed. My recommended settings to try are SMB2, AIO enabled, and send/receive buffers set to 0. The reason I recommend the Samba send/receive buffers be set to zero is so the OS can auto-tune the network window size on its own; explicitly setting the Samba buffers essentially pins the network window size for Samba, and in my experience the default Samba buffers in NAS4Free can limit performance. One other change I have found makes a large difference in Samba/CIFS is to use ZFSkerntune when using ZFS. ZFSkerntune solved an issue I had where CIFS file transfers stalled every couple of seconds while data was being flushed to disk. It also helped with iSCSI performance for me.
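For reference, those settings correspond roughly to the smb.conf fragment below. This is only an illustration: NAS4Free generates smb.conf itself, so these values are set through the WebGUI rather than by editing the file.

```
[global]
    max protocol = SMB2
    # buffers at 0 let the OS auto-tune socket sizes instead of pinning them
    socket options = TCP_NODELAY SO_SNDBUF=0 SO_RCVBUF=0
    # enable asynchronous I/O for reads and writes
    aio read size = 1
    aio write size = 1
```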
Let me know if that helps or not.
00Roush
-
GregAndo
- NewUser

- Posts: 5
- Joined: 19 Nov 2013 06:07
- Status: Offline
Re: Slower than expected iperf results with Intel nic
Wow. I think I love you. I had tried everything except decreasing the buffer sizes in CIFS. Performance is great on CIFS now with all buffers (including AIO) set to 0. Working a treat. I have not tried ZFSkerntune, but I have tuned parameters in loader.conf myself; nothing has really helped as yet, so I will have a bit more of a play on that front, as iSCSI write performance is still only around 600 Mbit/s.
I will stop here in the interests of not hijacking the thread.
I would also like to apologise, as I misread the OP, thinking that they were getting full speed on the iperf tests when this was only the case between the clients (I was getting full speed with iperf).
-
Andy22
- Starter

- Posts: 54
- Joined: 22 Feb 2014 17:16
- Status: Offline
Re: Slower than expected iperf results with Intel nic
GregAndo wrote: Performance is great on CIFS now with all buffers (including AIO) set to 0. Working a treat.
Can you post/send your global smb.conf?
I'm just in the process of migrating a Win7 server over to NAS4Free; so far, testing all the requirements inside VirtualBox has revealed only minor issues. I did some real Samba test runs and, as I expected, got worse speeds than when using only Windows machines.
thx
Andy
-
00Roush
- Starter

- Posts: 64
- Joined: 15 Sep 2013 09:27
- Status: Offline
Re: Slower than expected iperf results with Intel nic
FYI, NAS4Free by default does not support manual changes to the smb.conf file; changes are made in the WebGUI.
Depending on how well NAS4Free works inside of VirtualBox, you should be able to get transfer speeds close to what you would get Windows to Windows. Try Samba snd/rcv buffers at 0 and AIO on. Then install the ZFS Kernel Tune extension and set the memory size in the WebGUI if using ZFS. After a NAS4Free restart, try testing again.
00Roush
- raulfg3
- Site Admin

- Posts: 4865
- Joined: 22 Jun 2012 22:13
- Location: Madrid (ESPAÑA)
- Contact:
- Status: Offline
Re: Slower than expected iperf results with Intel nic
Reading the Siftu blog can help too: http://n4f.siftusystems.com/index.php/2 ... /comments/
12.1.0.4 - Ingva (revision 7743) on SUPERMICRO X8SIL-F 8GB of ECC RAM, 11x3TB disk in 1 vdev = Vpool = 32TB Raw size , so 29TB usable size (I Have other NAS as Backup)
Wiki
Last changes
HP T510
-
Andy22
- Starter

- Posts: 54
- Joined: 22 Feb 2014 17:16
- Status: Offline
Re: Slower than expected iperf results with Intel nic
I finished my tests on the physical hardware, and basically leaving Samba at its default settings gives me the best performance, which was 110 MB/s on UFS.
If I enable AIO, performance drops like crazy to 30-40 MB/s with heavy audible HDD seek activity. Decreasing the snd/rcv buffers also slows my speeds drastically; at 0/0 it mostly recovers, but large copy operations are slower. Increasing the snd/rcv buffers beyond 64k has neither a negative nor a positive effect, and enabling "Large 64k copy/writes" has a slightly negative effect for me.
The oft-cited "TCP_NODELAY" option actually does nothing in my setup; I get 110 MB/s with and without it.
The actual fix for my setup was to go back to MTU 1500 on all my machines, enable the "rxcsum/txcsum/tso4" hardware features on the NAS4Free server's Realtek card, and disable all those features on the Win7 clients in the NIC device settings; I also disabled "interrupt moderation" and left only "flow control" enabled. It seems the lower speeds were a result of using jumbo frames plus the hardware offload settings on the Windows clients.
Now I'm pretty happy, since I get near maximum network speed in Samba at MTU 1500, with crappy Realtek cards and no Samba tweaks at all :p
thx
Andy
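For anyone replicating the FreeBSD side of that fix, the offload features can be toggled with ifconfig. The interface name re0 is an assumption for a Realtek card (check ifconfig -l); the WebGUI interface settings are the place to make the change persistent:

```
# Enable checksum offload and TCP segmentation offload on the Realtek NIC
# (interface name re0 is an example; list interfaces with: ifconfig -l)
ifconfig re0 rxcsum txcsum tso4

# Drop back from jumbo frames to the standard MTU
ifconfig re0 mtu 1500

# To disable a feature instead, prefix it with a minus, e.g.:
ifconfig re0 -tso4
```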