Just putting together a NAS box, and I'm wondering about performance when multiple clients read or write data.
I ran 3 Cat6 lines from the server to the switch in a LAGG with LACP and jumbo frames (9000 MTU). Each client is connected to the switch with one Cat6 Gb line.
When I write files from one client I saturate the Gb line at 110 MB/s. When a second client writes simultaneously, it drops to 60-ish MB/s on both clients.
I thought this would not happen, since I aggregated 3 Gb lines from the server to the switch, increasing throughput on that end.
These are the specs of the system:
Supermicro X7DCA-3
2x Xeon E5420 @ 2.5 GHz
24 GB ECC RAM
8x 4TB Seagate 5900 RPM (ST4000DM000)
2-port Intel 82573V/L Gigabit Ethernet controller
2-port Intel 82571EB Gigabit Ethernet controller
Switch: managed Cisco SG300-10 10-port Gb.
Communication happens over the Apple AFP protocol.
Any advice, more than welcome!
This is the old XigmaNAS forum in read-only mode;
it will be taken offline by the end of March 2021!
I'd like to ask users and admins to rewrite/take over important posts from here into the new main forum!
It's not possible for us to export from here and import into the main forum!
3 LAGG Gb ports to switch from NAS --> still poor R/W?
-
helloha
- Starter

- Posts: 18
- Joined: 23 Jan 2014 14:49
- Status: Offline
3 LAGG Gb ports to switch from NAS --> still poor R/W?
Supermicro X7DWN+ - Dual Xeon 5130 - 56GB Ram - 8x 4TB Seagate ST4000DM000 - 2x 4TB HGST HDN724040ALE640 - 1x 256GB Crucial_CT256MX100SSD1 - TDK LoR TF30 USB 3.0 PMAP (boot) - Dell H310 SAS/SATA Controller - 2x HP360T Gb NIC - Supermicro SC825 2U Chassis 920Watt Platinum PSU.
-
vandy
- Starter

- Posts: 20
- Joined: 13 Dec 2013 08:34
- Status: Offline
Re: 3 LAGG Gb ports to switch from NAS --> still poor R/W?
I have a Cisco SG300 28-port switch and I have LACP working with 2 ports (Intel i210). My system has 16 GB of RAM with 6x HGST 4 TB CoolSpin drives. I am able to push 200 MB/s both ways.
Did you run zfskertune? This had a huge performance boost for me.
Running 9k jumbo frames actually had a diminishing return effect for me. I found 4k jumbo frames was the sweet spot. YMMV.
And most importantly, are you running raidz2? If so, 8 disks in raidz2 is not optimal and the performance hit is huge.
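If you do experiment with jumbo frame sizes, one way to confirm the whole path actually passes them is a don't-fragment ping sized to the MTU (a sketch using FreeBSD ping flags; the client address below is an assumption, substitute your own):

```shell
# Verify a 9000-byte MTU path end to end with a don't-fragment ping.
# 8972 = 9000 - 20 (IP header) - 8 (ICMP header). FreeBSD ping syntax.
# 192.168.1.20 is a placeholder for one of your clients.
ping -D -s 8972 -c 3 192.168.1.20
```

If any hop is still at 1500 (or 4000), the pings fail instead of silently fragmenting.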
-
helloha
- Starter

- Posts: 18
- Joined: 23 Jan 2014 14:49
- Status: Offline
Re: 3 LAGG Gb ports to switch from NAS --> still poor R/W?
vandy wrote:
> I have a Cisco SG300 28-port switch and I have LACP working with 2 ports (Intel i210). My system has 16 GB of RAM with 6x HGST 4 TB CoolSpin drives. I am able to push 200 MB/s both ways.
> Did you run zfskertune? This had a huge performance boost for me.
> Running 9k jumbo frames actually had a diminishing return effect for me. I found 4k jumbo frames was the sweet spot. YMMV.
> And most importantly, are you running raidz2? If so, 8 disks in raidz2 is not optimal and the performance hit is huge.
I must say that in experimenting with jumbo frames, it seemed like I was getting less performance too.
I will try the zfskertune thingy.
And yes I am running 8 disks in raidz2... what would be more optimal? 9x?
THX!
-
vandy
- Starter

- Posts: 20
- Joined: 13 Dec 2013 08:34
- Status: Offline
Re: 3 LAGG Gb ports to switch from NAS --> still poor R/W?
This is a good thread about optimal number of disks for zfs:
http://hardforum.com/showthread.php?p=1 ... 1036154326
My old NAS was running raidz1 with 6 disks and the performance was disappointing. On top of that, I was using Samsung HD204UI drives, which are excellent drives but still 5400 rpm, so that didn't help.
You can do a simple dd to get an idea of the performance your array is getting:
dd if=/dev/zero of=/path/to/pool/zerofile.000 bs=1m count=10000
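A matching read test works the same way (a sketch; the path is just the file written by the command above). Use a file larger than your RAM, or reboot first, so the ZFS ARC cache doesn't inflate the number:

```shell
# Read the test file back through /dev/null and let dd report the throughput.
# FreeBSD dd accepts the lowercase "1m" block size used above.
dd if=/path/to/pool/zerofile.000 of=/dev/null bs=1m
```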
So your problem seems to be either A) array configuration or B) link aggregation configuration
-
helloha
- Starter

- Posts: 18
- Joined: 23 Jan 2014 14:49
- Status: Offline
Re: 3 LAGG Gb ports to switch from NAS --> still poor R/W?
Yes, I think the problem is the link aggregation.
When I do the dd on the server itself I get reads up to 300 MB/s.
I'm quite new to this. I just created a LAGG on the Cisco SG300-10 with 3 ports, but I've read about VLANs as well. Do I need to configure those too?
Right now the LAGG is configured on the NAS and on the switch, with LACP on both ends.
The switch shows the link as UP and I can connect to the web interface, but I don't get the speeds.
I think something is wrong, because today I connected a client to the switch with a 2-line LAGG and still only got 1 Gb speeds (120 MB/s).
I am kind of at a loss here...
I did decrease the MTU to 4000 and that brought performance back up a bit. I also ran zfskertune, which helped a bit as well.
Thx for all the help until now!
K.
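For reference, the FreeBSD-style LACP lagg that XigmaNAS builds from the GUI looks roughly like this (a sketch only: the em0-em2 interface names and the address are assumptions for illustration, not taken from this thread):

```shell
# Sketch of an LACP lagg on FreeBSD (XigmaNAS normally configures this
# through its web GUI). Interface names and IP here are placeholders.
ifconfig lagg0 create
ifconfig lagg0 laggproto lacp laggport em0 laggport em1 laggport em2
ifconfig lagg0 inet 192.168.1.10 netmask 255.255.255.0 up
```

No VLAN configuration is required for the lagg itself; VLANs are a separate feature layered on top if you want network segmentation.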
-
vandy
- Starter

- Posts: 20
- Joined: 13 Dec 2013 08:34
- Status: Offline
Re: 3 LAGG Gb ports to switch from NAS --> still poor R/W?
If your switch shows the LAG as up, then I think it should be working fine.
But I think you might want to read up on link aggregation a little more. Link aggregation only provides more bandwidth when many clients connect to the host. LAG can be confusing, but consider this simplified example: take a modern CPU with 2 cores, each running at 3 GHz. Each core can execute one program at 3 GHz, and since it is a dual-core CPU, 2 programs can run at the same time (for simplicity's sake, each program runs only 1 thread). But the cores cannot work together to run a single program twice as fast, nor does that make the CPU 6 GHz. Same idea with LAG: your NAS with a 3x gigabit LAG has more aggregate bandwidth to host more clients, but cannot speed up the link to a single client.
As for connecting a client with a LAG and expecting its bandwidth to increase, LAG doesn't work that way. You are still connecting 1 client to the host, so that conversation is still limited to 1 gigabit.
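To make that "one conversation, one link" behaviour concrete, here is a toy sketch. It is not the real algorithm a Cisco SG300 uses (real switches hash actual L2/L3/L4 header fields), but it shows the principle: the LAG picks a member port by hashing the conversation's addresses, so one client/server pair always lands on the same 1 Gb link.

```shell
# Toy model of per-flow port selection in a LAG (an illustration only).
# The two arguments stand in for the last octet of the client and server
# MAC addresses; a real switch hashes full header fields.
pick_port() {
  client=$1; server=$2
  echo "port $(( (client ^ server) % 3 ))"   # 3 member links in the LAG
}
pick_port 10 27   # -> port 2: this pair always maps to the same member port
pick_port 10 27   # -> port 2: same flow, same port, so 1 Gb max
pick_port 13 27   # -> port 1: a different client can land on another port
```

That is why two clients together can exceed 1 Gb of aggregate traffic while each individual client stays capped at its own gigabit.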
I would probably say that your low performance is a mixture of:
1) Those Seagate "green" drives are slow. Actually, any "green" drive is slow to begin with, so your IOPS are going to be low.
2) A non-optimal number of disks. For raidz2, 6 or 10 drives is optimal; 8, sadly, is not.
3) Not checking the "4k drives" option when creating the vdevs.
Doing read and write operations at the same time is very IO-intensive, and "green" drives that run below 7200 rpm are not good at it.
Try creating a 6-disk raidz2 array and I think you'll be able to get much higher performance.
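The disk-count guideline above is the common community rule of thumb (not a hard ZFS requirement): raidz2 likes a power-of-two number of data disks, i.e. total disks minus the 2 parity disks. A quick sketch checking a few pool widths:

```shell
# Which raidz2 widths have a power-of-two number of data disks?
# (total disks minus 2 parity). This encodes the forum rule of thumb only.
for total in 4 6 8 10; do
  data=$(( total - 2 ))
  # power-of-two check via the classic n & (n-1) bit trick
  if [ $(( data & (data - 1) )) -eq 0 ]; then ok=yes; else ok=no; fi
  echo "$total-disk raidz2: $data data disks, power-of-two=$ok"
done
# 8-disk raidz2 is the odd one out: 6 data disks, power-of-two=no
```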
PS. 300 MB/s is really bad for 8 drives. I'm getting 425 MB/s on my 6x HGST CoolSpins, and those are 5700 rpm drives!
My old server with 8x Samsung "green" 2 TB 5400 rpm drives connected to an Areca 1223 card in RAID 6 can do 720 MB/s reads and 775 MB/s writes.
So definitely check your array!