I've been having issues for the past month trying to get an N4F (NAS4Free) box working the way I think it should. I believe my configuration is supported and correct, but feel free to let me know if this is intended behavior.
Server specs below:
Motherboard: GigaByte GA-990FXA-UD3
Processor: AMD FX-8120 processor
RAM: 16GB Non-ECC DDR3 memory
Onboard NIC: Realtek based
Video Card: HIS Radeon HD 5450 (Because the board won't boot without something as video...)
Add-In NICs: Two Intel Pro 1000/PT Dual Port NICs
HBA: LSI 9211-8i HBA
Arrays:
1 4TB Seagate ST4000VN000 Disk (I know I need to at least mirror it and plan on it soon)
5 1.5TB Seagate 7200.11 Disks in Z1 with a 60GB OCZ Vertex 2 SSD for L2ARC
zVols:
4.5TB Sparse Volume on the Z1 array
3TB Sparse Volume on 4TB Disk
What I am trying to do is get full performance out of my disks over iSCSI. I have tried doing this multiple ways, and it never works quite properly; it's good enough for a while, until it isn't, and I try to fix it again. I've had enough, though, and am asking for help.
The closest I have gotten to this being perfect was last night. I configured my four add-in NICs as 10.1.1.1, 10.1.2.1, 10.1.3.1, and 10.1.4.1 respectively, and configured the iSCSI portal to listen on all four of those IPs. On the machine I am connecting from (a Windows 7 machine with two of the same NICs as the server), my NICs were 10.1.1.2, 10.1.2.2, 10.1.3.2, and 10.1.4.2 respectively. I also configured the target's initiator access list to allow connections from those IPs. These machines are the only things on those subnets.
My iSCSI volumes are the zVols listed above, and I have them connected on the Win 7 box. I open the iSCSI initiator on that machine, connect to the N4F server, and set up my first connection using the 10.1.1.0 subnet. I then open the properties for the connection, open MCS (Multiple Connections per Session), and add the 2.0, 3.0, and 4.0 subnet connections in a round-robin configuration.
If I run the ATTO benchmark, transfers to both disks run perfectly up to 64KB block sizes: performance scales up as expected, reaching 80 MB/s for both read and write, and monitoring all the connections on both the N4F box and the Win 7 box shows traffic evenly distributed between all NICs. As soon as I move to 128KB block sizes, write speed goes up again to 125 MB/s with traffic still spread evenly between all NICs, but read speed drops to 15 MB/s and starts being served only by the 10.1.1.1 NIC. This behavior continues for the rest of the test: writes stay evenly distributed between NICs and peak at 250 MB/s, while reads are served only by the 10.1.1.1 NIC and peak at 90 MB/s. The behavior is the same for both zVols.
If I run ATTO on both disks at the same time, performance is cut by more than half on both arrays simultaneously, even though I am nowhere near the maximum throughput of either disk array, the CPU, or the network. Then, after the 64KB block test, read performance takes on the same behavior and is served from only one NIC.
So I thought, "OK, maybe this is because I initiated the connection to the 1.0 subnet; what if I create an LACP group on the N4F box and try again?" I did that, configured the IP on the LAGG interface as 10.1.0.1/22, and connected the same way as above. I got the opposite behavior: after 64KB, reads increase on both disks to a max of 220 MB/s, and writes are served by only a single, seemingly random NIC. Also, the 4.5TB disk is serviced by only three of the four NICs on the N4F box, while the 3TB disk is serviced by all four, except that its writes are served only by the NIC that was not servicing the 4.5TB disk.
If you need me to clean the above up to make sense I will, but here is the TL;DR version:
I believe I set everything up correctly, and according to iperf I have ~4 Gbps of aggregate bandwidth between these boxes. I cannot get iSCSI to use the connections properly above 64KB block sizes. Without an LACP connection on the N4F box, reads are served by one NIC while writes are served by all NICs at maximum disk bandwidth. With an LACP connection, writes are served by only one NIC while reads are served by almost all NICs, still at maximum disk bandwidth.
This is the old XigmaNAS forum in read-only mode; it will be taken offline by the end of March 2021!
I would like to ask users and admins to rewrite/take over important posts from here into the fresh new main forum!
It is not possible for us to export from here and import into the main forum!
iSCSI MCS issues
- Lee Sharp
- Advanced User

- Posts: 251
- Joined: 13 May 2013 21:12
- Contact:
- Status: Offline
Re: iSCSI MCS issues
You do not have 4 Gbps of bandwidth; you have 4x1 Gbps of bandwidth. LAGG/LACP doesn't load-share. You can force some round-robin behaviour, but each conversation stays on a single path.
That said, you are not pegging 1gbps yet. I would drop out the other nics for now to simplify things and focus on maximizing disk performance.
Also, for sustained reads, an ssd l2arc won't really help much.
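The "each conversation is on a distinct path" point can be sketched in a few lines. This is an illustrative model, not FreeBSD's actual lagg code: a LAGG load-balance policy hashes each flow's 4-tuple to one member link, so a single TCP connection (one iSCSI session without MCS) can never exceed one link's speed. The interface names and hash function here are made up for the sketch.

```python
# Sketch of per-flow link selection in a LAGG (illustrative only).
import hashlib

LINKS = ["em0", "em1", "em2", "em3"]  # hypothetical lagg member NICs

def pick_link(src_ip: str, dst_ip: str, src_port: int, dst_port: int) -> str:
    """Hash the flow 4-tuple to exactly one member link, as a
    load-balance lagg policy typically does."""
    key = f"{src_ip}:{src_port}->{dst_ip}:{dst_port}".encode()
    return LINKS[hashlib.sha256(key).digest()[0] % len(LINKS)]

# The same flow always maps to the same link, so one conversation
# is pinned to one 1 Gbps path no matter how many links exist:
flow = ("10.1.0.2", "10.1.1.1", 51000, 3260)
print(pick_link(*flow) == pick_link(*flow))  # always True
```

Only multiple flows (different ports or addresses) can spread across members, which is why aggregate tests like iperf with parallel streams see more than 1 Gbps while a single stream does not.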
- JL421
- NewUser

- Posts: 3
- Joined: 03 Sep 2013 16:33
- Status: Offline
Re: iSCSI MCS issues
Yes, I know I do not have a solid 4 Gbps of bandwidth; I'm trying to figure out why my initial MCS (Multiple Connections per Session) round-robin setup without LACP was not load-sharing reads correctly. If I could get that worked out, I wouldn't have an issue.
With the following scenario I was getting ~6,000 4k random read and write IOPS, as well as 320 MB/s writes with 512k blocks. Write traffic was balanced among the NICs as expected. Read traffic was balanced among the NICs until I used blocks larger than 64k; then all read traffic shifted to the Storage 1 pair of NICs and never exceeded 110 MB/s.
The iSCSI connection was made initially between the Storage 1 NICs, then the remaining connections were added using MCS.
N4F Server:                        Win 7 Box:
Lan:       192.168.20.25           192.168.20.10 :Lan
Storage 1: 10.1.1.1      <------>  10.1.1.2      :Storage 1
Storage 2: 10.1.2.1      <------>  10.1.2.2      :Storage 2
Storage 3: 10.1.3.1      <------>  10.1.3.2      :Storage 3
Storage 4: 10.1.4.1      <------>  10.1.4.3      :Storage 4
With the next scenario I was getting the same ~6,000 4k random read and write IOPS, but a 270 MB/s read with 512k blocks. Read traffic was split between three of the four N4F NICs and all four of the Win 7 NICs; write traffic only ever used the remaining NIC.
N4F Server:                    Win 7 Box:
Lan:   192.168.20.25           192.168.20.10 :Lan
LAGG0: 10.1.1.1/22   <------>  10.1.0.2      :Storage 1
LAGG0: 10.1.1.1/22   <------>  10.1.1.2      :Storage 2
LAGG0: 10.1.1.1/22   <------>  10.1.2.2      :Storage 3
LAGG0: 10.1.1.1/22   <------>  10.1.3.3      :Storage 4
If I drop down to one NIC, I bounce off the 1 Gbps limit, and based on my transfer speeds above I can achieve both a 2.5 Gbps write and a 2.16 Gbps read. I just can't get them both to work with the same setup.
The L2ARC isn't there for the sustained reads; I also do a lot of small random reads/writes. That, and the SSD was spare and not destined for anything else.
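The 2.5/2.16 Gbps figures follow directly from the MB/s numbers in the two scenarios above. A quick sanity check of the unit conversion (decimal units, 1 MB/s = 8 Mbps assumed):

```python
# Convert the observed MB/s figures above to Gbps.
def mbps_to_gbps(megabytes_per_sec: float) -> float:
    """Decimal conversion: MB/s -> Mbps -> Gbps."""
    return megabytes_per_sec * 8 / 1000

write_gbps = mbps_to_gbps(320)  # 512k-block write, first scenario
read_gbps = mbps_to_gbps(270)   # 512k-block read, second scenario

print(f"write: {write_gbps:.2f} Gbps, read: {read_gbps:.2f} Gbps")
# prints "write: 2.56 Gbps, read: 2.16 Gbps"
```

Both results sit well past a single 1 Gbps link (and match the "2.5 Gbps write and 2.16 Gbps read" claim within rounding), so the disks and network are capable; only the per-direction path selection is the bottleneck.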
- JL421
- NewUser

- Posts: 3
- Joined: 03 Sep 2013 16:33
- Status: Offline
Re: iSCSI MCS issues
I figured it out a couple days ago.
When I originally set up this configuration:
Lan:       192.168.20.25           192.168.20.10 :Lan
Storage 1: 10.1.1.1      <------>  10.1.1.2      :Storage 1
Storage 2: 10.1.2.1      <------>  10.1.2.2      :Storage 2
Storage 3: 10.1.3.1      <------>  10.1.3.2      :Storage 3
Storage 4: 10.1.4.1      <------>  10.1.4.3      :Storage 4
I had left all of the N4F NICs at the default /1 subnet bit count. MCS only balances correctly when each connection's source and destination resolve to distinct subnets on both ends. My computer was configured correctly and sent traffic to all the N4F NICs as expected, because it saw a different subnet behind each of its NICs; the N4F server, however, sent all its traffic out one NIC, because with the /1 mask it saw all of my computer's NICs as being on the same subnet.
Now I can burst traffic up to 325 MB/s sequential read and write for short transfers (16GB of RAM really helps with this), as well as 20k 4KB IOPS read and write per device. I can also use both devices at once and get 410 MB/s sequential read/write for small transfers.
TL;DR: If traffic is utilizing all interfaces in one direction, but not the other, check your subnet masks.
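The subnet-mask diagnosis above can be reproduced with Python's standard `ipaddress` module. The /1 default and the storage addresses mirror the post; the /24 mask used for contrast is an assumption about the corrected setup, not something stated explicitly:

```python
# Why a too-wide netmask collapses return traffic onto one NIC:
# with /1, every remote address looks "on-link" for every local
# interface, so the routing table picks a single interface for all
# of it. With /24 each link is its own distinct subnet.
import ipaddress

client_ips = ["10.1.1.2", "10.1.2.2", "10.1.3.2", "10.1.4.2"]

def on_link_peers(local_iface_cidr: str) -> list:
    """Client IPs that appear directly on-link for this local interface."""
    net = ipaddress.ip_interface(local_iface_cidr).network
    return [ip for ip in client_ips if ipaddress.ip_address(ip) in net]

# Broken: 10.1.1.1/1 covers 0.0.0.0/1, so all four peers match one NIC.
print(on_link_peers("10.1.1.1/1"))   # prints all four addresses
# Fixed (assumed /24): each interface sees only its own peer.
print(on_link_peers("10.1.1.1/24"))  # prints ['10.1.1.2']
```

With distinct subnets on both ends, the server's routing table has exactly one interface per peer, so MCS read traffic spreads across all links the same way the writes already did.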