
Great iSCSI performance on Windows, Poor on VMWare ESXi

iSCSI over TCP/IP.
tuaris
experienced User
Posts: 85
Joined: 19 Jul 2012 21:31
Contact:
Status: Offline

Great iSCSI performance on Windows, Poor on VMWare ESXi

Post by tuaris »

I don't think this is a NAS4Free issue, but on VMware ESXi the iSCSI throughput is bad, poor, horrible, and those words don't come close to describing what I am seeing.

My setup:
  • 9.2.0.1 - Shigawire (revision 972)
  • x64-embedded on Intel(R) Xeon(R) CPU E3110 @ 3.00GHz
  • Memory usage: 34% of 7921MiB
Two ZFS Pools:

Code: Select all

da0 	2861589MB 	WDC WD30EFRX-68AX9N0 80.0  	n/a  	n/a  	ZFS storage pool device  	25.17 KiB/t, 8 tps, 0.20 MiB/s  	n/a  	ONLINE 
da1 	2861589MB 	WDC WD30EFRX-68AX9N0 80.0  	n/a  	n/a  	ZFS storage pool device  	25.38 KiB/t, 8 tps, 0.20 MiB/s  	n/a  	ONLINE 
da2 	2861589MB 	WDC WD30EFRX-68AX9N0 80.0  	n/a  	n/a  	ZFS storage pool device  	25.61 KiB/t, 8 tps, 0.20 MiB/s  	n/a  	ONLINE 
da3 	2861589MB 	WDC WD30EFRX-68AX9N0 80.0  	n/a  	n/a  	ZFS storage pool device  	25.71 KiB/t, 8 tps, 0.20 MiB/s  	n/a  	ONLINE 

Code: Select all

da4 	953870MB 	WDC WD1003FBYX-01Y7B 01.0  	n/a  	n/a  	ZFS storage pool device  	18.59 KiB/t, 375 tps, 6.81 MiB/s  	n/a  	ONLINE 
da5 	953870MB 	WDC WD1003FBYX-01Y7B 01.0  	n/a  	n/a  	ZFS storage pool device  	18.59 KiB/t, 375 tps, 6.81 MiB/s  	n/a  	ONLINE 

Code: Select all

 pool: external1
 state: ONLINE
status: The pool is formatted using a legacy on-disk format.  The pool can
	still be used, but some features are unavailable.
action: Upgrade the pool using 'zpool upgrade'.  Once this is done, the
	pool will no longer be accessible on software that does not support feature
	flags.
  scan: resilvered 879G in 4h7m with 0 errors on Tue Aug 19 18:04:40 2014
config:

	NAME        STATE     READ WRITE CKSUM
	external1   ONLINE       0     0     0
	  raidz2-0  ONLINE       0     0     0
	    da2     ONLINE       0     0     0
	    da0     ONLINE       0     0     0
	    da1     ONLINE       0     0     0
	    da3     ONLINE       0     0     0

errors: No known data errors

  pool: internal
 state: ONLINE
  scan: none requested
config:

	NAME        STATE     READ WRITE CKSUM
	internal    ONLINE       0     0     0
	  mirror-0  ONLINE       0     0     0
	    da4     ONLINE       0     0     0
	    da5     ONLINE       0     0     0

errors: No known data errors
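(Aside: the zpool status output above notes the legacy on-disk format and suggests the fix itself. Assuming shell access to the NAS4Free box, it is a one-liner, with the caveat the output warns about:)

```shell
# Upgrade external1's on-disk format to enable feature flags.
# Irreversible: ZFS implementations without feature-flag support
# will no longer be able to import the pool afterwards.
zpool upgrade external1
```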
Overall, the performance on the NAS4Free server itself is amazing. On the 'external1' pool, SMB read/write is well into the 50-60 MB/s range, and iSCSI read is steady at 50 MB/s while write is 120 MB/s.

SMB Read and Write:
[screenshots]

The CPU usage:
[graph]

iSCSI Read and Write:
[screenshots]

The CPU usage:
[graph]

On the VMware side, I have the other pool, 'internal', dedicated to providing a VMware datastore via iSCSI. The vSphere cluster is housed in an IBM BladeCenter S and consists of 4 HS21 server blades, each with 16 GB of RAM and dual quad-core CPUs.

The performance is horrible. Throughput rarely breaches the 100 Mbit/s mark. Even while idle (no large read/write tasks), my VMs are sluggish, yet network utilization on the VMware-facing interface on NAS4Free remains very low:

[interface graphs]

Guest VM:
[screenshot]

If I start to copy data to or from a VM, the performance drops like a rock:

Guest VM:
[screenshot]

Notice the drop at the point where the IO operation begins:
[graph]

Then all the other VMs in the cluster become sluggish and unresponsive.

To ensure I do not have a network bottleneck, I tested the performance from a VM in the cluster to an external machine:
[screenshot]

Although this test shows roughly 400 Mbit/s, previous tests have gone as high as 800 Mbit/s; it just so happens that I'm testing at peak usage time. However, the issues occur regardless of the time of day.

I'm not sure what else to try. Why is the performance so bad just on the VMWare side?

vlho
NewUser
Posts: 4
Joined: 21 May 2013 11:28
Status: Offline

Re: Great iSCSI performance on Windows, Poor on VMWare ESXi

Post by vlho »

Hi,

VMware ESXi does not cache write operations at the OS level.
You need a controller with a battery-backed write cache.
More information can be found here:
https://communities.vmware.com/message/2284963
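(On the ZFS side, a related mitigation, not mentioned above and an assumption on my part, is a dedicated log device, so ESXi's synchronous writes are acknowledged from fast flash; 'ada6' below is a hypothetical SSD device name:)

```shell
# Show how the pool currently handles synchronous writes;
# the default 'standard' honors ESXi's sync/flush requests.
zfs get sync internal

# Add a hypothetical SSD (ada6) as a dedicated ZIL/SLOG device
# so sync writes land on flash instead of the spinning mirror.
zpool add internal log ada6
```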

vlho
NewUser
Posts: 4
Joined: 21 May 2013 11:28
Status: Offline

Re: Great iSCSI performance on Windows, Poor on VMWare ESXi

Post by vlho »

Alternatively, create the iSCSI target on a ZFS dataset with sync disabled,
but that isn't safe.
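(For reference, a minimal sketch of that setting, assuming the extent lives on a hypothetical dataset named internal/esxi:)

```shell
# UNSAFE: acknowledges sync writes immediately; a power loss can
# corrupt the VMFS datastore built on top of this extent.
zfs set sync=disabled internal/esxi

# Revert to the safe default:
zfs set sync=standard internal/esxi
```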

STAMSTER
Starter
Posts: 72
Joined: 23 Feb 2014 15:58
Status: Offline

Re: Great iSCSI performance on Windows, Poor on VMWare ESXi

Post by STAMSTER »

vlho wrote:Alternatively, create the iSCSI target on a ZFS dataset with sync disabled,
but that isn't safe.
That won't help much either. The issue is on the VMware side, in the "software iSCSI adapter"... I've heard the situation is a little better on XenServer, but still not great...

On my Linux box, a mounted iSCSI ZVOL works like a charm, maxing out a Gbit NIC in full duplex.

For those who have tried running a UFS disk/volume as an iSCSI target, the situation is somewhat better.
rIPMI

Lee Sharp
Advanced User
Posts: 251
Joined: 13 May 2013 21:12
Contact:
Status: Offline

Re: Great iSCSI performance on Windows, Poor on VMWare ESXi

Post by Lee Sharp »

The best solution is a caching product like Infinio or PernixData. It improves performance DRASTICALLY!

STAMSTER
Starter
Posts: 72
Joined: 23 Feb 2014 15:58
Status: Offline

Re: Great iSCSI performance on Windows, Poor on VMWare ESXi

Post by STAMSTER »

Well, that's true, since such solutions create a software tier for IO only... and hence you get blazing fast performance and IOPS.
But those are commercial solutions (license = $$$$), so I believe they are not of big interest to this community.
After all, if you're running a business, it's smarter to invest in FCoE infrastructure...
rIPMI

Lee Sharp
Advanced User
Posts: 251
Joined: 13 May 2013 21:12
Contact:
Status: Offline

Re: Great iSCSI performance on Windows, Poor on VMWare ESXi

Post by Lee Sharp »

Your time has value as well. After I spent over 40 billable hours making NAS4Free perform as well as possible with VMware, we tried Infinio, and it was much better. Sometimes spending the money is worth it.

tuaris
experienced User
Posts: 85
Joined: 19 Jul 2012 21:31
Contact:
Status: Offline

Re: Great iSCSI performance on Windows, Poor on VMWare ESXi

Post by tuaris »

I looked into the Infinio option. I found this rather interesting:
Infinio uses two vCPUs and just 8GB of RAM on each host
8GB is a lot of RAM.

So this basically keeps a cache on the ESXi hosts so they don't need to hit the NAS4Free box as often. It sounds more like a workaround than a solution, but I can see how it would address the performance problems. For now I have to use a less expensive option, since I probably can't afford it. I may just set up NFS shares on NAS4Free, mount them in my VMs, and point my applications' data directories at the NFS shares.
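(A sketch of that NFS plan from inside a guest; the server address and export path below are hypothetical:)

```shell
# Mount a NAS4Free NFS export inside the VM (hypothetical names)
mkdir -p /mnt/data
mount -t nfs 192.168.1.10:/mnt/internal/share /mnt/data

# Or make it persistent via /etc/fstab:
echo '192.168.1.10:/mnt/internal/share  /mnt/data  nfs  rw  0  0' >> /etc/fstab
```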

I wonder if there is an open source alternative to Infinio...

STAMSTER
Starter
Posts: 72
Joined: 23 Feb 2014 15:58
Status: Offline

Re: Great iSCSI performance on Windows, Poor on VMWare ESXi

Post by STAMSTER »

I've tested a pretty crazy workaround to improve the performance of the hypervisor / VM machines under certain workloads...

I installed Debian wheezy as a regular VM on ESXi 5.5U2, then installed the open-iscsi service inside the VM and mounted an iSCSI share directly from the N4F box. So both ESXi and the VM itself have direct access to the N4F appliance. Performance with such a setup is just great! I even put my MongoDB datafiles on that iSCSI-mounted partition, and it works just fine, as if it were a 'regular' machine with locally attached storage.

So the VM structure looks like this:
/
/boot
/home
/etc (these regular partitions are controlled by the ESXi datastore layer)

The special partition /myiscsi is mounted via the open-iscsi service as a ZFS-backed target exposed directly by the N4F box.

The /etc/fstab entry is mapped to the UUID of the network device (iSCSI extent).
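(The in-guest setup described above can be sketched with standard open-iscsi commands; the portal address, the filesystem type, and the UUID placeholder are assumptions:)

```shell
# Discover targets exported by the N4F box (hypothetical portal IP)
iscsiadm -m discovery -t sendtargets -p 192.168.1.10

# Log in to the discovered target; a new block device (e.g. /dev/sdb) appears
iscsiadm -m node --login

# Mount by UUID, with _netdev so mounting waits for the network/iSCSI session
echo 'UUID=<uuid-of-extent>  /myiscsi  ext4  _netdev  0 0' >> /etc/fstab
mount /myiscsi
```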

For now I'm running only a single link (without iSCSI multipathing); I'm going to try multiple NICs, but 100 MB/s read/write speeds are very good for such a distributed infrastructure :)
rIPMI

ganddy
NewUser
Posts: 2
Joined: 28 Feb 2015 21:12
Status: Offline

Re: Great iSCSI performance on Windows, Poor on VMWare ESXi

Post by ganddy »

Hi Stamster, any figures on multiple links yet?

tuaris
experienced User
Posts: 85
Joined: 19 Jul 2012 21:31
Contact:
Status: Offline

Re: Great iSCSI performance on Windows, Poor on VMWare ESXi

Post by tuaris »

Finally got a second NAS4Free system set up. Using NFS mounts directly within the VMs makes a huge difference:

NFS Mounted:

Code: Select all

root@wallets04:/mnt/blockchains # /usr/bin/time -h dd if=/dev/zero of=sometestfile bs=1024 count=1604800
1604800+0 records in
1604800+0 records out
1643315200 bytes transferred in 29.251770 secs (56178317 bytes/sec)
        29.26s real             0.37s user              15.85s sys
root@wallets04:/mnt/blockchains # /usr/bin/time -h dd if=sometestfile of=/dev/null bs=1024 count=1604800
1604800+0 records in
1604800+0 records out
1643315200 bytes transferred in 30.086053 secs (54620498 bytes/sec)
        30.09s real             0.33s user              11.36s sys
Local Disk (ESXi iSCSI):

Code: Select all

root@wallets01:~ # /usr/bin/time -h dd if=/dev/zero of=sometestfile bs=1024 count=1604800
1604800+0 records in
1604800+0 records out
1643315200 bytes transferred in 420.038034 secs (3912301 bytes/sec)
        7m0.04s real            0.21s user              8.09s sys
root@wallets01:~ # /usr/bin/time -h dd if=sometestfile of=/dev/null bs=1024 count=1604800
1604800+0 records in
1604800+0 records out
1643315200 bytes transferred in 5.835309 secs (281615797 bytes/sec)
        5.84s real              0.24s user              3.91s sys
Looks like FreeBSD does not do read caching for NFS, but it's still a performance gain, since the data I'm reading will likely not be cached in RAM anyway. However, I'd still like to know whether it's possible to enable NFS read caching.

tuaris
experienced User
Posts: 85
Joined: 19 Jul 2012 21:31
Contact:
Status: Offline

Re: Great iSCSI performance on Windows, Poor on VMWare ESXi

Post by tuaris »

Suddenly, performance is a whole lot better with iSCSI:

Code: Select all

# /usr/bin/time -h dd if=/dev/zero of=sometestfile bs=1024 count=560480
560480+0 records in
560480+0 records out
573931520 bytes transferred in 20.528034 secs (27958426 bytes/sec)
        20.55s real             0.10s user              3.25s sys
# /usr/bin/time -h dd if=sometestfile of=/dev/null bs=1024 count=560480
560480+0 records in
560480+0 records out
573931520 bytes transferred in 9.440106 secs (60797148 bytes/sec)
        9.44s real              0.04s user              2.10s sys
I don't know exactly what 'fixed' this. The changes I've made since my last post include:
  • Moved the busy stuff to NFS (it was still slow after this)
  • Removed an unused iSCSI target whose extent was sitting on a USB drive (it was never used as an ESXi datastore)
  • Upgraded to the latest version of NAS4Free

STAMSTER
Starter
Posts: 72
Joined: 23 Feb 2014 15:58
Status: Offline

Re: Great iSCSI performance on Windows, Poor on VMWare ESXi

Post by STAMSTER »

Inside of a VM, read test:

Code: Select all

time dd if=file.iscsi of=/dev/null bs=512K
10240+0 records in
10240+0 records out
5368709120 bytes (5.4 GB) copied, 48.8904 s, 110 MB/s

real 0m48.891s
user 0m0.016s
sys 0m2.104s



Still using a single path. I decided not to go multipath yet, but to balance paths manually, i.e. path1/lan1 dedicated to one group of 5 VMs and path2/lan2 dedicated to another group of 5 VMs :)

Working at full Gbit network read/write speeds! :)
rIPMI
