This is the old XigmaNAS forum in read-only mode;
it will be taken offline by the end of March 2021!



We would like to ask users and admins to rewrite/take over important posts from here into the fresh new main forum!
It is not possible for us to export from here and import into the main forum!

SSD Cache

blowfish256
NewUser
Posts: 5
Joined: 19 Sep 2013 11:50
Location: Yorkshire, UK
Status: Offline

SSD Cache

Post by blowfish256 »

Hi all,

I wonder if someone can help me here. I have 3x 2 TB HDDs in a ZFS raidz1. I've added a single 64 GB SSD, partitioned for cache (45 GB) and log (10 GB).

I did some disk performance checks before upgrading the pool with the SSD; see below.

nas1:~# dd if=/dev/zero of=/myraid/test.dd bs=2M count=10000
10000+0 records in
10000+0 records out
20971520000 bytes transferred in 141.014045 secs (148719370 bytes/sec)


I was expecting an improvement after installing the SSD, but it now appears to be slower; see below.


nas1: ~ # dd if=/dev/zero of=/myraid/test.dd bs=2M count=10000
10000+0 records in
10000+0 records out
20971520000 bytes transferred in 162.717118 secs (128883305 bytes/sec)


Why is this slower? My zpool status is...

nas1: ~ # zpool status
  pool: myraid
 state: ONLINE
  scan: scrub repaired 0 in 6h15m with 0 errors on Tue Aug 20 06:15:19 2013
config:

        NAME        STATE     READ WRITE CKSUM
        myraid      ONLINE       0     0     0
          raidz1-0  ONLINE       0     0     0
            ada0p2  ONLINE       0     0     0
            ada1p2  ONLINE       0     0     0
            ada2p2  ONLINE       0     0     0
        logs
          ada3p1    ONLINE       0     0     0
        cache
          ada3p2    ONLINE       0     0     0

errors: No known data errors



Thanks for reading. Any suggestions/ideas as to why it's slower?

chrisf4lc0n
Advanced User
Posts: 262
Joined: 07 May 2013 13:15
Location: West Drayton (London)
Status: Offline

Re: SSD Cache

Post by chrisf4lc0n »

There could be several reasons for that:
1. The CPU not being powerful enough for all those drives.
2. If the SSD is only SATA2, the maximum transfer you can get is 300 MB/s, so if you are logging and caching on the same device, the most you could achieve is 150 MB/s for each operation, and that is only if linear access is required; it will be even slower if random data needs to be accessed.
I would personally use a single SSD as either cache or log, but not both, unless a very fast SSD on a SATA3 interface is being used.
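A hedged sketch of that advice (not from the post itself): dropping the log role so the SSD serves as cache only. The pool and partition names (myraid, ada3p1) are taken from the zpool status earlier in the thread; DRY_RUN=1 makes the script print the commands instead of executing them.

```shell
# Keep the SSD as L2ARC only by removing the log vdev.
DRY_RUN=1
run() { [ "$DRY_RUN" = 1 ] && echo "$@" || "$@"; }

run zpool remove myraid ada3p1   # detach the 10 GB log partition
run zpool status myraid          # verify only the cache vdev remains
```

Set DRY_RUN=0 only once you are sure the device names match your own layout.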
Watercooling is just the beginning ;)

chrisf4lc0n
Advanced User
Posts: 262
Joined: 07 May 2013 13:15
Location: West Drayton (London)
Status: Offline

Re: SSD Cache

Post by chrisf4lc0n »

nas4free: ZFS_RAID # dd if=/dev/zero of=/mnt/ZFS_RAID//test.dd bs=2M count=10000
10000+0 records in
10000+0 records out
20971520000 bytes transferred in 77.180087 secs (271721901 bytes/sec)
Just for your reference, that is what I achieve with a 3-way mirror and one 80 GB SATA2 SSD.
Watercooling is just the beginning ;)

raulfg3
Site Admin
Posts: 4865
Joined: 22 Jun 2012 22:13
Location: Madrid (ESPAÑA)
Contact:
Status: Offline

Re: SSD Cache

Post by raulfg3 »

Synthetic tests are not a good measure; they only give an indication. Try repeating the test by copying one big ISO file or several AVI files (if those are your typical files), and compare.

Just for information: I did the same test using an OCZ 64 GB disk and ultimately did not notice any real improvement for home use. For professional or SOHO use, where lots of files are copied/moved at the same time, it may be noticeable, but for home use, where files are copied/moved one after another, it is not.
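The real-file test suggested above can be sketched in portable sh. The 64 MB size and /tmp paths are placeholders for illustration; on the NAS you would copy a multi-gigabyte ISO that already lives on the pool.

```shell
# Copy a real file and time it, instead of streaming zeros with dd.
SRC=/tmp/ssdcache_src.bin
DST=/tmp/ssdcache_dst.bin
dd if=/dev/urandom of="$SRC" bs=1048576 count=64 2>/dev/null  # 64 MB of incompressible data
t0=$(date +%s)
cp "$SRC" "$DST"
t1=$(date +%s)
cmp -s "$SRC" "$DST" && echo "copy ok in $((t1 - t0))s"
rm -f "$SRC" "$DST"
```

Using /dev/urandom avoids compression skewing the numbers the way /dev/zero can.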
12.1.0.4 - Ingva (revision 7743) on SUPERMICRO X8SIL-F 8GB of ECC RAM, 11x3TB disk in 1 vdev = Vpool = 32TB Raw size , so 29TB usable size (I Have other NAS as Backup)

Wiki
Last changes

HP T510

blowfish256
NewUser
Posts: 5
Joined: 19 Sep 2013 11:50
Location: Yorkshire, UK
Status: Offline

Re: SSD Cache

Post by blowfish256 »

Hi thanks very much for the fast responses!

System spec is...

Version 9.1.0.1 - Sandstorm (revision 847)
Build date Sun Aug 18 03:49:41 CEST 2013
Platform OS FreeBSD 9.1-RELEASE-p5 (kern.osreldate: 901000)
Platform x64-embedded on Intel(R) Core(TM)2 Quad CPU Q6600 @ 2.40GHz
System Intel Corporation DG965WH
System bios Intel Corp. version: MQ96510J.86A.1754.2008.1117.0002 11/17/2008
RAM 8GB


I use nas4free as an ESXi datastore via iSCSI, running 6 VMs for my business. The aim was to speed up these operating systems (which run well, but I thought the SSD would make a good difference).

Copying a large 20 GB file over a Samba share from the NAS box starts at around 90 MB/s, then drops to a 50 MB/s average.

The SSD I'm using is a Corsair 64 GB Neutron 2.5-inch SATA III SSD.

Thanks

alexey123
Moderator
Posts: 1469
Joined: 19 Aug 2012 08:22
Location: Israel, Karmiel
Contact:
Status: Offline

Re: SSD Cache

Post by alexey123 »

My ZFS test. Before ZFS tuning:
# dd if=/dev/zero of=/mnt/disk0/test.bin bs=2M count=100
100+0 records in
100+0 records out
209715200 bytes transferred in 1.389821 secs (150893671 bytes/sec)
After the first tuning step:
zfskerntune # dd if=/dev/zero of=/mnt/disk0/test.bin bs=2M count=100
100+0 records in
100+0 records out
209715200 bytes transferred in 0.622582 secs (336847538 bytes/sec)
I didn't want to continue; it all works.
Home12.1.0.4 - Ingva (revision 7091)/ x64-embedded on AMD A8-7600 Radeon R7 A88XM-PLUS/ 16G RAM / UPS Ippon Back Power Pro 600
Lab 12.1.0.4 - Ingva (revision 7091) /x64-embedded on Intel(R) Core(TM) i3-3220 CPU @ 3.30GHz / H61M-DS2 / 4G RAM / UPS Ippon Back Power Pro 600

chrisf4lc0n
Advanced User
Posts: 262
Joined: 07 May 2013 13:15
Location: West Drayton (London)
Status: Offline

Re: SSD Cache

Post by chrisf4lc0n »

Your CPU is more than capable of maxing out a gigabit network.
The cache drive only helps when a particular file has been accessed frequently enough that it is still sitting on the SSD.
However, if speed is your main aim, ditch the RAIDZ and create a mirror with the 3 drives; that way you will be reading from 3 drives at the same time, and up to 2 disks in the pool could fail before data is completely lost. That will of course come at the cost of capacity, though!
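A destructive sketch of that rebuild, assuming everything on the pool is backed up first. Device names follow the zpool status earlier in the thread, and DRY_RUN=1 only prints the commands rather than running them.

```shell
DRY_RUN=1
run() { [ "$DRY_RUN" = 1 ] && echo "$@" || "$@"; }

run zpool destroy myraid                              # wipes the pool: backup first!
run zpool create myraid mirror ada0p2 ada1p2 ada2p2   # 3-way mirror instead of raidz1
run zpool add myraid cache ada3p2                     # re-attach the SSD as L2ARC only
```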
Watercooling is just the beginning ;)

blowfish256
NewUser
Posts: 5
Joined: 19 Sep 2013 11:50
Location: Yorkshire, UK
Status: Offline

Re: SSD Cache

Post by blowfish256 »

Thanks Chris. I might just do that. If I do...should I still use the ssd for cache or is it wasted?

Last thing, my iscsi uses file extents. If I create a snapshot while they are in use will that cause any data loss to the backup?

chrisf4lc0n
Advanced User
Advanced User
Joined: 07 May 2013 13:15
Location: West Drayton (London)
Status: Offline

Re: SSD Cache

Post by chrisf4lc0n »

blowfish256 wrote:Thanks Chris. I might just do that. If I do...should I still use the ssd for cache or is it wasted?
Keep the SSD as cache or log only, you can always add another drive in the future if you need both.
blowfish256 wrote:Last thing, my iscsi uses file extents. If I create a snapshot while they are in use will that cause any data loss to the backup?
That I am not qualified enough to answer ;)
Watercooling is just the beginning ;)

Onichan
Advanced User
Posts: 238
Joined: 04 Jul 2012 21:41
Status: Offline

Re: SSD Cache

Post by Onichan »

I am not sure how ZFS decides what to cache, but I have read multiple people say the cache doesn't do much unless you are using it on something like a file share with many small, frequently accessed files.

With that said, more disks is the best way to get more speed, but your disks should already be able to saturate your Gb NIC. I'm not sure why it drops to 50, but you should make sure you have AIO enabled and large read/write set, and double the send and receive buffers. Also, enabling jumbo frames end to end got me an extra 20 MB/s or so; I would look into that as well. I get ~105 MB/s sustained when transferring 10+ GB files. I actually have quite good disk speed as well:

Code:

nas:/mnt/pool/random# dd if=/dev/zero of=test.dd bs=2M count=10000
10000+0 records in
10000+0 records out
20971520000 bytes transferred in 41.667902 secs (503301558 bytes/sec)
But if you're running a bunch of VMs off it, you will want more IO, which requires more vdevs: you only get one disk's worth of random IO per vdev.
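That per-vdev point is why striping two mirrors helps VM workloads: each extra mirror vdev adds another disk's worth of random IO. A hedged sketch with illustrative pool and device names (DRY_RUN=1 prints instead of executing):

```shell
DRY_RUN=1
run() { [ "$DRY_RUN" = 1 ] && echo "$@" || "$@"; }

run zpool add mypool mirror ada2 ada4   # second mirror vdev adds another disk's worth of IOPS
run zpool iostat -v mypool 5            # watch IO spread across both vdevs
```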

siftu
Moderator
Posts: 71
Joined: 17 Oct 2012 06:36
Status: Offline

Re: SSD Cache

Post by siftu »

1. Don't run a SLOG and L2ARC on the same SSD; we see it time and time again that people get worse performance than with just raw disks.
2. A SLOG and L2ARC serve random IO, not a sequential dd test. Use a benchmarking program that can actually generate random IO.

You should notice improvements from a SLOG with a protocol like NFS. The L2ARC can also take a while to "warm up" its cache.

You will also get better performance on iSCSI with zvols rather than file-based extents.
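A hedged sketch of the zvol suggestion; the pool name, size, and volblocksize are placeholders, and on NAS4Free you would then point a device extent at the resulting zvol. DRY_RUN=1 prints instead of executing.

```shell
DRY_RUN=1
run() { [ "$DRY_RUN" = 1 ] && echo "$@" || "$@"; }

run zfs create -V 500G -o volblocksize=8k myraid/esxi_lun0
# the block device then appears under /dev/zvol/myraid/esxi_lun0
```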
System specs: NAS4Free amd64-embedded on ASUSTeK. M5A78L-M LX PLUS - AMD Phenom(tm) II X3 720 Processor - 8GB ECC Ram, Storage: 2x ZFS mirrors with 4x Western Digital Green (WDC WD10EADS)
My NAS4Free related blog - http://n4f.siftusystems.com/

blowfish256
NewUser
Posts: 5
Joined: 19 Sep 2013 11:50
Location: Yorkshire, UK
Status: Offline

Re: SSD Cache

Post by blowfish256 »

Hi all, thanks for info.

Last couple of questions: I'm going to change to mirrors instead of the raidz for better ESXi iSCSI performance, and I've bought a 4th 2 TB HDD. Can I still use a cache with a mirror pool? Will the cache work with 2 mirrors? So my setup would be 2 mirrors of 2x 2 TB HDDs in one pool, plus the SSD as cache...

Finally, is there a way to convert file extents back to a zvol?

Cheers for any advice.

blowfish256
NewUser
Posts: 5
Joined: 19 Sep 2013 11:50
Location: Yorkshire, UK
Status: Offline

Re: SSD Cache

Post by blowfish256 »

Hi all, just an update. I'm now running 4x 2 TB WD Blacks plus the SSD for cache only.

Current stats are...

Code:

nas1: datapool_1 # dd if=/dev/zero of=/mnt/datapool_1/test.dd bs=2M count=10000
10000+0 records in
10000+0 records out
20971520000 bytes transferred in 189.956621 secs (110401627 bytes/sec)

nas1: datapool_1 # dd if=/mnt/datapool_1/test.dd of=/dev/null bs=2M count=10000
10000+0 records in
10000+0 records out
20971520000 bytes transferred in 125.112781 secs (167620925 bytes/sec)

Code:


nas1: datapool_1 # zpool status
  pool: datapool_1
 state: ONLINE
  scan: resilvered 612M in 0h3m with 0 errors on Thu Jan  2 16:20:21 2014
config:

        NAME        STATE     READ WRITE CKSUM
        datapool_1  ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            ada0    ONLINE       0     0     0
            ada1    ONLINE       0     0     0
          mirror-1  ONLINE       0     0     0
            ada2    ONLINE       0     0     0
            ada4    ONLINE       0     0     0
        cache
          ada3      ONLINE       0     0     0

errors: No known data errors
nas1: datapool_1 #

That's very poor isn't it? :(

cancerman
Starter
Posts: 33
Joined: 23 Jun 2012 07:27
Status: Offline

Re: SSD Cache

Post by cancerman »

Have you set your RAM in the ZFS kernel tune extension?
Nas4Free 9.1.0.1.775. EP43T-UD3L, 12GB, Q6600, Supermicro USAS-L8i with IT firmware, 4x 2TB WD Green, 4x 1.5TB WD Green, 3x 1TB Samsung F4, 3x 1TB Seagate Barracuda, 2x 1TB Hitachi Deskstar, OCZ SSD for L2ARC, Mirrored Corsair SSDs for ZIL.

Lee Sharp
Advanced User
Posts: 251
Joined: 13 May 2013 21:12
Contact:
Status: Offline

Re: SSD Cache

Post by Lee Sharp »

ESXi... Now I understand. I was setting up a NAS for a VMware View deployment, and the performance was in the toilet. So, some hints...

1) Do NOT disable sync. It will break things eventually.

2) Make 2 drive mirror vdevs, and then stripe them. Per Oracle, this gives you the best IOPS with any fault tolerance. Only a pure stripe is better.

3) Get a very good and fast pair of SSDs and make a mirror ZIL device. They must be solid, and fast, and on SATA3 6Gbps ports. I prefer Intel, as they seem to live longer than anything else.

4) cache_flush_disable="1" - Note that you are risking some data... not loss of data, but time: anything in the cache at the moment of a power loss will be lost. Some info on what is happening: http://ateamdev.ateamprojects.com/tech/ ... -over-nfs/ However, I disagree with the author that the risks are similar to sync disable. Oracle says all over the place how bad sync disable is, but it does not say the same about this. Sync disable can risk ALL your data; this only risks what is in the cache.
More articles... https://forums.freebsd.org/viewtopic.php?&t=30856 http://christopher-technicalmusings.blo ... h-zil.html

This may do it for you, and is as good as it gets with a pure nas4free solution. But it may not be good enough, so...

5) Infinio Accelerator - Not cheap, but it makes one heck of a difference! http://www.infinio.com/about-our-product/what-is-it It is a VM running on your ESXi server that acts as a disk cache in ram.
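For reference, the tunable in point 4 is a FreeBSD boot-time setting; on a NAS4Free box it would go in /boot/loader.conf. This is a config sketch restating the trade-off, not a recommendation:

```
# /boot/loader.conf
# Skips ZFS cache flush commands (point 4 above). On power loss, whatever
# is sitting in device caches is lost; data already committed to disk is not.
vfs.zfs.cache_flush_disable="1"
```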
