SSD Cache
Posted: 19 Sep 2013 12:00
by blowfish256
Hi all,
I wonder if someone can help me here. I have 3×2TB HDDs in a ZFS raidz1. I've added a single 64GB SSD, partitioned for cache (45GB) and log (10GB).
I did some disk performance checks before I upgraded the pool with the SSD; see below.
nas1:~# dd if=/dev/zero of=/myraid/test.dd bs=2M count=10000
10000+0 records in
10000+0 records out
20971520000 bytes transferred in 141.014045 secs (148719370 bytes/sec)
I was expecting an improvement after the SSD install, but it now appears to be slower; see below.
nas1: ~ # dd if=/dev/zero of=/myraid/test.dd bs=2M count=10000
10000+0 records in
10000+0 records out
20971520000 bytes transferred in 162.717118 secs (128883305 bytes/sec)
Why is this slower? My zpool status is...
nas1: ~ # zpool status
pool: myraid
state: ONLINE
scan: scrub repaired 0 in 6h15m with 0 errors on Tue Aug 20 06:15:19 2013
config:
        NAME        STATE     READ WRITE CKSUM
        myraid      ONLINE       0     0     0
          raidz1-0  ONLINE       0     0     0
            ada0p2  ONLINE       0     0     0
            ada1p2  ONLINE       0     0     0
            ada2p2  ONLINE       0     0     0
        logs
          ada3p1    ONLINE       0     0     0
        cache
          ada3p2    ONLINE       0     0     0
errors: No known data errors
Thanks for reading. Any suggestions or ideas as to why it's slower?
Re: SSD Cache
Posted: 19 Sep 2013 12:44
by chrisf4lc0n
There could be several reasons for that:
1. The CPU may not be powerful enough for all those drives.
2. If the SSD is only SATA2, the maximum transfer rate you could get is about 300 MB/s. If you are logging and caching to the same device, the most you could achieve is about 150 MB/s for each operation, and that is assuming purely sequential access; it will be even slower if random data needs to be accessed.
I would personally use a single SSD as either cache or log, but not both, unless it is a very fast SSD on a SATA3 interface.
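If you do decide to keep only one role on the SSD, the split could look something like this. This is only a sketch, assuming the pool and partition names shown in the zpool status above; removing a log or cache device is non-destructive, but verify the names against a current zpool status before running anything.

```shell
# Remove the 10GB log partition so the SSD serves as cache only
# (pool/partition names assumed from the zpool status above).
zpool remove myraid ada3p1

# Or the other way round: keep the log and drop the cache partition.
# zpool remove myraid ada3p2

# Verify the result.
zpool status myraid
```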
Re: SSD Cache
Posted: 19 Sep 2013 12:53
by chrisf4lc0n
nas4free: ZFS_RAID # dd if=/dev/zero of=/mnt/ZFS_RAID//test.dd bs=2M count=10000
10000+0 records in
10000+0 records out
20971520000 bytes transferred in 77.180087 secs (271721901 bytes/sec)
Just for your reference, that is what I achieve with a 3-way mirror and one 80 GB SATA2 SSD.
Re: SSD Cache
Posted: 19 Sep 2013 12:55
by raulfg3
Synthetic tests are not a good measure; they only give a rough indication. Try repeating the test by copying one big ISO file or several AVI files (if those are the files you actually work with) and compare.
Just for information: I did the same test using an OCZ 64GB disk and in the end did not notice any real improvement for home use. For professional or SOHO use, where lots of files are copied or moved at the same time, it may be noticeable, but for home use, copying or moving files one after another, it is not.
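A minimal version of that real-file test might look like the following. This is only a sketch: /tmp stands in for the pool mount point, and the 10 MB sample file stands in for a real ISO or AVI.

```shell
# Create a sample file to copy (a stand-in for a real ISO/AVI; on the
# real system you would copy one of your actual files onto the pool).
dd if=/dev/zero of=/tmp/sample.bin bs=1M count=10 2>/dev/null

# Time the copy; on the NAS the destination would be e.g. /myraid/.
time cp /tmp/sample.bin /tmp/sample_copy.bin

# Make sure the copy is byte-identical before trusting the timing.
cmp -s /tmp/sample.bin /tmp/sample_copy.bin && echo "copy OK"
```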
Re: SSD Cache
Posted: 19 Sep 2013 15:39
by blowfish256
Hi thanks very much for the fast responses!
System spec is...
Version 9.1.0.1 - Sandstorm (revision 847)
Build date Sun Aug 18 03:49:41 CEST 2013
Platform OS FreeBSD 9.1-RELEASE-p5 (kern.osreldate: 901000)
Platform x64-embedded on Intel(R) Core(TM)2 Quad CPU Q6600 @ 2.40GHz
System Intel Corporation DG965WH
System bios Intel Corp. version: MQ96510J.86A.1754.2008.1117.0002 11/17/2008
RAM 8GB
I use nas4free as an ESXi datastore via iSCSI, running 6 VMs for my business. The aim was to speed up these operating systems (they run well, but I thought the SSD would make a good difference).
Copying a large 20GB file over a Samba share from the NAS box starts at around 90 MB/s, then drops to a 50 MB/s average.
The SSD I'm using is a Corsair 64GB Neutron 2.5-inch SATA III SSD.
Thanks
Re: SSD Cache
Posted: 19 Sep 2013 16:40
by alexey123
My ZFS test.
Before ZFS tuning:
# dd if=/dev/zero of=/mnt/disk0/test.bin bs=2M count=100
100+0 records in
100+0 records out
209715200 bytes transferred in 1.389821 secs (150893671 bytes/sec)
After the first tuning step:
zfskerntune # dd if=/dev/zero of=/mnt/disk0/test.bin bs=2M count=100
100+0 records in
100+0 records out
209715200 bytes transferred in 0.622582 secs (336847538 bytes/sec)
I didn't want to go further; it all works.
Re: SSD Cache
Posted: 19 Sep 2013 16:53
by chrisf4lc0n
Your CPU is more than capable of maxing out a gigabit network.
The cache drive only helps when a particular file has been accessed frequently enough that it is still sitting on the SSD.
However, if speed is your main aim, ditch the RAIDZ and create a mirror with the 3 drives. That way you will be reading from 3 drives at the same time, and up to 2 disks in the pool could fail before data is completely lost. Of course, that comes at the cost of capacity!
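For reference, rebuilding the pool as a 3-way mirror would look roughly like this. This is only a sketch, assuming the partition names from the zpool status above; zpool destroy wipes the pool and everything on it, so back all data up first.

```shell
# DESTRUCTIVE: destroys the existing pool and all data on it.
zpool destroy myraid

# Recreate it as a single 3-way mirror vdev using the same partitions.
zpool create myraid mirror ada0p2 ada1p2 ada2p2

# The SSD partitions can then be re-attached as log and/or cache.
zpool add myraid log ada3p1
zpool add myraid cache ada3p2
```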
Re: SSD Cache
Posted: 19 Sep 2013 22:04
by blowfish256
Thanks Chris. I might just do that. If I do...should I still use the ssd for cache or is it wasted?
Last thing: my iSCSI setup uses file extents. If I create a snapshot while they are in use, will that cause any data loss in the backup?
Re: SSD Cache
Posted: 19 Sep 2013 22:45
by chrisf4lc0n
blowfish256 wrote:Thanks Chris. I might just do that. If I do...should I still use the ssd for cache or is it wasted?
Keep the SSD as cache or log only, you can always add another drive in the future if you need both.
blowfish256 wrote:Last thing, my iscsi uses file extents. If I create a snapshot while they are in use will that cause any data loss to the backup?
That I am not qualified enough to answer.

Re: SSD Cache
Posted: 20 Sep 2013 05:45
by Onichan
I am not sure how ZFS decides what to cache, but I have read multiple people saying the cache doesn't do much unless you are using it on something like a file share with many small, frequently accessed files.
That said, more disks is the best way to get more speed, but your disks should already be able to saturate your Gb NIC. Not sure why it is dropping to 50, but you should make sure you have AIO enabled with large read/write sizes, and double the send and receive buffers. Enabling jumbo frames end to end also got me an extra 20 MB/s or so; I would look into that as well. I get ~105 MB/s sustained when transferring 10+GB files. I actually have quite good disk speed as well:
Code: Select all
nas:/mnt/pool/random# dd if=/dev/zero of=test.dd bs=2M count=10000
10000+0 records in
10000+0 records out
20971520000 bytes transferred in 41.667902 secs (503301558 bytes/sec)
But if you're running a bunch of VMs off it, you will want more IO, which requires more vdevs; you only get one disk's worth of IOPS per vdev.
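The jumbo-frames part of the advice above could be tried like this. This is only a sketch: the interface name em0 and the 9000-byte MTU are assumptions; check your own NIC name with ifconfig, and note that every device in the path, including the switch, must support the larger MTU.

```shell
# Set a 9000-byte MTU on the NAS interface (em0 is an assumption;
# use the name shown by `ifconfig` on your box).
ifconfig em0 mtu 9000

# Verify the setting took effect.
ifconfig em0 | grep mtu

# Test end to end with a ping that must not be fragmented (FreeBSD:
# -D sets the don't-fragment bit). 8972 = 9000 - 20 (IP) - 8 (ICMP).
ping -D -s 8972 192.168.1.10
```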
Re: SSD Cache
Posted: 20 Sep 2013 15:40
by siftu
1. Don't run a SLOG and L2ARC on the same SSD; we see it time and time again that people get worse performance than with raw disks alone.
2. The SLOG and L2ARC accelerate random IO, not a sequential dd test. Use a benchmarking program that can actually generate random IO.
You should notice improvements with a protocol like NFS when using a SLOG. The L2ARC can also take a while to "warm up" its cache.
You will also get better performance on iSCSI with zvols rather than file-based extents.
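Creating a zvol-backed extent instead of a file extent would look roughly like this. A sketch only; the dataset name, size, and volblocksize here are assumptions to adapt to your setup.

```shell
# Create a 500GB zvol; an 8K volblocksize is a common starting point
# for iSCSI/VM workloads, not a universal rule.
zfs create -V 500G -o volblocksize=8K myraid/esxi_vol0

# The block device then appears under /dev/zvol/ and can be exported
# from the iSCSI target as a device extent instead of a file extent.
ls -l /dev/zvol/myraid/esxi_vol0
```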
Re: SSD Cache
Posted: 07 Nov 2013 11:58
by blowfish256
Hi all, thanks for info.
Last couple of questions. I'm going to change to mirrors instead of the RAIDZ for better ESXi iSCSI performance, and I've bought a 4th 2TB HDD. Can I still use a cache with a mirror pool? Will the cache work with 2 mirrors? So my setup would be 2 mirrors of 2×2TB HDDs in one pool, with the SSD as cache.
Finally, is there a way to convert file extents back to a zvol?
Cheers for any advice.
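In case it helps anyone searching later, one approach I've seen suggested for the file-extent question is simply block-copying the extent into a new zvol. A sketch only: the names and size are placeholders, and the iSCSI target must be stopped and the VMs shut down first.

```shell
# Create a zvol at least as large as the existing file extent
# (dataset name and size are placeholders).
zfs create -V 500G datapool_1/esxi_vol0

# With the iSCSI target stopped and the VMs shut down, block-copy
# the file extent onto the zvol.
dd if=/mnt/datapool_1/extent0 of=/dev/zvol/datapool_1/esxi_vol0 bs=1M

# Then reconfigure the iSCSI target to use the zvol as a device extent.
```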
Re: SSD Cache
Posted: 02 Jan 2014 21:25
by blowfish256
Hi all, just an update. I'm now running 4×2TB WD Blacks, with the SSD for cache only.
Current stats are...
Code: Select all
nas1: datapool_1 # dd if=/dev/zero of=/mnt/datapool_1/test.dd bs=2M count=10000
10000+0 records in
10000+0 records out
20971520000 bytes transferred in 189.956621 secs (110401627 bytes/sec)
nas1: datapool_1 # dd if=/mnt/datapool_1/test.dd of=/dev/null bs=2M count=10000
10000+0 records in
10000+0 records out
20971520000 bytes transferred in 125.112781 secs (167620925 bytes/sec)
Code: Select all
nas1: datapool_1 # zpool status
pool: datapool_1
state: ONLINE
scan: resilvered 612M in 0h3m with 0 errors on Thu Jan 2 16:20:21 2014
config:
        NAME          STATE     READ WRITE CKSUM
        datapool_1    ONLINE       0     0     0
          mirror-0    ONLINE       0     0     0
            ada0      ONLINE       0     0     0
            ada1      ONLINE       0     0     0
          mirror-1    ONLINE       0     0     0
            ada2      ONLINE       0     0     0
            ada4      ONLINE       0     0     0
        cache
          ada3        ONLINE       0     0     0
errors: No known data errors
nas1: datapool_1 #
That's very poor isn't it?

Re: SSD Cache
Posted: 04 Jan 2014 09:38
by cancerman
Have you set your RAM in the ZFS kernel tune extension?
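For reference, the manual equivalent of that extension is a couple of loader tunables. The values below are illustrative assumptions for an 8GB box, not recommendations; the tunable names are the FreeBSD 9.x ones.

```shell
# /boot/loader.conf (FreeBSD 9.x ZFS tunables; values are examples only)
vm.kmem_size="7G"
vfs.zfs.arc_max="6G"
vfs.zfs.arc_min="1G"
```

After a reboot, the running values can be checked with `sysctl vfs.zfs.arc_max`.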
Re: SSD Cache
Posted: 07 Jan 2014 17:44
by Lee Sharp
ESXi... Now I understand. I was setting up a NAS for a VMware View deployment, and the performance was in the toilet. So, some hints...
1) Do NOT disable sync. It will break things eventually.
2) Make 2 drive mirror vdevs, and then stripe them. Per Oracle, this gives you the best IOPS with any fault tolerance. Only a pure stripe is better.
3) Get a very good and fast pair of SSDs and make a mirror ZIL device. They must be solid, and fast, and on SATA3 6Gbps ports. I prefer Intel, as they seem to live longer than anything else.
4) cache_flush_disable="1" - Note that you are risking some data... not loss of data already on disk, but recent writes: anything still in the cache at the moment of a power loss will be lost. Some info on what is happening:
http://ateamdev.ateamprojects.com/tech/ ... -over-nfs/ However, I disagree with the author that the risks are similar to sync disable. Oracle says all over the place how bad sync disable is, but it does not say the same about this. Sync disable can risk ALL your data; this only risks what is in the cache.
More articles...
https://forums.freebsd.org/viewtopic.php?&t=30856 http://christopher-technicalmusings.blo ... h-zil.html
This may do it for you, and is as good as it gets with a pure nas4free solution. But it may not be good enough, so...
5) Infinio Accelerator - Not cheap, but it makes one heck of a difference!
http://www.infinio.com/about-our-product/what-is-it It is a VM running on your ESXi server that acts as a disk cache in RAM.
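For what it's worth, hint 4 above translates to a loader tunable on FreeBSD 9.x; set it only if you accept the power-loss caveat described in that hint.

```shell
# /boot/loader.conf
vfs.zfs.cache_flush_disable="1"

# Check the running value after reboot:
sysctl vfs.zfs.cache_flush_disable
```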