ZFS max performance, PURE SSD?
Posted: 24 Dec 2014 02:22
Hello, I currently have a 24-bay Supermicro chassis with 48GB of RAM.
My current setup is as follows:
zpool1 -
5 x 7200 RPM SAS drives in RAIDZ1
1 x 128GB L2ARC
1 x 128GB ZIL (mirrored)
zpool2 -
5 x 7200 RPM SAS drives in RAIDZ1
1 x 128GB L2ARC
1 x 128GB ZIL (mirrored)
Both pools are shared via iSCSI over a 4 x 10Gb EtherChannel connection.
I realize I probably should have just mirrored the pools entirely.
Here is what I want to accomplish:
I want to host about 60 VMs with various workloads as best I can on this chassis. I have the budget to populate it entirely with 512GB SSDs, but I'm wondering whether that is the best way to maximize storage capacity and performance. My current setup is not exactly fast: when I shift some VMs to internal storage on the hosts, they perform much better. Network throughput to the system is relatively low most of the time because this is truly not in production yet... but it will be in a week or so.
I'd like help determining the best setup if I go pure SSD. For example, would I still need a ZIL and L2ARC with all-SSD disks? Could I get away with buying 18 x 3TB 15k SAS disks and front-ending them with a mirrored ZIL and an L2ARC for each pool? Or should I mirror the vdevs and form one single pool, front-ended by 4 ZILs and a mirrored L2ARC? These combinations are killing me. I wish I had the budget for something like a Compellent or IBM Storwize V7000 and could call it a day, but right now that is not an option.
All that said, I am currently using iSCSI with VMware over EtherChannel, and throughput does not seem to be the issue: I have run iperf and pushed 10Gbps no problem. I think my bottleneck is spindles. Is there anything I should tweak inside of NAS4Free with pure SSDs? I have also thought about switching from iSCSI to NFS.
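Before I buy anything, I figure it is worth confirming the spindle theory under real load. A minimal check, using the pool names above:

```shell
# Per-vdev throughput and load, refreshed every 5 seconds.
# Busy data disks alongside mostly idle log/cache devices would
# suggest the spindles (not the network or the SLOG) are the bottleneck.
zpool iostat -v zpool1 5
zpool iostat -v zpool2 5
```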
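For what it's worth, the "one pool of mirrored vdevs" option I'm weighing would look roughly like this. The device names (da0-da7 for data disks, nvd0-nvd3 for log/cache devices) are placeholders, not my actual hardware:

```shell
# Rough sketch: a single pool of striped mirrors, which generally gives
# the best random-I/O profile for VM storage.
zpool create tank \
  mirror da0 da1 \
  mirror da2 da3 \
  mirror da4 da5 \
  mirror da6 da7 \
  log mirror nvd0 nvd1 \
  cache nvd2 nvd3

# With an all-SSD pool, the cache devices are no faster than the data
# disks, so L2ARC likely buys nothing. A mirrored SLOG may still help
# sync-heavy iSCSI/NFS writes, but only testing will tell.
```

Since log and cache vdevs can be added and removed from a live pool (`zpool add` / `zpool remove`), the SLOG/L2ARC question at least does not have to be decided up front.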