vSphere with HAST and ZFS and iSCSI

experienced User
Posts: 85
Joined: 19 Jul 2012 21:31
Status: Offline

vSphere with HAST and ZFS and iSCSI


Post by tuaris »

I recently had to reboot my NAS4Free server due to an unavoidable hardware problem. Unfortunately, this had a very negative effect on my vSphere cluster, so I am looking into setting up some type of high-availability storage. Based on what I have read so far, it does not look like VMware has a solution (their datastore replication appears to be intended only for disaster recovery).

A few years ago I had experimented with HAST+CARP+ZFS on FreeBSD 9.0:

Sadly, it was not suitable for production use due to a problem with devd. I am reconsidering HAST with NAS4Free, since it is possible that many of the issues outlined above have been worked out in the last couple of years.

Before I go out and invest in hardware, I'd like to hear some thoughts on using HAST+CARP+ZFS+iSCSI with NAS4Free. Is it capable of supporting the high I/O that all the running VMs would generate?
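For reference, a HAST+CARP+ZFS setup of the kind described above is usually built from a small /etc/hast.conf plus CARP settings in /etc/rc.conf. The sketch below is illustrative only: the resource name `vmdata`, the node names, interface, addresses, and disk device are all assumptions, not details from this thread.

```
# /etc/hast.conf -- one HAST resource that will back the ZFS pool
resource vmdata {
        on nas-a {
                local /dev/da1
                remote 172.16.0.2
        }
        on nas-b {
                local /dev/da1
                remote 172.16.0.1
        }
}

# /etc/rc.conf additions (FreeBSD 10+ CARP syntax; values are examples)
hastd_enable="YES"
ifconfig_em0_alias0="vhid 1 advskew 0 pass carppass alias 172.16.0.254/32"
```

The active node then runs `hastctl role primary vmdata`, builds the pool on the replicated device (e.g. `zpool create tank /dev/hast/vmdata`), and exports it over iSCSI at the CARP address; on failover the other node promotes itself and imports the pool.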

Posts: 2
Joined: 27 Oct 2012 11:38
Status: Offline

Re: vSphere with HAST and ZFS and iSCSI


Post by swissiws »

I am still using commercial SAN hardware for my VMware systems, with dual controllers on the SAN for load balancing and fault tolerance.

The core issue I am currently experiencing with HAST involves LAGG + CARP interfaces:

http://unix.stackexchange.com/questions ... oses-state
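For context, the LAGG + CARP combination referred to above is typically configured in /etc/rc.conf roughly as follows (FreeBSD 10+ syntax; the interface names and addresses are illustrative assumptions):

```
# LACP aggregate of em0+em1 with a CARP virtual IP on top
cloned_interfaces="lagg0"
ifconfig_em0="up"
ifconfig_em1="up"
ifconfig_lagg0="laggproto lacp laggport em0 laggport em1 192.0.2.10/24"
ifconfig_lagg0_alias0="vhid 1 advskew 100 pass secret alias 192.0.2.254/32"
```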

Another issue: every power disruption of the networking components between the two HAST hosts' locations eventually ends in a split-brain configuration, requiring a manual decision about which system is primary. I am not sure how to resolve this.
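For what it's worth, the FreeBSD Handbook's documented way out of a HAST split-brain is to pick the node whose changes will be discarded and re-initialise it, after which it performs a full resync from the surviving primary. A sketch (the resource name `vmdata` is an assumption):

```
# Run on the node whose local changes are to be thrown away:
hastctl role init vmdata       # leave the cluster
hastctl create vmdata          # wipe and re-create local HAST metadata
hastctl role secondary vmdata  # rejoin as secondary; full sync follows
```

This only recovers from a split-brain rather than preventing one; prevention mostly comes down to redundant interconnects between the nodes.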

Maybe somebody else has experience resolving these issues.
2 x Dell R610 96GB RAM,Perc H800, MD1200 DAS, 12x3TB SAS Z1 - HAST Cluster -
