
vSphere with HAST and ZFS and iSCSI

Highly Available Storage.

#1 Post by tuaris » 26 Jul 2014 05:22

I recently had to reboot my NAS4Free server due to a hardware problem; it was unavoidable. This unfortunately had a very negative effect on my vSphere cluster, so I am looking into setting up some type of high-availability storage. Based on what I have read so far, VMware does not appear to have a solution of its own (their datastore replication seems to be intended only for disaster recovery purposes).

A few years ago I had experimented with HAST+CARP+ZFS on FreeBSD 9.0:
https://forums.freebsd.org/viewtopic.php?&t=29996
http://forums.freebsd.org/viewtopic.php?f=39&t=29639
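
The basic shape of that setup is one HAST resource per backing disk plus a shared CARP address that the iSCSI initiators connect to. Roughly, with placeholder hostnames, addresses, and device names (FreeBSD 10+ CARP alias syntax shown):

Code: Select all

    # /etc/hast.conf (identical on both nodes; all names are placeholders)
    resource disk0 {
        on nas1 {
            local /dev/da0        # backing provider on nas1
            remote 172.16.0.2     # replication peer (nas2)
        }
        on nas2 {
            local /dev/da0
            remote 172.16.0.1
        }
    }

    # /etc/rc.conf fragment on nas1
    hastd_enable="YES"
    ifconfig_em0="inet 172.16.0.1/24"
    ifconfig_em0_alias0="inet vhid 1 pass somesecret alias 172.16.0.100/32"

The ZFS pool lives on /dev/hast/disk0, which only exists on whichever node is currently primary, and the iSCSI target listens on the shared 172.16.0.100 address.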

Sadly it was not suitable for production use due to a problem with devd. I am reconsidering HAST with NAS4Free, since it is possible that many of the issues outlined above have been worked out over the last couple of years.
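
For context, devd is the glue in that design: it reacts to CARP state transitions and runs a script that promotes or demotes the HAST resources. A minimal sketch along the lines of the FreeBSD Handbook's HAST example (the carp-hast-switch script name comes from that example):

Code: Select all

    # /etc/devd.conf fragment: drive HAST role changes from CARP events
    notify 30 {
        match "system" "CARP";
        match "subsystem" "[0-9]+@[0-9a-z]+";
        match "type" "MASTER";
        action "/usr/local/sbin/carp-hast-switch master";
    };

    notify 30 {
        match "system" "CARP";
        match "subsystem" "[0-9]+@[0-9a-z]+";
        match "type" "BACKUP";
        action "/usr/local/sbin/carp-hast-switch slave";
    };

If devd misses or mishandles one of those events, the CARP and HAST roles on the two nodes can get out of sync, which is why problems in this layer are serious.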

Before I go out and invest in hardware, I'd like to hear some thoughts on the idea of using HAST+CARP+ZFS+iSCSI with NAS4Free. Is it capable of supporting the high I/O that would be generated by all the running VMs?

Re: vSphere with HAST and ZFS and iSCSI

#2 Post by swissiws » 16 Oct 2015 20:24

I am still using commercial SAN hardware for my VMware systems, with dual controllers on the SAN for load balancing and fault tolerance.

The core issue I am currently experiencing with HAST involves LAGG + CARP interfaces:

http://unix.stackexchange.com/questions ... oses-state
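
For context, the combination in question is a CARP address configured on top of a lagg, roughly like this (interface names, addresses, and the password are placeholders):

Code: Select all

    # /etc/rc.conf fragment: CARP vhid on top of an LACP lagg
    cloned_interfaces="lagg0"
    ifconfig_em0="up"
    ifconfig_em1="up"
    ifconfig_lagg0="laggproto lacp laggport em0 laggport em1 192.168.1.2/24"
    ifconfig_lagg0_alias0="inet vhid 2 pass somesecret alias 192.168.1.200/32"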

Another issue: every power disruption of the networking components between the two HAST host locations eventually ends up as a split-brain configuration, and I have to manually decide which system is primary. I am not sure how to resolve this.
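
The recovery procedure itself is the standard hastctl one: on the node whose unsynchronized changes are to be thrown away, reinitialize the resource and demote it (the resource name is a placeholder):

Code: Select all

    # Run on the node chosen to discard its changes:
    hastctl role init disk0
    hastctl create disk0
    hastctl role secondary disk0

The primary then performs a full resynchronization to that node. The commands are easy; deciding which side's data to discard is the part that has to be done manually.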

Maybe somebody else has experience resolving these issues.
2 x Dell R610 96GB RAM,Perc H800, MD1200 DAS, 12x3TB SAS Z1 - HAST Cluster - 10.2.0.2.2332
