
ZFS pool config

shmish
NewUser
Posts: 12
Joined: 24 Jun 2012 17:17
Status: Offline

ZFS pool config

#1

Post by shmish » 14 Jan 2013 03:19

Hi,
This fall I upgraded from FreeNAS 7 to NAS4Free. I was using ZFS with raidz1 across 3x500GB drives, and I recall having about 950 GB of storage. When I upgraded to NAS4Free, I also added a fourth drive to the ZFS pool. I can't remember exactly what I did, but it wouldn't have been anything terribly complicated; I would have found a post here that explained how to add a drive. The idea was to have close to 1500 GB of drive space with mirroring (or redundancy - I'm not sure of the exact language).

My old config for disks and ZFS was:

Code: Select all

<disks>
		<disk>
			<uuid>c0445be4-8af2-4397-8f3b-53059761d05b</uuid>
			<name>ad1</name>
			<devicespecialfile>/dev/ad1</devicespecialfile>
			<harddiskstandby>60</harddiskstandby>
			<acoustic>0</acoustic>
			<apm>0</apm>
			<transfermode>auto</transfermode>
			<type>IDE</type>
			<desc>WD200GB</desc>
			<size>238476MB</size>
			<smart>
				<extraoptions/>
			</smart>
			<fstype>ufsgpt</fstype>
		</disk>
		<disk>
			<uuid>57e77b51-a139-45df-970f-9b9c8f2c1d6f</uuid>
			<name>ad4</name>
			<devicespecialfile>/dev/ad4</devicespecialfile>
			<harddiskstandby>0</harddiskstandby>
			<acoustic>0</acoustic>
			<apm>0</apm>
			<transfermode>auto</transfermode>
			<type>IDE</type>
			<desc>500-1</desc>
			<size>476941MB</size>
			<smart>
				<extraoptions/>
			</smart>
			<fstype>zfs</fstype>
		</disk>
		<disk>
			<uuid>8b01ee76-c9de-4178-ba0a-ce04179a49a8</uuid>
			<name>ad6</name>
			<devicespecialfile>/dev/ad6</devicespecialfile>
			<harddiskstandby>0</harddiskstandby>
			<acoustic>0</acoustic>
			<fstype>zfs</fstype>
			<apm>0</apm>
			<transfermode>auto</transfermode>
			<type>IDE</type>
			<desc>Seagate500</desc>
			<size>476941MB</size>
			<smart>
				<extraoptions/>
			</smart>
		</disk>
		<disk>
			<uuid>f51beab8-c25a-43c7-9de9-c41fdde2bd46</uuid>
			<name>ad8</name>
			<devicespecialfile>/dev/ad8</devicespecialfile>
			<harddiskstandby>0</harddiskstandby>
			<acoustic>0</acoustic>
			<apm>0</apm>
			<transfermode>auto</transfermode>
			<type>IDE</type>
			<desc>500-2</desc>
			<size>476941MB</size>
			<smart>
				<extraoptions/>
			</smart>
			<fstype>zfs</fstype>
		</disk>
		<disk>
			<uuid>9162dcfb-c8eb-4851-89ab-10ea396a3c39</uuid>
			<name>ad10</name>
			<devicespecialfile>/dev/ad10</devicespecialfile>
			<harddiskstandby>0</harddiskstandby>
			<acoustic>0</acoustic>
			<apm>0</apm>
			<transfermode>auto</transfermode>
			<type>IDE</type>
			<desc>500-3</desc>
			<size>476941MB</size>
			<smart>
				<extraoptions/>
			</smart>
			<fstype>zfs</fstype>
		</disk>
	</disks>
<zfs>
		<vdevices>
			<vdevice>
				<uuid>b29df859-0e80-4822-b387-518154fe817a</uuid>
				<name>Fileserver_raidz1_0</name>
				<type>raidz1</type>
				<device>/dev/ad4</device>
				<device>/dev/ad8</device>
				<device>/dev/ad10</device>
			</vdevice>
		</vdevices>
		<pools>
			<pool>
				<uuid>885dd26b-d156-4320-8d0c-7feeca714ae2</uuid>
				<name>Fileserver</name>
				<vdevice>Fileserver_raidz1_0</vdevice>
				<root/>
				<mountpoint/>
			</pool>
		</pools>
		<datasets/>
	</zfs>
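If it helps, `zpool history` replays every command ever run against a pool, so it should show exactly what I did back then (assuming the pool name Fileserver from the config above):

Code: Select all

zpool history Fileserver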
Only this weekend did I come to realize that I'm now getting 1000 GB of storage. My new config says:

Code: Select all

<disks>
		<disk>
			<uuid>c0445be4-8af2-4397-8f3b-53059761d05b</uuid>
			<name>ad4</name>
			<devicespecialfile>/dev/ad4</devicespecialfile>
			<harddiskstandby>60</harddiskstandby>
			<acoustic>0</acoustic>
			<apm>0</apm>
			<transfermode>auto</transfermode>
			<type>IDE</type>
			<desc>WD200GB</desc>
			<size>238476MB</size>
			<smart>
				<extraoptions/>
			</smart>
			<fstype>ufsgpt</fstype>
		</disk>
		<disk>
			<uuid>57e77b51-a139-45df-970f-9b9c8f2c1d6f</uuid>
			<name>ad0</name>
			<devicespecialfile>/dev/ad0</devicespecialfile>
			<harddiskstandby>0</harddiskstandby>
			<acoustic>0</acoustic>
			<apm>0</apm>
			<transfermode>auto</transfermode>
			<type>IDE</type>
			<desc>500-1</desc>
			<size>476941MB</size>
			<smart>
				<extraoptions/>
			</smart>
			<fstype>zfs</fstype>
		</disk>
		<disk>
			<uuid>8b01ee76-c9de-4178-ba0a-ce04179a49a8</uuid>
			<name>ad1</name>
			<devicespecialfile>/dev/ad1</devicespecialfile>
			<harddiskstandby>0</harddiskstandby>
			<acoustic>0</acoustic>
			<fstype>zfs</fstype>
			<apm>0</apm>
			<transfermode>auto</transfermode>
			<type>IDE</type>
			<desc>Seagate500</desc>
			<size>476941MB</size>
			<smart>
				<extraoptions/>
			</smart>
		</disk>
		<disk>
			<uuid>f51beab8-c25a-43c7-9de9-c41fdde2bd46</uuid>
			<name>ad2</name>
			<devicespecialfile>/dev/ad2</devicespecialfile>
			<harddiskstandby>0</harddiskstandby>
			<acoustic>0</acoustic>
			<apm>0</apm>
			<transfermode>auto</transfermode>
			<type>IDE</type>
			<desc>500-2</desc>
			<size>476941MB</size>
			<smart>
				<extraoptions/>
			</smart>
			<fstype>zfs</fstype>
		</disk>
		<disk>
			<uuid>9162dcfb-c8eb-4851-89ab-10ea396a3c39</uuid>
			<name>ad3</name>
			<devicespecialfile>/dev/ad3</devicespecialfile>
			<harddiskstandby>0</harddiskstandby>
			<acoustic>0</acoustic>
			<apm>0</apm>
			<transfermode>auto</transfermode>
			<type>IDE</type>
			<desc>500-3</desc>
			<size>476941MB</size>
			<smart>
				<extraoptions/>
			</smart>
			<fstype>zfs</fstype>
		</disk>
	</disks>
<zfs>
		<vdevices>
			<vdevice>
				<uuid>b29df859-0e80-4822-b387-518154fe817a</uuid>
				<name>Fileserver_raidz1_0</name>
				<type>raidz1</type>
				<device>/dev/ad4</device>
				<device>/dev/ad8</device>
				<device>/dev/ad10</device>
				<desc/>
			</vdevice>
		</vdevices>
		<pools>
			<pool>
				<uuid>885dd26b-d156-4320-8d0c-7feeca714ae2</uuid>
				<name>Fileserver</name>
				<vdevice>Fileserver_raidz1_0</vdevice>
				<root/>
				<mountpoint/>
				<desc/>
			</pool>
		</pools>
		<datasets/>
	</zfs>
Pool information is:

Code: Select all

  pool: Fileserver
 state: ONLINE
status: The pool is formatted using an older on-disk format.  The pool can
	still be used, but some features are unavailable.
action: Upgrade the pool using 'zpool upgrade'.  Once this is done, the
	pool will no longer be accessible on older software versions.
  scan: none requested
config:

	NAME        STATE     READ WRITE CKSUM
	Fileserver  ONLINE       0     0     0
	  mirror-0  ONLINE       0     0     0
	    ada2    ONLINE       0     0     0
	    ada3    ONLINE       0     0     0
	  mirror-1  ONLINE       0     0     0
	    ada0    ONLINE       0     0     0
	    ada1    ONLINE       0     0     0
Should I have been able to go from ~1TB to ~1.5TB of storage by adding a 4th drive? If so, is there any likely explanation of what I did wrong? I see from the wiki that there is no optimal configuration for 4 drives.

Considering that I have 4x500GB drives and all my data backed up to an external drive, is there a way to configure my ZFS so that it has more storage? Or is my configuration more suited to raid5?
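My back-of-the-envelope numbers, ignoring ZFS overhead and GB-vs-GiB rounding:

Code: Select all

# 4x500GB as two mirror vdevs (what I appear to have now):
#   usable = 2 vdevs x 500GB  = ~1000GB
# 4x500GB as one raidz1 vdev (what I was aiming for):
#   usable = (4 - 1) x 500GB  = ~1500GB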

thanks

raulfg3
Site Admin
Posts: 4978
Joined: 22 Jun 2012 22:13
Location: Madrid (ESPAÑA)
Status: Offline

Re: ZFS pool config

#2

Post by raulfg3 » 14 Jan 2013 07:52

You actually have 2 mirrors, so 1TB is expected. Your config has some advantages; one is that you only need to replace the 2 disks in one mirror to grow in size.


If you want more space, you need raidz1: with 4 disks you get 1.5TB usable. Please read more about ZFS; search the forum for the .ppt PowerPoint presentation that explains ZFS.
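For example, to grow your current layout you would swap larger disks into one mirror vdev at a time. A rough sketch from the shell (ada4 and ada5 are hypothetical new, larger disks - adjust the names to your system):

Code: Select all

# let a vdev grow automatically once all of its members are larger
zpool set autoexpand=on Fileserver

# replace the members of mirror-0 one at a time;
# let each resilver finish before starting the next replace
zpool replace Fileserver ada2 ada4
zpool replace Fileserver ada3 ada5

# check resilver progress and the new capacity
zpool status Fileserver
zpool list Fileserver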
12.0.0.4 (revision 6766) + OBI on SUPERMICRO X8SIL-F, 8GB of ECC RAM, 12x3TB disks in 3 RaidZ1 vdevs = 36TB raw, only 22TB usable


fsbruva
Advanced User
Posts: 383
Joined: 21 Sep 2012 14:50
Status: Offline

Re: ZFS pool config

#3

Post by fsbruva » 14 Jan 2013 13:36

Your old config was raidz1 (which is like original RAID 5), and your new config has two mirror vdevs (kind of like RAID 10). You lose a little bit of space, but you have doubled your read speeds!

There is no way to migrate in place from a mirrored config to raidz, and you cannot add new disks to an existing raidz vdev. Since your data is backed up, the way forward is to destroy the pool and recreate it as a 4-disk raidz1, as sketched below.
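Since you say everything is backed up, the rebuild itself is short. A sketch using the pool and device names from your zpool status output (verify the backup first - zpool destroy erases all data):

Code: Select all

# WARNING: destroys the pool and everything on it
zpool destroy Fileserver

# recreate it as a single 4-disk raidz1 vdev (~1.5TB usable)
zpool create Fileserver raidz1 ada0 ada1 ada2 ada3

# confirm the new layout and capacity, then restore from backup
zpool status Fileserver
zpool list Fileserver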
