Hi all,
I'd like to set up a NAS4Free system from an "old" computer I've got lying around.
It consists of a Gigabyte GA-G33M-DS2R motherboard, an Intel Core 2 Duo 8500 CPU and 8GB of RAM. More than adequate to build a NAS from, I would think.
I would like to install 3x WD Green 3TB drives into this system, because I've learned (from reading about it - not personal experience!) that ZFS requires at least 3 disks for RAIDZ1.
Now, my question is, are problems to be expected when I connect these disks directly to the ICH9 southbridge controller of the motherboard? I know for sure that, with Microsoft Windows, this controller cannot address these disks directly since they are larger than 2TB.
But I'm wondering whether this is also a problem when using NAS4Free, which is FreeBSD-based; I know *nix systems tend to ignore most of a system's BIOS and simply use their own drivers. In other words, I'd like to know whether the Windows limitation is simply a matter of Intel never releasing drivers (for Windows) that can go beyond the 2.2TB limit, or whether the ICH9 chipset really isn't up to it because of some hardware deficiency. I wouldn't know, because I have little to no experience with *nix systems.
Sooooo... are problems to be expected when connecting 3TB drives to this motherboard's controller for use with NAS4Free, or should I install a separate SATA interface card in one of the PCI Express slots and connect the drives to that instead?
Thanks!
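(For readers finding this thread later: the 2.2TB figure mentioned above falls out of 32-bit sector addressing, as used by MBR partition tables and older Windows driver stacks, rather than the SATA controller silicon itself. A quick, purely illustrative Python sketch of the arithmetic:)

```python
# The classic "2.2TB barrier": a 32-bit LBA can address at most 2^32
# sectors; with traditional 512-byte logical sectors that caps a disk at:
SECTOR_BYTES = 512
MAX_SECTORS = 2**32

limit_bytes = MAX_SECTORS * SECTOR_BYTES
print(limit_bytes)                      # 2199023255552 bytes
print(round(limit_bytes / 1e12, 2))     # 2.2 (decimal terabytes)
```

GPT partition tables and 48-bit LBA remove this cap, which is why modern OSes (including FreeBSD-based systems) can use >2TB disks on older controllers.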
This is the old XigmaNAS forum, in read-only mode;
it will be taken offline by the end of March 2021!
I'd like to ask users and admins to rewrite/carry over important posts from here into the fresh new main forum!
It's not possible for us to export from here and import into the main forum!
Can I expect problems with this setup?
-
Bart
- NewUser

- Posts: 2
- Joined: 01 Nov 2012 18:33
- Location: Bruges, Belgium
- Status: Offline
- daoyama
- Developer

- Posts: 394
- Joined: 25 Aug 2012 09:28
- Location: Japan
- Status: Offline
Re: Can I expect problems with this setup?
Hi,

Bart wrote: I would like to install 3x WD Green 3TB drives into this system, because I've learned (from reading about it - not personal experience!) that ZFS requires at least 3 disks for RAIDZ1.

Yes, using 3 drives in RAIDZ1 is useful for starting small. If you need more space, you can replace the drives or add a new 3-disk RAIDZ1 vdev to the existing pool.

Bart wrote: Now, my question is, are problems to be expected when I connect these disks directly to the ICH9 southbridge controller of the motherboard? I know for sure that, with Microsoft Windows, this controller cannot address these disks directly since they are larger than 2TB.

I don't know about the plain ICH9, but I have used 2x WD30EZRX in a ZFS mirror on an ICH9R + Celeron E3400 + 8GB RAM + USB boot. A few days ago, I decided to switch to an ESXi Direct I/O VM environment with those drives.

Daisuke Aoyama
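(Editorial sketch for later readers: the capacity behind the advice above can be estimated with simple arithmetic. A RAIDZ1 vdev spends roughly one disk's worth of space on parity, so usable capacity is about (n-1) × disk size, before ZFS metadata overhead. Illustrative Python only, not zpool commands:)

```python
def raidz1_usable_tb(n_disks, disk_tb):
    """Rough usable space of one RAIDZ1 vdev: one disk's worth goes to parity."""
    if n_disks < 3:
        raise ValueError("RAIDZ1 is normally built from at least 3 disks")
    return (n_disks - 1) * disk_tb

# Bart's starting point: one vdev of 3x 3TB drives
print(raidz1_usable_tb(3, 3))   # 6 (TB, before filesystem overhead)
```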
NAS4Free 10.2.0.2.2115 (x64-embedded), 10.2.0.2.2258 (arm), 10.2.0.2.2258(dom0)
GIGABYTE 5YASV-RH, Celeron E3400 (Dual 2.6GHz), ECC 8GB, Intel ET/CT/82566DM (on-board), ZFS mirror (2TBx2)
ASRock E350M1/USB3, 16GB, Realtek 8111E (on-board), ZFS mirror (2TBx2)
MSI MS-9666, Core i7-860(Quad 2.8GHz/HT), 32GB, Mellanox ConnectX-2 EN/Intel 82578DM (on-board), ZFS mirror (3TBx2+L2ARC/ZIL:SSD128GB)
Develop/test environment:
VirtualBox 512MB VM, ESXi 512MB-8GB VM, Raspberry Pi, Pi2, ODROID-C1
-
Bart
- NewUser

- Posts: 2
- Joined: 01 Nov 2012 18:33
- Location: Bruges, Belgium
- Status: Offline
Re: Can I expect problems with this setup?
daoyama wrote: Yes, using 3 drives RAIDZ1 is useful for small starting. If you need more space, you can replace the drives or add new 3 disks of RAIDZ1 to existing pool.

Thanks for getting into this, because this was also something I was wondering about: I understood from the ZFS documentation that I cannot simply add a single extra disk to the pool and have the volume grow. But now you're telling me I could simply replace the disks? Would this work like most other RAID setups: replacing the disks one by one with larger ones, letting the system rebuild all data onto each new disk, so that in the end three larger disks are in use and new space becomes available?
Or, alternatively, I could add another three disks to the drive pool and create a second RAIDZ1 array?
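(Editorial sketch: rough numbers for the two expansion routes discussed here, using the (n-1) × disk-size approximation for a RAIDZ1 vdev; real pools lose a little more to metadata. Illustrative Python only:)

```python
def raidz1_usable_tb(n_disks, disk_tb):
    # One disk's worth of parity per RAIDZ1 vdev
    return (n_disks - 1) * disk_tb

start = raidz1_usable_tb(3, 3)            # 6 TB: the original 3x 3TB vdev
# Route 1: replace each 3TB drive with a 6TB drive one at a time,
# resilvering between swaps; extra space appears once all disks are
# swapped (with the pool's autoexpand property enabled).
replaced = raidz1_usable_tb(3, 6)         # 12 TB
# Route 2: add a second 3x 3TB RAIDZ1 vdev; vdev capacities add up.
added = start + raidz1_usable_tb(3, 3)    # 12 TB
print(start, replaced, added)             # 6 12 12
```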
daoyama wrote: I don't know about ICH9, but I used 2x WD30EZRX by ZFS mirror on ICH9R + Celeron E3400 + MEM8GB + USB boot.

Aha! Thanks very much! It turns out my motherboard uses the same ICH9R chip as yours, so this shouldn't be a problem. Thanks a lot for the info.
daoyama wrote: Few days ago, I decided to switch ESXi Direct I/O VM environment with the drives.
What exactly are the benefits of running a NAS4Free setup virtually? Doesn't this mean all data would be stored in one gigantic *.vm* file? I can see how that would let you copy the file to another computer with completely different hardware, but wouldn't the file's size (several terabytes) make it very difficult to handle?