This is the old XigmaNAS forum in read-only mode;
it will be taken offline by the end of March 2021!



I'd like to ask users and admins to rewrite/carry over important posts from here into the fresh new main forum!
It's not possible for us to export from here and import into the main forum!

3tb hdd what's the score?

Hard disks, HDD, RAID Hardware, disk controllers, SATA, PATA, SCSI, IDE, On Board, USB, Firewire, CF (Compact Flash)
rs232
Starter
Posts: 59
Joined: 25 Jun 2012 13:48
Status: Offline

3tb hdd what's the score?

Post by rs232 »

As storage prices per GB are falling again, I'm considering a ZFS pool of Seagate Barracuda drives (3 TB, internal, 3.5", SATA-600, 64 MB buffer).
Has anybody tried 3TB SATA drives in a redundant config in NAS4Free yet?
Anything to be taken into account?

As a minimum they have to be seen by the system (HW), but I was planning to get a SATA III compatible motherboard, so I guess that shouldn't be a problem...

Anything else?

Thanks!

Onichan
Advanced User
Posts: 238
Joined: 04 Jul 2012 21:41
Status: Offline

Re: 3tb hdd what's the score?

Post by Onichan »

I have been using 6 WD Green 3TB drives in RAIDZ2 without issues for over a month now. Just make sure to use 4k sectors, and I wouldn't go with anything less than dual parity.
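Not spelled out in the post, but on the FreeBSD releases NAS4Free was built on, the usual way to force 4k alignment (ashift=12) at pool creation was the gnop trick. A minimal sketch, assuming six disks ada1..ada6 (device names are illustrative):

```shell
# create temporary 4k-sector wrappers so zpool create picks ashift=12
gnop create -S 4096 /dev/ada1 /dev/ada2 /dev/ada3 /dev/ada4 /dev/ada5 /dev/ada6

# build the RAIDZ2 pool on the .nop providers
zpool create tank raidz2 ada1.nop ada2.nop ada3.nop ada4.nop ada5.nop ada6.nop

# the .nop wrappers vanish on reboot, but the pool keeps its ashift
zdb -C tank | grep ashift   # a 4k-aligned pool reports ashift: 12
```

The gnop step only matters at pool creation time; ashift cannot be changed on an existing pool.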

rs232
Starter
Posts: 59
Joined: 25 Jun 2012 13:48
Status: Offline

Re: 3tb hdd what's the score?

Post by rs232 »

Going back to this topic... does the motherboard really need to understand 3TB drives, or is it just up to FreeBSD to be able to read the right capacity?

Ref:
http://askubuntu.com/questions/207458/3 ... otherboard

ku-gew
Advanced User
Posts: 172
Joined: 29 Nov 2012 09:02
Location: Den Haag, The Netherlands
Status: Offline

Re: 3tb hdd what's the score?

Post by ku-gew »

If the drive can boot, N4F should be fine. But you usually boot from another drive anyway.

About 3 TB WD Green drives: mine are used in an external FW800 enclosure in a ZFS mirror (RAID1). After three months I have about 30k load/unload cycles. They are rated for 600k, so at this rate they would reach the rated count in about five years.
Be careful: I will put them in a new server using RAID10, but I won't put the two Greens in the same vdev, to reduce risk. I will make 2 vdevs, each with a Red and an (older) Green.
HP Microserver N40L, 8 GB ECC, 2x 3TB WD Red, 2x 4TB WD Red
XigmaNAS stable branch, always latest version
SMB, rsync

rs232
Starter
Posts: 59
Joined: 25 Jun 2012 13:48
Status: Offline

Re: 3tb hdd what's the score?

Post by rs232 »

Thanks. I actually sold my WD20EARS drives to get a set of Seagate Barracuda ST3000DM001 instead.
I don't think I'm going to buy any WD Caviar Green again; they are great for systems with 1 disk (1, not more!), including desktop computers, but not for NAS in my experience.

I've just ordered the Seagates; I'll let you guys know as soon as they are up and running.

One additional question though: supposing I have either RAID5 or RAIDZ1, can the drives still take advantage of the idle power-off (spin-down) timer, or does being part of a RAID prevent that?

davidb
Starter
Posts: 55
Joined: 05 Jul 2012 17:51
Status: Offline

Re: 3tb hdd what's the score?

Post by davidb »

I agree that the WD Green drives are sub-par for a NAS4Free application, although I know many are using them. I purchased 5 of them (2TB) for a RAID5 setup, and within 3 months I had RMA'd 4 of them for replacements, and two of the replacements for replacements. They have found other homes across my company, but they all have SMART errors and I don't really trust them. I recently had the opportunity to purchase and build a couple of NASes with WD Red drives; since then they are still running strong, no errors at all, and they are on 24/7. I also have a build using the Samsung HD204UI drives; while they are not actively failing or timing out, I am watching their SMART error counts start to climb.

Let us know how the seagates work out!

waldo22
Starter
Posts: 29
Joined: 27 Nov 2012 16:20
Status: Offline

Re: 3tb hdd what's the score?

Post by waldo22 »

I'm currently using 8 3TB Seagate drives in a RAID-Z2 pool: 6 of the Barracuda ST3000DM001 (3 platters, 8W operating, 5.4W idle) and 2 of the Barracuda XT ST33000651AS (older generation, 5 platters, 9.23W operating, 7.37W idle).

Working fine so far. I actually ordered 8 of the ST3000DM001 drives, but the seller sent 2 of the older Barracuda XT drives and I didn't notice until I'd already installed them. They wouldn't replace them. 3 of the ST3000DM001 were also running firmware that clearly indicates they were pulled from an external enclosure. I'm pretty disappointed by that. Based on the Seagate datasheets and some websites, the ST3000DM001 is a 4K Advanced Format drive and the Barracuda XT is a 512-byte drive, but I thought all 3TB drives were Advanced Format. It's hard to get a straight answer. If the answer is 512 bytes, then I'm really not comfortable running a zpool with a mixture of technologies like that.
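For anyone wanting to check their own drives rather than the datasheets: FreeBSD reports what the drive advertises (the device name below is an example; drives that emulate 512-byte logical sectors over 4K physical ones typically show sectorsize 512 with a stripesize of 4096):

```shell
# logical sector size and physical-sector hint as GEOM sees them
diskinfo -v /dev/ada0 | egrep 'sectorsize|stripesize'

# or read the ATA identify data directly
camcontrol identify ada0 | grep 'sector size'
```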

A few weeks ago, Seagate had external 3TB models on sale for US$109!!! Who knows what was inside the enclosure, though...

rs232
Starter
Posts: 59
Joined: 25 Jun 2012 13:48
Status: Offline

Re: 3tb hdd what's the score?

Post by rs232 »

It must be me, but here it doesn't work.
I've just bought 4x 3TB Seagates, raw-mapped them into ESXi and added them to the NAS4Free VM.
Same old story: "unsupportable block 0" when mapped physically (-z).
If I map them virtually (-r) the message disappears, but the storage capacity is limited to 2TB.

When mapped physically, if I ignore the "unsupportable block 0" message and try to import the disk, it's reported in the GUI with size 0 and I can't even format it.
nas4free:~# fdisk
******* Working on device /dev/md0 *******
parameters extracted from in-core disklabel are:
cylinders=28 heads=255 sectors/track=63 (16065 blks/cyl)

parameters to be used for BIOS calculations are:
cylinders=28 heads=255 sectors/track=63 (16065 blks/cyl)

fdisk: invalid fdisk partition table found
Media sector size is 512
Warning: BIOS sector numbering starts with sector 1
Information from DOS bootblock is:
The data for partition 1 is:
sysid 165 (0xa5),(FreeBSD/NetBSD/386BSD)
start 63, size 449757 (219 Meg), flag 80 (active)
beg: cyl 0/ head 1/ sector 1;
end: cyl 27/ head 254/ sector 63
The data for partition 2 is:
<UNUSED>
The data for partition 3 is:
<UNUSED>
The data for partition 4 is:
<UNUSED>

waldo22
Starter
Posts: 29
Joined: 27 Nov 2012 16:20
Status: Offline

Re: 3tb hdd what's the score?

Post by waldo22 »

I'm not doing any virtualization.

All 8 drives are plugged into a 3Ware/LSI RAID controller, but exported as "single" disks.

They show up in NAS4Free as /dev/da1, /dev/da2, etc. The vdevs were created just fine, and I had no trouble creating a zpool.

I only got 15.2 TiB out of 6 "3 TB" drives (after losing 2 of the 8 to ZFS parity). "3 TB" drives are actually only about 2.73 TiB each, and there are some formatting losses I'm sure, but that should still have been ~16.4 TiB.

Otherwise, it is working, though.

rs232
Starter
Posts: 59
Joined: 25 Jun 2012 13:48
Status: Offline

Re: 3tb hdd what's the score?

Post by rs232 »

I can confirm this is a problem with FreeBSD (so both NAS4Free and FreeNAS): with the same hardware, I switched to OpenMediaVault and it worked straight away.

jim71
Starter
Posts: 22
Joined: 09 Nov 2012 00:22
Status: Offline

Re: 3tb hdd what's the score?

Post by jim71 »

Hi,

I am doing the same as rs232, with NAS4Free 9.1 as a VM on ESXi 5.1 using physical raw device mapping
with 3 WD Red 3TB drives. The result is the same: even though I see my VM has 8.19TB of storage, the disks are
shown with a size of 0 in NAS4Free.

Before this I had a similar config with NAS4Free 9.0, ESXi 5.0 and 2 Seagate 2TB drives in a software mirror, and it
was working fine.

Does anyone have any news on this issue? Is there a solution to this problem?

I would like to stick with NAS4Free as I like the filesystem encryption feature and don't really want to buy a HW RAID card.
If I had seen this problem before, I would have bought the WD Red 2TB model instead...

christian
NewUser
Posts: 14
Joined: 21 Jan 2013 01:15
Location: Ottawa, Canada
Status: Offline

Re: 3tb hdd what's the score?

Post by christian »

Aren't ESXi's VMDKs (VMFS5) limited to 2TB? How did you raw-map the drive? Via vmkfstools? If so, then I agree it probably shouldn't care.
N4F embedded: E45M1-M PRO (AMD-E450), Sil3132 eSATA, StarTech PEXSATA24E. Disks (30TB usable): onboard RAIDZ1 5x2TB Seagate; RAID10 4x2TB WD; offboard TR5M DAS Stripe 5x3TB Seagate (backup); TR4M DAS RAIDZ1 4x1TB Samsung

rostreich
Status: Offline

Re: 3tb hdd what's the score?

Post by rostreich »

christian wrote:Aren't ESXi's VMDKs (VMFS5) limited to 2TB? How did you raw-map the drive? Via vmkfstools? If so, then I agree it probably shouldn't care.

The 2TB limit is from ESXi versions below 5.0 when using iSCSI, as I was told.

christian
NewUser
Posts: 14
Joined: 21 Jan 2013 01:15
Location: Ottawa, Canada
Status: Offline

Re: 3tb hdd what's the score?

Post by christian »

rostreich wrote:The 2TB limit is from ESXi versions below 5.0 when using iSCSI, as I was told.
I don't think so:
http://kb.vmware.com/selfservice/micros ... Id=1003565

VMDKs are certainly limited to 2TB, but the disk they sit on can be up to 64TB. I'm not sure about a disk that is passed in raw via vmkfstools, though; I've never tried that. rs232's success with OMV suggests it works, but I don't know how he mapped it raw, nor how jim71 mapped his.
N4F embedded: E45M1-M PRO (AMD-E450), Sil3132 eSATA, StarTech PEXSATA24E. Disks (30TB usable): onboard RAIDZ1 5x2TB Seagate; RAID10 4x2TB WD; offboard TR5M DAS Stripe 5x3TB Seagate (backup); TR4M DAS RAIDZ1 4x1TB Samsung

alexplatform
Starter
Posts: 38
Joined: 26 Jun 2012 21:21
Status: Offline

Re: 3tb hdd what's the score?

Post by alexplatform »

This can easily be answered, as it is published and documented at the source.

http://kb.vmware.com/selfservice/micros ... Id=2003813
What are the limitations for VMFS-5?

VMFS-5 limits the number of extents to 32 and the total datastore size to 64TB, but the individual extents are not limited to 2TB each. For example, you can create a datastore with a LUN size of 64TB, or one with up to 32 extents up to maximum size of 64TB.
Only pass-through RDMs (Raw Device Mapping) can be created with a size >2TB. Non-pass-through RDMs and virtual disk files are still limited to 512B ~ 2TB.
Passthrough RDMs are supported up to ~60TB in size.
Both upgraded and newly-created VMFS-5 volumes supported the larger Passthrough RDM size.

christian
NewUser
Posts: 14
Joined: 21 Jan 2013 01:15
Location: Ottawa, Canada
Status: Offline

Re: 3tb hdd what's the score?

Post by christian »

Thanks alexplatform. But to do this with a local disk, the only way is with vmkfstools, right? To my knowledge RDMs are for off-board storage, but I believe there is a workaround with vmkfstools.

I was looking into this because I am making the transition to ESXi 5.1 from VMware Server (on a box other than N4F) and plan to use iSCSI for the datastore (which will be on N4F). But during the transition I need to migrate data on local disks, which I would prefer to simply map in raw (at least for now). So I was looking at using the vmkfstools workaround to enable this. I'm not certain whether it allows drives larger than 2TB in that scenario.
N4F embedded: E45M1-M PRO (AMD-E450), Sil3132 eSATA, StarTech PEXSATA24E. Disks (30TB usable): onboard RAIDZ1 5x2TB Seagate; RAID10 4x2TB WD; offboard TR5M DAS Stripe 5x3TB Seagate (backup); TR4M DAS RAIDZ1 4x1TB Samsung

alexplatform
Starter
Posts: 38
Joined: 26 Jun 2012 21:21
Status: Offline

Re: 3tb hdd what's the score?

Post by alexplatform »

Q: But to do this with a local disk, the only way is with vmkfstools, right?
A: I don't know; I never tried this configuration. RDM only works on supported hardware (which you can get around, as you mentioned), but I'm not sure why you would want to.

Q: I was looking into this because I am making the transition to ESXi 5.1 from VMware Server (on a box other than N4F) and plan to use iSCSI for the datastore (which will be on N4F). But during the transition I need to migrate data on local disks, which I would prefer to simply map in raw (at least for now).
A: Why would you want to do that?! Copying vmdks between datastores is trivially easy, and can be done via the VMware administration GUI, the CLI (ESXi 5 shell), the vMA, or VMware Converter (all of which are freely available to ESXi customers). Simply add the iSCSI storage as a datastore and click copy.

In general, there are very few scenarios where presenting raw storage to guests is desirable or advantageous.
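As an illustration of the copy step alexplatform describes (not from the post; the datastore and VM names below are made up), the CLI version is a single vmkfstools clone:

```shell
# clone a guest disk from the local datastore to the new iSCSI datastore,
# thin-provisioning the destination copy
vmkfstools -i /vmfs/volumes/datastore1/myvm/myvm.vmdk \
  /vmfs/volumes/iscsi_n4f/myvm/myvm.vmdk -d thin
```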

christian
NewUser
Posts: 14
Joined: 21 Jan 2013 01:15
Location: Ottawa, Canada
Status: Offline

Re: 3tb hdd what's the score?

Post by christian »

alexplatform wrote: A: Why would you want to do that?! Copying vmdks between datastores is trivially easy, and can be done via the VMware administration GUI, the CLI (ESXi 5 shell), the vMA, or VMware Converter (all of which are freely available to ESXi customers). Simply add the iSCSI storage as a datastore and click copy.
I agree with respect to transferring the vmdks; those are not a problem. But I'm not sure how I would get the data off the raw disks without a way to mount those disks in ESXi. It seems I will need to run Converter from my workstation and upload to ESXi.

I have three RAID1 sets which belonged to physical servers that will now migrate to virtual on ESXi. One RAID pair has data from an application. I was going to install that app fresh into a VM and then restore the data from the original disk (the app has the ability to inhale data from an old disk into the new environment, which is normally quicker than backup/restore). My other option, I suppose, is to use vCenter Converter to convert from the physical machine, but I don't know how effective it is. If you have experience with it converting a CentOS base, then I'm happy for any input.

The second issue is that my N4F box is pretty full, and I was hoping to avoid buying another DAS. The box that will host ESXi has a lot of SATA capacity, and I was thinking of loading an N4F VM to manage those disks. I wanted to keep them raw so that they would be portable to my main N4F box at a later time and save a lot of copying between environments.
N4F embedded: E45M1-M PRO (AMD-E450), Sil3132 eSATA, StarTech PEXSATA24E. Disks (30TB usable): onboard RAIDZ1 5x2TB Seagate; RAID10 4x2TB WD; offboard TR5M DAS Stripe 5x3TB Seagate (backup); TR4M DAS RAIDZ1 4x1TB Samsung

alexplatform
Starter
Posts: 38
Joined: 26 Jun 2012 21:21
Status: Offline

Re: 3tb hdd what's the score?

Post by alexplatform »

Ok, if I understand you correctly, the following describes your situation:

1. You have an existing VMware environment with existing VMs, which you'd like to transition to a new hypervisor.
2. You have existing physical servers you'd like to transition to a new hypervisor.
3. You have an existing NAS, but it is full.

The first step is to figure out your hypervisor and the required guest load. Ideally, you'll have {X} guests, where X is ALL your desired guests from above; in total, the guests will need {Y} disk space for system and {Z} disk space for data. As long as your logical volume exceeds the total space currently used by your {X} guests, you can thin-provision your entire target space and proceed to transition ALL of them. You certainly CAN use VMware Converter to move your CentOS hosts, even in place; it's certified for RHEL use. You can even simply partition-copy from the HDD to a dd image (or use Acronis et al.) and restore it to a vmdk; it should work without issue, although there's a little manual work required with this method.

Attaching your existing guest HDDs as RDMs to your hypervisor is not necessary or advisable, if for no other reason than that you don't want to tie up your hypervisor with non-virtual guest hardware, and that you would lose the flexibility to keep the physical server as a backup of the virtual one for the duration of the transition. I'm not at all sure what you mean by
I wanted to keep them raw so that it would be portable to my main N4F box at a later time and save a lot of copying between environments.
They are of no use in their raw state to your NAS box, at least not as far as I can tell.

As for your N4F running out of space: well, build another one then ;)

ku-gew
Advanced User
Posts: 172
Joined: 29 Nov 2012 09:02
Location: Den Haag, The Netherlands
Status: Offline

Re: 3tb hdd what's the score?

Post by ku-gew »

ku-gew wrote:About 3 TB WD Green drives: mine are used in an external FW800 enclosure in a ZFS mirror (RAID1). After three months I have about 30k load/unload cycles. They are rated for 600k, so at this rate they would reach the rated count in about five years.
Be careful: I will put them in a new server using RAID10, but I won't put the two Greens in the same vdev, to reduce risk. I will make 2 vdevs, each with a Red and an (older) Green.
I did the same, even though the load/unload counts are not really that meaningful.
Or maybe I will buy another 2 WD Reds and use the Greens as external backup.
HP Microserver N40L, 8 GB ECC, 2x 3TB WD Red, 2x 4TB WD Red
XigmaNAS stable branch, always latest version
SMB, rsync

christian
NewUser
Posts: 14
Joined: 21 Jan 2013 01:15
Location: Ottawa, Canada
Status: Offline

Re: 3tb hdd what's the score?

Post by christian »

alexplatform wrote:Attaching your existing guest HDDs as RDMs to your hypervisor is not necessary or advisable, if for no other reason than that you don't want to tie up your hypervisor with non-virtual guest hardware, and that you would lose the flexibility to keep the physical server as a backup of the virtual one for the duration of the transition.
I'm in transition! I had one big CentOS-based server that housed everything (storage, apps, vApps, etc.), and while it started small, it has become unwieldy, especially when there is a failure and reconstruction can take a day. So I'm separating everything out into discrete components. I'm taking the time to think this through before I execute the plan. Thanks for the advice; it is useful!
alexplatform wrote:I'm not at all sure what you mean by
I wanted to keep them raw so that it would be portable to my main N4F box at a later time and save a lot of copying between environments.
They are of no use in their raw state to your NAS box, at least not as far as I can tell.
They are simply UFS-based RAID1s, so I should be able to import them into an N4F box as-is.
alexplatform wrote:As for your N4F running out of space: well, build another one then ;)
Indeed! However, as I think about my objective, I'm thinking the right thing to do is in fact to remove all of the disks from the ESXi server, keep it as pure compute, and have a cold standby server ready to take over in case of catastrophe. So I may just buy another DAS and move the disks to it.

Thanks again.
N4F embedded: E45M1-M PRO (AMD-E450), Sil3132 eSATA, StarTech PEXSATA24E. Disks (30TB usable): onboard RAIDZ1 5x2TB Seagate; RAID10 4x2TB WD; offboard TR5M DAS Stripe 5x3TB Seagate (backup); TR4M DAS RAIDZ1 4x1TB Samsung

jim71
Starter
Posts: 22
Joined: 09 Nov 2012 00:22
Status: Offline

Re: 3tb hdd what's the score?

Post by jim71 »


christian wrote:VMDKs are certainly limited to 2TB but the disk it sits on can be up to 64TB. Not sure about a disk that is passed in raw via vmkfstools though. I've never tried that. rs232's success with OMV suggests it works but I don't know how he mapped it in raw nor how jim71 mapped his.
I created the physical RDM like this:

vmkfstools -z /vmfs/devices/disks/vml.0100...... RDM1P.vmdk -a lsilogic

As described here http://www.vm-help.com/esx40i/SATA_RDMs.php

Then my VM shows it has almost 9TB of storage (3 x 3TB HDDs), and rs232 made it work with OMV, so for me the problem is
definitely on the FreeBSD side.

So I finally gave up on that approach and passed through my internal SATA controller (Asus M5A99X EVO) that the disks are connected to.
It works fine, except that I can only have 1 CPU. As soon as I add a second core, I am flooded with "interrupt storm detected on irq 19"
messages, where irq 19 is the SATA controller (ahci0). I googled around and found some people with a similar issue, but I did not find a solution.

It disturbs me a bit, as with the soft RAID, encryption and iSCSI processes I reach 100% CPU usage during sequential write benchmarks, and
I suspect this is the bottleneck that limits my write speed to about 60MB/s (while reads are about 100MB/s).

If someone has a solution for this issue I would appreciate it very much, but to me it looks like a FreeBSD bug.

jim71
Starter
Posts: 22
Joined: 09 Nov 2012 00:22
Status: Offline

Re: 3tb hdd what's the score?

Post by jim71 »

Maybe I should try the 32-bit version of NAS4Free:

"FreeBSD/amd64 is a very young platform on FreeBSD. While the core FreeBSD kernel and base system components are generally fairly robust, there are likely to still be rough edges, particularly with third party packages." (from 9.1 release notes)

reb00tas
NewUser
Posts: 2
Joined: 15 Jan 2013 18:17
Status: Offline

Re: 3tb hdd what's the score?

Post by reb00tas »

Hello. I am using ESXi with OpenIndiana and napp-it. It works perfectly with my 6 x 3TB Red drives.

The max size of a drive using RDM (Raw Device Mapping) is 64 TB.

You can pass the drive directly to your machine by SSHing to your ESXi host (I use PuTTY).
This will give you the same, or very close to the same, performance as passthrough.

I got 227 MB/s write / 210 MB/s read with 3 disks in RAIDZ1.

First I make a folder called RDM for the Raw Device Mappings.
To go to where your datastores are, type:
cd /vmfs/volumes

The name of my datastore is datastore1, but you can use this to check:
ls

Now go to your datastore:
cd datastore1

This will make the RDM folder and enter it:
mkdir RDM
cd RDM

Now use this to see your drives:
ls -l /vmfs/devices/disks/

Find the entry for the drive you want to add, for example:
vml.0100000000202020202057442d574d43315431363037303431574443205744

And don't use the ones with :1 or :2 (and so on) after them; that is a partition on the drive, not the whole drive.

Copy the identifier and substitute it in the command below.

Use this command to pass the drive directly to your virtual machine:
vmkfstools -z /vmfs/devices/disks/vml.0100000000202020202057442d574d43315431363037303431574443205744 3TB.vmdk -a lsilogic

(Replace vml.0100000000202020202057442d574d43315431363037303431574443205744 with the identifier you found above.)

Now you can go to your virtual machine, add an existing hard disk, browse to your datastore/RDM folder, click on the mapped drive (in this case 3TB.vmdk) and finish.
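For convenience, the steps above condense into one block (same example datastore and vml identifier as in the post; substitute your own):

```shell
# run on the ESXi host over SSH
cd /vmfs/volumes/datastore1
mkdir -p RDM && cd RDM
ls -l /vmfs/devices/disks/   # pick the whole-disk vml.* entry (no :1/:2 suffix)
vmkfstools -z /vmfs/devices/disks/vml.0100000000202020202057442d574d43315431363037303431574443205744 \
  3TB.vmdk -a lsilogic
```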

