
Does v5000 now support "autoreplace" ?

Posted: 14 Jan 2014 17:08
by prior_philip
Hi all,
As my subject line suggests, I would like to know whether the new ZFS version now supports autoreplace. In previous releases (v28) the option was available but not implemented on FreeBSD.
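For anyone wanting to check this on their own box: the property itself is visible and settable even on platforms where the behavior is not implemented, so the commands below only confirm the setting, not whether replacement will actually fire. A sketch, with "tank" as a placeholder pool name:

```shell
# Show the current autoreplace setting for the pool
zpool get autoreplace tank

# Enable it (accepted even where the underlying behavior is a no-op)
zpool set autoreplace=on tank
```

The real test is then to offline or pull a member disk and watch whether a configured spare is attached automatically.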

Re: Does v5000 now support "autoreplace" ?

Posted: 15 Jan 2014 12:11
by raulfg3
Sorry, I'm not sure, because I cannot find accurate info.

It needs testing, but at the moment I do not have enough time/resources to do it. Perhaps another user can test/check whether autoreplace finally works on BSD.

This is what I found for the previous version, ZFSv28: http://lists.freebsd.org/pipermail/free ... 13351.html

but nothing accurate for ZFSv5000.

Re: Does v5000 now support "autoreplace" ?

Posted: 15 Mar 2014 05:23
by gjarboni
Unfortunately, the answer appears to be no. I've tried creating multiple pools (6 x mirror, 11 x RAIDZ2) with autoreplace turned on, but when I offline a drive, nothing happens. Each time I have to manually tell ZFS to use a spare drive.

Here's the output from the relevant commands:


nas4free: ~ # zpool status
  pool: AmySchumer
 state: ONLINE
  scan: none requested
config:
        NAME        STATE     READ WRITE CKSUM
        AmySchumer  ONLINE       0     0     0
          raidz2-0  ONLINE       0     0     0
            da0     ONLINE       0     0     0
            da1     ONLINE       0     0     0
            da2     ONLINE       0     0     0
            da3     ONLINE       0     0     0
            da4     ONLINE       0     0     0
            da5     ONLINE       0     0     0
            da6     ONLINE       0     0     0
            da7     ONLINE       0     0     0
            da8     ONLINE       0     0     0
            da9     ONLINE       0     0     0
            da10    ONLINE       0     0     0
        spares
          da11      AVAIL   
          da12      AVAIL   
          da13      AVAIL 
nas4free: ~ # zpool get all AmySchumer
NAME        PROPERTY               VALUE                  SOURCE
AmySchumer  size                   3T                     -
AmySchumer  capacity               0%                     -
AmySchumer  altroot                -                      default
AmySchumer  health                 ONLINE                 -
AmySchumer  guid                   12839006967930676292   default
AmySchumer  version                -                      default
AmySchumer  bootfs                 -                      default
AmySchumer  delegation             on                     default
AmySchumer  autoreplace            on                     local
AmySchumer  cachefile              -                      default
AmySchumer  failmode               wait                   default
AmySchumer  listsnapshots          off                    default
AmySchumer  autoexpand             off                    default
AmySchumer  dedupditto             0                      default
AmySchumer  dedupratio             1.00x                  -
AmySchumer  free                   3.00T                  -
AmySchumer  allocated              840K                   -
AmySchumer  readonly               off                    -
AmySchumer  comment                -                      default
AmySchumer  expandsize             0                      -
AmySchumer  freeing                0                      default
AmySchumer  feature@async_destroy  enabled                local
AmySchumer  feature@empty_bpobj    enabled                local
AmySchumer  feature@lz4_compress   enabled                local

nas4free: ~ # zpool offline AmySchumer da0
nas4free: ~ # zpool status
  pool: AmySchumer
 state: DEGRADED
status: One or more devices has been taken offline by the administrator.
        Sufficient replicas exist for the pool to continue functioning in a
        degraded state.
action: Online the device using 'zpool online' or replace the device with
        'zpool replace'.
  scan: none requested
config:
        NAME                     STATE     READ WRITE CKSUM
        AmySchumer               DEGRADED     0     0     0
          raidz2-0               DEGRADED     0     0     0
            5198184294963300961  OFFLINE      0     0     0  was /dev/da0
            da1                  ONLINE       0     0     0
            da2                  ONLINE       0     0     0
            da3                  ONLINE       0     0     0
            da4                  ONLINE       0     0     0
            da5                  ONLINE       0     0     0
            da6                  ONLINE       0     0     0
            da7                  ONLINE       0     0     0
            da8                  ONLINE       0     0     0
            da9                  ONLINE       0     0     0
            da10                 ONLINE       0     0     0
        spares
          da11                   AVAIL   
          da12                   AVAIL   
          da13                   AVAIL   
errors: No known data errors

I am new to ZFS and Nas4Free, so maybe I missed something? If not, I assume this means that even if I booted with my array connected to a server running FreeBSD 10, autoreplace would still not work?
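In the meantime, the manual workaround is to attach one of the spares by hand. A sketch using the device names from the output above (da0 is the offlined disk, da11 one of the AVAIL spares):

```shell
# Tell ZFS to resilver the spare da11 in place of the offlined da0
zpool replace AmySchumer da0 da11

# Once the resilver finishes, detach the old device so the spare
# becomes a permanent pool member and the pool returns to ONLINE
zpool detach AmySchumer da0
```

This is exactly the step that autoreplace=on is supposed to perform automatically when a replacement or hot-spare device is available.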

Jason M.

Re: Does v5000 now support "autoreplace" ?

Posted: 15 Mar 2014 23:31
by gjarboni
One more thing: FreeBSD is currently developing zfsd, which will handle automatically replacing defective or offline drives. It will hopefully make it into 10.1.
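Assuming zfsd ships as a regular rc.d service (a guess based on the usual FreeBSD daemon conventions, since it has not been released yet), enabling it should look like the standard rc.conf pattern:

```shell
# /etc/rc.conf -- enable the ZFS fault-management daemon
# (assumes a FreeBSD release that actually includes zfsd)
zfsd_enable="YES"
```

With the daemon running, it would monitor GEOM/devd events and perform the spare attachment that autoreplace currently leaves to the administrator.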