disk space available but no space in pool

yh113
NewUser
Posts: 14
Joined: 01 May 2014 06:14
Status: Offline

disk space available but no space in pool

Post by yh113 »

Hi,

I am running 10.2.0.2 revision 1906 with a 4TB HDD. I created a pool named backup and a dataset with dedup enabled.
The pool is only 27% used, but there is no free space left.
"System > General" also reports 2.63B of free space, yet I cannot write any data. I have no idea why there is no space in the pool.
Please advise.

Regards,

Parkcomm
Advanced User
Posts: 384
Joined: 21 Sep 2012 12:58
Location: Australia
Status: Offline

Re: disk space available but no space in pool

Post by Parkcomm »

Firstly, you've stored 81.5T on a 3.5T file system - that's got to be a win regardless.

Is the pool a three-disk raidz2? Read this: https://docs.oracle.com/cd/E26502_01/ht ... gbbti.html
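If it is a three-disk raidz2, the back-of-envelope usable capacity can be sketched like this (a rough rule of thumb only; real pools lose more to metadata, padding and the slop reservation):

```shell
# Rule of thumb: an n-disk raidz2 keeps roughly (n - 2) disks of usable
# data capacity, since two disks' worth of space goes to parity.
DISKS=3
DISK_TB=4
USABLE_TB=$(( (DISKS - 2) * DISK_TB ))
echo "approx usable: ${USABLE_TB} TB"
# prints: approx usable: 4 TB
```

That parity overhead is one reason the pool size and the filesystem's reported free space can look so different.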
NAS4Free Embedded 10.2.0.2 - Prester (revision 2003), HP N40L Microserver (AMD Turion) with modified BIOS, ZFS Mirror 4 x WD Red + L2ARC 128M Apple SSD, 10G ECC Ram, Intel 1G CT NIC + inbuilt broadcom

yh113
NewUser
Posts: 14
Joined: 01 May 2014 06:14
Status: Offline

Re: disk space available but no space in pool

Post by yh113 »

Thank you for your reply.
I read the document.
How can I check how much space the metadata occupies?
I suspect the dedup hash table is taking the space, as the dedup ratio is quite high.
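Not mentioned in the thread, but zdb has a dedup-statistics mode that reports the dedup table's entry counts and its on-disk and in-core sizes; a sketch, assuming the pool is named backup and zdb can open it:

```shell
# DDT summary: entries per table plus size on disk and in core
zdb -D backup

# Same, with a histogram of reference counts for more detail
zdb -DD backup
```

These commands need a live pool, so treat them as a starting point rather than a recipe.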

Parkcomm
Advanced User
Posts: 384
Joined: 21 Sep 2012 12:58
Location: Australia
Status: Offline

Re: disk space available but no space in pool

Post by Parkcomm »

The FreshPorts zfs-stats port (http://www.freshports.org/sysutils/zfs-stats) gives you all the stats.

You can just run

Code: Select all

pkg install zfs-stats

to install it, but it won't survive a reboot.
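The embedded builds run from a RAM disk, which is why the package disappears. One possible workaround (an assumption on my part, using a boot-time command hook rather than anything from this thread) is simply to reinstall at each boot:

```shell
# PostInit-style boot command: reinstall zfs-stats on every boot
# (-y answers yes automatically; needs working network at boot time)
pkg install -y zfs-stats
```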

yh113
NewUser
Posts: 14
Joined: 01 May 2014 06:14
Status: Offline

Re: disk space available but no space in pool

Post by yh113 »

Thank you. After deleting some files to free up disk space, I installed zfs-stats.
I am still wondering why there is so little space left in the pool.

Code: Select all

backup01: ~# zfs list
NAME                  USED  AVAIL  REFER  MOUNTPOINT
backup               81.0T  25.3G  72.9T  /mnt/backup
backup/ZFS_Datasets  8.02T  25.3G  8.02T  /mnt/backup/ZFS_Datasets
scripts               625K   983M  32.5K  /mnt/scripts
 backup01: ~# zpool list
NAME      SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
backup   3.62T  1011G  2.64T         -    33%    27%  226.54x  ONLINE  -
scripts  1016M   948K  1015M         -      -     0%  1.00x  ONLINE  -
 backup01: ~#

 backup01: ~# /usr/local/bin/zfs-stats -a

------------------------------------------------------------------------
ZFS Subsystem Report                            Mon Nov  9 09:54:17 2015
------------------------------------------------------------------------

System Information:

        Kernel Version:                         1002000 (osreldate)
        Hardware Platform:                      amd64
        Processor Architecture:                 amd64

        ZFS Storage pool Version:               5000
        ZFS Filesystem Version:                 5

FreeBSD 10.2-RELEASE-p5 #0 r289200M: Tue Oct 13 00:54:46 CEST 2015 root
 9:54AM  up 2 days, 10 mins, 2 users, load averages: 0.22, 0.26, 0.24

------------------------------------------------------------------------

System Memory:

        0.22%   26.23   MiB Active,     2.72%   323.73  MiB Inact
        82.93%  9.64    GiB Wired,      0.00%   0 Cache
        10.66%  1.24    GiB Free,       3.47%   412.42  MiB Gap

        Real Installed:                         12.00   GiB
        Real Available:                 99.59%  11.95   GiB
        Real Managed:                   97.23%  11.62   GiB

        Logical Total:                          12.00   GiB
        Logical Used:                   87.04%  10.45   GiB
        Logical Free:                   12.96%  1.55    GiB

Kernel Memory:                                  391.73  MiB
        Data:                           91.82%  359.67  MiB
        Text:                           8.18%   32.06   MiB

Kernel Memory Map:                              11.62   GiB
        Size:                           56.68%  6.59    GiB
        Free:                           43.32%  5.03    GiB

------------------------------------------------------------------------

ARC Summary: (HEALTHY)
        Memory Throttle Count:                  0

ARC Misc:
        Deleted:                                63.86m
        Recycle Misses:                         42.27m
        Mutex Misses:                           13.19k
        Evict Skips:                            15.68m

ARC Size:                               96.87%  7.70    GiB
        Target Size: (Adaptive)         97.27%  7.73    GiB
        Min Size (Hard Limit):          50.00%  3.98    GiB
        Max Size (High Water):          2:1     7.95    GiB

ARC Size Breakdown:
        Recently Used Cache Size:       81.62%  6.31    GiB
        Frequently Used Cache Size:     18.38%  1.42    GiB

ARC Hash Breakdown:
        Elements Max:                           3.28m
        Elements Current:               99.48%  3.26m
        Collisions:                             30.22m
        Chain Max:                              14
        Chains:                                 924.13k

------------------------------------------------------------------------

ARC Efficiency:                                 1.92b
        Cache Hit Ratio:                94.13%  1.81b
        Cache Miss Ratio:               5.87%   112.71m
        Actual Hit Ratio:               93.42%  1.79b

        Data Demand Efficiency:         77.61%  271.14k

        CACHE HITS BY CACHE LIST:
          Most Recently Used:           2.43%   43.98m
          Most Frequently Used:         96.81%  1.75b
          Most Recently Used Ghost:     0.03%   467.38k
          Most Frequently Used Ghost:   2.75%   49.64m

        CACHE HITS BY DATA TYPE:
          Demand Data:                  0.01%   210.44k
          Prefetch Data:                0.00%   0
          Demand Metadata:              99.12%  1.79b
          Prefetch Metadata:            0.87%   15.72m

        CACHE MISSES BY DATA TYPE:
          Demand Data:                  0.05%   60.70k
          Prefetch Data:                0.00%   0
          Demand Metadata:              74.68%  84.17m
          Prefetch Metadata:            25.27%  28.48m

------------------------------------------------------------------------

L2 ARC Summary: (HEALTHY)
        Passed Headroom:                        7.75m
        Tried Lock Failures:                    1.95m
        IO In Progress:                         220.63k
        Low Memory Aborts:                      2
        Free on Write:                          308.96k
        Writes While Full:                      17.52k
        R/W Clashes:                            2.66k
        Bad Checksums:                          0
        IO Errors:                              0
        SPA Mismatch:                           829.73k

L2 ARC Size: (Adaptive)                         212.50  GiB
        Header Size:                    0.28%   604.18  MiB

L2 ARC Breakdown:                               112.71m
        Hit Ratio:                      73.67%  83.03m
        Miss Ratio:                     26.33%  29.68m
        Feeds:                                  193.66k

L2 ARC Buffer:
        Bytes Scanned:                          12.07   TiB
        Buffer Iterations:                      193.66k
        List Iterations:                        12.29m
        NULL List Iterations:                   1.40m

L2 ARC Writes:
        Writes Sent:                    100.00% 85.54k

------------------------------------------------------------------------

File-Level Prefetch: (HEALTHY)

DMU Efficiency:                                 2.07b
        Hit Ratio:                      99.84%  2.06b
        Miss Ratio:                     0.16%   3.36m

        Colinear:                               3.36m
          Hit Ratio:                    0.07%   2.25k
          Miss Ratio:                   99.93%  3.36m

        Stride:                                 2.06b
          Hit Ratio:                    100.00% 2.06b
          Miss Ratio:                   0.00%   60.35k

DMU Misc:
        Reclaim:                                3.36m
          Successes:                    2.19%   73.45k
          Failures:                     97.81%  3.29m

        Streams:                                1.02m
          +Resets:                      0.00%   14
          -Resets:                      100.00% 1.02m
          Bogus:                                0

------------------------------------------------------------------------

VDEV cache is disabled

------------------------------------------------------------------------

ZFS Tunables (sysctl):
        kern.maxusers                           1100
        vm.kmem_size                            12476121088
        vm.kmem_size_scale                      1
        vm.kmem_size_min                        0
        vm.kmem_size_max                        1319413950874
        vfs.zfs.trim.max_interval               1
        vfs.zfs.trim.timeout                    30
        vfs.zfs.trim.txg_delay                  32
        vfs.zfs.trim.enabled                    1
        vfs.zfs.vol.unmap_enabled               1
        vfs.zfs.vol.mode                        1
        vfs.zfs.version.zpl                     5
        vfs.zfs.version.spa                     5000
        vfs.zfs.version.acl                     1
        vfs.zfs.version.ioctl                   4
        vfs.zfs.debug                           0
        vfs.zfs.super_owner                     0
        vfs.zfs.sync_pass_rewrite               2
        vfs.zfs.sync_pass_dont_compress         5
        vfs.zfs.sync_pass_deferred_free         2
        vfs.zfs.zio.exclude_metadata            0
        vfs.zfs.zio.use_uma                     1
        vfs.zfs.cache_flush_disable             0
        vfs.zfs.zil_replay_disable              0
        vfs.zfs.min_auto_ashift                 9
        vfs.zfs.max_auto_ashift                 13
        vfs.zfs.vdev.trim_max_pending           10000
        vfs.zfs.vdev.bio_delete_disable         0
        vfs.zfs.vdev.bio_flush_disable          0
        vfs.zfs.vdev.write_gap_limit            4096
        vfs.zfs.vdev.read_gap_limit             32768
        vfs.zfs.vdev.aggregation_limit          131072
        vfs.zfs.vdev.trim_max_active            64
        vfs.zfs.vdev.trim_min_active            1
        vfs.zfs.vdev.scrub_max_active           2
        vfs.zfs.vdev.scrub_min_active           1
        vfs.zfs.vdev.async_write_max_active     10
        vfs.zfs.vdev.async_write_min_active     1
        vfs.zfs.vdev.async_read_max_active      3
        vfs.zfs.vdev.async_read_min_active      1
        vfs.zfs.vdev.sync_write_max_active      10
        vfs.zfs.vdev.sync_write_min_active      10
        vfs.zfs.vdev.sync_read_max_active       10
        vfs.zfs.vdev.sync_read_min_active       10
        vfs.zfs.vdev.max_active                 1000
        vfs.zfs.vdev.async_write_active_max_dirty_percent 60
        vfs.zfs.vdev.async_write_active_min_dirty_percent 30
        vfs.zfs.vdev.mirror.non_rotating_seek_inc 1
        vfs.zfs.vdev.mirror.non_rotating_inc    0
        vfs.zfs.vdev.mirror.rotating_seek_offset 1048576
        vfs.zfs.vdev.mirror.rotating_seek_inc   5
        vfs.zfs.vdev.mirror.rotating_inc        0
        vfs.zfs.vdev.trim_on_init               1
        vfs.zfs.vdev.cache.bshift               16
        vfs.zfs.vdev.cache.size                 0
        vfs.zfs.vdev.cache.max                  16384
        vfs.zfs.vdev.metaslabs_per_vdev         200
        vfs.zfs.txg.timeout                     5
        vfs.zfs.space_map_blksz                 4096
        vfs.zfs.spa_slop_shift                  5
        vfs.zfs.spa_asize_inflation             24
        vfs.zfs.deadman_enabled                 0
        vfs.zfs.deadman_checktime_ms            5000
        vfs.zfs.deadman_synctime_ms             1000000
        vfs.zfs.recover                         0
        vfs.zfs.spa_load_verify_data            1
        vfs.zfs.spa_load_verify_metadata        1
        vfs.zfs.spa_load_verify_maxinflight     10000
        vfs.zfs.check_hostid                    1
        vfs.zfs.mg_fragmentation_threshold      85
        vfs.zfs.mg_noalloc_threshold            0
        vfs.zfs.condense_pct                    200
        vfs.zfs.metaslab.bias_enabled           1
        vfs.zfs.metaslab.lba_weighting_enabled  1
        vfs.zfs.metaslab.fragmentation_factor_enabled 1
        vfs.zfs.metaslab.preload_enabled        1
        vfs.zfs.metaslab.preload_limit          3
        vfs.zfs.metaslab.unload_delay           8
        vfs.zfs.metaslab.load_pct               50
        vfs.zfs.metaslab.min_alloc_size         33554432
        vfs.zfs.metaslab.df_free_pct            4
        vfs.zfs.metaslab.df_alloc_threshold     131072
        vfs.zfs.metaslab.debug_unload           0
        vfs.zfs.metaslab.debug_load             0
        vfs.zfs.metaslab.fragmentation_threshold 70
        vfs.zfs.metaslab.gang_bang              16777217
        vfs.zfs.free_max_blocks                 -1
        vfs.zfs.no_scrub_prefetch               0
        vfs.zfs.no_scrub_io                     0
        vfs.zfs.resilver_min_time_ms            3000
        vfs.zfs.free_min_time_ms                1000
        vfs.zfs.scan_min_time_ms                1000
        vfs.zfs.scan_idle                       50
        vfs.zfs.scrub_delay                     4
        vfs.zfs.resilver_delay                  2
        vfs.zfs.top_maxinflight                 32
        vfs.zfs.zfetch.array_rd_sz              1048576
        vfs.zfs.zfetch.block_cap                256
        vfs.zfs.zfetch.min_sec_reap             2
        vfs.zfs.zfetch.max_streams              8
        vfs.zfs.prefetch_disable                0
        vfs.zfs.delay_scale                     500000
        vfs.zfs.delay_min_dirty_percent         60
        vfs.zfs.dirty_data_sync                 67108864
        vfs.zfs.dirty_data_max_percent          10
        vfs.zfs.dirty_data_max_max              4294967296
        vfs.zfs.dirty_data_max                  1283175219
        vfs.zfs.max_recordsize                  1048576
        vfs.zfs.mdcomp_disable                  0
        vfs.zfs.nopwrite_enabled                1
        vfs.zfs.dedup.prefetch                  1
        vfs.zfs.l2c_only_size                   227634050560
        vfs.zfs.mfu_ghost_data_lsize            0
        vfs.zfs.mfu_ghost_metadata_lsize        6779491840
        vfs.zfs.mfu_ghost_size                  6779491840
        vfs.zfs.mfu_data_lsize                  91648
        vfs.zfs.mfu_metadata_lsize              398522368
        vfs.zfs.mfu_size                        423091712
        vfs.zfs.mru_ghost_data_lsize            0
        vfs.zfs.mru_ghost_metadata_lsize        1529002496
        vfs.zfs.mru_ghost_size                  1529002496
        vfs.zfs.mru_data_lsize                  6135668224
        vfs.zfs.mru_metadata_lsize              244748800
        vfs.zfs.mru_size                        6777208832
        vfs.zfs.anon_data_lsize                 0
        vfs.zfs.anon_metadata_lsize             0
        vfs.zfs.anon_size                       296960
        vfs.zfs.l2arc_norw                      1
        vfs.zfs.l2arc_feed_again                1
        vfs.zfs.l2arc_noprefetch                1
        vfs.zfs.l2arc_feed_min_ms               200
        vfs.zfs.l2arc_feed_secs                 1
        vfs.zfs.l2arc_headroom                  2
        vfs.zfs.l2arc_write_boost               8388608
        vfs.zfs.l2arc_write_max                 8388608
        vfs.zfs.arc_meta_limit                  2134196224
        vfs.zfs.arc_free_target                 21164
        vfs.zfs.arc_shrink_shift                5
        vfs.zfs.arc_average_blocksize           8192
        vfs.zfs.arc_min                         4268392448
        vfs.zfs.arc_max                         8536784896

------------------------------------------------------------------------

 backup01: ~#

Parkcomm
Advanced User
Posts: 384
Joined: 21 Sep 2012 12:58
Location: Australia
Status: Offline

Re: disk space available but no space in pool

Post by Parkcomm »

To calculate the size of the dedup table:

Code: Select all

zdb -b MightyMouse

Traversing all blocks to verify nothing leaked ...

loading space map for vdev 2 of 3, metaslab 118 of 119 ...
3.46T completed (1037MB/s) estimated time remaining: 4294906092hr 4294967257min 4294967248sec        
	No leaks (block sum matches space maps exactly)

	bp count:        32312675
	ganged count:           0
	bp logical:    3868526688768      avg: 119721
	bp physical:   3792449540096      avg: 117367     compression:   1.02
	bp allocated:  3802939887616      avg: 117691     compression:   1.02
	bp deduped:    160838795264    ref>1: 794679   deduplication:   1.04
	SPA allocated: 3642100318208     used: 73.10%

	additional, non-pointer bps of type 0:     122332
	Dittoed blocks on same vdev: 109900
Let it run for half a day and it will tell you the number of blocks - in my case it's 32312675.
Multiply this by 320 bytes to get the size of the dedup table - it'll be at most tens of gigs, so it won't explain missing terabytes.
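That arithmetic can be sketched in the shell (the 320-bytes-per-entry figure is the rule of thumb used in this post, not an exact constant):

```shell
# Dedup-table size estimate: zdb block count x ~320 bytes per entry
BP_COUNT=32312675                  # "bp count" from zdb -b
DDT_BYTES=$((BP_COUNT * 320))
awk -v b="$DDT_BYTES" 'BEGIN {printf "DDT estimate: %.1f GiB\n", b / 1024^3}'
# prints: DDT estimate: 9.6 GiB
```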

Can you provide the output of

Code: Select all

zfs list poolname
zpool list -v poolname
Last edited by Parkcomm on 09 Nov 2015 07:02, edited 1 time in total.

yh113
NewUser
Posts: 14
Joined: 01 May 2014 06:14
Status: Offline

Re: disk space available but no space in pool

Post by yh113 »

Thank you so much. Due to a business trip I cannot run the commands right now. I will get back to you next week. Much appreciated!

yh113
NewUser
Posts: 14
Joined: 01 May 2014 06:14
Status: Offline

Re: disk space available but no space in pool

Post by yh113 »

Hi, just a quick result... I deleted some files to free up disk space, so there is some space now.

Code: Select all

 backup01: ~# zfs list backup
NAME     USED  AVAIL  REFER  MOUNTPOINT
backup  81.0T  25.3G  72.9T  /mnt/backup

 backup01: ~# zpool list -v backup
NAME         SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
backup      3.62T  1011G  2.64T         -    33%    27%  226.54x  ONLINE  -
  da3       3.62T  1011G  2.64T         -    33%    27%
cache           -      -      -      -      -      -
  da1        233G   206G  26.5G         -     0%    88%
 backup01: ~#
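As a rough cross-check of the listings above: dividing the logical USED from zfs list by the physical ALLOC from zpool list gives the overall space-reduction factor (this need not match the DEDUP column, which is computed from the dedup table alone):

```shell
# Overall reduction = logical bytes referenced / physical bytes allocated
USED_TIB=81.0      # zfs list: backup USED
ALLOC_GIB=1011     # zpool list: backup ALLOC
RATIO=$(awk -v u="$USED_TIB" -v a="$ALLOC_GIB" 'BEGIN {printf "%.0f", u * 1024 / a}')
echo "overall reduction: about ${RATIO}x"
# prints: overall reduction: about 82x
```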
Now it's time to leave for my business trip...
Last edited by yh113 on 16 Nov 2015 02:19, edited 1 time in total.

Parkcomm
Advanced User
Posts: 384
Joined: 21 Sep 2012 12:58
Location: Australia
Status: Offline

Re: disk space available but no space in pool

Post by Parkcomm »

You should use the [code] tag - click the full editor and preview button.

Here's mine - I have the issue, just not as bad:

Code: Select all

# zfs list MightyMouse
NAME          USED  AVAIL  REFER  MOUNTPOINT
MightyMouse  4.40T   138G  17.9G  /mnt/MightyMouse

# zpool list MightyMouse
NAME          SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
MightyMouse  4.53T  3.31T  1.22T         -    19%    73%  1.62x  ONLINE  -
Last edited by Parkcomm on 09 Nov 2015 13:45, edited 2 times in total.

Parkcomm
Advanced User
Posts: 384
Joined: 21 Sep 2012 12:58
Location: Australia
Status: Offline

Re: disk space available but no space in pool

Post by Parkcomm »

I reckon it's a bug - I tried to find an existing bug report but couldn't find one. You may want to report it: https://sourceforge.net/p/nas4free/bugs/

yh113
NewUser
Posts: 14
Joined: 01 May 2014 06:14
Status: Offline

Re: disk space available but no space in pool

Post by yh113 »

Thank you. I am back from my business trip.
I have posted this issue at https://sourceforge.net/p/nas4free/bugs/253/ .

yh113
NewUser
Posts: 14
Joined: 01 May 2014 06:14
Status: Offline

Re: disk space available but no space in pool

Post by yh113 »

Hello,

Someone pointed out that this issue has already been fixed upstream. I hope the fix makes it into NAS4Free soon.
See the details at https://sourceforge.net/p/nas4free/bugs/253/ .

Regards
