I'm not quite sure which section to post this in; I don't see any section dedicated to questions about the base of nas4free or strictly related to FreeBSD.
I ran into a problem doing an "upgrade". The problem may already have existed in freenas 0.7.2.xxxx (can't remember the build, I think it was .6 something), but I didn't discover it until after the upgrade. It seems resolved now, but I want to avoid it in the future.
Anyway, the problem is basically that the free space reported on a UFS drive didn't match the actual free space. I'm not talking about the reserved space available only to root, either. I had a 2TB drive with (eventually) no files on it except for .snap, yet the drive was showing 1.7T used out of 1.8T reported, with 4G free. fsck would come back and say there was nothing wrong with the filesystem.
Everything below happened after the upgrade.
The problem started with transmission. I started to download something and the drive ran out of space. So I moved (and I mean with mv, not cp followed by rm) a few gig to a different drive, restarted transmission, and it ran out again. That is when I did a df -h and saw the drive showing about -4G remaining. I moved another 20-30G of files and checked again, and the space remaining was still negative. At that point I unmounted the drive and ran fsck, which came back clean, but the free space was still out of whack. So I moved all files (except .snap) off the drive and reran fsck. Same thing. Then I tried to remove .snap, but after 10 minutes or so of drive activity it was obvious something was wrong, and I couldn't kill the process that was removing it, or its child, even with kill -9. So I shut down the server and restarted, and got that block-rebuilding output (all the single- and double-digit numbers that run across the screen; sorry, I didn't write down what it was and don't remember the specifics). When the server came back up I ran fsck again, just to see, and the problem had resolved itself.
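For what it's worth, one classic cause of df disagreeing with the visible files is a file that was deleted (or replaced by mv) while a process such as transmission still held it open: the blocks stay allocated until the last descriptor closes, so du and df drift apart. A minimal sketch of the effect, using a throwaway directory rather than the real drive:

```shell
# Demonstrate the "deleted but still open" effect in a scratch directory.
tmpd=$(mktemp -d)
dd if=/dev/zero of="$tmpd/blob" bs=1024 count=10240 2>/dev/null

exec 3<"$tmpd/blob"   # hold the file open on descriptor 3
rm "$tmpd/blob"       # directory entry is gone, blocks are still allocated

du -sk "$tmpd"        # du no longer counts the file at all
exec 3<&-             # closing the descriptor finally frees the blocks
rmdir "$tmpd"
```

On FreeBSD, `fstat -f /mnt/yourdrive` lists descriptors still open on that mount; if df and du disagree and fstat shows open files there, restarting the daemon holding them normally releases the space. (This doesn't explain space that survives a reboot, as in your case, but it's worth ruling out first.)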
So I'm writing this to find out whether others have run into similar problems.
Is there a way to fix it short of moving all files off the drive?
Did this start under freenas (FreeBSD 7.something?) but get resolved in nas4free (FreeBSD 9.something)?
Is this a FreeBSD issue, or does it only show up under nas4free/freenas?
I never ran into this in more than a year with freenas 0.7.2, so I'm still thinking of reverting. I saved a copy of my USB stick, and if I run into one more problem I'm done with nas4free, at least until it is more stable. I may try openmediavault, since I'd rather have a Linux-based NAS, but that still seems even more beta than nas4free.
Other pertinent information:
The current version of nas4free is 9.0.0.1.139.
I am running embedded off a USB stick, not off a HD partition.
I couldn't see anything pertinent in the var logs when I looked.
After the first couple of mv's, I emptied the rest of the drive by copying and then removing files, not just moving them.
I tried to upgrade twice. The first time, I tried to merge the contents of my config from the old and new versions, to preserve network settings/userids/passwords/services etc., but gave up quickly once I realized it wouldn't be very difficult to re-add what I wanted to nas4free, given that I could look up the settings I needed in the old config.xml. But I couldn't do transfers from win7 (it turns out to be the aio problem that hasn't been resolved yet), so I reverted because I needed the server back. Then I tried again a few days ago.
Along the way fsck ended up running several times: once or twice manually, to try to resolve my issues, and once or twice because the system froze and I had no choice but to flip the switch.
My NAS is a boxd525mw, which is an Atom D525 board, with 4G of RAM. I'm using 4 drives:
3 x 2.0TB Seagate green
1 x 1.5TB WD green
Two of them use the onboard SATA; two run from a Syba PCI SATA card (4 x SATA I ports).
I upgraded the bios before the last install.
That is about all I can think of for now.
This is the old XigmaNAS forum in read-only mode;
it will be taken offline by the end of March 2021!
I'd like to ask users and admins to rewrite/carry over important posts from here into the fresh new main forum!
It's not possible for us to export from here and import into the main forum!
Encountered a UFS problem after switching from freenas 0.7.2
-
scott1256ca
- NewUser

- Posts: 12
- Joined: 28 Jun 2012 02:14
- Status: Offline
- shakky4711
- Advanced User

- Posts: 273
- Joined: 25 Jun 2012 08:27
- Status: Offline
Re: Encountered a UFS problem after switching from freenas 0.7.2
Hello,
When you format a drive with UFS, the system reserves 8% of the capacity for the root user (old Unix behavior, so that root can still install software on a drive that a user or a misbehaving program has completely filled) and for file-system-internal bookkeeping.
As you surely know, hard disk manufacturers use a factor of 1000 instead of the technically correct 1024, which makes drive capacity look bigger than it really is.
So a 1 TB drive in fact has a capacity of 931GB; minus 8%, you end up with ~860GB.
Some programs also count space differently: df (disk free) reports different values than you would calculate by taking the capacity of the disk and subtracting all the files on it (du), and Windows Explorer shows yet a third value, as far as I've heard (I don't have Windows). This is the reason some file systems were reported as 105% used.
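That arithmetic is easy to check; the little awk snippet below just reproduces the 1000-vs-1024 conversion and the 8% reserve (the exact reserve percentage is a newfs-time setting, 8% being the historical default):

```shell
# Vendor "1 TB" in binary units, minus the default 8% UFS root reserve.
awk 'BEGIN {
  tb    = 1e12            # 1 "TB" as drive vendors count it: 10^12 bytes
  gib   = tb / (1024^3)   # the same bytes in binary GiB
  avail = gib * 0.92      # minus the 8% reserve
  printf "%.0f GiB raw, %.0f GiB available\n", gib, avail
}'
# → 931 GiB raw, 857 GiB available
```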
Best regards
Shakky
-
scott1256ca
- NewUser

- Posts: 12
- Joined: 28 Jun 2012 02:14
- Status: Offline
Re: Encountered a UFS problem after switching from freenas 0.7.2
Thank you for responding, but I'm not sure you read my entire post.
After removing ALL files (except the .snap directory) from the drive, it still reported only 4G free.
I was already aware that some space is reserved for admin (or root, if you will) and can only be used by root. I did state that the reserved space was not what I was talking about. If you have some insight into why a 2T drive with no files would report 4G free, I am interested to hear it. I don't see how this could be any background process taking up the space, since I rebooted at least a couple of times during this process. At no time, even after removing chunks of 200G, did the filesystem show more than 4G free, until I killed the process that was trying to remove .snap and rebooted.
What I could find about .snap indicated it is used for snapshots. I never tried to create one, and I know nothing of the procedure. If that is what used the space, I'd like to know why, and how to avoid what I had to go through to recover it afterwards.
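For reference: on FreeBSD, .snap is where UFS keeps snapshot files; background fsck and dump -L create them, and a crashed snapshot operation can leave one behind that still pins its blocks. A hedged sketch of how one might look for leftovers (/mnt/data is a stand-in for the affected drive, not your actual mount point):

```shell
# Stand-in mount point; substitute the real UFS mount.
MNT=${MNT:-/mnt/data}

# List any snapshot files left behind by background fsck or dump -L.
ls -l "$MNT/.snap" 2>/dev/null || echo "no .snap directory under $MNT"

# A stale snapshot can be removed like an ordinary file (as root), e.g.:
#   rm /mnt/data/.snap/fsck_snapshot
# If the rm hangs, as happened in this thread, a reboot is what lets
# the kernel release the snapshot and reclaim the space.
```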
Thanks
-
Alekwhat
- NewUser

- Posts: 1
- Joined: 24 Jul 2012 04:13
- Status: Offline
Re: Encountered a UFS problem after switching from freenas 0.7.2
Maybe it's time to switch to ZFS.
I moved from Freenas 7 to Nas4Free 9 with no problems at all.
I'm using UFS for the OS only, and all the data is stored on ZFS.
Running NAS on VMware workstation 7
2 core 2700MHz
1.5 Gb RAM
2 x 1Gb HDD
1 x .5Gb HDD
13Gb (VMware) virtual HDD for OS