This is the old XigmaNAS forum in read-only mode;
it will be taken offline by the end of March 2021!



We would like to ask users and admins to rewrite or carry over important posts from here into the fresh new main forum!
It is not possible for us to export posts from here and import them into the main forum!

GA-J1900N-D3V: How it works!

Motherboard compatibility with XigmaNAS: questions, answers, suggestions
sku1d
NewUser
Posts: 7
Joined: 25 Mar 2014 13:04
Location: Bielefeld, Germany
Contact:
Status: Offline

GA-J1900N-D3V: How it works!

Post by sku1d »

This board is brand new and I could not find anybody using it with FreeBSD or NAS4Free.
I think it is a worthwhile contribution to mention that it actually works, after some BIOS tweaks, which I will try to collect here.

Considerations
Take a few things into consideration before you buy it!
  • + price (< 90 euro)
  • + mini-ITX form factor comes in handy
  • + draws just 10 watts
  • + quad-core CPU at 2.0 GHz
  • - Celeron CPU
  • + USB 3.0 ports
  • - only two onboard SATA ports
  • - no PCIe slot (only PCI)
  • + 2x 1000 Mbit network interfaces
  • - no dual-monitor support because of missing drivers under Linux or BSD
  • - the onboard USB header is only USB 2.0, so no front-panel USB 3.0 support
  • - SATA expansion cards for PCI (if you need one) are more expensive, harder to find and often mislabelled: look for cards that are supported by FreeBSD, are bootable, support hot-plugging and reach 300 MB/s, mostly advertised as SATA generation II
  • + the board has two serial ports and seems to have a fully functional parallel port; you might be able to use them with LCDproc to drive a front-panel LCD display
BIOS settings to use with NAS4Free
  1. If you have rev. 1.0: do a BIOS update. The current version is called F2 and can be found here
  2. Enter the BIOS by hitting either DEL or ESC
  3. Activate CSM support under Advanced -> CSM Configuration. CSM stands for "Compatibility Support Module" and enables traditional boot modes other than UEFI (booting via UEFI would require a GPT partition scheme and a dedicated boot partition).

    Code: Select all

    CSM Support [enabled]
    Boot Option Filter  [legacy only]
    Network [do not launch]
    Storage [legacy only]
    Video [UEFI first] // be careful with this one, as it might disable your graphics output and leave you with a blank screen!
    Other PCI Devices [legacy only]
    
  4. Go to Advanced -> USB Configuration and disable XHCI and EHCI Hand-Off; the implementation seems to be faulty and causes a kernel crash at boot time

    Code: Select all

    Legacy USB Support [Enabled]
    USB3.0 Support [Enabled]
    XHCI Hand-Off [Disabled]
    EHCI Hand-Off [Disabled]
    USB Mass Storage Driver Support [Enabled]
    
  5. Go to Save & Exit, choose Save Changes and Reset, and re-enter the BIOS so that your 'legacy' devices show up in the boot menu
  6. Insert your prepared NAS4Free memory stick
  7. Go to Boot -> Hard Drive BBS Priorities and select your memory stick to be Boot Option #1 and your internal disk where you want to install it to be Boot Option #2
  8. Go back to the normal Boot menu by hitting ESC and order the Boot Option List again
  9. Save changes and reboot as before / have fun.
Last edited by sku1d on 25 Mar 2014 16:31, edited 1 time in total.

User avatar
b0ssman
Forum Moderator
Posts: 2438
Joined: 14 Feb 2013 08:34
Location: Munich, Germany
Status: Offline

Re: GA-J1900N-D3V: How it works!

Post by b0ssman »

Some remarks:
the Celeron J1900 does not support AES-NI

FreeBSD does not support USB 3.0

more interesting boards
http://www.asrock.com/mb/Intel/Q1900M/? ... ifications (3 pcie slots)
http://www.supermicro.com/products/moth ... X10SBA.cfm (6 sata 1x pcie)
Nas4Free 11.1.0.4.4517. Supermicro X10SLL-F, 16gb ECC, i3 4130, IBM M1015 with IT firmware. 4x 3tb WD Red, 4x 2TB Samsung F4, both GEOM AES 256 encrypted.

sku1d
NewUser
Posts: 7
Joined: 25 Mar 2014 13:04
Location: Bielefeld, Germany
Contact:
Status: Offline

Re: GA-J1900N-D3V: How it works!

Post by sku1d »

Thanks for pointing out the AES issue. I must have misinterpreted some command-line output; I have edited my post.
XHCI support stays a pro argument for me, because it will surely be implemented some day, so its benefit lies in the future.

sku1d
NewUser
Posts: 7
Joined: 25 Mar 2014 13:04
Location: Bielefeld, Germany
Contact:
Status: Offline

slow NFS performance when using TCP

Post by sku1d »

I know this information is out there, but the reason why this made NFS so much slower was not obvious to me. I found that I could achieve 80 MB/s write speed using SMB/CIFS, but only about 30 MB/s using NFS in its standard configuration.

NFSv4 defaults to TCP, and that is its default value in many Linux distributions as well. It is a sane default, because TCP checks data integrity and works even in noisy WLAN environments. To keep things short: I noticed that I can achieve 110 MB/s using the mount options vers=3 and proto=udp when mounting NFS volumes.

I stopped my investigations concerning ZFS and tried to find clues that some of my network cables were broken, but could not find notably bad values for packet loss and the like. In the end I found that I could get the same results using NFS together with so-called jumbo frames. Here the BSD driver only supports an MTU of up to 6000, whereas many tutorials suggest 9000. I have not found a method to list the possible values and just tried them out by hand: log in to your NAS4Free system via SSH with root permissions (deactivate SSH afterwards!) and use the following command to do so:

Code: Select all

ifconfig re0 mtu 6000 up
If the interface does not support the requested MTU, ifconfig will print an error message. After you find a value that works for your server, you should apply it to your clients as well, so that they send jumbo frames too. Mismatched MTUs can lead to incompatibilities. I do not really know how to handle those properly, but I believe it will also work when you halve the MTU value, so that one frame fits into two packets.
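The trial-and-error search described above can be scripted. This is a minimal sketch under the assumption that re0 is the interface name (as on this board's Realtek NICs) and that it runs as root on the server; on any other machine every candidate will simply be reported as rejected:

```shell
#!/bin/sh
# Probe which jumbo-frame MTU values an interface accepts (sketch).
# IFACE defaults to re0, the Realtek NIC on the GA-J1900N-D3V; adjust to yours.
IFACE="${1:-re0}"
: > /tmp/mtu-probe.log
for mtu in 9000 8000 7000 6000 5000 4000 1500; do
    # ifconfig exits non-zero if the driver rejects the requested MTU
    if ifconfig "$IFACE" mtu "$mtu" up 2>/dev/null; then
        echo "accepted MTU: $mtu" | tee -a /tmp/mtu-probe.log
        break
    fi
    echo "rejected MTU: $mtu" | tee -a /tmp/mtu-probe.log
done
```

On this board the loop should stop at 6000; remember to configure the same MTU on your clients afterwards.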

Also note that tips concerning the rsize and wsize values for NFS shares are somewhat outdated, as these days they can be negotiated with the server, at least when using NFSv4. To activate NFSv4, add these settings to your rc.conf file:

Code: Select all

nfs_server_enable="YES"
nfsv4_server_enable="YES"
nfsuserd_enable="YES"

sku1d
NewUser
Posts: 7
Joined: 25 Mar 2014 13:04
Location: Bielefeld, Germany
Contact:
Status: Offline

MTU Setting and Jumboframes

Post by sku1d »

After working with the above configuration, my client machines were hanging/blocking for a minute or more. It even seemed to deadlock.

Investigation showed that this happens whenever I transfer a large number of files or when I use

Code: Select all

find /mnt/nfs/
to list all files on the NFS file system.

edit: I found the solution to this problem in the FreeBSD documentation: 27.3.6 Problems Integrating with Other Systems. It seems to happen when your client machine is faster than your server (which, in the case of this board, is quite likely).

I was able to fix the issue by limiting the client's transfer size when writing to the NFS server. This can be done with a mount parameter (that is, when you call mount with -o), e.g.

Code: Select all

# FreeBSD syntax:
mount -t nfs -o wsize=1024 [netdev] [localpath]
# Linux syntax:
mount -o wsize=1024 [netdev] [localpath]
That solution might not be applicable in situations where you cannot configure the clients, though. To my current knowledge, the only other way is to add a faster network expansion card, as mentioned in the FreeBSD documentation (see above).
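To make the reduced write size permanent on a Linux client, the same option can go into /etc/fstab. A sketch of such an entry, with the hostname and paths as placeholder examples:

```shell
# /etc/fstab entry on the Linux client (nas.local and the paths are examples):
nas.local:/mnt/zfs_pool   /mnt/nfs   nfs   wsize=1024   0   0
```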

sku1d
NewUser
Posts: 7
Joined: 25 Mar 2014 13:04
Location: Bielefeld, Germany
Contact:
Status: Offline

Murphy and me

Post by sku1d »

One of the two hard drives failed the day before yesterday. It was brand new, but I got a

Code: Select all

Failed SMART usage Attribute: 184 End-to-End_Error.
I have contacted Seagate and they told me to replace the drive.

But this post is not about my personal experiences; it is about a strange error. I used ZFS and created a mirror configuration. Before I knew that this is silly, I used an SSD as a ZIL device with it (do not do that with a non-redundant drive, or you lose one of ZFS's major features, its resilience). I removed the failed drive before informing ZFS about it and shredded its contents so I could send it back to Seagate for a replacement.

When I powered up my file server, I found the ZFS pool in a 'degraded' state, which is not unusual while one drive is missing and the pool is incomplete. But for some reason, no drive was in that pool any more. I started reading the ZFS documentation from Oracle, which is horrible, because it lists commands but the reasoning is missing. That documentation has a chapter about Repairing a Damaged ZFS Configuration, which suggests exporting and re-importing the pool.

In one order or the other, I was able to export the configuration before or after I set the drive BBS priorities in the BIOS, because it finally dawned on me that my OS, together with that ZIL partition, was not on /dev/ada0 but on /dev/ada2. After removing the failed drive, that had changed, so that /dev/ada1 was now the SSD, on which ZFS could even find a ZFS partition (which did not fit together with the other one). That seems to have confused ZFS a bit. I believe that NAS4Free uses the device nodes when creating a ZFS configuration. To prevent such situations, it would have been better to use drive labels or disk GUIDs instead. You never know how bad your BIOS really is until you recognize it... :)

But the story is not over. The next problem was that after restoring the previous drive order, I could not re-import the zpool configuration, because one device was OFFLINE and the other had 'CORRUPTED DATA' on it. Reading forums took me hours, but I found somebody mentioning that in his case a reboot had done the job. This did not work with NAS4Free, so I put the drive into another computer for further investigation. There I ran FreeBSD 10 in a QEMU environment together with the 'corrupted data' drive and was, for some reason, able to re-import the ZFS pool with it. If you try this as well, be aware that the FreeBSD-*memstick image mounts / as read-only, so that ZFS cannot create a directory under /mnt/, which means you have to do a

Code: Select all

mount -uw /  # remount the root file system read/write
zpool import -fF zpool_name
# zpool import without arguments will hopefully show you the pool name
before re-importing or

Code: Select all

zfs mount -a
after the import (because the import works on the pool itself and gets written to the disks; it is not part of your local /etc tree or the like). After that I was able to reattach the disk to the NAS4Free server and it was recognized again, like magic ... ;)

edit: I have received a repaired drive directly from Seagate

Replacing a failed disk in a zfs mirror configuration
Normally you would not do what I did above, but this:

Code: Select all

zpool status # to find the name of the corrupt device
zpool detach <device-name> # to remove the device from the zpool
To rebuild the ZFS mirror, these steps seem sane:
  • tell ZFS that you have removed a drive (see above)
  • after receiving the new hard disk, do not turn it on immediately; wait a few hours so it can acclimatize, and do not cover the hole that compensates for differences in air pressure
  • install the drive and inspect it

    Code: Select all

    smartctl -a /dev/adaX  # you should see Power_On_Hours = 0
    smartctl -t conveyance /dev/adaX  # it will tell you how long the test takes
    # re-inspect the drive with the first command to view the result
  • give the drive a label, for example:

    Code: Select all

    glabel label mirror02 /dev/ada1
    
    this makes the drive available as /dev/label/mirror02
  • to attach the drive to your zfs pool and make it a mirror of the remaining disk do:

    Code: Select all

    zpool list # to look up your pool's name
    zpool status # to look up the name of the disk that should get mirrored to the new disk in the next step
    zpool attach <zpool-name> <existing-disk-in-pool> /dev/label/mirror02
    zpool status # will now show you something similar to this:
    	NAME                STATE     READ WRITE CKSUM
    	zfs_pool            ONLINE       0     0     0
    	  mirror-0          ONLINE       0     0     0
    	    ada0            ONLINE       0     0     0
    	    label/mirror02  ONLINE       0     0     0  (resilvering)
    	logs
    	  ada2s2            ONLINE       0     0     0
    
After these steps you can go for a coffee, because resilvering means that all data gets copied, which usually takes a while. After that, your RAID configuration is up and redundant again. man zpool says that the zpool attach command can also be used to attach more drives, and Wikipedia says that three disks have a significantly better failure rate than two (0.7% instead of 5% over 3 years). Using more than three drives is not economical any more, though.

remark: zpool attach only works with disks that have no data on them (unless you specify -f, which is not recommended).

Clear a disk drive

Code: Select all

dd if=/dev/zero of=/dev/adaX bs=1M
and check its progress by sending dd a signal (from another console):

Code: Select all

killall -INFO dd  # FreeBSD; on Linux use: killall -USR1 dd
- but be careful to get the device right: dd will overwrite it without asking!
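To try the command safely before pointing it at a real device, you can run it against a scratch file; /tmp/zerofill-demo.img here is just a hypothetical stand-in for /dev/adaX, and bs=1048576 is one MiB spelled out in bytes (FreeBSD dd expects lowercase 1m, GNU dd expects 1M, so the plain number works everywhere):

```shell
# Zero-fill a 4 MiB scratch file as a stand-in for a disk device.
dd if=/dev/zero of=/tmp/zerofill-demo.img bs=1048576 count=4 2>/dev/null
# Verify that every byte was written:
echo "bytes written: $(wc -c < /tmp/zerofill-demo.img)"
```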

Securely erase a disk
If you send your drive back to your supplier, you might want to destroy its contents first, which can also be done with dd or shred (which does essentially the same):

Code: Select all

dd if=/dev/urandom of=/dev/adaX bs=1M
This will overwrite your drive with random data. Do that 3 to 5 times to be on the safe side. Notice that I have used bs=1M, which is not required but speeds things up a bit.
Also notice that this does not make much sense on an SSD, because of wear levelling (and I do not know of a good solution there)
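The multi-pass overwrite can be sketched as a small loop; again, /tmp/erase-demo.img is a hypothetical stand-in for the real /dev/adaX:

```shell
# Overwrite a 1 MiB scratch file with random data in three passes.
TARGET=/tmp/erase-demo.img
dd if=/dev/zero of="$TARGET" bs=1048576 count=1 2>/dev/null  # create the "disk"
for pass in 1 2 3; do
    # conv=notrunc keeps the file size fixed, like writing to a raw device
    dd if=/dev/urandom of="$TARGET" bs=1048576 count=1 conv=notrunc 2>/dev/null
    echo "pass $pass complete"
done
```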

sku1d
NewUser
Posts: 7
Joined: 25 Mar 2014 13:04
Location: Bielefeld, Germany
Contact:
Status: Offline

Re: GA-J1900N-D3V: How it works!

Post by sku1d »

just some quick updates:

NFSv4
I found a more or less elegant solution to get NFSv4 to work without interfering too much with the NAS4Free implementation. Be aware that ZFS can take care of exporting NFS shares as well, and so will some parts of the NAS4Free implementation. Using all of those together will not work and renders your server unreachable (showmount -e will not list any shares any more). To get around this, remove all NFS shares from the web GUI and deactivate them with zfs unshare -a before using the following instructions.
  • use system-> advanced-> rc.conf and set

    Code: Select all

    mountd_flags="-e -r /etc/exports /etc/zfs/exports /etc/exports-v4"
    nfs_server_enable="YES"
    nfsv4_server_enable="YES"
  • Create a configuration file /etc/exports-v4:

    Code: Select all

    V4: /mnt -sec=sys -network 192.168.178.0 -mask 255.255.255.0
    /mnt/zfs_pool
As you can see, NFSv4 differs from previous versions in that it uses one root directory in which it expects all exports to live. The exports file looks different, too: you specify access rights for the root directory only, and they are inherited by all subdirectories. Open issue: the user mapping is not working as expected; I am currently trying to figure out why. If you want to join me: the problem is probably the configuration of nfsuserd, a daemon that translates user and group names. It must run on both sides, server and client, and is not always called by that name: it is called nfsmapid on OpenSolaris and rpc.idmapd on Linux.

php5
As I broke my web interface with an update (do not do a pkg install base!), I am now reinstalling all required packages by hand. I needed a

Code: Select all

ln -s /usr/local/lib/libxml2.so /usr/local/lib/libxml2.so.5
List of required PHP packages:
  • php5-pdo
  • php5-filter
  • php5-sqlite3
  • php5-pdo_sqlite
  • iconv
  • php5-iconv

User avatar
crowi
Forum Moderator
Posts: 1176
Joined: 21 Feb 2013 16:18
Location: Munich, Germany
Status: Offline

Re: GA-J1900N-D3V: How it works!

Post by crowi »

just one comment: your board does not support ECC RAM, which is strongly recommended for ZFS. Please refer to:
http://forums.freenas.org/index.php?thr ... zfs.15449/
better switch to software RAID with this setup.
NAS 1: Milchkuh: Asrock C2550D4I, Intel Avoton C2550 Quad-Core, 16GB DDR3 ECC, 5x3TB WD Red RaidZ1 +60 GB SSD for ZIL/L2ARC, APC-Back UPS 350 CS, NAS4Free 11.0.0.4.3460 embedded
NAS 2: Backup: HP N54L, 8 GB ECC RAM, 4x4 TB WD Red, RaidZ1, NAS4Free 11.0.0.4.3460 embedded
NAS 3: Office: HP N54L, 8 GB ECC RAM, 2x3 TB WD Red, ZFS Mirror, APC-Back UPS 350 CS NAS4Free 11.0.0.4.3460 embedded

sku1d
NewUser
Posts: 7
Joined: 25 Mar 2014 13:04
Location: Bielefeld, Germany
Contact:
Status: Offline

Re: GA-J1900N-D3V: How it works!

Post by sku1d »

Fair enough! I have to confess you are right. I have used this machine for four months now and have learned several lessons. It turns out that it is, after all, not so well suited to my needs.

Here are several items I am currently reconsidering for my next NAS system:
  • a CPU with on-chip encryption capabilities (or a faster CPU)
  • ensure there are enough SATA ports on board
  • a motherboard and memory with ECC support
  • using a three-disk mirror set instead of two, because failure then becomes much more unlikely, as several studies have shown (Google has published data concerning disk MTBF on their server farms; sorry, but I cannot find the link)
  • prepare the case for the day when road works are done in front of your house:
    • it should be solid, maybe a server rack
    • place it on a solid floor
    • decouple the disks as well as possible
  • think of an off-site backup strategy:
    • Blu-ray
    • USB 3.0 or eSATA (not widely used, but better suited for NAS4Free) and an external hard disk (which normally lives in a safe)
    • network-based off-site backup might require encryption again (@see: CPU)
  • choose parts that have been shown to be compatible with FreeBSD (if you want to use NAS4Free); rule of thumb: do not rely on the newest main board, CPU or whatever
  • consider whether a web-based NAS system is really what you want (or whether an SSH shell makes you feel comfortable enough)
  • consider making the NAS system a general-purpose server using virtualization, perhaps even running NAS4Free as a virtual machine
  • take care which network card the NAS system will have (this may have an impact on its performance)
-- decisions and further investigation pending

btw: my faulty hard disk is failing again after the RMA case. No luck whatsoever, but I have off-site backups :)
