Hello,
My log shows this kind of message:
13:27:15 kernel: Local NSM refuses to monitor localhost.localdomain
Oct 3 13:27:15 rpc.statd: Invalid hostname to sm_mon: localhost.localdomain
I haven't found much on the net apart from a post on FreeNAS.
Is this related to the NFS service?
What does it mean?
Thanks in advance
This is the old XigmaNAS forum in read-only mode;
it will be taken offline by the end of March 2021!
I would like to ask users and admins to rewrite/take over important posts from here into the new main forum!
It is not possible for us to export from here and import into the main forum!
[Resolved] Unknown message in log
Moderators: velivole18, ernie, mtiburs
- ernie
- Forum Moderator

- Posts: 1458
- Joined: 26 Aug 2012 19:09
- Location: France - Val d'Oise
- Status: Offline
[Resolved] Unknown message in log
NAS 1&2:
System: GA-6LXGH(BIOS: R01 04/30/2014) / 16 Go ECC
XigmaNAS 12.1.0.4 - Ingva (revision 7743) embedded
NAS1: Xeon E3 1241@3.5GHz, 2HDD@8To/mirror, 1SSD cache, Zlog on mirror, 1 UFS 300 Go
NAS2: G3220@3GHz, 2x3HDD@2To/strip+raidz1, 1SSD cache, Zlog on mirror
UPS: APC Back-UPS RS 900G
Case : Fractal Design XL R2
Extensions & services:
NAS1: OBI (Plex, BTSync, zrep, rclone, themes), nfs, smb, UPS,
NAS2: OBI (zrep (backup mode), themes)
- velivole18
- Forum Moderator

- Posts: 647
- Joined: 14 Jul 2012 20:23
- Location: France
- Status: Offline
Re: Unknown message in log
Hello,
I found this in a course in PDF format.
If it can help you...

Network Lock Manager
The NFS version 2 and 3 protocols use separate side-band protocols to manage file locking. On Linux
2.4 kernels, the lockd daemon manages file locks using the NLM (Network Lock Manager) protocol,
and the rpc.statd program manages lock recovery using the NSM (Network Status Monitor) protocol
to report server and client reboots. The lockd daemon runs in the kernel and is started automatically
when the kernel starts up at boot time. The rpc.statd program is a user-level process that is started
during system initialization from an init script. If rpc.statd is not able to contact servers when the
client starts up, stale locks will remain on the servers that can interfere with the normal operation of
applications.
The rpcinfo command on Linux can help determine whether these services have started and are
available. If rpc.statd is not running, use the chkconfig program to check that its init script (which
is usually /etc/init.d/nfslock) is enabled to run during system bootup. If the client host’s network
stack is not fully initialized when rpc.statd runs during system startup, rpc.statd may not send a
reboot notification to all servers. Some reasons network stack initialization can be delayed are slow NIC
devices, slow DHCP service, or CPU-intensive programs running during system startup. Network
problems external to the client host may also cause these symptoms.
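As a quick client-side check, the diagnostics described above might look like this (a sketch; `rpcinfo` ships with the rpcbind package, and the init-script/unit names vary by distribution):

```shell
# Check whether the NSM ("status", i.e. rpc.statd) and NLM ("nlockmgr",
# i.e. the kernel lockd) services are registered with the portmapper.
if command -v rpcinfo >/dev/null 2>&1; then
    rpcinfo -p localhost 2>/dev/null | grep -E 'status|nlockmgr' \
        || echo "status/nlockmgr not registered with the portmapper"
else
    echo "rpcinfo not installed (rpcbind package)"
fi

# On SysV-init systems, confirm the lock-recovery script runs at boot:
#   chkconfig --list nfslock
# On systemd-based systems the rough equivalent is:
#   systemctl status rpc-statd
```

Either branch prints something, so the check is safe to run on hosts where rpcbind is not installed.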
Because status monitoring requires bidirectional communication between server and client, some
firewall configurations can prevent lock recovery from working. Firewalls may also significantly restrict
communication between a client’s lock manager and a server. Network traces captured on the client
and server at the same time usually reveal a networking or firewall misconfiguration. Read the section
on using Linux NFS with firewalls carefully if you suspect a firewall is preventing lock management from
working.
Your client’s nodename determines how a filer recognizes file lock owners. You can easily find out what
your client’s nodename is using the uname -n or hostname command. (A system’s nodename is set
on Red Hat systems during boot using the HOSTNAME value set in /etc/sysconfig/network.) The
rpc.statd daemon determines which name to use by calling gethostbyname(3), or you can
specify it explicitly when starting rpc.statd using the “-n” option.
If the client’s nodename is fully qualified (that is, it contains the hostname and the domain name spelled
out), then rpc.statd must also use a fully qualified name. Likewise, if the nodename is unqualified,
then rpc.statd must use an unqualified name. If the two values do not match, lock recovery will not
work. Be sure the result of gethostbyname(3) matches the output of uname -n by adjusting your
client’s nodename in /etc/hosts, DNS, or your NIS databases.
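A minimal way to perform that comparison on the client (a sketch using `getent`, which queries the same resolver back ends as gethostbyname(3)):

```shell
# Compare the kernel nodename with the canonical name that the resolver
# returns for it; a mismatch here is what breaks lock recovery.
node=$(uname -n)
resolved=$(getent hosts "$node" | awk '{print $2; exit}')
echo "nodename: $node"
echo "resolved: ${resolved:-<not resolvable>}"
if [ "$node" = "$resolved" ]; then
    echo "match: lock recovery should identify this client consistently"
else
    echo "mismatch: fix /etc/hosts, DNS or NIS, or run rpc.statd -n $node"
fi
```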
Similarly, you should account for client hostname clashes in different subdomains by ensuring that you
always use a fully qualified domain name when setting up a client’s nodename during installation. With
multihomed hosts and aliased hostnames, you can use rpc.statd’s “-n” option to set unique
hostnames for each interface. The easiest approach is to use each client’s fully qualified domain name
as its nodename.
When working in high-availability database environments, test all worst-case scenarios (such as server
crash, client crash, application crash, network partition, and so on) to ensure lock recovery is
functioning correctly before you deploy your database in a production environment. Ideally, you should
examine network traces and the kernel log before, during, and after the locking/disaster/locking
recovery events.
The file system containing /var/lib/nfs must be persistent across client reboots. This directory is
where the rpc.statd program stores information about servers that are holding locks for the local
NFS client. A tmpfs file system, for instance, is not sufficient; the server will fail to be notified that it
must release any POSIX locks it might think your client is holding if it fails to shut down cleanly. That
can cause a deadlock the next time you try to access a file that was locked before the client restarted.
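To verify this on a client, one can check which file system type backs the statd state directory (a sketch; the path may differ on non-Linux systems, and `stat -f` here is the GNU coreutils form):

```shell
# Report the file system type backing rpc.statd's state directory.
dir=/var/lib/nfs
[ -d "$dir" ] || dir=/var/lib   # fall back if the directory is absent
fstype=$(stat -f -c %T "$dir")
echo "$dir is on: $fstype"
case "$fstype" in
    tmpfs|ramfs) echo "WARNING: statd state is lost on reboot" ;;
    *)           echo "OK: statd state persists across reboots" ;;
esac
```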
Locking files in NFS can affect the performance of your application. The NFS client assumes that if an
application locks and unlocks a file, it wishes to share that file’s data among cooperating applications
running on multiple clients. When an application locks a file, the NFS client purges any data it has
already cached for the file, forcing any read operation after the lock to go back to the server. When an
application unlocks a file, the NFS client flushes any writes that may have occurred while the file was
locked. In this way, the client greatly increases the probability that locking applications can see all
previous changes to the file.
However, this increased data cache coherency comes at the cost of decreased performance. In some
cases, all of the processes that share a file reside on the same client; thus aggressive cache purging
and flushing unnecessarily hamper the performance of the application. Solaris allows administrators to
disable the extra cache purging and flushing that occur when applications lock and unlock files with the
“llock” mount option. Note well that this is not the same as the “nolock” mount option in Linux. The
“nolock” mount option disables NLM calls by the client, but the client continues to use aggressive cache
purging and flushing. Essentially this is the opposite of what Solaris does when “llock” is in effect.
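For reference, the two superficially similar options compare like this (a config sketch; the server and export names are hypothetical):

```shell
# Solaris: keep NLM locking, but skip the cache purge/flush on lock/unlock.
#   mount -o llock filer:/export/db /mnt/db
# Linux: "nolock" disables NLM calls entirely while the client keeps its
# aggressive purge/flush behaviour -- effectively the opposite trade-off.
#   mount -t nfs -o nolock filer:/export/db /mnt/db
```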
velivole18
11.2.0.4 - Omnius (revision 6026) x64-embedded
111909 RSDT1411 AMD Athlon(tm) 64 Processor 4000+ 4096MiB RAM - HDD 2 x 6 To in ZFS mirroring + 2 x (2 x 4To in ZFS mirroring) - SSD 32Go - UPS EATON Ellipse MAX 1100.