Configuring and administering a CXFS cluster can be a complex task. In general, most problems can be solved by rebooting a node. However, the topics in this chapter may help you avoid rebooting:
Administrative tasks must be performed from a node that has the cluster_admin.sw.base software package installed. See the CXFS MultiOS for CXFS Client-Only Nodes: Installation and Configuration Guide for additional troubleshooting information.
To troubleshoot CXFS problems, do the following:
This section provides an overview of the tools required to troubleshoot CXFS:
Caution: Many of the commands listed are beyond the scope of this book and are provided here for quick reference only. See the other guides and man pages referenced for complete information before using these commands.
Understand the following physical storage tools:
To display the hardware inventory, use the hinv(1M) command:
# /sbin/hinv |
If the output is not what you expected, do a probe for devices and perform a SCSI bus reset, using the scsiha(1M) command:
# /usr/sbin/scsiha -pr bus_number |
To configure I/O devices, use the ioconfig(1M) command:
# /sbin/ioconfig -f /hw |
To show the physical volumes, use the xvm(1M) command:
# /sbin/xvm show -v phys/ |
See the XVM Volume Manager Administrator's Guide.
Understand the following cluster configuration tools:
To configure XVM volumes, use the xvm(1M) command:
# /sbin/xvm |
See the XVM Volume Manager Administrator's Guide.
To configure CXFS nodes and cluster, use either the GUI or the cmgr(1M) command:
# /usr/sbin/cxfsmgr |
See “GUI Features” in Chapter 4 and Chapter 4, “Reference to GUI Tasks for CXFS”.
The cmgr(1M) command line with prompting:
# /usr/cluster/bin/cmgr -p |
See “cmgr(1M) Overview” in Chapter 5, and Chapter 5, “Reference to cmgr Tasks for CXFS”.
To reinitialize the database, use the cdbreinit command:
# /usr/cluster/bin/cdbreinit |
Understand the following cluster control tools:
To start and stop the cluster services daemons, use the following commands:
# /etc/init.d/cluster start
# /etc/init.d/cluster stop
These commands are useful if you know that filesystems are available but are not indicated as such by the cluster status, or if cluster quorum is lost.
See the following:
To start and stop CXFS services, use the GUI or the following cmgr(1M) commands:
cmgr> start cx_services on node hostname for cluster clustername
cmgr> stop cx_services on node hostname for cluster clustername
Running this command on the metadata server will cause its filesystems to be recovered by another potential metadata server. See “Cluster Services Tasks with cmgr” in Chapter 5, and “Cluster Services Tasks with the GUI” in Chapter 4.
Note: In this release, relocation is disabled by default and recovery is supported only when using standby nodes. Relocation and recovery are fully implemented, but the number of associated problems prevents full support of these features in the current release. Although data integrity is not compromised, cluster node panics or hangs are likely to occur. Relocation and recovery will be fully supported in a future release when these issues are resolved.
To allow and revoke CXFS kernel membership on the local node, forcing recovery of the metadata server for the local node, use the GUI or the following cmgr(1M) commands:
cmgr> admin cxfs_start
cmgr> admin cxfs_stop
Wait until recovery is complete before issuing a subsequent admin cxfs_start. The local node cannot rejoin the CXFS kernel membership until its recovery is complete.
See the following:
Understand the following cluster/node status tools:
To show which cluster daemons are running, use the ps(1) command:
# /sbin/ps -ef | grep cluster |
See “Verify that the Cluster Daemons are Running” in Chapter 3.
To see cluster and filesystem status, use one of the following:
GUI:
# /usr/sbin/cxfsmgr |
cluster_status(1M) command:
# /usr/cluster/cmgr-scripts/cluster_status |
See “Check Cluster Status with cluster_status” in Chapter 9.
clconf_info command:
# /usr/cluster/bin/clconf_info |
cxfs_info(1M) command on a client-only node:
IRIX:
irix# /usr/cluster/bin/cxfs_info |
Solaris:
solaris# /usr/cxfs_cluster/bin/cxfs_info |
Windows:
C:\Program Files\CXFS\cxfs_info
To see the mounted filesystems, use the mount(1M) or df(1) commands:
# /sbin/mount
# /usr/sbin/df
Use the df(1) command to report the number of free disk blocks:
# /usr/sbin/df |
Use the xvm(1M) command to show volumes:
# /sbin/xvm show vol/ |
See the XVM Volume Manager Administrator's Guide.
Understand the following performance monitoring tools:
To monitor system activity, use the sar(1) command:
# /usr/bin/sar |
To monitor file system buffer cache activity, use the bufview(1) command:
# /usr/sbin/bufview |
Note: Do not use bufview interactively on a busy node; run it in batch mode.
To monitor operating system activity data, use the osview(1) command:
# /usr/sbin/osview |
To monitor the statistics for an XVM volume, use the xvm(1M) command:
# /sbin/xvm change stat on {concatname|stripename|physname} |
See the XVM Volume Manager Administrator's Guide.
To monitor system performance, use Performance Co-Pilot. See the Performance Co-Pilot for IRIX Advanced User's and Administrator's Guide, the Performance Co-Pilot Programmer's Guide , and the pmie(1) and pmieconf(1) man pages.
Understand the following kernel status tools (this may require help from SGI service personnel):
To determine kernel status, use the icrash(1M) commands:
# /usr/bin/icrash |
sthread | grep cmsd to determine the CXFS kernel membership state. You may see the following in the output:
cms_follower() indicates that the node is waiting for another node to create the CXFS kernel membership (the leader)
cms_leader() indicates that the node is leading the CXFS kernel membership creation
cms_declare_membership() indicates that the node is ready to declare the CXFS kernel membership but is waiting on resets
cms_nascent() indicates that the node has not joined the cluster since starting
cms_shutdown() indicates that the node is shutting down and is not in the CXFS kernel membership
cms_stable() indicates that the CXFS kernel membership is formed and stable
tcp_channels to determine the status of the connection with other nodes
-t -a -w filename to trace for CXFS
-t cms_thread to trace one of the above threads
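These icrash subcommands can also be run from a shell prompt, without entering an interactive icrash session, by using the -e option (the same invocation style is used later in this chapter). The tcp_channels example here is only an illustration; see the icrash(1M) man page for exact subcommand syntax:

# icrash -e 'sthread | grep cmsd'
# icrash -e 'tcp_channels'    # illustration; verify subcommand syntax in icrash(1M)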
To invoke internal kernel routines that provide useful debugging information, use the idbg(1M) command:
# /usr/sbin/idbg |
Understand the following log files:
Administration node log files:
IRIX client-only node log files:
For information about client-only nodes running other operating systems, see CXFS MultiOS for CXFS Client-Only Nodes: Installation and Configuration Guide.
Before reporting a problem to SGI, you should use the cxfsdump(1M) command to gather configuration information about the CXFS cluster, such as network interfaces, CXFS registry information, I/O, and cluster database contents. This will allow SGI support to solve the problem more quickly.
Note: The cxfsdump command requires access across IRIX and Solaris nodes in the cluster via the rcp(1M) and rsh(1M) commands. Because these commands are not provided on Windows nodes, the cxfsdump command must be run manually on each Windows node.
You should run cxfsdump from a CXFS administration node in the cluster:
# /usr/cluster/bin/cxfsdump |
Output for IRIX and Solaris systems will be placed in a file in the /var/cluster/cxfsdump-data directory on the CXFS administration node on which the cxfsdump command was run. The cxfsdump command will report the name and location of the file when it is finished.
If your cluster contains Windows nodes, you must run the command manually on each Windows node.
For example:
adminnode# cxfsdump
Detecting cluster configuration
Executing CXFSDUMP on CLUSTER testcluster NODE o200a
Gathering cluster information...
Determining OS level......
Getting versions info....
Obtaining CXFS database...
Checking for tie-breakers etc...
Obtaining hardware inventory...
Grabbing /etc/hosts.....
Grabbing /etc/resolv.conf...
Grabbing /ets/nsswitch.conf...
Obtaining physvol information using XVM...
ioctl() to xvm api node failed: Invalid argument
Could not get xvm subsystem info: xvmlib_execute_ioctl: system call failed.
Obtaining Volume topology information using XVM...
ioctl() to xvm api node failed: Invalid argument
Could not get xvm subsystem info: xvmlib_execute_ioctl: system call failed.
Copying failover configuration and scsifo paths ...
Gathering network information...
Checking for any installed Patches..
Monitoring file system buffer cache for 3 minutes...
Running Systune ...
Obtaining modified system tunable parameters...
Creating ICRASH CMD file...
Executing ICRASH commands...
Copying CXFS logs...
Copying /var/cluster/ha/log/cad_log...
Copying /var/cluster/ha/log/clconfd_o200a...
Copying /var/cluster/ha/log/cli_o200a...
Copying /var/cluster/ha/log/cmond_log...
Copying /var/cluster/ha/log/crsd_o200a...
Copying /var/cluster/ha/log/fs2d_log...
Copying /var/cluster/ha/log/fs2d_log.old...
Copying SYSLOG...
Distributing /usr/cluster/bin/cxfsdump.pl to node o200c ...
Distributing /usr/cluster/bin/cxfsdump.pl to node o200b ...
Creating the output directory : /var/cluster/cxfsdump-data
Gathering node information for the cluster testcluster ...
Running RSH to node o200c...
Running RSH to node o200b...
Waiting for other cluster nodes to gather data...
FINAL CXFSDUMP OUTPUT IN /var/cluster/cxfsdump-data/testcluster_cxfsdump20020903.tar.gz
On Windows systems, cxfsdump creates a directory called CXFSDump in the same directory where the passwd file is kept. The cxfsdump command will report the location where the data is stored when it is complete. For example:
FINAL CXFSDUMP output in output_filename |
When you encounter a problem, identify the cluster status by answering the following questions:
Are the cluster daemons running? See “Verify that the Cluster Daemons are Running” in Chapter 3.
Is the cluster state consistent on each node? Run the clconf_info command on each CXFS administration node and compare.
Which nodes are in the CXFS kernel membership? See “Check Cluster Status with cluster_status” in Chapter 9, “Check Cluster Status with cmgr” in Chapter 9, and the /var/adm/SYSLOG file.
Which nodes are in the cluster database (fs2d ) membership? See the /var/cluster/ha/log/fs2d_log files on each CXFS administration node.
Is the database consistent on all nodes? Determine this by logging in to each CXFS administration node and examining the /var/cluster/ha/log/fs2d_log file and the database checksum.
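One way to spot-check database consistency is to dump the cluster database on each CXFS administration node with cdbutil (the same dump command is shown later in this chapter) and compare the dumps; the filenames here are only illustrative:

# /usr/cluster/bin/cdbutil -c 'gettree #' > /tmp/cdb.`hostname`

After copying the dump files to a single node, compare them with diff(1), for example:

# diff /tmp/cdb.cxfs6 /tmp/cdb.cxfs7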
Log onto the various CXFS client nodes or use the GUI view area display with details showing to answer the following:
On the metadata server, use the clconf_info command.
Is the metadata server in the process of recovery? Use the icrash(1M) command to search for messages and look at /var/adm/SYSLOG; see “Kernel Status Tools”. Messages such as the following indicate the recovery status:
Are there any long running (>20 seconds) kernel messages? Use the icrash mesglist command to examine the situation. For example:
>> mesglist Cell:7
THREAD ADDR          MSG ID   TYPE  CELL  MESSAGE                            Time(Secs)
==================   =======  ====  ====  =================================  ==========
0xa8000000d60a4800   5db537   Rcv   0     I_dcvn_recall                      0
0xa8000000d60a4800   5db541   Snt   0     I_dsvn_notfound                    0
0xa80000188fc51800   3b9b4f   Snt   0     I_dsxvn_inode_update               17:48:58
If filesystems are not mounting, do they appear online in XVM? You can use the following xvm(1M) command:
xvm:cluster> show vol/* |
To locate the problem, do the following:
Examine the log files (see “Log Files”):
Search for errors in all log files. See “Status in Log Files” in Chapter 9. Examine all messages within the timeframe in question.
Trace errors to the source. Try to find an event that triggered the error.
Use the icrash commands. See “Kernel Status Tools”.
Use detailed information from the view area in the GUI to drill down to specific configuration information.
Run the Test Connectivity task in the GUI. See “Test Node Connectivity with the GUI” in Chapter 4.
Determine how the nodes of the cluster see the current CXFS kernel membership by entering the following command on each CXFS administration node:
# /usr/cluster/bin/clconf_info |
This command displays the following fields:
Node name
Node ID
Status (up or down)
Age (not useful; ignore this field)
Incarnation (not useful; ignore this field)
Cell ID, which is a number that is dynamically allocated by the CXFS software when you add a node to a cluster (the user does not define a cell ID number). To see the cell ID, use the clconf_info command.
For example:
# /usr/cluster/bin/clconf_info
Membership since Fri Sep 10 08:57:36 1999
Node       NodeId   Status   Age   Incarnation   CellId
cxfs6      1001     UP       1     7             0
cxfs7      1002     UP       0     0             1
cxfs8      1003     UP       0     0             2
2 CXFS FileSystems
/dev/xvm/test1 on /mnts/test1  disabled  server 0 clients
/dev/xvm/test2 on /mnts/test2  disabled  server 0 clients
Check /var/adm/SYSLOG on each CXFS administration node to make sure the CXFS filesystems have been successfully mounted or unmounted. If a mount/unmount fails, the error will be logged and the operation will be retried after a short delay.
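For example, to review the most recent mount-related messages (the grep pattern is only illustrative; adjust it as needed):

# grep -i mount /var/adm/SYSLOG | tail -20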
Use the sar(1) system activity reporter to show the disks that are active. For example, the following command shows the disks that are active, puts the disk name at the end of the line, and polls every second for 10 seconds:
# sar -DF 1 10 |
For more information, see the sar(1) man page.
Use the bufview(1) filesystem buffer cache activity monitor to view the buffers that are in use. Within bufview, you can use the help subcommand to learn about available subcommands, such as the f subcommand to limit the display to only those with the specified flag. For example, to display the in-use (busy) buffers:
# bufview
f
Buffer flags to display bsy
For more information, see the bufview(1) man page.
Use the icrash(1M) IRIX system crash analysis utility. For more information, see the icrash(1M) man page.
Get a dump of the cluster database. You can extract such a dump with the following command:
# /usr/cluster/bin/cdbutil -c 'gettree #' > dumpfile |
This section covers the following:
Ensure that you follow the instructions in “Preliminary Cluster Configuration Steps” in Chapter 3, before configuring the cluster.
Before you start configuring another new cluster, make sure no nodes are still in a CXFS membership from a previous cluster. Enter the following:
# icrash -e 'sthread | grep cmsd' |
If the output shows a cmsd kernel thread, force a CXFS shutdown by entering the following:
# /usr/cluster/bin/cmgr -p
cmgr> admin cxfs_stop
Then check for a cmsd kernel thread again:
# icrash -e 'sthread | grep cmsd' |
After waiting a few moments, if the cmsd kernel thread still exists, you must reboot the machine or leave it out of the new cluster definition. It will not be able to join a new cluster in this state and it may prevent the rest of the cluster from forming a new CXFS membership.
The cluster database membership quorum must remain stable during the configuration process. If possible, use multiple windows to display the fs2d_log file for each CXFS administration node while performing configuration tasks. Enter the following:
# tail -f /var/cluster/ha/log/fs2d_log |
Check the member count when it prints new quorums. Under normal circumstances, it should print a few messages when adding or deleting nodes, but it should stop within a few seconds after a new quorum is adopted.
If not enough machines respond, there will not be a quorum. In this case, the database will not be propagated.
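If you cannot watch the log in real time, you can scan it after the fact for recent quorum messages; the exact message text varies by release, so the pattern here is only a starting point:

# grep -i quorum /var/cluster/ha/log/fs2d_log | tail -10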
If you detect cluster database membership quorum problems, fix them before making other changes to the database. Try restarting the cluster infrastructure daemons on the node that does not have the correct cluster database membership quorum, or on all nodes at the same time. Enter the following:
# /etc/init.d/cluster stop
# /etc/init.d/cluster start
Please provide the fs2d log files when reporting a cluster database membership quorum problem.
Be consistent in configuration files for nodes across the pool, and when configuring networks. Use the same names in the same order. See “Configure System Files” in Chapter 2.
Use the appropriate node function definition:
Use an odd number of server-capable nodes and an odd number of CXFS administration nodes for stability.
Make unstable nodes CXFS client-only nodes.
The GUI provides a convenient display of a cluster and its components through the view area. You should use it to see your progress and to avoid adding or removing nodes too quickly. After defining a node, you should wait for it to appear in the view area before adding another node. After defining a cluster, you should wait for it to appear before you add nodes to it. If you make changes too quickly, errors can occur.
For more information, see “Starting the GUI” in Chapter 4.
When running the GUI on IRIX, do not move to another IRIX desktop while GUI action is taking place; this can cause the GUI to crash.
You should not change the names of the log files. If you change the names of the log files, errors can occur.
Periodically, you should rotate log files to avoid filling your disk space; see “Log File Management” in Chapter 6. If you are having problems with disk space, you may want to choose a less verbose log level; see “Configure Log Groups with the GUI” in Chapter 4, or “Configure Log Groups with cmgr” in Chapter 5.
When accessing the Brocade Web Tools V2.0 through Netscape on an IRIX node, you must first enter one of the following before starting Netscape:
For sh(1) or ksh(1) shells:
$ NOJIT=1; export NOJIT |
For csh(1) shell:
% setenv NOJIT 1 |
If this is not done, Netscape will crash with a core dump.
This section discusses performance problems with unwritten extent tracking and exclusive write tokens.
When you define a filesystem, you can specify whether unwritten extent tracking is on (unwritten=1) or off (unwritten=0); it is on by default.
In most cases, the use of unwritten extent tracking does not affect performance and you should use the default to provide better security.
However, unwritten extent tracking can affect performance when both of the following are true:
A file has been preallocated
These preallocated extents are written for the first time with records smaller than 4MB
For optimal performance with CXFS when both of these conditions are true, it may be necessary to build filesystems with unwritten=0 (off).
Note: There are security issues with using unwritten=0. For more information, see IRIX Admin: Disks and Filesystems.
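For example, a filesystem might be built with unwritten extent tracking turned off as follows. This is only a sketch: the volume name is illustrative and the option syntax should be verified against the mkfs_xfs(1M) man page for your release:

# mkfs -d unwritten=0 /dev/xvm/test1    # volume name illustrative; verify option syntax in mkfs_xfs(1M)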
For proper performance, CXFS should not obtain exclusive write tokens. Therefore, use the following guidelines:
Preallocate the file.
Set the size of the file to the maximum size and do not allow it to be changed, such as through truncation.
Do not append to the file. (That is, O_APPEND is not true on the open.)
Do not mark an extent as written.
Do not allow the application to do continual preallocation calls.
If the guidelines are followed and there are still performance problems, you may find useful information by running the icrash stat command before, halfway through, and after running the MPI job. For more information, see the icrash(1M) man page.
The default root crontab file contains the following entries (line breaks inserted here for readability):
0 5 * * * find / -local -type f '(' -name core -o -name dead.letter ')' -atime +7 -mtime +7 -exec rm -f '{}' ';'
0 3 * * 0 if test -x /usr/etc/fsr; then (cd /usr/tmp; /usr/etc/fsr) fi
The first entry executes a find(1) command that looks for and removes all files with the name core or dead.letter that have not been accessed in the past seven days.
The second entry executes an fsr(1M) command that improves the organization of mounted filesystems.
The find command will be run nightly on all local filesystems. Because CXFS filesystems are considered local on all nodes in the cluster, the nodes may generate excessive filesystem activity if they try to access the same filesystems simultaneously. Therefore, you may wish to use the following sequence to disable or modify the find crontab entries on all the CXFS administration nodes except for one:
Log in as root.
Define your editor of choice, such as vi (the csh syntax is shown here; an sh/ksh equivalent follows this procedure):
# setenv EDITOR vi |
Edit the crontab file:
# crontab -e |
Comment out or delete the find line.
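The setenv syntax shown above applies to csh; under sh or ksh, the equivalent is:

# EDITOR=vi; export EDITOR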
The fsr command can only be run on the metadata server, so it is not harmful to leave it in the crontab file for CXFS clients, but it will not be executed.
To avoid a loss of connectivity between the metadata server and the CXFS clients, do not oversubscribe the metadata server or the private network connecting the nodes in the cluster. Avoid unnecessary metadata traffic.
If the amount of free memory is insufficient, a node may experience delays in heartbeating and as a result will be kicked out of the CXFS membership. To observe the amount of free memory in your system, use the osview(1) tool.
See also “Out of Logical Swap Space”.
If you want to redefine a node ID or the cluster ID, you must first reboot. The problem is that the kernel still has the old values, which prohibits a CXFS membership from forming. However, if you perform a reboot first, it will clear the original values and you can then redefine the node or cluster ID.
Therefore, if you use cdbreinit on a node to recreate the cluster database, you must reboot it before changing the node IDs or the cluster ID. See “Recreating the Cluster Database”.
If a node is going to be down for a while, remove it from the cluster and the pool to avoid cluster database membership and CXFS membership quorum problems. See the following sections:
If you perform a forced shutdown on a node, you must restart CXFS on that node before it can return to the cluster. If you do this while the database still shows that the node is in a cluster and is activated, the node will restart the CXFS membership daemon. Therefore, you may want to do this after resetting the database or after stopping CXFS services.
For example, enter the following on the node you wish to start:
# /usr/cluster/bin/cmgr -p
cmgr> stop cx_services on node localnode
cmgr> admin cxfs_start
See also “Forced CXFS Shutdown: Revoke Membership of Local Node” in Chapter 6.
When serial hardware reset is enabled, CXFS requires a reset successful message before it moves the metadata server. Therefore, if you have the serial hardware reset capability enabled and you must remove the reset lines for some reason, you must also disable the reset capability. See “Modify a Node Definition with the GUI” in Chapter 4, or “Modify a Node with cmgr” in Chapter 5.
Note: The reset capability is mandatory to ensure data integrity for clusters with only two server-capable nodes, and it is highly recommended for all server-capable nodes. Larger clusters should have an odd number of server-capable and CXFS administration nodes. See “Cluster Environment” in Chapter 1.
CXFS filesystems are really clustered XFS filesystems; therefore, in case of a file system corruption, you can use the xfs_check(1M) and xfs_repair(1M) commands. However, you must first ensure that you have an actual case of data corruption and retain valuable metadata information by replaying the XFS logs before running xfs_repair.
Caution: If you run xfs_repair without first replaying the XFS logs, you may introduce data corruption.
You should only run xfs_repair in case of an actual filesystem corruption; forced filesystem shutdown messages do not necessarily imply that xfs_repair should be run. Following is an example of a message that does indicate an XFS file corruption:
XFS read error in file system meta-data block 106412416 |
When a filesystem is forcibly shut down, the log is not empty -- it contains valuable metadata. You must replay it by mounting the filesystem. The log is only empty if the filesystem is unmounted cleanly (that is, not a forced shutdown, not a crash). You can use the following command line to see an example of the transactions captured in the log file:
# xfs_logprint -t device |
If you run xfs_repair before mounting the filesystem, xfs_repair will delete all of this valuable metadata.
If you think you have a filesystem with real corruption, do this:
Mount the device in order to replay the log:
# mount device any_mount_point |
Unmount the filesystem:
# umount device
Check the filesystem:
# xfs_check device |
View the repairs that could be made, using xfs_repair in no-modify mode:
# xfs_repair -n device |
If you are certain that the repairs are appropriate, complete them:
# xfs_repair device |
For more information, see the IRIX Admin: Disks and Filesystems.
The following are common problems and solutions.
If you cannot access a filesystem, check the following:
Is the filesystem enabled? Check the GUI and the clconf_info and cluster_status commands.
Were there mount errors?
If the GUI will not run, check the following:
Are the cluster daemons running? See “Verify that the Cluster Daemons are Running” in Chapter 3.
Are the tcpmux and tcpmux/sgi_sysadm services enabled in the /etc/inetd.conf file? (An example check follows this list.)
Are the inetd or tcp wrappers interfering? This may be indicated by connection refused or login failed messages.
Are you connecting to a CXFS administration node? The cxfsmgr(1) command can only be executed on a CXFS administration node. The GUI may be run from another system via the Web if you connect the GUI to a CXFS administration node.
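To check the inetd entries mentioned in the list above, you can search /etc/inetd.conf directly; the egrep pattern is only illustrative:

# egrep 'tcpmux|sgi_sysadm' /etc/inetd.conf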
If the log files are consuming too much disk space, you should rotate them; see “Log File Management” in Chapter 6. You may also want to consider choosing a less-verbose log level; see the following:
If you are unable to define a node, it may be that there are hostname resolution problems. See “Hostname Resolution: /etc/sys_id, /etc/hosts, /etc/nsswitch.conf” in Chapter 2.
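As a quick check, verify that the hostname in /etc/sys_id resolves consistently on each node; this example only inspects the local files named above:

# cat /etc/sys_id
# grep `cat /etc/sys_id` /etc/hosts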
The following may cause the system to hang:
Overrun disk drives.
Heartbeat was lost. In this case, you will see a message that mentions withdrawal of the node.
As a last resort, do a non-maskable interrupt (NMI) of the system and contact SGI. (The NMI tells the kernel to panic the node so that an image of memory is saved and can be analyzed later.) For more information, see the owner's guide for the node.
Make vmcore.#.comp, unix.#, /var/adm/SYSLOG, and cluster log files available.
If a node is detected in /var/adm/SYSLOG but it never receives a Membership delivered message, it is likely that there is a network problem. See “Configure System Files” in Chapter 2.
The Membership delivered messages in the /var/adm/SYSLOG file include a list of cell IDs for nodes that are members in the new CXFS membership.
Following each cell ID is a number, the membership version, which indicates the number of times the membership has changed since the node joined the membership.
If the Membership delivered messages are appearing frequently in /var/adm/SYSLOG, this may indicate a network problem:
Nodes that are stable and remain in the membership will have a large membership version number.
Nodes that are having problems will be missing from the messages or have a small membership version number.
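To review how often these messages have appeared, you can search the log directly. For example:

# grep "Membership delivered" /var/adm/SYSLOG | tail -10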
If you cannot log in to a CXFS administration node, you can use one of the following commands, assuming the node you are on is listed in the other nodes' .rhosts files:
# rsh hostname ksh -i
# rsh hostname csh -i
The following message indicates a problem (output lines wrapped here for readability):
ALERT: I/O error in filesystem ("/mnt") meta-data dev 0xbd block 0x41df03 ("xlog_iodone")
ALERT: b_error 0 b_bcount 32768 b_resid 0
NOTICE: xfs_force_shutdown(/mnt,0x2) called from line 966 of file ../fs/xfs/xfs_log.c. Return address = 0xc0000000008626e8
ALERT: I/O Error Detected. Shutting down filesystem: /mnt
ALERT: Please umount the filesystem, and rectify the problem(s)
You can fix this problem using xfs_repair only if there is no metadata in the XFS log. See “Appropriate Use of xfs_repair”, for the appropriate procedure.
I/O errors can also appear if the node is unable to access the storage. This can happen for several reasons:
The node has been physically disconnected from the SAN
A filesystem shutdown due to loss of membership
A filesystem shutdown due to loss of the metadata server
The node has been fenced out of the SAN
If you have defined filesystems and then rename your cluster (by deleting the old cluster and defining a new cluster), CXFS will not be able to mount the existing filesystems. This happens because the clustered XVM volume on which your CXFS filesystem resides is not accessible to the new cluster, and the volumes are therefore considered foreign.
In order to mount the filesystem on the new cluster, you must use the XVM steal command to bring the clustered XVM volume into the domain of the new cluster. For more information, see the XVM Volume Manager Administrator's Guide .
If you create new slices on a previously sliced disk that have the same starting blocks as slices already existing on the disk, and if the old slices had filesystems, then the GUI will display those old filesystems even though they may not be valid.
A client_timeout value is set by the clconfd and cxfs_client daemons. The value depends on the order in which filesystems are mounted on the various nodes. The value adapts to help ensure that all filesystems get mounted in a timely manner. The value has no effect on the filesystem operation after it is mounted.
The value for client_timeout may differ among nodes, and therefore having multiple values is not really a problem.
The retry value is forced to be 0 and you cannot change it.
Caution: You should not attempt to change the client_timeout value. Improperly setting the values for client_timeout and retry could cause the mount command to keep waiting for a server and could delay the availability of the CXFS filesystems.
This section describes some of the error messages you may see. In general, the example messages are listed first by type and then in alphabetical order, starting with the message identifier or text.
Sections are as follows:
You can expect to see the following messages. They are normal and do not indicate a problem.
If the clconfd daemon exits immediately after it starts up, it means that the CXFS license has not been properly installed. For information about the associated error message, see “License Error”.
You must install the license on each node before you can use CXFS. If you increase the number of CPUs in your system, you may need a new license. See Chapter 2, “IRIX Systems: Installation of CXFS Software and System Preparation”.
The following example /var/adm/SYSLOG message indicates an oversubscribed system:
ALERT: inetd [164] - out of logical swap space during fork while allocating uarea - see swap(1M)
Availsmem 8207 availrmem 427 rlx freemem 10, real freemem 9
See “Use System Capacity Wisely”.
The cluster daemons could also be leaking memory in this case. You may need to restart them:
# /etc/init.d/cluster restart |
Mar 1 15:06:18 5A:nt-test-07 unix: NOTICE: Physvol (name cip4) has no CLUSTER name id: set to "" |
This message means the following:
The disk labeled as an XVM physvol was probably labeled under IRIX 6.5.6f and the system was subsequently upgraded to a newer version that uses a new version of XVM label format. This does not indicate a problem.
The cluster name had not yet been set when XVM encountered these disks with an XVM cluster physvol label on them. This is normal output when XVM performs the initial scan of the disk inventory, before node/cluster initialization has completed on this host.
The message indicates that XVM sees a disk with an XVM cluster physvol label, but that this node has not yet joined a CXFS membership; therefore, the cluster name is empty ("").
When a node or cluster initializes, XVM rescans the disk inventory, searching for XVM cluster physvol labels. At that point, the cluster name should be set for this host. An empty cluster name after node/cluster initialization indicates a problem with cluster initialization.
The first time any configuration change is made to any XVM element on this disk, the label will be updated and converted to the new label format, and these notices will go away.
For more information about XVM, see the XVM Volume Manager Administrator's Guide.
The following message in /var/adm/SYSLOG indicates a kernel-triggered revocation of CXFS membership:
Membership lost - withdrawing from cluster |
You must actively allow CXFS membership for the local node in this situation. See “Allow Membership of the Local Node with the GUI” in Chapter 4, or “Allow Membership of the Local Node with cmgr” in Chapter 5.
If you see the following message in the /var/cluster/ha/log/clconf_hostname logfile, it means that the CXFS license was not properly installed:
CXFS not properly licensed for this host. Run '/usr/cluster/bin/cxfslicense -d' for detailed failure information. |
If you do not have the CXFS license properly installed, you will see the following error on the console when trying to run CXFS:
Cluster services:CXFS not properly licensed for this host. Run '/usr/cluster/bin/cxfslicense -d' for detailed failure information. After fixing the license, please run '/etc/init.d/cluster restart'. |
An error such as the following example will appear in the SYSLOG file:
Mar 4 12:58:05 6X:typhoon-q32 crsd[533]: <<CI> N crs 0> Crsd restarted.
Mar 4 12:58:05 6X:typhoon-q32 clconfd[537]: <<CI> N clconf 0>
Mar 4 12:58:05 5B:typhoon-q32 CLCONFD failed the CXFS license check.Use the
Mar 4 12:58:05 5B:typhoon-q32 '/usr/cluster/bin/cxfslicense -d'
Mar 4 12:58:05 5B:typhoon-q32 command to diagnose the license problem.
If the clconfd daemon dies right after it starts up, this error is present.
You must install the license on each node before you can use CXFS. See Chapter 2, “IRIX Systems: Installation of CXFS Software and System Preparation”.
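As the messages themselves indicate, you can obtain the detailed reason for the license failure by running the diagnostic command they name:

# /usr/cluster/bin/cxfslicense -d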
If you have conflicting cluster ID numbers at your site, you will see errors such as the following:
WARNING: mtcp ignoring alive message from 1 with wrong ip addr 128.162.89.34
WARNING: mtcp ignoring alive message from 0 with wrong ip addr 128.162.89.33
A cluster ID number must be unique. To solve this problem, make the cluster ID numbers unique.
This error can occur if you redefine the cluster configuration and start CXFS services while some nodes have stale information from a previous configuration.
To solve the problem, first try the steps in “Eliminate a Residual Cluster”. If that does not work, reboot the nodes that have stale information. You can determine which nodes have stale information as follows: stale nodes will complain about all of the nodes, but the up-to-date nodes will complain only about the stale nodes. The /var/cluster/ha/log/clconfd_hostname log file on the stale nodes will also show error messages about SGI_CMS_CONFIG_ID failures.
If there are too many error messages to recognize the stale nodes, reboot every node.
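To scan a node's clconfd log for these failures, you can search it directly; the wildcard is used here because the log filename includes the hostname:

# grep SGI_CMS_CONFIG_ID /var/cluster/ha/log/clconfd_*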
CXFS logs both normal operations and critical errors to /var/adm/SYSLOG, as well as to individual log files for each log group.
In general, errors in the /var/adm/SYSLOG file take the following form:
timestamp priority_&_facility : hostname process[ID]: <internal_info> CODE message_text |
For example:
Sep 7 11:12:59 6X:cxfs0 cli[5830]: <<CI> E clconf 0> CI_IPCERR_NOSERVER, clconf ipc: ipcclnt_connect() failed, file /var/cluster/ha/comm/clconfd-ipc_cxfs0
Table 10-1 shows the parts of the preceding SYSLOG message.
Table 10-1. SYSLOG Error Message Format
Content | Part | Meaning |
---|---|---|
Sep 7 11:12:59 | Time Stamp | September 7 at 11:12 AM. |
6X | Facility and level | 6X indicates an informational message. See syslogd(1M) and the file /usr/include/sys/syslog.h . |
cxfs0 | Node name | The node whose logical name is cxfs0 is the node on which the process is running. |
cli[5830] | Process[ID] | The process sending the message is cli and its process ID number is 5830. |
<<CI> E clconf 0 | Internal information: message source, logging subsystem, and thread ID | The message is from the cluster infrastructure (CI). E indicates that it is an error. The clconf command is the logging subsystem. 0 indicates that it is not multithreaded. |
CI_IPCERR_NOSERVER, clconf ipc | Internal error code | Information about the type of message; in this case, a message indicating that the server is missing. No error code is printed if it is a normal message. |
ipcclnt_connect() failed, file /var/cluster/ha/comm/clconfd-ipc_cxfs0 | Message text | A connection failed for the clconfd-ipc_cxfs0 file. |
The following sections present only the message identifiers and text.
For all cli messages, only the last message from the command (which begins with CLI private command failed) is meaningful. You can ignore all other cli messages.
The following are example errors from the cli daemon.
CI_ERR_INVAL, CLI private command: failed (Machine (cxfs0) exists.) | |
You tried to create a new node definition with logical name cxfs0; however, that node name already exists in the cluster database. Choose a different name. | |
CI_ERR_INVAL, CLI private command: failed (IP address (128.162.89.33) specified for control network of machine (cxfs0) is assigned to control network of machine (cxfs0).)
You specified the same IP address for two different control networks of node cxfs0. Use a different IP address. | |
CI_FAILURE, CLI private command: failed (Unable to validate hostname of machine (cxfs0) being modified.) | |
The DNS resolution of the cxfs0 name failed. To solve this problem, add an entry for cxfs0 in /etc/hosts on all nodes. | |
CI_IPCERR_NOPULSE, CLI private command: failed (Cluster state is UNKNOWN.) | |
The cluster state is UNKNOWN and the command could not complete. This is a transient error. However, if it persists, stop and restart the cluster daemons; see “Stopping and Restarting Cluster Infrastructure Daemons”. |
The following errors are sent by the clconfd daemon.
CI_CONFERR_NOTFOUND, Could not access root node. | |
The cluster database is either non-existent or corrupted, or the database daemons are not responding. Check that the database does exist. If you get an error or the dump is empty, re-create the database; for more information, see “Clearing the Cluster Database”. If the database exists, restart the cluster daemons; see “Stopping and Restarting Cluster Infrastructure Daemons”. | |
CI_ERR_NOTFOUND, Could not get Cellular status for local machine (cxfs1) | |
The database is corrupted or cannot be accessed. Same actions as above. | |
CI_FAILURE, Call to open cdb for logging configuration when it is already open. | |
This indicates a software problem requiring you to restart the daemons; see “Stopping and Restarting Cluster Infrastructure Daemons”. | |
CI_FAILURE, Cell 1 Machine cxfs1: server has no information about a machine that has reset capabilities for this machine | |
A reset mechanism was not provided for this node. The node will not be automatically reset if it fails. To ensure proper failure handling, use the GUI or the cmgr(1M) command to modify the node's definition and add reset information. See “Define a Node with the GUI” in Chapter 4, or “Modify a Node with cmgr” in Chapter 5. | |
CI_FAILURE, CMD(/sbin/umount -k /dev/xvm/bob1): exited with status 1 (0x1) | |
An error occurred when trying to unmount the /dev/xvm/bob1 filesystem. Messages from the umount(1M) command are usually issued just before this message and provide more information about the reason for the failure. | |
CI_FAILURE, CMD(/sbin/clmount -o 'server_list=(cxfs0,cxfs1)' /dev/xvm/bob2 /bob2): exited with status 1 (0x1) | |
An error occurred when trying to mount the /dev/xvm/bob2 filesystem. Messages from the mount (1M) command are usually issued just before this message and provide more information about the reason of the failure. | |
CI_FAILURE, CMD(/sbin/clmount -o 'server_list=(cxfs2,cxfs0)' /dev/xvm/stripe4 /xvm/stripe4): exited with status 1 (0x1) | |
You have tried to mount a filesystem without first running mkfs(1M). You must use mkfs(1M) to construct the filesystem before mounting it. For more information, see the mkfs(1M) man page. | |
CI_FAILURE, Could not write newincarnation number to CDB, error = 9. | |
There was a problem accessing the cluster database. Retry the operation. If the error persists, stop and restart the cluster daemons; see “Stopping and Restarting Cluster Infrastructure Daemons”. If the problem persists, clear the database, reboot, and re-create the database. See “Clearing the Cluster Database”. | |
CI_FAILURE, Exiting, monitoring agent should revive me. | |
The daemon requires fresh data. It will be automatically restarted. | |
CI_FAILURE, No node for client (3) of filesystem (/dev/xvm/bob1) on (/bob1). | |
(There may be many repetitions of this message.) The filesystem appears to still be mounted on a CXFS client node that is no longer in the cluster database. If you can identify the CXFS client node that used to be in the cluster and still has the filesystem mounted, reboot that node. Otherwise, reboot the entire cluster. | |
CI_FAILURE, No node for server (-1) of filesystem (/dev/xvm/bob1) on (/bob1). | |
(There may be many repetitions of this message.) The filesystem appears to still be mounted on a server node that is no longer in the cluster database. If you can identify the server node that used to be in the cluster and still has the filesystem mounted, reboot that node. Otherwise, reboot the entire cluster. | |
CI_FAILURE, Node cxfs0: SGI_CMS_HOST_ID(tcp,128.162.89.33) error 149 (Operation already in progress)
The kernel already had this information; you can ignore this message. | |
CI_FAILURE, Unregistered from crs. | |
The clconfd daemon is no longer connected to the reset daemon and will not be able to handle resets of failed nodes. There is no corrective action. | |
CI_IPCERR_NOSERVER, Crs_register failed,will retry later. Resetting not possible yet. | |
The clconfd daemon cannot connect to the reset daemon. It will not be able to handle resets of failed nodes. Check the reset daemon's log file (/var/cluster/ha/log/crsd_ ) for more error messages. | |
Clconfd is out of membership, will restart after notifying clients. | |
The clconfd daemon does not have enough information about the current state of the cluster. It will exit and be automatically restarted with fresh data. | |
CMD(/sbin/clmount -o 'server_list=(cxfs2,cxfs0)' /dev/xvm/stripe4 /xvm/stripe4): /dev/xvm/stripe4: Invalid argument | |
You have tried to mount a filesystem without first running mkfs(1M). You must use mkfs(1M) to construct the filesystem before mounting it. For more information, see the mkfs(1M) man page.
CMD(/sbin/clmount -o 'server_list=(cxfs0,cxfs1)' /dev/xvm/bob2 /bob2): /dev/xvm/bob2: Invalid argument
Sep 9 14:12:43 6X:cxfs0 clconfd[345]: <<CI> E clconf 3> CI_FAILURE, CMD(/sbin/clmount -o 'server_list=(cxfs0,cxfs1)' /dev/xvm/bob2 /bob2): exited with status 1 (0x1)
The first message comes from the clmount command (the internal CXFS mount command) and explains the error (an invalid argument was issued). The second message says that the mount failed. |
The following errors are sent by the crsd daemon.
CI_ERR_NOTFOUND, No logging entries found for group crsd, no logging will take place - Database entry #global#logging#crsd not found. | |
No crsd logging definition was found in the cluster database. This can happen if you start cluster processes without creating the database. See “Recreating the Cluster Database”. | |
CI_ERR_RETRY, Could not find machine listing. | |
The crsd daemon could not find the local node in the cluster database. You can ignore this message if the local node definition has not yet been created. | |
CI_ERR_SYS:125, bind() failed. | |
The sgi-crsd port number in the /etc/services file is not unique, or there is no sgi-crsd entry in the file. For information about adding this entry, see “/etc/services on CXFS Administration Nodes” in Chapter 2. | |
CI_FAILURE, Entry for sgi-crsd is missing in /etc/services. | |
The sgi-crsd entry is missing from the /etc/services file. For information about adding this entry, see “/etc/services on CXFS Administration Nodes” in Chapter 2. | |
CI_FAILURE, Initialization failed, exiting. | |
A sequence of messages will be ended with this message; see the messages prior to this one in order to determine the cause of the failure. |
The following errors are sent by the cmond daemon.
The following errors are sent by the cxfs_client daemon.
cxfs_client: cis_get_hba_wwns warning: fencing configuration file "fencing.conf" not found | |
The fencing file was not found, therefore the fencing configuration will not be updated on the server. | |
cxfs_client:op_failed ERROR: Mount failed for concat0 | |
A filesystem mount has failed and will be retried. |
The following errors are sent by the fs2d daemon.
Error 9 writing CDB info attribute for node #cluster#elaine#machines#cxfs2#Cellular#status | |
An internal error occurred when writing to the cluster database. Retry the operation. If the error persists, stop and restart the cluster daemons; see “Stopping and Restarting Cluster Infrastructure Daemons”. If the problem persists, clear the database, reboot, and re-create the database. See “Clearing the Cluster Database”. | |
Error 9 writing CDB string value for node #cluster#elaine#machines#cxfs2#Cellular#status | |
An internal error occurred when writing to the cluster database. Retry the operation. If the error persists, stop and restart the cluster daemons; see “Stopping and Restarting Cluster Infrastructure Daemons”. If the problem persists, clear the database, reboot, and re-create the database. See “Clearing the Cluster Database”. | |
Failed to update CDB for node #cluster#elaine#Cellular#FileSystems#fs1#FSStatus | |
An internal error occurred when writing to the cluster database. Retry the operation. If the error persists, stop and restart the cluster daemons; see “Stopping and Restarting Cluster Infrastructure Daemons”. If the problem persists, clear the database, reboot, and re-create the database. See “Clearing the Cluster Database”. | |
Failed to update CDB for node #cluster#elaine#machines#cxfs2#Cellular#status | |
An internal error occurred when writing to the cluster database. Retry the operation. If the error persists, stop and restart the cluster daemons; see “Stopping and Restarting Cluster Infrastructure Daemons”. If the problem persists, clear the database, reboot, and re-create the database. See “Clearing the Cluster Database”. | |
Machine 101 machine_sync failed with lock_timeout error | |
The fs2d daemon was not able to synchronize the cluster database and the sync process timed out. This operation will be retried automatically by fs2d. | |
ALERT: CXFS Recovery: Cell 0: Server Cell 2 Died, Recovering | |
The server (cell 2) died and the system is now recovering a filesystem. |
CI_CONFERR_NOTFOUND, Logging configuration error: could not read cluster database /var/cluster/cdb/cdb.db, cdb error = 3. | |
The cluster database has not been initialized. See “Recreating the Cluster Database”. | |
WARNING: Error receiving messages from cell 2 tcpchannel 1 | |
There has been an error on the CXFS membership channel (channel 1; channel 0 is the main message channel for CXFS and XVM data). This may be a result of tearing down the channel or may be an error of the node (node with an ID of 2 in this case). There is no corrective action. |
CXFS maintains logs for each of the CXFS daemons. For information about customizing these logs, see “Set Log Configuration with the GUI” in Chapter 4.
Log file messages take the following form:
daemon_log timestamp internal_process: message_text |
For example:
cad_log:Thu Sep 2 17:25:06.092 cclconf_poll_clconfd: clconf_poll failed with error CI_IPCERR_NOPULSE |
Table 10-2 shows the parts of the preceding message.
Table 10-2. Log File Error Message Format
Content | Part | Meaning |
---|---|---|
cad_log | Daemon identifier | The message pertains to the cad daemon |
Sep 2 17:25:06.092 | Time stamp and process ID | September 2 at 5:25 PM, process ID 92. |
cclconf_poll_clconfd | Internal process information | Internal process information |
clconf_poll failed with error CI_IPCERR_NOPULSE | Message text | The clconfd daemon could not be contacted to get an update on the cluster's status. |
The following are examples of messages from /var/cluster/ha/log/cad_log :
ccacdb_cam_open: failed to open connection to CAM server error 4 | ||
Internal message that can be ignored because the cad operation is automatically retried. | ||
ccamail_cam_open: failed to open connection to CAM server error 4 | ||
Internal message that can be ignored because the cad operation is automatically retried. | ||
ccicdb_cam_open: failed to open connection to CAM server error 4 | ||
Internal message that can be ignored because the cad operation is automatically retried. | ||
cclconf_cam_open: failed to open connection to CAM server error 4 | ||
Internal message that can be ignored because the cad operation is automatically retried. | ||
cclconf_poll_clconfd: clconf_poll failed with error CI_IPCERR_NOCONN | ||
The clconfd daemon is not running or is not responding to external requests. If the error persists, stop and restart the cluster daemons; see “Stopping and Restarting Cluster Infrastructure Daemons”. | ||
cclconf_poll_clconfd: clconf_poll failed with error CI_IPCERR_NOPULSE | ||
The clconfd daemon could not be contacted to get an update on the cluster's status. If the error persists, stop and restart the cluster daemons; see “Stopping and Restarting Cluster Infrastructure Daemons”. | ||
cclconf_poll_clconfd: clconf_poll failed with error CI_CLCONFERR_LONELY | ||
The clconfd daemon does not have enough information to provide an accurate status of the cluster. It will automatically restart with fresh data and resume its service. | ||
csrm_cam_open: failed to open connection to CAM server error 4 | ||
Internal message that can be ignored because the cad operation is automatically retried. | ||
Could not execute notification cmd. system() failed. Error: No child processes | ||
No mail message was sent because cad could not fork processes. Stop and restart the cluster daemons; see “Stopping and Restarting Cluster Infrastructure Daemons”. | ||
error 3 sending event notification to client 0x000000021010f078 | ||
GUI process exited without cleaning up.
error 8 sending event notification to client 0x000000031010f138 | ||
GUI process exited without cleaning up.
The following are examples of messages from /var/cluster/ha/log/cli_hostname:
CI_CONFERR_NOTFOUND, No machines found in the CDB. | |
The local node is not defined in the cluster database. | |
CI_ERR_INVAL, Cluster (bob) not defined | |
The cluster called bob is not present in the cluster database. | |
CI_ERR_INVAL, CLI private command: failed (Cluster (bob) not defined) | |
The cluster called bob is not present in the cluster database. | |
CI_IPCERR_AGAIN, ipcclnt_connect(): file /var/cluster/ha/comm/clconfd-ipc_cxfs0 lock failed - Permission denied | |
The underlying command line interface (CLI) was invoked by a login other than root. You should only use cmgr(1M) when you are logged in as root. | |
CI_IPCERR_NOPULSE, CLI private command: failed (Cluster state is UNKNOWN.) | |
The cluster state could not be determined. Check if the clconfd(1M) daemon is running. | |
CI_IPCERR_NOPULSE, ipcclnt_pulse_internal(): server failed to pulse | |
The cluster state could not be determined. Check if the clconfd(1M) daemon is running. | |
CI_IPCERR_NOSERVER, clconf ipc: ipcclnt_connect() failed, file /var/cluster/ha/comm/clconfd-ipc_cxfs0 | |
The local node (cxfs0) is not defined in the cluster database. | |
CI_IPCERR_NOSERVER, Connection file /var/cluster/ha/comm/clconfd-ipc_cxfs0 not present. | |
The local node (cxfs0) is not defined in the cluster database. |
The following are examples of messages from /var/cluster/ha/log/crsd_hostname:
CI_CONFERR_INVAL, Nodeid -1 is invalid.
CI_CONFERR_INVAL, Error from ci_security_init().
CI_ERR_SYS:125, bind() failed.
CI_ERR_SYS:125, Initialization failed, exiting.
CI_ERR_NOTFOUND, Nodeid does not have a value.
CI_CONFERR_INVAL, Nodeid -1 is invalid.
For each of these messages, either the node ID was not provided in the node definition or the cluster processes were not running in that node when node definition was created in the cluster database. This is a warning that optional information is not available when expected. | |
CI_ERR_NOTFOUND, SystemController information for node cxfs2 not found, requests will be ignored. | |
System controller information (optional information) was not provided for node cxfs2. Provide system controller information for node cxfs2 by modifying node definition. This is a warning that optional information is not available when expected. Without this information, the node will not be reset if it fails, which might prevent the cluster from properly recovering from the failure. | |
CI_ERR_NOTFOUND, SystemController information for node cxfs0 not found, requests will be ignored. | |
The owner node specified in the node definition for the node with a node ID of 101 has not been defined. You must define the owner node. | |
CI_CRSERR_NOTFOUND, Reset request 0x10087d48 received for node 101, but its owner node does not exist. | |
The owner node specified in the node definition for the node with a node ID of 101 has not been defined. You must define the owner node. |
The following are examples of messages from /var/cluster/ha/log/fs2d_hostname:
Failed to copy global CDB to node cxfs1 (1), error 4 | |
There are communication problems between the local node and node cxfs1. Check the control networks of the two nodes.
Communication failure send new quorum to machine cxfs2 (102) (error 6003) | |
There are communication problems between the local node and node cxfs2. Check the control networks of the two nodes. | |
Failed to copy CDB transaction to node cxfs2 (1) | |
There are communication problems between the local node and node cxfs2. Check the control networks of the two nodes. | |
Outgoing RPC to hostname : NULL | |
If you see this message, check your Remote Procedure Call (RPC) setup. For more information, see the rpcinfo(1M) and portmap(1M) man pages.
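For example, to list the RPC services registered on a node (replace hostname with the node in question):

# rpcinfo -p hostname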
This section covers the following corrective actions:
If CXFS services do not restart after a reboot, it may be that the start flag was turned off by using the stop function of the GUI or the cmgr(1M) command. In this case, issuing a /etc/init.d/cluster start will not restart the services. You must start CXFS services. If you use the GUI or cmgr to restart the services, the configuration will be set so that future reboots will also restart CXFS services.
For information, see “Start CXFS Services with the GUI” in Chapter 4, or “Start CXFS Services with cmgr” in Chapter 5.
To clear the cluster database on all the nodes of the cluster, do the following, completing each step on each node before moving to the next step:
Enter the following on all nodes:
# /usr/cluster/bin/cmgr -c 'admin cxfs_stop' |
Enter the following on all nodes:
# /etc/init.d/cluster stop |
Caution: Complete steps 1 and 2 on each node before moving to step 3 for any node.
Enter the following on all nodes:
# /usr/cluster/bin/cdbreinit |
Enter the following on all nodes:
# /etc/init.d/cluster start |
Enter the following on all nodes:
# /usr/cluster/bin/cmgr -c 'admin cxfs_start' |
See “Eliminate a Residual Cluster”, to get rid of possible stale cluster configuration in the kernel. If needed, reboot the nodes.
Enter the following individually on every node to reboot the cluster:
# reboot |
For information about nodes running operating systems other than IRIX, see the CXFS MultiOS for CXFS Client-Only Nodes: Installation and Configuration Guide.
If you want CXFS services to restart whenever the node is rebooted, use the GUI or cmgr(1M) to start CXFS services. For information, see “Start CXFS Services with the GUI” in Chapter 4, and “Start CXFS Services with cmgr” in Chapter 5.
The following situations may require a reboot:
If some CXFS clients are unable to unmount a filesystem because of a busy vnode and a reset of the node does not fix the problem, you may need to reboot every node in the cluster
If there is no recovery activity within 10 minutes, you may need to reboot the node
You have a cluster named clusterA that has two server-capable nodes and there is no CXFS tiebreaker:
node1
node2
node1 goes down and will remain down for a while.
node2 recovers and clusterA remains up.
Note: An existing cluster can drop down to 50% of the remaining server-capable nodes after the initial CXFS kernel membership is formed. For more information, see “CXFS Kernel Membership, Quorum, and Tiebreaker” in Appendix B.
node2 goes down and therefore clusterA fails.
node2 comes back up. However, clusterA cannot form because the initialization of a cluster requires either:
More than 50% of the server-capable nodes
50% of the server-capable nodes, one of which is the CXFS tiebreaker
To allow node2 to form a cluster by itself, you must do the following:
Set node2 to be the CXFS tiebreaker node, using either the GUI or cmgr:
Revoke the CXFS kernel membership of node2:
See “Revoke Membership of the Local Node with the GUI” in Chapter 4.
In cmgr, enter:
cmgr> admin cxfs_stop |
See “Revoke Membership of the Local Node with cmgr” in Chapter 5.
Allow CXFS kernel membership of node2 :
See “Allow Membership of the Local Node with the GUI” in Chapter 4.
In cmgr, enter:
cmgr> admin cxfs_start |
See “Allow Membership of the Local Node with cmgr” in Chapter 5.
Unset the CXFS tiebreaker node capability.
Caution: If the CXFS tiebreaker node in a cluster with two server-capable nodes fails or if the administrator stops CXFS services, the other node will do a forced shutdown, which unmounts all CXFS filesystems. The serial hardware reset capability is mandatory to ensure data integrity for clusters with only two server-capable nodes and highly recommended for all server-capable nodes. Larger clusters should have an odd number of server-capable nodes, or must have serial hardware reset lines or use I/O fencing with switches if only two of the nodes are server-capable nodes.
Use either the GUI or cmgr:
The cluster will attempt to communicate with node1 because it is still configured in the cluster, even though it is down. Therefore, it may take some time for the CXFS kernel membership to form and for filesystems to mount.
The cluster flag to chkconfig(1M) controls the other cluster administration daemons and the replicated cluster database. If it is turned off, the database daemons will not be started at the next reboot and the local copy of the database will not be updated if you make changes to the cluster configuration on the other nodes. This could cause problems later, especially if a majority of nodes are not running the database daemons.
If the cluster daemons are causing serious trouble and prevent the machine from booting, you can recover the node by booting in single-user mode, turning off the cluster flag, and booting in multiuser mode:
# init 1
# /etc/chkconfig cluster off
# init 2
To stop and restart cluster infrastructure daemons, enter the following:
# /etc/init.d/cluster stop
# /etc/init.d/cluster start
These commands affect the cluster infrastructure daemons only.
Caution: When the cluster infrastructure daemons are stopped, the node will not receive database updates and will not update the kernel configuration. This can have very unpleasant side effects. Under most circumstances, the infrastructure daemons should remain running at all times. Use these commands only as directed.
See also “Restarting CXFS Services”. For general information about the daemons, see “Daemons” in Appendix A.