This chapter provides recommendations and a summary of the steps required to initially configure a cluster using either the graphical user interface (GUI) or the cmgr(1M) command. You may also wish to use the worksheet provided in Appendix D, “Initial Configuration Checklist”. If you are converting from an existing IRIS FailSafe cluster, see “Set Up an Existing FailSafe Cluster for CXFS with the GUI” in Chapter 4.
This chapter assumes that you have already performed the steps in Chapter 2, “IRIX Systems: Installation of CXFS Software and System Preparation”. It points to detailed descriptions in the task reference chapters and in the XVM Volume Manager Administrator's Guide.
For the initial installation, SGI highly recommends that you use the GUI guided configuration tasks; see “Configuring with the GUI”. You should also read through the entire book, including Chapter 10, “Troubleshooting”, before configuring the cluster.
CXFS requires a license to be installed on each node. If you increase the number of CPUs in your system, you may need a new license; see Chapter 2, “IRIX Systems: Installation of CXFS Software and System Preparation”. For information about other operating systems, see the CXFS MultiOS for CXFS Client-Only Nodes: Installation and Configuration Guide.
If you want to use the file alteration monitor (fam), you must remove the /dev/imon file from CXFS nodes. Removing this file forces fam to poll the filesystem. For more information about the monitor, see the fam(1M) man page.
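For example, the following minimal sketch removes the inode monitor device so that fam falls back to polling; run it as root on each CXFS node (the /dev/imon path is as documented above):

```shell
# Force fam to poll by removing the inode monitor device.
# Run as root on each CXFS node; /dev/imon is the path described above.
rm -f /dev/imon
```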
If there are any network issues on the private network, fix them before trying to use CXFS.
Use a network switch rather than a hub for performance and control.
All nodes should be on the same physical network segment. Two clusters should not share the same private network.
A production cluster should be configured with a minimum of three server-capable nodes and a maximum of 32 nodes total, 16 of which can be administration nodes.
For large clusters, SGI recommends that you define only the first three server-capable nodes and then continue on to the next configuration step; add the remaining nodes after you have a successful small cluster.
Create a new cluster using IRIX nodes that have the same IRIX release installed. When the cluster is functioning, you can later upgrade selected nodes to an adjacent release.
If you want to run CXFS and Trusted IRIX/CMW, have all nodes in the cluster run Trusted IRIX. You should configure your system such that all nodes in the cluster have the same user IDs, access control lists (ACLs), and capabilities.
Avoid unnecessary metadata traffic by limiting use of the find(1) command. Because CXFS filesystems appear local on all nodes in the cluster, the nodes may generate excessive filesystem activity if they try to access the same filesystems simultaneously while doing a find. Edit each node's crontab file so that find is executed on only one metadata server in the cluster.
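One way to do this is to guard the crontab entry with a hostname test, so the same entry can be installed everywhere but runs only on one node. This is a sketch: the node name cxfs6 and the mount point /mnts/fs1 are borrowed from this chapter's examples, and the find arguments are illustrative only.

```
# Hypothetical crontab entry: the find runs only on the node named
# cxfs6; on every other node the hostname test fails and find is skipped.
0 2 * * * [ "`hostname`" = cxfs6 ] && find /mnts/fs1 -name core -print
```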
Always contact SGI technical support before using xfs_repair(1M) on CXFS filesystems. Only use xfs_repair on metadata servers and only when you have verified that all other cluster nodes have unmounted the filesystem.
When using xfs_repair, make sure it is run only on a cleanly unmounted filesystem. If your filesystem has not been cleanly unmounted, there will be uncommitted metadata transactions in the log, which xfs_repair will erase. This usually causes the loss of some data, and xfs_repair produces messages that make the filesystem appear to be corrupted.
If you are running xfs_repair right after a system crash or a filesystem shutdown, your filesystem is likely to have a dirty log. To avoid data loss, you MUST mount and unmount the filesystem before running xfs_repair. It does not hurt anything to mount and unmount the filesystem locally, after CXFS has unmounted it, before xfs_repair is run.
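For example, the sequence looks like the following sketch, using the hypothetical device and mount point names from this chapter's examples; run it on a metadata server after CXFS has unmounted the filesystem everywhere:

```
# mount /dev/cxvm/d76lun0s0 /mnts/fs1
# umount /mnts/fs1
# xfs_repair /dev/cxvm/d76lun0s0
```

The mount/unmount pair replays the dirty log so that xfs_repair does not erase uncommitted transactions.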
Use an odd number of server-capable nodes.
Use an odd number of CXFS administration nodes.
Shut down cluster services before maintenance. Disabled nodes are not used in CXFS kernel membership calculations, so this action may prevent a loss of quorum.
Avoid recovery in the current release.
Note: In this release, relocation is disabled by default and recovery is supported only when using standby nodes. Relocation and recovery are fully implemented, but the number of associated problems prevents full support of these features in the current release. Although data integrity is not compromised, cluster node panics or hangs are likely to occur. Relocation and recovery will be fully supported in a future release when these issues are resolved.
Do the following before shutting down a node:
Use the CXFS GUI or the cmgr(1M) command to unmount the filesystem from all hosts.
Shut down cluster services.
Enable the forced unmount feature for CXFS filesystems, which is off by default. Many sites have found that enabling this feature improves the stability of their CXFS clusters, particularly in situations where the filesystem must be unmounted.
On IRIX nodes, this feature uses the umount -k option. The -k option attempts to kill processes that have open files or current directories in the appropriate filesystems and then unmount them. That is, it attempts to terminate any I/O going to the filesystem so that the filesystem can be unmounted promptly, rather than waiting for the I/O to finish on its own, which could cause the unmount to fail.
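Invoked manually on an IRIX node, a forced unmount looks like the following sketch; the mount point /mnts/fs1 is hypothetical, taken from this chapter's examples:

```
# umount -k /mnts/fs1
```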
This feature is available through the following CXFS GUI menu:
Tasks -> Filesystems -> Unmount a CXFS Filesystem
You can also specify this feature using the cmgr(1M) commands to define the filesystem.
See “Unmount a CXFS Filesystem with the GUI” in Chapter 4, “Define a CXFS Filesystem with cmgr” in Chapter 5, the umount(1M) man page, and the Solaris fuser(1M) man page.
Do not use any filesystem defragmenter software. On IRIX nodes, you can use the IRIX fsr(1M) command only on a metadata server for the filesystem it acts upon.
Do not include the find(1) command in a crontab(1) file on any node in the cluster. CXFS filesystems act like local filesystems and therefore the search will be done on each node. Using find will slow the system and temporarily consume large quantities of memory on the metadata server.
If you are using I/O fencing, you must keep the telnet port on the Fibre Channel switch free at all times; do not perform a telnet to the switch and leave the session connected.
Note: Administration must be performed using the GUI connected to a CXFS administration node (one that has the cluster_admin software package installed) or using the cmgr(1M) command on a CXFS administration node.
Complete the following steps to ensure that you are ready to configure the initial cluster:
During the course of configuration, you will see various information-only messages in the log files. See “Normal Messages” in Chapter 10.
Verify that you have a CXFS license by using the -d option to the cxfslicense command. For example:
# /usr/cluster/bin/cxfslicense -d
CXFS license granted.
If you have a properly installed license, you will also see a FEATURE CXFS line in the /var/flexlm/license.dat file on each IRIX node.
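You can check for that line directly; this is a sketch, and the exact contents of the FEATURE line vary by license:

```
# grep 'FEATURE CXFS' /var/flexlm/license.dat
```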
Note: The /var/flexlm/license.dat file cannot simply be copied between nodes because it is unique to each node.
For more information about installing software licenses, see the IRIX 6.5 Installation Instructions booklet.
Enter the following on a CXFS administration node to start the cluster daemons:
# /etc/init.d/cluster start
When you first install the software, the following daemons should be running:
After you start CXFS services, the clconfd daemon is also started.
To determine which daemons are running, enter the following:
# ps -ef | grep cluster
The following shows an example of the output when just the initial daemons are running (some whitespace has been adjusted for readability):
cxfs6 # ps -ef | grep cluster
root 31431     1 0 12:51:36 ?     0:14 /usr/lib32/cluster/cbe/fs2d /var/cluster/cdb/cdb.db #
root 31456 31478 0 12:53:01 ?     0:03 /usr/cluster/bin/crsd -l
root 31475 31478 0 12:53:00 ?     0:08 /usr/cluster/bin/cad -l -lf /var/cluster/ha/log/cad_log --append_log
root 31478     1 0 12:53:00 ?     0:00 /usr/cluster/bin/cmond -L info -f /var/cluster/ha/log/cmond_log
root 31570 31408 0 14:01:52 pts/0 0:00 grep cluster
If you do not see these processes, go to the logs to see what the problem might be. If you must restart the daemons, enter the following:
# /etc/init.d/cluster start
For more information, see “Stopping and Restarting Cluster Infrastructure Daemons” in Chapter 10, and “Daemons” in Appendix A.
When you are initially configuring the cluster, you must use the IP address or the value of /etc/sys_id (which must match the primary name of the IP address for the node in /etc/hosts) when logging in to the GUI and when defining the first node in the pool. This must be done when the GUI is connected to a CXFS administration node. The value of /etc/sys_id is displayed by the hostname(1) command. For example:
# hostname
cxfs6
Also, if you use nsd(1M), you must configure your system so that local files are accessed before the network information service (NIS) or the domain name service (DNS). See “Hostname Resolution: /etc/sys_id, /etc/hosts, /etc/nsswitch.conf” in Chapter 2.
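For example, the hosts entry in /etc/nsswitch.conf should list files before the network services; this is a sketch, and the exact services on your line may differ:

```
hosts: files nis dns
```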
Caution: It is critical that these files are configured properly and that you enter the primary name for the first node defined in the pool; aliases may be used for subsequent node definitions. See Chapter 2, “IRIX Systems: Installation of CXFS Software and System Preparation”.
Ensure that the output from chkconfig(1M) shows the following flags set to on:
# chkconfig
Flag                 State
====                 =====
cluster              on
cxfs_cluster         on
If they are not, set them to on and reboot.
For example:
# /etc/chkconfig cluster on
# /etc/chkconfig cxfs_cluster on
# init 6
Or:
# init 1
# /etc/chkconfig cluster on
# /etc/chkconfig cxfs_cluster on
# init 2
To initially configure the cluster with the GUI, do the following:
The CXFS administration node to which you connect the GUI affects your view of the cluster. You should wait for a change to appear in the view area before making another change; the change is not guaranteed to be propagated across the cluster until it appears in the view area. You should only make changes from one instance of the GUI at any given time; changes made by a second GUI instance may overwrite changes made by the first instance.
Start the CXFS Manager by entering the following:
# /usr/sbin/cxfsmgr
You can also start the GUI from your web browser on a Microsoft Windows, Linux, or other platform. To do this, enter http://server/CXFSManager/ (where server is the name of a CXFS administration node in the pool) and press Enter. At the resulting web page, click the CXFS Manager icon. This method of launching CXFS Manager requires that you have enabled Java in your browser's preferences and installed the Java2 version 1.4.1 plug-in. (After installing the plug-in, you must close any existing Java windows and restart your browser.) The CXFS administration node must be running a web server, such as Apache, and have the sysadm_cxfs.sw.web software installed.
Note: If you load the GUI using Netscape on IRIX and then switch to another page in Netscape, the CXFS Manager GUI will not operate correctly. To avoid this problem, leave the CXFS Manager GUI web page up and open a new Netscape window if you want to view another page.
There are other methods of starting the GUI. For more information, see “Starting the GUI” in Chapter 4.
Supply the name of the CXFS administration node you wish to connect to and the root password.
Caution: For the first node defined in the pool, it is critical that you use the value of /etc/sys_id in the Server field. See “Hostname Resolution: /etc/sys_id, /etc/hosts, /etc/nsswitch.conf” in Chapter 2.
Figure 3-1 shows the CXFS Manager window.
Note: Within the CXFS tasks, you can click any blue text to get more information about that concept or input field. In every task, the cluster configuration will not update until you click OK.
The Set Up a New Cluster task in the Guided Configuration menu leads you through the steps required to create a new cluster. It encompasses tasks that are detailed elsewhere.
Do the following:
Click Define a Node to define the CXFS administration node to which you are connected. The hostname that appears in /etc/sys_id is used for this first node definition; see “Determine the Hostname of the CXFS Administration Node”. See “Define a Node with the GUI” in Chapter 4.
Note: If you attempt to define a cluster or other object before the local node has been defined, you will get an error message.
(Optional) After the first node icon appears in the view area on the left, click step 2, Define a Node, to define the other nodes in the cluster. The hostname/IP-address pairings and priorities of the networks must be the same for each node in the cluster. See “Define a Node with the GUI” in Chapter 4.
Note: Do not define another node until this node appears in the view area. If you add nodes too quickly (before the database can include the node), errors will occur.
Repeat this step for each node. For large clusters, SGI recommends that you define only the first three server-capable nodes and then continue on to the next step; add the remaining nodes after you have a successful small cluster.
Click Define a Cluster to create the cluster definition. See “Define a Cluster with the GUI” in Chapter 4. Verify that the cluster appears in the view area. Choose View: Nodes in Cluster.
After the cluster icon appears in the view area, click Add/Remove Nodes in Cluster to add the nodes to the new cluster. See “Add or Remove Nodes in the Cluster with the GUI” in Chapter 4.
Click Next to move to the second screen of tasks.
(Optional) Click on Test Connectivity to verify that the nodes are physically connected. See “Test Node Connectivity with the GUI” in Chapter 4. (This test requires the proper configuration of the /etc/.rhosts file; see “(Optional) /.rhosts” in Chapter 2.)
If you are using I/O fencing, define the Brocade Fibre Channel switch in the cluster. I/O fencing is required for nodes without system controllers; see “Requirements” in Chapter 1.
Click Start CXFS Services . See “Start CXFS Services with the GUI” in Chapter 4.
Click Close. Clicking on Close exits the task; it does not undo the task.
Note: Within the CXFS tasks, you can click any blue text to get more information about that concept or input field. In every task, the cluster configuration will not update until you click OK.
The Set Up a New CXFS Filesystem task leads you through the steps required to create a new filesystem and mount it on all nodes in your cluster. It encompasses tasks that are detailed elsewhere.
Do the following:
Click Start CXFS Services if the services have not been started already. (The current status is displayed beneath the task link.) See “Start CXFS Services with the GUI” in Chapter 4.
Click Label Disks.
Note: The disk must be initialized before being labeled. If your disk has not been initialized during factory set-up, use the fx(1M) command to initialize the disk.
For information about XVM tasks, see the XVM Volume Manager Administrator's Guide.
Create slices, which define the physical storage, on the labeled disk. Click Slice Disks.
Create the type of filesystem you want: stripe, mirror, or concat.
Click Make the Filesystem. If you do not want to use the default options, click Specify Sizes and go to the next page. For more information, see the mkfs(1M) man page, the IRIX Admin: Disks and Filesystems guide, and the XVM Volume Manager Administrator's Guide.
Click Define a CXFS Filesystem. This task lets you define a new filesystem, set the ordered list of potential metadata servers, and set the list of client nodes for the filesystem. See “Define a CXFS Filesystem with the GUI” in Chapter 4.
Click Mount a CXFS Filesystem. This task lets you mount the filesystem on all nodes in the cluster. See “Mount a CXFS Filesystem with the GUI” in Chapter 4.
Repeat these steps for each filesystem.
Note: For the initial installation, SGI highly recommends that you use the GUI guided configuration tasks. See “Configuring with the GUI”. For details about cmgr(1M) commands, see the man page and Chapter 5, “Reference to cmgr Tasks for CXFS”.
To initially configure the cluster with the cmgr(1M) command, do the following:
Follow the directions in “Preliminary Cluster Configuration Steps”.
Define the nodes that are eligible to be part of the cluster. The hostname/IP-address pairings and priorities of the networks must be the same for each node in the cluster. See “Define a Node with cmgr” in Chapter 5.
For large clusters, SGI recommends that you define only the first three CXFS administration nodes and then continue on to the next step; add the remaining nodes after you have a successful small cluster.
The following example sequence defines three nodes. (To use the default value for a prompt, press the Enter key. The Enter key is not shown in the examples in this guide.)
To define the first node, named cxfs6, enter the following:
cxfs6 # /usr/cluster/bin/cmgr -p
Welcome to SGI Cluster Manager Command-Line Interface

cmgr> define node cxfs6
Enter commands, you may enter "done" or "cancel" at any time to exit

Hostname[optional] ?
Is this a FailSafe node <true|false> ? false
Is this a CXFS node <true|false> ? true
Node Function <server_admin|client_admin|client_only> ? server_admin
Operating System <IRIX|Linux32|Linux64|AIX|HPUX|Solaris|Windows> ? irix
Node ID[optional] ?
Partition ID[optional] ? (0)
Do you wish to define failure hierarchy[y/n]:n
Reset type <powerCycle> ? (powerCycle)
Do you wish to define system controller info[y/n]:y
Sysctrl Type <msc|mmsc|l2> ? (msc)
Sysctrl Password[optional] ? ( )
Sysctrl Status <enabled|disabled> ? enabled
Sysctrl Owner ? cxfs8
Sysctrl Device ? /dev/ttyd2
Sysctrl Owner Type <tty> ? (tty)
NIC 1 - IP Address ? cxfs6

Successfully defined node cxfs6
To define the second node, named cxfs7, enter the following:
cmgr> define node cxfs7
Enter commands, you may enter "done" or "cancel" at any time to exit

Hostname[optional] ?
Is this a FailSafe node <true|false> ? false
Is this a CXFS node <true|false> ? true
Node Function <server_admin|client_admin|client_only> ? server_admin
Operating System <IRIX|Linux32|Linux64|AIX|HPUX|Solaris|Windows> ? irix
Node ID[optional] ?
Partition ID[optional] ? (0)
Do you wish to define failure hierarchy[y/n]:n
Reset type <powerCycle> ? (powerCycle)
Do you wish to define system controller info[y/n]:y
Sysctrl Type <msc|mmsc|l2> ? (msc)
Sysctrl Password[optional] ? ( )
Sysctrl Status <enabled|disabled> ? enabled
Sysctrl Owner ? cxfs6
Sysctrl Device ? /dev/ttyd2
Sysctrl Owner Type <tty> ? (tty)
Number of Network Interfaces ? (1)
NIC 1 - IP Address ? cxfs7

Successfully defined node cxfs7
To define the third node, named cxfs8, enter the following:
cmgr> define node cxfs8
Enter commands, you may enter "done" or "cancel" at any time to exit

Hostname[optional] ?
Is this a FailSafe node <true|false> ? false
Is this a CXFS node <true|false> ? true
Node Function <server_admin|client_admin|client_only> ? server_admin
Operating System <IRIX|Linux32|Linux64|AIX|HPUX|Solaris|Windows> ? irix
Node ID[optional] ?
Partition ID[optional] ? (0)
Do you wish to define failure hierarchy[y/n]:n
Reset type <powerCycle> ? (powerCycle)
Do you wish to define system controller info[y/n]:y
Sysctrl Type <msc|mmsc|l2> ? (msc)
Sysctrl Password[optional] ? ( )
Sysctrl Status <enabled|disabled> ? enabled
Sysctrl Owner ? cxfs7
Sysctrl Device ? /dev/ttyd2
Sysctrl Owner Type <tty> ? (tty)
Number of Network Interfaces ? (1)
NIC 1 - IP Address ? cxfs8

Successfully defined node cxfs8
You now have three nodes defined in the pool. To verify this, enter the following:
cmgr> show nodes in pool

3 Machine(s) defined
        cxfs6
        cxfs7
        cxfs8
To show the contents of node cxfs6, enter the following:
cmgr> show node cxfs6

Logical Machine Name: cxfs6
Hostname: cxfs6.americas.sgi.com
Operating System: irix
Node Is FailSafe: false
Node Is CXFS: true
Node Function: server_admin
Nodeid: 13203
Partition id: 0
Reset type: powerCycle
System Controller: msc
System Controller status: enabled
System Controller owner: cxfs8
System Controller owner device: /dev/ttyd2
System Controller owner type: tty
ControlNet Ipaddr: cxfs6
ControlNet HB: true
ControlNet Control: true
ControlNet Priority: 1
Node Weight: 1
Define the cluster and add the nodes to it. See “Define a Cluster with cmgr” in Chapter 5.
For example, to define a cluster named cxfs6-8 and add the nodes that are already defined, enter the following:
cmgr> define cluster cxfs6-8
Enter commands, you may enter "done" or "cancel" at any time to exit

Is this a FailSafe cluster <true|false> false ?
Is this a CXFS cluster <true|false> true ?
Cluster Notify Cmd [optional] ?
Cluster Notify Address [optional] ?
Cluster mode <normal|experimental>[optional]
Cluster ID ? 22

No nodes in cluster cxfs6-8

Add nodes to or remove nodes from cluster cxfs6-8
Enter "done" when completed or "cancel" to abort

cxfs6-8 ? add node cxfs6
cxfs6-8 ? add node cxfs7
cxfs6-8 ? add node cxfs8
cxfs6-8 ? done
Successfully defined cluster cxfs6-8
The fail action hierarchy is the set of instructions that determines which method is used in case of failure. If you set a hierarchy including fencing, you could define the switch at this point. For more information, see “Switches and I/O Fencing Tasks with cmgr” in Chapter 5.
For more information, see “Define a Cluster with cmgr” in Chapter 5.
To verify the cluster and its contents, enter the following:
cmgr> show clusters

1 Cluster(s) defined
        cxfs6-8

cmgr> show cluster cxfs6-8
Cluster Name: cxfs6-8
Cluster Is FailSafe: false
Cluster Is CXFS: true
Cluster ID: 22
Cluster CX mode: normal

Cluster cxfs6-8 has following 3 machine(s)
        cxfs6
        cxfs7
        cxfs8
For an example of this step using a script, see “Script Example” in Chapter 5.
Start CXFS services for the cluster by entering the following:
start cx_services for cluster clustername |
cmgr> start cx_services for cluster cxfs6-8

CXFS services have been activated in cluster cxfs6-8
This action starts CXFS services and sets the configuration so that CXFS services will be restarted automatically whenever the system reboots.
Note: If you stop CXFS services using either the GUI or cmgr(1M), the automatic restart capability is turned off. You must start CXFS services again to reinstate the automatic restart capability.
To verify that the cluster is up, you can use the following cmgr(1M) command:
show status of cluster clustername |
cmgr> show status of cluster cxfs6-8

Cluster (cxfs6-8) is not configured for FailSafe
CXFS cluster state is ACTIVE.
You can also use the clconf_info(1M) command. For example:
cxfs6 # /usr/cluster/bin/clconf_info
Membership since Wed May 16 14:42:48 2001
Node          NodeId    Status    Age    Incarnation    CellId
cxfs7         12812     UP        0      0              1
cxfs6         13203     UP        0      0              0
cxfs8         14033     UP        0      0              2
0 CXFS FileSystems
For more information, see “Display a Cluster with cmgr” in Chapter 5.
Obtain a shell window for one of the CXFS administration nodes in the cluster and use the fx(1M) command to create a volume header on the disk drive. For information, see IRIX Admin: Disks and Filesystems.
Create the XVM logical volumes. In the shell window, use the xvm(1M) command line interface. For information, see the XVM Volume Manager Administrator's Guide.
Make the filesystems. In the shell window, use the mkfs(1M) command. For information, see the XVM Volume Manager Administrator's Guide.
Mount the filesystems by using the define cxfs_filesystem subcommand to cmgr(1M). See “CXFS Filesystem Tasks with cmgr” in Chapter 5.
The following example shows two potential metadata servers for the fs1 filesystem; if cxfs6 (the preferred server, with rank 0) is not up when the cluster starts or later fails or is removed from the cluster, then cxfs7 (rank 1) will be used. It also shows the filesystem being mounted by default on all nodes in the cluster (Default Local Status enabled) but explicitly not mounted on cxfs8.
Do the following:
cmgr> define cxfs_filesystem fs1 in cluster cxfs6-8

(Enter "cancel" at any time to abort)

Device ? /dev/cxvm/d76lun0s0
Mount Point ? /mnts/fs1
Mount Options[optional] ?
Use Forced Unmount ? <true|false> ? false
Default Local Status <enabled|disabled> ? (enabled)

DEFINE CXFS FILESYSTEM OPTIONS

        0) Modify Server.
        1) Add Server.
        2) Remove Server.
        3) Add Enabled Node.
        4) Remove Enabled Node.
        5) Add Disabled Node.
        6) Remove Disabled Node.
        7) Show Current Information.
        8) Cancel. (Aborts command)
        9) Done. (Exits and runs command)

Enter option:1

No current servers

Server Node ? cxfs6
Server Rank ? 0

        0) Modify Server.
        1) Add Server.
        2) Remove Server.
        3) Add Enabled Node.
        4) Remove Enabled Node.
        5) Add Disabled Node.
        6) Remove Disabled Node.
        7) Show Current Information.
        8) Cancel. (Aborts command)
        9) Done. (Exits and runs command)

Enter option:1

Server Node ? cxfs7
Server Rank ? 1

        0) Modify Server.
        1) Add Server.
        2) Remove Server.
        3) Add Enabled Node.
        4) Remove Enabled Node.
        5) Add Disabled Node.
        6) Remove Disabled Node.
        7) Show Current Information.
        8) Cancel. (Aborts command)
        9) Done. (Exits and runs command)

Enter option:5

No disabled clients

Disabled Node ? cxfs8

        0) Modify Server.
        1) Add Server.
        2) Remove Server.
        3) Add Enabled Node.
        4) Remove Enabled Node.
        5) Add Disabled Node.
        6) Remove Disabled Node.
        7) Show Current Information.
        8) Cancel. (Aborts command)
        9) Done. (Exits and runs command)

Enter option:7

Current settings for filesystem (fs1)

CXFS servers:
        Rank 0 Node cxfs6
        Rank 1 Node cxfs7

Default local status: enabled

No explicitly enabled clients

Explicitly disabled clients:
        Disabled Node: cxfs8

        0) Modify Server.
        1) Add Server.
        2) Remove Server.
        3) Add Enabled Node.
        4) Remove Enabled Node.
        5) Add Disabled Node.
        6) Remove Disabled Node.
        7) Show Current Information.
        8) Cancel. (Aborts command)
        9) Done. (Exits and runs command)

Enter option:9
Successfully defined cxfs_filesystem fs1

cmgr> define cxfs_filesystem fs2 in cluster cxfs6-8

(Enter "cancel" at any time to abort)

Device ? /dev/cxvm/d77lun0s0
Mount Point ? /mnts/fs2
Mount Options[optional] ?
Use Forced Unmount ? <true|false> ? false
Default Local Status <enabled|disabled> ? (enabled)

DEFINE CXFS FILESYSTEM OPTIONS

        0) Modify Server.
        1) Add Server.
        2) Remove Server.
        3) Add Enabled Node.
        4) Remove Enabled Node.
        5) Add Disabled Node.
        6) Remove Disabled Node.
        7) Show Current Information.
        8) Cancel. (Aborts command)
        9) Done. (Exits and runs command)

Enter option:1

Server Node ? cxfs8
Server Rank ? 0

        0) Modify Server.
        1) Add Server.
        2) Remove Server.
        3) Add Enabled Node.
        4) Remove Enabled Node.
        5) Add Disabled Node.
        6) Remove Disabled Node.
        7) Show Current Information.
        8) Cancel. (Aborts command)
        9) Done. (Exits and runs command)

Enter option:7

Current settings for filesystem (fs2)

CXFS servers:
        Rank 0 Node cxfs8

Default local status: enabled

No explicitly enabled clients
No explicitly disabled clients

        0) Modify Server.
        1) Add Server.
        2) Remove Server.
        3) Add Enabled Node.
        4) Remove Enabled Node.
        5) Add Disabled Node.
        6) Remove Disabled Node.
        7) Show Current Information.
        8) Cancel. (Aborts command)
        9) Done. (Exits and runs command)

Enter option:9
Successfully defined cxfs_filesystem fs2
To see the modified contents of cluster cxfs6-8, enter the following:
cmgr> show cxfs_filesystems in cluster cxfs6-8

fs1
fs2
To quit out of cmgr, enter the following:
cmgr> quit