This chapter explains how to set up ONC3/NFS services and verify that they work. It provides procedures for enabling exporting on NFS servers, for setting up mounting and automatic mounting on NFS clients, and for setting up the network lock manager. It also explains how to create a CacheFS file system. Before you begin these procedures, you should be thoroughly familiar with the information provided in Chapter 2, “Planning ONC3/NFS Service”.
This chapter contains these sections:
“Mounting a Cached File System”
Note: To perform the procedures in this chapter, you should have already installed ONC3/NFS software on the server and client systems that will participate in the ONC3/NFS services. The ONC3/NFS Release Notes explain where to find instructions for installing ONC3/NFS software.
Setting up an NFS server requires verifying that the required software is running on the server, editing the server's /etc/exports file, adding the file systems to be exported, exporting the file systems, and verifying that they have been exported. The instructions below explain the setup procedure. Do this procedure as the superuser on the server.
Use versions to verify the correct software has been installed on the server:
# versions | grep nfs
I  nfs               07/09/97  Network File System, 6.5
I  nfs.man           07/09/97  NFS Documentation
I  nfs.man.nfs       07/09/97  NFS Support Manual Pages
I  nfs.man.relnotes  07/09/97  NFS Release Notes
I  nfs.sw            07/09/97  NFS Software
I  nfs.sw.autofs     07/09/97  AutoFS Support
I  nfs.sw.cachefs    07/09/97  CacheFS Support
I  nfs.sw.nfs        07/09/97  NFS Support
I  nfs.sw.nis        07/09/97  NIS (formerly Yellow Pages) Support
This example shows NFS as I (installed). A complete listing of current software modules is contained in the ONC3/NFS Release Notes.
Check the NFS configuration flag on the server.
When the /etc/init.d/network script executes at system startup, it starts NFS running if the chkconfig flag nfs is on. To verify that nfs is on, enter the chkconfig command and check its output, for example:
# /etc/chkconfig
        Flag                 State
        ====                 =====
        ...
        nfs                  on
        ...
This example shows that the nfs flag is set to on.
If your output shows that nfs is off, enter the following command and reboot your system:
/etc/chkconfig nfs on
Verify that NFS daemons are running.
Four nfsd and four biod daemons should be running (the default number specified in /etc/config/nfsd.options and /etc/config/biod.options). Verify that the appropriate NFS daemons are running using the ps command, shown below. The output of your entries should look similar to the output in these examples:
ps -ef | grep nfsd
   root   102     1  0   Jan 30  ?      0:00 /usr/etc/nfsd 4
   root   104   102  0   Jan 30  ?      0:00 /usr/etc/nfsd 4
   root   105   102  0   Jan 30  ?      0:00 /usr/etc/nfsd 4
   root   106   102  0   Jan 30  ?      0:00 /usr/etc/nfsd 4
   root  2289  2287  0 14:04:50  ttyq4  0:00 grep nfsd
ps -ef | grep biod
   root   107     1  0   Jan 30  ?      0:00 /usr/etc/biod 4
   root   108     1  0   Jan 30  ?      0:00 /usr/etc/biod 4
   root   109     1  0   Jan 30  ?      0:00 /usr/etc/biod 4
   root   110     1  0   Jan 30  ?      0:00 /usr/etc/biod 4
   root  2291  2287  4 14:04:58  ttyq4  0:00 grep biod
If no NFS daemons appear in your output, they were not included in the IRIX kernel during NFS installation. To check the kernel, enter the following:
strings /unix | grep nfs
If there is no output, rebuild the kernel with this command, then reboot the system:
/etc/autoconfig -f
Verify that mount daemons are registered with the portmapper.
Mount daemons must be registered with the server's portmapper so the portmapper can provide port numbers to incoming NFS requests. Verify that the mount daemons are registered with the portmapper by entering this command:
/usr/etc/rpcinfo -p | grep mountd
After your entry, you should see output similar to this:
100005  1  tcp  1230  mountd
100005  1  udp  1097  mountd
391004  1  tcp  1231  sgi_mountd
391004  1  udp  1098  sgi_mountd
The sgi_mountd in this example is an enhanced mount daemon that reports on SGI-specific export options.
Edit the /etc/exports file.
Edit the /etc/exports file to include the file systems you want to export and their export options (/etc/exports and export options are explained in “Operation of /etc/exports and Other Export Files” in Chapter 2). This example shows one possible entry for the /etc/exports file:
/usr/demos -ro,access=client1:client2:client3
In this example, the file system /usr/demos is exported with read-only access to three clients: client1, client2, and client3. Domain information can be included in the client names, for example client1.eng.sgi.com.
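Export options can be combined in the same way for other access modes. The entry below is a hypothetical example (the /usr/local path and the rw and root options are illustrative, not taken from this server's configuration; see the exports(4) man page for the full option list). It grants read-write access to all listed clients and root privileges to one trusted client:

```
/usr/demos  -ro,access=client1:client2:client3
/usr/local  -rw,access=client1:client2,root=client1
```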
Run the exportfs command.
Once the /etc/exports file is complete, you must run the exportfs command to make the file systems accessible to clients. You should run exportfs anytime you change the /etc/exports file. Enter the following command:
/usr/etc/exportfs -av
In this example, the –a option exports all file systems listed in the /etc/exports file, and the –v option causes exportfs to report its progress. Error messages reported by exportfs usually indicate a problem with the /etc/exports file.
Use exportfs to verify your exports.
Type the exportfs command with no parameters to display a list of the exported file system(s) and their export options, as shown in this example:
/usr/etc/exportfs
/usr/demos -ro,access=client1:client2:client3
In this example, /usr/demos is accessible as a read-only file system to systems client1, client2, and client3. This matches what is listed in the /etc/exports file for this server (see instruction 6 of this procedure). If you see a mismatch between the /etc/exports file and the output of the exportfs command, check the /etc/exports file for syntax errors.
The NFS software for this server is now running and its resources are available for mounting by clients. Repeat these instructions to set up additional NFS servers.
To set up an NFS client for conventional mounting, you must:
verify that NFS software is running on the client.
edit the /etc/fstab file to add the names of directories to be mounted.
mount directories in /etc/fstab by giving the mount command or by rebooting your system. These directories remain mounted until you explicitly unmount them.
Note: For instructions on mounting directories not listed in /etc/fstab, see “Temporary NFS Mounting” in Chapter 5.
The procedure below explains how to set up NFS software on a client and mount its NFS resources using the mount command. You must do this procedure as the superuser.
Use versions to verify the correct software has been installed on the client:
# versions | grep nfs
I  nfs               07/09/97  Network File System, 6.5
I  nfs.man           07/09/97  NFS Documentation
I  nfs.man.nfs       07/09/97  NFS Support Manual Pages
I  nfs.man.relnotes  07/09/97  NFS Release Notes
I  nfs.sw            07/09/97  NFS Software
I  nfs.sw.autofs     07/09/97  AutoFS Support
I  nfs.sw.cachefs    07/09/97  CacheFS Support
I  nfs.sw.nfs        07/09/97  NFS Support
I  nfs.sw.nis        07/09/97  NIS (formerly Yellow Pages) Support
This example shows NFS as I (installed). A complete listing of current software modules is contained in the ONC3/NFS Release Notes.
Use chkconfig to check the client's NFS configuration flag.
To verify that nfs is on, give the chkconfig command and check its output (see “Setting Up the NFS Server” in this chapter for details on chkconfig).
If your output shows that nfs is off, enter the following command and reboot your system:
/etc/chkconfig nfs on
Verify that NFS daemons are running.
Four nfsd and four biod daemons should be running (the default number specified in /etc/config/nfsd.options and /etc/config/biod.options). Verify that the appropriate NFS daemons are running using the ps command, shown below.
The output of your entries should look similar to the output in these examples:
ps -ef | grep nfsd
   root   102     1  0   Jan 30  ?      0:00 /usr/etc/nfsd 4
   root   104   102  0   Jan 30  ?      0:00 /usr/etc/nfsd 4
   root   105   102  0   Jan 30  ?      0:00 /usr/etc/nfsd 4
   root   106   102  0   Jan 30  ?      0:00 /usr/etc/nfsd 4
   root  2289  2287  0 14:04:50  ttyq4  0:00 grep nfsd
ps -ef | grep biod
   root   107     1  0   Jan 30  ?      0:00 /usr/etc/biod 4
   root   108     1  0   Jan 30  ?      0:00 /usr/etc/biod 4
   root   109     1  0   Jan 30  ?      0:00 /usr/etc/biod 4
   root   110     1  0   Jan 30  ?      0:00 /usr/etc/biod 4
   root  2291  2287  4 14:04:58  ttyq4  0:00 grep biod
If no NFS daemons appear in your output, they were not included in the IRIX kernel during NFS installation. To check the kernel, enter the following command:
strings /unix | grep nfs
If there is no output, rebuild the kernel with this command, then reboot the system:
/etc/autoconfig -f
Edit the /etc/fstab file.
Add an entry to the /etc/fstab file for each NFS directory you want mounted when the client is booted. The example below illustrates an /etc/fstab with an NFS entry to mount /usr/demos from the server redwood at mount point /n/demos:
/dev/root           /         xfs  rw,raw=/dev/rroot  0  0
/dev/usr            /usr      xfs  rw,raw=/dev/rusr   0  0
redwood:/usr/demos  /n/demos  nfs  ro,bg              0  0
Note: The background (bg) option in this example allows the client to proceed with the boot sequence without waiting for the mount to complete. If the bg option is not used, the client hangs if the server is unavailable.
Create the mount points for each NFS directory.
After you edit the /etc/fstab file, create a directory to serve as the mount point for each NFS entry in /etc/fstab file. If you specified an existing directory as a mount point for any of your /etc/fstab entries, remember that the contents of the directory are inaccessible while the NFS mount is in effect.
For example, to create the mount point /n/demos for mounting the directory /usr/demos from server redwood, enter the following command:
mkdir -p /n/demos
Mount each NFS resource.
You can use the mount command in several ways to mount the entries in this client's /etc/fstab. See the mount(1M) man page for a description of the options. The examples below show two methods: mounting each entry individually and mounting all fstab entries that specify a particular server. The first example is:
mount /n/demos
In this example, only the mount point is specified. All other information needed to perform the mount, the server name redwood and its resource /usr/demos, is provided by the /etc/fstab file.
The second example is:
mount -h redwood
In this example, all NFS entries in /etc/fstab that specify server redwood are mounted.
Note: If you reboot the client instead of using the mount command, all NFS entries in /etc/fstab are mounted.
The NFS software for this client is now ready to support user requests for NFS directories. Repeat these instructions to set up additional NFS clients.
Since the automatic mounters run only on NFS clients, all setup for the automatic mounters is done on the client system. This section provides two procedures for setting up the automatic mounters: one for setting up a default automount or autofs environment (autofs is recommended) and one for setting up a more complex environment.
If you set up a default automatic mounter environment on a client, at system startup automount (or autofs) reads the /etc/config/automount.options file (or /etc/config/autofs.options file) for mount information. By default, the options file contains an entry for a special map called –hosts. The –hosts map tells the automatic mounter to read the hosts database from the Unified Naming Service (see the nsswitch.conf(4) man page) and use the server specified if the hosts database has a valid entry for that server. With the –hosts map, when a client accesses a server, the automatic mounter gets the exports list from the server and mounts all directories exported by that server. automount uses /tmp_mnt/hosts as the mount point, and autofs uses /hosts.
A sample –hosts entry in /etc/config/automount.options is:
-v /hosts -hosts -nosuid,nodev
Use this procedure to set up the default automatic mounter environment on an NFS client. You must do this procedure as the superuser.
Verify that NFS flags are on.
By default, the nfs and autofs (or automount) flags are set to on. To verify that they are on, give the chkconfig command and check its output (see instruction 2 of “Setting Up an NFS Client” in this chapter for sample chkconfig output).
If the command output shows that nfs and autofs (or automount) are set to off, enter either of these sets of commands to reset them, then reboot:
/etc/chkconfig nfs on
/etc/chkconfig autofs on

or

/etc/chkconfig nfs on
/etc/chkconfig automount on
Verify that the default configuration is working:
cd /hosts/servername
In place of servername, substitute the hostname of any system whose name can be resolved by the hostname resolution method you are using (see the resolver(4) man page). If the system specified is running NFS and has file systems that can be accessed by this client, autofs mounts all available file systems to /hosts/servername (automount uses /tmp_mnt/hosts/servername). If the system is not running NFS or has nothing exported that you have access to, you get an error message when you try to access its file systems.
Verify that directories have been mounted, for example:
mount
servername:/ on /hosts/servername type nfs (rw,dev=c0005)        (for autofs)

or

servername:/ on /tmp_mnt/hosts/servername type nfs (rw,dev=c0005)  (for automount)
The automatic mounter has serviced this request. It dynamically mounted /hosts/servername using the default automatic mounter environment.
A customized automatic mounter environment allows you to select the NFS directories that are dynamically mounted on a particular client, and allows you to customize the options in effect for particular mounts. You must complete four general steps to set up a customized automount environment:
Creating the maps.
Starting the automatic mounter program.
Verifying the automatic mounter process.
Testing the automatic mounter.
A customized automatic mounter environment contains a master map and any combination of direct and indirect maps. Although a master map is required, the automatic mounter does not require both direct and indirect maps. You can use either direct or indirect maps exclusively. AutoFS comes with a default /etc/auto_master file that can be modified.
Instructions for creating each type of map are given below. Notice from these instructions that a crosshatch (#) at the beginning of a line indicates a comment line in all types of maps. Include comment lines in your maps to illustrate map formats until you become familiar with each map type.
Create or modify the master map on the client.
The master map points the automatic mounter to other files that have more detailed information needed to complete NFS mounts. To create the master map, become superuser and create a file called /etc/auto.master (for automount) with any text editor. With AutoFS, modify the default /etc/auto_master file.
Specify the mount point, map name, and any options that apply to the direct and indirect maps in your entries, for example:
#Mount Point   Map Name            Map Options
/food/dinner   /etc/auto.food      -ro
/-             /etc/auto.exercise  -ro,soft
/hosts         -hosts              -nosuid,nodev
Create the indirect map.
Create your indirect map and insert the entries it needs. This example is the indirect map /etc/auto.food, listed in /etc/auto.master (or /etc/auto_master) in instruction 1:
#Directory  Options  Location
ravioli              venice:/food/pasta
crepe       -rw      paris:/food/desserts
chowmein             hongkong:/food/noodles
Create the direct map.
Create your direct map and insert the entries it needs. This example is the direct map /etc/auto.exercise, listed in /etc/auto.master (or /etc/auto_master) in instruction 1:
#Directory       Options  Location
/leisure/swim             spitz:/sports/water/swim
/leisure/tennis           becker:/sports/racquet/tennis
/leisure/golf             palmer:/sports/golf
You can set up the software on a client so that the automatic mounter starts when the client is booted, and you can also start the automatic mounter from the command line. The procedures in this section explain how to set up the automatic mounter to start during the boot sequence.
If the automatic mounter is configured on at system startup, the /etc/init.d/network script reads the contents of the /etc/config/automount.options file (or /etc/config/autofs.options and /etc/auto_master files for autofs) to determine how to start the automatic mounter program, what to mount, and how to mount it. Depending on the site configuration specified in the options file, the automatic mounter either finds all necessary information in the options file, or it is directed to local or NIS maps (or both) for additional mounting information.
If you plan to use NIS database maps other than the –hosts built-in map, you need to create the NIS maps. See the NIS Administrator Guide for information on building custom NIS maps. Follow this procedure to set the automatic mounter to start automatically at system startup:
Configure the automatic mounter on by using the chkconfig command (if needed) as follows:
/etc/chkconfig automount on

or

/etc/chkconfig autofs on
Modify the /etc/config/automount.options file (or /etc/auto_master file).
Using any standard editor, modify the /etc/config/automount.options (or /etc/auto_master) file to reflect the automatic mounter site environment. (See automount(1M) or autofs(1M) man pages for details on the options file). Based on the previous examples, the /etc/config/automount.options file contains this entry:
-v -m -f /etc/auto.master
The /etc/config/autofs.options file contains this entry:
-v
The –v option directs error messages to the screen during startup and into the /var/adm/SYSLOG file once the automatic mounter is up and running. The –m option tells automount not to check the NIS database for a master map. Use this option to isolate map problems to the local system by inhibiting automount from reading the NIS database maps, if any exist. The –f option tells automount that the argument that follows it is the full pathname of the master file.
Note: In general, it is recommended that you start the automatic mounter with the verbose option (–v), since this option provides messages that can help with problem solving.
Reboot the system.
Verify that the automatic mounter process is functioning by performing the following two steps.
Validate that the automatic mounter daemon is running by using the ps command, as follows:
ps -ef | grep automount

or

ps -ef | grep autofs
You should see output similar to this for automount:
root   455     1  0   Jan 30  ?      0:02 automount -v -m -f /etc/auto.master
root  4675  4673  0 12:45:05  ttyq5  0:00 grep automount
You should see output similar to this for autofs:
root   555     1  0   Jan 30  ?      0:02 autofs -v - /etc/auto_master
root  4775  4773  0 12:45:05  ttyq5  0:00 grep autofs
Check the /etc/mtab entries.
When the automatic mounter program starts, it creates entries in the client's /etc/mtab for each of the automatic mounter's mount points. Entries in /etc/mtab include the process number and port number assigned to the automatic mounter, the mount point for each direct map entry, and each indirect map. The /etc/mtab entries also include the map name, map type (direct or indirect), and any mount options.
Look at the /etc/mtab file. A typical /etc/mtab table with automount running looks similar to this example (wrapped lines end with the \ character):
/dev/root / xfs rw,raw=/dev/rroot 0 0
/dev/usr /usr xfs rw,raw=/dev/rusr 0 0
/debug /debug dbg rw 0 0
/dev/diskless /diskless xfs rw,raw=/dev/rdiskless 0 0
/dev/d /d xfs rw,raw=/dev/rd 0 0
flight:(pid12155) /src/sgi ignore \
    ro,port=885,map=/etc/auto.source,direct 0 0
flight:(pid12155) /pam/framedocs/nfs ignore \
    ro,port=885,map=/etc/auto.source,direct 0 0
flight:(pid12155) /hosts ignore ro,port=885,\
    map=-hosts,indirect,dev=1203 0 0
A typical /etc/mtab table with autofs running looks similar to this example:
-hosts on /hosts type autofs (ignore,indirect,nosuid,dev=1000010)
-hosts on /hosts2 type autofs \
    (ignore,indirect,nosuid,vers=2,dev=100002)
-hosts on /hosts3 type autofs \
    (ignore,indirect,fstype=cachefs,backfstype=nfs,dev=100003)
/etc/auto_test on /text type autofs \
    (ignore,indirect,ro,nointr,dev=100004)
neteng:/ on /hosts2/neteng type nfs \
    (nosuid,vers=2,dev=180004)
The entries corresponding to automount mount points have the file system type ignore to direct programs to ignore this /etc/mtab entry. For instance, df and mount do not report on file systems with the type ignore. When a directory is NFS mounted by the automount program, the /etc/mtab entry for the directory has nfs as the file system type. df and mount report on file systems with the type nfs.
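Because df and mount skip entries of type ignore, you can list only the real mounts by filtering the mount table yourself. The sketch below is not part of the NFS software; it runs over a small sample in the standard mtab layout (device, mount point, type, options), and on a live system you would feed it /etc/mtab instead of the here-document:

```shell
#!/bin/sh
# Print the mount point and type of every entry whose file system
# type is not "ignore" (field 3 in the standard mtab layout).
# On a live system, replace the here-document with: cat /etc/mtab
awk '$3 != "ignore" { print $2, "(" $3 ")" }' <<'EOF'
/dev/root / xfs rw,raw=/dev/rroot 0 0
flight:(pid12155) /hosts ignore ro,port=885,map=-hosts,indirect,dev=1203 0 0
redwood:/usr/demos /n/demos nfs ro,bg 0 0
EOF
# prints:
# / (xfs)
# /n/demos (nfs)
```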
When the automatic mounter program is set up and running on a client, any regular account can use it to mount remote directories transparently. You can test your automatic mounter setup by changing to a directory specified in your map configuration.
The instructions below explain how to verify that the automatic mounter is working.
As a regular user, enter the cd command to change to an automounted directory.
For example, to test whether the automatic mounter mounts /food/pasta:
cd /food/dinner/ravioli
This command causes the automatic mounter to look in the indirect map /etc/auto.food to execute a mount request to server venice and apply any specified options to the mount. automount then mounts the directory /food/pasta to the default mount point /tmp_mnt/food/dinner/ravioli. The directory /food/dinner/ravioli is a symbolic link to /tmp_mnt/food/dinner/ravioli. autofs mounts the directory /food/pasta to the default mount point /food/dinner/ravioli.
Note: The /food/dinner directory appears empty unless one of its subdirectories has been accessed (and therefore mounted).
Verify that the individual mount has taken place.
Use the pwd command to verify that the mount has taken place, as shown in this example:
% pwd
/food/pasta
Verify that both directories have been automatically mounted.
You can also verify automounted directories by checking the output of a mount command:
% mount
mount reads the current contents of the /etc/mtab file and includes conventionally mounted and automounted directories in its output.
The custom configuration of automount is set up and ready to work for users on this client.
The NFS lock manager provides file and record locking between a client and server for NFS-mounted directories. The lock manager is implemented by two daemons, lockd and statd (see the lockd(1M) and statd(1M) man pages). Both are installed as part of NFS software.
The NFS lock manager program must be running on both the NFS client and the NFS server to function properly. Use this procedure to check the lock manager setup:
Use chkconfig on the client to check the lock manager flag.
To verify that the lockd flag is on, enter the chkconfig command and check its output (see instruction 2 of “Setting Up an NFS Client” in this chapter for sample chkconfig output). If your output shows that lockd is off, enter the following command and reboot your system:
/etc/chkconfig lockd on
Verify that rpc.statd and either nlockmgr or nfsd are running.
Enter the following commands and check their output to verify that the lock manager daemons, rpc.statd and either nlockmgr or nfsd, are running:
# ps -ef | grep statd
    root   131    1  0   Aug 6   ?      0:51 /usr/etc/rpc.statd
    root  2044  427  2 16:13:24  ttyq1  0:00 grep statd
# rpcinfo -p nabokov | grep nlockmgr
    100021  1  udp  2049  nlockmgr
    100021  3  udp  2049  nlockmgr
    100021  4  udp  2049  nlockmgr
# ps -ef | grep nfsd
    root   102    1  0   Jan 30  ?      0:00 /usr/etc/nfsd 4
    root   104  102  0   Jan 30  ?      0:00 /usr/etc/nfsd 4
If rpc.statd is not running, start it manually by giving the following command:
/usr/etc/rpc.statd
If neither rpc.lockd nor nfsd is running, start rpc.lockd manually by entering the following command:
/usr/etc/rpc.lockd
Repeat instructions 1 and 2, above, on the NFS server, using rpc.lockd instead of nfsd.
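The daemon checks above can be wrapped in a small script so they are easy to repeat on each client and server. This is a hedged sketch, not part of the ONC3/NFS distribution: the daemon names follow the examples in this section, and the suggested start-up paths assume the IRIX /usr/etc layout.

```shell
#!/bin/sh
# Report whether each lock manager daemon appears in the process table.
# Daemon names and the /usr/etc paths follow the examples in this
# section; adjust them for your system if they differ.
is_running() {
    # grep -v grep drops the grep command itself from the listing
    ps -ef | grep "$1" | grep -v grep > /dev/null
}

for daemon in rpc.statd rpc.lockd; do
    if is_running "$daemon"; then
        echo "$daemon: running"
    else
        echo "$daemon: not running -- start it, e.g. /usr/etc/$daemon"
    fi
done
```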
When you set up a cache, you can use all or part of an existing file system. You can also set up a new slice to be used by CacheFS. In addition, when you create a cache, you can specify the percentage of resources, such as number of files or blocks, that CacheFS can use in the front file system. The configurable cache parameters are discussed in the section “Cache Resource Parameters in CacheFS” in Chapter 2.
Before starting to set up CacheFS, check that it is configured to start on the client.
Check the CacheFS configuration flag.
When the /etc/init.d/network script executes at system startup, it starts CacheFS running if the chkconfig flag cachefs is on.
To verify that cachefs is on, enter the chkconfig command and check its output, for example:
/etc/chkconfig
        Flag                 State
        ====                 =====
        ...
        cachefs              on
        ...
This example shows that the cachefs flag is set to on.
If your output shows that cachefs is off, enter the following command and reboot your system:
/etc/chkconfig cachefs on
CacheFS uses a local XFS file system for the front file system. You can use an existing XFS file system for the front file system, or you can create a new one. Using an existing file system is the quickest way to set up a cache. Dedicating a file system exclusively to CacheFS gives you the greatest control over the file system space available for caching.
Caution: Do not make the front file system read-only and do not set quotas on it. A read-only front file system prevents caching, and file system quotas interfere with control mechanisms built into CacheFS.
There are two steps to setting up a cached file system:
Create the cache using the cfsadmin command. See “Creating a Cache”. Normally the cache directory is created with default parameters when you use the mount command. If you want to create the cache directory with different parameters, follow the procedures in “Creating a Cache”.
Mount the file system you want cached using the -t cachefs option to the mount command. See “Mounting a Cached File System”.
The following example is the command to use to create a cache using the cfsadmin command:
# cfsadmin -c directory_name
The following example creates a cache and creates the cache directory /local/mycache. Make sure the cache directory does not already exist.
# cfsadmin -c /local/mycache
This example uses the default cache parameter values. The CacheFS parameters are described in the section “Cache Resource Parameters in CacheFS” in Chapter 2. See the cfsadmin(1M) man page and “Cached File System Administration” in Chapter 2 for more information on cfsadmin options.
The following example shows how to set parameters for a cache.
cfsadmin -c -o parameter_list cache_directory
The parameter_list has the following form:
parameter_name1=value,parameter_name2=value,...
The parameter names are listed in Table 2-2. You must separate multiple arguments to the –o option with commas.
Note: The maximum size of the cache is by default 90% of the front file system resources. Performance deteriorates significantly if an XFS file system exceeds 90% capacity.
The following example creates a cache named /local/cache1 that can use a maximum of 80% of the disk blocks in the front file system and can cache up to a high-water mark of 60% of the front file system blocks before starting to remove files.
cfsadmin -c -o maxblocks=80,hiblocks=60 /local/cache1
The following example creates a cache named /local/cache2 that can use up to 75% of the files available in the front file system:
cfsadmin -c -o maxfiles=75 /local/cache2
The following example creates a cache named /local/cache3 that can use 75% of the blocks in the front file system, that can cache up to a high-water mark of 60% of the front file system files before starting to remove files, and that has 70% of the files in the front file system as an absolute limit.
cfsadmin -c -o \
maxblocks=75,hifiles=60,maxfiles=70 /local/cache3
There are two ways to mount a file system in a cache:
Using the mount command
Creating an entry for the file system in the /etc/fstab file
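The mount command form is shown below. For the /etc/fstab method, an entry along the following lines causes the cached file system to be mounted at boot time. This is a sketch only: the server merlin and the cache directory /local/cache1 are carried over from the examples in this section, and the exact option list should be checked against the fstab(4) and mount(1M) man pages for your release:

```
merlin:/docs  /docs  cachefs  backfstype=nfs,cachedir=/local/cache1,ro,bg  0  0
```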
The following command mounts a file system in a cache.
mount -t cachefs back_file_system mount_point
The cache directory is automatically created when mounting a cached file system.
For example, the following command makes the file system merlin:/docs available as a cached file system named /docs:
mount -t cachefs -o backfstype=nfs merlin:/docs /docs
Use the backpath argument when the file system you want to cache has already been mounted. The backpath argument specifies the mount point of the mounted file system. When the backpath argument is used, the back file system must be already mounted as read-only. If you want to write to the back file system, you must unmount it before mounting it as a cached file system.
For example, if the file system merlin:/doc is already NFS-mounted on /nfsdocs, you can cache that file system by giving that pathname as the argument to backpath, as shown in the following example:
mount -t cachefs -o \
backfstype=nfs,cachedir=/local/cache1,backpath=/nfsdocs \
merlin:/doc /doc
Note: There is no performance gain in caching a local XFS disk file system.
So far, examples have illustrated back file systems that are NFS-mounted, and the device argument to the mount command has taken the form server:file_system. If the back file system is an ISO9660 file system, the device argument is the CD-ROM device in the /CDROM directory. The file system type is iso9660.
The following example illustrates caching an ISO9660 back file system on the device /CDROM as /doc in the cache /local/cache1:
mount -t cachefs -o backfstype=iso9660,cachedir=/local/cache1,\
ro,backpath=/CDROM /CDROM /doc
Because you cannot write to the CD-ROM, the ro argument is specified to make the cached file system read-only. The arguments to the -o option are explained in “Operation of /etc/fstab and Other Mount Files” in Chapter 2.
You must specify the backpath argument because the CD-ROM is automatically mounted when it is inserted. The mount point is in the /CDROM directory and is determined by the name of the CD-ROM. The special device to mount is the same as the value for the backpath argument.
Note: When a CD-ROM is changed, the CacheFS file system must be unmounted and remounted.