This chapter provides procedures for building MPI applications on Linux and IRIX systems. It includes examples of using the mpirun(1) command to launch MPI jobs, as well as procedures for building and running SHMEM applications.
To use the 64-bit MPI library, choose one of the following commands:
CC -64 compute.C -lmpi++ -lmpi
cc -64 compute.c -lmpi
f77 -LANG:recursive=on -64 compute.f -lmpi
f90 -LANG:recursive=on -64 compute.f -lmpi
To use the 32-bit MPI library, choose one of the following commands:
CC -n32 compute.C -lmpi++ -lmpi
cc -n32 compute.c -lmpi
f77 -n32 compute.f -lmpi
f90 -n32 compute.f -lmpi
If the Fortran 90 compiler version 7.2.1 or later is installed, you can add the -auto_use option to get compile-time checking of MPI subroutine calls, as follows:
f90 -auto_use mpi_interface -LANG:recursive=on -64 compute.f -lmpi
f90 -auto_use mpi_interface -n32 compute.f -lmpi
If your program does not perform MPI-2 one-sided operations like put and get to a local Fortran variable or array with the SAVE attribute, you can omit the -LANG:recursive=on option. Note that MPI-2 one-sided communication is not supported for the 32-bit MPI library, and so -LANG:recursive=on is not needed with -n32.
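For reference, the following minimal C sketch (a hypothetical example, not taken from this guide) shows what such MPI-2 one-sided operations look like: each process exposes an integer in a window and puts its rank into its right-hand neighbor's window.

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size, right, buf = -1;
    MPI_Win win;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Expose buf as a window that other processes can target */
    MPI_Win_create(&buf, sizeof(int), sizeof(int),
                   MPI_INFO_NULL, MPI_COMM_WORLD, &win);

    right = (rank + 1) % size;

    MPI_Win_fence(0, win);
    MPI_Put(&rank, 1, MPI_INT, right, 0, 1, MPI_INT, win);  /* one-sided put */
    MPI_Win_fence(0, win);

    printf("rank %d received %d\n", rank, buf);

    MPI_Win_free(&win);
    MPI_Finalize();
    return 0;
}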
If the MPI application also uses OpenMP directives, you should link the application with the libraries listed in the following order:
CC -mp -64 compute.C -lmp -lmpi++ -lmpi
cc -mp -64 compute.c -lmp -lmpi
f77 -mp -64 compute.f -lmp -lmpi
f90 -mp -64 compute.f -lmp -lmpi
This order is not required, but in certain cases it leads to better application performance. For further information about using hybrid applications, see “Tuning MPI/OpenMP Hybrid Codes” in Chapter 6.
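For reference, a minimal hybrid code might look like the following C sketch (a hypothetical example): each MPI process opens an OpenMP parallel region and reports its rank and thread number.

#include <mpi.h>
#include <omp.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Each MPI process runs its own team of OpenMP threads */
    #pragma omp parallel
    printf("MPI rank %d, OpenMP thread %d of %d\n",
           rank, omp_get_thread_num(), omp_get_num_threads());

    MPI_Finalize();
    return 0;
}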
If the MPI application uses the SGI pthreads library, use the following library order when linking the application:
CC -64 compute.C -lmpi++ -lmpi -lpthread
cc -64 compute.c -lmpi -lpthread
This order is necessary because the SGI MPI library contains internal initialization routines that might need to run before other initialization routines. The SGI libpthread.so library contains one such initialization routine, which can conflict with the MPI routines; the linkage order shown above ensures that they do not conflict.
Once the MPT RPM is installed as default, the default locations for the include files, the .so files, the .a files, and the mpirun command are pulled in automatically. The commands to build an MPI-based application using the .so files are as follows:
To use the 64-bit MPI library on Linux systems, choose one of the following commands:
g++ -o myprog myprog.C -lmpi++ -lmpi
gcc -o myprog myprog.c -lmpi
g77 -I/usr/include -o myprog myprog.f -lmpi
To compile programs on Linux with the Intel compiler, use the following commands:
efc -o myprog myprog.f -lmpi          (Fortran)
ecc -o myprog myprog.C -lmpi++ -lmpi  (C++)
ecc -o myprog myprog.c -lmpi          (C)
Note: You must use the Intel compiler to compile Fortran 90 programs.
You must use the mpirun command to start MPI applications. For complete specification of the command line syntax, see the mpirun(1) man page. This section summarizes the procedures for launching an MPI application.
To run an application on the local host, enter the mpirun command with the -np argument. Your entry must include the number of processes to run and the name of the MPI executable file.
The following example starts three instances of the mtest application, which is passed an argument list (arguments are optional):
mpirun -np 3 mtest 1000 "arg2"
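For illustration, an executable launched this way might be structured as in the following minimal C sketch (a hypothetical example; the real mtest program is not reproduced in this guide):

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Each of the three processes sees the same argument list */
    printf("process %d of %d, first argument: %s\n",
           rank, size, argc > 1 ? argv[1] : "(none)");

    MPI_Finalize();
    return 0;
}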
You are not required to use a different host in each entry that you specify on the mpirun command. You can launch a job that has multiple executable files on the same host. In the following example, one copy of prog1 and five copies of prog2 are run on the local host. Both executable files use shared memory.
mpirun -np 1 prog1 : 5 prog2
Note that for MPMD applications on IRIX systems, all of the executable files must be compiled with the same object format, either all 32-bit or all 64-bit.
You can use the mpirun command to launch a program that consists of any number of executable files and processes and you can distribute the program to any number of hosts. A host is usually a single machine, or it can be any accessible computer running Array Services software. For available nodes on systems running Array Services software, see the /usr/lib/array/arrayd.conf file.
You can list multiple entries on the mpirun command line. Each entry contains an MPI executable file and a combination of hosts and process counts for running it. This gives you the ability to start different executable files on the same or different hosts as part of the same MPI application.
The examples in this section show various ways to launch an application that consists of multiple MPI executable files on multiple hosts.
The following example runs ten instances of the a.out file on host_a:
mpirun host_a -np 10 a.out
When specifying multiple hosts, you can omit the -np option and list the number of processes directly. The following example launches ten instances of fred on three hosts. fred has two input arguments.
mpirun host_a, host_b, host_c 10 fred arg1 arg2
The following example launches an MPI application on different hosts with different numbers of processes and executable files:
mpirun host_a 6 a.out : host_b 26 b.out
Note: MPI spawn functionality is available on IRIX systems only.

The following example starts three instances of mtest in an MPI universe of ten processes, specified with the -up option:
mpirun -up 10 -np 3 mtest
By calling one of the MPI-2 spawn functions, mtest can then start up to seven more MPI processes, filling the remainder of the universe.
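The following minimal C sketch illustrates the idea (a hypothetical example; the executable name worker is assumed, not taken from this guide):

#include <mpi.h>

int main(int argc, char **argv)
{
    MPI_Comm children;
    int errcodes[7];

    MPI_Init(&argc, &argv);

    /* Spawn up to seven copies of a "worker" executable (a hypothetical
       name) into the remaining slots of the MPI universe. */
    MPI_Comm_spawn("worker", MPI_ARGV_NULL, 7, MPI_INFO_NULL,
                   0, MPI_COMM_WORLD, &children, errcodes);

    MPI_Finalize();
    return 0;
}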
To compile a 64-bit SHMEM application on IRIX systems, choose one of the following commands:
CC -64 compute.C -lsma
cc -64 compute.c -lsma
f77 -LANG:recursive=on -64 compute.f -lsma
f90 -LANG:recursive=on -64 compute.f -lsma
To use the 32-bit SHMEM library, choose one of the following commands:
CC -n32 compute.C -lsma
cc -n32 compute.c -lsma
f77 -LANG:recursive=on -n32 compute.f -lsma
f90 -LANG:recursive=on -n32 compute.f -lsma
Note: It is generally not recommended to compile SHMEM applications as 32-bit executable files.
If the Fortran 90 compiler version 7.2.1 or later is installed, you can add the -auto_use option to get compile-time checking of SHMEM subroutine calls, as follows:
f90 -auto_use shmem_interface -LANG:recursive=on -64 compute_shmem.f -lsma
f90 -auto_use shmem_interface -LANG:recursive=on -n32 compute_shmem.f -lsma
If your program does not perform SHMEM one-sided operations such as put and get to a local Fortran variable or array with the SAVE attribute, you can omit the -LANG:recursive=on option. This option prevents the compiler from holding these variables in registers across a subroutine call.
You do not need to use mpirun to launch SHMEM applications unless the MPI library was also linked with the application. Use the NPES environment variable to specify the number of SHMEM processes to use when running a SHMEM executable file. For example, the following command runs shmem_app on 32 processes:
% setenv NPES 32
% ./shmem_app
If MPI is also used in the executable file, you must use mpirun to launch the application, as if it were an MPI application.
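For illustration, a minimal SHMEM program in C might look like the following sketch (a hypothetical example using the SGI SHMEM interface declared in <mpp/shmem.h>): each PE puts its number into a symmetric variable on its right-hand neighbor.

#include <mpp/shmem.h>
#include <stdio.h>

long target;                      /* global, so symmetric across PEs */

int main(void)
{
    int me, npes, right;

    start_pes(0);                 /* PE count is taken from NPES on IRIX */
    me = _my_pe();
    npes = _num_pes();
    right = (me + 1) % npes;

    /* One-sided put into the right-hand neighbor's copy of target */
    shmem_long_p(&target, (long)me, right);
    shmem_barrier_all();

    printf("PE %d of %d received %ld\n", me, npes, target);
    return 0;
}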
To use the 64-bit SHMEM library on Linux systems, choose one of the following commands:
g++ compute.C -lsma
gcc compute.c -lsma
g77 -I/usr/include compute.f -lsma
To compile SHMEM programs on Linux systems with the Intel compiler, use the following commands:
ecc compute.C -lsma
ecc compute.c -lsma
efc compute.f -lsma
Unlike on IRIX systems, on Linux systems you must use mpirun to launch SHMEM applications. The NPES variable has no effect on SHMEM programs running on Linux; use the -np option of mpirun to request the desired number of processes.
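For example, the following command (a hypothetical invocation) runs shmem_app on 32 processes on a Linux system:

mpirun -np 32 ./shmem_app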
Currently, SHMEM programs on Linux are limited to a single host.