Global Arrays and ComEx Platform-Specific Notes
This page provides up-to-date information on the operating system configurations and settings relevant to Global Arrays and ComEx users on different platforms.
If you have questions, need support, or want to report a bug, contact the GA/ComEx developers at hpctools@googlegroups.com or visit our Google Group to browse prior postings to the list.
Platforms
- MPI - Progress Ranks (Recommended) (New in v5.4)
- MPI - Progress Threads (New in v5.4)
- MPI - Multithreading (New in v5.4)
- MPI - Two Sided
- Infiniband/OpenIB
MPI - Progress Ranks
The progress ranks port uses MPI (requires MPI-1). It is enabled with the --with-mpi-pr configure-time option.
Your application code must not rely on MPI_COMM_WORLD directly, because under this port some MPI ranks are set aside as progress ranks and are not application processes. Instead, you must use the MPI communicator that the GA library returns to you in place of any world communicator; this communicator contains only the application processes. Example code follows:
Fortran77:
      program main
      implicit none
#include "mpi.fh"
#include "global.fh"
#include "ga-mpi.fh"
      integer comm
      integer ierr
      call mpi_init(ierr)
      call ga_initialize()
      call ga_mpi_comm(comm)
!     use the returned comm as usual
      call ga_terminate()
      call mpi_finalize(ierr)
      end
C/C++:
#include <mpi.h>
#include "ga.h"
#include "ga-mpi.h"
int main(int argc, char **argv)
{
    MPI_Comm comm;
    MPI_Init(&argc, &argv);
    GA_Initialize();
    comm = GA_MPI_Comm();
    /* use the returned comm as usual */
    GA_Terminate();
    MPI_Finalize();
    return 0;
}
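A compile-and-run sketch for the C example above; the source file name, install prefix GA_INSTALL, and library list are assumptions and will differ between installations (if your GA 5.x installation provides a ga-config script, it reports the exact flags):
mpicc -o ga_comm_example ga_comm_example.c -I$GA_INSTALL/include -L$GA_INSTALL/lib -lga -larmci
mpiexec -n 8 ./ga_comm_example
Note that under the progress-ranks port some of the launched ranks serve as progress ranks, so the communicator returned by GA_MPI_Comm() contains fewer processes than were requested from mpiexec.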
MPI - Progress Threads
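A minimal configure sketch, assuming the progress-threads port is selected with the --with-mpi-pt option (verify against ./configure --help for your GA version):
./configure --with-mpi-pt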
MPI - Multithreading
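Similarly, a sketch assuming the multithreading port is selected with the --with-mpi-mt option and that your MPI library provides MPI_THREAD_MULTIPLE support (again, verify with ./configure --help):
./configure --with-mpi-mt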
Infiniband/OpenIB
- GA/ComEx is not supported on QLogic hardware, because QLogic's OFED interfaces do not work well with GA/ComEx.
- GA versions >= 4.1 support OpenIB. OpenIB support has been tested with OFED 1.2.5.1. Please send a message to hpctools@googlegroups.com with your system configuration details if you encounter any problems running on OFED 1.2.5.1 or later versions.
- Here is a sample set of environment settings for building GA on a 64-bit Linux cluster; a sketch of the build step follows the listing.
export TARGET=LINUX64
export ARMCI_NETWORK=OPENIB
# MPI Settings - Using MVAPICH here.
export MPI_HOME=/usr/mpi/gcc/mvapich-0.9.9
export MPI_LIB=$MPI_HOME/lib
export MPI_INCLUDE=$MPI_HOME/include
export LIBMPI="-lmpich"
# IB Settings (path to "ofed")
export IB_HOME=/opt/ofed
export IB_INCLUDE=$IB_HOME/include
export IB_LIB=$IB_HOME/lib64
export IB_LIB_NAME="-libverbs -libumad -lpthread -lrt"
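With these variables exported, the build is driven from the top of the GA source tree. The lines below are a sketch assuming the traditional make-based build used by GA 4.x; autotools-based GA 5.x releases instead select the network at configure time (e.g. ./configure --with-openib):
cd ga-4.1   # hypothetical path to the unpacked GA sources
make        # picks up TARGET, ARMCI_NETWORK, MPI_* and IB_* from the environment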