Porting Octopus and Platform-Specific Instructions

This page needs updating!

This page contains information about Octopus portability, with specific instructions for compiling and running Octopus on many architectures. If you have managed to compile Octopus on a different system, please contribute. Warning: this information is quite out of date and may no longer be valid.

General information and tips about compilers

SSE2 support
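
On 32-bit x86, compilers generally do not enable SSE2 by default; it has to be requested explicitly. A minimal sketch for gcc/gfortran (flags differ for other compilers):

 CFLAGS="-O2 -msse2"
 FCFLAGS="-O2 -msse2"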

Operating systems

Linux

The main development operating system for Octopus.

Solaris

Octopus compiles correctly with either the Sun compilers or gcc/gfortran. By default Solaris does not have the GNU coreutils, so some tests won't run.

Tru64

It works.

Mac OS X

It works. Don't try to compile static binaries; they are not supported by the OS.

Windows

Toy operating systems are not supported for the moment, sorry.

Compilers

Intel Compiler for x86/x86_64
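
A rough starting point, not an official recommendation (-xHost targets the instruction set of the build machine):

 FC=ifort
 FCFLAGS="-O3 -xHost"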

Intel Compiler for Itanium

Open64

This is an open-source compiler based on the liberated code of the SGI MIPSpro compiler. It is available for the x86, x86_64, and Itanium architectures.

Pathscale Fortran Compiler

FCFLAGS="-Wall -O3 -march=auto -mcpu=auto -OPT:Ofast -fno-math-errno"

NAG compiler

AMD64:


FCFLAGS="-colour -kind=byte -mismatch_all -abi=64 -ieee=full -O4 -Ounroll=4"

GNU C Compiler (gcc)

GNU Fortran (gfortran)
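
A rough starting point (note that -ffast-math trades strict IEEE behavior for speed):

 FC=gfortran
 FCFLAGS="-O3 -funroll-loops -ffast-math"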

[https://www.g95.org/ g95]


FC=g95
FCFLAGS="-O3 -funroll-loops -ffast-math"
FCCPP="cpp -ansi -P"

There may be problems with versions 0.92 or 0.93, depending on the underlying version of gcc. See [[G95]] for info on building version 0.94 with gcc 4.2.4.

Portland 6

Flags:


FCFLAGS="-fast -mcmodel=medium -O4"

Known problems:

The following problem with the PGI compiler version 6.0 and MPICH version 1.2.6 on x86_64 has been reported:

The MPI detection during the configure step does not work properly. This may lead to compilation failures in, e.g., the file par_vec.F90. This problem is considered a bug in either the PGI compiler or the MPICH implementation. Please apply the following change by hand after running configure:

In the file config.h, replace the line


/* #undef MPI_H */

by


#define MPI_H 1

and remove the line


#define MPI_MOD 1
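
If you script your builds, the same edit can be automated; a minimal sketch, assuming GNU sed:

 # after running configure, enable MPI_H and drop MPI_MOD in config.h
 sed -i -e 's|/\* #undef MPI_H \*/|#define MPI_H 1|' -e '/#define MPI_MOD 1/d' config.h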

Portland 7, 8, 9

Flags (tested on Cray XT4):


FCFLAGS="-O2 -Munroll=c:1 -Mnoframe -Mlre -Mscalarsse -Mcache_align -Mflushz"

The configure script may fail in the part "checking for Fortran libraries of mpif90" with autoconf version 2.59 or earlier. The solution is to update autoconf to 2.60 or later, or to set FCLIBS manually on the configure command line to remove a spurious apostrophe.
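
For illustration only (the library list and paths below are hypothetical; copy the value configure detected, minus the stray apostrophe):

 ./configure FC=mpif90 FCLIBS="-L/usr/local/mpich/lib -lmpichf90 -lmpich"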

Portland 10

For Octopus 3.2.0, the file src/basic/lookup.F90 is incorrectly optimized, yielding many segmentation faults in the testsuite. With PGI 10.5 the optimization flag should be -O2 or less; with PGI 10.8 it should be -O1 or less. Note that -fast and -fastsse are between -O2 and -O3. For later versions of Octopus, a PGI pragma forces this file to be compiled with -O0 regardless of what is specified in FCFLAGS, so you may safely set FCFLAGS to -fast.

Portland 11

11.4 does not work and will crash with glibc memory corruption errors. 11.7 is fine.

Portland 12

12.5 and 12.6 cannot compile Octopus due to internal compiler errors of this form:

 PGF90-S-0000-Internal compiler error. sym_of_ast: unexpected ast    6034 (simul_box.F90: 1103)

12.4 and 12.9 are OK.

Absoft

Flags x86:


FCFLAGS="-O3 -YEXT_NAMES=LCS -YEXT_SFX=_"

Flags amd64/em64t:


FCFLAGS="-O3 -mcmodel=medium -m64 -cpu:host -YEXT_NAMES=LCS -YEXT_SFX=_"

Compaq compiler


FCFLAGS="-align dcommons -fast -tune host -arch host -noautomatic"

Xlf
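
A hedged sketch of commonly used flags for IBM XL Fortran (untested here; adjust -qarch/-qtune for your machine):

 FC=xlf90
 FCFLAGS="-O3 -qarch=auto -qtune=auto"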

SGI MIPS

FCFLAGS="-O3 -INLINE -n32 -LANG:recursive=on"

Sun Studio

You can download this compiler for free; it supports Linux and Solaris on x86, amd64, and SPARC. It is a very fast compiler, but quite buggy.

MPI Implementations

OpenMPI

MPICH2

SGI MPT

Intel MPI

To make gfortran work with Intel MPI, some additional steps are required. Intel MPI does not ship the mpi_f08 module for gfortran by default, so one needs to create it using the binding kit.

cp $I_MPI_ROOT/opt/mpi/binding/intel-mpi-binding-kit.tar.gz .
tar -xf intel-mpi-binding-kit.tar.gz
cd f08
make MPI_INST=${I_MPI_ROOT} F90=gfortran NAME=gfortran

The .mod files are then in include/gfortran.

Once you have the .mod files, they can also be used with older versions of Intel MPI (tested with 2021.6, 2021.7, and 2021.11).
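
To build Octopus against these modules, point the compiler at the directory containing them; a minimal sketch, assuming the binding kit was unpacked in $HOME (the path is hypothetical):

 FCFLAGS="-O2 -I$HOME/f08/include/gfortran"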

Sun HPC ClusterTools

MVAPICH

NetCDF

Octopus uses the Fortran 90 interface of NetCDF, which means you will likely have to compile NetCDF with the same compiler you use to compile Octopus. You can get the sources and follow the installation instructions on the [https://www.unidata.ucar.edu/software/netcdf/ NetCDF site].
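
A minimal sketch of such a build with gfortran (the prefix is just an example; recent NetCDF versions ship the Fortran interface as a separate netcdf-fortran package):

 FC=gfortran ./configure --prefix=$HOME/netcdf
 make
 make install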

BLAS and LAPACK

These are standard libraries that provide a series of common vector and matrix operations. Octopus uses these libraries as much as possible. Several implementations are available, depending on your hardware. Around 40% of Octopus execution time is spent in BLAS level-1 routines, so getting a fast implementation for your hardware may be important. LAPACK performance, on the other hand, is not very important.
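
For reference, linking a generic system BLAS/LAPACK typically looks like this on the Octopus configure line (library names and paths may differ on your system):

 ./configure --with-blas="-lblas" --with-lapack="-llapack"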

AMD ACML

This is the AMD Core Math Library, optimized for Athlon and Opteron processors. You can get a free copy from https://developer.amd.com/acml.jsp .

ATLAS

Compaq CXML

GOTO BLAS

Probably the fastest implementation of BLAS; source code is available, and it can be compiled on many architectures.

Intel MKL

See https://software.intel.com/en-us/articles/intel-mkl-link-line-advisor for MKL's advice on the proper way to link. Here is an example, in which --with-lapack is left blank because LAPACK is included in the --with-blas libraries.

 MKL_DIR=/opt/intel/mkl/lib/intel64
 --with-blas="-L$MKL_DIR -Wl,--start-group -lmkl_intel_lp64 -lmkl_sequential -lmkl_core -Wl,--end-group -lpthread"
 --with-blacs="$MKL_DIR/libmkl_blacs_intelmpi_lp64.a" --with-scalapack="$MKL_DIR/libmkl_scalapack_lp64.a"

Netlib

The reference implementation of BLAS and LAPACK. It is available in most Linux distributions. You can get the source code from https://www.netlib.org/blas and https://www.netlib.org/lapack .