<html dir="ltr">
<head>
<meta http-equiv="Content-Type" content="text/html; charset=utf-8">
<style id="owaParaStyle">P {
MARGIN-BOTTOM: 0px; MARGIN-TOP: 0px
}
</style>
</head>
<body>
<div style="direction: ltr;font-family: Tahoma;color: #000000;font-size: 10pt;">
<p>Dear Robert,</p>
<p> </p>
<p>This is not a Molpro issue but an OpenMPI one. From the mpirun man page (v1.8.4):</p>
<p> </p>
<p>"Please note that mpirun automatically binds processes as of the start of the v1.8 series. Two binding patterns are used in the absence of any further directives:
</p>
<dl><dt><b>Bind to core:</b> </dt><dd>when the number of processes is &lt;= 2 </dd><dt><b>Bind to socket:</b> </dt><dd>when the number of processes is &gt; 2 </dd></dl>
<p>If your application uses threads, then you probably want to ensure that you are either not bound at all (by specifying --bind-to none), or bound to multiple cores using an appropriate binding level or specific number of processing elements per application
process."</p>
<p> </p>
<p>You can make --bind-to none the global default by editing the openmpi-mca-params.conf file in the openmpi/etc directory and adding the line</p>
<p> </p>
<p>hwloc_base_binding_policy = none</p>
<p> </p>
<p>to the end of the file. (In the v1.8 series the binding policy is controlled by the hwloc_base_binding_policy parameter; the older orte_process_binding parameter applies to the v1.6 series.)</p>
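<p> </p>
<p>For example, assuming the openmpi-install prefix from the CONFIG below (adjust the path to wherever your OpenMPI is actually installed):</p>
<p> </p>
<p>echo "hwloc_base_binding_policy = none" &gt;&gt; /home/polly/molpro/TEST/Molpro.2012.1.18.par/src/openmpi-install/etc/openmpi-mca-params.conf</p>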
<p> </p>
<p>After this, concurrently running calculations should be placed on different cores.</p>
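<p> </p>
<p>Alternatively, if you prefer not to change the global OpenMPI defaults: the LAUNCHER line in your CONFIG below is the mpirun command Molpro uses, so adding the flag there (an untested sketch of the same idea) limits the change to Molpro jobs only:</p>
<p> </p>
<p>LAUNCHER=/home/polly/molpro/TEST/Molpro.2012.1.18.par/src/openmpi-install/bin/mpirun --mca mpi_warn_on_fork 0 --bind-to none -machinefile %h -np %n %x</p>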
<p> </p>
<p>Cheers,</p>
<p> </p>
<p>Heikki</p>
<p> </p>
<div style="FONT-SIZE: 16px; FONT-FAMILY: Times New Roman; COLOR: #000000">
<hr tabindex="-1">
<div id="divRpF720646" style="DIRECTION: ltr"><font color="#000000" size="2" face="Tahoma"><b>Lähettäjä:</b> Molpro-user [molpro-user-bounces@molpro.net] käyttäjän molpro-user [molpro-user@molpro.net] puolesta<br>
<b>Lähetetty:</b> 29. tammikuuta 2015 12:26<br>
<b>Vastaanottaja:</b> molpro-user@molpro4.chem.cf.ac.uk<br>
<b>Aihe:</b> [molpro-user] Fwd: Problems with parallel build on AMD Opteron(tm) Processor 6376<br>
</font><br>
</div>
<div></div>
<div>
<div style="FONT-SIZE: 10pt; FONT-FAMILY: Verdana,Arial,Helvetica,sans-serif"><br>
<div class="zmail_extra">
<div id="1"><br>
============ Forwarded message ============<br>
From: Robert Polly &lt;polly@kit.edu&gt;<br>
To: &lt;molpro-user@molpro.net&gt;<br>
Date: Fri, 23 Jan 2015 12:50:51 +0000<br>
Subject: Problems with parallel build on AMD Opteron(tm) Processor 6376<br>
============ Forwarded message ============<br>
</div>
<blockquote style="PADDING-LEFT: 6px; MARGIN: 0px 0px 0px 5px; BORDER-LEFT: #0000ff 1px solid">
<br>
<br>
Dear MOLPRO community, <br>
We installed MOLPRO on our new Opteron cluster (64 CPUs per node): <br>
<br>
CPU: AMD Opteron(tm) Processor 6376 <br>
Compiler: ifort/icc <br>
Openmpi: openmpi-1.8.3 <br>
Molpro: 2012.1.18 <br>
<br>
CONFIG file: <br>
<br>
# MOLPRO CONFIG generated at Fri Jan 9 10:31:38 MET 2015, for host <br>
master.hpc1.ine, SHA1=50d6e5f7071a51146f1443020887856fd3d38933 <br>
<br>
CONFIGURE_OPTIONS="-icc" "-ifort" "-mpp" "-openmpi" "-mppbase" <br>
"/home/polly/molpro/TEST/Molpro.2012.1.18.par/src/openmpi-install/include" <br>
<br>
AR=ar <br>
ARCHNAME=Linux/x86_64 <br>
ARFLAGS=-rS <br>
AWK=awk <br>
BIBTEX= <br>
BLASLIB=-L/pub/hpc/module/compilers/intel/xe2015/composer_xe_2015.0.090/mkl/lib/intel64
<br>
-lmkl_intel_ilp64 -lmkl_sequential -lmkl_core <br>
BUILD=p <br>
CAT=cat <br>
CC=/pub/hpc/module/compilers/intel/xe2015/composer_xe_2015.0.090/bin/intel64/icc <br>
CCVERSION=15.0.0 <br>
CC_FRONT= <br>
CDEBUG=-g $(addprefix $(CDEFINE),_DEBUG) <br>
CDEFINE=-D <br>
CFLAGS=-ftz <br>
-I/home/polly/molpro/TEST/Molpro.2012.1.18.par/src/openmpi-install/include <br>
CLDFLAGS= <br>
CLEAN=echo 'target clean only available with git cloned versions, please <br>
unpack the tarball again' <br>
CMPPINCLUDE=/home/polly/molpro/TEST/Molpro.2012.1.18.par/src/openmpi-install/include
<br>
COPT=-O2 <br>
COPT0=-O0 <br>
COPT1=-O1 <br>
COPT2=-O2 <br>
COPT3=-O3 <br>
CP=cp -p <br>
CPROFILE=-p <br>
CUDACC= <br>
CUDACCVERSION= <br>
CUDACDEBUG=-g $(addprefix $(CUDACDEFINE),_DEBUG) <br>
CUDACDEFINE=-D <br>
CUDACFLAGS= <br>
CUDACOPT= <br>
CUDACOPT0=-O0 <br>
CUDACOPT1=-O1 <br>
CUDACOPT2=-O2 <br>
CUDACOPT3=-O3 <br>
CUDACPROFILE=-p <br>
CXX=/pub/hpc/module/compilers/intel/xe2015/composer_xe_2015.0.090/bin/intel64/icpc
<br>
CXXFLAGS=$(CFLAGS) <br>
DOXYGEN=/bin/doxygen <br>
ECHO=/bin/echo <br>
EXPORT=export <br>
F90FLAGS=-stand f03 <br>
FC=/pub/hpc/module/compilers/intel/xe2015/composer_xe_2015.0.090/bin/intel64/ifort
<br>
FCVERSION=15.0.0 <br>
FDEBUG=-g $(addprefix $(FDEFINE),_DEBUG) <br>
FDEFINE=-D <br>
FFLAGS=-i8 -pc64 -auto -warn nousage -align array32byte -cxxlib <br>
FLDFLAGS= <br>
FOPT=-O3 <br>
FOPT0=-O0 <br>
FOPT1=-O1 <br>
FOPT2=-O2 <br>
FOPT3=-O3 <br>
FPROFILE=-p <br>
FSTATIC= <br>
HOSTFILE_FORMAT=%N <br>
INSTALL_FILES_EXTRA=src/openmpi-install/bin/mpirun <br>
src/openmpi-install/bin/orterun <br>
INSTBIN= <br>
INST_PL=0 <br>
INTEGER=8 <br>
LAPACKLIB= <br>
LATEX2HTML= <br>
LAUNCHER=/home/polly/molpro/TEST/Molpro.2012.1.18.par/src/openmpi-install/bin/mpirun
<br>
--mca mpi_warn_on_fork 0 -machinefile %h -np %n %x <br>
LD_ENV=/pub/hpc/module/compilers/intel/xe2015/composer_xe_2015.0.090/compiler/lib/intel64:/pub/hpc/module/compilers/intel/xe2015/composer_xe_2015.0.090/mkl/lib/intel64
<br>
LD_ENVNAME=LD_LIBRARY_PATH <br>
LIBRARY_SUFFIX=a <br>
LIBS=-lpthread <br>
/home/polly/molpro/TEST/Molpro.2012.1.18.par/src/boost-install/lib/libboost_system.a
<br>
/home/polly/molpro/TEST/Molpro.2012.1.18.par/src/boost-install/lib/libboost_thread.a
<br>
-lrt <br>
LN=ln -s <br>
MACROS=MOLPRO NDEBUG MOLPRO_f2003 MOLPRO_bug3990 MPI2 HAVE_BOOST_THREADS <br>
HAVE_SSE2 _I8_ MOLPRO_INT=8 BLAS_INT=8 LAPACK_INT=8 MOLPRO_AIMS <br>
MOLPRO_NECI _MOLCAS_MPP_ MOLPRO_BLAS MOLPRO_LAPACK <br>
MAKEDEPEND_OPTIONS= <br>
MAKEINDEX= <br>
MAPLE= <br>
MAX_INCREMENT_LIBRARY=0 <br>
MKDIR=mkdir -p <br>
MODULE_FLAG=-I <br>
MODULE_SUFFIX=mod <br>
MPILIB=-I/home/polly/molpro/TEST/Molpro.2012.1.18.par/src/openmpi-install/lib <br>
-Wl,-rpath <br>
-Wl,/home/polly/molpro/TEST/Molpro.2012.1.18.par/src/openmpi-install/lib <br>
-Wl,--enable-new-dtags <br>
-L/home/polly/molpro/TEST/Molpro.2012.1.18.par/src/openmpi-install/lib -lmpi_usempif08 <br>
-lmpi_usempi_ignore_tkr -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lm <br>
-lpciaccess -ldl -lrt -losmcomp -libverbs -lrdmacm -lutil -lpsm_infinipath <br>
MPPLIB= <br>
OBJECT_SUFFIX=o <br>
OPT0=B88.F copyc6.F <br>
OPT1=explicit_util.F avcc.F koopro4.F dlaed4.F frequencies.F optg.F <br>
OPT2=tstfnc.F dftgrid.F mrf12_singles.F90 basis_integral_shells.F <br>
integrals.F90 geminal.F surface.F gcc.F90 <br>
OPT3= <br>
PAPER=a4paper <br>
PARSE=parse-Linux-x86_64-i8.o <br>
PDFLATEX= <br>
PNAME=molprop_2012_1_Linux_x86_64_i8 <br>
PREFIX=/usr/local/molpro/molprop_2012_1_Linux_x86_64_i8 <br>
PTSIZE=11 <br>
PYTHON=/bin/python <br>
RANLIB=ranlib <br>
RM=rm -rf <br>
SHELL=/bin/sh <br>
STRIP=strip <br>
SUFFIXES=F F90 c cpp <br>
TAR=tar -cf <br>
UNTAR=tar -xf <br>
VERBOSE=@ <br>
VERSION=2012.1 <br>
XSD=/bin/xmllint --noout --schema <br>
XSLT=/bin/xsltproc <br>
YACC=bison -b y <br>
<br>
.SUFFIXES: <br>
MAKEFLAGS+=-r <br>
ifneq ($(LD_ENVNAME),) <br>
$(LD_ENVNAME):=$(LD_ENV):$($(LD_ENVNAME)) <br>
endif <br>
<br>
<br>
We encounter the problem that when two MOLPRO jobs run on one node <br>
with 8 processes each, the second job runs on the same 8 processors <br>
already allocated by the first job, although there are still 56 free processors on the machine. <br>
<br>
Any suggestions on how to solve this problem? <br>
<br>
Best regards, <br>
Robert <br>
<br>
-- <br>
<br>
********************************************************************* <br>
<br>
Karlsruher Institut für Technologie (KIT) <br>
Institut fuer Nukleare Entsorgung <br>
<br>
Dr. Robert Polly <br>
<br>
Quantum Chemistry <br>
<br>
Institut fuer Nukleare Entsorgung (INE), Campus Nord, Building 712, <br>
P.O. Box 3640, 76021 Karlsruhe, Germany <br>
<br>
0049-(0)721-608-24396 <br>
<br>
email: <a href="mailto:polly@kit.edu" target="_blank">polly@kit.edu</a> <br>
www: <a href="http://www.fzk.de/ine" target="_blank">http://www.fzk.de/ine</a> <br>
<br>
KIT - University of the State of Baden-Württemberg and <br>
National Research Center of the Helmholtz Association <br>
<br>
********************************************************************* <br>
<br>
<br>
</blockquote>
<br>
</div>
<br>
</div>
</div>
</div>
</div>
</body>
</html>