[molpro-user] parallel molpro and PBSPro on more than 1 node
Ulrike Nitzsche
U.Nitzsche at ifw-dresden.de
Mon Sep 12 19:42:53 BST 2011
Dear Manhui,
it is so simple! Thank you very much for the last piece of the puzzle.
After spending a lot of time on openmpi, mvapich2, configure and compiler
options (finally successful), I was not able to see the wood for the trees
when it came to the batch script...
Best regards, Ulli.
On Mon, Sep 12, 2011 at 01:06:34PM +0100, Manhui Wang wrote:
> Dear Ulrike,
>
> There is a problem when running Molpro with openmpi/infiniband on
> multiple nodes, and the internal reason is still unknown (you may have seen
> that there are a number of options for building and running OpenMPI;
> adjusting these options might help). However, with mvapich2 or intelmpi
> over infiniband, Molpro runs well. I regularly run Molpro jobs with
> mvapich2/Infiniband via PBSPro (pbs_version = PBSPro_10.0.0.82981) on
> multiple nodes, and it always works fine. Instead of using the separate MPI
> parallel job launcher mpiexec, I simply submit the Molpro job using a
> rather simple job script like the one below:
> =================================================
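> # Request 4 chunks of 8 CPUs each, with 8 MPI processes per chunk
> # (32 MPI processes in total), placed on 4 different nodes: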
> #PBS -l select=4:ncpus=8:mpiprocs=8
> #PBS -l place=scatter
> #PBS -l walltime=02:00:00
> #PBS -N Molprojob
>
>
> cd $PBS_O_WORKDIR
>
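> # $PBS_NODEFILE contains one line per MPI process requested above;
> # counting its lines gives the total number of processes: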
> NPROCS=`wc -l $PBS_NODEFILE | awk '{ print $1 }'`
>
> echo "NPROCS=$NPROCS"
>
> echo "Running Molpro job:"
>
> ./bin/molpro -n $NPROCS testjobs/h2o_vdz.test
>
> echo "All Done."
> ====================================================
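>
> With the resources requested above, $PBS_NODEFILE contains 4 x 8 = 32
> entries, so Molpro is started on 32 processes. Assuming the script is
> saved as, say, molpro.pbs, it is then submitted in the usual way:
>
> qsub molpro.pbs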
>
>
> Best wishes,
> Manhui
>
> On 09/09/11 14:45, Ulrike Nitzsche wrote:
>> Hello,
>> I'm asking for help in the following situation:
>>
>> We would like to use Molpro (2010.1pl23) on a cluster running
>> PBSPro_10.4.5.110853. The nodes are connected via InfiniBand.
>>
>> Usually we use the OpenMPI implementation for parallel computing without
>> any problem. I tried to compile Molpro with both openmpi-1.4.3 and the
>> beta version openmpi-1.5.4. In both cases the error reported in
>> this post:
>>
>> http://www.molpro.net/pipermail/molpro-user/2011-May/004382.html
>>
>> as well as an unexpected "eating of memory" in the parallel benchmarks are
>> observed when jobs run on more than one node.
>> As suggested, I tried compiling with mvapich2 (again with both the stable 1.6
>> and the beta 1.7rc1), and an interactive mpiexec of the jobs runs fine -
>> no finalization problem, no memory problem.
>> But: mvapich2 only works with PBSPro up to version 7 (the state of affairs
>> more than 5 years ago). Of course I asked for help on the mpiexec mailing
>> list and at PBSPro too, so far without any hints. Therefore I am asking here
>> on the application mailing list:
>>
>> 1. Are there any reports of running Molpro with OpenMPI and InfiniBand on
>> more than one node? If not, are there any ideas about the internal problems?
>> 2. Are there any reports of Molpro with mvapich2 and PBSPro version 8 or
>> above, using the select syntax to submit a job?
>> 3. Are there any other ideas on how to get Molpro running in parallel on
>> several hosts under the control of PBSPro?
>>
>> Thanks in advance for your help,
>> Ulrike Nitzsche
>>
>
> --
> -----------
> Manhui Wang
> School of Chemistry, Cardiff University,
> Main Building, Park Place,
> Cardiff CF10 3AT, UK
--
Ulrike Nitzsche | email: u.nitzsche at ifw-dresden.de
| phone: +49-351-4659-463
Standards are wonderful. Everyone should have one!