[molpro-user] An unsuccessful Molpro termination without an error message
Manhui Wang
wangm9 at cardiff.ac.uk
Wed Jun 16 13:50:12 BST 2010
Dear Oleg,
It appears to be a memory bottleneck on the nodes. As far as I can
see, the job can move forward beyond the output you mentioned if you
reduce the memory requested in the input. Your input requests at least
about 1500m words * 8 bytes * 4 processes = 48 GB on each node (a
worked version of this arithmetic follows the links below). Could you
please check whether that much is actually available?
Please be aware that the memory requested is only for the main memory
stack in Molpro; some parts of the code dynamically allocate their own
memory in addition, and there is also some static memory. For more
details about memory in Molpro, please refer to these previous
discussions:
http://www.molpro.net/pipermail/molpro-user/2010-April/003723.html
http://www.molpro.net/pipermail/molpro-user/2010-April/003718.html
http://www.molpro.net/pipermail/molpro-user/2003-January/000562.html
http://www.molpro.net/pipermail/molpro-user/2009-November/003399.html
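To make that arithmetic explicit, here is a minimal Python sketch (the
8-byte word size is standard for Molpro; the 4 processes per node are
taken from your description, and the actual per-node memory on your
machine is something to confirm with your site):

    # Per-node memory implied by "memory,1500,m" with 4 processes per node
    words_per_process = 1500 * 10**6   # memory,1500,m = 1500 million words
    bytes_per_word = 8                 # Molpro words are 8-byte (64-bit)
    procs_per_node = 4                 # job layout described in the report

    bytes_per_process = words_per_process * bytes_per_word
    bytes_per_node = bytes_per_process * procs_per_node
    print(bytes_per_process / 1e9, "GB per process")  # 12.0 GB per process
    print(bytes_per_node / 1e9, "GB per node")        # 48.0 GB per node

If the nodes cannot provide this (plus the dynamic and static
allocations mentioned above), a smaller memory card, for example
memory,800,m, is one way to test whether memory is the limiting factor.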
Regarding your additional questions: Molpro should work fine on mixed
clusters as long as the underlying MPI library works well across them.
For the differences between 2008.1 and 2009.1, please see
http://www.molpro.net/info/current/doc/update/node1.html.
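On the mixed-cluster point, one quick sanity check of the MPI layer is
a trivial rank/hostname test run with the same process layout as the
Molpro job (a minimal sketch using mpi4py, which is an assumption on my
part; any MPI hello-world from your site's stack serves equally well):

    # mpi_check.py - print each rank and the host it landed on
    from mpi4py import MPI
    import socket

    comm = MPI.COMM_WORLD
    print("rank %d of %d on %s"
          % (comm.Get_rank(), comm.Get_size(), socket.gethostname()))

If something like mpirun -np 80 python mpi_check.py (matching your
20 nodes x 4 processes) reports the hosts and counts you expect, the
MPI layer itself is unlikely to be the problem.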
Best wishes,
Manhui
Ol Ga wrote:
> Dear Molpro users and Developers,
>
> Would you please suggest how to handle the situation when MOLPRO 2009.1
> aborts unsuccessfully (the job is incomplete) but gives no error message?
>
> Additional information: MOLPRO on
> http://www.nersc.gov/nusers/systems/carver/
>
> 6 GB/process normally, but we used 20 nodes and reduced the number of
> processes per node to 4 in order to double the RAM per process. Hence, we
> can use 12 GB per process.
>
> ARCHNAME : Linux/x86_64
>
> FC : /usr/common/usg/pgi/10.3/linux86-64/10.3/bin/pgf90
>
> FCVERSION : 10.3
>
> BLASLIB : -L/usr/common/usg/mkl/10.2.2.025/lib/em64t -lmkl_intel_ilp64
> -lmkl_sequential -lmkl_core
>
> id : lbl
>
>
>
> Input file:
>
> ***,for molpro
>
> memory, 1500,m
>
> geomtyp=xyz
>
> geometry={
>
> 9
>
> C 1.2161039424 -0.2039410323 0.5982856002
>
> C 2.0014322291 -0.2678446520 -0.5478486196
>
> H 2.7259403611 0.4959259713 -0.7623311408
>
> H 1.7916050225 -0.9833194730 -1.3218753720
>
> H 0.6492539822 -1.0679929648 0.8949804060
>
> H 1.5335932676 0.4278736078 1.4072982739
>
> O -0.2716324651 0.9144777229 0.1781837837
>
> O -1.8618804350 -0.5856308591 0.3288621744
>
> O -1.2522679048 0.1780536793 -0.4556371056
>
> }
>
> basis={O=VDZ,C=VDZ,H=VDZ}
>
> {hf;wf,40,1,0,0}
>
> {multi,MAXIT=512;occ,24;closed,15;wf,40,1,0,0}
>
> {rs2,maxit=512}
>
> {optg,gradient=1.d-3,MAXIT=100,method=DIIS}
>
> ---
>
> The last part of the output file:
>
> 0.00000006 0.13D-09 0.52D-11 4616.28
>
> 9 1 1 1.22149666 -0.71718723 -303.19242479 0.00000000 -0.00000014 0.20D-10 0.56D-12 5108.19
>
> Energy contributions for state 1.1:
> ===================================
>
>                 Energy contr.   SQ.Norm of FOWF
> Space I           -0.01229182        0.00752070
> Space S           -0.21299291        0.08504430
> Space P           -0.49190250        0.12893166
>
> -------------- end of output
>
>
> May I ask some additional questions?
> 1) Is this really a huge task?
> 2) Is there any problem with running Molpro on mixed (shared/distributed
> memory = SMP/MPI) clusters?
> 3) Is there any difference between MOLPRO 2008 and MOLPRO 2009?
>
>
>
> Sincerely,
>
> PhD student Oleg B. Gadzhiev
> _______________________________________________
> Molpro-user mailing list
> Molpro-user at molpro.net
> http://www.molpro.net/mailman/listinfo/molpro-user
--
-----------
Manhui Wang
School of Chemistry, Cardiff University,
Main Building, Park Place,
Cardiff CF10 3AT, UK
Telephone: +44 (0)29208 76637