[molpro-user] Job termination during Pipek-Mezey Localisation
Seth Olsen
seth.olsen at uq.edu.au
Thu Dec 21 22:46:05 CET 2017
It works for a smaller active space. I've run into things like this before when the CAS space is large, but the usual fix (increasing the memory) doesn't seem to be working this time.
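For reference, by "increasing the memory" I mean a larger memory card at the top of the input, along these lines (the value here is illustrative, not the exact one from the failing job):

    memory,500,m              ! per-process allocation in megawords; 500 MW = 4 GB per rank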
===========================
Seth Olsen, PhD.
Honorary Fellow
School of Mathematics & Physics
The University of Queensland
QLD 4072 Australia
Ph: +61 7 3365 2816
===========================
A PGP public key for this address has been uploaded to the key servers.
On 21 Dec 2017, at 5:25 AM, Benj FitzPatrick <benjfitz at gmail.com> wrote:
Hi Seth,
It has been a while since I used PM localization, especially with MCSCF. Out of curiosity, have you tried it without state-averaged orbitals? I have a nagging memory of older versions not liking them. A sketch of what I mean follows.
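Something like this: a single-state MULTI (no STATE or WEIGHT cards) with natural orbitals saved, then the localization reading that record. The orbital spaces below are placeholders for your (8,8) job, and I'm going from memory on the LOCALI directives, so double-check against the manual:

    {multi                    ! single state: drop any state,N / weight cards
     occ,12; closed,4         ! placeholder spaces; use your (8,8) setup
     natorb,2141.2}           ! save natural orbitals to record 2141.2
    {locali,pipek             ! Pipek-Mezey localization
     orbital,2141.2}          ! read the natural orbitals saved above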
Thanks,
Benj FitzPatrick
On Tue, Dec 19, 2017 at 4:10 AM, Seth Olsen <seth.olsen at uq.edu.au> wrote:
Hi Molpro-User,
I've been having a job fail during orbital localization. It is a CASSCF (8 electrons in 8 orbitals) job. The output ends abruptly:
**********************************************************************************************************************************
Program * Orbital Localization Authors: W. Meyer, H.-J. Werner
Pipek-Mezey Localization
Molecular orbitals read from record 2141.2 Type=MCSCF/NATURAL
Density matrix read from record 2141.2 Type=MCSCF/CHARGE (state averaged)
…but the standard output has a little more detail:
--------------------------------------------------------------------------
MPI_ABORT was invoked on rank 1 in communicator MPI COMMUNICATOR 4 DUP FROM 0
with errorcode 15.
NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes.
You may or may not see output from other processes, depending on
exactly when Open MPI kills them.
--------------------------------------------------------------------------
0:Terminate signal was sent, status=: 15
(rank:0 hostname:r2398 pid:24716):ARMCI DASSERT fail. src/common/signaltrap.c:SigTermHandler():477 cond:0
--------------------------------------------------------------------------
mpirun noticed that process rank 7 with PID 24730 on node r2398 exited on signal 9 (Killed).
--------------------------------------------------------------------------
======================================================================================
Resource Usage on 2017-12-19 20:04:32:
Job Id: 2169493.r-man2
Project: tv58
Exit Status: 0
Service Units: 0.55
NCPUs Requested: 8 NCPUs Used: 8
CPU Time Used: 00:24:10
Memory Requested: 48.0GB Memory Used: 31.96GB
Walltime requested: 03:00:00 Walltime Used: 00:04:09
JobFS requested: 150.0GB JobFS used: 941.47MB
======================================================================================
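One thing I keep wondering about: signal 9 usually means the kernel or batch system killed the process for exceeding its memory limit, and (if I have it right) Molpro's memory card is per MPI rank in megawords (1 MW = 8 MB), so a bigger setting multiplies quickly across 8 ranks, e.g. (illustrative value):

    memory,700,m              ! 700 MW = 5.6 GB per rank; 8 ranks = ~45 GB of the 48 GB request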
Any ideas? Anyone seen this before?
Many Thanks,
Seth
===========================
Seth Olsen, PhD.
Honorary Fellow
School of Mathematics & Physics
The University of Queensland
QLD 4072 Australia
Ph: +61 7 3365 2816
===========================
A PGP public key for this address has been uploaded to the key servers.