[molpro-user] parallel IO issue? (fwd)
Seth Olsen
seth.olsen at uq.edu.au
Wed Oct 12 09:43:58 BST 2011
It was a state-averaged single-point CASSCF calculation on the Malachite green dye. I set it to write out cube files of the different active-space orbitals, hence the hold-up writing to the same file. I think it was using the 2009 version.
Seth Olsen, Ph.D.
ARC Australian Research Fellow
School of Mathematics & Physics
The University of Queensland
Brisbane, QLD 4072
Australia
+61 7 3365 2816
On 12/10/2011, at 17:57, "Rika Kobayashi" <rxk900 at anusf.anu.edu.au> wrote:
> Our System Manager has picked up the following. I haven't had a chance to look at it
> but thought I'd throw it into the ether for comments.
> Seth, we might need more details about the job from you (e.g. job type and Molpro version).
> Machine details can be found at http://nf.nci.org.au/facilities/vayu/hardware.php.
>>
>> Hi Seth,
>>
>> This job
>>
>> vayu3:~ > nqstat 320238
>>                                %CPU  WallTime  Time Lim    RSS   vmem  memlim  cpus
>> 320238 R sco564 n62 mgc43s3.     30  35:11:02  48:00:00  163GB  175GB   180GB    16
>>
>> has slowed to a crawl and the reason appears to be that all processes are
>> writing to (overwriting) the same file:
>>
>> /short/m03/Arylmethanes/Malachite-Green/Isomer/c43s3.5.pvdz/mg.c43s3.5.pvdz.c87_orbital_87.1.cube
>>
>> Even if all the processes are writing the same data, it's not clear you'll
>> get a meaningful file out of this. And having multiple nodes write to the
>> same locations in a file is a disaster for performance. Is this "standard"
>> Molpro behaviour?
>>
>> David
>>
>
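For what it's worth, the usual cure for the contention David describes is to funnel file output through a single process. Below is a minimal sketch of that rank-0-only write pattern in MPI-style C; it is not Molpro's actual source, and the file name and data are hypothetical illustrations. Each rank computes its share of the grid, the values are gathered onto rank 0, and only rank 0 touches the file.

    /* Rank-0-only output: a sketch of the standard way to avoid many
     * MPI ranks clobbering one file. All names here are hypothetical. */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);

        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        /* Each rank computes its share of the grid (stand-in value here). */
        double local_value = (double)rank;
        double total = 0.0;

        /* Collect the data onto rank 0 rather than letting every rank write. */
        MPI_Reduce(&local_value, &total, 1, MPI_DOUBLE, MPI_SUM, 0,
                   MPI_COMM_WORLD);

        if (rank == 0) {
            /* Only one process opens the cube file, so there is no
             * contention and no risk of interleaved or overwritten records. */
            FILE *f = fopen("orbital_87.cube", "w");  /* hypothetical name */
            if (f != NULL) {
                fprintf(f, "%f\n", total);
                fclose(f);
            }
        }

        MPI_Finalize();
        return 0;
    }

The alternative, if every process really must write, is to give each rank its own file (e.g. by appending the rank to the file name) and merge afterwards; either way, no two processes should be writing to the same offsets in one file.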