ccsd(t) single point fails after magic iteration number 6
Ilja Khavrutskii
ikhavru at emory.edu
Thu Nov 21 04:58:50 GMT 2002
Dear molpro users,
Perhaps I buried this too far down in the original message:
> >>> So I thought maybe this is due to disk space.
> >>> But when I run this on a machine with huge disks I get the same
> >>> error and DISK USED * 7.00 GB, even though the smallest scratch
> >>> directory is 15 GB.
If that is not convincing or informative, more details follow: Red Hat
7.3 with kernel 2.4.18, Molpro version 2002.3.
If I track the disk space used every minute during this failing job
(with a loop like the one sketched after the listings below), I see the
following around the 6th iteration:
from "df -l" command:
Filesystem           1k-blocks      Used Available Use% Mounted on
/dev/hda1              2015984   1375832    537744  72% /
none                   1030532         0   1030532   0% /dev/shm
/dev/sda1             17639220   1051104  15692096   7% /scratch1
/dev/hda3             56084596   4638416  48597220   9% /scratch2
then just before it dies:
Filesystem           1k-blocks      Used Available Use% Mounted on
/dev/hda1              2015984   1375844    537732  72% /
none                   1030532         0   1030532   0% /dev/shm
/dev/sda1             17639220   1051104  15692096   7% /scratch1
/dev/hda3             56084596   3801492  49434144   8% /scratch2
and then finally when it is dead:
Filesystem           1k-blocks      Used Available Use% Mounted on
/dev/hda1              2015984   1375812    537764  72% /
none                   1030532         0   1030532   0% /dev/shm
/dev/sda1             17639220        20  16743180   1% /scratch1
/dev/hda3             56084596      1476  53234160   1% /scratch2
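(For reference, by "track every minute" I mean nothing fancier than a
shell loop like this; the log file name is only an example.)

  # Append a timestamped "df -l" snapshot to a log once a minute
  # (/tmp/df-trace.log is just an example path).
  while true; do
      date  >> /tmp/df-trace.log
      df -l >> /tmp/df-trace.log
      sleep 60
  done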
So I wonder whether this still leaves any chance of it being "just a full disk".
Perhaps "df" is not a good way to monitor the disk space in this case.
If so would you let me know what would be more definitive way to state
that the disk limit is far from being reached?
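A more direct check I can think of (directory names are just examples,
and this is only a guess at what would be convincing) is to watch the
Molpro scratch files themselves rather than the partition totals, plus
the per-process file size limit:

  # List the individual scratch files and their sizes; a single file that
  # stops growing near a fixed size would point to a file-size limit
  # rather than a full disk (directory names are examples).
  ls -l  /scratch1 /scratch2
  du -sk /scratch1 /scratch2
  # Show the per-process file size limit ("unlimited" would rule it out).
  ulimit -f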
Tentatively it looks very much like a bug, maybe in Molpro or maybe in
the Linux kernel.
I noticed that while this job runs the system load is substantial, and
"kswapd" and "kupdated" occasionally take over a whole CPU.
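(To put numbers on this, I could run something like the following
alongside the job; the log file name is only an example.)

  # Sample memory, swap and block-I/O activity once a minute in the
  # background while the job runs (/tmp/vmstat-trace.log is an example).
  vmstat 60 >> /tmp/vmstat-trace.log &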
Anyway, I am very interested in a solution to this problem. Has anybody
gone over ~7 GB of disk usage with the 2.4.18 kernel and Molpro 2002.3
running UCCSD(T)? If so, could you help me?
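One crude sanity check I can also imagine (the file name is an example,
and this is only a guess at a possible cause) is to write a file of
roughly the failing size to the scratch partition outside of Molpro, to
see whether the filesystem itself balks somewhere around 7 GB:

  # Try to write an ~8 GB test file on the scratch partition; if this
  # fails, the problem is a filesystem/kernel file-size limit, not Molpro
  # (/scratch1/bigfile.test is just an example name).
  dd if=/dev/zero of=/scratch1/bigfile.test bs=1024k count=8000
  ls -l /scratch1/bigfile.test
  rm /scratch1/bigfile.test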
Best regards,
Ilja