<div dir="ltr">Dear prof. Joachim,<div><br></div><div style>Thank you very much for the response.</div><div style><br></div><div style>I followed the recomendation, and the reported error isn't occuring anymore. But some trajectories are dying with another error message, like:</div>
 ERROR WRITING 32768 WORDS AT OFFSET 466681080. TO FILE 5 IMPLEMENTATION=df FILE HANDLE= 1030 IERR=******
 ? Error
 ? I/O error
 ? The problem occurs in writew
This example occurred after ~120 timesteps. I used [ data,truncate,5 ]. Some other trajectories, using exactly the same options, didn't die.

Is there any way to avoid it?
Kind regards,

Gabriel


2013/6/30 Hans-Joachim Werner <werner@theochem.uni-stuttgart.de>:
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">This has nothing to do with the file size but only with the number of records on a file.<br>
You can use data,truncate to truncate or erase a file after each step of your calculation.<br>
For example<br>
<br>
data,truncate,,1<br>
<br>
erases File 2. But note that you might then loose restart information.<br>
At the end of each job step a list of records ist printed, and you should<br>
look at this in order to decide after how many records the file can be truncated.<br>
For more details please see manual.<br>
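For illustration only, a minimal sketch of how one job step might look with the truncation appended. The commands above the directive are placeholders for whatever the step already does, and the DATA arguments are simply the ones from the example above; they should be adjusted using the record list printed at the end of the job step:

    ! ... commands for one time step (HF, QCISD(T), gradient, etc.) ...
    ! After the step, free the records accumulated on file 2, as in the
    ! example above. Choose the arguments from the printed record list
    ! (see the DATA section of the manual for the exact fields).
    data,truncate,,1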
Best regards
Joachim Werner
On 29.06.2013, at 16:08, Gabriel Freitas <gabrielnfreitas@gmail.com> wrote:
> Dear Molpro developers / users,
>
> I'm running single-state, RHF-QCISD(T) AIMS trajectories in serial Molpro 2012 (non-trial version), and my jobs end prematurely with the following message (always on the 251st timestep of the job):
>
> More than 512 records on file 2
> ? Error
> ? Too many records
> ? The problem occurs in reservem
>
> It happens even though I export a large enough tmp directory (export TMPDIR=<tmpdir>) and add it with the -d option (molpro -d <tmpdir> job). The drive of this directory has 83 GB free.
>
> How can I avoid it? Can I "flush" the memory corresponding to the previous steps, given that the information I need is printed in the AIMS outputs?
>
> Best regards,
>
> Gabriel