[molpro-user] Molpro 2010 problem
Manhui Wang
wangm9 at cardiff.ac.uk
Mon Oct 4 18:05:31 BST 2010
Hello,
Is your file
/package/chem/molpro/2k10.1/bin/molprop_2010_1_Linux_x86_64_i8
the molpro wrapper script rather than the molpro executable itself?
What happens when you simply run the default molpro script on the local
node?
mpd &
./bin/molpro -n 4 JOB.com
The default parallel launcher setting in bin/molpro is something like
LAUNCHER="/path-mvapich2/bin/mpirun -machinefile %h -np %n %x"
This requires that mpd has been started first.
If you would like to use mpirun_rsh instead, simply change that line to
LAUNCHER="/path-mvapich2/bin/mpirun_rsh -ssh -hostfile %h -np %n %x"
Then you can use the molpro script to run Molpro jobs without having to
start mpd first, e.g.
./bin/molpro -n 4 JOB.com (run the job on the local node)
./bin/molpro --nodefile ./hosts JOB.com (run the job on the specified nodes)
./bin/molpro --nodefile ./PBSNODEFILE h2o_vdz.test (run the job on the nodes listed in ./PBSNODEFILE)
Best wishes,
Manhui
jyh-shyong wrote:
> Hi,
>
> Thanks for your reply. I checked and rebuilt my GA library, and ran the
> test:
>
> mpirun_rsh -ssh -hostfile ./hosts -np 4 ./global/testing/test.x
>
> .....
>
> CHECKING GA MEMORY RESTRICTIONS (MA not used)
>
>
>
>> Creating array 283 by 283 -- should succeed
> (need 160178 and 631600 out of 641600 bytes are left)
>
> success
>
>> Creating array 566 by 566 -- SHOULD FAIL
> (need 640712 and 470288 out of 641600 bytes are left)
>
> failure
>
>
>
> GA Statistics for process 0
> ------------------------------
>
> create destroy get put acc scatter gather read&inc
> calls: 11 10 5057 411 395 16 14 100
> number of processes/call 1.05e+00 1.22e+00 1.03e+00 3.50e+00 4.00e+00
> bytes total: 3.01e+06 1.48e+06 6.47e+04 2.15e+04 2.13e+04 8.00e+02
> bytes remote: 1.87e+06 1.03e+06 4.67e+04 1.67e+04 1.59e+04 8.00e+02
> Max memory consumed for GA by this process: 171312 bytes
>
>
> It seems OK. Then I rebuilt Molpro 2010.1, ran a test case, and got:
>
> testjobs at irish2:[1061] mpirun_rsh -ssh -hostfile ./hosts -np 4
> /package/chem/molpro/2k10.1/bin/molprop_2010_1_Linux_x86_64_i8 -o
> JOB.out JOB.com
> Permission denied.
> Some rank on 'irish2' exited without finalize.
> Cleaning up all processes ...
> done.
> Permission denied.
> Some rank on 'irish2' exited without finalize.
> Cleaning up all processes ...
> done.
> Permission denied.
> Some rank on 'irish2' exited without finalize.
> Cleaning up all processes ...
> done.
> Permission denied.
> Some rank on 'irish2' exited without finalize.
> Cleaning up all processes ...
> done.
> Some rank on 'irish2' exited without finalize.
> Cleaning up all processes ...
> done.
>
>
> Jyh-Shyong Ho
>
>
>
> On 2010/10/4 at 08:54 PM, Manhui Wang wrote:
>> Hello,
>>
>> It looks like something is wrong with mvapich2 or GA.
>> Does your mvapich2 work fine with a simple Hello World program?
>> Has your Global Arrays been built successfully? Has it passed the
>> tests? eg.
>> mpirun -np 4 ./global/testing/test.x
>> (please ensure mpd is running)
>>
>> Best wishes,
>> Manhui
>>
>>
>> Jyh-Shyong wrote:
>>> Hi Molpro users,
>>>
>>> I compiled the parallel version of Molpro 2010.01 with ifort and
>>> mvapich/GA 4.2; when I tried to run a test job I got the following
>>> error message:
>>>
>>> /opt/vltmpi/OPENIB/mpi/bin/mpirun -machinefile ./hosts -np 4
>>> /package/chem/molpro/2k10.1/bin/molprop_2010_1_Linux_x86_64_i8.exe
>>> JOB.com
>>> Permission denied.
>>> Some rank on 'irish3' exited without finalize.
>>> Cleaning up all processes ...
>>> done.
>>>
>>> Any idea?
>>>
>>> File ./hosts contains 4 lines:
>>>
>>> irish3
>>> irish3
>>> irish3
>>> irish3
>>>
>>> Thanks for your help.
>>>
>>> Jyh-Shyong Ho, Ph.D.
>>> Research Scientist
>>> National Center for High Performance Computing
>>> Hsinchu, Taiwan, ROC
>>>
>>>
>>> _______________________________________________
>>> Molpro-user mailing list
>>> Molpro-user at molpro.net
>>> http://www.molpro.net/mailman/listinfo/molpro-user
>
--
-----------
Manhui Wang
School of Chemistry, Cardiff University,
Main Building, Park Place,
Cardiff CF10 3AT, UK
Telephone: +44 (0)29208 76637