Dear Manhui,

Thanks a lot for your detailed reply, it is very helpful. Sorry for the late answer, I had to run a lot of tests first. So far one version of molpro2009.1 is basically OK, but I still have some questions.

1. Compilation. Open MPI 1.3.3 compiles and links successfully both with and without GA 4.2. I do not use my own BLAS, just the default. This is my configure step:

./configure -batch -ifort -icc -mppbase $MPI_HOME/include64 -var LIBS="-L/usr/lib64 -libverbs -lm" -mpp

(In your letter I guess you forgot this necessary -mpp option.)
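For completeness, the whole sequence I run is roughly the following (just a sketch; $MPI_HOME points at my MPI installation, and the first line only checks that the -mppbase directory really contains mpi.h, as you advise below):

ls $MPI_HOME/include64/mpi.h
./configure -batch -ifort -icc -mppbase $MPI_HOME/include64 -var LIBS="-L/usr/lib64 -libverbs -lm" -mpp
make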
But Intel MPI failed for both; the error messages are shown separately below.

Intel MPI without GA:

make[1]: Nothing to be done for `default'.
make[1]: Leaving directory `/datastore/workspace/scheping/molpro2009.1_intelmpi/molpro2009.1/utilities'
make[1]: Entering directory `/datastore/workspace/scheping/molpro2009.1_intelmpi/molpro2009.1/src'
Preprocessing include files
make[1]: *** [common.log] Error 1
make[1]: *** Deleting file `common.log'
make[1]: Leaving directory `/datastore/workspace/scheping/molpro2009.1_intelmpi/molpro2009.1/src'
make: *** [src] Error 2

Intel MPI with GA:

compiling molpro_cvb.f
failed
molpro_cvb.f(1360): error #5102: Cannot open include file 'common/ploc'
include "common/ploc"
--------------^
compilation aborted for molpro_cvb.f (code 1)
make[3]: *** [molpro_cvb.o] Error 1
preprocessing perfloc.f
compiling perfloc.f
failed
perfloc.f(14): error #5102: Cannot open include file 'common/ploc'
include "common/ploc"
--------------^
perfloc.f(42): error #6385: The highest data type rank permitted is INTEGER(KIND=8). [VARIAT]
 if(.not.variat)then
--------------^
perfloc.f(42): error #6385: The highest data type rank permitted is INTEGER(KIND=8).
2. No .out file when I use more than about 12 processes, although I still get the .xml file. It is very strange: everything is fine when the number of processes is below 12, but once it exceeds that (for example 16 CPUs), Molpro always gives this error message:

forrtl: severe (174): SIGSEGV, segmentation fault occurred
Image              PC                Routine   Line      Source
libopen-pal.so.0   00002AAAAB4805C6  Unknown   Unknown   Unknown
libopen-pal.so.0   00002AAAAB482152  Unknown   Unknown   Unknown
libc.so.6          000000310FC5F07A  Unknown   Unknown   Unknown
molprop_2009_1_Li  00000000005A4C36  Unknown   Unknown   Unknown
molprop_2009_1_Li  00000000005A4B84  Unknown   Unknown   Unknown
molprop_2009_1_Li  000000000053E57B  Unknown   Unknown   Unknown
molprop_2009_1_Li  0000000000540A8C  Unknown   Unknown   Unknown
molprop_2009_1_Li  000000000053C5E5  Unknown   Unknown   Unknown
molprop_2009_1_Li  00000000004BCA5C  Unknown   Unknown   Unknown
libc.so.6          000000310FC1D8A4  Unknown   Unknown   Unknown
molprop_2009_1_Li  00000000004BC969  Unknown   Unknown   Unknown

Can I ignore this message?

3. Script error. For the Open MPI version of Molpro, the script molpro_openmpi1.3.3/bin/molprop_2009_1_Linux_x86_64_i8 does not seem to work: when I call it, only one process is started, even if I use -np 8. So I have to run it manually, for example:

mpirun -np 8 -machinefile ./hosts molprop_2009_1_Linux_x86_64_i8.exe test.com
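Here ./hosts is just an ordinary Open MPI machine file, one node per line; a minimal sketch (the node names are only placeholders for my cluster):

node01 slots=8
node02 slots=8

The slots=N part is the usual Open MPI hostfile syntax for how many processes may run on each node.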
4. Molpro with GA cannot run across nodes. A single node is fine, but across nodes I get a "molpro ARMCI DASSERT fail" error, and Molpro cannot be terminated normally. Do you know the difference between the builds with and without GA? If the GA build is not better than the one without GA, I will skip the GA version.

Sorry for packing up so many questions; an answer to any one of them will help me a lot, and I think questions 1 and 2 are the most important for me. Thanks.


On Thu, Sep 24, 2009 at 5:33 PM, Manhui Wang <wangm9@cardiff.ac.uk> wrote:
Hi He Ping,
Yes, you can build parallel Molpro without GA for 2009.1. Please see
the manual, section A.3.3 Configuration.

For the case of using the MPI-2 library, one example can be

./configure -mpp -mppbase /usr/local/mpich2-install/include

and the -mppbase directory should contain the file mpi.h. Please ensure the
built-in or freshly built MPI-2 library fully supports the MPI-2 standard
and works properly.

Actually we have tested molpro2009.1 on almost the same system as the one
you mentioned (EM64T, Red Hat Enterprise Linux Server release 5.3
(Tikanga), Intel MPI, ifort, icc, InfiniBand). Both the GA and MPI-2
builds work fine. The configurations are shown as follows (beware
of line wrapping):
(1) For Molpro2009.1 built with MPI-2:
./configure -batch -ifort -icc -blaspath
/software/intel/mkl/10.0.1.014/lib/em64t -mppbase $MPI_HOME/include64
-var LIBS="-L/usr/lib64 -libverbs -lm"

(2) For Molpro built with GA 4-2:
Build GA 4-2:
make TARGET=LINUX64 USE_MPI=y CC=icc FC=ifort COPT='-O3' FOPT='-O3' \
MPI_INCLUDE=$MPI_HOME/include64 MPI_LIB=$MPI_HOME/lib64 \
ARMCI_NETWORK=OPENIB MA_USE_ARMCI_MEM=y \
IB_INCLUDE=/usr/include/infiniband IB_LIB=/usr/lib64

mpirun ./global/testing/test.x

Build Molpro:
./configure -batch -ifort -icc -blaspath
/software/intel/mkl/10.0.1.014/lib/em64t -mppbase /GA4-2path -var
LIBS="-L/usr/lib64 -libverbs -lm"

(LIBS="-L/usr/lib64 -libverbs -lm" will make Molpro link with the InfiniBand
library)

(Some notes about MOLPRO built with the MPI-2 library can also be seen in the
manual, section 2.2.1 Specifying parallel execution.)
Note: for MOLPRO built with the MPI-2 library, when n processes are
specified, n-1 processes are used to compute and one process is used to
act as a shared counter server (in the case of n=1, one process is used to
compute and no shared counter server is needed). Even so, it is quite
competitive in performance when it is run with a large number of processes.
If you have built both versions, you can also compare the performance
yourself.


Best wishes,
Manhui

He Ping wrote:
> Hello,
>
> I want to run molpro2009.1 parallel version on infiniband network. I met
> some problems when using GA, from the manual, section 3.2, there is one
> line to say,
>
> If the program is to be built for parallel execution then the Global
> Arrays toolkit *or* the
> MPI-2 library is needed.
>
> Does that mean I can build molpro parallel version without GA? If so,
> who can tell me some more about how to configure?
> My system is EM64T, Red Hat Enterprise Linux Server release 5.1
> (Tikanga), intel mpi, intel ifort and icc.
>
> Thanks a lot.
>
> --
>
> He Ping
>
>
> ------------------------------------------------------------------------
>
> _______________________________________________
> Molpro-user mailing list
> Molpro-user@molpro.net
> http://www.molpro.net/mailman/listinfo/molpro-user

--
-----------
Manhui Wang
School of Chemistry, Cardiff University,
Main Building, Park Place,
Cardiff CF10 3AT, UK
Telephone: +44 (0)29208 76637


--

He Ping
[O] 010-58813311