Dear Manhui,<br><br>Thanks for your patience and your detailed answer. Let me run some tests and give you feedback.<br><br><div class="gmail_quote">On Fri, Oct 9, 2009 at 10:56 PM, Manhui Wang <span dir="ltr"><<a href="mailto:wangm9@cardiff.ac.uk" target="_blank">wangm9@cardiff.ac.uk</a>></span> wrote:<br>
<blockquote class="gmail_quote" style="border-left: 1px solid rgb(204, 204, 204); margin: 0pt 0pt 0pt 0.8ex; padding-left: 1ex;">Dear He Ping,<br>
GA can take full advantage of shared memory on an MPP node, but<br>
MPI-2 doesn't. On the other hand, the MPI-2 build may benefit from the<br>
machine's built-in MPI-2 library and its fast interconnect. The performance depends on<br>
many factors, including the MPI library, the machine, the network, etc. It is best<br>
to build both versions of Molpro and then choose the one that performs better on that<br>
machine.<br>
It doesn't seem hard to make GA 4-2 work on a machine like<br>
yours. Details were given in my previous email.<br>
<div><br>
Best wishes,<br>
Manhui<br>
<br>
He Ping wrote:<br>
</div><div>> Hi Manhui,<br>
><br>
> Thanks. I will try these patches first.<br>
> But would you tell me something about GA's effect on molpro? So far I<br>
> have not been able to build a GA version of molpro, so I do not know the<br>
> performance of the GA version. If the GA version is not much better than<br>
> the non-GA version, I will not spend much time building it.<br>
> Thanks a lot<br>
><br>
> On Fri, Oct 9, 2009 at 7:25 PM, Manhui Wang <<a href="mailto:wangm9@cardiff.ac.uk" target="_blank">wangm9@cardiff.ac.uk</a>><br>
</div><div>> wrote:<br>
><br>
</div><div><div></div><div>> Dear He Ping,<br>
> Recent patches include some bugfixes for Intel compiler 11,<br>
> OpenMPI, and running molpro across nodes with InfiniBand. If you have<br>
> not applied them yet, please do so now. They may resolve your existing<br>
> problems.<br>
><br>
> He Ping wrote:<br>
> > Dear Manhui,<br>
> ><br>
> > Thanks a lot for your detailed reply; that's very helpful. Sorry for the<br>
> > late answer, as I had to run a lot of tests. So far, one version of<br>
> > molpro2009.1 is basically OK, but I still have some questions.<br>
> ><br>
> > 1. Compile part.<br>
> > OpenMPI 1.3.3 compiles and links successfully both with and without GA<br>
> > 4.2. I do not use my own BLAS, so I use the default. This is my<br>
> > configure step:<br>
> ><br>
> > ./configure -batch -ifort -icc -mppbase $MPI_HOME/include64 -var<br>
> > LIBS="-L/usr/lib64 -libverbs -lm" -mpp (in your letter, I guess<br>
> > you forgot this necessary option.)<br>
> ><br>
> > But *Intel MPI failed for both*; I show the error messages<br>
> > separately below.<br>
> ><br>
> > *Intelmpi w/o GA: *<br>
> ><br>
> > make[1]: Nothing to be done for `default'.<br>
> > make[1]: Leaving directory<br>
> ><br>
> `/datastore/workspace/scheping/molpro2009.1_intelmpi/molpro2009.1/utilities'<br>
> > make[1]: Entering directory<br>
> ><br>
> `/datastore/workspace/scheping/molpro2009.1_intelmpi/molpro2009.1/src'<br>
> > Preprocessing include files<br>
> > make[1]: *** [common.log] Error 1<br>
> > make[1]: *** Deleting file `common.log'<br>
> > make[1]: Leaving directory<br>
> ><br>
> `/datastore/workspace/scheping/molpro2009.1_intelmpi/molpro2009.1/src'<br>
> > make: *** [src] Error 2<br>
> ><br>
> > *Intelmpi w GA:*<br>
> ><br>
> > compiling molpro_cvb.f<br>
> > failed<br>
> > molpro_cvb.f(1360): error #5102: Cannot open include file<br>
> > 'common/ploc'<br>
> > include "common/ploc"<br>
> > --------------^<br>
> > compilation aborted for molpro_cvb.f (code 1)<br>
> > make[3]: *** [molpro_cvb.o] Error 1<br>
> > preprocessing perfloc.f<br>
> > compiling perfloc.f<br>
> > failed<br>
> > perfloc.f(14): error #5102: Cannot open include file<br>
> 'common/ploc'<br>
> > include "common/ploc"<br>
> > --------------^<br>
> > perfloc.f(42): error #6385: The highest data type rank permitted<br>
> > is INTEGER(KIND=8). [VARIAT]<br>
> > if(.not.variat)then<br>
> > --------------^<br>
> > perfloc.f(42): error #6385: The highest data type rank permitted<br>
> > is INTEGER(KIND=8).<br>
><br>
> Which version of the Intel compilers are you using? Has your GA build worked fine?<br>
> We have tested Molpro2009.1 with<br>
> (1) intel/compilers/10.1.015 (11.0.074), GA 4-2 hosted by intel/mpi/3.1<br>
> (3.2)<br>
> (2) without GA, intel/compilers/10.1.015 (11.0.074), intel/mpi/3.1 (3.2)<br>
><br>
> Both work fine. Your CONFIG files would be helpful for diagnosing the problems.<br>
> ><br>
> > 2. No .out file when I use more than about 12 processes, though I can<br>
> > still get the .xml file. It's very strange: everything is OK when the<br>
> > number of processes is less than 12, but once it exceeds this, e.g. with 16<br>
> > CPUs, molpro always gives this error message:<br>
> ><br>
> > forrtl: severe (174): SIGSEGV, segmentation fault occurred<br>
> > Image PC Routine<br>
> > Line Source<br>
> > libopen-pal.so.0 00002AAAAB4805C6 Unknown<br>
> > Unknown Unknown<br>
> > libopen-pal.so.0 00002AAAAB482152 Unknown<br>
> > Unknown Unknown<br>
> > libc.so.6 000000310FC5F07A Unknown<br>
> > Unknown Unknown<br>
> > molprop_2009_1_Li 00000000005A4C36 Unknown<br>
> > Unknown Unknown<br>
> > molprop_2009_1_Li 00000000005A4B84 Unknown<br>
> > Unknown Unknown<br>
> > molprop_2009_1_Li 000000000053E57B Unknown<br>
> > Unknown Unknown<br>
> > molprop_2009_1_Li 0000000000540A8C Unknown<br>
> > Unknown Unknown<br>
> > molprop_2009_1_Li 000000000053C5E5 Unknown<br>
> > Unknown Unknown<br>
> > molprop_2009_1_Li 00000000004BCA5C Unknown<br>
> > Unknown Unknown<br>
> > libc.so.6 000000310FC1D8A4 Unknown<br>
> > Unknown Unknown<br>
> > molprop_2009_1_Li 00000000004BC969 Unknown<br>
> > Unknown Unknown<br>
> ><br>
> > Can I ignore this message?<br>
> Have you seen this on one node or on multiple nodes? If on multiple nodes, the<br>
> problem has been fixed by recent patches. By default, both the *.out and<br>
> *.xml files are produced, but you can use the option --no-xml-output to disable<br>
> the *.xml output.<br>
> In addition, OpenMPI sometimes seems to be unstable. When lots of jobs<br>
> are run with OpenMPI, some jobs hang unexpectedly. This behaviour is<br>
> not seen with Intel MPI.<br>
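> For example (just an illustration of the option; adjust the process count<br>
> and input name to your case):<br>
> ./bin/molpro -n 16 --no-xml-output test.com<br>
> should then write only the plain .out file.<br>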
><br>
> ><br>
> > 3. Script error. For the molpro OpenMPI version, the script<br>
> > molpro_openmpi1.3.3/bin/molprop_2009_1_Linux_x86_64_i8 does not seem<br>
> > to work.<br>
> > When I call this script, only one process is started, even if I<br>
> > use -np 8. So I have to run it manually, e.g.<br>
> > mpirun -np 8 -machinefile ./hosts<br>
> > molprop_2009_1_Linux_x86_64_i8.exe <a href="http://test.com" target="_blank">test.com</a><br>
</div></div>><br>
<div><div></div><div>> Has your ./bin/molpro worked? For me, it works fine. In ./bin/molpro,<br>
> some environment settings are included. If ./bin/molpro<br>
> doesn't work properly, you might want to use<br>
> molprop_2009_1_Linux_x86_64_i8.exe directly, but then it is your responsibility to<br>
> set up those environment variables yourself.<br>
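> As a rough sketch of what that would involve (the paths below are only<br>
> placeholders, not taken from the actual script):<br>
> export LD_LIBRARY_PATH=$MPI_HOME/lib64:$LD_LIBRARY_PATH<br>
> export TMPDIR=/scratch/$USER<br>
> mpirun -np 8 -machinefile ./hosts /path/to/bin/molprop_2009_1_Linux_x86_64_i8.exe test.com<br>
> Please check ./bin/molpro itself for the exact variables it sets.<br>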
> > 4. Molpro with GA cannot run across nodes. One node is OK, but if it<br>
> > runs across nodes, I get a "molpro ARMCI DASSERT fail" error, and molpro<br>
> > cannot be terminated normally. Do you know the difference between<br>
> > the builds with and without GA? If GA is not better, I will skip the<br>
> > GA version.<br>
> I think this problem has been fixed by recent patches.<br>
> As for the difference between molpro with GA and without GA, it is hard<br>
> to draw a simple conclusion. For calculations with a small number of<br>
> processes (e.g. < 8), molpro with GA might be somewhat faster, but molpro<br>
> without GA is quite competitive in performance when it is run with a<br>
> large number of processes. Please refer to the benchmark<br>
> results (<a href="http://www.molpro.net/info/bench.php" target="_blank">http://www.molpro.net/info/bench.php</a>).<br>
> ><br>
> ><br>
> ><br>
> > Sorry for piling up so many questions; an answer to any one of them<br>
> > will help me a lot. I think questions 1 and 2 are the more important<br>
> > ones for me. Thanks.<br>
> ><br>
> ><br>
> ><br>
><br>
> Best wishes,<br>
> Manhui<br>
><br>
><br>
><br>
> > On Thu, Sep 24, 2009 at 5:33 PM, Manhui Wang <<a href="mailto:wangm9@cardiff.ac.uk" target="_blank">wangm9@cardiff.ac.uk</a>><br>
</div></div><div><div></div><div>> > wrote:<br>
> ><br>
> > Hi He Ping,<br>
> > Yes, you can build parallel Molpro without GA for 2009.1. Please see<br>
> > section A.3.3 (Configuration) of the manual.<br>
> ><br>
> > For the case of using the MPI-2 library, one example is:<br>
> ><br>
> > ./configure -mpp -mppbase /usr/local/mpich2-install/include<br>
> ><br>
> > and the -mppbase directory should contain the file mpi.h. Please ensure<br>
> > that the built-in or freshly built MPI-2 library fully supports the MPI-2<br>
> > standard and works properly.<br>
> ><br>
> ><br>
> > Actually we have tested molpro2009.1 on almost the same system as the<br>
> > one you mentioned (EM64T, Red Hat Enterprise Linux Server release 5.3<br>
> > (Tikanga), Intel MPI, ifort, icc, InfiniBand). Both the GA and the MPI-2<br>
> > builds work fine. The configurations are shown as follows (beware of<br>
> > line wrapping):<br>
> > (1) For Molpro2009.1 built with MPI-2<br>
> > ./configure -batch -ifort -icc -blaspath<br>
> > /software/intel/mkl/10.0.1.014/lib/em64t -mppbase $MPI_HOME/include64<br>
> > -var LIBS="-L/usr/lib64 -libverbs -lm"<br>
> ><br>
> > (2) For Molpro built with GA 4-2:<br>
> > Build GA4-2:<br>
> > make TARGET=LINUX64 USE_MPI=y CC=icc FC=ifort COPT='-O3'<br>
> > FOPT='-O3' \<br>
> > MPI_INCLUDE=$MPI_HOME/include64 MPI_LIB=$MPI_HOME/lib64 \<br>
> > ARMCI_NETWORK=OPENIB MA_USE_ARMCI_MEM=y<br>
> > IB_INCLUDE=/usr/include/infiniband IB_LIB=/usr/lib64<br>
> ><br>
> > mpirun ./global/testing/test.x<br>
> > Build Molpro<br>
> > ./configure -batch -ifort -icc -blaspath<br>
> > /software/intel/mkl/10.0.1.014/lib/em64t -mppbase /GA4-2path -var<br>
> > LIBS="-L/usr/lib64 -libverbs -lm"<br>
> ><br>
> > (LIBS="-L/usr/lib64 -libverbs -lm" will make molpro link with<br>
> Infiniband<br>
> > library)<br>
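> > (To check that the executable is really linked against the InfiniBand<br>
> > library, something like<br>
> > ldd bin/molprop_2009_1_Linux_x86_64_i8.exe | grep ibverbs<br>
> > should list libibverbs; the exact path to the executable will depend on<br>
> > your installation.)<br>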
> ><br>
> > (some notes about MOLPRO built with the MPI-2 library can also be found<br>
> > in manual section 2.2.1, Specifying parallel execution)<br>
> > Note: for MOLPRO built with the MPI-2 library, when n processes are<br>
> > specified, n-1 processes are used to compute and one process is used<br>
> > to act as a shared counter server (in the case of n=1, one process is<br>
> > used to compute and no shared counter server is needed). Even so, it<br>
> > is quite competitive in performance when it is run with a large number<br>
> > of processes.<br>
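> > (For instance, a run started with mpirun -np 9 would use 8 processes<br>
> > for computation plus one as the shared counter server; this is just an<br>
> > illustration of the counting.)<br>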
> > If you have built both versions, you can also compare the<br>
> performance<br>
> > yourself.<br>
> ><br>
> ><br>
> > Best wishes,<br>
> > Manhui<br>
> ><br>
> > He Ping wrote:<br>
> > > Hello,<br>
> > ><br>
> > > I want to run the molpro2009.1 parallel version on an InfiniBand<br>
> > > network. I met some problems when using GA. In the manual, section 3.2,<br>
> > > there is one line that says:<br>
> > ><br>
> > > If the program is to be built for parallel execution then the Global<br>
> > > Arrays toolkit *or* the MPI-2 library is needed.<br>
> > ><br>
> > > Does that mean I can build the molpro parallel version without GA?<br>
> > > If so, could someone tell me more about how to configure it?<br>
> > > My system is EM64T, Red Hat Enterprise Linux Server release 5.1<br>
> > > (Tikanga), Intel MPI, Intel ifort and icc.<br>
> > ><br>
> > > Thanks a lot.<br>
> > ><br>
> > > --<br>
> > ><br>
> > > He Ping<br>
> > ><br>
> > ><br>
> > ><br>
> ><br>
> ------------------------------------------------------------------------<br>
> > ><br>
> > > _______________________________________________<br>
> > > Molpro-user mailing list<br>
> > > <a href="mailto:Molpro-user@molpro.net" target="_blank">Molpro-user@molpro.net</a> <mailto:<a href="mailto:Molpro-user@molpro.net" target="_blank">Molpro-user@molpro.net</a>><br>
</div></div>> <mailto:<a href="mailto:Molpro-user@molpro.net" target="_blank">Molpro-user@molpro.net</a> <mailto:<a href="mailto:Molpro-user@molpro.net" target="_blank">Molpro-user@molpro.net</a>>><br>
<div>> > > <a href="http://www.molpro.net/mailman/listinfo/molpro-user" target="_blank">http://www.molpro.net/mailman/listinfo/molpro-user</a><br>
> ><br>
> > --<br>
> > -----------<br>
> > Manhui Wang<br>
> > School of Chemistry, Cardiff University,<br>
> > Main Building, Park Place,<br>
> > Cardiff CF10 3AT, UK<br>
> > Telephone: +44 (0)29208 76637<br>
> ><br>
> ><br>
> ><br>
> ><br>
> > --<br>
> ><br>
> > He Ping<br>
> > [O] 010-58813311<br>
><br>
> --<br>
> -----------<br>
> Manhui Wang<br>
> School of Chemistry, Cardiff University,<br>
> Main Building, Park Place,<br>
> Cardiff CF10 3AT, UK<br>
> Telephone: +44 (0)29208 76637<br>
><br>
><br>
><br>
><br>
> --<br>
><br>
> He Ping<br>
> [O] 010-58813311<br>
<br>
</div>--<br>
<div><div></div><div>-----------<br>
Manhui Wang<br>
School of Chemistry, Cardiff University,<br>
Main Building, Park Place,<br>
Cardiff CF10 3AT, UK<br>
Telephone: +44 (0)29208 76637<br>
<br>
</div></div></blockquote></div><br><br clear="all"><br>-- <br><br>He Ping<br>[O] 010-58813311<br>