Hi Manhui,

Regarding how to use an NFS file system as the tmp directory: it still does not work with your solution

export TMPDIR=$SCRATCHPATH/$HOSTNAME

When I test this, TMPDIR is always determined by the node on which the job is submitted, so I run into the same problem as before.
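
For reference, the kind of per-node setup I am trying to achieve would look
roughly like the sketch below (a hypothetical wrapper script; the scratch
path is a placeholder, and I am assuming molprop.exe picks its scratch
directory up from the TMPDIR environment variable, as the default /tmp
behaviour suggests):

#!/bin/bash
# wrapper.sh (hypothetical): executed by every MPI rank, so $HOSTNAME is
# evaluated on the compute node itself rather than on the submitting node.
SCRATCHPATH=${SCRATCHPATH:-/scratch/path}   # placeholder scratch root
export TMPDIR=$SCRATCHPATH/$HOSTNAME
mkdir -p "$TMPDIR"                          # create the per-node directory
exec "$@"                                   # hand over to the real program

which would then be launched as, for example,

mpirun -np 2 -machinefile ./hosts ./wrapper.sh molprop.exe ./h2f_merge.com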
Could you offer some more suggestions?
Thanks.

On Mon, Oct 12, 2009 at 2:58 AM, Manhui Wang <wangm9@cardiff.ac.uk> wrote:
Hi He Ping,

He Ping wrote:
> Hi Manhui,
>
> The latest version with the patches does solve one of my problems (the
> .xml vs. .out file issue). Thanks a lot.
> But I still have two problems.
>
> *One* is setting the TMPDIR path. At install time I did not set TMPDIR and
> no error was reported. With only one node everything is fine, but when I
> use two different nodes it seems I cannot set TMPDIR to a shared path.
> For example:
>
> My working directory is shared via SNFS, so ./tmp is shared by every
> node, while node1 and node2 each have their own local /tmp directory.
>
> This command is OK; molpro uses the default TMPDIR, /tmp:
> mpirun -np 2 -machinefile ./hosts molprop.exe ./h2f_merge.com
> $ cat hosts
> node1
> node1
>
> This command is wrong; molpro will use ./tmp as TMPDIR:
> mpirun -np 2 -machinefile ./hosts molprop.exe -d ./tmp ./h2f_merge.com
> $ cat hosts
> node1
> node2
>
> From the .out file, the error is like this:
> --------------------------------------------------------------------
> Recomputing integrals since basis changed
>
>
> Using spherical harmonics
>
> Library entry F S cc-pVDZ selected for orbital group 1
> Library entry F P cc-pVDZ selected for orbital group 1
> Library entry F D cc-pVDZ selected for orbital group 1
>
>
> ERROR OPENING FILE 4
> NAME=/home_soft/home/scheping/tmp/molpro2009.1_examples/examples/my_tests/./tmp/sf_T0400027332.TMP
> IMPLEMENTATION=sf STATUS=scratch IERR= 35
> ? Error
> ? I/O error
> ? The problem occurs in openw
> --------------------------------------------------------------------

It is not recommended that TMPDIR point to the same shared directory on all
nodes. This may cause file conflicts, since processes on different nodes may
happen to have the same ids, etc.
Your problem can be avoided with something like the following in the job
script or in .bashrc:

if [ ! -d /scratch/path/$HOSTNAME ]; then
    mkdir -p /scratch/path/$HOSTNAME
fi
export TMPDIR=/scratch/path/$HOSTNAME

This ensures that each node has its own TMPDIR (which can be local or global).
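
As a quick sanity check (just a sketch, assuming password-less ssh to the
compute nodes and a ./hosts file listing them), each node can be asked to
report the directory it would use:

# Hypothetical check: every node should print its own hostname and an
# existing per-node scratch directory.
for node in $(sort -u ./hosts); do
    ssh "$node" 'echo "$HOSTNAME -> /scratch/path/$HOSTNAME"; ls -ld /scratch/path/$HOSTNAME'
done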

>
> *The other one* is that the molprop_2009_1_Linux_x86_64_i8 script cannot
> start a parallel run *across different* nodes. When I use
>
> molprop_2009_1_Linux_x86_64_i8 -N node1:4 test.com
>
> it is OK; but when I use
>
> molprop_2009_1_Linux_x86_64_i8 -N node1:4,node2:4 test.com
>
> I find only one process running, and from the "-v" option I can see a
> message like
>
> mpirun -machinefile some.hosts.file -np 1 test.com
>
> some.hosts.file is correct, but the number of processes is always ONE. So
> far I have to bypass this script and directly use
> mpirun -np N -machinefile hosts molprop_2009_1_Linux_x86_64_i8.exe test.com
This is actually the same as question 3 in your last mail. It is
./bin/molpro (not molprop_2009_1_Linux_x86_64_i8.exe) that invokes the
parallel run, since it includes the necessary environment settings etc.
It is not possible to make molprop_2009_1_Linux_x86_64_i8.exe handle the
MPI arguments itself.
Has your ./bin/molpro worked? If not, you may need to look at the script and
make some changes for particular machines, and we would be interested in
investigating this further.
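
One way to see where the process count goes wrong, assuming ./bin/molpro is
an ordinary shell script, is to trace it and compare the mpirun line it
builds with the one you run by hand:

bash -x ./bin/molpro -N node1:4,node2:4 test.com 2> molpro-trace.log
grep mpirun molpro-trace.log   # shows the mpirun command the script constructed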

>
> Can you give some suggestions, especially for the first question? On our
> system, writing to /tmp is always forbidden.
>
> Thanks.
>

As you said, your system is similar to this: EM64T, Red Hat Enterprise
Linux Server release 5.3 (Tikanga), Intel MPI, ifort 10/11, icc 10/11,
InfiniBand. Molpro should work fine on it. Could you provide more details
about the other remaining problems?


Best wishes,
Manhui
>
> On Sat, Oct 10, 2009 at 2:34 PM, He Ping <heping@sccas.cn> wrote:
>
> Dear Manhui,
>
> Thanks for your patience and detailed answer. Let me run some tests
> and give you feedback.
>
>
> On Fri, Oct 9, 2009 at 10:56 PM, Manhui Wang <wangm9@cardiff.ac.uk> wrote:
>
> Dear He Ping,
> GA may take full advantage of shared memory on an MPP node, but
> MPI-2 doesn't. On the other hand, MPI-2 may take advantage of a
> built-in MPI-2 library with a fast interconnect. The performance depends
> on many factors, including the MPI library, machine, network, etc. It is
> better to build both versions of Molpro and then choose the better one on
> that machine.
> It doesn't seem to be hard to make GA 4-2 work on a machine like
> yours. Details were given in my previous email.
>
> Best wishes,
> Manhui
>
> He Ping wrote:
> > Hi Manhui,
> >
> > Thanks. I will try these patches first.
> > But could you tell me something about GA's effect on molpro? So far I
> > have not managed to get a GA version of molpro working, so I do not know
> > the performance of the GA version. If the GA version is not much better
> > than the non-GA version, I will not spend much time building it.
> > Thanks a lot.
> >
> > On Fri, Oct 9, 2009 at 7:25 PM, Manhui Wang <wangm9@cardiff.ac.uk> wrote:
> >
</div><div><div></div><div class="h5">> > Dear He Ping,<br>
> > Recent patches include some bugfixes for intel<br>
> compiler 11,<br>
> > OopenMPI, and running molpro across nodes with InfiniBand.<br>
> If you have<br>
> > not updated them, please do it now. It may resolve your<br>
> existing<br>
> > problems.<br>
> ><br>
> > He Ping wrote:
> > > Dear Manhui,
> > >
> > > Thanks a lot for your detailed reply, it is very helpful. Sorry for the
> > > late answer; I had to run a lot of tests. So far, one version of
> > > molpro2009.1 basically works, but I still have some questions.
> > >
> > > 1. Compile part.
> > > OpenMPI 1.3.3 compiles and links successfully both with and without
> > > GA 4.2. I do not use my own BLAS, so I use the default; this is my
> > > configure step:
> > >
> > > ./configure -batch -ifort -icc -mppbase $MPI_HOME/include64 -var
> > > LIBS="-L/usr/lib64 -libverbs -lm" -mpp
> > >
> > > (In your letter, I guess you forgot this necessary option.)
> > >
> > > But *Intel MPI failed for both*; I show the error messages separately
> > > below.
> > >
> > > *Intel MPI w/o GA:*
> > >
> > > make[1]: Nothing to be done for `default'.
> > > make[1]: Leaving directory `/datastore/workspace/scheping/molpro2009.1_intelmpi/molpro2009.1/utilities'
> > > make[1]: Entering directory `/datastore/workspace/scheping/molpro2009.1_intelmpi/molpro2009.1/src'
> > > Preprocessing include files
> > > make[1]: *** [common.log] Error 1
> > > make[1]: *** Deleting file `common.log'
> > > make[1]: Leaving directory `/datastore/workspace/scheping/molpro2009.1_intelmpi/molpro2009.1/src'
> > > make: *** [src] Error 2
> > >
> > > *Intel MPI w GA:*
> > >
> > > compiling molpro_cvb.f
> > > failed
> > > molpro_cvb.f(1360): error #5102: Cannot open include file 'common/ploc'
> > > include "common/ploc"
> > > --------------^
> > > compilation aborted for molpro_cvb.f (code 1)
> > > make[3]: *** [molpro_cvb.o] Error 1
> > > preprocessing perfloc.f
> > > compiling perfloc.f
> > > failed
> > > perfloc.f(14): error #5102: Cannot open include file 'common/ploc'
> > > include "common/ploc"
> > > --------------^
> > > perfloc.f(42): error #6385: The highest data type rank permitted is INTEGER(KIND=8). [VARIAT]
> > > if(.not.variat)then
> > > --------------^
> > > perfloc.f(42): error #6385: The highest data type rank permitted is INTEGER(KIND=8).
> >
> > Which version of the Intel compilers are you using? Has your GA worked
> > fine? We have tested Molpro2009.1 with
> > (1) intel/compilers/10.1.015 (11.0.074), GA 4-2 hosted by intel/mpi/3.1 (3.2)
> > (2) without GA, intel/compilers/10.1.015 (11.0.074), intel/mpi/3.1 (3.2)
> >
> > and both work fine. Your CONFIG file would be helpful for seeing the
> > problems.
> > >
> > > 2. No .out file when I use more than about 12 processes, although I can
> > > get the .xml file. It is very strange: everything is OK when the number
> > > of processes is less than 12, but once it exceeds that, e.g. 16 CPUs,
> > > molpro always gets this error message:
> > >
> > > forrtl: severe (174): SIGSEGV, segmentation fault occurred
> > > Image              PC                Routine  Line     Source
> > > libopen-pal.so.0   00002AAAAB4805C6  Unknown  Unknown  Unknown
> > > libopen-pal.so.0   00002AAAAB482152  Unknown  Unknown  Unknown
> > > libc.so.6          000000310FC5F07A  Unknown  Unknown  Unknown
> > > molprop_2009_1_Li  00000000005A4C36  Unknown  Unknown  Unknown
> > > molprop_2009_1_Li  00000000005A4B84  Unknown  Unknown  Unknown
> > > molprop_2009_1_Li  000000000053E57B  Unknown  Unknown  Unknown
> > > molprop_2009_1_Li  0000000000540A8C  Unknown  Unknown  Unknown
> > > molprop_2009_1_Li  000000000053C5E5  Unknown  Unknown  Unknown
> > > molprop_2009_1_Li  00000000004BCA5C  Unknown  Unknown  Unknown
> > > libc.so.6          000000310FC1D8A4  Unknown  Unknown  Unknown
> > > molprop_2009_1_Li  00000000004BC969  Unknown  Unknown  Unknown
> > >
> > > Can I ignore this message?
> > Have you seen this on one node or on multiple nodes? If on multiple
> > nodes, the problem has been fixed by recent patches. By default, both the
> > *.out and *.xml files are produced, but you can use the option
> > --no-xml-output to disable the *.xml.
> > In addition, OpenMPI sometimes seems to be unstable. When lots of jobs
> > are run with OpenMPI, some jobs hang unexpectedly. This behaviour is not
> > seen with Intel MPI.
> >
> > >
> > > 3. Script error. For the molpro OpenMPI version, the script
> > > molpro_openmpi1.3.3/bin/molprop_2009_1_Linux_x86_64_i8 does not seem to
> > > work.
> > > When I call this script, only one process is started, even if I use
> > > -np 8. So I have to run it manually, e.g.
> > > mpirun -np 8 -machinefile ./hosts molprop_2009_1_Linux_x86_64_i8.exe test.com
> > Has your ./bin/molpro worked? For me, it works fine. Some environment
> > settings are included in ./bin/molpro. If ./bin/molpro does not work
> > properly, you might want to use molprop_2009_1_Linux_x86_64_i8.exe
> > directly, but then it is your responsibility to set up these environment
> > variables.
> > > 4. Molpro with GA cannot run across nodes. One node is OK, but when I
> > > run across nodes I get a "molpro ARMCI DASSERT fail" error, and molpro
> > > does not terminate normally. Do you know the difference between the
> > > builds with GA and without GA? If GA is not better than without GA, I
> > > will skip the GA version.
> > I think this problem has been fixed by recent patches.
> > As for the difference between molpro with GA and without GA, it is hard
> > to draw a simple conclusion. For calculations with a small number of
> > processes (e.g. < 8), molpro with GA might be somewhat faster, but molpro
> > without GA is quite competitive in performance when run with a large
> > number of processes. Please refer to the benchmark results
> > (http://www.molpro.net/info/bench.php).
> > >
> > >
> > > Sorry for piling up so many questions; an answer to any one of them
> > > will help me a lot. I think questions 1 and 2 are the more important
> > > ones for me. Thanks.
> > >
> > >
> > >
> >
> > Best wishes,
> > Manhui
> >
> >
> >
> > > On Thu, Sep 24, 2009 at 5:33 PM, Manhui Wang <wangm9@cardiff.ac.uk> wrote:
> > >
> > > Hi He Ping,
> > > Yes, you can build parallel Molpro without GA for 2009.1. Please see
> > > the manual, section A.3.3 Configuration.
> > >
> > > For the case of using the MPI-2 library, one example can be
> > >
> > > ./configure -mpp -mppbase /usr/local/mpich2-install/include
> > >
> > > and the -mppbase directory should contain the file mpi.h. Please ensure
> > > that the built-in or freshly built MPI-2 library fully supports the
> > > MPI-2 standard and works properly.
> > >
> > >
> > > Actually we have tested molpro2009.1 on almost the same system as the
> > > one you mention (EM64T, Red Hat Enterprise Linux Server release 5.3
> > > (Tikanga), Intel MPI, ifort, icc, InfiniBand). Both the GA and the
> > > MPI-2 builds work fine. The configurations are shown as follows (beware
> > > of line wrapping):
> > > (1) For Molpro2009.1 built with MPI-2:
> > > ./configure -batch -ifort -icc -blaspath /software/intel/mkl/10.0.1.014/lib/em64t -mppbase $MPI_HOME/include64 -var LIBS="-L/usr/lib64 -libverbs -lm"
> > >
> > > (2) For Molpro built with GA 4-2:
> > > Build GA 4-2:
> > > make TARGET=LINUX64 USE_MPI=y CC=icc FC=ifort COPT='-O3' FOPT='-O3' \
> > >   MPI_INCLUDE=$MPI_HOME/include64 MPI_LIB=$MPI_HOME/lib64 \
> > >   ARMCI_NETWORK=OPENIB MA_USE_ARMCI_MEM=y \
> > >   IB_INCLUDE=/usr/include/infiniband IB_LIB=/usr/lib64
> > >
> > > mpirun ./global/testing/test.x
> > > Build Molpro:
> > > ./configure -batch -ifort -icc -blaspath /software/intel/mkl/10.0.1.014/lib/em64t -mppbase /GA4-2path -var LIBS="-L/usr/lib64 -libverbs -lm"
> > >
> > > (LIBS="-L/usr/lib64 -libverbs -lm" makes molpro link with the
> > > InfiniBand library.)
> > >
> > >
> > > (Some notes about MOLPRO built with the MPI-2 library can also be found
> > > in manual section 2.2.1, Specifying parallel execution.)
> > > Note: for MOLPRO built with the MPI-2 library, when n processes are
> > > specified, n-1 processes are used to compute and one process acts as a
> > > shared counter server (in the case of n=1, one process is used to
> > > compute and no shared counter server is needed). Even so, it is quite
> > > competitive in performance when run with a large number of processes.
> > > If you have built both versions, you can also compare the performance
> > > yourself.
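> > >
> > > (For example, as far as I understand this, a request such as
> > >
> > > mpirun -np 9 -machinefile ./hosts molprop_2009_1_Linux_x86_64_i8.exe test.com
> > >
> > > would leave 8 processes doing the actual computation and one process
> > > acting as the shared counter server.)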
> > >
> > >
> > > Best wishes,
> > > Manhui
> > >
> > > He Ping wrote:
> > > > Hello,
> > > >
> > > > I want to run the molpro2009.1 parallel version on an InfiniBand
> > > > network. I met some problems when using GA; in the manual, section
> > > > 3.2, there is one line that says:
> > > >
> > > > If the program is to be built for parallel execution then the Global
> > > > Arrays toolkit *or* the MPI-2 library is needed.
> > > >
> > > > Does that mean I can build the molpro parallel version without GA? If
> > > > so, could someone tell me more about how to configure it?
> > > > My system is EM64T, Red Hat Enterprise Linux Server release 5.1
> > > > (Tikanga), Intel MPI, Intel ifort and icc.
> > > >
> > > > Thanks a lot.
> > > >
> > > > --
> > > >
> > > > He Ping
> > > >
> > > >
> > > >
> > >
> > > ------------------------------------------------------------------------
> > >
> > > _______________________________________________
> > > Molpro-user mailing list
> > > Molpro-user@molpro.net
> > > http://www.molpro.net/mailman/listinfo/molpro-user
> > >
> > > --
> > > -----------
> > > Manhui Wang
> > > School of Chemistry, Cardiff University,
> > > Main Building, Park Place,
> > > Cardiff CF10 3AT, UK
> > > Telephone: +44 (0)29208 76637
> > >
> > >
> > >
> > > --
> > >
> > > He Ping
> > > [O] 010-58813311
> >
> > --
> > -----------
> > Manhui Wang
> > School of Chemistry, Cardiff University,
> > Main Building, Park Place,
> > Cardiff CF10 3AT, UK
> > Telephone: +44 (0)29208 76637
> >
> >
> >
> > --
> >
> > He Ping
> > [O] 010-58813311
>
> --
> -----------
> Manhui Wang
> School of Chemistry, Cardiff University,
> Main Building, Park Place,
> Cardiff CF10 3AT, UK
> Telephone: +44 (0)29208 76637
>
>
>
> --
>
> He Ping
> [O] 010-58813311
>
>
>
> --
>
> He Ping
> [O] 010-58813311

--
-----------
Manhui Wang
School of Chemistry, Cardiff University,
Main Building, Park Place,
Cardiff CF10 3AT, UK
Telephone: +44 (0)29208 76637



--
He Ping