<html>
<head>
<meta content="text/html; charset=ISO-8859-1"
http-equiv="Content-Type">
</head>
<body bgcolor="#FFFFFF" text="#000000">
Dear Ivan,<br>
<br>
On 17/06/12 19:51, Ivan Reche wrote:
<blockquote
cite="mid:CACRFBJUVpqhT=AxV6wHR2XabiVvFCEgX2nN48UTEZs5JKuTYUA@mail.gmail.com"
type="cite">Hello,
<div><br>
</div>
<div>I'm a new user of Molpro 2010 and I'm compiling it to work on
a standard home-made Beowulf cluster.</div>
<div><br>
</div>
<div>I've compiled it with global arrays 5.0.2 and mpich2 1.4.1.
However, I'm having some doubts:</div>
<div><br>
</div>
<div>1. From what I understood by reading the docs, is there an
option to not use Global Arrays? Should I use just pure MPICH2
for my setup? Is Global Arrays optimized for InfiniBand and
other non-conventional networks and, as such, not recommended for
my standard Ethernet network?</div>
</blockquote>
Molpro can be built with either the Global Arrays toolkit or the
MPI-2 library. Global Arrays can take full advantage of shared
memory within a single node, which the MPI-2 build does not. On the
other hand, the MPI-2 build can benefit from a vendor-supplied MPI-2
library tuned for a fast interconnect. Performance depends on many
factors, including the MPI library, the network connection, etc. In
your case (a standard home-made Beowulf cluster connected by
Ethernet), it is better to build Molpro with Global Arrays. You
should achieve a reasonable speedup on a single node, but runs
across nodes may be very slow because of the Ethernet network.
Please also see the previous discussion:
<a class="moz-txt-link-freetext" href="http://www.molpro.net/pipermail/molpro-user/2010-February/003587.html">http://www.molpro.net/pipermail/molpro-user/2010-February/003587.html</a>
<blockquote
cite="mid:CACRFBJUVpqhT=AxV6wHR2XabiVvFCEgX2nN48UTEZs5JKuTYUA@mail.gmail.com"
type="cite"><br>
<div>2. By running some test jobs that came with Molpro and
comparing the serial results with the parallel results, the
serial ones were faster in all cases. Is there a common mistake
on my part that might be causing these undesired results? What issues
should I look for?</div>
</blockquote>
If you run the testjobs in parallel across nodes, the
performance may be very poor due to the Ethernet network. Molpro
benchmark results can be found at<br>
<a class="moz-txt-link-freetext" href="http://www.molpro.net/info/bench.php">http://www.molpro.net/info/bench.php</a><br>
<blockquote
cite="mid:CACRFBJUVpqhT=AxV6wHR2XabiVvFCEgX2nN48UTEZs5JKuTYUA@mail.gmail.com"
type="cite">
<div><br>
</div>
<div>3. Let's say that I'm trying to separate my computation in
two computers: foo and bar. If I call molpro in the foo machine
like this:</div>
<div><br>
</div>
<div>molpro -N user:foo:1,user:bar:1 myjob.com</div>
<div><br>
</div>
<div>I get an MPI error complaining that it couldn't connect to
rank 0. However, if I switch the order of the hosts like this:</div>
<div><br>
</div>
<div>molpro -N user:bar:1,user:foo:1 myjob.com</div>
<div><br>
</div>
<div>It works, but dumps the results on the bar machine. Both
machines can log in as "user" via ssh without passwords. What
am I missing here? Maybe using the rank 0 machine for the
computation is not recommended?</div>
</blockquote>
The rank 0 process can certainly be used for computation just like
the other processes.<br>
<br>
I could not reproduce this problem with a similar Molpro
configuration (built with auto-configuration):<br>
<br>
./configure -batch -ifort -icc -mpp -auto-ga-tcgmsg-mpich2<br>
<br>
The following commands in different orders work for me:<br>
./bin/molpro -N arccacluster11:2,arccacluster10:2
testjobs/h2o_vdz.test <br>
./bin/molpro -N arccacluster10:2,arccacluster11:2
testjobs/h2o_vdz.test <br>
<br>
I suspect there may be a problem with your own mpich2 build. To
isolate it, you could run a standalone MPI program and see whether
it works with machine files in different orders.<br>
<br>
For example, you could run mpich2's own example (examples/cpi).
The following is my test; the final two runs should succeed
if the mpich2 library is built properly.<br>
============================================================<br>
[sacmw4@arccacluster11 examples]$ hostname<br>
arccacluster11<br>
[sacmw4@arccacluster11 examples]$ cat machines1<br>
arccacluster11<br>
arccacluster11<br>
arccacluster10<br>
arccacluster10<br>
[sacmw4@arccacluster11 examples]$ cat machines2<br>
arccacluster10<br>
arccacluster10<br>
arccacluster11<br>
arccacluster11<br>
<br>
[sacmw4@arccacluster11 examples]$ ../../mpich2-install/bin/mpiexec
-machinefile machines1 -np 4 ./cpi<br>
<br>
[sacmw4@arccacluster11 examples]$ ../../mpich2-install/bin/mpiexec
-machinefile machines2 -np 4 ./cpi<br>
================================================================<br>
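<br>
If the cpi example has not been compiled, an even simpler check along the
same lines is to launch a trivial command such as hostname through mpiexec
with each machine file (this only tests process startup, not MPI
communication):<br>
<br>
../../mpich2-install/bin/mpiexec -machinefile machines1 -np 4 hostname<br>
../../mpich2-install/bin/mpiexec -machinefile machines2 -np 4 hostname<br>
<br>
If either of these hangs or fails, the problem lies in the mpich2 setup
(or in ssh/host name resolution) rather than in Molpro itself.<br>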
<br>
<blockquote
cite="mid:CACRFBJUVpqhT=AxV6wHR2XabiVvFCEgX2nN48UTEZs5JKuTYUA@mail.gmail.com"
type="cite">
<div><br>
</div>
<div>4. What is the deal with the helper servers? I couldn't
understand their roles by just reading the docs.</div>
</blockquote>
For Molpro built with GA there are no helper processes, while for
the MPI-2 build of Molpro one helper process per node is the default
setting. For further details, please see<br>
<a id="ddDoi" href="http://dx.doi.org/10.1016/j.cpc.2011.03.020"
target="doilink" onclick="var doiWin;
doiWin=window.open('http://dx.doi.org/10.1016/j.cpc.2011.03.020','doilink','scrollbars=yes,resizable=yes,directories=yes,toolbar=yes,menubar=yes,status=yes');
doiWin.focus()">http://dx.doi.org/10.1016/j.cpc.2011.03.020</a><br>
<a id="ddDoi" href="http://dx.doi.org/10.1016/j.cpc.2009.05.002"
target="doilink" onclick="var doiWin;
doiWin=window.open('http://dx.doi.org/10.1016/j.cpc.2009.05.002','doilink','scrollbars=yes,resizable=yes,directories=yes,toolbar=yes,menubar=yes,status=yes');
doiWin.focus()">http://dx.doi.org/10.1016/j.cpc.2009.05.002</a><br>
<br>
Best wishes,<br>
Manhui<br>
<blockquote
cite="mid:CACRFBJUVpqhT=AxV6wHR2XabiVvFCEgX2nN48UTEZs5JKuTYUA@mail.gmail.com"
type="cite">
<div><br>
</div>
<div>Thanks in advance for your time and attention.</div>
<div><br>
</div>
<div>Cheers,</div>
<div><br>
</div>
<div>Ivan</div>
<br>
<br>
<pre wrap="">_______________________________________________
Molpro-user mailing list
<a class="moz-txt-link-abbreviated" href="mailto:Molpro-user@molpro.net">Molpro-user@molpro.net</a>
<a class="moz-txt-link-freetext" href="http://www.molpro.net/mailman/listinfo/molpro-user">http://www.molpro.net/mailman/listinfo/molpro-user</a></pre>
</blockquote>
<br>
<pre class="moz-signature" cols="72">--
-----------
Manhui Wang
School of Chemistry, Cardiff University,
Main Building, Park Place,
Cardiff CF10 3AT, UK
</pre>
</body>
</html>