INSTALLING anuga_parallel
If you installed anuga with the ANUGA_PARALLEL environment variable set via
export ANUGA_PARALLEL="mpich2"
or
export ANUGA_PARALLEL="openmpi"
then you should already have parallel support.
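One quick (if rough) way to check is to see whether the pypar wrapper described below imports cleanly:
python -c "import pypar"
If that reports that pypar initialised MPI OK, parallel support is in place; otherwise work through the steps below.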
Setting up parallel support
Let's suppose that you initially set up anuga to run only in sequential mode. To set up parallel mode you will need to install an MPI environment (mpich2 or openmpi) and the Python wrapper pypar.
We will assume you have installed anuga from source and that the source is in the directory anuga_core.
Updating anuga_core
If you have already downloaded anuga_core then it is sensible to update to the most recent version of the code using the subversion update command. Run the following command from the anuga_core directory
svn update
and then
sudo python setup.py install
python runtests.py
This should update an old version to the most recent version.
Install anuga parallel
Now to get the parallel version of anuga working, we need to install some other packages first: in particular MPI, for the parallel message passing, and pypar, a simple Python wrapper around MPI.
MPI
Now you need to install MPI on your system. Both openmpi and mpich2 are supported by pypar (see below), so either should be fine, but I tend to use mpich2.
So install mpich2 on your system via apt-get
sudo apt-get install mpich2
Make sure MPI works. You should be able to run a program in parallel. Something as simple as
mpirun -np 4 pwd
should produce the output of pwd 4 times.
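It is also worth checking that mpirun can launch the Python interpreter itself, for example
mpirun -np 4 python -c "print 'hello'"
which should print hello 4 times.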
pypar
We use pypar as the interface between MPI and Python. The most recent version of pypar is available from http://code.google.com/p/pypar/. Use svn to get the code, as the tarred version is a little old.
(There is also an old version on sourceforge; do not use that.)
From your home directory run the command
svn checkout http://pypar.googlecode.com/svn/ pypar
This produces a directory called pypar.
Change to that directory, and then run the command
sudo python setup.py install
This should install pypar.
Fire up python and see if you can import pypar.
You should obtain
>>> import pypar
Pypar (version 2.1.4) initialised MPI OK with 1 processors
Also make sure that the pypar examples work.
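For a quick self-contained check, a minimal script along the lines of pypar's own demo can be saved to a file, say hello_pypar.py (the name is arbitrary), and run under mpirun:

import pypar

# Each process reports its rank and the total number of processes
myid = pypar.rank()
numprocs = pypar.size()
node = pypar.get_processor_name()

print 'I am processor %d of %d on node %s' % (myid, numprocs, node)

pypar.finalize()

Running mpirun -np 4 python hello_pypar.py should print one such line for each of the 4 processors.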
By the way, I suggest firing up a new console to check that these installations work in a clean environment.
Compile anuga parallel code
Actually the parallel code is already in the anuga_core directory. We just need to reinstall anuga.
From the anuga_core directory force a rebuild and reinstall of anuga via
sudo python setup.py build -f
sudo python setup.py install
Running anuga in parallel
You should now be ready to run some parallel anuga code. First rerun the unit tests via
python runtests.py
Hopefully that all works. If you are observant you will see that the number of unit tests has increased by about 30; those are the parallel tests.
Example program
From the anuga_core/examples/parallel directory, try the script run_parallel_sw_merimbula.py.
First just run it as a sequential program, via
python run_parallel_sw_merimbula.py
Then try a parallel run using a command like
mpirun -np 4 python run_parallel_sw_merimbula.py
That should run on 4 processors.
You should look at the code in run_parallel_sw_merimbula.py
Essentially this is a fairly standard anuga script, with the extra command
domain = distribute(domain)
which sets up all the parallel stuff.
Also, for efficiency reasons we only set up the original full sequential mesh on processor 0, hence the statement
if myid == 0:
    domain = create_domain_from_file(mesh_filename)
    domain.set_quantity('stage', Set_Stage(x0, x1, 2.0))
else:
    domain = None
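Putting these pieces together, the overall shape of the script is roughly as follows. This is a sketch only: the exact import line and the evolve parameters are assumptions, so check run_parallel_sw_merimbula.py for the precise form in your version.

from anuga import myid, distribute, finalize

# Processor 0 builds the full sequential domain; the others wait
if myid == 0:
    domain = create_domain_from_file(mesh_filename)
    domain.set_quantity('stage', Set_Stage(x0, x1, 2.0))
else:
    domain = None

# Partition the mesh and give each processor its own subdomain
domain = distribute(domain)

# Boundary conditions are set on the distributed domain, after this point

# From here on it looks like an ordinary sequential anuga evolve loop
for t in domain.evolve(yieldstep=1.0, finaltime=10.0):
    if myid == 0:
        print domain.timestepping_statistics()

finalize()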
The output will be an sww file associated with each processor.
sww_merge
There is a script anuga/utilities/sww_merge.py which provides a function to merge sww files into one sww file for viewing with the anuga viewer.
Suppose your parallel code produced 3 sww files: domain_P3_0.sww, domain_P3_1.sww and domain_P3_2.sww.
The base name would be "domain" and the number of processors would be 3. To stitch these 3 files together either run sww_merge.py as a script with the command
python /home/******/anuga_core/anuga/utilities/sww_merge.py -f domain -np 3
or you can add a command of the form
domain.sww_merge()
at the end of your simulation script if you want to keep the individual parallel sww files, or
domain.sww_merge(delete_old=True)
if you are happy for the individual sww files to be deleted after the merge operation (check out the script run_parallel_sw_merimbula.py, which demos this).
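For reference, the tail of a parallel script using this approach looks something like the following sketch (the finalize call mirrors the example script and is an assumption for other setups):

# ... evolve loop ends here ...

# Merge the per-processor sww files into one and delete the originals
domain.sww_merge(delete_old=True)

# Shut down the parallel environment cleanly
finalize()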