== INSTALLING anuga_parallel ==

=== Install anuga ===

First install the most up-to-date version of the code. Follow the instructions to install [InstallUbuntuSvn Anuga on Ubuntu]. Those instructions give you a checkout of the anuga_core code, which contains both the sequential code (in the source/anuga directory) and the anuga_parallel code (in source/anuga_parallel). You should end up with a directory

{{{
/home/username/anuga_core
}}}

where username is of course your username on your machine.

Make sure you have set up your PYTHONPATH to point to the anuga source directory. For instance, I have the following line in my .bashrc file:

{{{
export PYTHONPATH=/home/username/anuga_core/source
}}}

At this stage you should have a working version of the sequential anuga program. That is, you should be able to run the command

{{{
python test_all.py
}}}

from the anuga_core directory and have your installation pass all the unit tests (well, nearly all, as this is the development version and a few minor unit tests sometimes fail).

==== Updating anuga_core ====

If you have already downloaded {{{anuga_core}}}, it is sensible to update to the most recent version of the code using the subversion update command. From the {{{anuga_core}}} directory, run

{{{
svn update
}}}

and then

{{{
python compile_all.py
python test_all.py
}}}

This updates an old version to the most recent one.

=== Install anuga_parallel ===

To get anuga_parallel to work, we first need to install some other packages, in particular {{{MPI}}} for the parallel message passing and {{{pypar}}}, a simple python wrapper around {{{MPI}}}.

==== MPI ====

Now you need to install MPI on your system. OpenMPI and MPICH2 are both supported by pypar (see below), so either should be fine, but I tend to use mpich2. Install mpich2 on your system via apt-get:

{{{
sudo apt-get install mpich2
}}}

Make sure MPI works: you should be able to run a program in parallel. Something as simple as

{{{
mpirun -np 4 pwd
}}}

should produce the output of pwd 4 times.

==== pypar ====

We use pypar as the interface between MPI and python. The most recent version of pypar is available from http://code.google.com/p/pypar/

Use svn to get the most recent version of the code, as the tarred version is a little old. (There is also an old version on sourceforge; do not use that.) From your home directory, run the command

{{{
svn checkout http://pypar.googlecode.com/svn/ pypar
}}}

This produces a directory

{{{
/home/username/pypar
}}}

Change to the {{{/home/username/pypar/source}}} directory and run the command

{{{
sudo python setup.py install
}}}

This should install pypar. Fire up python and see if you can {{{import pypar}}}. You should obtain

{{{
>>> import pypar
Pypar (version 2.1.4) initialised MPI OK with 1 processors
}}}

Also make sure the pypar examples work. By the way, it is sometimes useful to fire up a new console to check that these installations work in a clean environment.

==== pymetis ====

In the anuga_parallel directory there is a subdirectory pymetis. Follow the instructions in its README to install; essentially you just run make. From the pymetis directory, run

{{{
make
}}}

Then, also from the pymetis directory, test the installation with

{{{
python test_all.py
}}}

=== Running anuga_parallel ===

You should now be ready to run some parallel anuga code. Go back to the anuga_parallel directory and run the tests:

{{{
cd /home/username/anuga_core/source/anuga_parallel
python test_all.py
}}}

Hopefully that all works.
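If anything fails, it can help to check the MPI + pypar layer on its own before debugging anuga itself. Below is a minimal sanity-check sketch (the file name hello_pypar.py is just a suggestion, not part of any distribution); it uses only the standard pypar calls {{{rank()}}}, {{{size()}}}, {{{send()}}}, {{{receive()}}} and {{{finalize()}}}, passing a message around a ring of processors much like pypar's own demo.

{{{
# hello_pypar.py -- sanity check for the MPI + pypar installation.
# Run with, e.g.:  mpirun -np 4 python hello_pypar.py
import pypar

myid = pypar.rank()      # this processor's id, in 0 .. numprocs-1
numprocs = pypar.size()  # total number of processors

print('Processor %d of %d is alive' % (myid, numprocs))

if numprocs > 1:
    if myid == 0:
        # Processor 0 starts the ring and collects the final message.
        pypar.send('P0', 1)
        msg = pypar.receive(numprocs - 1)
        print('Processor 0 received "%s"' % msg)
    else:
        # Everyone else appends its id and passes the message on.
        msg = pypar.receive(myid - 1)
        pypar.send(msg + '->P%d' % myid, (myid + 1) % numprocs)

pypar.finalize()
}}}

If that runs cleanly on several processors, MPI and pypar are wired up correctly and any remaining problems are in the anuga installation itself.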
==== Example program ====

Run run_parallel_sw_merimbula.py. First run it as a sequential program, via

{{{
python run_parallel_sw_merimbula.py
}}}

Then try a parallel run using a command like

{{{
mpirun -np 4 python run_parallel_sw_merimbula.py
}}}

That should run on 4 processors.

It is worth looking at the code in run_parallel_sw_merimbula.py. It is essentially a fairly standard anuga script, with the extra command

{{{
domain = distribute(domain)
}}}

which sets up all the parallel machinery. Also, for efficiency reasons, we only set up the original full sequential mesh on processor 0, hence the statement

{{{
if myid == 0:
    domain = create_domain_from_file(mesh_filename)
    domain.set_quantity('stage', Set_Stage(x0, x1, 2.0))
else:
    domain = None
}}}

The output will be an sww file associated with each processor.

==== sww_merge ====

The script anuga/utilities/sww_merge.py provides a function to merge the per-processor sww files into one sww file for viewing with the anuga viewer.

Suppose your parallel code produced 3 sww files: domain_P3_0.sww, domain_P3_1.sww and domain_P3_2.sww. The base name is "domain" and the number of processors is 3. To stitch these 3 files together, either run sww_merge.py as a script with the command

{{{
python /home/username/anuga_core/source/anuga/utilities/sww_merge.py -f domain -np 3
}}}

or add a command of the form

{{{
domain.sww_merge()
}}}

at the end of your simulation script if you want to keep the individual parallel sww files, or

{{{
domain.sww_merge(delete_old=True)
}}}

if you are happy for the individual sww files to be deleted after the merge operation. (Check out the script {{{run_parallel_sw_merimbula.py}}}, which demonstrates this.)
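To see how these pieces fit together, here is a minimal sketch of the overall structure of a parallel anuga script, condensed from the snippets above. It assumes {{{distribute}}}, {{{myid}}}, {{{numprocs}}} and {{{finalize}}} are importable from {{{anuga_parallel}}}; {{{create_domain_from_file}}}, {{{Set_Stage}}}, {{{mesh_filename}}}, {{{x0}}} and {{{x1}}} are helpers and values defined in run_parallel_sw_merimbula.py, and the evolve parameters are placeholders.

{{{
# Sketch of a parallel anuga script (simplified from run_parallel_sw_merimbula.py).
from anuga_parallel import distribute, myid, numprocs, finalize

if myid == 0:
    # Build the full sequential domain on processor 0 only.
    domain = create_domain_from_file(mesh_filename)
    domain.set_quantity('stage', Set_Stage(x0, x1, 2.0))
else:
    domain = None

# Partition the mesh and hand each processor its own sub-domain.
domain = distribute(domain)
domain.set_name('domain')  # sww files will be domain_P<numprocs>_<rank>.sww

# Evolve as usual; each processor writes its own sww file.
for t in domain.evolve(yieldstep=50.0, finaltime=500.0):
    if myid == 0:
        print(domain.timestepping_statistics())

# Stitch the per-processor sww files together and delete the pieces.
domain.sww_merge(delete_old=True)

finalize()
}}}

Run it the same way as the example above, e.g. {{{mpirun -np 4 python myscript.py}}} (where myscript.py is whatever you name the file).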