== INSTALLING anuga_parallel ==

=== anuga_parallel ===

Well, first you need to get the anuga_parallel code. You can get this from our svn repository with userid anonymous (blank password).

The location is https://anuga.anu.edu.au/svn/anuga/trunk/anuga_core/source/anuga_parallel
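
For example, a checkout along these lines should work (the target directory name is just a suggestion):

svn co https://anuga.anu.edu.au/svn/anuga/trunk/anuga_core/source/anuga_parallel anuga_parallel --username anonymous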

(By the way, the most recent version of the development code of anuga is available at https://anuga.anu.edu.au/svn/anuga/trunk/anuga_core/source/anuga)

Set up your PYTHONPATH to point to the location of the source directory.

For instance I have the following line in my .bashrc file

export PYTHONPATH=/home/steve/anuga/anuga_core/source
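
As a quick sanity check that the path is being picked up (assuming the anuga package in that source directory has already been built), something like this should print a path under the source directory:

python -c "import anuga; print(anuga.__file__)"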

=== MPI ===

Now you need to install MPI on your system. OPENMPI and MPICH2 are supported by pypar (see below) so both should be ok.

Make sure MPI works. You should be able to run a program in parallel. Try something as simple as

mpirun -np 4 pwd

which should produce the output of pwd 4 times.

=== PYPAR ===

We use pypar as the interface between MPI and python. The most recent version of PYPAR is available from http://code.google.com/p/pypar/

(There is an old version on sourceforge at http://sourceforge.net/projects/pypar/; don't use that.)

Install pypar following the instructions in the download. You should be able to use the standard command

python setup.py install

or maybe

sudo python setup.py install

Make sure the pypar examples work.
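
As a quick check that pypar can see all the processors, a minimal sketch along the following lines (saved as pypar_hello.py, say; the filename is just a placeholder) should print one line per processor:

import pypar

myid = pypar.rank()        # rank (processor number) of this process
numprocs = pypar.size()    # total number of processors in the MPI run

print('Hello from processor %d of %d' % (myid, numprocs))

pypar.finalize()           # shut down the MPI layer cleanly

Run it with, for example, mpirun -np 4 python pypar_hello.py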

=== PYMETIS ===

In the anuga_parallel directory there is a subdirectory pymetis.

Follow the instructions in the README to install. Essentially just run make.

If you have a 64 bit machine run

make COPTIONS="-fPIC"

From the pymetis directory, test using test_all.py, i.e.

python test_all.py

=== ANUGA_PARALLEL ===

You should now be ready to run some parallel anuga code. Go back to the anuga_parallel directory and run test_all.py

Hopefully that all works.

=== Example program ===

Run run_parallel_sw_merimbula.py

First just run it as a sequential program, via

python run_parallel_sw_merimbula.py

Then try a parallel run using a command like

mpirun -np 4 python run_parallel_sw_merimbula.py

That should run on 4 processors.

You should look at the code in run_parallel_sw_merimbula.py. Essentially it is a fairly standard example, with the extra command

domain = distribute(domain)

which sets up all the parallel stuff.

Also, for efficiency reasons, we only set up the original full sequential mesh on processor 0, hence the statement

if myid == 0:
    domain = create_domain_from_file(mesh_filename)
    domain.set_quantity('stage', Set_Stage(x0, x1, 2.0))
else:
    domain = None
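
For orientation, the rest of the script then continues roughly along these lines. This is a sketch, not the verbatim source; the import path and the evolve parameters are assumptions, so check run_parallel_sw_merimbula.py itself for the exact code:

# myid, distribute and finalize are imported near the top of the real script,
# e.g. (import path is an assumption): from anuga_parallel import distribute, myid, finalize

domain = distribute(domain)    # partition the mesh and give each processor its subdomain

# After distribute() the script reads like an ordinary sequential anuga run
for t in domain.evolve(yieldstep=50.0, finaltime=500.0):
    if myid == 0:
        domain.write_time()    # only processor 0 reports progress

finalize()                     # shut down the parallel communication layer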

The output will be an sww file associated with each processor.

There is a script anuga/utilities/sww_merge.py which provides a function to merge the sww files into one sww file for viewing with the anuga viewer.
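
A rough sketch of how that merge might be driven from Python follows; the function name and arguments below are assumptions (check sww_merge.py for the actual interface), as is the per-processor file naming:

# Hypothetical usage sketch; see anuga/utilities/sww_merge.py for the real
# function name and signature before relying on this.
from anuga.utilities.sww_merge import sww_merge

# Merge per-processor files (e.g. merimbula_P4_0.sww ... merimbula_P4_3.sww)
# into a single merimbula.sww for the anuga viewer.
sww_merge('merimbula', np=4, verbose=True)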