=========================
INSTALLING anuga_parallel
=========================


anuga_parallel
==============

First you need to get the anuga_parallel code. You can get it from
our svn repository with userid anonymous (blank password).

The location is

https://anuga.anu.edu.au/svn/anuga/trunk/anuga_core/source/anuga_parallel

(By the way, the most recent version of the anuga development code
is available at

https://anuga.anu.edu.au/svn/anuga/trunk/anuga_core/source/anuga
)

Set up your PYTHONPATH to point to the location of the source directory.

For instance, I have the following line in my .bashrc file:

export PYTHONPATH=/home/steve/anuga/anuga_core/source
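
To check that the packages are visible to Python, a command like the
following should then run without an ImportError (assuming anuga and
anuga_parallel are both packages under that source directory):

python -c "import anuga; import anuga_parallel"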


MPI
===

Now you need to install MPI on your system. Both OPENMPI and MPICH2
are supported by pypar (see below), so either should be fine.
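
(For example, on a Debian-style Linux system OPENMPI can typically be
installed with a command like the following; the package names are an
assumption and vary by distribution:

sudo apt-get install openmpi-bin libopenmpi-dev
)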

Make sure MPI works. You should be able to run a program in parallel.
Try something as simple as

mpirun -np 4 pwd

which should produce the output of pwd 4 times.

PYPAR
=====

We use pypar as the interface between MPI and Python. The most recent
version of PYPAR is available from

http://code.google.com/p/pypar/

(There is an old version on sourceforge at
http://sourceforge.net/projects/pypar/. Don't use that.)

Install pypar following the instructions in the download. You should be
able to use the standard command

python setup.py install

or maybe

sudo python setup.py install

Make sure the pypar examples work.
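
As a minimal sanity check of your pypar installation, a script along the
following lines should work (a sketch only; it assumes the standard pypar
rank/size/finalize calls):

# hello_pypar.py: each process reports its rank
import pypar                 # pypar wraps MPI for Python

rank = pypar.rank()          # id of this process
size = pypar.size()          # total number of processes
print 'Hello from process %d of %d' % (rank, size)

pypar.finalize()             # shut down MPI cleanly

Run it in parallel with

mpirun -np 4 python hello_pypar.py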

PYMETIS
=======

In the anuga_parallel directory there is a subdirectory pymetis.

Follow the instructions in its README to install it. Essentially, just run

make

If you have a 64 bit machine, run

make COPTIONS="-fPIC"

From the pymetis directory, test using test_all.py, i.e.

python test_all.py

ANUGA_PARALLEL
==============

You should now be ready to run some parallel anuga code.

Go back to the anuga_parallel directory and run test_all.py, i.e.

python test_all.py

Hopefully that all works.

Next, run run_parallel_sw_merimbula.py.

First just run it as a sequential program, via

python run_parallel_sw_merimbula.py

Then try a parallel run, using a command like

mpirun -np 4 python run_parallel_sw_merimbula.py

That should run on 4 processors.

You should look at the code in run_parallel_sw_merimbula.py.

It is essentially a fairly standard example, with the extra command

domain = distribute(domain)

which sets up all the parallel stuff.

Also, for efficiency reasons, we only set up the original full sequential
mesh on processor 0, hence the statement

if myid == 0:
    domain = create_domain_from_file(mesh_filename)
    domain.set_quantity('stage', Set_Stage(x0, x1, 2.0))
else:
    domain = None
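
Putting the pieces together, the overall structure of a parallel anuga
script looks roughly like the following sketch (assuming the usual
anuga_parallel exports of myid, distribute and finalize; names such as
create_domain_from_file, mesh_filename and the evolve parameters are
placeholders taken from the example):

# sketch of a parallel anuga run
from anuga_parallel import myid, distribute, finalize

if myid == 0:
    # build the full sequential domain on processor 0 only
    domain = create_domain_from_file(mesh_filename)
    domain.set_quantity('stage', Set_Stage(x0, x1, 2.0))
else:
    domain = None

# partition the mesh and give each processor its own subdomain
domain = distribute(domain)

# evolve as usual; each processor advances its own subdomain
for t in domain.evolve(yieldstep=10.0, finaltime=100.0):
    if myid == 0:
        domain.write_time()

finalize()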

The output will be an sww file associated with each processor.

There is a script anuga/utilities/sww_merge.py which provides
a function to merge these sww files into one sww file for viewing
with the anuga viewer.
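
For example, the merge might be invoked along the following lines (the
function name sww_merge and its signature are assumptions based on the
description above and may differ between versions of the script):

# merge the per-processor sww files into a single file (hypothetical usage)
from anuga.utilities.sww_merge import sww_merge

# 'merimbula' and np=4 are placeholders for the output base name and
# the number of processors used in the run
sww_merge('merimbula', np=4, verbose=True)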