Changes between Version 1 and Version 2 of AnugaParallel

Timestamp: Sep 30, 2011, 11:53:25 AM
== INSTALLING anuga_parallel ==

=== anuga_parallel ===

First you need to get the anuga_parallel code. You can get this from our svn repository with userid anonymous (blank password).

The location is https://anuga.anu.edu.au/svn/anuga/trunk/anuga_core/source/anuga_parallel

(By the way, the most recent version of the development code of anuga is available at https://anuga.anu.edu.au/svn/anuga/trunk/anuga_core/source/anuga )

Set up your PYTHONPATH to point to the location of the source directory. For instance I have the following line in my .bashrc file:

export PYTHONPATH=/home/steve/anuga/anuga_core/source

=== MPI ===

Now you need to install MPI on your system. OPENMPI and MPICH2 are supported by pypar (see below), so both should be ok.

Make sure MPI works: you should be able to run a program in parallel. Try something as simple as

mpirun -np 4 pwd

which should produce the output of pwd 4 times.

=== PYPAR ===

We use pypar as the interface between MPI and python. The most recent version of PYPAR is available from http://code.google.com/p/pypar/

(There is an old version on sourceforge at http://sourceforge.net/projects/pypar/; don't use that one.)

Install pypar following the instructions in the download. You should be able to use the standard command

python setup.py install

or maybe

sudo python setup.py install

Make sure the pypar examples work.

=== PYMETIS ===

In the anuga_parallel directory there is a subdirectory pymetis. Follow the instructions in its README to install. Essentially just run make.

If you have a 64 bit machine, run

make COPTIONS="-fPIC"

From the pymetis directory, test using test_all.py, ie

python test_all.py

=== ANUGA_PARALLEL ===

You should now be ready to run some parallel anuga code. Go back to the anuga_parallel directory and run test_all.py.

Hopefully that all works.
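As a quick end-to-end check that pypar can talk to MPI, a hello-world along the following lines can be run under mpirun. The `rank()`, `size()` and `finalize()` calls are the standard pypar API; the import guard is just there to give a clearer message if pypar is missing.

```python
# hello_pypar.py -- minimal check that pypar can talk to MPI
try:
    import pypar
except ImportError:
    pypar = None

if pypar is None:
    print('pypar is not installed')
else:
    myid = pypar.rank()        # id of this processor, 0 .. numprocs-1
    numprocs = pypar.size()    # total number of processors
    print('I am processor %d of %d' % (myid, numprocs))
    pypar.finalize()           # shut MPI down cleanly
```

Run it with mpirun -np 4 python hello_pypar.py; you should see one line printed by each of the 4 processors.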
=== Example program ===

Run run_parallel_sw_merimbula.py.

First just run it as a sequential program, via

python run_parallel_sw_merimbula.py

Then try a parallel run using a command like

mpirun -np 4 python run_parallel_sw_merimbula.py

That should run on 4 processors.

You should look at the code in run_parallel_sw_merimbula.py. It is essentially a fairly standard example, with the extra command

domain = distribute(domain)

which sets up all the parallel stuff.

Also, for efficiency reasons we only set up the original full sequential mesh on processor 0, hence the statement

if myid == 0:
    domain = create_domain_from_file(mesh_filename)
    domain.set_quantity('stage', Set_Stage(x0, x1, 2.0))
else:
    domain = None

The output will be an sww file associated with each processor.

There is a script anuga/utilities/sww_merge.py which provides a function to merge these sww files into one sww file for viewing with the anuga viewer.
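Putting the pieces from the example together, the overall shape of a parallel anuga script is roughly as below. This is a sketch only: `create_domain_from_file`, `Set_Stage`, `mesh_filename`, `x0` and `x1` are names taken from the example script above, the import line is an assumption, and the evolve arguments are placeholders; check run_parallel_sw_merimbula.py itself for the working version.

```python
# rough shape of a parallel anuga script (sketch, not runnable as-is)
from anuga_parallel import distribute, myid, finalize

if myid == 0:
    # only processor 0 builds the full sequential mesh
    domain = create_domain_from_file(mesh_filename)
    domain.set_quantity('stage', Set_Stage(x0, x1, 2.0))
else:
    domain = None

# partition the mesh and hand each processor its own submesh
domain = distribute(domain)

# evolve as in a sequential run; each processor writes its own sww file
for t in domain.evolve(yieldstep=..., finaltime=...):
    if myid == 0:
        domain.write_time()

finalize()
```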