Changeset 1575
- Timestamp: Jul 4, 2005, 5:57:46 PM (19 years ago)
- Location: inundation/ga/storm_surge
- Files: 12 edited
inundation/ga/storm_surge/parallel/build_submesh.py (r1559 → r1575)

    Whitespace-only cleanup: trailing spaces were stripped from comment and
    code lines throughout the file; no statement changed. For reference, the
    module subdivides the triangles into non-overlapping per-processor
    domains, controlled by triangles_per_proc: the first triangles_per_proc[0]
    triangles are assigned to processor 0, the next triangles_per_proc[1] to
    processor 1, and so on. submesh_full collects each processor's nodes,
    triangles and boundary into the submesh dictionary ("full_nodes",
    "full_triangles", "full_boundary"); ghost_layer grows a two-deep layer of
    boundary triangles around each partition; ghost_commun_pattern records,
    for every ghost triangle, a [global_no, neigh] pair naming the processor
    that owns the full copy, while full_commun_pattern records which
    processors each full triangle must be sent to. submesh_ghost and
    build_submesh tie these together, and extract_hostmesh pulls out the
    piece belonging to processor 0.
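For orientation, a minimal sketch (not part of the changeset) of driving
build_submesh with a toy mesh, a unit square split into four triangles
around a centre node; the import path and the empty boundary dictionary
are assumptions:

    from build_submesh import build_submesh

    nodes     = [[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0], [0.5, 0.5]]
    triangles = [[0, 1, 4], [1, 2, 4], [2, 3, 4], [3, 0, 4]]
    boundary  = {}               # no tagged boundary edges in this toy mesh
    triangles_per_proc = [2, 2]  # two triangles each for processors 0 and 1

    submesh = build_submesh(nodes, triangles, boundary, triangles_per_proc)

    print submesh["full_triangles"][0]  # triangles owned by processor 0
    print submesh["ghost_commun"][0]    # [global_no, owning_proc] pairs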
inundation/ga/storm_surge/parallel/parallel_advection.py (r1563 → r1575)

    The class is renamed from Parallel_Advection_Domain to Parallel_Domain,
    the Advection_Domain alias is dropped, and the dead weave experiments
    are removed:

     from advection import *
    -Advection_Domain = Domain
     from Numeric import zeros, Float, Int, ones, allclose, array
     import pypar

    -class Parallel_Advection_Domain(Advection_Domain):
    +class Parallel_Domain(Domain):

         def __init__(self, coordinates, vertices, boundary = None, ...):
             ...
             self.numproc = pypar.size()

    -        Advection_Domain.__init__(self, coordinates, vertices, boundary,
    -                                  velocity = velocity)
    +        Domain.__init__(self, coordinates, vertices, boundary,
    +                        velocity = velocity)

         def check_integrity(self):
    -        Advection_Domain.check_integrity(self)
    +        Domain.check_integrity(self)

         def update_timestep(self, yieldstep, finaltime):
             # Calculate local timestep
    -        Advection_Domain.update_timestep(self, yieldstep, finaltime)
    +        Domain.update_timestep(self, yieldstep, finaltime)
             ...
    -        #LINDA
             pypar.raw_reduce(ltimestep, gtimestep, pypar.MIN, 0)
             pypar.broadcast(gtimestep,0)
    -        #pypar.Barrier()

             self.timestep = gtimestep[0]

    In update_ghosts, the unused "import weave" / "from weave import
    converters" lines, the commented-out blitz/C variants (code1, code2,
    code3) of the three exchange loops, and the accompanying "LINDA" notes
    (recording that the C versions failed to compile with "no match for
    call to `(py::list) (int&)'") are all deleted. The plain Python loops
    that pack stage values for sending, unpack received values into ghost
    cells, and copy between local full and ghost cells remain. Finally,
    evolve delegates to the renamed parent:

         #Call basic machinery from parent class
    -    for t in Advection_Domain.evolve(self, yieldstep, finaltime):
    +    for t in Domain.evolve(self, yieldstep, finaltime):

             #Pass control on to outer loop for more specific actions
             yield(t)
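The timestep agreement logic retained above is the heart of the parallel
coupling: each process computes a local safe timestep, the global minimum
is reduced onto process 0 and broadcast back. A self-contained sketch
using the same pypar calls as the file (the helper name is ours):

    from Numeric import zeros, Float
    import pypar

    def global_timestep(local_timestep):
        # All processes contribute; the minimum lands on process 0 and is
        # then broadcast so everyone advances by the same safe amount.
        ltimestep = zeros(1, Float)
        ltimestep[0] = local_timestep
        gtimestep = zeros(1, Float)    # buffer for the reduced result
        pypar.raw_reduce(ltimestep, gtimestep, pypar.MIN, 0)
        pypar.broadcast(gtimestep, 0)
        return gtimestep[0]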
inundation/ga/storm_surge/parallel/parallel_shallow_water.py (r1563 → r1575)

    The same rename as in parallel_advection.py:

     from shallow_water import *
    -Shallow_Water_Domain = Domain
     from Numeric import zeros, Float, Int, ones, allclose, array
     import pypar

    -class Parallel_Shallow_Water_Domain(Shallow_Water_Domain):
    +class Parallel_Domain(Domain):
         ...
    -        Shallow_Water_Domain.__init__(self, coordinates, vertices, boundary)
    +        Domain.__init__(self, coordinates, vertices, boundary)

         def check_integrity(self):
    -        Shallow_Water_Domain.check_integrity(self)
    +        Domain.check_integrity(self)
         ...
             # Calculate local timestep
    -        Shallow_Water_Domain.update_timestep(self, yieldstep, finaltime)
    +        Domain.update_timestep(self, yieldstep, finaltime)

    The unused "import weave" / "from weave import converters" lines are
    removed from update_ghosts, and evolve again delegates to the renamed
    parent:

         #Call basic machinery from parent class
    -    for t in Shallow_Water_Domain.evolve(self, yieldstep, finaltime):
    +    for t in Domain.evolve(self, yieldstep, finaltime):

             #Pass control on to outer loop for more specific actions
             yield(t)
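Both parallel domains also share the ghost-exchange pattern: each process
sends the quantity values it owns to the processes holding ghost copies,
then overwrites its own ghost entries with the values received. A
condensed sketch of that pattern, assuming the dictionary layout used in
these files (index lists stored at position 0) and simplifying the
Numeric buffers to plain lists:

    import pypar

    def exchange_ghosts(stage, full_send_dict, ghost_recv_dict, myproc):
        # Send owned values to every neighbour that keeps ghost copies...
        for p in full_send_dict:
            if p != myproc:
                Idf = full_send_dict[p][0]   # local ids of cells to send
                pypar.send([stage[i] for i in Idf], p)
        # ...then overwrite local ghost entries with what the owners sent.
        for p in ghost_recv_dict:
            if p != myproc:
                X = pypar.receive(p)
                Idg = ghost_recv_dict[p][0]  # local ids of ghost cells
                for i in range(len(Idg)):
                    stage[Idg[i]] = X[i]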
inundation/ga/storm_surge/parallel/pmesh_divide.py (r1565 → r1575)

    The old pmesh_divide, which read the pmesh file itself, is kept under the
    name pmesh_divide_linda, and pmesh_divide_steve becomes the canonical
    pmesh_divide, now taking an already-built domain instead of a filename.
    It also returns the relabelled stage vertex values instead of the mesh
    extent (callers now read xy_extent off the domain directly):

     from pmesh2domain import pmesh_to_domain_instance
     from math import floor
     ...
    -def pmesh_divide(f, Domain, n_x = 1, n_y = 1):
    +def pmesh_divide_linda(f, Domain, n_x = 1, n_y = 1):

         # read in the pmesh
         ...
    -def pmesh_divide_steve(f, Domain, n_x = 1, n_y = 1):
    -
    -    # read in the pmesh
    -
    -    domain = pmesh_to_domain_instance(f, Domain)
    +def pmesh_divide(domain, n_x = 1, n_y = 1):
         ...
    -    rect = domain.xy_extent
         ...
    +    # relabel the vertex values of the stage quantity
    +    from Numeric import zeros, Float
    +    Stage = domain.quantities['stage']
    +
    +    stage_relabelled = zeros( Stage.vertex_values.shape, Float)
    +    for i in range(N):
    +        bin = tri_index[i][0]
    +        bin_off_set = tri_index[i][1]
    +        index = proc_sum[bin]+bin_off_set
    +        stage_relabelled[index] = Stage.vertex_values[i]

         # extract the node list
         nodes = domain.coordinates.copy()

    -    return nodes, triangles, boundary, triangles_per_proc, rect
    +    return nodes, triangles, boundary, triangles_per_proc, stage_relabelled

    (A few commented-out debug prints of the stage arrays were added along
    the way.)
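The consequence for callers, sketched here with the test mesh used by the
run scripts later in this changeset:

    from pmesh2domain import pmesh_to_domain_instance
    from pmesh_divide import pmesh_divide
    from shallow_water import Domain

    # Build the domain first, then hand it to the partitioner.
    domain = pmesh_to_domain_instance('test-100.tsh', Domain)
    nodes, triangles, boundary, triangles_per_proc, stage_relabelled = \
        pmesh_divide(domain, 2, 1)   # split into a 2 x 1 grid of sub-boxes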
inundation/ga/storm_surge/parallel/run_advection.py (r1520 → r1575)

    The interactive debugger hooks are commented out:

    -import pdb
    -pdb.set_trace()
    +#import pdb
    +#pdb.set_trace()
     ...
     for t in domain.evolve(yieldstep = 0.1, finaltime = 1.5):
         domain.write_time()
    -pdb.set_trace()
    +#pdb.set_trace()
inundation/ga/storm_surge/parallel/run_parallel_advection.py (r1563 → r1575)

    Imports and the domain constructor follow the Parallel_Domain rename:

     from parallel_meshes import parallel_rectangle

    -from advection import Domain as Advection_Domain
    -from parallel_advection import Parallel_Advection_Domain
    +from advection import Domain
    +from parallel_advection import Parallel_Domain
     from parallel_advection import Transmissive_boundary, Dirichlet_boundary
     ...
     #Create advection domain with direction (1,-1)
    -domain = Parallel_Advection_Domain(points, vertices, boundary,
    +domain = Parallel_Domain(points, vertices, boundary,
                  full_send_dict, ghost_recv_dict, velocity=[1.0, 0.0])
inundation/ga/storm_surge/parallel/run_parallel_merimbula.py (r1563 → r1575)

    Adjusted to the new pmesh_divide interface: the script builds the full
    domain itself and recovers the plotting rectangle from its xy_extent:

     print "WARNING: number of subboxes is not equal to the number of proc"

    -[nodes, triangles, boundary, triangles_per_proc, rect] =\
    -    pmesh_divide(filename, Advection_Domain, nx, ny)
    +domain_full = pmesh_to_domain_instance(filename, Domain)
    +
    +nodes, triangles, boundary, triangles_per_proc, stage_relabelled =\
    +    pmesh_divide(domain_full, nx, ny)

     # subdivide the mesh
    +rect = array(domain_full.xy_extent, Float)

     print rect
inundation/ga/storm_surge/parallel/run_parallel_sw_merimbula.py (r1565 → r1575)

    The largest change of the set: the script now builds the full domain up
    front, sets the initial stage on it, partitions with the new
    pmesh_divide, and carries the relabelled stage values into the submesh
    as per-processor "full_stage" and "ghost_stage" arrays.

     #from shallow_water import Domain
    -from shallow_water import Domain as Shallow_Water_Domain
    -from parallel_shallow_water import Parallel_Shallow_Water_Domain
    +from shallow_water import Domain
    +from parallel_shallow_water import Parallel_Domain

     # mesh partition routines
    -from pmesh_divide import pmesh_divide, pmesh_divide_steve
    +from pmesh_divide import pmesh_divide
     from build_submesh import *
     from build_local import *
     from build_commun import *
    +from pmesh2domain import pmesh_to_domain_instance

    The Set_Stage initial-condition class moves from the bottom of the
    script to module level, ahead of the partitioning, so it can be applied
    to the full domain on processor 0:

    +class Set_Stage:
    +    """Set an initial condition with constant water height, for x<x0
    +    """
    +
    +    def __init__(self, x0=0.25, x1=0.5, h=1.0):
    +        self.x0 = x0
    +        self.x1 = x1
    +        self.h = h
    +
    +    def __call__(self, x, y):
    +        return self.h*((x>self.x0)&(x<self.x1))

     if myid == 0:

         # read in the test files
    -    #filename = 'test-100.tsh'
    -    filename = 'merimbula_10785_1.tsh'
    +    filename = 'test-100.tsh'
    +    #filename = 'merimbula_10785_1.tsh'
         ...
    -    [nodes, triangles, boundary, triangles_per_proc, rect] =\
    -        pmesh_divide_steve(filename, Shallow_Water_Domain, nx, ny)
    -
    -    # subdivide the mesh
    -
    -    rect = array(rect, Float)
    +    domain_full = pmesh_to_domain_instance(filename, Domain)
    +
    +    domain_full.set_quantity('stage', Set_Stage(200.0,300.0,1.0))
    +
    +    nodes, triangles, boundary, triangles_per_proc, stage_re = \
    +        pmesh_divide(domain_full, nx, ny)
    +
    +    rect = array(domain_full.xy_extent, Float)
    +
    +    print rect

         submesh = build_submesh(nodes, triangles, boundary, triangles_per_proc)

    +    lower = 0
    +    submesh["full_stage"] = []
    +    submesh["ghost_stage"] = []
    +
    +    for p in range(numprocs):
    +        upper = lower+triangles_per_proc[p]
    +        submesh["full_stage"].append(stage_re[lower:upper])
    +
    +        M = len(submesh["ghost_triangles"][p])
    +        submesh["ghost_stage"].append(zeros( (M,3) , Float))
    +        for j in range(M):
    +            submesh["ghost_stage"][p][j] = \
    +                stage_re[submesh["ghost_triangles"][p][j][0]]
    +
    +        lower = upper

    (Several temporary prints tracing full_stage and ghost_stage were added
    inside this loop as well.) The unpacking of the build_local_mesh and
    rec_submesh results drops its square brackets, and the domain
    construction, resolution order, initial condition and run length are
    all adjusted for a short test run:

    -domain = Parallel_Shallow_Water_Domain(points, vertices, boundary,
    +domain = Parallel_Domain(points, vertices, boundary,
                              full_send_dict = full_send_dict,
                              ghost_recv_dict = ghost_recv_dict)

     domain.initialise_visualiser(rect=rect)
    -domain.default_order = 2
    +domain.default_order = 1
     ...
    -#domain.set_quantity('stage', Set_Stage(250.0,300.0,1.0))
    -domain.set_quantity('stage', Set_Stage(756000.0,756500.0,4.0))
    +domain.set_quantity('stage', Set_Stage(250.0,300.0,1.0))
    +#domain.set_quantity('stage', Set_Stage(756000.0,756500.0,4.0))
     ...
    -yieldstep = 10
    -finaltime = 2000
    +yieldstep = 0.1
    +finaltime = 0.1

     for t in domain.evolve(yieldstep = yieldstep, finaltime = finaltime):
         if myid == 0:
             domain.write_time()
    +        print domain.quantities['stage'].centroid_values[0]
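The ghost-stage construction above relies on each ghost-triangle record
carrying its global triangle number in position 0, which indexes straight
into the relabelled stage array. A compact restatement of that loop body
as a standalone helper (the function name is ours):

    from Numeric import zeros, Float

    def ghost_stage_for_proc(submesh, stage_re, p):
        # Copy the stage rows (3 vertex values per triangle) of processor
        # p's ghost triangles out of the globally relabelled stage array.
        M = len(submesh["ghost_triangles"][p])
        ghost = zeros((M, 3), Float)
        for j in range(M):
            global_no = submesh["ghost_triangles"][p][j][0]
            ghost[j] = stage_re[global_no]
        return ghost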
inundation/ga/storm_surge/pyvolution/advection.py (r1564 → r1575)

    Apart from a trailing-whitespace fix in initialise_visualiser,
    compute_fluxes now falls back to the pure-Python flux loop when weave
    is not installed, instead of failing:

     def compute_fluxes(self):

    -    self.compute_fluxes_weave()
    +    try:
    +        import weave
    +        self.weave_available = True
    +    except:
    +        self.weave_available = False
    +
    +    if self.weave_available:
    +        self.compute_fluxes_weave()
    +    else:
    +        self.compute_fluxes_python()
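A possible refinement, offered as an assumption rather than anything in
the changeset: the availability probe could run once at module import
time instead of on every compute_fluxes call, like so (method body as it
would appear inside the Domain class):

    # Probe for weave once, when the module is first imported.
    try:
        import weave
        _weave_available = True
    except ImportError:
        _weave_available = False

    def compute_fluxes(self):
        # Dispatch using the cached module-level flag.
        if _weave_available:
            self.compute_fluxes_weave()
        else:
            self.compute_fluxes_python()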
inundation/ga/storm_surge/pyvolution/mesh.py (r1392 → r1575)

    A note is added beside a long commented-out assertion
    (assert self.neighbours[id,edge] < 0):

    +#NOTE (Ole): I reckon this was resolved late 2004?
    +#
    +#See domain.set_boundary
inundation/ga/storm_surge/wiki/issues.txt (r1137 → r1575)

    Trailing whitespace is stripped from the existing entries (the
    geo_reference issue and the closed pmesh internal-boundary issue are
    otherwise unchanged), and one new open issue is recorded:

    +Issue: With reflective boundaries, and very small timesteps and very shallow
    +stage, there seems to be a situation where we lose mass. Must investigate!
    +Importance: Mid-High
    +Action: Check out boundary condition