Changeset 3185

- Timestamp: Jun 20, 2006, 1:54:04 PM
- Location: inundation/parallel
- Files: 3 edited
inundation/parallel/documentation/code/RunParallelSwMerimbulaMetis.py (r3096 → r3185)

 # Read in the test files

-filename = 'merimbula_10785_1.tsh'
+filename = 'parallel/merimbula_10785_1.tsh'

 # Build the whole mesh
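The revised path assumes the demo is launched from the directory that contains parallel/ (the inundation directory). Below is a small, hypothetical guard that is not part of this changeset; it simply fails early with a readable message when the mesh file is not visible from the current working directory, and the error text is illustrative only.

    import os
    import sys

    # Hypothetical check (not in the changeset): make sure the mesh file can be
    # found from the current working directory before the mesh reader runs.
    filename = 'parallel/merimbula_10785_1.tsh'
    if not os.path.exists(filename):
        sys.exit('Cannot find %s; run this script from the inundation directory.'
                 % filename)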
inundation/parallel/documentation/results.tex (r3184 → r3185)

 \section{Advection, Rectangular Domain}

-The first example looked at the rectangular domain example given in Section \ref{sec:codeRPA}, excecpt that we changed the finaltime time to 1.0 (\code{domain.evolve(yieldstep = 0.1, finaltime = 1.0)}).
+The first example looked at the rectangular domain example given in Section \ref{subsec:codeRPA}, excecpt that we changed the finaltime time to 1.0 (\code{domain.evolve(yieldstep = 0.1, finaltime = 1.0)}).

 For this particular example we can control the mesh size by changing the parameters \code{N} and \code{M} given in the following section of code taken from
-Section \ref{sec:codeRPA}.
+Section \ref{subsec:codeRPA}.

 \begin{verbatim}
...
 the Merimbula test problem. Inother words, we ran the code given in Section
 \ref{subsec:codeRPMM}, except the final time was reduced to 10000
-(\code{ \label{subsec:codeRPMM}). The results are given in Table \ref{tbl:rpm}.
+(\code{finaltime = 10000}). The results are given in Table \ref{tbl:rpm}.
 These are good efficiency results, especially considering the structure of the
-Merimbula mesh. Note that since we are solving an advection problem the amount
-of calculation done on each triangle is relatively low, when we more to other
-problems that involve more calculations we would expect the computation to
-communication ratio to increase and thus get an increase in efficiency.
+Merimbula mesh.
+%Note that since we are solving an advection problem the amount of calculation
+%done on each triangle is relatively low, when we more to other problems that
+%involve more calculations we would expect the computation to communication ratio to increase and thus get an increase in efficiency.

 \begin{table}
 \caption{Parallel Efficiency Results for the Advection Problem on the
-Merimbula Mesh {\tt N} = 160, {\tt M} = 160.\label{tbl:rpm}}
+Merimbula Mesh.\label{tbl:rpm}}
 \begin{center}
 \begin{tabular}{|c|c c|}\hline
...
 \end{center}
 \end{table}
+
+\section{Shallow Water, Merimbula Mesh}
+
+The final example we looked at is the shallow water equation on the
+Merimbula mesh. We used the code listed in Section \ref{subsec:codeRPSMM}. The
+results are listed in Table \ref{tbl:rpsm}. The efficiency results are not as
+good as initally expected so we profiled the code and found that
+the problem is with the \code{update_boundary} routine in the {\tt domain.py}
+file. On one processor the \code{update_boundary} routine accounts for about
+72\% of the total computation time and unfortunately it is difficult to
+parallelise this routine. When metis subpartitions the mesh it is possible
+that one processor will only get a few boundary edges (some may not get any)
+while another processor may contain a relatively large number of boundary
+edges. The profiler indicated that when running the problem on 8 processors,
+Processor 0 spent about 3.8 times more doing the \code{update_boundary}
+calculations than Processor 7. This load imbalance reduced the parallel
+efficiency.
+
+Before doing the shallow equation calculations on a larger number of
+processors we recommend that the \code{update_boundary} calculations be
+optimised as much as possible to reduce the effect of the load imbalance.
+
+\begin{table}
+\caption{Parallel Efficiency Results for the Shallow Water Equation on the
+Merimbula Mesh.\label{tbl:rpsm}}
+\begin{center}
+\begin{tabular}{|c|c c|}\hline
+$n$ & $T_n$ (sec) & $E_n (\%)$ \\\hline
+1 & 7.04 & \\
+2 & 3.62 & 97 \\
+4 & 1.94 & 91 \\
+8 & 1.15 & 77 \\\hline
+\end{tabular}
+\end{center}
+\end{table}
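The efficiency column in the new shallow-water table is consistent with the usual definition E_n = T_1 / (n T_n). The helper below is not part of the repository; it only reproduces the quoted percentages from the listed timings, and the function name is ours.

    # Reproduce the efficiency figures in the shallow-water table:
    # E_n = 100 * T_1 / (n * T_n).

    def parallel_efficiency(t1, tn, n):
        """Percentage parallel efficiency of an n-process run."""
        return 100.0 * t1 / (n * tn)

    # Timings (seconds) taken from the table added by this changeset.
    timings = {1: 7.04, 2: 3.62, 4: 1.94, 8: 1.15}
    for n in sorted(timings):
        print('%d processes: %.0f%%'
              % (n, parallel_efficiency(timings[1], timings[n], n)))
    # Prints 100%, 97%, 91% and 77% for 1, 2, 4 and 8 processes.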
inundation/parallel/parallel_shallow_water.py (r3117 → r3185)

 from pyvolution.shallow_water import *
 from Numeric import zeros, Float, Int, ones, allclose, array
-from pypar_dist import pypar
+#from pypar_dist import pypar
+import pypar

 class Parallel_Domain(Domain):
...
         """

-
-        Domain.update_timestep(self, yieldstep, finaltime)
+        #LINDA:
+        # Moved below so timestep is found before doing update
+
+        #Domain.update_timestep(self, yieldstep, finaltime)

         import time
...
         self.communication_broadcast_time += time.time()-t0

+        # LINDA:
+        # Moved timestep to here
+
+        Domain.update_timestep(self, yieldstep, finaltime)

...
         """Calculate local timestep
         """
+
+        # LINDA: Moved below so timestep is updated before
+        # calculating statistic

         #Compute minimal timestep on local process
-        Domain.update_timestep(self, yieldstep, finaltime)
+        #Domain.update_timestep(self, yieldstep, finaltime)

         pypar.barrier()
...
         self.timestep = self.global_timestep[0]

+        # LINDA:
+        # update local stats now
+
+        #Compute minimal timestep on local process
+        Domain.update_timestep(self, yieldstep, finaltime)

     #update_timestep = update_timestep_1
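As the LINDA comments indicate, the change defers the call to Domain.update_timestep until after the communicated global timestep has been stored in self.timestep, so the base-class update and its statistics use the value that will actually be taken. The sketch below is not the ANUGA implementation; it only illustrates the agree-on-a-global-minimum step, assuming pypar's simple rank/size/send/receive interface.

    import pypar

    def global_min_timestep(local_timestep):
        """Illustrative only: gather each process's candidate timestep on
        process 0, take the minimum and return it on every process."""
        myid = pypar.rank()
        numprocs = pypar.size()

        if myid == 0:
            smallest = local_timestep
            for source in range(1, numprocs):
                smallest = min(smallest, pypar.receive(source))
            for destination in range(1, numprocs):
                pypar.send(smallest, destination)
            return smallest
        else:
            pypar.send(local_timestep, 0)
            return pypar.receive(0)

    # Only once the global value is known would Domain.update_timestep be
    # asked to update the local statistics -- the point of the reordering.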