Changeset 2723
Timestamp: Apr 18, 2006, 8:50:20 PM
Location: inundation/parallel/documentation
Files: 3 edited
inundation/parallel/documentation/parallel.tex
r2697 → r2723. The Metis URL is updated (from \url{http://www-users.cs.umn.edu/~karypis/metis/} to \url{http://glaros.dtc.umn.edu/gkhome/metis/metis/overview}); the remaining changes re-wrap paragraphs and remove trailing whitespace. The affected text now reads:

There are four main steps required to run the code in parallel. They are:
\begin{enumerate}
\item subdivide the domain into a set of non-overlapping subdomains
(\code{pmesh_divide_metis} from {\tt pmesh_divide.py}),
\item build a \lq ghost\rq\ or communication layer of boundary triangles
around each subdomain and define the communication pattern
(\code{build_submesh} from {\tt build_submesh.py}),
\item distribute the subdomains over the processors (\code{send_submesh}
and \code{rec_submesh} from {\tt build_commun.py}),
\item update the numbering scheme for the local domain assigned to a
processor (\code{build_local_mesh} from {\tt build_local.py}).
\end{enumerate}
See Figure \ref{fig:subpart}
…
\subsection{Subdividing the Global Domain}

The first step in parallelising the code is to subdivide the domain into
equally sized partitions. On a rectangular domain this may be done by a
simple co-ordinate based dissection, but on a complicated domain such as the
Merimbula grid shown in Figure \ref{fig:subpart} a more sophisticated
approach must be used. We use pymetis, a Python wrapper around the Metis
(\url{http://glaros.dtc.umn.edu/gkhome/metis/metis/overview}) partitioning
library. The \code{pmesh_divide_metis} function defined in
{\tt pmesh_divide.py} uses Metis to divide the domain for parallel
computation. Metis was chosen as the partitioner based on the results
reported in \cite{gk:metis}.

\begin{figure}[hbtp]
…
\end{figure}

Figure \ref{fig:mermesh4} shows the Merimbula grid partitioned over four
processors. Table \ref{tbl:mermesh4} gives the node distribution over the
four processors, while Table \ref{tbl:mermesh8} shows the distribution over
eight processors. These results imply that pymetis gives a reasonably well
balanced partition of the domain.
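As an aside for readers following the excerpt, the sketch below shows how the first two steps named above (partitioning with \code{pmesh_divide_metis}, then building the ghost layer with \code{build_submesh}) might be driven from Processor 0. Only the function and module names come from the documentation; the argument lists, return values, the use of pypar as the MPI layer, and the placeholder variables domain and quantities are assumptions for illustration and should be checked against pmesh_divide.py and build_submesh.py.

    # Illustrative sketch only -- run on Processor 0.  Function names come from
    # the documentation above; argument lists and return values are assumed and
    # must be checked against pmesh_divide.py / build_submesh.py.
    import pypar                          # assumed MPI layer

    from pmesh_divide import pmesh_divide_metis
    from build_submesh import build_submesh

    numprocs = pypar.size()               # number of MPI processes

    if pypar.rank() == 0:
        # Step 1: cut the global mesh into numprocs non-overlapping subdomains.
        # 'domain' is a placeholder for the already-constructed global domain.
        nodes, triangles, boundary, triangles_per_proc = \
            pmesh_divide_metis(domain, numprocs)

        # Step 2: surround each subdomain with a 'ghost' layer of boundary
        # triangles and record the resulting communication pattern.
        # 'quantities' is a placeholder for the dictionary of quantities
        # defined on the global mesh.
        submesh = build_submesh(nodes, triangles, boundary,
                                quantities, triangles_per_proc)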
…
\subsection{Sending the Subdomains}

All of the functions described so far must be run in serial on Processor 0;
the next step is to start the parallel computation by spreading the
subdomains over the processors. The communication is carried out by
\code{send_submesh} and \code{rec_submesh}, defined in {\tt build_commun.py}.
The \code{send_submesh} function should be called on Processor 0 and sends
Subdomain $p$ to Processor $p$, while \code{rec_submesh} should be called by
Processor $p$ to receive Subdomain $p$ from Processor 0. Note that the order
of communication is very important: if any change is made to the
\code{send_submesh} function, the corresponding change must be made to the
\code{rec_submesh} function.

While it is possible to have Processor 0 communicate its subdomain to
itself, that is an expensive and unnecessary communication call. The
{\tt build_commun.py} file therefore also includes a function called
\code{extract_hostmesh}, which simply extracts Subdomain $0$.
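The send/receive pattern described above can be summarised in a short sketch. Again this is illustrative only: the function and module names come from the documentation, while the argument lists (and the variables submesh and triangles_per_proc, carried over from the partitioning sketch) are assumptions to be checked against build_commun.py.

    # Illustrative sketch only -- distributing the subdomains.  Argument lists
    # are assumed; see build_commun.py for the real interface.
    import pypar

    from build_commun import send_submesh, rec_submesh, extract_hostmesh

    myid = pypar.rank()
    numprocs = pypar.size()

    if myid == 0:
        # Processor 0 sends Subdomain p to Processor p (p = 1, ..., numprocs-1)
        # and keeps Subdomain 0 for itself via extract_hostmesh, avoiding an
        # unnecessary send-to-self.
        for p in range(1, numprocs):
            send_submesh(submesh, triangles_per_proc, p)
        local_mesh = extract_hostmesh(submesh)
    else:
        # Processor p receives Subdomain p from Processor 0.  The receive order
        # must mirror the send order inside send_submesh.
        local_mesh = rec_submesh(0)

    # Step 4 (build_local_mesh from build_local.py) would then renumber the
    # local data on each processor.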
inundation/parallel/documentation/report.tex
r2697 → r2723. The \includeonly{parallel} directive is commented out, so the full report (not just the parallel chapter) is built, and the URL in the Karypis and Kumar reference is updated:

 \setcounter{secnumdepth}{3}

-\includeonly{parallel}
+%\includeonly{parallel}

 \begin{document}
…
 \bibitem{gk:metis}
 George Karypis and Vipin Kumar.
 \newblock A fast and high quality multilevel scheme for partitioning irregular graphs.
 \newblock {\em SIAM Journal on Scientific Computing}, 20(1):359--392, 1999.
-\newblock \url{http://www-users.cs.umn.edu/~karypis/publications/Papers/PDF/mlevel\_serial.pdf}
+\newblock \url{http://glaros.dtc.umn.edu/gkhome/fetch/papers/mlSIAMSC99.pdf}
 \end{thebibliography}
inundation/parallel/documentation/visualisation.tex
r2697 → r2723. The screenshot is now included as an EPS file rather than a PDF:

 amount unless an overriding value exists in scale\_z.
 \end{itemize}
 Screenshot:\\
-\includegraphics{vis-screenshot.pdf}\\
+\includegraphics{vis-screenshot.eps}\\

 Unlike the old VPython visualiser, the behaviour of the VTK
…