Changeset 2723


Timestamp:
Apr 18, 2006, 8:50:20 PM (19 years ago)
Author:
steve
Message:
 
Location:
inundation/parallel/documentation
Files:
3 edited

Legend: unchanged lines are shown with a leading space, removed lines with '-', and added lines with '+'.
  • inundation/parallel/documentation/parallel.tex

    r2697 → r2723

     There are four main steps required to run the code in parallel. They are:
     \begin{enumerate}
    -\item subdivide the domain into a set of non-overlapping subdomains (\code{pmesh_divide_metis} from {\tt pmesh_divide.py}),
    -\item build a \lq ghost\rq\ or communication layer of boundary triangles around each subdomain and define the communication pattern (\code{build_submesh} from {\tt build_submesh.py}),
    -\item distribute the subdomains over the processors (\code{send_submesh} and \code{rec_submesh} from {\tt build_commun.py}),
    -\item and update the numbering scheme for the local domain assigned to a processor (\code{build_local_mesh} from {\tt build_local.py}).
    +\item subdivide the domain into a set of non-overlapping subdomains
    +(\code{pmesh_divide_metis} from {\tt pmesh_divide.py}),
    +\item build a \lq ghost\rq\ or communication layer of boundary triangles
    +around each subdomain and define the communication pattern (\code{build_submesh} from {\tt build_submesh.py}),
    +\item distribute the subdomains over the processors (\code{send_submesh}
    +and \code{rec_submesh} from {\tt build_commun.py}),
    +\item and update the numbering scheme for the local domain assigned to a
    +processor (\code{build_local_mesh} from {\tt build_local.py}).
     \end{enumerate}
     See Figure \ref{fig:subpart}
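Read end to end, the four steps above amount to: split the triangle list into subdomains, find each subdomain's neighbouring "ghost" triangles, hand each piece to its processor, and relabel the global node ids locally. The toy sketch below walks a four-triangle mesh through that pipeline in plain, serial Python; the helper functions and data layout are made up for illustration and are not the ANUGA routines named in the list.

# Toy walk-through of the four steps above (partition, ghost layer,
# distribute, renumber) in plain Python.  Illustration only; the helpers
# below are made up and are not the ANUGA routines.

# Global mesh: four triangles given as triples of global node ids.
triangles = [(0, 1, 4), (0, 4, 3), (1, 2, 5), (1, 5, 4)]

# Step 1: assign each triangle to a subdomain (a trivial split standing in
# for the Metis-based partition).
part = [0, 0, 1, 1]            # part[t] = processor owning triangle t

# Step 2: the ghost layer of subdomain p -- triangles owned elsewhere that
# share a node with a locally owned triangle.
def submesh(p):
    own = [t for t, owner in enumerate(part) if owner == p]
    own_nodes = {n for t in own for n in triangles[t]}
    ghost = [t for t, owner in enumerate(part)
             if owner != p and own_nodes & set(triangles[t])]
    return own, ghost

# Step 3: "distribute" -- in the real code processor 0 sends submesh p to
# processor p; here the submeshes are simply collected in a list.
submeshes = [submesh(p) for p in range(2)]

# Step 4: renumber -- relabel global node ids with consecutive local indices
# so each processor works with a small, dense numbering.
def build_local(own, ghost):
    local_id, local_tris = {}, []
    for t in own + ghost:
        local_tris.append(tuple(local_id.setdefault(n, len(local_id))
                                for n in triangles[t]))
    return local_id, local_tris

for p, (own, ghost) in enumerate(submeshes):
    local_id, local_tris = build_local(own, ghost)
    print(p, "owns", own, "ghost", ghost, "local triangles", local_tris)

On this tiny mesh every foreign triangle ends up in the ghost layer; on a real grid the ghost layer is only a thin band of boundary triangles around each subdomain.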
     
     \subsection {Subdividing the Global Domain}

    -The first step in parallelising the code is to subdivide the domain into
    -equally sized partitions. On a rectangular domain this may be done by a simple co-ordinate based dissection, but on a complicated domain such as the Merimbula grid shown in Figure \ref{fig:subpart} a more sophisticated approach must be used. We use pymetis, a Python wrapper around the Metis
    -(\url{http://www-users.cs.umn.edu/~karypis/metis/}) partitioning
    -library. The \code{pmesh_divide_metis} function defined in {\tt pmesh_divide.py} uses Metis to divide the domain for parallel computation. Metis was chosen as the partitioner based on the results in the paper \cite{gk:metis}.
    +The first step in parallelising the code is to subdivide the domain
    +into equally sized partitions. On a rectangular domain this may be
    +done by a simple co-ordinate based dissection, but on a complicated
    +domain such as the Merimbula grid shown in Figure \ref{fig:subpart}
    +a more sophisticated approach must be used. We use pymetis, a
    +Python wrapper around the Metis
    +(\url{http://glaros.dtc.umn.edu/gkhome/metis/metis/overview})
    +partitioning library. The \code{pmesh_divide_metis} function defined
    +in {\tt pmesh_divide.py} uses Metis to divide the domain for
    +parallel computation. Metis was chosen as the partitioner based on
    +the results in the paper \cite{gk:metis}.

     \begin{figure}[hbtp]
     
     \end{figure}

    -Figure \ref{fig:mermesh4} shows the Merimbula grid partitioned over four processors. Table \ref{tbl:mermesh4} gives the node distribution over the four processors while Table \ref{tbl:mermesh8} shows the distribution over eight processors. These results imply that pymetis gives a reasonably well balanced partition of the domain.
    +Figure \ref{fig:mermesh4} shows the Merimbula grid partitioned over four processors. Table \ref{tbl:mermesh4} gives the node distribution over the four processors while Table \ref{tbl:mermesh8} shows the distribution over eight processors. These results imply that pymetis gives a reasonably well balanced partition of the domain.
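To make the notion of a well balanced partition concrete, the sketch below applies the simple coordinate-based dissection mentioned earlier (a stand-in for the Metis call, not ANUGA code) to randomly generated triangle centroids and prints how many triangles each processor receives; the random points merely play the role of a real grid such as Merimbula.

# Simple coordinate-based dissection, for illustration only: sort triangle
# centroids by x and cut the ordered list into equal-sized strips, then
# report the per-processor load.
import random

random.seed(1)
ntriangles, nprocs = 10000, 4

# Random centroids standing in for a real triangulation.
centroids = [(random.random(), random.random()) for _ in range(ntriangles)]

# Partition: triangle t goes to the strip its x-rank falls into.
order = sorted(range(ntriangles), key=lambda t: centroids[t][0])
part = [0] * ntriangles
for rank, t in enumerate(order):
    part[t] = rank * nprocs // ntriangles

counts = [part.count(p) for p in range(nprocs)]
print("triangles per processor:", counts)   # perfectly balanced: [2500, 2500, 2500, 2500]

Metis aims for the same balance while also minimising the number of triangle edges cut by the partition, which keeps the ghost layers, and hence the communication volume, small.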

     \begin{figure}[hbtp]
     
     \subsection {Sending the Subdomains}

    -All of the functions described so far must be run in serial on Processor 0; the next step is to start the parallel computation by spreading the subdomains over the processors. The communication is carried out by
    +All of the functions described so far must be run in serial on Processor 0; the next step is to start the parallel computation by spreading the subdomains over the processors. The communication is carried out by
     \code{send_submesh} and \code{rec_submesh} defined in {\tt build_commun.py}.
    -The \code{send_submesh} function should be called on Processor 0 and sends Subdomain $p$ to Processor $p$, while \code{rec_submesh} should be called by Processor $p$ to receive Subdomain $p$ from Processor 0. Note that the order of communication is very important; if any changes are made to the \code{send_submesh} function, the corresponding change must be made to the \code{rec_submesh} function.
    +The \code{send_submesh} function should be called on Processor 0 and sends Subdomain $p$ to Processor $p$, while \code{rec_submesh} should be called by Processor $p$ to receive Subdomain $p$ from Processor 0. Note that the order of communication is very important; if any changes are made to the \code{send_submesh} function, the corresponding change must be made to the \code{rec_submesh} function.

     While it is possible to get Processor 0 to communicate its subdomain to itself, it is an expensive and unnecessary communication call. The {\tt build_commun.py} file also includes a function called \code{extract_hostmesh}, which simply extracts Subdomain $0$.
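The requirement that send_submesh and rec_submesh stay in step is the usual point-to-point rule that both ends must agree on the order, tags and content of the messages. The sketch below illustrates that pattern with mpi4py and a placeholder dictionary in place of a real submesh; it is not the communication layer used by build_commun.py, and the data layout is invented for the example.

# Illustration (using mpi4py, not the code's own communication routines) of
# the send/receive pattern described above: processor 0 builds all submeshes,
# keeps subdomain 0 for itself and sends subdomain p to processor p; every
# other processor posts the matching receive.
# Run with e.g.:  mpirun -np 4 python distribute_sketch.py
from mpi4py import MPI

comm = MPI.COMM_WORLD
p, numprocs = comm.Get_rank(), comm.Get_size()

if p == 0:
    # Placeholder submeshes; in the real code these hold nodes, triangles,
    # boundary tags and the ghost-layer communication pattern.
    submeshes = [{"subdomain": q, "triangles": []} for q in range(numprocs)]

    for q in range(1, numprocs):          # subdomain 0 is not sent to self
        comm.send(submeshes[q], dest=q, tag=q)
    local = submeshes[0]                  # the extract_hostmesh step
else:
    local = comm.recv(source=0, tag=p)    # matching receive for subdomain p

print("processor", p, "has subdomain", local["subdomain"])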
  • inundation/parallel/documentation/report.tex

    r2697 → r2723

     \setcounter{secnumdepth}{3}

    -\includeonly{parallel}
    +%\includeonly{parallel}

     \begin{document}
     
     \bibitem{gk:metis}
     George Karypis and Vipin Kumar.
    -\newblock A fast and high quality multilevel scheme for partitioning irregular graphs.
    +\newblock A fast and high quality multilevel scheme for partitioning irregular graphs.
     \newblock {\em SIAM Journal on Scientific Computing}, 20(1):359--392, 1999.
    -\newblock \url{http://www-users.cs.umn.edu/~karypis/publications/Papers/PDF/mlevel\_serial.pdf}
    +\newblock \url{http://glaros.dtc.umn.edu/gkhome/fetch/papers/mlSIAMSC99.pdf}
     \end{thebibliography}

  • inundation/parallel/documentation/visualisation.tex

    r2697 → r2723

       amount unless an overriding value exists in scale\_z.
     \end{itemize}
    -Screenshot:\\
    -\includegraphics{vis-screenshot.pdf}\\
    +Screenshot:\\
    +\includegraphics{vis-screenshot.eps}\\

     Unlike the old VPython visualiser, the behaviour of the VTK