Changeset 2849


Timestamp: May 11, 2006, 4:08:02 PM
Author: ole
Message: Editorial suggestions
Location: inundation/parallel/documentation
Files: 2 edited

  • inundation/parallel/documentation/parallel.tex

    r2786 r2849  
  \begin{enumerate}
  \item partition the mesh into a set of non-overlapping submeshes
- (\code{pmesh_divide_metis} from {\tt pmesh_divde.py}),
+ (\code{pmesh_divide_metis} from {\tt pmesh_divide.py}),
  \item build a \lq ghost\rq\ or communication layer of triangles
  around each submesh and define the communication pattern (\code{build_submesh} from {\tt build_submesh.py}),
     
  \end{figure}

- Figure \ref{fig:mergrid4} shows the Merimbula grid partitioned over four processor. Table \ref{tbl:mer4} gives the node distribution over the four processors while Table \ref{tbl:mer8} shows the distribution over eight processors. These results imply that Pymetis gives a reasonably well balanced partition of the mesh.
+ Figure \ref{fig:mergrid4} shows the Merimbula grid partitioned over four processors. Note that one submesh may comprise several unconnected mesh partitions. Table \ref{tbl:mer4} gives the node distribution over the four processors while Table \ref{tbl:mer8} shows the distribution over eight processors. These results imply that Pymetis gives a reasonably well balanced partition of the mesh.

  \begin{figure}[hbtp]
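The claim about a "reasonably well balanced partition" can be checked mechanically. The sketch below is illustrative only (not part of the changeset or the ANUGA code): given a membership list assigning each triangle to a processor, it tallies the per-processor counts and reports the load imbalance as max count over mean count.

```python
from collections import Counter

def partition_balance(triangle_to_proc):
    """Tally how many triangles each processor receives and report the
    load imbalance (largest per-processor count divided by the mean)."""
    counts = Counter(triangle_to_proc)
    mean = len(triangle_to_proc) / len(counts)
    imbalance = max(counts.values()) / mean
    return dict(counts), imbalance

# Hypothetical assignment of 12 triangles to 4 processors
membership = [0, 0, 0, 1, 1, 1, 2, 2, 2, 3, 3, 3]
counts, imbalance = partition_balance(membership)
print(counts)     # {0: 3, 1: 3, 2: 3, 3: 3}
print(imbalance)  # 1.0 (perfectly balanced)
```

An imbalance near 1.0 indicates a well balanced partition; the tables referenced above report exactly this kind of per-processor distribution.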
     
  Looking at Figure \ref{fig:subdomaing} we see that after each \code{evolve} step Processor 0 will have to send the updated values for Triangle 3 and Triangle 5 to Processor 1, and similarly Processor 1 will have to send the updated values for triangles 4, 7 and 6 (recall that Submesh $p$ will be assigned to Processor $p$). The \code{build_submesh} function builds a dictionary that defines the communication pattern.

- Finally, the ANUGA code assumes that the triangles (and nodes etc.) are numbered consecutively starting from 1. Consequently, if Submesh 1 in Figure \ref{fig:subdomaing} was passed into the \code{evolve} calculations it would crash. The \code{build_submesh} function determines a local numbering scheme for each submesh, but it does not actually update the numbering, that is left to \code{build_local}.
+ Finally, the ANUGA code assumes that the triangles (and nodes etc.) are numbered consecutively starting from 1 (FIXME (Ole): Isn't it 0?). Consequently, if Submesh 1 in Figure \ref{fig:subdomaing} was passed into the \code{evolve} calculations it would crash due to the 'missing' triangles. The \code{build_submesh} function determines a local numbering scheme for each submesh, but it does not actually update the numbering, that is left to the function \code{build_local}.

  \subsection {Sending the Submeshes}\label{sec:part3}
    120120
- All of functions described so far must be run in serial on Processor 0, the next step is to start the parallel computation by spreading the submeshes over the processors. The communication is carried out by
+ All of functions described so far must be run in serial on Processor 0. The next step is to start the parallel computation by spreading the submeshes over the processors. The communication is carried out by
  \code{send_submesh} and \code{rec_submesh} defined in {\tt build_commun.py}.
  The \code{send_submesh} function should be called on Processor 0 and sends the Submesh $p$ to Processor $p$, while \code{rec_submesh} should be called by Processor $p$ to receive Submesh $p$ from Processor 0. Note that the order of communication is very important, if any changes are made to the \code{send_submesh} function the corresponding change must be made to the \code{rec_submesh} function.
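The warning that \code{send_submesh} and \code{rec_submesh} must stay in lockstep can be made concrete with a minimal sketch. This is not the ANUGA implementation (the real code uses MPI point-to-point calls); a deque stands in for the channel and the component names are illustrative. The point it shows is that both sides must serialise the submesh components in the same agreed order, or the receiver misinterprets the stream.

```python
from collections import deque

channel = deque()  # stands in for the MPI point-to-point channel

# Both sides must agree on this order; changing it on one side only
# would corrupt the received submesh.
SEND_ORDER = ["nodes", "triangles", "boundary", "quantities"]

def send_submesh(submesh):
    # Processor 0: push each component in the agreed order.
    for key in SEND_ORDER:
        channel.append(submesh[key])

def rec_submesh():
    # Processor p: pop components in the *same* order and rebuild the dict.
    return {key: channel.popleft() for key in SEND_ORDER}

submesh = {"nodes": [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)],
           "triangles": [(0, 1, 2)],
           "boundary": {},
           "quantities": {"stage": [0.5]}}
send_submesh(submesh)
received = rec_submesh()
print(received == submesh)  # True
```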
     
    127127
  \subsection {Building the Local Mesh}
- After using \code{send_submesh} and \code{rec_submesh}, Processor $p$ should have its own local copy of Submesh $p$, however as stated previously the triangle numbering may be incorrect. The \code{build_local_mesh} function from {\tt build_local.py} primarily focuses on renumbering the information stored with the submesh; including the nodes, vertices and quantities. Figure \ref{fig:subdomainf} shows what the mesh in each processor may look like.
+ After using \code{send_submesh} and \code{rec_submesh}, Processor $p$ should have its own local copy of Submesh $p$, however as stated previously the triangle numbering will be incorrect on all processors except number $0$. The \code{build_local_mesh} function from {\tt build_local.py} primarily focuses on renumbering the information stored with the submesh; including the nodes, vertices and quantities. Figure \ref{fig:subdomainf} shows what the mesh in each processor may look like.

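The renumbering that \code{build_local_mesh} performs can be sketched in a few lines. This is an illustration of the technique, not the ANUGA routine: given a submesh whose triangles refer to non-consecutive global node ids, it builds a map from global ids to consecutive local ids (in order of first appearance) and rewrites the triangle list.

```python
def renumber_nodes(triangles):
    """Map each global node id to a consecutive local id (0, 1, 2, ...)
    in order of first appearance, and rewrite the triangles in terms of
    the local ids."""
    node_map = {}
    for tri in triangles:
        for node in tri:
            if node not in node_map:
                node_map[node] = len(node_map)
    local = [tuple(node_map[n] for n in tri) for tri in triangles]
    return node_map, local

# A submesh whose global node ids are non-consecutive (2, 5, 9, 11)
tris = [(2, 5, 9), (5, 11, 9)]
node_map, local_tris = renumber_nodes(tris)
print(node_map)    # {2: 0, 5: 1, 9: 2, 11: 3}
print(local_tris)  # [(0, 1, 2), (1, 3, 2)]
```

Quantities and boundary tags stored against the old ids would be remapped through the same dictionary, which is essentially what the text above means by "renumbering the information stored with the submesh".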
     
  The first example in Section \ref{sec:codeRPA} solves the advection equation on a
  rectangular mesh. A rectangular mesh is highly structured so a coordinate based decomposition can be use and the partitioning is simply done by calling the
- routine \code{parallel_rectangle} as show below.
+ routine \code{parallel_rectangle} as shown below.
  \begin{verbatim}
  #######################
     
              pmesh_divide_metis(domain_full, numprocs)

-     # Build the mesh that should be assigned to each processor,
-     # this includes ghost nodes and the communicaiton pattern
+     # Build the mesh that should be assigned to each processor.
+     # This includes ghost nodes and the communication pattern

      submesh = build_submesh(nodes, triangles, boundary, quantities, \
     
      # Read in the mesh partition that belongs to this
      # processor (note that the information is in the
-     # correct form for the GA data structure
+     # correct form for the ANUGA data structure

      points, vertices, boundary, quantities, ghost_recv_dict, full_send_dict = \
     
  \end{verbatim}

- The processors receive a given subpartition by calling \code{rec_submesh}. The \code{rec_submesh} routine also calls \code{build_local_mesh}. The \code{build_local_mesh} routine described in Section \ref{sec:part4} ensures that the information is stored in a way that is compatible with the Domain datastructure. This means, for example, that the triangles and nodes must be numbered consecutively starting from 1.
+ The processors receive a given subpartition by calling \code{rec_submesh}. The \code{rec_submesh} routine also calls \code{build_local_mesh}. The \code{build_local_mesh} routine described in Section \ref{sec:part4} ensures that the information is stored in a way that is compatible with the Domain datastructure. This means, for example, that the triangles and nodes must be numbered consecutively starting from 1 (FIXME (Ole): or is it 0?).
  \begin{verbatim}
      points, vertices, boundary, quantities, ghost_recv_dict, full_send_dict = \
  • inundation/parallel/documentation/results.tex

    r2697 r2849  
  \]
  Where $T_1$ is the time for the single processor case, $n$ is the
- number of processors and $T_{n,1}$ is the first processor of the $n$
+ number of processors and $T_{n,1}$ is the time for the first processor of the $n$
  cpu run.  Results were generated using an 8 processor test run,
  compared against the 1 processor test run. The test script used was
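The quantity being corrected in this hunk can be computed directly from the timings. The displayed formula itself is not visible in this excerpt, so the sketch below assumes the usual definitions implied by the surrounding text: speedup $S_n = T_1 / T_{n,1}$ and parallel efficiency $E_n = S_n / n$, with hypothetical timing values.

```python
def speedup(t_serial, t_parallel_first_proc):
    """Speedup S_n = T_1 / T_{n,1}, where T_1 is the single-processor
    time and T_{n,1} the time on the first processor of the n-cpu run
    (definitions taken from the surrounding text)."""
    return t_serial / t_parallel_first_proc

def efficiency(t_serial, t_parallel_first_proc, n):
    """Parallel efficiency E_n = S_n / n."""
    return speedup(t_serial, t_parallel_first_proc) / n

# Hypothetical timings: 120 s serial, 18 s on processor 0 of an 8-cpu run
print(speedup(120.0, 18.0))        # ~6.67
print(efficiency(120.0, 18.0, 8))  # ~0.83
```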