Changeset 3244
- Timestamp: Jun 27, 2006, 2:20:08 PM
- File: 1 edited
inundation/parallel/documentation/parallel.tex (r3164 → r3244)

@@ lines 144-148 @@
  The first example in Section \ref{subsec:codeRPA} solves the advection equation on a
- rectangular mesh. A rectangular mesh is highly structured so a coordinate based decomposition can be use and the partitioning is simply done by calling the
+ rectangular mesh. A rectangular mesh is highly structured so a coordinate based decomposition can be used and the partitioning is simply done by calling the
  routine \code{parallel_rectangle} as shown below.
  \begin{verbatim}

@@ lines 252-259 @@
  \section{Running the Code}
  \subsection{Compiling Pymetis and Metis}
- Currently, Metis and its Python wrapper Pymetis are not built by the
- \verb|compile_all.py| script. A makefile is provided to automate the build
- process. Change directory to the \verb|ga/inundation/pymetis/| directory and
- ensure that the subdirectory \verb|metis-4.0| exists and contains an
+ Unlike the rest of ANUGA, Metis and its Python wrapper Pymetis are not built
+ by the \verb|compile_all.py| script. A makefile is provided to automate the
+ build process. Change directory to the \verb|ga/inundation/pymetis/| directory
+ and ensure that the subdirectory \verb|metis-4.0| exists and contains an
  unmodified Metis 4.0 source tree. Under most varieties of Linux, build the
  module by running \verb|make|. Under x86\_64 versions of Linux, build the

@@ lines 262-279 (lines 264-279 added) @@
  that the module works by running the supplied PyUnit test case with
  \verb|python test_metis.py|.
+ \subsection{Running the Job}
+ Communication between nodes running in parallel is performed by pypar, which
+ requires the following:
+ \begin{itemize}
+ \item Python 2.0 or later
+ \item Numeric Python (including RandomArray) matching the Python installation
+ \item Native MPI C library
+ \item Native C compiler
+ \end{itemize}
+ Jobs are started by running appropriate commands for the local MPI
+ installation. Due to variations in MPI environments, specific details
+ regarding MPI commands are beyond the scope of this document. It is likely
+ that parallel jobs will need to be scheduled through some kind of queuing
+ system. Sample job scripts are available for adaptation in Section
+ \ref{sec:codeSJ}. They should be easily adaptable to any queuing system
+ derived from PBS, such as TORQUE.
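
For readers following the rectangular-mesh advection example touched by the first hunk, the sketch below illustrates roughly how the coordinate-based partitioning is invoked. It is a minimal sketch only: the argument names, return values, and import paths of parallel_rectangle and Parallel_Domain are assumptions, not taken from this changeset; the verbatim example that follows line 148 of parallel.tex is the authoritative version.

    # Rough sketch only: argument names, return values and import locations
    # are assumed, not copied from the changeset.  See the verbatim example
    # in parallel.tex for the authoritative call.
    from parallel_meshes import parallel_rectangle        # assumed module
    from parallel_advection import Parallel_Domain        # assumed module

    N = M = 40    # illustrative global mesh dimensions

    # Coordinate-based decomposition of the structured rectangular mesh:
    # each processor receives its own sub-rectangle plus ghost information.
    points, vertices, boundary, full_send_dict, ghost_recv_dict = \
        parallel_rectangle(N, M, len1_g=1.0)

    # Build the local (per-processor) domain from the partition.
    domain = Parallel_Domain(points, vertices, boundary,
                             full_send_dict=full_send_dict,
                             ghost_recv_dict=ghost_recv_dict)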
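
Before scheduling a full parallel run, it can be worth confirming that pypar and the underlying MPI installation satisfy the requirements listed in the new "Running the Job" subsection. The following minimal check is a sketch, not part of the changeset; the script name and the launch command (something like mpirun -np 4 python pypar_check.py, depending on the local MPI environment and queuing system) are illustrative only.

    # Minimal pypar sanity check (illustrative sketch, not part of the changeset).
    # Launch it with whatever MPI command the local installation provides,
    # for example: mpirun -np 4 python pypar_check.py
    import pypar

    myid = pypar.rank()                  # rank of this process
    numprocs = pypar.size()              # total number of MPI processes
    node = pypar.get_processor_name()    # host this process is running on

    print 'Processor %d of %d is alive on node %s' % (myid, numprocs, node)

    pypar.finalize()                     # shut MPI down cleanly

If every processor prints its rank and host name, pypar, MPI, and the Numeric installation are consistent and the sample job scripts in Section \ref{sec:codeSJ} can be adapted to the local queuing system.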