Changeset 9265


Timestamp:
Jul 14, 2014, 4:48:21 PM
Author:
steve
Message:

Moving checkpoint.py to shallow_water

Location:
trunk/anuga_core
Files:
4 edited
1 moved

  • trunk/anuga_core/source/anuga/config.py

    r9148 r9265

    @@ -34,5 +34,5 @@
     # Major revision number for use with create_distribution
     # and update_anuga_user_guide
    -major_revision = '1.3.0-beta'
    +major_revision = '1.3.1'

     ################################################################################
  • trunk/anuga_core/user_manual/source/anuga_user_manual.tex

    r9090 r9265  
    @@ -29,5 +29,5 @@
     \usepackage{graphicx}
     \usepackage{hyperref}
    -\usepackage[english]{babel}
    +\usepackage[australian]{babel}
     \usepackage{datetime}
     \usepackage[hang,small,bf]{caption}
     
    @@ -246,5 +246,6 @@
       \item All spatial coordinates are assumed to be UTM (meters). As such,
       \anuga is unsuitable for modelling flows in areas larger than one UTM zone
    -  (6 degrees wide).
    +  (6 degrees wide), though we have run over 2 zones by projecting onto one zone and
    +  living with the distortion.
       \item Fluid is assumed to be inviscid -- i.e.\ no kinematic viscosity included.
       \item The finite volume is a very robust and flexible numerical technique,
     
    @@ -275,5 +276,6 @@

     What follows is a discussion of the structure and operation of a
    -script called \file{runup.py}.
    +script called \file{runup.py} (which is available in the \file{demos} directory
    +of \file{anuga_core}).

     This example carries out the solution of the shallow-water wave
     
    @@ -331,5 +333,6 @@

     \label{ref:runup_py_code}
    -\verbatiminput{demos/runup.py}
    +\verbatiminput{../../demos/runup.py}
    +

     \subsection{Establishing the Domain}\index{domain, establishing}
    336339
    @@ -337,3 +340,10 @@
    -The first task is to set up the triangular mesh to be used for the
    +The very first thing to do is import the various modules, of which the
    +\anuga{} module is the most important.
    +%
    +\begin{verbatim}
    +import anuga
    +\end{verbatim}
    +%
    +Then we need to set up the triangular mesh to be used for the
     scenario. This is carried out through the statement:

     
    @@ -341,5 +351,5 @@
     domain = anuga.rectangular_cross_domain(10, 5, len1=10.0, len2=5.0)
     \end{verbatim}
    -
    +%
     The above assignment sets up a $10 \times
     5$ rectangular mesh, triangulated in a regular way with boundary tags \code{'left'}, \code{'right'},
     
    @@ -350,5 +360,4 @@
     domain = anuga.Domain(points, vertices, boundary)
     \end{verbatim}
    -%
     where
     \begin{itemize}
     
    @@ -909,5 +918,5 @@
     Here is the code for \file{runcairns.py}:

    -\verbatiminput{demos/cairns/runcairns.py}
    +\verbatiminput{../../demos/cairns/runcairns.py}

     In discussing the details of this example, we follow the outline
     
    @@ -970,5 +979,5 @@
     \file{project.py}:

    -\verbatiminput{demos/cairns/project.py}
    +\verbatiminput{../../demos/cairns/project.py}

     Figure \ref{fig:cairns3d} illustrates the landscape of the region
     
    @@ -1326,4 +1335,102 @@
     \end{figure}

    +
    +\section{A Parallel Simulation}
    +
    +The previous examples were run using just one processor. \anuga also has the option of running in parallel using \mpi.
    +
    +Such jobs are run using the command
    +
    +\begin{verbatim}
    +mpirun -np n python runParallelCairns.py
    +\end{verbatim}
    +where \code{n} is the total number of processors being used for the parallel run.
    +
    +Essentially we can expect speedups comparable to the number of cores available; this is measured via scalability. We can expect a scalability of around 70\% when using up to as many processors as leaves each local partitioned domain with around 2000 triangles.
    +
    +\subsection{The Code}
    +
    +Here is the code for \file{runParallelCairns.py}:
    +
    +\verbatiminput{../../demos/cairns/runParallelCairns.py}
    +
    +\subsection{Structure of the Code}
    +
    +The code is very similar to the sequential code. The same procedures are used to set up the domain, set up the initial and boundary conditions, and evolve.
    +
    +
    +We first import a few procedures needed for the parallel code.
    +\begin{verbatim}
    +from anuga import distribute, myid, numprocs, finalize, barrier
    +\end{verbatim}
    +
    +\code{myid} returns the id of the processor currently running the code.
    +
    +\code{numprocs} returns the total number of processors involved in this parallel job (the \code{n} in the original \code{mpirun} command).
    +
    +\code{finalize} is called at the end of a script to close down the parallel job.
    +
    +\code{distribute} is used to partition and set up the parallel domains.
    +
    +\code{barrier} causes each processor to wait until all the other processors have caught up to this point.
    +
    +
    +
    +
    +The creation of the \code{domain} is only done on processor 0. Hence we have the structure:
    +
    +\begin{verbatim}
    +#-------------------------------------
    +# Do the domain creation on processor 0
    +#-------------------------------------
    +if myid == 0:
    +        ....
    +        domain = ...
    +
    +else:
    +        domain = None
    +\end{verbatim}
    +
    +We only need to create the original domain on one processor; otherwise we would have multiple copies of the full domain (which would easily eat up memory).
    +
    +Once we have our \code{domain} set up, we partition it and send the partitions to each of the other processors via the command
    +
    +\begin{verbatim}
    +#-------------------------------------
    +# Now produce parallel domain
    +#-------------------------------------
    +domain = distribute(domain)
    +\end{verbatim}
    +
    +This takes the \code{domain} on processor 0 and distributes that domain to each of the processors (it overwrites the full domain on processor 0). From this point in the code there is a different domain on each processor, with each domain communicating with the others to ensure the transfer of information needed to allow flow over the combined domains.
    +
    +It is important to apply the boundary conditions after the \code{distribute} call.
    +
    +
    +
    +As the partitioned domains evolve, they will store their data in individual sww files named \code{domain_name_Pn_m.sww}, where \code{n} is the total number of processors being used and \code{m} is the specific processor id.
    +
    +We have a procedure to merge these individual sww files via the command
    +
    +\begin{verbatim}
    +domain.sww_merge()
    +\end{verbatim}
    +
    +And we close down the parallel job by issuing the command
    +
    +\begin{verbatim}
    +finalize()
    +\end{verbatim}
    +
    +
    +
    +\section{Validation Tests}
    +
    +We have an extensive suite of validation tests, ranging from simple analytical tests to large scale case studies. These can be found in the \file{validation_tests} directory in the \file{anuga_core} directory.
    +
    +All these tests can be run in parallel or in sequential mode, and they also provide a mechanism to produce a report detailing the results of the tests (\code{pdflatex} needs to be available to produce the reports).
    +
    +
    +
     %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

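Taken together, the verbatim snippets in the parallel section added above correspond to a script of roughly the following shape. This is a minimal sketch assembled from the calls this changeset documents (distribute, myid, finalize, domain.sww_merge()); the mesh setup is borrowed from the runup example earlier in the manual, and calls such as set_name, Reflective_boundary, set_boundary, evolve and print_timestepping_statistics are assumed standard anuga API, not anything introduced by this commit.

    import anuga
    from anuga import distribute, myid, numprocs, finalize, barrier

    if myid == 0:
        # Build the full domain on processor 0 only, so that a single
        # copy of the full mesh exists across the parallel job.
        domain = anuga.rectangular_cross_domain(10, 5, len1=10.0, len2=5.0)
        domain.set_name('runup_parallel')  # assumed call; sets the sww base name
    else:
        domain = None

    # Partition the domain held on processor 0 and send one piece to each
    # processor; processor 0's full domain is overwritten by its own piece.
    domain = distribute(domain)

    # Boundary conditions must be applied after the distribute() call.
    # Reflective_boundary and the tag names are assumed from standard anuga usage.
    Br = anuga.Reflective_boundary(domain)
    domain.set_boundary({'left': Br, 'right': Br, 'top': Br, 'bottom': Br})

    # Evolve as in the sequential case. Each processor stores its own sww
    # file, e.g. runup_parallel_P4_0.sww when run with mpirun -np 4.
    for t in domain.evolve(yieldstep=0.5, finaltime=10.0):
        if myid == 0:
            domain.print_timestepping_statistics()

    # Merge the per-processor sww files, then shut down the parallel job.
    domain.sww_merge()
    finalize()

Run with, for example, mpirun -np 4 python runup_parallel.py; guarding the timestepping report with myid == 0 keeps the output to a single processor.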
     
    @@ -1396,7 +1503,7 @@
     use in Python code. For example, suppose we wish to specify that the
     function \function{create\_mesh\_from\_regions} is in a module called
    -\module{mesh\_interface} in a subfolder of \module{inundation} called
    +\module{mesh\_interface} in a subfolder of \module{anuga} called
     \code{pmesh}. In Linux or Unix syntax, the pathname of the file
    -containing the function, relative to \file{inundation}, would be:
    +containing the function, relative to \file{anuga}, would be:

     \begin{verbatim}
     
    @@ -4217,5 +4324,5 @@
     %\begin{center}
     %\pgfplotstableset{% could be used in preamble
    -%empty cells with={--}, % replace empty cells with ’--’
    +%empty cells with={--}, % replace empty cells with '--'
     %every head row/.style={%
     %   before row={%
  • trunk/anuga_core/user_manual/source/manual_example.tex

    r9090 r9265

    @@ -7,4 +7,14 @@

     \documentclass{manual}
    +
    +
    +\usepackage{graphicx}
    +\usepackage{hyperref}
    +%\usepackage[english]{babel}
    +\usepackage{datetime}
    +\usepackage[hang,small,bf]{caption}
    +\usepackage{amsbsy,enumerate}
    +
    +\usepackage{amsmath, amssymb, amsthm}

     \title{Big Python Manual}
  • trunk/anuga_core/validation_tests/case_studies/towradgi/Compare_results_with_fieldObs.py

    r9238 r9265

    @@ -64,22 +64,25 @@
                           CellSize=CellSize,EPSG_CODE=32756,output_dir=tif_outdir)
         print 'Made tifs'
    -    # Plot depth raster with discrepancy between model and data
    -    depthFile=tif_outdir+'/Towradgi_historic_flood_depth_max_max.tif'
    -    myDepth=scipy.misc.imread(depthFile)
    -    X=scipy.arange(p.xllcorner, p.xllcorner+myDepth.shape[1]*CellSize, CellSize)
    -    Y=scipy.arange(p.yllcorner, p.yllcorner+myDepth.shape[0]*CellSize, CellSize)
    -    X,Y=scipy.meshgrid(X,Y)
    -    pyplot.clf()
    -    pyplot.figure(figsize=(12,6))
    -    pyplot.plot([X.min(),X.max()],[Y.min(),Y.max()],' ')
    -    pyplot.imshow(scipy.flipud(myDepth),extent=[X.min(),X.max(),Y.min(),Y.max()],origin='lower',cmap=pyplot.get_cmap('Greys'))
    -    pyplot.gca().set_aspect('equal')
    -    pyplot.colorbar(orientation='horizontal').set_label('Peak Depth in model (m)')
    -    er1=floodLevels[:,3]-modelled_level
    -    pyplot.scatter(floodLevels[:,0], floodLevels[:,1], c=er1,s=20,cmap=pyplot.get_cmap('spectral'))
    -    pyplot.colorbar().set_label(label='Field observation - Modelled Peak Stage (m)')
    -    pyplot.xlim([p.x.min()+p.xllcorner,p.x.max()+p.xllcorner])
    -    pyplot.ylim([p.y.min()+p.yllcorner,p.y.max()+p.yllcorner])
    -    pyplot.savefig('Spatial_Depth_and_Error.png')
     except:
         print 'Cannot make GIS plot -- perhaps GDAL etc are not installed?'
    +
    +
    +# Plot depth raster with discrepancy between model and data
    +depthFile=tif_outdir+'/Towradgi_historic_flood_depth_max.tif'
    +myDepth=scipy.misc.imread(depthFile)
    +X=scipy.arange(p.xllcorner, p.xllcorner+myDepth.shape[1]*CellSize, CellSize)
    +Y=scipy.arange(p.yllcorner, p.yllcorner+myDepth.shape[0]*CellSize, CellSize)
    +X,Y=scipy.meshgrid(X,Y)
    +pyplot.clf()
    +pyplot.figure(figsize=(12,6))
    +pyplot.plot([X.min(),X.max()],[Y.min(),Y.max()],' ')
    +pyplot.imshow(scipy.flipud(myDepth),extent=[X.min(),X.max(),Y.min(),Y.max()],origin='lower',cmap=pyplot.get_cmap('Greys'))
    +pyplot.gca().set_aspect('equal')
    +pyplot.colorbar(orientation='horizontal').set_label('Peak Depth in model (m)')
    +er1=floodLevels[:,3]-modelled_level
    +pyplot.scatter(floodLevels[:,0], floodLevels[:,1], c=er1,s=20,cmap=pyplot.get_cmap('spectral'))
    +pyplot.colorbar().set_label(label='Field observation - Modelled Peak Stage (m)')
    +pyplot.xlim([p.x.min()+p.xllcorner,p.x.max()+p.xllcorner])
    +pyplot.ylim([p.y.min()+p.yllcorner,p.y.max()+p.yllcorner])
    +pyplot.savefig('Spatial_Depth_and_Error.png')