\documentclass{manual}

\title{ANUGA requirements for the Application Programmers Interface (API)}

\author{Ole Nielsen, Duncan Gray, Jane Sexton, Nick Bartzis}

% Please at least include a long-lived email address;
% the rest is at your discretion.
\authoraddress{Geoscience Australia \\
  Email: \email{ole.nielsen@ga.gov.au}
}

%Draft date
\date{\today}    % update before release!
                 % Use an explicit date so that reformatting
                 % doesn't cause a new date to be used.  Setting
                 % the date to \today can be used during draft
                 % stages to make it easier to handle versions.

\release{1.0}    % release version; this is used to define the
                 % \version macro

\makeindex       % tell \index to actually write the .idx file
%\makemodindex   % If this contains a lot of module sections.

\begin{document}
\maketitle

% This makes the contents more accessible from the front page of the HTML.
\ifhtml
\chapter*{Front Matter\label{front}}
\fi

\chapter{Introduction}

This document outlines the agreed requirements for the ANUGA API.

\chapter{Public API}

\section{General principles}

The ANUGA API must be simple to use. Operations that are conceptually
simple should be easy to do. An example would be setting up a small test
problem on a unit square without any geographic orientation.

Complex operations should be manageable and should not require the user
to enter information that is not strictly part of the problem
description. For example, entering UTM coordinates (or geographic
coordinates) as read from a map should not require any reference to a
particular origin. Nor should the same information have to be entered
more than once per scenario.

\section{Georeferencing}

Currently ANUGA is limited to UTM coordinates, which are assumed to
belong to a single zone. ANUGA shall throw an exception if this
assumption is violated.

It must be possible in general to enter data points as
\begin{itemize}
  \item A list of 2-tuples of coordinates, in which case the points are
    assumed to be in absolute UTM coordinates in an undefined zone.
  \item An N by 2 Numeric array of coordinates. Points are assumed to be
    in absolute UTM coordinates in an undefined zone.
  \item A geospatial dataset object that contains properly georeferenced
    points.
\end{itemize}

General requirements (the sketch below illustrates several of these):
\begin{itemize}
  \item The undefined zone must be a symbol or number that does not
    exist geographically.
  \item Any component that needs coordinates to be relative to a
    particular point shall be responsible for deriving that origin.
    Examples are meshes, where absolute coordinates may cause numerical
    problems. An example of a derived origin would be using the
    South-West most point on the boundary.
  \item Coordinates must be passed around as either geospatial objects
    or absolute UTM coordinates, unless there is a compelling reason to
    use relative coordinates on grounds of efficiency or numerical
    stability.
  \item Passing Geo\_reference objects as a keyword argument should not
    be done. Where it is currently done, it does not have to be changed
    as a matter of priority, but do not document this `feature' in the
    user manual. If you are refactoring this API, then please remove
    geo\_reference as a keyword argument.
\end{itemize}
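The three accepted input forms can be illustrated with a short sketch.
The classes \code{Geospatial\_data} and \code{Geo\_reference} are stubbed
out here purely for illustration; the actual classes, signatures and the
undefined-zone sentinel are defined by the implementation, and numpy
stands in for the Numeric package mentioned above.

\begin{verbatim}
from numpy import array   # numpy shown in place of Numeric

UNDEFINED_ZONE = -1       # example sentinel: UTM zone -1 does not
                          # exist geographically (zones run 1-60)

class Geo_reference:
    """Illustrative stub: carries the UTM zone with the points."""
    def __init__(self, zone=UNDEFINED_ZONE):
        self.zone = zone

class Geospatial_data:
    """Illustrative stub: a properly georeferenced dataset object."""
    def __init__(self, data_points, geo_reference=None):
        # Points without a georeference default to the undefined zone.
        self.data_points = data_points
        self.geo_reference = geo_reference or Geo_reference()

# Form 1: a list of 2-tuples -- absolute UTM, undefined zone.
points_list = [(308500.0, 6189000.0), (308700.0, 6189250.0)]

# Form 2: an N by 2 array -- absolute UTM, undefined zone.
points_array = array(points_list)

# Form 3: a geospatial dataset object carrying its own georeference.
dataset = Geospatial_data(points_array, Geo_reference(zone=56))
\end{verbatim}

A consumer such as a mesh generator would derive its own origin (for
instance the South-West most boundary point) and convert to relative
coordinates internally, rather than requiring the user to supply one.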
\chapter{Internal API}

\section{Damage Model - Requirements}

Generally, the damage model determines a percentage damage to a set of
structures and their contents. The dollar loss for each structure and
its contents due to this damage is then determined. The damage model
used in ANUGA is expected to change. The requirements for this damage
model are based on three algorithms:
\begin{itemize}
  \item Probability of structural collapse. Given the distance from the
    coast and the maximum inundation height above ground floor, the
    percentage probability of collapse is calculated. The distance from
    the coast is `binned' into one of 4 distance ranges. The height is
    binned into one of 5 ranges. The percentage result is therefore
    represented by a 4 by 5 array.
  \item Structural damage curve. Given the type of building (X or Y) and
    the maximum inundation height above ground floor, the percentage
    damage loss to the structure is determined. The curve is based on a
    set of [height, percentage damage] points.
  \item Content damage curve. Given the maximum inundation height above
    ground floor, the percentage damage loss to the contents of each
    structure is determined. The curve is based on a set of [height,
    percentage damage] points.
\end{itemize}
Interpolate between points when using the damage curves (see the
interpolation sketch at the end of this chapter).

The national building exposure database (NBED) gives the following
relevant information for each structure:
\begin{itemize}
  \item Location: latitude and longitude.
  \item The total cost of the structure.
  \item The total cost of the structure's contents.
  \item The building type (how this is given still has to be checked).
\end{itemize}
This information is given in a CSV file. Each row is a structure.

So how will these dry algorithms (look-up tables) be used? Given NBED,
an sww file and an assumed ground floor height, the percentage structure
and content loss and the probability of collapse can be determined.

The probability of collapse will be used in such a way that each
structure is either collapsed or not collapsed. There will not be any
20\% probability of collapse structures when calculating the damage
loss. This is how either collapsed or not collapsed is obtained from a
probability of collapse (a sketch is given at the end of this chapter):
\begin{itemize}
  \item Count the number of houses (sample size) with each unique
    probability of collapse (excluding 0).
  \item probability of collapse * sample size = number of collapsed
    buildings (NCB).
  \item Round the number of collapsed buildings.
  \item Randomly `collapse' NCB buildings from the sample structures.
    This is done by setting the \% damage loss to structures and
    contents to 100. This overrides losses calculated from the curves.
\end{itemize}

What is the final output? Add these columns to the NBED file:
\begin{itemize}
  \item \% content damage
  \item \% structure damage
  \item damage cost to content
  \item damage cost to structure
\end{itemize}

How will the ground floor height be given? Have it passed as a keyword
argument, defaulting to 0.3.

\section{Damage Model - Design}

It has to be modular. In the future the three algorithms will be
combined to give a cumulative probability distribution, so this part
does not have to be designed to be too flexible. This change will occur
before the shape of the damage curves changes, Ken believes.

Have one file that holds general damage functions/classes, such as
interrogating NBED CSV files and calculating the maximum inundation
height above ground floor.
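As a concrete illustration of the three look-up algorithms and the
interpolation requirement above, a minimal sketch follows. All bin
edges, probabilities and curve points are placeholders rather than
agreed values, and the function names are hypothetical.

\begin{verbatim}
# Placeholder tables -- illustration only, NOT the agreed values.
# Three edges give the 4 distance bins; four edges give the 5 height
# bins, matching the required 4 by 5 array.
DISTANCE_EDGES = [50.0, 100.0, 200.0]   # metres from the coast
HEIGHT_EDGES = [0.5, 1.0, 2.0, 3.0]     # metres above ground floor

COLLAPSE_PROBABILITY = [                # percent; 4 by 5 placeholder
    [ 5.0, 10.0, 20.0, 40.0, 60.0],
    [ 2.0,  5.0, 10.0, 25.0, 50.0],
    [ 1.0,  2.0,  5.0, 15.0, 30.0],
    [ 0.0,  1.0,  2.0,  5.0, 10.0],
]

# Placeholder [height, percentage damage] curve points.
STRUCTURE_CURVE_X = [(0.0, 0.0), (0.3, 20.0), (1.0, 55.0), (2.0, 100.0)]

def bin_index(value, edges):
    """Index of the bin that value falls into (0 .. len(edges))."""
    for i, edge in enumerate(edges):
        if value < edge:
            return i
    return len(edges)

def collapse_probability(distance, height):
    """Percentage probability of collapse from the 4 by 5 table."""
    i = bin_index(distance, DISTANCE_EDGES)
    j = bin_index(height, HEIGHT_EDGES)
    return COLLAPSE_PROBABILITY[i][j]

def interpolate_damage(height, curve):
    """Linear interpolation between [height, % damage] curve points,
    clamped at the ends of the curve."""
    if height <= curve[0][0]:
        return curve[0][1]
    if height >= curve[-1][0]:
        return curve[-1][1]
    for (h0, d0), (h1, d1) in zip(curve[:-1], curve[1:]):
        if h0 <= height <= h1:
            return d0 + (d1 - d0)*(height - h0)/(h1 - h0)
\end{verbatim}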
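The collapse-sampling procedure above can likewise be sketched. The
representation of a structure as a dictionary, the key names and the
treatment of the probability as a fraction between 0 and 1 are all
assumptions made for illustration.

\begin{verbatim}
import random

def apply_collapse(structures):
    """For each unique non-zero probability of collapse, compute
    NCB = round(probability * sample size) and randomly mark NCB
    structures as fully collapsed, overriding curve-based losses."""
    by_probability = {}
    for s in structures:
        p = s['collapse_probability']   # assumed fraction, 0..1
        if p > 0:
            by_probability.setdefault(p, []).append(s)

    for p, sample in by_probability.items():
        ncb = int(round(p * len(sample)))   # number of collapsed buildings
        for s in random.sample(sample, ncb):
            s['structure_damage_percent'] = 100.0
            s['content_damage_percent'] = 100.0
\end{verbatim}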
\chapter{Efficiency and optimisation}

\section{Parallelisation of pyvolution}

(From ANU meeting 27/7/5)

Remaining loose ends and ideas are:
\begin{itemize}
  \item Fluxes in ghost cells should not affect the timestep computation
    (see the sketch below).
  \item A function for re-assembling model output should be made
    available.
  \item Scoping of methodologies for automatic domain decomposition.
  \item Implementation of automatic domain decomposition (using C
    extensions for maximal sequential performance in order to minimise
    performance penalties due to Amdahl's law).
  \item In-depth testing and tuning of parallel performance. This may
    require adding wrappers for non-blocking MPI communication to pypar.
  \item Ability to read in precomputed sub-domains, perhaps using
    caching.py.
\end{itemize}
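The first loose end can be made concrete with a small sketch. The
function name, its arguments and the use of numpy arrays are assumptions
for illustration only; the cross-processor reduction is indicated as a
comment.

\begin{verbatim}
import numpy

def local_timestep(max_speeds, radii, is_ghost, cfl=1.0):
    """Local explicit timestep computed over owned (non-ghost)
    triangles only, so that fluxes computed in ghost cells cannot
    influence the timestep."""
    owned = ~is_ghost                                  # boolean mask
    speeds = numpy.maximum(max_speeds[owned], 1.0e-12) # avoid divide by zero
    return cfl * (radii[owned] / speeds).min()

# The global timestep is then the minimum of the local timesteps over
# all processors (a min-reduction, e.g. via MPI through pypar).
\end{verbatim}

\end{document}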