\documentclass{manual}


\title{ANUGA requirements for the Application Programming Interface (API)}

\author{Ole Nielsen, Duncan Gray, Jane Sexton, Nick Bartzis}

% Please at least include a long-lived email address;
% the rest is at your discretion.
\authoraddress{Geoscience Australia \\
  Email: \email{ole.nielsen@ga.gov.au}
}

%Draft date
\date{\today}                   % update before release!
                                % Use an explicit date so that reformatting
                                % doesn't cause a new date to be used.  Setting
                                % the date to \today can be used during draft
                                % stages to make it easier to handle versions.

\release{1.0}                   % release version; this is used to define the
                                % \version macro

\makeindex                      % tell \index to actually write the .idx file
%\makemodindex                  % If this contains a lot of module sections.



\begin{document}
\maketitle



% This makes the contents more accessible from the front page of the HTML.
\ifhtml
\chapter*{Front Matter\label{front}}
\fi



\chapter{Introduction}

This document outlines the agreed requirements for the ANUGA API.


\chapter{Public API}
\section{General principles}

The ANUGA API must be simple to use.  Operations that are
conceptually simple should be easy to do.  An example would be setting
up a small test problem on a unit square without any geographic
orientation.  Complex operations should be manageable and should not
require the user to enter information that isn't strictly part of the
problem description.  For example, entering UTM coordinates (or
geographic coordinates) as read from a map should not require any
reference to a particular origin, nor should the same information have
to be entered more than once per scenario.


\section{Georeferencing}

Currently ANUGA is limited to UTM coordinates assumed to belong to one zone.
ANUGA shall throw an exception if this assumption is violated.
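
Such a check could, for example, look like the following sketch.  The
data layout (a list of (zone, easting, northing) tuples) and the use of
a plain exception are assumptions for illustration only, not the actual
ANUGA implementation.

\begin{verbatim}
# Illustrative sketch only: verify that all points share one UTM zone.
# The data layout (list of (zone, easting, northing) tuples) and the
# use of a plain Exception are assumptions for this example.
def check_single_zone(points_with_zone):
    zones = set([zone for zone, x, y in points_with_zone])
    if len(zones) > 1:
        raise Exception('Points span more than one UTM zone: %s'
                        % sorted(zones))

check_single_zone([(56, 308500.0, 6180000.0),
                   (56, 309200.0, 6181300.0)])   # passes silently
\end{verbatim}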

It must be possible in general to enter data points as one of the
following (illustrated in the sketch below):

\begin{itemize}
  \item A list of 2-tuples of coordinates, in which case the points are
    assumed to be in absolute UTM coordinates in an undefined zone
  \item An N by 2 Numeric array of coordinates. Points are assumed to
    be in absolute UTM coordinates in an undefined zone
  \item A geospatial dataset object that contains properly
    georeferenced points
\end{itemize}
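
For illustration, the three forms might look as follows.  The class
name Geospatial\_data is used as a stand-in for whatever geospatial
dataset object ANUGA provides, and the coordinates are made up.

\begin{verbatim}
from Numeric import array    # the API uses Numeric arrays

# 1. A list of 2-tuples: absolute UTM, undefined zone.
points_list = [(308500.0, 6180000.0), (309200.0, 6181300.0)]

# 2. An N by 2 Numeric array: absolute UTM, undefined zone.
points_array = array(points_list)

# 3. A properly georeferenced geospatial dataset object.
#    'Geospatial_data' is a placeholder name for this sketch.
#points_geo = Geospatial_data(points_list, zone=56)
\end{verbatim}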


General requirements:
\begin{itemize}
  \item The undefined zone must be a symbol or number that does
    not exist geographically.

  \item Any component that needs coordinates to be relative to a
  particular point shall be responsible for deriving that
  origin. Examples are meshes, where absolute coordinates may cause
  numerical problems. An example of a derived origin would be the
  south-westernmost point on the boundary (see the sketch after this
  list).

  \item Coordinates must be passed around as either geospatial objects
  or absolute UTM unless there is a compelling reason to use relative
  coordinates on grounds of efficiency or numerical stability.

  \item Passing Geo\_reference objects as a keyword argument should not
  be done.  Where it is currently done, it doesn't have to be changed
  as a matter of priority, but don't document this 'feature' in the
  user manual.  If you are refactoring this API, please remove
  geo\_reference as a keyword argument.

\end{itemize}
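
As an illustration of a derived origin, a component could take the
south-west corner of the bounding box of its points and shift
coordinates relative to it.  This is a sketch only, not the actual
Geo\_reference machinery.

\begin{verbatim}
def derive_origin(points):
    # points: list of (easting, northing) tuples in absolute UTM.
    # The derived origin is the south-west corner of the bounding box.
    eastings  = [x for x, y in points]
    northings = [y for x, y in points]
    return min(eastings), min(northings)

def make_relative(points, origin):
    # Shift absolute coordinates so they are relative to the origin.
    x0, y0 = origin
    return [(x - x0, y - y0) for x, y in points]
\end{verbatim}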



\chapter{Internal API}

\section{Damage Model - Requirements}
Generally, the damage model determines a percentage damage to a set of
structures and their contents.
The dollar loss for each structure and its contents due to this damage
is then determined.

The damage model used in ANUGA is expected to change. The requirements
for this damage model are based on three algorithms.

\begin{itemize}
  \item Probability of structural collapse.  Given the distance from
  the coast and the maximum inundation height above ground floor, the
  percentage probability of collapse is calculated.  The distance from
  the coast is 'binned' into one of 4 distance ranges.  The height is
  binned into one of 5 ranges.  The percentage result is therefore
  represented by a 4 by 5 array.
  \item Structural damage curve.  Given the type of building (X or Y)
  and the maximum inundation height above ground floor, the
  percentage damage loss to the structure is determined.  The curve is
  based on a set of [height, percentage damage] points.
  \item Content damage curve.  Given the maximum inundation height
  above ground floor, the percentage damage loss to the contents of
  each structure is determined.  The curve is based on a set of
  [height, percentage damage] points.
\end{itemize}
Interpolate between points when using the damage curves.
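
A minimal sketch of linear interpolation on such a curve follows; the
curve points used here are invented for illustration and are not the
actual damage curves.

\begin{verbatim}
def interpolate_damage(curve, height):
    # curve: list of (height, percentage damage) points sorted by
    # height.  Heights outside the curve are clamped to the end points.
    if height <= curve[0][0]:
        return curve[0][1]
    if height >= curve[-1][0]:
        return curve[-1][1]
    for (h0, d0), (h1, d1) in zip(curve[:-1], curve[1:]):
        if h0 <= height <= h1:
            return d0 + (d1 - d0)*(height - h0)/(h1 - h0)

# Invented example curve: (height above ground floor, % damage)
curve = [(0.0, 0.0), (0.5, 20.0), (1.0, 45.0), (2.0, 80.0)]
damage = interpolate_damage(curve, 0.75)    # -> 32.5
\end{verbatim}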


The national building exposure database (NBED) gives the following relevant
information for each structure:
\begin{itemize}
  \item Location, as latitude and longitude.
  \item The total cost of the structure.
  \item The total cost of the structure's contents.
  \item The building type (have to check how this is given).
\end{itemize}
This information is given in a csv file. Each row is a structure.
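
Reading the rows could be as simple as the following sketch; the column
names (LATITUDE, LONGITUDE, STRUCTURE\_COST, CONTENT\_COST,
BUILDING\_TYPE) are hypothetical, since the actual NBED header names are
not specified here.

\begin{verbatim}
import csv

def read_nbed(filename):
    # Each row of the NBED csv file describes one structure.
    # The column names below are placeholders for this sketch.
    structures = []
    for row in csv.DictReader(open(filename)):
        structures.append({'latitude':       float(row['LATITUDE']),
                           'longitude':      float(row['LONGITUDE']),
                           'structure_cost': float(row['STRUCTURE_COST']),
                           'content_cost':   float(row['CONTENT_COST']),
                           'building_type':  row['BUILDING_TYPE']})
    return structures
\end{verbatim}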

So how will these dry algorithms (look-up tables) be used?
Given NBED, an sww file and an assumed ground floor height, the
percentage structure and content loss and the probability of collapse
can be determined.

The probability of collapse will be used to mark each structure as
either collapsed or not collapsed.  There will not be any structures
with, say, a 20\% probability of collapse when calculating the damage
loss.

This is how we will get either collapsed or not collapsed from a
probability of collapse (sketched in the example after this list):
\begin{itemize}
  \item Count the number of houses (sample size) with each unique
  probability of collapse (excluding 0).
  \item probability of collapse * sample size = number of collapsed
  buildings (NCB).
  \item Round the number of collapsed buildings.
  \item Randomly 'collapse' NCB buildings from the sample structures.
  This is done by setting the \% damage loss to structures and contents
  to 100.  This overrides losses calculated from the curves.
\end{itemize}
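
A sketch of this procedure, assuming each structure record carries a
collapse probability expressed as a fraction (0.2 for 20\%); the field
names are placeholders.

\begin{verbatim}
import random

def apply_collapse(structures):
    # structures: list of dicts; 'collapse_probability' is a fraction
    # (0.2 means 20%).  Field names are placeholders for this sketch.
    groups = {}
    for s in structures:
        p = s['collapse_probability']
        if p > 0:
            groups.setdefault(p, []).append(s)

    for p, sample in groups.items():
        # probability of collapse * sample size, rounded, gives the
        # number of collapsed buildings (NCB) in this group.
        ncb = int(round(p * len(sample)))
        # Randomly 'collapse' NCB buildings: 100% structure and content
        # damage, overriding the curve-based losses.
        for s in random.sample(sample, ncb):
            s['structure_damage_percent'] = 100.0
            s['content_damage_percent'] = 100.0
\end{verbatim}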

What is the final output?
Add these columns to the NBED file:
\begin{itemize}
  \item \% content damage
  \item \% structure damage
  \item damage cost to content
  \item damage cost to structure
\end{itemize}

How will the ground floor height be given?
Have it passed as a keyword argument, defaulting to 0.3.
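
A sketch of the intended calling convention; the function name
inundation\_damage and the other arguments are placeholders, and only
the ground\_floor\_height keyword with its 0.3 default reflects the
requirement above.

\begin{verbatim}
# Placeholder function and argument names; only the keyword argument
# and its default of 0.3 are taken from the requirement above.
def inundation_damage(sww_file, nbed_file, ground_floor_height=0.3):
    """Determine damage given an sww file and the NBED csv file.

    ground_floor_height is the assumed height of the ground floor
    above the ground, defaulting to 0.3.
    """
    pass    # details omitted in this sketch
\end{verbatim}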

\section{Damage Model - Design}
It has to be modular.  In the future the three algorithms will be
combined to give a cumulative probability distribution, so this part
doesn't have to be designed to be too flexible.  This change will
occur before the shape of the damage curves changes, Ken believes.

Have one file that holds general damage functions/classes, such as
interrogating NBED csv files and calculating maximum inundation above
ground height.

\chapter{Efficiency and optimisation}


\section{Parallelisation of pyvolution}


(From ANU meeting 27/7/5)

Remaining loose ends and ideas are:
\begin{itemize}
  \item fluxes in ghost cells should not affect timestep computation.
  \item a function for re-assembling model output should be made available
  \item scoping of methodologies for automatic domain decomposition
  \item implementation of automatic domain decomposition (using C
    extensions for maximal sequential performance in order to minimise
    performance penalties due to Amdahl's law)
  \item in-depth testing and tuning of parallel performance. This may
    require adding wrappers for non-blocking MPI communication to pypar.
  \item ability to read in precomputed sub-domains. Perhaps using caching.py

\end{itemize}


\end{document}