source: anuga_core/documentation/requirements/anuga_API_requirements.tex @ 7086

\documentclass{manual}


\title{ANUGA requirements for the Application Programmers Interface (API)}

\author{Ole Nielsen, Duncan Gray, Jane Sexton, Nick Bartzis}

% Please at least include a long-lived email address;
% the rest is at your discretion.
\authoraddress{Geoscience Australia \\
  Email: \email{ole.nielsen@ga.gov.au}
}

%Draft date
\date{\today}                   % update before release!
                                % Use an explicit date so that reformatting
                                % doesn't cause a new date to be used.  Setting
                                % the date to \today can be used during draft
                                % stages to make it easier to handle versions.

\release{1.0}                   % release version; this is used to define the
                                % \version macro

\makeindex                      % tell \index to actually write the .idx file
%\makemodindex                  % If this contains a lot of module sections.


\begin{document}
\maketitle



% This makes the contents more accessible from the front page of the HTML.
\ifhtml
\chapter*{Front Matter\label{front}}
\fi



\chapter{Introduction}

This document outlines the agreed requirements for the ANUGA API.


\chapter{Public API}
\section{General principles}

The ANUGA API must be simple to use.  Operations that are
conceptually simple should be easy to do.  An example would be setting
up a small test problem on a unit square without any geographic
orientation, as sketched below.  Complex operations should be manageable
and not require the user to enter information that isn't strictly part
of the problem description.  For example, entering UTM coordinates (or
geographic coordinates) as read from a map should not require any
reference to a particular origin.  Nor should the same information have
to be entered more than once per scenario.

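A minimal sketch of such a unit-square test problem is shown below.
The names used follow the present-day ANUGA Python interface and are
illustrative only; they are not themselves a requirement on the API.

\begin{verbatim}
# Sketch only: a small test problem on the unit square with no
# geographic orientation (present-day ANUGA names, for illustration).
import anuga

domain = anuga.rectangular_cross_domain(10, 10)   # unit square mesh
domain.set_quantity('elevation', 0.0)             # flat bed
domain.set_quantity('stage', 0.1)                 # initial water level

Br = anuga.Reflective_boundary(domain)
domain.set_boundary({'left': Br, 'right': Br, 'top': Br, 'bottom': Br})

for t in domain.evolve(yieldstep=0.1, finaltime=1.0):
    domain.print_timestepping_statistics()
\end{verbatim}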


\section{Georeferencing}

Currently ANUGA is limited to UTM coordinates assumed to belong to one zone.
ANUGA shall throw an exception if this assumption is violated.

It must be possible in general to enter data points as

\begin{itemize}
  \item A list of 2-tuples of coordinates, in which case the points are
    assumed to be in absolute UTM coordinates in an undefined zone
  \item An N by 2 Numeric array of coordinates. Points are assumed to
    be in absolute UTM coordinates in an undefined zone
  \item A geospatial dataset object that contains properly
    georeferenced points
\end{itemize}
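A minimal sketch of the three accepted forms.  The module paths and the
\texttt{Geospatial\_data} and \texttt{Geo\_reference} class names are those
of the current anuga package and are indicative only; \texttt{numpy} stands
in for Numeric.

\begin{verbatim}
# Sketch only: the three ways of supplying data points.
import numpy

# 1. A list of 2-tuples (absolute UTM, undefined zone)
points_list = [(308500.0, 6189000.0), (308700.0, 6189250.0)]

# 2. An N by 2 array (absolute UTM, undefined zone)
points_array = numpy.array(points_list)

# 3. A geospatial dataset object carrying a proper georeference
from anuga.coordinate_transforms.geo_reference import Geo_reference
from anuga.geospatial_data.geospatial_data import Geospatial_data

points_geo = Geospatial_data(points_array,
                             geo_reference=Geo_reference(zone=56))
\end{verbatim}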


General requirements:
\begin{itemize}
  \item The undefined zone must be a symbol or number that does
    not exist geographically.

  \item Any component that needs coordinates to be relative to a
  particular point shall be responsible for deriving that
  origin. Examples are meshes where absolute coordinates may cause
  numerical problems. An example of a derived origin would be using
  the South-West most point on the boundary (see the sketch after
  this list).

  \item Coordinates must be passed around as either geospatial objects
  or absolute UTM unless there is a compelling reason to use relative
  coordinates on grounds of efficiency or numerical stability.

  \item A Geo\_reference should not be passed as a keyword argument.
  Where it is currently done, it doesn't have to be changed as a
  matter of priority, but don't document this 'feature' in the user
  manual.  If you are refactoring this API, then please remove
  geo\_reference as a keyword.

\end{itemize}
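A minimal sketch of deriving such an origin from absolute UTM points,
assuming an N by 2 NumPy array (the helper name is hypothetical):

\begin{verbatim}
# Sketch only: derive a local origin at the South-West corner of the
# points' bounding box and shift to coordinates relative to it.
import numpy

def derive_origin(points):
    points = numpy.asarray(points, dtype=float)
    origin = points.min(axis=0)        # South-West most (x, y)
    return origin, points - origin
\end{verbatim}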



\chapter{Internal API}


\section{Pmesh - ideas}
If we were to automatically define regions of variable resolution, it
would be good to automatically tag regions defined by an alpha
shape.  For this to be useful, pmesh would also have to support adding
this to mesh files.  A typical story would be: load a file of points
representing the 5 and -5 contour lines, use an alpha shape to add
segments and define this region, then add an overall outline and
generate the mesh.

\section{Point files - ideas}

Currently title information must be given for points files.  The API
could be changed so that titles can be supplied as an input.  These
titles would then be used if the file does not have titles.  This
should reduce the amount of file pre-processing.

Remove the ability of points files to have geo-referencing
information.  The values would then be absolute.  Cons: zone information
is lost.  Pros: text files will be in csv format.
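For illustration, such a file might look like the following (the column
titles are hypothetical; the point is a title row followed by absolute
coordinate values):

\begin{verbatim}
x,y,elevation
308500.0,6189000.0,12.3
308700.0,6189250.0,11.8
308900.0,6189500.0,10.2
\end{verbatim}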


\section{Damage Model - Requirements}
Generally, the damage model determines a percentage damage to a set of
structures and their contents.
The dollar loss for each structure and its contents due to this damage
is then determined.

The damage model used in ANUGA is expected to change. The requirements
for this damage model are based on three algorithms.


\begin{itemize}
  \item Probability of structural collapse.  Given the distance from
  the coast and the maximum inundation height above ground floor, the
  percentage probability of collapse is calculated.  The distance from
  the coast is 'binned' into one of 4 distance ranges.  The height is
  binned into one of 5 ranges.  The percentage result is therefore
  represented by a 4 by 5 array.
  \item Structural damage curve.  Given the type of building (X or Y)
  and the maximum inundation height above ground floor, the
  percentage damage loss to the structure is determined.  The curve is
  based on a set of [height, percentage damage] points.
  \item Content damage curve.  Given the maximum inundation height above
  ground floor, the percentage damage loss to the contents of each
  structure is determined.  The curve is based on a set of
  [height, percentage damage] points.
\end{itemize}
Interpolate between points when using the damage curves; a sketch of
both kinds of look-up follows.
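A minimal sketch of the two kinds of look-up.  The bin edges, table
values and curve points below are hypothetical placeholders, not the
real tables:

\begin{verbatim}
# Sketch only: hypothetical bin edges, table and curve points.
import numpy

# Probability of collapse: 4 distance ranges x 5 height ranges
distance_edges = [100.0, 250.0, 500.0]       # 3 edges -> 4 distance bins
height_edges = [0.3, 0.6, 1.0, 2.0]          # 4 edges -> 5 height bins
collapse_table = numpy.zeros((4, 5))         # placeholder 4 by 5 percentages

def probability_of_collapse(distance, height):
    i = numpy.searchsorted(distance_edges, distance)
    j = numpy.searchsorted(height_edges, height)
    return collapse_table[i, j]

# Damage curves: interpolate between [height, percentage damage] points
curve_heights = [0.0, 0.3, 1.0, 2.0, 3.0]    # hypothetical heights (m)
curve_damage = [0.0, 10.0, 40.0, 70.0, 95.0] # hypothetical percentages

def structural_damage(height):
    return numpy.interp(height, curve_heights, curve_damage)
\end{verbatim}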


The national building exposure database (NBED) gives the following
relevant information for each structure:
\begin{itemize}
  \item Location: latitude and longitude.
  \item The total cost of the structure.
  \item The total cost of the structure's contents.
  \item The building type (have to check how this is given).
\end{itemize}
This information is given in a csv file. Each row is a structure.
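A minimal sketch of reading such a file, using entirely hypothetical
column names:

\begin{verbatim}
# Sketch only: the NBED column names below are hypothetical.
import csv

with open('nbed.csv') as f:
    for row in csv.DictReader(f):
        latitude = float(row['LATITUDE'])
        longitude = float(row['LONGITUDE'])
        structure_cost = float(row['STRUCTURE_COST'])
        contents_cost = float(row['CONTENTS_COST'])
        building_type = row['BUILDING_TYPE']
\end{verbatim}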

So how will these dry algorithms (look-up tables) be used?
Given NBED, an sww file and an assumed ground floor height, the percentage
structure and content loss and the probability of collapse can be determined.

The probability of collapse will be used in such a way that each
structure is either collapsed or not collapsed.  There will not be
any 20\% probability of collapse structures when calculating the damage
loss.

This is how we will get either collapsed or not collapsed from a
probability of collapse (see the sketch after this list):
\begin{itemize}
  \item Count the number of houses (sample size) with each unique
  probability of collapse (excluding 0).
  \item probability of collapse * sample size = number of collapsed
  buildings (NCB).
  \item Round the number of collapsed buildings.
  \item Randomly 'collapse' NCB buildings from the sample structures.
  This is done by setting the \% damage loss to structures and content
  to 100.  This overrides losses calculated from the curves.
\end{itemize}
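A minimal sketch of that procedure, assuming a list of hypothetical
structure records, each a dictionary with a collapse probability in
[0, 1] and percentage damage fields:

\begin{verbatim}
# Sketch only: the record layout ('collapse_probability',
# 'structure_damage_percent', 'content_damage_percent') is hypothetical.
import random
from collections import defaultdict

def apply_collapse(structures):
    # Group structures by their non-zero probability of collapse
    groups = defaultdict(list)
    for s in structures:
        p = s['collapse_probability']
        if p > 0:
            groups[p].append(s)

    for p, sample in groups.items():
        ncb = int(round(p * len(sample)))      # number of collapsed buildings
        for s in random.sample(sample, ncb):   # randomly 'collapse' NCB of them
            s['structure_damage_percent'] = 100.0   # overrides curve results
            s['content_damage_percent'] = 100.0
\end{verbatim}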

What is the final output?
Add the following columns to the NBED file:
\begin{itemize}
  \item \% content damage
  \item \% structure damage
  \item damage cost to content
  \item damage cost to structure
  \item inundation above ground height
\end{itemize}

How will the ground floor height be given?
Have it passed as a keyword argument, defaulting to 0.3.
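A sketch of how such an entry point might look.  The function and
argument names are hypothetical; only the keyword default of 0.3 is
taken from the requirement above:

\begin{verbatim}
# Sketch only: hypothetical entry point for the damage calculation.
def calculate_damage(nbed_filename, sww_filename,
                     ground_floor_height=0.3):
    """Append the damage columns listed above to the NBED csv file."""
    ...
\end{verbatim}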

\section{Damage Model - Design}
It has to be modular.  In the future the three algorithms will be
combined to give a cumulative probability distribution, so this part
doesn't have to be designed to be too flexible.  This change will
occur before the shape of the damage curves changes, Ken believes.

Have one file that has general damage functions/classes, such as
interrogating NBED csv files and calculating maximum inundation above
ground height.

\chapter{Efficiency and optimisation}


\section{Parallelisation of pyvolution}


(From ANU meeting 27/7/2005)

Remaining loose ends and ideas are
\begin{itemize}
  \item fluxes in ghost cells should not affect timestep computation
    (see the sketch after this list)
  \item a function for re-assembling model output should be made available
  \item scoping of methodologies for automatic domain decomposition
  \item implementation of automatic domain decomposition (using C
    extensions for maximal sequential performance in order to minimise
    performance penalties due to Amdahl's law)
  \item in-depth testing and tuning of parallel performance. This may require
    adding wrappers for non-blocking MPI communication to pypar if needed.
  \item ability to read in precomputed sub-domains. Perhaps using caching.py

\end{itemize}
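A minimal sketch of the first point, assuming per-triangle timestep
limits in a NumPy array and a boolean array marking ghost triangles;
the global reduction across processors (e.g. via pypar) is only
indicated in a comment:

\begin{verbatim}
# Sketch only: exclude ghost cells from the local timestep computation.
import numpy

def local_timestep(flux_timesteps, is_ghost):
    # flux_timesteps: per-triangle maximal timesteps from the flux computation
    # is_ghost: boolean array, True for ghost (halo) triangles
    owned = flux_timesteps[~numpy.asarray(is_ghost, dtype=bool)]
    return owned.min()

# The global timestep is then the minimum of the local values over all
# processors, obtained with an MPI reduction (e.g. through pypar).
\end{verbatim}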


\chapter{Old stuff from pyvolution/wiki}


\begin{itemize}
  \item Make origins instances of a georeference class
        (like class point from GPScape).
        It would contain multiple representations, projections and code
        for adding itself to e.g. NetCDF files.
  \item Tagged regions, edges etc should be done in terms of set theory,
        so that each tag maps e.g. to a list of indices.


  \item The information from a .tsh file shall be converted to a data
        structure that pyvolution understands.
  \item The vertex attribute information will be converted to field values
        and they shall be accessible using the attribute tag.
  \item Volumes will have a region tag. This shall be implemented using the
        principles of set theory.
  \item For a given region and a given field value/conserved quantity apply
        a supplied function with parameters (x, y and the old value
        of the field value/conserved quantity).
  \item Also need a flag specifying if the vertex or centroid values are
        being set.  (Rationale: this can be used to avoid or create
        discontinuities, depending on what is required.)
        Example calls:
\begin{verbatim}
domain.set_field_values('bed_elevation', f, location='vertices')

domain.set_conserved_quantities('stage', f, location='centroid')
\end{verbatim}
        where f has the form
\begin{verbatim}
def f(x, y, old_value):
    ...
    return z
\end{verbatim}
\end{itemize}
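For illustration, a concrete example of such a function f, here setting
a bed that slopes linearly in the x direction (the
\texttt{set\_field\_values} call itself is the proposed API above, not an
existing function):

\begin{verbatim}
# Sketch only: an example f for the proposed set_field_values call,
# raising the bed elevation by 5 cm per metre in the x direction.
def f(x, y, old_value):
    return old_value + 0.05*x
\end{verbatim}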


\end{document}