Changeset 3342


Timestamp: Jul 17, 2006, 11:13:48 PM
Author: sexton
Message: more updates
Location: production/onslow_2006/report
Files: 4 edited

  • production/onslow_2006/report/computational_setup.tex

    r3341 r3342  
    11To set up a model for the tsunami scenario, a study area is first
    2 determined. Preliminary investigations have indicated that point
    3 at which the deep water and shallow water models can exchange data is
     2determined. Preliminary investigations have indicated that the point
     3at which the output from MOST is passed to ANUGA is
    44sufficient at the 100m bathymetric contour line\footnote{
    55Preliminary investigations indicate that MOST and ANUGA compare
     
    1818approximately 10m elevation.
    1919
    20 To initiate the modelling, a triangular mesh is constructed to
    21 cover the study region. Each triangular cell is defined a cell area
    22 which is chosen to balance
    23 computational time and desired resolution in areas of interest,
    24 particularly in the interface between onshore and offshore.
    25 Figure \ref{fig:onslow_area} illustrates the data extent for the
    26 scenario, the study area and where further mesh refinement has been made.
    27 The choice
    28 of the refinement is based around the inter-tidal zones and
    29 other important features such as islands and rivers.
     20The finite volume technique relies on the construction of a triangular mesh covering the study region. This mesh can be altered to suit the needs of the scenario in question; in particular, it can be refined in areas of interest, especially the coastal region where complex behaviour is likely to occur. In setting up the model, the user defines the triangular cell area in each region of interest\footnote{Note that the cell
     21area will be the maximum cell area within the defined region and that each
     22cell in the region does not necessarily have the same area.}.
     23The area should not be so small as to require unrealistic computational time, nor so large as to inadequately capture important behaviour. There is also no gain in choosing a cell area finer than the resolution of the supporting data.
     24Figure \ref{fig:onslow_area} shows the study area and where further mesh refinement has been made. For each region, a maximum triangular cell area is defined, along with its associated lateral accuracy.
     25With these cell areas, the study area consists of 401939 triangles
     26in which water levels and momenta are tracked through time. The lateral accuracy refers to the distance to within which we are confident in stating that a region is inundated. Therefore, we can only be confident in the calculated inundation extent in the Onslow town centre to within 30m.
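A minimal sketch of how such a variable-resolution mesh might be set up is given below. It is written against the modern ANUGA Python interface; the function and argument names (create_domain_from_regions, interior_regions, maximum_triangle_area) are assumptions based on the current ANUGA documentation and may differ from the version used for this study, and the rectangular polygons are illustrative placeholders rather than the actual study-area polygons.

\begin{verbatim}
# Sketch only: variable-resolution mesh with per-region maximum cell areas.
# Assumes the modern ANUGA interface; polygons below are placeholders.
import anuga

bounding_polygon = [[0., 0.], [40000., 0.], [40000., 20000.], [0., 20000.]]
town_polygon  = [[18000., 1000.], [22000., 1000.], [22000., 3000.], [18000., 3000.]]
coast_polygon = [[5000., 4000.], [35000., 4000.], [35000., 6000.], [5000., 6000.]]
mid_polygon   = [[5000., 7000.], [35000., 7000.], [35000., 12000.], [5000., 12000.]]

domain = anuga.create_domain_from_regions(
    bounding_polygon,
    boundary_tags={'land': [0], 'ocean': [1, 2, 3]},
    maximum_triangle_area=100000,              # Region 4: offshore, to the boundary
    interior_regions=[(town_polygon, 500),     # Region 1: Onslow town centre
                      (coast_polygon, 2500),   # Region 2: coastal strip
                      (mid_polygon, 20000)])   # Region 3: to roughly the 50m contour
\end{verbatim}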
    3027
    3128\begin{figure}[hbt]
     
    3431             {../report_figures/onslow_data_poly.png}}
    3532
    36   \caption{Study area for Onslow scenario highlighting areas of increased
    37 refinement.
     33  \caption{Study area for Onslow scenario highlighting four regions of increased refinement.
     34Region 1: Surrounds Onslow town centre with a cell area of 500 m$^2$ (lateral accuracy 30m).
     35Region 2: Surrounds the coastal region with a cell area of 2500 m$^2$ (lateral accuracy 70m).
     36Region 3: Water depths to the 50m contour line (approximately) with a cell area of 20000 m$^2$ (lateral accuracy 200m).
     37Region 4: Water depths to the boundary (approximately 100m contour line) with a cell area of 100000 m$^2$ (lateral accuracy 445m).
    3838}
    3939  \label{fig:onslow_area}
    4040\end{figure}
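The lateral accuracies quoted in the caption appear to correspond to the leg length of a right-angled isosceles triangle of the given maximum cell area, i.e. roughly $\sqrt{2A}$. This interpretation is an assumption on our part rather than something stated in the report; a quick check:

\begin{verbatim}
# Assumed relationship (not stated explicitly): lateral accuracy ~ sqrt(2*A),
# the leg length of a right isosceles triangle with area A.
from math import sqrt

for region, area in [(1, 500), (2, 2500), (3, 20000), (4, 100000)]:
    print("Region %d: %.0f m" % (region, sqrt(2 * area)))
# Gives roughly 32, 71, 200 and 447 m, matching the quoted 30, 70, 200 and 445 m.
\end{verbatim}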
    41 
    42 In addition to refining the mesh in regions where complex behaviour
    43 will occur, it is important that the mesh also be
    44 commensurate with the underlying data. Referring to the onshore data
    45 discussed
    46 in Section \ref{sec:data}, we choose a cell area of 500 m$^2$ per triangle
    47 for the region surrounding the Onslow town centre.
    48 It is worth noting here that the cell
    49 area will be the maximum cell area within the defined region and that each
    50 cell in the region does not necessarily have the same area.
    51 In contrast to the onshore data, the offshore
    52 data is a series of survey points which is typically not supplied on a fixed
    53 grid which complicates the issue of determining an appropriate cell area.
    54 In addition, the data is not necessarily complete, as can be
    55 seen in Figure \ref{fig:onslow_area}.
    56 The remaining cell areas are
    57 2500 m$^2$ for the region surrounding the coast,
    58 20000 m$^2$ for the region reaching approximately the 50m contour line, with
    59 the remainder of the study area having a cell area of 100000 m$^2$.
    60 These choice of cell areas are more than adequate to propagate the tsunami wave
    61 in the deepest sections of the study area.\footnote{
    62 With a wavelength of 20km, the minimum (square) grid resolution would
    63 be around 2000m (allowing ten cells per wavelength).
    64 This results in a square cell area of 4000000 m$^2$ which indicates a minimum
    65 triangular cell area of 2000000 m$^2$.}
    66 The resultant computational mesh is shown in Figure \ref{fig:mesh_onslow}.
    67 
    68 With these cell areas, the study area consists of 401939 triangles
    69 in which water levels and momentums are tracked through time.
    70 The associated lateral accuracy
    71 for these cell areas is approximatly 30m, 70m, 200m and 445m for the respective
    72 areas. This means
    73 that we can only be confident in the calculated inundation extent to
    74 approximately 30m lateral accuracy within the Onslow town centre.
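The minimum-resolution argument in the footnote of the removed text can be checked directly; the ten-cells-per-wavelength rule of thumb is the one quoted in that footnote.

\begin{verbatim}
# Check of the footnote's minimum-resolution argument (sketch only).
wavelength = 20000.0                    # m, tsunami wavelength in deep water
cells_per_wavelength = 10               # rule of thumb quoted in the footnote
dx = wavelength / cells_per_wavelength  # 2000 m minimum (square) grid resolution
square_area = dx ** 2                   # 4,000,000 m^2
triangle_area = square_area / 2         # 2,000,000 m^2, half a square cell
# The coarsest cell area actually used (100,000 m^2) is far below this limit,
# so the offshore mesh easily resolves the wave.
\end{verbatim}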
    7541
    7642\begin{figure}[hbt]
     
    8551\end{figure}
    8652
    87 The final item to be addressed to complete the model setup is to
    88 define the boundary condition. As
     53The final item to be addressed to complete the model setup is the
     54definition of the boundary condition. As
    8955discussed in Section \ref{sec:tsunamiscenario}, a Mw 9 event provides
    9056the tsunami source. The resultant tsunami wave is made up of a series
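A minimal sketch of how the MOST-derived wave might be attached at the seaward boundary, continuing from the domain construction sketched above. It assumes the modern ANUGA interface (Field_boundary, Reflective_boundary, set_boundary, evolve), which may differ from the version used in 2006, and the boundary file name is a placeholder.

\begin{verbatim}
# Sketch only: MOST output (converted to an ANUGA boundary file) driving the
# seaward boundary; 'domain' is the object built in the mesh sketch above.
# The file name is a placeholder.
import anuga

Br = anuga.Reflective_boundary(domain)            # landward edges reflect
Bf = anuga.Field_boundary('most_output.sww',      # time-varying stage/momentum
                          domain)                 # from the deep-water model
domain.set_boundary({'land': Br, 'ocean': Bf})

# Evolve the model, tracking water levels and momenta through time.
for t in domain.evolve(yieldstep=60, finaltime=5 * 3600):
    domain.print_timestepping_statistics()
\end{verbatim}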
  • production/onslow_2006/report/damage.tex

    r3340 r3342  
    55Exposure data are sourced from the National Building Exposure Database (NBED),
    66developed by GA\footnote{http://www.ga.gov.au/urban/projects/ramp/NBED.jsp}.
    7 It contains information about residential buildings, people, and the
     7It contains information about residential buildings, people and the
    88cost of replacing buildings and contents.
    99
     
    1111residential collapse vulnerability models and casualty models were developed.
    1212The vulnerability models have been developed for
    13 framed residential construction using data from teh Indian Ocean tsunami event.
    14 The models predict the collapse
    15 probability for an exposed population and incorporate the following
     13framed residential construction using data from the Indian Ocean tsunami event. The models predict the collapse
     14probability for an exposed population and incorporate the following
    1615parameters known to influence building damage \cite{papathoma:vulnerability},
    1716
     
    2625%In applying the model, all structures in the inundation zone were
    2726%spatially located and the local water depth and building row
    28 %number from the exposed edge of the suburb were determined for each structure.
     27%number from the exposed edge of the suburb were determined for each %structure.
    2928
    3029Casualty models were based on the
     
    5453based on the total contents value of \$85,410,060 for
    5554the Onslow region. The injuries sustained are summarised
    56 in Table \ref{table:injuries} with around \% affected in the 0m AHD
    57 scenario.
    58 Around \%
    59 of the population are affected in the 1.5m AHD scenario with around \%
    60 affected in the 0m AHD scenario.
    61 
     55in Table \ref{table:injuries}. The HAT scenario is the only scenario to cause damage to Onslow, with around \% of the population affected.
    6256
    6357\begin{table}[h]
  • production/onslow_2006/report/data.tex

    r3340 r3342  
    44mesh.
    55Ideally, the data should adequately capture all complex features
    6 of the underlying bathymetry and topography and that mesh
    7 is commensurate with the underlying data, as discussed in
    8 Section \ref{sec:anuga}. Any limitations
     6of the underlying bathymetry and topography. Any limitations
    97in the resolution and accuracy of the data will introduce
    108errors to the inundation maps, in addition to the range of approximations
     
    2422increased accuracy over the DTED data.
    2523
    26 Figure \ref{fig:contours_compare} shows the contour lines for
     24Figure \ref{fig:contours_compare}(a) shows the contour lines for
    2725HAT, MSL and LAT for Onslow using the DTED data where it is evident
    2826that the extent of the tidal inundation is exaggerated. This is due to
    2927shortcomings with the digital elevation model (DEM) created from
    3028the DTED data. The DEM has been
    31 derived from 20m contour lines. {\bf Need some words from hamish here.}
    32 As a result, we turned to the WA DLI onshore data to present
    33 the results in this report. Figure \ref{fig:contours_compare} shows
     29derived from 20m contour lines. {\bf Need some words from hamish here.} Figure \ref{fig:contours_compare}(b) shows
    3430the contour lines for HAT, MSL and LAT for Onslow using the WA DLI data.
    3531It is obvious that there are significant differences between the two DEMs, with
    36 secondary information regarding total station surveys and the knowledge
     32total station survey information and the knowledge
    3733of the HAT contour line pointing to increased confidence in the WA DLI
    38 data over the DTED data for use in inundation modelling.
     34data over the DTED data for use in the inundation modelling.
    3935The impact difference based on these two onshore data sets
    4036will be discussed in Section \ref{sec:issues}.
     
    5854
    5955  \caption{Onslow region showing the -1.5m AHD (LAT), 0m AHD (MSL)
    60 and 1.5m AHD (HAT) contour lines using the DTED Level 2 data (a) and
    61 the WA DLI data (b).}
     56and 1.5m AHD (HAT) contour lines using the (a) DTED Level 2 data and
     57the (b) WA DLI data.}
    6258 % \label{fig:contours_dli}
    6359 \label{fig:contours_compare}
     
    6965similar data have been provided by DPI for Pt Hedland and Broome.)
    7066The Australian Hydrographic Office (AHO) has supplied extensive
    71 fairsheet data which has also been utilised.
     67fairsheet data which has also been utilised. In contrast to the onshore data, the offshore data is a series of survey points which is typically not supplied on a fixed grid. In addition, the offshore data typically does not have the coverage of the onshore data and will often have gaps where surveys have not been conducted.
    7268The coastline has been generated by
    7369using the aerial photography, two detailed surveys provided
     
    7773Appendix \ref{sec:metadata} provides more details and the supporting metadata
    7874for this study.
    79 Table \ref{table:data} summarises the available data for this study.
     75Table \ref{table:data} summarises the available data for this study.
     76Figure \ref{fig:onslowdataarea} shows the offshore data indicating a number of gaps.
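A minimal sketch of how the combined onshore and offshore survey points might be fitted onto the computational mesh is given below, assuming the modern ANUGA set_quantity interface; the points file name is a placeholder, and the initial stage value is illustrative only (here the 1.5m AHD HAT scenario mentioned above).

\begin{verbatim}
# Sketch only: fitting scattered elevation data (onshore and offshore survey
# points combined into a single points file) onto the mesh. Assumes the modern
# ANUGA interface; the file name is a placeholder. 'domain' is the mesh object
# from the computational-setup sketch.
domain.set_quantity('elevation',
                    filename='onslow_combined_elevation.pts',
                    use_cache=True,
                    verbose=True)
domain.set_quantity('friction', 0.0)   # illustrative default
domain.set_quantity('stage', 1.5)      # e.g. 1.5m AHD (HAT) initial water level
\end{verbatim}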
    8077
    8178\begin{table}
     
    9390\end{table}
    9491
    95 %\begin{figure}[hbt]
    96 %
    97 %  \centerline{ \includegraphics[width=100mm, height=75mm]
    98 %{../report_figures/onslow_data_extent.png}}
    99 %
    100 %  \caption{Data extent for Onslow scenario. Offshore data shown in blue
    101 %and onshore data in green.}
    102 %  \label{fig:onslowdataarea}
    103 %\end{figure}
     92\begin{figure}[hbt]
     93
     94  \centerline{ \includegraphics[width=100mm, height=75mm]
     95{../report_figures/onslow_data_extent.png}}
     96
     97  \caption{Data extent for Onslow scenario. Offshore data shown in blue
     98and onshore data in green.}
     99  \label{fig:onslowdataarea}
     100\end{figure}
    104101
    105102
  • production/onslow_2006/report/modelling_methodology.tex

    r3252 r3342  
    1 Tsunami hazard models have been available for some time. They generally
    2 work by converting the energy released by a subduction earthquake into
    3 a vertical displacement of the ocean surface. The resulting wave is
    4 then propagated across a sometimes vast stretch of ocean using a
    5 relatively coarse model based on bathymetries with a typical
    6 resolution of two arc minutes (check this with David).
    7 The maximal wave height at a fixed contour line near the coastline
    8 (e.g.\ 50m) is then reported as the hazard to communities ashore.
    9 Models such as Method of Splitting Tsunamis (MOST) \cite{VT:MOST} and the
    10 URS Corporation's
    11 Probabilistic Tsunami Hazard Analysis 
    12 \cite{somerville:urs} follow this paradigm.
     1GA bases its risk modelling on understanding both the hazard and a community's vulnerability in order to determine the impact of a particular hazard event. The resultant risk relies on an assessment of the likelihood of the event. An overall risk assessment for a particular hazard would then scale each event's impact by its likelihood.
    132
    14 To capture the \emph{impact} of a hydrological disaster such as tsunamis on a
    15 community one must model the details of how waves are reflected and otherwise
    16 shaped by the local bathymetries as well as the dynamics of the
    17 runup process onto the topography in question.
     3To develop a tsunami risk assessment, the tsunami hazard itself must first be understood. These events are generally modelled by converting
     4the energy released by a subduction earthquake into a vertical displacement of the ocean surface.
     5%Tsunami hazard models have been available for some time.
     6The resulting wave is
     7then propagated across a sometimes vast stretch of ocean towards the
     8area of interest.
     9%using a relatively coarse model
     10%based on bathymetries with a typical resolution of two arc minutes.
     11The hazard itself is then reported as a maximum wave height at a fixed contour line near the coastline (e.g.\ 50m). This is how the preliminary tsunami hazard assessment was reported by GA to FESA in September 2005 \cite{}. That assessment used the Method of Splitting Tsunamis (MOST)
     12\cite{VT:MOST} model.
     13%The maximal wave height at a fixed contour line near the coastline
     14%(e.g.\ 50m) is then reported as the hazard to communities ashore.
     15%Models such as Method of Splitting Tsunamis (MOST) \cite{VT:MOST} and the
     16%URS Corporation's
     17%Probabilistic Tsunami Hazard Analysis 
     18%\cite{somerville:urs} follow this paradigm.
     19
     20MOST, which generates and propagates the tsunami wave from its source, is not adequate to model the wave's impact on communities ashore.
     21To capture the \emph{impact} of a tsunami on a coastal community, the model must resolve more detail of the wave, particularly how it is affected by the local bathymetry, and by the local topography as the wave penetrates onshore.
     22%the details of how waves are reflected and otherwise
     23%shaped by the local bathymetries as well as the dynamics of the
     24%runup process onto the topography in question.
    1825It is well known that local bathymetric and topographic effects are
    1926critical in determining the severity of a hydrological disaster
    20 \cite{matsuyama:1999}. To model the
    21 details of tsunami inundation of a community one must therefore capture what is
    22 known as non-linear effects and use a much higher resolution for the
    23 elevation data.
    24 Linear models typically use data resolutions of the order
    25 of hundreds of metres, which is sufficient to model the tsunami waves
    26 in deeper water where the wavelength is longer.
    27 Non-linear models however require much finer resolution in order to capture
    28 the complexity associated with the water flow from offshore
    29 to onshore. By contrast, the data
    30 resolution required is typically of the order of tens of metres.
    31 The model ANUGA \cite{ON:modsim} is suitable for this type of non-linear
    32 modelling.
    33 Using a non-linear model capable of resolving local bathymetric effects
    34 and runup using detailed elevation data will require more computational
    35 resources than the typical hazard model making it infeasible to use it
    36 for the entire, end-to-end, modelling.
     27\cite{matsuyama:1999}. To model the impact of the tsunami wave on the coastal community, we use ANUGA \cite{ON:modsim}. In order to capture the details of the wave and its interactions, a much finer resolution is required than that of the hazard model. As a result, ANUGA concentrates on a specific coastal community. MOST, by contrast, can tolerate a coarser resolution and often covers vast areas. To develop the impact from an earthquake event at a distant source, we adopt a hybrid approach in which the event itself is modelled with MOST and the impact is modelled with ANUGA. In this way, the output from MOST serves as an input to ANUGA. In modelling terms, the MOST output is a boundary condition for ANUGA.
     28 
     29The risk of this tsunami event cannot be determined until the likelihood of the event is known. GA is currently building a complete probabilistic hazard map which is due for completion later this year. Therefore, we report on the impact of a single tsunami event only. Once the hazard map is completed, the impact will be assessed for a range of events, which will ultimately determine a tsunami risk assessment for the NW shelf.
     30%To model the
     31%details of tsunami inundation of a community one must therefore capture %what is
     32%known as non-linear effects and use a much higher resolution for the
     33%elevation data.
     34%Linear models typically use data resolutions of the order
     35%of hundreds of metres, which is sufficient to model the tsunami waves
     36%in deeper water where the wavelength is longer.
     37%Non-linear models however require much finer resolution in order to %capture
     38%the complexity associated with the water flow from offshore
     39%to onshore. By contrast, the data
     40%resolution required is typically of the order of tens of metres.
     41%The model ANUGA \cite{ON:modsim} is suitable for this type of non-linear
     42%modelling.
     43%Using a non-linear model capable of resolving local bathymetric effects
     44%and runup using detailed elevation data will require more computational
     45%resources than the typical hazard model making it infeasible to use it
     46%for the entire, end-to-end, modelling.
    3747
    38 We have adopted a hybrid approach whereby the output from the 
    39 hazard model MOST is used as input to ANUGA at the seaward boundary of its study area.
    40 In other words, the output of MOST serves as boundary condition for the
    41 ANUGA model. In this way, we restrict the computationally intensive part only to
    42 regions where we are interested in the detailed inundation process. 
     48%We have adopted a hybrid approach whereby the output from the 
     49%hazard model MOST is used as input to ANUGA at the seaward boundary of its %study area.
     50%In other words, the output of MOST serves as boundary condition for the
     51%ANUGA model. In this way, we restrict the computationally intensive part %only to
     52%regions where we are interested in the detailed inundation process. 
    4353
    44 Furthermore, to avoid unnecessary computations ANUGA works with an
    45 unstructured triangular mesh rather than the rectangular grids
    46 used by e.g.\ MOST. The advantage of an unstructured mesh
    47 is that different regions can have different resolutions allowing
    48 computational resources to be directed where they are most needed.
    49 For example, one might use very high resolution near a community
    50 or in an estuary, whereas a coarser resolution may be sufficient
    51 in deeper water where the bathymetric effects are less pronounced.
    52 Figure \ref{fig:refinedmesh} shows a mesh of variable resolution.
     54%Furthermore, to avoid unnecessary computations ANUGA works with an
     55%unstructured triangular mesh rather than the rectangular grids
     56%used by e.g.\ MOST. The advantage of an unstructured mesh
     57%is that different regions can have different resolutions allowing
     58%computational resources to be directed where they are most needed.
     59%For example, one might use very high resolution near a community
     60%or in an estuary, whereas a coarser resolution may be sufficient
     61%in deeper water where the bathymetric effects are less pronounced.
     62%Figure \ref{fig:refinedmesh} shows a mesh of variable resolution.
    5363
    54 \begin{figure}[hbt]
    55 
    56   \centerline{ \includegraphics[width=100mm, height=75mm]
    57              {../report_figures/refined_mesh.jpg}}
    58 
    59   \caption{Unstructured mesh with variable resolution.}
    60   \label{fig:refinedmesh}
    61 \end{figure}
     64%\begin{figure}[hbt]
     65%
     66%  \centerline{ \includegraphics[width=100mm, height=75mm]
     67%             {../report_figures/refined_mesh.jpg}}
     68%
     69%  \caption{Unstructured mesh with variable resolution.}
     70%  \label{fig:refinedmesh}
     71%\end{figure}
    6272   
    6373