Changeset 3342
- Timestamp:
- Jul 17, 2006, 11:13:48 PM
- Location:
- production/onslow_2006/report
- Files:
- 4 edited
production/onslow_2006/report/computational_setup.tex
r3341 r3342

To set up a model for the tsunami scenario, a study area is first
determined. Preliminary investigations have indicated that the 100m
bathymetric contour line is a suitable location at which to pass the
output from MOST to ANUGA as input\footnote{
Preliminary investigations indicate that MOST and ANUGA compare …

… approximately 10m elevation.

The finite volume technique relies on the construction of a triangular mesh
covering the study region. This mesh can be altered to suit the needs of the
scenario in question; in particular, it can be refined in areas of interest,
such as the coastal region where complex behaviour is likely to occur.
In setting up the model, the user defines the area of the triangular cells
in each region of interest\footnote{Note that the cell
area is the maximum cell area within the defined region; each
cell in the region does not necessarily have the same area.}.
The cell area should not be so small as to exceed realistic computational
time, nor so large as to inadequately capture important behaviour. There is
also no gain in choosing a cell area finer than the resolution of the
supporting data.
Figure \ref{fig:onslow_area} shows the study area and where further mesh
refinement has been made. For each region, a maximum triangular cell area
and its associated lateral accuracy are defined.
With these cell areas, the study area consists of 401939 triangles
in which water levels and momenta are tracked through time. The lateral
accuracy refers to the distance to within which we are confident in stating
that a region is inundated. We can therefore only be confident in the
calculated inundation extent in the Onslow town centre to within 30m.

\begin{figure}[hbt]

\centerline{\includegraphics[width=100mm, height=75mm]
{../report_figures/onslow_data_poly.png}}

\caption{Study area for Onslow scenario highlighting four regions of
increased refinement.
Region 1: Surrounds the Onslow town centre, with a cell area of 500 m$^2$
(lateral accuracy 30m).
Region 2: Surrounds the coastal region, with a cell area of 2500 m$^2$
(lateral accuracy 70m).
Region 3: Water depths to the 50m contour line (approximately), with a cell
area of 20000 m$^2$ (lateral accuracy 200m).
Region 4: Water depths to the boundary (approximately the 100m contour line),
with a cell area of 100000 m$^2$ (lateral accuracy 445m).
}
\label{fig:onslow_area}
\end{figure}

\begin{figure}[hbt]

…

\end{figure}

The final item to be addressed to complete the model setup is the
definition of the boundary condition. As
discussed in Section \ref{sec:tsunamiscenario}, a Mw 9 event provides
the tsunami source. The resultant tsunami wave is made up of a series …
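The choice of cell areas can be illustrated with a few lines of Python. The
sketch below is illustrative only: it assumes ANUGA's
create\_domain\_from\_regions interface with hypothetical polygon files and
boundary tags rather than the production script used for this study, and it
interprets the quoted lateral accuracies as the leg length of a right
isosceles triangle of the given maximum area, $l = \sqrt{2A}$, which
approximately reproduces the 30m, 70m, 200m and 445m figures.

\begin{verbatim}
# Illustrative sketch only: polygon files and boundary tags are
# placeholders, not the production configuration for the Onslow study.
from math import sqrt
import anuga   # assumes the anuga Python package is available

# Maximum triangle areas (m^2) for the four refinement regions shown in
# the study-area figure.
cell_areas = {'town centre':                500,
              'coastal strip':              2500,
              'to ~50m contour':            20000,
              'to boundary (~100m contour)': 100000}

# Lateral accuracy read as the leg of a right isosceles triangle of the
# given area:  A = l^2 / 2  =>  l = sqrt(2A).
for region, area in cell_areas.items():
    print('%-28s %6d m^2  ~%.0f m' % (region, area, sqrt(2 * area)))

# Hypothetical polygons bounding the study area and the refined regions.
bounding_polygon = anuga.read_polygon('extent.csv')
interior_regions = [(anuga.read_polygon('town.csv'),  500),
                    (anuga.read_polygon('coast.csv'), 2500),
                    (anuga.read_polygon('shelf.csv'), 20000)]

domain = anuga.create_domain_from_regions(
    bounding_polygon,
    boundary_tags={'ocean': [0, 1, 2], 'land': [3, 4]},  # placeholder edges
    maximum_triangle_area=100000,      # coarsest region, out to ~100m depth
    interior_regions=interior_regions,
    mesh_filename='onslow.msh')
print(domain.statistics())
\end{verbatim}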
production/onslow_2006/report/damage.tex
r3340 r3342

Exposure data are sourced from the National Building Exposure Database (NBED),
developed by GA\footnote{http://www.ga.gov.au/urban/projects/ramp/NBED.jsp}.
It contains information about residential buildings, people and the
cost of replacing buildings and contents.

…

residential collapse vulnerability models and casualty models were developed.
The vulnerability models have been developed for framed residential
construction using data from the Indian Ocean tsunami event. The models
predict the collapse probability for an exposed population and incorporate
the following parameters known to influence building damage
\cite{papathoma:vulnerability},

…

%In applying the model, all structures in the inundation zone were
%spatially located and the local water depth and building row
%number from the exposed edge of the suburb were determined for each
%structure.

Casualty models were based on the …

… based on the total contents value of \$85,410,060 for
the Onslow region. The injuries sustained are summarised
in Table \ref{table:injuries}. The HAT scenario is the only scenario that
causes damage in Onslow, with around \% of the population affected.

\begin{table}[h]
…
production/onslow_2006/report/data.tex
r3340 r3342

… mesh.
Ideally, the data should adequately capture all complex features
of the underlying bathymetry and topography. Any limitations
in the resolution and accuracy of the data will introduce
errors to the inundation maps, in addition to the range of approximations …

… increased accuracy over the DTED data.

Figure \ref{fig:contours_compare}(a) shows the contour lines for
HAT, MSL and LAT for Onslow using the DTED data, where it is evident
that the extent of the tidal inundation is exaggerated. This is due to
shortcomings in the digital elevation model (DEM) created from
the DTED data. The DEM has been
derived from 20m contour lines. {\bf Need some words from hamish here.}
Figure \ref{fig:contours_compare}(b) shows
the contour lines for HAT, MSL and LAT for Onslow using the WA DLI data.
There are significant differences between the two DEMs, with
total station survey information and knowledge
of the HAT contour line pointing to increased confidence in the WA DLI
data over the DTED data for use in the inundation modelling.
The impact difference based on these two onshore data sets
will be discussed in Section \ref{sec:issues}.

…

\caption{Onslow region showing the -1.5m AHD (LAT), 0m AHD (MSL)
and 1.5m AHD (HAT) contour lines using the (a) DTED Level 2 data and
the (b) WA DLI data.}
% \label{fig:contours_dli}
\label{fig:contours_compare}

…

… similar data have been provided by DPI for Pt Hedland and Broome.)
The Australian Hydrographic Office (AHO) has supplied extensive
fairsheet data which has also been utilised. In contrast to the onshore data,
the offshore data is a series of survey points, typically not supplied on a
fixed grid. In addition, the offshore data does not have the coverage of the
onshore data and often contains gaps where surveys have not been conducted.
The coastline has been generated
using the aerial photography, two detailed surveys provided …

… Appendix \ref{sec:metadata} provides more details and the supporting
metadata for this study.
Table \ref{table:data} summarises the available data for this study.
Figure \ref{fig:onslowdataarea} shows the offshore data, indicating a number
of gaps.

\begin{table}
…
\end{table}

\begin{figure}[hbt]

\centerline{\includegraphics[width=100mm, height=75mm]
{../report_figures/onslow_data_extent.png}}

\caption{Data extent for Onslow scenario. Offshore data shown in blue
and onshore data in green.}
\label{fig:onslowdataarea}
\end{figure}
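Because the offshore data arrive as scattered survey points rather than a
grid, they must be combined with the onshore data and fitted onto the
computational mesh as the elevation quantity. The following is a minimal
sketch of one way this could be done with ANUGA's geospatial utilities; the
file names and the smoothing parameter are assumptions for illustration, not
the values used in this study.

\begin{verbatim}
# Illustrative sketch: combine onshore and offshore elevation points and
# fit them to the computational mesh. File names and alpha are placeholders.
from anuga.geospatial_data.geospatial_data import Geospatial_data

def set_combined_elevation(domain, onshore_file, offshore_file):
    """Blend onshore and offshore point data and fit them to the mesh."""
    onshore  = Geospatial_data(file_name=onshore_file)    # e.g. WA DLI points
    offshore = Geospatial_data(file_name=offshore_file)   # e.g. DPI/AHO points

    combined = onshore + offshore                         # concatenate point sets
    combined.export_points_file('combined_elevation.pts')

    # Least-squares fit of the scattered points to the triangulation;
    # the smoothing parameter alpha helps bridge gaps in offshore coverage.
    domain.set_quantity('elevation',
                        filename='combined_elevation.pts',
                        alpha=0.1,
                        use_cache=True,
                        verbose=True)
\end{verbatim}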
production/onslow_2006/report/modelling_methodology.tex
r3252 r3342

GA bases its risk modelling on the process of understanding the hazard and a
community's vulnerability in order to determine the impact of a particular
hazard event. The resultant risk relies on an assessment of the likelihood of
the event. An overall risk assessment for a particular hazard would then rely
on scaling each event's impact by its likelihood.

To develop a tsunami risk assessment, the tsunami hazard itself must first be
understood. These events are generally modelled by converting the energy
released by a subduction earthquake into a vertical displacement of the ocean
surface. The resulting wave is then propagated across a sometimes vast stretch
of ocean towards the area of interest. The hazard itself is then reported as a
maximum wave height at a fixed contour line near the coastline (e.g.\ 50m).
This is how the preliminary tsunami hazard assessment was reported by GA to
FESA in September 2005 \cite{}. That assessment used the Method of Splitting
Tsunamis (MOST) model \cite{VT:MOST}.

MOST, which generates and propagates the tsunami wave from its source, is not
adequate to model the wave's impact on communities ashore. To capture the
\emph{impact} of a tsunami on a coastal community, the model must be capable
of capturing more detail about the wave, particularly how it is affected by
the local bathymetry, as well as by the local topography as the wave
penetrates onshore.
It is well known that local bathymetric and topographic effects are
critical in determining the severity of a hydrological disaster
\cite{matsuyama:1999}. To model the impact of the tsunami wave on the coastal
community, we use ANUGA \cite{ON:modsim}. In order to capture the details of
the wave and its interactions, a much finer resolution is required than that
of the hazard model. As a result, ANUGA concentrates on a specific coastal
community; MOST, by contrast, can tolerate a coarser resolution and often
covers vast areas. To develop the impact from an earthquake event at a
distant source, we adopt a hybrid approach: the event itself is modelled with
MOST and its impact with ANUGA. In this way, the output from MOST serves as
an input to ANUGA; in modelling terms, the MOST output is a boundary
condition for ANUGA.

The risk of this tsunami event cannot be determined until the likelihood of
the event is known. GA is currently building a complete probabilistic hazard
map which is due for completion later this year. We therefore report on the
impact of a single tsunami event only. Once the hazard map is completed, the
impact will be assessed for a range of events, which will ultimately
determine a tsunami risk assessment for the NW shelf.
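In practice the hand-over from MOST to ANUGA is expressed as a boundary
condition on the seaward edge of the ANUGA study area. The sketch below
assumes the MOST output has already been converted into a time-series file
that ANUGA can read, and that the boundary was tagged 'ocean' and 'land' when
the mesh was built; the file name, the use of Field\_boundary and the tidal
offset via mean\_stage are illustrative assumptions, not the production
setup.

\begin{verbatim}
# Illustrative sketch: drive the seaward boundary of an ANUGA domain with
# (converted) MOST output. File name and boundary tags are placeholders.
import anuga

def evolve_with_most_boundary(domain, most_file='most_output.sww'):
    """Apply MOST-derived conditions offshore and a reflective wall onshore."""
    Br = anuga.Reflective_boundary(domain)        # landward edges
    Bf = anuga.Field_boundary(most_file,          # stage/momentum series from MOST
                              domain,
                              mean_stage=0.0,     # e.g. 1.5 for a HAT (1.5m AHD) run
                              use_cache=True,
                              verbose=True)

    # Tags must match those used when the mesh was generated.
    domain.set_boundary({'ocean': Bf, 'land': Br})

    # Evolve the flow, reporting every minute of model time for two hours.
    for t in domain.evolve(yieldstep=60, finaltime=7200):
        print(domain.timestepping_statistics())
\end{verbatim}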