# Changeset 6240

Ignore:
Timestamp:
Jan 29, 2009, 4:16:09 PM (10 years ago)
Message:

updated patong validation paper. Introduction motivates need for new validation benchmark. Section 2 describes data sets needed. Remaining sections only contain a brief outline of what should be written

File:
1 edited

%------Abstract--------------
\begin{abstract}
Geoscience Australia, in an open collaboration with the Mathematical Sciences Institute, The Australian National University, is developing a software application, ANUGA, to model the hydrodynamics of tsunamis, floods and storm surges. The open source software implements a finite volume central-upwind Godunov method to solve the non-linear depth-averaged shallow water wave equations. This paper investigates the veracity of ANUGA when used to model tsunami inundation. A particular aim was to make use of the comparatively large amount of observed data corresponding to the Indian Ocean tsunami event of December 2004 to provide a conditional assessment of the computational model's performance. Specifically, an inundation map constructed from observed data is compared against the modelled maximum inundation. This comparison shows that there is very good agreement between the simulated and observed values. The sensitivity of model results to the resolution of the bathymetry data used in the model was also investigated. It was found that the performance of the model could be drastically improved by using finer bathymetric data which better captures local topographic features. The effects of two different source models were also explored.
\end{abstract}

%======================Section 1=================
Notes:
* Model source developed independently of inundation data.
* Patong region was chosen because a high resolution inundation map and bathymetry and topography data were available there.

\section{Introduction}
Tsunamis are a potential hazard to coastal communities all over the world. These waves can cause loss of life and have huge social and economic impacts. On the 26th of December 2004 the Indian Ocean tsunami killed around 230,000 people and caused billions of dollars in damage (Synolakis {\it et al.} 2005).
Hundreds of millions of dollars in aid have been donated to the rebuilding process, and still the lives of hundreds of thousands of people will never be the same. Fortunately, catastrophic tsunamis of the scale of the 26 December 2004 event are exceedingly rare (Jankaew {\it et al.} 2008). However, smaller-scale tsunamis are more common and regularly threaten coastal communities around the world. Earthquakes that occur in the Java Trench near Indonesia (e.g. Tsuji {\it et al.} 1995) and along the Puysegur Ridge to the south of New Zealand (e.g. Lebrun {\it et al.} 1998) have the potential to generate tsunamis that may threaten Australia's northwestern and southeastern coastlines.\nocite{synolakis05,tsuji95,lebrun98}

For these reasons there has been an increased focus on tsunami hazard mitigation over the past three years. Tsunami hazard mitigation involves detection, forecasting, and emergency preparedness (Synolakis {\it et al.} 2005). Unfortunately, because of the short time scales (at most a few hours) over which tsunamis impact coastal communities, real-time models that can be used for guidance as an event unfolds are currently underdeveloped. Consequently, current tsunami mitigation efforts must focus on developing a database of pre-simulated scenarios to help increase the effectiveness of immediate relief efforts. Firstly, areas of high vulnerability, such as densely populated regions at risk of extreme damage, are identified. Action can then be undertaken before the event to minimise damage (early warning systems, breakwalls etc.) and protocols put in place to be followed when the flood waters subside. In this spirit, Titov {\it et al.} (2001)\nocite{titov01} discuss a current Short-term Inundation Forecasting (SIFT) project for tsunamis.

Several approaches are currently used to solve these problems. They differ in the way that the propagation of a tsunami is described.
The shallow water wave equations, linearised shallow water wave equations, and Boussinesq-type equations are commonly accepted descriptions of flow. However, the complex nature of these equations and the highly variable nature of the phenomena that they describe necessitate the use of numerical simulations.

Geoscience Australia, in an open collaboration with the Mathematical Sciences Institute, The Australian National University, is in the final stages of completing a hydrodynamic modelling tool called ANUGA to simulate the shallow water propagation and run-up of tsunamis. Further development of this tool requires comprehensive assessment of the model. In particular the model must be validated and tested to ensure it is sufficiently robust and that the interactions and outcomes demonstrated are feasible and defensible, given the objectives. These objectives include: simulating flow over dry beds and the appearance of dry states within previously wet regions; accurately describing steady state flows and small perturbations from these steady states over rapidly-varying topography; and accurately resolving shocks. Applications of ANUGA include, but are not limited to, dam-breaks, storm surges, and tsunami propagation.

The process of validating the ANUGA application is in its early stages, but initial indications are encouraging. As part of the Third International Workshop on Long-wave Run-up Models in 2004\footnote{http://www.cee.cornell.edu/longwave}, four benchmark problems were specified to allow the comparison of numerical, analytical and physical models with laboratory and field data. One of these problems describes a wave tank simulation of the 1993 Okushiri Island tsunami off Hokkaido, Japan (Matsuyama {\it et al.} 2001)\nocite{matsuyama01}. The wave tank simulation of the Hokkaido tsunami was used as the first scenario for validating ANUGA. The dataset provided bathymetry and topography along with the initial water depth and the wave specifications.
The dataset also contained water depth time series from three wave gauges situated offshore from the simulated inundation area. Although good agreement was obtained between the observed and simulated water depth at each of the three gauges (Roberts {\it et al.} 2006)\nocite{roberts06}, further validation is needed.

Tsunamis are a potential hazard to coastal communities all over the world. A number of recent large events have increased community and scientific awareness of the need for effective tsunami hazard mitigation. Tsunami modelling is a major component of hazard mitigation, which involves detection, forecasting, and emergency preparedness (Synolakis {\it et al.} 2005). Accurate models can be used to provide information that increases the effectiveness of action undertaken before the event to minimise damage (early warning systems, breakwalls etc.) and of protocols put in place to be followed when the flood waters subside.

Several approaches are currently used to model tsunami propagation and inundation. These methods differ in both the formulation used to describe the evolution of the tsunami and the numerical methods used to solve the governing equations. The shallow water wave equations, linearised shallow water wave equations, and Boussinesq-type equations are commonly accepted descriptions of flow. The complex nature of these equations and the highly variable nature of the phenomena that they describe necessitate the use of numerical models. These models are typically used to predict quantities such as arrival times, wave speeds, wave heights and inundation extents, which are used to develop efficient hazard mitigation plans. Inaccuracies in model prediction can result in inappropriate evacuation plans and town zoning, which may result in loss of life and large financial losses. Consequently, tsunami models must undergo sufficient testing to increase scientific and community confidence in the model predictions.
A model of a physical system can never be proven correct with complete certainty; one can only show that the model does not fail under certain conditions. However, the utility of a model can be assessed through a process of validation and verification. Verification assesses the accuracy of the numerical method used to solve the governing equations, while validation is used to investigate whether the model adequately represents the physical system.
%Verification must be used to reduce numerical error before validation is used to assess model structure.
In some situations it may be possible to increase the numerical accuracy of a model and yet produce a worse fit to the observed data.

The sources of data used to validate and verify a model can be separated into three main categories: analytical solutions, scale experiments and field measurements. Analytical solutions of the governing equations of a model, if available, provide the best means of validating a numerical hydrodynamic model. The solutions provide spatially and temporally distributed values of important observables that can be compared against modelled results. However, analytical solutions to the governing equations are frequently limited to a small set of idealised examples that do not completely capture the more complex behaviour of `real' events. Scale experiments, typically in the form of wave-tank experiments, provide a more realistic source of data that better captures the complex dynamics of natural tsunamis, whilst allowing control of the event and much easier and more accurate measurement of the tsunami properties. However, comparison of numerical predictions with field data provides the most stringent test of model veracity. The use of field data increases the generality and significance of conclusions made regarding model utility. On the other hand, the use of field data also significantly increases the uncertainty of the validation experiment, which may constrain the ability to make unequivocal statements~\cite{lane94}.
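Whichever source of reference data is used, assessing agreement ultimately reduces to computing error measures between modelled and reference values sampled at the same points. A minimal illustrative sketch (the function name is ours, not part of ANUGA):

```python
import numpy as np

def error_norms(modelled, observed):
    """Root-mean-square and maximum absolute error between a modelled
    field and a reference (analytical, experimental or field) data set
    sampled at the same points."""
    m = np.asarray(modelled, dtype=float)
    o = np.asarray(observed, dtype=float)
    diff = m - o
    rmse = np.sqrt(np.mean(diff ** 2))
    return rmse, np.max(np.abs(diff))
```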
Currently the amount of tsunami related field data is limited. The cost of tsunami monitoring programs and of bathymetry and topography surveys prohibits the collection of data in many of the regions in which tsunamis pose the greatest threat. The resulting lack of data has limited the number of field data sets available to validate tsunami models, particularly those modelling tsunami inundation.

Synolakis {\it et al.}~\cite{synolakis07} have developed a set of standards, criteria and procedures for evaluating numerical models of tsunamis. They propose three analytical solutions to help identify the validity of a model, and five scale comparisons (wave-tank benchmarks) and two field events to assess model veracity. The two field data benchmarks are very useful but only capture a small subset of possible tsunami behaviours, and only one of the benchmarks can be used to validate tsunami inundation. The type and size of a tsunami source, the propagation extent, and the local bathymetry and topography all affect the energy, waveform and subsequent inundation of a tsunami. Consequently, additional field data benchmarks that further capture the variability and sensitivity of the real world system would be useful, allowing model developers to verify their models and subsequently use them with greater confidence.

In this paper we develop a field data benchmark to be used in conjunction with the other tests proposed by Synolakis {\it et al.} to validate and verify models of tsunami inundation. The benchmark is constructed from data collected around Patong Bay, Thailand during and immediately following the 2004 Indian Ocean tsunami. This area was chosen because the authors were able to obtain unusually high resolution bathymetry and topography data for the area, together with an extensive inundation map generated from a survey performed in the aftermath of the tsunami. A description of this data is given in Section~\ref{sec:data}.
An associated aim of this paper is to illustrate the use of this new benchmark to validate an operational tsunami model. The specific intention is to test the ability of ANUGA to reproduce the inundation survey of maximum runup. ANUGA is a hydrodynamic modelling tool used to simulate tsunami propagation and run-up as well as rain-induced floods. The components of ANUGA are discussed in Section~\ref{sec:ANUGA}.

%=================Section=====================
\section{Indian Ocean tsunami of 26th December 2004}
Although appalling, the devastation caused by the 2004 Indian Ocean tsunami has heightened community, scientific and governmental interest in tsunamis and in doing so has provided a unique opportunity for further validation of tsunami models. Enormous resources have been spent to obtain many measurements of phenomena pertaining to this event to better understand the destruction that occurred. Data sets from seismometers, tide gauges, GPS stations, a few satellite overpasses, subsequent coastal field surveys of run-up and flooding, and measurements from ship-based expeditions have now been made available (Vigny {\it et al.} 2005, Amnon {\it et al.} 2005, Kawata {\it et al.} 2005, and Liu {\it et al.} 2005)\nocite{vigny05,amnon05,kawata05,liu05}. A number of studies have utilised this data to calibrate models of the tsunami source\cite{grilli07}, and to match tide gauge recordings\cite{}, maximum wave heights~\cite{asavanant08} and runup locations~\cite{ioualalen07}.

We propose to use this event as an additional field-data benchmark for the verification of tsunami models. This event captures certain tsunami behaviours that are not present in the benchmarks proposed by Synolakis {\it et al.}~\cite{synolakis07}, who detail two field data benchmarks. The first test compares model results against observed data from the Hokkaido-Nansei-Oki tsunami that occurred around Okushiri Island, Japan on the 12th of July 1993.
This tsunami provides an example of extreme runup generated from reflections and constructive interference resulting from local topography and bathymetry. The benchmark consists of two tide gauge records and numerous spatially distributed point sites at which maximum runup elevations were observed. The second benchmark is based upon the Rat Islands tsunami that occurred off the coast of Alaska on the 17th of November 2003. The Rat Islands tsunami provides a good test for real-time forecasting models, since the tsunami was recorded at three tsunameters. The test requires matching the propagation model data with the DART recording to constrain the tsunami source model, and then using a propagation model to reproduce the tide gauge record at Hilo.

%The tsunamis used by the two standard benchmarks and the 2004 tsunami are quite different.
They all arise from coseismic displacement resulting from an earthquake; however, they occur in very different geographical regions. The Hokkaido-Nansei-Oki tsunami was generated by an earthquake with a magnitude of 7.8 and only travelled a small distance before inundating Okushiri Island. The event provides an example of extreme runup generated from reflections and constructive interference resulting from local topography and bathymetry. In comparison, the Rat Islands tsunami was generated by an earthquake of the same magnitude but had to travel a much greater distance. The event provides a number of tide gauge recordings that capture the change in wave form as the tsunami evolved.

The December 2004 tsunami was a much larger event than the two previously described. It was generated by a disturbance, resulting from a M$_w$=9.2-9.3 mega-thrust earthquake, that propagated 1200-1300 km. Consequently the energy of the resulting wave was much larger than that of the waves generated by the more localised and smaller magnitude events mentioned above. WAS THE WAVELENGTH, VELOCITY (and thus average ocean depth) DIFFERENT FROM THESE TWO EVENTS???
If so state something like: this larger wavelength and energy, and simply the different geology of the area, produced a different wave signal and a different pattern of inundation. Here we focus on the large inundation experienced at Patong Bay on the west coast of Thailand.

\section{Data}
Hydrodynamic simulations require very little data in comparison to models of many other environmental systems. Tsunami models typically only require bathymetry and topography data to approximate the local geography, a parameterisation of the tsunami source from which appropriate initial conditions can be generated, and a locally distributed quantity such as Manning's friction coefficient to approximate friction. Here we discuss the bathymetric and topographic data sets and the source condition that are necessary to implement the proposed benchmark. Friction is discussed in Section~\ref{sec:}.

Patong Bay and the surrounding region are the source of an unusually large amount of data pertaining to the 2004 tsunami, which is necessary for tsunami verification. The authors obtained a number of raw data sets which were analysed, checked for quality (QCd) and subsequently gridded for easier visualisation and input into tsunami models.

\subsection{Bathymetric and topographic data}
The two arc minute grid data set, DBDB2, was obtained from the US Naval Research Labs and used to approximate the bathymetry in the Bay of Bengal. This grid was further interpolated to a 27 arc second grid. In the Andaman Sea we replaced the DBDB2 data with a 3 arc second grid obtained from NOAA. Finally, a 1 arc second grid was used to approximate the bathymetry in Patong Bay and the immediately adjacent regions. This elevation data was created from the digitised Thai Navy bathymetry chart no. 358. A visualisation of the topography data set used in Patong Bay is shown in Figure~\ref{fig:patong_bathymetry}.
The continuous topography is an interpolation of the 1 arc second grid created for this area from the known elevations measured at the coloured dots. The sub-sampling of larger grids was performed using {\bf resample}, a GMT program. The gridding of data was performed using {\bf Intrepid}, a commercial geophysical processing package developed by Intrepid Geophysics. The gridding scheme employed the nearest neighbour algorithm followed by an application of minimum curvature Akima spline smoothing.

\begin{figure}[ht]
\begin{center}
\includegraphics[width=8.0cm,keepaspectratio=true]{monai-gauge-05-new.png}
\caption{Comparison of the ANUGA simulation against the wave tank simulation of the 1993 Okushiri Island tsunami off Hokkaido, Japan}
\label{fig:most_3_ruptures}
\includegraphics[width=8.0cm,keepaspectratio=true]{patong_bay_data.jpg}
\caption{Is there a new picture with river included???}
\label{fig:patong_bathymetry}
\end{center}
\end{figure}

An aim of this paper is to use the relative abundance of observed data corresponding to the 2004 event to further validate the use of ANUGA for modelling tsunami inundation.
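The nearest-neighbour gridding rule used in the processing above can be sketched as follows. This is an illustration of the rule only, not the GMT or Intrepid implementation, and the subsequent spline smoothing step is not reproduced:

```python
import numpy as np

def grid_nearest(x, y, z, xi, yi):
    """Grid scattered elevation samples (x, y, z) onto the rectangular
    grid defined by axis vectors xi, yi using the nearest-neighbour
    rule: each grid node takes the value of the closest sample."""
    X, Y = np.meshgrid(xi, yi)
    out = np.empty(X.shape)
    for i in range(X.shape[0]):
        for j in range(X.shape[1]):
            d2 = (x - X[i, j]) ** 2 + (y - Y[i, j]) ** 2
            out[i, j] = z[np.argmin(d2)]  # value of the closest sample
    return out
```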
The specific intention is to test the ability of the model to reproduce an inundation survey of maximum runup constructed in the aftermath of the 2004 tsunami. A further aim is to test the sensitivity of the model predictions to the bathymetry and the tsunami source used.

%=================Section=====================
\section{Modelling the Tsunami of 26th December 2004}
The evolution of earthquake-generated tsunamis has three distinctive stages: generation, propagation and run-up (Titov and Gonzalez, 1997)\nocite{titov97a}. To accurately model the evolution of a tsunami all three stages must be dealt with. Here we investigate the use of two different source models, URS and the Method of Splitting Tsunamis model (MOST), to model the generation of the tsunami and its open ocean propagation. The resulting data is then used to provide boundary conditions for the inundation package ANUGA (see below), which is used to simulate the propagation of the tsunami in shallow water and the tsunami run-up.

\begin{figure}
\begin{center}
\includegraphics[width=3.0in,keepaspectratio=true]{3stages.jpg}
\end{center}
\caption{The three stages of tsunami evolution: generation, propagation and run-up.}
\label{fig:3stages}
\end{figure}

Here we note that the MOST model was developed as part of the Early Detection and Forecast of Tsunami (EDFT) project (Titov {\it et al.} 2005)\nocite{titov05}. MOST is a suite of integrated numerical codes capable of simulating tsunami generation, its propagation across the ocean, and its subsequent run-up. The exact nature of the MOST model is explained in (Titov and Synolakis 1995, Titov and Gonzalez 1997, Titov and Synolakis 1997, and Titov {\it et al.} 2005)\nocite{titov95,titov97a,titov97b,titov05}.

ANUGA is an inundation tool that solves the depth integrated shallow water wave equations.
The scheme used by ANUGA, first presented by Zoppou and Roberts (1999)\nocite{zoppou99}, is a high-resolution Godunov-type method that uses the rotational invariance property of the shallow water equations to transform the two-dimensional problem into local one-dimensional problems. These local Riemann problems are then solved using the semi-discrete central-upwind scheme of Kurganov {\it et al.} (2001)\nocite{kurganov01} for solving one-dimensional conservation equations. The numerical scheme is presented in detail in (Zoppou and Roberts 1999, Zoppou and Roberts 2000, Roberts and Zoppou 2000, and Nielsen {\it et al.} 2005)\nocite{zoppou99,zoppou00,roberts00,nielsen05}. An important capability of the software is that it can model the process of wetting and drying as water enters and leaves an area. This means that it is suitable for simulating water flow onto a beach or dry land and around structures such as buildings. It is also capable of adequately resolving hydraulic jumps due to the ability of the finite-volume method to handle discontinuities.

\subsection{Tsunami Generation}
The Indian Ocean tsunami of 2004 was generated by severe coseismic displacement of the sea floor as a result of one of the largest earthquakes on record. The M$_w$=9.2-9.3 mega-thrust earthquake occurred on the 26th of December 2004 at 0h58'53'' UTC, approximately 70 km offshore of North Sumatra. The disturbance propagated 1200-1300 km along the Sumatra-Andaman trench at a rate of 2.5-3 km s$^{-1}$ and lasted approximately 8-10 minutes (Amnon {\it et al.} 2005)\nocite{amnon05}. At present ANUGA does not possess an explicit, easy-to-use method for generating tsunamis from coseismic displacement, although such functionality could easily be added in the future. Implementing an explicit method for simulating coseismic displacement in ANUGA requires time for development and testing that could not be justified given the aims of the project and the time set aside for its completion.
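The central-upwind flux at the heart of the scheme above can be illustrated in one dimension. The following is a minimal sketch of a Kurganov-style flux for the 1-D shallow water equations; it is our illustration, not ANUGA code, and it deliberately ignores dry states ($h=0$) and bed slope terms:

```python
import numpy as np

G = 9.81  # gravitational acceleration (m/s^2)

def swe_flux(h, hu):
    """Physical flux of the 1-D shallow water equations."""
    u = hu / h
    return np.array([hu, hu * u + 0.5 * G * h * h])

def central_upwind_flux(hl, hul, hr, hur):
    """Kurganov-style central-upwind numerical flux across one edge,
    given left/right depth h and momentum hu. Dry states (h = 0) are
    not handled in this sketch."""
    ul, ur = hul / hl, hur / hr
    cl, cr = np.sqrt(G * hl), np.sqrt(G * hr)
    a_plus = max(ul + cl, ur + cr, 0.0)   # fastest right-going wave speed
    a_minus = min(ul - cl, ur - cr, 0.0)  # fastest left-going wave speed
    fl, fr = swe_flux(hl, hul), swe_flux(hr, hur)
    ql, qr = np.array([hl, hul]), np.array([hr, hur])
    return (a_plus * fl - a_minus * fr
            + a_plus * a_minus * (qr - ql)) / (a_plus - a_minus)
```

For still water of equal depth on both sides the mass flux vanishes and only the hydrostatic pressure term $\frac{1}{2}gh^2$ remains, a convenient sanity check.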
Consequently, in the following we employ the URS model and the MOST model to determine the sea floor deformation. The URS code uses a source model based on the elastic crustal model of Wang (Wang {\it et al.} 2003). The source parameters used to simulate the 2004 Indian Ocean tsunami were taken from Chlieh (2007). The resulting sea floor displacement ranges from about -5.0 to 5.0 metres and is shown in Figure~\ref{fig:chlieh_slip_model}.

The solution of Gusiakov (1972)\nocite{gusiakov72} is used by the MOST model to calculate the initial condition. This solution describes an earthquake consisting of two orthogonal shears with opposite sign. Specifically, we adopt the parameterisation of Greensdale (2007)\nocite{greensdale07}, who modelled the corresponding displacement by dividing the rupture zone into three fault segments with different morphologies and earthquake parameters. Details of the parameters associated with each of the three regions used here are given in the same paper. The resulting sea floor displacement is shown in Figure~\ref{fig:most_3_ruptures} and ranges between 3.6 m and 6.2 m.

\subsection{Tsunami source}
Many parameterisations of the 2004 tsunami source are available. Some are determined from various geological surveys of the site; others solve an inverse problem which calibrates the source based upon the tsunami wave signal and/or runup. Although possibly producing a closer match between observed and simulated data, the latter is inappropriate for use in this benchmark.
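The construction of an initial condition by superposing the displacement of individual fault segments can be illustrated with a toy model. The Gaussian uplift below is only a stand-in for the elastic solutions (Wang {\it et al.} 2003; Gusiakov 1972) actually used by URS and MOST; all names and parameters are illustrative:

```python
import numpy as np

def segment_uplift(x, y, x0, y0, slip, width):
    """Idealised (Gaussian) vertical sea floor displacement of a single
    fault segment centred at (x0, y0). A toy stand-in for the elastic
    crustal solutions used by the URS and MOST codes."""
    r2 = (x - x0) ** 2 + (y - y0) ** 2
    return slip * np.exp(-r2 / (2.0 * width ** 2))

def total_uplift(x, y, segments):
    """Superpose the contribution of each rupture segment, as in the
    three-segment parameterisation described above."""
    return sum(segment_uplift(x, y, *s) for s in segments)
```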
The data used to calibrate the model needs to be independent of the validation data. The source parameters used to simulate the 2004 Indian Ocean tsunami were taken from Chlieh (2007). HOW IS THE SOURCE PARAMETERISED? FROM A GEOGRAPHICAL STUDY OR AN INVERSE PROBLEM TRYING TO MATCH THE WAVE SIGNAL? DOES ANYONE HAVE A COPY THEY COULD SEND ME PLEASE? The resulting sea floor displacement ranges from about -5.0 to 5.0 metres and is shown in Figure~\ref{fig:chlieh_slip_model}.

\begin{figure}[ht]
\begin{center}
\includegraphics[width=8.0cm,keepaspectratio=true]{chlieh_slip_model.png}
\caption{Location and magnitude of the sea floor displacement associated with the 26 December 2004 tsunami. Source parameters taken from Chlieh {\it et al.} (2007).}
\label{fig:chlieh_slip_model}
\end{center}
\end{figure}

\subsection{Tsunami Propagation}
We use both the URS model and the MOST model to simulate the propagation of the 2004 Indian Ocean tsunami in the deep ocean, based on a discrete representation of the initial deformation of the sea floor described above.

\subsection{Inundation survey data}
The bathymetry data and source parameterisation can then be fed into the tsunami model, from which simulated runup and ocean surface elevation can be obtained. We propose that a `correct' tsunami model should reproduce the inundation map shown in Figure~\ref{fig:patongescapemap}. Furthermore, the model should simulate a leading depression followed by 3??? crests. Are there any eye witness accounts of how many waves arrived at Patong???

\begin{figure}[ht]
\begin{center}
\includegraphics[width=8.0cm,keepaspectratio=true]{patongescapemap.jpg}
\caption{Map of maximum inundation at Patong Bay.}
\label{fig:patongescapemap}
\end{center}
\end{figure}

\section{Verification Procedure}
%=================Section=====================
\subsection{ANUGA}
ANUGA is an inundation tool that solves the depth integrated shallow water wave equations.
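Agreement with the surveyed inundation map can be scored cell by cell once both the survey and the model output are rasterised onto a common grid. A minimal sketch of one such score (our own choice of metric, not a standard from the literature):

```python
import numpy as np

def inundation_agreement(modelled_wet, observed_wet):
    """Fraction of grid cells whose wet/dry state agrees between the
    modelled and the surveyed maximum inundation maps (boolean arrays
    defined on a common grid)."""
    m = np.asarray(modelled_wet, dtype=bool)
    o = np.asarray(observed_wet, dtype=bool)
    return float(np.mean(m == o))
```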
\subsection{Tsunami Source and Propagation}
We use the URS model to simulate the propagation of the 2004 Indian Ocean tsunami in the deep ocean, based on a discrete representation of the initial deformation of the sea floor described above. The URS code models the propagation of the tsunami in deep water using a finite difference method to solve the non-linear shallow water equations in spherical co-ordinates with friction and Coriolis terms. The code is based on Satake (1995), with significant modifications made by the URS corporation (Thio {\it et al.} 2007) and Geoscience Australia (Burbidge {\it et al.} 2007). The tsunami is propagated via a staggered grid system, starting with coarser grids and ending with the finest one. The URS code is also capable of calculating inundation.
MOST models the propagation of the tsunami using a numerical dispersion scheme that solves the non-linear shallow-water wave equations in spherical coordinates, with Coriolis terms. This model has been extensively tested against a number of laboratory experiments and was successfully used for simulations of many historical tsunamis (Titov and Synolakis 1997, Titov and Gonzalez 1997, Bourgeois {\it et al.} 1999, and Yeh {\it et al.} 1994)\nocite{titov97a,titov97b,bourgeois99,yeh94}. The computational domain for the MOST simulation was defined to extend from $...$E to $...$E and from $...$S to $...$S. The bathymetry in this region was estimated using ...

CAN WE PRODUCE AN INUNDATION MAP OVER THE SAME AREA TO COMPARE WITH ANUGA???

The computational domain for the URS simulation was defined to extend from $...$E to $...$E and from $...$S to $...$S. The bathymetry in this region was estimated using ...
%a 4 arc minute data set developed by the CSIRO specifically for the ocean forecasting system used here. It is based on dbdb2 (NRL), and GEBCO data sets.
The tsunami propagation incorporated here was modelled by the Bureau of Meteorology, Australia for six hours using a time step of 5 seconds (4320 time steps in total).

The output of the URS and MOST models was produced for the sole purpose of providing an approximation of the tsunami's size and momentum that can be used to estimate the tsunami run-up. ANUGA could also have been used to model the propagation of the tsunami in the open ocean. The capabilities of the numerical scheme over such a large extent, however, have not been adequately tested. This issue will be addressed in future work.
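The staggered-grid finite-difference propagation step used by codes such as URS can be caricatured in one dimension. The update below solves only the linear 1-D equations over a flat bed, with no friction, Coriolis or spherical terms; it is a sketch of the idea, not of either code:

```python
import numpy as np

def staggered_step(eta, u, h, dx, dt, g=9.81):
    """One explicit step of the linear 1-D shallow water equations on a
    staggered grid: eta (surface elevation) at cell centres, u
    (velocity) at cell faces, h the still-water depth. The end faces
    are held at zero velocity (closed boundaries)."""
    # momentum equation on interior faces: du/dt = -g d(eta)/dx
    u[1:-1] -= g * dt / dx * (eta[1:] - eta[:-1])
    # continuity at cell centres: d(eta)/dt = -h du/dx
    eta -= h * dt / dx * (u[1:] - u[:-1])
    return eta, u
```

With closed end boundaries the update conserves the total volume exactly, which is a useful sanity check on any such scheme.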
\subsection{Tsunami Inundation}
The utility of the URSGA model decreases with decreasing water depth unless an intricate sequence of nested grids is employed. On the other hand, while the ANUGA model is less suitable for earthquake source modelling and large study areas, it is designed with detailed on-shore inundation in mind. Consequently, the Geoscience Australia tsunami modelling methodology is based on a hybrid approach, using models like URSGA (or the MOST model) for tsunami generation and propagation up to a 100 m depth contour, where the wave is picked up by ANUGA and propagated on shore using the finite-volume method on unstructured triangular meshes. In this case the open ocean boundary of the ANUGA study area was chosen to roughly follow the 100 m depth contour along the west coast of Phuket Island.

\begin{figure}[ht]
\begin{center}
\includegraphics[width=8.0cm,keepaspectratio=true]{new_domain.png}
\caption{Computational domain. CAN WE EASILY CREATE A PICTURE LIKE THIS ONE FOR OUR NEW SCENARIO???}
\label{fig:computational_domain}
\end{center}
\end{figure}

The domain was discretised into approximately ...,000 triangles. The resolution of the grid was increased in certain regions to efficiently increase the accuracy of the simulation.
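One way to choose regional maximum triangle areas is to resolve the local shallow-water wavelength with a fixed number of mesh points. The heuristic below is entirely our own illustration (the paper's actual areas were chosen iteratively by the authors), and the default period and point count are assumptions:

```python
import math

def max_triangle_area(depth, period=900.0, points_per_wavelength=20):
    """Heuristic maximum triangle area (m^2) at a given still-water
    depth (m): resolve the shallow-water wavelength lambda = T*sqrt(g*h)
    with a fixed number of mesh points. Period and point count are
    illustrative defaults, not values from the paper."""
    g = 9.81
    wavelength = period * math.sqrt(g * max(depth, 1.0))
    edge = wavelength / points_per_wavelength
    return 0.5 * edge * edge  # area of a right triangle with legs `edge`
```

The rule reproduces the qualitative behaviour described above: large triangles are acceptable in deep water near the open ocean boundary, while the near-shore region demands much finer resolution.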
The grid resolution ranged between a maximum triangle area of $...\times 10^5$ m$^2$ near the western ocean boundary to $...$ m$^2$ in the small regions surrounding the run-up points and tide gauges. The triangle size around islands and obstacles which ``significantly affect'' the tsunami was also reduced. The authors used their discretion to determine which obstacles significantly affect the wave through an iterative process.

The bathymetry and topography of the region was estimated using ...
%a data set produced by NOAA. Specifically the bathymetry was specified on a 2 arc minute grid (ETOPO2) and the topography on a 3 arc second grid.
A penalised least squares technique was then used to interpolate the elevation onto the computational grid.

\subsubsection{Boundary Conditions}
The boundary of the computational domain comprises $N=...$ linear segments. Those segments which lie entirely on land were set as reflective boundaries. The segments that lie in depths greater than 50 m were set as Dirichlet boundary conditions with the stage (water elevation) equal to zero. Finally, all other segments were time-varying boundaries. The values at these boundaries were interpolated from the estimates of the wave depth and momentum obtained from the URS and MOST simulations.

\subsection{Bathymetric and Topographic Data}

%================Section======================
\section{Results}
\begin{figure}[ht]
\begin{center}
\includegraphics[width=8.0cm,keepaspectratio=true]{patong_bay_data.jpg}
\caption{Is there a new picture with river included???}
\label{fig:patong_bay_data}
\includegraphics[width=8.0cm,keepaspectratio=true]{Patong_0_8lowres.jpg}
\caption{Simulated inundation versus observed inundation}
\label{fig:inundationcomparison}
\end{center}
\end{figure}

Both the source models MOST and ... require the input of bathymetric data describing the geometry of the sea floor. The data used ...
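Once obtained, such scattered elevation data must be mapped onto the computational grid. The penalised least squares technique mentioned above can be sketched in one dimension as follows; this is an illustration under simplifying assumptions (1-D, linear interpolation basis, second-difference roughness penalty), not ANUGA's actual fitting code. The fit minimises $\|Ac - z\|^2 + \lambda\|Dc\|^2$, where $A$ holds interpolation weights and $D$ is the second-difference operator.

```python
def fit_penalised_ls(nodes, xs, zs, lam=1e-3):
    """Fit elevation values at regular 1-D grid `nodes` to scattered
    samples (xs, zs) by penalised least squares.  Returns nodal values c
    minimising the data misfit plus lam times the sum of squared second
    differences of c (a roughness penalty that regularises data gaps)."""
    n = len(nodes)
    h = nodes[1] - nodes[0]
    M = [[0.0] * n for _ in range(n)]   # normal matrix A^T A + lam D^T D
    b = [0.0] * n                       # right-hand side A^T z
    for x, z in zip(xs, zs):
        j = min(int((x - nodes[0]) / h), n - 2)   # left node of the cell
        w_left = (nodes[j + 1] - x) / h           # linear interpolation weights
        weights = [(j, w_left), (j + 1, 1.0 - w_left)]
        for a, wa in weights:
            b[a] += wa * z
            for cidx, wc in weights:
                M[a][cidx] += wa * wc
    for j in range(1, n - 1):                     # smoothness penalty rows
        stencil = [(j - 1, 1.0), (j, -2.0), (j + 1, 1.0)]
        for a, da in stencil:
            for cidx, dc in stencil:
                M[a][cidx] += lam * da * dc
    # Solve M c = b by Gaussian elimination with partial pivoting.
    for k in range(n):
        p = max(range(k, n), key=lambda r: abs(M[r][k]))
        M[k], M[p] = M[p], M[k]
        b[k], b[p] = b[p], b[k]
        for r in range(k + 1, n):
            f = M[r][k] / M[k][k]
            b[r] -= f * b[k]
            for cidx in range(k, n):
                M[r][cidx] -= f * M[k][cidx]
    c = [0.0] * n
    for k in range(n - 1, -1, -1):
        c[k] = (b[k] - sum(M[k][j] * c[j] for j in range(k + 1, n))) / M[k][k]
    return c
```

Because a linear elevation profile incurs zero roughness penalty, data sampled from a linear surface is reproduced exactly; noisy or gappy data is smoothed towards minimal curvature.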
Table \ref{tab:run-up_locations} also highlights the misrepresentation of the local coastline. Large discrepancies, of the order of metres, exist between the modelled and observed elevations. Furthermore, three run-up observation sites were deemed to be initially underwater. This suggests that results could be improved further by employing finer bathymetric data when it becomes available. Yet, despite the poor bathymetric data, there is still a moderate correlation between the observed and modelled run-up values, suggesting that local variations in the energy of the tsunami are being approximated reasonably well.

%================Section=====================
We have simulated the tsunami inundation of a small irregular region of the west Thailand coast surrounding Phuket using the inundation tool ANUGA. The tsunami size and position at the boundaries of this region were estimated using the MOST model, which was used to simulate the generation and propagation of the tsunami in the deep ocean. Specifically, the parameterisation of Greensdale {\it et al.} (2007) \nocite{greensdale07} was used to describe the tsunami source, and the subsequent wave elevation and momentum required by the inundation simulation were interpolated from the MOST simulation at each time step. Comparisons between observed and modelled run-up at 18 sites show reasonable agreement. We also find modest agreement between the observed and modelled tsunami signals at the two tide gauge sites. The arrival times of the tsunami are approximated well at both sites. The amplitude of the first trough and peak is approximated well at the first tide gauge (Taphao-Noi); however, the amplitude of the first wave was underestimated at the second gauge (Mercator yacht).
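The ``moderate correlation'' between observed and modelled run-up reported above can be quantified with the standard product-moment statistic. A minimal, self-contained sketch follows; the inputs in practice would be the 18 site-by-site run-up pairs, which are not reproduced here.

```python
import math

def pearson_r(observed, modelled):
    """Pearson correlation coefficient between paired samples.

    Returns +1 for a perfect increasing linear relationship,
    -1 for a perfect decreasing one, and values near 0 when the
    modelled values carry no linear information about the observations.
    """
    n = len(observed)
    mean_obs = sum(observed) / n
    mean_mod = sum(modelled) / n
    cov = sum((o - mean_obs) * (m - mean_mod)
              for o, m in zip(observed, modelled))
    sd_obs = math.sqrt(sum((o - mean_obs) ** 2 for o in observed))
    sd_mod = math.sqrt(sum((m - mean_mod) ** 2 for m in modelled))
    return cov / (sd_obs * sd_mod)
```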
The amplitude of subsequent peaks and troughs, at both gauges, is underestimated, and a phase lag between the observed and modelled arrival times of wave peaks is evident after the first peak. Grilli {\it et al.} (2006) \nocite{grilli06} also could not reproduce the correct arrival time at the Taphao-Noi tide gauge or reproduce the signal at the Mercator yacht. The performance of the model could be improved by using finer bathymetric data, which at present cannot be obtained by the authors, and by a more accurate estimation of the initial tsunami source. The wave height observed at a particular point along the coast is strongly influenced by relatively small scale bathymetric and coastal features, which may be under-resolved by the current computational mesh or poorly represented by the sparse bathymetry and topography data set. These problems may also cause errors in simulated arrival times in coastal areas adjacent to regions with inaccurate bathymetry data. Titov and Gonzalez (1997) \nocite{titov97a} state that for most cases a 10--50 m horizontal resolution of bathymetry data is essential. As mentioned above, we could only obtain 2 arc minute ($\sim$3.6 km) bathymetry, which is most likely insufficient. Topography is approximated using a 3 arc second ($\sim$90 m) grid, which is much more appropriate. However, when combined, these data sets do not reproduce the position of the coastline well. If a finer resolved bathymetric data set could be obtained for the shallow waters of the Thai coast (say in regions with important bathymetric features), a much better result could be expected. The approximation of the tsunami source also affects the near-shore amplitude of the tsunami wave. As the graphs and tables above show, the amplitude of the tsunami is at times misrepresented, and this is partly due to a suboptimal reproduction of the initial coseismic displacement.
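The metre equivalents quoted for the arc-based grid spacings above follow from a simple arc-length conversion. A quick sketch, assuming a spherical Earth of radius 6371 km (longitude arcs additionally shrink by the cosine of latitude):

```python
import math

def arc_to_metres(arc_minutes, latitude_deg=0.0):
    """Approximate ground distance spanned by an arc of the given size.

    Uses a spherical Earth of radius 6371 km; for east-west (longitude)
    arcs the distance shrinks by cos(latitude).
    """
    earth_radius = 6371000.0                      # metres
    arc_rad = math.radians(arc_minutes / 60.0)    # arc minutes -> radians
    return earth_radius * arc_rad * math.cos(math.radians(latitude_deg))
```

Two arc minutes comes out near 3.7 km at the equator (slightly less at Phuket's latitude of about 8$^\circ$N), and 3 arc seconds (0.05 arc minutes) near 90 m, consistent with the figures quoted above.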
Grilli {\it et al.} (2006) \nocite{grilli06} obtain improved reproduction of the tsunami amplitude when they optimise the parameters of the tsunami source based on the model's ability to reproduce certain observed behaviour. We hypothesise, and will explore, that it is this optimisation that yields more accurate results, rather than any deficiency of the ANUGA model.

%================Acknowledgement===================
Digitised Thai Navy bathymetry chart no 358. The sub-sampling of larger grids was performed using {\bf resample}, a GMT program. The gridding of data was performed using {\bf Intrepid}, a commercial geophysical processing package developed by Intrepid Geophysics. The gridding scheme was nearest neighbour followed by minimum curvature Akima spline smoothing.

\end{document}

We can never prove that a model of a physical system is correct, only that it does not fail under certain conditions. A model must be verified and validated. The former is the process of identifying whether the numerical solver used produces an accurate solution of the governing equations. The latter is used to assess whether the model adequately represents the physical system. This is achieved by comparing the model results with physical measurements/observed data and theory. Sometimes coincidence will mean that a less numerically accurate solution can match the measured data more closely than a more numerically accurate one, so it is necessary to first reduce numerical error through the verification process and then assess modelling errors.
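The nearest-neighbour stage of the gridding scheme mentioned in the acknowledgements can be sketched as follows. This is an illustration of the idea only, not the Intrepid implementation, and the subsequent minimum curvature smoothing pass is omitted.

```python
def grid_nearest(points, values, grid_nodes):
    """Assign each grid node the value of its nearest scattered sample.

    points     -- list of (x, y) sample locations
    values     -- sample values (e.g. depths) at those locations
    grid_nodes -- list of (x, y) node locations to populate
    """
    out = []
    for gx, gy in grid_nodes:
        # squared distance suffices for finding the nearest sample
        best = min(range(len(points)),
                   key=lambda i: (points[i][0] - gx) ** 2 +
                                 (points[i][1] - gy) ** 2)
        out.append(values[best])
    return out
```

The result is piecewise constant (Voronoi-cell) data, which is why a curvature-minimising smoothing pass follows in practice.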
Lane: Is the difference between modelled results and observations a result of poor model process representation and numerics, or of poor model parameterisation?

Horritt: The main sources of uncertainty arise from inaccuracies in the initial condition (source), inaccurate bathymetry data and, to a lesser extent, friction.

A single experiment can refute a model but cannot validate it. We need as many tests as possible to be confident in a prediction. The question arises: how many should we do? With finite experiments, more weight should be given to a particular experiment if the range of the input function and the material properties are both broad, so that the universal character of the model is tested.

Expressions: sufficient verification/falsification of a model; confidently utilise a model.

Predictive validation is only one aspect of model evaluation. Need to assess model explanation.

Conservation of mass, convergence, spatial and temporal discretisation errors, round-off errors due to limited numerical precision.

Analytical benchmarking: ensuring equations are solved accurately.
Single wave on a beach
Solitary wave on a composite beach
Subaerial landslide on a simple beach
Analytical solutions only represent idealised and simplified events that do not fully capture the complexity of `real' flows. They provide temporally and spatially distributed data that field data can rarely match.

Scale comparisons (laboratory benchmarking): Scale differences are not believed to be important. Scale experiments generally do not have the same bottom friction characteristics as the real scenario, but this has not proven to be a problem. The long wavelength of a tsunami tends to mean that friction is less important in comparison to the motion of the wave.
Single wave on a simple beach
Solitary wave on a composite beach
Conical island
Monai Valley
Landslide (includes comparisons with validation data sets generated by other models of higher dimensionality and resolution)
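Several of the analytical and laboratory benchmarks listed above prescribe a solitary wave as the incident condition. For reference, in its standard first-order form the free-surface profile is
\begin{equation}
\eta(x,t) = H\,\mathrm{sech}^2\!\left[\sqrt{\frac{3H}{4d^3}}\,(x - ct)\right],
\qquad c = \sqrt{g(d+H)},
\end{equation}
where $H$ is the wave height, $d$ the still-water depth and $c$ the wave celerity.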
Often flow geometries are simplified.

Field benchmarking: the most important validation process. Hydrodynamic inversion to predict the source is an ill-posed problem.

12 July 1993 Hokkaido-Nansei-Oki tsunami around Okushiri Island, Japan: an extreme run-up height of 31.7 m was found at the tip of a narrow gully near the small cove at Monai.

17 November 2003 Rat Islands tsunami.

Construction of more than one model can reveal biases in a single model. There are two types of comparison: between models that are conceptually similar and between those that are different. In the former case we are interested in how the choice of numerical solver and discretisation affects results, while the latter can help determine the level of physical process representation necessary to reproduce an observed data set.

Moving to field data increases the generality and significance of the scientific evidence obtained. However, we also significantly increase the uncertainty of the validation experiment, which may constrain the ability to make unequivocal statements, e.g. in bathymetry, source condition and friction. Calibration of the model is often used to compensate for uncertainty in the model inputs. Calibration results in a further loss of experimental control, as a unique solution may not exist.

For verification we need to assess point data, spatially distributed data and bulk (lumped) data.

Synolakis {\it et al.}~\cite{synolakis07} detail two field events that have previously been used to validate tsunami models: the Hokkaido-Nansei-Oki tsunami that occurred around Okushiri Island, Japan on the 12th of July 1993, and the Rat Islands tsunami that occurred off the coast of Alaska on the 17th of November 2003.
An inundation map is only useful if the mesh and topography resolution are fine enough. It is hard to measure what the model predicts: how deep does inundation need to be for it to be visible during a field study?

Notes: Okushiri provides an example of extreme run-up generated from reflections and constructive interference resulting from local topography and bathymetry. Numerous point sites at which run-up elevations were observed are available. The highest run-up of 31.7 m in a valley north of Monai needs to be approximated with the numerical model. In addition, two tide gauge records, at Iwanai and Esashi, need to be estimated.

The Rat Islands tsunami provides a good test for real-time forecasting models since the tsunami was recorded at three tsunameters. The test requires matching the propagation model data with the DART recording to constrain the tsunami source model. The inundation model is to reproduce the tide gauge record at Hilo.

The Patong Bay benchmark provides spatially distributed field data for comparison.

DO I SAY WE HAVE MUX FILES DESCRIBING SHAPE OF WAVE? YES. MAKES CONSISTENT
Notes: different event types: submarine mass failures generate larger events because of proximity and more directional wave generation. Even if data is available, it is hard to access.

@article{ioualalen07,
  title = {Modeling the 26 December 2004 Indian Ocean tsunami: Case study of impact in Thailand},
  author = {Ioualalen, M. and Asavanant, J. and Kaewbanjak, N. and Grilli, S.~T. and Kirby, J.~T. and Watts, P.},
  year = {2007},
  journal = {J. Geophys. Res.},
  volume = {112},
  doi = {http://dx.doi.org/10.1029/2006JC003850}
}

@article{hirata06,
  title = {The 2004 Indian Ocean tsunami: Tsunami source model from satellite altimetry},
  author = {Hirata, K. and Satake, K. and Tanioka, Y. and Kuragano, T. and Hasegawa, Y. and Hayashi, Y. and Hamada, N.},
  journal = {Earth, Planets and Space},
  year = {2006},
  volume = {58},
  number = {2},
  pages = {195--201}
}

@InBook{asavanant08,
  author = {Asavanant, J. and Ioualalen, M. and Kaewbanjak, N. and Grilli, S.~T. and Watts, P. and Kirby, J.~T. and Shi, F.},
  title = {Modeling, Simulation and Optimization of Complex Processes},
  chapter = {Numerical Simulation of the December 26, 2004 Indian Ocean Tsunami},
  publisher = {Springer Berlin Heidelberg},
  year = {2008},
  pages = {59--68}
}

@article{grilli07,
  author = {St\'{e}phan T. Grilli and Mansour Ioualalen and Jack Asavanant and Fengyan Shi and James T. Kirby and Philip Watts},
  title = {Source Constraints and Model Simulation of the December 26, 2004, Indian Ocean Tsunami},
  publisher = {ASCE},
  year = {2007},
  journal = {Journal of Waterway, Port, Coastal, and Ocean Engineering},
  volume = {133},
  number = {6},
  pages = {414--428},
  url = {http://link.aip.org/link/?QWW/133/414/1},
  doi = {10.1061/(ASCE)0733-950X(2007)133:6(414)}
}