# Changeset 6506

Timestamp:
Mar 13, 2009, 12:02:18 PM (14 years ago)
Message:

Comments on Patong paper by Jane and Ole

File:
1 edited

Currently the extent of tsunami-related field data is limited. The cost of tsunami monitoring programs and of bathymetry and topography surveys prohibits the collection of data in many of the regions in which tsunamis pose the greatest threat. The resulting lack of data has limited the number of field data sets available to validate tsunami models, particularly those modelling tsunami inundation. Synolakis et al.~\cite{synolakis07} have developed a set of standards, criteria and procedures for evaluating numerical models of tsunami. They propose three analytical solutions to help identify the validity of a model, together with five scale comparisons (wave-tank benchmarks) and two field events to assess model veracity. The two field data benchmarks are very useful but only capture a small subset of possible tsunami behaviours, and only one of the benchmarks can be used to validate tsunami inundation. The type and size of a tsunami source, propagation extent, and local bathymetry and topography all affect the energy, waveform and subsequent inundation of a tsunami. Consequently, additional field data benchmarks that further capture the variability and sensitivity of the real world system would be useful to allow model developers to verify their models and subsequently use their models with greater confidence. In this paper we develop a field data benchmark to be used in conjunction with the other tests proposed by Synolakis et al.\ to validate and verify tsunami inundation. The benchmark is constructed from data collected around Patong Bay, Thailand immediately following the 2004 Indian Ocean tsunami. This area was chosen because the authors were able to obtain high resolution bathymetry and topography data in this area and an inundation map generated from a survey performed in the aftermath of the tsunami. A description of this data is given in Section~\ref{sec:data}.
An associated aim of this paper is to illustrate the use of this new benchmark to validate an operational tsunami model called ANUGA (see Section~\ref{sec:veri_procedure}). The specific intention is to test the ability of ANUGA to reproduce the inundation survey of maximum runup. ANUGA is a hydrodynamic modelling tool used to simulate tsunami propagation as well as rain-induced flooding.

%=================Section=====================
\section{Indian Ocean tsunami of 26 December 2004}
The devastation caused by the 2004 Indian Ocean tsunami has heightened community, scientific and governmental interest in tsunami and in doing so has provided a unique opportunity for further validation of tsunami models. Enormous resources have been spent to obtain measurements of phenomena pertaining to this event in order to better understand the destruction that occurred.
Data sets from seismometers, tide gauges, GPS stations, a few satellite overpasses, subsequent coastal field surveys of run-up and flooding, and measurements from ship-based expeditions have now been made available (Vigny {\it et al.} 2005, Amnon {\it et al.} 2005, Kawata {\it et al.} 2005, and Liu {\it et al.} 2005)\nocite{vigny05,amnon05,kawata05,liu05}. A number of studies have utilised these data to calibrate models of the tsunami source~\cite{grilli07} and to match tide gauge recordings\cite{}, maximum wave heights~\cite{asavanant08} and runup locations~\cite{ioualalen07}. We propose to use this event as an additional field-data benchmark for verification of tsunami models. This event captures certain tsunami behaviours that are not present in the benchmarks proposed by Synolakis et al.~\cite{synolakis07}. FIXME: What kind of behaviours??? Synolakis et al.\ detail two field data benchmarks.
The first test compares model results against observed data from the Hokkaido-Nansei-Oki tsunami that occurred around Okushiri Island, Japan on the 12th of July 1993. This tsunami provides an example of extreme runup generated from reflections and constructive interference resulting from local topography and bathymetry. The benchmark consists of two tide gauge records and numerous spatially distributed point sites at which maximum runup elevations were observed. The second benchmark is based upon the Rat Islands tsunami that occurred off the coast of Alaska on the 17th of November 2003. The Rat Islands tsunami provides a good test for real-time forecasting models since the tsunami was recorded at three tsunameters. The test requires matching the propagation model data with the DART recording to constrain the tsunami source model and then using a propagation model to reproduce the tide gauge record at Hilo.

%The tsunamis used by the two standard benchmarks and the 2004 tsunami are quite different.
All three events arise from coseismic displacement resulting from an earthquake, however they occur in very different geographical regions. The Hokkaido-Nansei-Oki tsunami was generated by an earthquake with a magnitude of 7.8 and only travelled a small distance before inundating Okushiri Island. The event provides an example of extreme runup generated from reflections and constructive interference resulting from local topography and bathymetry. In comparison, the Rat Islands tsunami was generated by an earthquake of the same magnitude but travelled a much greater distance. The event provides a number of tide gauge recordings that capture the change in wave form as the tsunami evolved. The 2004 Indian Ocean tsunami was a much larger event than the two described above. It was generated by a disturbance, resulting from a M$_w$=9.2--9.3 mega-thrust earthquake, that propagated 1200--1300 km.
Consequently, the energy of the resulting wave was much larger than that of the waves generated by the more localised and smaller magnitude events described above. WAS THE WAVELENGTH, VELOCITY (and thus average ocean depth) DIFFERENT FROM THESE TWO EVENTS??? If so, state something like: this larger wavelength and energy, together with the different geology of the area, produced a different wave signal and a different pattern of inundation. Here we focus on the large inundation experienced at Patong Bay on the west coast of Thailand.

\section{Data}\label{sec:data}
(FIXME (OLE): Remove? Hydrodynamic simulations require very little data in comparison to models of many other environmental systems.) Tsunami models typically only require bathymetry and topography data to approximate the local geography, a parameterisation of the tsunami source from which appropriate initial conditions can be generated, and a locally distributed quantity such as Manning's friction coefficient to approximate friction. Here we discuss the bathymetric and topographic data sets and the source condition that are necessary to implement the proposed benchmark. Friction is discussed in Section~\ref{sec:inundation}. An unusually large amount of data for the 2004 tsunami, necessary for tsunami verification, is available for Patong Bay and the surrounding region. A number of raw data sets were obtained, analysed, checked for quality and subsequently gridded for easier visualisation and input into the tsunami models.

\subsection{Bathymetric and topographic data}
FIXME(OLE): Need intro to this section, e.g.: we obtained data sets at different resolutions from various sources and merged them to build a model appropriate for inundation modelling. The resolution required was generally relatively coarse in the deeper water and progressively finer towards the bay itself, with the finest data in the intertidal zone and around the built environment.

The two minute arc grid data set, DBDB2, was obtained from the US Naval Research Labs and used to approximate the bathymetry in the Bay of Bengal. This grid was further interpolated to a 27 second arc grid. In the Andaman Sea the DBDB2 data was replaced with a 3 second grid obtained from NOAA (REF?). Finally, a 1 second grid was used to approximate the bathymetry in Patong Bay and the immediately adjacent regions (FROM WHERE?). This elevation data was created from the digitised Thai Navy bathymetry chart, no.\ 358. A visualisation of the elevation data set used in Patong Bay is shown in Figure~\ref{fig:patong_bathymetry}. The continuous topography is an interpolation of known elevations measured at the coloured dots. The sub-sampling of larger grids was performed using {\bf resample}, a GMT program~\cite{XXX}. The gridding of data was performed using {\bf Intrepid}, a commercial geophysical processing package developed by Intrepid Geophysics. The gridding scheme employed the nearest neighbour algorithm followed by an application of minimum curvature Akima spline smoothing.
\end{figure}

Details of the lineage of this dataset are outlined in Appendix~\ref{XXXXX} and the final dataset is available at XXXX.

\subsection{Tsunami source}\label{sec:source}
The Indian Ocean tsunami of 2004 was generated by severe coseismic displacement of the sea floor as a result of one of the largest earthquakes on record. The M$_w$=9.2--9.3 mega-thrust earthquake occurred on 26 December 2004 at 0h58'53'' UTC approximately 70 km offshore of North Sumatra. The disturbance propagated 1200--1300 km along the Sumatra-Andaman trench at a rate of 2.5--3 km.s$^{-1}$ and lasted approximately 8--10 minutes (Amnon {\it et al.} 2005)\nocite{amnon05}.

\subsection{ANUGA}
ANUGA is an inundation tool that solves the depth integrated shallow water wave equations. The scheme used by ANUGA, first presented by Zoppou and Roberts (1999)\nocite{zoppou99}, is a high-resolution Godunov-type method that uses the rotational invariance property of the shallow water equations to transform the two-dimensional problem into local one-dimensional problems. These local Riemann problems are then solved using the semi-discrete central-upwind scheme of Kurganov {\it et al.} (2001)\nocite{kurganov01} for solving one-dimensional conservation equations. The numerical scheme is presented in detail in (Zoppou and Roberts 1999, Zoppou and Roberts 2000, Roberts and Zoppou 2000, and Nielsen {\it et al.} 2005)\nocite{zoppou99,zoppou00,roberts00,nielsen05}. An important capability of the software is that it can model the process of wetting and drying as water enters and leaves an area. This means that it is suitable for simulating water flow onto a beach or dry land and around structures such as buildings. It is also capable of adequately resolving hydraulic jumps due to the ability of the finite-volume method to handle discontinuities. ANUGA has been validated against a number of analytical solutions and the wave tank simulation of the 1993 Okushiri Island tsunami (Roberts {\it et al.} 2006 and Nielsen {\it et al.} 2005)\nocite{roberts06,nielsen05}.

\subsection{URSGA}
Verification needs to assess point data, spatially distributed data and bulk (lumped) data. Synolakis et al.~\cite{synolakis07} detail two field events that have previously been used to validate tsunami models: the Hokkaido-Nansei-Oki tsunami that occurred around Okushiri Island, Japan on the 12th of July 1993, and the Rat Islands tsunami that occurred off the coast of Alaska on the 17th of November 2003. The Rat Islands tsunami provides a good test for real-time forecasting models since the tsunami was recorded at three tsunameters. The test requires matching the propagation model data with the recordings to constrain the tsunami source model; the inundation model is then to reproduce the tide gauge record at Hilo. The Patong Bay benchmark provides spatially distributed field data for comparison. A single experiment can refute a model but cannot validate it. As many tests as possible are needed before we can be confident in a prediction. The question arises: how many should we do?
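For reference, the depth integrated shallow water wave equations that ANUGA solves can be written in standard conservative form (this is the generic continuous statement of the equations, not ANUGA's particular discretisation):
\begin{equation}
\frac{\partial \mathbf{U}}{\partial t}
+ \frac{\partial \mathbf{E}}{\partial x}
+ \frac{\partial \mathbf{G}}{\partial y} = \mathbf{S},
\qquad
\mathbf{U} = \begin{pmatrix} h \\ uh \\ vh \end{pmatrix},
\quad
\mathbf{E} = \begin{pmatrix} uh \\ u^2 h + \frac{1}{2} g h^2 \\ uvh \end{pmatrix},
\quad
\mathbf{G} = \begin{pmatrix} vh \\ uvh \\ v^2 h + \frac{1}{2} g h^2 \end{pmatrix},
\end{equation}
where $h$ is the water depth, $u$ and $v$ are the depth-averaged velocities, $g$ is the acceleration due to gravity and $\mathbf{S}$ collects the bed slope and friction source terms. The rotational invariance property referred to above means that the flux through each cell edge can be evaluated by rotating the velocity field into edge-normal coordinates and solving a one-dimensional Riemann problem for the normal flux $\mathbf{E}$.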
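As a starting point for the FIXME above concerning wavelength and velocity, linear long-wave theory gives the phase speed of a tsunami as a function of the local water depth $h$ alone, $c = \sqrt{gh}$, with wavelength $\lambda = cT$ for a wave of period $T$. The depths used below are illustrative values only, not measurements from this study:
\begin{equation}
c = \sqrt{gh} \approx \sqrt{9.81 \times 4000} \approx 198~\mathrm{m\,s^{-1}} \approx 713~\mathrm{km\,h^{-1}}
\end{equation}
in open ocean of depth $h \approx 4000$~m, compared with $c \approx 31$~m\,s$^{-1}$ on a shelf of depth 100~m. A wave of 10 minute period therefore shortens from roughly 120~km to about 19~km as it shoals, which is the kind of difference between the three events that this paragraph could quantify.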