###################################
ALL INFORMATION IS NOW IN TRAC
DON'T EDIT THIS FILE ANYMORE
###################################

ANUGA Future Directions (2005-2006)

--------------------------------------------------
Draft of known issues regarding the ANUGA software
--------------------------------------------------


----------------
Numerical issues

  CFL condition (centroid-midpoint distances versus inscribed circle?
  Or an error estimator?)

  Gradient limiters

  Forcing terms:
    Friction of mud
    Viscosity


--------------------
Functionality issues

  Mesh generation: (Duncan)
    Through graphical generator
    Through scripts
    Alternative triangulators (Strang, ?)
    Alpha shapes
    Refactoring the triangle interface
    Outputting a "triangle"-readable file (from pmesh?)
    Simple method in the mesh factory that converts regularly gridded
    data to a mesh, using the points as vertices

  Pmesh (Duncan) - from pmesh issues file:
    Main issues (TBA)
    Better API for handling (scripted) meshes:
      * Error messages
      * Refined regions should be easy to specify
      * Ways of handling different point sets (e.g. 10m, 100m, 250m)
        and populating according to mesh refinement (wavelets or other
        adaptive method)

  Georeferencing
    Code works in UTM coordinates. Zone boundaries cannot be crossed:
      Introduce nonstandard central meridians if necessary
      Could the code be cast in terms of lat/lon?
    Make sure that georeferencing is used consistently
    (domain, pmesh, sww files, ...) (Duncan)
    Test that least_squares (and elsewhere) correctly reconciles
    different georeferences (see Karratha)

  Least_squares (Duncan)
    Use of the quad-tree may not use all data points (currently fixed
    by expanding the search; an algorithm for searching neighbouring
    tree nodes is needed) (TRAC)
    Give feedback when points fall outside the mesh boundary:
      if the mesh is corrupted (see lwru2.py and create_mesh.py in the
      debug dir)
      if segments are repeated (maybe in mesh.check_integrity) (TRAC)
    Use sparse matrix (CSR) format in "build_interpolation_matrix_A" (TRAC)
    Simplify georeferencing (make sure that coordinate changes
    'change_points' are used everywhere) (TRAC)
    Memory issue! (TRAC)
    If LS is used for fitting, compute self.Atz and self.AtA directly
    and avoid self.A (alpha is needed).
    If LS is used for interpolation, compute self.A (no alpha needed).
    (A sketch of the CSR/normal-equations idea follows at the end of
    the Efficiency issues section below.)
    Refactor interpolate_sww in terms of Interpolation_function
    (Ole or Duncan)

  Checkpointing (largely done but needs to be revisited)

  Ability to create additional quantities and initialise them arbitrarily

  FIXMEs (TBA)

  File management (msh, pts, dem, ...) - introduce the notion of a project
    Reduce the number of methods that read/write sww files (Yes)


-----------------
Efficiency issues

  Parallelisation (first cut is done)

  Edge fluxes are currently computed twice (f01 = -f10)
  (This has been fixed by Matt Hardy in June 2005)

  Increase timesteps (implicit techniques?)

  Optimise creation and importing of domains (C code and caching) (Ole)

  Caching needs optional arguments: bypass_args and bypass_kwargs.
  These should be passed on to the function when it is evaluated but
  not participate in the hashing. Rationale: some arguments, such as
  class instances, change their signature at every run and thus cause
  the function to be evaluated every time, defeating the purpose of
  caching. (A sketch follows at the end of this section.)

  NetCDF can't deal with files > 2 GB except on cyclone.
  It is possible to write netCDF files that exceed 2 GiB on platforms
  that have "Large File Support" (LFS). Such files are portable to
  other LFS platforms, but opening them on an older platform without
  LFS yields a "file too large" error.
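  A minimal sketch of the least-squares ideas above (CSR storage for
  the interpolation matrix and forming AtA and Atz directly so the
  full A need not be kept for fitting). This is not the existing ANUGA
  code: the function names, the use of scipy and the simplified
  alpha*I smoothing term are assumptions made only for illustration.

    import numpy as np
    from scipy.sparse import lil_matrix, identity
    from scipy.sparse.linalg import spsolve

    def barycentric_coordinates(p, vertices, triangles):
        # Find the triangle containing point p and its barycentric weights;
        # returns (vertex_indices, weights) or (None, None) if p lies
        # outside the mesh (which should be reported, not silently ignored).
        x, y = p
        for tri in triangles:
            (x0, y0), (x1, y1), (x2, y2) = (vertices[k] for k in tri)
            det = (y1 - y2)*(x0 - x2) + (x2 - x1)*(y0 - y2)
            s0 = ((y1 - y2)*(x - x2) + (x2 - x1)*(y - y2)) / det
            s1 = ((y2 - y0)*(x - x2) + (x0 - x2)*(y - y2)) / det
            s2 = 1.0 - s0 - s1
            if min(s0, s1, s2) >= -1e-12:
                return list(tri), [s0, s1, s2]
        return None, None

    def build_interpolation_matrix(points, vertices, triangles):
        # One row per data point, one column per mesh vertex, at most
        # three nonzeros per row; hence sparse (CSR) rather than dense.
        A = lil_matrix((len(points), len(vertices)))
        for i, p in enumerate(points):
            tri, sigma = barycentric_coordinates(p, vertices, triangles)
            if tri is None:
                continue  # point outside mesh
            for j, s in zip(tri, sigma):
                A[i, j] = s
        return A.tocsr()

    def fit_vertex_values(points, z, vertices, triangles, alpha=0.001):
        # Fitting: only the normal equations AtA and Atz are needed, so A
        # itself need not be retained once they are formed; alpha smooths.
        A = build_interpolation_matrix(points, vertices, triangles)
        AtA = (A.T @ A) + alpha * identity(A.shape[1])
        Atz = A.T @ np.asarray(z, dtype=float)
        return spsolve(AtA.tocsr(), Atz)

  The next sketch illustrates the bypass_args/bypass_kwargs idea for
  caching. The names are again hypothetical, not the caching module's
  API; the point is only that bypass arguments are forwarded to the
  function at call time but excluded from the cache key.

    import hashlib
    import pickle

    _cache = {}

    def cache_evaluate(func, args=(), kwargs=None,
                       bypass_args=(), bypass_kwargs=None):
        # Hash only the function name, args and kwargs. bypass_args and
        # bypass_kwargs are passed on when the function is evaluated but
        # never enter the key, so objects whose signature changes every
        # run no longer defeat the cache.
        kwargs = dict(kwargs or {})
        bypass_kwargs = dict(bypass_kwargs or {})
        key = hashlib.md5(
            pickle.dumps((func.__name__, tuple(args), sorted(kwargs.items())))
        ).hexdigest()
        if key not in _cache:
            call_kwargs = dict(kwargs)
            call_kwargs.update(bypass_kwargs)
            _cache[key] = func(*(tuple(args) + tuple(bypass_args)),
                               **call_kwargs)
        return _cache[key]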
------------
Architecture

  Settle dataflows and formats (pts, dem, msh, sww, etc.) (Ole)

  finaltime to be renamed duration

  ----
  API:

  Suggestion for set_quantity (a standalone sketch appears after the
  Visualisation section below):

    set_quantity(name, X, location, region)

    where
      name: name of the quantity
      X:
        - compatible list or numeric array
        - constant
        - list of points or array with attribute values (use LS)
          (How to distinguish this from a numeric array? Perhaps use
          keyword arguments for everything.)
        - callable object (f: x, y -> z) where x, y, z are arrays
          - inline
          - file functions
          - polygon functions
        - another quantity Q, or an expression of the form a*Q + b*R + c,
          where a, b, c are scalars, arrays or quantities and Q, R are
          quantities (or 1?)
        - pts filename (use LS and caching) (How to select the attribute?)
        - general expression to be parsed
      location: where values are to be stored. Permissible options are:
        vertices, edges, centroid, unique_vertices
      region: identifies a subset of triangles. Permissible values are:
        - tag name (refers to a tagged region)
        - indices (refers to specific triangles)
        - polygon (identifies a region)
        (Incorporate uniqueness/non-uniqueness.)

    IDEA for set_quantity: use keywords and call the underlying
    specific method, e.g.

      if filename is not None:
          vertex_values = caching(fitfunc, filename, dependencies=...)
          set_quantity(..., ..., vertex_values)
      elif ...

    Largely done.

    Should create a class Point_set representing points and attribute(s).
    Include the filename too? Yes (and selected attributes).
    This could also facilitate the use of multiple point sources
    (e.g. at different resolutions). (IN TRAC now)

  Making methods private, using _private,
  or write the API in a separate module.

  Introduce create_quantity in domain.py: (Ole)
    It will make a new named instance and populate it by calling
    set_quantity if desired.
    create_quantity would be called automatically by shallow_water.

  Make stage appear as any other quantity. Either
    1: Make stage a subclass of Quantity having knowledge of elevation
       and a special limiter (or more limiters), or
    2: Equip each quantity with a limiter class
       (Con: a limiter for stage should never be applied to any other
       quantity)
    Also, investigate whether Quantity and Conserved_quantity should be
    one class (Steve).
    Finally, reconcile Matt's optimised gradient limiter with the more
    general framework.

  Boundary (Dirichlet):
    Think about specifying values by quantity name (and having
    defaults), e.g. Dirichlet(stage=1.0, ymomentum=0.2)

  Finish MOST2SWW with bathymetry (Trevor)

  Make Steve's new boundary spatially dependent (Ole)

  Look at matplotlib's verbosity object.


------
Other:

  Should we remove Python code superseded by C extensions?
    Pros: leaner code and no risk of the Python and C versions diverging
    Cons: less readable algorithms
    Move the Python code into files such as quantity_ext.py and have
    Python wrappers with doc strings for all functions.
    Name the code in the extensions _ext and import conditionally as usual.

  Generally separate file-format handling from functionality (and work
  with a small number of formats).
    Have a .tms format for straight time series; use sww for f(t, x, y).
    Modify Interpolation_function and file_function accordingly.
    (Use lwru2.py as a test bed) (Ole)


-------------
Visualisation

  Swollen:
    Export to movie
    Use maps
    Colour-coding of the z-scale

  Matplotlib:
    Realtime and 'post mortem' generation of colour-coded maps with
    contour lines. Also useful for graphs.

  Visual Python:
    Steve's new and improved tool
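  The set_quantity suggestion in the API section above, to use keyword
  arguments for everything, could look roughly like the standalone
  sketch below. It is not ANUGA code: the function name
  evaluate_quantity, its arguments and the (N, 2) vertex_coordinates
  array are assumptions used only to show the keyword dispatch and the
  a*Q + b expression form.

    import numpy as np

    def evaluate_quantity(vertex_coordinates, constant=None, function=None,
                          quantity=None, scale=1.0, offset=0.0):
        # Return one value per vertex from exactly one source:
        #   constant - a single number applied everywhere
        #   function - a callable f(x, y) -> values, with x and y arrays
        #   quantity - an existing per-vertex array, used as
        #              scale*quantity + offset (the a*Q + b form above,
        #              without a second quantity R)
        given = [v is not None for v in (constant, function, quantity)]
        if sum(given) != 1:
            raise ValueError('specify exactly one of constant, function, '
                             'quantity')

        n = len(vertex_coordinates)
        if constant is not None:
            return np.full(n, float(constant))
        if function is not None:
            x = vertex_coordinates[:, 0]
            y = vertex_coordinates[:, 1]
            return np.asarray(function(x, y), dtype=float)
        return scale * np.asarray(quantity, dtype=float) + offset

    # Example use: a linearly sloping stage on a three-vertex mesh.
    coords = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
    stage = evaluate_quantity(coords, function=lambda x, y: -0.2*x + 1.0)

  A filename source (pts file fitted with least squares and cached) and
  a region argument (tag, indices or polygon) would slot in as further
  keywords dispatching to the corresponding helpers.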
-----------
Installation and setup

  - Assume that the root dir for ANUGA is on PYTHONPATH.
    That way the same code can work with sandpits and the final
    installation.
  - Need installers for all modules.
  - pmesh needs a compile.py that will compile triangle when used from
    a sandpit. Currently it seems one needs a full setup to use pmesh.


-----------
Development

  Restructuring and moving the svn repository (Ole)
  Flatten the directory structure (Ole or Duncan)
  Apply buildbot
  Use a real bug-tracking/project-management tool
  (see http://www.generalconcepts.com/resources/tracking; plone,
  basecamp, ?)
    Try out TRAC, which integrates into subversion (Ole and Kat 27/7/5) - Done
    Zeus?
  Install Matplotlib on nautilus


--------------
Documentation

  Hire a technical writer to produce
    - Getting started
    - User guide
    - Reference manual (semi-automated)
  Mathematical model description (Steve & Chris)


-----------------
Validation and QA

  Merimbula (Steve & Chris)
  PNG landslide study or watertank data (Adrian?)
  BOM
  SMEC (Lex)
  CSIRO (Kathy)
  Watertank study (Okushiri) (Ole) - Done
  Macca...


--------
Naming

  The collection as well as the individual modules may benefit from
  better names.


--------
Release

  Search for an appropriate procedure for an OSS release
  (Ole is onto that with AGIMO)