Ticket #178: Time to load and fit mesh file (domain.set_quantity) is the slowest part for large parallel model runs

Reporter:   nick
Owner:      steve
Type:       enhancement
Status:     closed
Priority:   normal
Milestone:  Efficiency and optimisation
Component:
Version:
Severity:   normal
Resolution: fixed
Keywords:
Cc:         duncan sexton rwilson

Description:

Here are some stats and times for different parts of ANUGA for 3 similar model scenarios, differing mainly in the number of triangles:

{{{
J:\inundation\data\western_australia\broome_tsunami_scenario_2006\anuga\outputs\20070518_060438_run_final_0_dampier_nbartzis
380000 triangles, on Tornado
bathy file load                   58000 sec (15 hrs)
load and fit boundary condition   35000 sec (10 hrs)
evolution                         25000 sec (7 hrs)

J:\inundation\data\western_australia\broome_tsunami_scenario_2006\anuga\outputs\20070621_235838_run_final_0_onslow_nbartzis
430000 triangles, on cyclone
bathy file load                  160000 sec (44 hrs)
load and fit boundary condition   56000 sec (15 hrs)
evolution                         80000 sec (22 hrs)
PARALLEL using 4 cpus

J:\inundation\data\western_australia\broome_tsunami_scenario_2006\anuga\outputs\20070615_062002_run_final_0_onslow_nbartzis
590000 triangles, on cyclone
bathy file load                  250000 sec (70 hrs)
load and fit boundary condition   25000 sec (7 hrs)
evolution                        270000 sec (75 hrs)
NOT parallel, so can expect around 25 hours with parallel
}}}

I think the next effort to increase speed in ANUGA should be focused on loading and fitting the bathy file to the mesh (domain.set_quantity). Even if this process were just cached more regularly, then when it fails halfway through it wouldn't need to run the whole thing again.

Also, the length of time a model needs to run is not the biggest issue in itself, but it greatly increases the exposure to things we can't control, such as network issues, file access problems, node failures, cluster node reboots, gniess and perlite system reboots, and rifts in the space-time continuum.
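The "cache more regularly" idea above can be sketched in plain Python: split the expensive fit into chunks and write each chunk's result to disk as it completes, so a rerun after a mid-job failure reloads the finished chunks instead of recomputing everything. This is an illustrative sketch only, not ANUGA's actual fitting code; the `cached_in_chunks` helper, the chunking scheme, and the content-hash cache key are all assumptions for the example.

```python
import hashlib
import os
import pickle
import tempfile

def cached_in_chunks(work_items, process_chunk, cache_dir, chunk_size=1000):
    """Process work_items in chunks, caching each chunk's result on disk.

    If the run dies partway through, a rerun reloads the already-finished
    chunks from cache_dir and only recomputes the remainder.
    """
    os.makedirs(cache_dir, exist_ok=True)
    results = []
    for start in range(0, len(work_items), chunk_size):
        chunk = work_items[start:start + chunk_size]
        # Key the cache entry on the chunk's contents, so stale files
        # left over from a different data set are never reused.
        key = hashlib.sha1(pickle.dumps(chunk)).hexdigest()
        path = os.path.join(cache_dir, key + '.pck')
        if os.path.exists(path):
            with open(path, 'rb') as f:
                results.append(pickle.load(f))
        else:
            result = process_chunk(chunk)
            with open(path, 'wb') as f:
                pickle.dump(result, f)
            results.append(result)
    return results

# Demo: the second pass finds every chunk in the cache and recomputes nothing.
calls = []
def fit_chunk(chunk):
    calls.append(len(chunk))
    return sum(chunk)

cache = tempfile.mkdtemp()
first = cached_in_chunks(list(range(10)), fit_chunk, cache, chunk_size=4)
second = cached_in_chunks(list(range(10)), fit_chunk, cache, chunk_size=4)
```

The per-chunk granularity is the point: with one cache entry for the whole fit (all-or-nothing), a failure at hour 40 of a 44-hour load still costs the full 44 hours on the rerun.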