Opened 19 years ago
Closed 15 years ago
#35 closed defect (fixed)
Introduce large file support for NetCDF to allow larger than 2 GByte files
| Reported by: | ole | Owned by: | rwilson |
|---|---|---|---|
| Priority: | normal | Milestone: | AnuGA ready for release |
| Component: | Efficiency and optimisation | Version: | 1.0 |
| Severity: | normal | Keywords: | NetCDF |
| Cc: | | | |
Description
The 2 GB barrier can be removed by moving to NetCDF v3.6 and its 64-bit offset storage format (even on 32-bit platforms). See
http://www.unidata.ucar.edu/software/netcdf/docs/faq.html#Large%20File%20Support4
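To illustrate what the linked FAQ describes, here is a minimal sketch of creating a classic-format file with the 64-bit offset variant introduced in NetCDF 3.6, which lifts the 2 GB per-file limit. It assumes the netCDF4 Python package; AnuGA itself may use a different NetCDF binding, and the file and variable names are purely illustrative.
```python
# Minimal sketch, assuming the netCDF4 Python package is available.
# The 64-bit offset format (NetCDF 3.6+) removes the 2 GB per-file limit
# of the classic format, even on 32-bit platforms.
from netCDF4 import Dataset

nc = Dataset('results.sww', 'w', format='NETCDF3_64BIT_OFFSET')
nc.createDimension('number_of_timesteps', None)     # unlimited dimension
nc.createDimension('number_of_points', 100000)
nc.createVariable('stage', 'f4',
                  ('number_of_timesteps', 'number_of_points'))
nc.close()
```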
Change History (8)
comment:1 Changed 19 years ago by ole
- Priority changed from normal to high
comment:2 Changed 19 years ago by ole
- Owner changed from someone to duncan
comment:3 Changed 18 years ago by ole
Currently there is some functionality in data_manager.py for splitting NetCDF files that exceed 2 GB, but it has no unit test. In fact, the max_size argument to the class Data_format_sww, which is supposed to control the split, is not even passed down from get_dataobject or store_connectivity, where one would want to control it. That is exactly where a small test would set a small value for max_size.
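The splitting code in data_manager.py is not reproduced here; the sketch below is only a generic, hypothetical illustration of the behaviour a small-max_size unit test would exercise: close the current file and start a new part whenever a byte cap is exceeded. All names and the choice of the netCDF4 Python package are assumptions, not AnuGA's actual API.
```python
# Generic, hypothetical sketch of output splitting driven by a byte limit.
# None of this is AnuGA's data_manager.py code; it only illustrates why a
# small max_size (a few kB rather than 2 GB) makes the behaviour easy to
# unit test without writing gigabytes of data.
import os
from netCDF4 import Dataset

def _open_part(basename, part, npoints=1000):
    """Open part number `part` of the output in classic 64-bit offset format."""
    fname = '%s_part%d.nc' % (basename, part)
    nc = Dataset(fname, 'w', format='NETCDF3_64BIT_OFFSET')
    nc.createDimension('time', None)
    nc.createDimension('points', npoints)
    nc.createVariable('stage', 'f4', ('time', 'points'))
    return nc, fname

def store_timesteps(basename, timesteps, max_size=10000):
    """Write each timestep; roll over to a new file once max_size bytes is hit."""
    part, row = 0, 0
    nc, fname = _open_part(basename, part)
    for values in timesteps:
        nc.variables['stage'][row, :] = values
        row += 1
        nc.sync()                              # flush so the size check is honest
        if os.path.getsize(fname) > max_size:  # cap exceeded: start a new part
            nc.close()
            part, row = part + 1, 0
            nc, fname = _open_part(basename, part)
    nc.close()
```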
comment:4 Changed 18 years ago by anonymous
- Priority changed from high to low
comment:5 Changed 16 years ago by ole
- Owner changed from duncan to rwilson
- Priority changed from low to normal
comment:6 Changed 16 years ago by rwilson
It appears that recent changes in NetCDF (beyond the 64-bit offset format changes) allow large dataset file sizes:
"With the netCDF-4/HDF5 format, size limitations are further relaxed, and files can be as large as the underlying file system supports."
http://www.unidata.ucar.edu/software/netcdf/docs/netcdf.html#index-limitations-of-netCDF-61
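For comparison, a minimal sketch of creating a file in the netCDF-4/HDF5 format that the quoted passage refers to, again assuming the netCDF4 Python package (with HDF5 support); the file and variable names are illustrative only.
```python
# Minimal sketch, assuming the netCDF4 Python package built with HDF5 support.
# In the NETCDF4 on-disk format, file size is limited only by the underlying
# file system, and chunked/compressed variables become available.
from netCDF4 import Dataset

nc = Dataset('large_results.nc', 'w', format='NETCDF4')
nc.createDimension('time', None)
nc.createDimension('points', 5000000)
nc.createVariable('stage', 'f4', ('time', 'points'), zlib=True)
nc.close()
```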
comment:7 Changed 16 years ago by rwilson
- Status changed from new to assigned
comment:8 Changed 15 years ago by nariman
- Resolution set to fixed
- Status changed from assigned to closed
Duncan, would you mind putting this on *your* list of things to do?