We are using fesapi v1.2.1. In an earlier version we were able to set the max chunk size in HdfProxy. Sometime prior to fesapi v0.16 this call disappeared, and it seems we are now reliant on fesapi to set the max chunk size, which appears to be a constant 4 GB.
We are writing very large HDF5 files; one example property is 1.2 GB, stored in a single chunk. Our Resqml/HDF consuming code cannot handle chunks this large: it reads properties in partial slices, so every partial read forces the HDF5 library to decompress the entire chunk and discard most of it, over and over (the chunk is also far too large for the chunk cache). This is infeasible.
I am a dev on the writing side. Our consuming developers recommend that we write with a max chunk size of 1 MB. Is there any way we can control the maximum HDF5 chunk size when writing? Does fesapi v2.0 support this?
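To make the request concrete: what our consumers have in mind is capping the total chunk size at a byte budget, shrinking the dataset dimensions until the chunk fits. This is not a fesapi API, just an illustrative sketch (the helper name and the halve-the-largest-dimension strategy are my own assumptions about how such a cap could be computed):

```python
def capped_chunk_shape(shape, itemsize, max_bytes=1 << 20):
    """Shrink `shape` into a chunk shape whose total byte size stays
    under `max_bytes` (default 1 MB), by repeatedly halving the
    largest dimension. Hypothetical helper, not a fesapi function."""
    chunk = list(shape)
    while True:
        size = itemsize
        for d in chunk:
            size *= d
        if size <= max_bytes:
            return tuple(chunk)
        # Halve the largest dimension (rounding up) so chunks stay compact.
        i = max(range(len(chunk)), key=lambda k: chunk[k])
        if chunk[i] == 1:
            return tuple(chunk)  # cannot shrink any further
        chunk[i] = (chunk[i] + 1) // 2
```

For example, `capped_chunk_shape((1024,), itemsize=1, max_bytes=512)` yields `(512,)`, and a 1000 x 1000 x 300 float64 property (2.4 GB) comes out well under the 1 MB budget. On the writing side this chunk shape would then be passed to the HDF5 dataset creation property list (`H5Pset_chunk` in the C API), which is the knob we are hoping fesapi exposes.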