/work is a Lustre file system. Lustre is optimised for reading and writing small numbers of large files: opening and closing large numbers of files can be slow, and large numbers of processes reading or writing files can contend for access to the file system.
OpenFOAM produces a lot of small files during a computation, organised in a peculiar hierarchy of directories. During a single simulation, many thousands of small files are created, read and written; this puts a lot of pressure on the file system, potentially causing substantial performance degradation and affecting the whole cluster.
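As a rough illustration (the field names and counts here are assumptions for the example, not figures from the source), a domain-decomposed run writes one directory per MPI rank, one subdirectory per written time, and one file per field:

    processor0/0.1/U   processor0/0.1/p   ...
    processor0/0.2/U   processor0/0.2/p   ...
    processor1/0.1/U   processor1/0.1/p   ...

The file count therefore grows as ranks × written times × fields: 1000 ranks, 100 writes and 10 fields already means a million files for a single case.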
OpenFOAM is well known and widely acknowledged as a very flexible and stable environment for developing new solvers. However, it has a reputation for scaling badly on big supercomputers, leading people to assume it should only be used when a problem can be tackled by a stand-alone workstation, or by only a few nodes of a large HPC system.
OpenFOAM is known to significantly stress shared filesystems, since a lot of (small) files are generated during an OpenFOAM simulation. Shared filesystems are typically optimised for dealing with (a small number of) large files, and are usually a poor match for workloads that involve a (very) large number of small files.
A single OpenFOAM user has exceeded one hundred million files and directories on a file system ill-suited to holding large numbers of them.
Several settings in system/controlDict can reduce this I/O load (a combined sketch follows this list):
writeFormat binary: write fields in binary rather than ASCII, giving smaller files and faster I/O
purgeWrite 1: keep only the most recent time directory, cyclically overwriting older ones
runTimeModifiable no: stop OpenFOAM re-reading its dictionaries at every time step
writeControl / writeInterval: write results every N time steps rather than at every step
writeCompression on: gzip each written file, trading a little CPU time for smaller output
Note that a synchronisation and a log file entry are still written for every time step deltaT.
Together these settings can reduce the file count by a factor of 40, and figures from ARCHER show an actual performance boost of 50%.
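As a minimal sketch, this is how the settings above might sit together in system/controlDict; the solver name and the numeric values (deltaT, writeInterval) are placeholder assumptions, not recommendations from the source:

    // Excerpt from system/controlDict: the I/O-related settings above
    application       simpleFoam;  // placeholder solver
    deltaT            0.001;       // placeholder time step
    writeControl      timeStep;
    writeInterval     100;         // write every 100 steps, not every step
    purgeWrite        1;           // keep only the latest time directory
    writeFormat       binary;      // binary output instead of ascii
    writeCompression  on;          // gzip each written file
    runTimeModifiable no;          // do not re-read dictionaries each step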
/local: node-local storage can be used for the many small files written during a run, keeping the pressure off the shared file system (results must be copied back to /work before the job ends).
Notes for this are on the ARC website
If you have any questions or would like to learn more about Research Computing, please do not hesitate to get in touch with us.