Dynamic Meshes in OpenFOAM

“Dynamic mesh” describes situations in CFD where the mesh changes, either topologically, by adding or removing cells, or by capturing the motion of the solution domain. It also covers changes to the distribution of cells during a parallel simulation. This article describes the redesign of dynamic mesh functionality, released in OpenFOAM v10 and the development version of OpenFOAM (OpenFOAM-dev). The redesign was motivated by the development of non-conformal coupling (NCC). In particular, it overcame a limitation of the previous dynamic mesh functionality, which permitted only a single form of mesh motion or topological change within a simulation.

Modular Solvers in OpenFOAM

In August 2022, CFD Direct introduced modular solvers to the OpenFOAM development version. Modular solvers are written as classes, in contrast to the traditional application solvers which have been integral to OpenFOAM since icoFoam in 1993.  They are simpler to use, maintain and understand than application solvers. They are more flexible; in particular, modules for different fluids and solids can be coupled across multiple regions, e.g. for conjugate heat transfer (CHT) with multiphase flow.  Modular solvers are deployed using the foamRun or foamMultiRun applications, which contain a generic solution algorithm for single and multiple regions, respectively. Additional modules and applications replace existing tools for data processing and case configuration.
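As a sketch of how deployment works, a modular solver is selected in the case's system/controlDict and executed with foamRun. This assumes the controlDict conventions of OpenFOAM v10 onwards; the module name incompressibleFluid here is one example solver module and may differ for your case:

```shell
# Select the solver module in system/controlDict:
#
#     application     foamRun;
#     solver          incompressibleFluid;
#
# Then, from the case directory, run the generic solution algorithm:
foamRun

# The module can also be chosen on the command line, e.g. to try a
# different solver without editing controlDict:
foamRun -solver incompressibleFluid
```

For multi-region cases such as CHT, foamMultiRun plays the equivalent role, pairing each mesh region with its own solver module configured in controlDict.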

Web CFDDFC® Released

CFD Direct is pleased to announce the release of Web CFD Direct From the Cloud (CFDDFC®), providing OpenFOAM via a remote desktop in a web browser, running on Amazon Web Services (AWS). Web CFDDFC is a pay-as-you-go Amazon Machine Image (AMI) from AWS Marketplace, providing a full desktop environment with graphical applications, including OpenFOAM v10, ParaView v5.6.0, OpenMPI v4.1.1 and FreeCAD v0.19.2, running on Ubuntu 20.04 LTS. Web CFDDFC runs on single instances with up to 64 C6i Intel cores (c6i.32xlarge) or 96 C6a AMD cores (c6a.48xlarge), and on clusters of instances with good parallel scaling up to 1000 cores.

Effective OpenFOAM Maintenance

OpenFOAM is the leading free, open source software for computational fluid dynamics (CFD), distributed by The OpenFOAM Foundation. By 2014, OpenFOAM had accumulated significant “technical debt” due to a drive for new functionality at the expense of maintenance. Facing an unsustainable level of technical debt, CFD Direct was founded to manage and develop OpenFOAM back to a sustainable position. Code repair has targeted niche functionality that receives less testing. Redesign of larger, critical components of OpenFOAM has eliminated clusters of issues. By 2022, CFD Direct had repaid most of the technical debt, making OpenFOAM significantly more robust, usable and extensible.

OpenFOAM v10 Training 2023

In Spring/Summer 2023, CFD Direct is running its OpenFOAM Training courses — Essential CFD, Applied CFD and Programming CFD — fully updated with the latest features of the new version 10 release of OpenFOAM. The training uses new features in OpenFOAM v10 for more productive and effective CFD. All courses are delivered as live virtual training. Essential and Applied CFD courses are available: 6-10 Feb, 13-17 Mar, 24-28 Apr, 15-19 May. Programming CFD is available 24-26 Jan, 28 Feb-2 Mar, 28-30 Mar, 6-8 Jun, 26-28 Jun.

CFDDFC® v10 Released

CFD Direct is pleased to announce the release of version 10 of CFD Direct From the Cloud™ (CFDDFC®), available on Amazon Web Services (AWS) Marketplace as the standard CFDDFC product and CFDDFC (Arm). Standard CFDDFC v10 includes OpenFOAM v10, ParaView v5.6.0, OpenMPI v4.1.1 and FreeCAD v0.19.2, running on Ubuntu 20.04 LTS. CFDDFC (Arm) is server-only, including OpenFOAM v10 and OpenMPI v4.1.1, running on Ubuntu 20.04 LTS. CFDDFC runs on single instances with up to 64 C6i Intel cores (c6i.32xlarge), 96 C6a AMD cores (c6a.48xlarge) or 64 C7g Graviton cores (c7g.16xlarge), and on clusters of instances with good parallel scaling up to 1000 cores.