OpenFOAM is software for computational fluid dynamics (CFD), maintained by a core team at CFD Direct and licensed free and open source by the OpenFOAM Foundation.  OpenFOAM provides parallel computing through domain decomposition and an interface to software written to the message passing interface (MPI) standard, e.g. Open MPI.  Small parallel computations use multiple cores, e.g. 2–36, on a single multi-core processor.  Computer clusters connect multiple processors to increase the number of available cores.  High performance computing (HPC) involves a larger number of cores, e.g. 100s, with fast networking to deliver good scalability.
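Domain decomposition is configured per case in OpenFOAM's system/decomposeParDict.  A minimal sketch, assuming the scotch method with one subdomain per core on a 36-core instance:

numberOfSubdomains  36;      // one subdomain per physical core
method              scotch;  // automatic decomposition; no manual partitioning input

The solver then runs one MPI process per subdomain, exchanging boundary data between neighbouring subdomains over the network.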

Amazon EC2 C5n Instances

Amazon Elastic Compute Cloud (EC2) includes Compute Optimized instances for compute-intensive workloads.  The C5 series targets fast networking with the Elastic Network Adapter (ENA).  In November 2018, AWS launched the Amazon EC2 C5n instances.  The C5n instances provide significantly higher network performance than the standard C5 instances across all instance sizes.  The largest instance size, c5n.18xlarge, contains 36 physical cores and has 100 Gbps of network bandwidth, compared to 25 Gbps for c5.18xlarge.

CFD Direct From the Cloud™

CFD Direct From the Cloud™ (CFDDFC) is a Marketplace Product for AWS EC2 that provides a complete platform with OpenFOAM and supporting software running on the latest long-term support (LTS) version of Ubuntu Linux.  It includes the CFDDFC command line interface (CLI) for simple management of CFDDFC instances, data transfer and running of OpenFOAM applications.  CFDDFC is enabled for the ENA, so it can extract the full network performance of the C5 series of instances.

Cluster of C5n Instances with CFDDFC

We launched clusters of C5n instances to test parallel scalability.  Our CFDDFC CLI provides an easy and convenient way to launch a cluster.  First, the main instance was launched with the c5n.18xlarge instance type and 200 GB of storage by the following command, using eu-west-1 as the default region:

cfddfc launch -instance c5n.18xlarge -volume 200

Once the main instance was launched, the cluster was created by launching secondary instances of the same type.  For example, the following command adds 6 secondary instances to create a cluster of (1 + 6) × 36 = 252 cores:

cfddfc cluster -secondary 6
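Under the hood, a parallel OpenFOAM run on the cluster follows the standard decompose-and-run workflow.  A minimal sketch, assuming a decomposeParDict with numberOfSubdomains 252, a hostfile listing the instances, and an illustrative solver name:

decomposePar
mpirun -np 252 --hostfile hostfile simpleFoam -parallel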

Parallel Scaling with C5n Instances

We tested parallel scaling of C5n instances on 2 cases:

  • steady-state, turbulent, incompressible flow simulation of external aerodynamics around a car;
  • transient, turbulent, incompressible two-phase simulation of water flowing over a weir with a hydraulic jump.

We ran a series of simulations on these 2 cases with meshes of different numbers of cells, M, on clusters with different numbers of cores, C.  The clock time, T, and simulated time, t, were recorded using the time function object.  We calculated a performance factor F = (t/T)·(M/C), which was normalised against F for a single instance (36 cores) and presented as a percentage.  On that basis, 100% denotes “linear scaling”, with super- and sub-linear scaling represented by percentages above and below 100, respectively.
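The time function object can be added to the case's system/controlDict.  A minimal sketch, in which the library name and write controls are assumptions rather than settings reported here:

functions
{
    time
    {
        type            time;
        libs            ("libutilityFunctionObjects.so");
        writeControl    timeStep;
        writeInterval   1;    // record CPU and clock time every step
    }
}

For illustration of F with hypothetical numbers: simulating t = 10 s in T = 5,000 s of clock time on M = 3.6m cells over C = 36 cores gives F = (10/5000) × (3.6e6/36) = 200; a cluster that sustains the same F scores 100%.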

External Aerodynamics around a Car

The configuration of the external aerodynamics was very similar to that described in cost of CFD in the cloud.  The number of cells was increased to approximately 97 million (97m); at a cluster size of 500 cores, this gives 200 thousand (200k) cells per core.  The solution converged in 2500 iterations to a mean drag coefficient of 0.325 (see below).
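In OpenFOAM, a drag coefficient of this kind is typically monitored with the forceCoeffs function object.  A minimal sketch, in which the patch name and the reference density, speed, length and area are hypothetical placeholders, not values from this study:

forceCoeffs
{
    type            forceCoeffs;
    libs            ("libforces.so");
    writeControl    timeStep;
    writeInterval   1;
    patches         (car);          // hypothetical name of the vehicle patch
    rho             rhoInf;         // incompressible: use a reference density
    rhoInf          1.2;            // hypothetical density [kg/m^3]
    liftDir         (0 0 1);
    dragDir         (1 0 0);
    CofR            (0 0 0);        // centre of rotation for moment coefficients
    pitchAxis       (0 1 0);
    magUInf         30;             // hypothetical free-stream speed [m/s]
    lRef            2.8;            // hypothetical reference length [m]
    Aref            2.2;            // hypothetical frontal area [m^2]
}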

We ran simulations on a single instance and clusters of 2, 4, 7 and 14 instances using all available physical cores. The graph below shows a departure from linear (100%) scaling at around 200 cores, with 90% scaling maintained at 504 cores.

An additional simulation with 684m cells, running on 252 cores, exhibited similar scaling behaviour to the 97m cell case.


External Aerodynamics of a Vehicle in OpenFOAM

Flow over a Weir with Hydraulic Jump

We configured the simulation of flow over a weir with a fixed 100,000 cells per core (weak scaling).  The weir was 30 m high, with an ogee profile common in dam spillways (comparable in size to the Burdekin Dam).  A preliminary mesh was created with the weir profile using snappyHexMesh.  For a single instance, the final mesh was then generated by extruding a flattened 2D patch to a width of 150 m.  On cluster configurations, we increased the weir width proportionally with the number of instances to maintain the same cells per core at the same cell size.  The water flow rate was 21.6 kL/s per metre of width (1 kL = 1 m³).
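Patch extrusion of this kind is configured in OpenFOAM's system/extrudeMeshDict.  A minimal sketch for the single-instance width, in which the patch names and layer count are hypothetical:

constructFrom       patch;
sourceCase          "$FOAM_CASE";
sourcePatches       (front);     // hypothetical name of the flattened 2D patch
exposedPatchName    back;        // hypothetical name for the opposite face
extrudeModel        linearNormal;
nLayers             100;         // hypothetical: sets the spanwise cell count
expansionRatio      1.0;
linearNormalCoeffs
{
    thickness       150;         // extruded width [m] for a single instance
}
mergeFaces          false;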

OpenFOAM HPC of Flow over a Weir with Hydraulic Jump

All simulations were run to t = 200 s to allow a pronounced hydraulic jump to form.  The cases ran with a variable time step Δt, limited by a maximum Courant number Co = 2; simulations took approximately 10,000 steps with an average Δt ≈ 0.02 s.  Iso-surface and slice data were written at intervals of 0.5 s, producing 400 frames for animation.  Simulations ran on clusters of 2, 4, 7 and 14 instances, with the animation below taken from the largest cluster of 504 cores, with a mesh of 50.4m cells, a weir width of 2.1 km and a flow rate of 45 ML/s.
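Time step and output control of this kind live in the case's system/controlDict.  A minimal sketch, in which maxAlphaCo and maxDeltaT are assumptions rather than reported settings:

adjustTimeStep  yes;
maxCo           2;      // maximum Courant number, as used here
maxAlphaCo      2;      // assumption: interface Courant limit for two-phase flow
maxDeltaT       1;      // assumption: upper bound on the time step [s]
endTime         200;

The 0.5 s frame interval corresponds to the surfaces function object writing with writeControl adjustableRunTime and writeInterval 0.5.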

The graph above shows the scaling performance.  In this weak scaling test (at 100k cells per core), there is a departure from linear scaling at 100 cores, falling to 70% scaling at 504 cores.  80 MB of data is written every 25 time steps across the network to a file system on the main instance.  Above 100 cores, data writing accounts for ~20% of the drop in scaling performance.  At 504 cores, for example, switching data writing off recovers 0.2 × (100 − 70) = 6 percentage points, increasing scaling from 70% to 76%.

Cost Breakdown

We have previously described the cost of CFD in the cloud with CFDDFC running on Amazon EC2.  CFDDFC runs by default on spot instances to take advantage of cheaper pricing on unused EC2 capacity.  It notifies users of the charges for transferring data from EC2 instances.  CFDDFC is charged as an hourly software fee at rates listed on the product page on AWS.  The following table shows the cost of EC2 instances and CFDDFC software for the benchmark simulations with 504 cores, including indicative costs for data transfer from EC2, but ignoring the much smaller cost of the Amazon Elastic Block Store (EBS) volumes used by the EC2 instances.  Costs are as of March 2019 in the eu-west-1 region, rounded up to the nearest $1.

Charge          Unit Price    Aerodynamics        Flow over a Weir
EC2 Instance    $1.24/hr      14 × 2 hr → $35     14 × 5 hr → $87
Software        $0.94/hr      14 × 2 hr → $26     14 × 5 hr → $66
Data Transfer   $0.09/GB      10 GB → $1          30 GB → $3
TOTAL                         $62                 $156

OpenFOAM HPC

  • CFD Direct From the Cloud™ enables simple, fast creation of a cluster of EC2 instances to run CFD with OpenFOAM.
  • The Amazon EC2 C5n instances offer network performance that delivers 70%–90% scaling at 504 cores.
  • With 100m cells or more, on at least 500 cores, C5n instances provide OpenFOAM HPC for CFD applications.
  • High quality open source software and public cloud deliver HPC at an “entry” price on the order of $100.
  • This compares with $100,000+ to purchase sufficient on-premises hardware and licences for proprietary CFD software.
OpenFOAM HPC with AWS C5n