Compute- and HPC-Optimised Instances
AWS provides a range of Elastic Compute Cloud (EC2) instance types with varying combinations of CPU, memory, storage, and networking capacity. For CFD, we generally recommend the Compute-Optimised (C-series) and HPC-Optimised (HPC-series) instance types due to their high-performance processors. The main C- and HPC-series instance types are summarised below, covering Intel and AMD processors with the x86 architecture and Amazon Graviton processors based on the Arm architecture.
Intel processors
- C5 (including “new” C5): Xeon Skylake-SP and Cascade Lake processors, up to 48 cores (c5.24xlarge).
- C5n: as C5, but with significantly higher network performance, up to 36 cores (c5n.18xlarge).
- C6i/C6in†: Xeon Ice Lake processors, up to 64 cores (c6i.32xlarge).
- C7i: 4th generation Xeon Scalable (Sapphire Rapids) processors, up to 96 cores (c7i.48xlarge).
AMD processors
- C5a: 2nd generation EPYC 7002 series, up to 48 cores (c5a.24xlarge).
- C6a: 3rd generation EPYC 7003 series, up to 96 cores (c6a.48xlarge).
- HPC6a: low-cost C6a with high network performance (hpc6a.48xlarge).
Graviton (Arm-based) processors
- C6g/C6gn†: with Graviton2 processors, up to 64 cores (c6g.16xlarge).
- C7g: with Graviton3 processors, up to 64 cores (c7g.16xlarge).
- C7gn†: with Graviton3E processors, up to 64 cores (c7gn.16xlarge).
- HPC7g: low-cost C7g with high network performance (hpc7g.16xlarge).
† = “n” variant has significantly higher network performance.
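The specifications above can be confirmed programmatically. The sketch below, which assumes boto3 (the AWS SDK for Python) and configured AWS credentials, queries the EC2 DescribeInstanceTypes API for the listed instance types; note that the HPC types are only offered in certain regions, so entries may need to be removed from the list depending on the region in use.

```python
# Sketch: list cores, vCPUs and network rating for the instance types above.
# Assumes boto3 is installed and AWS credentials/region are configured.
import boto3

ec2 = boto3.client("ec2")

instance_types = [
    "c5.24xlarge", "c5n.18xlarge", "c6i.32xlarge", "c7i.48xlarge",
    "c5a.24xlarge", "c6a.48xlarge", "hpc6a.48xlarge",
    "c6g.16xlarge", "c7g.16xlarge", "c7gn.16xlarge", "hpc7g.16xlarge",
]

# Instance types not offered in the configured region cause an error,
# so remove any that are unavailable (e.g. the HPC types).
response = ec2.describe_instance_types(InstanceTypes=instance_types)

for info in response["InstanceTypes"]:
    vcpu = info["VCpuInfo"]
    print(
        f"{info['InstanceType']:>16}: "
        f"{vcpu['DefaultCores']} cores, "
        f"{vcpu['DefaultVCpus']} vCPUs, "
        f"network: {info['NetworkInfo']['NetworkPerformance']}"
    )
```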
Number of Processor Cores
Each family of instances is available in a range of sizes corresponding to the number of available processor cores. AWS reports the size in terms of “vCPUs”, where a single processor core corresponds to 2× vCPUs for x86 processors since they support multithreading. For example, a c6i.2xlarge instance has 8 vCPUs, which equates to 4 physical processor cores.
CFD does not benefit from multithreading, so users need to consider the actual number of “physical” cores rather than vCPUs. Graviton (Arm-based) processors do not support multithreading, so each of their vCPUs corresponds to a single physical core. The C6g family therefore offers a larger number of cores, but their lower speed means that 2 of its cores give performance approximately equivalent to a single core of the other C-series types.
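For x86 instances, multithreading can also be switched off at launch so that the operating system only sees the physical cores. The sketch below uses the EC2 CpuOptions parameter via boto3; the AMI ID and key pair name are placeholders rather than real values.

```python
# Sketch: launch a c6i.2xlarge with 1 thread per core, so the instance
# presents 4 physical cores instead of 8 vCPUs.
import boto3

ec2 = boto3.client("ec2")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder AMI, e.g. a CFDDFC image
    InstanceType="c6i.2xlarge",
    MinCount=1,
    MaxCount=1,
    KeyName="my-key-pair",            # placeholder key pair name
    CpuOptions={
        "CoreCount": 4,               # all 4 physical cores of c6i.2xlarge
        "ThreadsPerCore": 1,          # disable multithreading
    },
)

print(response["Instances"][0]["InstanceId"])
```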
EC2 Pricing
CFD computing on the cloud generally involves: launching instances; running CFD on the instances; storing and transferring data; and, finally, terminating the instances. Costs are charged on a pay-as-you-go basis and are principally for: 1) the instances; and 2) data storage and data transfer from the instances. Below is a very approximate summary of the principal charges for EC2. For further information, see CFDDFC on AWS: Pricing of Computing Resources.
- On-demand instances: $0.10 per core-hr for x86 instances, $0.07 per 2×core-hr for Arm-based instances.
- Spot instances (recommended for CFD): typically $0.03 per core-hr for x86 processors and per 2×core-hr for Arm-based processors.
- Storage and transfer: $0.015 per hr for a 100 GB volume; $0.09 per GB of data transferred from an instance across the Internet.
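To make these figures concrete, the sketch below estimates the cost of a single run from the approximate rates quoted above; the rates are illustrative only and should be checked against current AWS pricing.

```python
# Sketch: rough cost estimate for a CFD run on x86 spot instances, using the
# approximate figures quoted above (not actual AWS prices).

SPOT_RATE_PER_CORE_HR = 0.03   # $/core-hr, approximate spot rate for x86
STORAGE_RATE_PER_HR = 0.015    # $/hr for a 100 GB volume
TRANSFER_RATE_PER_GB = 0.09    # $/GB transferred out to the Internet

def estimate_cost(cores, hours, data_out_gb):
    """Approximate cost in $ of a single CFD run on spot instances."""
    compute = cores * hours * SPOT_RATE_PER_CORE_HR
    storage = hours * STORAGE_RATE_PER_HR
    transfer = data_out_gb * TRANSFER_RATE_PER_GB
    return compute + storage + transfer

# Example: 36 cores (c5n.18xlarge) for 10 hours, downloading 20 GB of results
print(f"${estimate_cost(36, 10, 20):.2f}")  # approximately $12.75
```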
Remote Desktop
CFD Direct From the Cloud (CFDDFC) provides access via remote desktop, using X2Go, on instances with x86 processors. Arm-based processors do not have the same level of software support as the x86 architecture and are largely confined to server operating systems without graphical applications. CFDDFC (Arm) runs on Ubuntu Server for Arm, so does not support remote desktop.
Clusters of Instances
Multiple instances can be networked to create a computer cluster that can run parallel computations using the combined nodes of the instances. C5n instances are recommended for clusters over their C5 counterparts, since their higher network performance gives good parallel scaling to at least 500 cores. A cluster of c5n.18xlarge instances can also include the network interface known as the Elastic Fabric Adapter (EFA). EFA improves scaling a little further, maintaining good parallel performance to at least 1,000 cores.
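As a sketch of how such a cluster might be launched with boto3, the example below creates a cluster placement group and starts four c5n.18xlarge instances with an EFA network interface attached. The AMI, subnet, security group and key pair identifiers are placeholders; in practice the security group must also allow all traffic to and from itself for EFA communication to work.

```python
# Sketch: launch a 4-node cluster of c5n.18xlarge instances with EFA.
# All identifiers below are placeholders -- substitute your own values.
import boto3

ec2 = boto3.client("ec2")

# Cluster placement group keeps the instances close together on the network
ec2.create_placement_group(GroupName="cfd-cluster", Strategy="cluster")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",              # placeholder AMI
    InstanceType="c5n.18xlarge",
    MinCount=4,
    MaxCount=4,
    KeyName="my-key-pair",                        # placeholder key pair
    Placement={"GroupName": "cfd-cluster"},
    NetworkInterfaces=[{
        "DeviceIndex": 0,
        "InterfaceType": "efa",                   # attach an EFA rather than a standard ENI
        "SubnetId": "subnet-0123456789abcdef0",   # placeholder subnet
        "Groups": ["sg-0123456789abcdef0"],       # placeholder security group
    }],
)

for instance in response["Instances"]:
    print(instance["InstanceId"], instance["PrivateIpAddress"])
```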