EC2’s M4 instances offer a balance of compute, memory, and networking resources and are a good choice for many different types of applications.
We launched the M4 instances last year (read The New M4 Instance Type to learn more) and gave you a choice of five sizes, from large up to 10xlarge. Today we are expanding the range with the introduction of a new m4.16xlarge with 64 vCPUs and 256 GiB of RAM. Here’s the complete set of specs:
| Instance Name | vCPU Count | RAM | Instance Storage | Network Performance | EBS-Optimized Bandwidth |
|---------------|------------|-----|------------------|---------------------|-------------------------|
| m4.large | 2 | 8 GiB | EBS Only | Moderate | 450 Mbps |
| m4.xlarge | 4 | 16 GiB | EBS Only | High | 750 Mbps |
| m4.2xlarge | 8 | 32 GiB | EBS Only | High | 1,000 Mbps |
| m4.4xlarge | 16 | 64 GiB | EBS Only | High | 2,000 Mbps |
| m4.10xlarge | 40 | 160 GiB | EBS Only | 10 Gbps | 4,000 Mbps |
| m4.16xlarge | 64 | 256 GiB | EBS Only | 20 Gbps | 10,000 Mbps |
The new instances are based on Intel Xeon E5-2686 v4 (Broadwell) processors that are optimized specifically for EC2. When used with Elastic Network Adapter (ENA) inside of a placement group, the instances can deliver up to 20 Gbps of low-latency network bandwidth. To learn more about the ENA, read my post, Elastic Network Adapter – High Performance Network Interface for Amazon EC2.
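To get the full 20 Gbps, the instances need to run inside a cluster placement group. Here's a minimal sketch of setting that up with boto3; the AMI ID, placement group name, and region below are placeholders, not real resources:

```python
def launch_params(ami_id, placement_group):
    """Build the run_instances arguments for an m4.16xlarge in a
    cluster placement group (needed for the full 20 Gbps)."""
    return {
        "ImageId": ami_id,
        "InstanceType": "m4.16xlarge",
        "MinCount": 1,
        "MaxCount": 1,
        # Instances in a cluster placement group share a low-latency,
        # high-bandwidth network segment within one Availability Zone.
        "Placement": {"GroupName": placement_group},
    }

# To actually launch (requires AWS credentials and an ENA-enabled AMI):
#   import boto3
#   ec2 = boto3.client("ec2", region_name="us-east-1")
#   ec2.create_placement_group(GroupName="my-cluster", Strategy="cluster")
#   ec2.run_instances(**launch_params("ami-0123456789abcdef0", "my-cluster"))
```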
Like the m4.10xlarge, the m4.16xlarge allows you to control the C states to enable higher turbo frequencies when you are using just a few cores. You can also control the P states to lower performance variability (read my extended description in New C4 Instances to learn more about both of these features).
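On Linux, these processor-state controls surface through the standard cpuidle and cpufreq sysfs interfaces. Here's a small sketch for inspecting the C states the kernel may enter; the paths assume the usual sysfs layout and are guarded since they are absent on other platforms:

```python
from pathlib import Path

def available_cstates():
    """Return the names of the idle (C) states the kernel may enter
    on cpu0, or None if the cpuidle interface is unavailable."""
    base = Path("/sys/devices/system/cpu/cpu0/cpuidle")
    if not base.is_dir():
        return None
    # Each stateN directory describes one idle state; deeper states
    # save more power but wake more slowly. Limiting deep C states
    # (e.g. the intel_idle.max_cstate=1 kernel parameter) keeps cores
    # ready for higher turbo frequencies on the active ones.
    return [(p / "name").read_text().strip()
            for p in sorted(base.glob("state*"))]
```

The P-state side is similar: the cpufreq `scaling_governor` files under the same `/sys/devices/system/cpu` tree control frequency scaling, and pinning the governor to `performance` reduces run-to-run variability.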
You can purchase On-Demand Instances and Reserved Instances; visit the EC2 Pricing page for more information.
As part of today’s launch we are also making the M4 instances available in the China (Beijing), South America (São Paulo), and AWS GovCloud (US) regions.