The Root Drive
The root drive on an EC2 instance can be SSD or magnetic, depending on the instance type selected. Any additional mounted/persistent disk drives on that machine will typically be of the same type. Note that even when the root drive is an SSD, it is a network-attached drive.
EBS disk IOPS and MB/s are provisioned exactly as described in the EC2 documentation. The most common GP2 SSDs have a burst IOPS limit and a sustained IOPS limit, as well as a maximum MB/s transfer rate. Both the sustained IOPS limit and the maximum transfer rate scale with the size of the provisioned disk: larger disks can sustain higher IOPS and deliver higher throughput.
We sized our disk at 20 GB, which gave us low IOPS credit earning rates and low MB/s transfer rates. That was a mistake. The sweet spot for disk drive performance is around 214 GB: it is the smallest disk that gives you the highest transfer rate and the highest burst credit acquisition rate.
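For the curious, the 214 GB figure can be derived from the gp2 throughput model AWS documented around the time of these tests: throughput scaling at 768 KiB/s per provisioned GiB, capped at 160 MiB/s. Treat those constants as assumptions here; they have since changed, so check the current EBS documentation.

```python
# Sketch of where the ~214 GB sweet spot comes from, assuming the older
# gp2 throughput model: 768 KiB/s of throughput per GiB of volume size,
# capped at 160 MiB/s. (Assumed constants; verify against current docs.)
THROUGHPUT_PER_GIB_KIB = 768
MAX_THROUGHPUT_MIB = 160

# Smallest volume that reaches the throughput cap.
smallest_full_speed_gib = MAX_THROUGHPUT_MIB * 1024 / THROUGHPUT_PER_GIB_KIB
print(round(smallest_full_speed_gib, 1))  # ~213.3 GiB, i.e. ~214 GiB rounded up
```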
Teams should do their own analysis before picking the more expensive EBS volume types. EBS GP2 burstable SSDs may provide better value than fixed provisioned IOPS SSDs (io1).
Burst credits let you store IOPS in a credit bucket so that you can temporarily exceed your provisioned sustained IOPS rate. This lets you reach up to 3000 IOPS (GP2) in short bursts without having to pay for higher-performing drives. New volumes are given 30 minutes of burst credits so that machines can be provisioned and applications warmed up at the fastest speed possible. Burst credits are earned based on the size of the EBS volume and the provisioned sustained IOPS rate: larger disks earn IOPS credits faster than smaller ones.
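As a rough sketch of the credit math: AWS's published gp2 behavior gives a 5.4-million-credit bucket and a baseline earn rate of 3 IOPS per provisioned GB (consistent with the 60 and 240 IOPS specs measured below). Treat these constants as assumptions to verify against current documentation.

```python
# Rough gp2 burst-credit model. Constants are assumptions taken from
# AWS's published gp2 behavior at the time and may have changed.
BUCKET_CREDITS = 5_400_000   # initial/maximum credit balance (IOPS-seconds)
BURST_IOPS = 3000            # maximum gp2 burst rate

def baseline_iops(size_gb):
    """gp2 earns credits at 3 IOPS per provisioned GB."""
    return 3 * size_gb

def burst_minutes(size_gb):
    """How long a full bucket lasts while bursting at 3000 IOPS."""
    drain = BURST_IOPS - baseline_iops(size_gb)
    return float("inf") if drain <= 0 else BUCKET_CREDITS / drain / 60

for gb in (20, 80, 214, 1000):
    print(gb, baseline_iops(gb), burst_minutes(gb))
```

A 20 GB volume bursts for roughly half an hour, which matches the 30 minutes of startup burst described above; at 1000 GB the baseline equals the burst rate and the bucket never drains.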
IOPS vs MB/s
<I have no idea what I was planning on putting here>
IOPS, MB/S and Block Sizes
I/O operations per second (IOPS), I/O bandwidth, and the data block size interact with each other to limit total throughput. Machines that use the AWS default 16 KB block size may not be able to use their full I/O bandwidth. Our test results agree with this concept.
- effective bandwidth = number of IOPS * the block size of each write
Teams may have to do some math to tune their disk drives for I/O-bound applications.
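As a quick sanity check of that formula against the benchmark numbers later in this post (decimal units assumed, since the table's "MB/s 16KB max" column works out that way):

```python
# Effective bandwidth = IOPS * block size.
# Reproduces the "MB/s 16KB max" column of the benchmark table below.
def effective_mb_per_s(iops, block_kb=16):
    return iops * block_kb / 1000.0

print(effective_mb_per_s(60))    # 20 GB gp2 baseline: 0.96 MB/s
print(effective_mb_per_s(240))   # 80 GB gp2 baseline: 3.84 MB/s
print(effective_mb_per_s(3000))  # gp2 burst at 16 KB blocks: 48.0 MB/s
```

At 16 KB blocks, even a full 3000-IOPS burst moves only about 48 MB/s, well under the volume's advertised throughput ceiling, which is why small block sizes can leave bandwidth on the table.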
Ephemeral local SSD
EC2 machines can make use of SSDs attached to the physical host that the EC2 instance is running on. These disks provide significantly higher performance, which must be balanced against their ephemeral nature. Local SSDs cannot be snapshotted, and their contents disappear whenever a machine is terminated and restarted. All ephemeral SSD data must be reconstitutable from other data sources, since the VM with its local SSD could disappear at any time.
Benchmarks

The following table describes several benchmark tests against various drive configurations. Regular GP2 SSDs provide exactly the specified speed and throughput, both with burst credits available and with no burst credits. The main areas of interest are latency, the relative performance of EBS vs ephemeral drives, and the impact of disk encryption. I don't understand why we have outliers in the data; sometimes different machines gave different ephemeral performance. Note that Amazon does not seem to specify performance data for local SSDs.
Results

This chart shows the transfer rate of various disk drives. It makes clear how much of a performance improvement can be obtained by using local SSD (ephemeral) drives over network-attached EBS.
|Disk Size|Disk Type|MB/s rand write 512m bs=16k|IOPS rand write 512m bs=16k|clat latency 95% (usec)|MB/s rand read 512m bs=16k|IOPS rand read 512m bs=16k|clat latency 95% (usec)|MB/s AWS max|MB/s 16KB max|IOPS Spec|Test Device|
|---|---|---|---|---|---|---|---|---|---|---|---|
|20GB|SSD (GP2)|1|60| |1|60| |2.56|0.96|60/3000|No Burst|
|80GB|SSD (GP2)|3.8|240| |3.8|240| |10.24|3.84|240/3000|No Burst|
*Measured and calculated using Burst credits
#Measured and calculated with no available Burst credits
Disk encryption does not affect disk throughput or IOPS. It does increase disk latency. Local SSD performance can be affected by other VMs on the same hardware device sharing the same drives.
System configuration and Test Commands
|ssh -i speedtest.pem email@example.com|
|sudo yum update|
|sudo yum install fio|
|sudo mkfs -t ext4 /dev/xvdb|
|sudo mount /dev/xvdb /media/ephemeral0|
|sudo fio --directory=/media/ephemeral0 --name=randwrite --direct=1 --rw=randwrite --bs=16k --size=1G --numjobs=16 --time_based --runtime=180 --group_reporting --norandommap|
|sudo fio --directory=/media/ephemeral0 --name=randread --direct=1 --rw=randread --bs=16k --size=1G --numjobs=16 --time_based --runtime=180 --group_reporting --norandommap|
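If you want to post-process runs programmatically, fio can emit JSON with `--output-format=json`. A minimal sketch of pulling out the headline numbers, assuming fio's standard JSON layout (field names vary slightly between fio versions, so verify against your version's output):

```python
import json

# Minimal sketch: extract IOPS and bandwidth from fio JSON output
# (produced with --output-format=json). Field names are assumptions
# based on fio's standard JSON layout; check your fio version.
def summarize(fio_json_text, op="write"):
    report = json.loads(fio_json_text)
    stats = report["jobs"][0][op]
    return {
        "iops": stats["iops"],
        "mb_per_s": stats["bw"] / 1000.0,  # fio reports bw in KB/s
    }

# Tiny fabricated sample mirroring the 80 GB gp2 row of the table above.
sample = json.dumps({"jobs": [{"write": {"iops": 240.0, "bw": 3840}}]})
print(summarize(sample))  # {'iops': 240.0, 'mb_per_s': 3.84}
```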
Other AWS related references
This blog not yet finished