What are Disk Throughput and IOPS in AWS EBS?
Each technology comes with its own terminology, and to understand the technology it’s important to understand those terms.
Here we are dealing with Disk Throughput and IOPS in AWS EBS, the storage solution AWS provides for its EC2 servers.
Just as a physical machine uses its local hard disk, an EC2 server uses an EBS volume as its disk. But EBS volumes are software-based, which makes them more flexible: their size can be increased, they can be upgraded to a different storage type, and they can be reattached to a different server.
It’s very important to understand parameters like:
- Disk IOPS
- Disk Throughput
But before defining Disk IOPS and Throughput, let’s look at what they actually depend on: mostly, the block size of the disk.
A block is a fixed-size chunk of storage into which data is written or from which data is read.
Storage vendors typically define this block size as 512 bytes for HDD-based disks and 4K for SSD-based disks.
Out-of-the-box tip
On Linux, to find the block size, use the command:
blockdev --getbsz /dev/sda
On Windows, use the command:
fsutil fsinfo ntfsinfo c: (look for "Bytes Per Sector")
Let’s get down to some calculations and math.
If a file is 1000 bytes, it will consume one full sector of 512 bytes, and the remaining 488 bytes will be written into a 2nd sector.
But as each sector is 512 bytes, the unused (512 - 488) = 24 bytes of the 2nd sector will be filled with 0s.
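The sector arithmetic above can be sketched in a few lines of Python (assuming the 512-byte sector size used in the example):

```python
import math

SECTOR_SIZE = 512  # bytes, the HDD sector size assumed in the example above

def sectors_needed(file_size, sector_size=SECTOR_SIZE):
    """Return (sectors used, zero-padded bytes) for a file of file_size bytes."""
    sectors = math.ceil(file_size / sector_size)
    padding = sectors * sector_size - file_size
    return sectors, padding

print(sectors_needed(1000))  # (2, 24): 2 sectors, last 24 bytes padded with 0s
```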
Disk IOPS
- IOPS stands for Input/Output Operations Per Second: the number of read or write operations a disk can perform in one second.
- This depends heavily on the application, specifically on the size of the chunks it reads and writes.
- If the chunk is small, the IOPS can be higher; if the chunk is big, the IOPS will be lower.
For example: If the size of each I/O is 512 bytes (quite small compared to 4K):
- The disk can easily complete a large number of such operations in a given amount of time.
- Whereas if the size of each I/O is 4K, the same disk would take about 8 times as long (4096 / 512 = 8) to read or write each chunk.
- So, the IOPS figure depends on the size of the data to be written or read to/from the disk.
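This inverse relationship between chunk size and IOPS can be sketched with a toy model in Python; the 100MB/s raw transfer rate below is an assumed illustrative figure, not an EBS specification:

```python
# Toy model: assume the disk can move a fixed number of bytes per second.
DISK_BYTES_PER_SEC = 100 * 1024 * 1024  # assumed 100MB/s raw transfer rate

def iops_for_chunk(chunk_bytes):
    """Operations per second if every I/O moves chunk_bytes bytes."""
    return DISK_BYTES_PER_SEC / chunk_bytes

# 512-byte I/Os complete 8 times as many operations as 4K I/Os.
print(iops_for_chunk(512) / iops_for_chunk(4096))  # 8.0
```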
Disk Throughput
- Here we are measuring the throughput of the disk, NOT network throughput.
- Throughput measures the amount of data a disk can read or write per second.
For example: If Disk1 can write 1000 blocks of data per second, and each block is 100K:
- Then the throughput would be 1000 * 100K / second = 100000K/sec = 100MB/second.
- If another disk can write 20 times as many blocks of the same size, its throughput would be 20 times higher.
Eg: Disk2 can write 20000 blocks of 100K each per second, so its throughput would be 20000 * 100K / second = 2000MB/second.
In both examples:
- Disk1 has an IOPS of 1000
- Disk2 has an IOPS of 20000
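The two throughput figures above follow directly from throughput = IOPS * block size; a quick check in Python:

```python
def throughput_mb_per_sec(iops, block_kb):
    """Throughput in MB/s for a disk doing `iops` operations of `block_kb` KB each."""
    return iops * block_kb / 1000  # treating 1MB = 1000K, as in the examples above

print(throughput_mb_per_sec(1000, 100))   # Disk1: 100.0 MB/s
print(throughput_mb_per_sec(20000, 100))  # Disk2: 2000.0 MB/s
```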
A disk may be designed for a maximum throughput of 160MBps, yet advertise different IOPS figures.
This is only possible by changing the size of each I/O.
Let’s take this as an example:
- Disk1 can write 1000 blocks of 160K each. The throughput is 1000 * 160K/s = 160MBps.
- Now, Disk2 claims that it can achieve more IOPS than Disk1. Let’s say it can burst up to a maximum of 2000 IOPS.
So, if Disk2 is also designed for 160MBps of throughput at 2000 IOPS, its I/O size must be 160MB / 2000 = 80K.
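The same formula can be rearranged to solve for the I/O size, which is how the 80K figure is derived; a quick Python check:

```python
def io_size_kb(throughput_mbps, iops):
    """Per-operation I/O size in KB needed to reach a given throughput at a given IOPS."""
    return throughput_mbps * 1000 / iops  # treating 1MB = 1000K, as above

print(io_size_kb(160, 1000))  # Disk1: 160.0 KB per I/O
print(io_size_kb(160, 2000))  # Disk2: 80.0 KB per I/O
```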
To learn more about Amazon Web Services, follow us at LIA Infraservices.