
Amazon EBS vs EFS vs S3: What’s the Difference?

Author : Rishiraj Nandedkar
Date : January 02, 2019
Category : Multi-Cloud

AWS gives us three different storage services – EBS, EFS and S3. Each has its own strengths and purpose. Here are the key features and architectural benefits of each:

Amazon EBS
Every server needs a drive. Amazon Elastic Block Store, or EBS, is essentially cloud-based storage for the drives of your virtual machines – the equivalent of a D: or E: drive on Windows (or a mount point such as /data on Linux). The core principle of EBS is that it stores data as blocks of the same size, which the file system on the instance organizes into a hierarchy, much like a traditional disk.

Amazon EBS is designed to store data in blocks (volumes of a provisioned size) attached to an Amazon EC2 instance, similar to a local disk drive on your physical machine. In a physical environment, such a volume would come from a storage array or from the server itself. The key difference is that EBS is elastic (easily scalable and extensible), whereas a local disk has finite capacity.
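To make the block-device analogy concrete, here is a minimal sketch in Python (boto3) that creates a gp2 volume and attaches it to an EC2 instance. The region, Availability Zone, instance ID and device name are placeholder values for illustration, not taken from the article.

    import boto3

    # Placeholder region/AZ and instance ID - substitute your own.
    ec2 = boto3.client("ec2", region_name="ap-south-1")

    # Create a 100 GiB General Purpose SSD (gp2) volume.
    volume = ec2.create_volume(
        AvailabilityZone="ap-south-1a",
        Size=100,
        VolumeType="gp2",
    )

    # Wait for the volume to become available, then attach it to the instance,
    # where it shows up as a block device just like a local drive.
    ec2.get_waiter("volume_available").wait(VolumeIds=[volume["VolumeId"]])
    ec2.attach_volume(
        VolumeId=volume["VolumeId"],
        InstanceId="i-0123456789abcdef0",  # placeholder instance ID
        Device="/dev/sdf",
    )

Inside the instance you would then create a file system on the new device and mount it, exactly as you would for a freshly added physical disk.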

There are four types of volumes in Amazon EBS. To understand the difference, you need to know what IOPS is: "IOPS" stands for input/output operations per second – put simply, the maximum number of read/write operations you can perform per second. To choose the right Amazon EBS volume type, you need to consider a number of parameters, such as:

  • IOPS and throughput requirements for your application
  • Read and write ratio
  • Data type (Random or sequential)
  • Chunk size of data (to align EBS volume to your application)

The four volume types are:

  • General Purpose SSD (gp2) – General purpose SSD volume that balances price and performance for a wide variety of workloads
  • Provisioned IOPS SSD (io1) – Highest-performance SSD volume for mission-critical low-latency or high-throughput workloads
  • Throughput Optimized HDD (st1) – Low-cost HDD volume designed for frequently accessed, throughput-intensive workloads
  • Cold HDD (sc1) – Lowest cost HDD volume designed for less frequently accessed workloads
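As a rough sketch of how the volume type and IOPS figure come together, the boto3 call below provisions an io1 volume with an explicit IOPS value; the other types (gp2, st1, sc1) are created the same way, just without the Iops parameter. The size and IOPS numbers here are arbitrary examples, not recommendations.

    import boto3

    ec2 = boto3.client("ec2", region_name="ap-south-1")  # placeholder region

    # Provisioned IOPS SSD: you declare the IOPS your application needs up front.
    io1_volume = ec2.create_volume(
        AvailabilityZone="ap-south-1a",
        Size=500,        # GiB
        VolumeType="io1",
        Iops=10000,      # provisioned IOPS for this volume
    )
    print(io1_volume["VolumeId"], io1_volume["Iops"])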

Amazon EFS
At some point, it became clear that EBS is good for giving a virtual machine its drive, but what if you want to run an application with heavy workloads that needs scalable storage and relatively high throughput? Amazon Elastic File System (EFS) was created to cater to these needs.

Amazon EFS is automatically scalable: your running applications won't have any problems if the workload suddenly grows, because the storage scales itself up automatically. If the workload decreases, the storage scales back down, so you don't pay for storage you aren't using.

You can mount EFS on various AWS services and access it from multiple virtual machines at the same time. Amazon EFS is especially helpful for running servers, shared volumes (like NAS devices), big data analysis, and any scalable workload.
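A minimal boto3 sketch of that workflow, assuming a placeholder subnet and security group: create the file system, then add a mount target so EC2 instances in that subnet can all mount it over NFS.

    import boto3

    efs = boto3.client("efs", region_name="ap-south-1")  # placeholder region

    # Create an elastic file system; capacity grows and shrinks automatically.
    fs = efs.create_file_system(
        CreationToken="shared-app-data",   # any unique string (idempotency token)
        PerformanceMode="generalPurpose",
    )

    # Expose it inside a VPC subnet. Instances there can then mount it, e.g.
    #   mount -t nfs4 <file-system-dns-name>:/ /mnt/efs
    efs.create_mount_target(
        FileSystemId=fs["FileSystemId"],
        SubnetId="subnet-0123456789abcdef0",      # placeholder subnet
        SecurityGroups=["sg-0123456789abcdef0"],  # placeholder security group
    )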

Amazon S3
Amazon S3 stores data as objects in a flat environment (without a hierarchy). Each object (file) in the storage consists of the data itself – a sequence of bytes anywhere from 0 bytes to 5 TB – together with descriptive metadata (a header). Every object in Amazon S3 is addressed by a unique identifier (key), so it can be accessed through web requests from anywhere.
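A short boto3 sketch of that model, using a placeholder bucket name: objects are written and read purely by key, and any "folder" prefix is just part of the flat key string.

    import boto3

    s3 = boto3.client("s3")
    bucket = "example-blog-demo-bucket"  # placeholder bucket (must already exist)

    # Store an object under a key; "reports/2019/" is part of the key itself,
    # not a real directory in a hierarchy.
    s3.put_object(
        Bucket=bucket,
        Key="reports/2019/january.csv",
        Body=b"date,value\n2019-01-02,42\n",
    )

    # Retrieve the same object by its key from anywhere with access.
    obj = s3.get_object(Bucket=bucket, Key="reports/2019/january.csv")
    print(obj["Body"].read().decode())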

Amazon S3 also allows hosting static website content. It is a highly scalable storage service with its famous "eleven nines" of data durability (99.999999999%).
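Static website hosting is a single bucket configuration call. A hedged sketch with the same placeholder bucket (the public-read bucket policy and the uploads of index.html and error.html are omitted here):

    import boto3

    s3 = boto3.client("s3")

    # Turn the bucket into a static website endpoint with index and error pages.
    s3.put_bucket_website(
        Bucket="example-blog-demo-bucket",  # placeholder bucket name
        WebsiteConfiguration={
            "IndexDocument": {"Suffix": "index.html"},
            "ErrorDocument": {"Key": "error.html"},
        },
    )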

Author : Rishiraj Nandedkar

DGM and Practice Head for Data Protection


Rishiraj has been instrumental in building the Data Protection portfolio at Netmagic. In over a decade at the organisation, he has assisted several of its enterprise customers through highly complex DRaaS, backup and storage deployments, and demanding data migration scenarios.