This week we'll be taking a deep dive into the AWS S3 service.
According to the AWS website: "Amazon S3 (Simple Storage Service) is an object-storage service that offers industry-leading scalability, data availability, security and performance. Customers of all sizes and industries can use it to store and protect any amount of data for a range of use cases, such as websites, mobile applications, backup and restore, archive, enterprise applications, IoT devices, and big data analytics. Amazon S3 provides easy-to-use management features so you can organize your data and configure finely-tuned access controls to meet your specific business, organizational, and compliance requirements. Amazon S3 is designed for 99.999999999% (11 9’s) of durability, and stores data for millions of applications for companies all around the world."
S3 is a regionally resilient service, which means that your data is replicated among multiple availability zones within a specific region. It never leaves that region unless you explicitly configure it to do so (which might be crucial from a legal compliance standpoint). S3 is a public service (its endpoints are reachable over the public internet).
S3 is a key-value store. The files you put inside S3 are called objects (they can be literally anything). A single object can range in size from 0 bytes to 5 TB.
When you need to upload anything to S3, you’ll first have to create a bucket (inside a specific region). Buckets are AWS resources that serve as containers for the objects you put into S3.
There are some constraints that should be taken into account when creating S3 buckets:
- The bucket’s name has to be globally unique
- It has to start with either a lowercase letter or a number
- The name cannot be formatted like an IP address
- There is a soft limit of 100 buckets per AWS account (can be changed after reaching out to AWS support). The hard limit is 1000 buckets.
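As a quick illustration, the constraints above can be sketched as a simplified validator (the real AWS rules include a few more details, such as a 3-63 character length limit and no consecutive dots; the function name below is mine):

```python
import re

def is_valid_bucket_name(name: str) -> bool:
    """Check a bucket name against the constraints listed above (simplified)."""
    # 3-63 characters, lowercase letters, digits, hyphens and dots,
    # starting and ending with a letter or digit
    if not re.fullmatch(r"[a-z0-9][a-z0-9.-]{1,61}[a-z0-9]", name):
        return False
    # must not be formatted like an IP address
    if re.fullmatch(r"(\d{1,3}\.){3}\d{1,3}", name):
        return False
    return True

print(is_valid_bucket_name("my-app-assets"))  # True
print(is_valid_bucket_name("192.168.0.1"))    # False: IP-formatted name
print(is_valid_bucket_name("Uppercase"))      # False: uppercase letters
```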
Since S3 is an object store, it has a flat structure (there’s no filesystem underneath). However, to make organizing your bucket easier, it supports the concept of folders: when you add a common prefix to object names, for example
songs/song2.mp3, the AWS console will display songs as a folder that you can click to see only the objects sharing that prefix. Such a structure can be nested, for example
personal/music/songs/song1.mp3 etc. But this is just for the user’s convenience - underneath, these objects are stored in a flat structure. Due to this design, an object’s name (or object’s key, to be more precise) must be unique within the bucket.
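The "folders are just shared prefixes" idea can be sketched in a few lines of plain Python (the keys and the helper name below are made up for illustration; this mimics what the console does conceptually, it is not actual S3 code):

```python
# A bucket is a flat key -> object mapping; "folders" are just shared prefixes.
objects = [
    "personal/music/songs/song1.mp3",
    "personal/music/songs/song2.mp3",
    "personal/photos/cat.jpg",
    "readme.txt",
]

def list_folder(keys, prefix):
    """Emulate the console view: list the direct children of a prefix."""
    children = set()
    for key in keys:
        if key.startswith(prefix):
            rest = key[len(prefix):]
            # Everything up to the next "/" is a direct child;
            # a trailing "/" marks it as a "folder".
            children.add(rest.split("/", 1)[0] + ("/" if "/" in rest else ""))
    return sorted(children)

print(list_folder(objects, ""))                       # ['personal/', 'readme.txt']
print(list_folder(objects, "personal/music/songs/"))  # ['song1.mp3', 'song2.mp3']
```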
Everything inside S3 is private by default. The only identity that will be granted access to the bucket is the account root user that owns the bucket. Any additional permissions should be configured with the AWS IAM service.
This can be done via:
- S3 bucket policies (similar to identity policies, but attached to a bucket instead). A bucket policy controls who can access the resource it is attached to, and it allows for granting access to identities outside of your AWS account (including anonymous principals).
- Access control lists (on objects and buckets) - but this is considered a legacy feature and AWS does not recommend using it anymore.
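For illustration, here’s what a minimal bucket policy granting anonymous read access to every object might look like (the bucket name is a placeholder):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowPublicRead",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::example-bucket/*"
    }
  ]
}
```

Note that `"Principal": "*"` means anonymous access - exactly the kind of setting worth double-checking before applying.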
Due to some security incidents caused by misconfigured bucket permissions (opening a bucket to the public internet by mistake), AWS introduced the ‘Block Public Access’ feature. These are permission boundaries applied to all anonymous principals. If you configured public access to a bucket and it doesn’t work as expected, check these settings as they might be blocking it.
S3 Static Website Hosting
S3 allows for hosting static websites inside buckets. The website endpoint supports only HTTP access, so you shouldn’t place any sensitive data there (passwords etc.) as it will be transferred over the wire in plain text.
To use this feature, you have to enable it in the bucket settings and provide files that will serve as the index and error pages. By default, the URL for your webpage will be generated by the service and you won’t be able to change it. However, you can register a custom domain through Route53 and use it for your webpage. For this to work, the name of the bucket hosting the static website must match your custom domain name (this is obligatory).
Object Versioning
Object versioning is a feature that allows storing multiple versions of the same object inside S3. It is disabled by default and, when disabled, objects you upload are identified purely by their keys. If you upload an object with a key that already exists in the bucket (and versioning is disabled), you’ll overwrite that object. When object versioning is disabled, the ‘id’ property of an object is null.
When object versioning is enabled, every object uploaded to S3 will have its ‘id’ property filled in, and each object will be identified by a combination of its id and key values. This time, when you upload an object under an already existing key, a new version of the object will be created (with a new id and the same key) and marked as the ‘latest’ version. All previous versions are kept as well, so you retain the history of changes to the object. Whenever you retrieve an object by providing just the key, its latest version will be returned. If you provide both the key and the id, that specific version will be returned.
When an object is deleted without specifying its ID, a special marker called a ‘delete marker’ is added on top of all the object’s previous versions, which hides the object (but doesn’t delete it permanently). Delete markers can be ‘undeleted’.
If you specify both the object’s ID and key for deletion, that specific version will be permanently deleted.
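To make the key/ID mechanics concrete, here’s a toy in-memory model of versioning (this is not the S3 API - real version IDs are opaque strings generated by S3, and the class below exists purely to illustrate latest-version retrieval and delete markers):

```python
import uuid

class VersionedBucket:
    """Toy model of S3 object versioning (not the real API)."""
    def __init__(self):
        self.versions = {}  # key -> list of (version_id, body), newest last

    def put(self, key, body):
        vid = uuid.uuid4().hex  # real S3 version IDs are opaque strings too
        self.versions.setdefault(key, []).append((vid, body))
        return vid

    def get(self, key, version_id=None):
        history = self.versions.get(key, [])
        if version_id is None:
            if not history or history[-1][1] is None:  # delete marker on top
                raise KeyError(key)
            return history[-1][1]  # latest version
        return next(body for vid, body in history if vid == version_id)

    def delete(self, key, version_id=None):
        if version_id is None:
            self.put(key, None)  # add a delete marker; old versions stay
        else:
            self.versions[key] = [  # permanent delete of one version
                (vid, body) for vid, body in self.versions[key] if vid != version_id
            ]

b = VersionedBucket()
v1 = b.put("notes.txt", "first draft")
v2 = b.put("notes.txt", "second draft")
print(b.get("notes.txt"))      # second draft (latest version)
print(b.get("notes.txt", v1))  # first draft (specific version)
b.delete("notes.txt")          # delete marker: object is hidden, not gone
```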
The lifecycle of object versioning looks as follows:
- It’s disabled by default
- Once enabled, cannot be disabled again
- It’s only possible to move it to a suspended state (from which you can re-enable it later when required).
When using object versioning, you’ll be billed for the space consumed by all currently existing versions. If you don’t want to be billed anymore, you can delete the whole bucket (suspending versioning does not remove the versions that already exist).
Uploading into S3
There are different ways you can upload an object to S3.
- Single PUT upload - it creates a single data stream to S3. If it fails due to a network connectivity problem or any other issue, the whole upload fails and has to be redone from scratch. The speed of that single stream also limits the total upload time.
- Multipart upload - data is divided into multiple pieces (each piece is uploaded over a separate stream). AWS recommends it once objects exceed 100 MB. You can upload up to 10000 parts (each part ranging in size from 5MB to 5GB; the last part can be smaller than 5MB). If any stream fails, it can be restarted without impacting the remaining streams. This makes the process of uploading big files much more reliable and provides a better transfer rate. Whenever possible, choose multipart upload over single upload.
- S3 Accelerated Transfer
When S3 Accelerated Transfer is enabled, it will first upload your data to the nearest edge location. Then it’ll be transferred to the destination bucket over AWS’s high-speed private network (not the public internet). AWS manages this process so that data travels the most efficient route.
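Back to multipart upload for a moment - its limits can be made concrete with a bit of arithmetic (the helper and the default part size below are arbitrary choices of mine, not an AWS API):

```python
MIB = 1024 ** 2
GIB = 1024 ** 3

MIN_PART = 5 * MIB    # every part except the last must be at least 5 MiB
MAX_PART = 5 * GIB
MAX_PARTS = 10_000

def plan_multipart(total_size, part_size=100 * MIB):
    """Split an upload into (part_number, size) chunks within S3's limits."""
    assert MIN_PART <= part_size <= MAX_PART
    parts = []
    offset = 0
    while offset < total_size:
        size = min(part_size, total_size - offset)  # last part may be smaller
        parts.append((len(parts) + 1, size))
        offset += size
    assert len(parts) <= MAX_PARTS
    return parts

# A 2.5 GiB file with 1 GiB parts -> two full parts plus a smaller last part
parts = plan_multipart(int(2.5 * GIB), part_size=1 * GIB)
print(len(parts))                # 3
print(parts[-1][1] == GIB // 2)  # True: the last part is only 0.5 GiB
```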
S3 Encryption
AWS handles in-transit encryption by using HTTPS endpoints for S3. At-rest encryption can be handled either by the client (client-side encryption) or by AWS (server-side encryption).
Client-side encryption means that you take care of the whole process - you send already encrypted files to S3. The advantage of such an approach is that you have full control over the encryption algorithms and key rotation processes.
The disadvantage of client-side encryption is the admin overhead: you need to store your encryption keys, know which key was used to encrypt a specific file, and take care of key rotation. The encryption process will also consume some of your own computational resources.
If you’d like to get rid of the admin overhead, you can go with server-side encryption. There are 3 ways of implementing it:
- Server-side encryption with customer provided keys (SSE-C)
- Server-side encryption with Amazon S3-Managed Keys (SSE-S3)
- Server-side encryption with customer master keys (CMKs) stored in the AWS KMS service (SSE-KMS)
- SSE-C means that you provide S3 with the encryption key that should be used for encrypting a specific object. The actual encryption process is done inside S3 (so it doesn’t consume any additional CPU power on your side). Instead of storing the actual key, S3 stores only its hash. When you request S3 to retrieve an SSE-C encrypted object, you’ll need to provide the key as well. S3 will compare hashes and, if they match, will return the object.
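The hash-comparison idea behind SSE-C can be sketched as a toy model (real S3 derives a salted hash of the key you send in request headers and, of course, actually encrypts the payload - nothing below is the real protocol, just the concept):

```python
import hashlib

stored_key_hashes = {}  # object key -> hash of the customer-provided key

def put_sse_c(obj_key, data, customer_key: bytes):
    # S3 encrypts server-side and keeps only a hash of the key, never the key
    stored_key_hashes[obj_key] = hashlib.sha256(customer_key).hexdigest()
    # ... the encrypted payload would be stored here ...

def get_sse_c(obj_key, customer_key: bytes):
    # On retrieval the caller must supply the same key; hashes are compared
    if hashlib.sha256(customer_key).hexdigest() != stored_key_hashes[obj_key]:
        raise PermissionError("key does not match the one used to encrypt")
    return "<decrypted object>"

put_sse_c("report.pdf", b"...", customer_key=b"0" * 32)
print(get_sse_c("report.pdf", customer_key=b"0" * 32))  # <decrypted object>
```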
- SSE-S3 (AES-256) means that you provide S3 only with the plain-text object you want to upload. S3 maintains one master key and generates a new key for each object uploaded using this method. The whole process is offloaded to AWS (you don’t have any influence on it and no access to key rotation); you only know that it is handled with the AES-256 algorithm. SSE-S3 will probably be the default path for most use cases. If your business requires control over the generation, rotation, and management of the keys, then you should go with the other option, discussed below.
- SSE-KMS uses a customer master key and data encryption keys for the encryption process. This way, you can control not only key permissions but also the rotation process. As this is still server-side encryption, the CPU power required for encrypting is provided by AWS. This approach also allows for role separation - by leveraging IAM you can create admin users able to create and rotate encryption keys who, at the same time, won’t have enough permissions to use these keys to encrypt or decrypt any objects.
Object storage classes
S3 offers different classes of storage which vary in terms of speed of object retrieval and price paid for storage.
S3 Standard
The default storage class. When you don’t explicitly choose another storage class, S3 Standard will be used. It replicates the data to at least 3 AZs. It provides 11 9’s of durability and 4 9’s of availability. There are no minimums/delays/penalties when you upload/download the data.
S3 Standard-IA (infrequent access)
It’s suitable for storing files that are accessed less frequently but, when they are requested, access should be rapid. Compared to S3 Standard, S3 Standard-IA has a cheaper base rate (~54% of the Standard rate). There’s a minimum charge of 128KB per object - if the file you upload to the IA class is, for example, 64KB, you’ll be charged for 128KB anyway. Additionally, there’s a 30-day minimum duration charge per object and a per-GB data retrieval fee. Availability of files stored in this class is slightly lower than in S3 Standard (99.9%).
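The 128KB minimum charge works out like this (a trivial sketch; the function name is mine):

```python
MIN_BILLABLE = 128 * 1024  # bytes

def billable_size_ia(object_size: int) -> int:
    """S3 Standard-IA bills at least 128 KB of storage per object."""
    return max(object_size, MIN_BILLABLE)

print(billable_size_ia(64 * 1024))   # 131072: a 64 KB object is billed as 128 KB
print(billable_size_ia(512 * 1024))  # 524288: larger objects billed at actual size
```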
S3 One Zone-IA
It has all the trade-offs that S3 Standard-IA has, but it’s cheaper (~80% of the Standard-IA base cost). The most important thing to consider when using this storage class is that the data will be stored in a single AZ only (no replication). It has 99.5% availability.
S3 Glacier
All the previous classes offer millisecond first-byte latency, which none of the Glacier storage classes support. Glacier is designed for cold archival data. Whenever you need to retrieve data from Glacier storage, you’ll need to initiate a restore process (which takes time).
Glacier has 11 9’s of durability, 4 9’s of availability and provides data replication (to at least 3 availability zones). A 40KB minimum object capacity charge applies, and the minimum storage duration charge is 90 days. Retrieval of objects may take from minutes to hours.
When you restore objects from Glacier and need them ASAP, you can use expedited retrieval (access to data within 1 to 5 minutes; most expensive). Standard retrieval takes from 3 to 5 hours (but is cheaper than expedited). The last and cheapest option is bulk retrieval (5 to 12 hours).
S3 Glacier Deep Archive
It is designed for backups or as a tape-drive storage equivalent. It costs about 4.3% of the base S3 Standard rate, with a 180-day minimum storage duration charge. Retrieval of the data is expected to happen within 12 hours.
S3 Intelligent-Tiering
It’s a combination of the S3 Standard and S3 Standard-IA storage classes. It consists of 2 tiers - one for frequent access and one for infrequent access. AWS monitors the objects inside this storage class - any object that hasn’t been accessed for 30 consecutive days is automatically moved to the infrequent access tier, which means lower storage fees. The only additional charge connected with Intelligent-Tiering is the one you pay for monitoring ($0.0025 per 1000 objects). It might be a good choice when you don’t know the access patterns of the data you store in S3.
Lifecycle policies
These are sets of rules that define the storage class of an object based on its access patterns. Rules consist of actions, and policies are applied to buckets or groups of objects. Actions are of either the transition or the expiration type. A transition action changes an object’s storage class; an expiration action deletes an object based on specified conditions.
The flow of objects is unidirectional - for example, objects can be transitioned from S3 Standard to S3 Standard-IA or any other storage class below it (but not the other way around).
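A lifecycle configuration combining both action types might look roughly like this (the prefix, day counts and rule ID are made up for illustration):

```json
{
  "Rules": [
    {
      "ID": "archive-old-logs",
      "Filter": { "Prefix": "logs/" },
      "Status": "Enabled",
      "Transitions": [
        { "Days": 30, "StorageClass": "STANDARD_IA" },
        { "Days": 90, "StorageClass": "GLACIER" }
      ],
      "Expiration": { "Days": 365 }
    }
  ]
}
```

This rule transitions objects under `logs/` to Standard-IA after 30 days, to Glacier after 90, and deletes them after a year.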
S3 Replication
S3 supports two types of replication:
- Cross-Region Replication (CRR)
- Same-Region Replication (SRR)
Replication can happen between a source and a destination bucket in the same AWS account or between different accounts. When data is replicated within one account, both buckets use the same IAM role for performing this action and the destination bucket trusts this role by default. That isn’t the case when replication happens between different accounts, so an additional bucket policy on the destination bucket is required.
You can replicate either the whole bucket (all objects) or just a subset of them. You can also decide which storage class will be used for the replicated data in the destination bucket. By default, ownership of replicated data is the same as in the source bucket, but you can change this behaviour when replicating data between different accounts. If you require an SLA on the replication process, you can enable Replication Time Control, which ensures a 15-minute SLA on the process + gives you visibility into the replication queue.
Replication is not retroactive - only new objects (created after replication was enabled) will be replicated. Versioning has to be enabled on both buckets as well.
Replication is a one-way process (from source to destination).
It supports unencrypted, SSE-S3 and SSE-KMS encrypted objects; it does not work with SSE-C encrypted objects.
Only user-generated events are replicated (no system events or any actions inside Glacier/Glacier Deep Archive).
Deletes are not replicated.
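Putting the pieces together, a simplified (hypothetical) replication configuration might look like this - note the IAM role mentioned above and DeleteMarkerReplication disabled, matching the "deletes are not replicated" default:

```json
{
  "Role": "arn:aws:iam::111122223333:role/s3-replication-role",
  "Rules": [
    {
      "ID": "replicate-documents",
      "Priority": 1,
      "Status": "Enabled",
      "Filter": { "Prefix": "documents/" },
      "DeleteMarkerReplication": { "Status": "Disabled" },
      "Destination": {
        "Bucket": "arn:aws:s3:::example-destination-bucket",
        "StorageClass": "STANDARD_IA"
      }
    }
  ]
}
```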
S3 Presigned URLs
This feature allows an S3 object’s owner to share it with others by creating a presigned URL, using their own security credentials, to grant time-limited permission to download the object. Presigned URLs are valid only for the specified duration (when creating them, you have to provide an expiration datetime).
Anyone with valid security credentials can create a presigned URL. This means that anyone can create a presigned URL for any object (including objects that person does not have access to). In such a case, no one using the link will be able to access the object (since the creator of the presigned URL did not have the required permissions at the time of generating it).
An important thing to note is that when you generate a presigned URL while using a temporary STS token, the presigned URL will become invalid along with your token (even if you set the presigned URL’s expiration time to be later).
Presigned URLs can be generated programmatically using the REST API, AWS CLI, or AWS SDK in any language it supports.
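The STS caveat above boils down to "the earlier expiration wins", which can be sketched in plain Python (the helper name is mine, not an AWS API):

```python
from datetime import datetime, timedelta, timezone

def effective_expiry(url_expires_at, credentials_expire_at=None):
    """A presigned URL stops working at the earlier of its own expiration
    and the expiration of the (temporary) credentials that signed it."""
    if credentials_expire_at is None:  # long-lived IAM user credentials
        return url_expires_at
    return min(url_expires_at, credentials_expire_at)

now = datetime(2024, 1, 1, tzinfo=timezone.utc)
url_exp = now + timedelta(days=7)
sts_exp = now + timedelta(hours=1)  # temporary STS session token

print(effective_expiry(url_exp, sts_exp) == sts_exp)  # True: URL dies with the token
print(effective_expiry(url_exp) == url_exp)           # True: only the URL expiry applies
```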
Partial object retrieval with S3 Select and Glacier Select
Both of these features let you use SQL-like statements to fetch filtered data from S3 or Glacier. Data filtering is offloaded to AWS, which helps you reduce the cost of data retrieval (you only pay for the relevant data you actually fetch).
Summary
- S3 is a regionally resilient key-value store.
- S3 is a highly available object-storage service.
- You put objects inside buckets, which are created in specific region.
- A bucket name must be globally unique, start with either a lowercase letter or a number, and can’t be in IP address format.
- Bucket limits: 100 buckets per account (soft), 1000 buckets per account (hard).
- S3 doesn’t have an underlying filesystem; it is a flat structure.
- For convenience’s sake, it supports the concept of folders (created through a prefix ending with the “/” character).
- Everything inside S3 is private by default.
- Objects can be shared with others through S3 bucket policies (recommended way) and access control lists (ACLs - legacy, don’t use them).
- Be aware of ‘Block Public Access’ feature.
- S3 can host static websites. You must upload index and error pages (they can be the same file) to the bucket and enable static website hosting in the settings. You can use a custom domain in Route53 only if your domain name matches the bucket name.
- Object versioning allows for storing multiple versions of a particular object. It’s disabled by default. Once enabled, it cannot be disabled anymore - you can only move it to suspended status.
- When versioning is on and you retrieve object by its key only (no ID provided), its latest version will be returned. When combining key with ID, you can retrieve an object's specific version.
- When you delete an object just by its key, a delete marker is created and the object becomes hidden. It’s not removed permanently. When you delete an object by specifying both key and ID, the requested version is deleted permanently.
- When versioning is ON, you’re billed for all versions stored in the bucket.
- You can upload files to S3 in 3 ways: single PUT upload, multipart upload, S3 accelerated transfer.
- Single PUT upload creates a single stream of data. If the stream fails, the whole upload fails.
- Multipart upload uses multiple streams of data which, if they fail, can be restarted individually. It’s much more efficient than a single data stream and is recommended for objects over 100 MB (each part’s size may vary from 5MB to 5GB; the last part can be smaller than 5MB).
- S3 Accelerated Transfer provides more efficient data transfer - files are first uploaded to the nearest edge location and transferred from there over AWS’s internal network (which is connected in a much more optimal way than the public internet).
- S3 uses HTTPS, so everything is encrypted in-transit.
- For at-rest encryption, the client can manage the whole encryption process, or it can be offloaded to S3 (server-side encryption).
- SSE-C - you provide S3 with the encryption key; the encryption process is done on the AWS side.
- SSE-S3 (AES-256) - you send a plain-text object to S3 and the whole encryption process is managed by AWS. You don’t have any control over picking or rotating the encryption keys.
- SSE-KMS - server-side encryption with customer master keys (CMKs) stored in the AWS KMS service. Key permissions and rotation management can be handled by the client. It also allows for role separation.
- S3 supports multiple storage classes, which differ in data access characteristics and the price you’re ready to pay.
- S3 Standard - the default class. Data replicated to at least 3 AZs. 11 9’s of durability and 4 9’s of availability. You can access the data instantly.
- S3 Standard-IA (infrequent access) - cheaper base rate than the Standard class. Minimum charge of 128KB per object. 30-day minimum duration charge per object + per-GB data retrieval fee. 99.9% availability; data can be accessed instantly.
- S3 One Zone-IA - data stored in one AZ only. Cheaper than Standard & Standard-IA. 99.5% availability.
- S3 Glacier - storage for data that is accessed less frequently. Once uploaded, you cannot retrieve it immediately. 11 9’s of durability, 4 9’s of availability. Minimum storage duration charge - 90 days.
- Object restoration time may vary. Expedited retrieval takes between 1 and 5 minutes (most expensive). Standard retrieval takes from 3 to 5 hours (cheaper than expedited). The cheapest is bulk retrieval (between 5 and 12 hours).
- S3 Glacier Deep Archive - meant to replace tape-drive backups. The cheapest of all the classes. 180-day minimum storage duration charge. Retrieval can take up to 12 hours.
- Intelligent-Tiering - dynamic storage class that monitors data access patterns and moves objects from the frequent access tier to the infrequent access tier when not accessed for 30 consecutive days. There’s a monitoring fee ($0.0025 per 1000 objects).
- Life-cycle policies - sets of rules that allow for either moving objects to other storage classes based on user-defined conditions (transition actions) or deleting objects (expiration actions).
- The life-cycle transition flow is unidirectional - you can move an object from a more expensive class to a cheaper one, not the other way around.
- S3 replication comes in two flavours: CRR (cross-region replication) and SRR (same-region replication). It allows for copying data between a source and a destination bucket. Permissions are handled via an IAM role. When copying data between different AWS accounts, an additional bucket policy should be added to the destination bucket (to make it trust the source bucket’s account).
- Whole bucket or just the subset of objects can be replicated.
- Replication is not retroactive.
- It’s a one way process - from source to destination.
- Deletes and system events are not replicated.
- Presigned URLs can be generated by anyone holding valid AWS security credentials. A presigned URL allows for temporary object sharing; you set an expiration date for the link when generating it.
- When generated using temporary STS credentials, presigned URLs expire when the STS credentials expire (even if the presigned URL’s expiration date is later).
- You don’t have to retrieve whole objects from S3 or Glacier. If you need to filter the data, you can use S3 Select or Glacier Select - a SQL-like interface that filters the data before it is downloaded. The filtering is done on the AWS side and you are billed just for the filtered data.
See you in the next post, Kuba