Introduction to Amazon S3
- Key Concept: Amazon S3 is a foundational AWS service offering virtually limitless object storage.
- Importance: Many websites and AWS services depend on its reliability and scalability.
- Use Cases:
- Backup and storage
- Disaster recovery
- Data archiving
- Hybrid cloud storage
- App and media hosting
- Big data (data lakes)
- Software updates
- Static website hosting
- Real-World Examples:
- Nasdaq (data archival)
- Sysco (business insights from data analytics)
S3 Structure
- Buckets: Top-level containers for objects.
- Must have globally unique names.
- Defined within a specific AWS region.
- Naming rules apply (3-63 characters; lowercase letters, numbers, hyphens, and dots; check the documentation for the full list).
- Objects:
- The actual files stored in S3.
- Identified by a key, which is the full file path within the bucket.
- Example:
myfolder1/anotherfolder/myfile.txt
- S3 has no native directory concept; the console simulates folders using key prefixes.
- Maximum size per object: 5 terabytes.
- Objects larger than 5 GB must be uploaded with multipart upload; AWS recommends it for objects over 100 MB (see the boto3 sketch after this list).
- Metadata and Tags
- Objects can carry metadata (key-value pairs), either system-defined or user-defined, as well as tags, which are separate key-value pairs used for categorization, cost allocation, and access control.
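The sketch below ties these ideas together using boto3, the AWS SDK for Python: creating a bucket in a region, writing an object under a "folder" key prefix with user-defined metadata, uploading a large file with multipart upload, and attaching tags. The bucket name, region, local file name, and tag values are illustrative assumptions, not values from these notes; bucket names must be globally unique, so replace them before running.

```python
# Minimal boto3 sketch of buckets, keys, multipart uploads, metadata, and tags.
import boto3
from boto3.s3.transfer import TransferConfig

REGION = "eu-west-1"                      # assumption: buckets are created in a specific region
BUCKET = "my-devops-notes-demo-bucket"    # hypothetical name; must be globally unique

s3 = boto3.client("s3", region_name=REGION)

# 1. Buckets: top-level containers, defined within a specific AWS region.
s3.create_bucket(
    Bucket=BUCKET,
    CreateBucketConfiguration={"LocationConstraint": REGION},
)

# 2. Objects and keys: the key is the full path; "folders" are just key prefixes.
s3.put_object(
    Bucket=BUCKET,
    Key="myfolder1/anotherfolder/myfile.txt",
    Body=b"hello from S3",
    Metadata={"owner": "devops-team"},     # user-defined metadata (key-value pairs)
)

# 3. Large files: upload_file switches to multipart automatically once the file
#    crosses the configured threshold (here 100 MB).
config = TransferConfig(multipart_threshold=100 * 1024 * 1024)
s3.upload_file(
    Filename="big-backup.tar.gz",          # hypothetical local file
    Bucket=BUCKET,
    Key="backups/big-backup.tar.gz",
    Config=config,
)

# 4. Tags: attached as separate key-value pairs.
s3.put_object_tagging(
    Bucket=BUCKET,
    Key="myfolder1/anotherfolder/myfile.txt",
    Tagging={"TagSet": [{"Key": "project", "Value": "demo"}]},
)

# Reading back the user-defined metadata on the object.
head = s3.head_object(Bucket=BUCKET, Key="myfolder1/anotherfolder/myfile.txt")
print(head["Metadata"])                    # {'owner': 'devops-team'}
```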
Key Takeaways
- Amazon S3 is a core building block of AWS due to its versatility and scalability.
- Understand buckets, objects, keys, and the importance of naming conventions.
- S3's many use cases make it an essential service for DevOps engineers to master.