S3 Bucket: CloudTrail Log Analysis

Hacktivities · Published in InfoSec Write-ups · Aug 31, 2022

This article covers my approach to solving the Bucket CTF challenge created by Scott Piper on the CyberDefenders website: a blue-team-focused challenge that requires you to analyze a copy of AWS CloudTrail logs and spot the misconfigurations that allowed a successful compromise to occur.

Disclaimer

I like to add a brief disclaimer before a writeup to encourage people to attempt the challenge before reading this article, since there will obviously be spoilers. I believe you will enjoy the CTF more if you attempt it yourself first and then come back to this writeup if you get stuck or need a hint. So without any further delay, let's get started!

CyberDefenders Questions & Answers

1. What is the full AWS CLI command used to configure credentials?

The AWS CLI stores sensitive credential information that you specify with “aws configure” in a local file named “credentials”, in a folder named “.aws” in your home directory.

aws configure
AWS CLI command used to configure credentials.
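
For reference, running “aws configure” writes the access key pair into “~/.aws/credentials” using a simple INI layout (the values below are the placeholder examples from the AWS documentation, not the challenge credentials):

[default]
aws_access_key_id = AKIAIOSFODNN7EXAMPLE
aws_secret_access_key = wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY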

2. What is the ‘creation’ date of the bucket ‘flaws2-logs’?

I logged into the AWS Management Console and selected the S3 service, where I can see a single bucket called “flaws2-logs” that was created on “November 19, 2018, 20:54:31 (UTC+00:00)”.

S3 Bucket “flaws2-logs” Details.

3. What is the name of the first generated event, according to time?

Selecting the S3 bucket “flaws2-logs” and navigating down the folder structure, I can see there are eight log objects stored in the bucket, ordered from oldest to newest, top to bottom.

S3 Bucket Objects.

I downloaded all eight log files and used the naming convention “CloudTrail-#.json”, with “CloudTrail-1.json” being the oldest. I opened the “CloudTrail-1.json” file and saw that the first generated event name was “AssumeRole”.

Name of the first generated event according to time.
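
The same answer can be confirmed programmatically. The short sketch below (assuming the eight files are saved in the current working directory using the naming convention above) loads every record and prints the earliest one; CloudTrail “eventTime” values are ISO 8601 strings, so they sort chronologically.

# Sketch: find the earliest CloudTrail event across the downloaded log files.
# Assumes the files are named CloudTrail-1.json through CloudTrail-8.json.
import glob
import json

records = []
for path in glob.glob("CloudTrail-*.json"):
    with open(path) as f:
        records.extend(json.load(f)["Records"])

earliest = min(records, key=lambda r: r["eventTime"])
print(earliest["eventTime"], earliest["eventName"])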

4. What source IP address generated the event dated 2018–11–28 at 23:03:20 UTC?

After reviewing the JSON structure of the CloudTrail logs, I decided to create a simple Python script called “json-print.py” to answer this question.

“json-print.py” Python script.
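
The script itself only appears as an image above, so here is a minimal sketch of what it might look like, assuming the standard CloudTrail layout of a top-level “Records” array:

# json-print.py - minimal sketch of the helper script described above.
# Prints the time, name, source IP, and identity ARN of every record in a CloudTrail log file.
import json
import sys

with open(sys.argv[1]) as f:
    log = json.load(f)

for record in log["Records"]:
    print(record["eventTime"],
          record["eventName"],
          record.get("sourceIPAddress", "-"),
          record.get("userIdentity", {}).get("arn", "-"))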

After executing this script, I can see the source IP address was “34[.]234[.]236[.]212”.

python3 json-print.py CloudTrail-4.json
The source IP address “34[.]234[.]236[.]212” generated the event dated 2018–11–28 at 23:03:20 UTC.

Searching the IP address in VirusTotal shows that it belongs to the Amazon AWS infrastructure.

IP address belonging to Amazon AWS infrastructure.

5. Which IP address does not belong to Amazon AWS infrastructure?

Reviewing the IP addresses seen in the earlier figure, I can see the IP address “104[.]102[.]221[.]250”. Searching for this IP address in VirusTotal shows that it does not belong to the Amazon AWS infrastructure.

IP address not belonging to Amazon AWS infrastructure.

6. Which user issued the “ListBuckets” request?

Looking at the log file “CloudTrail-7.json”, I can see a single event. I decided to use a website called beautifier.io to help format the JSON code. I can see that the user “level3” issued the “ListBuckets” request.

“level3” user issued the “ListBuckets” request.
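
As an alternative to a third-party website, the JSON can be pretty-printed locally with Python’s built-in json.tool module:

python3 -m json.tool CloudTrail-7.json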

7. What was the first request issued by the user “level1”?

Looking at the log file “CloudTrail-2.json”, I can see a single event which shows the first request issued by the user “level1” was “CreateLogStream”.

“level1” user issued the “CreateLogStream” request.
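
To double-check this, the quick-and-dirty sketch below lists every “level1” event across the eight files in chronological order; it simply looks for the string “level1” inside each record’s userIdentity block, which is loose but good enough for a handful of logs:

# Sketch: list every event tied to "level1", oldest first.
# The substring match on userIdentity is deliberately loose; tighten it if needed.
import glob
import json

events = []
for path in glob.glob("CloudTrail-*.json"):
    with open(path) as f:
        for record in json.load(f)["Records"]:
            if "level1" in json.dumps(record.get("userIdentity", {})):
                events.append((record["eventTime"], record["eventName"]))

for event_time, event_name in sorted(events):
    print(event_time, event_name)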

Incident Investigation

Using the information identified from answering the questions above, I started to investigate the incident further. When pentesting S3, one of the first things you will want to do is look at the policies attached to an S3 bucket. Bucket policies and access control lists (ACLs) are used for access control, acting as the front line that allows or denies access to S3 resources. We can use the AWS CLI to retrieve the bucket policy for the “flaws2-logs” bucket, or view the policy through the S3 management console.

aws s3api get-bucket-policy --bucket flaws2-logs
“flaws2-logs” Bucket Policy.

I can see that the principal (i.e., the person or application that can make a request for an action or operation on an AWS resource) is set to anyone using the asterisk wildcard, and the S3 action “GetObject” allows objects to be retrieved from the bucket. I can also see that anyone can perform the “ListBucket” and “GetBucketLocation” actions. Over the years, quite a number of serious data leaks at major companies have taken place due to open Amazon S3 buckets. To test the openness of the “flaws2-logs” S3 bucket, I can attempt to access it through its URL. The URL format of a bucket is either of two options:

  • Path-style: http://s3.amazonaws.com/<bucket-name>/
  • Virtual-hosted-style: http://<bucket-name>.s3.amazonaws.com/

A private bucket will return a message of “Access Denied,” and no bucket contents will be shown. With a public bucket, however, visiting the URL will list the first 1000 files contained in that bucket. Browsing to “http://s3.amazonaws.com/flaws2-logs/” shows that the bucket is publicly accessible, and I can see the eight CloudTrail logs stored in the bucket.

“flaws2-logs” bucket is publicly accessible.
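
The same unauthenticated check can be performed from the command line by telling the AWS CLI not to sign the request; the listing only succeeds because the bucket policy shown earlier allows anonymous “ListBucket” calls:

aws s3 ls s3://flaws2-logs/ --recursive --no-sign-request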

Now that I have confirmed that the bucket is accessible to the public and which requests can be performed, I moved on to reviewing the logs in more detail. Looking at the log file “CloudTrail-7.json”, I can see a “ListBuckets” request from the user “level3”.

“level3” user issued the “ListBuckets” request.

Inspecting this role, I can see that the description states it should only be run by ECS services, because the AssumeRolePolicyDocument allows only that one principal. The “104[.]102[.]221[.]250” IP address also does not belong to the Amazon AWS infrastructure.

aws --profile target_security iam get-role --role-name level3
“level3” role description.
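
For illustration, the trust policy of a role that should only be assumed by ECS tasks generally looks like the snippet below; this is a generic example of such an AssumeRolePolicyDocument rather than a verbatim copy of the output above:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "Service": "ecs-tasks.amazonaws.com"
            },
            "Action": "sts:AssumeRole"
        }
    ]
}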

Based on these details, we can assume that the ECS container was compromised and its credentials were stolen, since normally we would expect the resource (the ECS task in this case) to make AWS API calls from its own IP address rather than from “104[.]102[.]221[.]250”. Looking back at earlier events, I can see that the user “level1” performed “ListImages”, “BatchGetImage”, and “GetDownloadUrlForLayer” requests.

python3 json-print.py CloudTrail-5.json
“level1” performed “BatchGetImage” and “GetDownloadUrlForLayer” requests.
python3 json-print.py CloudTrail-4.json
“level1” performed a “ListImages” request.

Inspecting the “ListImages” request, I can see that the event’s request parameters contain a repository name called “level2”.

Event contains a repository name in the request parameters called “level2”.
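
The repository name can also be pulled straight from the logs by printing the request parameters of every ECR API call, as in the small sketch below (assuming the same local file layout as before):

# Sketch: print the request parameters of every ECR event in the logs.
import glob
import json

for path in sorted(glob.glob("CloudTrail-*.json")):
    with open(path) as f:
        for record in json.load(f)["Records"]:
            if record.get("eventSource") == "ecr.amazonaws.com":
                print(record["eventName"], record.get("requestParameters"))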

We can review the policy for the Elastic Container Registry (ECR) repository, where we see that the Principal is “*”, which means these actions are open to anyone.

aws --profile target_security ecr get-repository-policy --repository-name level2
ECR “level2” repository policy.

Incident Impact & Recommendations

Amazon S3 is considered a publicly accessible platform, meaning that with the right URL and permissions, any bucket can be accessed from anywhere through HTTP requests, just as a normal browser would access a website. Public buckets are one of the biggest security risks in AWS and S3, and a large number of data leaks have been reported due to S3 misconfigurations stemming from a poor security posture. Bucket policies and bucket or object ACLs can be configured to grant access to anyone, and many admins neglect this and leave their S3 resources open without knowing they are doing so. Of course, AWS has prompts and warnings that emphasize this point and try to prevent this type of lapse in security, but that hasn't prevented many occurrences of sensitive data being leaked through this simple error.

Recommendations include:

  • The monitoring of S3 buckets. Without monitoring, there is no reliable way to keep track of who is accessing your S3 environments.
  • The testing and auditing of S3 environments. Something as simple as a vulnerability assessment or even a basic penetration test would help highlight issues that can be easily fixed.
  • Avoiding relaxed policies. If policies let too many users access S3 resources, issues could arise if those accounts become compromised. Blocking public access at the bucket level is a quick win here, as shown in the example after this list.
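
For example, public access can be blocked for a bucket with a single AWS CLI call (the bucket name below is a placeholder, not one from the challenge):

aws s3api put-public-access-block --bucket my-example-bucket --public-access-block-configuration BlockPublicAcls=true,IgnorePublicAcls=true,BlockPublicPolicy=true,RestrictPublicBuckets=true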

Closing Remarks

I really enjoyed working through this CTF and getting the opportunity to learn more about investigating CloudTrail logs for indicators of a successful compromise. Thank you for reading to the end, and keep hacking 😄!

