The subdomain monitoring bot — Setting up your bug bounty scripts with Python and Bash

Shriyans Sudhi · InfoSec Write-ups · Dec 30, 2022

Hi there,

Automation is a very interesting thing, and it gets even more interesting when done right. Writing the automation scripts is the most important part of that. So, in this article, we’ll discuss writing automation scripts in Python and Bash for your VPS (or maybe a Raspberry Pi). If you haven’t read my previous article, Monitoring your targets for bug bounties, which is the main article of this series, I highly recommend reading that first.

Photo by Sai Kiran Anagani on Unsplash; Automation script

First of all, let’s decide what our automation system could be for. The following are some possibilities:-

  • New subdomain monitoring
  • Log poisoning
  • Nuclei
  • Waybackurls with gf
  • Etc.

So, in this article, we will start with the most basic bot: new subdomain monitoring.

The algorithm

The algorithm can be designed in various ways: pure Bash, pure Python, or a mix. We will go with running Python scripts and shell commands with the help of a Bash script. The steps are:

  1. Open the existing file “domains.txt” containing targets provided by the user
  2. Sort out a few targets from that file, as it could contain thousands of domains, and we don’t want that much load on our network/machine
  3. Run a tool, maybe amass, on them and output the results to a file “new-subdomains.txt”
  4. Compare it with the old file, called “subdomains.txt”, and filter out the new ones (essentially a set difference, as sketched below)
  5. Send them to a user-defined channel, maybe Slack or email (Slack is highly recommended, as you will have organized data). Also, write the new subdomains to the “subdomains.txt” file so that you can easily get a full list whenever you want to hunt on that target
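Step 4 boils down to a set difference. A minimal sketch with made-up file contents (the full runner.py below does the same thing with explicit loops):

# Hypothetical contents of the two files, for illustration only
old_subdomains = {"a.abc.com", "b.abc.com"}
new_subdomains = {"a.abc.com", "b.abc.com", "c.abc.com"}

# Everything present in the new scan that wasn't in the old list
found_subdomains = new_subdomains - old_subdomains
print(found_subdomains)  # {'c.abc.com'}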

Setting up things

Creating directories to organize all the data is important, so we need to set up a directory structure where things won’t get messed up. The structure could be:-

.
├── domains.txt
├── init.py
├── last.txt
├── main.sh
├── max.txt
├── runner.py
├── sorter.py
└── targets
    ├── abc.com
    │   ├── new-subdomains.txt
    │   └── subdomains.txt
    └── xyz.com
        ├── new-subdomains.txt
        └── subdomains.txt

So here, domains.txt is well known; init.py is the script that will initialize new targets; runner.py is the main script; sorter.py will take the desired number of domains out of domains.txt and write them to max.txt. And last, last.txt is the file containing the last scanned domain so that the script can continue from there.
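Before the first run, the working directory needs the targets folder and the text files the scripts expect (sorter.py, for example, will crash if last.txt doesn’t exist). A minimal setup sketch:

mkdir targets
touch domains.txt last.txt max.txt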

sorter.py

First of all, let’s see the code for sorter.py

# Edit the line below to specify the number of domains to be scanned in a run
max_scans = 11

last_scanned_url = open("last.txt", 'r').read().replace("\n", '')

length = 0

with open("domains.txt", "r") as urls_file:
    for url in urls_file:
        url = url.replace('\n', '')
        if url == last_scanned_url:
            # Found the last scanned domain; print the next max_scans domains
            for i in range(max_scans):
                next_url = urls_file.readline().replace('\n', '')
                if next_url == '':  # reached the end of the file
                    break
                print(next_url)
                length += 1
                open('last.txt', 'w').write(next_url)

# If we hit the end of the file before printing max_scans domains,
# wrap around and continue from the top
if length < max_scans:
    with open("domains.txt", 'r') as urls_file:
        for url in urls_file:
            url = url.replace('\n', '')
            if length < max_scans:
                print(url)
                length += 1
                open('last.txt', 'w').write(url)

And now, let’s see this in action

Video: sorter.py in action

I hope you’ve understood it, but if not, let’s walk through it.

This code first opens a file called last.txt and gets the last scanned domain. After that, it opens domains.txt and looks for that last scanned domain. Once found, it prints the following domains and writes each one to last.txt so that it can continue from there the next time it is run. Also, if it reaches the last line of the file, it jumps back to the first line and starts reading again. It is run with the command python3 sorter.py > max.txt so that the output is written to that file. Now, a simple question:

This can be done straight from the main script, so why is it a separate file?

Ans. Yes, you could do that. But things get messy when you have multiple bots. To avoid messing things up, keep them separated in a neat manner.
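A quick worked example, with made-up contents: suppose domains.txt holds five domains, max_scans is 2, and last.txt contains b.com. Then:

$ cat domains.txt
a.com
b.com
c.com
d.com
e.com
$ cat last.txt
b.com
$ python3 sorter.py > max.txt
$ cat max.txt
c.com
d.com
$ cat last.txt
d.com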

Creating the main script — runner.py

runner.py is the most important script for us, as it will be responsible for doing all the tasks. But before setting it up, we need to set up our Slack channel, so let’s do that first.

Setting up the Slack channel

Assuming that you already have a Slack account and a workspace, follow the steps below:-

  • Open the Slack workspace (https://app.slack.com/client/<something>/<otherthing>) and click Add channels
  • Give the channel a good name, click Create, and skip adding people (if you wish to add some people, feel free to go ahead)
  • Next, create a new app: go to https://api.slack.com/apps and click Create New App. In the box that pops up, click From scratch, give the app a name, assign it your workspace, and click Create App
  • Once created, go to Incoming Webhooks and switch it on
  • After it is switched on, you’ll see the webhook details lower down on the same page
  • Click Add New Webhook to Workspace and it will redirect you to a page where you can select the Slack channel the app can send messages to. Just select your channel and click Allow
  • Once done, you will see a curl command under Sample curl request to post to a channel. Copy it, paste it into a terminal, and you will see a message in your Slack channel

Now we are done setting up our Slack channel. The message can be changed by editing the value of "text" in the JSON POST data.
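The sample request looks roughly like this (the webhook URL below is a placeholder, not a real one):

curl -X POST -H 'Content-type: application/json' \
  --data '{"text": "Hello, World!"}' \
  https://hooks.slack.com/services/T00000000/B00000000/XXXXXXXXXXXXXXXXXXXXXXXX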

runner.py — The main script

This script will be responsible for running the different tools, comparing the outputs, and then sending a message to Slack.

So first of all, we will create a function that will send the message to Slack:

import requests

def send_msg_to_slack(message):
    # Get your webhook URL from the `Incoming Webhooks` page. It will look like
    # https://hooks.slack.com/services/T04RRRRRRRR/B04RRRRRRRR/B55JREDACTEDREDACTEDAAAA
    # The link above is just a random link
    webhook_url = "<paste_your_webhook_URL_here>"
    payload = {"text": message}
    requests.post(webhook_url, json=payload)
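A quick smoke test, assuming you have pasted in your real webhook URL (the message text here is just an example):

send_msg_to_slack("Hello from the monitoring bot")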

Now, we need to run a subdomain enumeration tool (here I will demonstrate with amass) from Python and then compare the old file with the new file. (IMPORTANT: the amass command must be run inside the target’s own directory, while the script itself runs from . (according to the directory structure above).)

We will use the system() function of the os module to execute the command. This can be done with the subprocess module as well (a sketch of that follows the code below).

from os import system

def run_amass():
    with open("max.txt", 'r') as file:
        for target in file:
            target = target.replace("\n", '')
            system(f"cd targets/{target}/ && amass enum -d {target} -o new-subdomains.txt")

            # read the old subdomain list and save it in old_subdomains
            old_subdomains = []
            with open(f"targets/{target}/subdomains.txt", 'r') as old_subdomain_list:
                for subdomain in old_subdomain_list:
                    subdomain = subdomain.replace('\n', '')
                    old_subdomains.append(subdomain)

            # read the new subdomain list and save it in new_subdomains
            new_subdomains = []
            with open(f"targets/{target}/new-subdomains.txt", 'r') as new_subdomain_list:
                for subdomain in new_subdomain_list:
                    subdomain = subdomain.replace("\n", '')
                    new_subdomains.append(subdomain)

            # compare them and send new ones to slack and
            # also write them to old subdomain list
            found_subdomains = []  # will store the newly found subdomains
            for subdomain in new_subdomains:
                if subdomain not in old_subdomains:
                    found_subdomains.append(subdomain)
            if len(found_subdomains) > 0:
                msg = "The following new subdomains were found:-"
                for subdomain in found_subdomains:
                    msg = msg + "\n" + subdomain
                send_msg_to_slack(msg)
                # now, write them to old subdomains file
                to_write = open(f"targets/{target}/subdomains.txt", 'r').read()
                for subdomain in found_subdomains:
                    if to_write == "":
                        to_write = subdomain
                    else:
                        to_write = to_write + "\n" + subdomain
                open(f"targets/{target}/subdomains.txt", 'w').write(to_write)

# run everything when the script is executed
run_amass()

init.py

As there could be thousands of domains, we need to automate the task of creating the folders and running amass for the first time. (We need to create a file called init.txt, which will contain all the domains we want to add. Also, we need to execute mkdir targets once to create the directory. After that, the script will be ready to go.)

import os

with open("init.txt", 'r') as file:
    for line in file:
        line = line.replace('\n', '')
        os.mkdir(f"targets/{line}")
        os.system(f'cd targets/{line}/ && amass enum -d {line} -o subdomains.txt')
        open('domains.txt', 'a').write(("\n" + line))

This script should also be run from the . directory, according to the file structure given above in this article.
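For example, initializing two targets might look like this (the domains are made up):

mkdir targets
printf "abc.com\nxyz.com\n" > init.txt
python3 init.py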

Summing it up — main.sh

We now need to run sorter.py and then runner.py to do what we want. So we can simply do that with a shell script:

#!/bin/bash

python3 sorter.py > max.txt
python3 runner.py

Once you have saved it as main.sh, execute chmod +x main.sh to make the file executable.

Scheduling it with crontab

As we discussed in the previous article, we can use crontab to schedule scripts, so add the following line to your crontab:

00 01 * * * cd /path/to/your/bot/folder/ && ./main.sh

This will execute the script every day at 01:00 AM according to your system time.
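To install it, open your crontab with crontab -e and paste the line in. An optional tweak (my addition, not part of the original setup) is to append output redirection to a log file, which makes debugging failed runs much easier:

00 01 * * * cd /path/to/your/bot/folder/ && ./main.sh >> bot.log 2>&1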

If you have any queries related to this article, feel free to visit this thread on House of Hackers and tag me there

Buy me a coffee

Hope you enjoyed the article. Feel free to follow me and subscribe to my mailing list so that you get the next part of this series whenever I write it :)

Subscribe here


Twitter: https://twitter.com/ss0x00 (I am more active here 😄)

Linkedin: https://www.linkedin.com/in/shriyans-s-a62826216/

Protip: You can clap up to 50 times for an article, and you can add this article to one of your lists (a reading list) to easily find it whenever you need it

;)

