Reconnaissance: Finding and Enumerating Subdomains of a Domain Name

by hash3liZer . 25 January 2019


The very first step towards a successful pentest against an asset or a target is a perfect recon. In this phase, the attacker enumerates and gathers as much information as can be gained from different sources, and finally all of it is put together under a single asset for further analysis.

Subdomain enumeration is the part of reconnaissance where the subdomains of a domain name are enumerated and then analyzed further.

Now, assembling every subdomain of a target into a single file is rarely possible, but we can get close by enumerating different sources or by bruteforcing with the help of a dictionary.

Doing this manually, i.e. going to each website and submitting different queries, would take a lot of time. For this purpose, we have tools which automate almost the whole process.

Most subdomain enumeration scripts are based on wordlists. Sublist3r, however, enumerates subdomains by scraping different search engines and websites.

So there's a good chance of finding commonly used subdomains with this tool. However, there can also be subdomains which administrators keep internal, and those we have to bruteforce. Let's make it simple:
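The bruteforce idea can be sketched with a short shell function: try each candidate word as a subdomain and keep the names that actually resolve. This is only an illustration of the technique, not the tooling used below; the function name and the wordlist file are hypothetical.

```shell
# probe_subdomains: read candidate words on stdin, print the FQDNs that
# resolve in DNS. getent is used so no extra DNS utilities are required.
# This is a sketch of dictionary bruteforcing, not code from any tool here.
probe_subdomains() {
    domain="$1"
    while read -r word; do
        if getent hosts "$word.$domain" > /dev/null 2>&1; then
            echo "$word.$domain"
        fi
    done
}

# Example usage (words.txt is a hypothetical wordlist):
# probe_subdomains example.com < words.txt
```

Tools like Subbrute apply the same idea, but query many open resolvers in parallel, which is what makes them practical on large wordlists.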



We are going to use three different tools here for enumerating common subdomains and finally taking screenshots. Update your repository and install the dependencies:

$ apt update
$ apt install chrpath libssl-dev libxft-dev libfreetype6-dev libfreetype6 \
    libfontconfig1-dev libfontconfig1 -y

Clone Sublist3r. Subbrute has lately been integrated into Sublist3r, so you can now fire up Sublist3r and also make use of Subbrute's features:

$ git clone
$ pip install -r Sublist3r/requirements.txt

Then, clone Subrake. We will use it for further analysis of each of the found subdomains:

$ git clone

And finally, clone Snapper and install its requirements:

$ git clone
$ pip install -r Snapper/requirements.txt


Finding Subdomains

Now, Sublist3r and Subbrute differ in how they work. Sublist3r uses different search engines and certificate records to enumerate subdomains which are in common use on the Internet and are somehow integrated with other services. Subbrute, on the other hand, bruteforces subdomains by querying DNS records for the candidates in a list given to it.

Run the Sublist3r scan with the Subbrute module turned on:

$ python Sublist3r/ -d [] -b -o subdomains.txt --verbose
  • -d: Domain Name.
  • -b: Enables subbrute script.
  • -o: Output the subdomains to a file.
  • --verbose: Verbose mode.

Let's check how many subdomains we got:

$ cat subdomains.txt && wc -l subdomains.txt
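Since the scraping and bruteforce sources can report the same name twice, it's worth deduplicating before trusting the count. A small self-contained sketch (the duplicate entries here are made up to demonstrate the commands):

```shell
# Simulate an output file with a duplicate entry, so the example runs
# standalone; in practice subdomains.txt comes from the sublist3r run above.
printf 'mail.example.com\nwww.example.com\nmail.example.com\n' > subdomains.txt

# Sort and deduplicate the file in place, then count the unique names.
sort -u subdomains.txt -o subdomains.txt
wc -l < subdomains.txt    # -> 2
```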



The problem with the above enumeration is uncertainty. It's quite normal for Subbrute to give false positives, so it's likely that some of the subdomains given to you don't exist. The same goes for Sublist3r: its results are just whatever was enumerated from websites. Let's analyze which hosts are live and do port scanning. That's where Subrake comes in.

Pull up the manual for Subrake:

$ python Subrake/ --help

The good thing about Subrake is that it accepts multiple dictionaries, scrapes websites online, does port scanning, and enumerates CNAME records. Subrake ships with a dictionary of about 250 commonly named subdomains. Fire it up:

$ python Subrake/ -d [] -w Subrake/wordlists/small.lst,subdomains.txt \
     -o output.txt -s verified-subdomains.txt -p 21,22,23,8080,8000
  • -d, --domain: Domain Name
  • -w, --wordlist: Comma-separated wordlists
  • -o, --output: Output Data to file. See --format.
  • -s, --output-subs: Output Verified Subdomains to a File
  • -p, --ports: Ports to Scan. Default are 50 common-used ports

At this point, we have enumerated various details of our target. Subrake uses sockets to open connections to subdomains on both ports 80 and 443 and cuts the connection off as soon as the response headers are received, which makes the process much faster.
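That header-only liveness check can be approximated with curl. This is a sketch of the idea under the assumption that curl is available, not Subrake's actual code; the function name is hypothetical.

```shell
# check_alive: probe a host on HTTP and HTTPS, fetching response headers
# only (-I) so the connection ends as soon as the headers arrive.
# A sketch of Subrake's early-cutoff idea, not its real implementation.
check_alive() {
    host="$1"
    for scheme in http https; do
        # -s: silent, -I: headers only, -m 5: give up after 5 seconds
        if curl -s -I -m 5 "$scheme://$host" > /dev/null 2>&1; then
            echo "$host is alive on $scheme"
        fi
    done
}

# Example usage:
# check_alive www.example.com
```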


Capturing Screenshots

So, we have the list of verified subdomains along with their status codes and CNAME records. But we can still take it a step further by capturing screenshots of each subdomain on both the HTTP and HTTPS ports. For this we have Snapper.

Snapper uses Selenium with the PhantomJS driver to take screenshots. Let's first install the PhantomJS headless browser. Download PhantomJS:

$ wget \
 -O "phantomjs.tar.bz2"

Extract PhantomJS and create a link to it in an executable directory:

$ tar xvjf phantomjs.tar.bz2 -C /usr/local/share/
$ ln -s /usr/local/share/phantomjs-2.1.1-linux-x86_64/bin/phantomjs /usr/bin/

You can now verify the installation by checking the phantomjs version:

$ phantomjs --version

Move to Snapper directory and fire it up:

$ cd Snapper/
$ python -f ../verified-subdomains.txt -c 3 -v -p 8060
  • -f: File with subdomains or hosts
  • -c: Number of Threads to Spawn
  • -v: Verbose mode
  • -p: Port to host run webserver on.

When it's all done, it will run a webserver (on the port given with -p) from where you can view the captured screenshots:


Enumerating subdomains with Sublist3r was easy, since it automates most of the manual work. The later task of validating the subdomains and gathering further information about them was handled by Subrake. That left us with a compiled list of validated subdomains against which we can take further steps. The last step was capturing screenshots of the enumerated subdomains, which was done with the help of Snapper.