ZeroDay Odyssey: A Cyberpunk Framework for Web Application Penetration Testing

In the neon haze of cyberspace, where firewalls flicker and secrets hide in plain sight, ZeroDay Odyssey is your compass. Inspired by OWASP and forged for both rebels and red teams, this modular framework guides you through the labyrinth of web security—from reconnaissance to exploit, from code to consequence. Whether you're hunting bugs or defending fortresses, the Odyssey begins here.

April 05, 2025
Victor Nthuli
Security Best Practices
5 min read

Web Application Penetration Testing Framework

Table of Contents

  1. Framework Overview
  2. Phase 1: Information Gathering
  3. Phase 2: Configuration & Deployment Testing
  4. Phase 3: Identity Management Testing
  5. Phase 4: Authentication Testing
  6. Phase 5: Authorization Testing
  7. Phase 6: Session Management Testing
  8. Phase 7: Input Validation Testing
  9. Phase 8: Error Handling & Logging Testing
  10. Phase 9: Cryptography Testing
  11. Phase 10: Business Logic Testing
  12. Phase 11: Client-Side Testing
  13. Phase 12: API Testing
  14. Customization Guide
  15. Automation Strategies
  16. Evidence Collection & Reporting

Framework Overview

This framework is designed to systematically test web applications for security vulnerabilities. It adopts a phased approach based on the OWASP Web Security Testing Guide (WSTG) and can be customized for various engagement types.

  • Scalability: Testing depth can be scaled up or down to match the application’s complexity and the engagement’s scope.
  • Modularity: Each phase can be conducted independently, though earlier phases often inform later testing.
  • Adaptability: The framework can be tailored for bug bounty programs, enterprise penetration tests, DevSecOps integrations, or API-only targets.
  • Documentation: Standardized reporting templates and evidence collection methodologies are included.

Phase 1: Information Gathering

Objective: Collect all relevant information about the target application to understand its architecture, functionality, and potential attack surface.

1.1 Conduct Search Engine Discovery (WSTG-INFO-01)

  • Type: Semi-automated
  • Tools: Google Dorks, Shodan, Censys, BuiltWith, SpyOnWeb
  • Techniques:
  • [ ] Use search engine operators (site:, inurl:, filetype:)
  • [ ] Search for sensitive files (config files, logs, backups)
  • [ ] Look for exposed API documentation
  • [ ] Search for leaked credentials on public repositories
  • Expected Results: Discovery of hidden pages, subdomains, technologies, and potentially sensitive information
  • Red Flags: Exposure of sensitive files, credentials, or internal documentation
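
Dork generation for step 1.1 is easy to script. The sketch below simply prints common search-engine queries for a target domain; the domain and the specific operator combinations are illustrative, not an exhaustive list.

# dork_gen.py - print common search-engine dorks for a target (illustrative list only)
TARGET = "example.com"  # hypothetical in-scope domain

dorks = [
    f"site:{TARGET}",
    f"site:{TARGET} inurl:admin OR inurl:login",
    f"site:{TARGET} filetype:log OR filetype:bak OR filetype:sql",
    f"site:{TARGET} filetype:env OR filetype:cfg OR filetype:ini",
    f'site:{TARGET} intitle:"index of"',
    f"site:{TARGET} inurl:swagger OR inurl:api-docs",
    f'site:github.com "{TARGET}" password',  # hunt for leaked credentials in public repos
]

for dork in dorks:
    print(dork)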

1.2 Fingerprint Web Server (WSTG-INFO-02)

  • Type: Automated
  • Tools: Nmap, Wappalyzer, Whatweb, Nikto, HTTPie, cURL
  • Techniques:
  • [ ] Analyze HTTP headers (Server, X-Powered-By)
  • [ ] Check for server signature in error pages
  • [ ] Banner grabbing
  • [ ] HTTP method enumeration
  • [ ] Version detection
  • Expected Results: Identification of web server type, version, and technologies
  • Red Flags: Outdated server versions, verbose error messages revealing technology stack
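
For the header-analysis and banner-grabbing steps in 1.2, a few lines of Python are often enough for a first pass. This is a minimal sketch using the third-party requests library (pip install requests); the target URL and the header list are assumptions to adapt per engagement.

# fingerprint_headers.py - passive fingerprinting from HTTP response headers
import requests

TARGET = "https://example.com"  # hypothetical target URL

resp = requests.get(TARGET, timeout=10, allow_redirects=True)
print(resp.status_code, resp.url)
for name in ("Server", "X-Powered-By", "X-AspNet-Version", "X-Generator", "Via"):
    if name in resp.headers:
        print(f"  {name}: {resp.headers[name]}")  # verbose version strings are red flags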

1.3 Review Webserver Metafiles (WSTG-INFO-03)

  • Type: Semi-automated
  • Tools: Browser, curl, wget, Burp Suite
  • Techniques:
  • [ ] Check robots.txt
  • [ ] Analyze sitemap.xml
  • [ ] Review security.txt
  • [ ] Check .well-known directory
  • [ ] Look for manifest files
  • Expected Results: Discovery of hidden directories, files, or functionality
  • Red Flags: Sensitive directories listed in robots.txt, outdated sitemaps revealing hidden content
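
The metafile checks in 1.3 reduce to a handful of GET requests. A minimal sketch, with the base URL and path list as assumptions:

# metafiles_check.py - fetch common web server metafiles and report what exists
import requests

BASE = "https://example.com"  # hypothetical target
PATHS = ["/robots.txt", "/sitemap.xml", "/security.txt",
         "/.well-known/security.txt", "/manifest.json"]

for path in PATHS:
    r = requests.get(BASE + path, timeout=10)
    if r.status_code == 200:
        preview = " ".join(r.text.split())[:120]
        print(f"[+] {path} ({len(r.content)} bytes): {preview}")
    else:
        print(f"[-] {path} -> {r.status_code}")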

1.4 Enumerate Applications on Web Server (WSTG-INFO-04)

  • Type: Semi-automated
  • Tools: Nmap, Nikto, Burp Suite, dirsearch, ffuf, gobuster
  • Techniques:
  • [ ] Virtual host scanning
  • [ ] Directory brute forcing
  • [ ] Port scanning
  • [ ] Service identification
  • [ ] Proxy detection
  • Expected Results: Discovery of all applications, APIs, and services running on the server
  • Red Flags: Unprotected admin interfaces, test environments, staging sites

1.5 Review Web Page Content (WSTG-INFO-05)

  • Type: Manual
  • Tools: Browser dev tools, Burp Suite, visual site mapper
  • Techniques:
  • [ ] Manual browsing of all accessible pages
  • [ ] Inspect page source for comments and hidden fields
  • [ ] Review JavaScript files
  • [ ] Check for hidden directories/functionality
  • [ ] Analyze URL parameters
  • Expected Results: Understanding of application functionality, potential entry points
  • Red Flags: Developer comments in code, hardcoded credentials, hidden functionality

1.6 Identify Application Entry Points (WSTG-INFO-06)

  • Type: Semi-automated
  • Tools: Burp Suite, ZAP, browser dev tools
  • Techniques:
  • [ ] Identify all input vectors (forms, API endpoints)
  • [ ] Document HTTP methods for each endpoint
  • [ ] Map application parameters
  • [ ] Catalog file upload functionality
  • Expected Results: Comprehensive map of all input vectors
  • Red Flags: Excessive parameters, insecure HTTP methods, complex input handling

1.7 Map Execution Paths Through Application (WSTG-INFO-07)

  • Type: Manual
  • Tools: Draw.io, Burp Suite, manual testing
  • Techniques:
  • [ ] Document the application flow
  • [ ] Identify user roles and privileges
  • [ ] Map critical business functions
  • [ ] Trace data flow through the application
  • Expected Results: Flowchart of application functionality and business logic
  • Red Flags: Complex workflows, insufficient access controls between steps

1.8 Fingerprint Web Application Framework (WSTG-INFO-08)

  • Type: Automated
  • Tools: Wappalyzer, Whatweb, Retire.js, BuiltWith
  • Techniques:
  • [ ] Analyze HTTP headers
  • [ ] Check for framework-specific cookies
  • [ ] Inspect HTML source for framework fingerprints
  • [ ] Review JavaScript libraries and versions
  • Expected Results: Identification of frameworks, libraries, and their versions
  • Red Flags: Outdated frameworks with known vulnerabilities

1.9 Fingerprint Web Application (WSTG-INFO-09)

  • Type: Semi-automated
  • Tools: Burp Suite, Wappalyzer, custom scripts
  • Techniques:
  • [ ] Identify custom application components
  • [ ] Check for application-specific headers
  • [ ] Analyze error messages
  • [ ] Look for custom JavaScript
  • Expected Results: Detailed understanding of custom application components
  • Red Flags: Custom encryption, homegrown security controls, outdated components

1.10 Map Application Architecture (WSTG-INFO-10)

  • Type: Manual
  • Tools: Network mapping tools, architecture diagramming
  • Techniques:
  • [ ] Document network architecture
  • [ ] Identify integration points with other systems
  • [ ] Map dataflow between components
  • [ ] Document security controls
  • Expected Results: High-level architecture diagram of the application
  • Red Flags: Unnecessary exposure of components, insufficient network segmentation

Phase 2: Configuration & Deployment Testing

Objective: Identify misconfigurations in web server settings, infrastructure, and application deployment that could lead to security vulnerabilities.

2.1 Test Network Infrastructure Configuration (WSTG-CONF-01)

  • Type: Semi-automated
  • Tools: Nmap, Nessus, OpenVAS, Shodan
  • Techniques:
  • [ ] Port scanning
  • [ ] Network service enumeration
  • [ ] Firewall configuration testing
  • [ ] Network segregation assessment
  • Expected Results: Understanding of network security posture
  • Red Flags: Open administrative ports, unnecessary services, missing firewall rules

2.2 Test Application Platform Configuration (WSTG-CONF-02)

  • Type: Semi-automated
  • Tools: Nikto, SSLyze, testssl.sh, configtest scripts
  • Techniques:
  • [ ] Check default accounts/passwords
  • [ ] Test for default configurations
  • [ ] Verify patch levels
  • [ ] Review configuration files if accessible
  • Expected Results: Properly configured platform with security hardening
  • Red Flags: Default credentials, missing patches, insecure configurations

2.3 Test File Extensions Handling (WSTG-CONF-03)

  • Type: Manual
  • Tools: Burp Suite, manual requests
  • Techniques:
  • [ ] Test different file extensions
  • [ ] Check for alternate file extensions (.php.jpg, .asp;.jpg)
  • [ ] Test null bytes in filenames
  • [ ] Try double extensions
  • Expected Results: Proper handling of file extensions
  • Red Flags: Server execution of files with manipulated extensions

2.4 Review Old Backup and Unreferenced Files (WSTG-CONF-04)

  • Type: Semi-automated
  • Tools: dirsearch, ffuf, Burp Suite, custom scripts
  • Techniques:
  • [ ] Search for common backup extensions (.bak, .old, .backup)
  • [ ] Look for source code files (.java, .php, .aspx)
  • [ ] Check for temporary files
  • [ ] Search for configuration files
  • Expected Results: No accessible backup or source files
  • Red Flags: Source code exposure, configuration files, backups containing sensitive data

2.5 Enumerate Infrastructure and Application Admin Interfaces (WSTG-CONF-05)

  • Type: Semi-automated
  • Tools: dirsearch, ffuf, Burp Suite, Nmap
  • Techniques:
  • [ ] Scan for common admin paths
  • [ ] Check for exposed admin interfaces
  • [ ] Test default credentials
  • [ ] Look for admin interfaces on non-standard ports
  • Expected Results: All admin interfaces properly secured
  • Red Flags: Exposed admin panels, weak authentication, lack of network restrictions

2.6 Test HTTP Methods (WSTG-CONF-06)

  • Type: Automated
  • Tools: Burp Suite, ZAP, curl, Nmap
  • Techniques:
  • [ ] Test OPTIONS method
  • [ ] Try DELETE, PUT, PATCH methods
  • [ ] Test HTTP method overriding
  • [ ] Check HTTP method handling
  • Expected Results: Only necessary HTTP methods allowed
  • Red Flags: Dangerous methods enabled (PUT, DELETE) without authorization
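
A quick way to run the 2.6 checks is to ask for the Allow header via OPTIONS and then try each verb directly. This sketch assumes a single hypothetical endpoint and should only be pointed at in-scope, non-destructive URLs.

# http_methods.py - enumerate allowed HTTP methods on one endpoint
import requests

URL = "https://example.com/"  # hypothetical endpoint

print("Allow:", requests.options(URL, timeout=10).headers.get("Allow", "not disclosed"))
for method in ("GET", "HEAD", "POST", "PUT", "DELETE", "PATCH", "TRACE"):
    status = requests.request(method, URL, timeout=10).status_code
    print(f"{method:<7} -> {status}")  # 2xx on PUT/DELETE without auth is a red flag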

2.7 Test HTTP Strict Transport Security (WSTG-CONF-07)

  • Type: Automated
  • Tools: SSLyze, ZAP, Burp Suite, securityheaders.com
  • Techniques:
  • [ ] Check for HSTS header
  • [ ] Verify max-age value
  • [ ] Test includeSubDomains directive
  • [ ] Verify preload status
  • Expected Results: Properly configured HSTS header
  • Red Flags: Missing HSTS, short max-age, missing includeSubDomains
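
The HSTS checks in 2.7 come down to parsing one header. A minimal sketch (the URL is hypothetical):

# hsts_check.py - parse and evaluate the Strict-Transport-Security header
import requests

URL = "https://example.com"  # hypothetical target

hsts = requests.get(URL, timeout=10).headers.get("Strict-Transport-Security")
if not hsts:
    print("[-] HSTS header missing")
else:
    directives = [d.strip().lower() for d in hsts.split(";")]
    max_age = next((int(d.split("=", 1)[1]) for d in directives if d.startswith("max-age=")), 0)
    print(f"[+] {hsts}")
    print("    max-age >= 1 year :", max_age >= 31536000)
    print("    includeSubDomains :", "includesubdomains" in directives)
    print("    preload           :", "preload" in directives)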

2.8 Test RIA Cross Domain Policy (WSTG-CONF-08)

  • Type: Manual
  • Tools: Browser, curl, manual inspection
  • Techniques:
  • [ ] Check crossdomain.xml
  • [ ] Review clientaccesspolicy.xml
  • [ ] Test for permissive cross-domain policies
  • Expected Results: Restrictive cross-domain policies
  • Red Flags: Wildcard permissions, overly permissive policies

2.9 Test File Permission (WSTG-CONF-09)

  • Type: Manual
  • Tools: Manual inspection, custom scripts
  • Techniques:
  • [ ] Check for readable configuration files
  • [ ] Test for writable directories
  • [ ] Verify permissions on sensitive files
  • Expected Results: Properly secured file permissions
  • Red Flags: World-readable configuration files, writable web directories

2.10 Test for Subdomain Takeover (WSTG-CONF-10)

  • Type: Semi-automated
  • Tools: SubOver, subjack, custom scripts
  • Techniques:
  • [ ] Identify all subdomains
  • [ ] Check for dangling DNS records
  • [ ] Verify service configurations
  • Expected Results: No vulnerable subdomains
  • Red Flags: Unclaimed subdomains, misconfigured DNS

2.11 Test Cloud Storage (WSTG-CONF-11)

  • Type: Semi-automated
  • Tools: S3Scanner, GCPBucketBrute, azure-storage-explorer, custom scripts
  • Techniques:
  • [ ] Check for open cloud storage (S3 buckets, Azure Blobs)
  • [ ] Test permissions on storage objects
  • [ ] Look for leaked keys
  • [ ] Verify proper access controls
  • Expected Results: Properly secured cloud storage
  • Red Flags: Public buckets, excessive permissions, data exposure

Phase 3: Identity Management Testing

Objective: Evaluate the robustness of the application’s identity management system, focusing on user registration, account provisioning, and profile management.

3.1 Test Role Definitions (WSTG-IDNT-01)

  • Type: Manual
  • Tools: Burp Suite, manual testing
  • Techniques:
  • [ ] Identify all user roles
  • [ ] Document privileges for each role
  • [ ] Check for role separation
  • [ ] Test role inheritance
  • Expected Results: Clear segregation of duties between roles
  • Red Flags: Overlapping permissions, excessive privileges, lack of least privilege

3.2 Test User Registration Process (WSTG-IDNT-02)

  • Type: Manual
  • Tools: Burp Suite, browser
  • Techniques:
  • [ ] Test for duplicate registration
  • [ ] Check for weak account validation
  • [ ] Test registration form input validation
  • [ ] Verify identity requirements
  • Expected Results: Secure user registration process
  • Red Flags: Weak validation, enumeration vulnerabilities, insufficient verification

3.3 Test Account Provisioning Process (WSTG-IDNT-03)

  • Type: Manual
  • Tools: Browser, Burp Suite
  • Techniques:
  • [ ] Test account creation
  • [ ] Verify provisioning controls
  • [ ] Check default permissions
  • [ ] Test account approval workflow
  • Expected Results: Controlled account creation with proper approval
  • Red Flags: Self-registration for privileged access, missing approval workflows

3.4 Testing for Account Enumeration and Guessable User Account (WSTG-IDNT-04)

  • Type: Semi-automated
  • Tools: Burp Intruder, custom scripts
  • Techniques:
  • [ ] Test login error messages
  • [ ] Check password reset functionality
  • [ ] Test registration for existing users
  • [ ] Analyze response times and status codes
  • Expected Results: No user enumeration possible
  • Red Flags: Different error messages, response times, or status codes that reveal valid users
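
Response diffing for 3.4 can be semi-automated. The sketch below compares status code, body length, and timing for a likely-valid username (assumed from recon) and a random one; the login URL and form field names are hypothetical and must match the target's actual login request.

# user_enum_probe.py - compare login responses for a valid-looking vs. random username
import time
import requests

LOGIN_URL = "https://example.com/login"       # hypothetical endpoint
CANDIDATES = ["admin", "zq9x_no_such_user"]   # first assumed valid, second random

for username in CANDIDATES:
    start = time.monotonic()
    r = requests.post(LOGIN_URL,
                      data={"username": username, "password": "WrongPass123!"},
                      timeout=10)
    elapsed = time.monotonic() - start
    print(f"{username:20} status={r.status_code} length={len(r.content)} time={elapsed:.3f}s")
# Consistent differences between the two rows suggest user enumeration is possible.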

3.5 Testing for Weak or Unenforced Username Policy (WSTG-IDNT-05)

  • Type: Manual
  • Tools: Manual testing
  • Techniques:
  • [ ] Test username policy enforcement
  • [ ] Check for reserved usernames
  • [ ] Try special characters in usernames
  • [ ] Test for case sensitivity issues
  • Expected Results: Strong username policy that prevents confusion
  • Red Flags: Ability to create similar usernames, impersonation opportunities

Phase 4: Authentication Testing

Objective: Evaluate the security of the authentication mechanisms to ensure they adequately protect access to the application.

4.1 Testing for Credentials Transported over an Encrypted Channel (WSTG-ATHN-01)

  • Type: Automated
  • Tools: Burp Suite, ZAP, browser dev tools
  • Techniques:
  • [ ] Check for HTTPS on all authentication pages
  • [ ] Verify certificate validity
  • [ ] Test for mixed content
  • [ ] Check for insecure redirects
  • Expected Results: All authentication occurs over HTTPS
  • Red Flags: HTTP login forms, mixed content, insecure redirects after authentication

4.2 Testing for Default Credentials (WSTG-ATHN-02)

  • Type: Semi-automated
  • Tools: Default credential lists, Burp Intruder, custom scripts
  • Techniques:
  • [ ] Test vendor default credentials
  • [ ] Check for default admin accounts
  • [ ] Try common username/password combinations
  • [ ] Test for default API keys
  • Expected Results: No default credentials accepted
  • Red Flags: Working default credentials, especially for administrative access

4.3 Testing for Weak Lock Out Mechanism (WSTG-ATHN-03)

  • Type: Semi-automated
  • Tools: Burp Intruder, custom scripts
  • Techniques:
  • [ ] Test login failure thresholds
  • [ ] Check account lockout duration
  • [ ] Test for lockout bypasses
  • [ ] Verify notification of lockouts
  • Expected Results: Account lockout after specified number of failures
  • Red Flags: No lockout, too many attempts allowed, easy bypass mechanisms

4.4 Testing for Bypassing Authentication Schema (WSTG-ATHN-04)

  • Type: Manual
  • Tools: Burp Suite, browser dev tools
  • Techniques:
  • [ ] Test direct page access
  • [ ] Modify session tokens
  • [ ] Test forced browsing
  • [ ] Check for authentication bypasses
  • Expected Results: No access without proper authentication
  • Red Flags: Direct access to protected pages, missing auth checks on certain functions

4.5 Testing for Vulnerable Remember Password (WSTG-ATHN-05)

  • Type: Manual
  • Tools: Browser, Burp Suite
  • Techniques:
  • [ ] Check “remember me” functionality
  • [ ] Analyze persistent cookies
  • [ ] Test cookie security attributes
  • [ ] Check for client-side password storage
  • Expected Results: Secure implementation of remember password feature
  • Red Flags: Plaintext credentials in cookies, insecure cookie attributes, indefinite persistence

4.6 Testing for Browser Cache Weaknesses (WSTG-ATHN-06)

  • Type: Manual
  • Tools: Browser dev tools, Burp Suite
  • Techniques:
  • [ ] Check cache-control headers
  • [ ] Test back button after logout
  • [ ] Check for sensitive data in HTML sources
  • [ ] Verify autocomplete attributes
  • Expected Results: No caching of sensitive data
  • Red Flags: Cached authentication pages, sensitive data visible after using back button

4.7 Testing for Weak Password Policy (WSTG-ATHN-07)

  • Type: Manual
  • Tools: Manual testing, password strength analyzers
  • Techniques:
  • [ ] Test password complexity requirements
  • [ ] Check minimum length enforcement
  • [ ] Test common password rejection
  • [ ] Verify password history requirements
  • Expected Results: Strong password policy enforced
  • Red Flags: Acceptance of weak passwords, short minimum length, no complexity requirements

4.8 Testing for Weak Security Question Answer (WSTG-ATHN-08)

  • Type: Manual
  • Tools: Manual testing
  • Techniques:
  • [ ] Evaluate security question strength
  • [ ] Test for publicly available answers
  • [ ] Check for case sensitivity
  • [ ] Test answer validation
  • Expected Results: Strong security questions with unpredictable answers
  • Red Flags: Common questions, publicly available answers, weak validation

4.9 Testing for Weak Password Change or Reset Functionalities (WSTG-ATHN-09)

  • Type: Manual
  • Tools: Burp Suite, browser
  • Techniques:
  • [ ] Test password reset process
  • [ ] Check token strength and expiration
  • [ ] Verify old password requirement for changes
  • [ ] Test for user enumeration
  • Expected Results: Secure password change/reset functionality
  • Red Flags: Weak tokens, missing verification steps, information disclosure

4.10 Testing for Weaker Authentication in Alternative Channel (WSTG-ATHN-10)

  • Type: Manual
  • Tools: Mobile devices, API clients, alternate browsers
  • Techniques:
  • [ ] Test mobile application authentication
  • [ ] Check API authentication requirements
  • [ ] Test alternative login methods
  • [ ] Verify consistent security across channels
  • Expected Results: Consistent authentication strength across all channels
  • Red Flags: Weaker requirements in mobile apps, API shortcuts, alternative auth bypasses

Phase 5: Authorization Testing

Objective: Verify that users can only access resources and perform actions they are authorized for, and cannot access or modify resources of other users or higher privilege levels.

5.1 Testing Directory Traversal File Include (WSTG-ATHZ-01)

  • Type: Semi-automated
  • Tools: Burp Suite, custom scripts, directory traversal wordlists
  • Techniques:
  • [ ] Test path traversal in file parameters
  • [ ] Check for LFI/RFI vulnerabilities
  • [ ] Try different encodings (../, ..%2f, ..%252f)
  • [ ] Test with null bytes
  • Expected Results: No ability to access files outside intended directory
  • Red Flags: Access to system files, path traversal success, inclusion of unauthorized files

5.2 Testing for Bypassing Authorization Schema (WSTG-ATHZ-02)

  • Type: Manual
  • Tools: Burp Suite, browser dev tools
  • Techniques:
  • [ ] Test direct object references
  • [ ] Modify request parameters
  • [ ] Test with different user roles
  • [ ] Check horizontal and vertical privilege escalation
  • Expected Results: Proper authorization checks on all resources
  • Red Flags: Access to unauthorized resources, privilege escalation, missing access controls

5.3 Testing for Privilege Escalation (WSTG-ATHZ-03)

  • Type: Manual
  • Tools: Burp Suite, browser
  • Techniques:
  • [ ] Test parameter manipulation
  • [ ] Check role switching functions
  • [ ] Attempt to access admin functions as regular user
  • [ ] Test API endpoints with different privileges
  • Expected Results: No ability to escalate privileges
  • Red Flags: Access to higher privilege functions, role confusion, broken access controls

5.4 Testing for Insecure Direct Object References (WSTG-ATHZ-04)

  • Type: Manual
  • Tools: Burp Suite, browser dev tools
  • Techniques:
  • [ ] Identify object references in requests
  • [ ] Modify IDs to access other users’ data
  • [ ] Check sequential reference patterns
  • [ ] Test access to resources with guessable IDs
  • Expected Results: Proper authorization checks on all object references
  • Red Flags: Access to other users’ data, predictable IDs, missing ownership validation

Phase 6: Session Management Testing

Objective: Evaluate the security of the session management system to ensure that user sessions are properly protected from hijacking and manipulation.

6.1 Testing for Session Management Schema (WSTG-SESS-01)

  • Type: Manual
  • Tools: Burp Suite, browser dev tools
  • Techniques:
  • [ ] Analyze session token generation
  • [ ] Check token entropy
  • [ ] Test session lifecycle
  • [ ] Verify session termination
  • Expected Results: Strong session management with secure tokens
  • Red Flags: Predictable tokens, insufficient entropy, improper session handling

6.2 Testing for Cookies Attributes (WSTG-SESS-02)

  • Type: Automated
  • Tools: Burp Suite, ZAP, cookie-checker
  • Techniques:
  • [ ] Check Secure flag
  • [ ] Verify HttpOnly flag
  • [ ] Test SameSite attribute
  • [ ] Check expiration and scope
  • Expected Results: Properly secured cookies with appropriate attributes
  • Red Flags: Missing Secure/HttpOnly flags, improper SameSite setting, broad scope
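
The 6.2 attribute checks can be scripted by reading the raw Set-Cookie headers, exposed here through requests' underlying urllib3 response object. The URL is hypothetical and the substring matching is deliberately loose; treat hits as leads to confirm manually.

# cookie_flags.py - report Secure/HttpOnly/SameSite on each Set-Cookie header
import requests

URL = "https://example.com/login"  # hypothetical page that issues cookies

r = requests.get(URL, timeout=10)
for raw in r.raw.headers.getlist("Set-Cookie"):
    attrs = raw.lower()
    name = raw.split("=", 1)[0]
    print(f"{name}: Secure={'secure' in attrs} "
          f"HttpOnly={'httponly' in attrs} SameSite={'samesite' in attrs}")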

6.3 Testing for Session Fixation (WSTG-SESS-03)

  • Type: Manual
  • Tools: Burp Suite, browser
  • Techniques:
  • [ ] Attempt to set session ID before login
  • [ ] Check if session ID changes after authentication
  • [ ] Test session adoption
  • Expected Results: New session token issued after authentication
  • Red Flags: Reuse of pre-authentication session, no session regeneration on login

6.4 Testing for Exposed Session Variables (WSTG-SESS-04)

  • Type: Manual
  • Tools: Burp Suite, browser dev tools
  • Techniques:
  • [ ] Check for session tokens in URLs
  • [ ] Look for tokens in JavaScript
  • [ ] Test for session data in logs
  • [ ] Check referer leakage
  • Expected Results: Session variables properly protected
  • Red Flags: Session IDs in URLs, logs, or exposed to JavaScript

6.5 Testing for Cross Site Request Forgery (WSTG-SESS-05)

  • Type: Manual
  • Tools: CSRF PoC generator, Burp Suite
  • Techniques:
  • [ ] Check for CSRF tokens
  • [ ] Test state-changing operations
  • [ ] Verify token validation
  • [ ] Test token scope and persistence
  • Expected Results: CSRF protection on all state-changing operations
  • Red Flags: Missing CSRF tokens, token validation bypasses, predictable tokens

6.6 Testing for Logout Functionality (WSTG-SESS-06)

  • Type: Manual
  • Tools: Browser, Burp Suite
  • Techniques:
  • [ ] Verify session invalidation on logout
  • [ ] Test session reuse after logout
  • [ ] Check for proper cleanup of all session data
  • [ ] Test logout across all authenticated pages
  • Expected Results: Complete session termination on logout
  • Red Flags: Usable session after logout, incomplete session cleanup

6.7 Testing Session Timeout (WSTG-SESS-07)

  • Type: Manual
  • Tools: Browser, automated scripts
  • Techniques:
  • [ ] Test idle timeout implementation
  • [ ] Check absolute timeout enforcement
  • [ ] Verify timeout for different session types
  • [ ] Test timeout bypass techniques
  • Expected Results: Appropriate session timeout implemented
  • Red Flags: No timeout, excessive timeout period, inconsistent timeout implementation

6.8 Testing for Session Puzzling (WSTG-SESS-08)

  • Type: Manual
  • Tools: Burp Suite, custom scripts
  • Techniques:
  • [ ] Test for session variable overloading
  • [ ] Check variable influence across functions
  • [ ] Test session variable manipulation
  • Expected Results: Proper session variable isolation and validation
  • Red Flags: Variable overwriting, variable leakage between functions, variable confusion

Phase 7: Input Validation Testing

Objective: Verify that all input from users and external systems is properly validated and sanitized to prevent injection attacks and other input-based vulnerabilities.

7.1 Testing for Reflected Cross-Site Scripting (WSTG-INPV-01)

  • Type: Semi-automated
  • Tools: XSS Hunter, Burp Suite, ZAP, custom payloads
  • Techniques:
  • [ ] Test all input parameters
  • [ ] Try different XSS vectors
  • [ ] Check for context-specific payloads
  • [ ] Test encoding bypasses
  • Expected Results: No XSS vulnerabilities
  • Red Flags: Script execution, alert popups, DOM modification
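
A first-pass reflection check for 7.1 can be automated before moving to manual context analysis. This sketch injects a unique marker into a hypothetical parameter and reports whether it comes back unencoded; it flags candidates only and does not prove exploitability.

# reflect_probe.py - check whether a unique marker is reflected unencoded in the response
import requests

URL = "https://example.com/search"  # hypothetical endpoint
PARAM = "q"                         # hypothetical parameter
MARKER = 'zd0dys"><svg onload=alert(1)>'

r = requests.get(URL, params={PARAM: MARKER}, timeout=10)
if MARKER in r.text:
    print("[!] Marker reflected unencoded - candidate for reflected XSS, verify in a browser")
elif "zd0dys" in r.text:
    print("[*] Marker reflected but encoded/filtered - inspect the output context manually")
else:
    print("[-] No reflection observed for this parameter")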

7.2 Testing for Stored Cross-Site Scripting (WSTG-INPV-02)

  • Type: Manual
  • Tools: XSS payloads, Burp Suite
  • Techniques:
  • [ ] Identify storage points (comments, profiles)
  • [ ] Submit XSS payloads
  • [ ] Check rendering in different contexts
  • [ ] Test second-order XSS
  • Expected Results: Proper sanitization of stored user input
  • Red Flags: Persistent script execution, stored payloads reflected to other users

7.3 Testing for HTTP Verb Tampering (WSTG-INPV-03)

  • Type: Semi-automated
  • Tools: Burp Suite, curl, custom scripts
  • Techniques:
  • [ ] Test alternate HTTP methods
  • [ ] Check for method overriding
  • [ ] Test method handling
  • Expected Results: Proper HTTP method restrictions
  • Red Flags: Unauthorized methods allowed, security bypass via method switching

7.4 Testing for HTTP Parameter Pollution (WSTG-INPV-04)

  • Type: Manual
  • Tools: Burp Suite, custom scripts
  • Techniques:
  • [ ] Submit duplicate parameters
  • [ ] Test parameter interpretation
  • [ ] Check for server/application-specific behavior
  • Expected Results: Proper handling of duplicate parameters
  • Red Flags: Security bypasses, unexpected behavior with multiple parameters

7.5 Testing for SQL Injection (WSTG-INPV-05)

  • Type: Semi-automated
  • Tools: SQLmap, Burp Suite, manual SQL payloads
  • Techniques:
  • [ ] Test for error-based injection
  • [ ] Check for blind SQL injection
  • [ ] Test time-based techniques
  • [ ] Verify ORM/framework protections
  • Expected Results: No SQL injection vulnerabilities
  • Red Flags: Database errors, successful injection, data extraction
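
A crude time-based probe is sometimes useful for 7.5 before handing the parameter to sqlmap. The sketch below assumes a MySQL-style backend and a hypothetical id parameter; adapt the payload to the suspected DBMS and treat a delay as a lead, not a confirmed finding.

# sqli_time_probe.py - crude time-based blind SQL injection check (MySQL-style payload)
import time
import requests

URL = "https://example.com/item"           # hypothetical endpoint
BASELINE = {"id": "1"}                     # hypothetical parameter
INJECTED = {"id": "1' AND SLEEP(5)-- -"}   # adapt syntax to the suspected DBMS

def timed(params):
    start = time.monotonic()
    requests.get(URL, params=params, timeout=30)
    return time.monotonic() - start

base, delayed = timed(BASELINE), timed(INJECTED)
print(f"baseline {base:.2f}s, injected {delayed:.2f}s")
if delayed - base > 4:
    print("[!] Injected request was delayed - possible time-based SQLi, confirm with sqlmap")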

7.6 Testing for LDAP Injection (WSTG-INPV-06)

  • Type: Manual
  • Tools: LDAP injection payloads, Burp Suite
  • Techniques:
  • [ ] Test special LDAP characters
  • [ ] Try LDAP syntax in inputs
  • [ ] Check for error messages
  • Expected Results: Proper LDAP query sanitization
  • Red Flags: LDAP errors, authentication bypass, information disclosure

7.7 Testing for XML Injection (WSTG-INPV-07)

  • Type: Manual
  • Tools: XML payloads, Burp Suite
  • Techniques:
  • [ ] Inject XML metacharacters
  • [ ] Test XML structure tampering
  • [ ] Check for XML parser errors
  • Expected Results: Proper XML input handling
  • Red Flags: XML parsing errors, structure modification, injection success

7.8 Testing for SSI Injection (WSTG-INPV-08)

  • Type: Manual
  • Tools: SSI payloads, Burp Suite
  • Techniques:
  • [ ] Test SSI directives
  • [ ] Check for directive execution
  • [ ] Try command execution via SSI
  • Expected Results: No server-side includes execution
  • Red Flags: SSI directive execution, command output display

7.9 Testing for XPath Injection (WSTG-INPV-09)

  • Type: Manual
  • Tools: XPath payloads, Burp Suite
  • Techniques:
  • [ ] Test XPath syntax in inputs
  • [ ] Check for error messages
  • [ ] Try boolean XPath queries
  • Expected Results: Proper XPath query sanitization
  • Red Flags: XPath errors, authentication bypass, information disclosure

7.10 Testing for IMAP SMTP Injection (WSTG-INPV-10)

  • Type: Manual
  • Tools: Email injection payloads, Burp Suite
  • Techniques:
  • [ ] Test email command injection
  • [ ] Check protocol command handling
  • [ ] Verify input sanitization in email functionality
  • Expected Results: Proper email command sanitization
  • Red Flags: Command execution, unauthorized email operations

7.11 Testing for Code Injection (WSTG-INPV-11)

  • Type: Manual
  • Tools: Language-specific payloads, Burp Suite
  • Techniques:
  • [ ] Test various code evaluation contexts
  • [ ] Try language-specific injections
  • [ ] Check for remote code execution
  • Expected Results: No code execution from user input
  • Red Flags: Successful code execution, command output, system access

7.12 Testing for Command Injection (WSTG-INPV-12)

  • Type: Semi-automated
  • Tools: Command injection payloads, Burp Suite, commix
  • Techniques:
  • [ ] Test shell metacharacters
  • [ ] Try command chaining
  • [ ] Check for blind command injection
  • [ ] Test different encoding techniques
  • Expected Results: No command execution from user input
  • Red Flags: Command execution, output displayed, system access

7.13 Testing for Format String Injection (WSTG-INPV-13)

  • Type: Manual
  • Tools: Format string payloads, Burp Suite
  • Techniques:
  • [ ] Test format specifiers
  • [ ] Check for memory leaks or crashes
  • [ ] Look for unexpected output
  • Expected Results: Proper handling of format strings
  • Red Flags: Memory dumps, application crashes, format string evaluation

7.14 Testing for Incubated Vulnerability (WSTG-INPV-14)

  • Type: Manual
  • Tools: Delayed payloads, Burp Suite
  • Techniques:
  • [ ] Plant dormant payloads
  • [ ] Check for delayed execution
  • [ ] Test for second-order vulnerabilities
  • Expected Results: No delayed or stored vulnerability execution
  • Red Flags: Triggered payloads, delayed attacks, second-order vulnerabilities

7.15 Testing for HTTP Splitting Smuggling (WSTG-INPV-15)

  • Type: Manual
  • Tools: HTTP request smuggling payloads, Burp Suite
  • Techniques:
  • [ ] Test for CR/LF injection
  • [ ] Check request smuggling possibilities
  • [ ] Try HTTP response splitting
  • Expected Results: Proper HTTP request/response handling
  • Red Flags: Request smuggling success, HTTP response manipulation

7.16 Testing for HTTP Incoming Requests (WSTG-INPV-16)

  • Type: Manual
  • Tools: Burp Suite, custom scripts
  • Techniques:
  • [ ] Check HTTP request validation
  • [ ] Test origin verification
  • [ ] Verify input sanitization from HTTP headers
  • Expected Results: Proper validation of incoming HTTP requests
  • Red Flags: Lack of origin validation, header injection, request forgery

7.17 Testing for Host Header Injection (WSTG-INPV-17)

  • Type: Manual
  • Tools: Burp Suite, custom headers
  • Techniques:
  • [ ] Manipulate Host header
  • [ ] Test password reset functionality
  • [ ] Check email generation logic
  • [ ] Verify host-based access controls
  • Expected Results: No reliance on user-controllable Host header
  • Red Flags: Host header influence on security controls, email links, or access controls
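
For 7.17, a simple canary check shows whether a forged Host (or X-Forwarded-Host) value is echoed back. The URL and canary value are hypothetical; reflection only becomes a real issue when the value feeds password-reset links, caches, or access decisions.

# host_header_probe.py - check whether a forged Host header is reflected or trusted
import requests

URL = "https://example.com/"    # hypothetical target
CANARY = "attacker.invalid"     # value that should never appear in legitimate responses

for header in ("Host", "X-Forwarded-Host"):
    r = requests.get(URL, headers={header: CANARY}, timeout=10, allow_redirects=False)
    hit_body = CANARY in r.text
    hit_location = CANARY in r.headers.get("Location", "")
    print(f"{header:17} status={r.status_code} in_body={hit_body} in_Location={hit_location}")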

7.18 Testing for Server-side Template Injection (WSTG-INPV-18)

  • Type: Manual
  • Tools: Template injection payloads, Burp Suite, Tplmap
  • Techniques:
  • [ ] Test template syntax
  • [ ] Check for template evaluation
  • [ ] Try sandbox escapes
  • [ ] Verify template context
  • Expected Results: No template evaluation of user input
  • Red Flags: Template execution, sandbox escape, code execution

7.19 Testing for Server-Side Request Forgery (WSTG-INPV-19)

  • Type: Manual
  • Tools: SSRF payloads, Burp Suite, collaborator
  • Techniques:
  • [ ] Test URL input fields
  • [ ] Check file imports and API integrations
  • [ ] Try internal/localhost references
  • [ ] Use out-of-band detection
  • Expected Results: No server-side requests to arbitrary destinations
  • Red Flags: Internal service access, metadata access, request execution

Phase 8: Error Handling & Logging Testing

Objective: Verify that the application handles errors securely and maintains appropriate logging without revealing sensitive information.

8.1 Testing for Improper Error Handling (WSTG-ERRH-01)

  • Type: Manual
  • Tools: Burp Suite, browser
  • Techniques:
  • [ ] Force application errors
  • [ ] Check for stack traces
  • [ ] Analyze error messages
  • [ ] Test debug modes
  • Expected Results: Generic error messages without technical details
  • Red Flags: Stack traces, database errors, system paths, detailed exceptions

8.2 Testing for Stack Traces (WSTG-ERRH-02)

  • Type: Semi-automated
  • Tools: Burp Suite, error-provoking inputs
  • Techniques:
  • [ ] Submit malformed input
  • [ ] Force application errors
  • [ ] Check error responses
  • [ ] Test error handling mechanisms
  • Expected Results: No stack traces or technical error details
  • Red Flags: Stack traces, line numbers, file paths, framework details

Phase 9: Cryptography Testing

Objective: Evaluate the implementation of cryptographic functions to ensure they adequately protect sensitive data.

9.1 Testing for Weak Transport Layer Security (WSTG-CRYP-01)

  • Type: Automated
  • Tools: SSLyze, testssl.sh, Qualys SSL Labs
  • Techniques:
  • [ ] Check supported TLS versions
  • [ ] Test cipher suites
  • [ ] Verify certificate validity
  • [ ] Check for protocol vulnerabilities
  • Expected Results: Strong TLS configuration with modern protocols
  • Red Flags: Outdated TLS versions, weak ciphers, certificate issues
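
A lightweight version-support probe for 9.1 can be done with the Python standard library alone, as sketched below; results also depend on the local OpenSSL build, so use testssl.sh or SSLyze for authoritative coverage. The host is hypothetical.

# tls_versions.py - probe which TLS protocol versions the server will negotiate (stdlib only)
import socket
import ssl

HOST, PORT = "example.com", 443  # hypothetical target

for version in (ssl.TLSVersion.TLSv1, ssl.TLSVersion.TLSv1_1,
                ssl.TLSVersion.TLSv1_2, ssl.TLSVersion.TLSv1_3):
    ctx = ssl.create_default_context()
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE   # testing protocol support, not certificate trust
    ctx.minimum_version = version
    ctx.maximum_version = version
    try:
        with socket.create_connection((HOST, PORT), timeout=10) as sock:
            with ctx.wrap_socket(sock, server_hostname=HOST) as tls:
                print(f"[+] {version.name} accepted, cipher {tls.cipher()[0]}")
    except (ssl.SSLError, ValueError, OSError):
        print(f"[-] {version.name} rejected")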

9.2 Testing for Padding Oracle (WSTG-CRYP-02)

  • Type: Semi-automated
  • Tools: Padding oracle testing tools, Burp Suite
  • Techniques:
  • [ ] Identify encrypted parameters
  • [ ] Test padding manipulation
  • [ ] Check for different error responses
  • Expected Results: No padding oracle vulnerabilities
  • Red Flags: Different responses based on padding, decryption oracles

9.3 Testing for Sensitive Information Sent via Unencrypted Channels (WSTG-CRYP-03)

  • Type: Manual
  • Tools: Burp Suite, packet sniffers
  • Techniques:
  • [ ] Identify all data transmissions
  • [ ] Check protocol security
  • [ ] Verify all sensitive operations use HTTPS
  • [ ] Test downgrade attacks
  • Expected Results: All sensitive data transmitted over encrypted channels
  • Red Flags: Plaintext transmission of credentials, personal data, or security tokens

9.4 Testing for Weak Encryption (WSTG-CRYP-04)

  • Type: Manual
  • Tools: Encryption analysis tools, Burp Suite
  • Techniques:
  • [ ] Identify encryption methods used
  • [ ] Check for weak algorithms
  • [ ] Test key management
  • [ ] Verify encryption implementation
  • Expected Results: Strong, properly implemented encryption
  • Red Flags: Weak algorithms, poor key management, homegrown encryption

Phase 10: Business Logic Testing

Objective: Identify flaws in the application’s business logic that could allow users to perform unintended actions or bypass security controls.

10.1 Test Business Logic Data Validation (WSTG-BUSL-01)

  • Type: Manual
  • Tools: Burp Suite, manual testing
  • Techniques:
  • [ ] Identify business rules
  • [ ] Test boundary conditions
  • [ ] Check negative scenarios
  • [ ] Verify data consistency
  • Expected Results: Proper validation of business rules
  • Red Flags: Inconsistent rule application, rule bypass, flawed logic

10.2 Test Ability to Forge Requests (WSTG-BUSL-02)

  • Type: Manual
  • Tools: Burp Suite, custom scripts
  • Techniques:
  • [ ] Analyze request sequences
  • [ ] Test forced browsing
  • [ ] Check for predictable parameters
  • [ ] Verify workflow enforcement
  • Expected Results: Proper sequence and workflow validation
  • Red Flags: Request forgery success, bypassed workflow steps

10.3 Test Integrity Checks (WSTG-BUSL-03)

  • Type: Manual
  • Tools: Burp Suite, proxy tools
  • Techniques:
  • [ ] Identify client-side calculations
  • [ ] Test price/quantity manipulation
  • [ ] Check total calculations
  • [ ] Verify server-side validation
  • Expected Results: Server-side verification of all calculations
  • Red Flags: Client-controlled calculations, missing integrity checks

10.4 Test for Process Timing (WSTG-BUSL-04)

  • Type: Manual
  • Tools: Burp Suite, custom scripts
  • Techniques:
  • [ ] Identify time-dependent operations
  • [ ] Test race conditions
  • [ ] Check transaction timing
  • [ ] Verify time-based restrictions
  • Expected Results: Proper handling of timing issues
  • Red Flags: Race conditions, time manipulation, process issues

10.5 Test Number of Times a Function Can Be Used Limits (WSTG-BUSL-05)

  • Type: Manual
  • Tools: Burp Suite, custom scripts
  • Techniques:
  • [ ] Identify limited operations
  • [ ] Test for counter bypasses
  • [ ] Check rate limiting
  • [ ] Verify usage restrictions
  • Expected Results: Proper enforcement of usage limits
  • Red Flags: Limit bypasses, counter manipulation, excessive usage

10.6 Testing for the Circumvention of Work Flows (WSTG-BUSL-06)

  • Type: Manual
  • Tools: Burp Suite, browser
  • Techniques:
  • [ ] Map application workflow
  • [ ] Test step skipping
  • [ ] Check for forced browsing
  • [ ] Verify sequence validation
  • Expected Results: Proper workflow enforcement
  • Red Flags: Step skipping, workflow circumvention, order manipulation

10.7 Test Defenses Against Application Misuse (WSTG-BUSL-07)

  • Type: Manual
  • Tools: Burp Suite, automated scripts
  • Techniques:
  • [ ] Test rate limiting
  • [ ] Check for anti-automation
  • [ ] Verify CAPTCHA effectiveness
  • [ ] Test brute force protections
  • Expected Results: Effective anti-abuse protections
  • Red Flags: Missing rate limits, ineffective CAPTCHAs, automation vulnerabilities

10.8 Test Upload of Unexpected File Types (WSTG-BUSL-08)

  • Type: Manual
  • Tools: Various file types, Burp Suite
  • Techniques:
  • [ ] Test MIME type validation
  • [ ] Try extension manipulation
  • [ ] Check content validation
  • [ ] Test malicious file uploads
  • Expected Results: Proper file type validation
  • Red Flags: Acceptance of dangerous file types, execution of uploaded content

10.9 Test Upload of Malicious Files (WSTG-BUSL-09)

  • Type: Manual
  • Tools: Malicious file samples, Burp Suite
  • Techniques:
  • [ ] Test malware detection
  • [ ] Try obfuscated malicious files
  • [ ] Check file processing
  • [ ] Verify file content scanning
  • Expected Results: Rejection of malicious files
  • Red Flags: Successful malicious file uploads, execution of uploaded content

Phase 11: Client-Side Testing

Objective: Identify vulnerabilities in the client-side code that could allow attacks against users or manipulation of the application’s behavior.

11.1 Testing for DOM-Based Cross-Site Scripting (WSTG-CLNT-01)

  • Type: Semi-automated
  • Tools: DOM XSS scanner, Burp Suite, browser dev tools
  • Techniques:
  • [ ] Identify DOM manipulation points
  • [ ] Test URL fragment handling
  • [ ] Check client-side template usage
  • [ ] Verify data flow to dangerous sinks
  • Expected Results: Proper sanitization of data used in DOM
  • Red Flags: DOM XSS execution, client-side code injection

11.2 Testing for JavaScript Execution (WSTG-CLNT-02)

  • Type: Manual
  • Tools: Browser dev tools, Burp Suite
  • Techniques:
  • [ ] Analyze JavaScript execution flow
  • [ ] Test injection in JavaScript contexts
  • [ ] Check for unsafe eval() usage
  • [ ] Verify content security policy
  • Expected Results: Secure JavaScript execution
  • Red Flags: Unsafe eval(), code injection, CSP bypasses

11.3 Testing for HTML Injection (WSTG-CLNT-03)

  • Type: Manual
  • Tools: Burp Suite, browser
  • Techniques:
  • [ ] Test for HTML injection points
  • [ ] Check rendering contexts
  • [ ] Verify HTML sanitization
  • Expected Results: Proper HTML encoding/sanitization
  • Red Flags: Successful HTML injection, DOM manipulation

11.4 Testing for Client-Side URL Redirect (WSTG-CLNT-04)

  • Type: Manual
  • Tools: Burp Suite, browser
  • Techniques:
  • [ ] Identify redirect functionality
  • [ ] Test for open redirects
  • [ ] Check redirect validation
  • [ ] Try relative URL bypasses
  • Expected Results: Proper validation of redirect targets
  • Red Flags: Open redirects, phishing opportunities

11.5 Testing for CSS Injection (WSTG-CLNT-05)

  • Type: Manual
  • Tools: Burp Suite, browser dev tools
  • Techniques:
  • [ ] Identify CSS injection points
  • [ ] Test style attribute manipulation
  • [ ] Check for CSS attack vectors
  • [ ] Verify sanitization of CSS
  • Expected Results: Proper CSS sanitization
  • Red Flags: CSS injection, data exfiltration via CSS, UI manipulation

11.6 Testing for Client-Side Resource Manipulation (WSTG-CLNT-06)

  • Type: Manual
  • Tools: Browser dev tools, Burp Suite
  • Techniques:
  • [ ] Identify resource inclusion
  • [ ] Test for path manipulation
  • [ ] Check cross-domain resource loading
  • [ ] Verify resource integrity
  • Expected Results: Secure resource handling
  • Red Flags: Resource manipulation, unauthorized resource loading

11.7 Testing Cross Origin Resource Sharing (WSTG-CLNT-07)

  • Type: Manual
  • Tools: Burp Suite, custom scripts
  • Techniques:
  • [ ] Check CORS headers
  • [ ] Test wildcard origins
  • [ ] Verify credentials handling
  • [ ] Test preflight requests
  • Expected Results: Properly restricted CORS policy
  • Red Flags: Overly permissive CORS, wildcard with credentials

11.8 Testing for Cross Site Flashing (WSTG-CLNT-08)

  • Type: Manual
  • Tools: Flash decompilers, Burp Suite
  • Techniques:
  • [ ] Analyze Flash objects
  • [ ] Check crossdomain.xml
  • [ ] Test for ExternalInterface issues
  • [ ] Verify SWF security
  • Expected Results: Secure Flash implementation (if used)
  • Red Flags: Insecure Flash objects, permissive crossdomain.xml

11.9 Testing for Clickjacking (WSTG-CLNT-09)

  • Type: Manual
  • Tools: Iframe test pages, Burp Suite
  • Techniques:
  • [ ] Test iframe embedding
  • [ ] Check X-Frame-Options header
  • [ ] Verify CSP frame-ancestors
  • [ ] Test for UI redressing
  • Expected Results: Protection against framing
  • Red Flags: Successful framing, missing X-Frame-Options/CSP
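
The header side of 11.9 is trivial to automate; framing still needs a manual iframe proof of concept. A minimal sketch against a hypothetical URL:

# framing_check.py - check anti-clickjacking headers on a page
import requests

URL = "https://example.com/"  # hypothetical target

headers = requests.get(URL, timeout=10).headers
xfo = headers.get("X-Frame-Options")
csp = headers.get("Content-Security-Policy", "")
fa = next((d.strip() for d in csp.split(";") if "frame-ancestors" in d.lower()), None)

print("X-Frame-Options     :", xfo or "missing")
print("CSP frame-ancestors :", fa or "missing")
if not xfo and not fa:
    print("[!] No framing protection seen - confirm with a simple <iframe> proof of concept")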

11.10 Testing WebSockets (WSTG-CLNT-10)

  • Type: Manual
  • Tools: Burp Suite, browser dev tools
  • Techniques:
  • [ ] Identify WebSocket connections
  • [ ] Test authentication
  • [ ] Check input validation
  • [ ] Verify origin checks
  • Expected Results: Secure WebSocket implementation
  • Red Flags: Missing authentication, insufficient validation, origin bypasses

11.11 Testing Web Messaging (WSTG-CLNT-11)

  • Type: Manual
  • Tools: Browser dev tools, custom scripts
  • Techniques:
  • [ ] Identify postMessage usage
  • [ ] Test origin validation
  • [ ] Check message handling
  • [ ] Verify input sanitization
  • Expected Results: Secure web messaging implementation
  • Red Flags: Missing origin checks, unsafe message handling

11.12 Testing Browser Storage (WSTG-CLNT-12)

  • Type: Manual
  • Tools: Browser dev tools, Burp Suite
  • Techniques:
  • [ ] Check localStorage/sessionStorage usage
  • [ ] Identify sensitive data storage
  • [ ] Test for persistent XSS via storage
  • [ ] Verify storage security
  • Expected Results: No sensitive data in browser storage
  • Red Flags: Credentials in storage, insecure data persistence

11.13 Testing for Cross Site Script Inclusion (WSTG-CLNT-13)

  • Type: Manual
  • Tools: Burp Suite, browser
  • Techniques:
  • [ ] Identify JSONP endpoints
  • [ ] Test for script inclusion
  • [ ] Check callback validation
  • [ ] Verify content type enforcement
  • Expected Results: Secure JSONP implementation or alternatives
  • Red Flags: JSONP with sensitive data, missing callback validation

Phase 12: API Testing

Objective: Evaluate the security of API endpoints, focusing on authentication, authorization, and data validation.

12.1 Testing GraphQL

  • Type: Semi-automated
  • Tools: GraphQL-specific tools, Burp Suite, Insomnia/Postman
  • Techniques:
  • [ ] Test introspection
  • [ ] Check query depth/complexity
  • [ ] Verify authentication/authorization
  • [ ] Test for injection in queries
  • Expected Results: Secure GraphQL implementation
  • Red Flags: Excessive introspection, missing query limits, auth bypasses
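
The introspection check in 12.1 is a single POST. The endpoint path below is hypothetical, and introspection being enabled is only a finding in context (for example, on a production API that should hide its schema).

# graphql_introspection.py - check whether schema introspection is enabled
import requests

ENDPOINT = "https://example.com/graphql"  # hypothetical endpoint path
query = {"query": "{ __schema { types { name } } }"}

r = requests.post(ENDPOINT, json=query, timeout=10)
body = r.json()
schema = (body.get("data") or {}).get("__schema")
if schema:
    names = [t["name"] for t in schema["types"]]
    print(f"[!] Introspection enabled - {len(names)} types exposed, e.g. {names[:8]}")
else:
    print("[-] Introspection disabled or restricted:", body.get("errors"))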

12.2 Testing RESTful APIs

  • Type: Semi-automated
  • Tools: Burp Suite, Postman/Insomnia, custom scripts
  • Techniques:
  • [ ] Enumerate endpoints
  • [ ] Test HTTP methods
  • [ ] Check authentication
  • [ ] Verify authorization
  • [ ] Test parameter validation
  • Expected Results: Secure REST API implementation
  • Red Flags: Missing auth, BOLA vulnerabilities, insufficient validation

12.3 Testing API Documentation

  • Type: Manual
  • Tools: Swagger/OpenAPI parsers, Postman
  • Techniques:
  • [ ] Review API documentation
  • [ ] Check for hidden endpoints
  • [ ] Test endpoints from docs
  • [ ] Verify security controls
  • Expected Results: Accurate documentation without security issues
  • Red Flags: Exposed test endpoints, undocumented features, excessive information

12.4 Testing for Mass Assignment

  • Type: Manual
  • Tools: Burp Suite, Postman
  • Techniques:
  • [ ] Identify object creation/update operations
  • [ ] Add unexpected properties
  • [ ] Test for privilege escalation
  • [ ] Check property overwriting
  • Expected Results: Property-level access control
  • Red Flags: Successful manipulation of restricted properties, role escalation

12.5 Testing for Rate Limiting

  • Type: Semi-automated
  • Tools: Custom scripts, Burp Suite
  • Techniques:
  • [ ] Test request frequency
  • [ ] Check for rate limit headers
  • [ ] Try rate limit bypasses
  • [ ] Verify limit consistency
  • Expected Results: Proper rate limiting implementation
  • Red Flags: Missing rate limits, easy bypasses, inconsistent enforcement
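
Rate-limit probing is easy to over-do; keep the burst size within the engagement's rules of engagement. This sketch fires a fixed number of requests at a hypothetical endpoint and reports the status-code distribution and any throttling headers.

# rate_limit_probe.py - burst a fixed number of requests and look for throttling
import requests

URL = "https://example.com/api/login"  # hypothetical endpoint
ATTEMPTS = 50                          # keep within agreed limits

codes = {}
session = requests.Session()
for _ in range(ATTEMPTS):
    r = session.post(URL, json={"username": "test", "password": "wrong"}, timeout=10)
    codes[r.status_code] = codes.get(r.status_code, 0) + 1

limit_headers = {k: v for k, v in r.headers.items()
                 if "ratelimit" in k.lower() or k.lower() == "retry-after"}
print("status code distribution:", codes)
print("rate-limit headers      :", limit_headers or "none observed")
if 429 not in codes:
    print(f"[!] No 429 responses after {ATTEMPTS} rapid attempts - rate limiting may be absent")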

Customization Guide

This framework can be adapted for various testing scenarios:

Bug Bounty Customization

  • Focus Areas:
  • Information gathering (identify scope and attack surface)
  • Input validation (XSS, SQLi, SSRF)
  • Authentication/authorization bypasses
  • Business logic flaws
  • Optimization:
  • Skip formal documentation unless required
  • Focus on high-impact vulnerabilities
  • Use automation for initial scanning
  • Prioritize tests based on program scope
  • Workflow Adjustments:
  • Start with passive reconnaissance
  • Run automated scans while performing manual tests
  • Focus on unique/creative attack vectors
  • Document findings according to program guidelines

Enterprise Application Testing

  • Focus Areas:
  • Complete methodology execution
  • Comprehensive documentation
  • Risk assessment for findings
  • Remediation guidance
  • Optimization:
  • Adapt to compliance requirements (PCI-DSS, HIPAA, etc.)
  • Include business impact context in findings
  • Balance depth vs. breadth based on time constraints
  • Prioritize tests based on application sensitivity
  • Workflow Adjustments:
  • Formal kickoff and reporting
  • Scheduled testing windows
  • Clear communication channels
  • Detailed evidence collection
  • Executive and technical reporting

CI/CD Pipeline / DevSecOps Integration

  • Focus Areas:
  • Automated testing components
  • Quick feedback loops
  • Integration with development workflow
  • Continuous monitoring
  • Optimization:
  • Implement SAST, DAST, and dependency scanning
  • Create custom scripts for business logic tests
  • Set appropriate severity thresholds for pipeline breaks
  • Balance speed vs. coverage
  • Workflow Adjustments:
  • Integrate test cases into build pipelines
  • Implement pre-commit and pre-deployment hooks
  • Create fast-feedback automated tests
  • Reserve complex testing for scheduled deeper scans

API-Only Targets

  • Focus Areas:
  • API-specific testing (Phase 12)
  • Authentication mechanisms
  • Authorization controls
  • Input validation
  • Rate limiting and resource controls
  • Optimization:
  • Skip client-side testing
  • Focus on data validation and business logic
  • Test for API-specific issues (mass assignment, BOLA)
  • Use API documentation for test case generation
  • Workflow Adjustments:
  • Use API testing tools (Postman, Insomnia)
  • Create collection of test cases for reuse
  • Focus on authorization testing between endpoints
  • Test for API versioning issues

Automation Strategies

Tool Integration

  • Burp Suite:
  • Use for proxy capture, scanning, and manual testing
  • Integrate extensions for specialized testing
  • Create custom macros for complex workflows
  • Utilize Intruder for parameter fuzzing
  • Implement session handling rules
  • OWASP ZAP:
  • Configure for CI/CD pipeline integration
  • Use Baseline scan for quick checks
  • Implement Full scan for comprehensive testing
  • Utilize API scan for API-specific testing
  • Create custom scripts for business logic testing
  • Nuclei:
  • Develop custom templates for application-specific tests
  • Use for fast, signature-based scanning
  • Integrate into CI/CD pipeline
  • Customize severity levels for findings
  • Create templates for business logic issues
  • Nikto:
  • Use for quick configuration checks
  • Integrate for known vulnerability scanning
  • Filter results to reduce false positives
  • Focus on server misconfiguration detection
  • ffuf:
  • Implement for directory and endpoint discovery
  • Use for parameter fuzzing
  • Create custom wordlists for application context
  • Integrate into reconnaissance automation

Custom Script Development

  • Python Scripts:
  • Develop for custom authentication testing
  • Create for business logic validation
  • Implement for API endpoint enumeration
  • Use for data extraction and analysis
  • Build for report generation
  • Bash Scripts:
  • Create for automation orchestration
  • Develop for tool integration
  • Implement for quick reconnaissance
  • Use for result filtering and aggregation
  • JavaScript/Node.js:
  • Build for client-side testing automation
  • Create for DOM-based vulnerability detection
  • Implement for WebSocket testing
  • Use for API fuzzing
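
As a concrete example of the orchestration idea above, the sketch below runs two recon tools and collects their output under a per-target folder. The target, tool selection, and flags are illustrative; verify flags against the tool versions installed in your environment.

# recon_orchestrator.py - run a couple of recon tools and collect output per target
import pathlib
import subprocess

TARGET = "example.com"  # hypothetical in-scope target
outdir = pathlib.Path("recon") / TARGET
outdir.mkdir(parents=True, exist_ok=True)

jobs = [
    ["nmap", "-sV", "-T4", "-oN", str(outdir / "nmap.txt"), TARGET],
    ["whatweb", f"--log-brief={outdir / 'whatweb.txt'}", f"https://{TARGET}"],
]

for cmd in jobs:
    print("[*] running:", " ".join(cmd))
    result = subprocess.run(cmd, capture_output=True, text=True)
    (outdir / f"{cmd[0]}.stdout").write_text(result.stdout)
    if result.returncode != 0:
        print(f"[!] {cmd[0]} exited {result.returncode}: {result.stderr.strip()[:200]}")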

Automation Priorities

  1. Information Gathering:
     • Subdomain enumeration
     • Technology fingerprinting
     • Directory brute forcing
     • Endpoint discovery
  2. Vulnerability Scanning:
     • Common web vulnerabilities
     • Known CVEs
     • Default credentials
     • Misconfigurations
  3. Authentication Testing:
     • Credential brute forcing
     • Session validation
     • Authentication flow testing
  4. Input Validation:
     • Parameter fuzzing
     • Injection testing
     • XSS scanning
     • Automated payload delivery
  5. Advanced Testing (requires manual verification):
     • Business logic fuzzing
     • Custom exploits
     • Complex attack chains
     • Privilege escalation

Evidence Collection & Reporting

Folder Structure

/project-name/
├── reconnaissance/
│   ├── subdomains.txt
│   ├── technologies.json
│   ├── directories.txt
│   └── endpoints.txt
├── vulnerabilities/
│   ├── vuln-001-sqli/
│   │   ├── description.md
│   │   ├── screenshots/
│   │   ├── request-response/
│   │   └── poc.py
│   ├── vuln-002-xss/
│   │   ├── ...
├── tools/
│   ├── scripts/
│   ├── wordlists/
│   └── configs/
├── evidence/
│   ├── raw-data/
│   ├── processed-results/
│   └── logs/
└── reports/
    ├── executive-summary.md
    ├── technical-details.md
    ├── appendices/
    └── remediation-plan.md

Naming Conventions

  • Screenshots: [vulnerability-id]_[component]_[step]_[YYYYMMDD].png
  • Example: vuln-003_login-bypass_auth-response_20230315.png
  • Request/Response: [vulnerability-id]_[HTTP-method]_[endpoint]_[YYYYMMDD].txt
  • Example: vuln-005_POST_api-users_20230316.txt
  • Log Files: [tool]_[target]_[YYYYMMDD]_[HHMMSS].log
  • Example: nmap_example.com_20230314_153045.log
  • Proof of Concept: [vulnerability-id]_[vulnerability-type]_poc.[extension]
  • Example: vuln-007_csrf_poc.html
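
A small helper keeps these conventions consistent across a team. This is an illustrative sketch, not part of the original framework tooling.

# evidence_names.py - build evidence filenames that follow the conventions above
from datetime import datetime

def screenshot_name(vuln_id, component, step, when=None):
    when = when or datetime.now()
    return f"{vuln_id}_{component}_{step}_{when:%Y%m%d}.png"

def log_name(tool, target, when=None):
    when = when or datetime.now()
    return f"{tool}_{target}_{when:%Y%m%d}_{when:%H%M%S}.log"

print(screenshot_name("vuln-003", "login-bypass", "auth-response"))
print(log_name("nmap", "example.com"))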

Evidence Collection Guidelines

  1. Screenshots:
     • Capture full browser window when possible
     • Include developer tools when relevant
     • Highlight critical areas
     • Redact sensitive information
     • Document steps to reproduce
  2. HTTP Traffic:
     • Save full request and response
     • Include headers and body
     • Highlight relevant portions
     • Document sequence of requests
     • Note authentication context
  3. Tool Output:
     • Save raw and processed output
     • Document tool version and parameters
     • Include command-line arguments
     • Filter out false positives
     • Maintain original timestamps
  4. Exploitation Proof:
     • Document all steps to reproduce
     • Create minimal working example
     • Include necessary setup instructions
     • Provide cleanup procedures
     • Ensure repeatable results

Reporting Structure

  1. Executive Summary:
     • Overall risk assessment
     • Key findings summary
     • Recommendations overview
     • Testing methodology
     • Scope and limitations
  2. Technical Findings:
     • Vulnerability details
     • Risk rating (CVSS score)
     • Steps to reproduce
     • Impact description
     • Evidence references
     • Remediation recommendations
  3. Vulnerability Categories:
     • Group by OWASP Top 10 or WSTG categories
     • Prioritize by risk level
     • Include vulnerability count by category
     • Provide category-specific recommendations
  4. Remediation Plan:
     • Prioritized actions
     • Short-term fixes
     • Long-term improvements
     • Resource requirements
     • Verification methods
  5. Appendices:
     • Testing tools and versions
     • Methodology details
     • Raw scanning results
     • Glossary of terms
     • References and resources

Report Delivery Format

  • Executive Report: PDF with executive summary and high-level findings
  • Technical Report: Full PDF with detailed findings and evidence
  • Remediation Tracker: Spreadsheet with findings and remediation status
  • Evidence Package: Compressed archive with organized evidence
  • Presentation: Slides for stakeholder debriefing

Tags

Security, Cybersecurity, Information Security

Victor Nthuli

Security Operations Engineer specializing in incident response, threat hunting, and compliance alignment for regulated industries.

