Use Cases

When to use it

PassingCrackwords is purpose-built for authorized penetration testing and security validation. Here are the primary engagement scenarios and recommended workflows.

01
Pentera Automated Validation
SITE SCRAPE + CITY AI

Enrich a Pentera campaign with organization-specific vocabulary. Scrape the target's website to extract brand terms, executive names, and domain jargon. City detection appends location-based words automatically.

  1. Enter the target's primary domain URL
  2. Enable city detection (on by default)
  3. Download the .txt output
  4. Upload to Pentera as a custom wordlist
  5. Pentera's rule engine handles variant expansion

02
Active Directory Password Audit
LOCATION AI + SITE SCRAPE

Generate a targeted candidate list for Hashcat against a dumped NTDS. Combine a site scrape with a location-based wordlist for maximum coverage, then apply d3adOne.rule for full expansion.

  1. Run site scrape with city detection enabled
  2. Run a separate Location AI for the HQ city if not auto-detected
  3. Merge both wordlists: cat scrape_output.txt location_output.txt | sort -u > combined.txt
  4. Attack: hashcat -m 1000 hashes.ntds combined.txt -r d3adOne.rule

03
External Network Credential Spray
SITE SCRAPE

Build a low-noise credential spray list for OWA, VPN, or other external-facing authentication endpoints. Site scrape produces the most targeted candidate set with minimum irrelevant words.

  1. Scrape the target's external-facing site
  2. Increase min word length to 6 to reduce noise
  3. Download and use with Medusa or Burp Intruder
  4. Keep the spray rate low to avoid triggering account lockout policies
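If a list was already generated with a lower threshold, short words can also be stripped after download; a minimal post-filter sketch with awk (the sample list is a stand-in for real scrape output):

```shell
# Stand-in for a downloaded scrape list (real output comes from the tool)
printf 'acme\nwidgets\nhou\nhoustonian\n' > spray_raw.txt

# Keep only candidates of 6+ characters, mirroring the min-length setting
awk 'length($0) >= 6' spray_raw.txt > spray.txt
cat spray.txt   # widgets, houstonian
```
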

04
Regional Engagement — Unknown Target
LOCATION AI

Use Location AI when you know the target's geography but don't yet have a URL, or when the site is a client-rendered SPA that won't scrape. It generates 1,000 culturally relevant words in seconds.

  1. Enter city, state, or region in Location AI
  2. Download and apply rules for full expansion
  3. Use as a baseline; supplement with site scrape when the URL is available
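The rule expansion itself happens in the cracking tool, but as a toy illustration of what it does (this is not Hashcat's rule engine, just two common mangles sketched in awk: capitalize and append a year):

```shell
# Stand-in for a Location AI download
printf 'astros\nrockets\n' > location.txt

# Toy expansion: emit the base word, a capitalized form, and a year-suffixed form
awk '{ print $0
       print toupper(substr($0,1,1)) substr($0,2)
       print $0 "2024" }' location.txt > expanded.txt

cat expanded.txt
```

A real rule file such as d3adOne.rule applies hundreds of these transforms per word, which is why the doc recommends leaving expansion to the rule engine.
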

05
Red Team Pre-Engagement Prep
LOCATION AI + SITE SCRAPE

Build a comprehensive wordlist before the engagement begins. Combine site vocabulary with city-level cultural references for the broadest targeted coverage.

  1. Scrape the primary domain (city detection on)
  2. Scrape subsidiary or acquisition domains separately
  3. Run Location AI for known office cities
  4. Merge, deduplicate, and stage for the engagement
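The merge-and-stage step can be sketched as follows (all filenames and contents are illustrative stand-ins for the individual downloads):

```shell
# Stand-ins for the individual tool downloads
printf 'acme\nwidget\n' > scrape_primary.txt
printf 'acme\nsubco\n' > scrape_subsidiary.txt
printf 'houston\nchicago\n' > location_cities.txt

# Merge, deduplicate, and stage a single engagement wordlist
cat scrape_primary.txt scrape_subsidiary.txt location_cities.txt \
  | sort -u > engagement_wordlist.txt
wc -l < engagement_wordlist.txt   # 5 unique words
```
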

06
Password Policy Gap Assessment
SITE SCRAPE

Demonstrate to a client that their own website content forms the basis of many of their employees' passwords. Running the site-scrape output against their AD hash dump produces a compelling evidence artifact.

  1. Scrape the client's own domain
  2. Run against a sample of their AD hashes
  3. Present crack rate as evidence of policy weakness
  4. Recommend longer minimum length, banned word lists
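The crack-rate figure itself is simple arithmetic once the counts are in hand; a sketch with placeholder numbers (in practice the totals would come from the hash file and, assuming the usual potfile workflow, from hashcat's --show output):

```shell
# Placeholder counts; in practice take them from the real artifacts:
#   total:   wc -l < hashes.ntds
#   cracked: hashcat -m 1000 hashes.ntds --show | wc -l
total=4200
cracked=1134

awk -v c="$cracked" -v t="$total" 'BEGIN { printf "%.1f%% cracked\n", 100 * c / t }'
# prints "27.0% cracked"
```
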

Recommended Workflow — Full Engagement

01

Site Scrape with City Detection

Enter the primary target domain. Leave city detection on. This produces a combined wordlist of scraped vocabulary plus location-specific terms, typically 4,000–5,500 unique words.
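A quick sanity check on the download is to count unique words and confirm it lands near that range; sketched here with a tiny stand-in file:

```shell
# Tiny stand-in for the downloaded list
printf 'acme\nhouston\nacme\n' > scrape_output.txt

# Unique-word count; a real scrape should land roughly in the 4,000-5,500 range
sort -u scrape_output.txt | wc -l   # prints 2 for this sample
```
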

02

Supplement with Location AI (optional)

If the auto-detected city is wrong or the company has multiple major office locations, run a manual Location AI pass for each city and merge the outputs.

# Merge and deduplicate
cat scrape_output.txt location_houston.txt location_chicago.txt | sort -u > combined.txt

03

Apply Rules at Crack Time

Feed the combined wordlist into your cracking tool with a rule file. Do not pre-expand the list; rule engines handle expansion more efficiently and comprehensively at crack time.

# Hashcat — NTLM with d3adOne rule
hashcat -m 1000 -a 0 hashes.ntds combined.txt -r d3adOne.rule

# Pentera — upload combined.txt as custom wordlist directly

04

Iterate

If initial results are low, scrape subsidiary domains, crawl deeper into the site (increase crawl depth to 2), or add more cities. The tool is fast enough to iterate multiple times within a session.
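When re-scraping, comm shows exactly what a deeper crawl added over the first pass (sample files are stand-ins for real outputs):

```shell
# Stand-ins for a first-pass scrape and a deeper re-crawl
printf 'acme\nwidget\n' > pass1.txt
printf 'acme\nwidget\ncareers\ninvestor\n' > pass2.txt

# comm needs sorted input; -13 keeps lines unique to the second file
sort pass1.txt > p1.txt
sort pass2.txt > p2.txt
comm -13 p1.txt p2.txt > delta.txt
cat delta.txt   # careers, investor
```

If the delta is small, the extra crawl depth or city added little and the next iteration should try a different source instead.
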