dirsearch - Web path scanner

Hacking is not a crime

Current Release: v0.4.1 (2020.12.8)

Overview

  • Dirsearch is a mature command-line tool designed to brute force directories and files in web servers.

  • With 6 years of growth, dirsearch has become the top web content scanner.

  • As a feature-rich tool, dirsearch gives users the opportunity to perform complex web content discovery, with many wordlist vectors, high accuracy, impressive performance, advanced connection/request settings, modern brute-force techniques and nice output.

  • Dirsearch is being actively developed by @maurosoria and @shelld3v.

Installation & Usage

git clone https://github.com/maurosoria/dirsearch.git
cd dirsearch
python3 dirsearch.py -u <URL> -e <EXTENSIONS>
  • To use a SOCKS proxy or to handle ../ in the wordlist, you need to install the dependencies from requirements.txt: pip3 install -r requirements.txt

  • If you are using Windows and don't have git, you can download the ZIP file here. Dirsearch also supports Docker

Dirsearch requires Python 3 or newer

Features

  • Fast
  • Easy and simple to use
  • Multithreading
  • Wildcard response filtering (invalid webpages)
  • Keep-alive connections
  • Support for multiple extensions
  • Support for every HTTP method
  • Support for HTTP request data
  • Support for raw request
  • Extension exclusion
  • Reporting (Plain text, JSON, XML, Markdown, CSV)
  • Recursive brute forcing
  • Target enumeration from an IP range
  • Sub-directories brute forcing
  • Force extensions
  • HTTP and SOCKS proxy support
  • HTTP cookies and headers support
  • HTTP headers from file
  • User agent randomization
  • Proxy host randomization
  • Batch processing
  • Request delaying
  • 429 response code detection
  • Multiple wordlist formats (lowercase, uppercase, capitalization)
  • Default configuration from file
  • Option to force requests by hostname
  • Option to add custom suffixes and prefixes
  • Option to whitelist response codes, support ranges (-i 200,300-399)
  • Option to blacklist response codes, support ranges (-x 404,500-599)
  • Option to exclude responses by sizes
  • Option to exclude responses by texts
  • Option to exclude responses by regexp(s)
  • Option to exclude responses by redirects
  • Options to display only items with response length from range
  • Option to remove all extensions from every wordlist entry
  • Quiet mode
  • Debug mode

About wordlists

Summary: The wordlist must be a text file in which each line is an endpoint. Regarding extensions: unlike other tools, dirsearch won't append extensions to every word unless you use the -f flag. By default, only the %EXT% keyword in the wordlist will be replaced with the extensions (-e <extensions>).

Details:

  • Each line in the wordlist will be processed as-is, except when the special keyword %EXT% is used: such a line generates one entry for each extension (-e | --extensions) passed as an argument.

Example:

root/
index.%EXT%

Passing the extensions "asp" and "aspx" (-e asp,aspx) will generate the following dictionary:

root/
index
index.asp
index.aspx
  • For wordlists without %EXT% (like SecLists), you need to use the -f | --force-extensions switch to append extensions, as well as a trailing "/", to every word in the wordlist. For entries that you do not want to force, add %NOFORCE% at the end of them so dirsearch won't append any extension (a short sketch of this expansion logic follows the example below).

Example:

admin
home.%EXT%
api%NOFORCE%

Passing extensions "php" and "html" with the -f/--force-extensions flag (-f -e php,html) will generate the following dictionary:

admin
admin.php
admin.html
admin/
home
home.php
home.html
api
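
To make the rules above concrete, here is a minimal Python sketch of the expansion behavior, written purely as an illustration of the two examples above (it is not dirsearch's actual implementation):

def expand(lines, extensions, force=False):
    out = []
    for line in lines:
        if "%EXT%" in line:
            # a %EXT% line yields a bare entry plus one entry per extension
            out.append(line.replace(".%EXT%", "").replace("%EXT%", ""))
            out.extend(line.replace("%EXT%", ext) for ext in extensions)
        elif line.endswith("%NOFORCE%"):
            # never force extensions on %NOFORCE% entries
            out.append(line[:-len("%NOFORCE%")])
        elif force and not line.endswith("/"):
            # -f / --force-extensions: bare word, word.<ext> for each extension, and word/
            out.append(line)
            out.extend(line + "." + ext for ext in extensions)
            out.append(line + "/")
        else:
            out.append(line)
    return out

print(expand(["root/", "index.%EXT%"], ["asp", "aspx"]))
print(expand(["admin", "home.%EXT%", "api%NOFORCE%"], ["php", "html"], force=True))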

To use multiple wordlists, you can separate your wordlists with commas. Example: -w wordlist1.txt,wordlist2.txt
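
For example (the second wordlist path is a placeholder):

python3 dirsearch.py -u https://target -e php -w db/dicc.txt,/path/to/wordlist2.txt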

Options

Usage: dirsearch.py [-u|--url] target [-e|--extensions] extensions [options]

Options:
  --version             show program's version number and exit
  -h, --help            show this help message and exit

  Mandatory:
    -u URL, --url=URL   Target URL
    -l FILE, --url-list=FILE
                        URL list file
    --stdin             URL list from STDIN
    --cidr=CIDR         Target CIDR
    --raw=FILE          File contains the raw request (use `--scheme` flag to
                        set the scheme)
    -e EXTENSIONS, --extensions=EXTENSIONS
                        Extension list separated by commas (Example: php,asp)
    -X EXTENSIONS, --exclude-extensions=EXTENSIONS
                        Exclude extension list separated by commas (Example:
                        asp,jsp)
    -f, --force-extensions
                        Add extensions to the end of every wordlist entry. By
                        default dirsearch only replaces the %EXT% keyword with
                        extensions

  Dictionary Settings:
    -w WORDLIST, --wordlists=WORDLIST
                        Customize wordlists (separated by commas)
    --prefixes=PREFIXES
                        Add custom prefixes to all entries (separated by
                        commas)
    --suffixes=SUFFIXES
                        Add custom suffixes to all entries, ignore directories
                        (separated by commas)
    --only-selected     Only entries with selected extensions or no extension
                        + directories
    --remove-extensions
                        Remove extensions in all wordlist entries (Example:
                        admin.php -> admin)
    -U, --uppercase     Uppercase wordlist
    -L, --lowercase     Lowercase wordlist
    -C, --capital       Capital wordlist

  General Settings:
    -r, --recursive     Bruteforce recursively
    -R DEPTH, --recursion-depth=DEPTH
                        Maximum recursion depth
    -t THREADS, --threads=THREADS
                        Number of threads
    --subdirs=SUBDIRS   Scan sub-directories of the given URL[s] (separated by
                        commas)
    --exclude-subdirs=SUBDIRS
                        Exclude the following subdirectories during recursive
                        scan (separated by commas)
    -i STATUS, --include-status=STATUS
                        Include status codes, separated by commas, support
                        ranges (Example: 200,300-399)
    -x STATUS, --exclude-status=STATUS
                        Exclude status codes, separated by commas, support
                        ranges (Example: 301,500-599)
    --exclude-sizes=SIZES
                        Exclude responses by sizes, separated by commas
                        (Example: 123B,4KB)
    --exclude-texts=TEXTS
                        Exclude responses by texts, separated by commas
                        (Example: 'Not found', 'Error')
    --exclude-regexps=REGEXPS
                        Exclude responses by regexps, separated by commas
                        (Example: 'Not foun[a-z]{1}', '^Error$')
    --exclude-redirects=REGEXPS
                        Exclude responses by redirect regexps or texts,
                        separated by commas (Example: 'https://okta.com/*')
    --calibration=PATH  Path to test for calibration
    --random-agent      Choose a random User-Agent for each request
    --minimal=LENGTH    Minimal response length
    --maximal=LENGTH    Maximal response length
    -q, --quiet-mode    Quiet mode
    --full-url          Print full URLs in the output
    --no-color          No colored output

  Request Settings:
    -m METHOD, --http-method=METHOD
                        HTTP method (default: GET)
    -d DATA, --data=DATA
                        HTTP request data
    -H HEADERS, --header=HEADERS
                        HTTP request header, support multiple flags (Example:
                        -H 'Referer: example.com' -H 'Accept: */*')
    --header-list=FILE  File contains HTTP request headers
    -F, --follow-redirects
                        Follow HTTP redirects
    --user-agent=USERAGENT
    --cookie=COOKIE

  Connection Settings:
    --timeout=TIMEOUT   Connection timeout
    --ip=IP             Server IP address
    -s DELAY, --delay=DELAY
                        Delay between requests
    --proxy=PROXY       Proxy URL, support HTTP and SOCKS proxies (Example:
                        localhost:8080, socks5://localhost:8088)
    --proxy-list=FILE   File contains proxy servers
    --matches-proxy=PROXY
                        Proxy to replay with found paths
    --scheme=SCHEME     Default scheme (for raw request or if there is no
                        scheme in the URL)
    --max-retries=RETRIES
    -b, --request-by-hostname
                        By default dirsearch requests by IP for speed. This
                        will force requests by hostname
    --exit-on-error     Exit whenever an error occurs
    --debug             Debug mode

  Reports:
    --simple-report=OUTPUTFILE
    --plain-text-report=OUTPUTFILE
    --json-report=OUTPUTFILE
    --xml-report=OUTPUTFILE
    --markdown-report=OUTPUTFILE
    --csv-report=OUTPUTFILE

NOTE: You can change the dirsearch default configurations (default extensions, timeout, wordlist location, ...) by editing the default.conf file.

How to use

Dirsearch demo

Some examples of how to use dirsearch, covering the most common arguments. If you need all of them, just use the -h argument.

Simple usage

python3 dirsearch.py -u https://target
python3 dirsearch.py -e php,html,js -u https://target
python3 dirsearch.py -e php,html,js -u https://target -w /path/to/wordlist

Recursive scan

By using the -r | --recursive argument, dirsearch will automatically brute force inside the directories that it finds.

python3 dirsearch.py -e php,html,js -u https://target -r

You can set the max recursion depth with -R or --recursion-depth

python3 dirsearch.py -e php,html,js -u https://target -r -R 3

Threads

The thread number (-t | --threads) is the number of separate brute-force workers, each performing path brute-forcing against the target. The more threads, the faster dirsearch runs. By default, the number of threads is 20, but you can increase it if you want to speed up the progress.

Even so, the speed is never fully under your control, since it depends a lot on the response time of the server. As a warning, we advise you not to set the thread count too high, because too many automated requests can overwhelm the target; adjust it to fit the capacity of the system you are scanning.

python3 dirsearch.py -e php,htm,js,bak,zip,tgz,txt -u https://target -t 30

Prefixes / Suffixes

  • --prefixes: Adding custom prefixes to all entries
python3 dirsearch.py -e php -u https://target --prefixes .,admin,_,~

Base wordlist:

tools

Generated with prefixes:

.tools
admintools
_tools
~tools
  • --suffixes: Adding custom suffixes to all entries
python3 dirsearch.py -e php -u https://target --suffixes ~,/

Base wordlist:

index.php
internal

Generated with suffixes:

index.php~
index.php/
internal~
internal/

Exclude extensions

Use -X | --exclude-extensions with a list of extensions to exclude; all entries in the wordlist that have those extensions will be removed

python3 dirsearch.py -e asp,aspx,htm,js -u https://target -X php,jsp,jspx

Base wordlist:

admin
admin.%EXT%
index.html
home.php
test.jsp

After:

admin
admin.asp
admin.aspx
admin.htm
admin.js
index.html

Wordlist formats

Supported wordlist formats: uppercase, lowercase, capitalization

Lowercase:

admin
index.html
test

Uppercase:

ADMIN
INDEX.HTML
TEST

Capital:

Admin
Index.html
Test

Filters

Use -i | --include-status and -x | --exclude-status to select allowed and not allowed response status codes

python3 dirsearch.py -e php,html,js -u https://target -i 200,204,400,403 -x 500,502,429

--exclude-sizes, --exclude-texts, --exclude-regexps and --exclude-redirects are also supported for a more advanced filter

python3 dirsearch.py -e php,html,js -u https://target --exclude-sizes 1B,243KB
python3 dirsearch.py -e php,html,js -u https://target --exclude-texts "403 Forbidden"
python3 dirsearch.py -e php,html,js -u https://target --exclude-regexps "^Error$"

Raw requests

dirsearch allows you to import a raw request from a file. The raw file content should look something like this:

GET /admin HTTP/1.1
Host: admin.example.com
Cache-Control: max-age=0
Accept: */*

Since there is no way for dirsearch to know the URI scheme (http or https), you need to set it using the --scheme flag. By default, the scheme is http, which most modern web servers no longer use. That means, without setting the scheme, you may brute-force with the wrong protocol and end up with false negatives.
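
For example, assuming the request above is saved in a file named request.txt (the filename is just an illustration):

python3 dirsearch.py --raw request.txt --scheme https -e php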

Scan sub-directories

From a URL, you can scan sub-directories with --subdirs.

python3 dirsearch.py -e php,html,js -u https://target --subdirs admin/,folder/,/

The reverse of this feature is --exclude-subdirs, which prevents dirsearch from brute-forcing directories that should not be brute-forced during a recursive scan.

python3 dirsearch.py -e php,html,js -u https://target --recursive -R 2 --exclude-subdirs "server-status/,%3f/"

Proxies

Dirsearch supports SOCKS and HTTP proxies, with two options: a single proxy server or a list of proxy servers.

python3 dirsearch.py -e php,html,js -u https://target --proxy 127.0.0.1:8080
python3 dirsearch.py -e php,html,js -u https://target --proxy socks5://10.10.0.1:8080
python3 dirsearch.py -e php,html,js -u https://target --proxy-list proxyservers.txt

Reports

Dirsearch allows the user to save the output to a file. It supports several output formats, such as plain text and JSON, and we keep adding new ones

python3 dirsearch.py -e php -l URLs.txt --plain-text-report report.txt
python3 dirsearch.py -e php -u https://target --json-report target.json
python3 dirsearch.py -e php -u https://target --simple-report target.txt
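
The other report flags listed in the Options section work the same way; for example (the output filename is just an illustration):

python3 dirsearch.py -e php -u https://target --csv-report target.csv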

Some others commands

python3 dirsearch.py -e php,txt,zip -u https://target -w db/dicc.txt -H "X-Forwarded-Host: 127.0.0.1" -f
python3 dirsearch.py -e php,txt,zip -u https://target -w db/dicc.txt -t 100 -m POST --data "username=admin"
python3 dirsearch.py -e php,txt,zip -u https://target -w db/dicc.txt --random-agent --cookie "isAdmin=1"
python3 dirsearch.py -e php,txt,zip -u https://target -w db/dicc.txt --json-report=target.json
python3 dirsearch.py -e php,txt,zip -u https://target -w db/dicc.txt --minimal 1
python3 dirsearch.py -e php,txt,zip -u https://target -w db/dicc.txt --header-list rate-limit-bypasses.txt
python3 dirsearch.py -e php,txt,zip -u https://target -w db/dicc.txt -q --exit-on-error
python3 dirsearch.py -e php,txt,zip -u https://target -w db/dicc.txt --full-url
python3 dirsearch.py -u https://target -w db/dicc.txt --no-extension

There are more features, and you will need to discover them by yourself

Tips

  • To run dirsearch at a fixed rate of requests per second, try -t <rate> -s 1
  • Want to find config files or backups? Try out --suffixes ~ and --prefixes .
  • For endpoints where you do not want to force extensions, add %NOFORCE% at the end of them
  • Want to find only folders/directories? Combine --no-extension and --suffixes /!
  • The combination of --cidr, -F and -q will reduce most of the noise + false negatives when brute-forcing with a CIDR (see the example commands below)
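
Example commands for the tips above (the target URL and CIDR are placeholders; all flags are documented in the Options section):

python3 dirsearch.py -u https://target -e php -t 10 -s 1
python3 dirsearch.py -u https://target -e php --suffixes ~ --prefixes .
python3 dirsearch.py -u https://target --no-extension --suffixes /
python3 dirsearch.py --cidr 192.168.0.0/24 -e php -F -q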

Support Docker

Install Docker Linux

Install Docker

curl -fsSL https://get.docker.com | bash

To use Docker you need superuser privileges

Build Image dirsearch

To create the image:

docker build -t "dirsearch:v0.4.1" .

dirsearch is the name of the image and v0.4.1 is the version

Using dirsearch

To run dirsearch:

docker run -it --rm "dirsearch:v0.4.1" -u target -e php,html,js,zip
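
If your wordlists live on the host, you can mount them into the container with a standard Docker volume (the host path below is a placeholder):

docker run -it --rm -v /path/to/wordlists:/wordlists "dirsearch:v0.4.1" -u target -e php -w /wordlists/dicc.txt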

License

Copyright (C) Mauro Soria ([email protected])

License: GNU General Public License, version 2

Contributors

This tool is currently under development by @maurosoria and @shelld3v. We received a lot of help from many people around the world to improve this tool. Thanks so much to everyone who helped us!!

See CONTRIBUTORS.md for more information about who they are!

Want to join the team? Feel free to submit any pull request that you can. If you don't know how to code, you can support us by updating the wordlists or the documentation. Feedback and feature suggestions are also good ways to help us improve this tool.

Issues
  • New Suggestion

    Hello team, I have a suggestion for dirsearch. From my experience, whenever I run dirsearch on a target and it returns a file greater than 10-15 MB, the brute-forced directories don't work. Can you add a feature to discard the target if the file exceeds 10 MB? That would be a really cool feature in my opinion, what do you think?

    bug Priority: Medium 
    opened by bootyhunt3r 50
  • PEP8 Compliance

    Fixes all PEP8 violations except for line length and wildcard imports.

    opened by jsfan 37
  • Python overwrites some PyPi packages when installing dirsearch with setup.py

    I'm having this error when I run it as a non-root user. I've tried with sudo as well, but I got the same error prompt (dirsearchError.txt attached)

      File "/opt/dirsearch/dirsearch.py", line 47, in <module>
        main = Program()
      File "/opt/dirsearch/dirsearch.py", line 43, in __init__
        self.controller = Controller(self.script_path, self.arguments, self.output)
      File "/opt/dirsearch/lib/controller/controller.py", line 170, in __init__
        self.setupErrorLogs()
      File "/opt/dirsearch/lib/controller/controller.py", line 337, in setupErrorLogs
        self.errorLog = open(self.errorLogPath, "w")
    PermissionError: [Errno 13] Permission denied: '/opt/dirsearch/logs/errors-21-01-19_13-39-11.log'
    

    I tried to review the error log, but it doesn't exist.

    bug Priority: High 
    opened by AyOnEs04 36
  • Need a general help

    Hello team, I saw that you have shipped the desired update for response size, but is there any way to exclude outputs greater than 5MB (or any size I choose) using --exclude-sizes or --exclude-regexps? With --exclude-sizes we have to type every size manually; could it also accept something like 5MB to infinity?

    question 
    opened by bootyhunt3r 34
  • Pretty Reporting

    Description

    ~~This PR is a work in progress (WIP).~~

    This is to address #762. This ended up rewriting/clobbering more code than I was hoping for. There are definitely bugs, so please address them and I'll work to balance the new behavior vs the old.

    The major upgrade of this PR is that reports will return a combined format in a single output for all urls. The reports were originally written to write one report per url, which is fine, but batch reports end up looking ugly and contain redundant metadata. This required a heavy rewrite to the report_manager.py script.

    The current PR focuses on writing a single report. The previous code performed some auto-saving and appending. That has been removed in an effort to get this working. Previously if you added the --simple-report or similar flags you would see the standard log file written along with selected output file. The log + report behavior has also been removed, but would be pretty simple to add back if desired.

    The base_report.py script still needs some work due to some of these pending questions. Each report was modified to account for the url iteration. I used a bit of #761, but I'd like to merge that PR and resolve any conflicts between the styles.

    A major simplification of reporting was made by dropping the ---report flag. The outputFile (-o) and outputFormat (--format) flags were added, which clean up some of the long if-then statements.

    I'll add comparisons in the comments of the PR of report output using batch mode to demonstrate the changes.


    Update 4/21/21

    This PR should be good to go for testing.

    Usage

    # Basic example (single url)
    $ python3 ./dirsearch.py -u https://example.com/ --format json -o example_dirsearch.json
    
    # Basic example (multiple urls)
    $ python3 ./dirsearch.py -l urls.txt --format json -o batch_report.json
    

    The same functionality should be provided with the auto-save functionality. You should be able to leave out the -o flag and just specify the output format to use the standard report location. You should be able to run a tail -f <output_file> while dirsearch is running and watch any changes to the output file.

    enhancement need more info 
    opened by wdahlenburg 32
  • support system installation or output to ~.dirsearch

    Kindly review and accept the following patch: https://github.com/pentoo/pentoo-overlay/blob/master/net-analyzer/dirsearch/files/add_homedir_support-r1.patch

    There is one small improvement that needs to be done, see:

    https://github.com/pentoo/pentoo-overlay/issues/512

    opened by blshkv 26
  • [Bug Fix] Printing full URL in response

    Hi,

    Here's a PR around this issue: https://github.com/maurosoria/dirsearch/issues/214. Going through the code, I found the methods responsible for printing the path and replaced them with full URL + path.

    Copy/Pasting the Path, joining it with the URL was a hassle before. This will make things easier as one can easily copy/paste the whole line or just open the link through the terminal.

    Addressed Issue: Printing Full URL rather than only Path in Response

    Before and after: (screenshots)

    opened by Anon-Exploiter 23
  • Is there a way to automatically add redirects to the recursive queue?

    When I used the default wordlist, it did not add gallery/ to the queue, but when I used a custom wordlist it did? There was nothing in the error log. I set recursive to true in default.conf.

    $ dirsearch.py -u 192.168.87.91
    
    [07:57:50] 301 -  316B  - /gallery  ->  http://192.168.87.91/gallery/
    [07:57:51] 200 -  228B  - /index.php
    [07:57:51] 301 -  319B  - /javascript  ->  http://192.168.87.91/javascript/
    [07:57:52] 200 -    0B  - /lang.php
    
    $ dirsearch.py -u 192.168.87.91 -w /usr/share/wordlists/dirbuster/directory-list-2.3-medium.txt -f
    
    [08:00:18] 301 -  316B  - /gallery  ->  http://192.168.87.91/gallery/
    [08:00:18] 200 -    1KB - /gallery/     (Added to queue)
    [08:00:39] 301 -  319B  - /javascript  ->  http://192.168.87.91/javascript/
    [08:00:39] 403 -  278B  - /javascript/     (Added to queue)
    
    question 
    opened by f00d4w0rm4 20
  • Is there an option to save the results exactly as we see them in the output?

    The output options do not provide a way to do this. Taking stdin is not giving pretty results, as it shows every request made

    opened by whiteorange007 19
  • Headers from file

    Description

    Why this PR? What will it do?

    Adding Option to provide a feature where headers can be read from the file directly.

    If this PR will fix an issue, please address it:

    Fix https://github.com/maurosoria/dirsearch/issues/292

    opened by chowmean 19
  • The server does not support HTTP/2.0 and dirsearch cannot connect to it

    https://prnt.sc/21fy7x7 - HTTP/1.1 request and response

    https://prnt.sc/21fyajc - HTTP/2.0 request and response

    dirsearch sends the first request over HTTP/2.0, and since the server does not support this protocol and does not respond, dirsearch thinks that the server is down, but it is not

    bug unreproducible 
    opened by shuraros2334 1
  • --max-time only takes effect for a single URL

    When --url-list is used to specify a list of URLs, the --max-time parameter only applies to the first URL; subsequent requests never time out and exit.

    bug Priority: High 
    opened by hu185396 3
  • AttributeError: 'Fuzzer' object has no attribute 'play_event'

    The run throws an error. Command used: python dirsearch.py --url-list url.txt -i 200,300 --timeout 3 --random-agent --max-time=3 -F (screenshot attached)

    bug Priority: Low 
    opened by hu185396 2
  • Interactive recursion

    What is the feature?

    Decide while scanning which directories should be added to the list of recursively brute forced directories

    Why?

    Gives more control to the user, who can decide on the go whether a directory looks interesting enough to be brute forced or should be skipped.

    Especially useful if you do not know beforehand which directories will likely be there, but can be useful at anytime the tool is used. Saves time and generates less traffic against the target.

    Example

    img1/, img2/, img3/ may not look interesting, but backup/ does. So you can deny recursion on the occurrences of img1/, img2/ and img3/, but add the backup/ for recursive brute force.

    Example implementation

    See tool dirb (-R option)

    enhancement Priority: High 
    opened by 0xalwayslucky 2
  • What is the cause of this error during a run?

    (screenshot of the error)

    All the requirements are installed.

    bug unreproducible 
    opened by Scivous 5
  • Same pages back bugs

    When I scan a website, dirsearch outputs like this: (screenshot)

    The website always returns the same pages.

    Can we check the HTML file's MD5 hash if the website returns the same page more than 5 times?

    enhancement Priority: Medium 
    opened by yanghaoi 4
  • When the site uses a WAF, the scan stops at the "first path"

    When the site uses a WAF, the first request, to a URL like .0zeGnr69eweI, stops and prompts "Cannot connect to:" (screenshot)

    Of course, I prefer it to continue working, so I added this code and now it can continue to work (screenshot)

    bug Priority: Low in review 
    opened by CaijiOrz 1
  • Reverse Recursive Scan

    What is the feature?

    Following this Issue, I think it is useful to introduce a Reverse Recursive Scan!

    What is the use case?

    If I scan this URL: https://www.example.org/downloads/documents/ dirsearch would automatically scan the parent directories as well. Example:

    1. https://www.example.org/downloads/documents/
    2. https://www.example.org/downloads/
    3. https://www.example.org/

    Thanks

    enhancement Priority: Low in review 
    opened by drego85 4
  • dirsearch proxied through Burp Suite = intermittent success

    What is the current behavior?

    dirsearch proxied through Burp Suite running on localhost intermittently works with a certain command, but frequently fails with the exact same command line, with the following message:

    Error with the proxy: http://127.0.0.1:8080

    Opening randomized requests from dirsearch visible in Burp Suite Proxy HTTP History, with no apparent issue

    What is the expected behavior?

    dirsearch should successfully and reliably proxy through Burp Suite

    Any additional information?

    Summary of results (note: successes and failures occurring in both Burp Suite Community and Pro ; I deliberately redacted the target I am running this against with [target].[omitted].com ; dirsearch performs without issue against target when --proxy parameter is omitted)

    $ dirsearch -l /home/kali/Desktop/targets.txt -w /home/kali/Desktop/api-endpoints.txt --proxy 127.0.0.1:8080 -o /home/kali/Desktop/out.txt --format=csv

    (dirsearch v0.4.1 ASCII banner)

    Extensions: php, aspx, jsp, html, js | HTTP method: GET | Threads: 30 | Wordlist size: 362

    Output File: /home/kali/Desktop/out.txt

    Error Log: /home/kali/.dirsearch/logs/errors-21-10-18_10-55-28.log

    Target: https://[target].[omitted].com/

    [10:55:28] Starting:
    [10:55:35] 404 - 153B - /v1/button
    ...
    [10:55:37] 302 - 0B - /medicare -> /medicare/

    Task Completed

    ┌──(kali㉿kali)-[~/Desktop] └─$ dirsearch -l ~/Desktop/targets.txt -w ~/Desktop/api-endpoints.txt --proxy 127.0.0.1:8080 -o ~/Desktop/out.txt --format=csv

    (dirsearch v0.4.1 ASCII banner)

    Extensions: php, aspx, jsp, html, js | HTTP method: GET | Threads: 30 | Wordlist size: 3129

    Output File: /home/kali/Desktop/out.txt

    Error Log: /home/kali/.dirsearch/logs/errors-21-10-18_11-38-36.log

    Target: https://[target].[omitted].com/

    [11:38:36] Starting: Error with the proxy: http://127.0.0.1:8080

    Task Completed

    Checker:

    • [X ] I tested in the latest version of dirsearch
    bug 
    opened by helmutye0 4
  • error, I can't use it

    Traceback (most recent call last):
      File "C:\Tool\dirsearch.py", line 27, in <module>
        from lib.core.argument_parser import ArgumentParser
      File "C:\Tool\lib\core\argument_parser.py", line 24, in <module>
        from lib.parse.headers import HeadersParser
      File "C:\Tool\lib\parse\headers.py", line 23, in <module>
        from lib.utils.fmt import lowercase
      File "C:\Tool\lib\utils\fmt.py", line 19, in <module>
        from chardet import detect
    ModuleNotFoundError: No module named 'chardet'

    I can't run it. How can I fix it? #bug

    bug unreproducible 
    opened by Youknow2509 12
Releases(v0.4.2)
  • v0.4.2(Sep 13, 2021)

    • More accurate
    • Exclude responses by redirects
    • URLs from STDIN
    • Fixed the CSV Injection vulnerability (https://www.exploit-db.com/exploits/49370)
    • Raw request supported
    • Can set up the default URL scheme (will be used when there is no scheme in the URL)
    • Added max runtime option
    • Recursion on specified status codes
    • Max request rate
    • Support several authentication types
    • Deep/forced recursive scan
    • HTML report format
    • Option to skip target by specified status codes
    • Bug fixes

    Special thanks to @shelld3v for his contributions!

    We skipped version v0.4.1 for no particular reason (but alpha versions were available).

  • v0.4.2-beta1(Jun 16, 2021)

  • v0.4.1-alpha2(Apr 12, 2021)

  • v0.4.1(Dec 9, 2020)

    • Faster
    • Allow to brute force through a CIDR notation
    • Exclude responses by human readable sizes
    • Provide headers from a file
    • Match/filter status codes by ranges
    • Detect 429 response status code
    • Support SOCKS proxy
    • XML, Markdown and CSV report formats
    • Capital wordlist format
    • Option to replay proxy with found paths
    • Option to remove all extensions in the wordlist
    • Option to exit whenever an error occurs
    • Option to disable colored output
    • Debug mode
    • Multiple bugfixes
  • v0.4.1-alpha(Dec 9, 2020)

    • Faster
    • Allow to brute force through a CIDR notation
    • Exclude responses by human readable sizes
    • Provide headers from a file
    • Match/filter status codes by ranges
    • Detect 429 response status code
    • Support SOCKS proxy
    • XML, Markdown and CSV report formats
    • Capital wordlist format
    • Option to replay proxy with found paths
    • Option to remove all extensions in the wordlist
    • Option to exit whenever an error occurs
    • Option to disable colored output
    • Debug mode
    • Multiple bugfixes
  • v0.4.0(Sep 26, 2020)

    • Exclude extensions argument added
    • Added custom prefixes and suffixes
    • No dot extensions option
    • Support HTTP request data
    • Added minimal response length and maximal response length arguments
    • Added include status codes and exclude status codes arguments
    • Added --clean-view option
    • Added option to print the full URL in the output
    • Added Prefix and Suffix arguments
    • Multiple bugfixes

    Special thanks to @shelld3v

  • v0.3.9(Nov 26, 2019)

    • Added default extensions argument (-E).
    • Added suppress empty responses.
    • Recursion max depth.
    • Exclude responses with text and regexes.
    • Multiple fixes.
  • v0.3.8(Jul 25, 2017)

    Changelog:

    • Delay argument added.
    • Request by hostname switch added.
    • Suppress empty switch added.
    • Added Force Extensions switch.
    • Multiple bugfixes.
  • v0.3.7(Aug 22, 2016)

  • v0.3.6-1(Mar 11, 2016)

  • v0.3.6(Mar 7, 2016)

  • v0.3.5(Jan 29, 2016)

  • v0.3.0(Feb 5, 2015)

  • v0.2.7(Nov 21, 2014)

  • v0.2.6(Sep 12, 2014)

Owner
Mauro Soria
https://twitter.com/_maurosoria