Photon

Incredibly fast crawler designed for OSINT.

Photon Wiki • How To Use • Compatibility • Photon Library • Contribution • Roadmap

Key Features

Data Extraction

Photon can extract the following data while crawling:

  • URLs (in-scope & out-of-scope)
  • URLs with parameters (example.com/gallery.php?id=2)
  • Intel (emails, social media accounts, amazon buckets etc.)
  • Files (pdf, png, xml etc.)
  • Secret keys (auth/API keys & hashes)
  • JavaScript files & Endpoints present in them
  • Strings matching custom regex pattern
  • Subdomains & DNS related data

The extracted information is saved in an organized manner or can be exported as JSON.


Flexible

Control timeout, delay, add seeds, exclude URLs matching a regex pattern and other cool stuff. The extensive range of options provided by Photon lets you crawl the web exactly the way you want.
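For example, a typical invocation might look like the following sketch. The flags are taken from the commands and release notes shown elsewhere on this page (-u for the target, -l for the crawl level, -t for threads, --timeout and --exclude as named in the changelog); check photon.py --help for the authoritative list.

$ python3 photon.py -u "https://example.com" -l 2 -t 10 --timeout 5 --exclude "logout"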

Genius

Photon's smart thread management & refined logic give you top-notch performance.

Still, crawling can be resource-intensive, but Photon has some tricks up its sleeve. You can fetch URLs archived by archive.org and use them as seeds with the --wayback option.
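For instance, a sketch of combining a normal crawl with archived seeds (flags as above):

$ python3 photon.py -u "https://example.com" --wayback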

Plugins

Docker

Photon can be launched using a lightweight Python-Alpine (103 MB) Docker image.

$ git clone https://github.com/s0md3v/Photon.git
$ cd Photon
$ docker build -t photon .
$ docker run -it --name photon photon:latest -u google.com

To view the results, you can either head over to the local Docker volume, which you can find by running docker inspect photon, or mount the target loot folder:

$ docker run -it --name photon -v "$PWD:/Photon/google.com" photon:latest -u google.com

Frequent & Seamless Updates

Photon is under heavy development; updates fixing bugs, optimizing performance, and adding new features are rolled out regularly.

If you would like to see the features and issues that are being worked on, you can do that on the Development project board.

Updates can be installed & checked for with the --update option. Photon has seamless update capabilities which means you can update Photon without losing any of your saved data.
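Checking for and installing updates might look like this (a sketch using the --update option described above):

$ python3 photon.py --update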

Contribution & License

You can contribute in the following ways:

  • Report bugs
  • Develop plugins
  • Add more "APIs" for ninja mode
  • Give suggestions to make it better
  • Fix issues & submit a pull request

Please read the guidelines before submitting a pull request or issue.

Do you want to have a conversation in private? Hit me up on my Twitter, inbox is open :)

Photon is licensed under the GPL v3.0 license.

Comments
  • error when scanning IP

    error when scanning IP

    Line 169 errors out if you run Photon against an IP. The easiest fix might be to just add a try/except (see the sketch after the traceback), but there is probably a more elegant solution.

    I'm pretty sure this was working before.

    root@kali:/opt/Photon# python /opt/Photon/photon.py -u http://192.168.0.213:80
          ____  __          __
         / __ \/ /_  ____  / /_____  ____
        / /_/ / __ \/ __ \/ __/ __ \/ __ \
       / ____/ / / / /_/ / /_/ /_/ / / / /
      /_/   /_/ /_/\____/\__/\____/_/ /_/ v1.1.1
    
    Traceback (most recent call last):
      File "/opt/Photon/photon.py", line 169, in <module>
        domain = get_fld(host, fix_protocol=True) # Extracts top level domain out of the host
      File "/usr/local/lib/python2.7/dist-packages/tld/utils.py", line 387, in get_fld
        search_private=search_private
      File "/usr/local/lib/python2.7/dist-packages/tld/utils.py", line 339, in process_url
        raise TldDomainNotFound(domain_name=domain_name)
    tld.exceptions.TldDomainNotFound: Domain 192.168.0.213 didn't match any existing TLD name!
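    A minimal sketch of the suggested try/except workaround (hypothetical, not Photon's actual fix): fall back to the raw host when tld cannot find a registered domain.

    from tld import get_fld
    from tld.exceptions import TldDomainNotFound

    host = '192.168.0.213'
    try:
        domain = get_fld(host, fix_protocol=True)  # extracts the top level domain out of the host
    except TldDomainNotFound:
        domain = host  # target is a bare IP address (or has no matching TLD), so use it as-is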
    
    opened by sethsec 18
  • Option to skip crawling of URLs that match a regex pattern

    Option to skip crawling of URLs that match a regex pattern

    Added option to skip crawling of URLs that match a regex pattern, changed handling of seeds, and removed double spaces.

    I am assuming this is what you meant in the ideas column; tell me if I'm way off though :)

    opened by connorskees 17
  • Added two new options: -o/--output and --stdout

    Added two new options: -o/--output and --stdout

    Awesome tool. I've been looking for something like this for a while to integrate with something I am building! I added two new options for you to consider:

    1. -o/--output option that allows the user to specify an output directory (overriding the default).

    Command: python photon.py -u http://10.10.10.102:80 -l 2 -t100 -o /pentest/photontest

    In this case, all of the output files will be written to /pentest/photontest:

    root@htbeu:/pentest/photontest# ls -ltr
    total 24
    -rw-r--r-- 1 root root    0 Jul 25 11:11 scripts.txt
    -rw-r--r-- 1 root root 3260 Jul 25 11:11 robots.txt
    -rw-r--r-- 1 root root 3260 Jul 25 11:11 links.txt
    -rw-r--r-- 1 root root   17 Jul 25 11:11 intel.txt
    -rw-r--r-- 1 root root  437 Jul 25 11:11 fuzzable.txt
    -rw-r--r-- 1 root root  146 Jul 25 11:11 files.txt
    -rw-r--r-- 1 root root    0 Jul 25 11:11 failed.txt
    -rw-r--r-- 1 root root   96 Jul 25 11:11 external.txt
    -rw-r--r-- 1 root root    0 Jul 25 11:11 endpoints.txt
    -rw-r--r-- 1 root root    0 Jul 25 11:11 custom.txt
    
    2. A --stdout option that allows the user to print everything to stdout so they can pipe it into another tool or redirect all output to a file with an operating-system redirector.

    Command: root@htbeu:/opt/dev/Photon# python photon.py -u http://10.10.10.9:80 -l 2 -t100 --stdout

    Output:

          ____  __          __
         / __ \/ /_  ____  / /_____  ____
        / /_/ / __ \/ __ \/ __/ __ \/ __ \
       / ____/ / / / /_/ / /_/ /_/ / / / /
      /_/   /_/ /_/\____/\__/\____/_/ /_/
    
    [+] URLs retrieved from robots.txt: 68
    [~] Level 1: 69 URLs
    [!] Progress: 69/69
    [~] Level 2: 9 URLs
    [!] Progress: 9/9
    [~] Crawling 0 JavaScript files
    
    --------------------------------------------------
    [+] URLs: 78
    [+] Intel: 1
    [+] Files: 1
    [+] Endpoints: 0
    [+] Fuzzable URLs: 9
    [+] Custom strings: 0
    [+] JavaScript Files: 0
    [+] External References: 3
    --------------------------------------------------
    [!] Total time taken: 0:32
    [!] Average request time: 0.40
    [+] Results saved in 10.10.10.9:80 directory
    
    All Results:
    
    http://10.10.10.9:80/themes/*.gif
    http://10.10.10.9:80/modules/*.png
    http://10.10.10.9:80/INSTALL.mysql.txt
    http://10.10.10.9:80/install.php
    http://10.10.10.9:80/scripts/
    http://10.10.10.9:80/node/add/
    http://10.10.10.9:80/?q=admin/
    http://10.10.10.9:80/themes/*.png
    http://10.10.10.9:80/modules/*.gif
    http://10.10.10.9:80
    http://10.10.10.9:80/includes/
    http://10.10.10.9:80/?q=user/password/
    http://10.10.10.9:80/INSTALL.txt
    http://10.10.10.9:80/profiles/
    http://10.10.10.9:80/themes/bartik/css/ie6.css?on28x3
    http://10.10.10.9:80/MAINTAINERS.txt
    http://10.10.10.9:80/themes/bartik/css/ie.css?on28x3
    http://10.10.10.9:80/modules/*.jpeg
    http://10.10.10.9:80/misc/*.gif
    

    The last thing I changed is the way you were wiping the directory each time you ran the tool so that you would get clean output. If you accept the -o option, which allows the user to specify the directory, you can't just blindly delete the directory anymore (can't trust user input ;)). So I added a cleaner way to simply overwrite each file (replacing the w+ mode with w), which should accomplish the same thing without needing to delete directories.

    enhancement 
    opened by sethsec 13
  • Added -v/--verbose option + fixed a few logic errors in detecting bad JS files

    Added -v/--verbose option + fixed a few logic errors in detecting bad JS files

    Changes:

    • Added the -v/--verbose option for verbose output.
    • Fixed 2 logical errors in detecting bad JS scripts. With this fix, the number of bad JS files has almost gone to zero.
    opened by 0xInfection 11
  • RuntimeError: Set changed size during iteration in Photon 1.0.7

    RuntimeError: Set changed size during iteration in Photon 1.0.7

    [!] Progress: 1/1
    [~] Level 2: 387 URLs
    [!] Progress: 387/387
    [~] Level 3: 18078 URLs
    [!] Progress: 18078/18078
    [~] Level 4: 90143 URLs
    [!] Progress: 39750/90143^C
    [~] Crawling 0 JavaScript files
    
    Traceback (most recent call last):
      File "photon.py", line 454, in <module>
        for url in external:
    RuntimeError: Set changed size during iteration
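    One common way to avoid this class of error is to iterate over a snapshot of the set while other threads keep mutating the original (a generic sketch, not Photon's actual fix):

    external = {"http://example.com/a", "http://example.com/b"}
    for url in list(external):  # list() takes a snapshot, so concurrent additions don't break iteration
        print(url)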
    
    invalid 
    opened by thistehneisen 10
  • A couple minor issues

    A couple minor issues

    SSL issues

    You're ignoring SSL verification:

    /usr/local/lib/python2.7/site-packages/requests/packages/urllib3/connectionpool.py:843: InsecureRequestWarning: Unverified HTTPS request is being made. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/latest/advanced-usage.html#ssl-warnings
      InsecureRequestWarning)
    

    You can fix this by adding the following to the top of the file (or wherever you feel like putting it):

    import warnings
    warnings.filterwarnings("ignore")
    

    Now, this is bad practice for obvious reasons; however, it makes sense why you're doing it. The above solution is the quickest and simplest way to solve the problem and keep the errors from being annoying.



    Dammit Unicode

    If I supply a URL that is in, let's say, Russian and try to extract all the data:

          ____  __          __
         / __ \/ /_  ____  / /_____  ____
        / /_/ / __ \/ __ \/ __/ __ \/ __ \
       / ____/ / / / /_/ / /_/ /_/ / / / /
      /_/   /_/ /_/\____/\__/\____/_/ /_/ 
    
     URLs retrieved from robots.txt: 5
     Level 1: 6 URLs
     Progress: 6/6
     Level 2: 35 URLs
     Progress: 35/35
     Level 3: 7 URLs
     Progress: 7/7
     Crawling 7 JavaScript files
     Progress: 7/7
    Traceback (most recent call last):
      File "photon.py", line 429, in <module>
        f.write(x + '\n')
    UnicodeEncodeError: 'ascii' codec can't encode character u'\u200b' in position 7: ordinal not in range(128)
    

    The easiest solution would be to just ignore things like this and continue, with a warning to the user that it was ignored. A sketch of that approach is below.
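    For example, writing the results with an explicit UTF-8 encoding (or silently dropping unencodable characters) avoids the default ASCII codec entirely (a hypothetical sketch, not the project's actual fix):

    import io

    urls = [u"http://example.com/\u200bpage"]  # contains a zero-width space, which ASCII can't encode
    with io.open("links.txt", "w", encoding="utf-8", errors="ignore") as f:
        for x in urls:
            f.write(x + "\n")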

    opened by Ekultek 10
  • Adding code to detect and report broken links.

    Adding code to detect and report broken links.

    Current issue: broken links for which the server returns a 404 status (or similar error) are not reported as failed links; instead, these erroneous pages are parsed for text content. A sketch of the intended behaviour is below.
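    A minimal sketch of the idea (hypothetical names, not this PR's actual code): check the status code before handing the page to the parser, and record anything 4xx/5xx as failed.

    import requests

    failed = set()

    def fetch(url):
        response = requests.get(url, timeout=5)
        if response.status_code >= 400:
            failed.add(url)       # report broken links (404 and similar) instead of parsing them
            return ""
        return response.text      # only pages that actually exist get parsed for content

    fetch("http://example.com/missing-page")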

    opened by snehm 9
  • Multiple improvements

    Multiple improvements

    The intels are searched for only inside the page's plain text, to avoid retrieving tokens that are garbage JavaScript code. Better regular expressions could allow searching inside JavaScript code for intels, though.

    v1.3.0

    • Added more intels (GENERIC_URL, BRACKET_URL, BACKSLASH_URL, HEXENCODED_URL, URLENCODED_URL, B64ENCODED_URL, IPV4, IPV6, EMAIL, MD5, SHA1, SHA256, SHA512, YARA_PARSE, CREDIT_CARD)
    • Intel search only applied to text (not inside JavaScript or HTML tags)
    • Proxy support with the -p/--proxy option (HTTP proxy only)
    • Minor fixes and PEP 8 formatting

    Tested on

    os: Linux Mint 19.1
    python: 3.6.7
    
    opened by oXis 8
  • get_lfd

    get_lfd

    Hi, after using the --update option I get this error:

    Traceback (most recent call last):
      File "photon.py", line 187, in <module>
        domain = topLevel(main_url)
      File "photon.py", line 182, in topLevel
        toplevel = tld.get_fld(host, fix_protocol=True)
    AttributeError: 'module' object has no attribute 'get_fld'

    Any clue? I also installed it on another system and it's still the same. Changing the target: same. Deleting it completely and cloning again: same.

    OS Kali3 Thanks

    invalid 
    opened by psychomad 7
  • Exceptions in threads during scanning in Level 1 & 2

    Exceptions in threads during scanning in Level 1 & 2

    Exception in thread Thread-4694:
    Traceback (most recent call last):
      File "/usr/lib/python2.7/threading.py", line 801, in __bootstrap_inner
        self.run()
      File "/usr/lib/python2.7/threading.py", line 754, in run
        self.__target(*self.__args, **self.__kwargs)
      File "photon.py", line 211, in extractor
        if is_link(link, processed, files):
      File "/usr/share/Photon/core/utils.py", line 41, in is_link
        is_file = url.endswith(BAD_TYPES)
    TypeError: endswith first arg must be str, unicode, or tuple, not list
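    The traceback points at BAD_TYPES being defined as a list, while str.endswith only accepts a single string or a tuple of suffixes. A minimal illustration of the kind of fix (the BAD_TYPES values here are made up):

    BAD_TYPES = ['.jpg', '.png', '.css', '.pdf']
    url = "http://example.com/logo.png"
    is_file = url.endswith(tuple(BAD_TYPES))  # wrapping the list in tuple() satisfies str.endswith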


    opened by 4k4xs4pH1r3 6
  • Improve dev-experience

    Improve dev-experience

    Add .gitignore and requirements file just to improve development experience.

    Doing this:

    • You won't need to remove unnecessary compiled Python files or other unneeded files.
    • You just run "pip install -r requirements.txt" and you'll have all the necessary third-party libraries to execute the script.
    invalid 
    opened by dgarana 5
  • TLSCertVerificationDisabled

    TLSCertVerificationDisabled

    In core/requester.py, line 48:

    response = SESSION.get

    Certificate verification is disabled by setting verify to False in the get call. This may lead to man-in-the-middle attacks.
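    A minimal sketch of the safer default (hypothetical, not the project's code): verify certificates unless the user explicitly opts out.

    import requests

    SESSION = requests.Session()
    response = SESSION.get("https://example.com", verify=True, timeout=5)  # verify=True is also the requests default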

    opened by yonasuriv 0
  • .well-known files

    .well-known files

    I see that this project examines robots.txt and sitemap.xml. I was wondering if you could add some of the other .well-known files like ads.txt, security.txt and others found at https://well-known.dev/resources/

    opened by WebBreacher 0
  • No generating a DNS map

    No generating a DNS map

    So I can't get a DNS map; one only gets created when crawling google.com or example.com.

    I'm using Python 3 and the command: python3 photon.py -u "http://example.com" --dns

    opened by DEPSTRCZ 0
Releases (v1.3.0)
  • v1.3.0(Apr 5, 2019)

    • Dropped Python < 3.2 support
    • Removed Ninja mode
    • Fixed a bug in link parsing
    • Fixed Unicode output
    • Fixed a bug which caused URLs to be treated as files
    • Intel is now associated with the URL where it was found
  • v1.2.1(Jan 26, 2019)

  • v1.1.6(Jan 25, 2019)

  • v1.1.5(Oct 24, 2018)

  • v1.1.4(Sep 18, 2018)

  • v1.1.3(Sep 6, 2018)

  • v1.1.2(Aug 7, 2018)

    • Code refactor
    • Better identification of external URLs
    • Fixed a major bug that made several intel URLs pass under the radar
    • Fixed a major bug that caused non-HTML content to be marked as a crawlable URL
  • v1.1.1(Sep 4, 2018)

    • Added --wayback
    • Fixed progress bar for python > 3.2
    • Added /core/config.py for easy customization
    • --dns now saves subdomains in subdomains.txt
  • v1.1.0(Aug 29, 2018)

    • Use of ThreadPoolExecutor for x2 speed (for python > 3.2)
    • Fixed mishandling of urls starting with //
    • Removed a redundant try-except statement
    • Evaluate entropy of found keys to avoid false positives
  • v1.0.9(Aug 19, 2018)

  • v1.0.8(Aug 4, 2018)

    • added --exclude option
    • Better regex and code logic to favor performance
    • Fixed a bug that caused dnsdumpster to fail if target was a subdomain
    • Fixed a bug that caused a crash if run outside "Photon" directory
    • Fixed a bug in file saving (specific to python3)
  • v1.0.7(Jul 28, 2018)

    • Added --timeout option
    • Added --output option
    • Added --user-agent option
    • Replaced lxml with regex
    • Better logic for favoring performance
    • Added bigger and separate file for user-agents
  • v1.0.6(Jul 26, 2018)

  • v1.0.5(Jul 26, 2018)

  • v1.0.4(Jul 25, 2018)

    • Fixed an issue which caused regular links to be saved in robots.txt
    • Simplified flash function
    • Removed -n as an alias of --ninja
    • Added --only-urls option
    • Refactored code for readability
    • Skip saving files if the content is empty
  • v1.0.3(Jul 24, 2018)

    • Introduced plugins
    • Added dnsdumpster plugin
    • Fixed non-ascii character handling, again
    • 404 pages are now added to failed list
    • Handling exceptions in jscanner
  • v1.0.2(Jul 23, 2018)

    • Proper handling of null response from robots.txt & sitemap.xml
    • Python2 compatibility
    • Proper handling of non-ascii chars
    • Added ability to specify custom regex pattern
    • Display total time taken and average time per request
  • v1.0.1(Jul 23, 2018)

Owner
Somdev Sangwan
I make things, I break things and I make things that break things.
Distributed Crawler Management Framework Based on Scrapy, Scrapyd, Django and Vue.js

Gerapy Distributed Crawler Management Framework Based on Scrapy, Scrapyd, Scrapyd-Client, Scrapyd-API, Django and Vue.js. Documentation

Gerapy 2.9k Jan 3, 2023
A low-code tool that generates python crawler code based on curl or url

KKBA Introduction A low-code tool that generates python crawler code based on curl or url Requirement Python >= 3.6 Install pip install kkba Usage Co

null 8 Sep 20, 2021
The core packages of security analyzer web crawler

Security Analyzer ?? A large scale web crawler (considered also as vulnerability scanner tool) to take an overview about security of Moroccan sites Cu

Security Analyzer 10 Jul 3, 2022
Crawler in Python 3.7, 3.8, 3.9, PyPy3

Description Python Crawler written in Python 3. (Supports major Python releases Python 3.6, Python 3.7 and Python 3.8) Installation and Use Setup VirtualEn

Vinit Kumar 2 Mar 12, 2022
Audio media crawler for lbry.

Audio media crawler for lbry. Requirements Python 3.8 Poetry 1.1.7 Elasticsearch 7.14.0 Lbry-sdk 0.99.0 Development This project uses poetry as a depe

Hound.fm 4 Dec 3, 2022
Crawler job that scrapes comments from social media posts and saves them in a S3 bucket.

Toxicity comments crawler Crawler job that scrapes comments from social media posts and saves them in a S3 bucket. Twitter Tweets and replies are scra

Douglas Trajano 2 Jan 24, 2022
A crawler of doubamovie

Douban Movies (豆瓣电影): a crawler for doubamovie. A small, entry-level application of the scrapy framework that crawls data for the top 1000 movies on the Douban movie rankings. spider.py: the start_requests method is a scrapy method which we override. def start_requests(self):

Cats without dried fish 1 Oct 5, 2021
A web crawler script that crawls the target website and lists its links

A web crawler script that crawls the target website and lists its links || A web crawler script that lists links by scanning the target website.

null 2 Apr 29, 2022
Deep Web Miner Python | Spyder Crawler

Web crawler written in Python. This crawler digs down to the third level of internal addresses and mines the respective data accordingly

Karan Arora 17 Jan 24, 2022
Crawler for the Fundamentus.com site using the scrapy framework, covering both the detailed tab and the summary tab.

Crawler for the Fundamentus.com site using the scrapy framework, covering both the detailed tab and the summary tab. (All information)

Guilherme Silva Uchoa 3 Oct 4, 2022
A Pixiv web crawler module

Pixiv-spider A Pixiv spider module WARNING It's an unfinished work; browse the code carefully before using it. Features 0004 - Readme.md updated, co

Uzuki 1 Nov 14, 2021
Python script to check if there are any differences in the responses of an application when the request comes from a search engine's crawler.

crawlersuseragents This Python script can be used to check if there is any differences in responses of an application when the request comes from a se

Podalirius 13 Dec 27, 2022
Google Maps crawler using Selenium

Google Maps Crawler using Selenium Built as part of the Antifragile Dev Project Selenium crawler that browses Google Maps as a regular user and stores

Guilherme Latrova 46 Dec 16, 2022
Rottentomatoes, Goodreads and IMDB sites crawler. Semantic Web final project.

Crawler Rottentomatoes, Goodreads and IMDB sites crawler. Crawler written by beautifulsoup, selenium and lxml to gather books and films information an

Faeze Ghorbanpour 1 Dec 30, 2021
A dead simple crawler to get books information from Douban.

Introduction A dead simple crawler to get books information from Douban. Pre-requesites Python 3 Install dependencies from requirements.txt (Optional)

Yun Wang 1 Jan 10, 2022
PaperRobot: a paper crawler that can quickly download numerous papers, facilitating paper studying and management

PaperRobot PaperRobot is a paper-crawling tool that can quickly download papers in large batches, making ongoing paper management and study easier. PaperRobot fetches papers through multiple interfaces and currently maintains a success rate above 90%. By editing the Config file, it can fetch papers from any computer-science-related conference. Installation Down

moxiaoxi 47 Nov 23, 2022
This is a web crawler that works on email data from gmane.org and visualizes it in different ways.

crawler_to_visual_gmane Analyzing an EMAIL Archive from gmane and visualizing the data using the D3 JavaScript library. This is a set of tools that al

Saim Zafar 1 Dec 20, 2021
Create a crawler to get some new products with maximum discount on the banimode website

crawler-banimode Create a crawler and get some new products with maximum discount on the banimode website. This small project is for learning and working with the Selenium tool.

nourollah rezaei 2 Feb 17, 2022