Scrapy
Overview
Scrapy is a fast, high-level web crawling and web scraping framework used to crawl websites and extract structured data from their pages. It can be used for a wide range of purposes, from data mining to monitoring and automated testing.
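For example, a minimal spider might look like the sketch below. The spider name, the quotes.toscrape.com target site, and the CSS selectors are illustrative assumptions for this example, not part of this README.

import scrapy

class QuotesSpider(scrapy.Spider):
    # Hypothetical spider name and start URL, used for illustration only.
    name = "quotes"
    start_urls = ["https://quotes.toscrape.com"]

    def parse(self, response):
        # Extract each quote's text and author with CSS selectors.
        for quote in response.css("div.quote"):
            yield {
                "text": quote.css("span.text::text").get(),
                "author": quote.css("small.author::text").get(),
            }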
Scrapy is maintained by Zyte (formerly Scrapinghub) and many other contributors.
Check the Scrapy homepage at https://scrapy.org for more information, including a list of features.
Requirements
- Python 3.6+
- Works on Linux, Windows, macOS, BSD
Install
The quick way:
pip install scrapy
See the install section in the documentation at https://docs.scrapy.org/en/latest/intro/install.html for more details.
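Once installed, a standalone spider file can be run directly with the runspider command and its results exported to a file. The file and output names below are illustrative:

scrapy runspider quotes_spider.py -o quotes.json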
Documentation
Documentation is available online at https://docs.scrapy.org/ and in the docs directory.
Releases
You can check https://docs.scrapy.org/en/latest/news.html for the release notes.
Community (blog, Twitter, mailing list, IRC)
See https://scrapy.org/community/ for details.
Contributing
See https://docs.scrapy.org/en/master/contributing.html for details.
Code of Conduct
Please note that this project is released with a Contributor Code of Conduct (see https://github.com/scrapy/scrapy/blob/master/CODE_OF_CONDUCT.md).
By participating in this project you agree to abide by its terms. Please report unacceptable behavior to [email protected].
Companies using Scrapy
See https://scrapy.org/companies/ for a list.
Commercial Support
See https://scrapy.org/support/ for details.