A crawler for Douban movies

Overview

Douban Movies (豆瓣电影)

A small, entry-level application of the Scrapy framework that crawls data for the top-ranked movies on Douban (the spider targets the Top 250 chart).
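The spider code below references self.start_urls and self.headler without showing where they are defined. A minimal sketch of the spider class header it assumes, where the spider name and the User-Agent string are placeholders rather than values taken from this project, might look like:

import scrapy

class DoubanMovieSpider(scrapy.Spider):
    # Spider name and User-Agent are assumed placeholders; only the Top 250 URL
    # actually appears in the snippets below.
    name = 'doubanmovie'
    start_urls = ['https://movie.douban.com/top250']
    # 'headler' follows the attribute name used in the original snippets.
    headler = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64)'}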

spider.py

The start_requests method is a built-in Scrapy method, and we override it here.

def start_requests(self):
    # Iterate over the links in start_urls.
    for url in self.start_urls:
        # Send a Request via yield.
        # Note that Request here is the Request class from scrapy -- be careful not to import the wrong one.
        # Three arguments are passed:
        #        1. url: the link produced by the loop
        #        2. callback: the method that processes the response once the request completes, here parse
        #        3. headers: if the site has anti-crawling measures, add headers so the request looks like it comes from a browser

        yield scrapy.Request(url=url, callback=self.parse, headers=self.headler)
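Since the main reason for overriding start_requests here is to attach headers, an alternative (a sketch, not what this project does) would be to set the user agent once in settings.py and let Scrapy's built-in start_requests issue the initial requests:

# settings.py -- the User-Agent string is an assumed placeholder
USER_AGENT = 'Mozilla/5.0 (Windows NT 10.0; Win64; x64)'

Keeping the explicit override, as this project does, makes the header handling visible inside the spider itself.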

Use Scrapy's CSS selectors and narrow the selection scope to div.item.

Override parse to process the responses from the start_requests() requests, wrapping the scraped page data with yield.

def parse(self, response):
    # Use Scrapy's CSS selectors: the data sits inside divs with class "item",
    # so limit the selection scope to div.item.
    for quote in response.css('div.item'):
        # Yield one record per movie in the loop.
        yield {
            # Capture the movie link, ranking, title, release year, production country,
            # genre, rating, number of reviews and the one-line quote.
            "电影链接": quote.css('div.info div.hd a::attr(href)').extract_first(),
            "排名": quote.css('div.pic em::text').extract(),
            "电影名": quote.css('div.info div.hd a span.title::text')[0].extract(),
            "上映年份": quote.css('div.info div.bd p::text')[1].extract().split('/')[0].strip(),
            "制片国家": quote.css('div.info div.bd p::text')[1].extract().split('/')[1].strip(),
            "类型": quote.css('div.info div.bd p::text')[1].extract().split('/')[2].strip(),
            "评分": quote.css('div.info div.bd div.star span.rating_num::text').extract(),
            "评论数量": quote.css('div.info div.bd div.star span::text')[1].re(r'\d+'),
            "引言": quote.css('div.info div.bd p.quote span.inq::text').extract(),
        }
    next_url = response.css('div.paginator span.next a::attr(href)').extract()
    if next_url:
        next_url = "https://movie.douban.com/top250" + next_url[0]
        print(next_url)
        yield scrapy.Request(next_url, headers=self.headler)
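The pagination request above omits callback, so Scrapy falls back to the default callback, which is this same parse method. The same step could also be written with response.urljoin so the Top 250 base URL is not hard-coded; a minimal sketch:

    next_url = response.css('div.paginator span.next a::attr(href)').extract_first()
    if next_url:
        # urljoin resolves the relative '?start=...' href against the current page URL,
        # and callback=self.parse just makes the default behaviour explicit.
        yield scrapy.Request(response.urljoin(next_url), callback=self.parse, headers=self.headler)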

pipelines.py

Store the scraped items in a MongoDB database; there is no need to create the collection in advance.

def __init__(self):
    self.client = pymongo.MongoClient('localhost', 27017)
    scrapy_db = self.client['doubanmovie']  # create the database
    self.coll = scrapy_db['movie']  # create the collection (table) inside the database

def process_item(self, item, spider):
    self.coll.insert_one(dict(item))
    return item

def close_spider(self, spider):
    self.client.close()
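These methods presumably live in an item pipeline class that also has to be enabled in settings.py. A sketch of how the pieces fit together, where the class name and module path are assumptions:

import pymongo

# pipelines.py -- class name is an assumption
class DoubanmoviePipeline:
    def __init__(self):
        # Connect to the local MongoDB instance.
        self.client = pymongo.MongoClient('localhost', 27017)
        scrapy_db = self.client['doubanmovie']  # database, created automatically on first write
        self.coll = scrapy_db['movie']          # collection, created automatically on first write

    def process_item(self, item, spider):
        # Insert each scraped item as one document and pass the item on.
        self.coll.insert_one(dict(item))
        return item

    def close_spider(self, spider):
        # Close the connection when the spider finishes.
        self.client.close()

# settings.py -- register the pipeline so Scrapy actually calls it;
# the module path is an assumption based on the database name above.
ITEM_PIPELINES = {
    'doubanmovie.pipelines.DoubanmoviePipeline': 300,
}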

item.py

Data encapsulation: the item field definitions.

url = scrapy.Field()
name = scrapy.Field()
order = scrapy.Field()
content = scrapy.Field()
contentnum = scrapy.Field()
year = scrapy.Field()
country = scrapy.Field()
score = scrapy.Field()
vary = scrapy.Field()
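The spider above yields plain dicts with Chinese keys, so these Field definitions are not used by the other snippets as shown; if they were used, they would sit inside an Item subclass. A sketch, with the class name assumed:

import scrapy

class DoubanmovieItem(scrapy.Item):
    # Class name is an assumption; the fields mirror item.py above.
    url = scrapy.Field()
    name = scrapy.Field()
    order = scrapy.Field()
    content = scrapy.Field()
    contentnum = scrapy.Field()
    year = scrapy.Field()
    country = scrapy.Field()
    score = scrapy.Field()
    vary = scrapy.Field()

The spider would then build DoubanmovieItem(...) objects instead of dicts, and the pipeline's dict(item) call would keep working unchanged.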
You might also like...
A Pixiv web crawler module

Pixiv-spider A Pixiv spider module WARNING It's an unfinished work, browsing the code carefully before using it. Features 0004 - Readme.md updated, co

Python script to check if there is any differences in responses of an application when the request comes from a search engine's crawler.

crawlersuseragents This Python script can be used to check if there is any differences in responses of an application when the request comes from a se

Google Maps crawler using Selenium

Google Maps Crawler using Selenium Built as part of the Antifragile Dev Project Selenium crawler that browses Google Maps as a regular user and stores

Rottentomatoes, Goodreads and IMDB sites crawler. Semantic Web final project.

Crawler Rottentomatoes, Goodreads and IMDB sites crawler. Crawler written by beautifulsoup, selenium and lxml to gather books and films information an

A dead simple crawler to get books information from Douban.

Introduction A dead simple crawler to get books information from Douban. Pre-requesites Python 3 Install dependencies from requirements.txt (Optional)

PaperRobot: a paper crawler that can quickly download numerous papers, facilitating paper studying and management

PaperRobot PaperRobot is a paper-crawling tool that can quickly download large batches of papers, making ongoing paper management and study easier. PaperRobot fetches papers through multiple interfaces, and its success rate currently stays above 90%. By editing the Config file it can crawl papers from any computer-science-related conference. Installation Down

This is a web crawler that works on employ email data by gmane.org and visualizes it in different ways.

crawler_to_visual_gmane Analyzing an EMAIL Archive from gmane and vizualizing the data using the D3 JavaScript library. This is a set of tools that al

Create crawler get some new products with maximum discount in banimode website

crawler-banimode create crawler and get some new products with maximum discount in banimode website. This small project is for learning and working with the Selenium tool

Owner
Cats without dried fish
Distributed Crawler Management Framework Based on Scrapy, Scrapyd, Django and Vue.js

Gerapy Distributed Crawler Management Framework Based on Scrapy, Scrapyd, Scrapyd-Client, Scrapyd-API, Django and Vue.js. Documentation Documentation

Gerapy 2.9k Jan 3, 2023
Incredibly fast crawler designed for OSINT.

Photon Incredibly fast crawler designed for OSINT. Photon Wiki • How To Use • Compatibility • Photon Library • Contribution • Roadmap Key Features Dat

Somdev Sangwan 9.3k Jan 2, 2023
A low-code tool that generates python crawler code based on curl or url

KKBA Intruoduction A low-code tool that generates python crawler code based on curl or url Requirement Python >= 3.6 Install pip install kkba Usage Co

null 8 Sep 20, 2021
The core packages of security analyzer web crawler

Security Analyzer A large scale web crawler (considered also as vulnerability scanner tool) to take an overview about security of Moroccan sites Cu

Security Analyzer 10 Jul 3, 2022
Crawler in Python 3.7, 3.8. 3.9. Pypy3

Description Python Crawler written Python 3. (Supports major Python releases Python3.6, Python3.7 and Python 3.8) Installation and Use Setup VirtualEn

Vinit Kumar 2 Mar 12, 2022
Audio media crawler for lbry.

Audio media crawler for lbry. Requirements Python 3.8 Poetry 1.1.7 Elasticsearch 7.14.0 Lbry-sdk 0.99.0 Development This project uses poetry as a depe

Hound.fm 4 Dec 3, 2022
Crawler job that scrapes comments from social media posts and saves them in a S3 bucket.

Toxicity comments crawler Crawler job that scrapes comments from social media posts and saves them in a S3 bucket. Twitter Tweets and replies are scra

Douglas Trajano 2 Jan 24, 2022
A web crawler script that crawls the target website and lists its links

A web crawler script that crawls the target website and lists its links || A web crawler script that lists links by scanning the target website.

null 2 Apr 29, 2022
Deep Web Miner Python | Spyder Crawler

Webcrawler written in Python. This crawler does dig in till the 3 level of inside addressed and mine the respective data accordingly

Karan Arora 17 Jan 24, 2022
Crawler for the Fundamentus.com site using the scrapy framework, covering both the detailed tab and the summary tab.

Crawler for the Fundamentus.com site using the scrapy framework, covering both the detailed tab and the summary tab. (All information)

Guilherme Silva Uchoa 3 Oct 4, 2022