Crawl BookCorpus

Overview

Homemade BookCorpus


Crawling could be difficult due to issues with the website. Please also consider other options, such as using publicly available files, at your own risk.


These are scripts to reproduce BookCorpus by yourself.

BookCorpus is a popular large-scale text corpus, especially for unsupervised learning of sentence encoders/decoders. However, BookCorpus is no longer distributed...

This repository includes a crawler that collects data from smashwords.com, the original source of BookCorpus. The collected sentences may partially differ from the original, but their number will be larger or almost the same. If you use the new corpus in your work, please specify that it is a replica.

How to use

Prepare URLs of available books. This repository already includes such a list, url_list.jsonl, a snapshot I (@soskek) collected on Jan 19-20, 2019; you can use it if you'd like.

python -u download_list.py > url_list.jsonl &
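
Each line of url_list.jsonl is a JSON object describing one book. As a quick sanity check, you can count the entries and see which ones provide a plain-text link; the field names used below (txt, epub) are assumptions, so inspect one line of the file for the actual schema.

    import json

    n_total = n_txt = 0
    with open("url_list.jsonl", encoding="utf-8") as f:
        for line in f:
            book = json.loads(line)      # one JSON object per line
            n_total += 1
            if book.get("txt"):          # assumed field name for the plain-text download URL
                n_txt += 1
    print(n_total, "books listed,", n_txt, "with a txt link")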

Download the files. A txt file is downloaded when available; otherwise the script tries to extract text from the epub. The optional argument --trash-bad-count filters out epub files whose word count differs greatly from the official stat (which may indicate an extraction failure).

python download_files.py --list url_list.jsonl --out out_txts --trash-bad-count

The results are saved into the directory given by --out (here, out_txts).
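
A rough sketch of the kind of check --trash-bad-count performs: compare the word count of the extracted text against the official stat for the book and drop the file if they diverge too much. The parameter names and the 50% tolerance below are illustrative assumptions; the actual heuristic lives in download_files.py.

    def looks_bad(extracted_text, official_num_words, tolerance=0.5):
        """Return True if the extracted word count deviates too much from the official stat (assumed heuristic)."""
        n_words = len(extracted_text.split())
        if official_num_words <= 0:
            return False                 # no official stat to compare against
        return abs(n_words - official_num_words) / official_num_words > tolerance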

Postprocessing

Make a concatenated text file in sentence-per-line format.

python make_sentlines.py out_txts > all.txt
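
A minimal sketch of the sentence-per-line idea, using BlingFire's sentence splitter over every downloaded txt file; the real make_sentlines.py may differ in details such as file ordering and paragraph handling.

    import glob, sys
    from blingfire import text_to_sentences

    for path in sorted(glob.glob("out_txts/*.txt")):
        with open(path, encoding="utf-8") as f:
            text = f.read().strip()
        if text:
            # text_to_sentences returns the input with one sentence per line
            sys.stdout.write(text_to_sentences(text) + "\n\n")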

If you want to tokenize them into segmented words with Microsoft's BlingFire, run the command below. You can substitute another tokenizer if you prefer.

python make_sentlines.py out_txts | python tokenize_sentlines.py > all.tokenized.txt
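
If you substitute your own tokenizer, the filter only needs to read sentences from stdin and write space-separated tokens to stdout. A minimal BlingFire-based sketch of such a filter (the real tokenize_sentlines.py may differ):

    import sys
    from blingfire import text_to_words

    for line in sys.stdin:
        line = line.strip()
        # text_to_words returns the sentence with tokens separated by single spaces
        print(text_to_words(line) if line else "")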

Disclaimer

Please use the code responsibly and adhere to the respective copyright and related laws; for example, you can refer to the terms of service of smashwords.com. I am not responsible for any plagiarism or legal implications arising from the use of this repository.

Requirement

  • python3 is recommended
  • beautifulsoup4
  • progressbar2
  • blingfire
  • html2text
  • lxml

pip install -r requirements.txt

Note on Errors

  • Some error messages are expected, e.g., Failed: epub and txt, File is not a zip file, or Failed to open. However, the number of failures will be much smaller than the number of successes, so don't worry.

Acknowledgement

epub2txt.py is derived and modified from https://github.com/kevinxiong/epub2txt/blob/master/epub2txt.py

Citation

If you find this code useful, please cite it with the URL.

@misc{soskkobayashi2018bookcorpus,
    author = {Sosuke Kobayashi},
    title = {Homemade BookCorpus},
    howpublished = {\url{https://github.com/soskek/bookcorpus}},
    year = {2018}
}

Also, the original papers that introduced BookCorpus are as follows:

Yukun Zhu, Ryan Kiros, Richard Zemel, Ruslan Salakhutdinov, Raquel Urtasun, Antonio Torralba, Sanja Fidler. "Aligning Books and Movies: Towards Story-like Visual Explanations by Watching Movies and Reading Books." arXiv preprint arXiv:1506.06724, ICCV 2015.

@InProceedings{Zhu_2015_ICCV,
    title = {Aligning Books and Movies: Towards Story-Like Visual Explanations by Watching Movies and Reading Books},
    author = {Zhu, Yukun and Kiros, Ryan and Zemel, Rich and Salakhutdinov, Ruslan and Urtasun, Raquel and Torralba, Antonio and Fidler, Sanja},
    booktitle = {The IEEE International Conference on Computer Vision (ICCV)},
    month = {December},
    year = {2015}
}
@inproceedings{moviebook,
    title = {Aligning Books and Movies: Towards Story-like Visual Explanations by Watching Movies and Reading Books},
    author = {Yukun Zhu and Ryan Kiros and Richard Zemel and Ruslan Salakhutdinov and Raquel Urtasun and Antonio Torralba and Sanja Fidler},
    booktitle = {arXiv preprint arXiv:1506.06724},
    year = {2015}
}

Ryan Kiros, Yukun Zhu, Ruslan Salakhutdinov, Richard S. Zemel, Antonio Torralba, Raquel Urtasun, and Sanja Fidler. "Skip-Thought Vectors." arXiv preprint arXiv:1506.06726, NIPS 2015.

@article{kiros2015skip,
    title={Skip-Thought Vectors},
    author={Kiros, Ryan and Zhu, Yukun and Salakhutdinov, Ruslan and Zemel, Richard S and Torralba, Antonio and Urtasun, Raquel and Fidler, Sanja},
    journal={arXiv preprint arXiv:1506.06726},
    year={2015}
}

Comments
  • Could you share the processed all.txt?

    Hi Sosuke,

    Thanks a lot for the wonderful work! I hoped to obtain the BookCorpus dataset with your crawler, but I failed to crawl the articles owing to some network errors, so I am afraid I cannot build a complete dataset. Could you please share the dataset you have got, e.g. the all.txt? My email address is [email protected]. Thanks!

    Zhijie

    opened by thudzj 9
  • Fix merging sentences in one paragraph

    This PR simply merges the sentences in the stack whenever an empty line is met. I am not sure why the blank was necessary in the first place, so let's discuss it if I'm missing something here.

    Consider an example from the opening section of out_txts/100021__three-plays.txt. The current implementation outputs:

    Three Plays Published by Mike Suttons at Smashwords Copyright 2011 Mike Sutton ISBN 978-1-4659-8486-9 Tripping on Nothing
    

    It incorrectly merged the section title Tripping on Nothing into the stack. With this PR, the output is:

    Three Plays Published by Mike Suttons at Smashwords Copyright 2011 Mike Sutton ISBN 978-1-4659-8486-9
    
    
    Tripping on Nothing
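
    A minimal sketch of the behavior described above (flush the sentence stack at every blank line so that section titles start a new block); the actual PR may differ in details.

    def merge_paragraphs(lines):
        """Yield one merged block per paragraph, flushing the stack at blank lines (sketch)."""
        stack = []
        for line in lines:
            if line.strip():
                stack.append(line.strip())
            elif stack:
                yield " ".join(stack)
                stack = []
        if stack:
            yield " ".join(stack)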
    
    opened by yoquankara 4
  • intermittent issues with connections and file names

    example:

    python3.6 download_files.py --list url_list.jsonl --out out_txts --trash-bad-count
    0 files had already been saved in out_txts.
    File is not a zip file | File is not a zip file
    File is not a zip file
    File is not a zip file
    File is not a zip file
    Failed to open https://www.smashwords.com/books/download/490185/8/latest/0/0/existence.epub HTTPError: HTTP Error 503: Service Temporarily Unavailable
    Succeeded in opening https://www.smashwords.com/books/download/490185/8/latest/0/0/existence.epub
    File is not a zip file
    File is not a zip file | File is not a zip file
    File is not a zip file
    File is not a zip file | File is not a zip file
    File is not a zip file
    "There is no item named '' in the archive"
    File is not a zip file
    File is not a zip file
    "There is no item named 'OPS/' in the archive"
    File is not a zip file
    File is not a zip file | File is not a zip file
    Failed to open https://www.smashwords.com/books/download/793264/8/latest/0/0/jaynells-wolf.epub HTTPError: HTTP Error 503: Service Temporarily Unavailable
    Succeeded in opening https://www.smashwords.com/books/download/793264/8/latest/0/0/jaynells-wolf.epub
    Failed to open https://www.smashwords.com/books/download/479710/6/latest/0/0/tainted-ava-delaney-lost-souls-1.txt HTTPError: HTTP Error 503: Service Temporarily Unavailable
    Succeeded in opening https://www.smashwords.com/books/download/479710/6/latest/0/0/tainted-ava-delaney-lost-souls-1.txt
    File is not a zip file
    "There is no item named 'OPS/' in the archive"
    File is not a zip file
    Failed to open https://www.smashwords.com/books/download/496160/8/latest/0/0/royal-blood-royal-blood-1.epub HTTPError: HTTP Error 404: Not Found
    Failed to open https://www.smashwords.com/books/download/496160/8/latest/0/0/royal-blood-royal-blood-1.epub HTTPError: HTTP Error 404: Not Found
    Gave up to open https://www.smashwords.com/books/download/496160/8/latest/0/0/royal-blood-royal-blood-1.epub
    [Errno 2] No such file or directory: 'out_txts/royal-blood-royal-blood-1.epub'

    opened by David-Levinthal 3
  • Network Error

    Hi, thanks for your code; it's really useful for NLP researchers. Thank you again.

    When I run this code, it is often interrupted by network errors after downloading a few files; this may be caused by my network. So could you please send me an email with the crawled BookCorpus dataset attached, if you have it?

    My email is: [email protected]. Thank you very much.

    Best,

    opened by SummmerSnow 3
  • HTTPError: HTTP Error 401: Authorization Required

    Thanks for your code, but I ran into network trouble when running the download_list script. The full error message is Failed to open https://www.smashwords.com/books/category/1/downloads/0/free/medium/0 HTTPError: HTTP Error 401: Authorization Required

    What's more, when I use your url_list.jsonl to download files, the download_files script gives the same error message: Failed to open https://www.smashwords.com/books/download/246580/6/latest/0/0/silence.txt HTTPError: HTTP Error 401: Authorization Required

    I tried to open the URL in Chrome and can see the page without a 401 error. Could you help find a solution? Thanks a lot~

    opened by NotToday 2
  • smashwords.com forbids this; readme should tell people to get permission first

    The code in this repo violates both the robots.txt of smashwords.com:

    $ curl -s https://www.smashwords.com/robots.txt | tail -4
    User-agent: *
    Disallow: /books/search?
    Disallow: /books/download/
    Crawl-delay: 4
    

    and their terms of service, as far as I can see: “Third parties are not authorized to download, host and otherwise redistribute Smashwords books without prior written agreement from Smashwords” (you could imagine that this only prohibits downloading for subsequent hosting or redistribution, but I think that would be an opportunistic interpretation :) ).

    The readme should tell people very clearly that they must get permission from smashwords.com before running this stuff against their site.
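
    As an illustration of the point above (this is not part of the repository), Python's standard library can check a URL against robots.txt before fetching:

    from urllib import robotparser

    rp = robotparser.RobotFileParser()
    rp.set_url("https://www.smashwords.com/robots.txt")
    rp.read()
    # /books/download/ is disallowed for all user agents, so this prints False
    print(rp.can_fetch("*", "https://www.smashwords.com/books/download/246580/6/latest/0/0/silence.txt"))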

    opened by gthb 1
  • How to resolve URLError SSL: CERTIFICATE_VERIFY_FAILED

    If you get the following error:

    URLError: <urlopen error [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:748)>
    

    Adding this block of code at the top of download_files.py will resolve it.

    import os, ssl
    if (not os.environ.get('PYTHONHTTPSVERIFY', '') and
        getattr(ssl, '_create_unverified_context', None)):
        ssl._create_default_https_context = ssl._create_unverified_context
    
    opened by delzac 1
  • add: utf8 encoding for all file opens

    First of all, Thank you for sharing your work.

    There were some errors related to encoding.

    They were resolved by adding encoding='utf8' to every open() call.

    Have a beautiful day.

    opened by YongWookHa 1
  • download_list.py not working due to title change.

    Apparently the link titles on Smashwords have changed: txt is now found under "Plain text; contains no formatting" and epub under "Supported by many apps and devices (e.g., Apple Books, Barnes and Noble Nook, Kobo, Google Play, etc.)".

    opened by 1227505 1
  • add strip for genre scraping

    The scraped genre strings were dirty, so I added strip().

    "genres": ["\n                            Category: Fiction \u00bb Mystery & detective \u00bb Women Sleuths ", "\n                            Category: Fiction \u00bb Fantasy \u00bb Paranormal "]
    

    will be

    "genres": ["Category: Fiction \u00bb Mystery & detective \u00bb Women Sleuths", "Category: Fiction \u00bb Fantasy \u00bb Paranormal"]
    
    opened by soskek 0
  • Update on the `url_list.jsonl`

    Hello, on 2022-12-17 I ran download_list.py with the number of pages raised to 31430, which covered the last search page. Here is the updated url_list.jsonl.zip.

    Compared to the original file, 4544 entries were lost and 8475 entries were added.

    Hope this helps.

    opened by thipokKub 0
  • Here’s a download link for all of bookcorpus as of Sept 2020

    You can download it here: https://twitter.com/theshawwn/status/1301852133319294976?s=21

    it contains 18k plain text files. The results are very high quality. I spent about a week fixing the epub2txt script, which you can find at https://github.com/shawwn/scrap named “epub2txt-all”. (not epub2txt.)

    The new script:

    1. Correctly preserves structure, matching the table of contents very closely;

    2. Correctly renders tables of data (by default html2txt produces mostly garbage-looking results for tables),

    3. Correctly preserves code structure, so that source code and similar things are visually coherent,

    4. Converts numbered lists from “1\.” to “1.”

    5. Runs the full text through ftfy.fix_text() (which is what OpenAI does for GPT), replacing Unicode apostrophes with ascii apostrophes;

    6. Expands Unicode ellipses to “...” (three separate ascii characters).
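
    A tiny sketch of the normalization in items 5 and 6 above, assuming ftfy is installed; the actual epub2txt-all script applies more rules than this.

    import ftfy

    def normalize(text):
        text = ftfy.fix_text(text)              # fixes mojibake and, by default, uncurls quotes
        return text.replace("\u2026", "...")    # expand a Unicode ellipsis into three ascii dots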

    The tarball download link (see tweet above) also includes the original ePub URLs, updated for September 2020, which ended up being about 2k more than the URLs in this repo. But they’re hard to crawl. I do have the epub files, but I’m reluctant to distribute them for obvious reasons.

    opened by shawwn 13
  • epub2txt.py produces incorrect results for many epubs

    Specifically this line: https://github.com/soskek/bookcorpus/blob/05a3f227d9748c2ee7ccaf93819d0e0236b6f424/epub2txt.py#L149

    When I tried to convert a book on Tensorflow to text using this script, I noticed chapter 1 was being repeated multiple times.

    The reason is that the Table of Contents looks similar to this:

    ch1.html#section1
    ch1.html#section2
    ch1.html#section3
    ...
    ch2.html#section1
    ch2.html#section2
    ...

    The epub2txt script iterates over this table of contents, splits "ch1.html#section1" into "ch1.html", and converts that chapter to text. It then repeats for "ch1.html#section2", converting the same chapter to text again.
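
    A minimal sketch of the fix idea (the variable and function names here are hypothetical, not from the repo): strip the #fragment and skip chapters that were already converted.

    seen = set()
    for href in toc_hrefs:                  # hypothetical list of ToC entries
        chapter = href.split("#")[0]        # "ch1.html#section2" -> "ch1.html"
        if chapter in seen:
            continue                        # this chapter was already converted
        seen.add(chapter)
        text = convert_chapter(chapter)     # hypothetical conversion call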

    I have a fixed version here: https://github.com/shawwn/scrap/blob/afb699ee9c8181b3728b81fc410a31b66311f0d8/epub2txt#L158-L206

    opened by shawwn 1
  • Can anyone download all the files in the url list file?

    I tried to download the BookCorpus data. So far I have only downloaded around 5000 books. Can anyone get all the books? I ran into a lot of HTTP Error 403: Forbidden responses. How can I fix this? Or can I get all of the BookCorpus data from somewhere?

    Thanks

    opened by wxp16 13