Library to scrape and clean web pages to create massive datasets.

Overview

lazynlp

A straightforward library that allows you to crawl, clean up, and deduplicate webpages to create massive monolingual datasets. Using this library, you should be able to create datasets larger than the one used by OpenAI for GPT-2.

Setup

This library uses Python 3.

  1. Clone this library and cd into the lazynlp folder:

git clone https://github.com/chiphuyen/lazynlp.git
cd lazynlp

  2. Install the dependencies:

pip3 install -r requirements.txt

  3. Install the library:

pip3 install .

If you want to uninstall the library, use:

pip3 uninstall lazynlp

How to create a massive dataset using lazynlp:

Step 1. Obtain URLs of the webpages you want to crawl

There are several major dumps of URLs available that you can use.

Reddit URLs

This is the link to all Reddit submissions, organized by month. You can download the raw dumps and process them to extract the links. Keep in mind that each of these dumps is huge (100MB - 1GB).

@jcpeterson is kind enough to provide a list of deduplicated links with at least 3 karma that you can download here.

There are about 23M URLs from 2015-06 through 2018-10, of which around 40-60% are bad (they no longer exist or aren't scraper-friendly). This means that after you've downloaded and cleaned all the good URLs, you should have approximately 10M webpages, or 50GB of pure text.

Gutenberg

You can download the list of all URLs to US Gutenberg books here. There are 50K books, which amount to about 14GB of pure text.

You can also run lazynlp.get_us_gutenberg_links() to get the same list. For example, to get all the US Gutenberg URLs and store them in the file us_gutenberg.urls, run the following command. This might take half a day.

lazynlp.get_us_gutenberg_links('us_gutenberg.urls')

You can download the list of all URLs to Australian Gutenberg books here. There are 4k books, which amount to about 1GB of pure text.

You can also run lazynlp.get_aus_gutenberg_links() to get the same list. For example, to get all the Australian Gutenberg URLs and store them in the file aus_gutenberg.urls:

lazynlp.get_aus_gutenberg_links('aus_gutenberg.urls')

Wikipedia

You can download the Wikipedia dumps here.

Step 2. Deduplicate URLs

You don't want to download the same URL multiple times. There are two functions that help you deduplicate all URLs:

lazynlp.dedup_lines(files, outfold)

This function takes in a list of files (in each file, each line is a URL) and deduplicates each file against all previous files, saving the deduplicated files in outfold.

lazynlp.dedup_lines_from_new_file(original_files, new_file, outfile)

This function allows you to deduplicate a new file against all previously deduplicated files (original_files).
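The idea behind these helpers can be sketched in a few lines. The function below is an illustrative, self-contained re-implementation (its name and exact behavior are assumptions, not the library's code): it keeps a set of already-seen URLs and emits only the new ones.

```python
# Minimal sketch of line-level deduplication, in the spirit of
# lazynlp.dedup_lines_from_new_file (illustrative only; not the
# library's actual implementation).
def dedup_lines_against(original_lines, new_lines):
    """Return the lines of new_lines not already seen in original_lines."""
    seen = set(line.strip() for line in original_lines)
    result = []
    for line in new_lines:
        url = line.strip()
        if url and url not in seen:
            seen.add(url)  # also deduplicates within new_lines itself
            result.append(url)
    return result
```

The library versions operate on files rather than in-memory lists, but the set-membership check is the same idea.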

Step 3. Download the URLs

If you want to download each webpage separately, call:

lazynlp.download_page(link, context=None, timeout=None)

If you want to download from a file that contains a list of URLs, call:

lazynlp.download_pages(link_file, folder, timeout=30, default_skip=True, extensions=[], domains=[])

"""

link_file:

	file contains links to webpages to crawl. Each line contains one URL.

folder:

	folder that you want to contain your downloaded pages.

timeout:

	seconds to wait for a page to respond before abandoning it.

default_skip:

	set to True if you want to automatically skip all URLs that contain domains and extensions that are known to be scraper-unfriendly or NSFW.

	You can see the list of excluded domains at lazynlp/exclude_domains.txt.

	You can see the list of excluded extensions at lazynlp/exclude_extensions.txt

You can also add your own domains and extensions to skip with the domains and extensions arguments.

In the folder:

	Each URL is downloaded into a file, indexed by the order in which it is downloaded. The first line of each file is the URL. The rest is the textual content of the page.
 	
 	index.urls contains all the URLs that have been successfully downloaded.
	
	bad.urls contains the URLs that are bad.
	
	connection.urls contains the URLs that haven't been downloaded because of connection issues.
	
	non_ascii.urls contains the URLs that haven't been downloaded because of bad encoding issues.
	
	empty.urls contains the URLs that have empty textual content.

"""

If you have a lot of URLs, you can divide the list into multiple files and call this function on each file separately. I was able to run 40 scripts in parallel. I guess I could have parallelized the code, but I found this to be easier.
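The split-and-run workflow described above can be sketched as follows; shard_urls is a hypothetical helper for illustration, not part of lazynlp:

```python
# Split a URL list into N shards so each shard can be written to its own
# file and fed to a separate lazynlp.download_pages() run (a hypothetical
# workflow sketch, not part of the library).
def shard_urls(urls, n_shards):
    """Round-robin the URLs into n_shards lists of roughly equal size."""
    shards = [[] for _ in range(n_shards)]
    for i, url in enumerate(urls):
        shards[i % n_shards].append(url)
    return shards
```

You would then write each shard to a file such as urls_part_0.urls and launch one download script per file.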

Step 4. Clean the webpages

You can get rid of all HTML tags, decode UTF-8 bytes into strings, transliterate foreign characters, collapse whitespace, replace unprintable characters, unescape HTML, etc. using the methods available in lazynlp/cleaner.py.

You can also just call the following function to do most of the processing.

lazynlp.clean_page(page)
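For intuition, here is a rough stdlib-only sketch of the kind of cleanup involved (this is not lazynlp's implementation; real HTML parsing is more careful than these regexes):

```python
import html
import re

# Rough sketch of HTML-to-text cleanup like the steps lazynlp/cleaner.py
# performs: drop scripts, strip tags, unescape entities, collapse
# whitespace. Illustrative only; not the library's code.
def clean_page_sketch(raw):
    text = re.sub(r'<script.*?</script>', ' ', raw, flags=re.S)  # drop scripts
    text = re.sub(r'<[^>]+>', ' ', text)         # strip remaining HTML tags
    text = html.unescape(text)                   # &amp; -> &, etc.
    text = re.sub(r'\s+', ' ', text).strip()     # collapse whitespace
    return text
```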

Note:

In this library, the function lazynlp.download_pages() does both the crawling and the cleaning, so the webpages you get are pure text, like this:

http://www.thecannabist.co/2017/03/02/jeff-sessions-russia-resign-democrats/74687/
Attorney general nominee Sen. Jeff Sessions, R-Ala., testifies on Capitol Hill in Washington on Jan. 10, 2017, in the first day of his confirmation hearing before the Senate Judiciary Committee. Top Democrats now say that because he misled the committee about his visits to Russia, he should resign. (Andrew Harnik, The Associated Press)

House Oversight and Government Reform Committee Chairman Jason Chaffetz, R-Utah, tweeted early Thursday that "AG Sessions should clarify his testimony and recuse himself."

Later, Sen. Rob Portman, R-Ohio, said in a statement, "Jeff Sessions is a former colleague and a friend, but I think it would be best for him and for the country to recuse himself from the DOJ Russia probe."

House Majority Leader Kevin McCarthy, R-Calif., also initially said during an appearance on MSNBC's "Morning Joe" that Sessions should bow out.

Asked whether Sessions should recuse himself in this situation, McCarthy replied "I think the trust of the American people -- you recuse yourself in these situations, yes."

McCarthy was pressed a second time about whether he was calling for Sessions to recuse himself and he confirmed that he believed the situation required a recusal.

"I think it would be easier from that standpoint, yes," McCarthy said.

But McCarthy later said his comment had been misinterpreted, telling Fox News' "Fox and Friends," "I'm not calling on him to recuse himself. I was asked on 'Morning Joe,' if he needs to recuse himself as going forward. As you just heard, Attorney General Sessions said he would recuse himself going forward -- appropriate, and that's all my answer was."

The comments from prominent Republicans follow revelations that Sessions met with the Russian ambassador during election season. Under oath in front of the Senate Judiciary Committee for his confirmation hearing in January, Sessions had said that he had not met with any Russian officials.

Senate Minority Leader Charles Schumer, D-N.Y., joined growing Democratic calls for Sessions to either resign or at least recuse himself from any investigations into Russia's meddling in U.S. elections.

"Attorney General Sessions cannot possibly lead an investigation into Russian interference in our elections or come anywhere near it. With these revelations, he may indeed become the subject of it," Schumer told reporters. "Better for the country if he resigns, but let's get an investigation going."

Because the Department of Justice should be above reproach, for the good of the country, the Attorney General should resign.

Step 5. Remove duplicated webpages

To avoid any piece of text being over-represented, you want to include only pages that don't significantly overlap with other pages.

To estimate the amount of overlapping of target files with certain source files, use this function:

lazynlp.estimate_overlap(source_files, target_files, gran='word', n=8, capacity=10000, error_rate=1e-5, header=0, interval=100000)

gran is the granularity of tokens: 'char' or 'word' level.

n is the n-gram size.

capacity and error_rate are parameters for the BloomFilter used.

header is the number of lines at the start of each file to skip; this option exists because, in our format, the first line of each file is the URL.
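To see the mechanics, here is a self-contained sketch of word n-gram overlap estimation, with a plain Python set standing in for the BloomFilter (names and details here are illustrative, not the library's internals):

```python
# Sketch of n-gram overlap estimation. A set stands in for the
# BloomFilter lazynlp uses; with a Bloom filter, memory stays bounded
# at the cost of a small false-positive rate (error_rate).
def ngrams(tokens, n=8):
    """All contiguous word n-grams of the token list, as a set."""
    return {' '.join(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

def estimate_overlap_sketch(source_text, target_text, n=8):
    """Fraction of the target's word n-grams already seen in the source."""
    source_set = ngrams(source_text.split(), n)
    target = ngrams(target_text.split(), n)
    if not target:
        return 0.0
    return sum(1 for g in target if g in source_set) / len(target)
```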

To estimate the amount of overlapping of a target file with an existing BloomFilter, use this function:

lazynlp.estimate_overlap_bf(bf, target_file, gran='word', n=8, header=0)

Given a list of files (e.g. cleaned webpages), to filter out all the files whose overlap with other files exceeds threshold, use this function:

lazynlp.filter_files(files, threshold=0.5, gran='word', n=8, capacity=100000000, error_rate=1e-7, header=0, interval=1000000)

Names of all the files that are deemed duplicates are stored in dupped_files.list.

Names of all the files used for the dataset are stored in clean_files.list.

Some notes:

  1. 1GB of text is about 1 billion characters. An English word has on average 4.5 characters, or 5.5 including whitespace, so 1GB of text is about 181M words (10^9 / 5.5 ≈ 181M).

  2. When I ran 30 scripts in parallel, it took 3 hours to download and clean 1GB of pure text. So it'd take 5 days to get 50GB of pure text.

  3. The OpenAI dataset has 40GB, which I estimate to contain about 7-8 billion words. If you download all the webpages from the good Reddit URLs and Gutenberg books, you should have a dataset bigger than OpenAI's WebText.

  4. OpenAI, in their paper for GPT-2, didn't include Wikipedia articles for fear of overlapping. You can choose to include Wikipedia articles that have less than a certain amount of overlap with the existing dataset using lazynlp.estimate_overlap_bf(bf, target_file, gran='word', n=8).

Comments
  • License?

    Hello,

    There are legal problems with code with no license, where I work using code that has no license attached to it is outright banned.

    Would you be so kind to add some sort of license in a file?

    It would be very nice of you if it were something permissive, like MIT or Apache 2 or BSD too.

    Thank you!

    opened by mrkafk 2
  • syntax error near unexpected token

    I see a "syntax error near unexpected token `sgp.urls,'" on submitting the following command: lazynlp.download_pages(sgp.urls, text_docs, timeout = 30, default_skip = True, extensions = [], domains = [])

    Is there something wrong I am doing? sgp.urls has all the URLs, text_docs is the name of the folder to get the outputs into, the rest of the parameters as default.

    opened by vamsiuppala 2
  • Sum of n-gram counts

    Thanks for building this, really nice work!

    I was reading through the code and noticed this line https://github.com/chiphuyen/lazynlp/blob/08696976ff1b521103147e51a187e23504fe23bd/lazynlp/analytics.py#L56 Were you looking to iteratively add up the line-ngram-counts? If yes, I can help complete that and raise a PR

    Lmk

    All the best

    opened by MichaMucha 1
  • import re for line 18

    flake8 testing of https://github.com/chiphuyen/lazynlp on Python 3.7.1

    $ flake8 . --count --select=E901,E999,F821,F822,F823 --show-source --statistics

    ./lazynlp/utils.py:17:12: F821 undefined name 're'
        return re.match("^([a-z]\.)+?$", token.lower()) is not None
               ^
    1     F821 undefined name 're'
    1
    

    E901,E999,F821,F822,F823 are the "showstopper" flake8 issues that can halt the runtime with a SyntaxError, NameError, etc. These 5 are different from most other flake8 issues, which are merely "style violations" -- useful for readability but they do not affect runtime safety.

    • F821: undefined name name
    • F822: undefined name name in __all__
    • F823: local variable name referenced before assignment
    • E901: SyntaxError or IndentationError
    • E999: SyntaxError -- failed to compile a file into an Abstract Syntax Tree
    opened by cclauss 1
  • Check robot.txt and ai.txt

    Hello. I'm new to open source contribution. I saw your issue #6 and created a robots.py file that might help you. read_disallows(url) takes in a URL and returns a list of pattern objects for all disallowed items from the robots.txt of the URL's base URL. I've tested it by providing "https://github.com/GrayHat12" as input to the function. It extracted the base URL "https://github.com" and read robots.txt with a GET request on "https://github.com/robots.txt". Then I used a regex to extract all disallowed URLs and converted them to regex strings that can be compared against any URL with the same base URL (github.com). For example, one disallowed URL is "/*/stargazers"; I converted it to "/[^/]*/stargazers", compiled it to a pattern object, and added it to the disallowed list that the function returns.

    Now when you compare the URL "https://github.com/chiphuyen/lazynlp/stargazers" with the pattern "/[^/]*/stargazers", a match will be found, and you can choose not to crawl it.

    Hope this was explanatory enough. I didn't understand the ai.txt part in the issue though. Will be great if someone could elaborate on that. 🐰
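    The pattern conversion the comment describes might look something like this (a simplified sketch, not the PR's actual code):

```python
import re

# Turn a robots.txt Disallow rule with '*' wildcards into a compiled
# regex, as the comment above describes: '/*/stargazers' becomes
# '/[^/]*/stargazers'. A simplified illustration, not the PR's code.
def disallow_to_regex(rule):
    # Escape regex metacharacters, then translate the '*' wildcard so it
    # matches any characters within a single path segment.
    pattern = re.escape(rule).replace(r'\*', '[^/]*')
    return re.compile(pattern)
```

    A URL path is then considered disallowed if the compiled pattern is found anywhere in it, e.g. via pattern.search(path).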

    Sorry for any issues with my pull request. I'm new to this and am hoping someone will guide me through

    opened by GrayHat12 0
  • urllib fails without headers

    Hi, Thanks for this great tool.

    I noticed urllib fails with a Forbidden Request error when I call download_page on some links. You can reproduce the error by trying the code below:

    import lazynlp
    link = "https://punchng.com/"
    page = lazynlp.download_page(link, context=None, timeout=None)
    

    This raises a 403 Forbidden error.

    I've attempted to create a PR that adds headers to the request by default.
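    A common workaround for such 403s is to attach a browser-style User-Agent header. A minimal sketch with plain urllib follows; whether lazynlp.download_page accepts custom headers depends on the version you run:

```python
import urllib.request

# Build a request carrying a browser-style User-Agent so that strict
# servers don't reject it with 403. The header value is an arbitrary
# illustrative choice.
def build_request(url):
    return urllib.request.Request(
        url, headers={'User-Agent': 'Mozilla/5.0 (compatible; lazynlp)'})

def fetch(url, timeout=30):
    """Download the page body as text using the header-carrying request."""
    with urllib.request.urlopen(build_request(url), timeout=timeout) as resp:
        return resp.read().decode('utf-8', errors='replace')
```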

    opened by Olamyy 0
  • Text quality score

    Have you considered adding a metric to assess the text quality of the documents, for example using the frequencies of short frequent words? (http://rolandschaefer.net/?p=78)

    opened by vanyacohen 1
  • (Also) parsing structured data while you're at it

    One might as well extract structured data from each element of such a dataset.

    Linked data. https://5stardata.info/

    Useful features:

    • Relations to e.g. https://schema.org/Dataset (s)
    • Reified edges to other https://schema.org/ScholarlyArticle (s) indicating whether A seems to confirm or disprove B
    • URIs for columns in CSV and CSVW datasets
      • https://www.w3.org/TR/tabular-data-primer/ (CSVW)
    help wanted 
    opened by westurner 1
Releases: v1.0.0

Owner: Chip Huyen, developing tools and best practices for machine learning production.