pygooglenews

If Google News had a Python library

Created by Artem from newscatcherapi.com, but you do not need anything from us or anyone else to get the software going: it works out of the box.

My blog post about how I did it

Demo

You might also like to check out our Google News API or Financial Google News API

About

A Python wrapper of the Google News RSS feed.

Top stories, topic related news feeds, geolocation news feed, and an extensive full text search feed.

This work is more of a collection of all things I could find out about how Google News functions.

How is it different from other Pythonic Google News libraries?

  1. URL-escaping helper for user input in the search function
  2. Extensive support for the search function that makes it simple to use:
    • exact match
    • in-title match, in-URL match, etc.
    • search by date range (from_ & to_), latest published (when)
  3. Parsing of sub-articles. Almost always, every feed except the search one contains a subset of similar news for each article in the feed. This package takes care of extracting those sub-articles. This feature can be highly useful for ML tasks where you need to collect data on similar article headlines

Examples of Use Cases

  1. Integrating a news feed into your platform/application/website
  2. Collecting data by topic to train your own ML model
  3. Searching for the latest mentions of your new product
  4. Media monitoring of people/organizations (PR)

Working with Google News in Production

Before we start: if you want to integrate Google News data into your production system, then I would advise you to use one of the 3 methods described below. Why? Because you do not want your server's IP address to be blocked by Google. Every time you call any function, there is an HTTPS request to Google's servers. Don't get me wrong, this Python package still works out of the box.

  1. NewsCatcher's Google News API — all code is written for you, clean & structured JSON output. Low price. You can test it yourself with no credit card. Plus, a financial version of the API is also available.
  2. ScrapingBee API, which handles proxy rotation for you. Each function in this package has a scraping_bee parameter where you paste your API key. You can also try it for free, no credit card required. See example
  3. Your own proxy — already have a pool of proxies? Each function in this package has a proxies parameter (a Python dictionary) where you pass your own proxies.

Motivation

I love working with news data. I love it so much that I created my own company that crawls hundreds of thousands of news articles and allows you to search them via a news API. But this time, I want to share with the community a Python package that makes it simple to get news data from the best search engine ever created: Google.

Most likely, you know already that Google has its own news service. It is different from the usual Google search that we use on a daily basis (sorry DuckDuckGo, maybe next time).

This package uses the RSS feed of Google News (the top stories page, for example).

RSS is an XML page that is already well structured. I rely heavily on the Feedparser package to parse the RSS feed.

Google News used to have an API, but it was deprecated many years ago. (Unofficial) information about the RSS syntax is scattered over the web, and there is no official documentation. So, I tried my best to collect all this information in one place.

Installation

$ pip install pygooglenews --upgrade

Quickstart

from pygooglenews import GoogleNews

gn = GoogleNews()

Top Stories

top = gn.top_news()

Stories by Topic

business = gn.topic_headlines('business')

Geolocation Specific Stories

headquarters = gn.geo_headlines('San Fran')

Stories by a Query Search

# search for the best matching articles that mention MSFT and
# do not mention AAPL (over the past 6 months)
search = gn.search('MSFT -AAPL', when = '6m')

Documentation - Functions & Classes

GoogleNews Class

from pygooglenews import GoogleNews
# default GoogleNews instance
gn = GoogleNews(lang = 'en', country = 'US')

To get access to all the functions, you first have to instantiate the GoogleNews class.

It has 2 required variables: lang and country

You can try any combination of those 2; however, not every combination exists. Only the combinations supported by Google News will work. Check the official Google News page to see what is covered:

On the bottom left side of the Google News page you will find a Language & region section listing all of the supported combinations.

For example, for country=UA (Ukraine), there are 2 languages supported:

  • lang=uk Ukrainian
  • lang=ru Russian

Top Stories

top = gn.top_news(proxies=None, scraping_bee = None)

top_news() returns the top stories for the selected country and language that are defined in the GoogleNews class. The returned object contains feed (a FeedParserDict with the feed metadata) and entries (a list of the articles found, with all data parsed).
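For instance, here is a minimal sketch of reading the parsed entries (each entry is a Feedparser entry, so fields like title and link are available):

from pygooglenews import GoogleNews

gn = GoogleNews()
top = gn.top_news()

print(top['feed'].title)  # name of the feed

# each entry is one parsed article
for entry in top['entries']:
    print(entry['title'], entry['link'])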


Stories by Topic

business = gn.topic_headlines('BUSINESS', proxies=None, scraping_bee = None)

The returned object contains feed (a FeedParserDict with the feed metadata) and entries (a list of the articles found, with all data parsed).

Accepted topics are:

  • WORLD
  • NATION
  • BUSINESS
  • TECHNOLOGY
  • ENTERTAINMENT
  • SCIENCE
  • SPORTS
  • HEALTH

However, you can find some other topics that are also supported by Google News.

For example, if you search for corona in the search tab of en + US you will find COVID-19 as a topic.

The URL looks like this: https://news.google.com/topics/CAAqIggKIhxDQkFTRHdvSkwyMHZNREZqY0hsNUVnSmxiaWdBUAE?hl=en-US&gl=US&ceid=US%3Aen

We have to copy the text after topics/ and before ?; you can then use it as an input for the topic_headlines() function.

from pygooglenews import GoogleNews

gn = GoogleNews()
covid = gn.topic_headlines('CAAqIggKIhxDQkFTRHdvSkwyMHZNREZqY0hsNUVnSmxiaWdBUAE')

However, be aware that this topic ID will be unique for each language/country combination.


Stories by Geolocation

gn = GoogleNews('uk', 'UA')
kyiv = gn.geo_headlines('kyiv', proxies=None, scraping_bee = None)
# or 
kyiv = gn.geo_headlines('kiev', proxies=None, scraping_bee = None)
# or
kyiv = gn.geo_headlines('киев', proxies=None, scraping_bee = None)
# or
kyiv = gn.geo_headlines('Київ', proxies=None, scraping_bee = None)

The returned object contains feed (a FeedParserDict with the feed metadata) and entries (a list of the articles found, with all data parsed).

All of the above variations will return the same feed of the latest news about Kyiv, Ukraine:

kyiv['feed'].title

# 'Київ - Останні - Google Новини'

It is language-agnostic; however, there is no guarantee that a feed exists for any specific place. For example, if you want to find the feed for LA or Los Angeles, you can do it with GoogleNews('en', 'US').

The main (en, US) Google News client will most likely find a feed for most places.
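For instance, a quick sketch (Los Angeles is just an illustrative place name):

from pygooglenews import GoogleNews

gn = GoogleNews('en', 'US')
la = gn.geo_headlines('Los Angeles')

print(la['feed'].title)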


Stories by a Query

gn.search(query: str, helper = True, when = None, from_ = None, to_ = None, proxies=None, scraping_bee=None)

The returned object contains feed (a FeedParserDict with the feed metadata) and entries (a list of the articles found, with all data parsed).

Google News search itself is a complex function that has inherited some features from the standard Google Search.

The official reference on what could be inserted

The biggest obstacle you might face is writing the URL-escaped input. To ease this process, helper = True is turned on by default.

helper uses urllib.parse.quote_plus to automatically convert the input.

For example:

  • 'New York metro opening' --> 'New+York+metro+opening'
  • 'AAPL -MSFT' --> 'AAPL+-MSFT'
  • '"Tokyo Olimpics date changes"' --> '%22Tokyo+Olimpics+date+changes%22'

You can turn it off and write your own query, in case you need it, by passing helper = False.
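For instance, a sketch of escaping a query yourself with urllib.parse.quote_plus and then bypassing the built-in helper:

from urllib.parse import quote_plus

from pygooglenews import GoogleNews

gn = GoogleNews()

# escape the raw query manually, then tell search() not to escape it again
raw = quote_plus('"Tokyo Olympics date changes"')  # '%22Tokyo+Olympics+date+changes%22'
s = gn.search(raw, helper = False)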

The when parameter (str) sets the time range for the published datetime. I could not find any documentation regarding this option, but here is what I deduced:

  • h for hours (for me, it worked for up to 101h). when=12h will search only for articles matching the search criteria and published within the last 12 hours
  • d for days
  • m for months (for me, it worked for up to 48m)

I did not set any hard limit here. You may try putting anything here; it will probably work. However, I would like to warn you that wrong inputs will not lead to an error. Instead, the when parameter will simply be ignored by Google.

from_ and to_ accept dates in the %Y-%m-%d format. For example, 2020-07-01.
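For instance, a sketch of a date-bounded search (the query and the dates are arbitrary examples):

from pygooglenews import GoogleNews

gn = GoogleNews()

# articles mentioning boeing published during July 2020
s = gn.search('boeing', from_ = '2020-07-01', to_ = '2020-07-31')

print(s['feed'].title)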


Google's Special Query Terms Cheat Sheet

Many of Google's Special Query Terms have been tested one by one. Most of the core ones have been inherited by the Google News service. At first, I wanted to integrate all of them as search() function parameters. But I realised that it might be confusing and difficult to make them all work correctly.

Instead, I decided to write some kind of a cheat sheet that should give you a decent understanding of what you could do.

  • Boolean OR Search [ OR ]

from pygooglenews import GoogleNews

gn = GoogleNews()

s = gn.search('boeing OR airbus')

print(s['feed'].title)
# "boeing OR airbus" - Google News

  • Exclude Query Term [-]

"The exclude (-) query term restricts results for a particular search request to documents that do not contain a particular word or phrase. To use the exclude query term, you would preface the word or phrase to be excluded from the matching documents with "-" (a minus sign)."

  • Include Query Term [+]

"The include (+) query term specifies that a word or phrase must occur in all documents included in the search results. To use the include query term, you would preface the word or phrase that must be included in all search results with "+" (a plus sign).

The URL-escaped version of + (a plus sign) is %2B."

  • Phrase Search

"The phrase search (") query term allows you to search for complete phrases by enclosing the phrases in quotation marks or by connecting them with hyphens.

The URL-escaped version of " (a quotation mark) is %22.

Phrase searches are particularly useful if you are searching for famous quotes or proper names."

  • allintext

"The allintext: query term requires each document in the search results to contain all of the words in the search query in the body of the document. The query should be formatted as allintext: followed by the words in your search query.

If your search query includes the allintext: query term, Google will only check the body text of documents for the words in your search query, ignoring links in those documents, document titles and document URLs."

  • intitle

"The intitle: query term restricts search results to documents that contain a particular word in the document title. The search query should be formatted as intitle:WORD with no space between the intitle: query term and the following word."

  • allintitle

"The allintitle: query term restricts search results to documents that contain all of the query words in the document title. To use the allintitle: query term, include "allintitle:" at the start of your search query.

Note: Putting allintitle: at the beginning of a search query is equivalent to putting intitle: in front of each word in the search query."

  • inurl

"The inurl: query term restricts search results to documents that contain a particular word in the document URL. The search query should be formatted as inurl:WORD with no space between the inurl: query term and the following word"

  • allinurl

"The allinurl: query term restricts search results to documents that contain all of the query words in the document URL. To use the allinurl: query term, include allinurl: at the start of your search query."

List of operators that do not work (for me, at least):

  1. Most (probably all) of the as_* terms do not work for Google News
  2. allinlinks:
  3. related:

Tip: if you want to build a near real-time feed for a specific topic, use when='1h'. If Google captured fewer than 100 articles over the past hour, you should be able to retrieve all of them.
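A rough polling sketch of that idea (the query and the 10-minute interval are arbitrary choices, not a tested recommendation):

import time

from pygooglenews import GoogleNews

gn = GoogleNews()
seen = set()

while True:
    # re-fetch the past-hour feed and print only the articles we have not seen yet
    for entry in gn.search('intitle:boeing', when = '1h')['entries']:
        if entry['link'] not in seen:
            seen.add(entry['link'])
            print(entry['title'])
    time.sleep(600)  # wait 10 minutes between polls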

Check the Useful Links section if you want to dig into how Google Search works.

In particular, see the Special Query Terms section of the Google XML reference.

Plus, I provide some more examples under the Advanced Querying Search Examples section


Output Body

All 4 functions return a dictionary that has 2 sub-objects:

  • feed - contains the information on the feed metadata
  • entries - contains the parsed articles

Both are inherited from Feedparser. The only change is that each dictionary under entries also contains sub_articles, which are the similar articles found in the description. Usually, it is non-empty for the top_news() and topic_headlines() feeds.

Tip: to check the found feed's name, just look at the title under the feed dictionary.
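A minimal sketch of walking both levels (assuming each sub-article dictionary carries title, url, and publisher keys):

from pygooglenews import GoogleNews

gn = GoogleNews()
top = gn.top_news()

print(top['feed'].title)  # the found feed's name

for entry in top['entries']:
    print(entry['title'])
    # similar stories extracted from this entry's description
    for sub in entry['sub_articles']:
        print(' -', sub['title'], sub['publisher'])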


How to use pygooglenews with ScrapingBee

Every function has a scraping_bee parameter. It accepts your ScrapingBee API key, which will be used to get the response from Google's servers.

You can take a look at what exactly is happening in the source code: check the __scaping_bee_request() function under the GoogleNews class.

Pay attention to the concurrency allowed by each ScrapingBee plan.

Usage example:

gn = GoogleNews()

# it's a fake API key, do not try to use it
gn.top_news(scraping_bee = 'I5SYNPRFZI41WHVQWWUT0GNXFMO104343E7CXFIISR01E2V8ETSMXMJFK1XNKM7FDEEPUPRM0FYAHFF5')

How to use pygooglenews with proxies

So, if you have your own HTTP/HTTPS proxy(ies) that you want to use to make requests to Google, this is how you do it:

gn = GoogleNews()

gn.top_news(proxies = {'https':'34.91.135.38:80'})

Advanced Querying Search Examples

Example 1. Search for articles that mention boeing and do not mention airbus

from pygooglenews import GoogleNews

gn = GoogleNews()

s = gn.search('boeing -airbus')

print(s['feed'].title)
# "boeing -airbus" - Google News

Example 2. Search for articles that mention boeing in title

from pygooglenews import GoogleNews

gn = GoogleNews()

s = gn.search('intitle:boeing')

print(s['feed'].title)
# "intitle:boeing" - Google News

Example 3. Search for articles that mention boeing in title and got published over the past hour

from pygooglenews import GoogleNews

gn = GoogleNews()

s = gn.search('intitle:boeing', when = '1h')

print(s['feed'].title)
# "intitle:boeing when:1h" - Google News

Example 4. Search for articles that mention boeing or airbus

from pygooglenews import GoogleNews

gn = GoogleNews()

s = gn.search('boeing OR airbus', when = '1h')

print(s['feed'].title)
# "boeing AND airbus when:1h" - Google News

Useful Links

Stack Overflow thread from which it all began

Google XML reference for the search query

Google News Search parameters (The Missing Manual)


Built With

Feedparser

Beautifulsoup4


About me

My name is Artem. I ❤️ working with news data. I am a co-founder of NewsCatcherAPI - Ultra-fast API to find news articles by any topic, country, language, website, or keyword

If you are interested in hiring me, please contact me by email: [email protected] or [email protected]

Follow me on 🖋  Twitter - I write about data engineering, Python, entrepreneurship, and memes.

Want to read about how it all was done? Subscribe to CODARIUM

thx to Kizy


Change Log

v0.1.1 -- fixed language-country issues

Comments
  • Exception: Could not parse your date

    I have this very simple code

    gn = GoogleNews()

    start = datetime.date(2018,3,1)

    end = datetime.date(2019,3,1)

    print(start)

    gn.search(query="car", from_=start.strftime('%Y-%m-%d'), to_=end.strftime('%Y-%m-%d'))

    but it's giving me the following error:

    AttributeError                            Traceback (most recent call last)
    /opt/anaconda3/lib/python3.8/site-packages/pygooglenews/__init__.py in __from_to_helper(self, validate)
         89     try:
    ---> 90         validate = parse_date(validate).strftime('%Y-%m-%d')
         91         return str(validate)

    (several frames inside dateparser omitted)

    /opt/anaconda3/lib/python3.8/site-packages/dateparser/date_parser.py in parse(self, date_string, settings)
         36     stz = get_localzone()
    ---> 37     date_obj = stz.localize(date_obj)

    AttributeError: 'backports.zoneinfo.ZoneInfo' object has no attribute 'localize'

    During handling of the above exception, another exception occurred:

    (user-code frames calling gn.search(...) omitted)

    /opt/anaconda3/lib/python3.8/site-packages/pygooglenews/__init__.py in search(self, query, helper, when, from_, to_, proxies, scraping_bee)
        140     if from_ and not when:
    --> 141         from_ = self.__from_to_helper(validate=from_)
        142         query += ' after:' + from_

    /opt/anaconda3/lib/python3.8/site-packages/pygooglenews/__init__.py in __from_to_helper(self, validate)
         91         return str(validate)
         92     except:
    ---> 93         raise Exception('Could not parse your date')

    Exception: Could not parse your date

    I would appreciate any help

    opened by KevorkSulahian 5
  • Only able to retrieve 100 links

    Hi, I tried to download some articles using pygooglenews but it only gives me 100 links. I put the from and to dates of 2020-01-01 and 2020-07-07. Please help.

    opened by MeghnaChaudhary24 2
  • Could it be that the installation is broken?

    $ pip install pygooglenews  --upgrade
    Collecting pygooglenews
      WARNING: Retrying (Retry(total=4, connect=None, read=None, redirect=None, status=None)) after connection broken by 'ReadTimeoutError("HTTPSConnectionPool(host='files.pythonhosted.org', port=443): Read timed out. (read timeout=15)")': /packages/84/9e/893c2336f2faa6fa96b0f86b794cccb99f4b090a5c62a61d3eeee594acff/pygooglenews-0.1.1-py3-none-any.whl
      WARNING: Retrying (Retry(total=3, connect=None, read=None, redirect=None, status=None)) after connection broken by 'ReadTimeoutError("HTTPSConnectionPool(host='files.pythonhosted.org', port=443): Read timed out. (read timeout=15)")': /packages/84/9e/893c2336f2faa6fa96b0f86b794cccb99f4b090a5c62a61d3eeee594acff/pygooglenews-0.1.1-py3-none-any.whl
      WARNING: Retrying (Retry(total=2, connect=None, read=None, redirect=None, status=None)) after connection broken by 'ReadTimeoutError("HTTPSConnectionPool(host='files.pythonhosted.org', port=443): Read timed out. (read timeout=15)")': /packages/84/9e/893c2336f2faa6fa96b0f86b794cccb99f4b090a5c62a61d3eeee594acff/pygooglenews-0.1.1-py3-none-any.whl
      WARNING: Retrying (Retry(total=1, connect=None, read=None, redirect=None, status=None)) after connection broken by 'ReadTimeoutError("HTTPSConnectionPool(host='files.pythonhosted.org', port=443): Read timed out. (read timeout=15)")': /packages/84/9e/893c2336f2faa6fa96b0f86b794cccb99f4b090a5c62a61d3eeee594acff/pygooglenews-0.1.1-py3-none-any.whl
      WARNING: Retrying (Retry(total=0, connect=None, read=None, redirect=None, status=None)) after connection broken by 'ReadTimeoutError("HTTPSConnectionPool(host='files.pythonhosted.org', port=443): Read timed out. (read timeout=15)")': /packages/84/9e/893c2336f2faa6fa96b0f86b794cccb99f4b090a5c62a61d3eeee594acff/pygooglenews-0.1.1-py3-none-any.whl
    ERROR: Could not install packages due to an EnvironmentError: HTTPSConnectionPool(host='files.pythonhosted.org', port=443): Max retries exceeded with url: /packages/84/9e/893c2336f2faa6fa96b0f86b794cccb99f4b090a5c62a61d3eeee594acff/pygooglenews-0.1.1-py3-none-any.whl (Caused by ReadTimeoutError("HTTPSConnectionPool(host='files.pythonhosted.org', port=443): Read timed out. (read timeout=15)"))
    
    $ ping files.pythonhosted.org
    PING dualstack.r.ssl.global.fastly.net (151.101.1.63) 56(84) bytes of data.
    64 bytes from 151.101.1.63 (151.101.1.63): icmp_seq=1 ttl=55 time=18.2 ms
    64 bytes from 151.101.1.63 (151.101.1.63): icmp_seq=2 ttl=55 time=25.5 ms
    64 bytes from 151.101.1.63 (151.101.1.63): icmp_seq=3 ttl=55 time=9.44 ms
    64 bytes from 151.101.1.63 (151.101.1.63): icmp_seq=4 ttl=55 time=10.0 ms
    64 bytes from 151.101.1.63 (151.101.1.63): icmp_seq=5 ttl=55 time=36.3 ms
    ^C
    --- dualstack.r.ssl.global.fastly.net ping statistics ---
    5 packets transmitted, 5 received, 0% packet loss, time 4006ms
    rtt min/avg/max/mdev = 9.449/19.928/36.369/10.125 ms
    
    
    
    opened by fernandobugni 2
  • add language and country to base_URL

    Hey! This is the change I made to be able to get Portuguese and Spanish news from Brazil and Argentina. I tested it for English and Russian and it works as well.

    opened by Bruck1701 2
  • Unable to install: error in feedparser setup command: use_2to3 is invalid.

    I got this error while installing the package using pip3 install pygooglenews:

    Collecting feedparser<6.0.0,>=5.2.1
      Using cached feedparser-5.2.1.zip (1.2 MB)
      Preparing metadata (setup.py) ... error
      error: subprocess-exited-with-error
      
      × python setup.py egg_info did not run successfully.
      │ exit code: 1
      ╰─> [1 lines of output]
          error in feedparser setup command: use_2to3 is invalid.
          [end of output]
      
      note: This error originates from a subprocess, and is likely not a problem with pip.
    error: metadata-generation-failed
    
    × Encountered error while generating package metadata.
    ╰─> See above for output.
    
    note: This is an issue with the package mentioned above, not pip.
    hint: See above for details.
    

    I tried pip3 install feedparser, but no results.

    opened by cosimopp 1
  • Copy editing

    Think you meant to write "Before we start, ... then I would advise you to use one of the 3 methods described below" instead of "above" in https://github.com/kotartemiy/pygooglenews#working-with-google-news-in-production. Cheers.

    opened by mlperson 1
  • Broken link in README

    The newscatcherapi.com link in the README points to https://github.com/kotartemiy/pygooglenews/blob/master/newscatcherapi.com, which returns a 404.

    Perhaps it ought to link to:

    https://newscatcherapi.com/

    or

    https://github.com/kotartemiy/newscatcher.

    opened by wache 1
  • Suggestion: argument to specify number of returned articles

    Hi there

    I've just started using your pygooglenews library - it works nicely :)

    I was wondering whether it would be possible to add an extra argument to specify the number of returned news articles? Currently it's limited to 100 - I don't know whether that's a limit imposed by Google or not...

    No worries if it's not feasible.

    D

    opened by Dan-Treacher 1
  • Could not parse your date

    This is my code.

    s=gn.search('energy digital transformation',helper=True,from_ =date1.strftime("%Y-%m-%d"), to_ =date2.strftime("%Y-%m-%d"))

    The following result is obtained: Exception: Could not parse your date

    Why can't it recognize the date?

    opened by tctopic 0
  • i Can't Install it!!!

    I tried pip install pygooglenews in cmd, and also in the Visual Studio Code terminal, but it gives me this error every time:

    error: subprocess-exited-with-error
    
      × python setup.py egg_info did not run successfully.
      │ exit code: 1
      ╰─> [1 lines of output]
          error in feedparser setup command: use_2to3 is invalid.
          [end of output]
    
      note: This error originates from a subprocess, and is likely not a problem with pip.
    error: metadata-generation-failed
    
    × Encountered error while generating package metadata.
    ╰─> See above for output.
    
    note: This is an issue with the package mentioned above, not pip.
    hint: See above for details.
    

    I tried different methods, such as upgrading pip and pip3, and various solutions people propose here: https://github.com/facebook/prophet/issues/418

    but nothing works....

    If someone has any help, it would be appreciated! Thanks

    opened by shred66 2
  • AttributeError at importation

    (module 'base64' has no attribute 'decodestring') This method was removed in Python 3.9 and replaced by decodebytes. This bug makes it impossible to import the lib.

    opened by KevBoyz 0
  • Major issue with using from and to.

    So, for the time being, it seems like dateparser itself is broken. Here's a stackoverflow thread detailing the issue, https://stackoverflow.com/questions/71498132/error-in-heroku-regex-regex-core-error-bad-escape-d-at-position-7-when-usin

    To put it simply, whenever you try to use the from_ or to_ arguments, you get the error "error: bad escape \d at position 7", which is related to issues between regex and dateparser. I was able to fix it by rolling back regex to 2022.3.2, but you may want to find a more long-term solution.

    opened by bluefinch83 0
  • dependencies question

    Hello, I got this readout on docker build: pygooglenews 0.1.2 requires beautifulsoup4<5.0.0,>=4.9.1, but you'll have beautifulsoup4 4.8.1 which is incompatible. pygooglenews 0.1.2 requires requests<3.0.0,>=2.24.0, but you'll have requests 2.21.0 which is incompatible.

    When I tested in development, everything worked fine... may I ask, is it possible it can still work with these requests and beautifulsoup versions? Or maybe it only fully works with your required ones but mostly works otherwise with the versions I have?

    opened by wpdevs 0