Overview

Reddit Stock Trends 📈

See trending stock tickers on Reddit and check stock performance


Usage

Reddit API

  • Get your Reddit API credentials from here.
  • Follow this article to get your credentials.

Running Scripts

  • Go to the back/ directory.
  • Create a praw.ini file with the following:
[ClientSecrets]
client_id=
client_secret=
user_agent=

Note that the title of this section, ClientSecrets, is important because ticker_counts.py will specifically look for that title in the praw.ini file.

  • Install the required modules using pip install -r requirements.txt
  • Run ticker_counts.py first
  • Then run yfinance_analysis.py
  • You will find your results in the data/ directory.
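
For reference, here is a minimal sketch of how the [ClientSecrets] section gets consumed: ticker_counts.py initializes PRAW with that section name, along these lines (the subreddit and limit below are illustrative):

import praw

# PRAW reads client_id, client_secret and user_agent
# from the [ClientSecrets] section of praw.ini.
reddit = praw.Reddit('ClientSecrets')

# Quick read-only sanity check that the credentials work:
for post in reddit.subreddit('wallstreetbets').new(limit=1):
    print(post.title)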

Web app

There's also a JavaScript web app that shows some data visualizations if you don't want to read the CSV files.

Usage

Once you've finished running the scripts, you'll have to set up the local server:

cd back
python wsgi.py

Then, launch the client:

cd front
cp .env.example .env
npm install
npm run serve

You can change the env variables if you need to.


Ticker Symbol API - EOD Historical Data

Included for potential future use is a CSV file that contains all listed ticker symbols for stocks, ETFs, and mutual funds (~50,000 tickers). It was retrieved from https://eodhistoricaldata.com/. You can register for a free API key and get up to 20 API calls every 24 hours.

To retrieve a CSV of all USA ticker symbols, use the following:

https://eodhistoricaldata.com/api/exchange-symbol-list/US?api_token={YOUR_API_KEY}
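
If it helps, here is a minimal Python sketch that fetches and saves that list (YOUR_API_KEY is a placeholder for your own key):

import requests

API_KEY = 'YOUR_API_KEY'  # placeholder: your EOD Historical Data API key
url = f'https://eodhistoricaldata.com/api/exchange-symbol-list/US?api_token={API_KEY}'

response = requests.get(url, timeout=30)
response.raise_for_status()

# The endpoint returns CSV, so the body can be written straight to disk.
with open('us_tickers.csv', 'wb') as f:
    f.write(response.content)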

Contribution

I would love to see more work done on this; I think it could become something very useful at some point. All contributions are welcome, so go ahead and open a PR.

  • Join the Discord to discuss development and suggestions.

To Do

See this page.

Suggestions are appreciated.

Donation

If you like what I am doing, consider buying me a coffee; this helps me give more time to this project and improve it.

Buy Me A Coffee


If you decide to use this anywhere, please give credit to @abbasmdj on Twitter. Also, if you like my work, check out my other projects on GitHub and my personal blog.

Comments
  • 404 - GET http://SERVER_IP_ADRESS:8080/get-basic-data?page=1



    I'm trying to deploy with Docker, but when I visit the site on port 8080 it shows nothing; the Chrome debug message is shown above.

    I configured praw.ini just like below:

    [ClientSecrets]
    client_id=a_14_character_string_from_reddit
    client_secret=a_27_character_string_from_reddit
    user_agent=my_app_name
    

    The backend and frontend containers built well and are running healthy; docker logs show no error messages.

    I found that after I deploy the service with docker-compose up -d, there are two new files generated under the back/data folder: 2021-04-06_financial_df.csv and 2021-04-06_tick_df.csv.

    opened by jjdblast 9
  • urllib.error.HTTPError: HTTP Error 503: Service Unavailable


    Running yfinance_analysis.py results in the following error; can someone help me, please?

    Traceback (most recent call last):
      File "/Users/maxrugen/Developer/Reddit-Stock-Trends/back/yfinance_analysis.py", line 71, in <module>
        analyzer.analyze()
      File "/Users/maxrugen/Developer/Reddit-Stock-Trends/back/yfinance_analysis.py", line 24, in analyze
        df_best[dataColumns] = df_best.Ticker.apply(self.get_ticker_info)
      File "/usr/local/lib/python3.9/site-packages/pandas/core/series.py", line 4135, in apply
        mapped = lib.map_infer(values, f, convert=convert_dtype)
      File "pandas/_libs/lib.pyx", line 2467, in pandas._libs.lib.map_infer
      File "/Users/maxrugen/Developer/Reddit-Stock-Trends/back/yfinance_analysis.py", line 46, in get_ticker_info
        info = yf.Ticker(ticker).info
      File "/usr/local/lib/python3.9/site-packages/yfinance/ticker.py", line 138, in info
        return self.get_info()
      File "/usr/local/lib/python3.9/site-packages/yfinance/base.py", line 446, in get_info
        self._get_fundamentals(proxy)
      File "/usr/local/lib/python3.9/site-packages/yfinance/base.py", line 285, in _get_fundamentals
        holders = _pd.read_html(url)
      File "/usr/local/lib/python3.9/site-packages/pandas/util/_decorators.py", line 299, in wrapper
        return func(*args, **kwargs)
      File "/usr/local/lib/python3.9/site-packages/pandas/io/html.py", line 1085, in read_html
        return _parse(
      File "/usr/local/lib/python3.9/site-packages/pandas/io/html.py", line 893, in _parse
        tables = p.parse_tables()
      File "/usr/local/lib/python3.9/site-packages/pandas/io/html.py", line 213, in parse_tables
        tables = self._parse_tables(self._build_doc(), self.match, self.attrs)
      File "/usr/local/lib/python3.9/site-packages/pandas/io/html.py", line 732, in _build_doc
        raise e
      File "/usr/local/lib/python3.9/site-packages/pandas/io/html.py", line 713, in _build_doc
        with urlopen(self.io) as f:
      File "/usr/local/lib/python3.9/site-packages/pandas/io/common.py", line 195, in urlopen
        return urllib.request.urlopen(*args, **kwargs)
      File "/usr/local/Cellar/[email protected]/3.9.1_7/Frameworks/Python.framework/Versions/3.9/lib/python3.9/urllib/request.py", line 214, in urlopen
        return opener.open(url, data, timeout)
      File "/usr/local/Cellar/[email protected]/3.9.1_7/Frameworks/Python.framework/Versions/3.9/lib/python3.9/urllib/request.py", line 523, in open
        response = meth(req, response)
      File "/usr/local/Cellar/[email protected]/3.9.1_7/Frameworks/Python.framework/Versions/3.9/lib/python3.9/urllib/request.py", line 632, in http_response
        response = self.parent.error(
      File "/usr/local/Cellar/[email protected]/3.9.1_7/Frameworks/Python.framework/Versions/3.9/lib/python3.9/urllib/request.py", line 561, in error
        return self._call_chain(*args)
      File "/usr/local/Cellar/[email protected]/3.9.1_7/Frameworks/Python.framework/Versions/3.9/lib/python3.9/urllib/request.py", line 494, in _call_chain
        result = func(*args)
      File "/usr/local/Cellar/[email protected]/3.9.1_7/Frameworks/Python.framework/Versions/3.9/lib/python3.9/urllib/request.py", line 641, in http_error_default
        raise HTTPError(req.full_url, code, msg, hdrs, fp)
    urllib.error.HTTPError: HTTP Error 503: Service Unavailable
    
    opened by maxrugen 6
  • prawcore.exceptions.ResponseException: received 401 HTTP response


    Hi, I followed the instructions to get the Reddit API credentials.

    However, I still have the following error.

    Could you please help me understand why I get the error below?

    Version 7.0.0 of praw is outdated. Version 7.1.4 was released 3 days ago.
    Selecting relevant data from webscraper: 0% 0/2000 [00:00<?, ?it/s]
    Traceback (most recent call last):
      File "ticker_counts.py", line 45, in <module>
        post.total_awards_received] for post in tqdm(new_bets, desc="Selecting relevant data from webscraper", total=WEBSCRAPER_LIMIT)]
      File "ticker_counts.py", line 40, in <listcomp>
        posts = [[post.id,
      File "/usr/local/lib/python3.6/dist-packages/tqdm/std.py", line 1166, in __iter__
        for obj in iterable:
      File "/usr/local/lib/python3.6/dist-packages/praw/models/listing/generator.py", line 61, in __next__
        self._next_batch()
      File "/usr/local/lib/python3.6/dist-packages/praw/models/listing/generator.py", line 71, in _next_batch
        self._listing = self._reddit.get(self.url, params=self.params)
      File "/usr/local/lib/python3.6/dist-packages/praw/reddit.py", line 490, in get
        return self._objectify_request(method="GET", params=params, path=path)
      File "/usr/local/lib/python3.6/dist-packages/praw/reddit.py", line 574, in _objectify_request
        data=data, files=files, method=method, params=params, path=path
      File "/usr/local/lib/python3.6/dist-packages/praw/reddit.py", line 732, in request
        timeout=self.config.timeout,
      File "/usr/local/lib/python3.6/dist-packages/prawcore/sessions.py", line 339, in request
        url=url,
      File "/usr/local/lib/python3.6/dist-packages/prawcore/sessions.py", line 235, in _request_with_retries
        url,
      File "/usr/local/lib/python3.6/dist-packages/prawcore/sessions.py", line 195, in _make_request
        timeout=timeout,
      File "/usr/local/lib/python3.6/dist-packages/prawcore/rate_limit.py", line 35, in call
        kwargs["headers"] = set_header_callback()
      File "/usr/local/lib/python3.6/dist-packages/prawcore/sessions.py", line 282, in _set_header_callback
        self._authorizer.refresh()
      File "/usr/local/lib/python3.6/dist-packages/prawcore/auth.py", line 325, in refresh
        self._request_token(grant_type="client_credentials")
      File "/usr/local/lib/python3.6/dist-packages/prawcore/auth.py", line 153, in _request_token
        response = self._authenticator._post(url, **data)
      File "/usr/local/lib/python3.6/dist-packages/prawcore/auth.py", line 36, in _post
        raise ResponseException(response)
    prawcore.exceptions.ResponseException: received 401 HTTP response

    opened by AngelTiger90 4
  • Speed up execution suggestion.


    Cache the calls to Yahoo Finance to speed up execution.

    You should replace all the calls to yf.Ticker(symbol) with this function:

    import yfinance as yf

    _symbols = {}  # in-memory cache of Ticker objects, keyed by symbol

    def yfTicker(symbol):
        # Return the cached Ticker when the symbol has been seen before;
        # otherwise create one, cache it, and return it.
        if symbol in _symbols:
            return _symbols[symbol]
        _symbols[symbol] = yf.Ticker(symbol)
        return _symbols[symbol]
    

    Additionally, when using the notebook, you can just restart the kernel or run _symbols = {} again to clear the cache.

    Why?

    Because a new request is made every time yf.Ticker(symbol) is called, there are a ton of redundant calls to Yahoo Finance, especially when you're calling the same ticker repeatedly.

    A further possible speedup would be to wrap the history functions in a similar manner.
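
    An equivalent, more compact sketch using the standard library's memoization (yf_ticker is an illustrative name, not code from this repo):

    import functools

    import yfinance as yf

    @functools.lru_cache(maxsize=None)
    def yf_ticker(symbol):
        # Memoized Ticker factory; yf_ticker.cache_clear() resets the cache.
        return yf.Ticker(symbol)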

    opened by a904guy 4
  • tqdm.std.TqdmKeyError


    Hello!

    Love the idea! Just started playing around with it and ran into an issue. After setting up requirements.txt and doing my first run, I am getting this error:

    Traceback (most recent call last):
      File "./ticker_counts.py", line 48, in <module>
        post.total_awards_received] for post in tqdm(new_bets, desc="Selecting relevant data from webscraper", limit=WEBSCRAPER_LIMIT)]
      File "/usr/local/lib/python3.7/dist-packages/tqdm/std.py", line 1007, in __init__
        TqdmKeyError("Unknown argument(s): " + str(kwargs)))
    tqdm.std.TqdmKeyError: "Unknown argument(s): {'limit': 2000}"
    

    I do not know anything about tqdm and could not figure out the issue; any ideas, @Denbergvanthijs? I only ping you because tqdm was added to the standalone scripts.

    Thanks!
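
    For reference, tqdm's iterable wrapper takes a total= keyword rather than limit=, so the likely fix is renaming that argument (WEBSCRAPER_LIMIT below stands in for the expected post count). A self-contained sketch:

    from tqdm import tqdm

    WEBSCRAPER_LIMIT = 2000
    new_bets = range(WEBSCRAPER_LIMIT)  # stand-in for the PRAW listing

    # tqdm has no 'limit' argument; the expected item count goes in 'total'
    for post in tqdm(new_bets, desc="Selecting relevant data from webscraper", total=WEBSCRAPER_LIMIT):
        pass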

    opened by s0meguy1 4
  • Fixed Code Quality Issues


    Description

    Summary:

    • Removed the usage of self
    • Removed multiple import names
    • Added .deepsource.toml

    I ran a DeepSource Analysis on my fork of this repository. You can see all the issues raised by DeepSource here.

    DeepSource helps you automatically find and fix issues in your code during code reviews. The tool looks for anti-patterns, bug risks, and performance problems, and raises issues. There are plenty of other issues relating to Bug Discovery and Anti-Patterns which you might be interested in taking a look at.

    If you do not want to use DeepSource to continuously analyze this repo, I'll remove the .deepsource.toml from this PR and you can merge the rest of the fixes. If you want to set up DeepSource for continuous analysis, I can help you with that.

    opened by HarshCasper 3
  • Google trends colab: ERROR: Could not find a version that satisfies the requirement numpy==1.20.1


    Hi, I am trying to use the code in Google Colab. The error below is displayed. Any idea why?

    !pip install -r requirements.txt

    Requirement already satisfied: certifi==2020.12.5 in /usr/local/lib/python3.6/dist-packages (from -r requirements.txt (line 1)) (2020.12.5)
    Collecting chardet==4.0.0
      Downloading https://files.pythonhosted.org/packages/19/c7/fa589626997dd07bd87d9269342ccb74b1720384a4d739a1872bd84fbe68/chardet-4.0.0-py2.py3-none-any.whl (178kB)
        |████████████████████████████████| 184kB 4.9MB/s
    Collecting configparser==5.0.1
      Downloading https://files.pythonhosted.org/packages/08/b2/ef713e0e67f6e7ec7d59aea3ee78d05b39c15930057e724cc6d362a8c3bb/configparser-5.0.1-py3-none-any.whl
    Requirement already satisfied: idna==2.10 in /usr/local/lib/python3.6/dist-packages (from -r requirements.txt (line 4)) (2.10)
    Collecting lxml==4.6.2
      Downloading https://files.pythonhosted.org/packages/bd/78/56a7c88a57d0d14945472535d0df9fb4bbad7d34ede658ec7961635c790e/lxml-4.6.2-cp36-cp36m-manylinux1_x86_64.whl (5.5MB)
        |████████████████████████████████| 5.5MB 7.9MB/s
    Requirement already satisfied: multitasking==0.0.9 in /usr/local/lib/python3.6/dist-packages (from -r requirements.txt (line 6)) (0.0.9)
    ERROR: Could not find a version that satisfies the requirement numpy==1.20.1 (from -r requirements.txt (line 7)) (from versions: 1.3.0, 1.4.1, 1.5.0, 1.5.1, 1.6.0, 1.6.1, 1.6.2, 1.7.0, 1.7.1, 1.7.2, 1.8.0, 1.8.1, 1.8.2, 1.9.0, 1.9.1, 1.9.2, 1.9.3, 1.10.0.post2, 1.10.1, 1.10.2, 1.10.4, 1.11.0b3, 1.11.0rc1, 1.11.0rc2, 1.11.0, 1.11.1rc1, 1.11.1, 1.11.2rc1, 1.11.2, 1.11.3, 1.12.0b1, 1.12.0rc1, 1.12.0rc2, 1.12.0, 1.12.1rc1, 1.12.1, 1.13.0rc1, 1.13.0rc2, 1.13.0, 1.13.1, 1.13.3, 1.14.0rc1, 1.14.0, 1.14.1, 1.14.2, 1.14.3, 1.14.4, 1.14.5, 1.14.6, 1.15.0rc1, 1.15.0rc2, 1.15.0, 1.15.1, 1.15.2, 1.15.3, 1.15.4, 1.16.0rc1, 1.16.0rc2, 1.16.0, 1.16.1, 1.16.2, 1.16.3, 1.16.4, 1.16.5, 1.16.6, 1.17.0rc1, 1.17.0rc2, 1.17.0, 1.17.1, 1.17.2, 1.17.3, 1.17.4, 1.17.5, 1.18.0rc1, 1.18.0, 1.18.1, 1.18.2, 1.18.3, 1.18.4, 1.18.5, 1.19.0rc1, 1.19.0rc2, 1.19.0, 1.19.1, 1.19.2, 1.19.3, 1.19.4, 1.19.5)
    ERROR: No matching distribution found for numpy==1.20.1 (from -r requirements.txt (line 7))

    bug good first issue 
    opened by AngelTiger90 3
  • Using praw.ini


    Hey! Have you thought about using a top-level praw.ini to pass in your client_id, client_secret, and user_agent? See here for its documentation.

    Having a section in the praw.ini file that contains the following

    [ScraperBot]
    client_id = ABCDE
    client_secret = 12345
    user_agent = ScraperBot

    would allow initialization of the Reddit object to simply be reddit = praw.Reddit('ScraperBot'). Plus, it would remove the current need for dotenv.

    opened by pcheng17 3
  • Remove new users in the webscrape


    Consider adding a post.author check during the webscrape and dumping those names into a filter. PRAW can snag the user info and test whether an account has been active for over a certain time, which might dump out the bots. We can keep the user info in a cache. Since we are talking about thousands of comments to search through, we can pass these off to a separate thread that builds a blacklist of commenters to compare against in the main thread. This would allow a user's first pass through, but after they are checked and rejected, future checks would hit the blacklist. So a bot that spams a random pump-and-dump would have low volume and hopefully not show up. Let me know if it's worth the effort.
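
    A minimal single-threaded sketch of the cache idea (the separate-thread part is omitted; author_is_established and MIN_ACCOUNT_AGE_DAYS are hypothetical names):

    import time

    MIN_ACCOUNT_AGE_DAYS = 30
    _blacklist = set()  # authors checked and rejected
    _whitelist = set()  # authors checked and accepted

    def author_is_established(author):
        # Deleted accounts have no author object; treat them as rejected.
        if author is None:
            return False
        if author.name in _blacklist:
            return False
        if author.name in _whitelist:
            return True
        # Reading created_utc costs one PRAW API call per unseen author.
        age_days = (time.time() - author.created_utc) / 86400
        if age_days >= MIN_ACCOUNT_AGE_DAYS:
            _whitelist.add(author.name)
            return True
        _blacklist.add(author.name)
        return False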

    opened by Nighkali 1
  • Introducing Configparser for breaking user credentials and filtering …


    We should break out the configuration of the scraping filters so individual users can tweak them in config.ini. I also decided to offer a counter to env: having it config-driven instead. The password needs to be removed, but it's set this way as an example so users can swap in their own API creds.
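
    A rough sketch of what that config-driven setup could look like (the section and option names here are hypothetical):

    import configparser

    config = configparser.ConfigParser()
    config.read('config.ini')

    # User-tweakable scraping filters with safe defaults
    subreddit = config.get('Scraper', 'subreddit', fallback='wallstreetbets')
    min_score = config.getint('Scraper', 'min_score', fallback=0)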

    opened by Nighkali 1
  • PRAW details


    A suggestion: store your details (the client_id and client_secret for PRAW) as variables in a secrets.py file and import the variables from this .py file at the start of your notebook.

    Then in your .gitignore you can add this secrets.py file so that it will not be pushed to GitHub.

    Currently everyone pulling your code is using your account.
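
    A minimal sketch of that layout (names are illustrative; note that a file literally named secrets.py shadows Python's standard-library secrets module, so a name like reddit_secrets.py is safer):

    # reddit_secrets.py -- listed in .gitignore so it never reaches GitHub
    CLIENT_ID = 'your_client_id'
    CLIENT_SECRET = 'your_client_secret'

    # At the top of the notebook:
    import praw
    from reddit_secrets import CLIENT_ID, CLIENT_SECRET

    reddit = praw.Reddit(client_id=CLIENT_ID,
                         client_secret=CLIENT_SECRET,
                         user_agent='notebook-scraper')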

    opened by quinten-goens 1
  • Stock prediction tools


    I saw that you liked my work, and I'm glad you have looked at it: https://github.com/Leci37/LecTrade/tree/develop

    My name is Luis. I'm a big-data machine-learning developer, a fan of your work, and I usually check your updates.

    I was afraid that my savings would be eaten by inflation, so I created a powerful tool based on past technical patterns (volatility, moving averages, statistics, trends, candlesticks, support and resistance, stock index indicators): all the ones you know (RSI, MACD, STOCH, Bollinger Bands, SMA, DEMARK, Japanese candlesticks, Ichimoku, Fibonacci, WilliamsR, balance of power, Murrey math, etc.) and more than 200 others.

    The tool creates prediction models of correct trading points (buy and sell signals, so every stock is traded at the right time and in the right direction). For this I have used big-data tools like pandas, stock technical-pattern libraries like tablib, TAcharts, and pandas_ta for data collection and calculation, and powerful machine-learning libraries such as Sklearn.RandomForest, Sklearn.GradientBoosting, XGBoost, Google TensorFlow, and Google TensorFlow LSTM.

    With models trained on a selection of the best technical indicators, the tool is able to predict trading points (where to buy, where to sell) and send real-time alerts to Telegram or mail. The points are calculated based on learning the correct trading points of the last 2 years (including the change to a bear market after the rate hike).

    I think it could be useful to you. I would like to share it with you, and if you are interested in improving it and collaborating, who knows, we could make beautiful things together.

    Thank you for your time. I'm sorry to contact you here, via issues; I didn't know how else to do it. https://github.com/Leci37/stocks-Machine-learning-RealTime-telegram/discussions

    opened by Leci37 2
  • Activation of DeepSource


    Hi 👋

    One of my Pull Requests around fixing Code Quality Issues with DeepSource was merged here: https://github.com/iam-abbas/Reddit-Stock-Trends/pull/67

    I'd just like to inform you that the issues fixed here were detected by running DeepSource analysis on the repo. If you like, you can activate analysis for your repository to detect such code quality issues/bug risks on the fly for every change made. You can also use the Autofix feature to fix them with one click.

    The .deepsource.toml file you merged will only take effect if you activate analysis for this repo.

    Here's what you can do if you wish to activate DeepSource to continuously analyze your repository:

    • Sign up on DeepSource and activate analysis for this repository.
    • Create .deepsource.toml configuration which you can use to configure your analysis settings (My PR already added that, but feel free to edit it anytime).
    • Track/Check analysis here.

    If you have any doubts or questions, you can check out the docs, or feel free to reach out :)

    opened by HarshCasper 0
  • ValueError: No tables found


    https://github.com/iam-abbas/Reddit-Stock-Trends/commit/af82387650c5d217ae3072390df90516c671df6a

    I downloaded the above hash 3 days ago, set it up, and ran it twice (once a day) with no issue.

    On the 3rd day I got "html5lib not found", so I did pip install html5lib. ticker_counts.py runs fine, but yfinance_analysis.py has a bunch of line errors ending in ValueError: No tables found.

    I have reverted to 59cd6d0a3280c78468501aa4a56fb378532873fc.

    opened by Tardisgx 1
  • Error: UTF-8 codec can't decode byte 0xff


    Hey, I'm a total beginner at Python and programming, but I'm getting this error message when running ticker_counts.py:

    UnicodeDecodeError: 'utf-8' codec can't decode byte 0xff in position 0: invalid start byte

    The whole log looks like this:

    python3 ticker_counts.py
    Traceback (most recent call last):
      File "/Users/filipalvgren/Desktop/Reddit-Stock-Trends/back/ticker_counts.py", line 102, in <module>
        ticket.get_data()
      File "/Users/filipalvgren/Desktop/Reddit-Stock-Trends/back/ticker_counts.py", line 49, in get_data
        reddit = praw.Reddit('ClientSecrets')
      File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/praw/reddit.py", line 188, in __init__
        self.config = Config(
      File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/praw/config.py", line 79, in __init__
        self._load_config(config_interpolation)
      File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/praw/config.py", line 60, in _load_config
        config.read(locations)
      File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/configparser.py", line 697, in read
        self._read(fp, filename)
      File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/configparser.py", line 1017, in _read
        for lineno, line in enumerate(fp, start=1):
      File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/codecs.py", line 322, in decode
        (result, consumed) = self._buffer_decode(data, self.errors, final)
    UnicodeDecodeError: 'utf-8' codec can't decode byte 0xff in position 0: invalid start byte

    Any help on solving this issue would be much appreciated!
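
    For what it's worth, a leading 0xff byte usually means the file (praw.ini, per the traceback) was saved with a UTF-16 byte-order mark, so re-saving it as plain UTF-8 should fix the crash. A quick check, assuming it is run from the back/ directory:

    # Detect a UTF-16 BOM at the start of praw.ini
    with open('praw.ini', 'rb') as f:
        head = f.read(2)

    if head in (b'\xff\xfe', b'\xfe\xff'):
        print('praw.ini has a UTF-16 BOM; re-save it as UTF-8')
    else:
        print('no UTF-16 BOM found')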

    opened by fimpen1312 0
  • Enhancement to #56 - Portfolios of Interesting People


    Portfolios Of Interesting People

    Users may desire to track popular, or machine-learned-as-reliable, ticker-mentioners. Leverage the work done for #56 and hook into the Reddit result set to update portfolios labeled with user IDs from the scrape source (PRAW/Reddit).

    Undecided on the name, needs feedback 😂 -- Portfolios of Interesting People - POIPeses 🐬

    Notes

    • Every time a person of interest mentions a stock, add it to that POI's portfolio (see the sketch below)
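
    A rough sketch of the bookkeeping that bullet implies (poi_portfolios and record_mention are hypothetical names):

    from collections import defaultdict

    # username -> {ticker -> number of mentions}
    poi_portfolios = defaultdict(lambda: defaultdict(int))

    def record_mention(username, ticker):
        # Each time a person of interest mentions a stock, tally it
        # in that user's portfolio.
        poi_portfolios[username][ticker] += 1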

    (attached screenshots: portfolio1, portfolio2, portfolio3)

    opened by boboman-1 2
Owner

Abbas: CS Major, Full-Stack Developer, App Developer, Data Science Researcher, ML Enthusiast