👨🏼‍⚖️ Reddit bot that turns comment chains into Ace Attorney scenes

Overview

Ace Attorney Reddit bot 👨🏼‍⚖️

Reddit bot that turns comment chains into Ace Attorney scenes.

You'll need to sign up for Streamable and Reddit and set the appropriate environment variables to use the bot.
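
For example, something along these lines (the variable names here are hypothetical; check the bot's source for the exact names it expects):

    import os

    # Hypothetical names; the bot's source defines the real ones
    REDDIT_CLIENT_ID = os.environ["REDDIT_CLIENT_ID"]
    REDDIT_CLIENT_SECRET = os.environ["REDDIT_CLIENT_SECRET"]
    REDDIT_USERNAME = os.environ["REDDIT_USERNAME"]
    REDDIT_PASSWORD = os.environ["REDDIT_PASSWORD"]
    STREAMABLE_EMAIL = os.environ["STREAMABLE_EMAIL"]
    STREAMABLE_PASSWORD = os.environ["STREAMABLE_PASSWORD"]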

Assets

Download them here and put them in ./assets/ 🙂

Demo

See demo

Tutorial

If you'd like to use anim.py outside of the bot, please see this notebook.
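
In rough terms, usage looks like this (a sketch based on the anim calls quoted in the issues below; the comment format shown is an assumption, so check the notebook for the real one):

    import anim

    # Assumed shape: a list of (author, text) pairs
    comments = [("phoenix", "OBJECTION!"), ("edgeworth", "Your point being?")]
    characters = anim.get_characters([author for author, _ in comments])
    anim.comments_to_scene(comments, characters, output_filename="scene.mp4")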

Comments
  • Added sentiment analysis for other languages

    This PR adds sentiment analysis capabilities for languages other than English by performing the analysis over a translated version of the original text.

    I'm well aware that Reddit speaks mainly in English, so this PR may not be useful. Please feel free to reject it. I had to write the code anyway for my own purposes, so I thought you might be interested in it. As I said, feel free to reject the PR if you feel it doesn't add anything relevant.
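
    For illustration, the approach looks roughly like this (a sketch built on TextBlob's translate/detect_language, the APIs this PR relies on):

        from textblob import TextBlob

        def polarity(text):
            blob = TextBlob(text)
            # Translate non-English text to English before scoring sentiment
            if blob.detect_language() != "en":
                blob = blob.translate(to="en")
            return blob.sentiment.polarity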

    Thanks

    opened by LuisMayo 6
  • Importing correct ffmpeg library

    The use of the "ffmpeg" library was throwing "AttributeError: module 'ffmpeg' has no attribute 'input'".

    That was due to the incorrect "ffmpeg" library being installed. It seems the intended library is in fact ffmpeg-python. More info in this issue: https://github.com/kkroening/ffmpeg-python/issues/174
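
    After installing ffmpeg-python instead, the module exposes the expected API (a minimal sketch; file names are placeholders):

        import ffmpeg

        # ffmpeg-python installs as the "ffmpeg" module and provides input()/output()/run()
        stream = ffmpeg.input("in.mp4")
        stream = ffmpeg.output(stream, "out.mp4")
        ffmpeg.run(stream)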

    EDIT: Thanks for the project, BTW ^^

    opened by LuisMayo 3
  • Source for other assets

    I was unable to find the asset sources for certain things such as textbox4.png, arrow.png, or objection.gif. Are these also on Court Records?

    I assume assets like helperstand.png can be found on this page.

    Much help would be appreciated :)

    opened by yipalber 3
  • Use on Linux

    First of all, sorry for using GitHub issues for this since it's not what they're for, but I truly don't know what to do.

    My main problem seems to be a lack of both x264 and mp3 codecs. I've already tried to compile FFmpeg from source, but it doesn't seem to work because it doesn't find the mp3 encoder. I'm on Debian.
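
    In case it helps someone later, one way to check which encoders an FFmpeg build actually has (a quick sketch; assumes ffmpeg is on your PATH):

        import subprocess

        # List every encoder compiled into this FFmpeg build
        encoders = subprocess.run(
            ["ffmpeg", "-encoders"], capture_output=True, text=True
        ).stdout
        for codec in ("libx264", "libmp3lame"):
            print(codec, "available" if codec in encoders else "MISSING")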

    Just opening this issue in case someone has struggled with it and has an idea of what to do.

    Thanks

    opened by LuisMayo 2
  • Refresh Token Authentication

    Because having passwords stored on your computer in plaintext is bad practice, I implemented an automatic way to authenticate using OAuth2 refresh tokens.

    This makes it so you need to run an authentication script (auth.py) once and grant permissions from your browser. Then you can run the other scripts indefinitely using the refresh token you obtained.

    This, however, requires the Reddit app to be a web app and not a script (hence the branch name). Read more here.

    Also added a .gitignore to ignore .refresh_token.txt and __pycache__.

    I'm not familiar with how Streamable works, but if it uses OAuth2 it might be possible to create refresh token authentication for that too.

    Let me know what you think! :-)

    Edit: oh, I also forgot to ask what scope the app needs. I set it as identity,read,submit but I'm not sure.
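
    For reference, the praw.Reddit call then looks roughly like this (a sketch; the token file matches the .gitignore entry above, and the scopes are still an open question):

        import praw

        # Read the token saved by auth.py
        with open(".refresh_token.txt") as f:
            refresh_token = f.read().strip()

        reddit = praw.Reddit(
            client_id="...",        # from your Reddit web app
            client_secret="...",
            refresh_token=refresh_token,
            user_agent="ace-attorney-reddit-bot",
        )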

    Changelog

    • created new files auth.py and server.py
    • created .gitignore
    • changed praw.Reddit authentication method in bot_streamable.py
    • added dependencies sys, socket, random
    opened by alex-unofficial 2
  • Fixes how sentences are split.

    The old sentence-splitting code had some caveats, including:

    • If the input text was a single, really long sentence, it wouldn't get split
    • It could shuffle the order of the sentences; I had an instance where it would return the same array but with the first sentence moved to the end
    • Data in case you want to replicate the bug (it's in Spanish, but the behaviour should be the same regardless of the language if you manage to get sentences this long):
    sentences =
    ['Absolutamente todos los profesores de informática que he tenido:', '¿', 'Oye sabéis que se ha detectado que los hombres que compran pañales también compran cerveza?']
    

    This code does, however, have one problem as well:

    • It only merges up to two small sentences, even when more small sentences could be merged

    Feel free to comment, change/review the code, accept or reject this PR. Thanks
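
    The intended merge/split behaviour, as a rough sketch (the length thresholds are made up):

        MIN_LEN = 10  # merge sentences shorter than this into a neighbour
        MAX_LEN = 85  # hard-split sentences longer than this

        def normalize(sentences):
            merged = []
            for s in sentences:
                # Merge short fragments (e.g. a lone '¿') into the previous
                # sentence, chaining as many merges as needed
                if merged and (len(s) < MIN_LEN or len(merged[-1]) < MIN_LEN):
                    merged[-1] += " " + s
                else:
                    merged.append(s)
            out = []
            for s in merged:
                # Hard-split anything still too long, preserving order
                while len(s) > MAX_LEN:
                    out.append(s[:MAX_LEN])
                    s = s[MAX_LEN:]
                out.append(s)
            return out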

    opened by LuisMayo 1
  • Add spacy model

    The project won't work if spacy doesn't find en_core_web_sm.

    Although spacy models are usually downloaded using python -m spacy download en_core_web_sm, they can also be installed with pip. Adding the model to requirements.txt would make the project work transparently for users, without them needing to download the model manually.
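
    A runtime fallback is also possible (a sketch using spacy's bundled CLI downloader):

        import spacy

        try:
            nlp = spacy.load("en_core_web_sm")
        except OSError:
            # Model not installed yet: download it once, then load it
            from spacy.cli import download
            download("en_core_web_sm")
            nlp = spacy.load("en_core_web_sm")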

    opened by LuisMayo 1
  • Optimise Dockerfile

    • Moved package acquisition to be first and cache-busted when requirements.txt changes. Should speed up repeat builds by leveraging Docker's build cache (see the sketch after this list).
    • Cleared apt and pip caches after installing packages. Shaved ~200MB off the image size.
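
    The layering idea, roughly (a sketch, not the PR's actual Dockerfile; the base image is a placeholder):

        FROM python:3.9-slim

        # Copy only requirements.txt first, so this layer stays cached
        # until the requirements actually change
        COPY requirements.txt .
        RUN pip install --no-cache-dir -r requirements.txt

        # Source edits no longer invalidate the dependency layer above
        COPY . .
        CMD ["python", "bot_streamable.py"]
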
    opened by lolPants 1
  • Include a license

    By not including a license with your code (usually in the form of a LICENSE file), people are legally not allowed to fork, contribute to, or otherwise use the code you've published in their own projects, which kinda defeats the point of making it open source.

    opened by DerpyChap 1
  • Pinning pip requirements and added instructions for windows

    This PR contains three sets of changes, all aimed at making it easier to clone and start using the project.

    1. I pinned specific versions in the requirements.txt file. Without any versions specified, pip was backtracking for over 10 hours trying to figure out which versions of the dependencies to use. The pins could definitely be less strict, but at the very least setup now only takes a few seconds (illustrative pins follow this list). I would recommend testing on Linux/Docker before merging.
    2. Since the README already mentioned the existence of the assets folder, I thought it would improve the experience for it to be part of the repository.
    3. I documented the need to set up openh264 and FFmpeg on Windows to be able to run the project.
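
    For illustration, pinned entries look like this (only the textblob pin is confirmed elsewhere on this page; the other versions are placeholders):

        praw==7.5.0           # placeholder version
        ffmpeg-python==0.2.0  # placeholder version
        textblob==0.15.3
        spacy==3.2.1          # placeholder version
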
    opened by Tri125 0
  • Add pip requirements

    I used this when hosting the bot myself, and I think others trying to do the same can benefit from this. You can now install the requirements with:

    pip3 install -r requirements.txt
    

    Rather than needing to find and install all the dependencies individually.

    opened by gadhagod 0
  • Suggest to loosen the dependency on textblob

    Hi, your project ace-attorney-reddit-bot requires "textblob==0.15.3" as a dependency. After analyzing the source code, we found that the following versions of textblob are also suitable without affecting your project: 0.11.1, 0.12.0, 0.13.0, 0.13.1, 0.14.0, 0.15.0, 0.15.1, and 0.15.2. Therefore, we suggest loosening the dependency on textblob from "textblob==0.15.3" to "textblob>=0.11.1,<=0.15.3" to avoid possible conflicts when importing more packages, or for downstream projects that may use ace-attorney-reddit-bot.

    May I open a pull request to further loosen the dependency on textblob?

    By the way, could you please tell us whether such dependency analysis could be helpful for making dependency maintenance easier during development?



    We also give our detailed analysis as follows for your reference:

    Your project ace-attorney-reddit-bot directly uses 3 APIs from package textblob.

    textblob.blob.BaseBlob.translate, textblob.blob.BaseBlob.detect_language, textblob.blob.TextBlob.__init__
    
    

    Starting from the 3 APIs above, 3 further functions are indirectly called: 3 of textblob's internal APIs and 0 external APIs. The specific call graph is listed as follows (omitting some repeated function occurrences).

    [/micah5/ace-attorney-reddit-bot]
    +--textblob.blob.BaseBlob.translate
    +--textblob.blob.BaseBlob.detect_language
    +--textblob.blob.TextBlob.__init__
    |      +--textblob.blob.BaseBlob.__init__
    |      |      +--textblob.utils.lowerstrip
    |      |      |      +--textblob.utils.strip_punc
    |      |      +--textblob.blob._initialize_models
    |      |      |      +--textblob.blob._validated_param
    

    We scanned textblob's versions and observed that, during its evolution between any version in [0.11.1, 0.12.0, 0.13.0, 0.13.1, 0.14.0, 0.15.0, 0.15.1, 0.15.2] and 0.15.3, the changed functions (diffs listed below) have no intersection with any function or API mentioned above (either directly or indirectly called by this project).

    diff: 0.15.3(original) 0.11.1
    ['textblob.ordereddict.OrderedDict.__ne__', 'textblob.blob.WordList.__str__', 'textblob.classifiers.BaseClassifier.extract_features', 'textblob.classifiers.BaseClassifier', 'textblob.en.sentiments.PatternAnalyzer.analyze', 'textblob.en.taggers.NLTKTagger.tag', 'textblob.ordereddict.OrderedDict.__iter__', 'textblob.blob.WordList.__setitem__', 'textblob.blob.BaseBlob.pos_tags', 'textblob.blob.BaseBlob.sentiment_assessments', 'textblob.blob.WordList.stem', 'textblob.translate.Translator.detect', 'textblob.translate.Translator.translate', 'textblob._text._read', 'textblob.blob.BaseBlob.ngrams', 'textblob.classifiers.NLTKClassifier.update', 'textblob.blob.WordList', 'textblob.blob.WordList.append', 'textblob.ordereddict.OrderedDict.__eq__', 'textblob.ordereddict.OrderedDict.__reduce__', 'textblob.blob.WordList.__iter__', 'textblob.base.BaseTagger', 'textblob.en.taggers.NLTKTagger', 'textblob.blob.WordList.__getitem__', 'textblob.blob.WordList.__getslice__', 'textblob.en.taggers.PatternTagger', 'textblob.blob.BaseBlob', 'textblob.en.taggers.PatternTagger.tag', 'textblob.blob.BaseBlob.correct', 'textblob.ordereddict.OrderedDict.__init__', 'textblob.ordereddict.OrderedDict.copy', 'textblob._text.Sentiment.__call__', 'textblob.classifiers.basic_extractor', 'textblob.base.BaseTagger.tag', 'textblob.en.sentiments.PatternAnalyzer', 'textblob.blob.WordList.__init__', 'textblob.blob.Word', 'textblob.blob.WordList.extend', 'textblob.blob.Word.stem', 'textblob.ordereddict.OrderedDict', 'textblob.blob.WordList.__repr__', 'textblob.classifiers.NLTKClassifier.accuracy', 'textblob.tokenizers.SentenceTokenizer', 'textblob.translate._calculate_tk', 'textblob.blob.Word.lemmatize', 'textblob.blob.Word.lemma', 'textblob.ordereddict.OrderedDict.__repr__', 'textblob.classifiers.BaseClassifier.__init__', 'textblob._text.Sentiment', 'textblob.classifiers.NLTKClassifier', 'textblob.ordereddict.OrderedDict.__setitem__', 'textblob.blob.WordList.count', 'textblob.ordereddict.OrderedDict.clear', 'textblob.translate._unescape', 'textblob.translate.Translator', 'textblob.ordereddict.OrderedDict.__delitem__', 'textblob.ordereddict.OrderedDict.keys', 'textblob.ordereddict.OrderedDict.popitem', 'textblob.ordereddict.OrderedDict.__reversed__', 'textblob.ordereddict.OrderedDict.fromkeys']
    
    diff: 0.15.3(original) 0.12.0
    ['textblob.blob.WordList.__str__', 'textblob.classifiers.BaseClassifier.extract_features', 'textblob.classifiers.BaseClassifier', 'textblob.en.sentiments.PatternAnalyzer.analyze', 'textblob.en.taggers.NLTKTagger.tag', 'textblob.blob.BaseBlob.pos_tags', 'textblob.blob.WordList.__setitem__', 'textblob.blob.BaseBlob.sentiment_assessments', 'textblob._text._read', 'textblob.classifiers.NLTKClassifier.update', 'textblob.blob.WordList', 'textblob.blob.WordList.append', 'textblob.blob.WordList.__iter__', 'textblob.base.BaseTagger', 'textblob.en.taggers.NLTKTagger', 'textblob.blob.WordList.__getitem__', 'textblob.blob.WordList.__getslice__', 'textblob.en.taggers.PatternTagger', 'textblob.blob.BaseBlob', 'textblob.en.taggers.PatternTagger.tag', 'textblob.blob.BaseBlob.correct', 'textblob._text.Sentiment.__call__', 'textblob.classifiers.basic_extractor', 'textblob.base.BaseTagger.tag', 'textblob.en.sentiments.PatternAnalyzer', 'textblob.blob.WordList.__init__', 'textblob.blob.Word', 'textblob.blob.WordList.extend', 'textblob.blob.WordList.__repr__', 'textblob.classifiers.NLTKClassifier.accuracy', 'textblob.tokenizers.SentenceTokenizer', 'textblob.blob.Word.lemmatize', 'textblob.blob.Word.lemma', 'textblob.classifiers.BaseClassifier.__init__', 'textblob._text.Sentiment', 'textblob.classifiers.NLTKClassifier', 'textblob.blob.WordList.count', 'textblob.translate._unescape']
    
    diff: 0.15.3(original) 0.13.0
    ['textblob.blob.WordList.__str__', 'textblob.classifiers.BaseClassifier', 'textblob.en.sentiments.PatternAnalyzer.analyze', 'textblob.en.taggers.NLTKTagger.tag', 'textblob.blob.BaseBlob.pos_tags', 'textblob.blob.WordList.__setitem__', 'textblob.blob.BaseBlob.sentiment_assessments', 'textblob._text._read', 'textblob.blob.WordList', 'textblob.blob.WordList.append', 'textblob.blob.WordList.__iter__', 'textblob.base.BaseTagger', 'textblob.en.taggers.NLTKTagger', 'textblob.blob.WordList.__getitem__', 'textblob.blob.WordList.__getslice__', 'textblob.en.taggers.PatternTagger', 'textblob.blob.BaseBlob', 'textblob.en.taggers.PatternTagger.tag', 'textblob.blob.BaseBlob.correct', 'textblob._text.Sentiment.__call__', 'textblob.classifiers.basic_extractor', 'textblob.base.BaseTagger.tag', 'textblob.en.sentiments.PatternAnalyzer', 'textblob.blob.WordList.__init__', 'textblob.blob.Word', 'textblob.blob.WordList.extend', 'textblob.blob.WordList.__repr__', 'textblob.classifiers.NLTKClassifier.accuracy', 'textblob.tokenizers.SentenceTokenizer', 'textblob.blob.Word.lemmatize', 'textblob.blob.Word.lemma', 'textblob.classifiers.BaseClassifier.__init__', 'textblob._text.Sentiment', 'textblob.classifiers.NLTKClassifier', 'textblob.blob.WordList.count', 'textblob.translate._unescape']
    
    diff: 0.15.3(original) 0.13.1
    ['textblob.blob.WordList.__str__', 'textblob.classifiers.BaseClassifier', 'textblob.en.sentiments.PatternAnalyzer.analyze', 'textblob.en.taggers.NLTKTagger.tag', 'textblob.blob.BaseBlob.pos_tags', 'textblob.blob.WordList.__setitem__', 'textblob.blob.BaseBlob.sentiment_assessments', 'textblob._text._read', 'textblob.blob.WordList', 'textblob.blob.WordList.append', 'textblob.blob.WordList.__iter__', 'textblob.base.BaseTagger', 'textblob.en.taggers.NLTKTagger', 'textblob.blob.WordList.__getitem__', 'textblob.blob.WordList.__getslice__', 'textblob.en.taggers.PatternTagger', 'textblob.blob.BaseBlob', 'textblob.en.taggers.PatternTagger.tag', 'textblob.blob.BaseBlob.correct', 'textblob.blob.WordList.__init__', 'textblob.classifiers.basic_extractor', 'textblob.base.BaseTagger.tag', 'textblob.en.sentiments.PatternAnalyzer', 'textblob.blob.Word', 'textblob.blob.WordList.extend', 'textblob.blob.WordList.__repr__', 'textblob.tokenizers.SentenceTokenizer', 'textblob.blob.Word.lemmatize', 'textblob.blob.Word.lemma', 'textblob.classifiers.BaseClassifier.__init__', 'textblob.blob.WordList.count', 'textblob.translate._unescape']
    
    diff: 0.15.3(original) 0.14.0
    ['textblob.blob.WordList.__str__', 'textblob.classifiers.BaseClassifier', 'textblob.en.sentiments.PatternAnalyzer.analyze', 'textblob.blob.BaseBlob.pos_tags', 'textblob.blob.WordList.__setitem__', 'textblob.blob.BaseBlob.sentiment_assessments', 'textblob._text._read', 'textblob.blob.WordList', 'textblob.blob.WordList.append', 'textblob.blob.WordList.__iter__', 'textblob.blob.WordList.__getitem__', 'textblob.blob.WordList.__getslice__', 'textblob.blob.BaseBlob', 'textblob.blob.BaseBlob.correct', 'textblob.blob.WordList.__init__', 'textblob.classifiers.basic_extractor', 'textblob.en.sentiments.PatternAnalyzer', 'textblob.blob.Word', 'textblob.blob.WordList.extend', 'textblob.blob.WordList.__repr__', 'textblob.tokenizers.SentenceTokenizer', 'textblob.blob.Word.lemmatize', 'textblob.blob.Word.lemma', 'textblob.classifiers.BaseClassifier.__init__', 'textblob.blob.WordList.count', 'textblob.translate._unescape']
    
    diff: 0.15.3(original) 0.15.0
    ['textblob.blob.WordList.__str__', 'textblob.blob.BaseBlob.pos_tags', 'textblob.blob.WordList.__setitem__', 'textblob._text._read', 'textblob.blob.WordList', 'textblob.blob.WordList.append', 'textblob.blob.WordList.__iter__', 'textblob.blob.WordList.__getitem__', 'textblob.blob.WordList.__getslice__', 'textblob.blob.BaseBlob', 'textblob.blob.BaseBlob.correct', 'textblob.blob.WordList.__init__', 'textblob.blob.Word', 'textblob.blob.WordList.extend', 'textblob.blob.WordList.__repr__', 'textblob.tokenizers.SentenceTokenizer', 'textblob.blob.Word.lemmatize', 'textblob.blob.Word.lemma', 'textblob.blob.WordList.count']
    
    diff: 0.15.3(original) 0.15.1
    ['textblob.blob.BaseBlob', 'textblob.blob.BaseBlob.correct', 'textblob.blob.WordList.__str__', 'textblob.blob.WordList', 'textblob.blob.WordList.append', 'textblob.blob.WordList.__init__', 'textblob.blob.WordList.count', 'textblob.blob.BaseBlob.pos_tags', 'textblob.blob.WordList.__iter__', 'textblob.blob.WordList.__setitem__', 'textblob.blob.WordList.extend', 'textblob.blob.WordList.__repr__', 'textblob.tokenizers.SentenceTokenizer', 'textblob._text._read', 'textblob.blob.WordList.__getitem__', 'textblob.blob.WordList.__getslice__']
    
    diff: 0.15.3(original) 0.15.2
    ['textblob.blob.BaseBlob', 'textblob.tokenizers.SentenceTokenizer', 'textblob.blob.BaseBlob.correct', 'textblob.blob.BaseBlob.pos_tags']
    
    

    Therefore, we believe it is quite safe to loosen your dependency on textblob from "textblob==0.15.3" to "textblob>=0.11.1,<=0.15.3". This will improve the applicability of ace-attorney-reddit-bot and reduce the possibility of dependency conflicts with other projects.

    opened by Agnes-U 0
  • Higher resolution videos

    Although I understand that the assets are low-res since they come from a GBA game, the video renders at the resolution of said assets, which affects the letters and makes them (especially names) a bit hard to read.

    We should be able to output at least the letters at a higher resolution than the assets, which probably means upscaling.
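
    One possible approach, sketched with Pillow (whether the bot renders frames with PIL is an assumption here):

        from PIL import Image

        SCALE = 4  # made-up upscale factor

        frame = Image.open("assets/textbox4.png")
        # Nearest-neighbour keeps the pixel-art look crisp while giving
        # the text layer more pixels to render into
        big = frame.resize((frame.width * SCALE, frame.height * SCALE), Image.NEAREST)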

    opened by LuisMayo 0
  • Crash if 15 or more authors

    It'll crash if there are more than 15 authors in the scene. I tried to solve it using this code:

    # Prefer characters not already used in the scene; if all are taken,
    # fall back to allowing repeats instead of crashing.
    available_characters = list(
        filter(lambda character: character not in characters, rnd_characters)
    )
    if len(available_characters) == 0:
        available_characters = rnd_characters
    rnd_character = random.choice(available_characters)
    

    But it crashes at a later point due to how the code works. Thanks

    opened by LuisMayo 0
  • Audio disappears when music file ends

    When the court music "03 - Turnabout Courtroom - Trial.mp3" ends, all sounds are muted from then on. I think sound should keep working past that point, and the music should loop.
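
    A possible fix, sketched with ffmpeg-python (assuming the final mux happens there): loop the music input and cut the output at the end of the video.

        import ffmpeg

        video = ffmpeg.input("scene.mp4")  # placeholder file name
        music = ffmpeg.input("03 - Turnabout Courtroom - Trial.mp3", stream_loop=-1)  # loop indefinitely
        # -shortest ends the output when the shorter stream (the video) ends
        ffmpeg.output(video.video, music.audio, "out.mp4", shortest=None).run()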

    Regards

    opened by LuisMayo 2
  • special character processing

    When emojis/special characters are in the processed text, they cause a glitch in the text box that spams the typewriter effect for the duration of the scene. This can be fixed by adding some lines that detect emojis/special characters and remove them.
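
    For example, a sketch that strips characters outside the Basic Multilingual Plane (this covers most emoji; other special characters would need extra rules):

        import re

        # Matches any character outside the Basic Multilingual Plane
        NON_BMP = re.compile(r"[\U00010000-\U0010FFFF]")

        def remove_special(text):
            return NON_BMP.sub("", text)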

    opened by miettee 0
  • Generalize it outside of reddit

    Being able to take a text/CSV file and generate the animation. This would be useful for integrating with many other sites, or even making custom ones.

    This could potentially be done by using:

    characters, comments = load_from_file("file.txt/csv")
    characters = anim.get_characters(characters)
    anim.comments_to_scene(
        comments, characters, output_filename=output_filename
    )
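
    A minimal load_from_file could be as simple as this sketch (a CSV with author,text columns is an assumption):

        import csv

        def load_from_file(path):
            # Each row: author, comment text
            with open(path, newline="", encoding="utf-8") as f:
                rows = [(r[0], r[1]) for r in csv.reader(f)]
            return [author for author, _ in rows], rows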
    
    opened by israelg99 6

Similar projects

  • Subscrape - A Python scraper for substrate chains that uses Subscan. (ChaosDAO 14 Dec 15, 2022)
  • Social Media Scraper - A utility library to scrape data from TikTok, Instagram, Twitch, YouTube, Twitter or Reddit in one line! (null 2 Aug 3, 2022)
  • RedditImageScraper - Console application for downloading images from Reddit in Python. (James 0 Jul 4, 2021)
  • memey - A simple Reddit scraper to get memes (only images) from r/ProgrammerHumor. Only works if you have Firefox installed (yet). (null 2 Nov 16, 2021)
  • Tibia.py - An API to parse Tibia.com content into object-oriented data. No fetching is done by this module; you must provide the HTML content. (Allan Galarza 25 Oct 31, 2022)
  • Provider API Scraper Example - Example of scraping a paginated API endpoint and dumping the data into a DB. Prerequisites: Python >= 3.9, Pipenv. (Alex Skobelev 1 Oct 20, 2021)
  • Scraping the data from each page of biocides listed on the BAUA website into a CSV file. (Eric DE MARIA 1 Nov 30, 2021)
  • This was supposed to be a web scraping project, but somehow I've turned it into a spamming project. (Boss Perry (Pez) 1 Jan 23, 2022)
  • Meme Videos - Scrapes memes from Reddit using praw and requests, then converts them into video compilations. (Partho 12 Oct 28, 2022)
  • cloudscraper - A simple Python module to bypass Cloudflare's anti-bot page (also known as "I'm Under Attack Mode", or IUAM), implemented with Requests. (VeNoMouS 2.6k Dec 31, 2022)
  • WebScrapperRoBot - Simple web scraper bot to scrape webpages using Requests, html5lib and BeautifulSoup. (Nuhman Pk 53 Dec 21, 2022)
  • Pancakeswap_BSC_Sniper_Bot - Web3 PancakeSwap sniper bot written in Python 3; please note the license conditions! (Treading-Tigers 295 Dec 31, 2022)
  • NASA APOD Discord Bot - Fetches information from the NASA APOD site. (Astronomy Club IITK 4 Apr 23, 2022)
  • cloudflare-scrape - A simple Python module to bypass Cloudflare's anti-bot page (also known as "I'm Under Attack Mode", or IUAM), implemented with Requests. (null 3k Jan 4, 2023)
  • Une PS5 pour Noël - PS5 bot to find a console in France for Christmas 🎄🎅🏻, NOT FOR SCALPERS. Python + Chrome --headless = a PS5 for Christmas. (Olivier Giniaux 3 Feb 13, 2022)
  • LinkedinSpider - A small project using browser automation to increase your visibility and network of connections on LinkedIn. (Frederik 2 Nov 26, 2021)
  • Amazon Web-bot - A spider/bot developed in Python with the Scrapy framework to fetch item information from Amazon. (Khaled Tofailieh 4 Feb 13, 2022)
  • TarkovScrappy - A nifty little bot that lets you know if a queried item might be required for a quest at some point in the land of Tarkov! (Joshua Smeda 2 Apr 11, 2022)
  • Web-scraping - A bot using Python with BeautifulSoup that scrapes the IRS website (prior form publications) by form number and returns the results as JSON, with the option to download PDFs over a range of years. (null 1 Jan 4, 2022)