A Python feed reader library.

Overview

reader is a Python feed reader library.

It aims to allow writing feed reader applications without any business code, and without enforcing a dependency on a particular framework.


reader allows you to:

  • retrieve, store, and manage Atom, RSS, and JSON feeds
  • mark entries as read or important
  • add tags and metadata to feeds
  • filter feeds and articles
  • full-text search articles
  • get statistics on feed and user activity
  • write plugins to extend its functionality
  • skip all the low level stuff and focus on what makes your feed reader different

...all these with:

  • a stable, clearly documented API
  • excellent test coverage
  • fully typed Python

What reader doesn't do:

  • provide a UI
  • provide a REST API (yet)
  • depend on a web framework
  • have an opinion about how/where you use it

The following exist, but are optional (and frankly, a bit unpolished):

  • a minimal web interface
    • that works even with text-only browsers
    • with automatic tag fixing for podcasts (MP3 enclosures)
  • a command-line interface

Documentation: reader.readthedocs.io

Usage:

$ pip install reader
>>> from reader import make_reader
>>>
>>> reader = make_reader('db.sqlite')
>>> reader.add_feed('http://www.hellointernet.fm/podcast?format=rss')
>>> reader.update_feeds()
>>>
>>> entries = list(reader.get_entries())
>>> [e.title for e in entries]
['H.I. #108: Project Cyclops', 'H.I. #107: One Year of Weird', ...]
>>>
>>> reader.mark_entry_as_read(entries[0])
>>>
>>> [e.title for e in reader.get_entries(read=False)]
['H.I. #107: One Year of Weird', 'H.I. #106: Water on Mars', ...]
>>> [e.title for e in reader.get_entries(read=True)]
['H.I. #108: Project Cyclops']
>>>
>>> reader.update_search()
>>>
>>> for e in list(reader.search_entries('year'))[:3]:
...     title = e.metadata.get('.title')
...     print(title.value, title.highlights)
...
H.I. #107: One Year of Weird (slice(15, 19, None),)
H.I. #52: 20,000 Years of Torment (slice(17, 22, None),)
H.I. #83: The Best Kind of Prison ()
Issues
  • Increasing "database is locked" errors during update

    Starting with 2020-04-15, there has been an increasing number of "database is locked" errors during update on my reader deployment (update every hour and update --new-only && search update every minute).

    Most errors happen at XX:01:0X, which I think indicates the hourly and minutely updates are clashing. It's likely that search update is hogging the database (since we know it has long-running transactions).

    I didn't see any metric changes on the host around 2020-04-15.

    Logs.
    $ head -n1 /var/log/reader/update.log | cut -dT -f1
    2019-07-26
    $ cat locked.py 
    import sys
    
    last_ts = None
    
    for line in sys.stdin:
        if line.startswith('2020-'):
            last_ts, *_ = line.partition(' ')
        if line.startswith('reader.exceptions.StorageError: sqlite3 error: database is locked'):
            print(last_ts)
    
    $ cat /var/log/reader/update.log | python3 locked.py | cut -dT -f1 | uniq -c
          1 2020-04-15
          4 2020-04-16
          3 2020-04-17
          6 2020-04-18
          2 2020-04-19
          1 2020-04-20
          2 2020-04-21
          3 2020-04-22
          3 2020-04-23
          2 2020-04-24
          3 2020-04-25
          1 2020-04-26
          4 2020-04-27
          2 2020-04-28
          3 2020-04-30
          6 2020-05-01
          8 2020-05-02
         12 2020-05-03
          6 2020-05-04
          4 2020-05-05
          5 2020-05-06
          1 2020-05-07
          2 2020-05-08
          3 2020-05-09
          4 2020-05-10
          4 2020-05-11
          7 2020-05-12
          8 2020-05-13
          9 2020-05-14
          6 2020-05-15
          5 2020-05-16
          9 2020-05-17
         16 2020-05-18
          9 2020-05-19
         10 2020-05-20
         15 2020-05-21
         23 2020-05-22
         20 2020-05-23
         19 2020-05-24
         22 2020-05-25
         22 2020-05-26
         21 2020-05-27
         15 2020-05-28
         18 2020-05-29
         14 2020-05-30
         11 2020-05-31
         17 2020-06-01
         13 2020-06-02
         18 2020-06-03
         10 2020-06-04
         15 2020-06-05
         10 2020-06-06
         14 2020-06-07
         15 2020-06-08
         18 2020-06-09
         17 2020-06-10
         19 2020-06-11
         21 2020-06-12
         19 2020-06-13
         16 2020-06-14
         13 2020-06-15
         24 2020-06-16
         24 2020-06-17
         24 2020-06-18
         24 2020-06-19
         24 2020-06-20
         24 2020-06-21
         24 2020-06-22
         24 2020-06-23
         24 2020-06-24
         11 2020-06-25
    $ cat /var/log/reader/update.log | python3 locked.py | cut -dT -f2 | cut -d: -f2- | cut -c1-4 | sort | uniq -c | sort -rn | head
        510 01:0
        119 03:0
         76 01:1
         46 01:2
         30 00:5
         14 00:3
         13 00:0
          7 01:3
          7 00:4
          3 05:0
    

    I should check if there was a trigger that started this, or if the number of entries/feeds simply hit some threshold.

    Also, it would be nice to show the PID in the logs so I can see which of the commands is failing, and maybe to intercept the exception and show a nicer error message.

    Things that are likely to improve this:

    • [helps] Enabling WAL (#169).
    • Using --workers 20 to give the hourly update a chance to finish before the second minute of the hour. Obviously, this isn't actually addressing the problem.
    • Increasing the timeout passed to sqlite3.connect (at the moment we're using the 5s default).
    • [doesn't help] Using a SQLite with HAVE_USLEEP (but this is not necessarily a reader issue; we could document it, though).
    • Adding retrying in reader. ಠ_ಠ, SQLite already has it built in, it's sucky because of the HAVE_USLEEP thing.
    • Wrap the whole update with a lock. ಠ_ಠ, idem.
    • Make the search update chunks more spaced out, to allow other stuff to happen.
    • Make search update not hog the database by not stripping HTML inside the transaction.
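    The timeout bullet above can be demonstrated with plain sqlite3 from the standard library (no reader involved; the path and table name below are made up): a connection whose busy timeout is zero fails immediately with "database is locked" when another connection holds the write lock, while a larger timeout retries for up to that many seconds.

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "db.sqlite")

# isolation_level=None means autocommit; we manage transactions explicitly.
writer = sqlite3.connect(path, isolation_level=None, timeout=5)
writer.execute("create table t (x)")
writer.execute("begin immediate")  # take and hold the write lock

# timeout=0 disables the busy handler: SQLITE_BUSY surfaces immediately
# as "database is locked" instead of being retried for up to `timeout` seconds.
other = sqlite3.connect(path, isolation_level=None, timeout=0)
try:
    other.execute("begin immediate")
    locked = False
except sqlite3.OperationalError as exc:
    locked = "locked" in str(exc)

print(locked)  # True
writer.execute("rollback")
```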
    bug core 
    opened by lemon24 15
  • Entry tags and metadata

    Currently two bits of user data can be added to a feed entry (mark_as_read, mark_as_important).

    Possible use cases:

    1. Add user notes about the entry
    2. Add tags for entry, add entry to 'saved items'
    3. In the case of podcasts, download info (i.e., the path to the file if successfully downloaded, or how many download attempts were made)

    Optionally include user_data to search (as an argument to make_reader).
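    A minimal in-memory sketch of what such an API could look like (every name here is hypothetical, not reader's actual API): user data keyed by (feed_url, entry_id).

```python
from collections import defaultdict

# Hypothetical in-memory store: {(feed_url, entry_id): {key: value}}.
_entry_tags = defaultdict(dict)

def set_entry_tag(feed_url, entry_id, key, value):
    _entry_tags[(feed_url, entry_id)][key] = value

def get_entry_tag(feed_url, entry_id, key, default=None):
    return _entry_tags[(feed_url, entry_id)].get(key, default)

# Use case 3: track podcast download info per entry.
set_entry_tag("https://example.com/feed", "ep-1", ".download",
              {"path": "/podcasts/ep-1.mp3", "tries": 1})
info = get_entry_tag("https://example.com/feed", "ep-1", ".download")
print(info["tries"])  # 1
```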

    help wanted wontfix API core 
    opened by balki 12
  • SQLite objects created in a thread can only be used in that same thread.

    Hi @lemon24

    Thanks for making this library.

    I am attempting to utilise it for a Telegram bot I am working on. However, I run into the following error:

    SQLite objects created in a thread can only be used in that same thread.
    

    Here is my code 😃

    Doing some quick Google searching I might need to do something similar to what is recommended in this stack overflow post.

    However, I cannot see a way I can provide this to the library currently.

    Hope you can help. Thanks and a merry Christmas to you! 🎄
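    The underlying sqlite3 behavior (and the check_same_thread escape hatch from the Stack Overflow post) can be reproduced without reader; note that passing check_same_thread=False makes you responsible for serializing access yourself:

```python
import sqlite3
import threading

def try_query(conn, results):
    try:
        conn.execute("select 1")
        results.append("ok")
    except sqlite3.ProgrammingError as exc:
        results.append(str(exc))

# Default: a connection is bound to the thread that created it.
strict = sqlite3.connect(":memory:")
errors = []
t = threading.Thread(target=try_query, args=(strict, errors))
t.start(); t.join()
print(errors[0])  # ...created in a thread can only be used in that same thread...

# check_same_thread=False lifts the check, but YOU must serialize access
# (e.g. with a lock), since the connection itself is not thread-safe.
relaxed = sqlite3.connect(":memory:", check_same_thread=False)
ok = []
t = threading.Thread(target=try_query, args=(relaxed, ok))
t.start(); t.join()
print(ok[0])  # ok
```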

    opened by dbrennand 12
  • Feed decommissioned

    Like #149, but there's no replacement.

    Now, to make the feed stop updating, I can delete it, but I lose the entries.

    Possible ways of keeping the entries:

    • Have a way to mark a feed as "broken / don't update anymore" (obviously, this could be temporary).
    • If we can mark an entire feed as important (https://github.com/lemon24/reader/issues/96#issuecomment-628520935), and have an Archived feed where important entries of deleted feeds go (https://github.com/lemon24/reader/issues/96#issuecomment-460077441), marking the feed as important and then deleting it would preserve the entries.
    API core 
    opened by lemon24 11
  • [Question] Public API for recently fetched entries, or entries added/modified since date?

    For an app I'm working on, I want to update some feeds and do something with the entries that appeared. But I'm unsure how to implement this last part and get only the new entries.

    The workflow I've sketched so far is:

    1. Disable updates for all feeds in db (because there might be some I don't wish to update)
    2. Start adding feeds to db
       2.1. If feed already exists, enable it for updates
       2.2. If feed doesn't exist, it will be added and enabled for updates automatically
    3. Update feeds through update_feeds or update_feeds_iter
    4. ???

    update_feeds doesn't return anything. update_feeds_iter gives me back a list of UpdateResult objects, where each item has a url and either counts or an exception.

    So, I think I can sum all the counts and ask for that many entries. Something like:

    count = sum(
        result.value.new + result.value.updated
        for result in results
        if not isinstance(result.value, ReaderError)
    )
    new_entries = reader.get_entries(limit=count)
    

    But is it guaranteed that get_entries(sort='recent') will include a recently updated entry? Even if that entry originally appeared a long time ago? I might be misunderstanding what it means for an entry to be marked as "updated", so any pointers on that would be helpful, too.

    Perhaps I could change my workflow a little: first get a monotonic timestamp, then run all the steps, and finally ask for all entries that were added or modified after the timestamp. But it seems there is no API for searching by date? search_entries is designed for full-text search and works on only a few columns.

    So, my question is:

    1. What is the preferred way of obtaining all entries added in an update call? Counting and using get_entries? Calling get_entries for everything and discarding all results that were added/modified before the timestamp? Something else?
    2. What does it mean that an entry was "updated"?
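    The timestamp idea from the question can be sketched generically (Entry below is a stand-in dataclass for illustration, not reader's Entry type): record a cutoff before the update, then keep only entries whose added time is after it.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

@dataclass
class Entry:  # stand-in for illustration, not reader.types.Entry
    title: str
    added: datetime

cutoff = datetime(2020, 6, 1, tzinfo=timezone.utc)  # taken before the update

entries = [
    Entry("old", cutoff - timedelta(days=30)),
    Entry("new", cutoff + timedelta(minutes=5)),
]

fresh = [e.title for e in entries if e.added > cutoff]
print(fresh)  # ['new']
```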
    opened by mirekdlugosz 10
  • Twitter support

    Some notes:

    • Main use case: get updates on someone's tweets, e.g. https://twitter.com/qntm; maybe replies too.
      • Account / API key not required (it kinda defeats the purpose).
    • From 20 minutes of research, snscrape seems to be working (other popular ones seem broken).
      • The lib part is not stable, but usable.
        • We can use our own Requests session (by setting a private attribute).
    • This won't really fit with the retriever/parser model we have now.
      • #222 has the same issue, converge.
      • We can use the date of the last tweet as Last-Modified.
        • We need a limit the first time (scraping is paginated, if we go all the way to the beginning of an account it'll take ages).
    • Should model the URLs on Twitter's (https://twitter.com/$user, https://twitter.com/$user/with_replies, etc.).
    • Presentation matters:
      • Threads should be shown in a sane way.
      • Media should be inlined (e.g. a link to an image should be shown as an <img ...>).
      • Titles should work with the dedupe plugin (likely, no title).
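    The "date of the last tweet as Last-Modified" bullet maps directly onto the HTTP date format; the standard library can both produce and parse it (the date below is arbitrary):

```python
from datetime import datetime, timezone
from email.utils import format_datetime, parsedate_to_datetime

last_tweet = datetime(2020, 5, 22, 13, 30, tzinfo=timezone.utc)

# Render as an HTTP-date, suitable as a Last-Modified header value.
header = format_datetime(last_tweet, usegmt=True)
print(header)  # Fri, 22 May 2020 13:30:00 GMT

# And back again, e.g. when honoring If-Modified-Since on the next fetch.
assert parsedate_to_datetime(header) == last_tweet
```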
    plugin 
    opened by lemon24 9
  • Handle non-http feeds gracefully in the web app

    ...or don't handle them at all.

    Adding feed /dev/zero kills the web app. People using Reader directly are free to shoot themselves in the foot however they want; the app should not allow them to, especially if it's not their foot they're shooting.

    Update: Oh look, we already have a TODO for this: https://github.com/lemon24/reader/blob/165a0af5f3510dd64fc4c75de17e9c5f45f25c06/src/reader/core/parser.py#L155

    API web app core 
    opened by lemon24 9
  • Some feeds have duplicate entries

    Some feeds have duplicate entries, or their entries' ids change (resulting in an entry being stored twice).

    E.g., a feed that had the entry id format updated:

    $ sqlite3 db.sqlite 'select feed, id, updated, title from entries where title like "RE: xkcd%"' -line
       feed = http://sealedabstract.com/feed/
         id = http://sealedabstract.com/?p=2494
    updated = 2014-09-09 08:30:25
      title = RE: xkcd #1357 free speech
    
       feed = http://sealedabstract.com/feed/
         id = /?p=2494
    updated = 2014-09-09 07:30:25
      title = RE: xkcd #1357 free speech
    

    If possible, only one should be shown (similar to #78).
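    For the id-format change shown above, one possible mitigation (a sketch of the idea, not what reader actually does) is to normalize ids by resolving them against the feed URL before comparing; urljoin collapses the two forms into one:

```python
from urllib.parse import urljoin

feed_url = "http://sealedabstract.com/feed/"
ids = ["http://sealedabstract.com/?p=2494", "/?p=2494"]

# Resolving relative ids against the feed URL makes both forms identical.
normalized = {urljoin(feed_url, i) for i in ids}
print(normalized)  # {'http://sealedabstract.com/?p=2494'}
```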

    API web app 
    opened by lemon24 9
  • CLI options must be passed all the time

    opened by lemon24 8
  • I don't know what's happening during update_feeds()

    I don't know what's happening during update_feeds() until it finishes. Moreover, there's no straightforward way to know programmatically which feeds failed (logging or guessing from feed.last_exception don't count).

    From https://clig.dev/#robustness-guidelines:

    Responsive is more important than fast. Print something to the user in <100ms. If you’re making a network request, print something before you do it so it doesn’t hang and look broken.

    Show progress if something takes a long time. If your program displays no output for a while, it will look broken. A good spinner or progress indicator can make a program appear to be faster than it is.

    Either of these is hard to do at the moment.

    API core 
    opened by lemon24 7
  • REST API

    Have you thought about providing a REST API returning feed data as JSON? This would help with implementing other UIs. I will probably experiment with that and, if you are interested, provide a PR.

    help wanted wontfix 
    opened by clemera 7
  • Gemini subscription support

    https://gemini.circumlunar.space/docs/companion/subscription.gmi

    Prerequisites:

    • a way to fetch gemini:// URLs
      • https://pypi.org/project/aiogemini/
      • https://pypi.org/project/gemurl/
      • https://github.com/kr1sp1n/awesome-gemini#programming
        • https://github.com/cbrews/ignition
        • https://framagit.org/bortzmeyer/agunua
        • https://tildegit.org/solderpunk/AV-98 ("canonical" CLI client)
        • https://tildegit.org/solderpunk/CAPCOM ("canonical" CLI feed reader)
    • figure out how to handle TOFU
    • a way to render GMI to HTML (re-check awesome-gemini above); only if we fetch linked entries

    This might be a good way to explore how a plugin (PyPI) package would work.

    opened by lemon24 0
  • Sort by recently interacted with

    It would be nice to get entries the user recently interacted with.

    "Interacted" means, at a minimum, marked as (un)read/important. "Set tag" might be nice. "Downloaded enclosure" might be nice too.

    Arguably, the mark_as_read plugin (and plugins in general) should not count as an interaction. If we use read_/important_modified, mark_as_read should probably set them to None – but this would likely break the "don't care" tri-state (https://github.com/lemon24/reader/issues/254#issuecomment-938146589).

    enhancement core 
    opened by lemon24 0
  • 4.0 backwards compatibility breaks

    This is to track all the backwards compatibility breaks we want to do in 4.0.

    Things that require deprecation warnings pre-4.0:

    • ...

    Things that do not require / can't (easily) have deprecation warnings pre-4.0:

    • [ ] make most public dataclass fields KW_ONLY (after we drop Python 3.9)
    API core 
    opened by lemon24 0
  • Deleting a feed deletes its important entries

    They should likely be moved to an "Archived" feed (mentioned in https://github.com/lemon24/reader/issues/96#issuecomment-460077441).

    • Old entries in the archived feed should not be deleted (by #96); if they remain important, they won't.
    • The entry source (#276) and original_feed_url should be set to the to-be-deleted feed (if not already set).
    core plugin 
    opened by lemon24 0
  • Typing cleanup

    Clean up typing-related stuff. Might be able to use pyupgrade for this.

    To do:

    • [x] from __future__ import annotations (and the changes it enables)
    • [ ] don't depend on typing_extensions at runtime (example)
    • [ ] typing.Self (supported by mypy master, but not by 0.991)
    • [ ] show types for data objects in docs
    • [ ] ~~move TagFilter, {Feed,Entry}Filter options to _storage (todo)~~ no, used by _search as well, just remove todo
    • [ ] make DEFAULT_RESERVED_NAME_SCHEME public
    • [ ] move reader.core.*Hook to types

    To not do:

    • Consolidate (public) aliases under reader.typing (Flask does it)?
      • No, reader.types is likely good enough (but maybe consolidate them in one place in the file).
    • import typing as t, import typing_extensions as te (Flask does it)?
      • Fewer imports, but makes code look kinda ugly.
      • Also,

        When adding types, the convention is to import types using the form from typing import Union (as opposed to doing just import typing or import typing as t or from typing import *).

    opened by lemon24 0
Releases: 3.3