RSS feed generator website with a user-friendly interface

Overview

PolitePol.com

This is the source code of an RSS feed generator website with a user-friendly interface.

Installation of a development server on Ubuntu

(If you have any questions, please contact me via my GitHub email.)

Install required packages

sudo apt-get install python-minimal libmysqlclient-dev libxml2-dev libxslt-dev python-dev libffi-dev gcc libssl-dev gettext

Install pip

pushd /tmp
wget https://bootstrap.pypa.io/get-pip.py
sudo python get-pip.py
popd

Install pip packages

sudo pip install -r pol/requirements.txt

Install less and yuglify

sudo apt-get install nodejs npm
sudo npm install -g [email protected]
sudo npm install -g [email protected]
sudo ln -s /usr/bin/nodejs /usr/bin/node

Install sass

sudo apt-get install ruby
sudo su -c "gem install sass -v 3.7.4"

Install and set up nginx

sudo apt-get install nginx
sudo cp pol/nginx/default.site-example /etc/nginx/sites-available/default
sudo service nginx reload

Install and set up MySQL if you haven't already.

sudo apt-get install mysql-server

sudo mysql -u root
mysql> USE mysql;
mysql> UPDATE user SET plugin='mysql_native_password' WHERE User='root';
mysql> FLUSH PRIVILEGES;
mysql> exit;

sudo mysql_secure_installation

Create the database. Use the password 'toor' for the root user:

mysql -uroot -ptoor -e 'CREATE DATABASE pol DEFAULT CHARACTER SET utf8mb4 DEFAULT COLLATE utf8mb4_unicode_ci;'
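For orientation, the Django settings created in a later step must point at this database. A minimal sketch of what the `DATABASES` entry could look like, assuming the database name and password above (the actual keys and defaults in settings.py.example may differ):

```python
# Sketch only: mirrors the MySQL setup above for Django's settings.py.
# Host/port values are assumptions; check settings.py.example for the real defaults.
DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.mysql",
        "NAME": "pol",          # database created above
        "USER": "root",
        "PASSWORD": "toor",     # password chosen during MySQL setup
        "HOST": "localhost",
        "PORT": "3306",
        "OPTIONS": {"charset": "utf8mb4"},
    }
}
```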

Create the Django config

cp pol/frontend/frontend/settings.py.example pol/frontend/frontend/settings.py

Initialise the database

pushd pol/frontend
python manage.py migrate
python manage.py loaddata fields.json
popd

Run servers

Run downloader server

pushd pol
python downloader.py
popd

Run frontend server

pushd pol/frontend
python manage.py runserver
popd

Installation of Docker

Build

git clone https://github.com/taroved/pol
cd pol
docker-compose up -d --build

Access (port 8088)

Open the Docker host IP in a browser, e.g. http://192.168.0.10:8088

License

MIT

Comments
  • Simpler Architecture (without database)

    You could make it work without a database at all. Just put everything into a link like:

    http://politepol.com/en/setup?url=https%3A//github.com/rssowl/RSSOwl/issues&title=a.link-gray-dark&description=span.opened-by

    The link can now be used in any RSS aggregator to get the RSS, and you save yourself many potential problems with a database.
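As a sketch of this idea, the selectors can be packed into (and recovered from) the query string with Python's standard library; `build_feed_url` is a hypothetical helper, not part of PolitePol:

```python
from urllib.parse import parse_qs, urlencode, urlparse

def build_feed_url(base, page_url, title_sel, desc_sel):
    """Hypothetical helper: pack all feed settings into one stateless URL."""
    query = urlencode({"url": page_url, "title": title_sel, "description": desc_sel})
    return f"{base}?{query}"

link = build_feed_url(
    "http://politepol.com/en/setup",
    "https://github.com/rssowl/RSSOwl/issues",
    "a.link-gray-dark",
    "span.opened-by",
)

# The server could recover everything from the URL itself, with no database lookup.
params = parse_qs(urlparse(link).query)
```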

    opened by Xyrio 13
  • Endless Loading while creating feed

    After I select title & desc and click the next button

    It is stuck on an endless spinning circle.

    Logs show

    "POST /setup_create_feed HTTP/1.0" 500 12604

    in the frontend app

    and

    [twisted.web.client._HTTP11ClientFactory#info] Stopping factory _HTTP11ClientFactory(<function quiescentCallback at 0x7fecd2a06aa0>, )

    in the backend

    Any ideas?

    opened by JReming85 12
  • update frequency

    Hats off for the awesome design and framework. May I know the update frequency PolitePol uses to generate the feeds? Is it possible to manually change the frequency?

    opened by Hasan0ff 7
  • Doesn't work with russian.rt.com

    On the pages

    https://russian.rt.com/inotv/catalog/country/at https://russian.rt.com/inotv/tag/%D0%90%D0%B2%D1%81%D1%82%D1%80%D0%B8%D1%8F

    the title and description cannot be selected.

    opened by sojusnik 6
  • Restarted the server but..

    I restarted the server but can't make it work properly now. The frontend doesn't work at all, as it shows 502 Bad Gateway. To keep the previous feeds up, I'm running downloader.py in a console.

    What can I do at the moment to fix everything?

    opened by Hasan0ff 4
  • how do I see list all my feeds?

    I built it on Ubuntu and all seems to be working fine. I created my first feed and it works (/feed/1). I created another one (/feed/2) and it also works, but how do I see a list of them? How can I manage them (edit, delete, preview, ...)? I tried localhost:8088/myfeeds but it does not work.

    opened by sasagr 3
  • Allowing for remote access

    Howdy,

    Just wanted to make sure I'm not going insane. Is there a setting that prevents remote hosts from getting the page on port 8000?

    If so, is there a config file somewhere I can edit to allow remote access?

    opened by liamcs98 3
  • setup_create_feed(_ext) can't be loaded

    I set up the POL project on my VServer, but if I try to create a new feed, I only see the loading circle.

    In the nginx access-log I see that the page "setup_create_feed" can't be loaded.

    root@VSERVER:~# tailf /var/log/nginx/access.log
    11.11.11.11 - - [06/Jul/2018:03:19:13 +0200] "GET /static/frontend/js/google-code-prettify/prettify.js HTTP/1.1" 304 0 "http://DOMAIN.TLD:8080/en/setup?url=https://netzpolitik.org/category/datenschutz/" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/65.0.3325.183 Safari/537.36 Vivaldi/1.96.1147.52"
    11.11.11.11 - - [06/Jul/2018:03:19:13 +0200] "GET /static/frontend/js/setup-tool.js HTTP/1.1" 200 15395 "http://DOMAIN.TLD:8080/en/setup?url=https://netzpolitik.org/category/datenschutz/" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/65.0.3325.183 Safari/537.36 Vivaldi/1.96.1147.52"
    11.11.11.11 - - [06/Jul/2018:03:19:13 +0200] "GET /static/frontend/js/setup-tool-ext.js HTTP/1.1" 304 0 "http://DOMAIN.TLD:8080/en/setup?url=https://netzpolitik.org/category/datenschutz/" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/65.0.3325.183 Safari/537.36 Vivaldi/1.96.1147.52"
    11.11.11.11 - - [06/Jul/2018:03:19:13 +0200] "GET /static/frontend/js/help.js HTTP/1.1" 304 0 "http://DOMAIN.TLD:8080/en/setup?url=https://netzpolitik.org/category/datenschutz/" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/65.0.3325.183 Safari/537.36 Vivaldi/1.96.1147.52"
    11.11.11.11 - - [06/Jul/2018:03:19:13 +0200] "GET /static/frontend/images/apple-touch-icon-precomposed.png HTTP/1.1" 304 0 "http://DOMAIN.TLD:8080/en/setup?url=https://netzpolitik.org/category/datenschutz/" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/65.0.3325.183 Safari/537.36 Vivaldi/1.96.1147.52"
    11.11.11.11 - - [06/Jul/2018:03:19:13 +0200] "GET /static/frontend/images/target48.png HTTP/1.1" 304 0 "http://DOMAIN.TLD:8080/en/setup?url=https://netzpolitik.org/category/datenschutz/" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/65.0.3325.183 Safari/537.36 Vivaldi/1.96.1147.52"
    11.11.11.11 - - [06/Jul/2018:03:19:13 +0200] "GET /static/frontend/images/wrench48.png HTTP/1.1" 304 0 "http://DOMAIN.TLD:8080/en/setup?url=https://netzpolitik.org/category/datenschutz/" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/65.0.3325.183 Safari/537.36 Vivaldi/1.96.1147.52"
    11.11.11.11 - - [06/Jul/2018:03:19:13 +0200] "GET /static/frontend/images/glyphicons-halflings-white.png HTTP/1.1" 304 0 "http://DOMAIN.TLD:8080/static/frontend/stylesheets/bootstrap_and_overrides.css.css" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/65.0.3325.183 Safari/537.36 Vivaldi/1.96.1147.52"
    11.11.11.11 - - [06/Jul/2018:03:19:13 +0200] "GET /downloader?url=https%3A%2F%2Fnetzpolitik.org%2Fcategory%2Fdatenschutz%2F HTTP/1.1" 200 23314 "-" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/65.0.3325.183 Safari/537.36 Vivaldi/1.96.1147.52"
    11.11.11.11 - - [06/Jul/2018:03:19:13 +0200] "GET /static/frontend/images/favicon.ico HTTP/1.1" 200 1150 "-" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/65.0.3325.183 Safari/537.36 Vivaldi/1.96.1147.52"
    11.11.11.11 - - [06/Jul/2018:03:19:21 +0200] "POST /setup_get_selected_ids HTTP/1.1" 200 212 "-" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/65.0.3325.183 Safari/537.36 Vivaldi/1.96.1147.52"
    11.11.11.11 - - [06/Jul/2018:03:19:23 +0200] "POST /setup_get_selected_ids HTTP/1.1" 200 298 "-" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/65.0.3325.183 Safari/537.36 Vivaldi/1.96.1147.52"
    11.11.11.11 - - [06/Jul/2018:03:19:25 +0200] "POST /setup_create_feed HTTP/1.1" 500 11899 "-" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/65.0.3325.183 Safari/537.36 Vivaldi/1.96.1147.52"
    

    I installed all components exactly as described in the readme file. The only differences:

    • Installed sqlparse in addition (needed for python manage.py migrate)
    • The downloader and frontend server are running as systemd service
    • Nginx is running on port 8080

    The system is Ubuntu 16.04. The same happened in a Docker container.

    opened by rsguhr 3
  • embedded iframe 404 errors

    Hi, I've been working on hosting PolitePol via Docker on my server and have been able to get Django serving the appropriate pages. However, I am getting 404 errors in the embedded iframe when I attempt to set up a new feed. Here is a screenshot of the Django error: https://imgur.com/a/pcFMSRW . I think it might be due to something with nginx, but I don't know enough about it to be sure. I would appreciate any insight you could give me on this.

    opened by chasekidder 2
  • Admin and sign in/up pages in Docker

    I feel this issue is not related to docker but at least I could try it out this way thanks to #32

    When I started up the project, I noticed that the menu is different from the online politepol.com. Namely, there are no Sign in/Sign up buttons and there is no admin page as well (https://politepol.com/en/admin/login/?next=/en/admin/). When I manually enter the URLs in the local env, Django outputs error pages which mean that debug mode is also enabled.

    How can one edit the added feeds in the local env without sign-in or admin pages? How could I make them work? Is there a configuration I'm missing that also needs to be added to the Dockerfile or start.sh?

    opened by immanuelfodor 2
  • Feeds aren't available

    The last days I see this when creating or previewing a feed:

    Something wrong. Reload the page or contact us by email: [email protected]

    Are you aware of that?

    opened by sojusnik 2
  • 302 Infinite redirection detected

    Scary mantra: [<twisted.python.failure.Failure twisted.web.error.InfiniteRedirection: 302 Infinite redirection detected to https://www.nfe.fazenda.gov.br/portal/informe.aspx?ehCTG=false>]

    Using feed43 there was no problem...

    opened by joaoavilars 1
  • Bump scrapy from 1.4.0 to 2.6.2

    Bumps scrapy from 1.4.0 to 2.6.2.

    Release notes

    Sourced from scrapy's releases.

    2.6.2

    Fixes a security issue around HTTP proxy usage, and addresses a few regressions introduced in Scrapy 2.6.0.

    See the changelog.

    2.6.1

    Fixes a regression introduced in 2.6.0 that would unset the request method when following redirects.

    2.6.0

    • Security fixes for cookie handling (see details below)
    • Python 3.10 support
    • asyncio support is no longer considered experimental, and works out-of-the-box on Windows regardless of your Python version
    • Feed exports now support pathlib.Path output paths and per-feed item filtering and post-processing

    See the full changelog

    Security bug fixes

    • When a Request object with cookies defined gets a redirect response causing a new Request object to be scheduled, the cookies defined in the original Request object are no longer copied into the new Request object.

      If you manually set the Cookie header on a Request object and the domain name of the redirect URL is not an exact match for the domain of the URL of the original Request object, your Cookie header is now dropped from the new Request object.

      The old behavior could be exploited by an attacker to gain access to your cookies. Please, see the cjvr-mfj7-j4j8 security advisory for more information.

      Note: It is still possible to enable the sharing of cookies between different domains with a shared domain suffix (e.g. example.com and any subdomain) by defining the shared domain suffix (e.g. example.com) as the cookie domain when defining your cookies. See the documentation of the Request class for more information.

    • When the domain of a cookie, either received in the Set-Cookie header of a response or defined in a Request object, is set to a public suffix <https://publicsuffix.org/>_, the cookie is now ignored unless the cookie domain is the same as the request domain.

      The old behavior could be exploited by an attacker to inject cookies from a controlled domain into your cookiejar that could be sent to other domains not controlled by the attacker. Please, see the mfjm-vh54-3f96 security advisory for more information.

    2.5.1

    Security bug fix:

    If you use HttpAuthMiddleware (i.e. the http_user and http_pass spider attributes) for HTTP authentication, any request exposes your credentials to the request target.

    To prevent unintended exposure of authentication credentials to unintended domains, you must now additionally set a new, additional spider attribute, http_auth_domain, and point it to the specific domain to which the authentication credentials must be sent.

    If the http_auth_domain spider attribute is not set, the domain of the first request will be considered the HTTP authentication target, and authentication credentials will only be sent in requests targeting that domain.

    If you need to send the same HTTP authentication credentials to multiple domains, you can use w3lib.http.basic_auth_header instead to set the value of the Authorization header of your requests.

    If you really want your spider to send the same HTTP authentication credentials to any domain, set the http_auth_domain spider attribute to None.

    Finally, if you are a user of scrapy-splash, know that this version of Scrapy breaks compatibility with scrapy-splash 0.7.2 and earlier. You will need to upgrade scrapy-splash to a greater version for it to continue to work.

    2.5.0

    • Official Python 3.9 support
    • Experimental HTTP/2 support
    • New get_retry_request() function to retry requests from spider callbacks

    ... (truncated)

    Changelog

    Sourced from scrapy's changelog.

    Scrapy 2.6.2 (2022-07-25)

    Security bug fix:

    • When :class:~scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware processes a request with :reqmeta:proxy metadata, and that :reqmeta:proxy metadata includes proxy credentials, :class:~scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware sets the Proxy-Authentication header, but only if that header is not already set.

      There are third-party proxy-rotation downloader middlewares that set different :reqmeta:proxy metadata every time they process a request.

      Because of request retries and redirects, the same request can be processed by downloader middlewares more than once, including both :class:~scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware and any third-party proxy-rotation downloader middleware.

      These third-party proxy-rotation downloader middlewares could change the :reqmeta:proxy metadata of a request to a new value, but fail to remove the Proxy-Authentication header from the previous value of the :reqmeta:proxy metadata, causing the credentials of one proxy to be sent to a different proxy.

      To prevent the unintended leaking of proxy credentials, the behavior of :class:~scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware is now as follows when processing a request:

      • If the request being processed defines :reqmeta:proxy metadata that includes credentials, the Proxy-Authorization header is always updated to feature those credentials.

      • If the request being processed defines :reqmeta:proxy metadata without credentials, the Proxy-Authorization header is removed unless it was originally defined for the same proxy URL.

        To remove proxy credentials while keeping the same proxy URL, remove the Proxy-Authorization header.

      • If the request has no :reqmeta:proxy metadata, or that metadata is a falsy value (e.g. None), the Proxy-Authorization header is removed.

        It is no longer possible to set a proxy URL through the :reqmeta:proxy metadata but set the credentials through the Proxy-Authorization header. Set proxy credentials through the :reqmeta:proxy metadata instead.

    ... (truncated)

    Commits

    Dependabot compatibility score

    Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


    Dependabot commands and options

    You can trigger Dependabot actions by commenting on this PR:

    • @dependabot rebase will rebase this PR
    • @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
    • @dependabot merge will merge this PR after your CI passes on it
    • @dependabot squash and merge will squash and merge this PR after your CI passes on it
    • @dependabot cancel merge will cancel a previously requested merge and block automerging
    • @dependabot reopen will reopen this PR if it is closed
    • @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
    • @dependabot ignore this major version will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this minor version will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this dependency will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
    • @dependabot use these labels will set the current labels as the default for future PRs for this repo and language
    • @dependabot use these reviewers will set the current reviewers as the default for future PRs for this repo and language
    • @dependabot use these assignees will set the current assignees as the default for future PRs for this repo and language
    • @dependabot use this milestone will set the current milestone as the default for future PRs for this repo and language

    You can disable automated security fix PRs for this repo from the Security Alerts page.

    dependencies 
    opened by dependabot[bot] 0
  • Fixed build error: execvp No such file or directory

    When building the docker image, I got the issue

    aarch64-linux-gnu-gcc: error trying to exec 'cc1plus': execvp: No such file or directory
        error: command 'aarch64-linux-gnu-gcc' failed with exit status 1
        ----------------------------------------
    ERROR: Command errored out with exit status 1: /usr/bin/python -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-install-DgBg6u/brotli/setup.py'"'"'; __file__='"'"'/tmp/pip-install-DgBg6u/brotli/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record /tmp/pip-record-8I3TnB/install-record.txt --single-version-externally-managed --compile --install-headers /usr/local/include/python2.7/brotli Check the logs for full command output.
    

    To fix it I needed to reinstall build-essential and pin cryptography to 2.8.

    opened by billmetangmo 0
  • Bump lxml from 3.8.0 to 4.9.1

    Bumps lxml from 3.8.0 to 4.9.1.

    Changelog

    Sourced from lxml's changelog.

    4.9.1 (2022-07-01)

    Bugs fixed

    • A crash was resolved when using iterwalk() (or canonicalize()) after parsing certain incorrect input. Note that iterwalk() can crash on valid input parsed with the same parser after failing to parse the incorrect input.

    4.9.0 (2022-06-01)

    Bugs fixed

    • GH#341: The mixin inheritance order in lxml.html was corrected. Patch by xmo-odoo.

    Other changes

    • Built with Cython 0.29.30 to adapt to changes in Python 3.11 and 3.12.

    • Wheels include zlib 1.2.12, libxml2 2.9.14 and libxslt 1.1.35 (libxml2 2.9.12+ and libxslt 1.1.34 on Windows).

    • GH#343: Windows-AArch64 build support in Visual Studio. Patch by Steve Dower.

    4.8.0 (2022-02-17)

    Features added

    • GH#337: Path-like objects are now supported throughout the API instead of just strings. Patch by Henning Janssen.

    • The ElementMaker now supports QName values as tags, which always override the default namespace of the factory.

    Bugs fixed

    • GH#338: In lxml.objectify, the XSI float annotation "nan" and "inf" were spelled in lower case, whereas XML Schema datatypes define them as "NaN" and "INF" respectively.

    ... (truncated)

    Commits
    • d01872c Prevent parse failure in new test from leaking into later test runs.
    • d65e632 Prepare release of lxml 4.9.1.
    • 86368e9 Fix a crash when incorrect parser input occurs together with usages of iterwa...
    • 50c2764 Delete unused Travis CI config and reference in docs (GH-345)
    • 8f0bf2d Try to speed up the musllinux AArch64 build by splitting the different CPytho...
    • b9f7074 Remove debug print from test.
    • b224e0f Try to install 'xz' in wheel builds, if available, since it's now needed to e...
    • 897ebfa Update macOS deployment target version from 10.14 to 10.15 since 10.14 starts...
    • 853c9e9 Prepare release of 4.9.0.
    • d3f77e6 Add a test for https://bugs.launchpad.net/lxml/+bug/1965070 leaving out the a...
    • Additional commits viewable in compare view

    Dependabot compatibility score

    Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


    Dependabot commands and options

    You can trigger Dependabot actions by commenting on this PR:

    • @dependabot rebase will rebase this PR
    • @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
    • @dependabot merge will merge this PR after your CI passes on it
    • @dependabot squash and merge will squash and merge this PR after your CI passes on it
    • @dependabot cancel merge will cancel a previously requested merge and block automerging
    • @dependabot reopen will reopen this PR if it is closed
    • @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
    • @dependabot ignore this major version will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this minor version will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this dependency will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
    • @dependabot use these labels will set the current labels as the default for future PRs for this repo and language
    • @dependabot use these reviewers will set the current reviewers as the default for future PRs for this repo and language
    • @dependabot use these assignees will set the current assignees as the default for future PRs for this repo and language
    • @dependabot use this milestone will set the current milestone as the default for future PRs for this repo and language

    You can disable automated security fix PRs for this repo from the Security Alerts page.

    dependencies 
    opened by dependabot[bot] 0
  • Support for OPML

    Most RSS readers allow for import of OPML which includes links to all your feeds.

    Example: https://gist.github.com/webpro/5907452

    It would be great if PolitePol exposed OPML data of all feeds in an account.
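For reference, a minimal OPML 2.0 export can be built with Python's standard library alone; the feed title and URL below are invented for illustration:

```python
import xml.etree.ElementTree as ET

def feeds_to_opml(feeds):
    """Build an OPML 2.0 document from (title, xml_url) pairs."""
    opml = ET.Element("opml", version="2.0")
    head = ET.SubElement(opml, "head")
    ET.SubElement(head, "title").text = "My feeds"
    body = ET.SubElement(opml, "body")
    for title, xml_url in feeds:
        # One <outline> per feed; most readers import this directly.
        ET.SubElement(body, "outline", type="rss", text=title, xmlUrl=xml_url)
    return ET.tostring(opml, encoding="unicode")

opml = feeds_to_opml([("Example feed", "https://example.com/feed/1")])
```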

    opened by gingerbeardman 0
  • Mouse cursor to change to pointer when selecting elements

    Currently the mouse cursor is a caret when the mouse is over text items, and an arrow elsewhere.

    It would make more sense if it was an arrow over all elements.

    opened by gingerbeardman 0
Owner
Alexandr Nesterenko