A standalone package to scrape financial data from listed Vietnamese companies via Vietstock

Overview

Scrape Financial Data of Vietnamese Listed Companies - Version 2

A standalone package to scrape financial data from listed Vietnamese companies via Vietstock. If you are looking for raw financial data from listed Vietnamese companies, this may help you.

Table of Contents

  • Prerequisites
  • Noting the project folder
  • Run within Docker Compose (recommended)
  • Run on Host without Docker Compose
  • Scrape Results
  • Logs
  • Debugging and How This Thing Works
  • Limitations and Lessons Learned
  • Disclaimer

Prerequisites

A computer that can run Docker

Because the core components of this project run on Docker.

Cloning this project

Because you will have to build the image from source; I have not released this project's image on Docker Hub yet.

A Vietstock user cookie string

How to get it:

  • Sign on to finance.vietstock.vn
  • Hover over "Corporate"/"Doanh nghiệp", and choose "Corporate A-Z"/"Doanh nghiệp A-Z"
  • Click on any ticker
  • Open your browser's Inspect console by right-clicking on any empty area of the page and choosing Inspect
  • Go to the Network tab and filter for XHR requests only
  • On the page, click "Financials"/"Tài chính"
  • On the list of XHR requests, click on any request, then go to the Cookies tab underneath
  • Take note of the string in the vts_usr_lg cookie; this is your user cookie (see the optional sanity-check sketch after this list)
  • Done
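
If you want to sanity-check the cookie before wiring it into the scraper, here is a minimal sketch, assuming the Python requests library; it simply sends the cookie with an ordinary page request, and the exact validation Vietstock performs is not documented here.

import requests

# Value of the vts_usr_lg cookie noted above
USER_COOKIE = "<YOUR_VIETSTOCK_USER_COOKIE>"

# Send the cookie with an ordinary page request; a 200 response suggests the
# cookie string is at least well-formed (it does not guarantee API access)
resp = requests.get(
    "https://finance.vietstock.vn",
    cookies={"vts_usr_lg": USER_COOKIE},
    headers={"User-Agent": "Mozilla/5.0"},
    timeout=30,
)
print(resp.status_code)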

Some pointers on the Vietstock financial API parameters used when scraping

Financial report types and their meanings:

Report type code    Meaning
CTKH                Financial targets/Chỉ Tiêu Kế Hoạch
CDKT                Balance sheet/Cân Đối Kế Toán
KQKD                Income statement/Kết Quả Kinh Doanh
LC                  Cash flow statement/Lưu Chuyển (Tiền Tệ)
CSTC                Financial ratios/Chỉ Số Tài Chính

Financial report terms and their meanings:

Report term code    Meaning
1                   Annually
2                   Quarterly
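
For convenience when labeling or post-processing scraped files, the two tables above can be captured as plain Python mappings. This is only a sketch of the codes listed in this README; the scraper itself does not require it.

# Report type and report term codes as listed in the tables above
REPORT_TYPES = {
    "CTKH": "Financial targets (Chỉ Tiêu Kế Hoạch)",
    "CDKT": "Balance sheet (Cân Đối Kế Toán)",
    "KQKD": "Income statement (Kết Quả Kinh Doanh)",
    "LC": "Cash flow statement (Lưu Chuyển Tiền Tệ)",
    "CSTC": "Financial ratios (Chỉ Số Tài Chính)",
}

REPORT_TERMS = {"1": "Annually", "2": "Quarterly"}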

Noting the project folder

All core functions, as well as the scraped files, live within the functions_vietstock folder; thus, from now on, the functions_vietstock folder will simply be referred to as ./.

Run within Docker Compose (recommended)

Configuration

1. Add your Vietstock user cookie to docker-compose.yml

It should be in this area:

...
functions-vietstock:
    build: .
    container_name: functions-vietstock
    command: wait-for-it -s torproxy:8118 -s scraper-redis:6379 -t 600  -- bash
    environment: 
        - REDIS_HOST=scraper-redis
        - PROXY=yes
        - TORPROXY_HOST=torproxy
        - USER_COOKIE=<YOUR_VIETSTOCK_USER_COOKIE>
...

2. Specify whether you want to use proxy

In the same config area as the user cookie above, remove the environment variables PROXY and TORPROXY_HOST to stop using the proxy. Please note that I have not tested this scraper without a proxy.

Build image and start related services

At the project folder, run:

docker-compose build --no-cache && docker-compose up -d

Next, open the scraper container in another terminal:

docker exec -it functions-vietstock ./userinput.sh

From here, you can follow along with the userinput script.

Note: To stop the scraping, stop the userinput script terminal, then open another terminal and run:

docker exec -it functions-vietstock ./celery_stop.sh

to clean everything related to the scraping process (local scraped files are intact).

Some questions require you to answer in a specific syntax, as follows:

  • Do you wish to scrape by a specific business type-industry or by tickers? [y for business type-industry/n for tickers]
    • If you enter y, the next prompt is: Enter business type ID and industry ID combination in the form of businesstype_id;industry_id:
      • If you chose to scrape a list of all business types-industries and their respective tickers, you should have the file bizType_ind_tickers.csv in the scrape result folder (./localData/overview).
      • Then you answer this prompt by entering a business type ID and industry ID combination in the form of businesstype_id;industry_id.
    • If you enter n, the next prompts ask for ticker(s), report type(s), report term(s) and page.
      • Again, suppose you have the bizType_ind_tickers.csv file
      • Then you answer the prompts as follows:
        • ticker: a ticker symbol or a list of ticker symbols of your choice. You can enter either ticker_1 or ticker_1,ticker_2

        • report_type and report_term: use the report type codes and report term codes in the following tables (already listed above). You can enter either report_type_1 or report_type_1,report_type_2. The same goes for report term.

          Report type code    Meaning
          CTKH                Financial targets/Chỉ Tiêu Kế Hoạch
          CDKT                Balance sheet/Cân Đối Kế Toán
          KQKD                Income statement/Kết Quả Kinh Doanh
          LC                  Cash flow statement/Lưu Chuyển (Tiền Tệ)
          CSTC                Financial ratios/Chỉ Số Tài Chính

          Report term code    Meaning
          1                   Annually
          2                   Quarterly
        • page: the page at which to start the scrape. This is optional; if omitted, the scraper will start from page 1 (see the worked example after this list)
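
As a worked illustration, using IDs and tickers from the bizType_ind_tickers.csv preview later in this README: if you scrape by business type-industry, an answer of 3;1000 targets the Bank business type within the Finance and Insurance industry; if you scrape by tickers, a set of answers might look like this (the exact prompt wording comes from the userinput script itself):

ticker: VCB,TCB
report_type: CDKT,KQKD
report_term: 1,2
page: 1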

Run on Host without Docker Compose

Maybe you do not want to spend time building the image, and just want to play around with the code.

Specify local environment variables

In the functions_vietstock folder, create a file named .env with the following content:

REDIS_HOST=localhost
PROXY=yes
TORPROXY_HOST=localhost
USER_COOKIE=<YOUR_VIETSTOCK_USER_COOKIE>

Run Redis and Torproxy

You still need to run these inside containers:

docker run -d -p 6379:6379 --rm --name scraper-redis redis

docker run -d -p 8118:8118 -p 9050:9050 --rm --name torproxy --env TOR_NewCircuitPeriod=10 --env TOR_MaxCircuitDirtiness=60 dperson/torproxy

Clear all previous running files (if any)

Go to the functions_vietstock folder:

cd functions_vietstock

Run the celery_stop.sh script:

./celery_stop.sh

Use the userinput script to scrape

Use the ./userinput.sh script to scrape as in the previous section.

Scrape Results

CorporateAZ Overview

File location

If you chose to scrape a list of all business types, industries and their tickers, the result is stored in the ./localData/overview folder, under the file name bizType_ind_tickers.csv.

File preview (shortened)

ticker,biztype_id,bizType_title,ind_id,ind_name
BID,3,Bank,1000,Finance and Insurance
CTG,3,Bank,1000,Finance and Insurance
VCB,3,Bank,1000,Finance and Insurance
TCB,3,Bank,1000,Finance and Insurance
...
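
To illustrate how this overview file can feed a ticker-based scrape, here is a small sketch that assumes pandas (any CSV reader would do) and pulls the tickers of one business type-industry combination:

import pandas as pd

# Load the overview file produced by the CorporateAZ scrape
df = pd.read_csv("localData/overview/bizType_ind_tickers.csv")

# Tickers for business type 3 (Bank) in industry 1000 (Finance and Insurance),
# per the preview above
banks = df[(df["biztype_id"] == 3) & (df["ind_id"] == 1000)]["ticker"].tolist()
print(banks)  # e.g. ['BID', 'CTG', 'VCB', 'TCB', ...]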

FinanceInfo

File location

FinanceInfo results are stored in the ./localData/financeInfo folder, and each file is named in the form ticker_reportType_reportTermName_page.json, representing one ticker - report type - report term - page combination.

File preview (shortened)

[
    [
        {
            "ID": 4,
            "Row": 4,
            "CompanyID": 2541,
            "YearPeriod": 2017,
            "TermCode": "N",
            "TermName": "Năm",
            "TermNameEN": "Year",
            "ReportTermID": 1,
            "DisplayOrdering": 1,
            "United": "HN",
            "AuditedStatus": "KT",
            "PeriodBegin": "201701",
            "PeriodEnd": "201712",
            "TotalRow": 14,
            "BusinessType": 1,
            "ReportNote": null,
            "ReportNoteEn": null
        },
        {
            "ID": 3,
            "Row": 3,
            "CompanyID": 2541,
            "YearPeriod": 2018,
            "TermCode": "N",
            "TermName": "Năm",
            "TermNameEN": "Year",
            "ReportTermID": 1,
            "DisplayOrdering": 1,
            "United": "HN",
            "AuditedStatus": "KT",
            "PeriodBegin": "201801",
            "PeriodEnd": "201812",
            "TotalRow": 14,
            "BusinessType": 1,
            "ReportNote": null,
            "ReportNoteEn": null
        },
        {
            "ID": 2,
            "Row": 2,
            "CompanyID": 2541,
            "YearPeriod": 2019,
            "TermCode": "N",
            "TermName": "Năm",
            "TermNameEN": "Year",
            "ReportTermID": 1,
            "DisplayOrdering": 1,
            "United": "HN",
            "AuditedStatus": "KT",
            "PeriodBegin": "201901",
            "PeriodEnd": "201912",
            "TotalRow": 14,
            "BusinessType": 1,
            "ReportNote": null,
            "ReportNoteEn": null
        },
        {
            "ID": 1,
            "Row": 1,
            "CompanyID": 2541,
            "YearPeriod": 2020,
            "TermCode": "N",
            "TermName": "Năm",
            "TermNameEN": "Year",
            "ReportTermID": 1,
            "DisplayOrdering": 1,
            "United": "HN",
            "AuditedStatus": "KT",
            "PeriodBegin": "202001",
            "PeriodEnd": "202112",
            "TotalRow": 14,
            "BusinessType": 1,
            "ReportNote": null,
            "ReportNoteEn": null
        }
    ],
    {
        "Balance Sheet": [
            {
                "ID": 1,
                "ReportNormID": 2995,
                "Name": "TÀI SẢN ",
                "NameEn": "ASSETS",
                "NameMobile": "TÀI SẢN ",
                "NameMobileEn": "ASSETS",
                "CssStyle": "MaxB",
                "Padding": "Padding1",
                "ParentReportNormID": 2995,
                "ReportComponentName": "Cân đối kế toán",
                "ReportComponentNameEn": "Balance Sheet",
                "Unit": null,
                "UnitEn": null,
                "OrderType": null,
                "OrderingComponent": null,
                "RowNumber": null,
                "ReportComponentTypeID": null,
                "ChildTotal": 0,
                "Levels": 0,
                "Value1": null,
                "Value2": null,
                "Value3": null,
                "Value4": null,
                "Vl": null,
                "IsShowData": true
            },
            {
                "ID": 2,
                "ReportNormID": 3000,
                "Name": "A. TÀI SẢN NGẮN HẠN",
                "NameEn": "A. SHORT-TERM ASSETS",
                "NameMobile": "A. TÀI SẢN NGẮN HẠN",
                "NameMobileEn": "A. SHORT-TERM ASSETS",
                "CssStyle": "LargeB",
                "Padding": "Padding1",
                "ParentReportNormID": 2996,
                "ReportComponentName": "Cân đối kế toán",
                "ReportComponentNameEn": "Balance Sheet",
                "Unit": null,
                "UnitEn": null,
                "OrderType": null,
                "OrderingComponent": null,
                "RowNumber": null,
                "ReportComponentTypeID": null,
                "ChildTotal": 25,
                "Levels": 1,
                "Value1": 4496051.0,
                "Value2": 4971364.0,
                "Value3": 3989369.0,
                "Value4": 2142717.0,
                "Vl": null,
                "IsShowData": true
            },
...

Please note that you have to determine for yourself whether the order of the financial values (Value1 through Value4) matches the order of the periods.
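
To make that check concrete, here is a minimal sketch that loads one of these files and tentatively pairs the Value1..Value4 columns with the periods listed in the file's first element. The file name is hypothetical, and the pairing order is exactly the assumption you are expected to verify:

import json

# Hypothetical file name following the ticker_reportType_reportTermName_page.json pattern
with open("localData/financeInfo/VCB_CDKT_Nam_1.json", encoding="utf-8") as f:
    data = json.load(f)

# First element: period metadata; second element: report sections keyed by name
periods = [p["YearPeriod"] for p in data[0]]   # e.g. [2017, 2018, 2019, 2020]
balance_sheet = data[1]["Balance Sheet"]

for item in balance_sheet:
    # Tentatively map Value1..Value4 onto the periods above -- verify this order
    values = [item.get(f"Value{i}") for i in range(1, len(periods) + 1)]
    print(item["NameEn"], dict(zip(periods, values)))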

Logs

Logs are stored in the ./logs folder, in the form of scrapySpiderName_log_verbose.log.

Debugging and How This Thing Works

What is Torproxy?

Quick introduction

Torproxy is a "Tor and Privoxy (web proxy configured to route through tor) docker container." See: https://github.com/dperson/torproxy. We need it in this project to avoid IP bans from scraping too much.

Configuration used in this project

The only two configuration variables I used with Torproxy are TOR_MaxCircuitDirtiness and TOR_NewCircuitPeriod, which set the maximum Tor circuit age (in seconds) and the period between attempts to change the Tor circuit (in seconds), respectively. In this project, TOR_MaxCircuitDirtiness is set to 60 seconds and TOR_NewCircuitPeriod to 10 seconds.

What is Redis?

"Redis is an open source (BSD licensed), in-memory data structure store, used as a database, cache, and message broker." See: https://redis.io/. In this project, Redis serves as a message broker and an in-memory queue for Scrapy. No non-standard Redis configurations were made for this project.

Debugging

Redis

If the scraper runs in a Docker container:

To open an interactive shell with Redis, you have to enter the container first:

docker exec -it functions-vietstock bash

Then:

redis-cli -h scraper-redis

If the scraper runs on the host:

To open an interactive shell with Redis:

docker exec -it scraper-redis redis-cli

Celery

Look inside each log file.

How This Scraper Works

This scraper utilizes scrapy-redis and Redis to crawl and scrape tickers' information with a top-down approach (going from business types, then industries, then tickers in each business type-industry combination), passing the necessary information into Redis queues for different Spiders to consume. The scraper also makes use of Torproxy to avoid IP bans.
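
To illustrate the queue hand-off, here is a sketch that assumes the redis Python client and scrapy-redis's default <spider_name>:start_urls key naming (the keys this project actually uses may differ); it is not part of the shipped code:

import redis

# Connect to the same Redis instance the spiders use
r = redis.Redis(host="scraper-redis", port=6379)

# Push a start URL onto a spider's queue; an idle scrapy-redis spider polls
# this key and begins crawling whatever lands here
r.lpush("financeInfo:start_urls", "https://finance.vietstock.vn/...")

# Inspect the queue length while the scrape is running
print(r.llen("financeInfo:start_urls"))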

Limitations and Lessons Learned

Limitations

  • When talking about a crawler/scraper, one must consider speed, among other things. That said, I haven't run a benchmark for this scraper project.
    • There are about 3000 tickers on the market, each with its own set of available report types, report terms and pages.
    • Scraping all historical financials of all those 3000 tickers will, I believe, be pretty slow, because we have to use Torproxy and there can be many pages for a ticker-report type-report term combination.
    • Scrape results are written to disk, so disk I/O is also a bottleneck if you want to mass-scrape. Of course, this matters less if you only scrape one or two tickers.
    • To mass-scrape, a distributed scraping architecture is desirable, not only for speed, but also for anonymity (not entirely if you use the same user cookie across machines). However, one should respect the API service provider (i.e., Vietstock) and avoid bombarding them with tons of requests in a short period of time.
  • Possibility of being banned on Vietstock? Yes.
    • Each request has a unique Vietstock user cookie on it, and thus you are identifiable when making each request.
    • As of now (May 2021), I still don't know how many concurrent requests the Vietstock server can handle at any given point. While this API is publicly open, it's not documented on Vietstock. Because of this, I recently added a throttling feature to the financeInfo Spider to avoid bombarding Vietstock's server. See financeInfo's configuration file.
  • Constantly changing the Tor circuit may be harmful to the Tor network.
    • Looking at Tor metrics, we see that the number of exit nodes is below 2000. By changing circuits as we scrape, we will eventually expose almost all of the available exit nodes to the Vietstock server, which in turn undermines the purpose of avoiding bans.
    • In addition, in an unlikely circumstance, interested users who want to use the Tor network to view a Vietstock page may not be able to do so, because the exit node may have been banned.
  • Scrape results are as-is and not processed.
    • As mentioned, scrape results are currently stored on disk as JSONs, and a unified format for financial statements has not been produced. Thus, to fully integrate this scraping process with an analysis project, you must do a lot of data standardization.
  • There is no user-friendly interface to monitor the Redis queue, and I haven't looked much into this.

Lessons learned

  • Utilizing Redis creates a nice and smooth workflow for mass scraping data, provided that the paths to data can be logically determined (e.g., in the form of pagination).
  • Using proxies cannot offer full anonymity while scraping, because you have to use a user cookie to access the data anyway.
  • Packing inter-dependent services with Docker Compose helps create a cleaner and more professional-looking code base.

Disclaimer

  • This project is completed for educational and non-commercial purposes only.
  • The scrape results are as-is from Vietstock API and without any modification. Thus, you are responsible for your own use of the data scraped using this project.
  • Vietstock has every right to modify or remove access to the API used in this project at any time, without notice. I am not responsible for updating this project's access to their API in a prompt manner, nor for any consequences to your use of this project resulting from such a change.