Backend app for visualizing CANedge log files in Grafana (directly from local disk or S3)

Overview

CANedge Grafana Backend - Visualize CAN/LIN Data in Dashboards

This project enables easy dashboard visualization of log files from the CANedge CAN/LIN data logger.

Specifically, a light-weight backend app loads, DBC decodes and parses MDF log files from local disk or an S3 server. This is done 'on demand' in response to query requests sent from a Grafana dashboard frontend by end users.

This project is currently in BETA - major changes will be made.

CAN Bus Grafana Dashboard

Backend vs. Writer

We provide two options for integrating your CANedge data with Grafana dashboards:

The CANedge Grafana Backend app only processes data 'when needed' by an end user - and requires no database. It is ideal when you have large amounts of data - as you only process the data you need to visualize.

The CANedge InfluxDB Writer processes data in advance (e.g. periodically or on-file-upload) and writes it to a database. It is ideal if dashboard loading speed is critical - but with the downside that data is processed & stored even if it is not used.

For details incl. 'pros & cons', see our intro to telematics dashboards.


Features

- allow users to visualize data from all of your devices & log files in Grafana 
- data is only processed "on request" - avoiding the need for costly databases
- data can be fetched from local disk or S3
- data can be visualized as soon as log files are uploaded to S3 for 'near real-time updates'
- the backend app can be easily deployed on e.g. your PC or AWS EC2 instance 
- plug & play dashboard templates & sample data let you get started quickly 
- view log file sessions & splits via Annotations, enabling easy identification of underlying data 
- allow end users control over what devices/signals are displayed via flexible Variables

Installation

In this section we detail how to deploy the app on a PC or an AWS EC2 instance.

Note: We recommend testing the local deployment with our sample data as a first step.


1: Deploy the integration locally on your PC

A local PC deployment is recommended if you wish to load data from an SD card, local disk or a MinIO S3 server.

Deploy the backend app locally

  • Install Python 3.7 for Windows (32 bit/64 bit) or Linux (enable 'Add to PATH')
  • Download this project as a zip via the green button and unzip it
  • Open the folder with the requirements.txt file and enter the below in your command prompt:
Windows
python -m venv env & env\Scripts\activate & pip install -r requirements.txt
python canedge_datasource_cli.py "file:///%cd%/LOG" --port 8080
Linux
python3 -m venv env && source env/bin/activate && pip install -r requirements.txt
python3 canedge_datasource_cli.py file:///$PWD/LOG --port 8080
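
As an optional check before configuring Grafana (assuming the default port above), you can verify that the backend responds, e.g. via curl:
curl http://localhost:8080/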

Set up Grafana locally

  • Install Grafana locally and enter http://localhost:3000 in your browser to open Grafana
  • In Configuration/Plugins install SimpleJson and TrackMap
  • In Configuration/DataSources select Add datasource and SimpleJson and set it as the 'default'
  • Enter the URL http://localhost:8080/, hit Save & test and verify that it works
  • In Dashboards/Browse click Import and load the dashboard-template-sample-data.json from this repo

You should now see the sample data visualized in Grafana.

Next: If you aim to work with CANedge2 data from AWS S3, go to step 2 - otherwise go to step 3.


2: Deploy the integration on AWS EC2 & Grafana Cloud

An AWS EC2 instance is recommended if you wish to load data from your AWS S3 bucket.

Deploy the backend app on AWS EC2

  • Login to AWS, search for EC2/Instances and click Launch instances
  • Select Ubuntu Server 20.04 LTS (HVM), SSD Volume Type, t3.small and proceed
  • In Step 6, click Add Rule/Custom TCP Rule and set Port Range to 8080
  • Launch the instance, then create & store your credentials (we will not use them for now)
  • Wait ~5 min, click on your instance and note your IP (the Public IPv4 address)
  • Click Connect/Connect to enter the GUI console, then enter the following:
sudo apt update && sudo apt install python3 python3-pip python3-venv tmux 
git clone https://github.com/CSS-Electronics/canedge-grafana-backend.git && cd canedge-grafana-backend
python3 -m venv env && source env/bin/activate && pip install -r requirements.txt
tmux
python3 canedge_datasource_cli.py file:///$PWD/LOG --port 8080

Set up Grafana Cloud

  • Set up a free Grafana Cloud account and log in
  • In Configuration/Plugins install SimpleJson and TrackMap (log out and in again)
  • In Configuration/DataSources select Add datasource and SimpleJson and set it as the 'default'
  • Replace your datasource URL with the http://[IP]:[port] endpoint and click Save & test
  • In Dashboards/Browse click Import and load the dashboard-template-sample-data.json from this repo

You should now see the sample data visualized in your imported dashboard. In the AWS EC2 console you can press ctrl + B then D to detach from the session, allowing it to run even when you close the GUI console.

Next: See step 3 on loading your AWS S3 data and step 5 on deploying the app as a service for production.


3: Load your own data & DBC files

Below we outline how to load your own data & DBC files.

Note: To activate your virtual environment use env\Scripts\activate (Linux: source env/bin/activate)

Load from local disk

  • Replace the sample LOG/ folder with your own LOG/ folder (or add an absolute path)
  • Verify that your data is structured as on the CANedge SD card, i.e. [device_id]/[session]/[split].MF4 (see the example layout after this list)
  • Add your DBC file(s) to the root of the folder
  • Verify that your venv is active and start the app
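
As a sketch, a LOG/ folder could be laid out as below (the device ID, session and split names are hypothetical placeholders):

LOG/
├── your-dbc-file.dbc
└── AABBCCDD/
    ├── 00000001/
    │   ├── 00000001.MF4
    │   └── 00000002.MF4
    └── 00000002/
        └── 00000001.MF4

With this layout, the app is started as in step 1, e.g. python canedge_datasource_cli.py "file:///%cd%/LOG" --port 8080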

Load from S3

  • Add your DBC file(s) to the root of your S3 bucket
  • Verify that your venv is active and start the app with the below syntax (use python3 on Linux/EC2; a filled-in example is shown after the endpoint examples)
python canedge_datasource_cli.py [endpoint] --port 8080 --s3_ak [access_key] --s3_sk [secret_key] --s3_bucket [bucket]
  • AWS S3 endpoint example: https://s3.eu-central-1.amazonaws.com
  • Google S3 endpoint example: https://storage.googleapis.com
  • MinIO S3 endpoint example: http://192.168.192.1:9000
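
For example (with a placeholder access key, secret key and bucket name), an AWS S3 deployment could be started as:
python3 canedge_datasource_cli.py https://s3.eu-central-1.amazonaws.com --port 8080 --s3_ak AKIAXXXXXXXXXXXXXXXX --s3_sk YYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYY --s3_bucket my-canedge-bucket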

Import simplified dashboard template

  • To get started, import the dashboard-template-simple.json to visualize your own data
  • After this, you can start customizing your panels as explained in step 4

Regarding DBC files

You can load as many DBC files as you want without reducing performance, as long as your queries only use one at a time (as is e.g. the case when using the simple dashboard template). However, if your queries need to use multiple DBC files, you may consider 'combining' your DBC files for optimal performance.

Regarding compression

We recommend enabling compression on the CANedge, as the compressed MFC files are 50%+ smaller and thus faster to load.


4: Customize your Grafana dashboard

The dashboard-template-sample-data.json can be used to see how to construct queries, incl. the below examples:

# create a fully customized query that depends on what the user selects in the dropdown 
{"device":"${DEVICE}","itf":"${ITF}","chn":"${CHN}","db":"${DB}","signal":"${SIGNAL}"}

# create a query for a panel that locks a signal, but keeps the device selectable
{"device":"${DEVICE}","itf":"CAN","chn":"CH2","db":"canmod-gps","signal":"Speed"}

# create a query for parsing multiple signals, e.g. for a TrackMap plot
{"device":"${DEVICE}","itf":"CAN","chn":"CH2","db":"canmod-gps","signal":"(Latitude|Longitude)"}

Bundle queries for multiple panels

When displaying multiple panels in your dashboard, it is critical to set up all queries in a single panel (as in our sample data template). All other panels can then be set up to refer to the original panel by setting the datasource to -- Dashboard --. For both the 'query panel' and the 'referring panels' you can then use the Transform tab to Filter data by query. This lets you specify which query should be displayed in which panel. The end result is that only one query is sent to the backend - meaning your CANedge log files are only processed once per update.

Set up Grafana Variables & Annotations

Grafana Variables allow users to dynamically control what is displayed in certain panels via dropdowns. For details on how the Variables are defined, see the template dashboard under Settings/Variables.

Similarly, Annotations can be used to display when a new log file 'session' or 'split' occurs, as well as display the log file name. This makes it easy to identify the log files underlying a specific view - and then find these via CANcloud or TntDrive for further processing.

Regarding performance

Using the 'zoom out' button repeatedly will currently generate a queue of requests, each of which will be processed by the backend. Until this is optimized, we recommend making a single request at a time - e.g. by using the time period selector instead of the 'zoom out' button.

Also, loading time increases when displaying long time periods (as all data for the period is processed on request).


5: Move to a production setup

Managing your EC2 tmux session

The below commands are useful for managing your tmux session while you're still testing your deployment.

  • tmux: Start a session
  • tmux ls: List sessions
  • tmux attach: Re-attach to session
  • tmux kill-session: Stop session

Deploy your app as an EC2 service for production

The above setup is suitable for development & testing. Once you're ready to deploy for production, you may prefer to set up a service. This ensures that your app automatically restarts after an instance reboot or a crash. To set it up as a service, follow the below steps:

  • Ensure you've followed the previous EC2 steps incl. the virtual environment
  • Update the ExecStart line in the canedge_grafana_backend.service 'unit file' with your S3 details
  • Upload the modified file somewhere it can be fetched via a public URL (e.g. an S3 bucket)
  • In your EC2 instance, use the below commands to deploy the file
sudo wget -N [your_file_url]
sudo cp canedge_grafana_backend.service /etc/systemd/system/
sudo systemctl daemon-reload
sudo systemctl start canedge_grafana_backend
sudo systemctl enable canedge_grafana_backend
sudo journalctl -f -u canedge_grafana_backend

The service should now be deployed, which you can verify via the console output. If you need to make updates to your unit file, simply repeat the above. You can stop the service via sudo systemctl stop [service].
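
For reference, and purely as an illustrative sketch (not the repository's actual unit file - the user, paths and S3 arguments are placeholders), a systemd unit file for this kind of deployment typically looks along these lines:

[Unit]
Description=CANedge Grafana backend
After=network.target

[Service]
# illustrative user/paths - adjust to match your EC2 setup
User=ubuntu
WorkingDirectory=/home/ubuntu/canedge-grafana-backend
# ExecStart is the line to update with your own S3 endpoint, keys and bucket
ExecStart=/home/ubuntu/canedge-grafana-backend/env/bin/python3 canedge_datasource_cli.py [endpoint] --port 8080 --s3_ak [access_key] --s3_sk [secret_key] --s3_bucket [bucket]
Restart=always

[Install]
WantedBy=multi-user.target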

Regarding EC2 costs

You can find details on AWS EC2 pricing here. A t3.small instance typically costs ~$0.02/hour, i.e. roughly $15-20/month (~$0.02 x 24 hours x 30 days ≈ $15). We recommend that you monitor usage early on during your tests to ensure that no unexpected costs occur. Note also that you do not pay for the data transfer from S3 into EC2 if both are deployed within the same region.

Regarding public EC2 IP

Note that rebooting your EC2 instance changes your endpoint IP - meaning you'll need to update your datasource URL. There are methods to set a fixed IP, though these are out of scope for this README.

Port forwarding a local deployment

If you want to access the data remotely, you can set up port forwarding. Below we outline how to port forward the backend app for use as a datasource in Grafana Cloud - but you could of course also port forward your local Grafana dashboard directly via port 3000.

  • Set up port forwarding on your WiFi router for port 8080
  • Run the app again (you may need to allow access via your firewall)
  • Find your public IP to get your endpoint as: http://[IP]:[port] (e.g. http://5.105.117.49:8080/)
  • In Grafana, add your new endpoint URL and click Save & test

Pending tasks

Below is a list of pending items:

  • Optimize Flask/Waitress session management for stability
  • Improve performance for multiple DBC files
  • Update code/guide for TLS-enabled deployment
  • Provide guidance on how to best scale the app for multiple front-end users
  • Determine if using Browser in SimpleJson datasource improves performance (requires TLS)