A flexible data historian based on InfluxDB, Grafana, MQTT and more. Free, open, simple.

Overview

Kotori

(Figure: chart recorder)

Telemetry data acquisition and sensor networks for humans.



At a glance

Kotori is a data historian based on InfluxDB, Grafana, MQTT and more. Free, open, simple.

It is a telemetry data acquisition, time-series processing, and graphing toolkit, aiming to become a fully integrated data historian. It supports scientific environmental monitoring projects, distributed sensor networks, and similar scenarios.

The best way to find out more about Kotori is to look at how others already use it. Visit the gallery to read about some examples where Kotori has been used.

Features

The key features are:

  • Multi-channel, multi-protocol data acquisition and storage.
  • Built-in sensor adapters, flexible configuration, durable database storage, and unattended graph visualization out of the box.
  • Built on an infrastructure toolkit assembled from components for data acquisition, storage, fusion, graphing, and more.
  • Suitable for building flexible telemetry solutions in different scenarios: data logging systems, test benches, sensor networks for environmental monitoring, and other data-gathering and aggregation projects.
  • Integrates well with established hardware, software, and data acquisition workflows through flexible adapter interfaces.

Technologies

Kotori builds upon a number of fine infrastructure components and technologies, and supports a number of protocols in one way or another. Standing on the shoulders of giants.

Installation

Kotori can be installed from a Debian package, from the Python Package Index (PyPI), or from the Git repository. Please refer to the corresponding installation instructions:

https://getkotori.org/docs/setup/

Examples

Data acquisition

Submitting measurement data is easy and flexible; both MQTT and HTTP are supported.

First, let's define a data acquisition channel:

CHANNEL=amazonas/ecuador/cuyabeno/1

and some data to submit:

DATA='{"temperature": 42.84, "humidity": 83.1}'

MQTT:

MQTT_BROKER=daq.example.org
echo "$DATA" | mosquitto_pub -h $MQTT_BROKER -t $CHANNEL/data.json -l

HTTP:

HTTP_URI=https://daq.example.org/api
echo "$DATA" | curl --request POST --header 'Content-Type: application/json' --data @- $HTTP_URI/$CHANNEL/data
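The addressing convention used above can be sketched in a few lines of Python: the MQTT topic is the channel path plus a "/data.json" suffix, and the HTTP endpoint is the channel path below the API base plus "/data". This is an illustration only; the helper functions are invented for this sketch, and daq.example.org is a placeholder host.

```python
import json

def mqtt_topic(channel: str) -> str:
    # Measurements are published to the channel path plus a "/data.json" suffix.
    return f"{channel}/data.json"

def http_endpoint(api_base: str, channel: str) -> str:
    # Measurements are POSTed to "<api base>/<channel>/data".
    return f"{api_base.rstrip('/')}/{channel}/data"

channel = "amazonas/ecuador/cuyabeno/1"
payload = json.dumps({"temperature": 42.84, "humidity": 83.1})

print(mqtt_topic(channel))
# amazonas/ecuador/cuyabeno/1/data.json
print(http_endpoint("https://daq.example.org/api", channel))
# https://daq.example.org/api/amazonas/ecuador/cuyabeno/1/data
```

Both transports thus share a single channel address, which is what makes it easy to switch between MQTT and HTTP without reconfiguring anything on the server side.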

Data export

Measurement data can be exported in a variety of formats.

This is a straightforward example of CSV data export:

http $HTTP_URI/$CHANNEL/data.csv
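The export URL is simply the channel path with a format suffix; elsewhere in this document the same endpoint is seen with query parameters such as from and to for selecting a time range. A minimal sketch of URL construction, assuming these conventions (the export_url helper is hypothetical):

```python
from urllib.parse import urlencode

def export_url(api_base: str, channel: str, fmt: str = "csv", **params) -> str:
    # The filename suffix (csv, json, txt, png, ...) selects the export format;
    # optional query parameters (e.g. from/to) narrow down the time range.
    url = f"{api_base.rstrip('/')}/{channel}/data.{fmt}"
    return f"{url}?{urlencode(params)}" if params else url

print(export_url("https://daq.example.org/api", "amazonas/ecuador/cuyabeno/1"))
# https://daq.example.org/api/amazonas/ecuador/cuyabeno/1/data.csv
```

Note that "from" is a Python keyword, so time-range parameters would be passed as export_url(..., **{"from": "2019-01-01", "to": "2019-02-01"}).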

Acknowledgements

Thanks to all the contributors who helped to co-create and conceive Kotori in one way or another. You know who you are.

License

This project is licensed under the terms of the AGPL license.

Comments
  • Installation from .deb package on Ubuntu 18.04 fails

    Hi! I installed the latest kotori package (0.22.7) for amd64 in my Ubuntu 18.04 LTS machine, but the kotori service fails to start. My machine has the architecture x86_64. If anyone could please help.

    ● kotori.service
       Loaded: not-found (Reason: No such file or directory)
       Active: failed (Result: exit-code) since Thu 2019-05-30 14:56:34 UTC; 19h ago
     Main PID: 15597 (code=exited, status=1/FAILURE)
    
    May 30 14:56:34 igup-be systemd[1]: kotori.service: Main process exited, code=exited, status=1/FAILURE
    May 30 14:56:34 igup-be systemd[1]: kotori.service: Failed with result 'exit-code'.
    May 30 14:56:34 igup-be systemd[1]: kotori.service: Service hold-off time over, scheduling restart.
    May 30 14:56:34 igup-be systemd[1]: kotori.service: Scheduled restart job, restart counter is at 5.
    May 30 14:56:34 igup-be systemd[1]: Stopped Kotori data acquisition and graphing toolkit.
    May 30 14:56:34 igup-be systemd[1]: kotori.service: Start request repeated too quickly.
    May 30 14:56:34 igup-be systemd[1]: kotori.service: Failed with result 'exit-code'.
    May 30 14:56:34 igup-be systemd[1]: Failed to start Kotori data acquisition and graphing toolkit.
    

    The logs from kotori are:

    from .compat import unicode
      File "/opt/kotori/lib/python2.7/site-packages/twisted/python/compat.py", line 611, in <module>
        import cookielib
      File "/usr/lib/python2.7/cookielib.py", line 32, in <module>
        import re, urlparse, copy, time, urllib
      File "/usr/lib/python2.7/copy.py", line 52, in <module>
        import weakref
      File "/usr/lib/python2.7/weakref.py", line 14, in <module>
        from _weakref import (
    ImportError: cannot import name _remove_dead_weakref
    

    Regards

    opened by RuiPinto96 33
  • Kotori using Weewx (MQTT) - ERROR: Processing MQTT message failed from topic "weewx//loop"

    Hello!!

    I have a weather station with a Vantage Pro 2 console and I'm trying to use Kotori and Weewx to publish data into Grafana and store the weather data in InfluxDB. I'm using an MQTT broker (Mosquitto) to send and receive the data from weewx to kotori. But I have a problem processing MQTT messages from the topic. The topic that I used is 'weewx/#'.

    I am getting this error from kotori.log:

    ERROR   : Processing MQTT message failed from topic "weewx//loop"
    

    If anyone could please help...

    This is the error (kotori.log):

        2019-05-22T15:24:17+0100 [kotori.daq.services.mig            ] ERROR   : Processing MQTT message failed from topic "weewx//loop":
    
        [Failure instance: Traceback: <type 'exceptions.AttributeError'>: 'dict' object has no attribute 'slot'
    /usr/lib/python2.7/threading.py:801:__bootstrap_inner
    /usr/lib/python2.7/threading.py:754:run
    /opt/kotori/lib/python2.7/site-packages/twisted/_threads/_threadworker.py:46:work
    /opt/kotori/lib/python2.7/site-packages/twisted/_threads/_team.py:190:doWork
    --- <exception caught here> ---
    /opt/kotori/lib/python2.7/site-packages/twisted/python/threadpool.py:250:inContext
    /opt/kotori/lib/python2.7/site-packages/twisted/python/threadpool.py:266:<lambda>
    /opt/kotori/lib/python2.7/site-packages/twisted/python/context.py:122:callWithContext
    /opt/kotori/lib/python2.7/site-packages/twisted/python/context.py:85:callWithContext
    /opt/kotori/lib/python2.7/site-packages/kotori/daq/services/mig.py:234:process_message
    /opt/kotori/lib/python2.7/site-packages/kotori/daq/services/mig.py:92:topology_to_storage
    /opt/kotori/lib/python2.7/site-packages/kotori/daq/intercom/strategies.py:85:topology_to_storage
    
    opened by RuiPinto96 18
  • How data is flowing through Kotori

    Coming from #12, I have a question about the export of data from InfluxDB in the weewx.ini configuration file.

    ; ----------------------------------------------------------------------
    ; Data export
    ; https://getkotori.org/docs/handbook/export/
    ; https://getkotori.org/docs/applications/forwarders/http-to-influx.html
    ; ----------------------------------------------------------------------
    [weewx.data-export]
    enable = true
    
    type = application
    application = kotori.io.protocol.forwarder:boot
    
    realm = weewx
    source = http:/api/{realm:weewx}/{network:.*}/{gateway:.*}/{node:.*}/{slot:(data|event)}.{suffix} [GET]
    target = influxdb:/{database}?measurement={measurement}
    transform = kotori.daq.intercom.strategies:WanBusStrategy.topology_to_storage,
    kotori.io.protocol.influx:QueryTransformer.transform
    

    My question is about the source, target and transform options. Does the source option issue an HTTP GET request against the database, or does it take the JSON payload published on the topic and transform it in order to put it into Influx?

    The target option I understand: it defines the database and measurement where the data is put.

    The transform option I don't really understand; I just know you transform the JSON parameters into queries.

    Thanks a lot for the help already. Best Regards.

    opened by RuiPinto96 10
  • Optimize packaging

    We've learned from @RuiPinto96 and @Dewieinns through #7, #19 and #22 that the packaging might not be done appropriately.

    Within this issue, we will try to walk through any issues observed. Thanks again for your feedback, we appreciate that very much!

    opened by amotl 9
  • How to export data from InfluxDB

    Hi,

    thank you for creating Kotori, it is really useful.

    I have a setup that uses weewx to acquire weather data, saves it to InfluxDB and displays graphs via Grafana.

    So far everything works as expected:

    [wetter]
    enable      = true
    type        = application
    realm       = wetter
    mqtt_topics = wetter/#
    application = kotori.daq.application.mqttkit:mqttkit_application
    
    # How often to log metrics
    metrics_logger_interval = 60
    

    My question is about data export which currently yields Connection reset by peer regardless of the requested URL.

    [wetter.influx-data-export]
    enable          = true
    
    type            = application
    application     = kotori.io.protocol.forwarder:boot
    
    realm           = wetter
    source          = http:/api/{realm:wetter}/{network:.*}/{gateway:.*}/{node:.*}/{slot:(data|event)}.{suffix} [GET]
    target          = influxdb:/{database}?measurement={measurement}
    transform       = kotori.daq.intercom.strategies:WanBusStrategy.topology_to_storage,
                      kotori.io.protocol.influx:QueryTransformer.transform
    
    [kotori.io.protocol.forwarder       ] INFO    : Starting ProtocolForwarderService(wetter.influx-data-export-forwarder)
    [kotori.io.protocol.forwarder       ] INFO    : Forwarding payloads from http:/api/{realm:wetter}/{network:.*}/{gateway:.*}/{node:.*}/{slot:(data|event)}.{suffix} [GET] to influxdb:/{database}?measurement={measurement}
    [kotori.io.protocol.http            ] INFO    : Initializing HttpChannelContainer
    [kotori.io.protocol.http            ] INFO    : Connecting to Metadata storage database
    [kotori.io.protocol.http            ] INFO    : Starting HTTP service on localhost:24642
    [kotori.io.protocol.http.LocalSite  ] INFO    : Starting factory <kotori.io.protocol.http.LocalSite instance at 0x7fcb3495b1e0>
    [kotori.io.protocol.http            ] INFO    : Registering endpoint at path '/api/{realm:wetter}/{network:.*}/{gateway:.*}/{node:.*}/{slot:(data|event)}.{suffix}' for methods [u'GET']
    [kotori.io.protocol.target          ] INFO    : Starting ForwarderTargetService(wetter-wetter.influx-data-export) for serving address influxdb:/{database}?measurement={measurement} []
    
    $ curl http://localhost:24642/api/wetter/de/ogd/oben_sensors/data.csv
    curl: (56) Recv failure: Connection reset by peer
    

    The InfluxDB database is called wetter_de. Grafana successfully runs queries like SELECT mean(windSpeed_kph) FROM wetter_de.autogen.ogd_oben_sensors WHERE time >= now() - 5m GROUP BY time(500ms).

    I don't know how I need to construct the data export URL to get data from InfluxDB. As far as I understand from the docs http://localhost:24642/api/wetter/de/ogd/oben_sensors/data.csv is transformed as follows:

    • wetter/de is translated to the wetter_de database,
    • ogd/oben_sensors is translated to the ogd_oben_sensors measurement

    Unfortunately, Kotori does not log my HTTP requests and their transformations. InfluxDB also does not log any failing queries. Can you help me find what goes wrong?

    I'm also not sure why I need to specify the realm twice in the export ini:

    realm           = wetter
    source          = http:/api/{realm:wetter}/...
    
    opened by agross 9
  • Installation on RaspberryPi using Docker

    Hi,

    lots of Home Automation enthusiasts use a Raspberry Pi (model 3, 3plus or 4) for their computing needs. I made the mistake of perhaps not reading through all steps and tried the docker install and failed.

    • Issue 1 was the Grafana permissions - but resolved.
    • Issue 2: no MongoDB for the Pi.
    • Issue 3: I read that MongoDB is optional, so I tried docker run -it --rm daqzilla/kotori kotori --version. Nope!
    $ docker run -it --rm daqzilla/kotori kotori --version
    Unable to find image 'daqzilla/kotori:latest' locally
    latest: Pulling from daqzilla/kotori
    68ced04f60ab: Pull complete 
    0f5503414412: Pull complete 
    Digest: sha256:ff3d0a569de75fda447ad108a2ec664d8aaf545ded82ecd8c9010fc50817f94b
    Status: Downloaded newer image for daqzilla/kotori:latest
    standard_init_linux.go:211: exec user process caused "exec format error"
    failed to resize tty, using default size
    

    I am a Linux noob, so maybe I am doing it all wrong, or perhaps Kotori is not for the Pi.

    Would love to hear from you as I am quite excited with how you have brought MQTT (even Tasmota!!), InfluxDB and Grafana all together.

    Cheers and best wishes!

    opened by timaseth 7
  • Packaging: `make package-baseline-images` croaks when building arm64v8 images

    Hi there,

    while working on #64, by running the packaging machinery on a Linux system, we discovered that the Building baseline image for Debian "bullseye" on arm64v8 step croaks when installing libc-bin, coming from

    RUN apt-get install --yes --no-install-recommends inetutils-ping nano git build-essential pkg-config libffi-dev ruby ruby-dev
    

    The error manifests itself as

    Processing triggers for libc-bin (2.31-13+deb11u2) ...
    qemu: uncaught target signal 11 (Segmentation fault) - core dumped
    Segmentation fault
    qemu: uncaught target signal 11 (Segmentation fault) - core dumped
    Segmentation fault
    dpkg: error processing package libc-bin (--configure):
     installed libc-bin package post-installation script subprocess returned error exit status 139
    
    Errors were encountered while processing:
     libc-bin
    E: Sub-process /usr/bin/dpkg returned an error code (1)
    The command '/bin/sh -c apt-get install --yes --no-install-recommends     inetutils-ping nano git     build-essential pkg-config libffi-dev     ruby ruby-dev' returned a non-zero code: 100
    Command failed
    make: *** [packaging/tasks.mk:13: package-baseline-images] Error 1
    

    With kind regards, Andreas.

    opened by amotl 5
  • Kotori server does not start

    Good morning, I am new to installing Kotori. I have followed all the steps on your page, but I get to the moment where I have to open the server at http://kotori.example.org:3000/ and it does not work. I've started the server with "systemctl start kotori" and it doesn't work either. When I get to the step of the tutorial where you have to run the command:

    mosquitto_pub -t $CHANNEL_TOPIC -m '{"temperature": 42.84, "humidity": 83.1}'
    

    in the terminal I get this: "Error: Connection Refused". I am using Linux Mint Ulyssa as the operating system.

    Can someone help me with this please?

    Thank you

    opened by Cr4ck3r32 5
  • Building packages for Debian 11 (Bullseye) fails

    Hi,

    friends of Kotori have been trying to build .deb packages for Debian 11 (Bullseye). They are on a Linux environment (Intel x86, current Linux kernel, Docker 20.10.11, running within a KVM).

    So, what they are looking at, would be to run this command successfully.

    make package-debian flavor=full dist=bullseye arch=amd64 version=0.26.12
    

    However, the problem is that the preparation command make package-baseline-images already croaks.

    standard_init_linux.go:228: exec user process caused: exec format error
    The command '/bin/sh -c apt-get update && apt-get upgrade --yes' returned a non-zero code: 1
    Command failed
    

    With kind regards, Andreas.

    opened by amotl 3
  • "Basic example with MQTT" fails

    With apologies if I'm missing something terribly obvious (as I'm very much a n00b to MQTT, InfluxDB, and Grafana), the "basic example with MQTT" from the docs (https://getkotori.org/docs/getting-started/basic-mqtt.html) isn't working for me.

    Background

    I installed Kotori on a fresh, updated Ubuntu 18 server following the instructions at https://getkotori.org/docs/setup/debian.html. I then went to the "Getting Started" documentation and tried to follow the example with MQTT. The snippet in amazonas.ini looks like this:

    [amazonas]
    enable      = true
    type        = application
    realm       = amazonas
    mqtt_topics = amazonas/#
    application = kotori.daq.application.mqttkit:mqttkit_application
    

    But when I run the mosquitto_pub command, I get an error in the kotori log:

    2020-12-07T19:59:39-0500 [kotori.daq.graphing.grafana.manager] INFO    : Provisioning Grafana dashboard "amazonas-ecuador" for database "amazonas_ecuador" and measurement "cuyabeno_1_sensors"
    2020-12-07T19:59:39-0500 [kotori.daq.graphing.grafana.api    ] INFO    : Checking/Creating datasource "amazonas_ecuador"
    2020-12-07T19:59:40-0500 [kotori.daq.services.mig            ] ERROR   : Grafana provisioning failed for storage={"node": "1", "slot": "data.json", "realm": "amazonas", "network": "ecuador", "database": "amazonas_ecuador", "measurement_events": "cuyabeno_1_events", "label": "cuyabeno_1", "measurement": "cuyabeno_1_sensors", "gateway": "cuyabeno"}, message={u'temperature': 42.84, u'humidity': 83.1}:
    	[Failure instance: Traceback: <class 'grafana_api_client.GrafanaUnauthorizedError'>: Unauthorized
    	/opt/kotori/lib/python2.7/site-packages/twisted/python/threadpool.py:250:inContext
    	/opt/kotori/lib/python2.7/site-packages/twisted/python/threadpool.py:266:<lambda>
    	/opt/kotori/lib/python2.7/site-packages/twisted/python/context.py:122:callWithContext
    	/opt/kotori/lib/python2.7/site-packages/twisted/python/context.py:85:callWithContext
    	--- <exception caught here> ---
    	/opt/kotori/lib/python2.7/site-packages/kotori/daq/services/mig.py:269:process_message
    	/opt/kotori/lib/python2.7/site-packages/kotori/daq/graphing/grafana/manager.py:129:provision
    	/opt/kotori/lib/python2.7/site-packages/kotori/daq/graphing/grafana/manager.py:85:create_datasource
    	/opt/kotori/lib/python2.7/site-packages/kotori/daq/graphing/grafana/api.py:104:create_datasource
    	/opt/kotori/lib/python2.7/site-packages/grafana_api_client/__init__.py:73:create
    	/opt/kotori/lib/python2.7/site-packages/grafana_api_client/__init__.py:64:make_request
    	/opt/kotori/lib/python2.7/site-packages/grafana_api_client/__init__.py:171:make_raw_request
    	]
    

    I'm sure it's as a result of this that browsing to ip:3000/dashboard/db/ecuador/ fails with a "Dashboard not found" message.

    I'd appreciate a pointer in the right direction.

    opened by danb35 3
  • Installation/Configuration on fresh install of Debian 10 fails.

    Hey @amotl I am starting a new thread to follow up with a post I made where similar was happening as I'm experiencing slightly different variations now.

    After my reply last evening I set about working with a new VM. I didn't properly document everything I had done so this morning I started over COMPLETELY fresh. I was seeing weird issues where systemctl didn't seem to be running and the VM I was using had only one cpu/core. I assumed this was the reason for performance issues I was seeing so I wiped it, made a new VM (on a different drive in my server even) and set about installing Debian (debian-10.3.0-amd64-netinst.iso)

    I opted not to install the GUI and enabled the SSH server.

    With Debian successfully installed I installed screen and then started following the Setup on Debian guide again.

    Note: When I started following this guide initially (yesterday) it wasn't obvious to me (I'm a n00b) that I needed to install the package source for Debian Stretch (9) OR Debian Buster (10).

    This time I added only the package source I needed (Buster) and was good to go... until:

    Add GPG key for checking package signatures: wget -qO - https://packages.elmyra.de/elmyra/foss/debian/pubkey.txt | apt-key add -

    Error: E: gnupg, gnupg2 and gnupg1 do not seem to be installed, but one of them is required for this operation

    Fix: apt-get install gnupg

    All is good and I set about installing Kotori as well as recommended and suggested packages (14.9GB - takes about an hour and a half on my slow internet connection)

    Next thing prompted for is "Configuring jackd2"

    Do I want to run with realtime priority? (It explains this may lead to lock-ups.) <-- Selected No

    then followed:

    Configuring Kerberos Authentication:
    Default Kerberos version 5 realm: pre-populated with my domain <-- left it as is
    Kerberos servers for your realm: nothing pre-populated <-- left it blank
    Administrative server for your Kerberos realm: nothing pre-populated <-- left it blank

    It then goes through and installs everything - this takes some time - until it gets to the end where there are a couple of errors displayed:

    Setting up mh-e (8.5-2.1) ...
    ERROR: mh-e is broken - called emacs-package-install as a new-style add-on, but has no compat file.
    

    and then:

    Errors were encountered while processing:
     lirc
     lirc-x
    E: Sub-process /usr/bin/dpkg returned an error code (1)
    

    At this point I rebooted the VM using systemctl reboot

    After reboot I noticed VM was SLOW again - mega slow.

    I again ran apt update, all packages were up to date.

    I then again ran apt install --install-recommends --install-suggests kotori and noticed the following:

    0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
    2 not fully installed or removed.
    

    I hit Y to continue and it looked like everything was successful this time.

    I attempted to execute systemctl start influxdb grafana-server but received the following warning again:

    System has not been booted with systemd as init system (PID 1). Can't operate.
    Failed to connect to bus: Host is down
    

    It was after this that I wiped the VM last time thinking I had done something mega wrong...

    I then installed open-vm-tools on the VM which allowed me to gracefully reboot the system via the VM Host console.

    Upon rebooting systemctl still isn't running... at this point I'm kind of at a loss. I was farther than this trying to install it on Ubuntu yesterday.

    Sorry for the wall of information but I wanted to be thorough enough you may be able to provide some insight as to what's going on (Hopefully something more than "that's really weird" haha)

    If you wish me to get in touch directly I can do this also.

    opened by Dewieinns 3
  • Unhandled exception: module 'pandas' has no attribute 'tslib'

    This request croaks.

    https://swarm.hiveeyes.org/api/hiveeyes/25a0e5df-9517-405b-ab14-cb5b514ac9e8/3756782252718325761/1/data.png?include=wght2&from=20160519T040000&to=20160519T170000&renderer=ggplot

    bug 
    opened by amotl 0
  • Problem when using unicode characters in channel name or field name

    From the backlog at [1], it is from 2017 already.

    Problem when using unicode characters like "Niederkrüchten-Overhetfeld" or field names like "Temperatur außen".

    exceptions.UnicodeEncodeError: 'ascii' codec can't encode character u'\xdf' in position 13: ordinal not in range(128)
    

    Example URL: https://swarm.hiveeyes.org/api/hiveeyes/testdrive-aw/Niederkr%C3%BCchten-Overhetfeld/node-001/data.txt?from=2016-01-01

    [1] https://github.com/daq-tools/kotori/blob/0.26.12/doc/source/development/backlog.rst#2017-02-06-1

    bug 
    opened by amotl 0
  • Support receiving data via AMQP

    Hi there,

    @tonkenfo started playing with AMQP on his Ratrack setup the other day [^1]. It would be sweet if Kotori could support it on the ingest side as a first-class citizen.

    With kind regards, Andreas.

    [^1]: On his experiments, I think he managed to use RabbitMQ as a message broker for both AMQP and MQTT.

    opened by amotl 0
  • Receiving telemetry data via UDP

    Hi there,

    @ClemensGruber just asked over at ^1 whether it would be possible to make Kotori receive UDP data from specific devices. I wanted to create this as a note here in order to have a place to track it.

    With kind regards, Andreas.

    /cc @easyhive, @jacobron

    opened by amotl 1
  • Add TTS (The Things Stack) / TTN (The Things Network) decoder adapter

    Dear @thiasB, @MKO1640 and @einsiedlerkrebs,

    finally, aiming to resolve #8, and attaching to your requests at [1] ff., this patch adds a basic decoder for TTS/TTN v3 Uplink Messages [2], submitted using Webhooks [3].

    It can be improved by adding further commits to the collab/tts-ttn branch, if you want to lend a hand. It will also need a dedicated documentation page within the "Channel decoders" section [4], ideally with a short but concise walkthrough tutorial on how to configure a corresponding webhook within the TTS/TTN console, like [5].

    You can invoke the specific test case after checking out the referenced branch by running those commands within the toplevel directory of the repository in two different shells:

    make start-foundation-services
    
    source .venv/bin/activate
    pytest -vvv -m ttn --capture=no
    

    With kind regards, Andreas.

    [1] https://community.hiveeyes.org/t/ttn-daten-an-kotori-weiterleiten/1422/34
    [2] https://www.thethingsindustries.com/docs/reference/data-formats/#uplink-messages
    [3] https://www.thethingsindustries.com/docs/integrations/webhooks/
    [4] https://getkotori.org/docs/handbook/decoders/
    [5] https://www.thethingsindustries.com/docs/integrations/webhooks/creating-webhooks/

    opened by amotl 2
Releases

  • 0.27.0 (Nov 26, 2022)

    What's Changed

    • Add documentation about running Kotori with RabbitMQ as MQTT broker, see Running Kotori with RabbitMQ
    • Allow connecting to individual MQTT broker per application
    • Improve MQTT logging when connection to broker fails
    • Make MQTT broker credential settings username and password optional
    • Add software tests for simulating all advanced actions against Grafana
    • Publish a single reading in JSON format to the MQTT broker and prove that a corresponding datasource and dashboard were created in Grafana.
    • Publish two subsequent readings in JSON format to the MQTT broker and prove that a corresponding datasource and dashboard were first created and then updated in Grafana.
    • Publish two subsequent readings to two different topics and prove that a corresponding datasource and a dashboard with two panels have been created in Grafana.
    • Publish two subsequent readings to two different topics and prove that a corresponding datasource and two dashboards have been created in Grafana.
    • Adjust logging format re. milli/microseconds
    • Because accessing dashboards by slug has been removed with Grafana 8, Kotori will now use the slug-name of the data channel for all of Grafana's uid, name and title fields.
  • Improve decoding of fractional epoch timestamps
    • Update to numpy<1.24 on Python >3.10
    • Replace Bunch with Munch

    Breaking changes

    • Stop converging latitude and longitude ingress fields to tags. It has been implemented as a convenience case when processing LDI data, but it is not applicable in standard data acquisition scenarios, specifically when recording positions of moving objects. Thanks, @tonkenfo.

    Infrastructure

    • Improve sandbox and CI setup, software tests and documentation
    • Update to Twisted <23
    • CI: Update to Grafana 7.5.17, 8.5.15, and 9.2.6
    • CI: Update to MongoDB 5.0
    • Tests: Remove nosetests test runner, replace with pytest
    • Build: Use python -m build for building sdist and wheel packages
    • Add support for Python 3.10 and 3.11
    • Drop support for Python 3.5 and 3.6
    • CI: Modernize GHA workflow recipe
    • Documentation: Add link checker and fix a few broken links
    • Documentation: Update to Sphinx 5

    Full Changelog: https://github.com/daq-tools/kotori/compare/0.26.12...0.27.0
