Random scripts and other bits for interacting with the SpaceX Starlink user terminal hardware

Overview

starlink-grpc-tools

This repository has a handful of tools for interacting with the gRPC service implemented on the Starlink user terminal (AKA "the dish").

For more information on what Starlink is, see starlink.com and/or the r/Starlink subreddit.

Prerequisites

Most of the scripts here are Python scripts. To use them, you will either need Python installed on your system, or you can use the Docker image. If you use the Docker image, you can skip the rest of the prerequisites other than Docker itself and making sure the dish IP is reachable. For Linux systems, the python package from your distribution should be fine, as long as it is Python 3. The JSON script should work with Python 2.7, but the grpc scripts all require Python 3 (and Python 2.7 is past end of life, so is not recommended anyway).

All the tools that pull data from the dish expect to be able to reach it at the dish's fixed IP address of 192.168.100.1, as do the Starlink Android app, iOS app, and the browser app you can run directly from http://192.168.100.1. When using a router other than the one included with the Starlink installation kit, this usually requires some additional router configuration to make it work. That configuration is beyond the scope of this document, but if the Starlink app doesn't work on your home network, then neither will these scripts. That being said, you do not need the Starlink app installed to make use of these scripts.

Running the scripts within a Docker container requires Docker to be installed. Information about how to install that can be found at https://docs.docker.com/engine/install/

parseJsonHistory.py operates on a JSON-format data representation of the protocol buffer messages, such as that output by grpcurl. The command lines below assume grpcurl is installed in the runtime PATH. If that's not the case, just substitute the full path to the command.

Required Python modules

If you don't care about the details or minimizing your package requirements, you can skip the rest of this section and just do this to install the latest versions of a superset of required modules:

pip install --upgrade -r requirements.txt

The scripts that don't use grpcurl to pull data require the grpcio Python package at runtime, and the optional step of generating the gRPC protocol module code requires the grpcio-tools package. Information about how to install both can be found at https://grpc.io/docs/languages/python/quickstart/. If you skip generation of the gRPC protocol modules, the scripts will instead require the yagrc Python package. Information about how to install that is at https://github.com/sparky8512/yagrc.

The scripts that use MQTT for output require the paho-mqtt Python package. Information about how to install that can be found at https://www.eclipse.org/paho/index.php?page=clients/python/index.php

The scripts that use InfluxDB for output require the influxdb Python package. Information about how to install that can be found at https://github.com/influxdata/influxdb-python. Note that this is the (slightly) older version of the InfluxDB client Python module, not the InfluxDB 2.0 client. It can still be made to work with an InfluxDB 2.0 server, but doing so requires using influx v1 CLI commands on the server to map the 1.x username, password, and database names to their 2.0 equivalents.
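As a hedged sketch of the InfluxDB 2.0 mapping described above (the database name, retention policy, username, password, and bucket ID below are all placeholders, and exact flags may vary by influx CLI version):

```shell
# Map a 1.x-style database name onto an existing 2.0 bucket so the
# older influxdb client module can talk to it.
influx v1 dbrp create --db starlinkstats --rp autogen \
    --bucket-id 0123456789abcdef --default

# Create 1.x-style username/password credentials for that bucket.
influx v1 auth create --username starlink --password 'changeme' \
    --read-bucket 0123456789abcdef --write-bucket 0123456789abcdef
```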

Note that the Python package versions available from various Linux distributions (i.e. installed via apt-get or similar) tend to run a bit behind those available to install via pip. While the distro packages should work OK as long as they aren't extremely old, they may not work as well as the later versions.

Generating the gRPC protocol modules

This step is no longer required, as the grpc scripts can now get the protocol module classes at run time via reflection. However, generating the protocol modules will improve script startup time, and it would be a good idea to at least stash away the protoset file emitted by grpcurl in case SpaceX ever turns off server reflection in the dish software.

The grpc scripts require some generated code to support the specific gRPC protocol messages used. These would normally be generated from .proto files that specify those messages, but to date (2020-Dec), SpaceX has not publicly released such files. The gRPC service running on the dish appears to have server reflection enabled, though. grpcurl can use that to extract a protoset file, and the protoc compiler can use that to make the necessary generated code:

grpcurl -plaintext -protoset-out dish.protoset 192.168.100.1:9200 describe SpaceX.API.Device.Device
mkdir src
cd src
python3 -m grpc_tools.protoc --descriptor_set_in=../dish.protoset --python_out=. --grpc_python_out=. spacex/api/device/device.proto
python3 -m grpc_tools.protoc --descriptor_set_in=../dish.protoset --python_out=. --grpc_python_out=. spacex/api/common/status/status.proto
python3 -m grpc_tools.protoc --descriptor_set_in=../dish.protoset --python_out=. --grpc_python_out=. spacex/api/device/command.proto
python3 -m grpc_tools.protoc --descriptor_set_in=../dish.protoset --python_out=. --grpc_python_out=. spacex/api/device/common.proto
python3 -m grpc_tools.protoc --descriptor_set_in=../dish.protoset --python_out=. --grpc_python_out=. spacex/api/device/dish.proto
python3 -m grpc_tools.protoc --descriptor_set_in=../dish.protoset --python_out=. --grpc_python_out=. spacex/api/device/wifi.proto
python3 -m grpc_tools.protoc --descriptor_set_in=../dish.protoset --python_out=. --grpc_python_out=. spacex/api/device/wifi_config.proto
python3 -m grpc_tools.protoc --descriptor_set_in=../dish.protoset --python_out=. --grpc_python_out=. spacex/api/device/transceiver.proto
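
Since the protoc invocations above differ only in the .proto path, they can equivalently be written as a loop:

```shell
# Generate all the protocol modules listed above in one pass.
for proto in device/device common/status/status device/command \
             device/common device/dish device/wifi device/wifi_config \
             device/transceiver; do
    python3 -m grpc_tools.protoc --descriptor_set_in=../dish.protoset \
        --python_out=. --grpc_python_out=. "spacex/api/$proto.proto"
done
```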

Then move the resulting files to where the Python scripts can find them in the import path, such as in the same directory as the scripts themselves.

Usage

Of the 3 groups below, the grpc scripts are really the only ones being actively developed. The others are mostly by way of example of what could be done with the underlying data.

The grpc scripts

This set of scripts includes dish_grpc_text.py, dish_grpc_influx.py, dish_grpc_sqlite.py, and dish_grpc_mqtt.py. They mostly support the same functionality, but write their output in different ways. dish_grpc_text.py writes data to standard output, dish_grpc_influx.py sends it to an InfluxDB server, dish_grpc_sqlite.py writes it to a sqlite database, and dish_grpc_mqtt.py sends it to an MQTT broker.

All 4 scripts support processing status data and/or history data in various modes. The status data is mostly what appears related to the dish in the Debug Data section of the Starlink app, whereas most of the data displayed in the Statistics page of the Starlink app comes from the history data. Specific status or history data groups can be selected by including their mode names on the command line. Run the scripts with the -h command line option to get a list of available modes. See the documentation at the top of starlink_grpc.py for detail on what each of the fields means within each mode group.

For example, all the currently available status groups can be output by doing:

python3 dish_grpc_text.py status obstruction_detail alert_detail

By default, dish_grpc_text.py (and parseJsonHistory.py, described below) will output in CSV format. You can use the -v option to instead output in a (slightly) more human-readable format.
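As a sketch of what you can do with that CSV output downstream, the standard csv module is enough to post-process it. The column names below are hypothetical placeholders; use the -H option to see the real header row for the modes you select:

```python
# Sketch: post-processing dish_grpc_text.py CSV output with the stdlib
# csv module. The sample data and column names here are hypothetical.
import csv
import io

sample = io.StringIO(
    "datetimestamp_utc,samples,total_ping_drop\n"
    "2021-01-01 00:00:00,3600,12.5\n"
    "2021-01-01 01:00:00,3600,8.0\n"
)

reader = csv.DictReader(sample)
rows = list(reader)

# Average the (hypothetical) total_ping_drop column across all rows.
avg_drop = sum(float(r["total_ping_drop"]) for r in rows) / len(rows)
print(avg_drop)
```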

To collect and record packet loss summary stats at the top of every hour, you could put something like the following in your user crontab (assuming you have moved the scripts to ~/bin and made them executable):

00 * * * * [ -e ~/dishStats.csv ] || ~/bin/dish_grpc_text.py -H >~/dishStats.csv; ~/bin/dish_grpc_text.py ping_drop >>~/dishStats.csv

By default, all of these scripts will pull data once, send it off to the specified data backend, and then exit. They can instead be made to run in a periodic loop by passing a -t option to specify the loop interval in seconds. For example, to capture status information to an InfluxDB server every 30 seconds, you could do something like this:

python3 dish_grpc_influx.py -t 30 [... probably other args to specify server options ...] status

Some of the scripts (currently only the InfluxDB one) also support specifying options through environment variables. See details in the scripts for the environment variables that map to options.

Bulk history data collection

dish_grpc_influx.py, dish_grpc_sqlite.py, and dish_grpc_text.py also support a bulk history mode that collects and writes the full second-by-second data instead of summary stats. To select bulk mode, use bulk_history for the mode argument. You'll probably also want to use the -t option to have it run in a loop.

The JSON parser script

parseJsonHistory.py takes input from a file and writes its output to standard output. The easiest way to use it is to pipe the grpcurl command directly into it. For example:

grpcurl -plaintext -d {\"get_history\":{}} 192.168.100.1:9200 SpaceX.API.Device.Device/Handle | python parseJsonHistory.py

For more usage options, run:

python parseJsonHistory.py -h

When used as-is, parseJsonHistory.py will summarize packet loss information from the data the dish records. There are other bits of data in there, though, so that script (or more likely the parsing logic it uses, which now resides in starlink_json.py) could be used as a starting point or example of how to iterate through it.
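For illustration, iterating the JSON data needs nothing beyond the stdlib json module. The field names below ("dishGetHistory", "current", "popPingDropRate") are assumptions based on the gRPC message naming; check the actual grpcurl output for your firmware before relying on them:

```python
# Sketch of walking the JSON history data, in the spirit of
# starlink_json.py. The inline sample stands in for grpcurl output.
import json

raw = json.loads("""
{"dishGetHistory": {"current": "5",
 "popPingDropRate": [0.0, 0.0, 1.0, 0.5, 0.0]}}
""")

history = raw["dishGetHistory"]
drops = history["popPingDropRate"]

# The real arrays are ring buffers indexed via the "current" counter;
# with no wrap-around (as in this tiny sample) they can be read directly.
lost = sum(drops)
print(f"{lost} seconds' worth of pings dropped out of {len(drops)}")
```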

The one bit of functionality this script has over the grpc scripts is that it supports capturing the grpcurl output to a file and reading from that, which may be useful if you're collecting data in one place but analyzing it in another. Otherwise, it's probably better to use dish_grpc_text.py, described above.

Other scripts

dump_dish_status.py is a simple example of how to use the grpc modules (the ones generated by protoc, not starlink_grpc) directly. Just run it as:

python3 dump_dish_status.py

and revel in copious amounts of dish status information. OK, maybe it's not as impressive as all that. This one is really just meant to be a starting point for real functionality to be added to it.

poll_history.py is another silly example, but this one illustrates how to periodically poll the status and/or bulk history data using the starlink_grpc module's API. It's not really useful by itself, but if you really want to, you can run it as:

python3 poll_history.py

Possibly more simple examples to come, as the other scripts have started getting a bit complicated.

Docker for InfluxDB (& MQTT under development)

Initialization of the container can be performed with the following command:

docker run -d -t --name='starlink-grpc-tools' -e INFLUXDB_HOST={InfluxDB Hostname} \
    -e INFLUXDB_PORT={Port, 8086 usually} \
    -e INFLUXDB_USER={Optional, InfluxDB Username} \
    -e INFLUXDB_PWD={Optional, InfluxDB Password} \
    -e INFLUXDB_DB={Pre-created DB name, starlinkstats works well} \
    neurocis/starlink-grpc-tools dish_grpc_influx.py -v status alert_detail

The -t option to docker run will prevent Python from buffering the script's standard output and can be omitted if you don't care about seeing the verbose output in the container logs as soon as it is printed.

The dish_grpc_influx.py -v status alert_detail part is optional; omitting it will run the same script without verbose output. You can also replace it with one of the other scripts if you wish to run that instead, or use other command line options. There is also a GrafanaDashboard - Starlink Statistics.json which can be imported to get some charts like:

[screenshot: Grafana dashboard of Starlink statistics]

You'll probably want to run with the -t option to dish_grpc_influx.py to collect status information periodically for this to be meaningful.

To Be Done (Maybe)

Maybe more data backend options. If there's one you'd like to see supported, please open a feature request issue.

There are reboot and dish_stow requests in the Device protocol, too, so it should be trivial to write a command that initiates dish reboot and stow operations. These are easy enough to do with grpcurl, though, as there is no need to parse through the response data. For that matter, they're easy enough to do with the Starlink app.
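For instance, those requests might look something like the following with grpcurl (a sketch based on the request names mentioned above; verify the exact field names against the reflected protocol before relying on them):

```shell
# Reboot the dish.
grpcurl -plaintext -d '{"reboot":{}}' 192.168.100.1:9200 \
    SpaceX.API.Device.Device/Handle

# Stow the dish.
grpcurl -plaintext -d '{"dish_stow":{}}' 192.168.100.1:9200 \
    SpaceX.API.Device.Device/Handle
```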

Proper Python packaging, since the dependency list keeps growing....

Some of the functionality implemented in the starlink_grpc module could be ported into starlink_json easily enough, but this won't be a priority unless someone asks for it.

Other Tidbits

The Starlink Android app actually uses port 9201 instead of 9200. Both appear to expose the same gRPC service, but the one on port 9201 uses gRPC-Web, which can use HTTP/1.1, whereas the one on port 9200 uses HTTP/2, which is what most gRPC tools expect.

The Starlink router also exposes a gRPC service, on ports 9000 (HTTP/2.0) and 9001 (HTTP/1.1).

The file get_history_notes.txt has my original ramblings on how to interpret the history buffer data (with the JSON format naming). It may be of interest if you want to pull the get_history grpc data directly and don't want to dig through the convoluted logic in the starlink_grpc module.

Related Projects

ChuckTSI's Better Than Nothing Web Interface uses grpcurl and PHP to provide a spiffy web UI for some of the same data this project works on.

starlink-cli is another command line tool for interacting with the Starlink gRPC services, including the one on the Starlink router, in case Go is more your thing.

Comments
  • IndexError: list index (nnn) out of range (3155b85a fw)


    Just started seeing this error this morning ... not sure what's going on yet; it might have started with a fw update, since it started ~3:51am local time.

    starlink-grpc-tools     | current counter:       23098
    starlink-grpc-tools     | All samples:           900
    starlink-grpc-tools     | Valid samples:         900
    starlink-grpc-tools     | Traceback (most recent call last):
    starlink-grpc-tools     |   File "/app/dish_grpc_influx.py", line 330, in <module>
    starlink-grpc-tools     |     main()
    starlink-grpc-tools     |   File "/app/dish_grpc_influx.py", line 311, in main
    starlink-grpc-tools     |     rc = loop_body(opts, gstate)
    starlink-grpc-tools     |   File "/app/dish_grpc_influx.py", line 254, in loop_body
    starlink-grpc-tools     |     rc = dish_common.get_data(opts, gstate, cb_add_item, cb_add_sequence, add_bulk=cb_add_bulk)
    starlink-grpc-tools     |   File "/app/dish_common.py", line 200, in get_data
    starlink-grpc-tools     |     rc = get_history_stats(opts, gstate, add_item, add_sequence)
    starlink-grpc-tools     |   File "/app/dish_common.py", line 296, in get_history_stats
    starlink-grpc-tools     |     groups = starlink_grpc.history_stats(parse_samples,
    starlink-grpc-tools     |   File "/app/starlink_grpc.py", line 968, in history_stats
    starlink-grpc-tools     |     if not history.scheduled[i]:
    starlink-grpc-tools     | IndexError: list index (598) out of range
    

    Using the latest docker image.

    Looking back at the data captured before the update, I see it was on a different fw (ee5aa15c). What kind of diagnostics should I provide to help?

    opened by bdruth 19
  • Add support for pulling dish_get_obstruction_map data


    It looks like SpaceX added a 2 dimensional map of obstruction data (actually, it looks like it's SNR per direction) to the dish firmware at some point, as the mobile Starlink app has just added support for displaying it.

    I'm not sure how useful it would be to collect this data over time, given that the dish presumably creates this map from data it has collected over time already, and the app does a decent job of visualizing it. However, it might be of some interest to poll it somewhat infrequently, say once a day or so, and see how it changes over time.

    Adding this would probably require a better approach to recording array data in the database backends, though. The existing array storage is a bit too simplistic for this level of data. Also, not all the data backends would be appropriate for this. It may be better off as a completely separate script, maybe one that outputs an image instead of the raw data.

    enhancement 
    opened by sparky8512 16
  • Add more history stats


    Right now, the stats computed from the history data are all about packet loss, because that's mostly what I'm interested in tracking myself.

    However, users over on the Starlink subreddit seem really interested in latency stats over time and there has been a question recently about tracking upload/download usage. Both these things are reported by the status info scripts, but are only instantaneous numbers. This data is also present in the history data and it should be easy enough to add computation of more robust stats from that.

    enhancement 
    opened by sparky8512 16
  • Make work on arm64 (aka Raspberry Pi)


    Here's what I get when trying to run the Docker container on my RPi4b 8GB:

    $ sudo docker run --name='starlink-grpc-tools' ghcr.io/sparky8512/starlink-grpc-tools dish_grpc_text.py -v status alert_detail
    WARNING: The requested image's platform (linux/amd64) does not match the detected host platform (linux/arm64/v8) and no specific platform was requested
    standard_init_linux.go:219: exec user process caused: exec format error
    

    Any chance of making an arm64 version of the container available?

    enhancement 
    opened by DurvalMenezes 15
  • Hangs after starting influx script


    Environment:
      Server OS: CentOS 8 - 4.18.0-305.19.1.el8_4.x86_64
      Python venv version: 3.6.8

    command: python3 dish_grpc_influx.py -v status alert_detail -t 10 -U <username> -P <password>

    Results: Running the command (in this case as a service, but it does the same when not running as a service):

    Nov 02 22:18:17 BIL-STARLINK systemd[1]: Started Starlink Influx Script.
    

    Hangs at this point and never runs.

    opened by bile0026 11
  • InfluxDB optional parameter to transform true->1 and false->0 ?


    Hello @sparky8512,

    I was curious, since InfluxDB/Grafana is more friendly to null/numeric values when it comes to graphing and value mappings in Grafana. Would it be possible to add an optional parameter to convert true to 1 and false to 0 on load into InfluxDB?

    See this for an example of the issue: https://stackoverflow.com/questions/60669691/boolean-to-text-mapping-in-grafana I can create value mappings in Grafana based on int values but not string values.

    Thanks!

    enhancement 
    opened by StephenShamakian 11
  • performance issue


    I'm running the following (from the docker image):

    dish_grpc_influx.py -t 10 --all-samples -v status obstruction_detail ping_drop ping_run_length ping_latency ping_loaded_latency usage alert_detail
    

    This is using 100% CPU constantly. That seems excessive to convert starlink data to influxdb. Are there any known performance issues here?

    opened by dustin 10
  • Type hints for `status_data` returns


    So off the top of my head, there are two approaches to this we can take.

    First (and my preferred) is to create a dataclass for attributes from status_data, so we would return a Tuple[StatusData, ObstructionDetails, Alerts]. This would be a breaking change, as references such as groups[0]["id"] become groups[0].id. I prefer this one because it's slightly less work to reference variables 😆

    Second is to create a class extending TypedDict. Referencing variables will look the same as it is now, but groups[0]["id"] would return str rather than Any.

    Both of these require creating new classes, and both mean we can deprecate both status_field_names and status_field_types, since this information is present in the return type.

    opened by boswelja 9
  • Dish firmware version change removed last_24h_obstructed_s field?


    Since yesterday, looks like I've been getting this error:

    % python ./dish_grpc_text.py status
    Traceback (most recent call last):
      File "/Users/leadzero/workspace/starlink-grpc-tools/./dish_grpc_text.py", line 221, in <module>
        main()
      File "/Users/leadzero/workspace/starlink-grpc-tools/./dish_grpc_text.py", line 207, in main
        rc = loop_body(opts, gstate)
      File "/Users/leadzero/workspace/starlink-grpc-tools/./dish_grpc_text.py", line 174, in loop_body
        rc = dish_common.get_data(opts,
      File "/Users/leadzero/workspace/starlink-grpc-tools/dish_common.py", line 196, in get_data
        rc = get_status_data(opts, gstate, add_item, add_sequence)
      File "/Users/leadzero/workspace/starlink-grpc-tools/dish_common.py", line 231, in get_status_data
        groups = starlink_grpc.status_data(context=gstate.context)
      File "/Users/leadzero/workspace/starlink-grpc-tools/starlink_grpc.py", line 641, in status_data
        "seconds_obstructed": status.obstruction_stats.last_24h_obstructed_s,
    AttributeError: last_24h_obstructed_s

    Commenting out line 641 in /Users/leadzero/workspace/starlink-grpc-tools/starlink_grpc.py seems to solve it. Wondering if maybe a version update yesterday caused it, since it looks like my dish rebooted yesterday too.

    opened by leadZERO 8
  • Help with history


    Hi all, I just started writing a nice python script to mimic the Starlink web page. So far I've got everything I want working.

    But now I want to look at outage history. So in my script I do this:

    h = starlink_grpc.get_history()

    And then look at outages:

    lastout = h.outages[-1]

    And I get this:

    cause: NO_DOWNLINK
    start_timestamp_ns: 1341872909960043179
    duration_ns: 15259975072
    did_switch: true

    The timestamp, when converted, is off by 10 years!

    n = datetime.datetime.fromtimestamp(1341877680020042191/1000000000)

    and the value of n is datetime.datetime(2012, 7, 9, 19, 48, 0, 20042). It's only off by 10 years.... Am I missing something here?
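One plausible explanation for the roughly 10-year discrepancy described above (an assumption, not confirmed in this thread): the timestamps may count nanoseconds from the GPS epoch (1980-01-06) rather than the Unix epoch (1970-01-01). Ignoring the small GPS/UTC leap-second offset, the conversion would look like:

```python
# Sketch: converting a GPS-epoch nanosecond timestamp to UTC.
from datetime import datetime, timezone

# Seconds between the Unix epoch (1970-01-01) and GPS epoch (1980-01-06).
GPS_EPOCH_OFFSET = int(
    (datetime(1980, 1, 6, tzinfo=timezone.utc)
     - datetime(1970, 1, 1, tzinfo=timezone.utc)).total_seconds())

start_ns = 1341872909960043179  # start_timestamp_ns from the report above
unix_seconds = start_ns / 1e9 + GPS_EPOCH_OFFSET
print(datetime.fromtimestamp(unix_seconds, tz=timezone.utc))
```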

    opened by bmillham 7
  • Multiple issues running tools on new(er) dish


    Running a new(er) dish/router, and ran into some issues -

    1. It seems the new port is 192.168.1.1:9000 for the RPC calls
    2. I wasn't able to use any of the "spacex.api" files until I touched an __init__.py
    3. My system (Older Pi running stretch) couldn't find influxdb and the whole requirements.txt install failed
    4. dump_dish_status.py fails with:

       Traceback (most recent call last):
         File "dump_dish_status.py", line 25, in <module>
           print("Connected" if response.dish_get_status.state ==
       AttributeError: 'DishGetStatusResponse' object has no attribute 'state'
    5. To be able to run some things, I needed to create spacex/api/device/dish_config.proto
    6. python3 dish_obstruction_map.py -e 192.168.1.1:9000 obstruction.png fails with:

       Traceback (most recent call last):
         File "/home/pi/starlink-grpc-tools/starlink_grpc.py", line 1242, in obstruction_map
           map_data = get_obstruction_map(context)
         File "/home/pi/starlink-grpc-tools/starlink_grpc.py", line 1222, in get_obstruction_map
           return call_with_channel(grpc_call, context=context)
         File "/home/pi/starlink-grpc-tools/starlink_grpc.py", line 427, in call_with_channel
           return function(channel, *args, **kwargs)
         File "/home/pi/starlink-grpc-tools/starlink_grpc.py", line 1219, in grpc_call
           timeout=REQUEST_TIMEOUT)
         File "/home/pi/.local/lib/python3.5/site-packages/grpc/_channel.py", line 946, in __call__
           return _end_unary_response_blocking(state, call, False, None)
         File "/home/pi/.local/lib/python3.5/site-packages/grpc/_channel.py", line 849, in _end_unary_response_blocking
           raise _InactiveRpcError(state)
       grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
           status = StatusCode.UNIMPLEMENTED
           details = "Unimplemented: *device.Request_DishGetObstructionMap"
           debug_error_string = "{"created":"@1653509590.780266717","description":"Error received from peer ipv4:192.168.1.1:9000","file":"src/core/lib/surface/call.cc","file_line":1070,"grpc_message":"Unimplemented: *device.Request_DishGetObstructionMap","grpc_status":12}"

    During handling of the above exception, another exception occurred:

    Traceback (most recent call last):
      File "dish_obstruction_map.py", line 186, in <module>
        main()
      File "dish_obstruction_map.py", line 172, in main
        rc = loop_body(opts, context)
      File "dish_obstruction_map.py", line 28, in loop_body
        snr_data = starlink_grpc.obstruction_map(context)
      File "/home/pi/starlink-grpc-tools/starlink_grpc.py", line 1244, in obstruction_map
        raise GrpcError(e)
    starlink_grpc.GrpcError: Unimplemented: *device.Request_DishGetObstructionMap

    But in spacex/api/device/dish_pb2.py I see:

    _DISHGETOBSTRUCTIONMAPREQUEST = _descriptor.Descriptor(
        name='DishGetObstructionMapRequest',
        full_name='SpaceX.API.Device.DishGetObstructionMapRequest',

    and

    _DISHGETOBSTRUCTIONMAPRESPONSE = _descriptor.Descriptor(
        name='DishGetObstructionMapResponse',
        full_name='SpaceX.API.Device.DishGetObstructionMapResponse',

    so I'm not sure why it's unimplemented.

    Anything I can do for testing/help, lemme know!

    Tnx, Tuc

    opened by tuctboh 7
  • Make robust against field removal in grpc protocol


    (These are mostly notes to self, for anyone wondering why I'm being so long-winded about this...)

    Several times in the past, SpaceX has removed fields from the dish's grpc protocol that the scripts in this project were using, resulting in a script crash on attempt to read any data from the same category (status, history stats, bulk history, or location). While loss of that data is unavoidable unless it can be reconstructed from some other field, it would be much better if it didn't cause a crash. I mentioned in issue #65 that I would put some thought into how to accomplish that. After all, one of the intentions of the core module (starlink_grpc.py) is to insulate the calling scripts from the details of the grpc protocol, even though it's mostly just passing the data along as-is in dictionary form instead of protobuf structure.

    The problem here is a result of using grpc reflection to pull the protobuf message structure definitions instead of pre-compiling them using protoc and delivering them along with this project. It's pretty clear from the protobuf documentation that the intended usage is to pre-compile the protocol definitions, but there's a specific reason why I don't do it that way: SpaceX has not published those protocol definitions other than by leaving the reflection service enabled, and thus have not published them under a license that would allow for redistribution without violating copyright. Whether or not they care is a different story, but I'd prefer not to get their legal team thinking about why reflection is even enabled.

    Before I built use of reflection into the scripts (by way of yagrc), I avoided the copyright question by making users generate the protocol modules themselves. This complicated installation, caused some other problems, and ultimately didn't actually fix this problem, it just moved it earlier, since it was still just getting the protocol definitions via reflection.

    So, other than use of pre-compiled protocol definitions, I can think of 2 main options:

    1. Wrap all (or at least most) access to message structure attributes inside a try clause that catches AttributeError or otherwise access them in a way that won't break if the attribute doesn't exist. This could get messy fast, since I don't want a single missing attribute to cause total failure, but would probably be manageable. It also really fights against the design intent of how rigidly these structures are defined by the protoc compiler, but I find myself not really caring about that, as it's exactly this rigidity that is causing the problem here.

    2. Stop using the protocol message structures entirely and switch to pulling fields by protobuf ID instead. This would make the guts of the code less readable, but would insulate this project against the possibility that SpaceX ever decides to disable the reflection service. I have no reason to believe they would do so, but I also have little reason to believe they wouldn't. I'm not sure how difficult this would be, though, as the low-level protobuf Python module is not meant to be used in this way, as far as I can tell. Also, this would still cause problems if SpaceX were to reuse field IDs from obsoleted fields that have been removed, but they haven't been doing that (and it's bad practice in general, as it can break their own client software), as far as I can tell. If I were writing this from scratch, knowing what I do now, I would probably try to do it this way.

    However it's done, I should note that this will make obsoleted fields less obvious. The removed data will just stop being reported by the tools. I have been using these failures as an opportunity to update the documentation on which items are obsolete, but I suspect this will still get noticed, and I'd rather have the documentation lag behind a bit than have the scripts break.

    enhancement 
    opened by sparky8512 1
Releases (v1.1.1)
  • v1.1.1(Nov 9, 2022)

    This is a bug fix release.

    Changes since 1.1.0:

    • Fix a crash when pulling status data introduced by latest firmware obsoleting 2 of the attributes in the grpc protocol used for the obstruction_detail mode group. That mode is pretty useless now, but it remains selectable for backwards compatibility purposes. Users of that mode group may want to look into the dish_obstruction_map.py script or related functions in the starlink_grpc module.
    Source code(tar.gz)
    Source code(zip)
  • v1.1.0(Sep 18, 2022)

    This release added a new Python module prerequisite, so either repeat the pip command from the README file's installation instructions, or just do: pip install typing-extensions to get it.

    Major changes since 1.0.2:

    • Started adding type hints to the core module (starlink_grpc). While this is probably only of interest to developers who use that module directly, it is what added the new prerequisite.
    • Added new mode group location for physical location (GPS) data. Use of this mode requires some specific configuration of the dish, see the README file for details.
    • Added new item "is_snr_above_noise_floor" to the status mode group.
    Source code(tar.gz)
    Source code(zip)
  • v1.0.2(Aug 19, 2022)

    This release is mostly just a roll-up of small changes in order to facilitate the export of the core module package.

    Changes since 1.0.1:

    • Added new script dish_control.py for (minimal) control of dish system state.
    • Minor changes around error case handling and small functionality enhancements.
    • Added packaging configuration for export of the starlink_grpc module as an installable pip package (named starlink-grpc-core) for use by other projects.
    Source code(tar.gz)
    Source code(zip)
  • v1.0.1(Mar 4, 2022)

    Major changes since 1.0.0:

    • Add separate output script for InfluxDB 2.x servers that does not require the compatibility mode commands be run on the server
    • Improvements to the --poll-loops functionality to make it keep data better across script restart and better handle some dish connection failure cases
    • Add a JSON mode to the MQTT output script that allows for reporting a single correlated data set instead of individually publishing each data field
    • Add some systemd start script examples and better support for such in some of the output scripts
    Source code(tar.gz)
    Source code(zip)
  • v1.0.0(Nov 9, 2021)

    First non-pre-release version.

    Code changes since 0.4.0:

    • Improvements to behavior around interrupted network connectivity to the dish

    Documentation changes since 0.4.0:

    • Moved a bunch of content out of the README to the project Wiki
    • Changed the Docker usage instructions to point to the image published to GitHub Packages repository by this project's Action workflow, thus changing the officially supported Docker image to that one
  • v0.4.0 (Oct 25, 2021)

    Another development release. Things are basically stable at this point, but I want to revamp the README file before declaring a 1.0 release, as it would be dumb to have to bump the version number just for that.

    Major changes since 0.3.0:

    • Added -o option to poll history more frequently than stats are computed. Relevant only to the ping_* and usage mode groups.
    • Added dish direction and "prolonged" obstruction info to the status mode group.
    • Added new script dish_obstruction_map.py which emits a PNG image depicting directional obstruction data, as collected by the dish.
    • Removed usage of a number of fields in the gRPC response messages that have been obsoleted by recent dish firmware updates. This rendered useless the "snr" and "seconds_obstructed" items in the status mode group, the "snr" item in the bulk_history group, and the "obstructed" and "scheduled"/"unscheduled" items in the ping_drop and bulk_history mode groups.
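
    The idea of polling more often than stats are computed can be sketched as accumulating several short polls of per-second samples, then summarizing the combined window. The data and the summary metric below are synthetic; the real scripts read history samples from the dish.

    ```python
    # Sketch: each poll yields a few per-second ping drop samples (1.0 means
    # the second was fully dropped). Stats are computed only every
    # `loops_per_stat` polls, over everything accumulated since the last stats.
    def summarize(samples):
        """Fraction of fully-dropped seconds in the accumulated window."""
        if not samples:
            return None
        return sum(1 for s in samples if s >= 1.0) / len(samples)

    def run_poll_loops(polls, loops_per_stat):
        """Yield one stats value per `loops_per_stat` polls."""
        window = []
        for i, poll in enumerate(polls, start=1):
            window.extend(poll)
            if i % loops_per_stat == 0:
                yield summarize(window)
                window = []

    # Four polls, stats computed every two polls:
    polls = [[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
    results = list(run_poll_loops(polls, loops_per_stat=2))  # [0.25, 0.75]
    ```

    Frequent short polls keep the script's view of the dish's rolling history buffer fresh, while the less frequent stats computation keeps the output volume down.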
  • v0.3.0 (Feb 16, 2021)

    Yet another development release, but interfaces should be stable at this point. Would probably have called this 1.0 if it had proper packaging.

    Major changes since 0.2.0:

    • Protocol definition modules can now be loaded via reflection instead of having to generate them via protoc. This requires the yagrc Python package to be installed.
    • New raw_wedges_fraction_obstructed field in obstruction_detail group.
    • Counter state is now tracked for history stats groups, not just bulk history.
    • Dish IP and port can now be set to something other than the standard.
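
    Accepting a non-standard dish address typically comes down to parsing an optional "host[:port]" string with fallbacks. The sketch below assumes the dish's well-known defaults (192.168.100.1, gRPC port 9200); the option name and exact parsing in the real scripts may differ, and IPv6 literals are not handled here.

    ```python
    # Defaults matching the dish's fixed IP and gRPC port.
    DEFAULT_HOST = "192.168.100.1"
    DEFAULT_PORT = 9200

    def parse_target(target=None):
        """Split an optional "host[:port]" string, falling back to defaults."""
        if not target:
            return DEFAULT_HOST, DEFAULT_PORT
        host, sep, port = target.rpartition(":")
        if sep:
            return host, int(port)
        return target, DEFAULT_PORT
    ```

    This keeps the common case (no option given) zero-configuration while still allowing, say, a port-forwarded target on another network.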
  • v0.2.0 (Feb 5, 2021)

    Another development release, but getting close to declaring the interfaces stable enough for a real release.

    Major changes since 0.1.0:

    • Changed command line interface to combine status+history scripts: dishHistoryInflux.py, dishHistoryMqtt.py, dishHistoryStats.py, dishStatusCsv.py, dishStatusInflux.py, and dishStatusMqtt.py are replaced with dish_grpc_influx.py, dish_grpc_mqtt.py, and dish_grpc_text.py.
    • Added new script for sqlite output: dish_grpc_sqlite.py.
    • Added option for latency and usage summary stats.
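
    The combined interface boils down to one script per output backend, with the data mode groups given as positional arguments. A minimal sketch of that command line shape (the mode names are taken from groups mentioned in these notes; options and defaults are illustrative, not the scripts' full interface):

    ```python
    import argparse

    # Mode group names mentioned elsewhere in these release notes; the real
    # scripts support more groups than this.
    MODES = ["status", "ping_drop", "ping_latency", "usage", "bulk_history"]

    def parse_args(argv):
        parser = argparse.ArgumentParser(description="dish_grpc_text.py sketch")
        parser.add_argument("mode", nargs="+", choices=MODES,
                            help="one or more data mode groups to collect")
        parser.add_argument("-v", "--verbose", action="store_true")
        return parser.parse_args(argv)

    args = parse_args(["status", "usage"])
    ```

    Letting one invocation collect several mode groups at once is what made the separate per-mode scripts (dishHistoryStats.py, dishStatusCsv.py, etc.) redundant.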
  • v0.1.0 (Jan 28, 2021)

    Development release.

    Mostly just tagging the repository here because the interfaces are about to change in order to combine scripts and reduce code duplication across the main scripts. This will affect both the command line interface and the starlink-grpc Python module interface.

Owner
reddit user u/CenterSpark's random bits and bobs