Amber Electric Usage Summary

Overview

This is a command line tool that produces a summary CSV report of an Amber Electric customer's energy consumption and cost data.

You simply need to provide your Amber API token, and the tool will output a CSV like this for the last 12 months:

CHANNEL                         , 2020-09-01, 2020-09-02, 2020-09-03, ...
B4 (FEED_IN) Usage (kWh)        ,      1.351,      0.463,      0.447, ...
E3 (CONTROLLED_LOAD) Usage (kWh),      2.009,      2.669,      2.757, ...
E4 (GENERAL) Usage (kWh)        ,     20.400,     20.965,     16.011, ...

About Amber Electric

Amber Electric is an innovative energy retailer in Australia which gives customers access to the wholesale energy price as determined by the National Energy Market. This gives customers the opportunity to reduce their bills and their reliance on fossil fuels by shifting their biggest energy usage to times of the day when energy is cheaper and greener.

Amber's API

Amber gives customers access to a LOT of their own data through their public Application Programming Interface or API.

This tool relies on you having access to Amber's API, which means you need to be an Amber customer, and you need to get an API token. But that's pretty easy: you can generate one from the developer section of your Amber account.
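
If you're curious what the raw API looks like, below is a rough Python sketch of fetching your sites and usage data directly. The base URL, endpoint paths and parameter names here are assumptions based on Amber's public API documentation, so check the current docs before relying on them:

import requests  # third-party library; install with: python -m pip install requests

API_BASE = "https://api.amber.com.au/v1"   # assumed base URL
HEADERS = {"Authorization": "Bearer YOUR_API_TOKEN"}

# List the sites on your account; each record should include an id and its channels.
sites = requests.get(f"{API_BASE}/sites", headers=HEADERS).json()
print([site["id"] for site in sites])

# Fetch usage for the first site over a short date range (assumed endpoint and parameters).
params = {"startDate": "2020-09-01", "endDate": "2020-09-03"}
usage = requests.get(f"{API_BASE}/sites/{sites[0]['id']}/usage", headers=HEADERS, params=params)
print(usage.json()[:2])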

How To Get The Tool

If you're a programmer comfortable with Git, I'm sure you already know how to get this code onto your machine from GitHub.

If you're not familiar with Git, you can download this code as a Zip file from GitHub using the Code > Download ZIP button on the repository page. Once it's downloaded, unzip the file, which will create a directory containing all the files of this project.

How To Use It

Pre-Requisites

You'll need Python 3.9+ installed.

And an Amber API token. (See above)
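
If you're not sure which Python version you have installed, you can check it with:

python3 --version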

Setup

Using a terminal, in the directory of this project:

  1. Create a Python virtual environment with this command:
python3.9 -m venv venv
  2. Start using the virtual environment with this command:
source ./venv/bin/activate
  3. Install the required dependencies with this command:
python -m pip install -r requirements.txt

Running the tool

Using a terminal, in the directory of this project:

  1. Start using the virtual environment with this command:
source ./venv/bin/activate
  2. Run the tool with this command, replacing YOUR_API_TOKEN with your own API token:
python amber_usage_summary.py --api-token YOUR_API_TOKEN > my_amber_usage_data.csv

Running the above will save your summary consumption data for the last year to a file called my_amber_usage_data.csv in the same directory.
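
Once you've got the CSV, it's easy to pull into other tools for further analysis. Here's a minimal sketch, assuming the file looks like the sample above, that uses pandas (not a dependency of this project; install it separately if you want to try this) to total each channel's usage:

import pandas as pd

df = pd.read_csv("my_amber_usage_data.csv", skipinitialspace=True)
df.columns = df.columns.str.strip()   # the sample output pads column names for readability
df = df.set_index("CHANNEL")
print(df.sum(axis=1))                 # total kWh per channel across the whole report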

Options

Help

Run the script with the -h option to see its help page:

python amber_usage_summary.py -h

API Token File

If you'd prefer not to paste your API token into a terminal command, you can save it in a file called apitoken in the project's directory.
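
For example, on macOS or Linux you could create that file like this, and then run the tool without the --api-token option:

echo YOUR_API_TOKEN > apitoken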

Costs Summary

By default, the tool just outputs energy consumption data. If you also want a summary of your cost data, add the --include-cost option:

python amber_usage_summary.py --include-cost

Site Selection

If you have multiple sites in your Amber Electric account, you'll need to select one using the --site-id option:

python amber_usage_summary.py --site-id SITE_ID_YOU_WANT_DATA_FOR
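
If you don't know your site IDs, you can list them with a direct call to Amber's API (the endpoint path here is the same assumption as in the sketch in the Amber's API section above):

curl -H "Authorization: Bearer YOUR_API_TOKEN" https://api.amber.com.au/v1/sites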

Date Range

By default, the report includes the last 12 full calendar months of data, plus all of the current month's data up until yesterday. You can select what date range to include in the output by adding a start date and, optionally, an end date to the command.

python amber_usage_summary.py 2020-07-01 2021-06-30
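
The options can be combined. For example, something like this should produce a cost-inclusive report for one site across the 2020-21 financial year:

python amber_usage_summary.py --include-cost --site-id SITE_ID_YOU_WANT_DATA_FOR 2020-07-01 2021-06-30 > fy2021_summary.csv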

Contributions

I'm open to accepting contributions that improve the tool.

If you're planning on altering the code with the intention of contributing the changes back, it'd be great to have a chat about it first to check we're on the same page about how the improvement might be added. It's probably easiest to create an issue describing your planned improvement (and being clear that you plan to implement it yourself).

License

All files in this project are licensed under the 3-clause BSD License. See LICENSE.md for details.
