TrainingDashboard

A no-BS, dead-simple training visualizer for tf-keras

Overview

Plot inter-epoch and intra-epoch loss and metrics within a Jupyter notebook with a simple callback. Features:

  • Plots the training loss and a training metric, updated at the end of each batch
  • Plots training and validation losses, updated at the end of each epoch
  • For each metric, plots training and validation values, updated at the end of each epoch
  • Tabulates losses and metrics (both train and validation) and highlights the highest and lowest values in each column

Why should I use this over TensorBoard?
This is way simpler to use: add a single callback and the plots appear inline in your notebook.

What about livelossplot?
AFAIK, livelossplot does not support intra-epoch loss/metric plotting. Also, TrainingDashboard uses bqplot for plotting, which supports richer interactive elements such as tooltips (currently a TODO). On the other hand, livelossplot is a much more mature project, and you should prefer it if it already covers your specific use case.

Installation

TrainingDashboard can be installed from PyPI with the following command:

pip install training-dashboard

Alternatively, you can clone this repository and run the following command from the root directory:

pip install .
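Note: TrainingDashboard draws its plots with bqplot, so it is meant to run inside a Jupyter notebook. On recent JupyterLab/Notebook versions, installing the package should be enough; on older classic-notebook setups you may also need to enable the bqplot extension. This step comes from bqplot's own install instructions rather than from this project, so treat it as an assumption:

pip install bqplot
jupyter nbextension enable --py --sys-prefix bqplot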

Usage

TrainingDashboard is a tf-keras callback and should be used as such. It takes the following optional arguments:

  • validation (bool): whether validation data is being used or not
  • min_loss (float): the minimum possible value of the loss function, to fix the lower bound of the y-axis
  • max_loss (float): the maximum possible value of the loss function, to fix the upper bound of the y-axis
  • metrics (list): list of metrics that should be considered for plotting
  • min_metric_dict (dict): dictionary mapping each (or a subset) of the metrics to their minimum possible value, to fix the lower bound of the y-axis
  • max_metric_dict (dict): dictionary mapping each (or a subset) of the metrics to their maximum possible value, to fix the upper bound of the y-axis
  • batch_step (int): step size for plotting the results within each epoch. If the time to process each batch is very small, plotting at each step may cause the training to slow down significantly. In such cases, it is advisable to skip a few batches between each update.

At its simplest, just register it as a callback in model.fit:

from training_dashboard import TrainingDashboard
model.fit(X,
          Y,
          epochs=10,
          callbacks=[TrainingDashboard()])

or, a more elaborate example:

from training_dashboard import TrainingDashboard
dashboard = TrainingDashboard(validation=True, # because we are using validation data and want to track its metrics
                              min_loss=0, # we want the loss axes to be fixed on the lower end
                              metrics=["accuracy", "auc"], # metrics that we want plotted
                              batch_step=10, # plot every 10th batch
                              min_metric_dict={"accuracy": 0, "auc": 0}, # minimum possible value for metrics used
                              max_metric_dict={"accuracy": 1, "auc": 1}) # maximum possible value for metrics used
model.fit(x_train,
          y_train,
          batch_size=512,
          epochs=25,
          verbose=1,
          validation_split=0.2,
          callbacks=[dashboard])

For a more detailed example, check mnist_example.ipynb inside the examples folder.
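If you want something runnable to start from, here is a minimal end-to-end sketch. It assumes TensorFlow 2.x with tf.keras and the built-in MNIST loader; it is an illustration, not the exact contents of mnist_example.ipynb:

import tensorflow as tf
from training_dashboard import TrainingDashboard

# Load MNIST, flatten each 28x28 image to a 784-vector, scale to [0, 1].
(x_train, y_train), _ = tf.keras.datasets.mnist.load_data()
x_train = x_train.reshape(-1, 784).astype("float32") / 255.0

# A small classifier -- any tf-keras model works with the callback.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(784,)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Same arguments as above, restricted to the one metric we compiled with.
dashboard = TrainingDashboard(validation=True,
                              min_loss=0,
                              metrics=["accuracy"],
                              batch_step=10,
                              min_metric_dict={"accuracy": 0},
                              max_metric_dict={"accuracy": 1})

model.fit(x_train,
          y_train,
          batch_size=512,
          epochs=5,
          validation_split=0.2,
          callbacks=[dashboard])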

Support

Reach out to me at one of the following places!

Twitter: @vibhuagrawal
Email: vibhu[dot]agrawal14[at]gmail

License

The project is distributed under the MIT License.
