Hera

Train/evaluate a Keras model, get metrics streamed to a dashboard in your browser.

Overview

(demo GIF)

Setting up

Step 1. Plant the spy

Install the package


    pip install heraspy

Add the callback

    from heraspy.callback import HeraCallback

    herasCallback = HeraCallback(
        'model-key',
        'localhost',
        4000
    )

    model.fit(X_train, Y_train, callbacks=[herasCallback])
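
For intuition, the callback hooks Keras's standard callback events and forwards each epoch's logs as JSON to a dispatcher. Below is a minimal, self-contained sketch of that idea; the class and dispatcher names are hypothetical, and an in-memory list stands in for the socket connection to the Hera server, so this is not heraspy's actual implementation:

```python
import json


class MetricStreamCallback:
    """Sketch of a Keras-style callback: serialize metric logs to JSON
    and hand them to a pluggable dispatch function."""

    def __init__(self, model_key, dispatch):
        self.model_key = model_key
        self.dispatch = dispatch  # callable(topic, json_payload)

    def on_epoch_end(self, epoch, logs=None):
        # Package the model key, epoch index, and metric logs as one message
        payload = {'modelKey': self.model_key,
                   'epoch': epoch,
                   'logs': dict(logs or {})}
        self.dispatch('epoch-end', json.dumps(payload))


# An in-memory "transport" standing in for the socket to the Hera server:
sent = []
cb = MetricStreamCallback('model-key',
                          lambda topic, body: sent.append((topic, body)))
cb.on_epoch_end(0, {'loss': 0.42, 'acc': 0.91})
print(sent[0][0])  # epoch-end
```

With the real callback, Keras invokes these hooks during `model.fit`, so the dashboard receives one message per batch/epoch.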

Step 2. Start the server

Clone this repository, then run


    cd server
    npm install
    gulp # optional for now: the build output is tracked in git
    node build/server

Step 3. Start the dashboard


    cd client
    npm install
    npm start

Using RabbitMQ

By default Hera uses socket.io for messaging, both from the Keras callback to the server and from the server to the dashboard. This minimizes the number of things you need to install before getting up and running with Hera.

However, in production socket.io is outperformed by a number of alternatives. It is also good practice in general to decouple server-client communication from inter-process communication (Python -> Node) so that each can be managed and optimized independently.

To demonstrate how this works, Hera ships with the option to use RabbitMQ for inter-process communication. Here's how to use it.

In your model file

    from heraspy.callback import HeraCallback
    from heraspy.dispatchers.rabbitmq import get_rabbitmq_dispatcher

    herasCallback = HeraCallback(
        'model-key', 'localhost', 4000,
        dispatch=get_rabbitmq_dispatcher(
          queue='[my-queue]',
          amqps_url='amqps://[user]:[pass]@my-amqp-address'
        )
    )

In server/src/server.js

Replace the only line in the file with

    getServer({
        dispatcher: 'rabbitmq',
        dispatcherConfig: {
            amqpUrl: 'amqps://[user]:[pass]@my-amqp-address',
            amqpQueue: '[my-queue]'
        }
    }).start();

That's it! Communication between the Python process and the Node web server process now goes through RabbitMQ.

Credits

Aside from the obvious ones:

Comments
  • GIF in README is too large for some devices

    Hi Jake!

    Please reduce size of gif embedded to README because now it is so large that Chrome on Android crashes when I try to open the repository page. As far as I understand that's because it decompresses all frames to RAM, so their total size becomes larger than available memory.

    Maybe it is better to just use a static screenshot with a link to a video on YouTube?

    opened by a-rodin 3
  • Can not catch any plots in website

    The server side shows that a client was connected after I run the Keras example, and the Keras run proceeds normally (screenshots omitted), but I can not find the plot on my website.

    The Hera config code is pasted below:

    herasCallback = HeraCallback(
        'model-key', 'localhost', 4000
    )
    model.fit(X_train, Y_train, batch_size=batch_size, nb_epoch=nb_epoch,
              verbose=1, validation_data=(X_test, Y_test),
              callbacks=[herasCallback])

    What am I doing wrong? Could you kindly give me some information? Many thanks.

    opened by welcomealcol 2
  • Request entity too large

    Hi Jake, first all of thank you for your work - I was looking for something like this for quite some time. Good stuff!

    However, I am running into a problem when running my model; the error is server-side:

    Error: request entity too large
        at readStream (/home/marko/projects/hera/server/node_modules/raw-body/index.js:196:17)
        at getRawBody (/home/marko/projects/hera/server/node_modules/raw-body/index.js:106:12)
        at read (/home/marko/projects/hera/server/node_modules/body-parser/lib/read.js:76:3)
        at jsonParser (/home/marko/projects/hera/server/node_modules/body-parser/lib/types/json.js:127:5)
        at Layer.handle [as handle_request] (/home/marko/projects/hera/server/node_modules/express/lib/router/layer.js:95:5)
        at trim_prefix (/home/marko/projects/hera/server/node_modules/express/lib/router/index.js:312:13)
        at /home/marko/projects/hera/server/node_modules/express/lib/router/index.js:280:7
        at Function.process_params (/home/marko/projects/hera/server/node_modules/express/lib/router/index.js:330:12)
        at next (/home/marko/projects/hera/server/node_modules/express/lib/router/index.js:271:10)
        at expressInit (/home/marko/projects/hera/server/node_modules/express/lib/middleware/init.js:33:5)

    I tried increasing the request body size limit in server/src/server.js with app.use(bodyParser.json({limit: '500mb'})), but it didn't help.

    If it helps, this only happens if I fit a model with validation split/data included, and the error occurs only in the validation phase (during training there are no errors).

    Thank you for looking into this! Best regards, Marko

    opened by jocicmarko 2
  • IndentationError: expected an indented block

    Using Python 3.4.3 in an Ubuntu 14.04 Docker container, importing the heraspy library gives an indentation error:

      File "train.py", line 26, in <module>
        from heraspy.model import HeraModel
      File "/usr/local/lib/python3.4/dist-packages/heraspy/model.py", line 1, in <module>
        from heraspy.callback import HeraCallback
      File "/usr/local/lib/python3.4/dist-packages/heraspy/callback.py", line 103
        from keras import backend as K
        ^
    IndentationError: expected an indented block

    opened by igorbb 1
  • Update PyPI for Python 3 (Name 'reduce' is not defined)

    I tried training a model with this on Python 3 and it crashed when the first epoch ended (and it was a long epoch too!) saying name 'reduce' is not defined in callback.py. It seems like this was already fixed with https://github.com/jakebian/hera/commit/62a0583c88202cc3c78ce013a351d4548efd068c, however.

    Any chance heraspy could be updated on PyPI?

    opened by somewacko 1
  • Error on fit

    I'm getting an error from the following code -- any idea what's going on?

    import numpy as np
    
    from keras.models import Sequential
    from keras.layers import Dense
    
    from heraspy.model import HeraModel
    hera_model = HeraModel({'id': 'my-model'}, {'domain': 'localhost', 'port': 4000})
    
    X = np.random.uniform(0, 1, (100000, 100))
    y = np.random.choice((0, 1), 100000)
    
    model = Sequential()
    model.add(Dense(1, input_shape=(100,)))
    model.compile(loss='binary_crossentropy', optimizer='rmsprop', metrics=['accuracy'])
    
    model.fit(X, y, verbose=True, callbacks=[hera_model])
    

    (I'm running keras version 1.0.5)

    Thanks, Ben

    opened by bkj 1
  • Python dependencies missing

    I'm certainly no expert on python packaging, but it seems like the pip package is not installing its dependencies (requests, socketIO-client). I had to install them manually.

    $ pip3 show heraspy
    
    ---
    Metadata-Version: 2.0
    Name: heraspy
    Version: 0.0
    Summary: Keras data extraction callbacks
    Home-page: UNKNOWN
    Author: Jake Bian
    Author-email: [email protected]
    Installer: pip
    License: wtfpl
    Location: /usr/lib/python3.4/site-packages
    Requires: 
    Classifiers:
    
    opened by max-vogler 1
  • Cannot resolve module 'jquery'

    I'm getting the following error when running npm start in the client folder:

    ERROR in ./src/components/Model/index.js
    Module not found: Error: Cannot resolve module 'jquery' in /home/max/hera/client/src/components/Model
     @ ./src/components/Model/index.js 35:14-31
    Child html-webpack-plugin for "index.html":
             Asset    Size  Chunks       Chunk Names
        index.html  553 kB       0       
    webpack: bundle is now VALID.
    
    opened by max-vogler 1
  • IndentationError: expected an indented block

    After I run "from heraspy.model import HeraModel", I always get the following error:

      File "/usr/local/lib/python2.7/dist-packages/heraspy/callback.py", line 103
        from keras import backend as K
        ^
    IndentationError: expected an indented block

    opened by welcomealcol 0
  • Why is the method on_epoch_end() commented out in callback.py?

    Please update this project. It is unclear whether the initialization syntax currently works: HeraCallback vs. HeraModel. Also, with HeraCallback I get syntax errors from callback.py. I'm running Python 3.5.

    opened by StrategicVisionary 0
  • TypeError: Not JSON Serializable

    When I try to import HeraCallback, I get this error:

    from heraspy.model import HeraModel
    Traceback (most recent call last):
      File "C:\Program Files\Anaconda3\lib\site-packages\IPython\core\interactiveshell.py", line 2881, in run_code
        exec(code_obj, self.user_global_ns, self.user_ns)
      File "<ipython-input-13-0afb57adc2e7>", line 1, in <module>
        from heraspy.model import HeraModel
      File "C:\Program Files\JetBrains\PyCharm Community Edition 2017.1.1\helpers\pydev\_pydev_bundle\pydev_import_hook.py", line 21, in do_import
        module = self._system_import(name, *args, **kwargs)
      File "C:\Program Files\Anaconda3\lib\site-packages\heraspy\model.py", line 1, in <module>
        from heraspy.callback import HeraCallback
      File "C:\Program Files\JetBrains\PyCharm Community Edition 2017.1.1\helpers\pydev\_pydev_bundle\pydev_import_hook.py", line 21, in do_import
        module = self._system_import(name, *args, **kwargs)
      File "C:\Program Files\Anaconda3\lib\site-packages\heraspy\callback.py", line 20, in <module>
        raise TypeError('Not JSON Serializable')
    TypeError: Not JSON Serializable

    opened by junjer 0
  • Usage instructions

    I'm on current master (the released pip version failed on import), trying to define a callback. Imitating the example in readme.md didn't work out:

    hera = HeraCallback('bla', 'localhost', '4000')
    

    ...

    /Users/olevinkr/anaconda/lib/python2.7/site-packages/sklearn/model_selection/_validation.pyc in cross_val_score(estimator, X, y, groups, scoring, cv, n_jobs, verbose, fit_params, pre_dispatch)
        138                                               train, test, verbose, None,
        139                                               fit_params)
    --> 140                       for train, test in cv_iter)
        141     return np.array(scores)[:, 0]
        142
    
    /Users/olevinkr/anaconda/lib/python2.7/site-packages/sklearn/externals/joblib/parallel.pyc in __call__(self, iterable)
        756             # was dispatched. In particular this covers the edge
        757             # case of Parallel used with an exhausted iterator.
    --> 758             while self.dispatch_one_batch(iterator):
        759                 self._iterating = True
        760             else:
    
    /Users/olevinkr/anaconda/lib/python2.7/site-packages/sklearn/externals/joblib/parallel.pyc in dispatch_one_batch(self, iterator)
        606                 return False
        607             else:
    --> 608                 self._dispatch(tasks)
        609                 return True
        610
    
    /Users/olevinkr/anaconda/lib/python2.7/site-packages/sklearn/externals/joblib/parallel.pyc in _dispatch(self, batch)
        569         dispatch_timestamp = time.time()
        570         cb = BatchCompletionCallBack(dispatch_timestamp, len(batch), self)
    --> 571         job = self._backend.apply_async(batch, callback=cb)
        572         self._jobs.append(job)
        573
    
    /Users/olevinkr/anaconda/lib/python2.7/site-packages/sklearn/externals/joblib/_parallel_backends.pyc in apply_async(self, func, callback)
        107     def apply_async(self, func, callback=None):
        108         """Schedule a func to be run"""
    --> 109         result = ImmediateResult(func)
        110         if callback:
        111             callback(result)
    
    /Users/olevinkr/anaconda/lib/python2.7/site-packages/sklearn/externals/joblib/_parallel_backends.pyc in __init__(self, batch)
        324         # Don't delay the application, to avoid keeping the input
        325         # arguments in memory
    --> 326         self.results = batch()
        327
        328     def get(self):
    
    /Users/olevinkr/anaconda/lib/python2.7/site-packages/sklearn/externals/joblib/parallel.pyc in __call__(self)
        129
        130     def __call__(self):
    --> 131         return [func(*args, **kwargs) for func, args, kwargs in self.items]
        132
        133     def __len__(self):
    
    /Users/olevinkr/anaconda/lib/python2.7/site-packages/sklearn/model_selection/_validation.pyc in _fit_and_score(estimator, X, y, scorer, train, test, verbose, parameters, fit_params, return_train_score, return_parameters, return_n_test_samples, return_times, error_score)
        236             estimator.fit(X_train, **fit_params)
        237         else:
    --> 238             estimator.fit(X_train, y_train, **fit_params)
        239
        240     except Exception as e:
    
    /Users/olevinkr/anaconda/lib/python2.7/site-packages/keras/wrappers/scikit_learn.pyc in fit(self, X, y, **kwargs)
        146         fit_args.update(kwargs)
        147
    --> 148         history = self.model.fit(X, y, **fit_args)
        149
        150         return history
    
    /Users/olevinkr/anaconda/lib/python2.7/site-packages/keras/models.pyc in fit(self, x, y, batch_size, nb_epoch, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight, **kwargs)
        650                               shuffle=shuffle,
        651                               class_weight=class_weight,
    --> 652                               sample_weight=sample_weight)
        653
        654     def evaluate(self, x, y, batch_size=32, verbose=1,
    
    /Users/olevinkr/anaconda/lib/python2.7/site-packages/keras/engine/training.pyc in fit(self, x, y, batch_size, nb_epoch, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight, initial_epoch)
       1109                               val_f=val_f, val_ins=val_ins, shuffle=shuffle,
       1110                               callback_metrics=callback_metrics,
    -> 1111                               initial_epoch=initial_epoch)
       1112
       1113     def evaluate(self, x, y, batch_size=32, verbose=1, sample_weight=None):
    
    /Users/olevinkr/anaconda/lib/python2.7/site-packages/keras/engine/training.pyc in _fit_loop(self, f, ins, out_labels, batch_size, nb_epoch, verbose, callbacks, val_f, val_ins, shuffle, callback_metrics, initial_epoch)
        795             'metrics': callback_metrics,
        796         })
    --> 797         callbacks.on_train_begin()
        798         callback_model.stop_training = False
        799         self.validation_data = val_ins
    
    /Users/olevinkr/anaconda/lib/python2.7/site-packages/keras/callbacks.pyc in on_train_begin(self, logs)
         72     def on_train_begin(self, logs={}):
         73         for callback in self.callbacks:
    ---> 74             callback.on_train_begin(logs)
         75
         76     def on_train_end(self, logs={}):
    
    /Users/olevinkr/anaconda/lib/python2.7/site-packages/heraspy/callback.pyc in on_train_begin(self, *args)
         37             {
         38                 'params': self.params,
    ---> 39                 'modelJson': json.loads(self.model.to_json()),
         40             }
         41         )
    
    TypeError: 'str' object is not callable
    
    opened by ohadle 1
  • Error when running client in docker

    I got the following error when I run the client inside a Docker container:

    npm WARN optional Skipping failed optional dependency /chokidar/fsevents:
    npm WARN notsup Not compatible with your operating system or architecture: [email protected]
    npm WARN [email protected] requires a peer of eslint@^2.0.0-rc.0 but none was installed.
    npm WARN [email protected] requires a peer of eslint@^2.1.0 but none was installed.
    npm WARN [email protected] requires a peer of eslint@^2.1.0 but none was installed.
    npm ERR! Linux 3.16.0-76-generic
    npm ERR! argv "/usr/bin/nodejs" "/usr/bin/npm" "install"
    npm ERR! node v4.2.6
    npm ERR! npm  v3.5.2
    npm ERR! file sh
    npm ERR! code ELIFECYCLE
    npm ERR! errno ENOENT
    npm ERR! syscall spawn
    
    npm ERR! [email protected] install: `node-gyp rebuild`
    npm ERR! spawn ENOENT
    npm ERR! 
    npm ERR! Failed at the [email protected] install script 'node-gyp rebuild'.
    npm ERR! Make sure you have the latest version of node.js and npm installed.
    npm ERR! If you do, this is most likely a problem with the contextify package,
    npm ERR! not with npm itself.
    npm ERR! Tell the author that this fails on your system:
    npm ERR!     node-gyp rebuild
    npm ERR! You can get information on how to open an issue for this project with:
    npm ERR!     npm bugs contextify
    npm ERR! Or if that isn't available, you can get their info via:
    npm ERR!     npm owner ls contextify
    npm ERR! There is likely additional logging output above.
    
    npm ERR! Please include the following file with any support request:
    npm ERR!     /root/hera/client/npm-debug.log
    

    I used Ubuntu 16.04 and installed the required packages via:

    apt install python-pip git nodejs npm
    
    opened by entron 1