Quiver

Interactive convnet features visualization for Keras

Overview

The quiver workflow

Video Demo

  1. Build your model in Keras

    model = Model(...)
  2. Launch the visualization dashboard with 1 line of code

    quiver_engine.server.launch(model, classes=['cat','dog'], input_folder='./imgs')
  3. Explore layer activations on all the different images in your input folder.

Quickstart

Installation

    pip install quiver_engine

If you want the latest version from the repo:

    pip install git+git://github.com/keplr-io/quiver.git

Usage

Once you have your Keras model, launching Quiver is a one-liner.

    from quiver_engine import server
    server.launch(model)

This will launch the visualization dashboard at localhost:5000.

Options

    server.launch(
        model, # a Keras Model

        classes, # list of output classes from the model to present (if not specified 1000 ImageNet classes will be used)

        top, # number of top predictions to show in the gui (default 5)

        # where to store temporary files generated by quiver (e.g. image files of layers)
        temp_folder='./tmp',

        # a folder where input images are stored
        input_folder='./',

        # the localhost port the dashboard is to be served on
        port=5000,
        # custom data mean
        mean=[123.568, 124.89, 111.56],
        # custom data standard deviation
        std=[52.85, 48.65, 51.56]
    )
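
The mean and std options describe your dataset statistics; conceptually each input channel is normalized as (pixel - mean) / std before being fed to the model. A minimal sketch of that arithmetic in plain Python (illustrative only; not quiver's actual preprocessing code, and `normalize_pixel` is a hypothetical name):

```python
# Illustrative per-channel normalization: (value - mean) / std.
# The mean/std values mirror the launch() example above; the function
# name is hypothetical, not part of quiver's API.
def normalize_pixel(rgb, mean, std):
    return [(value - m) / s for value, m, s in zip(rgb, mean, std)]

mean = [123.568, 124.89, 111.56]
std = [52.85, 48.65, 51.56]

# A pixel exactly at the mean normalizes to zero in every channel.
print(normalize_pixel([123.568, 124.89, 111.56], mean, std))  # → [0.0, 0.0, 0.0]
```
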

Development

Building from master

Check out this repository and run

    cd quiver_engine
    python setup.py develop

Building the Client

    cd quiverboard
    npm install
    export QUIVER_URL=localhost:5000 # or whatever you set your port to be
    npm start

Note that this runs the web application with webpack and hot reloading. If you don't care about that, or you're only in this section because pip install failed for you, tell it to simply build the JavaScript files instead:

    npm run deploy:prod

Credits

  • This is essentially an implementation of some ideas of deepvis and related works.
  • A lot of the pre-, post-, and de-processing code was taken from here and other writings of fchollet.
  • The dashboard makes use of react-redux-starter-kit.

Citing Quiver

    @misc{bianquiver,
      title={Quiver},
      author={Bian, Jake},
      year={2016},
      publisher={GitHub},
      howpublished={\url{https://github.com/keplr-io/quiver}},
    }
Issues
  • Not working on theano backend

    Do you have a plan to make a theano version of this tool?

    opened by toqitahamid 39
  • 404 error

    Hi,

    Thanks for sharing your tool. Unfortunately, I cannot make it work properly, whenever the server is running, it seems to be idle and navigating to localhost: gives me a 404 not found error (so the server responds but there is nothing to display).

    Any thoughts? Thank you!

    Here is a sample of the code that creates the issue:

    from __future__ import print_function
    import numpy as np
    import h5py
    
    from keras.models import Sequential
    from keras.layers import Dense, Dropout, Activation, Flatten
    from keras.layers import Convolution2D, MaxPooling2D
    
    # input image dimensions
    img_rows, img_cols = 28, 28
    # number of convolutional filters to use
    nb_filters = 32
    # size of pooling area for max pooling
    pool_size = (2, 2)
    # convolution kernel size
    kernel_size = (3, 3)
    
    nb_classes = 10
    
    model = Sequential()
    input_shape = (1,img_rows, img_cols)
    
    model.add(Convolution2D(nb_filters, kernel_size[0], kernel_size[1],
                            border_mode='valid', activation='relu',
                            input_shape=input_shape))
    model.add(Convolution2D(nb_filters, kernel_size[0], kernel_size[1], activation='relu'))
    model.add(MaxPooling2D(pool_size=pool_size))
    model.add(Dropout(0.25))
    
    model.add(Flatten())
    model.add(Dense(128, activation='relu'))
    model.add(Dropout(0.5))
    model.add(Dense(nb_classes, activation='softmax'))
    
    model.compile(loss='categorical_crossentropy',
                  optimizer='adadelta',
                  metrics=['accuracy'])
    
    model.load_weights('MNIST_weights.h5')
    
    # launching the visualization server
    from quiver_engine import server
    server.launch(model, temp_folder='./tmp', input_folder='./', port=7777)
    

    and here is the output of this script:

    (tsflow) [email protected]:~/test$ python test.py 
    Using TensorFlow backend.
    127.0.0.1 - - [2016-11-14 17:51:54] "GET / HTTP/1.1" 404 374 0.005049
    127.0.0.1 - - [2016-11-14 17:51:59] "GET / HTTP/1.1" 404 374 0.000828
    127.0.0.1 - - [2016-11-14 17:52:01] "GET / HTTP/1.1" 404 374 0.000582
    
    opened by jdespraz 17
  • TypeError: can't pickle _thread.RLock objects

    Hello, everything's in the title.

    I am currently working with the MaskRCNN network described here: https://github.com/matterport/Mask_RCNN and I can't get past the server.launch. Apparently it can't get a pickle file from what I can tell...

    Any way to get rid of that error? The server launches but the exception comes after that.

    Thanks.

    question 
    opened by Gloupys 11
  • number of channels only 3?

    What happens if the model is for one channel images?

    Exception: Error when checking : expected input_1 to have shape (1, 128, 128, 1) but got array with shape (1, 128, 128, 3)

    to-verify 
    opened by varoudis 10
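
The error above arises because the dashboard loads images with 3 channels while the model expects 1. A generic plain-Python sketch of collapsing RGB to a single channel by averaging (not a quiver feature, and `rgb_to_grayscale` is a hypothetical helper name):

```python
def rgb_to_grayscale(rgb_image):
    """Collapse a 3-channel image (nested lists, innermost [r, g, b])
    to a single channel by averaging the three channel values."""
    return [[sum(px) / 3.0 for px in row] for row in rgb_image]

img = [[[0.0, 0.0, 0.0], [3.0, 6.0, 9.0]]]   # a 1x2 RGB image
print(rgb_to_grayscale(img))                  # → [[0.0, 6.0]]
```
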
  • image files don't show up in right-hand panel

    Started quiver and it appears to launch normally. My model is visible in the left pane. However, no files visible on the right pane (path is correctly set and there are legit image files present).

    Any thoughts?

    I'm using Chrome on a Mac; keras 1.1.1; installed quiver via pip.

    Perhaps it's a browser issue?

    to-verify 
    opened by planaria158 8
  • No module named 'imagenet_utils'

    ImportError: No module named 'imagenet_utils' which seems to be coming from this line

    https://github.com/jakebian/quiver/blob/master/quiver_engine/util.py#L4 which refers to this library https://github.com/fchollet/deep-learning-models/blob/master/imagenet_utils.py

    It seems like this dependency is only relevant when the training set is ImageNet-based, so maybe it would be good to make this more flexible and explicitly include deep-learning-models as a dependency?

    opened by kmader 7
  • get_layer_outputs() assumes tf ordering

    get_layer_outputs() assumes tf ordering when deciding how many tiles are in an output: in line 84, "for z in range(0, layer_outputs.shape[2])".

    layer_outputs.shape[2] is only valid for tf; for theano, use layer_outputs.shape[0].

    Also, img = layer_outputs[:,:,z] is tf-dependent. It should be img = layer_outputs[z,:,:] for theano.

    to-verify 
    opened by isaacgerg 7
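
The backend-aware fix the issue describes can be sketched like this (plain Python on nested lists; `get_channel` is a hypothetical helper, not quiver's actual code):

```python
def get_channel(layer_outputs, z, dim_ordering):
    """Return channel z of a 3-D activation volume (nested lists),
    honoring the backend's dim ordering: theano ('th') stores channels
    first (C, H, W); tensorflow ('tf') stores them last (H, W, C)."""
    if dim_ordering == 'th':
        return layer_outputs[z]                                    # (C, H, W): plane z
    return [[col[z] for col in row] for row in layer_outputs]      # (H, W, C)

# The same 2x2x2 volume in both layouts:
th_vol = [[[1, 2], [3, 4]], [[5, 6], [7, 8]]]    # (C=2, H=2, W=2)
tf_vol = [[[1, 5], [2, 6]], [[3, 7], [4, 8]]]    # (H=2, W=2, C=2)
print(get_channel(th_vol, 0, 'th'))              # → [[1, 2], [3, 4]]
print(get_channel(tf_vol, 0, 'tf'))              # → [[1, 2], [3, 4]]
```
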
  • input_path is truncated before being passed to get_layer_outputs in server.py

    We are running Quiver on Keras' default VGG16 model. Running the application works until we try to click on a layer and try to see the selected image's activations.

    After numerous inserted print() statements, it appears that the code that asks for a generated prediction is giving only the image file name rather than the full path to the image; consequently we were able to see the predictions from VGG when we were in the images' own directory. The error seems to lie with whatever is processing requests from the HTML interface.

    Thank you for producing this tool! We've been trying to find something like this for awhile (and/or write our own much more simplistic version).

    bug to-verify 
    opened by AStrangeQuark 6
  • pip install fails on Ubuntu

    Pip install on Ubuntu 16.04 with Python 2.7 fails with the following error:

    Installing collected packages: quiver-engine
      Running setup.py install for quiver-engine ... error
        Complete output from command /usr/bin/python -u -c "import setuptools, tokenize;__file__='/tmp/pip-build-Yr7lN5/quiver-engine/setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" install --record /tmp/pip-XVYrCV-record/install-record.txt --single-version-externally-managed --compile:
        running install
        running build
        running build_py
        creating build
        creating build/lib.linux-x86_64-2.7
        creating build/lib.linux-x86_64-2.7/quiver_engine
        copying quiver_engine/util.py -> build/lib.linux-x86_64-2.7/quiver_engine
        copying quiver_engine/server.py -> build/lib.linux-x86_64-2.7/quiver_engine
        copying quiver_engine/__init__.py -> build/lib.linux-x86_64-2.7/quiver_engine
        copying quiver_engine/layer_result_generators.py -> build/lib.linux-x86_64-2.7/quiver_engine
        copying quiver_engine/imagenet_utils.py -> build/lib.linux-x86_64-2.7/quiver_engine
        running egg_info
        writing requirements to quiver_engine.egg-info/requires.txt
        writing quiver_engine.egg-info/PKG-INFO
        writing top-level names to quiver_engine.egg-info/top_level.txt
        writing dependency_links to quiver_engine.egg-info/dependency_links.txt
        warning: manifest_maker: standard file '-c' not found
        
        reading manifest file 'quiver_engine.egg-info/SOURCES.txt'
        reading manifest template 'MANIFEST.in'
        writing manifest file 'quiver_engine.egg-info/SOURCES.txt'
        Traceback (most recent call last):
          File "<string>", line 1, in <module>
          File "/tmp/pip-build-Yr7lN5/quiver-engine/setup.py", line 20, in <module>
            'pillow'
          File "/usr/lib/python2.7/distutils/core.py", line 151, in setup
            dist.run_commands()
          File "/usr/lib/python2.7/distutils/dist.py", line 953, in run_commands
            self.run_command(cmd)
          File "/usr/lib/python2.7/distutils/dist.py", line 972, in run_command
            cmd_obj.run()
          File "/usr/lib/python2.7/dist-packages/setuptools/command/install.py", line 61, in run
            return orig.install.run(self)
          File "/usr/lib/python2.7/distutils/command/install.py", line 601, in run
            self.run_command('build')
          File "/usr/lib/python2.7/distutils/cmd.py", line 326, in run_command
            self.distribution.run_command(command)
          File "/usr/lib/python2.7/distutils/dist.py", line 972, in run_command
            cmd_obj.run()
          File "/usr/lib/python2.7/distutils/command/build.py", line 128, in run
            self.run_command(cmd_name)
          File "/usr/lib/python2.7/distutils/cmd.py", line 326, in run_command
            self.distribution.run_command(command)
          File "/usr/lib/python2.7/distutils/dist.py", line 972, in run_command
            cmd_obj.run()
          File "/usr/lib/python2.7/dist-packages/setuptools/command/build_py.py", line 52, in run
            self.build_package_data()
          File "/usr/lib/python2.7/dist-packages/setuptools/command/build_py.py", line 107, in build_package_data
            for package, src_dir, build_dir, filenames in self.data_files:
          File "/usr/lib/python2.7/dist-packages/setuptools/command/build_py.py", line 65, in __getattr__
            self.data_files = self._get_data_files()
          File "/usr/lib/python2.7/dist-packages/setuptools/command/build_py.py", line 79, in _get_data_files
            return list(map(self._get_pkg_data_files, self.packages or ()))
          File "/usr/lib/python2.7/dist-packages/setuptools/command/build_py.py", line 91, in _get_pkg_data_files
            for file in self.find_data_files(package, src_dir)
          File "/usr/lib/python2.7/dist-packages/setuptools/command/build_py.py", line 98, in find_data_files
            + self.package_data.get(package, []))
        TypeError: can only concatenate list (not "str") to list
        
        ----------------------------------------
    Command "/usr/bin/python -u -c "import setuptools, tokenize;__file__='/tmp/pip-build-Yr7lN5/quiver-engine/setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" install --record /tmp/pip-XVYrCV-record/install-record.txt --single-version-externally-managed --compile" failed with error code 1 in /tmp/pip-build-Yr7lN5/quiver-engine/
    
    opened by sushanttripathy 6
  • Error: No data for this layer.

    Hi, I'm just trying to follow your demo with the pre-trained VGG-19 model, but I keep getting this error after selecting a layer: No data for this layer

    On the terminal, I get the following message: Exception: Error when checking : expected zeropadding2d_input_1 to have shape (None, 3, 224, 224) but got array with shape (1, 3, 3, 224)

    Is this a problem due to my images?

    My code:

        model = VGG_19('vgg19_weights.h5')
        server.launch(model, input_folder='./input_images')

    opened by srinivasjdi 5
  • MobilenetV3 support?

    After bugfixing the issues in #83 and the issue in #78, feeding a mobilenetv3 model will not show the model tree on the left side of the panel, but no errors are reported. I am making sure to feed a full model, as I know it is not possible to properly access collapsed keras models:

        training_model.summary()

        Model: "model"
        _________________________________________________________________
        Layer (type)                 Output Shape              Param #
        =================================================================
        input_1 (InputLayer)         [(None, 224, 398, 3)]     0
        sequential (Sequential)      (None, 224, 398, 3)       0
        model (Functional)           (None, 2)                 4228994
        =================================================================
        Total params: 4,228,994
        Trainable params: 2,562
        Non-trainable params: 4,226,432

    Here the collapsed model is in the layer (model).

    I create a submodel like so:

        submodel = training_model.get_layer("model").get_layer("MobilenetV3large")

    (Because of how I build the model, the actual model is nested as a collapsed model "MobilenetV3large" inside the "model" layer.)

    This gives the full model with the correct structure, but it is not shown in the webserver.

    If that matters, the script outputs the following when the server is started:

        ::1 - - [2021-01-18 10:18:35] "GET /model HTTP/1.1" 200 106901 0.058173
        ::1 - - [2021-01-18 10:18:35] "GET /inputs HTTP/1.1" 200 640 0.007560

    opened by ghylander 0
  • RuntimeError: "The layer has never been called and thus has no defined input shape."

    I'm trying to install and run quiver to visualize a Keras model I've trained.

    Running Windows 10 and Anaconda.

    I installed quiver with git on a fresh conda environment

    pip install git+git://github.com/keplr-io/quiver.git
    

    When I tried to launch, I got the imsave error from #78

    After fixing that, I got the following error:

    AttributeError: module 'keras.backend' has no attribute 'image_dim_ordering'
    

    After fixing that (replacing image_dim_ordering as noted in this issue in util.py) I'm getting the following:

    Starting webserver from: D:\Anaconda3\envs\quiver_env\lib\site-packages\quiver_engine
    Traceback (most recent call last):
      File "quiver.py", line 22, in <module>
        server.launch(model)
      File "D:\Anaconda3\envs\quiver_env\lib\site-packages\quiver_engine\server.py", line 157, in launch
        mean=mean, std=std
      File "D:\Anaconda3\envs\quiver_env\lib\site-packages\quiver_engine\server.py", line 47, in get_app
        single_input_shape, input_channels = get_input_config(model)
      File "D:\Anaconda3\envs\quiver_env\lib\site-packages\quiver_engine\util.py", line 43, in get_input_config
        model.get_input_shape_at(0)[1:3],
      File "D:\Anaconda3\envs\quiver_env\lib\site-packages\tensorflow\python\keras\engine\base_layer.py", line 2010, in get_input_shape_at
        'input shape')
      File "D:\Anaconda3\envs\quiver_env\lib\site-packages\tensorflow\python\keras\engine\base_layer.py", line 2603, in _get_node_attribute_at_index
        'and thus has no defined ' + attr_name + '.')
    RuntimeError: The layer has never been called and thus has no defined input shape.
    

    Here's my launch code:

    model = keras.models.load_model('model.hdf5')
    server.launch(model)
    

    Is there a specific Tensorflow/Keras version I need to use or what am I doing wrong?

    opened by tomassams 4
  • build(deps): bump node-sass from 3.13.1 to 4.14.1 in /quiver_engine/quiverboard

    Bumps node-sass from 3.13.1 to 4.14.1.

    dependencies 
    opened by dependabot[bot] 0
  • build(deps-dev): bump codecov from 1.0.1 to 3.7.1 in /quiver_engine/quiverboard

    Bumps codecov from 1.0.1 to 3.7.1.


    dependencies 
    opened by dependabot[bot] 0
  • Works with multi label tabular data?

    Hi, I was just curious whether I can use this for multi-label tabular data. My table has 1000 columns as input data and the target has 19 columns; can anyone explain if Quiver works? I tried but got errors like:

    IndexError: tuple index out of range
    
    opened by Shameendra 0
  • ImportError: cannot import name 'imsave'

    ImportError                               Traceback (most recent call last)
    ----> 1 from scipy.misc import imsave
          2 from quiver_engine.server import launch
          3
          4
          5 launch(model=model, input_folder='./img', port=7000)

    ImportError: cannot import name 'imsave'

    opened by ashishpatel26 2
  • Wrote an article and cited your package

    Hi,

    My name is Tirthajyoti and I write popular articles (not too technical but mainly for beginners in DS and ML) on data science and machine learning.

    I just wrote an article on a compact way to train and visualize conv net models. I initially wrote about the Keract package.

    Today I saw your package and added that too to my article. I will test it later with my code and hopefully, it will work great :-)

    I borrowed your video demo picture in my article for citing your library properly. Hope that is OK.

    Here is the article

    Activation maps for deep learning models in a few lines of code

    opened by tirthajyoti 0
  • Changing of the interface (feature request)

    Unfortunately this doesn't work on Google Colaboratory; could you please make a fix for that? Like without a web server.

    opened by rushic24 0
  • Quiver fails to load data when layers use 'channels_first' config

    For the functional API, when input data uses the 'channels_first' format, the subsequent layers also follow the same 'channels_first' format. It looks like quiver doesn't handle such situations.

    Quiver fails to display data and shows 'No data for this layer' when input image_dim_ordering, conv_output_dim_ordering, max_pooling_dim_ordering is 'channels_first'.

    Heads-up: If this fix can be done in python flask code, I can provide a work around. Please guide me.

    With Regards, Sudheer.

    opened by sudheerExperiments 0