A collection of infrastructure and tools for research in neural network interpretability.

Overview

Lucid


Lucid is a collection of infrastructure and tools for research in neural network interpretability.

We're not currently supporting TensorFlow 2!

If you'd like to use Lucid in Colab, which defaults to TensorFlow 2, add this magic to a cell before you import TensorFlow:

%tensorflow_version 1.x

Lucid is research code, not production code. We provide no guarantee it will work for your use case. Lucid is maintained by volunteers who are unable to provide significant technical support.


Notebooks

Start visualizing neural networks with no setup. The following notebooks run right from your browser, thanks to Colaboratory. It's a Jupyter notebook environment that requires no setup to use and runs entirely in the cloud.

You can run the notebooks on your local machine, too. Clone the repository and find them in the notebooks subfolder. You will need to run a local instance of the Jupyter notebook environment to execute them.

Tutorial Notebooks

Feature Visualization Notebooks

Notebooks corresponding to the Feature Visualization article

Building Blocks Notebooks

Notebooks corresponding to the Building Blocks of Interpretability article





Differentiable Image Parameterizations Notebooks

Notebooks corresponding to the Differentiable Image Parameterizations article


Activation Atlas Notebooks

Notebooks corresponding to the Activation Atlas article

• Collecting activations
• Simple activation atlas
• Class activation atlas
• Activation atlas patches

Miscellaneous Notebooks



Recommended Reading

Related Talks

Community

We're in #proj-lucid on the Distill Slack (join link).

We'd love to see more people doing research in this space!


Additional Information

License and Disclaimer

You may use this software under the Apache 2.0 License. See LICENSE.

This project is research code. It is not an official Google product.

Special consideration for TensorFlow dependency

Lucid requires TensorFlow, but does not explicitly depend on it in setup.py. Due to the way TensorFlow is packaged, and some deficiencies in how pip handles dependencies, specifying either the GPU or the non-GPU version of TensorFlow would conflict with the version you may already have installed.

If you don't want to add your own dependency on TensorFlow, you can specify which TensorFlow version you want Lucid to install by selecting from extras_require, e.g. pip install lucid[tf] or pip install lucid[tf_gpu].

In practice, we recommend you use your already-installed version of TensorFlow.

Comments
  • Errors when using official Resnet model


    Hi! I can't get Lucid to work with a model trained on the "official ResNet" code: https://github.com/tensorflow/models/tree/master/official/resnet

    Steps to reproduce

    1. The official pretrained model is unusable because of a hardcoded batch dimension (64). I slightly tweaked the model export code to get a free batch dimension, made some preprocessing tweaks, and trained a ResNet-50 v1 model on ImageNet.
    2. Export frozen graph from SavedModel: !python ~/miniconda3/lib/python3.6/site-packages/tensorflow/python/tools/freeze_graph.py --input_saved_model_dir exported_model/1542646825 --output_node_names softmax_tensor --output_graph /tmp/resnet50.pb.zoo
    3. Load frozen graph into custom Lucid model:
    from lucid.modelzoo.vision_base import Model
    class MyResnet(Model):
        image_shape = [224, 224, 3]
        model_path  = '/tmp/resnet50.pb.zoo'
        labels_path = 'gs://modelzoo/labels/ImageNet_standard.txt'
        image_value_range = (-117, 255-117)
        input_name = 'input_tensor'
    model = MyResnet()
    model.load_graphdef()
    
    4. Try to render a response from any channel of the last ResNet block:
    obj = objectives.channel("resnet_model/block_layer4", 42)
    param_f = lambda: param.image(256, fft=True, decorrelate=True, batch=1)
    _ = render.render_vis(model, obj, param_f=param_f)
    
    5. Fail!
    InvalidArgumentError: Inputs to operation gradients/import/resnet_model/Relu_48_grad/Select of type Select must have the same size and shape.  Input 0: [1,2048,9,9] != input 1: [1,2048,7,7]
    	 [[{{node gradients/import/resnet_model/Relu_48_grad/Select}} = Select[T=DT_FLOAT, _device="/job:localhost/replica:0/task:0/device:GPU:0"](gradients/import/resnet_model/Relu_48_grad/Less, gradients/import/resnet_model/Relu_42_grad/Select-1-TransposeNHWCToNCHW-LayoutOptimizer, gradients/import/resnet_model/Relu_48_grad/Select-2-TransposeNHWCToNCHW-LayoutOptimizer)]]
    	 [[{{node Mean/_45}} = _Recv[client_terminated=false, recv_device="/job:localhost/replica:0/task:0/device:CPU:0", send_device="/job:localhost/replica:0/task:0/device:GPU:0", send_device_incarnation=1, tensor_name="edge_3442_Mean", tensor_type=DT_FLOAT, _device="/job:localhost/replica:0/task:0/device:CPU:0"]()]]
    

    This is a strange error, because the model has a fixed shape for all dimensions except batch; a tensor with shape [1,2048,9,9] should not exist at all. The size of the 'bad dimension' (9 in this case) depends on the rendered image size: with a larger image size (512), it becomes 12 or 13. Lucid doesn't accept NCHW, so I tried a) training the model in NCHW and exporting the result to NHWC, and b) training the model directly in NHWC, with the same results.
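
    For what it's worth, one hedged thing to try (a sketch, not a confirmed fix): Lucid's default transforms (pad/jitter/scale/rotate) change the spatial size every step, so rendering at the model's fixed 224x224 input size with the size-changing transforms disabled keeps every intermediate tensor at the shape the frozen graph was built for:

    # Sketch: render at the model's native size and skip random transforms.
    param_f = lambda: param.image(224, fft=True, decorrelate=True, batch=1)
    _ = render.render_vis(model, obj, param_f=param_f, transforms=[])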

    I made a gist with complete code sample and model sanity check: https://gist.github.com/Arturus/c93052e951d5bc04fcd79c714d6175b6

    Can you help me?

    opened by Arturus 15
  • How can I find a particular neuron?


    I find that the CPPN parametrization produces very beautiful images, and I would like to experiment with feature visualizations similar to those in the Differentiable Parameterizations paper.

    Did someone already do a reverse mapping, with which we can visually search for a beautiful feature and find its corresponding neuron identifier?
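
    Not a reverse mapping, but a brute-force sketch of the forward direction (assuming InceptionV1 from the model zoo; the layer name is just an example): render a cheap thumbnail per channel, browse them visually, and read the channel index off the one you like.

    import lucid.modelzoo.vision_models as models
    import lucid.optvis.objectives as objectives
    import lucid.optvis.render as render

    model = models.InceptionV1()
    model.load_graphdef()

    layer = "mixed4a"  # repeat for other layers of interest
    for ch in range(32):
        # Low threshold => fast, rough thumbnails; save imgs[0][0] named by
        # (layer, ch) and browse the results to recover the neuron identifier.
        imgs = render.render_vis(model, objectives.channel(layer, ch),
                                 thresholds=(128,), verbose=False)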

    opened by lemairecarl 15
  • After some layers, the image output by render.render_vis becomes grey


    When I try to use Lucid on my own network, the output of a lot of filters is just gray (some pixels vary, but the returned image looks gray to a human). The deeper the layer, the more filters are gray; in the highest layers everything returned is completely gray. Is there something I am doing wrong?
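
    A quick diagnostic sketch (names follow the Lucid API; obj stands for whatever objective came out gray): near-constant pixel statistics confirm the output really is gray, and a mismatched image_value_range on the Model subclass is a common silent cause of exactly this symptom.

    import numpy as np

    imgs = render.render_vis(model, obj)
    img = np.asarray(imgs[-1][0])           # last threshold, first batch element
    print(img.min(), img.max(), img.std())  # std near 0 => effectively gray
    # If so, double-check image_value_range on your Model subclass: a wrong
    # range fails silently and often yields washed-out or gray visualizations.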

    opened by JaspervDalen 13
  • Adding More Models


    TF Slim Models

    • ✅ inception_v1
    • ✅ inception_v2
    • ✅ inception_v3
    • ✅ inception_v4
    • ✅ inception_resnet_v2
    • ✅ resnet_v1_50
    • ✅ resnet_v1_101
    • ✅ resnet_v1_152
    • ✅ resnet_v2_50
    • ✅ resnet_v2_101
    • ✅ resnet_v2_152
    • ✅ vgg_16
    • ✅ vgg_19
    • ✅ mobilenet_v1
    • ✅ mobilenet_v1_050
    • ✅ mobilenet_v1_025
    • ✅ nasnet-a_mobile
    • ✅ nasnet-a_large
    • ✅ pnasnet-5_large
    • ✅ pnasnet-5_mobile

    I couldn't create a list of layers for some models.

    Caffe Models

    Issues

    • โš ๏ธ Some models may not have the correct list of labels.
    • โš ๏ธ Presently assuming all caffe models are BGR but it's possible some aren't in cases where they were originally trained in a different framework.
    • โš ๏ธ Some models may not have the correct input range.
    • ๐Ÿ—๏ธ Layer lists for some models are empty.

    At the moment, I'm inferring all of these from the convention of the framework the model was trained in, but this may not be reliable.

    opened by colah 12
  • Experimental API for saving and loading models


    The biggest pain point of using lucid seems to be importing models. Most users wish to visualize their own models, and need to get them into a format lucid can use. Unfortunately, this presently involves several steps, which can be unintuitive:

    1. Save a graph
    2. Convert it into a frozen graph
    3. Write a Lucid modelzoo.Model class, filling in values like image_value_range

    This PR proposes an alternate API with only one clearly defined step for preparing your model for use in Lucid. This import path is only for TensorFlow users.

    We assume that the user can construct an inference graph of their model. This should be easy for anyone training models (because they will have one for tracking accuracy) and for anyone using a model for inference.

    At this point, the user simply calls the save_model() function.

    # Run this code with inference graph in default graph and session
    save_model(
        save_path    = 'gs://.../test.pb',  # Local paths are also fine!
        input_name   = 'input',
        output_names = ['prob/Softmax'],
        image_shape  = [224, 224, 3],
        image_value_range = [0,1]
      )
    

    If the user successfully does this with the correct arguments, they should be done. All metadata is baked into the saved model (more on this later), meaning it never needs to be specified again.

    To use a model in Lucid, the user simply does:

    model = load_model('gs://.../test.pb')
    

    And they're ready to go!
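
    For instance, the loaded model should plug straight into the usual rendering path; a sketch reusing the output name from the example above (the channel index is arbitrary):

    import lucid.optvis.objectives as objectives
    import lucid.optvis.render as render

    model = load_model('gs://.../test.pb')
    obj = objectives.channel('prob/Softmax', 0)  # arbitrary class channel
    _ = render.render_vis(model, obj)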

    Suggesting save code for users

    The above import path is still unnecessarily painful, since we can often infer many of the arguments to save_model() with high confidence.

    I considered just giving save_model() optional arguments and trying to infer unspecified ones where possible. However, this struck me as an API that would lead to user confusion, since we can't always infer these arguments, aren't completely certain when we can, and some arguments just can't be inferred.

    Instead, I went with a suggest_save_code() function, which is simply invoked:

    # With inference graph in default graph
    suggest_save_code()
    

    If our heuristics can determine arguments, it will print out something like the following:

    >>> suggest_save_code()
    
    # Inferred: input_name = input  (only Placeholder)
    # Inferred: image_shape = [224, 224, 3]
    # Inferred: output_names = ['prob/Softmax']  (Softmax ops)
    
    # Sanity check all inferred values before using this code!
    save_model(
        save_path    = 'gs://save/model.pb', # TODO: replace
        input_name   = 'input',
        output_names = ['prob/Softmax'],
        image_shape  = [224, 224, 3],
        image_value_range =                  # TODO (eg. [0, 1], [0, 255], [-117, 138] )
        # WARNING: Incorrect `image_value_range` is the most common cause of feature 
        #     visualization bugs! It will fail silently with incorrect visualizations!
      )
    

    In other cases, when the arguments can't be inferred, the output will be something like this, giving the user an empty template to fill in.

    >>> suggest_save_code()
    
    # Sanity check all inferred values before using this code!
    save_model(
        save_path    = 'gs://save/model.pb', # TODO: replace
        input_name   =   ,                   # TODO (eg. 'input' )
        output_names = [ ],                  # TODO (eg. ['logits'] )
        image_shape  =   ,                   # TODO (eg. [224, 224, 3] )
        image_value_range =                  # TODO (eg. [0, 1], [0, 255], [-117, 138] )
        # WARNING: Incorrect `image_value_range` is the most common cause of feature 
        #     visualization bugs! It will fail silently with incorrect visualizations!
      )
    

    Input Range

    My biggest concern with this API -- and any other I can think of -- is misspecified image_value_ranges. Unlike other errors, this will not cause visualization to fail. Instead, it will cause bad or incorrect visualizations to be produced, failing silently.

    There isn't any reasonable way for us to catch these errors. (My best thought would be to test accuracy on ImageNet for different common ranges.)

    At the moment, trying to warn users that this is a common silent failure mode seems like the best bet.
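
    The accuracy check mentioned above could look something like this sketch (probe_images/probe_labels and run_classifier are hypothetical helpers: a handful of correctly labeled images scaled to [0,1], and a function that runs the imported graph and returns top-1 class ids):

    import numpy as np

    candidate_ranges = [(0, 1), (0, 255), (-1, 1), (-117, 138)]
    for lo, hi in candidate_ranges:
        scaled = probe_images * (hi - lo) + lo  # map [0,1] -> [lo, hi]
        preds = run_classifier(scaled)          # hypothetical helper
        acc = np.mean(preds == probe_labels)
        print("range (%s, %s): top-1 accuracy %.2f" % (lo, hi, acc))
    # The range that yields sane accuracy is likely the true input range.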

    Storing Metadata

    This API depends on us being able to save metadata along with the graph. This is critical to making it into "one save step" instead of two separate steps of saving and then later specifying metadata on import.

    For this code, I do something kind of evil to accomplish this: I inject a tf.constant op into the graph named lucid_meta_json, containing a JSON blob of metadata. On import, we can detect and extract this node to recover the metadata. This is a completely legal TensorFlow graph! But not really being used as intended...
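
    A minimal sketch of that trick, assuming TF1 graph mode (the node name lucid_meta_json follows the description above):

    import json
    import tensorflow as tf

    # On save: bake the metadata into the graph as a constant string tensor.
    meta = {"input_name": "input", "image_shape": [224, 224, 3],
            "image_value_range": [0, 1]}
    tf.constant(json.dumps(meta), name="lucid_meta_json")

    # On import: scan the GraphDef for that node and parse the blob back out.
    def extract_meta(graph_def):
        for node in graph_def.node:
            if node.name == "lucid_meta_json":
                return json.loads(node.attr["value"].tensor.string_val[0])
        return None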

    In the future, we might be able to switch over to SavedModel and use their actual supported mechanisms for specifying metadata.

    opened by colah 9
  • Dilation Layers not working  (Custom Model)


    Lucid version: 0.3.9
    TensorFlow version: 1.15.x
    Python: 3.7

    $ lsb_release -a
    No LSB modules are available.
    Distributor ID:	Ubuntu
    Description:	Ubuntu 18.04.4 LTS
    Release:	18.04
    Codename:	bionic
    
    

    Hi. I just tried out Lucid and really loved it. I tested almost all visualizations for models from the model zoo, then decided to import my custom model for visualization. I managed to convert my Keras model into a single graph.pb file by following the instructions here. My original model was written in TensorFlow 2, converted to a frozen graph using this tutorial, and loaded into TensorFlow 1 using the "usual" graph parsing from a pb file. After loading the graph in TF1, I saved it again using the command from the instructions:

        Model.save(
          'final_output_pb_path',
          image_shape=[256,256, 1],
          input_name='input_name',
          output_names=['output_node_name'],
          image_value_range=[-50,50],
        )
    

    The Original Model

    input_tensor = Input(shape=[256, 256, 1])
    x = Conv2D(64,3, padding="same", activation='relu', name='conv1_1')(input_tensor)
    x = Conv2D(64,3, strides=[2,2], padding="same",activation='relu', name='conv1_2')(x)
    x = BatchNormalization()(x)
    
    x = Conv2D(128,3, padding="same", activation='relu', name='conv2_1')(x)
    x = Conv2D(128,3, strides=[2,2], padding="same",activation='relu', name='conv2_2')(x)
    x = BatchNormalization()(x)
    
    x = Conv2D(256,3, padding="same", activation='relu', name='conv3_1')(x)
    x = Conv2D(256,3, strides=[2,2], padding="same",activation='relu', name='conv3_2')(x)
    x = BatchNormalization()(x)
    
    x = Conv2D(512,3, padding="same", activation='relu', name='conv4_1')(x)
    x = Conv2D(512,3,  padding="same",activation='relu', name='conv4_2')(x)
    x = Conv2D(512,3,  padding="same",activation='relu', name='conv4_3')(x)
    x = BatchNormalization()(x)
    
    x = Conv2D(512,3, padding="same", activation='relu', name='conv5_1', dilation_rate=2)(x)
    x = Conv2D(512,3,  padding="same",activation='relu', name='conv5_2', dilation_rate=2)(x)
    x = Conv2D(512,3,  padding="same",activation='relu', name='conv5_3', dilation_rate=2)(x)
    x = BatchNormalization()(x)
    
    x = Conv2D(512,3, padding="same", activation='relu', name='conv6_1', dilation_rate=2)(x)
    x = Conv2D(512,3,  padding="same",activation='relu', name='conv6_2', dilation_rate=2)(x)
    x = Conv2D(512,3,  padding="same",activation='relu', name='conv6_3', dilation_rate=2)(x)
    x = BatchNormalization()(x)
    
    x = Conv2D(512,3, padding="same", activation='relu', name='conv7_1')(x)
    x = Conv2D(512,3,  padding="same",activation='relu', name='conv7_2')(x)
    x = Conv2D(512,3,  padding="same",activation='relu', name='conv7_3')(x)
    x = BatchNormalization()(x)
    
    x = UpSampling2D(size=[2,2])(x)
    
    x = Conv2D(256,3, padding="same", activation='relu', name='conv8_1')(x)
    x = Conv2D(265,3,  padding="same",activation='relu', name='conv8_2')(x)
    x = Conv2D(256,3,  padding="same",activation='relu', name='conv8_3')(x)
    x = BatchNormalization()(x)
    

    I managed to generate visualizations up to the layers without dilations (YAY!!!):

    model = Model.load('final_output_pb_path')
    param_f = lambda: param.color.to_valid_rgb(param.spatial.naive((1, 256,256,1)))
    _ = render.render_vis(model, "import/functional_1/conv4_3/Relu:0", param_f=param_f)
    

    feature-viz-colorization

    The Problem

    As soon as I try to visualize layers with dilation (and those after them), I get a weird error:

    _ = render.render_vis(model, "import/functional_1/conv5_1/Relu:0", param_f=param_f)
    
    ---------------------------------------------------------------------------
    InvalidArgumentError                      Traceback (most recent call last)
    ~/anaconda3/envs/lucid/lib/python3.7/site-packages/tensorflow_core/python/client/session.py in _do_call(self, fn, *args)
       1364     try:
    -> 1365       return fn(*args)
       1366     except errors.OpError as e:
    
    ~/anaconda3/envs/lucid/lib/python3.7/site-packages/tensorflow_core/python/client/session.py in _run_fn(feed_dict, fetch_list, target_list, options, run_metadata)
       1349       return self._call_tf_sessionrun(options, feed_dict, fetch_list,
    -> 1350                                       target_list, run_metadata)
       1351 
    
    ~/anaconda3/envs/lucid/lib/python3.7/site-packages/tensorflow_core/python/client/session.py in _call_tf_sessionrun(self, options, feed_dict, fetch_list, target_list, run_metadata)
       1442                                             fetch_list, target_list,
    -> 1443                                             run_metadata)
       1444 
    
    InvalidArgumentError: 2 root error(s) found.
      (0) Invalid argument: padded_shape[0]=37 is not divisible by block_shape[0]=2
    	 [[{{node import/import/functional_1/conv5_1/Conv2D/SpaceToBatchND}}]]
      (1) Invalid argument: padded_shape[0]=37 is not divisible by block_shape[0]=2
    	 [[{{node import/import/functional_1/conv5_1/Conv2D/SpaceToBatchND}}]]
    	 [[Mean/_29]]
    0 successful operations.
    0 derived errors ignored.
    
    During handling of the above exception, another exception occurred:
    
    InvalidArgumentError                      Traceback (most recent call last)
    <ipython-input-46-6d04f696ee88> in <module>
    ----> 1 _ = render.render_vis(model, "import/functional_1/conv5_1/Relu:0", param_f=param_f)
    
    ~/anaconda3/envs/lucid/lib/python3.7/site-packages/lucid/optvis/render.py in render_vis(model, objective_f, param_f, optimizer, transforms, thresholds, print_objectives, verbose, relu_gradient_override, use_fixed_seed)
        101     try:
        102       for i in range(max(thresholds)+1):
    --> 103         loss_, _ = sess.run([loss, vis_op])
        104         if i in thresholds:
        105           vis = t_image.eval()
    
    ~/anaconda3/envs/lucid/lib/python3.7/site-packages/tensorflow_core/python/client/session.py in run(self, fetches, feed_dict, options, run_metadata)
        954     try:
        955       result = self._run(None, fetches, feed_dict, options_ptr,
    --> 956                          run_metadata_ptr)
        957       if run_metadata:
        958         proto_data = tf_session.TF_GetBuffer(run_metadata_ptr)
    
    ~/anaconda3/envs/lucid/lib/python3.7/site-packages/tensorflow_core/python/client/session.py in _run(self, handle, fetches, feed_dict, options, run_metadata)
       1178     if final_fetches or final_targets or (handle and feed_dict_tensor):
       1179       results = self._do_run(handle, final_targets, final_fetches,
    -> 1180                              feed_dict_tensor, options, run_metadata)
       1181     else:
       1182       results = []
    
    ~/anaconda3/envs/lucid/lib/python3.7/site-packages/tensorflow_core/python/client/session.py in _do_run(self, handle, target_list, fetch_list, feed_dict, options, run_metadata)
       1357     if handle is None:
       1358       return self._do_call(_run_fn, feeds, fetches, targets, options,
    -> 1359                            run_metadata)
       1360     else:
       1361       return self._do_call(_prun_fn, handle, feeds, fetches)
    
    ~/anaconda3/envs/lucid/lib/python3.7/site-packages/tensorflow_core/python/client/session.py in _do_call(self, fn, *args)
       1382                     '\nsession_config.graph_options.rewrite_options.'
       1383                     'disable_meta_optimizer = True')
    -> 1384       raise type(e)(node_def, op, message)
       1385 
       1386   def _extend_graph(self):
    

    InvalidArgumentError: 2 root error(s) found. (0) Invalid argument: padded_shape[0]=37 is not divisible by block_shape[0]=2 [[node import/import/functional_1/conv5_1/Conv2D/SpaceToBatchND (defined at /home/ml/anaconda3/envs/lucid/lib/python3.7/site-packages/tensorflow_core/python/framework/ops.py:1748) ]] (1) Invalid argument: padded_shape[0]=37 is not divisible by block_shape[0]=2 [[node import/import/functional_1/conv5_1/Conv2D/SpaceToBatchND (defined at /home/ml/anaconda3/envs/lucid/lib/python3.7/site-packages/tensorflow_core/python/framework/ops.py:1748) ]] [[Mean/_29]] 0 successful operations. 0 derived errors ignored.

    Original stack trace for 'import/import/functional_1/conv5_1/Conv2D/SpaceToBatchND':
      File "/home/ml/anaconda3/envs/lucid/lib/python3.7/runpy.py", line 193, in _run_module_as_main
        "__main__", mod_spec)
      File "/home/ml/anaconda3/envs/lucid/lib/python3.7/runpy.py", line 85, in _run_code
        exec(code, run_globals)
      File "/home/ml/anaconda3/envs/lucid/lib/python3.7/site-packages/ipykernel_launcher.py", line 16, in <module>
        app.launch_new_instance()
      File "/home/ml/anaconda3/envs/lucid/lib/python3.7/site-packages/traitlets/config/application.py", line 664, in launch_instance
        app.start()
      File "/home/ml/anaconda3/envs/lucid/lib/python3.7/site-packages/ipykernel/kernelapp.py", line 612, in start
        self.io_loop.start()
      File "/home/ml/anaconda3/envs/lucid/lib/python3.7/site-packages/tornado/platform/asyncio.py", line 149, in start
        self.asyncio_loop.run_forever()
      File "/home/ml/anaconda3/envs/lucid/lib/python3.7/asyncio/base_events.py", line 541, in run_forever
        self._run_once()
      File "/home/ml/anaconda3/envs/lucid/lib/python3.7/asyncio/base_events.py", line 1786, in _run_once
        handle._run()
      File "/home/ml/anaconda3/envs/lucid/lib/python3.7/asyncio/events.py", line 88, in _run
        self._context.run(self._callback, *self._args)
      File "/home/ml/anaconda3/envs/lucid/lib/python3.7/site-packages/tornado/ioloop.py", line 690, in <lambda>
        lambda f: self._run_callback(functools.partial(callback, future))
      File "/home/ml/anaconda3/envs/lucid/lib/python3.7/site-packages/tornado/ioloop.py", line 743, in _run_callback
        ret = callback()
      File "/home/ml/anaconda3/envs/lucid/lib/python3.7/site-packages/tornado/gen.py", line 787, in inner
        self.run()
      File "/home/ml/anaconda3/envs/lucid/lib/python3.7/site-packages/tornado/gen.py", line 748, in run
        yielded = self.gen.send(value)
      File "/home/ml/anaconda3/envs/lucid/lib/python3.7/site-packages/ipykernel/kernelbase.py", line 365, in process_one
        yield gen.maybe_future(dispatch(*args))
      File "/home/ml/anaconda3/envs/lucid/lib/python3.7/site-packages/tornado/gen.py", line 209, in wrapper
        yielded = next(result)
      File "/home/ml/anaconda3/envs/lucid/lib/python3.7/site-packages/ipykernel/kernelbase.py", line 268, in dispatch_shell
        yield gen.maybe_future(handler(stream, idents, msg))
      File "/home/ml/anaconda3/envs/lucid/lib/python3.7/site-packages/tornado/gen.py", line 209, in wrapper
        yielded = next(result)
      File "/home/ml/anaconda3/envs/lucid/lib/python3.7/site-packages/ipykernel/kernelbase.py", line 545, in execute_request
        user_expressions, allow_stdin,
      File "/home/ml/anaconda3/envs/lucid/lib/python3.7/site-packages/tornado/gen.py", line 209, in wrapper
        yielded = next(result)
      File "/home/ml/anaconda3/envs/lucid/lib/python3.7/site-packages/ipykernel/ipkernel.py", line 306, in do_execute
        res = shell.run_cell(code, store_history=store_history, silent=silent)
      File "/home/ml/anaconda3/envs/lucid/lib/python3.7/site-packages/ipykernel/zmqshell.py", line 536, in run_cell
        return super(ZMQInteractiveShell, self).run_cell(*args, **kwargs)
      File "/home/ml/anaconda3/envs/lucid/lib/python3.7/site-packages/IPython/core/interactiveshell.py", line 2867, in run_cell
        raw_cell, store_history, silent, shell_futures)
      File "/home/ml/anaconda3/envs/lucid/lib/python3.7/site-packages/IPython/core/interactiveshell.py", line 2895, in _run_cell
        return runner(coro)
      File "/home/ml/anaconda3/envs/lucid/lib/python3.7/site-packages/IPython/core/async_helpers.py", line 68, in _pseudo_sync_runner
        coro.send(None)
      File "/home/ml/anaconda3/envs/lucid/lib/python3.7/site-packages/IPython/core/interactiveshell.py", line 3072, in run_cell_async
        interactivity=interactivity, compiler=compiler, result=result)
      File "/home/ml/anaconda3/envs/lucid/lib/python3.7/site-packages/IPython/core/interactiveshell.py", line 3263, in run_ast_nodes
        if (await self.run_code(code, result,  async_=asy)):
      File "/home/ml/anaconda3/envs/lucid/lib/python3.7/site-packages/IPython/core/interactiveshell.py", line 3343, in run_code
        exec(code_obj, self.user_global_ns, self.user_ns)
      File "<ipython-input-46-6d04f696ee88>", line 1, in <module>
        _ = render.render_vis(model, "import/functional_1/conv5_1/Relu:0", param_f=param_f)
      File "/home/ml/anaconda3/envs/lucid/lib/python3.7/site-packages/lucid/optvis/render.py", line 95, in render_vis
        relu_gradient_override)
      File "/home/ml/anaconda3/envs/lucid/lib/python3.7/site-packages/lucid/optvis/render.py", line 177, in make_vis_T
        T = import_model(model, transform_f(t_image), t_image)
      File "/home/ml/anaconda3/envs/lucid/lib/python3.7/site-packages/lucid/optvis/render.py", line 257, in import_model
        T_ = model.import_graph(t_image, scope=scope, forget_xy_shape=True, input_map=input_map)
      File "/home/ml/anaconda3/envs/lucid/lib/python3.7/site-packages/lucid/modelzoo/vision_base.py", line 201, in import_graph
        self.graph_def, final_input_map, name=scope)
      File "/home/ml/anaconda3/envs/lucid/lib/python3.7/site-packages/tensorflow_core/python/util/deprecation.py", line 507, in new_func
        return func(*args, **kwargs)
      File "/home/ml/anaconda3/envs/lucid/lib/python3.7/site-packages/tensorflow_core/python/framework/importer.py", line 405, in import_graph_def
        producer_op_list=producer_op_list)
      File "/home/ml/anaconda3/envs/lucid/lib/python3.7/site-packages/tensorflow_core/python/framework/importer.py", line 517, in _import_graph_def_internal
        _ProcessNewOps(graph)
      File "/home/ml/anaconda3/envs/lucid/lib/python3.7/site-packages/tensorflow_core/python/framework/importer.py", line 243, in _ProcessNewOps
        for new_op in graph._add_new_tf_operations(compute_devices=False):  # pylint: disable=protected-access
      File "/home/ml/anaconda3/envs/lucid/lib/python3.7/site-packages/tensorflow_core/python/framework/ops.py", line 3561, in _add_new_tf_operations
        for c_op in c_api_util.new_tf_operations(self)
      File "/home/ml/anaconda3/envs/lucid/lib/python3.7/site-packages/tensorflow_core/python/framework/ops.py", line 3561, in <listcomp>
        for c_op in c_api_util.new_tf_operations(self)
      File "/home/ml/anaconda3/envs/lucid/lib/python3.7/site-packages/tensorflow_core/python/framework/ops.py", line 3451, in _create_op_from_tf_operation
        ret = Operation(c_op, self)
      File "/home/ml/anaconda3/envs/lucid/lib/python3.7/site-packages/tensorflow_core/python/framework/ops.py", line 1748, in __init__
        self._traceback = tf_stack.extract_stack()
    
    

    Sample conv5_1 node protobuf:

    name: "functional_1/conv5_1/Conv2D"
    op: "Conv2D"
    input: "functional_1/conv5_1/Conv2D/SpaceToBatchND"
    input: "functional_1/conv5_1/Conv2D/ReadVariableOp"
    attr {
      key: "T"
      value {
        type: DT_FLOAT
      }
    }
    attr {
      key: "data_format"
      value {
        s: "NHWC"
      }
    }
    attr {
      key: "dilations"
      value {
        list {
          i: 1
          i: 2
          i: 2
          i: 1
        }
      }
    }
    attr {
      key: "padding"
      value {
        s: "SAME"
      }
    }
    attr {
      key: "strides"
      value {
        list {
          i: 1
          i: 1
          i: 1
          i: 1
        }
      }
    }
    attr {
      key: "use_cudnn_on_gpu"
      value {
        b: true
      }
    }
    
    

    Final Graph node names:

    import/x
    import/functional_1/conv1_1/Conv2D/ReadVariableOp/resource
    import/functional_1/conv1_1/Conv2D/ReadVariableOp
    import/functional_1/conv1_1/Conv2D
    import/functional_1/conv1_1/BiasAdd/ReadVariableOp/resource
    import/functional_1/conv1_1/BiasAdd/ReadVariableOp
    import/functional_1/conv1_1/BiasAdd
    import/functional_1/conv1_1/Relu
    import/functional_1/conv1_2/Conv2D/ReadVariableOp/resource
    import/functional_1/conv1_2/Conv2D/ReadVariableOp
    import/functional_1/conv1_2/Conv2D
    import/functional_1/conv1_2/BiasAdd/ReadVariableOp/resource
    import/functional_1/conv1_2/BiasAdd/ReadVariableOp
    import/functional_1/conv1_2/BiasAdd
    import/functional_1/conv1_2/Relu
    import/functional_1/batch_normalization/ReadVariableOp/resource
    import/functional_1/batch_normalization/ReadVariableOp
    import/functional_1/batch_normalization/ReadVariableOp_1/resource
    import/functional_1/batch_normalization/ReadVariableOp_1
    import/functional_1/batch_normalization/FusedBatchNormV3/ReadVariableOp/resource
    import/functional_1/batch_normalization/FusedBatchNormV3/ReadVariableOp
    import/functional_1/batch_normalization/FusedBatchNormV3/ReadVariableOp_1/resource
    import/functional_1/batch_normalization/FusedBatchNormV3/ReadVariableOp_1
    import/functional_1/batch_normalization/FusedBatchNormV3
    import/functional_1/conv2_1/Conv2D/ReadVariableOp/resource
    import/functional_1/conv2_1/Conv2D/ReadVariableOp
    import/functional_1/conv2_1/Conv2D
    import/functional_1/conv2_1/BiasAdd/ReadVariableOp/resource
    import/functional_1/conv2_1/BiasAdd/ReadVariableOp
    import/functional_1/conv2_1/BiasAdd
    import/functional_1/conv2_1/Relu
    import/functional_1/conv2_2/Conv2D/ReadVariableOp/resource
    import/functional_1/conv2_2/Conv2D/ReadVariableOp
    import/functional_1/conv2_2/Conv2D
    import/functional_1/conv2_2/BiasAdd/ReadVariableOp/resource
    import/functional_1/conv2_2/BiasAdd/ReadVariableOp
    import/functional_1/conv2_2/BiasAdd
    import/functional_1/conv2_2/Relu
    import/functional_1/batch_normalization_1/ReadVariableOp/resource
    import/functional_1/batch_normalization_1/ReadVariableOp
    import/functional_1/batch_normalization_1/ReadVariableOp_1/resource
    import/functional_1/batch_normalization_1/ReadVariableOp_1
    import/functional_1/batch_normalization_1/FusedBatchNormV3/ReadVariableOp/resource
    import/functional_1/batch_normalization_1/FusedBatchNormV3/ReadVariableOp
    import/functional_1/batch_normalization_1/FusedBatchNormV3/ReadVariableOp_1/resource
    import/functional_1/batch_normalization_1/FusedBatchNormV3/ReadVariableOp_1
    import/functional_1/batch_normalization_1/FusedBatchNormV3
    import/functional_1/conv3_1/Conv2D/ReadVariableOp/resource
    import/functional_1/conv3_1/Conv2D/ReadVariableOp
    import/functional_1/conv3_1/Conv2D
    import/functional_1/conv3_1/BiasAdd/ReadVariableOp/resource
    import/functional_1/conv3_1/BiasAdd/ReadVariableOp
    import/functional_1/conv3_1/BiasAdd
    import/functional_1/conv3_1/Relu
    import/functional_1/conv3_2/Conv2D/ReadVariableOp/resource
    import/functional_1/conv3_2/Conv2D/ReadVariableOp
    import/functional_1/conv3_2/Conv2D
    import/functional_1/conv3_2/BiasAdd/ReadVariableOp/resource
    import/functional_1/conv3_2/BiasAdd/ReadVariableOp
    import/functional_1/conv3_2/BiasAdd
    import/functional_1/conv3_2/Relu
    import/functional_1/batch_normalization_2/ReadVariableOp/resource
    import/functional_1/batch_normalization_2/ReadVariableOp
    import/functional_1/batch_normalization_2/ReadVariableOp_1/resource
    import/functional_1/batch_normalization_2/ReadVariableOp_1
    import/functional_1/batch_normalization_2/FusedBatchNormV3/ReadVariableOp/resource
    import/functional_1/batch_normalization_2/FusedBatchNormV3/ReadVariableOp
    import/functional_1/batch_normalization_2/FusedBatchNormV3/ReadVariableOp_1/resource
    import/functional_1/batch_normalization_2/FusedBatchNormV3/ReadVariableOp_1
    import/functional_1/batch_normalization_2/FusedBatchNormV3
    import/functional_1/conv4_1/Conv2D/ReadVariableOp/resource
    import/functional_1/conv4_1/Conv2D/ReadVariableOp
    import/functional_1/conv4_1/Conv2D
    import/functional_1/conv4_1/BiasAdd/ReadVariableOp/resource
    import/functional_1/conv4_1/BiasAdd/ReadVariableOp
    import/functional_1/conv4_1/BiasAdd
    import/functional_1/conv4_1/Relu
    import/functional_1/conv4_2/Conv2D/ReadVariableOp/resource
    import/functional_1/conv4_2/Conv2D/ReadVariableOp
    import/functional_1/conv4_2/Conv2D
    import/functional_1/conv4_2/BiasAdd/ReadVariableOp/resource
    import/functional_1/conv4_2/BiasAdd/ReadVariableOp
    import/functional_1/conv4_2/BiasAdd
    import/functional_1/conv4_2/Relu
    import/functional_1/conv4_3/Conv2D/ReadVariableOp/resource
    import/functional_1/conv4_3/Conv2D/ReadVariableOp
    import/functional_1/conv4_3/Conv2D
    import/functional_1/conv4_3/BiasAdd/ReadVariableOp/resource
    import/functional_1/conv4_3/BiasAdd/ReadVariableOp
    import/functional_1/conv4_3/BiasAdd
    import/functional_1/conv4_3/Relu
    import/functional_1/batch_normalization_3/ReadVariableOp/resource
    import/functional_1/batch_normalization_3/ReadVariableOp
    import/functional_1/batch_normalization_3/ReadVariableOp_1/resource
    import/functional_1/batch_normalization_3/ReadVariableOp_1
    import/functional_1/batch_normalization_3/FusedBatchNormV3/ReadVariableOp/resource
    import/functional_1/batch_normalization_3/FusedBatchNormV3/ReadVariableOp
    import/functional_1/batch_normalization_3/FusedBatchNormV3/ReadVariableOp_1/resource
    import/functional_1/batch_normalization_3/FusedBatchNormV3/ReadVariableOp_1
    import/functional_1/batch_normalization_3/FusedBatchNormV3
    import/functional_1/conv5_1/Conv2D/SpaceToBatchND/block_shape
    import/functional_1/conv5_1/Conv2D/SpaceToBatchND/paddings
    import/functional_1/conv5_1/Conv2D/SpaceToBatchND
    import/functional_1/conv5_1/Conv2D/ReadVariableOp/resource
    import/functional_1/conv5_1/Conv2D/ReadVariableOp
    import/functional_1/conv5_1/Conv2D
    import/functional_1/conv5_1/Conv2D/BatchToSpaceND/block_shape
    import/functional_1/conv5_1/Conv2D/BatchToSpaceND/crops
    import/functional_1/conv5_1/Conv2D/BatchToSpaceND
    import/functional_1/conv5_1/BiasAdd/ReadVariableOp/resource
    import/functional_1/conv5_1/BiasAdd/ReadVariableOp
    import/functional_1/conv5_1/BiasAdd
    import/functional_1/conv5_1/Relu
    import/functional_1/conv5_2/Conv2D/SpaceToBatchND/block_shape
    import/functional_1/conv5_2/Conv2D/SpaceToBatchND/paddings
    import/functional_1/conv5_2/Conv2D/SpaceToBatchND
    import/functional_1/conv5_2/Conv2D/ReadVariableOp/resource
    import/functional_1/conv5_2/Conv2D/ReadVariableOp
    import/functional_1/conv5_2/Conv2D
    import/functional_1/conv5_2/Conv2D/BatchToSpaceND/block_shape
    import/functional_1/conv5_2/Conv2D/BatchToSpaceND/crops
    import/functional_1/conv5_2/Conv2D/BatchToSpaceND
    import/functional_1/conv5_2/BiasAdd/ReadVariableOp/resource
    import/functional_1/conv5_2/BiasAdd/ReadVariableOp
    import/functional_1/conv5_2/BiasAdd
    import/functional_1/conv5_2/Relu
    import/functional_1/conv5_3/Conv2D/SpaceToBatchND/block_shape
    import/functional_1/conv5_3/Conv2D/SpaceToBatchND/paddings
    import/functional_1/conv5_3/Conv2D/SpaceToBatchND
    import/functional_1/conv5_3/Conv2D/ReadVariableOp/resource
    import/functional_1/conv5_3/Conv2D/ReadVariableOp
    import/functional_1/conv5_3/Conv2D
    import/functional_1/conv5_3/Conv2D/BatchToSpaceND/block_shape
    import/functional_1/conv5_3/Conv2D/BatchToSpaceND/crops
    import/functional_1/conv5_3/Conv2D/BatchToSpaceND
    import/functional_1/conv5_3/BiasAdd/ReadVariableOp/resource
    import/functional_1/conv5_3/BiasAdd/ReadVariableOp
    import/functional_1/conv5_3/BiasAdd
    import/functional_1/conv5_3/Relu
    import/functional_1/batch_normalization_4/ReadVariableOp/resource
    import/functional_1/batch_normalization_4/ReadVariableOp
    import/functional_1/batch_normalization_4/ReadVariableOp_1/resource
    import/functional_1/batch_normalization_4/ReadVariableOp_1
    import/functional_1/batch_normalization_4/FusedBatchNormV3/ReadVariableOp/resource
    import/functional_1/batch_normalization_4/FusedBatchNormV3/ReadVariableOp
    import/functional_1/batch_normalization_4/FusedBatchNormV3/ReadVariableOp_1/resource
    import/functional_1/batch_normalization_4/FusedBatchNormV3/ReadVariableOp_1
    import/functional_1/batch_normalization_4/FusedBatchNormV3
    import/functional_1/conv6_1/Conv2D/SpaceToBatchND/block_shape
    import/functional_1/conv6_1/Conv2D/SpaceToBatchND/paddings
    import/functional_1/conv6_1/Conv2D/SpaceToBatchND
    import/functional_1/conv6_1/Conv2D/ReadVariableOp/resource
    import/functional_1/conv6_1/Conv2D/ReadVariableOp
    import/functional_1/conv6_1/Conv2D
    import/functional_1/conv6_1/Conv2D/BatchToSpaceND/block_shape
    import/functional_1/conv6_1/Conv2D/BatchToSpaceND/crops
    import/functional_1/conv6_1/Conv2D/BatchToSpaceND
    import/functional_1/conv6_1/BiasAdd/ReadVariableOp/resource
    import/functional_1/conv6_1/BiasAdd/ReadVariableOp
    import/functional_1/conv6_1/BiasAdd
    import/functional_1/conv6_1/Relu
    import/functional_1/conv6_2/Conv2D/SpaceToBatchND/block_shape
    import/functional_1/conv6_2/Conv2D/SpaceToBatchND/paddings
    import/functional_1/conv6_2/Conv2D/SpaceToBatchND
    import/functional_1/conv6_2/Conv2D/ReadVariableOp/resource
    import/functional_1/conv6_2/Conv2D/ReadVariableOp
    import/functional_1/conv6_2/Conv2D
    import/functional_1/conv6_2/Conv2D/BatchToSpaceND/block_shape
    import/functional_1/conv6_2/Conv2D/BatchToSpaceND/crops
    import/functional_1/conv6_2/Conv2D/BatchToSpaceND
    import/functional_1/conv6_2/BiasAdd/ReadVariableOp/resource
    import/functional_1/conv6_2/BiasAdd/ReadVariableOp
    import/functional_1/conv6_2/BiasAdd
    import/functional_1/conv6_2/Relu
    import/functional_1/conv6_3/Conv2D/SpaceToBatchND/block_shape
    import/functional_1/conv6_3/Conv2D/SpaceToBatchND/paddings
    import/functional_1/conv6_3/Conv2D/SpaceToBatchND
    import/functional_1/conv6_3/Conv2D/ReadVariableOp/resource
    import/functional_1/conv6_3/Conv2D/ReadVariableOp
    import/functional_1/conv6_3/Conv2D
    import/functional_1/conv6_3/Conv2D/BatchToSpaceND/block_shape
    import/functional_1/conv6_3/Conv2D/BatchToSpaceND/crops
    import/functional_1/conv6_3/Conv2D/BatchToSpaceND
    import/functional_1/conv6_3/BiasAdd/ReadVariableOp/resource
    import/functional_1/conv6_3/BiasAdd/ReadVariableOp
    import/functional_1/conv6_3/BiasAdd
    import/functional_1/conv6_3/Relu
    import/functional_1/batch_normalization_5/ReadVariableOp/resource
    import/functional_1/batch_normalization_5/ReadVariableOp
    import/functional_1/batch_normalization_5/ReadVariableOp_1/resource
    import/functional_1/batch_normalization_5/ReadVariableOp_1
    import/functional_1/batch_normalization_5/FusedBatchNormV3/ReadVariableOp/resource
    import/functional_1/batch_normalization_5/FusedBatchNormV3/ReadVariableOp
    import/functional_1/batch_normalization_5/FusedBatchNormV3/ReadVariableOp_1/resource
    import/functional_1/batch_normalization_5/FusedBatchNormV3/ReadVariableOp_1
    import/functional_1/batch_normalization_5/FusedBatchNormV3
    import/functional_1/conv7_1/Conv2D/ReadVariableOp/resource
    import/functional_1/conv7_1/Conv2D/ReadVariableOp
    import/functional_1/conv7_1/Conv2D
    import/functional_1/conv7_1/BiasAdd/ReadVariableOp/resource
    import/functional_1/conv7_1/BiasAdd/ReadVariableOp
    import/functional_1/conv7_1/BiasAdd
    import/functional_1/conv7_1/Relu
    import/functional_1/conv7_2/Conv2D/ReadVariableOp/resource
    import/functional_1/conv7_2/Conv2D/ReadVariableOp
    import/functional_1/conv7_2/Conv2D
    import/functional_1/conv7_2/BiasAdd/ReadVariableOp/resource
    import/functional_1/conv7_2/BiasAdd/ReadVariableOp
    import/functional_1/conv7_2/BiasAdd
    import/functional_1/conv7_2/Relu
    import/functional_1/conv7_3/Conv2D/ReadVariableOp/resource
    import/functional_1/conv7_3/Conv2D/ReadVariableOp
    import/functional_1/conv7_3/Conv2D
    import/functional_1/conv7_3/BiasAdd/ReadVariableOp/resource
    import/functional_1/conv7_3/BiasAdd/ReadVariableOp
    import/functional_1/conv7_3/BiasAdd
    import/functional_1/conv7_3/Relu
    import/functional_1/batch_normalization_6/ReadVariableOp/resource
    import/functional_1/batch_normalization_6/ReadVariableOp
    import/functional_1/batch_normalization_6/ReadVariableOp_1/resource
    import/functional_1/batch_normalization_6/ReadVariableOp_1
    import/functional_1/batch_normalization_6/FusedBatchNormV3/ReadVariableOp/resource
    import/functional_1/batch_normalization_6/FusedBatchNormV3/ReadVariableOp
    import/functional_1/batch_normalization_6/FusedBatchNormV3/ReadVariableOp_1/resource
    import/functional_1/batch_normalization_6/FusedBatchNormV3/ReadVariableOp_1
    import/functional_1/batch_normalization_6/FusedBatchNormV3
    import/functional_1/up_sampling2d/Shape
    import/functional_1/up_sampling2d/strided_slice/stack
    import/functional_1/up_sampling2d/strided_slice/stack_1
    import/functional_1/up_sampling2d/strided_slice/stack_2
    import/functional_1/up_sampling2d/strided_slice
    import/functional_1/up_sampling2d/Const
    import/functional_1/up_sampling2d/mul
    import/functional_1/up_sampling2d/resize/ResizeNearestNeighbor
    import/functional_1/conv8_1/Conv2D/ReadVariableOp/resource
    import/functional_1/conv8_1/Conv2D/ReadVariableOp
    import/functional_1/conv8_1/Conv2D
    import/functional_1/conv8_1/BiasAdd/ReadVariableOp/resource
    import/functional_1/conv8_1/BiasAdd/ReadVariableOp
    import/functional_1/conv8_1/BiasAdd
    import/functional_1/conv8_1/Relu
    import/functional_1/conv8_2/Conv2D/ReadVariableOp/resource
    import/functional_1/conv8_2/Conv2D/ReadVariableOp
    import/functional_1/conv8_2/Conv2D
    import/functional_1/conv8_2/BiasAdd/ReadVariableOp/resource
    import/functional_1/conv8_2/BiasAdd/ReadVariableOp
    import/functional_1/conv8_2/BiasAdd
    import/functional_1/conv8_2/Relu
    import/functional_1/conv8_3/Conv2D/ReadVariableOp/resource
    import/functional_1/conv8_3/Conv2D/ReadVariableOp
    import/functional_1/conv8_3/Conv2D
    import/functional_1/conv8_3/BiasAdd/ReadVariableOp/resource
    import/functional_1/conv8_3/BiasAdd/ReadVariableOp
    import/functional_1/conv8_3/BiasAdd
    import/functional_1/conv8_3/Relu
    import/functional_1/batch_normalization_7/ReadVariableOp/resource
    import/functional_1/batch_normalization_7/ReadVariableOp
    import/functional_1/batch_normalization_7/ReadVariableOp_1/resource
    import/functional_1/batch_normalization_7/ReadVariableOp_1
    import/functional_1/batch_normalization_7/FusedBatchNormV3/ReadVariableOp/resource
    import/functional_1/batch_normalization_7/FusedBatchNormV3/ReadVariableOp
    import/functional_1/batch_normalization_7/FusedBatchNormV3/ReadVariableOp_1/resource
    import/functional_1/batch_normalization_7/FusedBatchNormV3/ReadVariableOp_1
    import/functional_1/batch_normalization_7/FusedBatchNormV3
    import/functional_1/up_sampling2d_1/Shape
    import/functional_1/up_sampling2d_1/strided_slice/stack
    import/functional_1/up_sampling2d_1/strided_slice/stack_1
    import/functional_1/up_sampling2d_1/strided_slice/stack_2
    import/functional_1/up_sampling2d_1/strided_slice
    import/functional_1/up_sampling2d_1/Const
    import/functional_1/up_sampling2d_1/mul
    import/functional_1/up_sampling2d_1/resize/ResizeNearestNeighbor
    import/functional_1/conv_ab/Conv2D/ReadVariableOp/resource
    import/functional_1/conv_ab/Conv2D/ReadVariableOp
    import/functional_1/conv_ab/Conv2D
    import/functional_1/conv_ab/BiasAdd/ReadVariableOp/resource
    import/functional_1/conv_ab/BiasAdd/ReadVariableOp
    import/functional_1/conv_ab/BiasAdd
    import/Identity
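
    For what it's worth, a hedged workaround (a sketch, not a confirmed fix): the failing SpaceToBatchND says the spatial size reaching the dilated conv (37 here, because Lucid's random pad/jitter/scale transforms change the render size each step) isn't divisible by the dilation block shape of 2. Disabling the size-changing transforms keeps the input at a fixed 256x256, so the downsampled feature map stays evenly divisible:

    # Keep the input size fixed so dilated convs (block_shape=2) always see
    # an evenly divisible spatial shape.
    _ = render.render_vis(model, "import/functional_1/conv5_1/Relu:0",
                          param_f=param_f, transforms=[])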
    
    
    opened by HarrisDePerceptron 8
  • Link to Slack invitation no longer works


    Hi,

    I tried to join your #proj-lucid but the invitation link is no longer valid.

    Could you please update the link? Or should I send you an email for the invitation?

    opened by p16i 8
  • import vgg16


    import numpy as np
    import tensorflow as tf
    
    import lucid.modelzoo.vision_models as models
    from lucid.misc.io import show
    import lucid.optvis.objectives as objectives
    import lucid.optvis.param as param
    import lucid.optvis.render as render
    import lucid.optvis.transform as transform
    
    model = models.InceptionV1()
    model.load_graphdef()
    

    How do I change this to VGG16?

    models.VGG16()
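
    If memory serves (worth verifying against lucid.modelzoo.vision_models in your installed version), the VGG16 port in the zoo is the Caffe one, so the class name differs; a sketch:

    model = models.VGG16_caffe()
    model.load_graphdef()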
    
    opened by najingligong1111 8
  • Why can Lucid run with an image size that varies via transform_f?


    I want to know how Lucid can keep iterating when the fed image's size is changed dynamically by transform_f on every iteration. For example, if I run the code below,

    import lucid.optvis.transform as lucid_t
    from lucid.optvis.render import make_transform_f
    import numpy as np
    from tensorflow.keras import backend as K  # missing in the original snippet
    
    #from lucid.optvis transform.py
    standard_transforms = [
        lucid_t.pad(12, mode='constant', constant_value=.5),
        lucid_t.jitter(8),
        lucid_t.random_scale([1 + (i-5)/50. for i in range(11)]),
        lucid_t.random_rotate(list(range(-10, 11)) + 5*[0]),
        lucid_t.jitter(4),
      ]
    
    prior = np.random.normal(size=(1,224,224,3))
    
    input_image = K.variable(prior)
    transform_f = make_transform_f(standard_transforms)
    transformed = transform_f(input_image)
    
    post = K.eval(transformed)
    
    print(prior.shape)
    print(post.shape)
    

    I get

    (1, 224, 224, 3)
    (1, 221, 221, 3)
    

    *I know the values change every time because of the random scale function.

    Feature maximization is gradient ascent, so at iteration t, with the above transformation,

    image(t+1) = transform_f(image(t)) + gradient

    where the spatial (xy) sizes are (224,224) for image(t) and (221,221) for image(t+1).

    However, the final output that Lucid generates after all iterations is an image of size (224,224) whenever I start with (224,224).

    So is Lucid doing one of the following, or something else?

    1. After transform_f, resizing back to (224,224) and calculating the gradients at that size in every step, or
    2. Calculating the gradients at size (221,221), updating the image by adding the gradients, and resizing back to (224,224) before the next step.

    It would be helpful if you could point me to the code where Lucid does this internally.

    thanks
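
    For what it's worth, a sketch of what the render loop effectively does (TF1 graph mode, reusing transform_f from the snippet above, with a stand-in objective instead of a real network): the loss is differentiated with respect to the original full-size image variable through the differentiable transforms, so only that (224,224) variable is ever updated and nothing is resized back. The relevant code is make_vis_T in lucid/optvis/render.py.

    import tensorflow as tf

    t_image = tf.Variable(tf.random_normal([1, 224, 224, 3]))
    t_transformed = transform_f(t_image)   # e.g. shape (1, 221, 221, 3)
    loss = tf.reduce_mean(t_transformed)   # stand-in for the network objective
    grad = tf.gradients(loss, t_image)[0]  # shape (1, 224, 224, 3)
    # The optimizer steps t_image directly; the transformed tensor is merely
    # recomputed from it each iteration, so the stored image never changes size.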

    opened by totti0223 8
  • Updating all the colab notebooks


    Problem

    As of now, Colab uses TensorFlow 2.x by default, but Lucid works only on TensorFlow 1.x. The Colab notebooks, however, contain no installation of TensorFlow 1.x, so they can produce errors. It's a big issue.

    Solution

    I'd love to address this issue with proper documentation and code covering the installation of TensorFlow 1.x, since Lucid currently supports only TensorFlow 1.x. Please assign me this task.

    opened by abhinavsp0730 7
  • Fix failing ChannelReducer test


    It seems that sklearn.decomposition.base.BaseEstimator does not exist anymore. I've replaced it with sklearn.base.BaseEstimator and all the tests seem to pass now.
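
    In sketch form, the change is just the import path (sklearn.base.BaseEstimator is the stable public location):

    # before (removed in newer scikit-learn releases):
    # from sklearn.decomposition.base import BaseEstimator
    # after:
    from sklearn.base import BaseEstimator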

    opened by bmiselis 7
  • I have fixed feature attribution (100% fully working)


    A while ago, the feature attribution abilities of Lucid broke. I have found a fix! Please review the following working examples I have created:

    https://colab.research.google.com/github/400lbhacker/several-feature-inversion-projects-python-/blob/main/FIXXED_FEATURE_ATTRIBUTION(LUCID_2022).ipynb

    https://colab.research.google.com/github/400lbhacker/several-feature-inversion-projects-python-/blob/main/FIXXED-feature_inversion_caffe_Places365_deepstyle.ipynb

    These are hosted in my feature attribution fork/mods on my GitHub: https://github.com/400lbhacker/several-feature-inversion-projects-python-/blob/main/README.md

    opened by 400lbhacker 0
  • tutorial.ipynb is incompatible with Python 3


    The introductory tutorial.ipynb should be runnable with Colab, but Colab defaults to Python 3 and tutorial.ipynb is in Python 2.

    I have submitted some minor changes to make tutorial.ipynb run in Python 3:

    #301

    opened by satsumas 1
  • Update for Python3 compatibility


    The Colab runtime defaults to Python 3. The intro ipynb doesn't run in Python 3, so I made two changes to the syntax to fix this:

    • Pin version of numpy that is compatible with lucid
    • Python 3 ranges aren't lists, so explicitly cast to list before using them in transformations (see the sketch below)
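
    The second change in sketch form (using the rotation transform from the tutorial; in Python 3, range() returns a lazy object, so concatenating it with a list fails until it is cast):

    import lucid.optvis.transform as transform

    angles = list(range(-10, 11))  # Python 3: cast range() to a list first
    transforms = [transform.random_rotate(angles + 5 * [0])]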

    Supporting screenshots are attached, because ipynb diffs are a bit unwieldy.

    opened by satsumas 1
  • Re-fixed 2D neural style transfer again


    Repaired the fatal error NotImplementedError: Cannot convert a symbolic Tensor (2nd_target:0) to a numpy array. It is now 100% working again.

    https://github.com/400lbhacker/Lucid-Neural-Style-Transfer/

    Any ideas on how to get it to work on rectangular images? I tried modifying the tf.crop/tf.stack calls to a statically declared tf.resize with rectangular dimensions (500x700, for example), yet the output still comes out as a square (500x500). I have worked on it for several hours, guessing and debugging. (I also tried to directly manipulate the tensor array and got an error.)

    opened by 400lbhacker 0
  • I fixed the style transfer and DeepDream notebooks


    The DeepDream notebook recently broke as well. It broke in the simple file-upload portion: Python 3 was having issues with dict.keys() and the uint8 numpy array conversion. The new working notebook is located here:

    https://colab.research.google.com/github/400lbhacker/lucid-deepdream/blob/master/lucid-deepdream.ipynb

    I have tried the CPPN notebook and found it working. I believe all the projects are now working again, with the possible exception of the 3D neural style transfer. For any questions or concerns, or if you would like to share your results, please join our Lucid Facebook group here: https://www.facebook.com/groups/3407100409414697

    Note that the group is not official, but it's about as much help as any of us are going to get. I promise to assist with all errors, to help ideas manifest, and to keep compatibility going.

    thank you all

    opened by 400lbhacker 0