Pretty Tensor - Fluent Neural Networks in TensorFlow

Overview

Pretty Tensor provides a high-level builder API for TensorFlow: thin wrappers around Tensors that let you easily build multi-layer neural networks.

Pretty Tensor provides a set of objects that behave like Tensors but also support a chainable object syntax for quickly defining neural networks and other layered architectures in TensorFlow.

result = (pretty_tensor.wrap(input_data, m)
          .flatten()
          .fully_connected(200, activation_fn=tf.nn.relu)
          .fully_connected(10, activation_fn=None)
          .softmax(labels, name=softmax_name))

See Available Operations for documentation of every operation available on the PrettyTensor object, or check out the complete documentation.

See the tutorial directory for samples: tutorial/

Installation

The easiest installation is just to use pip:

  1. Follow the instructions at tensorflow.org
  2. pip install prettytensor

Note: head is tested against the TensorFlow nightly builds and the pip package is tested against the TensorFlow release.

Quick start

Imports

import numpy as np
import prettytensor as pt
import tensorflow as tf

Setup your input

my_inputs = ...  # numpy array of shape (BATCHES, BATCH_SIZE, DATA_SIZE)
my_labels = ...  # numpy array of shape (BATCHES, BATCH_SIZE, CLASSES)
input_tensor = tf.placeholder(np.float32, shape=(BATCH_SIZE, DATA_SIZE))
label_tensor = tf.placeholder(np.float32, shape=(BATCH_SIZE, CLASSES))
pretty_input = pt.wrap(input_tensor)

Define your model

softmax, loss = (pretty_input.
                 fully_connected(100).
                 softmax_classifier(CLASSES, labels=label_tensor))

Train and evaluate

accuracy = softmax.evaluate_classifier(label_tensor)

optimizer = tf.train.GradientDescentOptimizer(0.1)  # learning rate
train_op = pt.apply_optimizer(optimizer, losses=[loss])

init_op = tf.initialize_all_variables()

with tf.Session() as sess:
    sess.run(init_op)
    for inp, label in zip(my_inputs, my_labels):
        # Run the train op so the model actually updates, then fetch accuracy.
        _, accuracy_value = sess.run([train_op, accuracy],
                                     {input_tensor: inp, label_tensor: label})
        print('Accuracy: %g' % accuracy_value)

Features

Thin

The full power of TensorFlow is easy to use

Pretty Tensors can be used (almost) everywhere that a tensor can. Just call pt.wrap to make a tensor pretty.

You can also add any existing TensorFlow function to the chain using apply. apply applies the current Tensor as the first argument and takes all the other arguments as normal.

Note: because apply is so generic, Pretty Tensor doesn't try to wrap the world.
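
For example, a minimal sketch (tf.nn.dropout stands in for any existing TensorFlow function; the wrapped tensor becomes its first argument and 0.5 is passed through as keep_prob):

dropped = pt.wrap(input_tensor).apply(tf.nn.dropout, 0.5)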

Plays well with other libraries

It also uses standard TensorFlow idioms so that it plays well with other libraries; this means that you can use it for a small part of a model or throughout. Just make sure to run the update_ops on each training step (see with_update_ops).

Terse

You've already seen how a Pretty Tensor is chainable, and you may have noticed that it takes care of handling the input shape. One other feature worth noting is defaults. Using defaults you can specify reused values in a single place without having to repeat yourself.

with pt.defaults_scope(activation_fn=tf.nn.relu):
  hidden_output2 = (pretty_images.flatten()
                   .fully_connected(100)
                   .fully_connected(100))

Check out the documentation to see all supported defaults.

Code matches model

Sequential mode lets you break model construction across lines and provides the subdivide syntactic sugar that makes it easy to define and understand complex structures like an inception module:

with pretty_tensor.defaults_scope(activation_fn=tf.nn.relu):
  seq = pretty_input.sequential()
  with seq.subdivide(4) as towers:
    towers[0].conv2d(1, 64)
    towers[1].conv2d(1, 112).conv2d(3, 224)
    towers[2].conv2d(1, 32).conv2d(5, 64)
    towers[3].max_pool(2, 3).conv2d(1, 32)

Inception module showing branch and rejoin

Templates provide guaranteed parameter reuse and make unrolling recurrent networks easy:

output = []
s = tf.zeros([BATCH, 256 * 2])

A = (pretty_tensor.template('x')
     .lstm_cell(num_units=256, state=UnboundVariable('state')))

for x in pretty_input_array:
  h, s = A.construct(x=x, state=s)
  output.append(h)

There are also some convenient shorthands for LSTMs and GRUs:

pretty_input_array.sequence_lstm(num_units=256)

Unrolled RNN
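
A slightly fuller sketch in the spirit of the shakespeare tutorial (squash_sequence re-joins the per-timestep outputs into a single batch; CLASSES stands in for your output size, and exact method availability may vary by version):

logits = (pretty_input_array
          .sequence_lstm(num_units=256)
          .squash_sequence()
          .fully_connected(CLASSES, activation_fn=None))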

Extensible

You can call any existing operation by using apply, and it will simply substitute the current tensor for the first argument.

pretty_input.apply(tf.mul, 5)

You can also create a new operation. There are two supported registration mechanisms to add your own functions. @Register() allows you to create a method on PrettyTensor that operates on the Tensors and returns either a loss or a new value. Name scoping and variable scoping are handled by the framework.

The following method adds the leaky_relu method to every Pretty Tensor:

@pt.Register
def leaky_relu(input_pt):
  return tf.select(tf.greater(input_pt, 0.0), input_pt, 0.01 * input_pt)
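
Once registered, leaky_relu chains like any other Pretty Tensor method, for example:

pretty_input.fully_connected(100, activation_fn=None).leaky_relu()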

@RegisterCompoundOp() is like adding a macro; it is designed to group together common sets of operations.
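
A minimal sketch, assuming a hypothetical compound op that pairs a fully connected layer with dropout (fc_drop is illustrative and not part of the library):

@pt.RegisterCompoundOp()
def fc_drop(input_pt, size, keep_prob=0.5):
  # Group two existing chainable operations into a single call.
  return input_pt.fully_connected(size).dropout(keep_prob)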

Safe variable reuse

Within a graph, you can reuse variables by using templates. A template is just like a regular graph except that some variables are left unbound.
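
For example, a minimal sketch that binds one template to both a training and a test input so the two graphs share the same weights (train_images and test_images are placeholders you would define yourself):

classifier = (pt.template('images')
              .fully_connected(100)
              .fully_connected(10, activation_fn=None))

train_logits = classifier.construct(images=train_images)
test_logits = classifier.construct(images=test_images)  # reuses the same variables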

See more details in PrettyTensor class.

Accessing Variables

Pretty Tensor uses the standard graph collections from TensorFlow to store variables. These can be accessed using tf.get_collection(key) with the following keys:

  • tf.GraphKeys.VARIABLES: all variables that should be saved (including some statistics).
  • tf.GraphKeys.TRAINABLE_VARIABLES: all variables that can be trained (including those before a stop_gradients call). These are what would typically be called the parameters of the model in ML parlance.
  • pt.GraphKeys.TEST_VARIABLES: variables used to evaluate a model. These are typically not saved and are reset by the LocalRunner.evaluate method to get a fresh evaluation.
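
For example, to inspect the trainable parameters after the graph has been built:

for var in tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES):
  print(var.name, var.get_shape())

test_vars = tf.get_collection(pt.GraphKeys.TEST_VARIABLES)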

Authors

Eider Moore (eiderman)

with key contributions from:

  • Hubert Eichner
  • Oliver Lange
  • Sagar Jain (sagarjn)

Comments
  • TypeError: Expected int32, got <prettytensor.pretty_tensor_class.Layer> of type 'Layer' instead.

    TypeError: Expected int32, got <prettytensor.pretty_tensor_class.Layer> of type 'Layer' instead.

    I'm trying to run the StackGAN code (https://github.com/hanzhanggit/StackGAN) using TF1.0 and prettytensor (0.7.2), and I receive the following error, "TypeError: Expected int32, got <prettytensor.pretty_tensor_class.Layer object at 0x7f396c1ea590> of type 'Layer' instead.", when I run the demo (sh demo/birds_demo.sh).

    A friend who ran the same in TF 0.12 had no problems. Is this because of TF1.0? Do you have any idea what's the problem or what can be done to overcome it?

    Thank you

    opened by nsarafianos 16
  • import error

    import error

    version = '0.7.1'

    import prettytensor as pt
    

    File "lib\prettytensor_init_.py", line 25, in from prettytensor import funcs File "lib\prettytensor\funcs.py", line 25, in from prettytensor.pretty_tensor_image_methods import * File "lib\prettytensor\pretty_tensor_image_methods.py", line 135, in class conv2d(prettytensor.VarStoreMethod): File "lib\prettytensor\pretty_tensor_image_methods.py", line 145, in conv2d bias=tf.zeros_initializer(), TypeError: zeros_initializer() missing 1 required positional argument: 'shape'

    opened by apiszcz 13
  • batch_normalize=True doesn't work accurately with phase=Phase.* setting

    batch_normalize=True doesn't work accurately with phase=Phase.* setting

    I believe that there is an error when using phase in the default_scope coupled with batch_normalize=True.

    Basically it looks like this:

        def encoder(self, inputs, latent_size, activ=tf.nn.elu, phase=pt.Phase.train):
            with pt.defaults_scope(activation_fn=activ,
                                   batch_normalize=True,
                                   learned_moments_update_rate=0.0003,
                                   variance_epsilon=0.001,
                                   scale_after_normalization=True,
                                   phase=phase):
                params = (pt.wrap(inputs).
                          reshape([-1, self.input_shape[0], self.input_shape[1], 1]).
                          conv2d(5, 32, stride=2).
                          conv2d(5, 64, stride=2).
                          conv2d(5, 128, edges='VALID').
                          flatten().
                          fully_connected(self.latent_size * 2, activation_fn=None)).tensor
    

    Full code here: https://github.com/jramapuram/CVAE/blob/master/cvae.py

    If I remove phase=phase within the scope assignment, my model produces the following (image: 2d_cluster_orig).

    However, when setting the phase appropriately I get the following (image: 2d_cluster).

    This is trained for the same number of iterations using the same model.

    opened by jramapuram 13
  • How do you extract the weights used in the model?

    How do you extract the weights used in the model?

    Request for documentation

    I've trained a 3-layer fully connected network as a test and it works perfectly. I'd like to see what the weights used in each layer are, but I'm not sure how to access them through prettytensor. In standard TensorFlow the variables are explicit, but here they are hidden behind the "pretty" interface.

    An example in the docs that shows how to access the weights would be very useful for others. We can use the example you have in the docs:

    result = (pretty_tensor.wrap(input_data, m)
          .flatten()
          .fully_connected(200, activation_fn=tf.nn.relu)
          .fully_connected(10, activation_fn=None)
          .softmax(labels, name=softmax_name))
    

    as an example to extract the weights.

    opened by thoppe 13
  • Cannot assign a device to node

    Cannot assign a device to node

    I'm running the baby_names tutorial, and it is failing with the following error (excerpt):

    tensorflow.python.framework.errors.InvalidArgumentError: Cannot assign a device to node 'Adagrad/update_baby_names/embedding_lookup/params/SparseApplyAdagrad': Could not satisfy explicit device specification '' because the node was colocated with a group of nodes that required incompatible device '/job:localhost/replica:0/task:0/GPU:0'
      [[Node: Adagrad/update_baby_names/embedding_lookup/params/SparseApplyAdagrad = SparseApplyAdagrad[T=DT_FLOAT, Tindices=DT_INT32, use_locking=false](baby_names/embedding_lookup/params, baby_names/embedding_lookup/params/Adagrad, ExponentialDecay, gradients/concat, gradients/concat_1)]]
    Caused by op u'Adagrad/update_baby_names/embedding_lookup/params/SparseApplyAdagrad', defined at:
      File "tutorial/baby_names.py", line 193, in <module>
        tf.app.run()

    It was previously erroring due to the .csv not being found (so I copied it into /usr/local/lib/python2.7/dist-packages/prettytensor/tutorial/).

    Any suggestions for how to fix this?

    opened by mschonwe 12
  • Shakespeare demo broken at save points

    Shakespeare demo broken at save points

    Running out of the box, the shakespeare tutorial crashes with a broken feed value. I am running the tensorflow (0.7.1) and pretty tensor (0.5.3) versions that are the latest right now.

    mnist.py does not have the same issue, despite the apparent use of the same data_utils frameworks (permute_data, etc). Help?

    $ python /usr/share/anaconda/anaconda2/envs/tf2/lib/python2.7/site-packages/prettytensor/tutorial/shakespeare.py --epochs 2 --save_path /home/jeremy/notebooks/shakespeare/
    Starting Shakespeare
    W tensorflow/core/common_runtime/executor.cc:1102] 0x15629680 Compute status: Invalid argument: You must feed a value for placeholder tensor 'shakespeare_2/Placeholder' with dtype int32
             [[Node: shakespeare_2/Placeholder = Placeholder[dtype=DT_INT32, shape=[], _device="/job:localhost/replica:0/task:0/cpu:0"]()]]
             [[Node: _send_shakespeare/cross_entropy/truediv_2_0 = _Send[T=DT_FLOAT, client_terminated=true, recv_device="/job:localhost/replica:0/task:0/cpu:0", send_device="/job:localhost/replica:0/task:0/cpu:0", send_device_incarnation=-2958238964605934168, tensor_name="shakespeare/cross_entropy/truediv_2:0", _device="/job:localhost/replica:0/task:0/cpu:0"](shakespeare/cross_entropy/truediv_2)]]
    Traceback (most recent call last):
      File "/usr/share/anaconda/anaconda2/envs/tf2/lib/python2.7/site-packages/prettytensor/tutorial/shakespeare.py", line 249, in <module>
        tf.app.run()
      File "/usr/share/anaconda/anaconda2/envs/tf2/lib/python2.7/site-packages/tensorflow/python/platform/default/_app.py", line 30, in run
        sys.exit(main(sys.argv))
      File "/usr/share/anaconda/anaconda2/envs/tf2/lib/python2.7/site-packages/prettytensor/tutorial/shakespeare.py", line 226, in main
        print_every=10)
      File "/usr/share/anaconda/anaconda2/envs/tf2/lib/python2.7/site-packages/prettytensor/local_trainer.py", line 216, in train_model
        print_every=print_every)[2:]
      File "/usr/share/anaconda/anaconda2/envs/tf2/lib/python2.7/site-packages/prettytensor/local_trainer.py", line 166, in run_model
        dict(zip(feed_vars, data)))
      File "/usr/share/anaconda/anaconda2/envs/tf2/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 315, in run
        return self._run(None, fetches, feed_dict)
      File "/usr/share/anaconda/anaconda2/envs/tf2/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 511, in _run
        feed_dict_string)
      File "/usr/share/anaconda/anaconda2/envs/tf2/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 564, in _do_run
        target_list)
      File "/usr/share/anaconda/anaconda2/envs/tf2/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 586, in _do_call
        e.code)
    
    opened by jkahn 8
  • Issue with tensorflow variable names

    Issue with tensorflow variable names

    Hi,

    I just wanted to try the introductory example and got the following error:

    Traceback (most recent call last):
      File "testpt.py", line 15, in <module>
        softmax_classifier(CLASSES, labels=label_tensor, name="sm"))
      File "/home/.local/lib/python2.7/site-packages/prettytensor/pretty_tensor_class.py", line 2019, in method
        return func(input_layer, *args, **self.fill_kwargs(input_layer, kwargs))
      File "/home/.local/lib/python2.7/site-packages/prettytensor/pretty_tensor_loss_methods.py", line 401, in softmax_classifier
        init=weight_init, bias_init=bias_init)
      File "/home/.local/lib/python2.7/site-packages/prettytensor/pretty_tensor_class.py", line 1980, in method
        result = func(non_seq_layer, *args, **kwargs)
      File "/home/.local/lib/python2.7/site-packages/prettytensor/pretty_tensor_methods.py", line 333, in __call__
        dt=dtype)
      File "/home/.local/lib/python2.7/site-packages/prettytensor/pretty_tensor_class.py", line 1694, in variable
        collections=variable_collections)
      File "/home/code/ml/tensorflow/_python_build/tensorflow/python/ops/variable_scope.py", line 334, in get_variable
        collections=collections)
      File "/home/code/ml/tensorflow/_python_build/tensorflow/python/ops/variable_scope.py", line 257, in get_variable
        collections=collections, caching_device=caching_device)
      File "/home/code/ml/tensorflow/_python_build/tensorflow/python/ops/variable_scope.py", line 118, in get_variable
        name, "".join(traceback.format_list(tb))))
    ValueError: Variable weights already exists, disallowed. Did you mean to set reuse=True in VarScope? Originally defined at:
    
      File "/home/.local/lib/python2.7/site-packages/prettytensor/pretty_tensor_class.py", line 1694, in variable
        collections=variable_collections)
      File "/home/.local/lib/python2.7/site-packages/prettytensor/pretty_tensor_methods.py", line 333, in __call__
        dt=dtype)
      File "/home/.local/lib/python2.7/site-packages/prettytensor/pretty_tensor_class.py", line 1980, in method
        result = func(non_seq_layer, *args, **kwargs)
    

    The code that I try to run is as follows

    import tensorflow as tf
    import prettytensor as pt
    import numpy as np
    
    BATCH_SIZE = 100
    DATA_SIZE = 28*28
    CLASSES = 10
    
    input_tensor = tf.placeholder(np.float32, shape=(BATCH_SIZE, DATA_SIZE))
    label_tensor = tf.placeholder(np.float32, shape=(BATCH_SIZE, CLASSES))
    pretty_input = pt.wrap(input_tensor)
    
    softmax, loss = (pretty_input.
                         fully_connected(100, name="fc1").
                         softmax_classifier(CLASSES, labels=label_tensor, name="sm"))
    

    It seems that variable tensors are not scoped properly. Am I doing something wrong?

    Kind regards, Tobias

    opened by fftobiwan 7
  • Please document API changes

    Please document API changes

    I've used PrettyTensor for several of my tutorials on TensorFlow:

    https://github.com/Hvass-Labs/TensorFlow-Tutorials

    I don't think I've updated PT in several months because everything seemed to work fine for me. But recently I started getting reports that softmax_classifier() did not work with the class_count keyword anymore. It appears it has changed to num_classes instead. I prefer the new keyword, but it was a rather significant API change which broke all my code, and it is not listed in the change-log:

    https://github.com/google/prettytensor/blob/master/CHANGELIST.md

    In the future, please list all important changes in the log.

    I'm going through my tutorials now to update them, but I would prefer to wait until the deprecation warnings are removed, see https://github.com/google/prettytensor/issues/41.

    opened by Hvass-Labs 6
  • How to use batch-normalization?

    How to use batch-normalization?

    Once again I hope it's OK that I ask this question here instead of on StackOverflow.

    I don't know if batch-normalization is really useful; there seem to be differing opinions on the matter. But I'd like to try it. I can see that it's implemented in Pretty Tensor:

    https://github.com/google/prettytensor/blob/master/docs/PrettyTensor.md#batch_normalize

    But I can't figure out how to use it for the following Convolutional Neural Network:

    with pt.defaults_scope(activation_fn=tf.nn.relu):
        y_pred, loss = x_pretty.\
            conv2d(kernel=5, depth=64, name='layer_conv1').\
            max_pool(kernel=2, stride=2).\
            conv2d(kernel=5, depth=64, name='layer_conv2').\
            max_pool(kernel=2, stride=2).\
            flatten().\
            fully_connected(size=256, name='layer_fc1').\
            fully_connected(size=128, name='layer_fc2').\
            softmax_classifier(class_count=10, labels=y_true)
    

    Any help would be appreciated.

    opened by Hvass-Labs 5
  •  scalar_summary deprecation warning

    scalar_summary deprecation warning

    Hi, I use tensorflow_gpu-0.12.0rc1-cp34-cp34m-linux_x86_64 and prettytensor-0.7.1 with Python 3.4. When running TensorBoard I get the following warning:

    WARNING:tensorflow:From /home/badami/Codes/deeplearning/tensorflow3/lib/python3.4/site-packages/prettytensor/bookkeeper.py:243 in add_scalar_summary.: scalar_summary (from tensorflow.python.ops.logging_ops) is deprecated and will be removed after 2016-11-30.
    Instructions for updating:
    Please switch to tf.summary.scalar. Note that tf.summary.scalar uses the node name instead of the tag. This means that TensorFlow will automatically de-duplicate summary names based on the scope they are created in. Also, passing a tensor or list of tags to a scalar summary op is no longer supported.
    

    I also do not see any graph on TensorBoard. I am not sure whether my code is incorrect or this warning is the reason I do not see the graph.

    opened by ibadami 4
  • PrettyTensor breaks GPU-installation of TensorFlow

    PrettyTensor breaks GPU-installation of TensorFlow

    Installing PrettyTensor seems to have broken my GPU installation of TensorFlow. After installing PrettyTensor it would use the CPU version of TensorFlow.

    I'm using a conda env named tf-gpu for the GPU version and another env named tf for the CPU version.

    Here's the output of the pip install:

    (tf-gpu) magnus@torpedo:~/development/TensorFlow-Tutorials$ pip install prettytensor
    Collecting prettytensor
      Downloading prettytensor-0.7.1-py3-none-any.whl (273kB)
        100% |████████████████████████████████| 276kB 1.1MB/s 
    Collecting enum34>=1.0.0 (from prettytensor)
      Downloading enum34-1.1.6-py3-none-any.whl
    Requirement already satisfied: six>=1.10.0 in /home/magnus/anaconda3/envs/tf-gpu/lib/python3.5/site-packages (from prettytensor)
    Collecting tensorflow>=0.12.0rc0 (from prettytensor)
      Using cached tensorflow-0.12.0rc1-cp35-cp35m-manylinux1_x86_64.whl
    Requirement already satisfied: numpy>=1.11.0 in /home/magnus/anaconda3/envs/tf-gpu/lib/python3.5/site-packages (from tensorflow>=0.12.0rc0->prettytensor)
    Requirement already satisfied: protobuf==3.1.0 in /home/magnus/anaconda3/envs/tf-gpu/lib/python3.5/site-packages (from tensorflow>=0.12.0rc0->prettytensor)
    Requirement already satisfied: wheel>=0.26 in /home/magnus/anaconda3/envs/tf-gpu/lib/python3.5/site-packages (from tensorflow>=0.12.0rc0->prettytensor)
    Requirement already satisfied: setuptools in /home/magnus/anaconda3/envs/tf-gpu/lib/python3.5/site-packages/setuptools-27.2.0-py3.5.egg (from protobuf==3.1.0->tensorflow>=0.12.0rc0->prettytensor)
    Installing collected packages: enum34, tensorflow, prettytensor
    Successfully installed enum34-1.1.6 prettytensor-0.7.1 tensorflow-0.12.0rc1
    

    The culprit seems to be this:

    Collecting tensorflow>=0.12.0rc0 (from prettytensor)
      Using cached tensorflow-0.12.0rc1-cp35-cp35m-manylinux1_x86_64.whl
    

    I tried pip uninstall tensorflow and then pip install tensorflow_gpu-0.12.0rc0-cp35-cp35m-linux_x86_64.whl but it doesn't work because it says it is already installed:

    (tf-gpu) magnus@torpedo:~/Downloads$ pip install tensorflow_gpu-0.12.0rc0-cp35-cp35m-linux_x86_64.whl 
    Requirement already satisfied: tensorflow-gpu==0.12.0rc0 from file:///home/magnus/Downloads/tensorflow_gpu-0.12.0rc0-cp35-cp35m-linux_x86_64.whl in /home/magnus/anaconda3/envs/tf-gpu/lib/python3.5/site-packages
    Requirement already satisfied: six>=1.10.0 in /home/magnus/anaconda3/envs/tf-gpu/lib/python3.5/site-packages (from tensorflow-gpu==0.12.0rc0)
    Requirement already satisfied: protobuf==3.1.0 in /home/magnus/anaconda3/envs/tf-gpu/lib/python3.5/site-packages (from tensorflow-gpu==0.12.0rc0)
    Requirement already satisfied: wheel>=0.26 in /home/magnus/anaconda3/envs/tf-gpu/lib/python3.5/site-packages (from tensorflow-gpu==0.12.0rc0)
    Requirement already satisfied: numpy>=1.11.0 in /home/magnus/anaconda3/envs/tf-gpu/lib/python3.5/site-packages (from tensorflow-gpu==0.12.0rc0)
    Requirement already satisfied: setuptools in /home/magnus/anaconda3/envs/tf-gpu/lib/python3.5/site-packages/setuptools-27.2.0-py3.5.egg (from protobuf==3.1.0->tensorflow-gpu==0.12.0rc0)
    

    How do I fix this, please?

    opened by Hvass-Labs 4
  • AttributeError: 'VariableScope' object has no attribute 'current_scope'

    AttributeError: 'VariableScope' object has no attribute 'current_scope'

    I got the following error while executing the code: AttributeError: 'VariableScope' object has no attribute 'current_scope'. Can anyone suggest how to resolve the error?

    opened by sree5472 2
  • With Tensorflow 0.12, it is throwing "TypeError: zeros_initializer() takes at least 1 argument (0 given)"

    With Tensorflow 0.12, it is throwing "TypeError: zeros_initializer() takes at least 1 argument (0 given)".

    I am using Tensorflow 0.12 and prettytensor. When I import prettytensor, it throws this error: "TypeError: zeros_initializer() takes at least 1 argument (0 given)".

    Version mismatch might be a problem.

    I am unable to figure out which version of Tensorflow will work with version 0.7.4 of prettytensor.

    opened by kailashahirwar 0
  • Prettytensor not working with TF1.8

    Prettytensor not working with TF1.8

    After upgrading to TF1.8, prettytensor stopped working with the following error. It seems _VARSCOPE_KEY is removed from variable_scope.

    .../lib/python3.6/site-packages/prettytensor/scopes.py in var_and_name_scope(names)
         53     full_name = var_scope.name
         54
    ---> 55     vs_key = tf.get_collection_ref(variable_scope._VARSCOPE_KEY)
         56     try:
         57       # TODO(eiderman): Remove this hack or fix the full file.

    AttributeError: module 'tensorflow.python.ops.variable_scope' has no attribute '_VARSCOPE_KEY'

    opened by YutingZhang 4
  • manipulating output layer of tensorflow during learning

    manipulating output layer of tensorflow during learning

    Hi, I have an autoencoder TensorFlow code, attached below. I want to change the output of the encoder part during learning and then send it as input to the decoder part, but I do not know how to do it. Here the output of the encoder is encoded, which I want to turn into a vector, change some of its values, and then send to the decoder. Please guide me on this problem and tell me what changes should be made in the following code. Thanks.

    #matplotlib inline
    
    import numpy as np
    import tensorflow as tf
    import matplotlib.pyplot as plt
    from tensorflow.examples.tutorials.mnist import input_data
    mnist = input_data.read_data_sets('MNIST_data', validation_size=10000)
    w=np.random.randint(2,size=60000)
    img = mnist.train.images[2]
    plt.imshow(img.reshape((28, 28)), cmap='Greys_r')
    
    learning_rate = 0.001
    # Input and target placeholders
    inputs_ = tf.placeholder(tf.float32, (None, 28,28,1), name="input")
    targets_ = tf.placeholder(tf.float32, (None, 28,28,1), name="target")
    
    ### Encoder
    conv1 = tf.layers.conv2d(inputs=inputs_, filters=16, kernel_size=(3,3), padding='same', activation=tf.nn.relu)
    # Now 28x28x16
    maxpool1 = tf.layers.max_pooling2d(conv1, pool_size=(2,2), strides=(2,2), padding='same')
    # Now 14x14x16
    conv2 = tf.layers.conv2d(inputs=maxpool1, filters=8, kernel_size=(3,3), padding='same', activation=tf.nn.relu)
    # Now 14x14x8
    maxpool2 = tf.layers.max_pooling2d(conv2, pool_size=(2,2), strides=(2,2), padding='same')
    # Now 7x7x8
    conv3 = tf.layers.conv2d(inputs=maxpool2, filters=8, kernel_size=(3,3), padding='same', activation=tf.nn.relu)
    # Now 7x7x8
    encoded = tf.layers.max_pooling2d(conv3, pool_size=(2,2), strides=(2,2), padding='same')
    # Now 4x4x8
    
    
    ### Decoder
    upsample1 = tf.image.resize_images(encoded, size=(7,7), method=tf.image.ResizeMethod.NEAREST_NEIGHBOR)
    # Now 7x7x8
    conv4 = tf.layers.conv2d(inputs=upsample1, filters=8, kernel_size=(3,3), padding='same', activation=tf.nn.relu)
    # Now 7x7x8
    upsample2 = tf.image.resize_images(conv4, size=(14,14), method=tf.image.ResizeMethod.NEAREST_NEIGHBOR)
    # Now 14x14x8
    conv5 = tf.layers.conv2d(inputs=upsample2, filters=8, kernel_size=(3,3), padding='same', activation=tf.nn.relu)
    # Now 14x14x8
    upsample3 = tf.image.resize_images(conv5, size=(28,28), method=tf.image.ResizeMethod.NEAREST_NEIGHBOR)
    # Now 28x28x8
    conv6 = tf.layers.conv2d(inputs=upsample3, filters=16, kernel_size=(3,3), padding='same', activation=tf.nn.relu)
    # Now 28x28x16
    
    logits = tf.layers.conv2d(inputs=conv6, filters=1, kernel_size=(3,3), padding='same', activation=None)
    #Now 28x28x1
    
    # Pass logits through sigmoid to get reconstructed image
    decoded = tf.nn.sigmoid(logits)
    
    # Pass logits through sigmoid and calculate the cross-entropy loss
    loss = tf.nn.sigmoid_cross_entropy_with_logits(labels=targets_, logits=logits)
    
    # Get cost and define the optimizer
    cost = tf.reduce_mean(loss)
    opt = tf.train.AdamOptimizer(learning_rate).minimize(cost)
    sess = tf.Session()
    epochs = 1
    batch_size = 200
    sess.run(tf.global_variables_initializer())
    for e in range(epochs):
        for ii in range(mnist.train.num_examples//batch_size):
            batch = mnist.train.next_batch(batch_size)
            imgs = batch[0].reshape((-1, 28, 28, 1))
            batch_cost, _ = sess.run([cost, opt], feed_dict={inputs_: imgs,
                                                             targets_: imgs})
            Fprim=tf.reshape(encoded,[-1,1])
            temp=Fprim[0]
            Fprim[0]=Fprim[1]
            Fprim[1]=temp
            encoded=tf.reshape(Fprim,[4,4,8])
            
    
    
            print("Epoch: {}/{}...".format(e+1, epochs),
                  "Training loss: {:.4f}".format(batch_cost))
     
    fig, axes = plt.subplots(nrows=2, ncols=10, sharex=True, sharey=True, figsize=(20,4))
    in_imgs = mnist.test.images[:10]
    noisy_imgs = in_imgs + 0.01 * np.random.randn(*in_imgs.shape)
    noisy_imgs = np.clip(noisy_imgs, 0., 1.)
    
    reconstructed = sess.run(decoded, feed_dict={inputs_: noisy_imgs.reshape((10, 28, 28, 1))})
    
    for images, row in zip([noisy_imgs, reconstructed], axes):
        for img, ax in zip(images, row):
            ax.imshow(img.reshape((28, 28)), cmap='Greys_r')
            ax.get_xaxis().set_visible(False)
            ax.get_yaxis().set_visible(False)
    
    fig.tight_layout(pad=0.1)
    
    opened by nadianaji 0
Owner

Google