TensorFlow Tutorial and Examples for Beginners (supports TF v1 & v2)

Overview

TensorFlow Examples

This tutorial was designed for easily diving into TensorFlow through examples. For readability, it includes both notebooks and source code with explanations, for both TF v1 & v2.

It is suitable for beginners who want to find clear and concise examples about TensorFlow. Besides the traditional 'raw' TensorFlow implementations, you can also find the latest TensorFlow API practices (such as layers, estimator, dataset, ...).

Update (05/16/2020): Moving all default examples to TF2. For TF v1 examples: check here.

Tutorial index

0 - Prerequisite

1 - Introduction

  • Hello World (notebook). Very simple example to learn how to print "hello world" using TensorFlow 2.0+.
  • Basic Operations (notebook). A simple example that covers TensorFlow 2.0+ basic operations.

2 - Basic Models

  • Linear Regression (notebook). Implement a Linear Regression with TensorFlow 2.0+.
  • Logistic Regression (notebook). Implement a Logistic Regression with TensorFlow 2.0+.
  • Word2Vec (Word Embedding) (notebook). Build a Word Embedding Model (Word2Vec) from Wikipedia data, with TensorFlow 2.0+.
  • GBDT (Gradient Boosted Decision Trees) (notebook). Implement Gradient Boosted Decision Trees with TensorFlow 2.0+ to predict house values using the Boston Housing dataset.

3 - Neural Networks

Supervised
  • Simple Neural Network (notebook). Use TensorFlow 2.0 'layers' and 'model' API to build a simple neural network to classify MNIST digits dataset.
  • Simple Neural Network (low-level) (notebook). Raw implementation of a simple neural network to classify MNIST digits dataset.
  • Convolutional Neural Network (notebook). Use TensorFlow 2.0+ 'layers' and 'model' API to build a convolutional neural network to classify MNIST digits dataset.
  • Convolutional Neural Network (low-level) (notebook). Raw implementation of a convolutional neural network to classify MNIST digits dataset.
  • Recurrent Neural Network (LSTM) (notebook). Build a recurrent neural network (LSTM) to classify MNIST digits dataset, using TensorFlow 2.0 'layers' and 'model' API.
  • Bi-directional Recurrent Neural Network (LSTM) (notebook). Build a bi-directional recurrent neural network (LSTM) to classify MNIST digits dataset, using TensorFlow 2.0+ 'layers' and 'model' API.
  • Dynamic Recurrent Neural Network (LSTM) (notebook). Build a recurrent neural network (LSTM) that performs dynamic calculation to classify sequences of variable length, using TensorFlow 2.0+ 'layers' and 'model' API.
Unsupervised
  • Auto-Encoder (notebook). Build an auto-encoder to encode an image to a lower dimension and re-construct it.
  • DCGAN (Deep Convolutional Generative Adversarial Networks) (notebook). Build a Deep Convolutional Generative Adversarial Network (DCGAN) to generate images from noise.

4 - Utilities

  • Save and Restore a model (notebook). Save and Restore a model with TensorFlow 2.0+.
  • Build Custom Layers & Modules (notebook). Learn how to build your own layers / modules and integrate them into TensorFlow 2.0+ Models.
  • Tensorboard (notebook). Track and visualize neural network computation graph, metrics, weights and more using TensorFlow 2.0+ tensorboard.

5 - Data Management

  • Load and Parse data (notebook). Build efficient data pipeline with TensorFlow 2.0 (Numpy arrays, Images, CSV files, custom data, ...).
  • Build and Load TFRecords (notebook). Convert data into TFRecords format, and load them with TensorFlow 2.0+.
  • Image Transformation (i.e. Image Augmentation) (notebook). Apply various image augmentation techniques with TensorFlow 2.0+, to generate distorted images for training.

6 - Hardware

  • Multi-GPU Training (notebook). Train a convolutional neural network with multiple GPUs on CIFAR-10 dataset.

TensorFlow v1

The tutorial index for TF v1 is available here: TensorFlow v1.15 Examples. Or see below for a list of the examples.

Dataset

Some examples require the MNIST dataset for training and testing. Don't worry, this dataset will be downloaded automatically when running the examples. MNIST is a database of handwritten digits; for a quick description of that dataset, you can check this notebook.

Official Website: http://yann.lecun.com/exdb/mnist/.
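
With TF v2, the same data can also be pulled in directly through Keras. A minimal sketch (the notebooks ship their own loading code, so treat this as illustration only):

import tensorflow as tf

# Downloads MNIST on first use and caches it under ~/.keras/datasets/.
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
print(x_train.shape, y_train.shape)  # (60000, 28, 28) (60000,)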

Installation

To download all the examples, simply clone this repository:

git clone https://github.com/aymericdamien/TensorFlow-Examples

To run them, you also need the latest version of TensorFlow. To install it:

pip install tensorflow

or (with GPU support):

pip install tensorflow_gpu

For more details about TensorFlow installation, you can check the TensorFlow Installation Guide.
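
Since the TF v1 and TF v2 examples are not interchangeable, it can help to confirm which version actually got installed. A minimal check:

import tensorflow as tf

print(tf.__version__)  # the default examples expect a 2.x release; the TF v1 examples expect 1.15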

TensorFlow v1 Examples - Index

The tutorial index for TF v1 is available here: TensorFlow v1.15 Examples.

0 - Prerequisite

1 - Introduction

  • Hello World (notebook) (code). Very simple example to learn how to print "hello world" using TensorFlow.
  • Basic Operations (notebook) (code). A simple example that covers TensorFlow basic operations.
  • TensorFlow Eager API basics (notebook) (code). Get started with TensorFlow's Eager API.

2 - Basic Models

  • Linear Regression (notebook) (code). Implement a Linear Regression with TensorFlow.
  • Linear Regression (eager api) (notebook) (code). Implement a Linear Regression using TensorFlow's Eager API.
  • Logistic Regression (notebook) (code). Implement a Logistic Regression with TensorFlow.
  • Logistic Regression (eager api) (notebook) (code). Implement a Logistic Regression using TensorFlow's Eager API.
  • Nearest Neighbor (notebook) (code). Implement Nearest Neighbor algorithm with TensorFlow.
  • K-Means (notebook) (code). Build a K-Means classifier with TensorFlow.
  • Random Forest (notebook) (code). Build a Random Forest classifier with TensorFlow.
  • Gradient Boosted Decision Tree (GBDT) (notebook) (code). Build a Gradient Boosted Decision Tree (GBDT) with TensorFlow.
  • Word2Vec (Word Embedding) (notebook) (code). Build a Word Embedding Model (Word2Vec) from Wikipedia data, with TensorFlow.

3 - Neural Networks

Supervised
  • Simple Neural Network (notebook) (code). Build a simple neural network (a.k.a Multi-layer Perceptron) to classify MNIST digits dataset. Raw TensorFlow implementation.
  • Simple Neural Network (tf.layers/estimator api) (notebook) (code). Use TensorFlow 'layers' and 'estimator' API to build a simple neural network (a.k.a Multi-layer Perceptron) to classify MNIST digits dataset.
  • Simple Neural Network (eager api) (notebook) (code). Use TensorFlow Eager API to build a simple neural network (a.k.a Multi-layer Perceptron) to classify MNIST digits dataset.
  • Convolutional Neural Network (notebook) (code). Build a convolutional neural network to classify MNIST digits dataset. Raw TensorFlow implementation.
  • Convolutional Neural Network (tf.layers/estimator api) (notebook) (code). Use TensorFlow 'layers' and 'estimator' API to build a convolutional neural network to classify MNIST digits dataset.
  • Recurrent Neural Network (LSTM) (notebook) (code). Build a recurrent neural network (LSTM) to classify MNIST digits dataset.
  • Bi-directional Recurrent Neural Network (LSTM) (notebook) (code). Build a bi-directional recurrent neural network (LSTM) to classify MNIST digits dataset.
  • Dynamic Recurrent Neural Network (LSTM) (notebook) (code). Build a recurrent neural network (LSTM) that performs dynamic calculation to classify sequences of different length.
Unsupervised
  • Auto-Encoder (notebook) (code). Build an auto-encoder to encode an image to a lower dimension and re-construct it.
  • Variational Auto-Encoder (notebook) (code). Build a variational auto-encoder (VAE), to encode and generate images from noise.
  • GAN (Generative Adversarial Networks) (notebook) (code). Build a Generative Adversarial Network (GAN) to generate images from noise.
  • DCGAN (Deep Convolutional Generative Adversarial Networks) (notebook) (code). Build a Deep Convolutional Generative Adversarial Network (DCGAN) to generate images from noise.

4 - Utilities

  • Save and Restore a model (notebook) (code). Save and Restore a model with TensorFlow.
  • Tensorboard - Graph and loss visualization (notebook) (code). Use Tensorboard to visualize the computation Graph and plot the loss.
  • Tensorboard - Advanced visualization (notebook) (code). Going deeper into Tensorboard; visualize the variables, gradients, and more...

5 - Data Management

  • Build an image dataset (notebook) (code). Build your own image dataset with TensorFlow data queues, from image folders or a dataset file.
  • TensorFlow Dataset API (notebook) (code). Introducing TensorFlow Dataset API for optimizing the input data pipeline.
  • Load and Parse data (notebook). Build efficient data pipeline (Numpy arrays, Images, CSV files, custom data, ...).
  • Build and Load TFRecords (notebook). Convert data into TFRecords format, and load them.
  • Image Transformation (i.e. Image Augmentation) (notebook). Apply various image augmentation techniques, to generate distorted images for training.

6 - Multi GPU

  • Basic Operations on multi-GPU (notebook) (code). A simple example to introduce multi-GPU in TensorFlow.
  • Train a Neural Network on multi-GPU (notebook) (code). A clear and simple TensorFlow implementation to train a convolutional neural network on multiple GPUs.

More Examples

The following examples come from TFLearn, a library that provides a simplified interface for TensorFlow. You can have a look; there are many examples and pre-built operations and layers.

Tutorials

  • TFLearn Quickstart. Learn the basics of TFLearn through a concrete machine learning task. Build and train a deep neural network classifier.

Examples

Comments
  • Convert Tensor to numpy array

    I am trying to calculate the ROC AUC score after every epoch. For that, the tensor object needs to be converted to a numpy array. The following is the code I am trying:

    
    # Launch the graph
    with tf.Session() as sess:
        sess.run(init)
    
        # Training cycle
        for epoch in range(training_epochs):
            avg_cost = 0.
            total_batch = int(len(trX)/batch_size)
            # Loop over all batches
            for i in range(total_batch):
                batch_x, batch_y = trX[batch_size*i:batch_size*(i+1)],trY[batch_size*i:batch_size*(i+1)]
                # Run optimization op (backprop) and cost op (to get loss value)
                _, c = sess.run([optimizer, cost], feed_dict={x: batch_x,
                                                              y: batch_y})
                # Compute average loss
                avg_cost += c / total_batch
            # Display logs per epoch step
            if epoch % display_step == 0:
                print "Epoch:", '%04d' % (epoch+1), "cost=", \
                    "{:.9f}".format(avg_cost)
                print roc_auc_score(teY,pred)
    
        print "Optimization Finished!"
        # Test model
        correct_prediction = tf.equal(tf.argmax(pred, 1), tf.argmax(y, 1))
        # Calculate accuracy
        accuracy = tf.reduce_mean(tf.cast(correct_prediction, "float"))
        print "Accuracy:", accuracy.eval({x: teX, y: teY})
    

    It is giving the following error:

    TypeError: Expected sequence or array-like, got <class 'tensorflow.python.framework.ops.Tensor'>

    Can you please tell me how to convert a tensor to a numpy array? I tried pred.eval(), but it also throws an error.
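
    One way to do this in TF v1 graph mode, sketched against the names used above (pred, x, teX, teY and the open sess): a Tensor only has concrete values when it is evaluated inside the session, and sess.run() returns a plain numpy array that scikit-learn can consume (whether the metric call itself then succeeds depends on the shapes roc_auc_score expects):

    # Sketch only: evaluate `pred` inside the `with tf.Session() as sess:` block.
    pred_np = sess.run(pred, feed_dict={x: teX})   # numpy.ndarray, not a Tensor
    # pred.eval(feed_dict={x: teX}, session=sess) is equivalent.
    print(roc_auc_score(teY, pred_np))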

    opened by nthakor 23
  • TypeError: unsupported operand type(s) for +: 'dict_values' and 'dict_values'

    When I run the file tensorflow_v2/notebooks/3_NeuralNetworks/neural_network_raw.ipynb (link), this error occurs: TypeError: unsupported operand type(s) for +: 'dict_values' and 'dict_values'. How can I solve this?

    My TF version is 2.1.0, Python 3.7.
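
    One fix that has been reported for this (see the autoencoder issue further down) is to convert the dict views to lists before concatenating. A sketch of the changed line:

    # Python 3 dict.values() returns a view, and two views cannot be added with `+`.
    trainable_variables = list(weights.values()) + list(biases.values())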

    opened by Billy1900 5
  • tf.nn.sparse_softmax_cross_entropy_with_logits & "ValueError: Rank mismatch: Rank of labels (received 2) should equal rank of logits minus 1 (received 2)."

    If you use tensorflow 1.20, you might get the error "ValueError: Rank mismatch: Rank of labels (received 2) should equal rank of logits minus 1 (received 2)." when you use the tf.nn.sparse_softmax_cross_entropy_with_logits function.

    My solution is to change "loss_op = tf.reduce_mean(tf.nn.sparse_softmax_cross_entropy_with_logits(logits=logits, labels=tf.cast(labels, dtype=tf.int32)))" to "loss_op = tf.reduce_mean(tf.nn.sparse_softmax_cross_entropy_with_logits(logits=logits_train, labels=tf.argmax(tf.cast(labels, dtype=tf.int32), 1)))",

    and then to change "acc_op = tf.metrics.accuracy(labels=labels, predictions=pred_classes)" to "acc_op = tf.metrics.accuracy(labels=tf.argmax(tf.cast(labels, dtype=tf.int32), 1), predictions=pred_classes)", as sketched below.
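
    Put together, the suggested workaround looks roughly like this (a sketch reassembled from the text above; logits_train, labels and pred_classes are the names used in the estimator-style example and are assumed here):

    loss_op = tf.reduce_mean(tf.nn.sparse_softmax_cross_entropy_with_logits(
        logits=logits_train,
        labels=tf.argmax(tf.cast(labels, dtype=tf.int32), 1)))
    acc_op = tf.metrics.accuracy(
        labels=tf.argmax(tf.cast(labels, dtype=tf.int32), 1),
        predictions=pred_classes)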

    good luck!

    opened by lhyfst 5
  • hello tensorflow

    Hi, I am running my first program in the Spyder IDE:

    from __future__ import print_function
    import tensorflow as tf
    hello = tf.constant('Hello, TensorFlow!')
    sess = tf.Session()
    print(sess.run(hello))

    The following error occurs:

    File "C:/Users/srprotech/Desktop/thesis/Spyder Practice/Hello world.py", line 9, in <module>
        import tensorflow as tf
    ImportError: No module named 'tensorflow'

    Please guide.

    opened by Mehak144 4
  • change n_steps value in training and test

    Hello, Aymeric Damien,

    Thanks in advance

    What I want to do is use different n_steps values in training and testing. How do I set that? For example, training currently uses n_steps = 16, but in testing I want to use n_steps = 55 (or vary it from example to example). I reset n_steps = 55 for the test, but this error happened:

    tensorflow.python.framework.errors.InvalidArgumentError: Number of ways to split should evenly divide the split dimension, but got split_dim 0 (size = 55) and num_split 16

    Thank you very much

    Leo

    opened by 987410 4
  • Bi-Directional LSTM

    Thanks for sharing this valuable resource. The recurrent network example was very useful to me for sequence classification. Can you please add a new example which is the same as recurrent_network.py but uses a Bi-Directional LSTM instead of a uni-directional one? That would be very useful for me. Thanks once again.

    opened by shoaibahmed 4
  • I have a question about rnn code

    istate = tf.placeholder("float", [None, 2*n_hidden]) #state & cell => 2x n_hidden
    

    Why do I have to use 2*n_hidden? Does it mean that we are using an LSTM cell and tensorflow.model.rnn requires us to provide the initial hidden state and initial cell state together when using an LSTM cell?
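
    For what it's worth, a sketch of why the width is 2*n_hidden, assuming the old non-tuple LSTM state layout used by early TF RNN examples: the cell state c and the hidden state h are concatenated along the last axis, so the placeholder has to hold both.

    # Assumed sketch for the old TF 1.x API with state_is_tuple=False.
    lstm_cell = tf.nn.rnn_cell.BasicLSTMCell(n_hidden, state_is_tuple=False)
    istate = tf.placeholder("float", [None, 2 * n_hidden])  # [c, h] concatenated
    c_state, h_state = tf.split(istate, 2, axis=1)           # each [None, n_hidden]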

    opened by yanghoonkim 4
  • Running random_forest.py gives error

    Copy-pasting the code of random_forest.py on Mac OS X with Python 3.6 and running it gives the following error:

    Extracting /tmp/data/train-images-idx3-ubyte.gz
    Extracting /tmp/data/train-labels-idx1-ubyte.gz
    Extracting /tmp/data/t10k-images-idx3-ubyte.gz
    Extracting /tmp/data/t10k-labels-idx1-ubyte.gz
    Traceback (most recent call last):
      File "random_forest.py", line 51, in <module>
        infer_op, _, _ = forest_graph.inference_graph(X)
      File "/usr/local/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 476, in __iter__
        raise TypeError("'Tensor' object is not iterable.")
    TypeError: 'Tensor' object is not iterable.
    
    opened by shray-yes 3
  • Random Forest error for tf 1.4: "Resource localhost/tree-1//N10tensorflow12tensorforest20DecisionTreeResourceE does not exist."

    Hi, I am getting the error below while running the random forest code with TensorFlow 1.4, even after changing "infer_op = forest_graph.inference_graph(X)" to "infer_op = forest_graph.inference_graph(X)[0]".

    Complete error details:

        2017-12-08 18:16:23.961954: I tensorflow/core/platform/cpu_feature_guard.cc:137] Your CPU supports instructions that this TensorFlow binary was not compiled to use: SSE4.1 SSE4.2 AVX AVX2 FMA
        2017-12-08 18:16:24.417796: W tensorflow/core/framework/op_kernel.cc:1192] Not found: Resource localhost/tree-1//N10tensorflow12tensorforest20DecisionTreeResourceE does not exist.
        ... (the same "Not found" warning is repeated for tree-0 through tree-9) ...
        Traceback (most recent call last):
          File "/Users/srikanth_m07/anaconda/lib/python3.5/site-packages/tensorflow/python/client/session.py", line 1323, in _do_call
            return fn(*args)
          File "/Users/srikanth_m07/anaconda/lib/python3.5/site-packages/tensorflow/python/client/session.py", line 1302, in _run_fn
            status, run_metadata)
          File "/Users/srikanth_m07/anaconda/lib/python3.5/site-packages/tensorflow/python/framework/errors_impl.py", line 473, in __exit__
            c_api.TF_GetCode(self.status.status))
        tensorflow.python.framework.errors_impl.NotFoundError: Resource localhost/tree-1//N10tensorflow12tensorforest20DecisionTreeResourceE does not exist.
          [[Node: TreeSize_1 = TreeSize_device="/job:localhost/replica:0/task:0/device:CPU:0"]]

    During handling of the above exception, another exception occurred:

    Traceback (most recent call last):
      File "/Users/srikanth_m07/Documents/work/tf_random_forest/tf_rf_mnist.py", line 68, in <module>
        _, l = sess.run([train_op, loss_op], feed_dict={X: batch_x, Y: batch_y})
      File "/Users/srikanth_m07/anaconda/lib/python3.5/site-packages/tensorflow/python/client/session.py", line 889, in run
        run_metadata_ptr)
      File "/Users/srikanth_m07/anaconda/lib/python3.5/site-packages/tensorflow/python/client/session.py", line 1120, in _run
        feed_dict_tensor, options, run_metadata)
      File "/Users/srikanth_m07/anaconda/lib/python3.5/site-packages/tensorflow/python/client/session.py", line 1317, in _do_run
        options, run_metadata)
      File "/Users/srikanth_m07/anaconda/lib/python3.5/site-packages/tensorflow/python/client/session.py", line 1336, in _do_call
        raise type(e)(node_def, op, message)
    tensorflow.python.framework.errors_impl.NotFoundError: Resource localhost/tree-1//N10tensorflow12tensorforest20DecisionTreeResourceE does not exist.
      [[Node: TreeSize_1 = TreeSize_device="/job:localhost/replica:0/task:0/device:CPU:0"]]

    Caused by op 'TreeSize_1', defined at:
      File "/Users/srikanth_m07/Documents/work/tf_random_forest/tf_rf_mnist.py", line 38, in <module>
        loss_op = forest_graph.training_loss(X, Y)
      File "/Users/srikanth_m07/anaconda/lib/python3.5/site-packages/tensorflow/contrib/tensor_forest/python/tensor_forest.py", line 541, in training_loss
        return math_ops.negative(self.average_size(), name=name)
      File "/Users/srikanth_m07/anaconda/lib/python3.5/site-packages/tensorflow/contrib/tensor_forest/python/tensor_forest.py", line 534, in average_size
        sizes.append(self.trees[i].size())
      File "/Users/srikanth_m07/anaconda/lib/python3.5/site-packages/tensorflow/contrib/tensor_forest/python/tensor_forest.py", line 690, in size
        return model_ops.tree_size(self.variables.tree)
      File "/Users/srikanth_m07/anaconda/lib/python3.5/site-packages/tensorflow/contrib/tensor_forest/python/ops/gen_model_ops.py", line 351, in tree_size
        "TreeSize", tree_handle=tree_handle, name=name)
      File "/Users/srikanth_m07/anaconda/lib/python3.5/site-packages/tensorflow/python/framework/op_def_library.py", line 787, in _apply_op_helper
        op_def=op_def)
      File "/Users/srikanth_m07/anaconda/lib/python3.5/site-packages/tensorflow/python/framework/ops.py", line 2956, in create_op
        op_def=op_def)
      File "/Users/srikanth_m07/anaconda/lib/python3.5/site-packages/tensorflow/python/framework/ops.py", line 1470, in __init__
        self._traceback = self._graph._extract_stack()  # pylint: disable=protected-access

    NotFoundError (see above for traceback): Resource localhost/tree-1//N10tensorflow12tensorforest20DecisionTreeResourceE does not exist. [[Node: TreeSize_1 = TreeSize_device="/job:localhost/replica:0/task:0/device:CPU:0"]]

    opened by smsrikanthreddy 3
  • TypeError in Random Forest

    I got an error while running the Random Forest example, and I don't know what causes it. Maybe the TensorFlow version?

    My environment is Python 3.6.3 + TensorFlow 1.4.0 on Mac 10.13.1

    Error:

        /Users/sakigami/anaconda/bin/python /Users/sakigami/programmer/Python/tf_study/02_basic_models/random_forest.py
        Extracting ../MNIST_data/train-images-idx3-ubyte.gz
        Extracting ../MNIST_data/train-labels-idx1-ubyte.gz
        Extracting ../MNIST_data/t10k-images-idx3-ubyte.gz
        Extracting ../MNIST_data/t10k-labels-idx1-ubyte.gz
        Traceback (most recent call last):
          File "/Users/sakigami/programmer/Python/tf_study/02_basic_models/random_forest.py", line 36, in <module>
            correct_prediction = tf.equal(tf.argmax(infer_op, 1), tf.cast(Y, tf.string))
          File "/Users/sakigami/anaconda/lib/python3.6/site-packages/tensorflow/python/util/deprecation.py", line 316, in new_func
            return func(*args, **kwargs)
          File "/Users/sakigami/anaconda/lib/python3.6/site-packages/tensorflow/python/ops/math_ops.py", line 205, in argmax
            return gen_math_ops.arg_max(input, axis, name=name, output_type=output_type)
          File "/Users/sakigami/anaconda/lib/python3.6/site-packages/tensorflow/python/ops/gen_math_ops.py", line 441, in arg_max
            name=name)
          File "/Users/sakigami/anaconda/lib/python3.6/site-packages/tensorflow/python/framework/op_def_library.py", line 513, in _apply_op_helper
            raise err
          File "/Users/sakigami/anaconda/lib/python3.6/site-packages/tensorflow/python/framework/op_def_library.py", line 510, in _apply_op_helper
            preferred_dtype=default_dtype)
          File "/Users/sakigami/anaconda/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 926, in internal_convert_to_tensor
            ret = conversion_func(value, dtype=dtype, name=name, as_ref=as_ref)
          File "/Users/sakigami/anaconda/lib/python3.6/site-packages/tensorflow/python/ops/array_ops.py", line 970, in _autopacking_conversion_function
            return _autopacking_helper(v, inferred_dtype, name or "packed")
          File "/Users/sakigami/anaconda/lib/python3.6/site-packages/tensorflow/python/ops/array_ops.py", line 912, in _autopacking_helper
            elem))
        TypeError: Cannot convert a list containing a tensor of dtype <dtype: 'string'> to <dtype: 'float32'> (Tensor is: <tf.Tensor 'stack_2:0' shape=(?, 10) dtype=string>)

    Process finished with exit code 1

    opened by SakigamiYang 3
  • failed in running bidirectional_rnn

    Using TensorFlow 0.12, it seems that tensorflow.contrib doesn't have modules like BasicLSTMCell and static_bidirectional_rnn.

    In 3_NeuralNetworks/bidirectional_rnn.py, lines 67, 69, 73 and 76.

    opened by qinglintian 3
  • Add NN function

    Going through the practice results, it looks like the NN needs an activation function (like relu); otherwise it is just a trivial linear model. By applying relu, accuracy for neural_network.py goes from 92% to 95%, and accuracy for neural_network_raw.py goes from 92% to 94%. Also, the learning rate is too large, even for an example; in practice it is usually set between 1e-2 and 1e-4.
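
    For reference, the suggested change in the raw version could look roughly like this (a sketch assuming the weights/biases dictionaries used in neural_network_raw.py):

    def neural_net(x):
        # Hidden layers now use a relu non-linearity instead of being purely linear.
        layer_1 = tf.nn.relu(tf.add(tf.matmul(x, weights['h1']), biases['b1']))
        layer_2 = tf.nn.relu(tf.add(tf.matmul(layer_1, weights['h2']), biases['b2']))
        # Output layer stays linear; softmax is applied inside the loss.
        out_layer = tf.matmul(layer_2, weights['out']) + biases['out']
        return out_layer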

    opened by Elvis-ever 2
  • Add a development container

    Hi folks 👋! I'm part of the team working on the development container specification, and we'd love to discuss adding a dev container(s) to this repo.

    Context

    A dev container lets you use a container as a full-featured dev environment. It can be used to run an app, to separate tools needed for working with a codebase, and to aid in continuous integration and testing.

    We're working on an open spec so that any user in any tool can create and connect to dev containers, including tools beyond VS Code or GitHub Codespaces. We recently released the reference implementation as an open source CLI, in addition to an initial version of the spec.

    The CLI and spec are both in active development, so they'll continue to evolve, especially with external feedback.

    Contributing to and collaborating with TensorFlow Examples

    Our team has been hosting an example dev container for TensorFlow Examples, and we'd love to discuss if dev containers being a more open, agnostic format might make them a more interesting addition to host in this repo directly. The current example we're hosting is designed for the TensorFlow Examples repo overall, but it could also work well to have a dev container for individual projects or folders (i.e. if they each require unique libraries).

    Please let us know your thoughts - we're eager to discuss and collaborate on any feedback or questions you may have. Thank you!

    opened by bamurtaugh 0
  • possible issue at: tensorflow_v2/notebooks/3_NeuralNetworks/autoencoder.ipynb

    Referring to the notebook:

    https://github.com/aymericdamien/TensorFlow-Examples/blob/master/tensorflow_v2/notebooks/3_NeuralNetworks/autoencoder.ipynb

    In the following code portion (8th code block of the notebook):

    # Optimization process.
    def run_optimization(x):
        # Wrap computation inside a GradientTape for automatic differentiation.
        with tf.GradientTape() as g:
            reconstructed_image = decoder(encoder(x))
            loss = mean_square(reconstructed_image, x)

        # Variables to update, i.e. trainable variables.
        trainable_variables = weights.values() + biases.values()

        # Compute gradients.
        gradients = g.gradient(loss, trainable_variables)

        # Update W and b following gradients.
        optimizer.apply_gradients(zip(gradients, trainable_variables))

        return loss

    the line:

    trainable_variables = weights.values() + biases.values()

    results in an error for me, because two Python dict_values objects cannot be summed with +. I personally solved the issue by converting both dict_values to lists:

    trainable_variables = list(weights.values()) + list(biases.values())

    I hope this issue is useful for this great repo! Thank you for the work.

    opened by nucccc 6
Owner
Aymeric Damien
Deep Learning Enthusiast. MLE @Snapchat. Past: Tsinghua University, EISTI