Deep learning library featuring a higher-level API for TensorFlow.

Overview

TFLearn: Deep learning library featuring a higher-level API for TensorFlow.

TFLearn is a modular and transparent deep learning library built on top of TensorFlow. It was designed to provide a higher-level API to TensorFlow in order to facilitate and speed up experimentation, while remaining fully transparent and compatible with it.

TFLearn features include:

  • Easy-to-use and easy-to-understand high-level API for implementing deep neural networks, with tutorials and examples.
  • Fast prototyping through highly modular built-in neural network layers, regularizers, optimizers, metrics...
  • Full transparency over TensorFlow. All functions are built over tensors and can be used independently of TFLearn.
  • Powerful helper functions to train any TensorFlow graph, with support for multiple inputs, outputs, and optimizers.
  • Easy and beautiful graph visualization, with details about weights, gradients, activations and more...
  • Effortless device placement for using multiple CPUs/GPUs.

The high-level API currently supports most recent deep learning models, such as convolutions, LSTM, BiRNN, BatchNorm, PReLU, residual networks, and generative networks. TFLearn is also intended to stay up to date with the latest deep learning techniques in the future.

Note: The latest TFLearn (v0.5) is only compatible with TensorFlow v2.0 and above.

Overview

# Classification
tflearn.init_graph(num_cores=8, gpu_memory_fraction=0.5)

net = tflearn.input_data(shape=[None, 784])
net = tflearn.fully_connected(net, 64)
net = tflearn.dropout(net, 0.5)
net = tflearn.fully_connected(net, 10, activation='softmax')
net = tflearn.regression(net, optimizer='adam', loss='categorical_crossentropy')

model = tflearn.DNN(net)
model.fit(X, Y)

# Sequence Generation
net = tflearn.input_data(shape=[None, 100, 5000])
net = tflearn.lstm(net, 64)
net = tflearn.dropout(net, 0.5)
net = tflearn.fully_connected(net, 5000, activation='softmax')
net = tflearn.regression(net, optimizer='adam', loss='categorical_crossentropy')

model = tflearn.SequenceGenerator(net, dictionary=idx, seq_maxlen=100)
model.fit(X, Y)
model.generate(50, temperature=1.0)
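
For context, the classification snippet above needs training data bound to X and Y. A minimal sketch using TFLearn's bundled MNIST loader (assuming the dataset download succeeds; shapes match the input_data layer above):

import tflearn.datasets.mnist as mnist

# Flat 784-dim images with one-hot labels, matching input_data(shape=[None, 784])
# in the classification example above.
X, Y, testX, testY = mnist.load_data(one_hot=True)
# X and Y can then be passed to model.fit(X, Y) in the classification example.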

There are many more examples available here.

Compatibility

TFLearn is based on the original TensorFlow v1 graph API. When using TFLearn, make sure to import TensorFlow as follows:

import tflearn
import tensorflow.compat.v1 as tf
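
Because TFLearn layers return plain TensorFlow tensors, a graph built with TFLearn can be inspected or extended with raw tf.compat.v1 ops. A minimal sketch (assuming a working TFLearn 0.5 / TensorFlow 2.x install; the op name is illustrative):

import tflearn
import tensorflow.compat.v1 as tf

net = tflearn.input_data(shape=[None, 784])
net = tflearn.fully_connected(net, 64, activation='relu')

# The layer output is an ordinary graph tensor, so standard tf.compat.v1 ops apply.
logits = tf.identity(net, name='exposed_logits')
print(type(net))  # a plain TensorFlow Tensor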

Installation

TensorFlow Installation

TFLearn requires TensorFlow (version 2.0+) to be installed.

To install TensorFlow, simply run:

pip install tensorflow

or, with GPU-support:

pip install tensorflow-gpu

For more details, see the TensorFlow installation instructions.

TFLearn Installation

The easiest way to install TFLearn is with pip.

For the bleeding edge version (recommended):

pip install git+https://github.com/tflearn/tflearn.git

For the latest stable version:

pip install tflearn

Otherwise, you can also install from source by running (from the source folder):

python setup.py install
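
To check that the installation worked (assuming tflearn exposes __version__, as recent releases do), you can run:

python -c "import tflearn; print(tflearn.__version__)"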

Getting Started

See Getting Started with TFLearn to learn about TFLearn's basic functionalities, or start browsing TFLearn Tutorials.

Examples

There are many neural network implementations available; see Examples.

Documentation

http://tflearn.org/doc_index

Model Visualization

Graph

[graph visualization screenshot]

Loss & Accuracy (multiple runs)

[loss visualization screenshot]

Layers

[layers visualization screenshot]

Contributions

This is the first release of TFLearn. If you find any bugs, please report them in the GitHub issues section.

Improvements and requests for new features are more than welcome! Do not hesitate to twist and tweak TFLearn, and send pull requests.

For more info: Contribute to TFLearn.

License

MIT License

Issues
  • define a customized objective function

    Hi there, how can I define my own loss function in tf-learn? Thanks!
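
    A minimal sketch of one commonly suggested approach (not verified against every TFLearn version): pass a callable with the same (y_pred, y_true) signature as the built-in functions in tflearn.objectives to the loss argument of regression.

    import tensorflow.compat.v1 as tf
    import tflearn

    # Hypothetical custom objective; TFLearn's built-in objectives use the
    # same (y_pred, y_true) argument order.
    def my_mean_square(y_pred, y_true):
        return tf.reduce_mean(tf.square(y_pred - y_true))

    net = tflearn.input_data(shape=[None, 784])
    net = tflearn.fully_connected(net, 10, activation='softmax')
    net = tflearn.regression(net, optimizer='adam', loss=my_mean_square)
    model = tflearn.DNN(net)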

    opened by kingfengji 30
  • Understanding Tensorflow/Tflearn LSTM input?

    I am trying to train an LSTM using GloVe embeddings. After reading through https://github.com/tflearn/tflearn/issues/8, I tried implementing it using the following code:

    Reading and pre-processing data

    [screenshot of code]

    Setting up an LSTM and training

    [screenshot of code]

    but am getting the following error:

    [screenshot of error]

    I followed all the instructions that were highlighted in https://github.com/tflearn/tflearn/issues/8 and I do not understand why it is crashing. Can someone please help? :(

    Thanks a lot!

    opened by pbhatnagar3 26
  • How to save model and retrain it?

    Hi, are there some APIs that can be used to save a model and restore it for retraining?
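
    A minimal sketch (X and Y are assumed to be training arrays loaded elsewhere): tflearn.DNN exposes save and load, and fit can be called again on the restored model to continue training.

    import tflearn

    net = tflearn.input_data(shape=[None, 784])
    net = tflearn.fully_connected(net, 10, activation='softmax')
    net = tflearn.regression(net)

    model = tflearn.DNN(net)
    model.fit(X, Y, n_epoch=5)
    model.save('my_model.tflearn')   # writes checkpoint files

    # Later: rebuild the same graph, load the weights, and keep training.
    model.load('my_model.tflearn')
    model.fit(X, Y, n_epoch=5)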

    opened by lfwin 22
  • Question: how to use rnn in context of image sequence classification in TFLearn?

    Hi,

    I saw the example code of using LSTM cells in TFLearn, classifying MNIST images. I want to classify an image sequence. I'm not sure how to achieve this in TFLearn, but I thought I might go with this architecture:

    1. Conv
    2. Max Pooling .... repeat
    3. Fully connected
    4. LSTM
    5. SoftMax

    From what I understand normal layers receive this shape type: (batch_size, height, width, depth) while LSTM: (batch_size, sequence_length, input_length)

    I thought of reshaping the input from: (batch_size, height, width, depth) to: (batch_size, sequence_length, height * width * depth)

    but I'm not sure this is how it should work. Am I missing something? Any advice?
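
    For what it's worth, a rough sketch of the reshaping idea described above (shapes and layer sizes are illustrative assumptions, and the convolutional front-end is omitted):

    import numpy as np
    import tflearn

    seq_len, h, w, d = 10, 28, 28, 1
    frames = np.random.rand(32, seq_len, h, w, d)   # dummy batch of image sequences

    # Flatten each frame so the batch becomes (batch_size, seq_len, h * w * d),
    # which is the (batch, time, features) layout the LSTM layer expects.
    X = frames.reshape(-1, seq_len, h * w * d)

    net = tflearn.input_data(shape=[None, seq_len, h * w * d])
    net = tflearn.lstm(net, 128)
    net = tflearn.fully_connected(net, 5, activation='softmax')
    net = tflearn.regression(net, optimizer='adam', loss='categorical_crossentropy')
    model = tflearn.DNN(net)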

    opened by amirbar 22
  • import tflearn error: AttributeError: module 'pandas' has no attribute 'computation'

    After installing tflearn in windows 10.

    For the line of code:

    import tflearn

    I got the error:

    Traceback (most recent call last):
      File "C:\Users\Ernest\git\test-code\test-code\src\tflearn_quick_start\main.py", line 5, in <module>
        import tflearn
      File "C:\Users\Ernest\AppData\Local\Continuum\lib\site-packages\tflearn\__init__.py", line 4, in <module>
        from . import config
      File "C:\Users\Ernest\AppData\Local\Continuum\lib\site-packages\tflearn\config.py", line 5, in <module>
        from .variables import variable
      File "C:\Users\Ernest\AppData\Local\Continuum\lib\site-packages\tflearn\variables.py", line 7, in <module>
        from tensorflow.contrib.framework.python.ops import add_arg_scope as contrib_add_arg_scope
      File "C:\Users\Ernest\AppData\Local\Continuum\lib\site-packages\tensorflow\contrib\__init__.py", line 30, in <module>
        from tensorflow.contrib import factorization
      File "C:\Users\Ernest\AppData\Local\Continuum\lib\site-packages\tensorflow\contrib\factorization\__init__.py", line 24, in <module>
        from tensorflow.contrib.factorization.python.ops.gmm import *
      File "C:\Users\Ernest\AppData\Local\Continuum\lib\site-packages\tensorflow\contrib\factorization\python\ops\gmm.py", line 27, in <module>
        from tensorflow.contrib.learn.python.learn.estimators import estimator
      File "C:\Users\Ernest\AppData\Local\Continuum\lib\site-packages\tensorflow\contrib\learn\__init__.py", line 87, in <module>
        from tensorflow.contrib.learn.python.learn import *
      File "C:\Users\Ernest\AppData\Local\Continuum\lib\site-packages\tensorflow\contrib\learn\python\__init__.py", line 23, in <module>
        from tensorflow.contrib.learn.python.learn import *
      File "C:\Users\Ernest\AppData\Local\Continuum\lib\site-packages\tensorflow\contrib\learn\python\learn\__init__.py", line 25, in <module>
        from tensorflow.contrib.learn.python.learn import estimators
      File "C:\Users\Ernest\AppData\Local\Continuum\lib\site-packages\tensorflow\contrib\learn\python\learn\estimators\__init__.py", line 297, in <module>
        from tensorflow.contrib.learn.python.learn.estimators.dnn import DNNClassifier
      File "C:\Users\Ernest\AppData\Local\Continuum\lib\site-packages\tensorflow\contrib\learn\python\learn\estimators\dnn.py", line 29, in <module>
        from tensorflow.contrib.learn.python.learn.estimators import dnn_linear_combined
      File "C:\Users\Ernest\AppData\Local\Continuum\lib\site-packages\tensorflow\contrib\learn\python\learn\estimators\dnn_linear_combined.py", line 31, in <module>
        from tensorflow.contrib.learn.python.learn.estimators import estimator
      File "C:\Users\Ernest\AppData\Local\Continuum\lib\site-packages\tensorflow\contrib\learn\python\learn\estimators\estimator.py", line 49, in <module>
        from tensorflow.contrib.learn.python.learn.learn_io import data_feeder
      File "C:\Users\Ernest\AppData\Local\Continuum\lib\site-packages\tensorflow\contrib\learn\python\learn\learn_io\__init__.py", line 21, in <module>
        from tensorflow.contrib.learn.python.learn.learn_io.dask_io import extract_dask_data
      File "C:\Users\Ernest\AppData\Local\Continuum\lib\site-packages\tensorflow\contrib\learn\python\learn\learn_io\dask_io.py", line 26, in <module>
        import dask.dataframe as dd
      File "C:\Users\Ernest\AppData\Local\Continuum\lib\site-packages\dask\dataframe\__init__.py", line 3, in <module>
        from .core import (DataFrame, Series, Index, _Frame, map_partitions,
      File "C:\Users\Ernest\AppData\Local\Continuum\lib\site-packages\dask\dataframe\core.py", line 36, in <module>
        pd.computation.expressions.set_use_numexpr(False)
    AttributeError: module 'pandas' has no attribute 'computation'
    

    I use the following versions:

    TensorFlow: 1.1.0-rc2
    pandas: 0.20.1
    

    Any help?

    opened by ebonat 18
  • conv_2d_transpose output_shape does not match

    I'm attempting to use conv2d_transpose in my network, but can't seem to get past this error: ValueError: output_shape does not match filter's output channels, 1 != 16. I'm wondering if it's possible there is an issue in tflearn. I noticed that the tensorflow docs say the 3rd argument of conv2d_transpose is the output_shape, but tflearn passes in stride as the third argument.

    Thanks in advance for your help.

    Relevant bit of my network:

    network = conv_2d(network, 4096, 6, activation='relu')
    network = dropout(network, 0.5)
    network = conv_2d(network, 4096, 1, activation='relu')
    network = dropout(network, 0.5)
    network = conv_2d(network, 16, 1)
    network = conv_2d_transpose(network, 16, 63, strides=32)
    

    Full traceback:

    Traceback (most recent call last):
      File "ux_network.py", line 50, in <module>
        network = input_data(shape=[None, 224, 224, 3], data_preprocessing=img_prep)
      File "/usr/local/lib/python2.7/dist-packages/tflearn/layers/conv.py", line 175, in conv_2d_transpose
        inference = tf.nn.conv2d_transpose(incoming, W, strides, padding)
      File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/nn_ops.py", line 93, in conv2d_transpose
        "{} != {}".format(output_shape[3], filter.get_shape()[2]))
    ValueError: output_shape does not match filter's output channels, 1 != 16
    
    opened by FreakTheMighty 17
  • Bidirectional LSTM example throws shape error

    Running the following file throws an error: https://github.com/tflearn/tflearn/blob/master/examples/nlp/bidirectional_lstm.py

    ValueError: Shape (128, ?) must have rank at least 3

    Setup:

    • MacOS Sierra (10.12)
    • Python 2.7
    • Tensorflow v1.2.0
    • TFLearn v0.3.2
    Traceback (most recent call last):
      File "bidirectional_lstm.py", line 47, in <module>
        net = bidirectional_rnn(net, BasicLSTMCell(128), BasicLSTMCell(128))
      File "/Users/colin/tensorflow/lib/python2.7/site-packages/tflearn/layers/recurrent.py", line 374, in bidirectional_rnn
        dtype=tf.float32)
      File "/Users/colin/tensorflow/lib/python2.7/site-packages/tensorflow/python/ops/rnn.py", line 375, in bidirectional_dynamic_rnn
        time_major=time_major, scope=fw_scope)
      File "/Users/colin/tensorflow/lib/python2.7/site-packages/tensorflow/python/ops/rnn.py", line 574, in dynamic_rnn
        dtype=dtype)
      File "/Users/colin/tensorflow/lib/python2.7/site-packages/tensorflow/python/ops/rnn.py", line 637, in _dynamic_rnn_loop
        for input_ in flat_input)
      File "/Users/colin/tensorflow/lib/python2.7/site-packages/tensorflow/python/ops/rnn.py", line 637, in <genexpr>
        for input_ in flat_input)
      File "/Users/colin/tensorflow/lib/python2.7/site-packages/tensorflow/python/framework/tensor_shape.py", line 649, in with_rank_at_least
        raise ValueError("Shape %s must have rank at least %d" % (self, rank))
    ValueError: Shape (128, ?) must have rank at least 3
    
    opened by colinskow 16
  • ImportError: cannot import name titanic

    Hi guys, I'm facing this error right now. Not sure how to fix it, because this is the first time I've checked out TFLearn - Quick Start.

     [ ~/Desktop ] 👉 python my_test.py
     Traceback (most recent call last):
       File "my_test.py", line 7, in <module>
         from tflearn.datasets import titanic
     ImportError: cannot import name titanic

    I'm running a Mac (10.11.6) and followed the installation tutorial already. Thanks

    opened by mncvnn 16
  • Training an RNN to generate a sine wave

    I'm attempting to do this but the predicted values (in green) don't look anything like a sine wave.

    [plot of predicted (green) vs. actual values]

    I've tried various learning rates, optimizers, number of units, length of history for training, number of steps I try to look ahead. I'm not yet seeing what the problem is, so I would be grateful for suggestions.

    # Simple example using recurrent neural network to predict time series values
    
    from __future__ import division, print_function, absolute_import
    
    import tflearn
    from tflearn.layers.normalization import batch_normalization
    import numpy as np
    import math
    import matplotlib
    matplotlib.use('Agg')
    import matplotlib.pyplot as plt
    
    step_radians = 0.01
    steps_of_history = 100
    steps_in_future = 2
    index = 0
    
    x = np.arange(0, 4*math.pi, step_radians)
    y = np.sin(x)
    
    # Put the data into the right shape
    while (index+steps_of_history+steps_in_future < len(y)):
        window = y[index:index+steps_of_history]
        target = y[index+steps_of_history+steps_in_future]
        if index == 0:
            trainX = window
            trainY = target
        else:
            trainX = np.vstack([trainX, window])
            trainY = np.append(trainY, target)
        index = index+1
    trainX.shape = (index, steps_of_history, 1)
    trainY.shape = (index, 1)
    
    # Network building
    net = tflearn.input_data(shape=[None, steps_of_history, 1])
    net = tflearn.simple_rnn(net, n_units=512, return_seq=False)
    net = tflearn.dropout(net, 0.5)
    net = tflearn.fully_connected(net, 1, activation='linear')
    net = tflearn.regression(net, optimizer='sgd', loss='mean_square', learning_rate=0.001)
    
    # Training
    model = tflearn.DNN(net, clip_gradients=0.0, tensorboard_verbose=0)
    model.fit(trainX, trainY, n_epoch=150, validation_set=0.1, show_metric=True, batch_size=128)
    
    # Prepare the testing data set
    # testX = window to use for prediction
    # testY = actual value
    # predictY = predicted value
    index = 0
    while (index+steps_of_history+steps_in_future < len(y)):
        window = y[index:index+steps_of_history]
        target = y[index+steps_of_history+steps_in_future]
        if index == 0:
            testX = window
            testY = target
        else:
            testX = np.vstack([testX, window])
            testY = np.append(testY, target)
        index = index+1
    testX.shape = (index, steps_of_history, 1)
    testY.shape = (index, 1)
    
    # Predict the future values
    predictY = model.predict(testX)
    
    # Plot the results
    plt.figure(figsize=(20,4))
    plt.suptitle('Prediction')
    plt.title('History='+str(steps_of_history)+', Future='+str(steps_in_future))
    plt.plot(y, 'r-', label='Actual')
    plt.plot(predictY, 'gx', label='Predicted')
    plt.legend()
    plt.savefig('sine.png')
    
    
    opened by DarylWM 15
  • lstm value error of different shape

    I tried to modify imdb example to my dataset, which is given below 3 3 373 27 9 615 9 16 10 34 0 8 0 199 65917 1319 122 402 319 183 3 3 77 12 4 66 4 3 0 5 0 14 3 50 106 139 38 164 53 109 3 3 86 6 2 6 2 0 0 1 0 25 0 4 284 77888 19 66 11 25 3 3 469 21 7 291 7 43 15 82 0 207 0 181 115646 59073 294 928 112 675 3 3 2090 21 7 4035 7 17 8 40 0 317 10 717 1033 25661 142 2054 1795 1023 3 3 691 18 6 597 6 30 16 61 0 245 18 273 719 2352305 213 1106 324 719 6 6 229 0 8 526 0 11 1 13 0 6 5 101 7246 2082 120 141 288 1570 3 3 1158 9 3 649 3 16 6 17 1 247 38 477 592 987626 82 1305 653 707 4 4 211 0 10 429 0 16 9 20 0 3 0 106 42725 27302 4280 133 477 1567

    The first column is the target, which has 9 classes; there are around 1803 features.

    from __future__ import print_function
    import numpy as np
    from sklearn.cross_validation import train_test_split
    import tflearn
    import pandas as pd
    from tflearn.data_utils import to_categorical, pad_sequences

    print("Loading")
    data = pd.read_csv('Train.csv')

    X = data.iloc[:, 1:1805]
    y = data.iloc[:, 0]

    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

    print(X_train.shape)
    print(X_test.shape)
    print(y_train.shape)
    print(y_test.shape)

    print("Preprocessing")
    X_train1 = X_train.values.T.tolist()
    X_test1 = X_test.values.tolist()
    y_train1 = y_train.values.T.tolist()
    y_test1 = y_test.values.tolist()

    # Data preprocessing
    # Sequence padding
    trainX = pad_sequences(X_train1, maxlen=200, value=0.)
    testX = pad_sequences(X_test1, maxlen=200, value=0.)

    # Converting labels to binary vectors
    trainY = to_categorical(y_train, nb_classes=0)
    testY = to_categorical(y_test, nb_classes=0)

    # Network building
    net = tflearn.input_data([None, 200])
    net = tflearn.embedding(net, input_dim=20000, output_dim=128)
    net = tflearn.lstm(net, 128)
    net = tflearn.dropout(net, 0.5)
    net = tflearn.fully_connected(net, 2, activation='softmax')
    net = tflearn.regression(net, optimizer='adam', loss='categorical_crossentropy')

    # Training
    model = tflearn.DNN(net, clip_gradients=0., tensorboard_verbose=0)
    model.fit(trainX, trainY, validation_set=(testX, testY), show_metric=True, batch_size=128)

    opened by vinayakumarr 15
  • def variance_scaling in initializations.py tries to call deprecated class

    Tensorflow version: 2.5.1
    tflearn version: 0.5.0

    Error: ModuleNotFoundError: No module named 'tensorflow.contrib.layers.python.layers.initializers.variance_scaling_initializer'

    VarianceScaling is now in tensorflow.keras.initializers.

    opened by Movage 0
  • Xception Example model

    I wish you would add more examples at tflearn/examples. It would be really cool if you added the Xception model as well. tflearn is much more convenient for me than Keras, and I am not able to write Xception from scratch, so I would be grateful if you added it 👍 💯💯 :)

    opened by FurkanThePythoneer 0
  • ValueError: Cannot feed value of shape (61,) for Tensor 'InputData/X:0', which has shape '(?, 61)'

    HELP needed. How can I fix this error?

    def bag_of_words(s, words):
        bag = [0 for _ in range(len(words))]

        s_words = nltk.word_tokenize(s)
        s_words = [stemmer.stem(word.lower()) for word in s_words]

        for se in s_words:
            for i, w in enumerate(words):
                if w == se:
                    bag[i] = 1

        return np.array(bag)


    def chat():
        print("Start talking with the bot! (type quit to stop)")
        while True:
            inp = input("You: ")
            if inp.lower() == "quit":
                break

            result = model.predict([bag_of_words(inp, words)])[0]
            result_index = np.argmax(result)
            tag = labels[result_index]

            if result[result_index] > 0.7:
                for tg in data["intents"]:
                    if tg['tag'] == tag:
                        responses = tg['responses']
                print(random.choice(responses))
            else:
                print("I didnt get that. Can you explain or try again.")


    chat()

    And when I run it I get:

    Start talking with the bot! (type quit to stop)
    You: j
    Traceback (most recent call last):
      File "", line 22, in <module>
        chat()
      File "", line 8, in chat
        result = model.predict([bag_of_words(inp, words)])
      File "C:\Users\monik\anaconda3\lib\site-packages\tflearn\models\dnn.py", line 251, in predict
        return self.predictor.predict(feed_dict)
      File "C:\Users\monik\anaconda3\lib\site-packages\tflearn\helpers\evaluator.py", line 69, in predict
        return self.session.run(self.tensors[0], feed_dict=feed_dict)
      File "C:\Users\monik\AppData\Roaming\Python\Python38\site-packages\tensorflow\python\client\session.py", line 967, in run
        result = self._run(None, fetches, feed_dict, options_ptr,
      File "C:\Users\monik\AppData\Roaming\Python\Python38\site-packages\tensorflow\python\client\session.py", line 1164, in _run
        raise ValueError(
    ValueError: Cannot feed value of shape (61,) for Tensor 'InputData/X:0', which has shape '(?, 61)'

    I tried to reshape it, but still it doesn't work

    opened by monibaka 0
  • Security Fix for Arbitrary Code Execution - huntr.dev

    https://huntr.dev/users/Anon-Artist has fixed the Arbitrary Code Execution vulnerability 🔨. Think you could fix a vulnerability like this?

    Get involved at https://huntr.dev/

    Q | A
    Version Affected | ALL
    Bug Fix | YES
    Original Pull Request | https://github.com/418sec/tflearn/pull/1
    Vulnerability README | https://github.com/418sec/huntr/blob/master/bounties/pip/tflearn/1/README.md

    User Comments:

    :bar_chart: Metadata *

    TFlearn is a modular and transparent deep learning library built on top of Tensorflow. It was designed to provide a higher-level API to TensorFlow in order to facilitate and speed-up experimentations, while remaining fully transparent and compatible with it. This package was vulnerable to Arbitrary Code Execution.

    Bounty URL: https://www.huntr.dev/bounties/1-pip-tflearn

    :gear: Description *

    The load_batch() function is used to load the CIFAR-10 dataset for training. Lack of restriction on the input allows an attacker-crafted file to be unpickled, which causes code execution.

    :computer: Technical Description *

    Fixed by avoiding unsafe loader.

    :bug: Proof of Concept (PoC) *

    Create the following PoC file: exploit.py

    import pickle
    import os
    import nevergrad
    from ray.tune.suggest.nevergrad import NevergradSearch
    
    class EvilPickle(object):
        def __reduce__(self):
            return (os.system, ('calc.exe', ))
    
    payload = pickle.dumps(EvilPickle())
    optimizer = nevergrad.optimization.Optimizer(1)
    ngSearch = NevergradSearch(optimizer)
    
    with open('payload', 'wb') as f:
        f.write(payload)
    
    ngSearch.restore('payload')
    
    

    Execute the following command in another terminal:

    python3 exploit.py

    Check the output:

    xcalc will pop up.

    :fire: Proof of Fix (PoF) *

    After the fix, it will not pop up a calc.

    :+1: User Acceptance Testing (UAT)

    After the fix, functionality is unaffected.

    opened by huntr-helper 0
  • Not working with tensorflow 2.3.1

    from tensorflow.contrib.framework.python.ops import add_arg_scope as contrib_add_arg_scope
    

    ModuleNotFoundError: No module named 'tensorflow.contrib'

    opened by AkilaUd96 5
  • #001 'unicodeescape' code can't decode bytes in position 2-3: truncated \UXXXXXXXX escape

    Getting this syntax error.
    OS: Windows 10
    Python version: 3.5
    Keras: 2.3.1
    IDE: PyCharm 2019 edition 3.1

    [screenshots: Keras error, Keras error2]

    opened by rustyboy0908 1
  • Error in TF2.0... Any ideas?

    Um, I know there are lots of dups here but I still want to ask this. When importing tflearn, I got the following error:

    Traceback (most recent call last):
      File "/Users/sam/Desktop/Python/ML/TensorFlow/TCB/venv/lib/python3.6/site-packages/tflearn/helpers/summarizer.py", line 9, in <module>
        merge_summary = tf.summary.merge
    AttributeError: module 'tensorboard.summary._tf.summary' has no attribute 'merge'
    
    During handling of the above exception, another exception occurred:
    
    Traceback (most recent call last):
      File "/Users/sam/Desktop/Python/ML/TensorFlow/TCB/main.py", line 4, in <module>
        import tflearn
      File "/Users/sam/Desktop/Python/ML/TensorFlow/TCB/venv/lib/python3.6/site-packages/tflearn/__init__.py", line 8, in <module>
        from . import models
      File "/Users/sam/Desktop/Python/ML/TensorFlow/TCB/venv/lib/python3.6/site-packages/tflearn/models/__init__.py", line 2, in <module>
        from .dnn import DNN
      File "/Users/sam/Desktop/Python/ML/TensorFlow/TCB/venv/lib/python3.6/site-packages/tflearn/models/dnn.py", line 6, in <module>
        from ..helpers.trainer import Trainer
      File "/Users/sam/Desktop/Python/ML/TensorFlow/TCB/venv/lib/python3.6/site-packages/tflearn/helpers/__init__.py", line 2, in <module>
        from .evaluator import Evaluator
      File "/Users/sam/Desktop/Python/ML/TensorFlow/TCB/venv/lib/python3.6/site-packages/tflearn/helpers/evaluator.py", line 9, in <module>
        from .trainer import evaluate_flow
      File "/Users/sam/Desktop/Python/ML/TensorFlow/TCB/venv/lib/python3.6/site-packages/tflearn/helpers/trainer.py", line 20, in <module>
        from .summarizer import summaries, summarize, summarize_gradients, \
      File "/Users/sam/Desktop/Python/ML/TensorFlow/TCB/venv/lib/python3.6/site-packages/tflearn/helpers/summarizer.py", line 12, in <module>
        merge_summary = tf.merge_summary
    AttributeError: module 'tensorflow' has no attribute 'merge_summary'
    

    I'm using Python3.6 and have installed tflearn using git, but this error still bothers me. By the way, is this project dead? Will it support tf2... I'm new to tensorflow, so I might have some configs wrong. Thanks!

    opened by samzhangjy 3
  • fix: ChainCallback sharing default mutable list

    Hello, I am a security engineer at r2c.dev. We are working to write code checks for security in open source code.

    In Python, the default values of function parameters are instantiated at function definition time. All calls to that function that use the default value point to the same object. For example:

    def func(x=[]):
       x.append(1)
       print(x)
    
    func() # [1]
    func() # [1 , 1]
    

    Because of this, all ChainCallback instances potentially share the same list of callbacks.

    Fix: The recommended solution is to set the default to None and assign a new empty list when the variable is None.
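
    A minimal sketch of that pattern:

    def func(x=None):
        # Create a fresh list per call instead of sharing one default object.
        if x is None:
            x = []
        x.append(1)
        print(x)

    func()  # [1]
    func()  # [1]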

    We have a tool called Bento you can use for your project that continuously detects problems like this one. The check that identified this will be available in the very near future. Thanks, and I hope this helps! Let me know if you have any questions.

    opened by brendongo 0
  • how to update from version1 to version2 in tensorflow

    I have an NN model in TensorFlow 1.14, but I want to update it to TensorFlow 2 and I am facing problems with how to do it. I read the TensorFlow website, but it was not useful and did not give me ideas for my code.

    # Build neural network
    net = tflearn.input_data(shape=[None, len(train_x[0])])
    net = tflearn.fully_connected(net, 8)
    net = tflearn.fully_connected(net, 8)
    net = tflearn.fully_connected(net, len(train_y[0]), activation='softmax')
    net = tflearn.regression(net)

    # Define model and setup tensorboard
    model = tflearn.DNN(net, tensorboard_dir='tflearn_logs')

    # Start training (apply gradient descent algorithm)
    model.fit(train_x, train_y, n_epoch=100, batch_size=8, show_metric=True)
    model.save('my_drive/AI_values/model/model.ckpt')

    opened by messi313 0
  • Add run on repl.it badge to README

    This pull request configures this repository to be run on Repl.it. It adds a .replit configuration file and a Repl.it badge to the README. You can read more about running repos on Repl.it here, or view the Repl here.

    opened by Syndicate-Labs 0
Releases(0.5.0)
  • 0.5.0(Nov 11, 2020)

  • 0.3.2(Jun 18, 2017)

  • 0.3.1(May 18, 2017)

    Minor changes:

    • Grouped Convolution support (depthwise conv).
    • VAE and ResNeXt Examples.
    • New optimizers.
    • New activation functions.
    • Various bug fixes.
  • 0.3.0(Feb 20, 2017)

    Major changes:

    • TensorFlow 1.0 compatibility

    Minor changes:

    • Documents refactoring.
    • Inception-ResNet-v2 Example.
    • CIFAR-100 Dataset.
    • Added time monitoring.
    • Various bug fixes.
  • 0.2.2(Aug 11, 2016)

    • Support for 3D conv ops
    • New layers: time_distributed, l2_normalize
    • RNNs support for batch norm
    • Added an option to save the best model
    • Seq2seq and Reinforcement learning examples
    • Beginner tutorial
    • Other minor changes
    • Various bug fixes
  • 0.2.1(Jun 10, 2016)

  • 0.1.0(May 31, 2016)

  • 0.2.0(May 31, 2016)

    Major changes:

    • DataFlow: A data pipeline for faster computing.
    • Data Augmentation and data preprocessing support.
    • Layers now support any custom function as parameter.
    • Basic tests.
    • Highway network architecture.
    • AUC objective function.
    • New examples.

    Minor changes:

    • Residual net fix.
    • Notebook display issues fix.
    • Datasets fix.
    • Various other bug fixes.
    • More exceptions.