Deep Learning and Reinforcement Learning Library for Scientists and Engineers 🔥

Overview

TensorLayer is a novel TensorFlow-based deep learning and reinforcement learning library designed for researchers and engineers. It provides an extensive collection of customizable neural layers for building advanced AI models quickly; on top of this, the community has open-sourced a large number of tutorials and applications. TensorLayer was awarded the 2017 Best Open Source Software award by the ACM Multimedia Society. This project can also be found at iHub and Gitee.

News

🔥 The latest version of TensorLayer will be updated on OpenI. Feel free to use it and make suggestions. We need more people to join the dev team; if you are interested, please email [email protected]

🔥 3.0.0 has been pre-released. It supports TensorFlow, MindSpore, and PaddlePaddle backends, allowing users to run the code on different hardware such as Nvidia GPU and Huawei Ascend. It will support TensorFlow, MindSpore, PaddlePaddle, and PyTorch backends in the future.

🔥 Reinforcement Learning Zoo: Low-level APIs for professional usage, High-level APIs for simple usage, and a corresponding Springer textbook

🔥 Sipeed Maxi-EMC: Run TensorLayer models on the low-cost AI chip (e.g., K210) (Alpha Version)

Design Features

TensorLayer is a new deep learning library designed with simplicity, flexibility, and high performance in mind.

  • Simplicity : TensorLayer has a high-level layer/model abstraction which is effortless to learn. You can learn how deep learning can benefit your AI tasks in minutes through the massive examples.
  • Flexibility : TensorLayer APIs are transparent and flexible, inspired by the emerging PyTorch library. Compared to the Keras abstraction, TensorLayer makes it much easier to build and train complex AI models.
  • Zero-cost Abstraction : Though simple to use, TensorLayer does not require you to make any compromise in the performance of TensorFlow (Check the following benchmark section for more details).

TensorLayer stands at a unique spot in the TensorFlow wrappers. Other wrappers like Keras and TFLearn hide many powerful features of TensorFlow and provide little support for writing custom AI models. Inspired by PyTorch, TensorLayer APIs are simple, flexible and Pythonic, making it easy to learn while being flexible enough to cope with complex AI tasks. TensorLayer has a fast-growing community. It has been used by researchers and engineers all over the world, including those from Peking University, Imperial College London, UC Berkeley, Carnegie Mellon University, Stanford University, and companies like Google, Microsoft, Alibaba, Tencent, Xiaomi, and Bloomberg.

Multilingual Documents

TensorLayer has extensive documentation for both beginners and professionals. The documentation is available in both English and Chinese.

English Documentation Chinese Documentation Chinese Book

If you want to try the experimental features on the master branch, you can find the latest document here.

Extensive Examples

You can find a large collection of examples that use TensorLayer here and in the following space:

Getting Started

TensorLayer 2.0 relies on TensorFlow, numpy, and others. To use GPUs, CUDA and cuDNN are required.

Install TensorFlow:

pip3 install tensorflow-gpu==2.0.0-rc1 # TensorFlow GPU (version 2.0 RC1)
pip3 install tensorflow # CPU version

Install the stable release of TensorLayer:

pip3 install tensorlayer

Install the unstable development version of TensorLayer:

pip3 install git+https://github.com/tensorlayer/tensorlayer.git

If you want to install the additional dependencies, you can also run

pip3 install --upgrade tensorlayer[all]              # all additional dependencies
pip3 install --upgrade tensorlayer[extra]            # only the `extra` dependencies
pip3 install --upgrade tensorlayer[contrib_loggers]  # only the `contrib_loggers` dependencies

If you are a TensorFlow 1.X user, you can use TensorLayer 1.11.0:

# For last stable version of TensorLayer 1.X
pip3 install --upgrade tensorlayer==1.11.0
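
Once installed, building a model is a matter of composing layers. The following is a minimal sketch using the TensorLayer 2.x layer/model API (argument names follow the 2.x examples; check the documentation of your release for the exact signatures):

import tensorflow as tf
import tensorlayer as tl

# Build a small MLP with the static model API
ni = tl.layers.Input([None, 784], name="input")
nn = tl.layers.Dense(n_units=800, act=tf.nn.relu, name="dense1")(ni)
nn = tl.layers.Dense(n_units=10, act=None, name="dense2")(nn)
MLP = tl.models.Model(inputs=ni, outputs=nn, name="mlp")

# Switch to evaluation mode and run a forward pass on dummy data
MLP.eval()
outputs = MLP(tf.zeros([1, 784]))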

Performance Benchmark

The following table shows the training speeds of VGG16 using TensorLayer and native TensorFlow on a TITAN Xp.

Mode       | Lib             | Data Format  | Max GPU Memory Usage (MB) | Max CPU Memory Usage (MB) | Avg CPU Memory Usage (MB) | Runtime (sec)
AutoGraph  | TensorFlow 2.0  | channel last | 11833                     | 2161                      | 2136                      | 74
AutoGraph  | TensorLayer 2.0 | channel last | 11833                     | 2187                      | 2169                      | 76
Graph      | Keras           | channel last | 8677                      | 2580                      | 2576                      | 101
Eager      | TensorFlow 2.0  | channel last | 8723                      | 2052                      | 2024                      | 97
Eager      | TensorLayer 2.0 | channel last | 8723                      | 2010                      | 2007                      | 95

Getting Involved

Please read the Contributor Guideline before submitting your PRs.

We suggest users report bugs using GitHub issues. Users can also discuss how to use TensorLayer in the following Slack channel.



Citing TensorLayer

If you find TensorLayer useful for your project, please cite the following papers:

@article{tensorlayer2017,
    author  = {Dong, Hao and Supratak, Akara and Mai, Luo and Liu, Fangde and Oehmichen, Axel and Yu, Simiao and Guo, Yike},
    journal = {ACM Multimedia},
    title   = {{TensorLayer: A Versatile Library for Efficient Deep Learning Development}},
    url     = {http://tensorlayer.org},
    year    = {2017}
}

@inproceedings{tensorlayer2021,
  title={Tensorlayer 3.0: A Deep Learning Library Compatible With Multiple Backends},
  author={Lai, Cheng and Han, Jiarong and Dong, Hao},
  booktitle={2021 IEEE International Conference on Multimedia \& Expo Workshops (ICMEW)},
  pages={1--3},
  year={2021},
  organization={IEEE}
}
Comments
  • Update utils.py and Create test_utils_predict.py

    Checklist

    • [x] I've tested that my changes are compatible with the latest version of Tensorflow.
    • [x] I've read the Contribution Guidelines
    • [x] I've updated the documentation if necessary.

    Motivation and Context

    Fix #565. In #288, the error has not been handled properly.

    Description

    np.hstack stacks data along axis=1 while np.vstack stacks along axis=0.
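
    For instance, a small NumPy illustration of the difference (not part of the PR itself):

    import numpy as np

    a = np.array([[1, 2], [3, 4]])
    b = np.array([[5, 6], [7, 8]])

    np.vstack([a, b]).shape  # (4, 2): stacked along axis=0 (rows)
    np.hstack([a, b]).shape  # (2, 4): stacked along axis=1 (columns)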

    bug 
    opened by 2wins 42
  • [Discussion] Roadmap and TensorLayer API Changes for Version 2.0

    Dear friends,

    I will try to write and summarize a few thoughts regarding the upcoming changes in TensorLayer, namely for version 2.0 🎉🎉🎉.

    Main features that will be released.

    I may forget some, if so I will update my post.

    • Logging contrib Hyperdash module
    • Graph Architecture saving
    • Distributed training with Horovod
    • Network API and features inspired by Keras and PyTorch.
    • Database for data and model life-cycle management

    TensorLayer 1.x recurring issues.

    TensorLayer has always been quite fuzzy and messy (and it's really improving 👍). The result has been an incredible number of bugs in the different Layers. Implementing one single feature often means that you partly rewrite the code for absolutely every Layer (I did it already 2 times, and I'm doing it for the 3rd time with the Network API). This is extremely dangerous, with a high risk of introducing an incredible number of bugs (I remind you that we had to release a very large number of release candidates to fix all the bugs: 1.8.6rc0, 1.8.6rc1, 1.8.6rc2, 1.8.6rc3, 1.8.6rc4, 1.8.6rc5, 1.8.6rc6).

    Every so often we find new bugs, just by reading the code:

    • Issue #572 with tl.layers.DeformableConv2d fixed (PR #573)
    • Issue #664 with tl.layers.ConvLSTMLayer fixed (PR #676)
    • Error in tl.layers.TernaryConv2d fixed - self.inputs not defined (PR #658)
    • etc ...

    Additionally, the current Layer API is slightly counter-intuitive:

    • why does layer.all_params return the params of a network and not a layer?
    • likewise, layer.all_layers is quite ironic, right?
    • with the newly introduced Graph API, when you save a Layer, you actually save the whole network.
    • layer.count_params() sends you the number of params in the graph, not inside the layer
    • etc.

    Proposition: Breaking backward compatibility with the Network API

    As TensorFlow did when releasing TF 1.0, or Django with Django 2.0 in a non-deep-learning context, very big libraries often decide to leave the good old times behind them and clean up the code base. I believe that it is not a problem to break backward compatibility if it is for the better and done very rarely.

    Here are the changes I believe would highly improve TL maintainability and clarity for TL 2.0:

    • a huge cleanup of the features of the Layer API: absolutely every feature which is a Network feature should move to the newly created Network API.
    • keep Layer features at the Layer level.
    • add Layer-related features to the Layer API (e.g. how many params are in this Layer?)

    A few words on the Network API

    I believe the Network API should NOT be mandatory in TL. It should bring additional, non-essential features.

    The following should be possible

    import tensorflow as tf
    import tensorlayer as tl
    
    tf.logging.set_verbosity(tf.logging.DEBUG)
    tl.logging.set_verbosity(tl.logging.DEBUG)
    
    x = tf.placeholder(tf.float32, shape=[None, 30])
    
    net = tl.layers.InputLayer(x, name='input')
    net = tl.layers.DenseLayer(net, n_units=10, name='dense1')
    

    However, the following functionalities should be removed from Layer and move to Network API:

    #  Graph API
    net.load() 
    net.save()
    
    net.print_layers()
    net.print_params(False)
    
    print(net.all_drop)
    print(net.all_params)
    print(net.all_layers)
    print(net.all_graphs)
    

    The list above is not exhaustive. In the same time, new functionalities can be added:

    layer.count_params()  # returns the number of params in this Layer
    layer._local_vars  # returns a list of tf.Variable inside this Layer
    

    The list above is not exhaustive
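
    As a rough illustration of the intended semantics, a hypothetical helper (not existing TL code) could count the parameters from a layer's local variables:

    import numpy as np

    # Hypothetical: given layer._local_vars (a list of tf.Variable), count the parameters
    def count_params(local_vars):
        return int(sum(np.prod(v.get_shape().as_list()) for v in local_vars))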

    Presentation of the current Network API

    It is not finalized and is subject to change. I plan on releasing two Network classes:

    Sequential (similar idea to Keras)

    For easy and sequential models, the Sequential Network API is here for rapid prototyping.

    # Model Definition
    
    model = tl.networks.Sequential(name="My_Sequential_1D_Network")  # Automatically adds the InputLayer, no need to do it
    
    model.add(tl.layers.DenseLayer(n_units=10, act=tf.nn.relu, name="seq_layer_1"))
    
    model.add(tl.layers.DenseLayer(n_units=20, act=None, name="seq_layer_2"))
    model.add(tl.layers.PReluLayer(channel_shared=True, name="prelu_layer_2"))
    
    model.add(tl.layers.DenseLayer(n_units=50, act=None, name="seq_layer_3"))
    model.add(tl.layers.PRelu6Layer(channel_shared=False, name="prelu6_layer_3"))
    
    model.add(tl.layers.DenseLayer(n_units=40, act=None, name="seq_layer_4"))
    model.add(tl.layers.PTRelu6Layer(channel_shared=True, name="ptrelu6_layer_4"))
    
    model.add(tl.layers.DenseLayer(n_units=40, act=tf.nn.relu, name="seq_layer_5"))
    model.add(tl.layers.DropoutLayer(keep=0.5, is_fix=True, name="dropout_layer_5"))
    
    # TF Graph Creation
    
    plh = tf.placeholder(tf.float16, (100, 32))
    
    train_model_output = model.compile(plh, reuse=False, is_train=True)  # returns a TF Tensor
    test_model_output = model.compile(plh, reuse=True, is_train=False)   # returns a TF Tensor
    
    # PyTorch-Like Layer Access
    
    layer_1 = model["seq_layer_1"]
    

    Custom Model API

    CustomModel/CustomNetwork/Model/Network: I haven't really decided on the name yet. This class hasn't been created yet and is subject to change.

    # Model Definition
    
    class MyCustomNetwork(CustomNetwork):    
    
        def define_network(self):  # abstract function that needs to be overwritten
        
            net_in = tl.layers.InputLayer(name="input_layer")

            net = tl.layers.DenseLayer(n_units=10, act=tf.nn.relu, name="seq_layer_1")(net_in)

            net1 = tl.layers.DenseLayer(n_units=20, act=None, name="seq_layer_2")(net)
            net1 = tl.layers.PReluLayer(channel_shared=True, name="prelu_layer_2")(net1)

            net2 = tl.layers.DenseLayer(n_units=50, act=None, name="seq_layer_3")(net)
            net2 = tl.layers.PRelu6Layer(channel_shared=False, name="prelu6_layer_3")(net2)

            net3 = tl.layers.DenseLayer(n_units=40, act=None, name="seq_layer_4")(net)
            net3 = tl.layers.PTRelu6Layer(channel_shared=True, name="ptrelu6_layer_4")(net3)

            net4 = tl.layers.DenseLayer(n_units=40, act=tf.nn.relu, name="seq_layer_5")(net)
            net4 = tl.layers.DropoutLayer(keep=0.5, is_fix=True, name="dropout_layer_5")(net4)
            
            net_stack = tl.layers.StackLayer(axis=1, name='stack')([net1, net2, net3, net4])
            
            return net_stack
            
    model = MyCustomNetwork(name="My_Custom_Network")
    
    # TF Graph Creation
    
    plh = tf.placeholder(tf.float16, (100, 32))
    
    train_model_output = model.compile(plh, reuse=False, is_train=True)  # returns a TF Tensor
    test_model_output = model.compile(plh, reuse=True, is_train=False)   # returns a TF Tensor
    
    # PyTorch-Like Layer Access
    
    layer_1 = model["seq_layer_1"]
    
    help_wanted discussion feature_request 
    opened by DEKHTIARJonathan 24
  • [WIP] Fix Issue #561 - Convolution Support NCHW

    Checklist

    • [x] I've tested that my changes are compatible with the latest version of TensorFlow.
    • [x] I've read the Contribution Guidelines
    • [ ] I've updated the documentation if necessary.

    Motivation and Context

    Issue #561

    Description

    • [x] Conv1dLayer
    • [x] Conv2dLayer
    • [ ] DeConv2dLayer
    • [x] Conv3dLayer
    • [ ] DeConv3dLayer
    • [ ] UpSampling2dLayer
    • [ ] DownSampling2dLayer
    • [ ] DeformableConv2d
    • [ ] AtrousConv1dLayer
    • [ ] AtrousConv2dLayer
    • [ ] deconv2d_bilinear_upsampling_initializer
    • [x] Conv1d
    • [x] Conv2d
    • [ ] DeConv2d
    • [ ] DeConv3d
    • [ ] DepthwiseConv2d
    • [ ] SeparableConv1d
    • [ ] SeparableConv2d
    • [ ] GroupConv2d
    enhancement feature_request awaiting_response stale 
    opened by 2wins 24
  • Get one of the previous layer by layer name (string)

    Hi all, I think we should support users getting one of the previous layers by using the layer name, as follows:

    x = tf.placeholder(tf.float32, [None, 300])
    net = InputLayer(x)
    net = DenseLayer(net, name='dense1')
    net = DenseLayer(net, name='dense2')
    
    net['dense1'].outputs  <==== this one
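
    A minimal sketch of how such name-based access could work internally (purely illustrative, not actual TL code): the network keeps a dict from layer name to layer and exposes __getitem__.

    class Network(object):
        """Illustrative only: net['dense1']-style access via a name-to-layer dict."""
        def __init__(self):
            self._layers_by_name = {}

        def add(self, layer):
            # Register a layer object that carries a .name attribute
            self._layers_by_name[layer.name] = layer
            return layer

        def __getitem__(self, name):
            return self._layers_by_name[name]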
    

    Reference

    net.tops['conv1'] = L.Convolution(bottom="image", kernel_size=3, stride=1, num_output=32, pad=1, weight_filler=dict(type='xavier'))
    net.tops['relu1'] = L.ReLU(net.tops['conv1'], in_place=True)
    net.tops['conv2'] = L.Convolution(net.tops['relu1'], kernel_size=3, stride=1, num_output=64, pad=1, weight_filler=dict(type='xavier'))
    ...
    
    net.tops['conv1'] <==
    
    help_wanted feature_request 
    opened by zsdonghao 19
  • Transparent distributed model training through TensorLayer GPU Trainer

    Checklist

    • [x] I've tested that my changes are compatible with the latest version of Tensorflow.
    • [x] I've read the Contribution Guidelines
    • [x] I've updated the documentation if necessary.

    Motivation and Context

    So far, the distributed GPU training of TensorLayer is still difficult to configure, given its dependency on the Worker and Parameter Server configuration in TensorFlow.

    In the proposed Horovod trainer, we provide a simple training interface that a user only needs to provide:

    • A model function that takes X (i.e., example, sample) and Y_ (i.e., label) as inputs, and returns a loss (i.e., cost).
    • A TensorFlow dataset that provides the training data

    The user does not change any hyper-parameters of their models and does not need to care about cluster configuration. TensorLayer and Horovod handle all the details underneath for the users.
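
    For illustration, the two user-provided pieces might look like the following sketch (build_model and the dummy data are hypothetical, not the actual Trainer API, and a TensorFlow 1.x environment is assumed, as used at the time of this PR):

    import numpy as np
    import tensorflow as tf

    # Dummy training data
    train_x = np.random.rand(1000, 784).astype(np.float32)
    train_y = np.random.randint(0, 10, size=(1000,)).astype(np.int64)

    # A model function: takes X and Y_, returns a scalar loss
    def build_model(x, y_):
        net = tf.layers.dense(x, 64, activation=tf.nn.relu)
        logits = tf.layers.dense(net, 10)
        return tf.losses.sparse_softmax_cross_entropy(labels=y_, logits=logits)

    # A TensorFlow dataset that provides the training data
    dataset = tf.data.Dataset.from_tensor_slices((train_x, train_y)).batch(32)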

    Description

    This PR adds a Horovod-based TL trainer in the distributed module, removes the deprecated worker-ps module, and adds a CLI command to help users easily configure a Linux server to support Horovod.

    Logging TODO:

    We are having two warning messages:

    • [x] https://github.com/uber/horovod/issues/300
    • [x] WARNING:tensorflow:It seems that global step (tf.train.get_global_step) has not been increased.

    The second message is produced because the validation step does not touch the optimizer, which is what increases the global step.

    API TODO:

    • [x] Add an option to direct logs to a single place that helps debugging. This single place is likely to be TensorDB in a multi-node environment.
    • [x] Add an option to disable "tfevents" if TensorBoard is not used.

    Test TODO:

    • [x] Fix doc test
    • [x] Fix RTD build

    @lgarithm @zsdonghao

    opened by luomai 18
  • 🚀🚀  Support network architecture that can be easily use

    Could you support network architectures like VGG and ResNet so that they are easy to use? If I want to use ResNet in PyTorch, I only need to write the following two lines of code:

    from torchvision import models
    model_ft = models.resnet18(pretrained=True)
    

    But in TensorLayer, I have to define the network architecture myself and then load the pretrained parameters.
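
    For reference, a sketch of the kind of one-liner being requested, in the style of the TensorLayer 2.x model zoo (availability and exact naming depend on the release):

    import tensorlayer as tl

    # Load VGG16 with pretrained weights, roughly analogous to torchvision.models
    vgg = tl.models.vgg16(pretrained=True)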

    discussion 
    opened by auroua 17
  • Issue #642 - Add `AtrousDeConv2dLayer`

    Checklist

    • [x] I've tested that my changes are compatible with the latest version of Tensorflow.
    • [x] I've read the Contribution Guidelines
    • [ ] I've updated the documentation if necessary.

    Motivation and Context

    Reflect feature request #642

    Description

    Add AtrousConv2dTransposeLayer

    enhancement feature_request 
    opened by 2wins 16
  • tutorial_bipedalwalker_a3c_continuous_action.py not convergence

    I tried to launch the demo tutorial_bipedalwalker_a3c_continuous_action.py, but I get very low convergence (best result: reward: -5.1 | running_reward: -5.6 even after 40000 iterations). In simulation it falls almost immediately. Are there some parameters to tune?

    bug help_wanted 
    opened by EnricoBeltramo 16
  • Deprecation warning fixes - Related to issue #479

    All the details related to this PR can be found in issue #479.

    Please proof-read and test the PR before merging. I hope I didn't break anything in the process ;)

    Thanks a lot,

    Jonathan

    opened by DEKHTIARJonathan 16
  • Tutorial fixing

    This PR aims to ensure that all the tutorials are working fine and nicely.

    This PR solves the following bugs:

    • Issue #476 with model saving in tutorial_word2vec
    • tf.flags changed to tf.app.flags
    • various bugs in all files fixed

    Tested Tutorials:

    • [x] tutorial_atari_pong.py
    • [x] tutorial_binarynet_cifar10_tfrecord.py
    • [x] tutorial_binarynet_mnist_cnn.py
    • [ ] tutorial_bipedalwalker_a3c_continuous_action.py => FAILING
    • [x] tutorial_cartpole_ac.py
    • [x] tutorial_cifar10.py
    • [x] tutorial_cifar10_tfrecord.py
    • [x] tutorial_dorefanet_cifar10_tfrecord.py
    • [x] tutorial_dorefanet_mnist_cnn.py
    • [x] tutorial_frozenlake_dqn.py
    • [x] tutorial_frozenlake_q_table.py
    • [x] tutorial_generate_text.py
    • [ ] tutorial_imagenet_inceptionV3_distributed.py => Not tested yet
    • [x] tutorial_image_preprocess.py
    • [x] tutorial_imdb_fasttext.py
    • [x] tutorial_inceptionV3_tfslim.py
    • [x] tutorial_keras.py
    • [x] tutorial_matrix.py
    • [x] tutorial_mlp_dropout1.py
    • [x] tutorial_mlp_dropout2.py
    • [x] tutorial_mnist.py
    • [ ] tutorial_mnist_distributed.py => Not tested yet
    • [x] tutorial_mnist_float16.py
    • [x] tutorial_mnist_simple.py
    • [x] tutorial_mobilenet.py
    • [x] tutorial_models_mobilenetv1.py
    • [x] tutorial_models_squeezenetv1.py
    • [x] tutorial_models_vgg16.py
    • [x] tutorial_ptb_lstm.py
    • [x] tutorial_ptb_lstm_state_is_tuple.py
    • [x] tutorial_squeezenet.py
    • [x] tutorial_ternaryweight_cifar10_tfrecord.py
    • [x] tutorial_ternaryweight_mnist_cnn.py
    • [x] tutorial_tfrecord.py
    • [x] tutorial_tfrecord2.py
    • [x] tutorial_tfrecord3.py
    • [x] tutorial_tfslim.py
    • [x] tutorial_tf_dataset_voc.py
    • [x] tutorial_vgg16.py
    • [x] tutorial_vgg19.py
    • [x] tutorial_word2vec_basic.py
    bug help_wanted 
    opened by DEKHTIARJonathan 15
  • If initializer is a constant, do not specify shape.

    If I want to control the weights to be initialized by a constant, the W_init arg in many layers (like DenseLayer) will report the error:

    If initializer is a constant, do not specify shape.
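
    For context, a minimal reproduction sketch of the underlying TensorFlow behaviour (assuming TensorFlow 1.x graph mode): passing a constant Tensor as the initializer together with a shape triggers this error, while tf.constant_initializer does not.

    import tensorflow as tf  # assuming TensorFlow 1.x

    # Raises: "If initializer is a constant, do not specify shape."
    # w = tf.get_variable("w", shape=[3], initializer=tf.constant([1.0, 2.0, 3.0]))

    # Works: wrap the value in constant_initializer and pass the shape separately.
    w = tf.get_variable("w", shape=[3], initializer=tf.constant_initializer(1.0))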

    help_wanted feature_request awaiting_response 
    opened by youkaichao 15
  •  module 'tensorflow' has no attribute 'placeholder'

    New Issue Checklist

    Issue Description

    [INSERT DESCRIPTION OF THE PROBLEM]

    When I do pip install tensorflow-gpu==2.0.0-rc1 I get an error: ERROR: Could not find a version that satisfies the requirement tensorflow-gpu==2.0.0-rc1 (from versions: 2.5.0, 2.5.1, 2.5.2, 2.5.3, 2.6.0, 2.6.1, 2.6.2, 2.6.3, 2.6.4, 2.6.5, 2.7.0rc0, 2.7.0rc1, 2.7.0, 2.7.1, 2.7.2, 2.7.3, 2.7.4, 2.8.0rc0, 2.8.0rc1, 2.8.0, 2.8.1, 2.8.2, 2.8.3, 2.9.0rc0, 2.9.0rc1, 2.9.0rc2, 2.9.0, 2.9.1, 2.9.2, 2.10.0rc0, 2.10.0rc1, 2.10.0rc2, 2.10.0rc3, 2.10.0) ERROR: No matching distribution found for tensorflow-gpu==2.0.0-rc1

    Upon looking at the versions I see that there is no 2.0.0 but there is a 2.10.0rc1, so I install this version. When I install this version I get the error module 'tensorflow' has no attribute 'placeholder' when trying to run the sample code below.

    Reproducible Code

    • Which OS are you using ? Linux x86_64
    • Please provide a reproducible code of your issue. Without any reproducible code, you will probably not receive any help.

    [INSERT CODE HERE]

    import tensorflow as tf
    batch_size = 1
    nw = 1
    nh = 1
    nz = 1
    t_image_good = tf.placeholder('float32', [batch_size, nw, nh, nz], name='good_image')
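
    For reference, a minimal workaround sketch under TF 2.x: tf.placeholder only exists in the v1 compatibility module, which also requires disabling eager execution.

    import tensorflow.compat.v1 as tf
    tf.disable_v2_behavior()  # placeholders only work in graph mode

    batch_size, nw, nh, nz = 1, 1, 1, 1
    t_image_good = tf.placeholder('float32', [batch_size, nw, nh, nz], name='good_image')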
    
    
    
    opened by gtm2122 1
  • Grammatical error fixed

    Grammatical error fixed

    There is a grammatical error in the "Getting Start" paragraph. It is grammatically incorrect to write "Getting Start", instead "Getting Started" could be written in the place of it.

    Thank you.

    opened by RealWorldEdits376W 0
  • Grammatical error

    Grammatical error

    There is a grammatical error in the "Getting Start" paragraph. It is grammatically incorrect to write "Getting Start", instead "Getting Started" could be written in the place of it.

    Thank you.

    opened by RealWorldEdits376W 1
  • release 2.2.5

    Checklist

    • [ ] I've tested that my changes are compatible with the latest version of Tensorflow.
    • [ ] I've read the Contribution Guidelines
    • [ ] I've updated the documentation if necessary.

    Motivation and Context

    Description

    opened by Laicheng0830 0
  • Questions about PPO

    I use PPO to make the car automatically find the way and avoid obstacles, but it didn't perform well. Similar examples use a DQN network. Why does DQN work but PPO does not?

    opened by imitatorgkw 1
  • Create README.md

    Checklist

    • [ ] I've tested that my changes are compatible with the latest version of Tensorflow.
    • [ ] I've read the Contribution Guidelines
    • [ ] I've updated the documentation if necessary.

    Motivation and Context

    Description

    opened by hanjr92 0
Releases(3.0.0-alpha)
  • 3.0.0-alpha(Jul 8, 2021)

    Dear all,

    It is our great honour to pre-release TensorLayer 3.0.0-alpha. It supports TensorFlow and MindSpore backends, and supports some PaddlePaddle operator backends, allowing users to run the code on different hardware such as Nvidia GPU and Huawei Ascend.

    In the next step, we will support TensorFlow, MindSpore, PaddlePaddle, and PyTorch backends. Feel free to use it and make suggestions.

    TensorLayer 3.0.0-alpha is a maintenance release.

    Source code(tar.gz)
    Source code(zip)
  • v2.2.4(Jan 6, 2021)

    TensorLayer 2.2.4 is a maintenance release.

    Added

    Changed

    Dependencies Update

    Deprecated

    Fixed

    • Fix batchnorm(#1104)
    • Fix recurrent(#1106)

    Removed

    Security

    Contributors

    • @zsdonghao
    • @Laicheng0830(#1104)
    • @Thinkre(#1106)
    Source code(tar.gz)
    Source code(zip)
  • 2.2.3(Jun 19, 2020)

    TensorLayer 2.2.3 is a maintenance release. It contains numerous bug fixes.

    Added

    Changed

    Dependencies Update

    Deprecated

    Fixed

    • Fix VGG. (#1078, 1079, 1089)
    • Fix norm layer. (#1080)
    • Fix DeCov2d layer. (#1081)
    • Fix ModelLayer and LayerList doc. (#1083)
    • Fix bug in SAC. (#1085)
    • Fix refactoring: Deduplication. (#1086)
    • Fix maxpool, batchnorm Data format fixed, vgg forward. (#1089)
    • Fix package info. (#1090)

    Removed

    Security

    Contributors

    • @zsdonghao
    • @tiancheng2000 (#1078 #1079 #1080 #1081)
    • @ChrisWu1997 (#1083)
    • @quantumiracle (#1085)
    • @marload (#1086)
    • @Gyx-One (#1089)
    • @Laicheng0830 (#1090)
    Source code(tar.gz)
    Source code(zip)
    tensorlayer-2.2.3.tar.gz(252.24 KB)
    tensorlayer-2.2.3-py2.py3-none-any.whl(354.77 KB)
    tensorlayer-2.2.3-py3-none-any.whl(354.77 KB)
    PKG-INFO(11.44 KB)
  • 2.2.2(Apr 26, 2020)

  • 2.2.1(Jan 14, 2020)

  • v2.2.0(Sep 13, 2019)

    TensorLayer 2.2.0 is a maintenance release. It contains numerous API improvements and bug fixes. This release is compatible with TensorFlow 2 RC1.

    Added

    • Support nested layer customization (#PR 1015)
    • Support string dtype in InputLayer (#PR 1017)
    • Support Dynamic RNN in RNN (#PR 1023)
    • Add ResNet50 static model (#PR 1030)
    • Add performance test code for static models (#PR 1041)

    Changed

    • SpatialTransform2dAffine auto in_channels
    • support TensorFlow 2.0.0-rc1
    • Update model weights property, now returns its copy (#PR 1010)

    Fixed

    • RNN updates: remove warnings, fix if seq_len=0, unitest (#PR 1033)
    • BN updates: fix BatchNorm1d for 2D data, refactored (#PR 1040)

    Dependencies Update

    Deprecated

    Fixed

    • Fix tf.models.Model._construct_graph for list of outputs, e.g. STN case (PR #1010)
    • Enable better in_channels exception raise. (PR #1015)
    • Set allow_pickle=True in np.load() (#PR 1021)
    • Remove private_method decorator (#PR 1025)
    • Copy original model's trainable_weights and nontrainable_weights when initializing ModelLayer (#PR 1026)
    • Copy original model's trainable_weights and nontrainable_weights when initializing LayerList (#PR 1029)
    • Remove redundant parts in model.all_layers (#PR 1029)
    • Replace tf.image.resize_image_with_crop_or_pad with tf.image.resize_with_crop_or_pad (#PR 1032)
    • Fix a bug in ResNet50 static model (#PR 1041)

    Removed

    Security

    Contributors

    • @zsdonghao
    • @luomai
    • @ChrisWu1997: #1010 #1015 #1025 #1030 #1040
    • @warshallrho: #1017 #1021 #1026 #1029 #1032 #1041
    • @ArnoldLIULJ: #1023
    • @JingqingZ: #1023
    Source code(tar.gz)
    Source code(zip)
  • 2.1.0(Jun 16, 2019)

    Dear All,

    Three things need to be mentioned for this release.

    • Deep Reinforcement Learning Model Zoo Release!!!
    • We are going to support more Attention models for NLP officially.
    • The model.conf is almost stable; the AIoT team from Sipeed is now working hard to support TL models on AI chips.

    Enjoy!

    TensorLayer Team

    Changed

    • Add version_info in model.config. (PR #992)
    • Replace tf.nn.func with tf.nn.func.__name__ in model config.
    • Add Reinforcement learning tutorials. (PR #995)
    • Add RNN layers with simple rnn cell, GRU cell, LSTM cell. (PR #998)
    • Update Seq2seq (#998)
    • Add Seq2seqLuongAttention model (#998)

    Contributors

    • @warshallrho:
    • @quantumiracle: #995
    • @Tokarev-TT-33: #995
    • @initial-h: #995
    • @Officium: #995
    • @ArnoldLIULJ: #998
    • @JingqingZ: #998
    Source code(tar.gz)
    Source code(zip)
    tensorlayer-2.1.0.tar.gz(251.24 KB)
    tensorlayer-2.1.0-py3-none-any.whl(344.07 KB)
    tensorlayer-2.1.0-py2.py3-none-any.whl(344.07 KB)
    PKG-INFO(12.76 KB)
  • 2.0.2(Jun 5, 2019)

    Hello, we want to tell you some GOOD NEWS. Today, AI chips are everywhere, from our phones to our cars; however, it is still hard for us to have our own AI chip. To change this, the TensorLayer team has started to work on AIoT and will soon support running TensorLayer models on the low-cost AI chip (e.g., K210) and microcontrollers (e.g., STM32). Details in the following:

    • NNoM is a high-level, layer-based neural network library specifically for microcontrollers (MCUs). Our team and the author of NNoM are working hard to make TensorLayer models run on different MCUs. Yes! Something like BinaryNet.
    • K210 is a low-cost AI chip; we are collaborating with the designers of the K210 and the Sipeed team to make TensorLayer models run on the K210 AI chip.

    If you are interested in AIoT, feel free to discuss in Slack.



    TensorLayer, Sipeed, NNoM teams

    =======

    Maintenance release, recommended to update.

    Changed

    • change the format of network config, change related code and files; change layer act (PR #980)
    • update Seq2seq (#989)

    Fixed

    • Fix dynamic model cannot track PRelu weights gradients problem (PR #982)
    • Raise .weights warning (commit)

    Contributors

    • @warshallrho: #980
    • @ArnoldLIULJ: #989
    • @1FengL: #982
    Source code(tar.gz)
    Source code(zip)
    tensorlayer-2.0.2.tar.gz(250.57 KB)
    tensorlayer-2.0.2-py3-none-any.whl(342.69 KB)
    tensorlayer-2.0.2-py2.py3-none-any.whl(342.70 KB)
    PKG-INFO(12.76 KB)
  • 2.0.1(May 17, 2019)

    Maintenance release, recommended to update.

    Changed

    • remove tl.layers.initialize_global_variables(sess) (PR #931)
    • support trainable_weights (PR #966)

    Added

    • Layer
      • InstanceNorm, InstanceNorm1d, InstanceNorm2d, InstanceNorm3d (PR #963)

    Changed

    • remove tl.layers.initialize_global_variables(sess) (PR #931)
    • change tl.layers.core, tl.models.core (PR #966)
    • change weights into all_weights, trainable_weights, nontrainable_weights

    Dependencies Update

    • nltk>=3.3,<3.4 => nltk>=3.3,<3.5 (PR #892)
    • pytest>=3.6,<3.11 => pytest>=3.6,<4.1 (PR #889)
    • yapf>=0.22,<0.25 => yapf==0.25.0 (PR #896)
    • imageio==2.5.0 progressbar2==3.39.3 scikit-learn==0.21.0 scikit-image==0.15.0 scipy==1.2.1 wrapt==1.11.1 pymongo==3.8.0 sphinx==2.0.1 wrapt==1.11.1 opencv-python==4.1.0.25 requests==2.21.0 tqdm==4.31.1 lxml==4.3.3 pycodestyle==2.5.0 sphinx==2.0.1 yapf==0.27.0(PR #967)

    Fixed

    • fix docs of models @zsdonghao #957
    • In BatchNorm, keep dimensions of mean and variance to suit channels first (PR #963)

    Contributors

    • @warshallrho: #966
    • @zsdonghao: #931
    • @yd-yin: #963
    • @dvklopfenstein: #971
    Source code(tar.gz)
    Source code(zip)
    tensorlayer-2.0.1-py3-none-any.whl(338.36 KB)
    tensorlayer-2.0.1.tar.gz(247.81 KB)
    tensorlayer-2.0.1-py2.py3-none-any.whl(338.36 KB)
    PKG-INFO(12.76 KB)
  • 2.0.0(May 4, 2019)

    Dear all,

    It is our great honour to release TensorLayer 2.0.0. In the past few months, we have refactored all layers to support TensorFlow 2.0.0-alpha0 and the dynamic mode! The new API designs allow you to customize layers easily, compared with other libraries.
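
    For example, a dynamic (eager) model can be defined by subclassing the model class, roughly as in the sketch below (based on the 2.0 examples; argument names may differ slightly between releases):

    import tensorflow as tf
    import tensorlayer as tl

    class MLP(tl.models.Model):
        def __init__(self):
            super(MLP, self).__init__()
            self.dense1 = tl.layers.Dense(n_units=800, act=tf.nn.relu, in_channels=784)
            self.dense2 = tl.layers.Dense(n_units=10, act=None, in_channels=800)

        def forward(self, x):
            x = self.dense1(x)
            return self.dense2(x)

    model = MLP()
    model.train()  # switch to training mode before feeding data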

    We would like to thank all contributors, especially our core members from Peking University and Imperial College London: @zsdonghao @JingqingZ @ChrisWu1997 @warshallrho. All contributions are listed in the following.

    In the next step, we are interested in supporting more advanced features for 3D Vision, such as PointCNN and GraphCNN. Also, we still have some remaining examples that need to be updated, such as A3C and distributed training. If you are interested in joining the development team, feel free to contact us: [email protected]

    Enjoy coding!

    TensorLayer Team

    References

    Contribution List

    All contributions can be found as follows:

    Layers

    • [x] core.py:
      • Layer:
        • [x] refactored @JingqingZ 2019/01/28
        • [x] tested @JingqingZ 2019/01/31 2019/03/06
        • [x] documentation @JingqingZ 2019/03/06
      • ModelLayer:
        • [x] created @JingqingZ 2019/01/28
        • [x] tested @JingqingZ 2019/03/06
        • [x] documentation @JingqingZ 2019/03/06
      • LayerList:
        • [x] created @JingqingZ 2019/01/28 @ChrisWu1997
        • [x] tested @JingqingZ 2019/03/06
        • [x] documentation @JingqingZ 2019/03/06
      • LayerNode:
        • [x] created @ChrisWu1997
        • [x] tested @ChrisWu1997 2019/03/22
        • [x] documentation @ChrisWu1997 2019/03/22
    • [x] activation.py:
      • PRelu:
        • [x] refactored @zsdonghao 2018/12/04 @JingqingZ 2019/03/20
        • [x] tested @JingqingZ 2019/03/20
        • [x] documentation @JingqingZ 2019/03/20
      • PRelu6:
        • [x] refactored @zsdonghao 2018/12/04 @JingqingZ 2019/03/20
        • [x] tested @JingqingZ 2019/03/20
        • [x] documentation @JingqingZ 2019/03/20
      • PTRelu6:
        • [x] refactored @zsdonghao 2018/12/04 @JingqingZ 2019/03/20
        • [x] tested @JingqingZ 2019/03/20
        • [x] documentation @JingqingZ 2019/03/20
    • convolution/
      • AtrousConv1dLayer, AtrousConv2dLayer and AtrousDeConv2d are removed, use Conv1d/2d and DeConv2d with dilation_rate instead. (🀄️remember to change CN docs)
      • BinaryConv2d:
        • [x] refactored @zsdonghao 2018/12/05
        • [x] tested @warshallrho 2019/03/16
        • [x] documentation @warshallrho 2019/03/20
      • Conv1d:
        • [x] refactored @zsdonghao 2019/01/16
        • [x] tested @warshallrho 2019/03/15
        • [x] documentation @warshallrho 2019/03/17
      • Conv2d:
        • [x] refactored @zsdonghao 2019/01/16
        • [x] tested @warshallrho 2019/03/15
        • [x] documentation @warshallrho 2019/03/17
      • Conv3d:
        • [x] add @zsdonghao 2019/01/16 : (🀄️remember to change CN docs)
        • [x] tested @warshallrho 2019/03/15
        • [x] documentation @warshallrho 2019/03/17
      • Conv1dLayer:
        • [x] refactored @zsdonghao 2018/12/05
        • [x] tested @warshallrho 2019/03/15
        • [x] documentation @warshallrho 2019/03/17
      • Conv2dLayer:
        • [x] refactored @zsdonghao 2018/12/05
        • [x] tested @warshallrho 2019/03/15
        • [x] documentation @warshallrho 2019/03/17
      • Conv3dLayer:
        • [x] refactored @zsdonghao 2018/12/05
        • [x] tested @warshallrho 2019/03/15
        • [x] documentation @warshallrho 2019/03/17
      • DeConv1dLayer:
        • [x] refactored @warshallrho 2019/03/16
        • [x] tested @warshallrho 2019/03/16
        • [x] documentation @warshallrho 2019/03/17
      • DeConv2dLayer:
        • [x] refactored @zsdonghao 2018/12/06
        • [x] tested @warshallrho 2019/03/15
        • [x] documentation @warshallrho 2019/03/17
      • DeConv3dLayer:
        • [x] refactored @zsdonghao 2018/12/06
        • [x] tested @warshallrho 2019/03/15
        • [x] documentation @warshallrho 2019/03/17
      • DeConv2d:
        • [x] refactored @zsdonghao 2019/01/16
        • [x] tested @warshallrho 2019/03/15
        • [x] documentation @warshallrho 2019/03/17
      • DeConv3d:
        • [x] refactored @zsdonghao 2019/01/16
        • [x] tested @warshallrho 2019/03/15
        • [x] documentation @warshallrho 2019/03/17
      • DeformableConv2d:
        • [x] refactored @warshallrho 2019/03/18
        • [x] tested @warshallrho 2019/03/18
        • [x] documentation @warshallrho 2019/03/18
      • DepthwiseConv2d:
        • [x] refactored @zsdonghao 2018/12/05
        • [x] tested @warshallrho 2019/03/15
        • [x] documentation @warshallrho 2019/03/18
      • DorefaConv2d:
        • [x] refactored @zsdonghao 2018/12/06
        • [x] tested @warshallrho 2019/03/17
        • [x] documentation @warshallrho 2019/03/20
      • GroupConv2d:
        • [x] refactored @zsdonghao 2018/12/06
        • [x] tested @warshallrho 2019/03/17
        • [x] documentation @warshallrho 2019/03/20
      • QuanConv2d:
        • [x] refactored @zsdonghao 2018/12/06
        • [x] tested @warshallrho 2019/03/17
        • [x] documentation @warshallrho 2019/03/20
      • QuanConv2dWithBN:
        • [ ] refactored
        • [ ] tested
        • [ ] documentation
      • SeparableConv1d:
        • [x] refactored @zsdonghao 2019/01/16
        • [x] tested @warshallrho 2019/03/17
        • [x] documentation @warshallrho 2019/03/18
      • SeparableConv2d:
        • [x] refactored @zsdonghao 2019/01/16
        • [x] tested @warshallrho 2019/03/17
        • [x] documentation @warshallrho 2019/03/18
      • SubpixelConv1d:
        • [x] refactored @zsdonghao 2018/12/05 @warshallrho 2019/03/18
        • [x] tested @warshallrho 2019/03/18
        • [x] documentation @warshallrho 2019/03/18
      • SubpixelConv2d:
        • [x] refactored @zsdonghao 2018/12/05 @warshallrho 2019/03/18
        • [x] tested @warshallrho 2019/03/18
        • [x] documentation @warshallrho 2019/03/18
      • TernaryConv2d:
        • [x] refactored @zsdonghao 2018/12/06
        • [x] tested @warshallrho 2019/03/17
        • [x] documentation @warshallrho 2019/03/20
    • dense/ [WIP] @ChrisWu1997
      • BinaryDense:
        • [x] refactored @zsdonghao 2018/12/06
        • [x] tested @ChrisWu1997 2019/04/23 need further test by example
        • [x] documentation @ChrisWu1997 2019/04/23
      • Dense:
        • [x] refactored @zsdonghao 2018/12/04 @JingqingZ 2019/01/28
        • [x] tested @JingqingZ 2019/01/31 2019/03/06 2019/03/15
        • [x] documentation @JingqingZ 2019/03/15
      • DorefaDense:
        • [x] refactored @zsdonghao 2018/12/04
        • [x] tested @ChrisWu1997 2019/04/23 need further test by example
        • [x] documentation @ChrisWu1997 2019/04/23
      • DropconnectDense:
        • [x] refactored @zsdonghao 2018/12/05
        • [x] tested @ChrisWu1997 2019/04/23 need further test by example
        • [x] documentation @ChrisWu1997 2019/04/23
      • QuanDense:
        • [x] refactored @zsdonghao 2018/12/06
        • [x] tested @ChrisWu1997 2019/04/23 need further test by example
        • [x] documentation @ChrisWu1997 2019/04/23
      • QuanDenseWithBN:
        • [ ] refactored
        • [ ] tested
        • [ ] documentation
      • TernaryDense:
        • [x] refactored @zsdonghao 2018/12/06
        • [x] tested @ChrisWu1997 2019/04/23 need further test by example
        • [x] documentation @ChrisWu1997 2019/04/23
    • dropout.py
      • Dropout:
        • [x] refactored @zsdonghao 2018/12/04 @JingqingZ 2019/01/28
        • [x] tested @JingqingZ 2019/01/31 2019/03/06 2019/03/15
        • [x] documentation @JingqingZ 2019/03/15
    • extend.py
      • ExpandDims:
        • [x] refactored @zsdonghao 2018/12/04 @JingqingZ 2019/03/22
        • [x] tested @JingqingZ 2019/03/22
        • [x] documentation @JingqingZ 2019/03/22
      • Tile:
        • [x] refactored @zsdonghao 2018/12/04 @JingqingZ 2019/03/22
        • [x] tested @JingqingZ 2019/03/22
        • [x] documentation @JingqingZ 2019/03/22
    • image_resampling.py
      • UpSampling2d:
        • [x] refactored @zsdonghao 2018/12/04 @ChrisWu1997 2019/04/03
        • [x] tested @ChrisWu1997 2019/04/03
        • [x] documentation @ChrisWu1997 2019/04/03
      • DownSampling2d:
        • [x] refactored @zsdonghao 2018/12/04 @ChrisWu1997 2019/04/03
        • [x] tested @ChrisWu1997 2019/04/03
        • [x] documentation @ChrisWu1997 2019/04/03
    • importer.py
      • SlimNets:
        • [ ] refactored
        • [ ] tested
        • [ ] documentation
      • Keras:
        • [ ] refactored
        • [ ] tested
        • [ ] documentation
    • inputs.py
      • Input:
        • [x] refactored @zsdonghao 2018/12/04 @JingqingZ 2019/01/28
        • [x] tested @JingqingZ 2019/03/06
        • [x] documentation @JingqingZ 2019/03/06
    • embedding.py
      • OneHotInput: --> OneHot (🀄️remember to change CN docs)
        • [x] refactored @zsdonghao 2018/12/04 @JingqingZ 2019/02/23
        • [x] tested @JingqingZ 2019/03/19
        • [x] documentation @JingqingZ 2019/03/19
      • Word2vecEmbeddingInput: --> Word2vecEmbedding (🀄️remember to change CN docs)
        • [x] refactored @zsdonghao 2018/12/04 @JingqingZ 2019/02/21
        • [x] tested @JingqingZ 2019/03/19
        • [x] documentation @JingqingZ 2019/03/19
      • EmbeddingInput: --> Embedding
        • [x] refactored @zsdonghao 2018/12/04 @JingqingZ 2019/02/22
        • [x] tested @JingqingZ 2019/03/19
        • [x] documentation @JingqingZ 2019/03/19
      • AverageEmbeddingInput: --> AverageEmbedding (🀄️remember to change CN docs)
        • [x] refactored @zsdonghao 2018/12/04 @JingqingZ 2019/02/20
        • [x] tested @JingqingZ 2019/03/19
        • [x] documentation @JingqingZ 2019/03/19
    • lambda_layers.py
      • ElementwiseLambda:
        • [x] refactored @JingqingZ 2019/03/24
        • [x] tested @JingqingZ 2019/03/24
        • [x] documentation @JingqingZ 2019/03/24
      • Lambda:
        • [x] refactored @JingqingZ 2019/03/24
        • [x] tested @JingqingZ 2019/03/24
        • [x] documentation @JingqingZ 2019/03/24
    • merge.py
      • Concat:
        • [x] refactored @zsdonghao 2018/12/04
        • [x] tested @JingqingZ 2019/03/15
        • [x] documentation @JingqingZ 2019/03/15
      • Elementwise:
        • [x] refactored @zsdonghao 2018/12/04 @JingqingZ 2019/03/15
        • [x] tested @JingqingZ 2019/03/15
        • [x] documentation @JingqingZ 2019/03/15
    • noise.py
      • GaussianNoise:
        • [x] refactored @zsdonghao 2018/12/04
        • [x] tested @warshallrho 2019/03/20
        • [x] documentation @warshallrho 2019/03/20
    • normalization.py
      • BatchNorm:
        • [x] refactored @ChrisWu1997 2019/01/22 @ChrisWu1997 2019/03/05
        • [x] tested @ChrisWu1997 2019/03/22
        • [x] documentation @ChrisWu1997 2019/03/22
      • BatchNorm1d:
        • [x] refactored @ChrisWu1997 2019/03/05
        • [x] tested @ChrisWu1997 2019/03/22
        • [x] documentation @ChrisWu1997 2019/03/22
      • BatchNorm2d:
        • [x] refactored @ChrisWu1997 2019/03/05
        • [x] tested @ChrisWu1997 2019/03/22
        • [x] documentation @ChrisWu1997 2019/03/22
      • BatchNorm3d:
        • [x] refactored @ChrisWu1997 2019/03/05
        • [x] tested @ChrisWu1997 2019/03/22
        • [x] documentation @ChrisWu1997 2019/03/22
      • GroupNorm:
        • [x] refactored @zsdonghao 2018/12/05
        • [ ] tested
        • [ ] documentation
      • InstanceNorm:
        • [x] refactored @zsdonghao 2018/12/05
        • [ ] tested
        • [ ] documentation
      • LayerNorm:
        • [x] refactored @ChrisWu1997 2019/01/23
        • [ ] tested
        • [ ] documentation
      • LocalResponseNorm:
        • [x] refactored @zsdonghao 2018/12/05
        • [ ] tested
        • [ ] documentation
      • SwitchNorm:
        • [x] refactored @zsdonghao 2018/12/05
        • [ ] tested
        • [ ] documentation
    • padding.py
      • PadLayer:
        • [x] refactored @zsdonghao 2018/12/04
        • [x] tested @warshallrho 2019/03/21
        • [x] documentation @warshallrho 2019/03/21
      • ZeroPad1d:
        • [x] refactored @zsdonghao 2018/12/04
        • [x] tested @warshallrho 2019/03/21
        • [x] documentation @warshallrho 2019/03/21
      • ZeroPad2d:
        • [x] refactored @zsdonghao 2018/12/04
        • [x] tested @warshallrho 2019/03/21
        • [x] documentation @warshallrho 2019/03/21
      • ZeroPad3d:
        • [x] refactored @zsdonghao 2018/12/04
        • [x] tested @warshallrho 2019/03/21
        • [x] documentation @warshallrho 2019/03/21
    • pooling/
      • MaxPool1d:
        • [x] refactored @zsdonghao 2019/01/08
        • [x] tested @warshallrho 2019/03/15
        • [x] documentation @warshallrho 2019/03/19
      • MaxPool2d:
        • [x] refactored @zsdonghao 2018/12/06
        • [x] tested @warshallrho 2019/03/15
        • [x] documentation @warshallrho 2019/03/19
      • MaxPool3d:
        • [x] refactored @zsdonghao 2019/01/08
        • [x] tested @warshallrho 2019/03/15
        • [x] documentation @warshallrho 2019/03/19
      • MeanPool1d:
        • [x] refactored @zsdonghao 2019/01/08
        • [x] tested @warshallrho 2019/03/15
        • [x] documentation @warshallrho 2019/03/19
      • MeanPool2d:
        • [x] refactored @zsdonghao 2019/01/08
        • [x] tested @warshallrho 2019/03/15
        • [x] documentation @warshallrho 2019/03/19
      • MeanPool3d:
        • [x] refactored @zsdonghao 2019/01/08
        • [x] tested @warshallrho 2019/03/15
        • [x] documentation @warshallrho 2019/03/19
      • GlobalMaxPool1d:
        • [x] refactored @zsdonghao 2018/12/06
        • [x] tested @warshallrho 2019/03/15
        • [x] documentation @warshallrho 2019/03/15
      • GlobalMaxPool2d:
        • [x] refactored @zsdonghao 2018/12/06
        • [x] tested @warshallrho 2019/03/15
        • [x] documentation @warshallrho 2019/03/15
      • GlobalMaxPool3d:
        • [x] refactored @zsdonghao 2018/12/06
        • [x] tested @warshallrho 2019/03/15
        • [x] documentation @warshallrho 2019/03/15
      • GlobalMeanPool1d:
        • [x] refactored @zsdonghao 2018/12/06
        • [x] tested @warshallrho 2019/03/15
        • [x] documentation @warshallrho 2019/03/15
      • GlobalMeanPool2d:
        • [x] refactored @zsdonghao 2018/12/06
        • [x] tested @warshallrho 2019/03/15
        • [x] documentation @warshallrho 2019/03/15
      • GlobalMeanPool3d:
        • [x] refactored @zsdonghao 2018/12/06
        • [x] tested @warshallrho 2019/03/15
        • [x] documentation @warshallrho 2019/03/15
      • PoolLayer:
        • [x] refactored @zsdonghao 2018/12/04
        • [x] tested @warshallrho 2019/03/15
        • [x] documentation @warshallrho 2019/03/18
    • quantize_layers.py
      • Sign:
        • [x] refactored
        • [ ] tested
        • [ ] documentation
    • recurrent/
      • BiRNN:
        • [x] refactored @JingqingZ 2019/04/08
        • [x] tested @JingqingZ 2019/04/08
        • [x] documentation @JingqingZ 2019/04/08
      • ConvLSTM:
        • [ ] refactored
        • [ ] tested
        • [ ] documentation
      • RNN:
        • [x] refactored @JingqingZ 2019/03/31
        • [x] tested @JingqingZ 2019/03/31
        • [x] documentation @JingqingZ 2019/03/31
      • Seq2Seq:
        • [ ] refactored
        • [ ] tested
        • [ ] documentation
    • shape.py
      • Flatten:
        • [x] refactored @zsdonghao 2018/12/04 @JingqingZ 2019/03/22
        • [x] tested @JingqingZ 2019/03/22
        • [x] documentation @JingqingZ 2019/03/22
      • Reshape:
        • [x] refactored @zsdonghao 2018/12/04 @JingqingZ 2019/03/22
        • [x] tested @JingqingZ 2019/03/22
        • [x] documentation @JingqingZ 2019/03/22
      • Transpose:
        • [x] refactored @zsdonghao 2018/12/04 @JingqingZ 2019/03/22
        • [x] tested @JingqingZ 2019/03/22
        • [x] documentation @JingqingZ 2019/03/22
    • scale.py
      • Scale:
        • [x] refactored @zsdonghao 2018/12/04 @JingqingZ 2019/03/22
        • [x] tested @JingqingZ 2019/03/22
        • [x] documentation @JingqingZ 2019/03/22
    • contrib
      • ROIPooling:
        • [ ] refactored
        • [ ] tested
        • [ ] documentation
    • spatial_transformer.py
      • SpatialTransformer2dAffine: see test_layers_spatial_transformer.py
        • [ ] refactored
        • [ ] tested
        • [ ] documentation
    • stack.py [WIP] @ChrisWu1997
      • Stack:
        • [x] refactored @zsdonghao 2018/12/04
        • [x] tested @ChrisWu1997 2019/04/23
        • [x] documentation @ChrisWu1997 2019/04/23
      • UnStack:
        • [x] refactored @zsdonghao 2018/12/04
        • [x] tested @ChrisWu1997 2019/04/23
        • [x] documentation @ChrisWu1997 2019/04/23

    tl.models

    • core.py
      • Model:
        • [x] refactored @JingqingZ 2019/01/28 @ChrisWu1997 2019/02/16 2019/02/22
        • [x] tested @ChrisWu1997 2019/03/21
        • [x] documentation @ChrisWu1997 2019/03/21
    • vgg.py
      • vgg:
        • [x] refactored @warshallrho 2019/02/19
        • [ ] tested
        • [x] documentation @warshallrho 2019/03/21 @ChrisWu1997 2019/03/21
      • vgg16:
        • [x] refactored @warshallrho 2019/02/19
        • [ ] tested
        • [x] documentation @warshallrho 2019/03/21 @ChrisWu1997 2019/03/21
      • vgg19:
        • [x] refactored @warshallrho 2019/03/09
        • [ ] tested
        • [x] documentation @warshallrho 2019/03/21 @ChrisWu1997 2019/03/21
    • mobilenetv1.py
      • MobileNet:
        • [x] refactored @ChrisWu1997 2019/04/23
        • [x] tested @ChrisWu1997 2019/04/23
        • [x] documentation @ChrisWu1997 2019/04/23
      • SqueezeNet:
        • [x] refactored @ChrisWu1997 2019/04/23
        • [x] tested @ChrisWu1997 2019/04/23
        • [x] documentation @ChrisWu1997 2019/04/23

    Examples

    • basic_tutorials Too many basic tutorials, some codes can be removed.
      • [x] Static model example MNIST @JingqingZ 2019/01/28 2019/03/24
      • [x] Dynamic model example MNIST @JingqingZ 2019/01/28 2019/03/24
      • [x] Static model example CIFAR10 (with dataset API) @ChrisWu1997 2019/03/24
      • [x] Siamese example MNIST @ChrisWu1997 2019/03/26
      • tutorial_mnist_float16.py removed by @ChrisWu1997
      • tutorial_mnist_simple.py removed by @ChrisWu1997
    • data_process
      • tutorial_fast_affine_transform.py
        • [x] refactored @ChrisWu1997 2019/04/11
        • [x] tested @ChrisWu1997 2019/04/11
      • tutorial_image_preprocess.py removed by @zsdonghao
      • tutorial_tf_dataset_voc.py
        • [x] refactored @ChrisWu1997 2019/04/11
        • [x] tested @ChrisWu1997 2019/04/11
      • tutorial_tfrecord.py
        • [x] refactored @ChrisWu1997 2019/04/11
        • [x] tested @ChrisWu1997 2019/04/11
      • tutorial_tfrecord2.py
        • [x] refactored @ChrisWu1997 2019/04/11
        • [x] tested @ChrisWu1997 2019/04/11
      • tutorial_tfrecord3.py
        • [ ] refactored
        • [ ] tested
    • database
      • [ ] refactored
      • [ ] tested
    • distributed_training
      • tutorial_cifar10_distributed_trainer.py
        • [ ] refactored
        • [ ] tested
      • tutorial_mnist_distributed_trainer.py
        • [ ] refactored
        • [ ] tested
    • keras_tfslim
      • tutorial_keras.py
        • [x] refactored @ChrisWu1997 2019/04/11
        • [x] tested @ChrisWu1997 2019/04/11
      • tutorial_tfslim.py removed by @ChrisWu1997
    • pretrained_cnn
      • tutorial_inceptionV3_tfslim.py
      • tutorial_mobilenet.py removed by @ChrisWu1997 2019/04/23
      • tutorial_models_mobilenetv1.py
        • [x] refactored @ChrisWu1997 2019/04/23
        • [x] tested @ChrisWu1997 2019/04/23
      • tutorial_models_squeezenetv1.py
        • [x] refactored @ChrisWu1997 2019/04/23
        • [x] tested @ChrisWu1997 2019/04/23
      • tutorial_models_vgg.py
        • [x] refactored @warshallrho 2019/04/30
        • [ ] tested
      • tutorial_models_vgg_static.py
        • [x] refactored @warshallrho 2019/04/30
        • [ ] tested
      • tutorial_models_vgg16.py
        • [x] refactored @warshallrho 2019/02/19
        • [ ] tested
      • tutorial_models_vgg19.py
        • [x] refactored @warshallrho 2019/03/09
        • [ ] tested
      • tutorial_squeezenet.py removed by @ChrisWu1997 2019/04/23
      • tutorial_vgg16.py removed by @warshallrho 2019/04/30
      • tutorial_vgg19.py removed by @warshallrho 2019/04/30
    • quantized_net
      • tutorial_binarynet_cifar10_tfrecord.py
        • [x] refactored
        • [x] tested
      • tutorial_binarynet_mnist_cnn.py
        • [x] refactored
        • [x] tested
      • tutorial_dorefanet_cifar10_tfrecord.py
        • [x] refactored
        • [x] tested
      • tutorial_dorefanet_mnist_cnn.py
        • [x] refactored
        • [x] tested
      • tutorial_quanconv_cifar10.py
        • [x] refactored
        • [x] tested
      • tutorial_quanconv_mnist.py
        • [x] refactored
        • [x] tested
      • tutorial_ternaryweight_cifar10_tfrecord.py
        • [x] refactored
        • [x] tested
      • tutorial_ternaryweight_mnist_cnn.py
        • [x] refactored
        • [x] tested
    • reinforcement_learning
      • tutorial_atari_pong.py @zsdonghao 2019/01/21
        • [x] refactored
        • [x] tested
      • tutorial_bipedalwalker_a3c_continuous_action.py
        • [ ] refactored
        • [ ] tested
      • tutorial_cartpole_ac.py @zsdonghao 2019/02/17
        • [x] refactored
        • [x] tested
      • tutorial_frozenlake_dqn.py @zsdonghao 2019/02/16
        • [x] refactored
        • [x] tested
      • tutorial_frozenlake_q_table.py @zsdonghao 2019/02/16
        • [x] refactored
        • [x] tested
    • text_classification
      • tutorial_imdb_fasttext.py @JingqingZ 2019/03/14
        • [x] refactored
        • [x] tested
    • text_generation
      • tutorial_generate_text.py
        • [ ] refactored
        • [ ] tested
    • text_ptb Are they duplicated?
      • tutorial_ptb_lstm_state_is_tuple.py
        • [ ] refactored
        • [ ] tested
      • tutorial_ptb_lstm.py
        • [ ] refactored
        • [ ] tested
    • text_word_embedding
      • tutorial_word2vec_basic.py @JingqingZ 2019/02/21 2019/03/19
        • [x] refactored
        • [x] tested

    Others

    • tl.activation.py
      • [x] refactored @JingqingZ 2019/03/06
      • [x] tested @JingqingZ 2019/03/06
      • [x] documentation @JingqingZ 2019/03/06
    • tl.cli
      • [x] refactored no update needed @ChrisWu1997 2019/04/12
    • tl.decorators
      • [x] refactored no update needed @ChrisWu1997 2019/04/12
    • tl.logging
      • [x] refactored no update needed @ChrisWu1997 2019/04/12
    • tl.optimizers
      • [ ] refactored
    • tl.third_party
      • [ ] refactored
    • tl.array_ops
      • [x] refactored no update needed @ChrisWu1997 2019/04/12
    • tl.cost
      • [x] refactored @ChrisWu1997 2019/04/12
      • [x] documentation @ChrisWu1997 2019/04/12
    • tl.db [WIP] @ChrisWu1997
      • [ ] refactored
    • tl.distributed
      • [ ] refactored
    • tl.initializers
      • [x] refactored @ChrisWu1997 2019/04/12
      • [x] tested @ChrisWu1997 2019/04/12
      • [x] documentation @ChrisWu1997 2019/04/12
    • tl.iterate
      • [x] refactored no update needed @ChrisWu1997 2019/04/12
    • tl.lazy_imports
      • [x] refactored no update needed @ChrisWu1997 2019/04/12
    • tl.nlp @OliverZijia @JingqingZ
      • [x] refactored
    • tl.package_info
      • [ ] refactored
    • tl.prepro
      • [x] refactored @ChrisWu1997 2019/04/11
    • tl.rein
      • [ ] refactored
    • tl.utils
      • [x] refactored @ChrisWu1997 2019/04/17
      • [x] tested by tutorial_mnist_simple.py @ChrisWu1997 2019/04/17
      • [x] documentation @ChrisWu1997 2019/04/17
    • tl.visualize
      • [x] refactored no update needed @ChrisWu1997 2019/04/12

    Unittests Status:

    • performance_test
      • VGG @JingqingZ @ChrisWu1997 @warshallrho 2019/03/20
    • layers
      • test_layernode.py @ChrisWu1997 2019/03/22
      • test_layers_activation.py @JingqingZ 2019/03/20
      • test_layers_convolution.py (1d, 2d, 3d) @warshallrho 2019/03/20
      • test_layers_core_basedense_dropout.py @JingqingZ 2019/03/06
      • test_layers_convolution_deformable.py @warshallrho 2019/03/18
      • test_layers_embedding.py @JingqingZ 2019/03/19
      • test_layers_extend.py @JingqingZ 2019/03/22
      • test_layers_lambda.py @JingqingZ 2019/03/24
      • test_layers_merge.py @JingqingZ 2019/03/15
      • test_layers_noise.py @warshallrho 2019/03/21
      • test_layers_padding.py @warshallrho 2019/03/21
      • test_layers_pooling.py @warshallrho 2019/03/18
      • test_layers_recurrent.py @JingqingZ 2019/03/06
      • test_layers_scale.py @JingqingZ 2019/03/22
      • test_layers_shape.py @JingqingZ 2019/03/22
    • test_activations.py @JingqingZ 2019/03/06
    • models
      • test_model_save_graph.py @warshallrho 2019/04/30

    Unittests Status (Pending):

    Some of these test files can be removed.

    • test_array_ops.py
    • test_decorators.py
    • test_documentation.py
    • test_layers_basic.py
    • test_layers_flow_control.py removed in favour of eager mode @zsdonghao 2018/12/04 (🀄️remember to change CN docs)
    • test_layers_importer.py
    • test_layers_normalization.py
    • test_layers_padding.py
    • test_layers_spatial_transformer.py
    • test_layers_stack.py
    • test_layers_super_resolution.py
    • test_layers_time_distributed.py
    • test_logging.py
    • test_logging_hyperdash.py
    • test_mnist_simple.py
    • test_model_compilednetwork.py
    • test_models.py
    • test_network_custom_2d.py
    • test_network_custom_input_layers.py
    • test_network_custom_multiple_inputs.py
    • test_network_custom_multiple_outputs.py
    • test_network_sequential_1d.py
    • test_network_sequential_2d.py
    • test_network_sequential_3d.py
    • test_network_sequential_rnn.py
    • test_optimizer_amsgrad.py
    • test_pydocstyle.py
    • test_reuse_mlp.py
    • test_tf_layers.py
    • test_timeout.py
    • test_utils_predict.py
    • test_yapf_format.py

    tl.files

    All save/load methods are also wrapped as class methods in the model core; a usage sketch follows the list below.

    • save_hdf5_graph
      • [x] created @warshallrho 2019/04/27
      • [x] tested @warshallrho 2019/04/27
      • [x] documentation @warshallrho 2019/04/27
    • load_hdf5_graph
      • [x] created @warshallrho 2019/04/27
      • [x] tested @warshallrho 2019/04/27
      • [x] documentation @warshallrho 2019/04/27
    • save_weights_to_hdf5
      • [x] created
      • [x] tested @ChrisWu1997 2019/03/26
      • [x] documentation @ChrisWu1997 2019/03/26
    • load_hdf5_to_weights_in_order
      • [x] created
      • [x] tested @ChrisWu1997 2019/03/26
      • [x] documentation @ChrisWu1997 2019/03/26
    • load_hdf5_to_weights
      • [x] created
      • [x] tested @ChrisWu1997 2019/03/26
      • [x] documentation @ChrisWu1997 2019/03/26
    • save_npz([save_list, name, sess]) @ChrisWu1997 2019/02/21 --> save_npz([save_list, name]) @ChrisWu1997 2019/03/21
      • [x] refactored
      • [x] tested @ChrisWu1997 2019/03/26
      • [x] documentation @ChrisWu1997 2019/03/26
    • load_npz([path, name]) @ChrisWu1997 2019/02/21
      • [x] refactored
      • [x] tested @ChrisWu1997 2019/03/26
      • [x] documentation @ChrisWu1997 2019/03/26
    • assign_params(sess, params, network) --> assign_weights (🀄️remember to change CN docs) @ChrisWu1997 2019/02/22
      • [x] refactored
      • [ ] tested
    • load_and_assign_npz([sess, name, network]) @ChrisWu1997 2019/02/21 --> load_and_assign_npz([name, network]) @ChrisWu1997 2019/03/21
      • [x] refactored
      • [x] tested @ChrisWu1997 2019/03/26
      • [x] documentation @ChrisWu1997 2019/03/26
    • save_npz_dict([save_list, name, sess]) @ChrisWu1997 2019/02/22 --> save_npz_dict([save_list, name]) @ChrisWu1997 2019/03/21
      • [x] refactored
      • [x] tested @ChrisWu1997 2019/03/26
      • [x] documentation @ChrisWu1997 2019/03/26
    • load_and_assign_npz_dict([name, sess]) --> ([name, network]) @ChrisWu1997 2019/03/21
      • [x] refactored
      • [x] tested @ChrisWu1997 2019/03/26
      • [x] documentation @ChrisWu1997 2019/03/26
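    As a rough usage sketch of the refactored, session-free workflow (assuming the TL 2.0 static Model API; the two-layer MLP, layer arguments and file names below are illustrative placeholders, and the tl.files calls follow the signatures listed above):

      import tensorflow as tf
      import tensorlayer as tl
      from tensorlayer.layers import Dense, Input
      from tensorlayer.models import Model

      # A small placeholder MLP, just so there are some weights to save and restore.
      ni = Input([None, 784])
      nn = Dense(n_units=800, act=tf.nn.relu, name="dense1")(ni)
      nn = Dense(n_units=10, name="dense2")(nn)
      M = Model(inputs=ni, outputs=nn, name="mlp")

      # Weights only, using the session-free signatures listed above.
      tl.files.save_npz(M.all_weights, name="model.npz")   # save_npz([save_list, name])
      weights = tl.files.load_npz(name="model.npz")        # load_npz([path, name])
      tl.files.assign_weights(weights, M)                   # assign_params --> assign_weights

      # The same operations are also exposed as class methods on the model core,
      # e.g. M.save_weights("model.h5") / M.load_weights("model.h5"), which rely on
      # the HDF5 helpers (save_weights_to_hdf5, load_hdf5_to_weights) listed above.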
    Source code(tar.gz)
    Source code(zip)
    tensorlayer-2.0.0.tar.gz(247.46 KB)
    tensorlayer-2.0.0-py3-none-any.whl(338.18 KB)
    tensorlayer-2.0.0-py2.py3-none-any.whl(338.19 KB)
    PKG-INFO(12.76 KB)
  • 1.11.1(Nov 15, 2018)

    This is a maintenance release. All users are advised to update.

    Changed

    • guide for pose estimation - flipping (PR #884)
    • cv2 transform support 2 modes (PR #885)

    Dependencies Update

    • pytest>=3.6,<3.9 => pytest>=3.6,<3.10 (PR #874)
    • requests>=2.19,<2.20 => requests>=2.19,<2.21 (PR #874)
    • tqdm>=4.23,<4.28 => tqdm>=4.23,<4.29 (PR #878)
    • pytest>=3.6,<3.10 => pytest>=3.6,<3.11 (PR #886)
    • pytest-xdist>=1.22,<1.24 => pytest-xdist>=1.22,<1.25 (PR #883)
    • tensorflow>=1.6,<1.12 => tensorflow>=1.6,<1.13 (PR #886)
    Source code(tar.gz)
    Source code(zip)
    tensorlayer-1.11.1.tar.gz(224.54 KB)
    tensorlayer-1.11.1-py2.py3-none-any.whl(308.85 KB)
    tensorlayer-1.11.1-py3-none-any.whl(308.85 KB)
    PKG-INFO(12.77 KB)
  • 1.11.0rc0(Oct 15, 2018)

    This release provides a high-performance image augmentation API based on affine transformations. It has been shown to offer an 80x speed-up when augmenting images in the openpose-plus project; a usage sketch follows the API list below.

    Added

    • Layer:
      • Release GroupNormLayer (PR #850)
    • Image affine transformation APIs
      • affine_rotation_matrix (PR #857)
      • affine_horizontal_flip_matrix (PR #857)
      • affine_vertical_flip_matrix (PR #857)
      • affine_shift_matrix (PR #857)
      • affine_shear_matrix (PR #857)
      • affine_zoom_matrix (PR #857)
      • affine_transform_cv2 (PR #857)
      • affine_transform_keypoints (PR #857)
    • Affine transformation tutorial
      • examples/data_process/tutorial_fast_affine_transform.py (PR #857)
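    As a quick sketch of the matrix-based design (following examples/data_process/tutorial_fast_affine_transform.py; the image path and parameter values are placeholders, and the exact argument names should be treated as assumptions), several elementary affine matrices are composed into one transform, which is then applied to the image only once:

      import tensorlayer as tl

      image = tl.vis.read_image('tiger.jpeg')      # placeholder image path
      h, w = image.shape[0], image.shape[1]

      # Compose several elementary affine matrices into a single transform.
      M_rotate = tl.prepro.affine_rotation_matrix(angle=20)
      M_flip = tl.prepro.affine_horizontal_flip_matrix(prob=1)
      M_zoom = tl.prepro.affine_zoom_matrix(zoom_range=0.8)
      M_combined = M_zoom.dot(M_flip).dot(M_rotate)

      # Re-centre the transform on the image, then warp the image once with OpenCV.
      transform_matrix = tl.prepro.transform_matrix_offset_center(M_combined, x=w, y=h)
      result = tl.prepro.affine_transform_cv2(image, transform_matrix)
      # Keypoint coordinates can be warped with the same matrix via affine_transform_keypoints(...).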

    Changed

    • BatchNormLayer: support data_format

    Dependencies Update

    • yapf>=0.22,<0.24 => yapf>=0.22,<0.25 (PR #829)
    • sphinx>=1.7,<1.8 => sphinx>=1.7,<1.9 (PR #842)
    • matplotlib>=2.2,<2.3 => matplotlib>=2.2,<3.1 (PR #845)
    • scikit-learn>=0.19,<0.20 => scikit-learn>=0.19,<0.21 (PR #851)
    • tensorflow>=1.6,<1.11 => tensorflow>=1.6,<1.12 (PR #853)
    • tqdm>=4.23,<4.26 => tqdm>=4.23,<4.27 (PR #862)
    • pydocstyle>=2.1,<2.2 => pydocstyle>=2.1,<3.1 (PR #866)

    Deprecated

    Fixed

    • Correct offset calculation in tl.prepro.transform_matrix_offset_center (PR #855)

    Removed

    Security

    Contributors

    • @2wins: #850 #855
    • @DEKHTIARJonathan: #853
    • @zsdonghao: #857
    • @luomai: #857
    Source code(tar.gz)
    Source code(zip)
    tensorlayer-1.11.0rc0.tar.gz(224.28 KB)
    tensorlayer-1.11.0rc0-py3-none-any.whl(308.60 KB)
    tensorlayer-1.11.0rc0-py2.py3-none-any.whl(308.60 KB)
    PKG-INFO(12.77 KB)
  • 1.10.1(Sep 7, 2018)

    Important Notice

    TensorLayer 1.10.x will be the last supported version of TL 1.X; big changes are upcoming and will not preserve backward compatibility. TensorLayer 1.10.x will only receive bugfixes for existing features; no additional features will be implemented in TL 1.10.x.

    Changelog

    Added

    • unittest tests\test_timeout.py has been added to ensure the network creation process does not freeze.

    Changed

    • removed the 'tensorboard' param in tensorlayer/utils.py, replaced by 'tensorboard_dir' for a customizable TensorBoard directory (PR #819); see the sketch below
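    A minimal sketch of the new argument, adapted from the MNIST tutorial (the surrounding training code and the exact argument list of tl.utils.fit are an approximation; only tensorboard_dir is the part that changed in this release):

      import tensorflow as tf
      import tensorlayer as tl

      X_train, y_train, X_val, y_val, X_test, y_test = tl.files.load_mnist_dataset(shape=(-1, 784))

      sess = tf.InteractiveSession()
      x = tf.placeholder(tf.float32, shape=[None, 784], name='x')
      y_ = tf.placeholder(tf.int64, shape=[None], name='y_')

      # A small MLP as in tutorial_mnist_simple.py.
      network = tl.layers.InputLayer(x, name='input')
      network = tl.layers.DenseLayer(network, n_units=800, act=tf.nn.relu, name='relu1')
      network = tl.layers.DenseLayer(network, n_units=10, name='output')

      cost = tl.cost.cross_entropy(network.outputs, y_, name='cost')
      acc = tf.reduce_mean(tf.cast(tf.equal(tf.argmax(network.outputs, 1), y_), tf.float32))
      train_op = tf.train.AdamOptimizer(learning_rate=0.0001).minimize(cost)

      tl.layers.initialize_global_variables(sess)

      tl.utils.fit(
          sess, network, train_op, cost, X_train, y_train, x, y_,
          acc=acc, batch_size=500, n_epoch=10, print_freq=1,
          X_val=X_val, y_val=y_val, eval_train=False,
          tensorboard_dir='/tmp/tensorlayer_logs',   # replaces the old tensorboard=True flag (PR #819)
      )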

    Removed

    • TL Graph API removed due to memory-leak issues; it will be fixed and integrated in TL 2.0 (PR #818)

    Fixed

    • Issue #817 fixed: TL 1.10.0 - Memory Leaks and very slow network creation.

    Dependencies Update

    • autopep8>=1.3,<1.4 => autopep8>=1.3,<1.5 (PR #815)
    • pytest-cov>=2.5,<2.6 => pytest-cov>=2.5,<2.7 (PR #820)
    • pytest>=3.6,<3.8 => pytest>=3.6,<3.9 (PR #823)
    • imageio>=2.3,<2.4 => imageio>=2.3,<2.5 (PR #823)

    Contributors

    • @DEKHTIARJonathan: #815 #818 #820 #823
    • @ndiy: #819
    • @zsdonghao: #818
    Source code(tar.gz)
    Source code(zip)
    tensorlayer-1.10.1.tar.gz(217.15 KB)
    tensorlayer-1.10.1-py3-none-any.whl(301.66 KB)
    tensorlayer-1.10.1-py2.py3-none-any.whl(301.66 KB)
    PKG-INFO(12.64 KB)
  • 1.10.1rc0(Sep 5, 2018)

    Changelog

    Added

    • unittest tests\test_timeout.py has been added to ensure the network creation process does not freeze.

    Changed

    • removed the 'tensorboard' param in tensorlayer/utils.py, replaced by 'tensorboard_dir' for a customizable TensorBoard directory (PR #819)

    Removed

    • TL Graph API removed due to memory-leak issues with this API; it will be fixed and integrated in TL 2.0 (PR #818)

    Fixed

    • Issue #817 fixed: TL 1.10.0 - Memory Leaks and very slow network creation.

    Dependencies Update

    • autopep8>=1.3,<1.4 => autopep8>=1.3,<1.5 (PR #815)
    • pytest-cov>=2.5,<2.6 => pytest-cov>=2.5,<2.7 (PR #820)
    • pytest>=3.6,<3.8 => pytest>=3.6,<3.9 (PR #823)
    • imageio>=2.3,<2.4 => imageio>=2.3,<2.5 (PR #823)

    Contributors

    • @DEKHTIARJonathan: #815 #818 #820 #823
    • @ndiy: #819
    • @zsdonghao: #818
    Source code(tar.gz)
    Source code(zip)
    tensorlayer-1.10.1rc0.tar.gz(217.20 KB)
    tensorlayer-1.10.1rc0-py3-none-any.whl(301.72 KB)
    tensorlayer-1.10.1rc0-py2.py3-none-any.whl(301.72 KB)
    PKG-INFO(12.68 KB)
  • 1.10.0(Sep 1, 2018)

    Important Notice

    This release contains a memory leak issue.

    Release Note

    It has been a very busy summer for the TensorLayer team. In this version, we start to support:

    • query and modify a neural network through an intuitive graph API;
    • transparently scale out your single-GPU training job onto multiple GPUs on a single server, or onto multiple servers, using a high-performance trainer module. The trainer is backed by the high-performance and scalable Horovod library, see examples here;
    • reduce the memory usage of a neural network and even accelerate it using many advanced Network Quantization Layers;
    • add more pre-trained models in our model module.

    Most importantly, we have decided to open-source a series of neural network applications that have been used by practitioners. The first batch includes:

    • adaptive style transfer, which allows you to do almost any kind of style transfer without compromising performance.
    • flexible openpose, which allows you to deeply customize your openpose network based on the actual data shapes, accuracy requirements, memory constraints and inference-speed targets.
    • super resolution, which allows you to apply this fantastic technique to medical imaging and many other important fields.

    At the same time, as a note ahead: we are working very hard towards the TensorLayer 2.0 release in order to synchronize with the upcoming TensorFlow 2.0.

    Enjoy this release, and we would love your feedback!

    Added

    • API:
      • Add tl.model.vgg19 (PR #698)
      • Add tl.logging.contrib.hyperdash (PR #739)
      • Add tl.distributed.trainer (PR #700)
      • Add prefetch_buffer_size to the tl.distributed.Trainer (PR #766)
      • Add tl.db.TensorHub (PR #751)
      • Add tl.files.save_graph (PR #751)
      • Add tl.files.load_graph_ (PR #751)
      • Add tl.files.save_graph_and_params (PR #751)
      • Add tl.files.load_graph_and_params (PR #751)
      • Add tl.prepro.keypoint_random_xxx (PR #787)
    • Documentation:
      • Add binary, ternary and dorefa links (PR #711)
      • Update input scale of VGG16 and VGG19 to 0~1 (PR #736)
      • Update database (PR #751)
    • Layer:
      • Release SwitchNormLayer (PR #737)
      • Release QuanConv2d, QuanConv2dWithBN, QuanDenseLayer, QuanDenseLayerWithBN (PR#735)
      • Update Core Layer to support graph (PR #751)
      • All Pooling layers support data_format (PR #809)
    • Setup:
      • Creation of installation flags all_dev, all_cpu_dev, and all_gpu_dev (PR #739)
    • Examples:
      • changed the folder structure (PR #802)
      • tutorial_models_vgg19 has been introduced to show how to use tl.model.vgg19 (PR #698).
      • fixed a bug in tutorial_bipedalwalker_a3c_continuous_action.py (PR #734, Issue #732)
      • tutorial_models_vgg16 and tutorial_models_vgg19 have had their input scale changed from [0,255] to [0,1] (PR #710)
      • tutorial_mnist_distributed_trainer.py and tutorial_cifar10_distributed_trainer.py are added to explain the use of the distributed Trainer (PR #700)
      • added tutorial_quanconv_cifar10.py and tutorial_quanconv_mnist.py (PR #735)
      • added tutorial_work_with_onnx.py (PR #775)
    • Applications:

    Changed

    • the minibatches function was changed to avoid wasting samples (PR #762)
    • the input scale in both vgg16 and vgg19 has been changed from [0,255] to [0,1] (PR #710)
    • Dockerfiles merged and refactored into one file (PR #747)
    • LazyImports moved to the top-level imports wherever possible (PR #739)
    • some new test functions have been added in test_layers_convolution.py, test_layers_normalization.py, test_layers_core.py (PR #735)
    • the documentation now uses mock imports, reducing the number of dependencies needed to compile it (PR #785)
    • fixed and enforced pydocstyle D210, D200, D301, D207, D403, D204, D412, D402, D300, D208 (PR #784)

    Deprecated

    • tl.logging.warn has been deprecated in favor of tl.logging.warning (PR #739)

    Removed

    • conv_layers() has been removed from both vgg16 and vgg19 (PR #710)

    Fixed

    • import error caused by matplotlib on OSX (PR #705)
    • missing import in tl.prepro (PR #712)
    • Dockerfiles import error fixed - issue #733 (PR #747)
    • Fix a typo in absolute_difference_error in file: tensorlayer/cost.py - Issue #753 (PR #759)
    • Fix the bug of scaling the learning rate of trainer (PR #776)
    • log an error instead of info when an npz file is not found (PR #812)

    Security

    Dependencies Update

    • tensorflow>=1.8,<1.9 => tensorflow>=1.6,<1.11 (PR #739 and PR #798)
    • tensorflow-gpu>=1.8,<1.9 => tensorflow-gpu>=1.6,<1.11 (PR #739 and PR #798)
    • numpy>=1.14,<1.15 => numpy>=1.14,<1.16 (PR #754)
    • pymongo>=3.6,<3.7 => pymongo>=3.6,<3.8 (PR #750)
    • pytest>=3.6,<3.7 => pytest>=3.6,<3.8 (PR #798)
    • pytest-xdist>=1.22,<1.23 => pytest-xdist>=1.22,<1.24 (PR #805 and #806)
    • tqdm>=4.23,<4.25 => tqdm>=4.23,<4.26 (PR #798)
    • yapf>=0.21,<0.22 => yapf>=0.22,<0.24 (PR #798 #808)

    Contributors

    • @DEKHTIARJonathan: #739 #747 #750 #754
    • @lgarithm: #705 #700
    • @OwenLiuzZ: #698 #710 #775 #776
    • @zsdonghao: #711 #712 #734 #736 #737 #700 #751 #809
    • @luomai: #700 #751 #766 #802
    • @XJTUWYD: #735
    • @mutewall: #735
    • @thangvubk: #759
    • @JunbinWang: #796
    • @boldjoel: #787
    Source code(tar.gz)
    Source code(zip)
    tensorlayer-1.10.0.tar.gz(194.89 KB)
    tensorlayer-1.10.0-py3-none-any.whl(261.97 KB)
    tensorlayer-1.10.0-py2.py3-none-any.whl(261.98 KB)
    PKG-INFO(12.67 KB)
  • 1.9.1(Jul 30, 2018)

  • 1.9.0(Jun 16, 2018)

    Release Note

    This version was supposed to be released as 1.8.6, but due to the large number of changes introduced, it has been decided to release it as 1.9.0.

    Changelog

    Added

    • API:
      • tl.alphas and tl.alphas_like added following the tf.ones/zeros and tf.zeros_like/ones_like (PR #580)
      • tl.lazy_imports.LazyImport to import heavy libraries only when necessary (PR #667); a short usage sketch of tl.alphas and LazyImport follows this list
      • tl.act.leaky_relu6 and tl.layers.PRelu6Layer added (PR #686)
      • tl.act.leaky_twice_relu6 and tl.layers.PTRelu6Layer added (PR #686)
    • CI Tool:
      • Stale Probot added to clean stale issues (PR #573)
      • Changelog Probot Configuration added (PR #637)
      • Travis Builds now handling a matrix of TF Version from TF==1.6.0 to TF==1.8.0 (PR #644)
      • CircleCI added to build and upload Docker Containers for each PR merged and tag release (PR #648)
    • Decorator:
      • tl.decorators API created including deprecated_alias and private_method (PR #660)
      • tl.decorators API enriched with protected_method (PR #675)
      • tl.decorators API enriched with deprecated directly raising warning and modifying documentation (PR #691)
    • Docker:
      • Containers for each release and for each PR merged on master built (PR #648)
      • Containers built in the following configurations (PR #648):
        • py2 + cpu
        • py2 + gpu
        • py3 + cpu
        • py3 + gpu
    • Documentation:
      • Clean README.md (PR #677)
      • Release semantic version added on index page (PR #633)
      • Optimizers page added (PR #636)
      • AMSGrad added on Optimizers page added (PR #636)
    • Layer:
      • ElementwiseLambdaLayer added to use custom function to connect multiple layer inputs (PR #579)
      • AtrousDeConv2dLayer added (PR #662)
      • Fix bugs of using tf.layers in CNN (PR #686)
    • Optimizer:
      • AMSGrad Optimizer added based on On the Convergence of Adam and Beyond (ICLR 2018) (PR #636)
    • Setup:
      • Creation of installation flags all, all_cpu, and all_gpu (PR #660)
    • Test:
      • test_utils_predict.py added to reproduce and fix issue #288 (PR #566)
      • Layer_DeformableConvolution_Test added to reproduce issue #572 with deformable convolution (PR #573)
      • Array_Op_Alphas_Test and Array_Op_Alphas_Like_Test added to test tensorlayer/array_ops.py file (PR #580)
      • test_optimizer_amsgrad.py added to test AMSGrad optimizer (PR #636)
      • test_logging.py added to ensure robustness of the logging API (PR #645)
      • test_decorators.py added (PR #660)
      • test_activations.py added (PR #686)
    • Tutorials:
      • tutorial_tfslim has been introduced to show how to use SlimNetsLayer (PR #560).
      • add the following to all tutorials (PR #697):
        tf.logging.set_verbosity(tf.logging.DEBUG)
        tl.logging.set_verbosity(tl.logging.DEBUG)
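    Before moving on to the changes, here is a minimal sketch of the two API additions listed at the top of this section, tl.alphas/tl.alphas_like and tl.lazy_imports.LazyImport (the fill values and the wrapped module are placeholders; exact keyword names are not restated here, so arguments are passed positionally):

      import tensorflow as tf
      import tensorlayer as tl
      from tensorlayer.lazy_imports import LazyImport

      # tl.alphas / tl.alphas_like mirror tf.ones / tf.ones_like but fill with an arbitrary value.
      x = tf.ones([4, 3])
      a = tl.alphas([4, 3], 0.5)      # a 4x3 tensor filled with 0.5
      b = tl.alphas_like(x, 0.2)      # same shape as x, filled with 0.2

      # LazyImport defers importing a heavy module until one of its attributes is first accessed.
      cv2 = LazyImport("cv2")          # nothing is imported yet
      # img = cv2.imread("example.png")  # cv2 is actually imported on this first access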
        

    Changed

    • Tensorflow CPU & GPU dependencies moved to separated requirement files in order to allow PyUP.io to parse them (PR #573)
    • The document of LambdaLayer for linking it with ElementwiseLambdaLayer (PR #587)
    • RTD links point to stable documentation instead of latest used for development (PR #633)
    • TF versions older than 1.6.0 are officially unsupported and raise an exception (PR #644)
    • README.md badges updated with supported Python and TensorFlow versions (PR #644)
    • TL logging API has been consistent with TF logging API and thread-safe (PR #645)
    • Relative Imports changed for absolute imports (PR #657)
    • tl.files refactored into a directory with numerous files (PR #657)
    • tl.files.voc_dataset fixed because of original Pascal VOC website was down (PR #657)
    • extra requirements hidden inside the library added in the project requirements (PR #657)
    • requirements files refactored in requirements/ directory (PR #657)
    • README.md and other markdown files have been refactored and cleaned. (PR #639)
    • Ternary Convolution Layer added in unittest (PR #658)
    • Convolution Layers unittests have been cleaned & refactored (PR #658)
    • All the tests are now using a DEBUG level verbosity when run individually (PR #660)
    • tf.identity as activation is ignored, thus reducing the size of the graph by removing useless operation (PR #667)
    • argument dictionaries are now checked and saved within the Layer Base Class (PR #667)
    • Layer Base Class now presenting methods to update faultlessly all_layers, all_params, and all_drop (PR #675)
    • Input Layers have been removed from tl.layers.core and added to tl.layers.inputs (PR #675)
    • Input Layers are now considered as true layers in the graph (they represent a placeholder), unittests have been updated (PR #675)
    • Layer API is simplified, with automatic feeding prev_layer into self.inputs (PR #675)
    • Complete Documentation Refactoring and Reorganization (namely Layer APIs) (PR #691)

    Deprecated

    • tl.layers.TimeDistributedLayer argument args is deprecated in favor of layer_args (PR #667)
    • tl.act.leaky_relu has been deprecated in favor of tf.nn.leaky_relu (PR #686)

    Removed

    • assert() calls removed and replaced by raise AssertionError() (PR #667)
    • tl.identity is removed, not used anymore and deprecated for a long time (PR #667)
    • All code specific to TF.__version__ < "1.6" has been removed (PR #675)

    Fixed

    • Issue #498 - Deprecation Warning Fix in tl.layers.RNNLayer with inspect (PR #574)
    • Issue #498 - Deprecation Warning Fix in tl.files with truth value of an empty array is ambiguous (PR #575)
    • Issue #565 related to tl.utils.predict fixed - np.hstack problem in which the results for multiple batches are stacked along axis=1 (PR #566)
    • Issue #572 with tl.layers.DeformableConv2d fixed (PR #573)
    • Issue #664 with tl.layers.ConvLSTMLayer fixed (PR #676)
    • Typo of the document of ElementwiseLambdaLayer (PR #588)
    • Error in tl.layers.TernaryConv2d fixed - self.inputs not defined (PR #658)
    • Deprecation warning fixed in tl.layers.binary._compute_threshold() (PR #658)
    • All references to tf.logging replaced by tl.logging (PR #661)
    • Duplicated code removed when bias was used (PR #667)
    • tensorlayer.third_party.roi_pooling.roi_pooling.roi_pooling_ops is now lazy loaded to prevent systematic error raised (PR #675)
    • Documentation not build in RTD due to old version of theme in docs directory fixed (PR #703)
    • Tutorial:
      • tutorial_word2vec_basic.py saving issue #476 fixed (PR #635)
      • All tutorials tested and errors have been fixed (PR #635)

    Dependencies Update

    • Update pytest from 3.5.1 to 3.6.0 (PR #647)
    • Update progressbar2 from 3.37.1 to 3.38.0 (PR #651)
    • Update scikit-image from 0.13.1 to 0.14.0 (PR #656)
    • Update keras from 2.1.6 to 2.2.0 (PR #684)
    • Update requests from 2.18.4 to 2.19.0 (PR #695)

    Contributors

    • @lgarithm: #563
    • @DEKHTIARJonathan: #573 #574 #575 #580 #633 #635 #636 #639 #644 #645 #648 #657 #667 #658 #659 #660 #661 #666 #667 #672 #675 #683 #686 #687 #690 #691 #692 #703
    • @2wins: #560 #566 #662
    • @One-sixth: #579
    • @zsdonghao: #587 #588 #639 #685 #697
    • @luomai: #639 #677
    • @dengyueyun666: #676
    Source code(tar.gz)
    Source code(zip)
    tensorlayer-1.9.0.tar.gz(176.97 KB)
    tensorlayer-1.9.0-py3-none-any.whl(234.72 KB)
    tensorlayer-1.9.0-py2.py3-none-any.whl(234.73 KB)
    PKG-INFO(12.18 KB)
  • 1.8.6rc6(Jun 15, 2018)

    Added

    • API:
      • tl.alphas and tl.alphas_like added following the tf.ones/zeros and tf.zeros_like/ones_like (PR #580)
      • tl.lazy_imports.LazyImport to import heavy libraries only when necessary (PR #667)
      • tl.act.leaky_relu6 and tl.layers.PRelu6Layer added (PR #686)
      • tl.act.leaky_twice_relu6 and tl.layers.PTRelu6Layer added (PR #686)
    • CI Tool:
      • Stale Probot added to clean stale issues (PR #573)
      • Changelog Probot Configuration added (PR #637)
      • Travis Builds now handling a matrix of TF Version from TF==1.6.0 to TF==1.8.0 (PR #644)
      • CircleCI added to build and upload Docker Containers for each PR merged and tag release (PR #648)
    • Decorator:
      • tl.decorators API created including deprecated_alias and private_method (PR #660)
      • tl.decorators API enriched with protected_method (PR #675)
      • tl.decorators API enriched with deprecated directly raising warning and modifying documentation (PR #691)
    • Docker:
      • Containers for each release and for each PR merged on master built (PR #648)
      • Containers built in the following configurations (PR #648):
        • py2 + cpu
        • py2 + gpu
        • py3 + cpu
        • py3 + gpu
    • Documentation:
      • Clean README.md (PR #677)
      • Release semantic version added on index page (PR #633)
      • Optimizers page added (PR #636)
      • AMSGrad added on Optimizers page added (PR #636)
    • Layer:
      • ElementwiseLambdaLayer added to use custom function to connect multiple layer inputs (PR #579)
      • AtrousDeConv2dLayer added (PR #662)
      • Fix bugs of using tf.layers in CNN (PR #686)
    • Optimizer:
      • AMSGrad Optimizer added based on On the Convergence of Adam and Beyond (ICLR 2018) (PR #636)
    • Setup:
      • Creation of installation flags all, all_cpu, and all_gpu (PR #660)
    • Test:
      • test_utils_predict.py added to reproduce and fix issue #288 (PR #566)
      • Layer_DeformableConvolution_Test added to reproduce issue #572 with deformable convolution (PR #573)
      • Array_Op_Alphas_Test and Array_Op_Alphas_Like_Test added to test tensorlayer/array_ops.py file (PR #580)
      • test_optimizer_amsgrad.py added to test AMSGrad optimizer (PR #636)
      • test_logging.py added to ensure robustness of the logging API (PR #645)
      • test_decorators.py added (PR #660)
      • test_activations.py added (PR #686)
    • Tutorials:
      • tutorial_tfslim has been introduced to show how to use SlimNetsLayer (PR #560).
      • add the following to all tutorials (PR #697):
        tf.logging.set_verbosity(tf.logging.DEBUG)
        tl.logging.set_verbosity(tl.logging.DEBUG)
        

    Changed

    • Tensorflow CPU & GPU dependencies moved to separated requirement files in order to allow PyUP.io to parse them (PR #573)
    • The document of LambdaLayer for linking it with ElementwiseLambdaLayer (PR #587)
    • RTD links point to stable documentation instead of latest used for development (PR #633)
    • TF versions older than 1.6.0 are officially unsupported and raise an exception (PR #644)
    • README.md badges updated with supported Python and TensorFlow versions (PR #644)
    • TL logging API has been consistent with TF logging API and thread-safe (PR #645)
    • Relative Imports changed for absolute imports (PR #657)
    • tl.files refactored into a directory with numerous files (PR #657)
    • tl.files.voc_dataset fixed because of original Pascal VOC website was down (PR #657)
    • extra requirements hidden inside the library added in the project requirements (PR #657)
    • requirements files refactored in requirements/ directory (PR #657)
    • README.md and other markdown files have been refactored and cleaned. (PR #639)
    • Ternary Convolution Layer added in unittest (PR #658)
    • Convolution Layers unittests have been cleaned & refactored (PR #658)
    • All the tests are now using a DEBUG level verbosity when run individually (PR #660)
    • tf.identity as activation is ignored, thus reducing the size of the graph by removing useless operation (PR #667)
    • argument dictionaries are now checked and saved within the Layer Base Class (PR #667)
    • Layer Base Class now presenting methods to update faultlessly all_layers, all_params, and all_drop (PR #675)
    • Input Layers have been removed from tl.layers.core and added to tl.layers.inputs (PR #675)
    • Input Layers are now considered as true layers in the graph (they represent a placeholder), unittests have been updated (PR #675)
    • Layer API is simplified, with automatic feeding prev_layer into self.inputs (PR #675)
    • Complete Documentation Refactoring and Reorganization (namely Layer APIs) (PR #691)

    Deprecated

    • tl.layers.TimeDistributedLayer argument args is deprecated in favor of layer_args (PR #667)
    • tl.act.leaky_relu has been deprecated in favor of tf.nn.leaky_relu (PR #686)

    Removed

    • assert() calls removed and replaced by raise AssertionError() (PR #667)
    • tl.identity is removed, not used anymore and deprecated for a long time (PR #667)
    • All code specific to TF.__version__ < "1.6" has been removed (PR #675)

    Fixed

    • Issue #498 - Deprecation Warning Fix in tl.layers.RNNLayer with inspect (PR #574)
    • Issue #498 - Deprecation Warning Fix in tl.files with truth value of an empty array is ambiguous (PR #575)
    • Issue #565 related to tl.utils.predict fixed - np.hstack problem in which the results for multiple batches are stacked along axis=1 (PR #566)
    • Issue #572 with tl.layers.DeformableConv2d fixed (PR #573)
    • Issue #664 with tl.layers.ConvLSTMLayer fixed (PR #676)
    • Typo of the document of ElementwiseLambdaLayer (PR #588)
    • Error in tl.layers.TernaryConv2d fixed - self.inputs not defined (PR #658)
    • Deprecation warning fixed in tl.layers.binary._compute_threshold() (PR #658)
    • All references to tf.logging replaced by tl.logging (PR #661)
    • Duplicated code removed when bias was used (PR #667)
    • tensorlayer.third_party.roi_pooling.roi_pooling.roi_pooling_ops is now lazy loaded to prevent systematic error raised (PR #675)
    • Tutorial:
      • tutorial_word2vec_basic.py saving issue #476 fixed (PR #635)
      • All tutorials tested and errors have been fixed (PR #635)

    Dependencies Update

    • Update pytest from 3.5.1 to 3.6.0 (PR #647)
    • Update progressbar2 from 3.37.1 to 3.38.0 (PR #651)
    • Update scikit-image from 0.13.1 to 0.14.0 (PR #656)
    • Update keras from 2.1.6 to 2.2.0 (PR #684)
    • Update requests from 2.18.4 to 2.19.0 (PR #695)

    Contributors

    • @lgarithm: #563
    • @DEKHTIARJonathan: #573 #574 #575 #580 #633 #635 #636 #639 #644 #645 #648 #657 #667 #658 #659 #660 #661 #666 #667 #672 #675 #683 #686 #687 #690 #691 #692
    • @2wins: #560 #566 #662
    • @One-sixth: #579
    • @zsdonghao: #587 #588 #639 #685 #697
    • @luomai: #639 #677
    • @dengyueyun666: #676
    Source code(tar.gz)
    Source code(zip)
    tensorlayer-1.8.6rc6.tar.gz(176.98 KB)
    tensorlayer-1.8.6rc6-py3-none-any.whl(234.73 KB)
    tensorlayer-1.8.6rc6-py2.py3-none-any.whl(234.74 KB)
    PKG-INFO(12.20 KB)
  • 1.8.6rc5(Jun 7, 2018)

    Changelog

    Added

    • API:
      • tl.alphas and tl.alphas_like added following the tf.ones/zeros and tf.zeros_like/ones_like (by @DEKHTIARJonathan in #580)
      • tl.lazy_imports.LazyImport to import heavy libraries only when necessary (by @DEKHTIARJonathan in #667)
    • CI Tool:
      • Stale Probot added to clean stale issues (by @DEKHTIARJonathan in #573)
      • Changelog Probot Configuration added (by @DEKHTIARJonathan in #637)
      • Travis Builds now handling a matrix of TF Version from TF==1.6.0 to TF==1.8.0 (by @DEKHTIARJonathan in #644)
      • CircleCI added to build and upload Docker Containers for each PR merged and tag release (by @DEKHTIARJonathan in #648)
    • Decorator:
      • tl.decorators API created including deprecated_alias and private_method (by @DEKHTIARJonathan in #660)
      • tl.decorators API enriched with protected_method (by @DEKHTIARJonathan in #675)
    • Docker:
      • Containers for each release and for each PR merged on master built (by @DEKHTIARJonathan in #648)
      • Containers built in the following configurations (by @DEKHTIARJonathan in #648):
        • py2 + cpu
        • py2 + gpu
        • py3 + cpu
        • py3 + gpu
    • Documentation:
      • Clean README (by @luomai in #677)
      • Release semantic version added on index page (by @DEKHTIARJonathan in #633)
      • Optimizers page added (by @DEKHTIARJonathan in #636)
      • AMSGrad added on Optimizers page added (by @DEKHTIARJonathan in #636)
    • Layer:
      • ElementwiseLambdaLayer added to use custom function to connect multiple layer inputs (by @One-sixth in #579)
      • AtrousDeConv2dLayer added (by @2wins in #662)
      • Fix bugs of using tf.layers in CNN (by @zsdonghao in #686)
    • Optimizer:
      • AMSGrad Optimizer added based on On the Convergence of Adam and Beyond (ICLR 2018) (by @DEKHTIARJonathan in #636)
    • Setup:
      • Creation of installation flags all, all_cpu, and all_gpu (by @DEKHTIARJonathan in #660)
    • Test:
      • test_utils_predict.py added to reproduce and fix issue #288 (by @2wins in #566)
      • Layer_DeformableConvolution_Test added to reproduce issue #572 with deformable convolution (by @DEKHTIARJonathan in #573)
      • Array_Op_Alphas_Test and Array_Op_Alphas_Like_Test added to test tensorlayer/array_ops.py file (by @DEKHTIARJonathan in #580)
      • test_optimizer_amsgrad.py added to test AMSGrad optimizer (by @DEKHTIARJonathan in #636)
      • test_logging.py added to ensure robustness of the logging API (by @DEKHTIARJonathan in #645)
      • test_decorators.py added (by @DEKHTIARJonathan in #660)
    • Tutorials:
      • tutorial_tfslim has been introduced to show how to use SlimNetsLayer (by @2wins in #560).

    Changed

    • Tensorflow CPU & GPU dependencies moved to separated requirement files in order to allow PyUP.io to parse them (by @DEKHTIARJonathan in #573)
    • The document of LambdaLayer for linking it with ElementwiseLambdaLayer (by @zsdonghao in #587)
    • RTD links point to stable documentation instead of latest used for development (by @DEKHTIARJonathan in #633)
    • TF versions older than 1.6.0 are officially unsupported and raise an exception (by @DEKHTIARJonathan in #644)
    • Readme badges updated with supported Python and TensorFlow versions (by @DEKHTIARJonathan in #644)
    • TL logging API has been consistent with TF logging API and thread-safe (by @DEKHTIARJonathan in #645)
    • Relative Imports changed for absolute imports (by @DEKHTIARJonathan in #657)
    • tl.files refactored into a directory with numerous files (by @DEKHTIARJonathan in #657)
    • tl.files.voc_dataset fixed because of original Pascal VOC website was down (by @DEKHTIARJonathan in #657)
    • extra requirements hidden inside the library added in the project requirements (by @DEKHTIARJonathan in #657)
    • requirements files refactored in requirements/ directory (by @DEKHTIARJonathan in #657)
    • README.md and other markdown files have been refactored and cleaned. (by @zsdonghao @DEKHTIARJonathan @luomai in #639)
    • Ternary Convolution Layer added in unittest (by @DEKHTIARJonathan in #658)
    • Convolution Layers unittests have been cleaned & refactored (by @DEKHTIARJonathan in #658)
    • All the tests are now using a DEBUG level verbosity when run individually (by @DEKHTIARJonathan in #660)
    • tf.identity as activation is ignored, thus reducing the size of the graph by removing useless operation (by @DEKHTIARJonathan in #667)
    • argument dictionaries are now checked and saved within the Layer Base Class (by @DEKHTIARJonathan in #667)
    • Layer Base Class now presenting methods to update faultlessly all_layers, all_params, and all_drop (by @DEKHTIARJonathan in #675)
    • Input Layers have been removed from tl.layers.core and added to tl.layers.inputs (by @DEKHTIARJonathan in #675)
    • Input Layers are now considered as true layers in the graph (they represent a placeholder), unittests have been updated (by @DEKHTIARJonathan in #675)
    • Layer API is simplified, with automatic feeding prev_layer into self.inputs (by @DEKHTIARJonathan in #675)

    Deprecated

    • tl.layers.TimeDistributedLayer argument args is deprecated in favor of layer_args (by @DEKHTIARJonathan in #667)

    Removed

    • assert() calls removed and replaced by raise AssertionError() (by @DEKHTIARJonathan in #667)
    • tl.identity is removed, not used anymore and deprecated for a long time (by @DEKHTIARJonathan in #667)
    • All code specific to TF.__version__ < "1.6" has been removed (by @DEKHTIARJonathan in #675)

    Fixed

    • Issue #498 - Deprecation Warning Fix in tl.layers.RNNLayer with inspect (by @DEKHTIARJonathan in #574)
    • Issue #498 - Deprecation Warning Fix in tl.files with truth value of an empty array is ambiguous (by @DEKHTIARJonathan in #575)
    • Issue #565 related to tl.utils.predict fixed - np.hstack problem in which the results for multiple batches are stacked along axis=1 (by @2wins in #566)
    • Issue #572 with tl.layers.DeformableConv2d fixed (by @DEKHTIARJonathan in #573)
    • Issue #664 with tl.layers.ConvLSTMLayer fixed (by @dengyueyun666 in #676)
    • Typo of the document of ElementwiseLambdaLayer (by @zsdonghao in #588)
    • Error in tl.layers.TernaryConv2d fixed - self.inputs not defined (by @DEKHTIARJonathan in #658)
    • Deprecation warning fixed in tl.layers.binary._compute_threshold() (by @DEKHTIARJonathan in #658)
    • All references to tf.logging replaced by tl.logging (by @DEKHTIARJonathan in #661)
    • Duplicated code removed when bias was used (by @DEKHTIARJonathan in #667)
    • tensorlayer.third_party.roi_pooling.roi_pooling.roi_pooling_ops is now lazy loaded to prevent systematic error raised (by @DEKHTIARJonathan in #675)
    • Tutorial:
      • tutorial_word2vec_basic.py saving issue #476 fixed (by @DEKHTIARJonathan in #635)
      • All tutorials tested and errors have been fixed (by @DEKHTIARJonathan in #635)

    Dependencies Update

    • Update pytest from 3.5.1 to 3.6.0 (by @DEKHTIARJonathan and @pyup-bot in #647)
    • Update progressbar2 from 3.37.1 to 3.38.0 (by @DEKHTIARJonathan and @pyup-bot in #651)
    • Update scikit-image from 0.13.1 to 0.14.0 (by @DEKHTIARJonathan and @pyup-bot in #656)
    • Update keras from 2.1.6 to 2.2.0 (by @DEKHTIARJonathan and @pyup-bot in #684)

    Contributors

    @lgarithm @DEKHTIARJonathan @2wins @One-sixth @zsdonghao @luomai

    Source code(tar.gz)
    Source code(zip)
    tensorlayer-1.8.6rc5.tar.gz(172.21 KB)
    tensorlayer-1.8.6rc5-py3-none-any.whl(205.12 KB)
    tensorlayer-1.8.6rc5-py2.py3-none-any.whl(205.12 KB)
    PKG-INFO(12.25 KB)
  • 1.8.6rc4(Jun 7, 2018)

    Changelog

    Added

    • API:
      • tl.alphas and tl.alphas_like added following the tf.ones/zeros and tf.zeros_like/ones_like (by @DEKHTIARJonathan in #580)
      • tl.lazy_imports.LazyImport to import heavy libraries only when necessary (by @DEKHTIARJonathan in #667)
    • CI Tool:
      • Stale Probot added to clean stale issues (by @DEKHTIARJonathan in #573)
      • Changelog Probot Configuration added (by @DEKHTIARJonathan in #637)
      • Travis Builds now handling a matrix of TF Version from TF==1.6.0 to TF==1.8.0 (by @DEKHTIARJonathan in #644)
      • CircleCI added to build and upload Docker Containers for each PR merged and tag release (by @DEKHTIARJonathan in #648)
    • Decorator:
      • tl.decorators API created including deprecated_alias and private_method (by @DEKHTIARJonathan in #660)
      • tl.decorators API enriched with protected_method (by @DEKHTIARJonathan in #675)
    • Docker:
      • Containers for each release and for each PR merged on master built (by @DEKHTIARJonathan in #648)
      • Containers built in the following configurations (by @DEKHTIARJonathan in #648):
        • py2 + cpu
        • py2 + gpu
        • py3 + cpu
        • py3 + gpu
    • Documentation:
      • Clean README (by @luomai in #677)
      • Release semantic version added on index page (by @DEKHTIARJonathan in #633)
      • Optimizers page added (by @DEKHTIARJonathan in #636)
      • AMSGrad added on Optimizers page added (by @DEKHTIARJonathan in #636)
    • Layer:
      • ElementwiseLambdaLayer added to use custom function to connect multiple layer inputs (by @One-sixth in #579)
      • AtrousDeConv2dLayer added (by @2wins in #662)
    • Optimizer:
      • AMSGrad Optimizer added based on On the Convergence of Adam and Beyond (ICLR 2018) (by @DEKHTIARJonathan in #636)
    • Setup:
      • Creation of installation flags all, all_cpu, and all_gpu (by @DEKHTIARJonathan in #660)
    • Test:
      • test_utils_predict.py added to reproduce and fix issue #288 (by @2wins in #566)
      • Layer_DeformableConvolution_Test added to reproduce issue #572 with deformable convolution (by @DEKHTIARJonathan in #573)
      • Array_Op_Alphas_Test and Array_Op_Alphas_Like_Test added to test tensorlayer/array_ops.py file (by @DEKHTIARJonathan in #580)
      • test_optimizer_amsgrad.py added to test AMSGrad optimizer (by @DEKHTIARJonathan in #636)
      • test_logging.py added to ensure robustness of the logging API (by @DEKHTIARJonathan in #645)
      • test_decorators.py added (by @DEKHTIARJonathan in #660)
    • Tutorials:
      • tutorial_tfslim has been introduced to show how to use SlimNetsLayer (by @2wins in #560).

    Changed

    • Tensorflow CPU & GPU dependencies moved to separated requirement files in order to allow PyUP.io to parse them (by @DEKHTIARJonathan in #573)
    • The document of LambdaLayer for linking it with ElementwiseLambdaLayer (by @zsdonghao in #587)
    • RTD links point to stable documentation instead of latest used for development (by @DEKHTIARJonathan in #633)
    • TF versions older than 1.6.0 are officially unsupported and raise an exception (by @DEKHTIARJonathan in #644)
    • Readme badges updated with supported Python and TensorFlow versions (by @DEKHTIARJonathan in #644)
    • TL logging API has been consistent with TF logging API and thread-safe (by @DEKHTIARJonathan in #645)
    • Relative Imports changed for absolute imports (by @DEKHTIARJonathan in #657)
    • tl.files refactored into a directory with numerous files (by @DEKHTIARJonathan in #657)
    • tl.files.voc_dataset fixed because of original Pascal VOC website was down (by @DEKHTIARJonathan in #657)
    • extra requirements hidden inside the library added in the project requirements (by @DEKHTIARJonathan in #657)
    • requirements files refactored in requirements/ directory (by @DEKHTIARJonathan in #657)
    • README.md and other markdown files have been refactored and cleaned. (by @zsdonghao @DEKHTIARJonathan @luomai in #639)
    • Ternary Convolution Layer added in unittest (by @DEKHTIARJonathan in #658)
    • Convolution Layers unittests have been cleaned & refactored (by @DEKHTIARJonathan in #658)
    • All the tests are now using a DEBUG level verbosity when run individually (by @DEKHTIARJonathan in #660)
    • tf.identity as activation is ignored, thus reducing the size of the graph by removing useless operation (by @DEKHTIARJonathan in #667)
    • argument dictionaries are now checked and saved within the Layer Base Class (by @DEKHTIARJonathan in #667)
    • Layer Base Class now presenting methods to update faultlessly all_layers, all_params, and all_drop (by @DEKHTIARJonathan in #675)
    • Input Layers have been removed from tl.layers.core and added to tl.layers.inputs (by @DEKHTIARJonathan in #675)
    • Input Layers are now considered as true layers in the graph (they represent a placeholder), unittests have been updated (by @DEKHTIARJonathan in #675)
    • Layer API is simplified, with automatic feeding prev_layer into self.inputs (by @DEKHTIARJonathan in #675)

    Deprecated

    • tl.layers.TimeDistributedLayer argument args is deprecated in favor of layer_args (by @DEKHTIARJonathan in #667)

    Removed

    • assert() calls removed and replaced by raise AssertionError() (by @DEKHTIARJonathan in #667)
    • tl.identity is removed, not used anymore and deprecated for a long time (by @DEKHTIARJonathan in #667)
    • All code specific to TF.__version__ < "1.6" has been removed (by @DEKHTIARJonathan in #675)

    Fixed

    • Issue #498 - Deprecation Warning Fix in tl.layers.RNNLayer with inspect (by @DEKHTIARJonathan in #574)
    • Issue #498 - Deprecation Warning Fix in tl.files with truth value of an empty array is ambiguous (by @DEKHTIARJonathan in #575)
    • Issue #565 related to tl.utils.predict fixed - np.hstack problem in which the results for multiple batches are stacked along axis=1 (by @2wins in #566)
    • Issue #572 with tl.layers.DeformableConv2d fixed (by @DEKHTIARJonathan in #573)
    • Issue #664 with tl.layers.ConvLSTMLayer fixed (by @dengyueyun666 in #676)
    • Typo of the document of ElementwiseLambdaLayer (by @zsdonghao in #588)
    • Error in tl.layers.TernaryConv2d fixed - self.inputs not defined (by @DEKHTIARJonathan in #658)
    • Deprecation warning fixed in tl.layers.binary._compute_threshold() (by @DEKHTIARJonathan in #658)
    • All references to tf.logging replaced by tl.logging (by @DEKHTIARJonathan in #661)
    • Duplicated code removed when bias was used (by @DEKHTIARJonathan in #667)
    • tensorlayer.third_party.roi_pooling.roi_pooling.roi_pooling_ops is now lazy loaded to prevent systematic error raised (by @DEKHTIARJonathan in #675)
    • Tutorial:
      • tutorial_word2vec_basic.py saving issue #476 fixed (by @DEKHTIARJonathan in #635)
      • All tutorials tested and errors have been fixed (by @DEKHTIARJonathan in #635)

    Dependencies Update

    • Update pytest from 3.5.1 to 3.6.0 (by @DEKHTIARJonathan and @pyup-bot in #647)
    • Update progressbar2 from 3.37.1 to 3.38.0 (by @DEKHTIARJonathan and @pyup-bot in #651)
    • Update scikit-image from 0.13.1 to 0.14.0 (by @DEKHTIARJonathan and @pyup-bot in #656)
    • Update keras from 2.1.6 to 2.2.0 (by @DEKHTIARJonathan and @pyup-bot in #684)

    Contributors

    @lgarithm @DEKHTIARJonathan @2wins @One-sixth @zsdonghao @luomai

    Source code(tar.gz)
    Source code(zip)
    tensorlayer-1.8.6rc4.tar.gz(172.03 KB)
    tensorlayer-1.8.6rc4-py3-none-any.whl(204.95 KB)
    tensorlayer-1.8.6rc4-py2.py3-none-any.whl(204.95 KB)
    PKG-INFO(12.25 KB)
  • 1.8.6rc3(Jun 6, 2018)

    Changelog

    Added

    • API:
      • tl.alphas and tl.alphas_like added following the tf.ones/zeros and tf.zeros_like/ones_like (by @DEKHTIARJonathan in #580)
      • tl.lazy_imports.LazyImport to import heavy libraries only when necessary (by @DEKHTIARJonathan in #667)
    • CI Tool:
      • Stale Probot added to clean stale issues (by @DEKHTIARJonathan in #573)
      • Changelog Probot Configuration added (by @DEKHTIARJonathan in #637)
      • Travis Builds now handling a matrix of TF Version from TF==1.6.0 to TF==1.8.0 (by @DEKHTIARJonathan in #644)
      • CircleCI added to build and upload Docker Containers for each PR merged and tag release (by @DEKHTIARJonathan in #648)
    • Decorator:
      • tl.decorators API created including deprecated_alias and private_method (by @DEKHTIARJonathan in #660)
      • tl.decorators API enriched with protected_method (by @DEKHTIARJonathan in #675)
    • Docker:
      • Containers for each release and for each PR merged on master built (by @DEKHTIARJonathan in #648)
      • Containers built in the following configurations (by @DEKHTIARJonathan in #648):
        • py2 + cpu
        • py2 + gpu
        • py3 + cpu
        • py3 + gpu
    • Documentation:
      • Clean README (by @luomai in #677)
      • Release semantic version added on index page (by @DEKHTIARJonathan in #633)
      • Optimizers page added (by @DEKHTIARJonathan in #636)
      • AMSGrad added on Optimizers page added (by @DEKHTIARJonathan in #636)
    • Layer:
      • ElementwiseLambdaLayer added to use custom function to connect multiple layer inputs (by @One-sixth in #579)
      • AtrousDeConv2dLayer added (by @2wins in #662)
    • Optimizer:
      • AMSGrad Optimizer added based on On the Convergence of Adam and Beyond (ICLR 2018) (by @DEKHTIARJonathan in #636)
    • Setup:
      • Creation of installation flags all, all_cpu, and all_gpu (by @DEKHTIARJonathan in #660)
    • Test:
      • test_utils_predict.py added to reproduce and fix issue #288 (by @2wins in #566)
      • Layer_DeformableConvolution_Test added to reproduce issue #572 with deformable convolution (by @DEKHTIARJonathan in #573)
      • Array_Op_Alphas_Test and Array_Op_Alphas_Like_Test added to test tensorlayer/array_ops.py file (by @DEKHTIARJonathan in #580)
      • test_optimizer_amsgrad.py added to test AMSGrad optimizer (by @DEKHTIARJonathan in #636)
      • test_logging.py added to ensure robustness of the logging API (by @DEKHTIARJonathan in #645)
      • test_decorators.py added (by @DEKHTIARJonathan in #660)
    • Tutorials:
      • tutorial_tfslim has been introduced to show how to use SlimNetsLayer (by @2wins in #560).

    Changed

    • Tensorflow CPU & GPU dependencies moved to separated requirement files in order to allow PyUP.io to parse them (by @DEKHTIARJonathan in #573)
    • The document of LambdaLayer for linking it with ElementwiseLambdaLayer (by @zsdonghao in #587)
    • RTD links point to stable documentation instead of latest used for development (by @DEKHTIARJonathan in #633)
    • TF versions older than 1.6.0 are officially unsupported and raise an exception (by @DEKHTIARJonathan in #644)
    • Readme badges updated with supported Python and TensorFlow versions (by @DEKHTIARJonathan in #644)
    • TL logging API has been consistent with TF logging API and thread-safe (by @DEKHTIARJonathan in #645)
    • Relative Imports changed for absolute imports (by @DEKHTIARJonathan in #657)
    • tl.files refactored into a directory with numerous files (by @DEKHTIARJonathan in #657)
    • tl.files.voc_dataset fixed because of original Pascal VOC website was down (by @DEKHTIARJonathan in #657)
    • extra requirements hidden inside the library added in the project requirements (by @DEKHTIARJonathan in #657)
    • requirements files refactored in requirements/ directory (by @DEKHTIARJonathan in #657)
    • README.md and other markdown files have been refactored and cleaned. (by @zsdonghao @DEKHTIARJonathan @luomai in #639)
    • Ternary Convolution Layer added in unittest (by @DEKHTIARJonathan in #658)
    • Convolution Layers unittests have been cleaned & refactored (by @DEKHTIARJonathan in #658)
    • All the tests are now using a DEBUG level verbosity when run individually (by @DEKHTIARJonathan in #660)
    • tf.identity as activation is ignored, thus reducing the size of the graph by removing useless operation (by @DEKHTIARJonathan in #667)
    • argument dictionaries are now checked and saved within the Layer Base Class (by @DEKHTIARJonathan in #667)
    • Layer Base Class now presenting methods to update faultlessly all_layers, all_params, and all_drop (by @DEKHTIARJonathan in #675)
    • Input Layers have been removed from tl.layers.core and added to tl.layers.inputs (by @DEKHTIARJonathan in #675)
    • Input Layers are now considered as true layers in the graph (they represent a placeholder), unittests have been updated (by @DEKHTIARJonathan in #675)
    • Layer API is simplified, with automatic feeding prev_layer into self.inputs (by @DEKHTIARJonathan in #675)

    Deprecated

    • tl.layers.TimeDistributedLayer argument args is deprecated in favor of layer_args (by @DEKHTIARJonathan in #667)

    Removed

    • assert() calls removed and replaced by raise AssertionError() (by @DEKHTIARJonathan in #667)
    • tl.identity is removed, not used anymore and deprecated for a long time (by @DEKHTIARJonathan in #667)
    • All code specific to TF.__version__ < "1.6" has been removed (by @DEKHTIARJonathan in #675)

    Fixed

    • Issue #498 - Deprecation Warning Fix in tl.layers.RNNLayer with inspect (by @DEKHTIARJonathan in #574)
    • Issue #498 - Deprecation Warning Fix in tl.files with truth value of an empty array is ambiguous (by @DEKHTIARJonathan in #575)
    • Issue #565 related to tl.utils.predict fixed - np.hstack problem in which the results for multiple batches are stacked along axis=1 (by @2wins in #566)
    • Issue #572 with tl.layers.DeformableConv2d fixed (by @DEKHTIARJonathan in #573)
    • Issue #664 with tl.layers.ConvLSTMLayer fixed (by @dengyueyun666 in #676)
    • Typo of the document of ElementwiseLambdaLayer (by @zsdonghao in #588)
    • Error in tl.layers.TernaryConv2d fixed - self.inputs not defined (by @DEKHTIARJonathan in #658)
    • Deprecation warning fixed in tl.layers.binary._compute_threshold() (by @DEKHTIARJonathan in #658)
    • All references to tf.logging replaced by tl.logging (by @DEKHTIARJonathan in #661)
    • Duplicated code removed when bias was used (by @DEKHTIARJonathan in #667)
    • tensorlayer.third_party.roi_pooling.roi_pooling.roi_pooling_ops is now lazy loaded to prevent systematic error raised (by @DEKHTIARJonathan in #675)
    • Tutorial:
      • tutorial_word2vec_basic.py saving issue #476 fixed (by @DEKHTIARJonathan in #635)
      • All tutorials tested and errors have been fixed (by @DEKHTIARJonathan in #635)

    Security

    Dependencies Update

    • Update pytest from 3.5.1 to 3.6.0 (by @DEKHTIARJonathan and @pyup-bot in #647)
    • Update progressbar2 from 3.37.1 to 3.38.0 (by @DEKHTIARJonathan and @pyup-bot in #651)
    • Update scikit-image from 0.13.1 to 0.14.0 (by @DEKHTIARJonathan and @pyup-bot in #656)

    Contributors

    @lgarithm @DEKHTIARJonathan @2wins @One-sixth @zsdonghao @luomai

    Source code(tar.gz)
    Source code(zip)
    tensorlayer-1.8.6rc3.tar.gz(172.14 KB)
    tensorlayer-1.8.6rc3-py3-none-any.whl(205.06 KB)
    tensorlayer-1.8.6rc3-py2.py3-none-any.whl(205.06 KB)
    PKG-INFO(12.25 KB)
  • 1.8.6rc2(Jun 4, 2018)

    Changelog

    Added

    • API:
      • tl.alphas and tl.alphas_like added following the tf.ones/zeros and tf.zeros_like/ones_like (by @DEKHTIARJonathan in #580)
      • tl.lazy_imports.LazyImport to import heavy libraries only when necessary (by @DEKHTIARJonathan in #667)
    • CI Tool:
      • Stale Probot added to clean stale issues (by @DEKHTIARJonathan in #573)
      • Changelog Probot Configuration added (by @DEKHTIARJonathan in #637)
      • Travis Builds now handling a matrix of TF Version from TF==1.6.0 to TF==1.8.0 (by @DEKHTIARJonathan in #644)
      • CircleCI added to build and upload Docker Containers for each PR merged and tag release (by @DEKHTIARJonathan in #648)
    • Decorator:
      • tl.decorators API created including deprecated_alias and private_method (by @DEKHTIARJonathan in #660)
      • tl.decorators API enriched with protected_method (by @DEKHTIARJonathan in #675)
    • Docker:
      • Containers for each release and for each PR merged on master built (by @DEKHTIARJonathan in #648)
      • Containers built in the following configurations (by @DEKHTIARJonathan in #648):
        • py2 + cpu
        • py2 + gpu
        • py3 + cpu
        • py3 + gpu
    • Documentation:
      • Clean README (by @luomai in #677)
      • Release semantic version added on index page (by @DEKHTIARJonathan in #633)
      • Optimizers page added (by @DEKHTIARJonathan in #636)
      • AMSGrad added on Optimizers page added (by @DEKHTIARJonathan in #636)
    • Layer:
      • ElementwiseLambdaLayer added to use custom function to connect multiple layer inputs (by @One-sixth in #579)
      • AtrousDeConv2dLayer added (by @2wins in #662)
    • Optimizer:
      • AMSGrad Optimizer added based on On the Convergence of Adam and Beyond (ICLR 2018) (by @DEKHTIARJonathan in #636)
    • Setup:
      • Creation of installation flags all, all_cpu, and all_gpu (by @DEKHTIARJonathan in #660)
    • Test:
      • test_utils_predict.py added to reproduce and fix issue #288 (by @2wins in #566)
      • Layer_DeformableConvolution_Test added to reproduce issue #572 with deformable convolution (by @DEKHTIARJonathan in #573)
      • Array_Op_Alphas_Test and Array_Op_Alphas_Like_Test added to test tensorlayer/array_ops.py file (by @DEKHTIARJonathan in #580)
      • test_optimizer_amsgrad.py added to test AMSGrad optimizer (by @DEKHTIARJonathan in #636)
      • test_logging.py added to ensure robustness of the logging API (by @DEKHTIARJonathan in #645)
      • test_decorators.py added (by @DEKHTIARJonathan in #660)
    • Tutorials:
      • tutorial_tfslim has been introduced to show how to use SlimNetsLayer (by @2wins in #560).

    Changed

    • Tensorflow CPU & GPU dependencies moved to separated requirement files in order to allow PyUP.io to parse them (by @DEKHTIARJonathan in #573)
    • The document of LambdaLayer for linking it with ElementwiseLambdaLayer (by @zsdonghao in #587)
    • RTD links point to stable documentation instead of latest used for development (by @DEKHTIARJonathan in #633)
    • TF versions older than 1.6.0 are officially unsupported and raise an exception (by @DEKHTIARJonathan in #644)
    • Readme badges updated with supported Python and TensorFlow versions (by @DEKHTIARJonathan in #644)
    • TL logging API has been consistent with TF logging API and thread-safe (by @DEKHTIARJonathan in #645)
    • Relative Imports changed for absolute imports (by @DEKHTIARJonathan in #657)
    • tl.files refactored into a directory with numerous files (by @DEKHTIARJonathan in #657)
    • tl.files.voc_dataset fixed because of original Pascal VOC website was down (by @DEKHTIARJonathan in #657)
    • extra requirements hidden inside the library added in the project requirements (by @DEKHTIARJonathan in #657)
    • requirements files refactored in requirements/ directory (by @DEKHTIARJonathan in #657)
    • README.md and other markdown files have been refactored and cleaned. (by @zsdonghao @DEKHTIARJonathan @luomai in #639)
    • Ternary Convolution Layer added in unittest (by @DEKHTIARJonathan in #658)
    • Convolution Layers unittests have been cleaned & refactored (by @DEKHTIARJonathan in #658)
    • All the tests are now using a DEBUG level verbosity when run individually (by @DEKHTIARJonathan in #660)
    • tf.identity as activation is ignored, thus reducing the size of the graph by removing useless operation (by @DEKHTIARJonathan in #667)
    • argument dictionaries are now checked and saved within the Layer Base Class (by @DEKHTIARJonathan in #667)
    • Layer Base Class now presenting methods to update faultlessly all_layers, all_params, and all_drop (by @DEKHTIARJonathan in #675)
    • Input Layers have been removed from tl.layers.core and added to tl.layers.inputs (by @DEKHTIARJonathan in #675)
    • Input Layers are now considered as true layers in the graph (they represent a placeholder), unittests have been updated (by @DEKHTIARJonathan in #675)
    • Layer API is simplified, with automatic feeding prev_layer into self.inputs (by @DEKHTIARJonathan in #675)

    Deprecated

    • tl.layers.TimeDistributedLayer argument args is deprecated in favor of layer_args (by @DEKHTIARJonathan in #667)

    Removed

    • assert() calls removed and replaced by raise AssertionError() (by @DEKHTIARJonathan in #667)
    • tl.identity removed; it was no longer used and had been deprecated for a long time (by @DEKHTIARJonathan in #667)
    • All code specific to TF.__version__ < "1.6" has been removed (by @DEKHTIARJonathan in #675)

    Fixed

    • Issue #498 - Deprecation Warning Fix in tl.layers.RNNLayer with inspect (by @DEKHTIARJonathan in #574)
    • Issue #498 - Deprecation Warning Fix in tl.files with truth value of an empty array is ambiguous (by @DEKHTIARJonathan in #575)
    • Issue #565 related to tl.utils.predict fixed - np.hstack problem in which the results for multiple batches are stacked along axis=1 (by @2wins in #566)
    • Issue #572 with tl.layers.DeformableConv2d fixed (by @DEKHTIARJonathan in #573)
    • Issue #664 with tl.layers.ConvLSTMLayer fixed (by @dengyueyun666 in #676)
    • Typo in the ElementwiseLambdaLayer documentation fixed (by @zsdonghao in #588)
    • Error in tl.layers.TernaryConv2d fixed - self.inputs not defined (by @DEKHTIARJonathan in #658)
    • Deprecation warning fixed in tl.layers.binary._compute_threshold() (by @DEKHTIARJonathan in #658)
    • All references to tf.logging replaced by tl.logging (by @DEKHTIARJonathan in #661)
    • Duplicated code removed when bias was used (by @DEKHTIARJonathan in #667)
    • tensorlayer.third_party.roi_pooling.roi_pooling.roi_pooling_ops is now lazily loaded to prevent a systematic error from being raised (by @DEKHTIARJonathan in #675)
    • Tutorial:
      • tutorial_word2vec_basic.py saving issue #476 fixed (by @DEKHTIARJonathan in #635)
      • All tutorials tested and errors have been fixed (by @DEKHTIARJonathan in #635)

    Dependencies Update

    • Update pytest from 3.5.1 to 3.6.0 (by @DEKHTIARJonathan and @pyup-bot in #647)
    • Update progressbar2 from 3.37.1 to 3.38.0 (by @DEKHTIARJonathan and @pyup-bot in #651)
    • Update scikit-image from 0.13.1 to 0.14.0 (by @DEKHTIARJonathan and @pyup-bot in #656)

    Contributors

    @lgarithm @DEKHTIARJonathan @2wins @One-sixth @zsdonghao @luomai

    Source code(tar.gz)
    Source code(zip)
    tensorlayer-1.8.6rc2.tar.gz(172.37 KB)
    tensorlayer-1.8.6rc2-py3-none-any.whl(205.27 KB)
    tensorlayer-1.8.6rc2-py2.py3-none-any.whl(205.28 KB)
    PKG-INFO(12.54 KB)
  • 1.8.6rc1(Jun 3, 2018)

    Changelog

    Added

    • API:
      • tl.alphas and tl.alphas_like added following the tf.ones/zeros and tf.zeros_like/ones_like (by @DEKHTIARJonathan in #580)
      • tl.lazy_imports.LazyImport to import heavy libraries only when necessary; a minimal sketch follows this list (by @DEKHTIARJonathan in #667)
    • CI Tool:
      • Stale Probot added to clean stale issues (by @DEKHTIARJonathan in #573)
      • Changelog Probot Configuration added (by @DEKHTIARJonathan in #637)
      • Travis Builds now handling a matrix of TF Version from TF==1.6.0 to TF==1.8.0 (by @DEKHTIARJonathan in #644)
      • CircleCI added to build and upload Docker Containers for each PR merged and tag release (by @DEKHTIARJonathan in #648)
    • Decorator:
      • tl.decorators API created including deprecated_alias and private_method (by @DEKHTIARJonathan in #660)
    • Docker:
      • Containers for each release and for each PR merged on master built (by @DEKHTIARJonathan in #648)
      • Containers built in the following configurations (by @DEKHTIARJonathan in #648):
        • py2 + cpu
        • py2 + gpu
        • py3 + cpu
        • py3 + gpu
    • Documentation:
      • Release semantic version added on index page (by @DEKHTIARJonathan in #633)
      • Optimizers page added (by @DEKHTIARJonathan in #636)
      • AMSGrad added on the Optimizers page (by @DEKHTIARJonathan in #636)
    • Layer:
      • ElementwiseLambdaLayer added to use custom function to connect multiple layer inputs (by @One-sixth in #579)
    • Optimizer:
      • AMSGrad Optimizer added based on On the Convergence of Adam and Beyond (ICLR 2018) (by @DEKHTIARJonathan in #636)
    • Setup:
      • Creation of installation flags all, all_cpu, and all_gpu (by @DEKHTIARJonathan in #660)
    • Test:
      • test_utils_predict.py added to reproduce and fix issue #288 (by @2wins in #566)
      • Layer_DeformableConvolution_Test added to reproduce issue #572 with deformable convolution (by @DEKHTIARJonathan in #573)
      • Array_Op_Alphas_Test and Array_Op_Alphas_Like_Test added to test tensorlayer/array_ops.py file (by @DEKHTIARJonathan in #580)
      • test_optimizer_amsgrad.py added to test AMSGrad optimizer (by @DEKHTIARJonathan in #636)
      • test_logging.py added to ensure robustness of the logging API (by @DEKHTIARJonathan in #645)
      • test_decorators.py added (by @DEKHTIARJonathan in #660)
    • Tutorials:
      • tutorial_tfslim has been introduced to show how to use SlimNetsLayer (by @2wins in #560).
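
    As a side note, here is a minimal sketch of the lazy-import idea behind tl.lazy_imports.LazyImport mentioned in the list above. It assumes the wrapped module is only imported on first attribute access; see the actual implementation for the exact behaviour.

    import importlib

    class LazyImport(object):
        """Defer importing a heavy module until one of its attributes is accessed."""

        def __init__(self, module_name):
            self._module_name = module_name
            self._module = None

        def __getattr__(self, attr):
            if self._module is None:   # the real import happens only here, on first use
                self._module = importlib.import_module(self._module_name)
            return getattr(self._module, attr)

    # nothing is imported yet:
    cv2 = LazyImport("cv2")
    # the import of cv2 would only be triggered by e.g. cv2.imread('image.png')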

    Changed

    • TensorFlow CPU & GPU dependencies moved to separate requirements files so that PyUP.io can parse them (by @DEKHTIARJonathan in #573)
    • LambdaLayer documentation updated to link it with ElementwiseLambdaLayer (by @zsdonghao in #587)
    • RTD links point to the stable documentation instead of the latest version used for development (by @DEKHTIARJonathan in #633)
    • TF versions older than 1.6.0 are officially unsupported and raise an exception (by @DEKHTIARJonathan in #644)
    • README badges updated with the supported Python and TensorFlow versions (by @DEKHTIARJonathan in #644)
    • TL logging API made consistent with the TF logging API and thread-safe (by @DEKHTIARJonathan in #645)
    • Relative imports changed to absolute imports (by @DEKHTIARJonathan in #657)
    • tl.files refactored into a directory with numerous files (by @DEKHTIARJonathan in #657)
    • tl.files.voc_dataset fixed because the original Pascal VOC website was down (by @DEKHTIARJonathan in #657)
    • Extra requirements hidden inside the library added to the project requirements (by @DEKHTIARJonathan in #657)
    • Requirements files refactored into the requirements/ directory (by @DEKHTIARJonathan in #657)
    • README.md and other markdown files have been refactored and cleaned (by @zsdonghao @DEKHTIARJonathan @luomai in #639)
    • Ternary convolution layer added to the unit tests (by @DEKHTIARJonathan in #658)
    • Convolution layer unit tests cleaned and refactored (by @DEKHTIARJonathan in #658)
    • All tests now use DEBUG-level verbosity when run individually (by @DEKHTIARJonathan in #660)
    • tf.identity as activation is ignored, reducing the size of the graph by removing useless operations (by @DEKHTIARJonathan in #667)
    • Argument dictionaries are now checked and saved within the Layer base class (by @DEKHTIARJonathan in #667)

    Deprecated

    • tl.layers.TimeDistributedLayer argument args is deprecated in favor of layer_args (by @DEKHTIARJonathan in #667)

    Removed

    • assert() calls removed and replaced by raise AssertionError() (by @DEKHTIARJonathan in #667)
    • tl.identity removed; it was no longer used and had been deprecated for a long time (by @DEKHTIARJonathan in #667)

    Fixed

    • Issue #498 - Deprecation Warning Fix in tl.layers.RNNLayer with inspect (by @DEKHTIARJonathan in #574)
    • Issue #498 - Deprecation Warning Fix in tl.files with truth value of an empty array is ambiguous (by @DEKHTIARJonathan in #575)
    • Issue #565 related to tl.utils.predict fixed - np.hstack problem in which the results for multiple batches are stacked along axis=1 (by @2wins in #566)
    • Issue #572 with tl.layers.DeformableConv2d fixed (by @DEKHTIARJonathan in #573)
    • Typo in the ElementwiseLambdaLayer documentation fixed (by @zsdonghao in #588)
    • Error in tl.layers.TernaryConv2d fixed - self.inputs not defined (by @DEKHTIARJonathan in #658)
    • Deprecation warning fixed in tl.layers.binary._compute_threshold() (by @DEKHTIARJonathan in #658)
    • All references to tf.logging replaced by tl.logging (by @DEKHTIARJonathan in #661)
    • Duplicated code removed when bias was used (by @DEKHTIARJonathan in #667)
    • Tutorial:
      • tutorial_word2vec_basic.py saving issue #476 fixed (by @DEKHTIARJonathan in #635)
      • All tutorials tested and errors have been fixed (by @DEKHTIARJonathan in #635)

    Dependencies Update

    • Update pytest from 3.5.1 to 3.6.0 (by @DEKHTIARJonathan and @pyup-bot in #647)
    • Update progressbar2 from 3.37.1 to 3.38.0 (by @DEKHTIARJonathan and @pyup-bot in #651)
    • Update scikit-image from 0.13.1 to 0.14.0 (by @DEKHTIARJonathan and @pyup-bot in #656)

    Contributors

    @lgarithm @DEKHTIARJonathan @2wins @One-sixth @zsdonghao @luomai

    Source code(tar.gz)
    Source code(zip)
    tensorlayer-1.8.6rc1.tar.gz(173.70 KB)
    tensorlayer-1.8.6rc1-py3-none-any.whl(204.79 KB)
    tensorlayer-1.8.6rc1-py2.py3-none-any.whl(204.80 KB)
    PKG-INFO(12.54 KB)
  • 1.8.6rc0(Jun 1, 2018)

    Changelog

    Added

    • API:
      • tl.alphas and tl.alphas_like added following the tf.ones/zeros and tf.zeros_like/ones_like (by @DEKHTIARJonathan in #580)
    • CI Tool:
      • Stale Probot added to clean stale issues (by @DEKHTIARJonathan in #573)
      • Changelog Probot Configuration added (by @DEKHTIARJonathan in #637)
      • Travis Builds now handling a matrix of TF Version from TF==1.6.0 to TF==1.8.0 (by @DEKHTIARJonathan in #644)
      • CircleCI added to build and upload Docker Containers for each PR merged and tag release (by @DEKHTIARJonathan in #648)
    • Decorator:
      • tl.decorators API created including deprecated_alias and private_method (by @DEKHTIARJonathan in #660)
    • Docker:
      • Containers for each release and for each PR merged on master built (by @DEKHTIARJonathan in #648)
      • Containers built in the following configurations (by @DEKHTIARJonathan in #648):
        • py2 + cpu
        • py2 + gpu
        • py3 + cpu
        • py3 + gpu
    • Documentation:
      • Release semantic version added on index page (by @DEKHTIARJonathan in #633)
      • Optimizers page added (by @DEKHTIARJonathan in #636)
      • AMSGrad added on the Optimizers page (by @DEKHTIARJonathan in #636)
    • Layer:
      • ElementwiseLambdaLayer added to use custom function to connect multiple layer inputs (by @One-sixth in #579)
    • Optimizer:
      • AMSGrad Optimizer added based on On the Convergence of Adam and Beyond (ICLR 2018) (by @DEKHTIARJonathan in #636)
    • Setup:
      • Creation of installation flags all, all_cpu, and all_gpu (by @DEKHTIARJonathan in #660)
    • Test:
      • test_utils_predict.py added to reproduce and fix issue #288 (by @2wins in #566)
      • Layer_DeformableConvolution_Test added to reproduce issue #572 with deformable convolution (by @DEKHTIARJonathan in #573)
      • Array_Op_Alphas_Test and Array_Op_Alphas_Like_Test added to test tensorlayer/array_ops.py file (by @DEKHTIARJonathan in #580)
      • test_optimizer_amsgrad.py added to test AMSGrad optimizer (by @DEKHTIARJonathan in #636)
      • test_logging.py added to ensure robustness of the logging API (by @DEKHTIARJonathan in #645)
      • test_decorators.py added (by @DEKHTIARJonathan in #660)
    • Tutorials:
      • tutorial_tfslim has been introduced to show how to use SlimNetsLayer (by @2wins in #560).

    Changed

    • TensorFlow CPU & GPU dependencies moved to separate requirements files so that PyUP.io can parse them (by @DEKHTIARJonathan in #573)
    • LambdaLayer documentation updated to link it with ElementwiseLambdaLayer (by @zsdonghao in #587)
    • RTD links point to the stable documentation instead of the latest version used for development (by @DEKHTIARJonathan in #633)
    • TF versions older than 1.6.0 are officially unsupported and raise an exception (by @DEKHTIARJonathan in #644)
    • README badges updated with the supported Python and TensorFlow versions (by @DEKHTIARJonathan in #644)
    • TL logging API made consistent with the TF logging API and thread-safe (by @DEKHTIARJonathan in #645)
    • Relative imports changed to absolute imports (by @DEKHTIARJonathan in #657)
    • tl.files refactored into a directory with numerous files (by @DEKHTIARJonathan in #657)
    • tl.files.voc_dataset fixed because the original Pascal VOC website was down (by @DEKHTIARJonathan in #657)
    • Extra requirements hidden inside the library added to the project requirements (by @DEKHTIARJonathan in #657)
    • Requirements files refactored into the requirements/ directory (by @DEKHTIARJonathan in #657)
    • README.md and other markdown files have been refactored and cleaned (by @zsdonghao @DEKHTIARJonathan @luomai in #639)
    • Ternary convolution layer added to the unit tests (by @DEKHTIARJonathan in #658)
    • Convolution layer unit tests cleaned and refactored (by @DEKHTIARJonathan in #658)
    • All tests now use DEBUG-level verbosity when run individually (by @DEKHTIARJonathan in #660)

    Fixed

    • Issue #498 - Deprecation Warning Fix in tl.layers.RNNLayer with inspect (by @DEKHTIARJonathan in #574)
    • Issue #498 - Deprecation Warning Fix in tl.files with truth value of an empty array is ambiguous (by @DEKHTIARJonathan in #575)
    • Issue #565 related to tl.utils.predict fixed - np.hstack problem in which the results for multiple batches are stacked along axis=1 (by @2wins in #566)
    • Issue #572 with tl.layers.DeformableConv2d fixed (by @DEKHTIARJonathan in #573)
    • Typo in the ElementwiseLambdaLayer documentation fixed (by @zsdonghao in #588)
    • Error in tl.layers.TernaryConv2d fixed - self.inputs not defined (by @DEKHTIARJonathan in #658)
    • Deprecation warning fixed in tl.layers.binary._compute_threshold() (by @DEKHTIARJonathan in #658)
    • All references to tf.logging replaced by tl.logging (by @DEKHTIARJonathan in #661)
    • Tutorial:
      • tutorial_word2vec_basic.py saving issue #476 fixed (by @DEKHTIARJonathan in #635)
      • All tutorials tested and errors have been fixed (by @DEKHTIARJonathan in #635)

    Dependencies Update

    • Update pytest from 3.5.1 to 3.6.0 (by @DEKHTIARJonathan and @pyup-bot in #647)
    • Update progressbar2 from 3.37.1 to 3.38.0 (by @DEKHTIARJonathan and @pyup-bot in #651)
    • Update scikit-image from 0.13.1 to 0.14.0 (by @DEKHTIARJonathan and @pyup-bot in #656)

    Contributors

    @lgarithm @DEKHTIARJonathan @2wins @One-sixth @zsdonghao @luomai

    Source code(tar.gz)
    Source code(zip)
    tensorlayer-1.8.6rc0.tar.gz(172.88 KB)
    tensorlayer-1.8.6rc0-py3-none-any.whl(203.85 KB)
    tensorlayer-1.8.6rc0-py2.py3-none-any.whl(203.86 KB)
    PKG-INFO(12.66 KB)
  • 1.8.5(May 9, 2018)

    Added

    • Github Templates added (by @DEKHTIARJonathan)
      • New issues Template
      • New PR Template
    • Travis Deploy Automation on new Tag (by @DEKHTIARJonathan)
      • Deploy to PyPI and create a new version.
      • Deploy to Github Releases and upload the wheel files
    • PyUP.io has been added to ensure we are compatible with the latest libraries (by @DEKHTIARJonathan)
    • deconv2d now handling dilation_rate (by @zsdonghao)
    • Documentation unittest added (by @DEKHTIARJonathan)
    • test_layers_core has been added to ensure that LayersConfig is abstract.

    Changed

    • All tests refactored - now using unittest and run with PyTest (by @DEKHTIARJonathan)
    • Documentation updated (by @zsdonghao)
    • Package setup refactored (by @DEKHTIARJonathan)
    • Dataset download now uses the progressbar2 library (by @DEKHTIARJonathan)
    • deconv2d function transformed into a class (by @zsdonghao)
    • conv1d function transformed into a class (by @zsdonghao)
    • Super resolution functions transformed into classes (by @zsdonghao)
    • YAPF coding style improved and enforced (by @DEKHTIARJonathan)

    Fixed

    • Backward Compatibility Restored with deprecation warnings (by @DEKHTIARJonathan)
    • Tensorflow Deprecation Fix (Issue #498):
      • AverageEmbeddingInputlayer (by @zsdonghao)
      • load_mpii_pose_dataset (by @zsdonghao)
    • maxPool2D initializer issue #551 (by @zsdonghao)
    • LayersConfig class has been enforced as abstract
    • Pooling Layer Issue #557 fixed (by @zsdonghao)

    Dependencies Update

    • scipy>=1.0,<1.1 => scipy>=1.1,<1.2

    Contributors

    @zsdonghao @luomai @DEKHTIARJonathan

    Source code(tar.gz)
    Source code(zip)
    tensorlayer-1.8.5-py3-none-any.whl(170.06 KB)
    tensorlayer-1.8.5.tar.gz(164.66 KB)
    tensorlayer-1.8.5-py2.py3-none-any.whl(170.06 KB)
    PKG-INFO(28.05 KB)
  • 1.8.5rc2(Apr 22, 2018)

    Changelog

    • Restored backward compatibility with deprecation warnings (by @DEKHTIARJonathan)
    • All tests refactored - now using unittest and run with PyTest (by @DEKHTIARJonathan)
    • Github Templates added (by @DEKHTIARJonathan)
      • New issues Template
      • New PR Template
    • Documentation updated (by @zsdonghao)
    • Package Setup Refactored (by @DEKHTIARJonathan)
    • deconv2d function transformed into Class (by @zsdonghao)
    • conv1d function transformed into Class (by @zsdonghao)
    • deconv2d now handling dilation_rate
    • Travis Deploy Automation on new Tag (by @DEKHTIARJonathan)
      • Deploy to PyPI and create a new version.
      • Deploy to Github Releases and upload the wheel files
    • Dataset download now uses the progressbar2 library (by @DEKHTIARJonathan)
    • Tensorflow Deprecation Fix (Issue #498):
      • AverageEmbeddingInputlayer (by @zsdonghao)
      • load_mpii_pose_dataset (by @zsdonghao)
    • Super resolution updated from function to class (by @zsdonghao)
    • YAPF coding style improved and enforced (by @DEKHTIARJonathan)
    • Documentation unittest added (by @DEKHTIARJonathan)

    Contributors:

    @zsdonghao @luomai @DEKHTIARJonathan

    Source code(tar.gz)
    Source code(zip)
    tensorlayer-1.8.5rc2.tar.gz(162.36 KB)
    tensorlayer-1.8.5rc2-py3-none-any.whl(176.20 KB)
    tensorlayer-1.8.5rc2-py2.py3-none-any.whl(176.21 KB)
    PKG-INFO(27.08 KB)
  • 1.8.5rc1(Apr 17, 2018)

    Changelog

    • Restored backward compatibility with deprecation warnings (by @DEKHTIARJonathan)
    • All tests refactored - now using unittest and run with PyTest (by @DEKHTIARJonathan)
    • Github Templates added (by @DEKHTIARJonathan)
      • New issues Template
      • New PR Template
    • Documentation updated (by @zsdonghao)
    • Package Setup Refactored (by @DEKHTIARJonathan)
    • deconv2d function transformed into Class (by @zsdonghao)
    • conv1d function transformed into Class (by @zsdonghao)
    • deconv2d now handling dilation_rate
    • Travis Deploy Automation on new Tag (by @DEKHTIARJonathan)
      • Deploy to PyPI and create a new version.
      • Deploy to Github Releases and upload the wheel files

    Contributors:

    @zsdonghao @luomai @DEKHTIARJonathan

    Source code(tar.gz)
    Source code(zip)
    tensorlayer-1.8.5rc1.tar.gz(158.27 KB)
    tensorlayer-1.8.5rc1-py3-none-any.whl(175.54 KB)
    tensorlayer-1.8.5rc1-py2.py3-none-any.whl(175.54 KB)
    PKG-INFO(27.04 KB)
  • 1.8.5rc0(Apr 17, 2018)

    Changelog

    • Restored backward compatibility with deprecation warnings (by @DEKHTIARJonathan)
    • All tests refactored - now using unittest and run with PyTest (by @DEKHTIARJonathan)
    • Github Templates added for
      • New issues
      • New PR
    • Documentation updated (by @zsdonghao)
    • Package Setup Refactored (by @DEKHTIARJonathan)

    Contributors:

    @zsdonghao @luomai @DEKHTIARJonathan

    Source code(tar.gz)
    Source code(zip)
    tensorlayer-1.8.5rc0-py3-none-any.whl(170.48 KB)
    tensorlayer-1.8.5rc0.tar.gz(167.53 KB)
    tensorlayer-1.8.5rc0-py2.py3-none-any.whl(170.48 KB)
    PKG-INFO(27.88 KB)
  • 1.8.4(Apr 13, 2018)

    New Support

    • Release experimental APIs to download and visualize MPII dataset (Pose Estimation) in one line of code (by @zsdonghao)
    >>> import pprint
    >>> import tensorlayer as tl
    >>> img_train_list, ann_train_list, img_test_list, ann_test_list = tl.files.load_mpii_pose_dataset()
    >>> image = tl.vis.read_image(img_train_list[0])
    >>> tl.vis.draw_mpii_pose_to_image(image, ann_train_list[0], 'image.png')
    >>> pprint.pprint(ann_train_list[0])
    
    • Release tl.models API - Provides pre-trained VGG16, SqueezeNet and MobileNetV1 in one line of code (by @lgarithm @zsdonghao), more models will be provided soon!

    Classify ImageNet classes, see tutorial_models_mobilenetv1.py

    >>> import tensorflow as tf
    >>> import tensorlayer as tl
    >>> x = tf.placeholder(tf.float32, [None, 224, 224, 3])
    >>> # get the whole model
    >>> net = tl.models.MobileNetV1(x)
    >>> # restore pre-trained parameters
    >>> sess = tf.InteractiveSession()
    >>> net.restore_params(sess)
    >>> # use for inference
    >>> probs = tf.nn.softmax(net.outputs)
    

    Extract features and train a classifier with 100 classes

    >>> import tensorflow as tf
    >>> import tensorlayer as tl
    >>> from tensorlayer.layers import Conv2d, FlattenLayer
    >>> x = tf.placeholder(tf.float32, [None, 224, 224, 3])
    >>> # get model without the last layer
    >>> cnn = tl.models.MobileNetV1(x, end_with='reshape')
    >>> # add one more layer
    >>> net = Conv2d(cnn, 100, (1, 1), (1, 1), name='out')
    >>> net = FlattenLayer(net, name='flatten')
    >>> # initialize all parameters
    >>> sess = tf.InteractiveSession()
    >>> tl.layers.initialize_global_variables(sess)
    >>> # restore pre-trained parameters
    >>> cnn.restore_params(sess)
    >>> # train your own classifier (only update the last layer)
    >>> train_params = tl.layers.get_variables_with_name('out')
    

    Reuse model

    >>> import tensorflow as tf
    >>> import tensorlayer as tl
    >>> x1 = tf.placeholder(tf.float32, [None, 224, 224, 3])
    >>> x2 = tf.placeholder(tf.float32, [None, 224, 224, 3])
    >>> # get network without the last layer
    >>> net1 = tl.models.MobileNetV1(x1, end_with='reshape')
    >>> # reuse the parameters with different input
    >>> net2 = tl.models.MobileNetV1(x2, end_with='reshape', reuse=True)
    >>> # restore pre-trained parameters (as they share parameters, we don't need to restore net2)
    >>> sess = tf.InteractiveSession()
    >>> net1.restore_params(sess)
    

    New Example

    • TensorFlow Dataset API for VOC dataset augmentation here (by @zsdonghao)

    New Update

    • Update tl.iterate.minibatch to support list input (by @zsdonghao)
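
    A hedged usage sketch of the change above, assuming the iterator is tl.iterate.minibatches(inputs, targets, batch_size, shuffle) and that plain Python lists (e.g. lists of file paths) are now accepted as inputs; check the API reference for the exact signature.

    >>> import numpy as np
    >>> import tensorlayer as tl
    >>> X = ['sample_%d.png' % i for i in range(10)]  # a Python list, not a numpy array
    >>> y = np.arange(10)
    >>> for X_batch, y_batch in tl.iterate.minibatches(X, y, batch_size=4, shuffle=False):
    ...     print(X_batch, y_batch)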

    API Change Log

    @DEKHTIARJonathan give a list of API change log here https://github.com/tensorlayer/tensorlayer/issues/479

      1. Layer API Change

    As it is an absolutely central class, any change here leads to changes everywhere. If a modification is made here, it should come with a deprecation warning.

    ## Before
    layer = tl.layers.BatchNormLayer(layer=layer)
    layer = tl.layers.PReluLayer(layer=layer)

    ## Now
    layer = tl.layers.BatchNormLayer(prev_layer=layer)
    layer = tl.layers.PReluLayer(prev_layer=layer)
    

    Commit that introduced this change: b2e6cccd53bd6c076c32421b8c4d562a96437524

    Why was the API changed? As you may guess, this change alone led to many projects raising errors and needing to be updated. We struggle to keep tutorials and examples working with TL, and this change does not help with backward compatibility.
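
    A hedged sketch of how such a rename could keep old code working while warning users, in the spirit of the deprecated_alias decorator later added to tl.decorators (this is not the library's actual implementation):

    import functools
    import warnings

    def deprecated_alias(**aliases):
        """Map deprecated keyword arguments to new names, e.g. deprecated_alias(layer='prev_layer')."""
        def decorator(func):
            @functools.wraps(func)
            def wrapper(*args, **kwargs):
                for old_name, new_name in aliases.items():
                    if old_name in kwargs:
                        warnings.warn("`%s` is deprecated, use `%s` instead" % (old_name, new_name),
                                      DeprecationWarning)
                        kwargs[new_name] = kwargs.pop(old_name)
                return func(*args, **kwargs)
            return wrapper
        return decorator

    Decorated with deprecated_alias(layer='prev_layer'), a layer constructor would still accept the old layer= keyword but emit a warning pointing to prev_layer.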

      2. DeConv2d API Change
    ## Before
    tl.layers.DeConv2d(layer=layer, n_out_channel=16)

    ## Now
    tl.layers.DeConv2d(layer=layer, n_filter=16)
    

    Here we have two problems:

    1. This layer now has an API that is inconsistent with the rest of the TL library (it uses layer instead of prev_layer).
    2. Again, there is no deprecation warning for the change from n_out_channel to n_filter, which may immediately break most GANs/AEs without a fix.
      3. Reuse Variable Scope

    You have correctly mentioned a deprecation warning; however, it would be better to also give an appropriate fix rather than just saying "it's deprecated, deal with it now!"

    I give you an example:

    # `reuse` is a bool set by the caller: False when building the graph for the first time, True when reusing it
    with tf.variable_scope("my_scope", reuse=reuse) as scope:
        # tl.layers.set_name_reuse(reuse) # deprecated
        if reuse:
            scope.reuse_variables()
    

    This is quite easy to add inside the deprecation warning, and it gives users a simple solution to fix the issue.

      4. No mention in the changelog of the ReshapeLayer API change
    ## Before
    layer = tl.layers.ReshapeLayer(
        layer,
        shape = [-1, 256, 256, 3]
    )
    
    ## Now
    layer = tl.layers.ReshapeLayer(
        layer,
        shape = (-1, 256, 256, 3) # Must use a tuple, a list is not accepted anymore
    )
    
    Source code(tar.gz)
    Source code(zip)