An attempt at implementing a simple GAN using Keras


Simple GAN

This is my attempt at a wrapper class for a GAN in Keras that abstracts away the whole architecture-building process.


Overview


Flow Chart

Setting up a Generative Adversarial Network involves a discriminator and a generator working in tandem, with the ultimate goal that the generator produces samples the discriminator cannot distinguish from valid (real) samples.
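For intuition, here is a minimal sketch of the adversarial training step that a wrapper like this automates. The architectures, sizes, and hyperparameters below are illustrative assumptions, not the ones SimpleGAN actually uses:

import numpy as np
from keras.layers import Dense, Input
from keras.models import Model, Sequential

latent_dim, data_dim = 100, 784

# Toy generator and discriminator (illustrative only).
generator = Sequential([Dense(128, activation='relu', input_dim=latent_dim),
                        Dense(data_dim, activation='tanh')])
discriminator = Sequential([Dense(128, activation='relu', input_dim=data_dim),
                            Dense(1, activation='sigmoid')])
discriminator.compile(optimizer='adam', loss='binary_crossentropy')

# Combined model: the generator is trained to fool a frozen discriminator.
discriminator.trainable = False
z = Input(shape=(latent_dim,))
combined = Model(z, discriminator(generator(z)))
combined.compile(optimizer='adam', loss='binary_crossentropy')

def train_step(real_batch, batch_size=32):
    noise = np.random.normal(0, 1, (batch_size, latent_dim))
    fake_batch = generator.predict(noise)
    # 1. Teach the discriminator to separate real from generated samples.
    discriminator.train_on_batch(real_batch, np.ones((batch_size, 1)))
    discriminator.train_on_batch(fake_batch, np.zeros((batch_size, 1)))
    # 2. Teach the generator to make the discriminator answer "real".
    combined.train_on_batch(noise, np.ones((batch_size, 1)))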


Installation

    pip install adversarials

Example

import numpy as np
from keras.datasets import mnist

from adversarials.core import Log
from adversarials import SimpleGAN

if __name__ == '__main__':
    (X_train, _), (_, _) = mnist.load_data()

    # Rescale -1 to 1
    X_train = (X_train.astype(np.float32) - 127.5) / 127.5
    X_train = np.expand_dims(X_train, axis=3)

    Log.info('X_train.shape = {}'.format(X_train.shape))

    gan = SimpleGAN(save_to_dir="./assets/images",
                    save_interval=20)
    gan.train(X_train, epochs=40)
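
With save_to_dir set as above, generated samples are written to ./assets/images every save_interval epochs. A quick way to eyeball the latest snapshots (this assumes the wrapper saves PNG files; the naming scheme is not guaranteed):

import glob
import matplotlib.image as mpimg
import matplotlib.pyplot as plt

# Show the four most recently saved sample images.
for path in sorted(glob.glob('./assets/images/*.png'))[-4:]:
    plt.figure()
    plt.imshow(mpimg.imread(path), cmap='gray')
    plt.title(path)
plt.show()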

Documentation

GitHub Pages

Credits

Contribution

You are very welcome to modify this code and use it in your own projects.

Please keep a link to the original repository. If you have made a fork with substantial modifications that you feel may be useful, then please open a new issue on GitHub with a link and short description.

License (MIT)

This project is released under the MIT License, which allows very broad use for both academic and commercial purposes.

A few of the images used for demonstration purposes may be under copyright; they are included here under fair use.

Todo

  • Add the ability to view training of the discriminator and generator simultaneously using TensorBoard
  • Provision for parallel data processing and multithreading
  • Save models to Protobuf files
  • Use `tf.GraphDef` and other techniques that could speed up training and inference

Comments
  • Project Refactor


    I've made some modifications to project files and folders.

    • Added Continuous Integration / Continuous Delivery (with Travis CI).
    • Logging (color) & File utilities.
    • Configuration files (logger, model & global configs).
    • Base model & object verbosity.
    • Debugging capabilities.
    • Code design upgrades.
    • Documentation.
    opened by victor-iyi
  • Initial Update


    This PR sets up pyup.io on this repo and updates all dependencies at once, in a single branch.

    Subsequent pull requests will update one dependency at a time, each in their own branch. If you want to start with that right away, simply close this PR.

    Update keras from 2.2.4 to 2.2.4.

    Changelog

    2.2.4

    This is a bugfix release, addressing two issues:
    
    - Ability to save a model when a file with the same name already exists.
    - Issue with loading legacy config files for the `Sequential` model.
    
    [See here](https://github.com/keras-team/keras/releases/tag/2.2.3) for the changelog since 2.2.2.
    

    2.2.3

    Areas of improvement
    
    - API completeness & usability improvements
    - Bug fixes
    - Documentation improvements
    
    API changes
    
    - Keras models can now be safely pickled.
    - Consolidate the functionality of the activation layers `ThresholdedReLU` and `LeakyReLU` into the `ReLU` layer.
    - As a result, the `ReLU` layer now takes new arguments `negative_slope` and `threshold`, and the `relu` function in the backend takes a new `threshold` argument (see the sketch after this list).
    - Add `update_freq` argument in `TensorBoard` callback, controlling how often to write TensorBoard logs.
    - Add the `exponential` function to `keras.activations`.
    - Add `data_format` argument in all 4 `Pooling1D` layers.
    - Add `interpolation` argument in `UpSampling2D` layer and in `resize_images` backend function, supporting modes `"nearest"` (previous behavior, and new default) and `"bilinear"` (new).
    - Add `dilation_rate` argument in `Conv2DTranspose` layer and in `conv2d_transpose` backend function.
    - The `LearningRateScheduler` now receives the `lr` key as part of the `logs` argument in `on_epoch_end` (current value of the learning rate).
    - Make `GlobalAveragePooling1D` layer support masking.
    - The `filepath` argument of `save_model` and `model.save()` can now be a `h5py.Group` instance.
    - Add argument `restore_best_weights` to `EarlyStopping` callback (optionally reverts to the weights that obtained the highest monitored score value).
    - Add `dtype` argument to `keras.utils.to_categorical`.
    - Support `run_options` and `run_metadata` as optional session arguments in `model.compile()` for the TensorFlow backend.
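
    A rough sketch of the consolidated `ReLU` layer described above (the argument values are illustrative):

    from keras.layers import ReLU

    # Roughly equivalent to LeakyReLU(alpha=0.1):
    leaky = ReLU(negative_slope=0.1)

    # Roughly equivalent to ThresholdedReLU(theta=1.0):
    thresholded = ReLU(threshold=1.0)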
    
    Breaking changes
    
    - Modify the return value of `Sequential.get_config()`. Previously, the return value was a list of the config dictionaries of the layers of the model. Now, the return value is a dictionary with keys `layers`, `name`, and an optional key `build_input_shape`. The old config is equivalent to `new_config['layers']`. This makes the output of `get_config` consistent across all model classes.
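
    A sketch of adapting to this `get_config` change (the model shown is arbitrary):

    from keras.layers import Dense
    from keras.models import Sequential

    model = Sequential([Dense(4, input_shape=(8,))])
    config = model.get_config()

    # Pre-2.2.3 code that used the return value of get_config() directly
    # should now read the 'layers' key; the old return value equals it.
    layer_configs = config['layers']
    model_name = config['name']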
    
    
    Credits
    
    Thanks to our 38 contributors whose commits are featured in this release:
    
    BertrandDechoux, ChrisGll, Dref360, JamesHinshelwood, MarcoAndreaBuchmann, ageron, alfasst, blue-atom, chasebrignac, cshubhamrao, danFromTelAviv, datumbox, farizrahman4u, fchollet, fuzzythecat, gabrieldemarmiesse, hadifar, heytitle, hsgkim, jankrepl, joelthchao, knightXun, kouml, linjinjin123, lvapeab, nikoladze, ozabluda, qlzh727, roywei, rvinas, sriyogesh94, tacaswell, taehoonlee, tedyu, xuhdev, yanboliang, yongzx, yuanxiaosc
    

    2.2.2

    This is a bugfix release, fixing a significant bug in `multi_gpu_model`.
    
    For changes since version 2.2.0, see release notes for [Keras 2.2.1](https://github.com/keras-team/keras/releases/tag/2.2.1).
    

    2.2.1

    Areas of improvement
    
    - Bug fixes
    - Performance improvements
    - Documentation improvements
    
    API changes
    
    - Add `output_padding` argument in `Conv2DTranspose` (to override default padding behavior).
    - Enable automatic shape inference when using Lambda layers with the CNTK backend.
    
    Breaking changes
    
    No breaking changes recorded.
    
    Credits
    
    Thanks to our 33 contributors whose commits are featured in this release:
    
    Ajk4, Anner-deJong, Atcold, Dref360, EyeBool, ageron, briannemsick, cclauss, davidtvs, dstine, eTomate, ebatuhankaynak, eliberis, farizrahman4u, fchollet, fuzzythecat, gabrieldemarmiesse, jlopezpena, kamil-kaczmarek, kbattocchi, kmader, kvechera, maxpumperla, mkaze, pavithrasv, rvinas, sachinruk, seriousmac, soumyac1999, taehoonlee, yanboliang, yongzx, yuyang-huang
    

    2.2.0

    Areas of improvement
    
    - New model definition API: `Model` subclassing.
    - New input mode: ability to call models on TensorFlow tensors directly (TensorFlow backend only).
    - Improve feature coverage of Keras with the Theano and CNTK backends.
    - Bug fixes and performance improvements.
    - Large refactors improving code structure, code health, and reducing test time. In particular:
    * The Keras engine now follows a much more modular structure.
    * The `Sequential` model is now a plain subclass of `Model`.
    * The modules `applications` and `preprocessing` are now externalized to their own repositories ([keras-applications](https://github.com/keras-team/keras-applications) and [keras-preprocessing](https://github.com/keras-team/keras-preprocessing)).
    
    API changes
    
    - Add `Model` subclassing API (details below).
    - Allow symbolic tensors to be fed to models, with TensorFlow backend (details below).
    - Enable CNTK and Theano support for layers `SeparableConv1D`, `SeparableConv2D`, as well as backend methods `separable_conv1d` and `separable_conv2d` (previously only available for TensorFlow).
    - Enable CNTK and Theano support for applications `Xception` and `MobileNet` (previously only available for TensorFlow).
    - Add `MobileNetV2` application  (available for all backends).
    - Enable loading external (non built-in) backends by changing your `~/.keras.json` configuration file (e.g. PlaidML backend).
    - Add `sample_weight` in `ImageDataGenerator`.
    - Add `preprocessing.image.save_img` utility to write images to disk.
    - Default `Flatten` layer's `data_format` argument to `None` (which defaults to global Keras config).
    - `Sequential` is now a plain subclass of `Model`. The attribute `sequential.model` is deprecated.
    - Add `baseline` argument in `EarlyStopping` (stop training if a given baseline isn't reached).
    - Add `data_format` argument to `Conv1D`.
    - Make the model returned by `multi_gpu_model` serializable.
    - Support input masking in `TimeDistributed` layer.
    - Add an `advanced_activation` layer `ReLU`, making the ReLU activation easier to configure while retaining easy serialization capabilities.
    - Add `axis=-1` argument in backend crossentropy functions specifying the class prediction axis in the input tensor.
    
    New model definition API: `Model` subclassing
    
    In addition to the `Sequential` API and the functional `Model` API, you may now define models by subclassing the `Model` class and writing your own `call` forward pass:
    
    import keras
    
    class SimpleMLP(keras.Model):

        def __init__(self, use_bn=False, use_dp=False, num_classes=10):
            super(SimpleMLP, self).__init__(name='mlp')
            self.use_bn = use_bn
            self.use_dp = use_dp
            self.num_classes = num_classes

            self.dense1 = keras.layers.Dense(32, activation='relu')
            self.dense2 = keras.layers.Dense(num_classes, activation='softmax')
            if self.use_dp:
                self.dp = keras.layers.Dropout(0.5)
            if self.use_bn:
                self.bn = keras.layers.BatchNormalization(axis=-1)

        def call(self, inputs):
            x = self.dense1(inputs)
            if self.use_dp:
                x = self.dp(x)
            if self.use_bn:
                x = self.bn(x)
            return self.dense2(x)
    
    model = SimpleMLP()
    model.compile(...)
    model.fit(...)
    
    
    Layers are defined in `__init__(self, ...)`, and the forward pass is specified in `call(self, inputs)`. In `call`, you may specify custom losses by calling `self.add_loss(loss_tensor)` (like you would in a custom layer).
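
    For example, a minimal sketch of calling `self.add_loss` inside a subclassed model (the penalty term here is an illustrative assumption):

    from keras import backend as K

    class MLPWithActivityPenalty(keras.Model):

        def __init__(self, num_classes=10):
            super(MLPWithActivityPenalty, self).__init__(name='mlp_penalty')
            self.dense1 = keras.layers.Dense(32, activation='relu')
            self.dense2 = keras.layers.Dense(num_classes, activation='softmax')

        def call(self, inputs):
            x = self.dense1(inputs)
            # Custom loss term, minimized alongside the compiled loss.
            self.add_loss(1e-3 * K.sum(K.square(x)))
            return self.dense2(x)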
    
    New input mode: symbolic TensorFlow tensors
    
    With Keras 2.2.0 and TensorFlow 1.8 or higher, you may `fit`, `evaluate` and `predict` using symbolic TensorFlow tensors (that are expected to yield data indefinitely). The API is similar to the one in use in `fit_generator` and other generator methods:
    
    iterator = training_dataset.make_one_shot_iterator()
    x, y = iterator.get_next()
    
    model.fit(x, y, steps_per_epoch=100, epochs=10)
    
    iterator = validation_dataset.make_one_shot_iterator()
    x, y = iterator.get_next()
    model.evaluate(x, y, steps=50)
    
    
    This is achieved by dynamically rewiring the TensorFlow graph to feed the input tensors to the existing model placeholders. There is no performance loss compared to building your model on top of the input tensors in the first place.
    
    
    Breaking changes
    
    - Remove legacy `Merge` layers and associated functionality (remnant of Keras 0), which were deprecated in May 2016, with full removal initially scheduled for August 2017. Models from the Keras 0 API using these layers cannot be loaded with Keras 2.2.0 and above.
    - The `truncated_normal` base initializer now returns values that are scaled by ~0.9 (resulting in correct variance value after truncation). This has a small chance of affecting initial convergence behavior on some models.
    
    
    Credits
    
    Thanks to our 46 contributors whose commits are featured in this release:
    
    ASvyatkovskiy, AmirAlavi, Anirudh-Swaminathan, DavidAriel, Dref360, JonathanCMitchell, KuzMenachem, PeterChe1990, Saharkakavand, StefanoCappellini, ageron, askskro, bileschi, bonlime, bottydim, brge17, briannemsick, bzamecnik, christian-lanius, clemens-tolboom, dschwertfeger, dynamicwebpaige, farizrahman4u, fchollet, fuzzythecat, ghostplant, giuscri, huyu398, jnphilipp, masstomato, morenoh149, mrTsjolder, nittanycolonial, r-kellerm, reidjohnson, roatienza, sbebo, stevemurr, taehoonlee, tiferet, tkoivisto, tzerrell, vkk800, wangkechn, wouterdobbels, zwang36wang
    

    2.1.6

    Areas of improvement
    
    - Bug fixes
    - Documentation improvements
    - Minor usability improvements
    
    API changes
    
    - In callback `ReduceLROnPlateau`, rename `epsilon` argument to `min_delta` (backwards-compatible).
    - In callback `RemoteMonitor`, add argument `send_as_json`.
    - In backend `softmax` function, add argument `axis`.
    - In `Flatten` layer, add argument `data_format`.
    - In `save_model` (`Model.save`) and `load_model` functions, allow the `filepath` argument to be a `h5py.File` object.
    - In `Model.evaluate_generator`, add `verbose` argument.
    - In `Bidirectional` wrapper layer, add `constants` argument.
    - In `multi_gpu_model` function, add arguments `cpu_merge` and `cpu_relocation` (controlling whether to force the template model's weights to be on CPU, and whether to operate merge operations on CPU or GPU).
    - In `ImageDataGenerator`, allow argument `width_shift_range` to be `int` or 1D array-like.
    
    Breaking changes
    
    This release does not include any known breaking changes.
    
    Credits
    
    Thanks to our 37 contributors whose commits are featured in this release:
    
    Dref360, FirefoxMetzger, Naereen, NiharG15, StefanoCappellini, WindQAQ, dmadeka, edrogers, eltronix, farizrahman4u, fchollet, gabrieldemarmiesse, ghostplant, jedrekfulara, jlherren, joeyearsley, johanahlqvist, johnyf, jsaporta, kalkun, lucasdavid, masstomato, mrlzla, myutwo150, nisargjhaveri, obi1kenobi, olegantonyan, ozabluda, pasky, planck35, sotlampr, souptc, srjoglekar246, stamate, taehoonlee, vkk800, xuhdev
    

    2.1.5

    Areas of improvement
    
    - Bug fixes.
    - New APIs: sequence generation API `TimeseriesGenerator`, and new layer `DepthwiseConv2D`.
    - Unit tests / CI improvements.
    - Documentation improvements.
    
    API changes
    
    - Add new sequence generation API `keras.preprocessing.sequence.TimeseriesGenerator`.
    - Add new convolutional layer `keras.layers.DepthwiseConv2D`.
    - Allow weights from `keras.layers.CuDNNLSTM` to be loaded into a `keras.layers.LSTM` layer (e.g. for inference on CPU).
    - Add `brightness_range` data augmentation argument in `keras.preprocessing.image.ImageDataGenerator`.
    - Add `validation_split` API in `keras.preprocessing.image.ImageDataGenerator`. You can pass `validation_split` to the constructor (float), then select between training/validation subsets by passing the argument `subset='validation'` or `subset='training'` to methods `flow` and `flow_from_directory`.
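
    A sketch of the `validation_split` workflow described above (the directory path and parameters are illustrative):

    from keras.preprocessing.image import ImageDataGenerator

    datagen = ImageDataGenerator(rescale=1. / 255, validation_split=0.2)

    # Same directory, disjoint subsets selected via `subset`.
    train_gen = datagen.flow_from_directory('data/train', subset='training')
    val_gen = datagen.flow_from_directory('data/train', subset='validation')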
    
    Breaking changes
    
    - As a side effect of a refactor of `ConvLSTM2D` to a modular implementation, recurrent dropout support in Theano has been dropped for this layer.
    
    Credits
    
    Thanks to our 28 contributors whose commits are featured in this release:
    
    DomHudson, Dref360, VitamintK, abrad1212, ahundt, bojone, brainnoise, bzamecnik, caisq, cbensimon, davinnovation, farizrahman4u, fchollet, gabrieldemarmiesse, khosravipasha, ksindi, lenjoy, masstomato, mewwts, ozabluda, paulpister, sandpiturtle, saralajew, srjoglekar246, stefangeneralao, taehoonlee, tiangolo, treszkai
    

    2.1.4

    Areas of improvement
    
    - Bug fixes
    - Performance improvements
    - Improvements to example scripts
    
    API changes
    
    - Allow for stateful metrics in `model.compile(..., metrics=[...])`. A stateful metric inherits from `Layer`, and implements `__call__` and `reset_states`.
    - Support `constants` argument in `StackedRNNCells`.
    - Enable some TensorBoard features in the `TensorBoard` callback (loss and metrics plotting) with non-TensorFlow backends.
    - Add `reshape` argument in `model.load_weights()`, to optionally reshape weights being loaded to the size of the target weights in the model considered.
    - Add `tif` to supported formats in `ImageDataGenerator`.
    - Allow auto-GPU selection in `multi_gpu_model()` (set `gpus=None`).
    - In `LearningRateScheduler` callback, the scheduling function now takes an argument: `lr`, the current learning rate.
    
    Breaking changes
    
    - In `ImageDataGenerator`, change default interpolation of image transforms from nearest to bilinear. This should probably not break any users, but it is a change of behavior.
    
    Credits
    
    Thanks to our 37 contributors whose commits are featured in this release:
    
    DalilaSal, Dref360, GalaxyDream, GarrisonJ, Max-Pol, May4m, MiliasV, MrMYHuang, N-McA, Vijayabhaskar96, abrad1212, ahundt, angeloskath, bbabenko, bojone, brainnoise, bzamecnik, caisq, cclauss, dsadulla, fchollet, gabrieldemarmiesse, ghostplant, gorogoroyasu, icyblade, kapsl, kevinbache, mendesmiguel, mikesol, myutwo150, ozabluda, sadreamer, simra, taehoonlee, veniversum, yongtang, zhangwj618
    

    2.1.3

    Areas of improvement
    
    - Performance improvements (esp. convnets with TensorFlow backend).
    - Usability improvements.
    - Docs & docstrings improvements.
    - New models in the `applications` module.
    - Bug fixes.
    
    API changes
    
    - `trainable` attribute in `BatchNormalization` now disables the updates of the batch statistics (i.e. if `trainable == False` the layer will now run 100% in inference mode).
    - Add `amsgrad` argument in `Adam` optimizer.
    - Add new applications: `NASNetMobile`, `NASNetLarge`, `DenseNet121`, `DenseNet169`, `DenseNet201`.
    - Add `Softmax` layer (removing need to use a `Lambda` layer in order to specify the `axis` argument).
    - Add `SeparableConv1D` layer.
    - In `preprocessing.image.ImageDataGenerator`, allow `width_shift_range` and `height_shift_range` to take integer values (absolute number of pixels)
    - Support `return_state` in `Bidirectional` applied to RNNs (`return_state` should be set on the child layer).
    - The string values `"crossentropy"` and `"ce"` are now allowed in the `metrics` argument (in `model.compile()`), and are routed to either `categorical_crossentropy` or `binary_crossentropy` as needed.
    - Allow `steps` argument in `predict_*` methods on the `Sequential` model.
    - Add `oov_token` argument in `preprocessing.text.Tokenizer`.
    
    Breaking changes
    
    - In `preprocessing.image.ImageDataGenerator`, `shear_range` has been switched to use degrees rather than radians (for consistency). This should not actually break anything (neither training nor inference), but keep this change in mind in case you see any issues with regard to your image data augmentation process.
    
    
    Credits
    
    Thanks to our 45 contributors whose commits are featured in this release:
    
    Dref360, OliPhilip, TimZaman, bbabenko, bdwyer2, berkatmaca, caisq, decrispell, dmaniry, fchollet, fgaim, gabrieldemarmiesse, gklambauer, hgaiser, hlnull, icyblade, jgrnt, kashif, kouml, lutzroeder, m-mohsen, mab4058, manashty, masstomato, mihirparadkar, myutwo150, nickbabcock, novotnj3, obsproth, ozabluda, philferriere, piperchester, pstjohn, roatienza, souptc, spiros, srs70187, sumitgouthaman, taehoonlee, tigerneil, titu1994, tobycheese, vitaly-krumins, yang-zhang, ziky90
    

    2.1.2

    Areas of improvement
    
    - Bug fixes and performance improvements.
    - API improvements in Keras applications, generator methods.
    
    API changes
    
    - Make `preprocess_input` in all Keras applications compatible with both Numpy arrays and symbolic tensors (previously only supported Numpy arrays).
    - Allow the `weights` argument in all Keras applications to accept the path to a custom weights file to load (previously only supported the built-in `imagenet` weights file).
    - `steps_per_epoch` behavior change in generator training/evaluation methods:
     - If specified, the specified value will be used (previously, in the case of a generator of type `Sequence`, the specified value was overridden by the `Sequence` length)
     - If unspecified and if the generator passed is a `Sequence`, we set it to the `Sequence` length.
    - Allow `workers=0` in generator training/evaluation methods (will run the generator in the main process, in a blocking way).
    - Add `interpolation` argument in `ImageDataGenerator.flow_from_directory`, allowing a custom interpolation method for image resizing.
    - Allow `gpus` argument in `multi_gpu_model` to be a list of specific GPU ids.
    
    Breaking changes
    
    - The change in `steps_per_epoch` behavior (described above) may affect some users.
    
    Credits
    
    Thanks to our 26 contributors whose commits are featured in this release:
    
    Alex1729, alsrgv, apisarek, asos-saul, athundt, cherryunix, dansbecker, datumbox, de-vri-es, drauh, evhub, fchollet, heath730, hgaiser, icyblade, jjallaire, knaveofdiamonds, lance6716, luoch, mjacquem1, myutwo150, ozabluda, raviksharma, rh314, yang-zhang, zach-nervana
    

    2.1.1

    This release amends release 2.1.0 to include a fix for an erroneous breaking change introduced in 8419.
    

    2.1.0

    This is a small release that fixes outstanding bugs that were reported since the previous release.
    
    Areas of improvement
    
    - Bug fixes (in particular, Keras no longer allocates devices at startup time with the TensorFlow backend. This was causing issues with Horovod.)
    - Documentation and docstring improvements.
    - Better CIFAR10 ResNet example script and improvements to example scripts code style.
    
    API changes
    
    - Add `go_backwards` to cuDNN RNNs (enables `Bidirectional` wrapper on cuDNN RNNs).
    - Add ability to pass `fetches` to `K.Function()` with the TensorFlow backend.
    - Add `steps_per_epoch` and `validation_steps` arguments in `Sequential.fit()` (to sync it with `Model.fit()`).
    
    Breaking changes
    
    None.
    
    Credits
    
    Thanks to our 14 contributors whose commits are featured in this release:
    
    Dref360, LawnboyMax, anj-s, bzamecnik, datumbox, diogoff, farizrahman4u, fchollet, frexvahi, jjallaire, nsuh, ozabluda, roatienza, yakigac
    

    2.0.9

    Areas of improvement
    
    - RNN improvements:
     - Refactor RNN layers to rely on atomic RNN cells. This makes the creation of custom RNNs very simple and user-friendly, via the `RNN` base class.
     - Add ability to create new RNN cells by stacking a list of cells, allowing for efficient stacked RNNs.
    - Add `CuDNNLSTM` and `CuDNNGRU` layers, backed by NVIDIA's cuDNN library for fast GPU training & inference.
    - Add RNN Sequence-to-sequence example script.
    - Add `constants` argument in `RNN`'s `call` method, making RNN attention easier to implement.
    - Easier multi-GPU data parallelism via `keras.utils.multi_gpu_model`.
    - Bug fixes & performance improvements (in particular, native support for NCHW data layout in TensorFlow).
    - Documentation improvements and examples improvements.
    
    
    
    API changes
    
    - Add "fashion mnist" dataset as `keras.datasets.fashion_mnist.load_data()`
    - Add `Minimum` merge layer as `keras.layers.Minimum` (class) and `keras.layers.minimum(inputs)` (function)
    - Add `InceptionResNetV2` to `keras.applications`.
    - Support `bool` variables in TensorFlow backend.
    - Add `dilation` to `SeparableConv2D`.
    - Add support for dynamic `noise_shape` in `Dropout`
    - Add `keras.layers.RNN()` base class for batch-level RNNs (used to implement custom RNN layers from a cell class).
    - Add `keras.layers.StackedRNNCells()` layer wrapper, used to stack a list of RNN cells into a single cell.
    - Add `CuDNNLSTM` and `CuDNNGRU` layers.
    - Deprecate `implementation=0` for RNN layers.
    - The Keras progbar now reports time taken for each past epoch, and average time per step.
    - Add option to specify the resampling method in `keras.preprocessing.image.load_img()`.
    - Add `keras.utils.multi_gpu_model` for easy multi-GPU data parallelism (see the sketch after this list).
    - Add `constants` argument in `RNN`'s `call` method, used to pass a list of constant tensors to the underlying RNN cell.
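
    A sketch of `keras.utils.multi_gpu_model` usage (the model choice and GPU count are illustrative):

    from keras.applications import Xception
    from keras.utils import multi_gpu_model

    model = Xception(weights=None)
    # Replicates the model on 2 GPUs; each batch is split between them.
    parallel_model = multi_gpu_model(model, gpus=2)
    parallel_model.compile(optimizer='rmsprop', loss='categorical_crossentropy')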
    
    Breaking changes
    
    - Implementation change in `keras.losses.cosine_proximity` results in a different (correct) scaling behavior.
    - Implementation change for samplewise normalization in `ImageDataGenerator` results in a different normalization behavior.
    
    Credits
    
    Thanks to our 59 contributors whose commits are featured in this release!
    
    Alok, Danielhiversen, Dref360, HelgeS, JakeBecker, MPiecuch, MartinXPN, RitwikGupta, TimZaman, adammenges, aeftimia, ahojnnes, akshaychawla, alanyee, aldenks, andhus, apbard, aronj, bangbangbear, bchu, bdwyer2, bzamecnik, cclauss, colllin, datumbox, deltheil, dhaval067, durana, ericwu09, facaiy, farizrahman4u, fchollet, flomlo, fran6co, grzesir, hgaiser, icyblade, jsaporta, julienr, jussihuotari, kashif, lucashu1, mangerlahn, myutwo150, nicolewhite, noahstier, nzw0301, olalonde, ozabluda, patrikerdes, podhrmic, qin, raelg, roatienza, shadiakiki1986, smgt, souptc, taehoonlee, y0z
    

    2.0.8

    The primary purpose of this release is to address an incompatibility between Keras 2.0.7 and the next version of TensorFlow (1.4). TensorFlow 1.4 isn't due for a while, but the sooner the PyPI release has the fix, the fewer people will be affected when upgrading to the next TensorFlow version when it gets released.
    
    No API changes for this release. A few bug fixes.
    

    2.0.7

    Areas of improvement
    
    - Bug fixes.
    - Performance improvements.
    - Documentation improvements.
    - Better support for training models from data tensors in TensorFlow (e.g. Datasets, TFRecords). Add a related example script.
    - Improve TensorBoard UX with better grouping of ops into name scopes.
    - Improve test coverage.
    
    API changes
    
    - Add `clone_model` method, enabling you to construct a new model given an existing model to use as a template. Works even in a TensorFlow graph different from that of the original model.
    - Add `target_tensors` argument in `compile`, enabling the use of custom tensors or placeholders as model targets.
    - Add `steps_per_epoch` argument in `fit`, enabling you to train a model from data tensors in a way that is consistent with training from Numpy arrays.
    - Similarly, add `steps` argument in `predict` and `evaluate`.
    - Add `Subtract` merge layer, and associated layer function `subtract`.
    - Add `weighted_metrics` argument in `compile` to specify metric functions meant to take into account `sample_weight` or `class_weight`.
    - Make the `stop_gradients` backend function consistent across backends.
    - Allow dynamic shapes in `repeat_elements` backend function.
    - Enable stateful RNNs with CNTK.
    
    Breaking changes
    
    - The backend methods `categorical_crossentropy`, `sparse_categorical_crossentropy`, `binary_crossentropy` had the order of their positional arguments (`y_true`, `y_pred`) inverted. This change does not affect the `losses` API. This change was done to achieve API consistency between the `losses` API and the backend API.
    - Move constraint management to be based on variable attributes. Remove the now-unused `constraints` attribute on layers and models (not expected to affect any user).
    
    Credits
    
    Thanks to our 47 contributors whose commits are featured in this release!
    
    5ke, Alok, Danielhiversen, Dref360, NeilRon, abnera, acburigo, airalcorn2, angeloskath, athundt, brettkoonce, cclauss, denfromufa, enkait, erg, ericwu09, farizrahman4u, fchollet, georgwiese, ghisvail, gokceneraslan, hgaiser, inexxt, joeyearsley, jorgecarleitao, kennyjacob, keunwoochoi, krizp, lukedeo, milani, n17r4m, nicolewhite, nigeljyng, nyghtowl, nzw0301, rapatel0, souptc, srinivasreddy, staticfloat, taehoonlee, td2014, titu1994, tleeuwenburg, udibr, waleedka, wassname, yashk2810
    

    2.0.6

    Areas of improvement
    
    - Improve generator methods (`predict_generator`, `fit_generator`, `evaluate_generator`) and add data enqueuing utilities.
    - Bug fixes and performance improvements.
    - New features: new `Conv3DTranspose` layer, new `MobileNet` application, self-normalizing networks.
    
    API changes
    
    - Self-normalizing networks: add `selu` activation function, `AlphaDropout` layer, `lecun_normal` initializer.
    - Data enqueuing: add `Sequence`, `SequenceEnqueuer`, `GeneratorEnqueuer` to `utils`.
    - Generator methods: rename arguments `pickle_safe` (replaced with `use_multiprocessing`) and `max_q_size` (replaced with `max_queue_size`).
    - Add `MobileNet` to the applications module.
    - Add `Conv3DTranspose` layer.
    - Allow custom print functions for model's `summary` method (argument `print_fn`).
    

    2.0.5

    - Add beta CNTK backend.
    - TensorBoard improvements.
    - Documentation improvements.
    - Bug fixes and performance improvements.
    - Improve style transfer example script.
    
    API changes:
    
    - Add `return_state` constructor argument to RNNs.
    - Add `skip_compile` option to `load_model`.
    - Add `categorical_hinge` loss function.
    - Add `sparse_top_k_categorical_accuracy` metric.
    - Add new options to `TensorBoard` callback.
    - Add `TerminateOnNaN` callback.
    - Generalize the `Embedding` layer to N (>=2) input dimensions.
    

    2.0.4

    - Documentation improvements.
    - Docstring improvements.
    - Update some examples scripts (in particular, new deep dream example).
    - Bug fixes and performance improvements.
    
    API changes:
    
    - Add `logsumexp` and `identity` to backend.
    - Add `logcosh` loss.
    - New signature for `add_weight` in `Layer`.
    - `get_initial_states` in `Recurrent` is now `get_initial_state`.
    

    2.0.0

    Keras 2 release notes
    
    This document details changes, in particular API changes, occurring from Keras 1 to Keras 2.
    
    Training
    
    - The `nb_epoch` argument has been renamed `epochs` everywhere.
    - The methods `fit_generator`, `evaluate_generator` and `predict_generator` now work by drawing a number of *batches* from a generator (number of training steps), rather than a number of samples.
     - `samples_per_epoch` was renamed `steps_per_epoch` in `fit_generator`.
     - `nb_val_samples` was renamed `validation_steps` in `fit_generator`.
     - `val_samples` was renamed `steps` in `evaluate_generator` and `predict_generator`.
    - It is now possible to manually add a loss to a model by calling `model.add_loss(loss_tensor)`. The loss is added to the other losses of the model and minimized during training.
    - It is also possible to *not* apply any loss to a specific model output. If you pass `None` as the `loss` argument for an output (e.g. in compile, `loss={'output_1': None, 'output_2': 'mse'}`), the model will expect no Numpy arrays to be fed for this output when using `fit`, `train_on_batch`, or `fit_generator`. The output values are still returned as usual when using `predict` (a sketch follows this list).
    - In TensorFlow, models can now be trained using `fit` if some of their inputs (or even all) are TensorFlow queues or variables, rather than placeholders. See [this test](https://github.com/fchollet/keras/blob/master/tests/keras/engine/test_training.py#L252) for specific examples.
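
    A sketch of skipping the loss on one output, per the bullet above (the layer shapes are illustrative):

    from keras.layers import Dense, Input
    from keras.models import Model

    x = Input(shape=(16,))
    out1 = Dense(8, name='output_1')(x)
    out2 = Dense(1, name='output_2')(x)
    model = Model(inputs=x, outputs=[out1, out2])

    # No loss is applied to output_1; predict() still returns both outputs.
    model.compile(optimizer='rmsprop',
                  loss={'output_1': None, 'output_2': 'mse'})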
    
    
    Losses & metrics
    
    - The `objectives` module has been renamed `losses`.
    - Several legacy metric functions have been removed, namely `matthews_correlation`, `precision`, `recall`, `fbeta_score`, `fmeasure`.
    - Custom metric functions can no longer return a dict, they must return a single tensor.
    
    
    Models
    
    - Constructor arguments for `Model` have been renamed:
     - `input` -> `inputs`
     - `output` -> `outputs`
    - The `Sequential` model no longer supports the `set_input` method.
    - For any model saved with Keras 2.0 or higher, weights trained with backend X will be converted to work with backend Y without any manual conversion step.
    
    
    Layers
    
    Removals
    
    Deprecated layers `MaxoutDense`, `Highway` and `TimeDistributedDense` have been removed.
    
    
    Call method
    
    - All layers that use the learning phase now support a `training` argument in `call` (Python boolean or symbolic tensor), allowing to specify the learning phase on a layer-by-layer basis. E.g. by calling a `Dropout` instance as `dropout(inputs, training=True)` you obtain a layer that will always apply dropout, regardless of the current global learning phase. The `training` argument defaults to the global Keras learning phase everywhere.
    - The `call` method of layers can now take arbitrary keyword arguments, e.g. you can define a custom layer with a call signature like `call(inputs, alpha=0.5)`, and then pass an `alpha` keyword argument when calling the layer (only with the functional API, naturally).
    - `__call__` now makes use of TensorFlow `name_scope`, so that your TensorFlow graphs will look pretty and well-structured in TensorBoard.
    
    All layers taking a legacy `dim_ordering` argument
    
    `dim_ordering` has been renamed `data_format`. It now takes two values: `"channels_first"` (formerly `"th"`) and `"channels_last"` (formerly `"tf"`).
    
    Dense layer
    
    Changed interface:
    
    - `output_dim` -> `units`
    - `init` -> `kernel_initializer`
    - added `bias_initializer` argument
    - `W_regularizer` -> `kernel_regularizer`
    - `b_regularizer` -> `bias_regularizer`
    - `b_constraint` -> `bias_constraint`
    - `bias` -> `use_bias`
    
    Dropout, SpatialDropout*D, GaussianDropout
    
    Changed interface:
    
    - `p` -> `rate`
    
    Embedding
    
    Convolutional layers
    
    - The `AtrousConvolution1D` and `AtrousConvolution2D` layers have been deprecated. Their functionality is instead supported via the `dilation_rate` argument in `Convolution1D` and `Convolution2D` layers.
    - `Convolution*` layers are renamed `Conv*`.
    - The `Deconvolution2D` layer is renamed `Conv2DTranspose`.
    - The `Conv2DTranspose` layer no longer requires an `output_shape` argument, making its use much easier.
    
    Interface changes common to all convolutional layers:
    
    - `nb_filter` -> `filters`
    - Kernel dimension arguments become a single tuple argument, `kernel_size`. E.g. a legacy call `Conv2D(10, 3, 3)` becomes `Conv2D(10, (3, 3))`
    - `kernel_size` can be set to an integer instead of a tuple, e.g. `Conv2D(10, 3)` is equivalent to `Conv2D(10, (3, 3))`.
    - `subsample` -> `strides`. Can also be set to an integer.
    - `border_mode` -> `padding`
    - `init` -> `kernel_initializer`
    - added `bias_initializer` argument
    - `W_regularizer` -> `kernel_regularizer`
    - `b_regularizer` -> `bias_regularizer`
    - `b_constraint` -> `bias_constraint`
    - `bias` -> `use_bias`
    - `dim_ordering` -> `data_format`
    - In the `SeparableConv2D` layers, `init` is split into `depthwise_initializer` and `pointwise_initializer`.
    - Added `dilation_rate` argument in `Conv2D` and `Conv1D`.
    - 1D convolution kernels are now saved as a 3D tensor (instead of 4D as before).
    - 2D and 3D convolution kernels are now saved in format `spatial_dims + (input_depth, depth)`, even with `data_format="channels_first"`.
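
    Putting several of these renames together, a sketch of a legacy call and its Keras 2 equivalent (the values are illustrative):

    from keras.layers import Conv2D

    # Keras 1: Convolution2D(10, 3, 3, subsample=(2, 2), border_mode='same')
    # Keras 2 equivalent:
    conv = Conv2D(10, (3, 3), strides=(2, 2), padding='same',
                  kernel_initializer='glorot_uniform', use_bias=True)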
    
    
    Pooling1D
    
    - `pool_length` -> `pool_size`
    - `stride` -> `strides`
    - `border_mode` -> `padding`
    
    Pooling2D, 3D
    
    - `border_mode` -> `padding`
    - `dim_ordering` -> `data_format`
    
    
    ZeroPadding layers
    
    The `padding` argument of the `ZeroPadding2D` and `ZeroPadding3D` layers must be a tuple of length 2 and 3 respectively. Each entry `i` contains by how much to pad the spatial dimension `i`. If it's an integer, symmetric padding is applied. If it's a tuple of integers, asymmetric padding is applied.
    
    Upsampling1D
    
    - `length` -> `size`
    
    BatchNormalization
    
    The `mode` argument of `BatchNormalization` has been removed; BatchNorm now only supports mode 0 (use batch metrics for feature-wise normalization during training, and use moving metrics for feature-wise normalization during testing).
    
    - `beta_init` -> `beta_initializer`
    - `gamma_init` -> `gamma_initializer`
    - added arguments `center`, `scale` (booleans, whether to use a `beta` and `gamma` respectively)
    - added arguments `moving_mean_initializer`, `moving_variance_initializer`
    - added arguments `beta_regularizer`, `gamma_regularizer`
    - added arguments `beta_constraint`, `gamma_constraint`
    - attribute `running_mean` is renamed `moving_mean`
    - attribute `running_std` is renamed `moving_variance` (it *is* in fact a variance with the current implementation).
    
    
    ConvLSTM2D
    
    Same changes as for convolutional layers and recurrent layers apply.
    
    PReLU
    
    - `init` -> `alpha_initializer`
    
    GaussianNoise
    
    - `sigma` -> `stddev`
    
    Recurrent layers
    
    - `output_dim` -> `units`
    - `init` -> `kernel_initializer`
    - `inner_init` -> `recurrent_initializer`
    - added argument `bias_initializer`
    - `W_regularizer` -> `kernel_regularizer`
    - `b_regularizer` -> `bias_regularizer`
    - added arguments `kernel_constraint`, `recurrent_constraint`, `bias_constraint`
    - `dropout_W` -> `dropout`
    - `dropout_U` -> `recurrent_dropout`
    - `consume_less` -> `implementation`. String values have been replaced with integers: implementation 0 (default), 1 or 2.
    - LSTM only: the argument `forget_bias_init` has been removed. Instead there is a boolean argument `unit_forget_bias`, defaulting to `True`.
    
    
    Lambda
    
    The `Lambda` layer now supports a `mask` argument.
    
    
    Utilities
    
    Utilities should now be imported from `keras.utils` rather than from specific submodules (e.g. no more `keras.utils.np_utils...`).
    
    
    Backend
    
    random_normal and truncated_normal
    - `std` -> `stddev`
    
    Misc
    
    - In the backend, `set_image_ordering` and `image_ordering` are now `set_data_format` and `data_format`.
    - Any arguments (other than `nb_epoch`) prefixed with `nb_` have been renamed to be prefixed with `num_` instead. This affects two datasets and one preprocessing utility.
    
    Links
    • PyPI: https://pypi.org/project/keras
    • Changelog: https://pyup.io/changelogs/keras/
    • Repo: https://github.com/keras-team/keras/tarball/2.2.4

    Update matplotlib from 3.1.0 to 3.1.0.

    Changelog

    2.1.0

    This is the second minor release in the Matplotlib 2.x series and the first
    release with major new features since 1.5.
    
    This release contains approximately 2 years worth of work by 275 contributors
    across over 950 pull requests.  Highlights from this release include:
    
    - support for string categorical values
    - export of animations to interactive javascript widgets
    - major overhaul of polar plots
    - reproducible output for ps/eps, pdf, and svg backends
    - performance improvements in drawing lines and images
    - GUIs show a busy cursor while rendering the plot
    
    
    along with many other enhancements and bug fixes.
    

    2.0.0

    This previews the new default style and many bug-fixes.  A full list of
    the style changes will be collected for the final release.
    
    In addition to the style change this release includes:
    - overhaul of font handling/text rendering to be faster and clearer
    - many new rcParams
    - Agg based OSX backend
    - optionally deterministic SVGs
    - complete re-write of image handling code
    - simplified color conversion
    - specify colors in the global property cycle via `'C0'`,
    `'C1'`... `'C9'`
    - use the global property cycle more places (bar, stem, scatter)
    
    There is a 'classic' style sheet which reproduces the 1.Y defaults:
    
    import matplotlib.style as mstyle
    mstyle.use('classic')
    

    2.0.0rc2

    This is the second and final planned release candidate for mpl v2.0
    
    This release includes:
    - Bug fixes and documentation changes
    - Expanded API on plot_surface and plot_wireframe
    - Pin font size at text creation time
    - Suppress fc-cache warning unless it takes more than 5s
    

    2.0.0rc1

    This is the first release candidate for mpl v2.0
    
    This release includes:
    - A re-implementation of the way margins are handled during auto
    scaling to allow artists to 'stick' to an edge of the Axes
    - Improvements to the ticking with log and symlog scales
    - Deprecation of the finance module.  This will be spun off into a stand-alone package
    - Deprecation of the 'hold' machinery 
    - Bumped the minimum numpy version to 1.7
    - Standardization of hatch width and appearance across backends
    - Made threshold for triggering 'offset' in `ScalarFormatter` configurable
    and default to 4 (plotting against years should just work now)
    - Default encoding for mp4 is now h264
    - `fill_between` and `fill_betweenx` now use the color cycle
    - Default alignment of bars changed from 'edge' to 'center'
    - Bug and documentation fixes
    

    2.0.0b4

    Fourth and final beta release
    

    2.0.0b3

    Third beta for v2.0.0 release
    
    This tag includes several critical bug fixes and updates the dash patterns.
    

    1.5.3

    This release contains a few critical bug fixes:
    - eliminate fatal exceptions with Qt5.7
    - memory leak in the contour code
    - keyboard interaction bug with nbagg
    - automatic integration with the ipython event loop (if running) which
    fixes 'naive' integration for IPython 5+
    

    1.5.2

    Final planned release for the 1.5.x series.
    
    Links
    • PyPI: https://pypi.org/project/matplotlib
    • Changelog: https://pyup.io/changelogs/matplotlib/
    • Homepage: https://matplotlib.org

    Update tensorflow from 1.14.0 to 1.14.0.

    Changelog

    1.13.0

    Major Features and Improvements
    
    * TensorFlow Lite has moved from contrib to core. This means that Python modules are under `tf.lite` and source code is now under `tensorflow/lite` rather than `tensorflow/contrib/lite`.
    * TensorFlow GPU binaries are now built against CUDA 10 and TensorRT 5.0.
    * Support for Python3.7 on all operating systems.
    * Moved NCCL to core.
    
    Behavioral changes
    
    * Disallow conversion of python floating types to uint32/64 (matching behavior of other integer types) in `tf.constant`.
    * Make the `gain` argument of convolutional orthogonal initializers (`convolutional_delta_orthogonal`, `convolutional_orthogonal_1D`, `convolutional_orthogonal_2D`, `convolutional_orthogonal_3D`) have consistent behavior with the `tf.initializers.orthogonal` initializer, i.e. scale the output l2-norm by `gain` and NOT by `sqrt(gain)`. (Note that these functions are currently in `tf.contrib` which is not guaranteed backward compatible).
    
    Bug Fixes and Other Changes
    
    *   Documentation
     *   Update the doc with the details about the rounding mode used in
         quantize_and_dequantize_v2.
     *   Clarify that tensorflow::port::InitMain() _should_ be called before
         using the TensorFlow library. Programs failing to do this are not
         portable to all platforms.
    *   Deprecations and Symbol renames.
     *   Removing deprecations for the following endpoints: `tf.acos`,
         `tf.acosh`, `tf.add`, `tf.as_string`, `tf.asin`, `tf.asinh`, `tf.atan`,
         `tf.atan2`, `tf.atanh`, `tf.cos`, `tf.cosh`, `tf.equal`, `tf.exp`,
         `tf.floor`, `tf.greater`, `tf.greater_equal`, `tf.less`,
         `tf.less_equal`, `tf.log`, `tf.log1p`, `tf.logical_and`,
         `tf.logical_not`, `tf.logical_or`, `tf.maximum`, `tf.minimum`,
         `tf.not_equal`, `tf.sin`, `tf.sinh`, `tf.tan`
     *   Deprecate `tf.data.Dataset.shard`.
     *   Deprecate `saved_model.loader.load` which is replaced by
         `saved_model.load` and `saved_model.main_op`, which will be replaced by
         `saved_model.main_op` in V2.
     *   Deprecate tf.QUANTIZED_DTYPES. The official new symbol is
         tf.dtypes.QUANTIZED_DTYPES.
     *   Update sklearn imports for deprecated packages.
     *   Deprecate `Variable.count_up_to` and `tf.count_up_to` in favor of
         `Dataset.range`.
     *   Export `confusion_matrix` op as `tf.math.confusion_matrix` instead of
         `tf.train.confusion_matrix`.
     *   Add `tf.dtypes.` endpoint for every constant in dtypes.py. Moving
         endpoints in versions.py to corresponding endpoints in `tf.sysconfig.`
         and `tf.version.`. Moving all constants under `tf.saved_model`
         submodules to `tf.saved_model` module. New endpoints are added in V1 and
         V2 but existing endpoint removals are only applied in V2.
     *   Deprecates behavior where device assignment overrides collocation
         constraints inside a collocation context manager.
    *   Keras & Python API
     *   Add to Keras functionality analogous to
         `tf.register_tensor_conversion_function`.
     *   Subclassed Keras models can now be saved through
         `tf.contrib.saved_model.save_keras_model`.
     *   `LinearOperator.matmul` now returns a new `LinearOperator`.
    *   New ops and improved op functionality
     *   Add a Nearest Neighbor Resize op.
     *   Add an `ignore_unknown` argument to `parse_values` which suppresses
         ValueError for unknown hyperparameter types; such hyperparameters
         are ignored.
     *   Add `tf.linalg.matvec` convenience function.
     *   `tf.einsum()` raises `ValueError` for unsupported equations like
         `"ii->"`.
     *   Add DCT-I and IDCT-I in `tf.signal.dct` and `tf.signal.idct`.
     *   Add LU decomposition op.
     *   Add quantile loss to gradient boosted trees in estimator.
     *   Add `round_mode` to `QuantizeAndDequantizeV2` op to select rounding
         algorithm.
     *   Add `unicode_encode`, `unicode_decode`, `unicode_decode_with_offsets`,
         `unicode_split`, `unicode_split_with_offset`, and `unicode_transcode`
         ops. Amongst other things, this Op adds the ability to encode, decode,
         and transcode a variety of input text encoding formats into the main
         Unicode encodings (UTF-8, UTF-16-BE, UTF-32-BE)
     *   Add "unit" attribute to the substr op, which allows obtaining the
         substring of a string containing unicode characters.
     *   Broadcasting support for Ragged Tensors.
     *   `SpaceToDepth` supports uint8 data type.
     *   Support multi-label quantile regression in estimator.
     *   We now use "div" as the default partition_strategy in
         `tf.nn.safe_embedding_lookup_sparse`, `tf.nn.sampled_softmax` and
         `tf.nn.nce_loss`.
    *   Performance
     *   Improve performance of GPU cumsum/cumprod by up to 300x.
     *   Added support for weight decay in most TPU embedding optimizers,
         including AdamW and MomentumW.
    *   TensorFlow 2.0 Development
     *   Add a command line tool to convert to TF2.0, tf_upgrade_v2
     *   Merge `tf.spectral` into `tf.signal` for TensorFlow 2.0.
     *   Change the default recurrent activation function for LSTM from
         'hard_sigmoid' to 'sigmoid' in 2.0. Historically the recurrent
         activation has been 'hard_sigmoid' since it is faster than
         'sigmoid'. With the new unified backend between CPU and GPU mode,
         since the CuDNN kernel uses sigmoid, we change the default for CPU
         mode to sigmoid as well. With that, the default LSTM will be
         compatible with both the CPU and GPU kernel. This will enable users
         with a GPU to use the CuDNN kernel by default and get a 10x
         performance boost in training. Note that this is a
         checkpoint-breaking change. If users want to use their 1.x
         pre-trained checkpoints, please construct the layer with
         LSTM(recurrent_activation='hard_sigmoid') to fall back to 1.x
         behavior.
    *   TensorFlow Lite
     *   Move from `tensorflow/contrib/lite` to `tensorflow/lite`.
     *   Add experimental Java API for injecting TensorFlow Lite delegates
     *   Add support for strings in TensorFlow Lite Java API.
    *   `tf.contrib`:
     *   Add Apache Ignite Filesystem plugin to support accessing Apache IGFS.
     *   Dropout now takes `rate` argument, `keep_prob` is deprecated.
     *   References to `tf.contrib.estimator` were changed to
         `tf.estimator`:
     *   `tf.contrib.estimator.BaselineEstimator` with
         `tf.estimator.BaselineEstimator`
     *   `tf.contrib.estimator.DNNLinearCombinedEstimator` with
         `tf.estimator.DNNLinearCombinedEstimator`
     *   `tf.contrib.estimator.DNNEstimator` with `tf.estimator.DNNEstimator`
     *   `tf.contrib.estimator.LinearEstimator` with
         `tf.estimator.LinearEstimator`
     *   `tf.contrib.estimator.InMemoryEvaluatorHook` with
         `tf.estimator.experimental.InMemoryEvaluatorHook`.
     *   `tf.contrib.estimator.make_stop_at_checkpoint_step_hook` with
         `tf.estimator.experimental.make_stop_at_checkpoint_step_hook`.
     *   Expose `tf.distribute.Strategy` as the new name for
         `tf.contrib.distribute.DistributionStrategy`.
     *   Migrate linear optimizer from contrib to core.
     *   Move `tf.contrib.signal` to `tf.signal` (preserving aliases in
         tf.contrib.signal).
     *   Users of `tf.contrib.estimator.export_all_saved_models` and related
         should switch to
         `tf.estimator.Estimator.experimental_export_all_saved_models`.
    *   tf.data:
     *   Add `tf.data.experimental.StatsOptions()`, to configure options to
         collect statistics from `tf.data.Dataset` pipeline using
         `StatsAggregator`. Add nested option, `experimental_stats` (which
         takes a `tf.data.experimental.StatsOptions` object), to
         `tf.data.Options`. Deprecates
         `tf.data.experimental.set_stats_aggregator`.
     *   Performance optimizations:
     *   Add `tf.data.experimental.OptimizationOptions()`, to configure options
         to enable `tf.data` performance optimizations. Add nested option,
         `experimental_optimization` (which takes a
         `tf.data.experimental.OptimizationOptions` object), to
         `tf.data.Options`. Remove performance optimization options from
         `tf.data.Options`, and add them under
         `tf.data.experimental.OptimizationOptions` instead.
     *   Enable `map_and_batch_fusion` and `noop_elimination` optimizations by
         default. They can be disabled by configuring
         `tf.data.experimental.OptimizationOptions` to set `map_and_batch =
         False` or `noop_elimination = False` respectively. To disable all
         default optimizations, set `apply_default_optimizations = False`
         (see the sketch after this list).
     *   Support parallel map in `map_and_filter_fusion`.
     *   Disable static optimizations for input pipelines that use non-resource
         `tf.Variable`s.
     *   Add NUMA-aware MapAndBatch dataset.
     *   Deprecate `tf.data.Dataset.make_one_shot_iterator()` in V1, removed it
         from V2, and added `tf.compat.v1.data.make_one_shot_iterator()`.
     *   Deprecate `tf.data.Dataset.make_initializable_iterator()` in V1, removed
         it from V2, and added `tf.compat.v1.data.make_initializable_iterator()`.
     *   Enable nested dataset support in core `tf.data` transformations.
     *   For `tf.data.Dataset` implementers: Added
         `tf.data.Dataset._element_structured` property to replace
         `Dataset.output_{types,shapes,classes}`.
     *   Make `num_parallel_calls` of `tf.data.Dataset.interleave` and
         `tf.data.Dataset.map` work in Eager mode.
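
     A sketch of the options mechanism described above (TF 1.13-era API; the pipeline itself is illustrative):

     import tensorflow as tf

     dataset = tf.data.Dataset.range(100).map(lambda x: x * 2).batch(10)

     options = tf.data.Options()
     # Turn off the default static optimizations for this pipeline.
     opt = tf.data.experimental.OptimizationOptions()
     opt.apply_default_optimizations = False
     options.experimental_optimization = opt
     dataset = dataset.with_options(options)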
    *   Toolchains
     *   Fixed OpenSSL compatibility by avoiding `EVP_MD_CTX_destroy`.
     *   Added bounds checking to printing deprecation warnings.
     *   Upgraded CUDA dependency to 10.0
     *   To build with Android NDK r14b, add "include <linux/compiler.h>" to
         android-ndk-r14b/platforms/android-14/arch-*/usr/include/linux/futex.h
     *   Removed `:android_tensorflow_lib_selective_registration*` targets, use
         `:android_tensorflow_lib_lite*` targets instead.
    *   XLA
     *   Move `RoundToEven` function to xla/client/lib/math.h.
     *   A new environment variable `TF_XLA_DEBUG_OPTIONS_PASSTHROUGH` set to "1"
         or "true" allows the debug options passed within an XRTCompile op to be
         passed directly to the XLA compilation backend. If such variable is not
         set (service side), only a restricted set will be passed through.
     *   Allow the XRTCompile op to return the ProgramShape resulting from the
         XLA compilation as a second return argument.
     *   XLA HLO graphs can now be rendered as SVG/HTML.
    *   Estimator
     *   Replace all occurrences of `tf.contrib.estimator.BaselineEstimator` with
         `tf.estimator.BaselineEstimator`
     *   Replace all occurrences of
         `tf.contrib.estimator.DNNLinearCombinedEstimator` with
         `tf.estimator.DNNLinearCombinedEstimator`
     *   Replace all occurrences of `tf.contrib.estimator.DNNEstimator` with
         `tf.estimator.DNNEstimator`
     *   Replace all occurrences of `tf.contrib.estimator.LinearEstimator` with
         `tf.estimator.LinearEstimator`
     *   Users of `tf.contrib.estimator.export_all_saved_models` and related
         should switch to
         `tf.estimator.Estimator.experimental_export_all_saved_models`.
     *   Update `regression_head` to the new Head API for Canned Estimator V2.
     *   Switch `multi_class_head` to Head API for Canned Estimator V2.
     *   Replace all occurrences of `tf.contrib.estimator.InMemoryEvaluatorHook`
         and `tf.contrib.estimator.make_stop_at_checkpoint_step_hook` with
         `tf.estimator.experimental.InMemoryEvaluatorHook` and
         `tf.estimator.experimental.make_stop_at_checkpoint_step_hook`
     *   Migrate linear optimizer from contrib to core.
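
    A minimal sketch of the `tf.data` options and compat-iterator changes above. This is illustrative only: it assumes a TensorFlow build where `tf.data.experimental.OptimizationOptions` and `tf.compat.v1.data.make_one_shot_iterator` are available, and attribute names may differ slightly between releases.

    import tensorflow as tf

    dataset = tf.data.Dataset.range(8).batch(2)

    # Opt out of the default static optimizations
    # (map_and_batch_fusion, noop_elimination).
    options = tf.data.Options()
    options.experimental_optimization.apply_default_optimizations = False
    dataset = dataset.with_options(options)

    # V2-safe replacement for the deprecated
    # Dataset.make_one_shot_iterator() method.
    iterator = tf.compat.v1.data.make_one_shot_iterator(dataset)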
    
    Thanks to our Contributors
    
    This release contains contributions from many people at Google, as well as:
    
    Abhinav Upadhyay, Ag Ramesh, akikaaa, Alexis Louis, Anders Huss, Andreas Madsen, Andrew Banchich, Andy Craze, Anton Dmitriev, Artem Malykh, Avijit-Nervana, Balint Cristian, Benjamin Tan Wei Hao, Bhavani Subramanian, Brendan Finan, Brian Nemsick, Bryan Cutler, By Shen, Cao Zongyan, Castiel, Chris Antaki, Christian Goll, Cibifang, Clayne Robison, Codrut Grosu, Cong Xu, Dalmo Cirne, Daniel Hunter, Dougal J. Sutherland, Edvard Fagerholm, EFanZh, Erik Smistad, Evgeniy Polyakov, Feiyang Chen, franklin5, Fred Reiss, Gautam, gehring, Geoffrey Irving, George Sterpu, Gitea, Grzegorz George Pawelczak, Guozhong Zhuang, himkt, Hoeseong Kim, Huan Li (李卓桓), HuiyangFei, hyunyoung, Isaac Burbank, jackonan, Jacky Ko, Jason Furmanek, Jason Zaman, Javier Luraschi, Jiang,Zhoulong, joaak, John Lin, Jonathan Wyatt Hoech, josephyearsley, Josh Gordon, Julian Niedermeier, Karl Lessard, Keno Fischer, lanhin, Leon Graser, leondgarse, Li, Guizi, Li, Yiqiang, lxl910915, Mahmoud Abuzaina, manhyuk, Marcela Morales Quispe, margaretmz, Matt Conley, Max Pumperla, mbhuiyan, mdfaijul, Meng, Peng, Michael, Michael Gielda, mrTsjolder, Muhammad Wildan, neargye, Nehal J Wani, NEWPLAN, Niranjan Hasabnis, Nutti, olicht, Pan Daoxin, Pedro Monreal, Peng Yu, pillarpond, Pooya Davoodi, qiezi, Rholais Lii, Richard Yu, Rin Arakaki, Roger Iyengar, sahilbadyal, Sami Kama, Sandip Giri, Scott Leishman, Serge Panev, Seunghoon Park, Shafi Dayatar, shengfuintel, Shimin Guo, Siju, silent567, Stefan Dyulgerov, steven, Tao Wei, Thor Johnsen, Tingbo Lu, tomguluson92, Tongxuan Liu, Trevor Morris, Ubuntu, Vadim Borisov, vanderliang, wangsiyu, Wen Yun, Wen-Heng (Jack) Chung, wenxizhu, William D. Irons, Xiaoming (Jason) Cui, Yan Facai (颜发才), Yanbo Liang, Yaniv Blumenfeld, Yash Gaurkar, Yicheng Fan, Yong Tang, Yongjoon Lee, Yuan (Terry) Tang, Yuxin Wu, zldrobit
    

    1.12.2

    Bug Fixes and Other Changes
    
    *   Fixes a potential security vulnerability where carefully crafted GIF images
     can produce a null pointer dereference during decoding.
    

    1.12.0

    Major Features and Improvements
    
    *   Keras models can now be directly exported to the SavedModel format
     (`tf.contrib.saved_model.save_keras_model()`) and used with TensorFlow
     Serving; see the sketch after this list.
    *   Keras models now support evaluating with a `tf.data.Dataset`.
    *   TensorFlow binaries are built with XLA support linked in by default.
    *   Ignite Dataset added to contrib/ignite, which allows working with Apache
     Ignite.
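
    A minimal sketch of the SavedModel export path above, assuming a TF 1.12 install where `tf.contrib.saved_model.save_keras_model` is available; the model and target directory are hypothetical.

    import tensorflow as tf
    from tensorflow import keras

    model = keras.Sequential([keras.layers.Dense(1, input_shape=(4,))])
    model.compile(optimizer='adam', loss='mse')

    # Export to the SavedModel format; the returned path points at a
    # timestamped export directory usable by TensorFlow Serving.
    export_dir = tf.contrib.saved_model.save_keras_model(model, './saved_models')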
    
    Bug Fixes and Other Changes
    
    *   tf.data:
     *   tf.data users can now represent, get, and set options of TensorFlow
         input pipelines using `tf.data.Options()`, `tf.data.Dataset.options()`,
         and `tf.data.Dataset.with_options()` respectively.
      *   New `tf.data.Dataset.reduce()` API allows users to reduce a finite
          dataset to a single element using a user-provided reduce function.
      *   New `tf.data.Dataset.window()` API allows users to create finite windows
          over the input dataset; when combined with the `tf.data.Dataset.reduce()`
          API, this allows users to implement customized batching (see the sketch
          after this list).
     *   All C++ code moves to the `tensorflow::data` namespace.
     *   Add support for `num_parallel_calls` to `tf.data.Dataset.interleave`.
    *   `tf.contrib`:
     *   Remove `tf.contrib.linalg`. `tf.linalg` should be used instead.
     *   Replace any calls to `tf.contrib.get_signature_def_by_key(metagraph_def,
         signature_def_key)` with
         `meta_graph_def.signature_def[signature_def_key]`. Catching a ValueError
         exception thrown by `tf.contrib.get_signature_def_by_key` should be
         replaced by catching a KeyError exception.
    *   `tf.contrib.data`:
      *   Deprecated; replaced by `tf.data.experimental`.
    *   Other:
      *   Instead of jemalloc, revert to using system malloc, since it
          simplifies the build and has comparable performance.
     *   Remove integer types from `tf.nn.softplus` and `tf.nn.softsign` OpDefs.
         This is a bugfix; these ops were never meant to support integers.
     *   Allow subslicing Tensors with a single dimension.
     *   Add option to calculate string length in Unicode characters.
     *   Add functionality to SubSlice a tensor.
      *   Add searchsorted (i.e., lower/upper_bound) op.
     *   Add model explainability to Boosted Trees.
     *   Support negative positions for tf.substr.
      *   Fixed a bug in bijector_impl where _reduce_jacobian_det_over_event did
          not handle scalar ILDJ implementations properly.
      *   In TF eager execution, allow re-entering a GradientTape context.
     *   Add tf_api_version flag. If --define=tf_api_version=2 flag is passed in,
         then bazel will build TensorFlow API version 2.0. Note that TensorFlow
         2.0 is under active development and has no guarantees at this point.
     *   Add additional compression options to TfRecordWriter.
     *   Performance improvements for regex full match operations.
      *   Replace `tf.GraphKeys.VARIABLES` with `tf.GraphKeys.GLOBAL_VARIABLES`.
     *   Remove unused dynamic learning rate support.
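
    To make the new `reduce()` and `window()` APIs above concrete, a small sketch (TF 1.12 semantics assumed; in graph mode `reduce()` returns a tensor that must still be evaluated in a session):

    import tensorflow as tf

    # reduce(): collapse a finite dataset to a single element.
    total = tf.data.Dataset.range(10).reduce(
        tf.constant(0, dtype=tf.int64), lambda state, x: state + x)

    # window(): finite windows over the input; batching each window is
    # one simple form of customized batching.
    batches = tf.data.Dataset.range(10).window(4).flat_map(
        lambda window: window.batch(4))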
    
    Thanks to our Contributors
    
    This release contains contributions from many people at Google, as well as:
    
    (David) Siu-Kei Muk, Ag Ramesh, Anton Dmitriev, Artem Sobolev, Avijit-Nervana,
    Bairen Yi, Bruno Goncalves, By Shen, candy.dc, Cheng Chen, Clayne Robison,
    coder3101, Dao Zhang, Elms, Fei Hu, feiquan, Geoffrey Irving, Guozhong Zhuang,
    hellcom, Hoeseong Kim, imsheridan, Jason Furmanek, Jason Zaman, Jenny Sahng,
    jiefangxuanyan, Johannes Bannhofer, Jonathan Homer, Koan-Sin Tan, kouml, Loo
    Rong Jie, Lukas Geiger, manipopopo, Ming Li, Moritz KröGer, Naurril, Niranjan
    Hasabnis, Pan Daoxin, Peng Yu, pengwa, rasmi, Roger Xin, Roland Fernandez, Sami
    Kama, Samuel Matzek, Sangjung Woo, Sergei Lebedev, Sergii Khomenko, shaohua,
    Shaohua Zhang, Shujian2015, Sunitha Kambhampati, tomguluson92, ViníCius Camargo,
    wangsiyu, weidankong, Wen-Heng (Jack) Chung, William D. Irons, Xin Jin, Yan
    Facai (颜发才), Yanbo Liang, Yash Katariya, Yong Tang, 在原佐为
    

    1.11.0

    Major Features and Improvements
    
    *   Nvidia GPU:
      *   Prebuilt binaries are now (as of TensorFlow 1.11) built against cuDNN
          7.2 and TensorRT 4. See updated install guides:
          [Installing TensorFlow on Ubuntu](https://www.tensorflow.org/install/install_linux#tensorflow_gpu_support)
    *   Google Cloud TPU:
     *   Experimental tf.data integration for Keras on Google Cloud TPUs.
     *   Experimental / preview support for eager execution on Google Cloud TPUs.
    *   DistributionStrategy:
      *   Add multi-GPU DistributionStrategy support in tf.keras. Users can now
          use `fit`, `evaluate` and `predict` to distribute their model on
          multiple GPUs (see the sketch after this list).
     *   Add multi-worker DistributionStrategy and standalone client support in
         Estimator. See
         [README](https://github.com/tensorflow/tensorflow/tree/master/tensorflow/contrib/distribute)
         for more details.
    *   Add C, C++, and Python functions for querying kernels.
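
    A minimal sketch of the multi-GPU Keras support above, using the 1.11-era contrib API (`tf.contrib.distribute.MirroredStrategy` passed to `compile()` via the `distribute` argument; this moved under `tf.distribute` in later releases). The model and data are hypothetical.

    import numpy as np
    import tensorflow as tf
    from tensorflow import keras

    model = keras.Sequential([keras.layers.Dense(1, input_shape=(8,))])

    # Replicate the model across all visible GPUs; fit/evaluate/predict
    # are then distributed automatically.
    strategy = tf.contrib.distribute.MirroredStrategy()
    model.compile(optimizer='adam', loss='mse', distribute=strategy)

    x = np.random.random((64, 8)).astype(np.float32)
    y = np.random.random((64, 1)).astype(np.float32)
    dataset = tf.data.Dataset.from_tensor_slices((x, y)).batch(16).repeat()
    model.fit(dataset, epochs=1, steps_per_epoch=4)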
    
    Breaking Changes
    
     *   Keras:
      *   The default values for tf.keras `RandomUniform`, `RandomNormal`, and
          `TruncatedNormal` initializers have been changed to match those in
          external Keras.
      *   `model.get_config()` on a Sequential model now returns a config
          dictionary (consistent with other Model instances) instead of a list
          of configs for the underlying layers; see the sketch after this list.
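
    A quick sketch of the `get_config()` change, assuming a tf.keras version that includes this breaking change:

    from tensorflow import keras

    model = keras.Sequential([keras.layers.Dense(2, input_shape=(4,))])

    # Now a config dictionary (with an entry holding the layer configs),
    # not a bare list of per-layer configs as in earlier releases.
    config = model.get_config()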
    
    Bug Fixes and Other Changes
    
    *   C++:
     *   Changed the signature of SessionFactory::NewSession so that it can
         return a meaningful error message on failure.
    *   tf.data:
      *   Remove `num_parallel_parser_calls` argument from
          `tf.contrib.data.make_csv_dataset()`.
     *   `tf.data.Dataset.list_files()` raises an exception at initialization
         time if the argument matches no files.
      *   Renamed BigTable class to BigtableTable for clarity.
      *   Document use of the Cloud Bigtable API.
     *   Add `tf.contrib.data.reduce_dataset` which can be used to reduce a
         dataset to a single element.
     *   Generalization of `tf.contrib.data.sliding_window_batch`.
    *   INC:
     *   Runtime improvements to triangular solve.
    *   `tf.contrib`:
      *   Add an `implementation` argument to `tf.keras.layers.LocallyConnected2D`
          and `tf.keras.layers.LocallyConnected1D`. The new mode
          (`implementation=2`) performs the forward pass as a single dense matrix
          multiplication, allowing dramatic speedups in certain scenarios (but
          worse performance in others; see the docstring). The option also allows
          using `padding=same` (see the sketch after this list).
     *   Add documentation clarifying the differences between tf.fill and
         tf.constant.
     *   Add experimental IndexedDatasets.
     *   Add selective registration target using the lite proto runtime.
     *   Add simple Tensor and DataType classes to TensorFlow Lite Java
     *   Add support for bitcasting to/from uint32 and uint64.
     *   Added a subclass of Estimator that can be created from a SavedModel
         (SavedModelEstimator).
     *   Adds leaf index modes as an argument.
     *   Allow a different output shape from the input in
         tf.contrib.image.transform.
      *   Change the state_size order of StackedRNNCells to be natural order.
          To keep the existing behavior, users can pass `reverse_state_order=True`
          when constructing the StackedRNNCells.
     *   Deprecate self.test_session() in favor of self.session() or
         self.cached_session().
     *   Directly import tensor.proto.h (the transitive import will be removed
         from tensor.h soon).
     *   Estimator.train() now supports tf.contrib.summary.\* summaries out of
         the box; each call to .train() will now create a separate tfevents file
         rather than re-using a shared one.
     *   Fix FTRL L2-shrinkage behavior: the gradient from the L2 shrinkage term
         should not end up in the accumulator.
     *   Fix toco compilation/execution on Windows.
      *   GoogleZoneProvider class added to detect which Google Cloud Engine zone
          TensorFlow is running in.
     *   It is now safe to call any of the C API's TF_Delete\* functions on
         nullptr.
     *   Log some errors on Android to logcat.
     *   Match FakeQuant numerics in TFLite to improve accuracy of TFLite
         quantized inference models.
     *   Optional bucket location check for the GCS Filesystem.
     *   Performance enhancements for StringSplitOp & StringSplitV2Op.
     *   Performance improvements for regex replace operations.
     *   TFRecordWriter now raises an error if .write() fails.
     *   TPU: More helpful error messages in TPUClusterResolvers.
     *   The legacy_init_op argument to SavedModelBuilder methods for adding
         MetaGraphs has been deprecated. Please use the equivalent main_op
         argument instead. As part of this, we now explicitly check for a single
         main_op or legacy_init_op at the time of SavedModel building, whereas
         the check on main_op was previously only done at load time.
     *   The protocol used for Estimator training is now configurable in
         RunConfig.
     *   Triangular solve performance improvements.
      *   Unify the RNN cell interface between TF and Keras. Add a new
          get_initial_state() to Keras and TF RNN cells, which will be used to
          replace the existing zero_state() method.
     *   Update initialization of variables in Keras.
     *   Updates to "constrained_optimization" in tensorflow/contrib.
     *   boosted trees: adding pruning mode.
     *   tf.train.Checkpoint does not delete old checkpoints by default.
     *   tfdbg: Limit the total disk space occupied by dumped tensor data to 100
         GBytes. Add environment variable `TFDBG_DISK_BYTES_LIMIT` to allow
         adjustment of this upper limit.
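
    One concrete use of the new `implementation` argument noted above, as a sketch (the layer hyperparameters are hypothetical):

    from tensorflow import keras

    # implementation=2 runs the forward pass as a single dense matrix
    # multiplication and also permits padding='same'.
    layer = keras.layers.LocallyConnected2D(
        filters=8,
        kernel_size=(3, 3),
        padding='same',
        implementation=2,
        input_shape=(16, 16, 3))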
    
    Thanks to our Contributors
    
    This release contains contributions from many people at Google, as well as:
    
    Aapeli, adoda, Ag Ramesh, Amogh Mannekote, Andrew Gibiansky, Andy Craze, Anirudh Koul, Aurelien Geron, Avijit, Avijit-Nervana, Ben, Benjamin H. Myara, bhack, Brett Koonce, Cao Zongyan, cbockman, cheerss, Chikanaga Tomoyuki, Clayne Robison, cosine0, Cui Wei, Dan J, David, David Norman, Dmitry Klimenkov, Eliel Hojman, Florian Courtial, fo40225, formath, Geoffrey Irving, gracehoney, Grzegorz Pawelczak, Guoliang Hua, Guozhong Zhuang, Herman Zvonimir DošIlović, HuiyangFei, Jacker, Jan HüNnemeyer, Jason Taylor, Jason Zaman, Jesse, Jiang,Zhoulong, Jiawei Zhang, Jie, Joe Yearsley, Johannes Schmitz, Jon Perl, Jon Triebenbach, Jonathan, Jonathan Hseu, Jongmin Park, Justin Shenk, karlkubx.ca, Kate Hodesdon, Kb Sriram, Keishi Hattori, Kenneth Blomqvist, Koan-Sin Tan, Li Liangbin, Li, Yiqiang, Loo Rong Jie, Madiyar, Mahmoud Abuzaina, Mark Ryan, Matt Dodge, mbhuiyan, melvinljy96, Miguel Mota, Nafis Sadat, Nathan Luehr, naurril, Nehal J Wani, Niall Moran, Niranjan Hasabnis, Nishidha Panpaliya, npow, olicht, Pei Zhang, Peng Wang (Simpeng), Peng Yu, Philipp Jund, Pradeep Banavara, Pratik Kalshetti, qwertWZ, Rakesh Chada, Randy West, Ray Kim, Rholais Lii, Robin Richtsfeld, Rodrigo Silveira, Ruizhi, Santosh Kumar, Seb Bro, Sergei Lebedev, sfujiwara, Shaba Abhiram, Shashi, SneakyFish5, Soila Kavulya, Stefan Dyulgerov, Steven Winston, Sunitha Kambhampati, Surry Shome, Taehoon Lee, Thor Johnsen, Tristan Rice, TShapinsky, tucan, tucan9389, Vicente Reyes, Vilmar-Hillow, Vitaly Lavrukhin, wangershi, weidan.kong, weidankong, Wen-Heng (Jack) Chung, William D. Irons, Wim Glenn, XFeiF, Yan Facai (颜发才), Yanbo Liang, Yong Tang, Yoshihiro Yamazaki, Yuan (Terry) Tang, Yuan, Man, zhaoyongke, ÁRon
    Ricardo Perez-Lopez, 张天启, 张晓飞
    

    1.10.1

    Bug Fixes and Other Changes
    
     *   `tf.keras`:
      *   Fixed Keras on Cloud TPUs. No new binaries will be built for Windows.
    

    1.10.0

    Major Features and Improvements
    
    * The `tf.lite` runtime now supports `complex64`.
    * Initial [Google Cloud Bigtable integration](https://github.com/tensorflow/tensorflow/tree/r1.10/tensorflow/contrib/bigtable) for `tf.data`.
    * Improved local run behavior in `tf.estimator.train_and_evaluate` which does not reload checkpoints for evaluation.
    * `RunConfig` now sets device_filters to restrict how workers and PS can communicate. This can speed up training and help ensure clean shutdowns in some situations; see the sketch below.
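
    A rough sketch of restricting worker/PS communication with device filters. Here they are set manually through `tf.ConfigProto`; per the note above, `RunConfig` can now populate them automatically. The job/task names are hypothetical.

    import tensorflow as tf

    # Let this task see only the parameter servers and its own worker,
    # which avoids stale connections at shutdown.
    session_config = tf.ConfigProto(
        device_filters=['/job:ps', '/job:worker/task:0'])
    run_config = tf.estimator.RunConfig(session_config=session_config)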
An attempt at the implementation of GLOM, Geoffrey Hinton's paper for emergent part-whole hierarchies from data

GLOM TensorFlow This Python package attempts to implement GLOM in TensorFlow, which allows advances made by several different groups (transformers, neu

Rishit Dagli 32 Feb 21, 2022
This is an implementation of Google's Yogi-Optimizer in Keras (tf.keras)

Yogi-Optimizer_Keras This is an implementation of Google's Yogi-Optimizer in Keras (tf.keras) The NeurIPS-Paper can be found here: http://papers.nips.c

null 14 Sep 13, 2022
Keras udrl - Keras implementation of Upside Down Reinforcement Learning

keras_udrl Keras implementation of Upside Down Reinforcement Learning This is me

Eder Santana 7 Jan 24, 2022
FuseDream: Training-Free Text-to-Image Generation with Improved CLIP+GAN Space Optimization

FuseDream This repo contains code for our paper (paper link): FuseDream: Training-Free Text-to-Image Generation with Improved CLIP+GAN Space Optimizat

XCL 191 Dec 31, 2022
Replication attempt for the Protein Folding Model

RGN2-Replica (WIP) To eventually become an unofficial working Pytorch implementation of RGN2, a state-of-the-art model for MSA-less Protein Folding f

Eric Alcaide 36 Nov 29, 2022
Example-custom-ml-block-keras - Custom Keras ML block example for Edge Impulse

Custom Keras ML block example for Edge Impulse This repository is an example on

Edge Impulse 8 Nov 2, 2022
Classification models 1D Zoo - Keras and TF.Keras

Classification models 1D Zoo - Keras and TF.Keras This repository contains 1D variants of popular CNN models for classification like ResNets, DenseNet

Roman Solovyev 12 Jan 6, 2023
Simple torch.nn.module implementation of Alias-Free-GAN style filter and resample

Alias-Free-Torch Simple torch module implementation of Alias-Free GAN. This repository including Alias-Free GAN style lowpass sinc filter @filter.py A

이준혁(Junhyeok Lee) 64 Dec 22, 2022
Official PyTorch implementation of the paper "Recycling Discriminator: Towards Opinion-Unaware Image Quality Assessment Using Wasserstein GAN", accepted to ACM MM 2021 BNI Track.

RecycleD Official PyTorch implementation of the paper "Recycling Discriminator: Towards Opinion-Unaware Image Quality Assessment Using Wasserstein GAN

Yunan Zhu 23 Nov 5, 2022
This source code is implemented using keras library based on "Automatic ocular artifacts removal in EEG using deep learning"

CSP_Deep_EEG This source code is implemented using keras library based on "Automatic ocular artifacts removal in EEG using deep learning" {https://www

Seyed Mahdi Roostaiyan 2 Nov 8, 2022