Jupyter notebooks for the code samples of the book "Deep Learning with Python"

Overview

This repository contains Jupyter notebooks implementing the code samples found in the book Deep Learning with Python, 2nd Edition (Manning Publications).

For readability, these notebooks only contain runnable code blocks and section titles, and omit everything else in the book: text paragraphs, figures, and pseudocode. If you want to be able to follow what's going on, I recommend reading the notebooks side by side with your copy of the book.

These notebooks use TensorFlow 2.6.

Issues
  • How can I find the pre-trained model "cats_and_dogs_small_2.h5"?

    Sir, how can I find the pre-trained model "cats_and_dogs_small_2.h5" mentioned in 5.4-visualizing-what-convnets-learn.ipynb? Thank you!
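
    One likely answer (an inference, not from this thread): the file isn't shipped with the repository; it is written to disk by the earlier cats-vs-dogs notebook (5.2), whose training run ends by saving the model:

        model.save('cats_and_dogs_small_2.h5')

    Running that notebook to completion produces the file that 5.4 then loads.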

    opened by wizephen 4
  • Unclear how to download data used in 6.3

    The instructions in 6.3 say the data is

    recorded at the Weather Station at the Max-Planck-Institute for Biogeochemistry in Jena, Germany: http://www.bgc-jena.mpg.de/wetter/.

    The data file name in the code is jena_climate_2009_2016.csv. The website allows us to download data in 6-month increments, so the period 2009-2016 is split into 16 files.

    Were these files concatenated to form the file used in the notebook? If so, a sentence to that effect might clarify where the data came from. Alternatively, if jena_climate_2009_2016.csv isn't the concatenation of these files, I think it's unclear where a reader would find the data.
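
    If the sixteen files do need concatenating, a minimal sketch with pandas (the folder name and download layout are assumptions):

        import glob
        import pandas as pd

        # Assumes the 6-month CSV exports were saved into ./jena/
        parts = sorted(glob.glob('jena/*.csv'))
        combined = pd.concat((pd.read_csv(p) for p in parts), ignore_index=True)
        combined.to_csv('jena_climate_2009_2016.csv', index=False)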

    opened by dansbecker 3
  • typos in 6.1

    Fixed some typos and bugs.

    opened by hiroyachiba 3
  • 6.3 jena_climate target leakage problem

    The author forgot to delete the temperature column from float_data, which is used as the pool for sampling. As a result, the outcome looks perfect due to target leakage.

    Delete the target column when you try the code yourself, though it will show you how badly the network really performs. 😄

        import numpy as np

        temp = float_data[:, 1]                      # keep the target separately
        a = float_data[:, 0]
        a = np.reshape(a, newshape=(len(a), 1))
        b = float_data[:, 2:]                        # drop column 1 (temperature)
        print(a.shape, b.shape)
        float_data = np.concatenate([a, b], axis=1)

    opened by Dolores2333 2
  • 5.4-visualizing-what-convnets-learn input_13:0 is both fed and fetched error

    Using Keras 2.2.4, I'm working my way through the 5.4-visualizing-what-convnets-learn notebook, except that I switched the model for a U-Net provided by the Kaggle-Carvana-Image-Masking-Challenge repository. The first layer of the Kaggle model looks like this, followed by the rest of the example code.

    def get_unet_512(input_shape=(512, 512, 3),
                     num_classes=1):
        inputs = Input(shape=input_shape)
    
    ...
    
    Layer (type)                    Output Shape         Param #     Connected to                     
    ==================================================================================================
    input_13 (InputLayer)           (None, 512, 512, 3)  0    
    ...
    
    from keras import models
    layer_outputs = [layer.output for layer in model.layers[:8]]
    activation_model = models.Model(inputs=model.input, outputs=layer_outputs)
    activations = activation_model.predict(img_tensor)
    
    

    Now the error I am getting is

    InvalidArgumentError: input_13:0 is both fed and fetched.
    

    Does anyone have any suggestions on how to work around this?
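
    One workaround worth trying (an assumption, not a confirmed fix from this thread): the InputLayer's output is model.input itself, so including it in layer_outputs asks TensorFlow to both feed and fetch the same tensor. Skipping the input layer avoids that:

        from keras import models

        # Start at index 1 so model.input is not also requested as an output
        layer_outputs = [layer.output for layer in model.layers[1:8]]
        activation_model = models.Model(inputs=model.input, outputs=layer_outputs)
        activations = activation_model.predict(img_tensor)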

    opened by yhatpub 2
  • issue with Colab (second edition) chapter 11 part 1

    Hi @fchollet, thanks for this amazing book (and the corresponding colab notebooks).

    I tried to run this file https://colab.research.google.com/github/fchollet/deep-learning-with-python-notebooks/blob/master/chapter11_part01_introduction.ipynb#scrollTo=xdw1FYamgsP9

    But I am getting the following error

    from tensorflow.keras.layers import TextVectorization
    text_vectorization = TextVectorization(
        output_mode="int",
    )
    ---------------------------------------------------------------------------
    ImportError                               Traceback (most recent call last)
    <ipython-input-1-0642862b90e9> in <module>()
    ----> 1 from tensorflow.keras.layers import TextVectorization
          2 text_vectorization = TextVectorization(
          3     output_mode="int",
          4 )
    
    ImportError: cannot import name 'TextVectorization' from 'tensorflow.keras.layers' (/usr/local/lib/python3.7/dist-packages/tensorflow/keras/layers/__init__.py)
    

    Do you know what the issue is? Thanks!
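
    In case it helps (an assumption about the cause): TextVectorization was only promoted to tensorflow.keras.layers in TensorFlow 2.6; on earlier versions the layer lives under the experimental namespace:

        # On TensorFlow 2.3-2.5:
        from tensorflow.keras.layers.experimental.preprocessing import TextVectorization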

    opened by randomgambit 2
  • Listing 6.37. Training and evaluating a densely connected model - stalls on first epoch

    Having trouble getting listing 6.37 to work. The model stalls on the first epoch. Getting the following output, but it never reaches the end of epoch 1:

    Epoch 1/20
    496/500 [============================>.] - ETA: 0s - loss: 1.2985
    

    Prior code in this chapter and code in prior chapters works fine. Any suggestions?
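
    One thing worth checking (a guess, not a confirmed diagnosis): the chapter's generators yield batches forever, so fit_generator hangs at the end of an epoch if validation_steps is missing or too large. The book computes it from the split boundaries:

        # Number of batches to draw from val_gen for one validation pass
        val_steps = (300000 - 200001 - lookback) // batch_size

        history = model.fit_generator(train_gen,
                                      steps_per_epoch=500,
                                      epochs=20,
                                      validation_data=val_gen,
                                      validation_steps=val_steps)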

    Thanks,

    opened by jswift24 2
  • optimizers.RMSprop

    I am trying some code from your fantastic book, but got an unusual error. Please take a look, thank you. On p. 73:

        model.compile(optimizer=optimizers.RMSprop(learning_rate=0.001),
                      loss='binary_crossentropy',
                      metrics=['accuracy'])

    Error: module 'keras.optimizers' has no attribute 'RMSprop'
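
    A likely cause (an assumption based on the error text): with recent TensorFlow releases the standalone keras.optimizers module no longer exposes RMSprop directly, while tensorflow.keras does; importing optimizers from there avoids the mismatch:

        from tensorflow.keras import optimizers

        model.compile(optimizer=optimizers.RMSprop(learning_rate=0.001),
                      loss='binary_crossentropy',
                      metrics=['accuracy'])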

    opened by Raghav-Bell 2
  • 6.3: AttributeError: module 'numpy.random' has no attribute 'randit'

    Hello,

    I am currently getting the error AttributeError: module 'numpy.random' has no attribute 'randit' when running model.fit_generator(). However, numpy.random.randit() works fine when we call generator() directly.

    I have run pip3 install numpy as well as python3 -m pip install numpy to make sure numpy is up to date.

    I am currently running a Jupyter notebook on an Ubuntu 16.04 machine.
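
    Worth noting (an observation, not from the original report): NumPy has no randit function at all; the chapter's generator uses np.random.randint, so this error usually points to a typo in a re-typed copy of the generator:

        import numpy as np

        # The call in the book's generator is randint, not randit
        rows = np.random.randint(min_index + lookback, max_index, size=batch_size)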

    opened by combstraight 1
  • 6.3 Standardization of validation and test data

    I have 2 concerns:

    1. The training data is standardized using the mean and standard deviation of the training set. Shouldn't the validation and test sets be standardized using their own respective means and standard deviations as well? I can't see this done anywhere. (See the sketch after this list.)
    2. Wouldn't it be better to refer to the procedure as standardizing rather than normalizing? If nothing else, this would conform to scikit-learn's terminology.
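
    For reference, a sketch of what the notebook actually does (it applies the training-set statistics to every split, which is the conventional way to avoid leaking information from the test data):

        # Statistics computed on the first 200,000 timesteps (the training split)
        mean = float_data[:200000].mean(axis=0)
        std = float_data[:200000].std(axis=0)

        # The same statistics are applied to the whole array,
        # including the validation and test portions
        float_data -= mean
        float_data /= std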
    opened by gmohandas 1
  • 6 cpt

    from tensorflow.keras.models import Sequential
    from tensorflow.keras.layers import Embedding, Flatten, Dense
    from tensorflow.keras.preprocessing.text import Tokenizer
    from tensorflow.keras.preprocessing.sequence import pad_sequences
    import numpy as np
    import os

    glove_dir = '../Desktop/Python/Deep_learning_with_python/glove/glove.6B'

    maxlen = 100
    training_samples = 200
    validation_samples = 10000
    max_words = 10000
    embeddings_index = {}
    texts = []
    labels = []

    f = open(os.path.join(glove_dir, 'glove.6B.100d.txt'))
    for line in f:
        values = line.split()
        word = values[0]
        coefs = np.asarray(values[1:], dtype='float32')
        embeddings_index[word] = coefs
    f.close()

    tokenizer = Tokenizer(num_words=max_words)
    tokenizer.fit_on_texts(texts)
    word_index = tokenizer.word_index
    sequences = tokenizer.texts_to_sequences(texts)

    print('Found %s word vectors.' % len(embeddings_index))

    embedding_dim = 100

    embedding_matrix = np.zeros((max_words, embedding_dim))
    for word, i in word_index.items():
        embedding_vector = embeddings_index.get(word)
        if i < max_words:
            if embedding_vector is not None:
                embedding_matrix[i] = embedding_vector

    model = Sequential()
    model.add(Embedding(max_words, embedding_dim, input_length=maxlen))
    model.add(Flatten())
    model.add(Dense(32, activation='relu'))
    model.add(Dense(1, activation='sigmoid'))
    model.summary()

    model.layers[0].set_weights([embedding_matrix])
    model.layers[0].trainable = False

    Output:

    Found 400000 word vectors.
    Metal device set to: Apple M1
    Model: "sequential"

    Layer (type)              Output Shape         Param #
    ======================================================
    embedding (Embedding)     (None, 100, 100)     1000000
    flatten (Flatten)         (None, 10000)        0
    dense (Dense)             (None, 32)           320032
    dense_1 (Dense)           (None, 1)            33
    ======================================================
    Total params: 1,320,065
    Trainable params: 1,320,065
    Non-trainable params: 0

    The next cell:

    data = pad_sequences(sequences, maxlen=maxlen)
    labels = np.asarray(labels)

    indices = np.arange(data.shape[0])
    np.random.shuffle(indices)
    data = data[indices]
    labels = labels[indices]

    x_train = data[:training_samples]
    y_train = labels[:training_samples]
    x_val = data[training_samples:training_samples + validation_samples]
    y_val = labels[training_samples:training_samples + validation_samples]

    model.compile(optimizer='rmsprop',
                  loss='binary_crossentropy',
                  metrics=['acc'])
    history = model.fit(x_train, y_train,
                        epochs=10,
                        batch_size=32,
                        validation_data=(x_val, y_val))
    model.save_weights('pre_trained_glove_model.h5')

    import matplotlib.pyplot as plt

    acc = history.history['acc']
    val_acc = history.history['val_acc']
    loss = history.history['loss']
    val_loss = history.history['val_loss']

    epochs = range(1, len(acc) + 1)

    plt.plot(epochs, acc, 'bo', label='Training acc')
    plt.plot(epochs, val_acc, 'b', label='Validation acc')
    plt.title('Training and validation accuracy')
    plt.legend()

    plt.figure()

    plt.plot(epochs, loss, 'bo', label='Training loss')
    plt.plot(epochs, val_loss, 'b', label='Validation loss')
    plt.title('Training and validation loss')
    plt.legend()

    plt.show()

    This fails at Epoch 1/10 with:

    ValueError                                Traceback (most recent call last)
    /var/folders/sg/p3l_dvx57d1_f3b50jd7b6_h0000gn/T/ipykernel_25599/4013697431.py in <module>
         15                 loss='binary_crossentropy',
         16                 metrics=['acc'])
    ---> 17 history = model.fit(x_train, y_train,

    ValueError: Expect x to be a non-empty array or dataset.
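
    A plausible diagnosis (not stated in the original report): texts and labels are initialized as empty lists and never filled, so the tokenizer is fitted on nothing and x_train ends up empty, which is exactly what the ValueError complains about. In the book, listing 6.8 populates them from the raw IMDB data before tokenizing; a sketch of that missing step (the imdb_dir path is an assumption):

        imdb_dir = '/path/to/aclImdb'  # assumed location of the raw IMDB dataset
        train_dir = os.path.join(imdb_dir, 'train')
        for label_type in ['neg', 'pos']:
            dir_name = os.path.join(train_dir, label_type)
            for fname in os.listdir(dir_name):
                if fname.endswith('.txt'):
                    with open(os.path.join(dir_name, fname)) as fin:
                        texts.append(fin.read())
                    labels.append(0 if label_type == 'neg' else 1)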

    opened by Chosseppe 0
  • sad reader of translated version dlwp

    Hi. I don't know whether you are going to read this post or not, but reading this book and rewriting the code is a real pain compared with O'Reilly books about Python. I don't think simplifying the code is a bad idea, but when I want to show someone something or learn it myself, I try to do it the official, Pythonic way. A second thing: when I rewrote the whole of the code for 6.15, I got NameError: name 'word_index' is not defined. I can't find its definition anywhere.
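
    For what it's worth (an inference about the cause): word_index is created by the tokenizer a few listings earlier, and 6.15 assumes it is still in scope:

        from keras.preprocessing.text import Tokenizer

        tokenizer = Tokenizer(num_words=max_words)   # max_words = 10000 in the chapter
        tokenizer.fit_on_texts(texts)                # texts: the raw IMDB reviews
        word_index = tokenizer.word_index            # the name listing 6.15 refers to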

    opened by Chosseppe 0
  • Could you create an "Open in Google Colab" button?

    The notebook doesn't show the results, only the code; it would be better with an "Open in Colab" button.

    opened by ladylazy9x 0
  • Chapter 12, part 5, GANs - Model cannot be saved because the input shapes have not been set.

    Using the code on GANs, I'm unable to save the model in TF SavedModel format. Using TensorFlow 2.6.0 in a Kaggle notebook.

    I get the following error:

    ValueError: Model <__main__.GAN object at 0x7fa982c57810> cannot be saved because the input 
    shapes have not been set. Usually, input shapes are automatically determined from calling 
    `.fit()` or `.predict()`. To manually set the shapes, call `model.build(input_shape)`.
    

    I have attempted to resolve this by calling model.build(input_shape) after compilation and adding a call method to the GAN subclass, but I'm still seeing the same issue.
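
    One workaround that may help (an assumption, not a confirmed fix): the GAN wrapper in that chapter is a training harness rather than a model with a fixed input signature, so instead of saving the wrapper, save the submodels that do have defined input shapes:

        # The chapter's GAN subclass keeps the two real models as attributes
        gan.generator.save('generator')          # TF SavedModel format
        gan.discriminator.save('discriminator')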

    opened by Pappa 0
  • Issue related to the book not the notebook

    Hi,

    in the book, Chapter 9, Advanced deep learning for computer vision, 9.2 An image segmentation example, after figure 9.4:

    In the code there is a parameter img_size :

    img_size = (200, 200)
    

    And the comment states :

    We resize everything to 180x180, like in the last chapter.

    I think it's a typo.

    opened by MadMenHitBooker 0
  • Deep learning

    opened by svrameshds 0
  • GPU on Notebooks running on Big Sur w/ tensorflow-macos / tensorflow-metal

    I was evaluating whether these notebooks work with my AMD Radeon Pro 5700 XT. I have been able to get Keras models to use the GPU; however, the 'chapter07_working_with_keras' and 'chapter11_part04_sequence_to_sequence' notebooks do not appear to be using the GPU. I installed tensorflow-macos and tensorflow-metal following these instructions:

    https://developer.apple.com/metal/tensorflow-plugin/

    I had to create the virtual environment with Apple's python 3.8.2. Anaconda's python 3.8.5 didn't work. E.g. /Library/Developer/CommandLineTools/usr/bin/python3 -m venv tensorflow-metal

    2021-07-15 12:03:01.235594: W tensorflow/core/framework/op_kernel.cc:1767] OP_REQUIRES failed at partitioned_function_ops.cc:114 : Invalid argument: No OpKernel was registered to support Op 'CudnnRNNV3' used by {{node cond_41/then/_0/cond/CudnnRNNV3}} with these attrs: [num_proj=0, time_major=false, dropout=0, seed=0, T=DT_FLOAT, input_mode="linear_input", direction="unidirectional", rnn_mode="gru", is_training=true, seed2=0]
    Registered devices: [CPU, GPU]
    Registered kernels:
      <no registered kernels>
    
    	 [[cond_41/then/_0/cond/CudnnRNNV3]]
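
    A possible explanation (an assumption, not established in this report): the sequence notebooks use GRU/LSTM layers that dispatch to the fused CudnnRNNV3 kernel, which has no registration for the Metal PluggableDevice. Keras only picks the fused kernel when the layer's arguments are cuDNN-compatible, so one hedged workaround is to force the generic implementation:

        import tensorflow as tf

        # recurrent_dropout > 0 makes the layer cuDNN-incompatible, so Keras
        # falls back to the generic (portable, slower) GRU implementation
        x = tf.keras.layers.GRU(32, recurrent_dropout=0.01)(inputs)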
    
    % pip list
    Package                  Version
    ------------------------ -------------------
    absl-py                  0.12.0
    appnope                  0.1.2
    astunparse               1.6.3
    attrs                    21.2.0
    backcall                 0.2.0
    cachetools               4.2.2
    certifi                  2021.5.30
    charset-normalizer       2.0.1
    Cython                   0.29.24
    debugpy                  1.3.0
    decorator                5.0.9
    dill                     0.3.4
    flatbuffers              1.12
    future                   0.18.2
    gast                     0.4.0
    google-auth              1.32.1
    google-auth-oauthlib     0.4.4
    google-pasta             0.2.0
    googleapis-common-protos 1.53.0
    grpcio                   1.34.1
    h5py                     3.1.0
    idna                     3.2
    importlib-resources      5.2.0
    ipykernel                6.0.1
    ipython                  7.25.0
    ipython-genutils         0.2.0
    jedi                     0.18.0
    jupyter-client           6.1.12
    jupyter-core             4.7.1
    keras-nightly            2.5.0.dev2021032900
    Keras-Preprocessing      1.1.2
    Markdown                 3.3.4
    matplotlib-inline        0.1.2
    numpy                    1.19.5
    oauthlib                 3.1.1
    opt-einsum               3.3.0
    parso                    0.8.2
    pexpect                  4.8.0
    pickleshare              0.7.5
    pip                      21.1.3
    promise                  2.3
    prompt-toolkit           3.0.19
    protobuf                 3.17.3
    ptyprocess               0.7.0
    pyasn1                   0.4.8
    pyasn1-modules           0.2.8
    pybind11                 2.6.2
    Pygments                 2.9.0
    python-dateutil          2.8.2
    pyzmq                    22.1.0
    requests                 2.26.0
    requests-oauthlib        1.3.0
    rsa                      4.7.2
    setuptools               41.2.0
    six                      1.15.0
    tensorboard              2.5.0
    tensorboard-data-server  0.6.1
    tensorboard-plugin-wit   1.8.0
    tensorflow-datasets      4.3.0
    tensorflow-estimator     2.5.0
    tensorflow-macos         2.5.0
    tensorflow-metadata      1.1.0
    tensorflow-metal         0.1.1
    termcolor                1.1.0
    tornado                  6.1
    tqdm                     4.61.2
    traitlets                5.0.5
    typing-extensions        3.7.4.3
    urllib3                  1.26.6
    wcwidth                  0.2.5
    Werkzeug                 2.0.1
    wheel                    0.36.2
    wrapt                    1.12.1
    zipp                     3.5.0
    
    
    (Two screenshots of GPU activity attached.)

    This Keras code successfully runs on the GPU:

    import tensorflow_datasets as tfds
    import tensorflow as tf
    
    tf.compat.v1.enable_v2_behavior()
    
    from tensorflow.python.framework.ops import disable_eager_execution
    disable_eager_execution()
    
    
    (ds_train, ds_test), ds_info = tfds.load(
        'mnist',
        split=['train', 'test'],
        shuffle_files=True,
        as_supervised=True,
        with_info=True,
    )
    
    def normalize_img(image, label):
      """Normalizes images: `uint8` -> `float32`."""
      return tf.cast(image, tf.float32) / 255., label
    
    batch_size = 128
    
    ds_train = ds_train.map(
        normalize_img, num_parallel_calls=tf.data.experimental.AUTOTUNE)
    ds_train = ds_train.cache()
    ds_train = ds_train.shuffle(ds_info.splits['train'].num_examples)
    ds_train = ds_train.batch(batch_size)
    ds_train = ds_train.prefetch(tf.data.experimental.AUTOTUNE)
    
    
    ds_test = ds_test.map(
        normalize_img, num_parallel_calls=tf.data.experimental.AUTOTUNE)
    ds_test = ds_test.batch(batch_size)
    ds_test = ds_test.cache()
    ds_test = ds_test.prefetch(tf.data.experimental.AUTOTUNE)
    
    
    model = tf.keras.models.Sequential([
      tf.keras.layers.Conv2D(32, kernel_size=(3, 3),
                     activation='relu'),
      tf.keras.layers.Conv2D(64, kernel_size=(3, 3),
                     activation='relu'),
      tf.keras.layers.MaxPooling2D(pool_size=(2, 2)),
    #   tf.keras.layers.Dropout(0.25),
      tf.keras.layers.Flatten(),
      tf.keras.layers.Dense(128, activation='relu'),
    #   tf.keras.layers.Dropout(0.5),
      tf.keras.layers.Dense(10, activation='softmax')
    ])
    model.compile(
        loss='sparse_categorical_crossentropy',
        optimizer=tf.keras.optimizers.Adam(0.001),
        metrics=['accuracy'],
    )
    
    model.fit(
        ds_train,
        epochs=12,
        validation_data=ds_test,
    )
    
    (Screenshot of GPU activity during this run attached.)
    opened by dbl001 0
  • K.gradients (chapter 8)

    I am implementing "Deep Dream" but got an error. Code: grads = K.gradients(loss, dream)[0]. Error: tf.gradients is not supported when eager execution is enabled. Use tf.GradientTape instead. I have tried every method described here, but nothing worked for me.
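
    A sketch of the eager-mode replacement the error message points at (dream is the image tensor being optimized; compute_loss is a hypothetical stand-in for the notebook's loss over layer activations):

        import tensorflow as tf

        with tf.GradientTape() as tape:
            tape.watch(dream)                # needed if dream is a plain tensor
            loss = compute_loss(dream)
        grads = tape.gradient(loss, dream)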

    opened by Raghav-Bell 0
  • adding three Jupyter notebooks for 7.1.1, 7.1.2 and 7.1.3

    The original code in 7.1.2 is wrong: the parameters in the Embedding lines are in the wrong order. This PR fixes them.

    opened by chaowu2009 1
  • Variational Autoencoders (Listing 8.27)

    The code below is causing an error. There seems to be a problem with batch_size.

    vae.fit(x=x_train, y=None,
            shuffle=True,
            epochs=10,
            batch_size=batch_size,
            validation_data=(x_test, None))
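
    One thing to check (a guess, not confirmed by this thread): the sampling layer must not hard-code batch_size, or fitting fails when the number of samples isn't an exact multiple of it. A batch-size-agnostic version of the sampling function:

        from keras import backend as K

        def sampling(args):
            z_mean, z_log_var = args
            # Derive the batch dimension from the tensor itself
            epsilon = K.random_normal(shape=(K.shape(z_mean)[0], latent_dim),
                                      mean=0., stddev=1.)
            return z_mean + K.exp(0.5 * z_log_var) * epsilon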

    opened by cwk20 0
Owner
François Chollet