Jupyter notebooks for the code samples of the book "Deep Learning with Python"

Overview

Companion Jupyter notebooks for the book "Deep Learning with Python"

This repository contains Jupyter notebooks implementing the code samples found in the book Deep Learning with Python, 2nd Edition (Manning Publications).

For readability, these notebooks only contain runnable code blocks and section titles, and omit everything else in the book: text paragraphs, figures, and pseudocode. If you want to be able to follow what's going on, I recommend reading the notebooks side by side with your copy of the book.

These notebooks use Python 3.7 and Keras 2.0.8. They were generated on a p2.xlarge EC2 instance.

Issues
  • Edited codeblock 7.26: Made the CustomModel independent of previous model.

    In code block 7.26, if the sub-block is executed on its own, it raises NameError: name 'model' is not defined. The CustomModel subclass uses the optimizer and trainable weights that were defined in the previous model. By accessing the optimizer and trainable weights that are passed to CustomModel's compile(), the subclass can be made independent of the previously defined model, which eliminates the error.
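
    A sketch of one way to make such a subclass self-contained (illustrative, not the book's exact listing): compute the loss with an explicit loss object and reach the optimizer and weights through self, so nothing in train_step refers to an earlier model variable.

```python
import tensorflow as tf
from tensorflow import keras

loss_fn = keras.losses.MeanSquaredError()

class CustomModel(keras.Model):
    def train_step(self, data):
        inputs, targets = data
        with tf.GradientTape() as tape:
            predictions = self(inputs, training=True)
            loss = loss_fn(targets, predictions)
        # Use the optimizer and weights attached to *this* model, not ones
        # captured from a previously defined `model` variable.
        gradients = tape.gradient(loss, self.trainable_weights)
        self.optimizer.apply_gradients(zip(gradients, self.trainable_weights))
        return {"loss": loss}
```

    Because the loss object and self are the only dependencies, the cell runs on its own once the model is compiled with an optimizer.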

    opened by var-nan 3
  • Unclear how to download data used in 6.3

    The instructions in 6.3 say the data is

    recorded at the Weather Station at the Max-Planck-Institute for Biogeochemistry in Jena, Germany: http://www.bgc-jena.mpg.de/wetter/.

    The data file name in the code is jena_climate_2009_2016.csv. The website allows us to download data in 6-month increments, so the period 2009-2016 is split into 16 files.

    Were these files concatenated to form the file used in the notebook? If so, a sentence to that effect might clarify where the data came from. Alternatively, if jena_climate_2009_2016.csv isn't the concatenation of these files, I think it's unclear where a reader would find the data.
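
    For reference, the 16 half-year files could be merged with a short script like the following (a sketch, assuming each file carries the same header row; the filenames are hypothetical):

```python
import glob

def concatenate_csvs(pattern, out_path):
    """Concatenate CSV files matching `pattern`, keeping a single header row."""
    paths = sorted(glob.glob(pattern))
    with open(out_path, "w") as out:
        for i, path in enumerate(paths):
            with open(path) as f:
                header = f.readline()
                if i == 0:
                    out.write(header)  # write the header only once
                for line in f:
                    out.write(line)

# e.g. concatenate_csvs("mpi_roof_*.csv", "jena_climate_2009_2016.csv")
```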

    opened by dansbecker 3
  • issue with Colab (second edition) chapter 11 part 1

    Hi @fchollet, thanks for this amazing book (and the corresponding colab notebooks).

    I tried to run this file https://colab.research.google.com/github/fchollet/deep-learning-with-python-notebooks/blob/master/chapter11_part01_introduction.ipynb#scrollTo=xdw1FYamgsP9

    But I am getting the following error

    from tensorflow.keras.layers import TextVectorization
    text_vectorization = TextVectorization(
        output_mode="int",
    )
    ---------------------------------------------------------------------------
    ImportError                               Traceback (most recent call last)
    <ipython-input-1-0642862b90e9> in <module>()
    ----> 1 from tensorflow.keras.layers import TextVectorization
          2 text_vectorization = TextVectorization(
          3     output_mode="int",
          4 )
    
    ImportError: cannot import name 'TextVectorization' from 'tensorflow.keras.layers' (/usr/local/lib/python3.7/dist-packages/tensorflow/keras/layers/__init__.py)
    

    Do you know what the issue is? Thanks!
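
    In older TensorFlow releases (before 2.6), TextVectorization lived under the experimental preprocessing module, so a version-tolerant import like this sketch may work around the error:

```python
try:
    from tensorflow.keras.layers import TextVectorization  # TF >= 2.6
except ImportError:
    # Older TF (e.g. what Colab shipped at the time) kept the layer here.
    from tensorflow.keras.layers.experimental.preprocessing import TextVectorization

text_vectorization = TextVectorization(output_mode="int")
```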

    opened by randomgambit 2
  • optimizers.RMSprop

    I am trying some code from your fantastic book, but got an unexpected error. Please take a look, thank you.

    model.compile(optimizer=optimizers.RMSprop(learning_rate=0.001),
                  loss='binary_crossentropy',
                  metrics=['accuracy'])

    (on p. 73) Error: module 'keras.optimizers' has no attribute 'RMSprop'
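
    A sketch of a likely fix: import the optimizer from tensorflow.keras rather than standalone keras, where RMSprop is exposed under that name (the tiny model here is illustrative):

```python
from tensorflow import keras
from tensorflow.keras import optimizers

model = keras.Sequential([
    keras.Input(shape=(4,)),
    keras.layers.Dense(1, activation='sigmoid'),
])
# Importing the optimizer from tensorflow.keras (rather than standalone
# keras.optimizers) avoids "module 'keras.optimizers' has no attribute 'RMSprop'".
model.compile(optimizer=optimizers.RMSprop(learning_rate=0.001),
              loss='binary_crossentropy',
              metrics=['accuracy'])
```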

    opened by Raghav-Bell 2
  • 6.3 jena_climate target leakage problem

    The author forgot to delete the temperature column from float_data, which is used as the pool for sampling. As a result, the outcome looks perfect due to target leakage.

    Delete the target column when you try the code yourself, though it will show you how badly the network performs. 😄

    temp = float_data[:, 1]
    a = float_data[:, 0]
    a = np.reshape(a, newshape=(len(a), 1))
    b = float_data[:, 2:]
    print(a.shape, b.shape)
    float_data = np.concatenate([a, b], axis=1)
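
    The same column removal can be written more directly with np.delete (a sketch on synthetic data; float_data here stands in for the Jena array, with the temperature in column 1):

```python
import numpy as np

float_data = np.random.rand(100, 14)       # stand-in for the Jena array
temp = float_data[:, 1].copy()             # keep the target series
inputs = np.delete(float_data, 1, axis=1)  # drop the temperature column
print(inputs.shape)                        # one fewer feature column
```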

    opened by Dolores2333 2
  • 5.4-visualizing-what-convnets-learn input_13:0 is both fed and fetched error

    Using Keras 2.2.4, I'm working my way through the notebook 5.4-visualizing-what-convnets-learn, except I switched the model for a U-Net provided by the Kaggle-Carvana-Image-Masking-Challenge. The first layer of the Kaggle model looks like this, followed by the rest of the example code.

    def get_unet_512(input_shape=(512, 512, 3),
                     num_classes=1):
        inputs = Input(shape=input_shape)
    
    ...
    
    Layer (type)                    Output Shape         Param #     Connected to                     
    ==================================================================================================
    input_13 (InputLayer)           (None, 512, 512, 3)  0    
    ...
    
    from keras import models
    layer_outputs = [layer.output for layer in model.layers[:8]]
    activation_model = models.Model(inputs=model.input, outputs=layer_outputs)
    activations = activation_model.predict(img_tensor)
    
    

    Now the error I am getting is

    InvalidArgumentError: input_13:0 is both fed and fetched.
    

    Does anyone have any suggestions on how to work around this?
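
    One workaround sketch (on a hypothetical small model): exclude any InputLayer from the requested outputs, so the model's input tensor is never simultaneously fed and fetched.

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    keras.Input(shape=(32, 32, 3)),
    layers.Conv2D(4, 3, activation="relu"),
    layers.MaxPooling2D(2),
])

# Exclude any InputLayer so the model's input tensor is never also requested
# as an output, which is what triggers "is both fed and fetched".
layer_outputs = [layer.output for layer in model.layers
                 if not isinstance(layer, layers.InputLayer)]
activation_model = keras.Model(inputs=model.input, outputs=layer_outputs)
activations = activation_model.predict(np.random.rand(1, 32, 32, 3), verbose=0)
```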

    opened by matthewchung74 2
  • Listing 6.37. Training and evaluating a densely connected model - stalls on first epoch

    Having trouble getting listing 6.37 to work. The model stalls on the first epoch. Getting the following output, but it never reaches the end of epoch 1:

    Epoch 1/20
    496/500 [============================>.] - ETA: 0s - loss: 1.2985
    

    Prior code in this chapter and code in prior chapters works fine. Any suggestions?

    Thanks,

    opened by jswift24 2
  • Chap 07: AttributeError: 'SparseCategoricalAccuracy' object has no attribute 'reset_state'

    • title above code block: Writing a step-by-step training loop: the loop itself

    • error output: Screenshot from 2021-12-31 09-05-29

    • TF, TF-base v2.4.1; TF-estimator v2.6.0

    • conda v4.11.0

    • Ubuntu 21.10

    opened by bjpcjp 1
  • listing 3.10 should history key be "accuracy" and "val_accuracy" instead of "acc" and "val_acc"?

    On the book:

    acc = history.history['acc']
    val_acc = history.history['val_acc']
    

    But if I type that, it results in an error because those keys don't exist in history. When I call history.history.keys(), it shows "accuracy" and "val_accuracy" instead. Is this a mistake on my part?
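
    This is a rename between Keras versions ('acc' became 'accuracy' around Keras 2.3), not a mistake on your part. A small helper can make the plotting code tolerant of either naming (a sketch; the function name is made up):

```python
def metric_series(history_dict, name):
    """Fetch a metric series, tolerating old ('acc') and new ('accuracy') key names."""
    aliases = {"accuracy": "acc", "val_accuracy": "val_acc",
               "acc": "accuracy", "val_acc": "val_accuracy"}
    for key in (name, aliases.get(name)):
        if key in history_dict:
            return history_dict[key]
    raise KeyError(name)
```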

    Here's the full code:

    from keras.datasets import reuters
    from tensorflow.keras import models
    from tensorflow.keras import layers
    import numpy as np
    import matplotlib.pyplot as plt
    
    def vectorize_sequences(sequences, dimension = 10000):
    	results = np.zeros((len(sequences),dimension))
    	for i, sequence in enumerate(sequences):
    		results[i, sequence] = 1
    	return results
    
    def to_one_hot(labels, dimension=46):
    	results = np.zeros((len(labels), dimension))
    	for i, label in enumerate(labels):
    		results[i, label] = 1
    	return results
    
    (train_data, train_labels), (test_data, test_labels) = reuters.load_data(num_words=10000)
    
    x_train = vectorize_sequences(train_data)
    x_test = vectorize_sequences(test_data)
    
    one_hot_train_label = to_one_hot(train_labels)
    one_hot_test_label = to_one_hot(test_labels)
    
    # one_hot_train_label = to_categorical(train_labels)
    # one_hot_test_label = to_categorical(test_labels)
    
    model = models.Sequential()
    model.add(layers.Dense(64, activation='relu', input_shape=(10000,)))
    model.add(layers.Dense(64, activation='relu'))
    model.add(layers.Dense(46, activation='softmax'))
    
    model.compile(optimizer='rmsprop',
                  loss='categorical_crossentropy',
                  metrics=['accuracy'])
    
    x_val = x_train[:1000]
    partial_x_train = x_train[1000:]
    
    y_val = one_hot_train_label[:1000]
    partial_y_train = one_hot_train_label[1000:]
    
    history = model.fit(partial_x_train,
                        partial_y_train,
                        epochs=20,
                        batch_size=512,
                        validation_data=(x_val, y_val))
    
    print("History key: ", history.history.keys())
    
    loss = history.history['loss']
    val_loss = history.history['val_loss']
    
    epochs = range(1, len(loss) + 1)
    
    plt.plot(epochs, loss, 'bo', label='Training loss')
    plt.plot(epochs, val_loss, 'b', label='Validation loss')
    plt.title('Training and validation loss')
    plt.xlabel('Epochs')
    plt.ylabel('Loss')
    plt.legend()
    
    plt.show()
    
    plt.clf()   # clear figure
    
    acc = history.history['accuracy']
    val_acc = history.history['val_accuracy']
    
    plt.plot(epochs, acc, 'bo', label='Training acc')
    plt.plot(epochs, val_acc, 'b', label='Validation acc')
    plt.title('Training and validation accuracy')
    plt.xlabel('Epochs')
    plt.ylabel('Accuracy')
    plt.legend()
    
    plt.show()
    
    opened by warrenbocphet 1
  • VGG 16 layers not loading

    Hi,

    I tried to use VGG16 with Keras for the first time. I loaded the model offline by passing a file path to the weights parameter. On model.summary() it does not show the flatten, fc1, and fc2 layers; it only shows layers up to block5_pool. Why is that? On proceeding further, I get model output predictions of shape (1, 512, 7, 7), which causes an error in decode_predictions. Please help.
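
    This usually means the model was built with include_top=False, which drops the flatten/fc1/fc2/predictions layers and leaves a 7×7×512 feature map (reported as (1, 512, 7, 7) with channels-first data format) that decode_predictions cannot handle. A sketch of the classifier-included variant (weights=None here to avoid a download; pass your weight file path instead):

```python
from tensorflow.keras.applications import VGG16

# include_top=True keeps the flatten/fc1/fc2/predictions layers, so the
# output is the (batch, 1000) class-probability vector decode_predictions expects.
model = VGG16(weights=None, include_top=True)
```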

    opened by jaigsingla 1
  • 6.3 Standardization of validation and test data

    I have 2 concerns:

    1. The training data is standardized using the mean and standard deviation of the training set. Shouldn't the validation and test sets be standardized using their respective means and standard deviations as well? I can't see this done anywhere.
    2. Wouldn't it be better to refer to the procedure as standardizing rather than normalizing? If nothing else, this would conform to scikit-learn's terminology.
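
    On point 1: the usual convention (which the book follows) is to reuse the training set's statistics for the validation and test data, so no information from those splits leaks into preprocessing. A sketch in NumPy with stand-in arrays:

```python
import numpy as np

train = np.random.rand(100, 5)   # stand-in arrays
test = np.random.rand(20, 5)

mean = train.mean(axis=0)        # statistics computed on training data only
std = train.std(axis=0)

train_scaled = (train - mean) / std
test_scaled = (test - mean) / std  # test reuses the *training* mean and std
```
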
    opened by gmohandas 1
  • 5-3 When trying to train categorization CNN in MNIST, I got UnimplementedError: Graph execution error

    Hello

    I used the code shown in the book and tried the code from the website as well (so I believe it is the correct code). Are there any version-related issues?

    train_images = train_images.reshape((60000, 28, 28, 1))
    train_images = train_images.astype('float32') / 255
    test_images = test_images.reshape((10000, 28, 28, 1))
    test_images = test_images.astype('float32') / 255
    train_labels = to_categorical(train_labels)
    test_labels = to_categorical(test_labels)
    model.compile(optimizer='rmsprop', loss='categorical_crossentropy', metrics=['accuracy'])
    model.fit(train_images, train_labels, epochs=5, batch_size=64)

    I got error:

    UnimplementedError                        Traceback (most recent call last)
    ~\AppData\Local\Temp/ipykernel_16856/1658854806.py in <module>
    ----> 1 model.fit(train_images, train_labels, epochs=5, batch_size=64)

    ~\anaconda3\lib\site-packages\keras\utils\traceback_utils.py in error_handler(*args, **kwargs)
         65     except Exception as e:  # pylint: disable=broad-except
         66       filtered_tb = _process_traceback_frames(e.__traceback__)
    ---> 67       raise e.with_traceback(filtered_tb) from None
         68     finally:
         69       del filtered_tb

    ~\anaconda3\lib\site-packages\tensorflow\python\eager\execute.py in quick_execute(op_name, num_outputs, inputs, attrs, ctx, name)
         52     try:
         53       ctx.ensure_initialized()
    ---> 54       tensors = pywrap_tfe.TFE_Py_Execute(ctx._handle, device_name, op_name,
         55                                           inputs, attrs, num_outputs)
         56     except core._NotOkStatusException as e:

    UnimplementedError: Graph execution error:

    Detected at node 'sequential_1/conv2d_3/Conv2D' defined at (most recent call last):

    opened by katieliao 0
  • Unziping dogs-vs-cats dataset in colab

    Consider the code below, taken from notebook for chapter 8 (computer vision) Downloading the data

    !kaggle competitions download -c dogs-vs-cats
    !unzip -qq train.zip

    Before running the last command, !unzip -qq train.zip, should we not first have to run !unzip -qq dogs-vs-cats.zip?
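
    In Python (rather than shell), the nested archive can be unpacked in two steps with the standard zipfile module (a sketch; the paths follow the Kaggle download layout):

```python
import zipfile

def unzip(archive, dest="."):
    """Extract every member of a zip archive into `dest`."""
    with zipfile.ZipFile(archive) as zf:
        zf.extractall(dest)

# The Kaggle download is a zip that itself contains train.zip:
# unzip("dogs-vs-cats.zip")   # produces train.zip (and test1.zip)
# unzip("train.zip")          # produces the train/ image folder
```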

    Thank you.

    opened by kriaz100 0
  • chapter10_dl-for-timeseries: How to predict weather?

    Hi, thanks for the valuable example. Can you add a last step to show how to do the actual prediction? For me, there are several open questions such as:

    • What data do I need to predict the temperature?
    • How do I prepare it?
    • How do I process the output of the prediction (e.g. do an inverse normalization)?
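
    A sketch of the missing last step on synthetic numbers (the variable names mirror the chapter; the mean/std values and the placeholder prediction are made up, and model.predict stands for the trained Keras model): take the most recent sequence_length rows, add a batch dimension, predict, then invert the normalization of the target column.

```python
import numpy as np

sequence_length = 120
raw_data = np.random.rand(1000, 14)   # stand-in for the normalized Jena array
mean, std = 9.5, 8.4                  # training-set mean/std of the target column (made up)

# 1. Take the most recent window and add a batch dimension.
last_window = raw_data[-sequence_length:]
batch = last_window[np.newaxis, ...]  # shape (1, sequence_length, 14)

# 2. Predict with the trained model (commented out here):
# pred_normalized = model.predict(batch)[0, 0]
pred_normalized = 0.3                 # placeholder value for illustration

# 3. Invert the normalization to get degrees Celsius.
pred_celsius = pred_normalized * std + mean
```
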
    opened by padmalcom 0
  • Cannot get CelebA data(Listing 12.30)

    Hi, I have a problem when running Listing 12.30. After I execute the code, the following error occurs:

    Access denied with the following error:

    Cannot retrieve the public link of the file. You may need to change
    the permission to 'Anyone with the link', or have had many accesses. 
    

    You may still be able to access the file from the browser:

     https://drive.google.com/uc?id=1O7m1010EJjLE5QxLZiM9Fpjs7Oj6e684 
    

    unzip: cannot find or open celeba_gan/data.zip, celeba_gan/data.zip.zip or celeba_gan/data.zip.ZIP

    Does anyone have an idea how to solve this problem? Thanks for your attention.

    opened by Jiet-97 2
  • 10.2.5 A first recurrent baseline

    Hello, I am trying to run the LSTM cell for weather prediction in time series (notebook 10):

    inputs = keras.Input(shape=(sequence_length, raw_data.shape[-1]))
    x = layers.LSTM(16)(inputs)
    outputs = layers.Dense(1)(x)
    model = keras.Model(inputs, outputs)

    callbacks = [
        keras.callbacks.ModelCheckpoint("jena_lstm.keras", save_best_only=True)
    ]
    model.compile(optimizer="rmsprop", loss="mse", metrics=["mae"])
    history = model.fit(train_dataset, epochs=10,
                        validation_data=val_dataset, callbacks=callbacks)

    model = keras.models.load_model("jena_lstm.keras")
    print(f"Test MAE: {model.evaluate(test_dataset)[1]:.2f}")

    I get the following error:

    NotImplementedError: Cannot convert a symbolic Tensor (lstm/strided_slice:0) to a numpy array. This error may indicate that you're trying to pass a Tensor to a NumPy call, which is not supported

    I am using TensorFlow 2.4.1.

    opened by hodfa840 0
Owner
François Chollet