Official repository of my book: "Deep Learning with PyTorch Step-by-Step: A Beginner's Guide"

Overview

Deep Learning with PyTorch Step-by-Step

This is the official repository of my book "Deep Learning with PyTorch Step-by-Step". Here you will find one Jupyter notebook for every chapter in the book.

Each notebook contains all the code shown in its corresponding chapter, and you should be able to run its cells in sequence to get the same outputs shown in the book. I strongly believe that being able to reproduce the results brings confidence to the reader.

There are three options for you to run the Jupyter notebooks:

Google Colab

You can easily load the notebooks directly from GitHub using Colab and run them using a GPU provided by Google. You need to be logged in to a Google Account of your own.

You can go through the chapters using the links below:

Part I - Fundamentals

Part II - Computer Vision

Part III - Sequences

Part IV - Natural Language Processing

Binder

You can also load the notebooks directly from GitHub using Binder, but the process is slightly different. It will create an environment in the cloud and allow you to access Jupyter's Home Page in your browser, listing all available notebooks, just like on your own computer.

If you make changes to the notebooks, make sure to download them, since Binder does not keep your changes after you close it.

You can start your environment in the cloud right now using the button below:

Binder

Local Installation

This option will give you more flexibility, but it will require more effort to set up. I encourage you to try setting up your own environment. It may seem daunting at first, but you can surely accomplish it by following seven easy steps:

1 - Anaconda

If you don’t have Anaconda’s Individual Edition installed yet, now would be a good time to do it. It is a very handy way to start, since it contains most of the Python libraries a data scientist will ever need to develop and train models.

Please follow the installation instructions for your OS.

Make sure you choose the Python 3.X version, since Python 2 was discontinued in January 2020.

2 - Conda (Virtual) Environments

Virtual environments are a convenient way to isolate Python installations associated with different projects.

First, you need to choose a name for your environment :-) Let’s call ours pytorchbook (or anything else you find easy to remember). Then, you need to open a terminal (in Ubuntu) or Anaconda Prompt (in Windows or macOS) and type the following command:

conda create -n pytorchbook anaconda

The command above creates a conda environment named pytorchbook and includes all Anaconda packages in it (time to get a coffee, it will take a while...). If you want to learn more about creating and using conda environments, please check Anaconda’s Managing Environments user guide.

Did it finish creating the environment? Good! It is time to activate it, that is, to make that Python installation the one to be used now. In the same terminal (or Anaconda Prompt), just type:

conda activate pytorchbook

Your prompt should look like this (if you’re using Linux)...

(pytorchbook)$

or like this (if you’re using Windows):

(pytorchbook)C:\>

Done! You are using a brand new conda environment now. You’ll need to activate it every time you open a new terminal or, if you’re a Windows or macOS user, you can open the corresponding Anaconda Prompt (it will show up as Anaconda Prompt (pytorchbook), in our case), which will have it activated from the start.

IMPORTANT: From now on, I am assuming you’ll activate the pytorchbook environment every time you open a terminal / Anaconda Prompt. Further installation steps must be executed inside the environment.

3 - PyTorch

It is time to install the star of the show :-) Go straight to the Start Locally section of its website: it will automatically select the options that best suit your local environment and show you the command to run.

Your choices should look like:

  • PyTorch Build: "Stable"
  • Your OS: your operating system
  • Package: "Conda"
  • Language: "Python"
  • CUDA: "None" if you don't have a GPU, or the latest version (e.g. "10.1") if you do.

The installation command will be shown right below your choices, so you can copy it. If you have a Windows computer and no GPU, you would run the following command in your Anaconda Prompt (pytorchbook):

(pytorchbook) C:\> conda install pytorch torchvision cpuonly -c pytorch

4 - TensorBoard

TensorBoard is a powerful tool, and we can use it even though we are developing models in PyTorch. Luckily, you don’t need to install the whole TensorFlow to get it; you can easily install TensorBoard alone using conda. You just need to run this command in your terminal or Anaconda Prompt (again, after activating the environment):

(pytorchbook)C:\> conda install -c conda-forge tensorboard
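Once installed, here is a minimal sketch to confirm it works (the run folder and scalar tag below are arbitrary placeholders, not names used in the book):

    from torch.utils.tensorboard import SummaryWriter

    # Write a few dummy scalar values to the ./runs folder
    writer = SummaryWriter('runs/test')
    for step in range(10):
        writer.add_scalar('loss', 1.0 / (step + 1), step)
    writer.close()

You can then point TensorBoard at that folder with tensorboard --logdir runs and open the printed URL in your browser.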

5 - GraphViz and TorchViz (optional)

This step is optional, mostly because the installation of GraphViz can sometimes be challenging (especially on Windows). If, for any reason, you do not succeed in installing it correctly, or if you decide to skip this step, you will still be able to execute the code in this book (except for a couple of cells that generate images of a model’s structure in the Dynamic Computation Graph section of Chapter 1).

We need to install GraphViz to be able to use TorchViz, a neat package that allows us to visualize a model’s structure. Please check the installation instructions for your OS.

If you are using Windows, please use the installer from GraphViz's Windows Package. You also need to add GraphViz to the PATH environment variable in Windows. Most likely, you will find the GraphViz executable at C:\Program Files (x86)\Graphviz2.38\bin. Once you find it, you need to set or change the PATH accordingly, adding GraphViz's location to it. For more details on how to do that, please refer to How to Add to Windows PATH Environment Variable.

For additional information, you can also check the How to Install Graphviz Software guide.

If you installed GraphViz successfully, you can install the torchviz package. This package is not part of the Anaconda Distribution Repository and is only available on PyPI, the Python Package Index, so we need to pip install it.

Once again, open a terminal or Anaconda Prompt and run this command (just once more: after activating the environment):

(pytorchbook)C:\> pip install torchviz
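To check that the whole chain (GraphViz plus TorchViz) is working, here is a minimal sketch using a throwaway linear model (a placeholder, not one of the book's models):

    import torch
    from torchviz import make_dot

    # Any tensor produced by a model carries its dynamic computation graph
    model = torch.nn.Linear(1, 1)
    yhat = model(torch.randn(8, 1))

    # make_dot renders that graph; in Jupyter the returned object displays inline
    make_dot(yhat, params=dict(model.named_parameters()))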

6 - Git

It is way beyond the scope of this guide to introduce you to version control and its most popular tool: git. If you are familiar with it already, great, you can skip this section altogether!

Otherwise, I’d recommend you learn more about it; it will definitely be useful for you later down the line. In the meantime, I will show you the bare minimum, so you can use git to clone this repository containing all the code used in this book and have your own local copy to modify and experiment with as you please.

First, you need to install it. Head to its downloads page and follow the instructions for your OS. Once the installation is complete, please open a new terminal or Anaconda Prompt (it's OK to close the previous one). In the new terminal or Anaconda Prompt, you should be able to run git commands. To clone this repository, you only need to run:

(pytorchbook)C:\> git clone https://github.com/dvgodoy/PyTorchStepByStep.git

The command above will create a PyTorchStepByStep folder containing a local copy of everything available in this GitHub repository.

7 - Jupyter

After cloning the repository, navigate to the PyTorchStepByStep folder and, once inside it, start Jupyter from your terminal or Anaconda Prompt:

(pytorchbook)C:\> jupyter notebook

This will open your browser, and you will see Jupyter's Home Page containing this repository's notebooks and code.

Congratulations! You are ready to go through the chapters' notebooks!

Comments
  • Chapter 01 - negative sign for gradients

    Prior to the "Linear Regression in Numpy" section, you do not add a negative sign in front of the calculated gradients, while you do so later. I believe the later version is correct, as the gradients need to point towards the minimum. Is that right?

    opened by nisargvp 7
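For anyone with the same question: the gradient itself carries no minus sign; the minus sign belongs to the descent update, which moves against the gradient. A minimal sketch, assuming the book's toy linear regression setup (the data and starting values below are illustrative, not the book's):

    import numpy as np

    # Synthetic data: y = 1 + 2x + noise (illustrative, not the book's exact data)
    rng = np.random.default_rng(42)
    x = rng.random((100, 1))
    y = 1 + 2 * x + 0.1 * rng.standard_normal((100, 1))

    b, w, lr = 0.5, -0.5, 0.1
    error = (b + w * x) - y           # yhat - y

    # The gradients themselves have no negative sign...
    b_grad = 2 * error.mean()
    w_grad = 2 * (x * error).mean()

    # ...the minus sign appears in the update, which moves AGAINST the gradients
    b = b - lr * b_grad
    w = w - lr * w_grad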
  • Went from "train_step" in helper fn 1, to "step" in helper fn 2?

    I am missing the place where "step" is used as the name for the returned function instead of "train_step".

    First:

        train_step = make_train_step(model, loss_fn, optimizer)
        loss = train_step(x_train_tensor, y_train_tensor)

    In model_training/v2.py we see:

        mini_batch_loss = train_step(x_batch, y_batch)

    In helper function #2 we see:

        mini_batch_loss = step(x_batch, y_batch)

    So far, I have been able to follow the thread of higher level functions. But I missed the above.

    Thank You Tom

    opened by minertom 4
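The naming is explained by the closure pattern: make_train_step builds and returns a function, and the variable you bind it to (train_step, step, ...) is just a name. A minimal sketch of the pattern; details may differ from the book's actual helpers.py:

    import torch

    def make_train_step(model, loss_fn, optimizer):
        # Builds and returns a function that performs one training step
        def perform_train_step(x, y):
            model.train()
            loss = loss_fn(model(x), y)
            loss.backward()
            optimizer.step()
            optimizer.zero_grad()
            return loss.item()
        return perform_train_step

    model = torch.nn.Linear(1, 1)
    loss_fn = torch.nn.MSELoss()
    optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

    # The same returned closure, bound to two different names:
    train_step = make_train_step(model, loss_fn, optimizer)
    step = train_step
    print(step(torch.randn(8, 1), torch.randn(8, 1)))  # one training step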
  • No Issue. I purchased the book. Can you hint towards future chapters?

    Hi, I just purchased the book as I am in the process of learning PyTorch. I was wondering if you could give a hint towards future chapters.

    Also, how would I be notified when future chapters are available?

    Thank You Tom

    opened by minertom 4
  • Chapter 09, encoder-decoder Data Preparation test_points not used for test set

    In the chunk for generation of the test set (Data Generation — Test), full_test is derived from the points data structure, which is used for training, not from test_points.

    test_points, test_directions = generate_sequences(seed=19)
    full_test = torch.as_tensor(points).float()
    source_test = full_test[:, :2]
    target_test = full_test[:, 2:]
    

    I do not think that is intended, so there is a simple correction possible:

    full_test = torch.as_tensor(test_points).float()
    

    Based on that change we get different performance figures: a different loss curve (see the attached loss plot) and different predictions (see the attached sequence figure), with 8 of 10 sequences showing "clashing" points.

    If my results are right, this text chunk needs some adaptation as well:

    The results are, at the same time, very good and very bad. In half of the sequences, the predicted coordinates are quite close to the actual ones. But, in the other half, the predicted coordinates are overlapping with each other and close to the midpoint between the actual coordinates.

    For whatever reason, the model learned to make good predictions whenever the first corner is on the right edge of the square, but really bad ones otherwise.

    See the sequence pictures; these statements need to be adapted, especially the second one.

    The same issue can be found in the final "Putting It All Together" section:

    # Validation/Test Set
    test_points, test_directions = generate_sequences(seed=19)
    full_test = torch.as_tensor(points).float()
    source_test = full_test[:, :2]
    target_test = full_test[:, 2:]
    test_data = TensorDataset(source_test, target_test)
    test_loader = DataLoader(test_data, batch_size=16)
    

    All of this is based on your 1.1 revision, assuming I did not make any mistakes when updating via git pull.

    opened by gsgxnet 3
  • Chapter 1 - A Simple Regression Problem

    There is only one thing left to do; turn our tensor into a GPU tensor. That is what [to()](https://bit.ly/32Mgxjc) is good for. It sends a tensor to the specified device.
    

    Hi Dan,

    I love your book and tutorials! May I kindly ask: does the to() method copy the data into the device (GPU or CPU) memory directly?

    The reason I am asking is that you mentioned before that torch.as_tensor(x_train) shares the underlying data with the original Numpy array, but when we used torch.as_tensor(x_train).to(device), I found that the x_train data won't change.

    Do I understand it correctly?

    Best, Jun

    opened by xihajun 2
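For anyone with the same doubt, a minimal sketch illustrating both behaviors: torch.as_tensor() shares memory with the Numpy array, while .to(device) copies the data to the target device (the GPU branch only runs if CUDA is available):

    import numpy as np
    import torch

    x_train = np.zeros(3, dtype=np.float32)

    t_cpu = torch.as_tensor(x_train)   # shares memory with the Numpy array
    t_cpu[0] = 1.0
    print(x_train[0])                  # 1.0 -> the Numpy array changed too

    if torch.cuda.is_available():
        t_gpu = t_cpu.to('cuda')       # copies the data into GPU memory
        t_gpu[1] = 2.0
        print(x_train[1])              # still 0.0 -> the GPU tensor is a copy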
  • Why do we have detach() in self.alphas = alphas.detach() in Attention class in Chapter 9.

    I wonder why alphas is detach()ed before being saved to self.alphas in the Attention class. I tried self.alphas = alphas, that is, without detach(), and trained the model. There is no difference in performance, so I believe the reason lies in something else.

    Thank you for your great teaching in your great book!

    opened by eisthf 2
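A short sketch of what detach() actually changes: the detached copy carries the same values but is cut off from the autograd graph, so storing it (e.g. to inspect or plot the attention scores later) does not keep the whole graph alive in memory. Since self.alphas never sits on the loss path, removing detach() indeed leaves performance unchanged. This sketch mirrors the mechanism only, not the book's Attention class:

    import torch

    scores = torch.randn(1, 3, requires_grad=True)
    alphas = torch.softmax(scores, dim=-1)

    stored = alphas.detach()        # same values, no graph attached
    print(alphas.requires_grad)     # True  -> still part of the graph
    print(stored.requires_grad)     # False -> safe to keep for inspection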
  • On the calculation of "w_grad".

    In chapter0 and chapter1 there are a couple of calculations for the gradient of b and the gradient of w.

    Given that yhat = b + w*x_train, where b is random, and error is (yhat - y_train), that makes error = (b + w*x_train - y_train).

    b_grad is given as 2*(error.mean()), which is 2*((b + w*x_train - y_train).mean()), so it seems to me, and I could be wrong, that the so-called gradient of b also includes a healthy helping of w.

    w_grad is given as 2*((x_train*error).mean()), which expands out to 2*((x_train*b + x_train**2*w - x_train*y_train).mean()).

    It is the x_train**2 term that triggered something in my mind, as well as the fact that the w_grad term also has a healthy helping of b.

    My intuition, and I could be wrong here, is that there is a partial derivative missing, such that the gradient of b would be based upon differentiating with respect to b while holding w constant and, similarly, the gradient of w would be computed holding b constant.

    Also, the x_train**2 term is confusing here.

    I would be deeply grateful for a clarification.

    Thank You Tom

    opened by minertom 2
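The formulas in the book are those partial derivatives: differentiating with respect to b while holding w constant still leaves w in the result, because the error depends on both parameters, and the chain rule applied to w brings out the extra factor of x_train (hence the x_train**2 term). A quick symbolic check, assuming sympy is installed:

    import sympy as sp

    b, w, x, y = sp.symbols('b w x y')
    mse = (b + w * x - y) ** 2       # squared error for a single point

    print(sp.diff(mse, b))           # 2*b + 2*w*x - 2*y  ->  2*error
    print(sp.diff(mse, w))           # 2*x*(b + w*x - y)  ->  2*x*error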
  • A simple question about how you are using matplotlib?

    Hi, and I realize that this may be trivial.

    In your "figures" you are using Python functions that are defined in the "plots" directory.

    In the Jupyter notebook you use:

        figure1(x_train, y_train, x_val, y_val)

    This calls the function:

        def figure1(x_train, y_train, x_val, y_val):
            fig, ax = plt.subplots(1, 2, figsize=(12, 6))

            ax[0].scatter(x_train, y_train)
            ax[0].set_xlabel('x')
            ax[0].set_ylabel('y')
            ax[0].set_ylim([0, 3.1])
            ax[0].set_title('Generated Data - Train')

            ax[1].scatter(x_val, y_val, c='r')
            ax[1].set_xlabel('x')
            ax[1].set_ylabel('y')
            ax[1].set_ylim([0, 3.1])
            ax[1].set_title('Generated Data - Validation')
            fig.tight_layout()

            return fig, ax

    I notice that there is no plt.show() statement needed, either in the function or in the Jupyter notebook.

    However, if I am at the command line (not using the Jupyter notebook) and I enter the Python code line by line and make the "figure" call, I don't get a plot unless I use plt.show() as the next line after the figure call.

    Can you tell me why plt.show() is not necessary for the jupyter notebook but it is necessary when calling the function from the command line?

    Thank You Tom

    opened by minertom 2
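In short: Jupyter's inline backend automatically renders any figure produced by a cell, so plt.show() is implicit there; a plain Python session only draws when asked. A minimal sketch for the command line:

    import matplotlib.pyplot as plt

    fig, ax = plt.subplots()
    ax.scatter([1, 2, 3], [1, 4, 9])

    # Required at the command line; Jupyter's inline backend does this for you
    plt.show()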
  • PyTorchStepByStep is under advertised.

    Daniel, I wanted to let you know that I am really finding your tutorial series very helpful, but you could be selling it a bit better.

    1. I am finding the tutorial a fantastic help in terms of learning numpy. Yes, the CS231n numpy tutorial gives a good starting point, but there is very little of practical use compared to what is in PyTorchStepByStep.

    2. Your tutorial provides a good learning point for matplotlib. I was forced to dig through the functions which generated the figures, and I got everything working inside and outside of a Python notebook.

    I just wanted to let you know how much I appreciate this tutorial.

    Tom

    opened by minertom 2
  • Chapter 8 > Gated Recurrent Units (GRUs) > Visualizing the Model > The Journey of a Gated Hidden State: figure22 error

    Hi, Figure22 is throwing the following error:

    ---------------------------------------------------------------------------
    RuntimeError                              Traceback (most recent call last)
    Cell In[100], line 1
    ----> 1 fig = figure22(model.basic_rnn)
    
    File ~/side_projects/brain_auth/psychopy/data_alcoholism/PyTorchStepByStep/plots/chapter8.py:880, in figure22(rnn)
        878 square = torch.tensor([[-1, -1], [-1, 1], [1, 1], [1, -1]]).float().view(1, 4, 2)
        879 n_linear, r_linear, z_linear = disassemble_gru(rnn, layer='_l0')
    --> 880 gcell, mstates, hstates, gates = generate_gru_states(n_linear, r_linear, z_linear, square)
        881 gcell(hstates[-1])
        882 titles = [r'$hidden\ state\ (h)$',
        883           r'$transformed\ state\ (t_h)$',
        884           r'$reset\ gate\ (r*t_h)$' + '\n' + r'$r=$',
       (...)
        888           r'$adding\ z*h$' + '\n' + r'h=$(1-z)*n+z*h$', 
        889          ]
    
    File ~/side_projects/brain_auth/psychopy/data_alcoholism/PyTorchStepByStep/plots/chapter8.py:787, in generate_gru_states(n_linear, r_linear, z_linear, X)
        785     gcell = add_h(gcell, z*hidden)
        786     model_states.append(deepcopy(gcell.state_dict()))
    --> 787     hidden = gcell(hidden)
        789 return gcell, model_states, hidden_states, {'rmult': rs, 'zmult': zs}
    
    File ~/side_projects/brain_auth/psychopy/data_alcoholism/venv/lib/python3.8/site-packages/torch/nn/modules/module.py:1190, in Module._call_impl(self, *input, **kwargs)
       1186 # If we don't have any hooks, we want to skip the rest of the logic in
       1187 # this function, and just call forward.
       1188 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
       1189         or _global_forward_hooks or _global_forward_pre_hooks):
    -> 1190     return forward_call(*input, **kwargs)
       1191 # Do not call functions when jit is used
       1192 full_backward_hooks, non_full_backward_hooks = [], []
    
    File ~/side_projects/brain_auth/psychopy/data_alcoholism/venv/lib/python3.8/site-packages/torch/nn/modules/container.py:204, in Sequential.forward(self, input)
        202 def forward(self, input):
        203     for module in self:
    --> 204         input = module(input)
        205     return input
    
    File ~/side_projects/brain_auth/psychopy/data_alcoholism/venv/lib/python3.8/site-packages/torch/nn/modules/module.py:1190, in Module._call_impl(self, *input, **kwargs)
       1186 # If we don't have any hooks, we want to skip the rest of the logic in
       1187 # this function, and just call forward.
       1188 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
       1189         or _global_forward_hooks or _global_forward_pre_hooks):
    -> 1190     return forward_call(*input, **kwargs)
       1191 # Do not call functions when jit is used
       1192 full_backward_hooks, non_full_backward_hooks = [], []
    
    File ~/side_projects/brain_auth/psychopy/data_alcoholism/venv/lib/python3.8/site-packages/torch/nn/modules/linear.py:114, in Linear.forward(self, input)
        113 def forward(self, input: Tensor) -> Tensor:
    --> 114     return F.linear(input, self.weight, self.bias)
    
    RuntimeError: expand(torch.FloatTensor{[1, 1, 2]}, size=[1, 2]): the number of sizes provided (2) must be greater or equal to the number of dimensions in the tensor (3)
    

    I have tested locally and on Colab, and the same error happens in both.

    opened by scmanjarrez 1
  • Changed next() approach to iterate DataLoader

    First of all, thank you for your amazing book (and repository). Lots of concepts are extremely well explained and easy to understand.

    Previous chapters used next(iter(DL)); however, chapter 5 uses iter(DL).next(), which throws the following error:

    ---------------------------------------------------------------------------
    AttributeError                            Traceback (most recent call last)
    Cell In[149], line 1
    ----> 1 images_batch, labels_batch = iter(val_loader).next()
          2 logits = sbs_cnn1.predict(images_batch)
    
    AttributeError: '_SingleProcessDataLoaderIter' object has no attribute 'next'
    

    I don't know if that method was implemented in previous PyTorch versions; however, with PyTorch 1.13 it doesn't work anymore.

    Note: Python 3.8.10, jupyterlab 3.5.0

    opened by scmanjarrez 1
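A minimal sketch of the version-proof form; the built-in next() works on every PyTorch release, while the iterator's .next() method was removed in recent ones:

    import torch
    from torch.utils.data import DataLoader, TensorDataset

    dataset = TensorDataset(torch.randn(4, 1), torch.randn(4, 1))
    loader = DataLoader(dataset, batch_size=2)

    # Idiomatic and version-proof:
    x_batch, y_batch = next(iter(loader))

    # Fails on PyTorch >= 1.13 with the AttributeError above:
    # x_batch, y_batch = iter(loader).next()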
  • Changed linewidth to 0.1 when binary=False

    The line fig = counter_vs_clock(binary=False) at the start of chapter 9 raises the following exception using matplotlib 3.6.2 (local JupyterLab); however, it runs fine on 3.2.2 (installed on Colab):

    ---------------------------------------------------------------------------
    ValueError                                Traceback (most recent call last)
    File ~/side_projects/brain_auth/psychopy/data_alcoholism/venv/lib/python3.8/site-packages/matplotlib/pyplot.py:119, in _draw_all_if_interactive()
        117 def _draw_all_if_interactive():
        118     if matplotlib.is_interactive():
    --> 119         draw_all()
    
    File ~/side_projects/brain_auth/psychopy/data_alcoholism/venv/lib/python3.8/site-packages/matplotlib/_pylab_helpers.py:132, in Gcf.draw_all(cls, force)
        130 for manager in cls.get_all_fig_managers():
        131     if force or manager.canvas.figure.stale:
    --> 132         manager.canvas.draw_idle()
    
    File ~/side_projects/brain_auth/psychopy/data_alcoholism/venv/lib/python3.8/site-packages/matplotlib/backend_bases.py:2054, in FigureCanvasBase.draw_idle(self, *args, **kwargs)
       2052 if not self._is_idle_drawing:
       2053     with self._idle_draw_cntx():
    -> 2054         self.draw(*args, **kwargs)
    
    File ~/side_projects/brain_auth/psychopy/data_alcoholism/venv/lib/python3.8/site-packages/matplotlib/backends/backend_agg.py:405, in FigureCanvasAgg.draw(self)
        401 # Acquire a lock on the shared font cache.
        402 with RendererAgg.lock, \
        403      (self.toolbar._wait_cursor_for_draw_cm() if self.toolbar
        404       else nullcontext()):
    --> 405     self.figure.draw(self.renderer)
        406     # A GUI class may be need to update a window using this draw, so
        407     # don't forget to call the superclass.
        408     super().draw()
    
    File ~/side_projects/brain_auth/psychopy/data_alcoholism/venv/lib/python3.8/site-packages/matplotlib/artist.py:74, in _finalize_rasterization.<locals>.draw_wrapper(artist, renderer, *args, **kwargs)
         72 @wraps(draw)
         73 def draw_wrapper(artist, renderer, *args, **kwargs):
    ---> 74     result = draw(artist, renderer, *args, **kwargs)
         75     if renderer._rasterizing:
         76         renderer.stop_rasterizing()
    
    File ~/side_projects/brain_auth/psychopy/data_alcoholism/venv/lib/python3.8/site-packages/matplotlib/artist.py:51, in allow_rasterization.<locals>.draw_wrapper(artist, renderer)
         48     if artist.get_agg_filter() is not None:
         49         renderer.start_filter()
    ---> 51     return draw(artist, renderer)
         52 finally:
         53     if artist.get_agg_filter() is not None:
    
    File ~/side_projects/brain_auth/psychopy/data_alcoholism/venv/lib/python3.8/site-packages/matplotlib/figure.py:3071, in Figure.draw(self, renderer)
       3068         # ValueError can occur when resizing a window.
       3070 self.patch.draw(renderer)
    -> 3071 mimage._draw_list_compositing_images(
       3072     renderer, self, artists, self.suppressComposite)
       3074 for sfig in self.subfigs:
       3075     sfig.draw(renderer)
    
    File ~/side_projects/brain_auth/psychopy/data_alcoholism/venv/lib/python3.8/site-packages/matplotlib/image.py:131, in _draw_list_compositing_images(renderer, parent, artists, suppress_composite)
        129 if not_composite or not has_images:
        130     for a in artists:
    --> 131         a.draw(renderer)
        132 else:
        133     # Composite any adjacent images together
        134     image_group = []
    
    File ~/side_projects/brain_auth/psychopy/data_alcoholism/venv/lib/python3.8/site-packages/matplotlib/artist.py:51, in allow_rasterization.<locals>.draw_wrapper(artist, renderer)
         48     if artist.get_agg_filter() is not None:
         49         renderer.start_filter()
    ---> 51     return draw(artist, renderer)
         52 finally:
         53     if artist.get_agg_filter() is not None:
    
    File ~/side_projects/brain_auth/psychopy/data_alcoholism/venv/lib/python3.8/site-packages/matplotlib/axes/_base.py:3107, in _AxesBase.draw(self, renderer)
       3104         a.draw(renderer)
       3105     renderer.stop_rasterizing()
    -> 3107 mimage._draw_list_compositing_images(
       3108     renderer, self, artists, self.figure.suppressComposite)
       3110 renderer.close_group('axes')
       3111 self.stale = False
    
    File ~/side_projects/brain_auth/psychopy/data_alcoholism/venv/lib/python3.8/site-packages/matplotlib/image.py:131, in _draw_list_compositing_images(renderer, parent, artists, suppress_composite)
        129 if not_composite or not has_images:
        130     for a in artists:
    --> 131         a.draw(renderer)
        132 else:
        133     # Composite any adjacent images together
        134     image_group = []
    
    File ~/side_projects/brain_auth/psychopy/data_alcoholism/venv/lib/python3.8/site-packages/matplotlib/artist.py:51, in allow_rasterization.<locals>.draw_wrapper(artist, renderer)
         48     if artist.get_agg_filter() is not None:
         49         renderer.start_filter()
    ---> 51     return draw(artist, renderer)
         52 finally:
         53     if artist.get_agg_filter() is not None:
    
    File ~/side_projects/brain_auth/psychopy/data_alcoholism/venv/lib/python3.8/site-packages/matplotlib/lines.py:799, in Line2D.draw(self, renderer)
        796 lc_rgba = mcolors.to_rgba(self._color, self._alpha)
        797 gc.set_foreground(lc_rgba, isRGBA=True)
    --> 799 gc.set_dashes(*self._dash_pattern)
        800 renderer.draw_path(gc, tpath, affine.frozen())
        801 gc.restore()
    
    File ~/side_projects/brain_auth/psychopy/data_alcoholism/venv/lib/python3.8/site-packages/matplotlib/backend_bases.py:926, in GraphicsContextBase.set_dashes(self, dash_offset, dash_list)
        923         raise ValueError(
        924             "All values in the dash list must be non-negative")
        925     if dl.size and not np.any(dl > 0.0):
    --> 926         raise ValueError(
        927             'At least one value in the dash list must be positive')
        928 self._dashes = dash_offset, dash_list
    
    ValueError: At least one value in the dash list must be positive
    
    (IPython then raises the same ValueError a second time, from GraphicsContextBase.set_dashes, while rendering the figure through FigureCanvasBase.print_figure.)

    Using 0.1 as the line width instead of 0 (https://github.com/matplotlib/matplotlib/issues/8821/) seems to avoid the error and doesn't affect the output of the plot.

    opened by scmanjarrez 0
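For context, in recent matplotlib the dash pattern is scaled by the line width, so a dashed line with linewidth=0 produces an all-zero dash list, which is rejected; any small positive width sidesteps it. A minimal sketch of the safe form:

    import matplotlib.pyplot as plt

    # A tiny positive width avoids the ValueError without visibly changing the plot
    plt.plot([0, 1], [0, 1], linestyle='--', linewidth=0.1)
    plt.show()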
  • How does nn.Conv2d implement the feature map table of LeNet-5?

    I have been going through the PyTorch documentation for Conv2d. Reading the docs, I see:

    torch.nn.Conv2d(in_channels: int, out_channels: int, kernel_size: Union[T, Tuple[T, T]], stride: Union[T, Tuple[T, T]] = 1, padding: Union[T, Tuple[T, T]] = 0, dilation: Union[T, Tuple[T, T]] = 1, groups: int = 1, bias: bool = True, padding_mode: str = 'zeros')

    From the LeNet paper of November 1998 I see that the third convolutional layer is implemented with 6 input feature maps and 16 output feature maps. The 16 output maps are made from combinations of the 6 input maps according to a table, also in the paper (see the feature map table figure).

    From the chapter 5 tutorial, C3 is implemented with:

    lenet.add_module('C3', nn.Conv2d(in_channels=6, out_channels=16, kernel_size=5))

    What I do not see is how PyTorch implements this feature map table. It seems a little like a "loaves and fishes" issue :-).

    P.S. If I might kill two birds with one stone, can I ask which plots are being implemented with IPython? I have observed that the plots you generate are usually implemented in the plots directories for each chapter, using matplotlib. I was looking for HTML plots generated with IPython, because you import display and HTML from the IPython.core.display module, but I could not seem to spot how they are used.

    Thank You Tom

    opened by minertom 10
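As far as nn.Conv2d goes, it doesn't: every output channel is connected to all input channels, so the sparse connection table from the 1998 paper is not reproduced by the chapter's C3 layer (the groups argument is the closest built-in mechanism, but it only gives block-diagonal connectivity). A quick check of the dense connectivity:

    import torch.nn as nn

    # Each of the 16 output channels sees all 6 input channels
    c3 = nn.Conv2d(in_channels=6, out_channels=16, kernel_size=5)
    print(c3.weight.shape)  # torch.Size([16, 6, 5, 5]) -> (out, in, kH, kW)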
  • Missing input in helpers.py for function "make_balanced_sampler" and WeightedRandomSampler?

    I just downloaded the latest full zip file.

    While running the Jupyter notebook for Chapter05, I noticed a couple of errors, relating to:

             22
             23 # Builds a weighted random sampler to handle imbalanced classes
        ---> 24 sampler = make_balanced_sampler(y_train_tensor)
             25
             26 # Uses sampler in the training set to get a balanced data loader

        ~/projects/pytorchstepbystep_2/PyTorchStepByStep-master/helpers.py in make_balanced_sampler(y)
             79     num_samples=len(sample_weights),
             80     generator=generator,
        ---> 81     replacement=True
             82 )
             83 return sampler

        TypeError: __init__() got an unexpected keyword argument 'generator'

    I think that the issue stems from the WeightedRandomSampler class, which might be missing an input. From the PyTorch documentation:

        class torch.utils.data.WeightedRandomSampler(weights: Sequence[float], num_samples: int, replacement: bool = True, generator=None)

    That would be 4 inputs. "replacement" and "generator" are satisfied, but either "weights" or "num_samples" seems to be missing from the call to WeightedRandomSampler in make_balanced_sampler. My guess would be that "num_samples" is the missing input.

    Is that the case?

    Thank You Tom

    opened by minertom 5
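For reference, the traceback shows that weights and num_samples are being passed; the TypeError more likely means an older PyTorch whose WeightedRandomSampler predates the generator keyword (an assumption based on the error message), so upgrading PyTorch should fix it. A sketch of the full call on a recent release:

    import torch
    from torch.utils.data import WeightedRandomSampler

    sample_weights = torch.tensor([0.1, 0.9, 0.9, 0.1])
    sampler = WeightedRandomSampler(weights=sample_weights,
                                    num_samples=len(sample_weights),
                                    replacement=True,
                                    generator=torch.Generator())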
  • How to extract/save weights after training?

    OK, here I am displaying my utter ignorance again. I did find a post on Towards Data Science entitled "Everything you need to know about saving weights in PyTorch".

    https://towardsdatascience.com/everything-you-need-to-know-about-saving-weights-in-pytorch-572651f3f8de

    Now I am stuck. Having saved the weights in the example project, I am aware that the file is not in a human-readable format.

    So my question now becomes: is there a way to take this file of weights, which is in .pth format, and convert it to numpy, which would be human-readable? I would like to be able to do further manipulation of the weights in numpy.

    Thank You for your patience, Tom

    opened by minertom 2
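A minimal sketch of the round trip (the file name is just a placeholder): a .pth file holds a serialized state_dict, and every tensor in it converts directly to a human-readable Numpy array:

    import torch

    model = torch.nn.Linear(1, 1)
    torch.save(model.state_dict(), 'weights.pth')

    # Load the state_dict back and convert each tensor to a Numpy array
    state = torch.load('weights.pth')
    as_numpy = {name: tensor.numpy() for name, tensor in state.items()}
    print(as_numpy['weight'], as_numpy['bias'])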
  • The use of "log" vs "ln", chapter 3 - Loss

    I hope that this is not trivial. I was confused by it for a while so I thought that I should bring it up.

    Normally, in engineering, when I see log without any base, it is assumed to be the logarithm to base 10. In the following, from chapter 3 - Loss:

        first_summation = torch.log(positive_pred).sum()

    Printing this first summation, I noticed the value tensor(-0.1054). I was going through the math and realized that this is not equal to the log base 10 of 0.9, which is -0.046.

    Going to the PyTorch documentation, I saw that log "returns a new tensor with the natural logarithm of the elements of input."

    Of course, the relationship shown in the "From Logits to Probabilities" section "kind of" hints towards natural logarithms, or log to the base e, but the whole confusion can be avoided by using the symbol "ln" as opposed to "log".

    Do you agree?

    Thank You Tom

    opened by minertom 2
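Confirming the observation: torch.log is the natural logarithm. A quick check:

    import math

    import torch

    print(torch.log(torch.tensor([0.9])))  # tensor([-0.1054]) -> natural log (ln)
    print(math.log(0.9))                   # -0.10536...
    print(math.log10(0.9))                 # -0.04575... -> base-10 log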