Repository for scripts and notebooks from the book: Programming PyTorch for Deep Learning

Overview

Programming PyTorch for Deep Learning: Creating and Deploying Deep Learning Applications


Updates

  • 2020/05/25: Chapter 9.75 - Image Self-Supervised Learning

  • 2020/05/03: Chapter 7.5 - Quantizing Models

  • 2020/03/01: Chapter 9.5 - Text Generation With GPT-2 And (only) PyTorch, or Semi/Self-Supervision Learning Part 1 (Letters To Charlotte)


German Edition

PyTorch für Deep Learning: Anwendungen für Bild-, Ton- und Textdaten entwickeln und deployen

--> https://dpunkt.de/produkt/pytorch-fuer-deep-learning/

Installation Notes

Version Control

After you have cloned the GitHub repository locally (or forked it first, then cloned your fork):

Conda

1.) First change into the target folder (cd beginners-pytorch-deep-learning), then create a (local) virtual environment and install the required libraries and packages:

conda env create --file environment.yml

2.) Then activate the virtual environment:

conda activate myenv

3.) To deactivate it, use the command:

conda deactivate

pip

1.) First change into the target folder (cd beginners-pytorch-deep-learning), then create a virtual environment:

python3 -m venv myenv

2.) Activate the virtual environment (https://docs.python.org/3/library/venv.html):

source myenv/bin/activate (Ubuntu/Mac)
myenv\Scripts\activate.bat (Windows)

3.) Install the required libraries and packages:

pip3 install -r requirements.txt

4.) To deactivate it, use the command:

deactivate

Using Jupyter Notebook

1.) First, install Jupyter Notebook:

conda install -c conda-forge notebook or pip3 install notebook

2.) After activating your virtual environment (see above), enter the following command in your command line to install the ipykernel library:

conda install ipykernel or pip3 install ipykernel

3.) Register a kernel for your virtual environment:

ipython kernel install --user --name=myenv

4.) Start Jupyter Notebook:

jupyter notebook

5.) When the Jupyter Notebook start screen opens, click New on the right-hand side (or, from within a notebook, the Kernel/Change Kernel menu) and select myenv.

Google Colaboratory

Google Colab comes with a number of packages pre-installed. Since new installations only apply to the current notebook, you can skip setting up a virtual environment and install the packages directly from the environment.yml or requirements.txt / requirements_cuda_available.txt files as described above, prefixing the command with ! , e.g. !pip3 install -r requirements.txt.

Comments
  • chapter 3


    issue: no ./train directory

    def check_image(path):
        try:
            im = Image.open(path)
            return True
        except:
            return False

    img_transforms = transforms.Compose([
        transforms.Resize((64, 64)),
        transforms.ToTensor(),
        transforms.Normalize(mean=[0.485, 0.456, 0.406],
                             std=[0.229, 0.224, 0.225])
    ])
    train_data_path = "./train/"
    train_data = torchvision.datasets.ImageFolder(root=train_data_path, transform=img_transforms, is_valid_file=check_image)
    val_data_path = "./val/"
    val_data = torchvision.datasets.ImageFolder(root=val_data_path, transform=img_transforms, is_valid_file=check_image)
    batch_size = 64
    train_data_loader = torch.utils.data.DataLoader(train_data, batch_size=batch_size, shuffle=True)
    val_data_loader = torch.utils.data.DataLoader(val_data, batch_size=batch_size, shuffle=True)
    if torch.cuda.is_available():
        device = torch.device("cuda")
    else:
        device = torch.device("cpu")

    FileNotFoundError                         Traceback (most recent call last)
    <ipython-input> in <module>()
         12 ])
         13 train_data_path = "./train/"
    ---> 14 train_data = torchvision.datasets.ImageFolder(root=train_data_path, transform=img_transforms, is_valid_file=check_image)
         15 val_data_path = "./val/"
         16 val_data = torchvision.datasets.ImageFolder(root=val_data_path, transform=img_transforms, is_valid_file=check_image)

    ~/anaconda3/lib/python3.6/site-packages/torchvision/datasets/folder.py in __init__(self, root, transform, target_transform, loader, is_valid_file)
        207     transform=transform,
        208     target_transform=target_transform,
    --> 209     is_valid_file=is_valid_file)
        210     self.imgs = self.samples

    ~/anaconda3/lib/python3.6/site-packages/torchvision/datasets/folder.py in __init__(self, root, loader, extensions, transform, target_transform, is_valid_file)
         91     super(DatasetFolder, self).__init__(root, transform=transform,
         92         target_transform=target_transform)
    ---> 93     classes, class_to_idx = self._find_classes(self.root)
         94     samples = make_dataset(self.root, class_to_idx, extensions, is_valid_file)
         95     if len(samples) == 0:

    ~/anaconda3/lib/python3.6/site-packages/torchvision/datasets/folder.py in _find_classes(self, dir)
        120     if sys.version_info >= (3, 5):
        121         # Faster and available in Python 3.5 and above
    --> 122         classes = [d.name for d in os.scandir(dir) if d.is_dir()]
        123     else:
        124         classes = [d for d in os.listdir(dir) if os.path.isdir(os.path.join(dir, d))]

    FileNotFoundError: [Errno 2] No such file or directory: './train/'
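
    For reference, ImageFolder requires the root directory to exist and to contain one subdirectory per class. For the book's cats-vs-fish example the expected layout is roughly as follows (file names illustrative):

        train/
            cat/
                1004525_cba96ba3c3.jpg
            fish/
                ...
        val/
            cat/
            fish/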

    opened by wubinbai 14
  • Chapter_2.ipynb Implicit dimension choice for softmax has been deprecated


    py:28: UserWarning: Implicit dimension choice for softmax has been deprecated. Change the call to include dim=X as an argument.

    Need dim parameter added to softmax calls.
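
    A minimal sketch of the fix, assuming a [batch, classes] output as in the chapter's models:

        import torch
        import torch.nn.functional as F

        logits = torch.randn(4, 2)        # illustrative [batch, classes] output
        probs = F.softmax(logits, dim=1)  # explicit dim silences the warning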

    opened by pjgoodall 7
  • Chapter 3 - Making Predictions Method from Chapter 2 does not work


Replacing the model from Chapter 2 with the one from Chapter 3 results in the following message when the prediction code is called: RuntimeError: mat1 and mat2 shapes cannot be multiplied (256x36 and 9216x4096) EDIT: Found the solution: use .unsqueeze(0)
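
    A minimal sketch of the fix noted in the edit (shapes illustrative): the Chapter 3 model expects batched 4D input, so a single image needs an extra leading dimension.

        import torch

        img = torch.randn(3, 64, 64)   # one transformed image: [channels, H, W]
        batch = img.unsqueeze(0)       # [1, 3, 64, 64], which the model can consume
        print(img.shape, batch.shape)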

    opened by Mahosaurus 4
  • Ch2: Add torch.unsqueeze() in order to add 4th dimension at index 0


    #52

    The line

    img = torch.unsqueeze(img, 0)

    was missing; it adds the extra batch dimension at index 0 that the model expects, since the model was trained on batches coming from a DataLoader.

    opened by MarcusFra 3
  • How do you evaluate Chapter 3 Model?


As the title says, how do you evaluate a CNN? I tried the same approach as in Chapter 2, but it fails with the following error.

    ---------------------------------------------------------------------------
    RuntimeError                              Traceback (most recent call last)
    <ipython-input> in <module>
         11 img = img_transforms(img).to(device)
         12 cnnet.eval()
    ---> 13 prediction = F.softmax(cnnet(img), dim=1)
         14 prediction = prediction.argmax()
         15 cats_pred.append(labels[prediction])

    D:\Users\gusta\anaconda3\envs\book-1\lib\site-packages\torch\nn\modules\module.py in _call_impl(self, *input, **kwargs)
        725     result = self._slow_forward(*input, **kwargs)
        726 else:
    --> 727     result = self.forward(*input, **kwargs)
        728 for hook in itertools.chain(
        729     _global_forward_hooks.values(),

    <ipython-input> in forward(self, x)
         31 def forward(self, x):
    ---> 32     x = self.features(x)
         33     x = self.avgpool(x)
         34     x = torch.flatten(x, 1)

    D:\Users\gusta\anaconda3\envs\book-1\lib\site-packages\torch\nn\modules\module.py in _call_impl(self, *input, **kwargs)
        725     result = self._slow_forward(*input, **kwargs)
        726 else:
    --> 727     result = self.forward(*input, **kwargs)

    D:\Users\gusta\anaconda3\envs\book-1\lib\site-packages\torch\nn\modules\container.py in forward(self, input)
        115 def forward(self, input):
        116     for module in self:
    --> 117     input = module(input)
        118     return input

    D:\Users\gusta\anaconda3\envs\book-1\lib\site-packages\torch\nn\modules\module.py in _call_impl(self, *input, **kwargs)
        725     result = self._slow_forward(*input, **kwargs)
        726 else:
    --> 727     result = self.forward(*input, **kwargs)

    D:\Users\gusta\anaconda3\envs\book-1\lib\site-packages\torch\nn\modules\conv.py in forward(self, input)
        422 def forward(self, input: Tensor) -> Tensor:
    --> 423     return self._conv_forward(input, self.weight)

    D:\Users\gusta\anaconda3\envs\book-1\lib\site-packages\torch\nn\modules\conv.py in _conv_forward(self, input, weight)
        417     weight, self.bias, self.stride,
        418     _pair(0), self.dilation, self.groups)
    --> 419 return F.conv2d(input, weight, self.bias, self.stride,
        420     self.padding, self.dilation, self.groups)

    RuntimeError: Expected 4-dimensional input for 4-dimensional weight [64, 3, 11, 11], but got 3-dimensional input of size [3, 64, 64] instead
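
    The last line points at the cause: the conv layers want 4D [batch, channels, H, W] input but received a single 3D image. This is the same missing batch dimension as in the issues above; a hedged stand-in demo:

        import torch

        conv = torch.nn.Conv2d(3, 64, kernel_size=11)  # stand-in for the CNN's first layer
        img = torch.randn(3, 64, 64)                   # one transformed image
        out = conv(img.unsqueeze(0))                   # unsqueeze(0) adds the batch dim
        print(out.shape)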

    opened by lenoqt 3
  • Bump pillow from 7.0.0 to 8.1.1


    Bumps pillow from 7.0.0 to 8.1.1.

    Release notes

    Sourced from pillow's releases.

    8.1.1

    https://pillow.readthedocs.io/en/stable/releasenotes/8.1.1.html

    8.1.0

    https://pillow.readthedocs.io/en/stable/releasenotes/8.1.0.html

    Changes

    Dependencies

    Deprecations

    ... (truncated)

    Changelog

    Sourced from pillow's changelog.

    8.1.1 (2021-03-01)

    • Use more specific regex chars to prevent ReDoS. CVE-2021-25292 [hugovk]

    • Fix OOB Read in TiffDecode.c, and check the tile validity before reading. CVE-2021-25291 [wiredfool]

    • Fix negative size read in TiffDecode.c. CVE-2021-25290 [wiredfool]

    • Fix OOB read in SgiRleDecode.c. CVE-2021-25293 [wiredfool]

    • Incorrect error code checking in TiffDecode.c. CVE-2021-25289 [wiredfool]

    • PyModule_AddObject fix for Python 3.10 #5194 [radarhere]

    8.1.0 (2021-01-02)

    • Fix TIFF OOB Write error. CVE-2020-35654 #5175 [wiredfool]

    • Fix for Read Overflow in PCX Decoding. CVE-2020-35653 #5174 [wiredfool, radarhere]

    • Fix for SGI Decode buffer overrun. CVE-2020-35655 #5173 [wiredfool, radarhere]

    • Fix OOB Read when saving GIF of xsize=1 #5149 [wiredfool]

    • Makefile updates #5159 [wiredfool, radarhere]

    • Add support for PySide6 #5161 [hugovk]

    • Use disposal settings from previous frame in APNG #5126 [radarhere]

    • Added exception explaining that _repr_png_ saves to PNG #5139 [radarhere]

    • Use previous disposal method in GIF load_end #5125 [radarhere]

    ... (truncated)

    Commits
    • 741d874 8.1.1 version bump
    • 179cd1c Added 8.1.1 release notes to index
    • 7d29665 Update CHANGES.rst [ci skip]
    • d25036f Credits
    • 973a4c3 Release notes for 8.1.1
    • 521dab9 Use more specific regex chars to prevent ReDoS
    • 8b8076b Fix for CVE-2021-25291
    • e25be1e Fix negative size read in TiffDecode.c
    • f891baa Fix OOB read in SgiRleDecode.c
    • cbfdde7 Incorrect error code checking in TiffDecode.c
    • Additional commits viewable in compare view

    Dependabot compatibility score

    Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


    Dependabot commands and options

    You can trigger Dependabot actions by commenting on this PR:

    • @dependabot rebase will rebase this PR
    • @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
    • @dependabot merge will merge this PR after your CI passes on it
    • @dependabot squash and merge will squash and merge this PR after your CI passes on it
    • @dependabot cancel merge will cancel a previously requested merge and block automerging
    • @dependabot reopen will reopen this PR if it is closed
    • @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
    • @dependabot ignore this major version will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this minor version will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this dependency will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
    • @dependabot use these labels will set the current labels as the default for future PRs for this repo and language
    • @dependabot use these reviewers will set the current reviewers as the default for future PRs for this repo and language
    • @dependabot use these assignees will set the current assignees as the default for future PRs for this repo and language
    • @dependabot use this milestone will set the current milestone as the default for future PRs for this repo and language

    You can disable automated security fix PRs for this repo from the Security Alerts page.

    dependencies 
    opened by dependabot[bot] 2
  • Chapter 6: A Journey Into Sound


Hi, I cannot find the solution: why is len(train_loader.dataset) 0, leading to this error? Thanks for your help. Best regards, Patrick

    [9] train(audionet, optimizer, torch.nn.CrossEntropyLoss(), train_loader, valid_loader, epochs=20, device=device)

    ZeroDivisionError                         Traceback (most recent call last)
    <ipython-input> in <module>
    ----> 1 train(audionet, optimizer, torch.nn.CrossEntropyLoss(), train_loader, valid_loader, epochs=20, device=device)

    <ipython-input> in train(model, optimizer, loss_fn, train_loader, val_loader, epochs, device)
         14     optimizer.step()
         15     training_loss += loss.data.item() * inputs.size(0)
    ---> 16 training_loss /= len(train_loader.dataset)
         17
         18 model.eval()

    ZeroDivisionError: float division by zero
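
    len(train_loader.dataset) being 0 means the dataset found no files, usually a wrong data path (compare the ESC-50 folder-name issue further down). A hedged sanity check that fails fast instead of dividing by zero inside train():

        def check_loader(loader, name="train_loader"):
            # Fail fast with a clear message instead of a ZeroDivisionError in train()
            n = len(loader.dataset)
            if n == 0:
                raise RuntimeError(f"{name} found no samples - check the dataset path")
            return n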

    opened by eiler-partner 2
  • Chapter 4 - Replacing the Classifier


    transfer_model.fc = nn.Sequential(nn.Linear(transfer_model.fc.in_features, 500),
                                      nn.ReLU(),
                                      nn.Dropout(),
                                      nn.Linear(500, 2))

    AttributeError: 'Sequential' object has no attribute 'in_features'
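
    This error typically means the replacement cell ran twice: after the first run, transfer_model.fc is already the nn.Sequential, which has no in_features attribute. A hedged sketch that reads in_features once, before replacing fc (assuming the ResNet-50 transfer model used in the chapter):

        import torch.nn as nn
        from torchvision import models

        transfer_model = models.resnet50(pretrained=True)
        in_features = transfer_model.fc.in_features   # read before fc is replaced
        transfer_model.fc = nn.Sequential(
            nn.Linear(in_features, 500),
            nn.ReLU(),
            nn.Dropout(),
            nn.Linear(500, 2),
        )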

    opened by ldkong1205 2
  • Chapter 6: find_lr(), train() - RuntimeError Output size is too small


    Hi,

after inserting the missing train_loader argument into the find_lr() call (we should open a PR for this once this error is solved), I get a RuntimeError in the current version of Chapter6.ipynb.

    After running

    torch.save(audionet.state_dict(), "audionet.pth")
    optimizer = optim.Adam(audionet.parameters(), lr=0.001)
    ### added: , train_loader, device="cuda"
    logs,losses = find_lr(audionet, nn.CrossEntropyLoss(), optimizer, train_loader, device="cuda")
    plt.plot(logs,losses)
    

    I get this error:

    ---------------------------------------------------------------------------
    
    RuntimeError                              Traceback (most recent call last)
    
    <ipython-input-47-d21b94029709> in <module>()
          2 optimizer = optim.Adam(audionet.parameters(), lr=0.001)
          3 ### added: , train_loader, device="cuda"
    ----> 4 logs,losses = find_lr(audionet, nn.CrossEntropyLoss(), optimizer, train_loader, device="cuda")
          5 plt.plot(logs,losses)
    
    4 frames
    
    <ipython-input-41-3824bd6072de> in find_lr(model, loss_fn, optimizer, train_loader, init_value, final_value, device)
         49         targets = targets.to(device)
         50         optimizer.zero_grad()
    ---> 51         outputs = model(inputs)
         52         loss = loss_fn(outputs, targets)
         53 
    
    /usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
        530             result = self._slow_forward(*input, **kwargs)
        531         else:
    --> 532             result = self.forward(*input, **kwargs)
        533         for hook in self._forward_hooks.values():
        534             hook_result = hook(self, input, result)
    
    <ipython-input-44-db3925c72c5e> in forward(self, x)
         31         x = F.relu(self.bn4(x))
         32         x = self.pool4(x)
    ---> 33         x = self.avgPool(x)
         34         x = x.squeeze(-1)
         35         x = self.fc1(x)
    
    /usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
        530             result = self._slow_forward(*input, **kwargs)
        531         else:
    --> 532             result = self.forward(*input, **kwargs)
        533         for hook in self._forward_hooks.values():
        534             hook_result = hook(self, input, result)
    
    /usr/local/lib/python3.6/dist-packages/torch/nn/modules/pooling.py in forward(self, input)
        484         return F.avg_pool1d(
        485             input, self.kernel_size, self.stride, self.padding, self.ceil_mode,
    --> 486             self.count_include_pad)
        487 
        488 
    
    RuntimeError: Given input size: (512x1x1). Calculated output size: (512x1x0). Output size is too small
    

    The same happens when I run

    train(audionet, optimizer, torch.nn.CrossEntropyLoss(),train_loader, valid_loader, epochs=20)
    

    I ran the code in Colab.
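
    For context: 1D average pooling produces floor((L - k)/stride) + 1 output samples, so once the time axis has been squeezed to length 1, any kernel wider than 1 computes an output size of 0 and PyTorch raises exactly this error. A minimal, hedged reproduction of the (512x1x1) case from the traceback (kernel size illustrative):

        import torch
        import torch.nn.functional as F

        x = torch.randn(1, 512, 1)          # matches the (512x1x1) activation above
        try:
            F.avg_pool1d(x, kernel_size=2)  # kernel wider than the length-1 time axis
        except RuntimeError as e:
            print(e)                        # "... Output size is too small"

    Audio clips shorter than the network expects (or one pooling stage too many) shrink the time axis to 1 before avgPool runs.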

    opened by MarcusFra 2
  • Chapter 6 - audionet.save fails


    the cell with

    audionet.save("audionet.pth")

fails with the error message AttributeError: 'AudioNet' object has no attribute 'save'

    So of course later attempts to load this pth also fail.

Running the code on Ubuntu 19.10 with an Anaconda Python setup.
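
    nn.Module has no save() method; the standard PyTorch idiom (the same call quoted earlier in this thread) is to save and load the state dict. A minimal sketch with a stand-in module:

        import torch
        import torch.nn as nn

        audionet = nn.Linear(4, 2)  # stand-in; any nn.Module works the same way
        torch.save(audionet.state_dict(), "audionet.pth")
        audionet.load_state_dict(torch.load("audionet.pth"))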

    opened by PCJimmmy 2
  • Code for Class activation mapping


Hi, I've been trying to follow the code examples from the class activation mapping workflow (Chapter 7), but I have a hard time understanding how this code fits with what comes before:

    fts = sf[0].features[idx]
    prob = np.exp(to_np(log_prob))
    preds = np.argmax(prob[idx])
    fts_np = to_np(fts)
    f2 = np.dot(np.rollaxis(fts_np, 0, 3), prob[idx])
    f2 -= f2.min()
    f2 /= f2.max()
    f2
    plt.imshow(dx)
    plt.imshow(scipy.misc.imresize(f2, dx.shape), alpha=0.5, cmap='jet');
    

    Could you please post the code that is consistent with previous naming, or add a file in the repository to demonstrate this flow ?

    Thanks !

    opened by jhagege 2
  • Bump pillow from 7.0.0 to 9.3.0


    Bumps pillow from 7.0.0 to 9.3.0.

    Release notes

    Sourced from pillow's releases.

    9.3.0

    https://pillow.readthedocs.io/en/stable/releasenotes/9.3.0.html

    Changes

    ... (truncated)

    Changelog

    Sourced from pillow's changelog.

    9.3.0 (2022-10-29)

    • Limit SAMPLESPERPIXEL to avoid runtime DOS #6700 [wiredfool]

    • Initialize libtiff buffer when saving #6699 [radarhere]

    • Inline fname2char to fix memory leak #6329 [nulano]

    • Fix memory leaks related to text features #6330 [nulano]

    • Use double quotes for version check on old CPython on Windows #6695 [hugovk]

    • Remove backup implementation of Round for Windows platforms #6693 [cgohlke]

    • Fixed set_variation_by_name offset #6445 [radarhere]

    • Fix malloc in _imagingft.c:font_setvaraxes #6690 [cgohlke]

    • Release Python GIL when converting images using matrix operations #6418 [hmaarrfk]

    • Added ExifTags enums #6630 [radarhere]

    • Do not modify previous frame when calculating delta in PNG #6683 [radarhere]

    • Added support for reading BMP images with RLE4 compression #6674 [npjg, radarhere]

    • Decode JPEG compressed BLP1 data in original mode #6678 [radarhere]

    • Added GPS TIFF tag info #6661 [radarhere]

    • Added conversion between RGB/RGBA/RGBX and LAB #6647 [radarhere]

    • Do not attempt normalization if mode is already normal #6644 [radarhere]

    ... (truncated)

    Commits


    dependencies 
    opened by dependabot[bot] 0
  • Chapter 6: The ESC-50 folder names differ between the book and the repo code


The book gives two ways to get the WAV files. One is to use git clone; the command in the book is:

    git clone https://github.com/karoldvl/ESC-50
    

    The other is to download the zip file directly and unzip it.

    Both will create the folder with the default name ESC-50.

But the repo code uses a different name, esc50, in the file Chapter 6.ipynb:

    device="cuda"
    bs=64
    PATH_TO_ESC50 = Path.cwd() / 'esc50'
    

    This confused me for a long while: every run failed and returned 0 results. I checked again and again, and finally found the reason.

    I suggest updating the repo code.
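
    Until the repo is updated, a one-line workaround is to point the notebook at the folder name that git clone actually creates (or, equivalently, rename the cloned folder to esc50):

        from pathlib import Path

        PATH_TO_ESC50 = Path.cwd() / 'ESC-50'  # match the folder created by git clone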

    opened by FreeBlues 1
  • Bump numpy from 1.18.5 to 1.22.0


    Bumps numpy from 1.18.5 to 1.22.0.

    Release notes

    Sourced from numpy's releases.

    v1.22.0

    NumPy 1.22.0 Release Notes

    NumPy 1.22.0 is a big release featuring the work of 153 contributors spread over 609 pull requests. There have been many improvements, highlights are:

    • Annotations of the main namespace are essentially complete. Upstream is a moving target, so there will likely be further improvements, but the major work is done. This is probably the most user visible enhancement in this release.
    • A preliminary version of the proposed Array-API is provided. This is a step in creating a standard collection of functions that can be used across applications such as CuPy and JAX.
    • NumPy now has a DLPack backend. DLPack provides a common interchange format for array (tensor) data.
    • New methods for quantile, percentile, and related functions. The new methods provide a complete set of the methods commonly found in the literature.
    • A new configurable allocator for use by downstream projects.

    These are in addition to the ongoing work to provide SIMD support for commonly used functions, improvements to F2PY, and better documentation.

    The Python versions supported in this release are 3.8-3.10, Python 3.7 has been dropped. Note that 32 bit wheels are only provided for Python 3.8 and 3.9 on Windows, all other wheels are 64 bits on account of Ubuntu, Fedora, and other Linux distributions dropping 32 bit support. All 64 bit wheels are also linked with 64 bit integer OpenBLAS, which should fix the occasional problems encountered by folks using truly huge arrays.

    Expired deprecations

    Deprecated numeric style dtype strings have been removed

    Using the strings "Bytes0", "Datetime64", "Str0", "Uint32", and "Uint64" as a dtype will now raise a TypeError.

    (gh-19539)

    Expired deprecations for loads, ndfromtxt, and mafromtxt in npyio

    numpy.loads was deprecated in v1.15, with the recommendation that users use pickle.loads instead. ndfromtxt and mafromtxt were both deprecated in v1.17 - users should use numpy.genfromtxt instead with the appropriate value for the usemask parameter.

    (gh-19615)

    ... (truncated)

    Commits


    dependencies 
    opened by dependabot[bot] 0
  • Chapter 2 has an unknown mistake, please help me


I copied the code into PyCharm, but it does not run. Here is the code:

    import torch
    import torch.nn as nn
    import torch.optim as optim
    import torch.utils.data
    import torch.nn.functional as F
    import torchvision
    from torchvision import transforms
    from PIL import Image, ImageFile

    ImageFile.LOAD_TRUNCATED_IMAGES = True

    def check_image(path):
        try:
            im = Image.open(path)
            return True
        except:
            return False

    check_image = True

    img_transforms = transforms.Compose([
        transforms.Resize((64, 64)),
        transforms.ToTensor(),
        transforms.Normalize(mean=[0.485, 0.456, 0.406],
                             std=[0.229, 0.224, 0.225])
    ])

    train_data_path = "./train/"
    train_data = torchvision.datasets.ImageFolder(root=train_data_path, transform=img_transforms)
    val_data_path = "./val/"
    val_data = torchvision.datasets.ImageFolder(root=val_data_path, transform=img_transforms)
    test_data_path = "./test/"
    test_data = torchvision.datasets.ImageFolder(root=test_data_path, transform=img_transforms)

    batch_size = 64

    train_data_loader = torch.utils.data.DataLoader(train_data, batch_size=batch_size)
    val_data_loader = torch.utils.data.DataLoader(val_data, batch_size=batch_size)
    test_data_loader = torch.utils.data.DataLoader(test_data, batch_size=batch_size)

    class SimpleNet(nn.Module):

        def __init__(self):
            super(SimpleNet, self).__init__()
            self.fc1 = nn.Linear(12288, 84)
            self.fc2 = nn.Linear(84, 50)
            self.fc3 = nn.Linear(50, 2)

        def forward(self, x):
            x = x.view(-1, 12288)
            x = F.relu(self.fc1(x))
            x = F.relu(self.fc2(x))
            x = self.fc3(x)
            return x

    simplenet = SimpleNet()

    optimizer = optim.Adam(simplenet.parameters(), lr=0.001)

    if torch.cuda.is_available():
        device = torch.device("cuda")
    else:
        device = torch.device("cpu")

    simplenet.to(device)

    def train(model, optimizer, loss_fn, train_loader, val_loader, epochs, device):
        for epoch in range(1, epochs + 1):
            training_loss = 0.0
            valid_loss = 0.0
            model.train()
            for batch in train_loader:
                optimizer.zero_grad()
                inputs, targets = batch
                inputs = inputs.to(device)
                targets = targets.to(device)
                output = model(inputs)
                loss = loss_fn(output, targets)
                loss.backward()
                optimizer.step()
                training_loss += loss.data.item() * inputs.size(0)
            training_loss /= len(train_loader.dataset)

            model.eval()
            num_correct = 0
            num_examples = 0
            for batch in val_loader:
                inputs, targets = batch
                inputs = inputs.to(device)
                output = model(inputs)
                targets = targets.to(device)
                loss = loss_fn(output, targets)
                valid_loss += loss.data.item() * inputs.size(0)
                correct = torch.eq(torch.max(F.softmax(output, dim=1), dim=1)[1], targets)
                num_correct += torch.sum(correct).item()
                num_examples += correct.shape[0]
            valid_loss /= len(val_loader.dataset)

            print('Epoch: {}, Training Loss: {:.2f}, Validation Loss: {:.2f}, accuracy = {:.2f}'.format(
                epoch, training_loss, valid_loss, num_correct / num_examples))

    epoch = 5

    train(simplenet, optimizer, torch.nn.CrossEntropyLoss(), train_data_loader, val_data_loader, epoch, device)


Error message:

    Traceback (most recent call last):
      File "C:\Users\Administrator\Desktop\disk_file\DL\1\5.py", line 110, in <module>
        train(simplenet, optimizer, torch.nn.CrossEntropyLoss(), train_data_loader, val_data_loader, epoch, device)
      File "C:\Users\Administrator\Desktop\disk_file\DL\1\5.py", line 75, in train
        for batch in train_loader:
      File "C:\Users\Administrator\AppData\Roaming\Python\Python39\site-packages\torch\utils\data\dataloader.py", line 521, in __next__
        data = self._next_data()
      File "C:\Users\Administrator\AppData\Roaming\Python\Python39\site-packages\torch\utils\data\dataloader.py", line 561, in _next_data
        data = self._dataset_fetcher.fetch(index)  # may raise StopIteration
      File "C:\Users\Administrator\AppData\Roaming\Python\Python39\site-packages\torch\utils\data\_utils\fetch.py", line 49, in fetch
        data = [self.dataset[idx] for idx in possibly_batched_index]
      File "C:\Users\Administrator\AppData\Roaming\Python\Python39\site-packages\torch\utils\data\_utils\fetch.py", line 49, in <listcomp>
        data = [self.dataset[idx] for idx in possibly_batched_index]
      File "C:\Users\Administrator\AppData\Roaming\Python\Python39\site-packages\torchvision\datasets\folder.py", line 232, in __getitem__
        sample = self.loader(path)
      File "C:\Users\Administrator\AppData\Roaming\Python\Python39\site-packages\torchvision\datasets\folder.py", line 269, in default_loader
        return pil_loader(path)
      File "C:\Users\Administrator\AppData\Roaming\Python\Python39\site-packages\torchvision\datasets\folder.py", line 250, in pil_loader
        img = Image.open(f)
      File "C:\Program Files\1\lib\site-packages\PIL\Image.py", line 3030, in open
        raise UnidentifiedImageError(
    PIL.UnidentifiedImageError: cannot identify image file <_io.BufferedReader name='./train/cat\1004525_cba96ba3c3.jpg'>
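
    The failing file is a corrupt image in ./train/. Two things stand out in the code above: check_image = True rebinds the name and discards the function, and ImageFolder is never told to use it. A hedged sketch of the intended wiring, mirroring the Chapter 3 code quoted earlier:

        # Keep check_image as a function (do not rebind it) and pass it to ImageFolder
        # so unreadable files are filtered out up front.
        train_data = torchvision.datasets.ImageFolder(root=train_data_path,
                                                      transform=img_transforms,
                                                      is_valid_file=check_image)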

    opened by hero-117 6
  • chapter 2, page 21: 'data' is not defined in data.DataLoader()


    At the top of page 21, we have

    batch_size=64
    train_data_loader = data.DataLoader(train_data, batch_size=batch_size)  ## line 19, 'data' not defined
    val_data_loader = data.DataLoader(val_data, batch_size=batch_size)
    test_data_loader = data.DataLoader(test_data, batch_size=batch_size)
    

which Python complains about, because nothing named data has been imported:

    $ python ./prep.py
    Traceback (most recent call last):
      File "./prep.py", line 19, in <module>
        train_data_loader = data.DataLoader(train_data, batch_size=batch_size)
    NameError: name 'data' is not defined
    

    where is the data package with the DataLoader definition supposed to come from?
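
    DataLoader lives in torch.utils.data; the snippet works once data is imported. A minimal sketch (either form works):

        from torch.utils import data       # then data.DataLoader(...) resolves
        # or, equivalently:
        import torch.utils.data as data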

    opened by glycerine 2
Owner
Ian Pointer
Official repository of my book: "Deep Learning with PyTorch Step-by-Step: A Beginner's Guide"

This is the official repository of my book "Deep Learning with PyTorch Step-by-Step". Here you will find one Jupyter notebook for every chapter in the book.

Daniel Voigt Godoy 340 Jan 1, 2023
Code samples for my book "Neural Networks and Deep Learning"

Code samples for "Neural Networks and Deep Learning" This repository contains code samples for my book on "Neural Networks and Deep Learning". The cod

Michael Nielsen 13.9k Dec 26, 2022
Free Book about Deep-Learning approaches for Chess (like AlphaZero, Leela Chess Zero and Stockfish NNUE)


Dominik Klein 189 Dec 21, 2022
3ds-Ghidra-Scripts - Ghidra scripts to help with 3ds reverse engineering

3ds Ghidra Scripts These are ghidra scripts to help with 3ds reverse engineering

Zak 7 May 23, 2022
Omniverse sample scripts - A guide for developing with Python scripts on NVIDIA Ominverse

Omniverse sample scripts Here, NVIDIA Omniverse ( https://www.nvidia.com/ja-jp/om

ft-lab (Yutaka Yoshisaka) 37 Nov 17, 2022
Deep universal probabilistic programming with Python and PyTorch

Getting Started | Documentation | Community | Contributing Pyro is a flexible, scalable deep probabilistic programming library built on PyTorch. Notab

null 7.7k Dec 30, 2022
Use deep learning, genetic programming and other methods to predict stock and market movements

StockPredictions Use classic tricks, neural networks, deep learning, genetic programming and other methods to predict stock and market movements. Both

Linda MacPhee-Cobb 386 Jan 3, 2023
Experimental solutions to selected exercises from the book [Advances in Financial Machine Learning by Marcos Lopez De Prado]

Advances in Financial Machine Learning Exercises Experimental solutions to selected exercises from the book Advances in Financial Machine Learning by

Brian 1.4k Jan 4, 2023
deep-table implements various state-of-the-art deep learning and self-supervised learning algorithms for tabular data using PyTorch.


null 63 Oct 17, 2022
This package contains deep learning models and related scripts for RoseTTAFold

RoseTTAFold This package contains deep learning models and related scripts to run RoseTTAFold This repository is the official implementation of RoseTT

null 1.6k Jan 3, 2023
DeepProbLog is an extension of ProbLog that integrates Probabilistic Logic Programming with deep learning by introducing the neural predicate.

DeepProbLog DeepProbLog is an extension of ProbLog that integrates Probabilistic Logic Programming with deep learning by introducing the neural predic

KU Leuven Machine Learning Research Group 94 Dec 18, 2022
📚 A collection of Jupyter notebooks for learning and experimenting with OpenVINO 👓

A collection of ready-to-run Python* notebooks for learning and experimenting with OpenVINO developer tools. The notebooks are meant to provide an introduction to OpenVINO basics and teach developers how to leverage our APIs for optimized deep learning inference in their applications.

OpenVINO Toolkit 840 Jan 3, 2023
Deep Learning Training Scripts With Python

Deep Learning Training Scripts DNN Frameworks Caffe PyTorch Tensorflow CNN Models VGG ResNet DenseNet Inception Language Modeling GatedCNN-LM Attentio

Multicore Computing Research Lab 16 Dec 15, 2022
This repository contains all the code and materials distributed in the 2021 Q-Programming Summer of Qode.

Q-Programming Summer of Qode This repository contains all the code and materials distributed in the Q-Programming Summer of Qode. If you want to creat

Sammarth Kumar 11 Jun 11, 2021
PyKale is a PyTorch library for multimodal learning and transfer learning as well as deep learning and dimensionality reduction on graphs, images, texts, and videos

PyKale is a PyTorch library for multimodal learning and transfer learning as well as deep learning and dimensionality reduction on graphs, images, texts, and videos. By adopting a unified pipeline-based API design, PyKale enforces standardization and minimalism, via reusing existing resources, reducing repetitions and redundancy, and recycling learning models across areas.

PyKale 370 Dec 27, 2022
Sample code from the Neural Networks from Scratch book.

Neural Networks from Scratch (NNFS) book code Code from the NNFS book (https://nnfs.io) separated by chapter.

Harrison 172 Dec 31, 2022
MATLAB codes of the book "Digital Image Processing Fourth Edition" converted to Python

Digital Image Processing Python MATLAB codes of the book "Digital Image Processing Fourth Edition" converted to Python TO-DO: Refactor scripts, curren

Merve Noyan 24 Oct 16, 2022
Python Algorithm Interview Book Review

Python Algorithm Interview book review. Review: I have the goal of joining a major IT company. Seeing the people who work at the company I have dreamed of makes them look impressive to me and strengthens my desire for my goal. Leading and advancing the SW sector, one of the core industries of the future, Korea's I

SharkBSJ 1 Dec 14, 2021