Tensors and Dynamic neural networks in Python with strong GPU acceleration

Overview

PyTorch is a Python package that provides two high-level features:

  • Tensor computation (like NumPy) with strong GPU acceleration
  • Deep neural networks built on a tape-based autograd system

You can reuse your favorite Python packages such as NumPy, SciPy, and Cython to extend PyTorch when needed.
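
For instance, here is a minimal sketch of the NumPy bridge (a toy example, not from the README): torch.from_numpy shares memory with the source array, so the two libraries interoperate cheaply.

import numpy as np
import torch

# from_numpy gives a zero-copy view of the ndarray, so in-place
# tensor operations are visible from the NumPy side as well.
a = np.arange(6.0).reshape(2, 3)
t = torch.from_numpy(a)   # shares memory with a
t.mul_(2)                 # in-place multiply
print(a)                  # the ndarray reflects the change
print(t.numpy())          # back to NumPy, also zero-copy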

Continuous integration covers Python 3.6, 3.7, and 3.8 (build-status badges omitted) on the following systems: Linux CPU, Linux GPU, Windows CPU / GPU, Linux (ppc64le) CPU, Linux (ppc64le) GPU, and Linux (aarch64) CPU.

See also the ci.pytorch.org HUD.

More About PyTorch

At a granular level, PyTorch is a library that consists of the following components:

torch: a Tensor library like NumPy, with strong GPU support
torch.autograd: a tape-based automatic differentiation library that supports all differentiable Tensor operations in torch
torch.jit: a compilation stack (TorchScript) to create serializable and optimizable models from PyTorch code
torch.nn: a neural networks library deeply integrated with autograd, designed for maximum flexibility
torch.multiprocessing: Python multiprocessing, but with magical memory sharing of torch Tensors across processes; useful for data loading and Hogwild training
torch.utils: DataLoader and other utility functions for convenience (see the data-loading sketch below)
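
As a quick illustration of the torch.utils component referenced above, here is a minimal sketch (toy data, not from the README) of iterating over shuffled mini-batches with DataLoader:

import torch
from torch.utils.data import DataLoader, TensorDataset

# Wrap plain tensors in a Dataset, then let DataLoader handle
# batching and shuffling.
features = torch.randn(100, 4)          # hypothetical toy data
labels = torch.randint(0, 2, (100,))
dataset = TensorDataset(features, labels)
loader = DataLoader(dataset, batch_size=16, shuffle=True)

for batch_features, batch_labels in loader:
    print(batch_features.shape, batch_labels.shape)
    break  # just show the first batch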

Usually, PyTorch is used either as:

  • A replacement for NumPy to use the power of GPUs.
  • A deep learning research platform that provides maximum flexibility and speed.

Elaborating Further:

A GPU-Ready Tensor Library

If you use NumPy, then you have used Tensors (a.k.a. ndarray).

PyTorch provides Tensors that can live either on the CPU or the GPU and accelerates the computation by a huge amount.

We provide a wide variety of tensor routines to accelerate and fit your scientific computation needs, such as slicing, indexing, math operations, linear algebra, and reductions. And they are fast!
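
For example, here is a minimal sketch of device placement (a toy example; it assumes a CUDA-capable GPU may or may not be present and falls back to the CPU):

import torch

# Pick the GPU if one is available, otherwise run on the CPU.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

x = torch.randn(1000, 1000, device=device)
y = torch.randn(1000, 1000, device=device)
z = x @ y                   # matrix multiply runs on the chosen device
print(z.device, z.sum().item())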

Dynamic Neural Networks: Tape-Based Autograd

PyTorch has a unique way of building neural networks: using and replaying a tape recorder.

Most frameworks such as TensorFlow, Theano, Caffe, and CNTK have a static view of the world. One has to build a neural network and reuse the same structure again and again. Changing the way the network behaves means that one has to start from scratch.

With PyTorch, we use a technique called reverse-mode auto-differentiation, which allows you to change the way your network behaves arbitrarily with zero lag or overhead. Our inspiration comes from several research papers on this topic, as well as current and past work such as torch-autograd, autograd, Chainer, etc.

While this technique is not unique to PyTorch, it's one of the fastest implementations of it to date. You get the best of speed and flexibility for your crazy research.
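
A minimal sketch of what define-by-run enables (a toy example, not from the README): data-dependent control flow inside the model, with gradients computed by replaying the tape.

import torch

# The graph is rebuilt on every forward pass, so the number of loop
# iterations can depend on the data itself.
x = torch.randn(3, requires_grad=True)

y = x * 2
while y.norm() < 100:   # loop length depends on the values of x
    y = y * 2

y.sum().backward()      # replay the tape to get gradients
print(x.grad)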


Python First

PyTorch is not a Python binding into a monolithic C++ framework. It is built to be deeply integrated into Python. You can use it naturally like you would use NumPy / SciPy / scikit-learn etc. You can write your new neural network layers in Python itself, using your favorite libraries and packages such as Cython and Numba. Our goal is to not reinvent the wheel where appropriate.

Imperative Experiences

PyTorch is designed to be intuitive, linear in thought, and easy to use. When you execute a line of code, it gets executed. There isn't an asynchronous view of the world. When you drop into a debugger or receive error messages and stack traces, understanding them is straightforward. The stack trace points to exactly where your code was defined. We hope you never spend hours debugging your code because of bad stack traces or asynchronous and opaque execution engines.
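
A toy illustration of this imperative style (not from the README): each line executes as soon as it is reached, so intermediate values can be inspected with a plain print() or a standard Python debugger.

import torch

# No session, no graph compilation step: values exist immediately.
x = torch.tensor([1.0, 2.0, 3.0])
y = x ** 2
print(y)   # tensor([1., 4., 9.]) -- available right away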

Fast and Lean

PyTorch has minimal framework overhead. We integrate acceleration libraries such as Intel MKL and NVIDIA (cuDNN, NCCL) to maximize speed. At the core, its CPU and GPU Tensor and neural network backends (TH, THC, THNN, THCUNN) are mature and have been tested for years.

Hence, PyTorch is quite fast – whether you run small or large neural networks.

The memory usage in PyTorch is extremely efficient compared to Torch or some of the alternatives. We've written custom memory allocators for the GPU to make sure that your deep learning models are maximally memory efficient. This enables you to train bigger deep learning models than before.

Extensions Without Pain

Writing new neural network modules, or interfacing with PyTorch's Tensor API, is designed to be straightforward, with minimal abstractions.

You can write new neural network layers in Python using the torch API or your favorite NumPy-based libraries such as SciPy.
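
For instance, here is a minimal, hypothetical custom layer written in plain Python (ScaledLinear is an illustrative name, not a PyTorch module); autograd derives the backward pass automatically:

import torch
import torch.nn as nn

# Subclass nn.Module, register submodules/parameters in __init__,
# and implement forward(). No backward code is needed.
class ScaledLinear(nn.Module):
    def __init__(self, in_features, out_features, scale=2.0):
        super().__init__()
        self.linear = nn.Linear(in_features, out_features)
        self.scale = scale

    def forward(self, x):
        return self.linear(x) * self.scale

layer = ScaledLinear(4, 2)
print(layer(torch.randn(3, 4)).shape)  # torch.Size([3, 2])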

If you want to write your layers in C/C++, we provide a convenient extension API that is efficient and has minimal boilerplate. No wrapper code needs to be written. You can see a tutorial here and an example here.
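
As a sketch of what this can look like in practice, torch.utils.cpp_extension.load_inline JIT-compiles a small C++ function and exposes it to Python; add_one below is a hypothetical function used purely for illustration, and a working C++ toolchain is assumed:

import torch
from torch.utils.cpp_extension import load_inline

# load_inline compiles the source on the fly and auto-generates a
# Python binding for each name listed in `functions`.
cpp_source = """
torch::Tensor add_one(torch::Tensor x) {
  return x + 1;
}
"""
ext = load_inline(name="demo_ext", cpp_sources=cpp_source,
                  functions=["add_one"])
print(ext.add_one(torch.zeros(3)))  # tensor([1., 1., 1.])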

Installation

Binaries

Commands to install from binaries via Conda or pip wheels are on our website: https://pytorch.org

NVIDIA Jetson Platforms

Python wheels for NVIDIA's Jetson Nano, Jetson TX2, and Jetson AGX Xavier are available via the following URLs:

They require JetPack 4.2 and above, and @dusty-nv maintains them.

From Source

If you are installing from source, you will need Python 3.6.2 or later and a C++14 compiler. We also highly recommend using an Anaconda environment: you will get a high-quality BLAS library (MKL) and controlled dependency versions regardless of your Linux distro.

Once you have Anaconda installed, here are the instructions.

If you want to compile with CUDA support, install a compatible version of the NVIDIA CUDA toolkit and cuDNN first.

If you want to disable CUDA support, export environment variable USE_CUDA=0. Other potentially useful environment variables may be found in setup.py.

If you are building for NVIDIA's Jetson platforms (Jetson Nano, TX1, TX2, AGX Xavier), instructions to install PyTorch for Jetson Nano are available here.

Install Dependencies

Common

conda install numpy ninja pyyaml mkl mkl-include setuptools cmake cffi typing_extensions future six requests dataclasses

On Linux

# Add LAPACK support for the GPU if needed
conda install -c pytorch magma-cuda110  # or the magma-cuda* that matches your CUDA version from https://anaconda.org/pytorch/repo

On macOS

# Add these packages if torch.distributed is needed
conda install pkg-config libuv

On Windows

# Add these packages if torch.distributed is needed.
# Distributed package support on Windows is a prototype feature and is subject to changes.
conda install -c conda-forge libuv=1.39

Get the PyTorch Source

git clone --recursive https://github.com/pytorch/pytorch
cd pytorch
# if you are updating an existing checkout
git submodule sync
git submodule update --init --recursive

Install PyTorch

On Linux

export CMAKE_PREFIX_PATH=${CONDA_PREFIX:-"$(dirname $(which conda))/../"}
python setup.py install

Note that if you are using Anaconda, you may experience an error caused by the linker:

build/temp.linux-x86_64-3.7/torch/csrc/stub.o: file not recognized: file format not recognized
collect2: error: ld returned 1 exit status
error: command 'g++' failed with exit status 1

This is caused by the ld from the Conda environment shadowing the system ld. You should use a newer version of Python that fixes this issue. The recommended Python versions are 3.6.10+, 3.7.6+, and 3.8.1+.

On macOS

export CMAKE_PREFIX_PATH=${CONDA_PREFIX:-"$(dirname $(which conda))/../"}
MACOSX_DEPLOYMENT_TARGET=10.9 CC=clang CXX=clang++ python setup.py install

Each CUDA version only supports one particular Xcode version. The following combinations have been reported to work with PyTorch.

CUDA version | Xcode version
10.0 | Xcode 9.4
10.1 | Xcode 10.1

On Windows

Build with CPU

Building a CPU-only version is fairly easy. Visual Studio 2019 version 16.7.6 (MSVC toolchain version 14.27) or higher is recommended.

Build with CUDA

NVTX is needed to build PyTorch with CUDA. NVTX is part of the CUDA distribution, where it is called "Nsight Compute". To add it to an existing CUDA installation, run the CUDA installer again and check the corresponding checkbox. Make sure that CUDA with Nsight Compute is installed after Visual Studio.

Currently, VS 2017 / 2019 and Ninja are supported as CMake generators. If ninja.exe is detected in PATH, Ninja will be used as the default generator; otherwise, the build will use VS 2017 / 2019.
If Ninja is selected as the generator, the latest MSVC will be selected as the underlying toolchain.

CUDA, MSVC, and PyTorch versions are interdependent; please install matching versions from this table:

CUDA version | Newest supported VS version | PyTorch version
9.2 | Visual Studio 2017 Update 5 (15.5) (_MSC_VER <= 1912) | 0.4.1 ~ 1.5.1
10.1 | Visual Studio 2019 (16.X) (_MSC_VER < 1930) | 1.3.0 ~ 1.7.0
10.2 | Visual Studio 2019 (16.X) (_MSC_VER < 1930) | 1.5.0 ~ 1.7.0
11.0 | Visual Studio 2019 (16.X) (_MSC_VER < 1930) | 1.7.0

Note: There is a compilation issue in several Visual Studio 2019 versions since 16.7.1, so please make sure your Visual Studio 2019 version is not in the range 16.7.1 ~ 16.7.5.

Additional libraries such as Magma, oneDNN (a.k.a. MKL-DNN or DNNL), and sccache are often needed. Please refer to the installation-helper to install them.

You can refer to the build_pytorch.bat script for some other environment variable configurations.


:: [Optional] If you want to build with the VS 2017 generator for old CUDA and PyTorch, please change the value in the next line to `Visual Studio 15 2017`.
:: Note: This value is useless if Ninja is detected. However, you can force that by using `set USE_NINJA=OFF`.
set CMAKE_GENERATOR=Visual Studio 16 2019

:: Read the content in the previous section carefully before you proceed.
:: [Optional] If you want to override the underlying toolset used by Ninja and Visual Studio with CUDA, please run the following script block.
:: "Visual Studio 2019 Developer Command Prompt" will be run automatically.
:: Make sure you have CMake >= 3.12 before you do this when you use the Visual Studio generator.
set CMAKE_GENERATOR_TOOLSET_VERSION=14.27
set DISTUTILS_USE_SDK=1
for /f "usebackq tokens=*" %i in (`"%ProgramFiles(x86)%\Microsoft Visual Studio\Installer\vswhere.exe" -version [15^,16^) -products * -latest -property installationPath`) do call "%i\VC\Auxiliary\Build\vcvarsall.bat" x64 -vcvars_ver=%CMAKE_GENERATOR_TOOLSET_VERSION%

:: [Optional] If you want to override the CUDA host compiler
set CUDAHOSTCXX=C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Tools\MSVC\14.27.29110\bin\HostX64\x64\cl.exe

python setup.py install

Adjust Build Options (Optional)

You can optionally adjust the configuration of CMake variables (without building first) as follows. For example, you can adjust the pre-detected directories for cuDNN or BLAS this way.

On Linux

export CMAKE_PREFIX_PATH=${CONDA_PREFIX:-"$(dirname $(which conda))/../"}
python setup.py build --cmake-only
ccmake build  # or cmake-gui build

On macOS

export CMAKE_PREFIX_PATH=${CONDA_PREFIX:-"$(dirname $(which conda))/../"}
MACOSX_DEPLOYMENT_TARGET=10.9 CC=clang CXX=clang++ python setup.py build --cmake-only
ccmake build  # or cmake-gui build

Docker Image

Using pre-built images

You can also pull a pre-built docker image from Docker Hub and run it with Docker v19.03+:

docker run --gpus all --rm -ti --ipc=host pytorch/pytorch:latest

Please note that PyTorch uses shared memory to share data between processes, so if torch.multiprocessing is used (e.g. for multithreaded data loaders), the default shared memory segment size that the container runs with may not be enough. You should increase the shared memory size with either the --ipc=host or --shm-size command-line option to nvidia-docker run.

Building the image yourself

NOTE: The image must be built with a Docker version > 18.06.

The Dockerfile is supplied to build images with CUDA support and cuDNN v7. You can pass the PYTHON_VERSION=x.y make variable to specify which Python version is to be used by Miniconda, or leave it unset to use the default.

make -f docker.Makefile
# images are tagged as docker.io/${your_docker_username}/pytorch

Building the Documentation

To build documentation in various formats, you will need Sphinx and the readthedocs theme.

cd docs/
pip install -r requirements.txt

You can then build the documentation by running make <format> from the docs/ folder. Run make to get a list of all available output formats.

If you get a katex error, run npm install katex. If it persists, try npm install -g katex.

Previous Versions

Installation instructions and binaries for previous PyTorch versions may be found on our website.

Getting Started

Three pointers to get you started:

Resources

Communication

Releases and Contributing

PyTorch has a 90-day release cycle (major releases). Please let us know if you encounter a bug by filing an issue.

We appreciate all contributions. If you are planning to contribute back bug-fixes, please do so without any further discussion.

If you plan to contribute new features, utility functions, or extensions to the core, please first open an issue and discuss the feature with us. Sending a PR without discussion might end up resulting in a rejected PR because we might be taking the core in a different direction than you might be aware of.

To learn more about making a contribution to PyTorch, please see our Contribution page.

The Team

PyTorch is a community-driven project with several skillful engineers and researchers contributing to it.

PyTorch is currently maintained by Adam Paszke, Sam Gross, Soumith Chintala and Gregory Chanan with major contributions coming from hundreds of talented individuals in various forms and means. A non-exhaustive but growing list needs to mention: Trevor Killeen, Sasank Chilamkurthy, Sergey Zagoruyko, Adam Lerer, Francisco Massa, Alykhan Tejani, Luca Antiga, Alban Desmaison, Andreas Koepf, James Bradbury, Zeming Lin, Yuandong Tian, Guillaume Lample, Marat Dukhan, Natalia Gimelshein, Christian Sarofeen, Martin Raison, Edward Yang, Zachary DeVito.

Note: This project is unrelated to hughperkins/pytorch with the same name. Hugh is a valuable contributor to the Torch community and has helped with many things Torch and PyTorch.

License

PyTorch has a BSD-style license, as found in the LICENSE file.

Comments
  • Add windows support please


    I think pytorch should add Windows support. Other deep learning frameworks, like tensorflow, theano and mxnet, all support Windows. I only use Windows in my work, so I want to know whether pytorch will support Windows in the future.

    proposal accepted 
    opened by jf003320018 765
  • possible deadlock in dataloader


    the bug is described at pytorch/examples#148. I just wonder if this is a bug in PyTorch itself, as the example code looks clean to me. Also, I wonder if this is related to #1120.

    opened by zym1010 213
  • from torch._C import *  (ImportError: DLL load failed: The specified module could not be found.


    File "", line 4, in import torch

    File "C:\Users\hp i3\Anaconda3\lib\site-packages\torch_init_.py", line 76, in from torch._C import *

    ImportError: DLL load failed: The specified module could not be found.

    opened by HarshneetBhatia 176
  • RuntimeError: CUDA error: an illegal memory access was encountered


    Hi, everyone! I have run into a strange illegal memory access error. It happens randomly, without any regular pattern. The code is really simple: it is PointNet for point cloud segmentation. I don't think there is anything wrong in the code.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F
    import os
    class InstanceSeg(nn.Module):
        def __init__(self, num_points=1024):
            super(InstanceSeg, self).__init__()
    
            self.num_points = num_points
    
            self.conv1 = nn.Conv1d(9, 64, 1)
            self.conv2 = nn.Conv1d(64, 64, 1)
            self.conv3 = nn.Conv1d(64, 64, 1)
            self.conv4 = nn.Conv1d(64, 128, 1)
            self.conv5 = nn.Conv1d(128, 1024, 1)
            self.conv6 = nn.Conv1d(1088, 512, 1)
            self.conv7 = nn.Conv1d(512, 256, 1)
            self.conv8 = nn.Conv1d(256, 128, 1)
            self.conv9 = nn.Conv1d(128, 128, 1)
            self.conv10 = nn.Conv1d(128, 2, 1)
            self.max_pool = nn.MaxPool1d(num_points)
    
        def forward(self, x):
            batch_size = x.size()[0] # (x has shape (batch_size, 9, num_points))
    
            out = F.relu(self.conv1(x)) # (shape: (batch_size, 64, num_points))
            out = F.relu(self.conv2(out)) # (shape: (batch_size, 64, num_points))
            point_features = out
    
            out = F.relu(self.conv3(out)) # (shape: (batch_size, 64, num_points))
            out = F.relu(self.conv4(out)) # (shape: (batch_size, 128, num_points))
            out = F.relu(self.conv5(out)) # (shape: (batch_size, 1024, num_points))
            global_feature = self.max_pool(out) # (shape: (batch_size, 1024, 1))
    
            global_feature_repeated = global_feature.repeat(1, 1, self.num_points) # (shape: (batch_size, 1024, num_points))
            out = torch.cat([global_feature_repeated, point_features], 1) # (shape: (batch_size, 1024+64=1088, num_points))
    
            out = F.relu(self.conv6(out)) # (shape: (batch_size, 512, num_points))
            out = F.relu(self.conv7(out)) # (shape: (batch_size, 256, num_points))
            out = F.relu(self.conv8(out)) # (shape: (batch_size, 128, num_points))
            out = F.relu(self.conv9(out)) # (shape: (batch_size, 128, num_points))
    
            out = self.conv10(out) # (shape: (batch_size, 2, num_points))
    
            out = out.transpose(2,1).contiguous() # (shape: (batch_size, num_points, 2))
            out = F.log_softmax(out.view(-1, 2), dim=1) # (shape: (batch_size*num_points, 2))
            out = out.view(batch_size, self.num_points, 2) # (shape: (batch_size, num_points, 2))
    
            return out
    
    Num = 0
    network = InstanceSeg()
    network.cuda()
    while(1):
    
        input0 = torch.randn(32, 3, 1024).cuda()
        input1 = torch.randn(32, 3, 1024).cuda()
        input2 = torch.randn(32, 3, 1024).cuda()
        input = torch.cat((input0, input1, input2), 1)
    
        out = network(input)
        Num = Num+1
        print(Num)
    

    After a random number of steps, the error is raised. The error report is:

    Traceback (most recent call last):
      File "/home/wangye/Frustum-PointNet_Test/frustum_pointnet.py", line 58, in <module>
        input0 = torch.randn(32, 3, 1024).cuda()
    RuntimeError: CUDA error: an illegal memory access was encountered
    

    When I added "os.environ['CUDA_LAUNCH_BLOCKING'] = '1'" at the top of this script, the error report was changed to this

    Traceback (most recent call last):
      File "/home/wangye/Frustum-PointNet_Test/frustum_pointnet.py", line 64, in <module>
        out = network(input)
      File "/home/wangye/anaconda3/lib/python3.6/site-packages/torch/nn/modules/module.py", line 489, in __call__
        result = self.forward(*input, **kwargs)
      File "/home/wangye/Frustum-PointNet_Test/frustum_pointnet.py", line 35, in forward
        out = F.relu(self.conv5(out)) # (shape: (batch_size, 1024, num_points))
      File "/home/wangye/anaconda3/lib/python3.6/site-packages/torch/nn/modules/module.py", line 489, in __call__
        result = self.forward(*input, **kwargs)
      File "/home/wangye/anaconda3/lib/python3.6/site-packages/torch/nn/modules/conv.py", line 187, in forward
        self.padding, self.dilation, self.groups)
    RuntimeError: cuDNN error: CUDNN_STATUS_EXECUTION_FAILED
    

    I know that wrong indexing operations or incorrect use of a loss function can lead to illegal memory access errors, but this script contains no such operation. I am quite sure this error is not caused by running out of memory, since only about 2 GB of GPU memory is used and I have 12 GB of GPU memory in total.

    This is my environment information:

    OS: Ubuntu 16.04 LTS 64-bit
    Command: conda install pytorch torchvision cudatoolkit=9.0 -c pytorch
    GPU: Titan XP
    Driver Version: 410.93
    Python Version: 3.6
    cuda Version: cuda_9.0.176_384.81_linux
    cudnn Version: cudnn-9.0-linux-x64-v7.4.2.24
    pytorch Version: pytorch-1.0.1-py3.6_cuda9.0.176_cudnn7.4.2_2
    

    I have been stuck here for a long time. In fact, this is not the only project that hits this error; many other projects face similar errors on my machine. I don't think there is anything wrong with the code, since it runs correctly for some steps. Maybe this error is caused by the environment; I am not sure. Does anyone have any idea about this situation? If more detailed information is needed, please let me know. Thanks for any suggestion.

    module: cuda triaged 
    opened by xiaoxiangyeyuwangye 159
  • RuntimeError: CUDA out of memory. Tried to allocate 12.50 MiB (GPU 0; 10.92 GiB total capacity; 8.57 MiB already allocated; 9.28 GiB free; 4.68 MiB cached)


    CUDA Out of Memory error but CUDA memory is almost empty

    I am currently training a lightweight model on a very large amount of textual data (about 70 GiB of text). For that I am using a machine on a cluster ('grele' of the grid5000 cluster network).

    After 3 h of training, I am getting this very strange CUDA out-of-memory error message: RuntimeError: CUDA out of memory. Tried to allocate 12.50 MiB (GPU 0; 10.92 GiB total capacity; 8.57 MiB already allocated; 9.28 GiB free; 4.68 MiB cached). According to the message, I have the required space, but it does not allocate the memory.

    Any idea what might cause this?

    For information, my preprocessing relies on torch.multiprocessing.Queue and an iterator over the lines of my source data to preprocess the data on the fly.

    Full stacktrace

    Traceback (most recent call last):
      File "/home/emarquer/miniconda3/envs/pytorch/lib/python3.6/runpy.py", line 193, in _run_module_as_main
        "__main__", mod_spec)
      File "/home/emarquer/miniconda3/envs/pytorch/lib/python3.6/runpy.py", line 85, in _run_code
        exec(code, run_globals)
      File "/home/emarquer/miniconda3/envs/pytorch/lib/python3.6/site-packages/memory_profiler.py", line 1228, in <module>
        exec_with_profiler(script_filename, prof, args.backend, script_args)
      File "/home/emarquer/miniconda3/envs/pytorch/lib/python3.6/site-packages/memory_profiler.py", line 1129, in exec_with_profiler
        exec(compile(f.read(), filename, 'exec'), ns, ns)
      File "run.py", line 293, in <module>
        main(args, save_folder, load_file)
      File "run.py", line 272, in main
        trainer.all_epochs()
      File "/home/emarquer/papud-bull-nn/trainer/trainer.py", line 140, in all_epochs
        self.single_epoch()
      File "/home/emarquer/papud-bull-nn/trainer/trainer.py", line 147, in single_epoch
        tracker.add(*self.single_batch(data, target))
      File "/home/emarquer/papud-bull-nn/trainer/trainer.py", line 190, in single_batch
        result = self.model(data)
      File "/home/emarquer/miniconda3/envs/pytorch/lib/python3.6/site-packages/torch/nn/modules/module.py", line 489, in __call__
        result = self.forward(*input, **kwargs)
      File "/home/emarquer/papud-bull-nn/model/model.py", line 54, in forward
        emb = self.emb(input)
      File "/home/emarquer/miniconda3/envs/pytorch/lib/python3.6/site-packages/torch/nn/modules/module.py", line 489, in __call__
        result = self.forward(*input, **kwargs)
      File "/home/emarquer/miniconda3/envs/pytorch/lib/python3.6/site-packages/torch/nn/modules/sparse.py", line 118, in forward
        self.norm_type, self.scale_grad_by_freq, self.sparse)
      File "/home/emarquer/miniconda3/envs/pytorch/lib/python3.6/site-packages/torch/nn/functional.py", line 1454, in embedding
        return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
    RuntimeError: CUDA out of memory. Tried to allocate 12.50 MiB (GPU 0; 10.92 GiB total capacity; 8.57 MiB already allocated; 9.28 GiB free; 4.68 MiB cached)
    
    
    needs reproduction 
    opened by EMarquer 140
  • Integrating complex tensors


    New description from @ezyang:

    Work is in progress at https://github.com/Roger-luo/pytorch-complex

    Organizational principles

    • Complex tensor support is important to PyTorch, and we will accept patches to core which add small amounts of code to make adding complex support easier.
    • Adding complex involves writing a lot of new kernels and code: we'd like this code to initially live out of repo, so it is easier for people to iterate quickly on them without having to go through the PyTorch main code review process. We will NOT commit to reviewing large new kernels in the short term, but eventually we would like all the kernels to come back to PyTorch.
    • The external library will be buildable separately from PyTorch, so you will be able to maintain it as a separate repository without having to merge with PyTorch (and deal with loads of merge conflicts).
      • PyTorch may occasionally make breaking changes in C++ API; if you bring these to our attention we will do our utmost to help solve these problems.
    • The hooks needed for this will NOT ship with PyTorch 1.0, but they will ship with a released version of PyTorch in the not too distant future.

    How will I work on complex kernels?

    Here is what the workflow will look like in the steady state.

    PyTorch will natively contain APIs for referring to the complex dtype, but they won't do anything by default. PyTorch defines torch.complex64 and torch.complex128 referring to complex tensors. However, if you try to construct a tensor this way, by default, PyTorch will error:

    >>> torch.zeros(2, 2, dtype=torch.complex64)
    RuntimeError: complex64 not supported by PyTorch
    

    @ezyang provided a patch which adds these dtypes to PyTorch. https://github.com/pytorch/pytorch/pull/11173

    In the mid-term, we will merge support for basic functionality (like allocating a tensor of zeros) to be supported by PyTorch natively. A reasonable proxy for what support is “basic” is PyTorch's native support for CPU half tensors (which are extremely impoverished).

    PyTorch publishes an interface for registering an implementation of complex tensors. The implementation inherits from the TypeDefault class (https://github.com/pytorch/pytorch/pull/11013) and will override methods on this class to define implementations of functions for which we have complex implementations. It will look something like this:

    struct CPUComplexFloatType final : public TypeDefault {
      virtual Tensor add(const Tensor & self, const Tensor & other, Scalar alpha=1) const override {
        // Your implementation of add for complex tensors
      }
      // ...
    };
    

    This class will override exactly the types which are supported for complex; all other implementations are provided by TypeDefault and will error by default.

    There will be a canonical listing of methods supported on Type (the overall interface) as an autogenerated file that is checked into the PyTorch source repository; we'll communicate API changes by diffs to this file. In general, the methods are in one-to-one correspondence with their corresponding names in the PyTorch frontend.

    In general, when you use an operation which you haven't implemented yet, it will error by default.

    WARNING: We intend to refactor Type away into a new system that also supports open registration of new operations (this obviously doesn't work if you have a single superclass that defines all the methods you might possibly want to support). Thus, try not to get too tied to the particular implementation strategy of writing Type as a subclass.

    To publish new, complex only operations, you will use the C++ extension API. The C++ extension API is documented at https://pytorch.org/tutorials/advanced/cpp_extension.html Essentially, you can write a C++ function like:

    at::Tensor imag(at::Tensor z) {
      ...
    }
    

    And then the C++ extension API will generate a Python binding so that you invoke this function from Python.

    Some operations will be “easy” to integrate into PyTorch as it exists today. For example, for implementation of binary operations, it probably makes more sense to extend add_kernel in BinaryOpsKernel.cpp so that it dispatches over complex types (and then you get it for free, because std::complex implements addition). As long as these patches are small and self-contained, we promise to merge them on a timely basis.

    It should ALWAYS be possible to unblock, by just writing an override on Type instead of using existing infrastructure, and doing liberal copy pasting. But let's avoid it when it's easy!

    Autograd. As long as you're working on operations which already have derivative formulas defined for them, you will “automatically” get autograd support, as long as you implement complex support for all the constituent functions which are invoked in the backwards implementation from derivatives.yaml.

    In some cases, we may need to adjust autograd formulas so that they work for complex numbers; e.g., the gradient of 'abs' isn't 'grad . self.sign()'. In these cases, all we need to do is upstream a fix changing the autograd formula of 'abs' to 'abs_backward', which is a function that can be overridden.

    For general complex valued back propagation, there are some references:

    1. Akira’s “Complex Valued Neural Networks”.
    2. https://giggleliu.github.io/2018/02/01/complex_bp.html

    Generally, we won't need to modify the autograd since in most cases we only calculate the derivatives of a real-valued function (the loss).

    Work plan

    Many of the necessary pieces are in place today, but they are not put together in an end-to-end way. Here is what needs to be done.

    • [X] Codemod TH to not ifdef real https://github.com/pytorch/pytorch/pull/11163
    • [X] Built-in support for torch.complex64 and torch.complex128 dtypes. https://github.com/pytorch/pytorch/pull/11173
    • [X] An interface for registering CPUComplexType, etc., so that this implementation is invoked when you request a complex tensor with dtype=torch.complex64 or do an operation on complex tensors.
    • [X] Land https://github.com/pytorch/pytorch/pull/11013
    • [X] An end-to-end example, including working build system, of a separately compileable C++ program that links against libtorch and uses the aforementioned interface to implement complex tensor allocation.

    Short term integration plan. These operations are “easy” to implement, and so we should mainline them in PyTorch as soon as possible.

    • [X] Basic tensor factories: torch.empty, torch.zeros, torch.ones
    • [ ] CPU binary operations: add, sub, mul, div #11641
    • [ ] FFT
    • [ ] ???

    Kernel implementation:

    TODO: Generate a list based on https://github.com/Roger-luo/TH/blob/master/ChangeLog.md

    Other complex related tasks:

    • [ ] Figure out the type promotion rules for complex tensors, and implement it in promoteTypes #11641

    Historical issue content

    Original comment from @PhilippPelz

    I was wondering if there is interest in incorporating complex tensors into pytorch. For CPU support there is ztorch, and I wrote z-cutorch (https://github.com/PhilippPelz/z-cutorch) a while ago. It is a fork of cutorch from before the refactoring for CudaHalfTensor (don't have the hardware yet). If it's not too much work, I would like to slowly integrate it with pytorch. I am using matplotlib for plotting via fb.python, and it turns into a huge pain every time I reinstall my system (compiling all the dependencies); plus it seems pytorch will work under Windows soon, which one of my experiment PCs runs on. I would also need complex gradients, so I would sooner or later touch autograd as well. While tf supports complex tensors per se, it seems many ops don't support it yet (https://github.com/tensorflow/tensorflow/issues/2255), plus it seems a bit heavyweight for my purposes.

    Maybe someone could say a few words how and where to start with this, if it's a welcome idea.

    feature triaged module: complex 
    opened by PhilippPelz 130
  • enable NVFuser by default


    Stack from ghstack:

    • -> #76006
    • #76937

    Enable NVFuser in OSS. Tests are passing, and we've also run tests in torchvision and torchaudio.

    Differential Revision: D35736977

    oncall: jit cla signed ciflow/trunk 
    opened by davidberard98 125
  • Broken `Type Hints` in PyTorch 0.4.0, related to IDEs (e.g. PyCharm)



    Issue description

    Recently, I noticed that PyCharm cannot auto-complete torch.zeros.

    PyCharm says

    Cannot find reference 'zeros' in '__init__.py'
    

    After digging into it for a while, I found broken type hints.

    From these changes, https://github.com/pytorch/pytorch/commit/30ec06c140b0428d591e2f5007bc8046d1bdf7c4 https://github.com/pytorch/pytorch/wiki/Breaking-Changes-from-Variable-and-Tensor-merge

    Especially https://github.com/pytorch/pytorch/commit/30ec06c140b0428d591e2f5007bc8046d1bdf7c4#diff-14258fce7c17ccb97b488e64373b0803R308 (cc @colesbury): this line breaks type hints for lots of IDEs.

    Originally, torch.zeros was in torch/_C/__init__.py, but it moved to torch/_C/_VariableFunctions.

    Code example

    https://gist.github.com/kimdwkimdw/50c18b5cf72c69c2d01bb4146c8a2b5c This is Proof of Concept for this bug.

    If you look at main.py

    import T_B as torch
    
    torch.p2()  # IDE can detect `p2`
    torch.p1    # IDE cannot detect `p1`
    

    System Info

    Please copy and paste the output from our environment collection script (or fill out the checklist below manually).

    You can get the script and run it with:

    wget https://raw.githubusercontent.com/pytorch/pytorch/master/torch/utils/collect_env.py
    # For security purposes, please check the contents of collect_env.py before running it.
    python collect_env.py
    
    • PyTorch or Caffe2:

    • How you installed PyTorch (conda, pip, source): Any case for conda, pip, source.

    • Build command you used (if compiling from source):

    • OS: Any

    • PyTorch version: 0.4.0

    • Python version: 3.6.5

    • CUDA/cuDNN version: .

    • GPU models and configuration: .

    • GCC version (if compiling from source): .

    • CMake version: .

    • Versions of any other relevant libraries:.

    opened by kimdwkimdw 106
  • GeForce RTX 3080 with CUDA capability sm_86 is not compatible with the current PyTorch installation.


    Edit: If you see this error, please go to https://pytorch.org/get-started/locally/ and download a wheel built with cuda 11.x


    Hi,

    I recently purchased an RTX 3080 and got this error:

    GeForce RTX 3080 with CUDA capability sm_86 is not compatible with the current PyTorch installation. The current PyTorch install supports CUDA capabilities sm_37 sm_50 sm_60 sm_61 sm_70 sm_75 compute_37.

    and my CUDA version is the following:

    • torch.cuda_version
    • Out[3]: '10.2'

    Should I install the latest cudatoolkit 11.0? But it seems PyTorch only provides cudatoolkit 10.2 (screenshot omitted).

    Is there any solution for this issue? How do I get sm_86 binary capability?

    Thanks in advance.

    cc @malfet @seemethere @walterddr @ngimel

    module: build module: cuda triaged 
    opened by swecomic 98
  • Vec256 Test cases


    Tests for Vec256 classes #15676

    Testing Current list:

    • [x] Blends
    • [x] Memory: UnAlignedLoadStore
    • [x] Arithmetic: Plus, Minus, Multiplication, Division
    • [x] Bitwise: BitAnd, BitOr, BitXor
    • [x] Comparison: Equal, NotEqual, Greater, Less, GreaterEqual, LessEqual
    • [x] MinMax: Minimum, Maximum, ClampMin, ClampMax, Clamp
    • [x] SignManipulation: Absolute, Negate
    • [x] Interleave: Interleave, DeInterleave
    • [x] Rounding: Round, Ceil, Floor, Trunc
    • [x] Mask: ZeroMask
    • [x] SqrtAndReciprocal: Sqrt, RSqrt, Reciprocal
    • [x] Trigonometric: Sin, Cos, Tan
    • [x] Hyperbolic: Tanh, Sinh, Cosh
    • [x] InverseTrigonometric: Asin, ACos, ATan, ATan2
    • [x] Logarithm: Log, Log2, Log10, Log1p
    • [x] Exponents: Exp, Expm1
    • [x] ErrorFunctions: Erf, Erfc, Erfinv
    • [x] Pow: Pow
    • [x] LGamma: LGamma
    • [x] Quantization: quantize, dequantize, requantize_from_int
    • [x] Quantization: widening_subtract, relu, relu6

    Missing:
    • [ ] Constructors, initializations
    • [ ] Conversion , Cast
    • [ ] Additional: imag, conj, angle (note: imag and conj only checked for float complex)

    Notes on tests and testing framework

    • some math functions are tested within a domain range
    • mostly, the testing framework randomly tests against the std implementation within the domain, or within the implementation domain for some math functions
    • some functions are tested against a local version. ~~For example, std::round and the vector version of round differ, so it was tested against the local version~~
    • round was tested against PyTorch's at::native::round_impl. ~~for double type on VSX, vec_round failed for (even)+0.5 values~~. It was solved by using vec_rint
    • ~~complex types are not tested~~ After enabling complex testing, due to precision and domain issues some of the complex functions failed for VSX and x86 AVX as well. I will either test them against a local implementation or check within the accepted domain
    • ~~quantizations are not tested~~ Added tests for quantize, dequantize, requantize_from_int, relu, relu6, and widening_subtract functions
    • the testing framework should be improved further
    • ~~For now -DBUILD_MOBILE_TEST=ON will be used for Vec256Test too~~ Vec256 test cases will be built for each CPU_CAPABILITY

    Fixes: #15676

    triaged module: vectorization open source 
    opened by quickwritereader 98
  • Add Leaky relu operator in metal shader


    Summary: Heavily referenced how Hardswish was implemented.

    This is a great intro task to get a taste of how a torch method is implemented in shader and tested.

    Test Plan: Compared the Metal shader version and the CPU version results in tests.

    https://pxl.cl/251kT

    Reviewed By: SS-JIA

    Differential Revision: D36732187

    fb-exported Merged cla signed mobile_perf release notes: mobile 
    opened by mattguo 92
  • Tensor copying not always detecting when src and dest refer to same memory location


    🐛 Describe the bug

    Hi there! In certain cases it seems that torch does not detect when the src and dest of a tensor copy refer to the same memory location, and therefore performs an incorrect copy.

    For example, in the following snippet:

    import torch
    torch.use_deterministic_algorithms(True)
    
    frame_dim = 140
    num_steps = 10
    num_envs = 512
    
    
    def go():
        torch.manual_seed(0)
        src = torch.randn(num_envs, num_steps, frame_dim).cuda()
        buffer = src.clone()
    
        # The problematic line - shift frames [0, 8] into [1, 9]
        buffer[:, 1 : num_steps] = buffer[:, 0 : (num_steps - 1)]
    
        print("Sum:", buffer.cpu().sum().item(), src.cpu().sum().item())
    
    
    for i in range(10):
        go()
    

    Even though the same operations are run every loop, the result of the shift is different every time (the left column is not consistent across multiple runs of the script either, while the right column is):

    Sum: -413.53521728515625 -993.8636474609375
    Sum: -132.9031982421875 -993.8636474609375
    Sum: -186.0804443359375 -993.8636474609375
    Sum: -177.52325439453125 -993.8636474609375
    Sum: -24.764511108398438 -993.8636474609375
    Sum: -245.3650665283203 -993.8636474609375
    Sum: -408.0976867675781 -993.8636474609375
    Sum: -464.16082763671875 -993.8636474609375
    Sum: -417.3087463378906 -993.8636474609375
    

    If I replace the problematic line with buffer[:, 1 : num_steps] = buffer[:, 0 : (num_steps - 1)].clone(), everything works as expected.

    Moreover, if I make the buffer have a simpler shape by dropping the leading num_envs dimension, then torch successfully detects the problem and raises the error:

    RuntimeError: unsupported operation: some elements of the input tensor and the written-to tensor refer to a single memory location. Please clone() the tensor before performing the operation.
    

    Versions

    Collecting environment information...
    PyTorch version: 1.13.1
    Is debug build: False
    CUDA used to build PyTorch: 11.6
    ROCM used to build PyTorch: N/A
    
    OS: Ubuntu 22.04.1 LTS (x86_64)
    GCC version: (Ubuntu 11.3.0-1ubuntu1~22.04) 11.3.0
    Clang version: Could not collect
    CMake version: version 3.22.1
    Libc version: glibc-2.35
    
    Python version: 3.8.13 (default, Mar 28 2022, 11:38:47)  [GCC 7.5.0] (64-bit runtime)
    Python platform: Linux-5.15.0-56-generic-x86_64-with-glibc2.17
    Is CUDA available: True
    CUDA runtime version: 11.6.124
    CUDA_MODULE_LOADING set to: LAZY
    GPU models and configuration: GPU 0: NVIDIA GeForce RTX 3090
    Nvidia driver version: 515.43.04
    cuDNN version: Could not collect
    HIP runtime version: N/A
    MIOpen runtime version: N/A
    Is XNNPACK available: True
    
    Versions of relevant libraries:
    [pip3] mypy-extensions==0.4.3
    [pip3] numpy==1.23.1
    [pip3] pytorch-lightning==1.8.1
    [pip3] torch==1.13.1
    [pip3] torchaudio==0.13.1
    [pip3] torchmetrics==0.9.2
    [pip3] torchvision==0.14.1
    [conda] blas                      1.0                         mkl  
    [conda] cudatoolkit               11.3.1               h2bc3f7f_2  
    [conda] mkl                       2021.4.0           h06a4308_640  
    [conda] mkl-service               2.4.0            py38h7f8727e_0  
    [conda] mkl_fft                   1.3.1            py38hd3c417c_0  
    [conda] mkl_random                1.2.2            py38h51133e4_0  
    [conda] numpy                     1.23.1           py38h6c91a56_0  
    [conda] numpy-base                1.23.1           py38ha15fc14_0  
    [conda] pytorch                   1.13.1          py3.8_cuda11.6_cudnn8.3.2_0    pytorch
    [conda] pytorch-cuda              11.6                 h867d48c_0    pytorch
    [conda] pytorch-lightning         1.8.1                    pypi_0    pypi
    [conda] pytorch-mutex             1.0                        cuda    pytorch
    [conda] torchaudio                0.13.1               py38_cu116    pytorch
    [conda] torchmetrics              0.9.2                    pypi_0    pypi
    [conda] torchvision               0.14.1               py38_cu116    pytorch
    
    opened by jordan-benjamin 0
  • Support --dynamic-ci-skips


    Stack from ghstack (oldest at bottom):

    • -> #91893

    This makes it easier for us to run only the skipped benchmarks and see if that actually started passing.

    Signed-off-by: Edward Z. Yang [email protected]

    cc @mlazos @soumith @voznesenskym @yanboliang @penguinwu @anijain2305 @EikanWang @jgong5 @Guobing-Chen @chunyuan-w @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @desertfire

    topic: developer feature module: dynamo ciflow/inductor release notes: dynamo 
    opened by ezyang 1
  • Forward fix unexpected success linalg.pinv.singular


    Stack from ghstack (oldest at bottom):

    • -> #91892
    • #91796

    Signed-off-by: Edward Z. Yang [email protected]

    cc @mlazos @soumith @voznesenskym @yanboliang @penguinwu @anijain2305 @EikanWang @jgong5 @Guobing-Chen @chunyuan-w @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @peterbell10 @desertfire

    ciflow/trunk topic: not user facing module: inductor ciflow/inductor 
    opened by ezyang 3
  • hrnet_w18, DebertaV2ForQuestionAnswering, tts_angular works with dynamic shapes


    Stack from ghstack (oldest at bottom):

    • -> #91891

    Signed-off-by: Edward Z. Yang [email protected]

    cc @mlazos @soumith @voznesenskym @yanboliang @penguinwu @anijain2305 @EikanWang @jgong5 @Guobing-Chen @chunyuan-w @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @desertfire

    module: dynamo ciflow/inductor 
    opened by ezyang 2
  • Add Skylion007 as a core reviewer


    Stack from ghstack (oldest at bottom):

    • -> #91890

    Skylion007 has been diligently improving the state of our C++ code to follow best practices and make it possible to run lint on it (at the moment the code is so messy it cannot be linted), and I would like to give him review permissions to facilitate this work.

    Signed-off-by: Edward Z. Yang [email protected]

    topic: not user facing 
    opened by ezyang 2
Releases
  • v1.13.1 (Dec 16, 2022)

    This release is meant to fix the following issues (regressions / silent correctness):

    • RuntimeError by torch.nn.modules.activation.MultiheadAttention with bias=False and batch_first=True #88669
    • Installation via pip on Amazon Linux 2, regression #88869
    • Installation using poetry on Mac M1, failure #88049
    • Missing masked tensor documentation #89734
    • torch.jit.annotations.parse_type_line is not safe (command injection) #88868
    • Use the Python frame safely in _pythonCallstack #88993
    • Double-backward with full_backward_hook causes RuntimeError #88312
    • Fix logical error in get_default_qat_qconfig #88876
    • Fix cuda/cpu check on NoneType and unit test #88854 and #88970
    • Onnx ATen Fallback for BUILD_CAFFE2=0 for ONNX-only ops #88504
    • Onnx operator_export_type on the new registry #87735
    • torchrun AttributeError caused by file_based_local_timer on Windows #85427

    The release tracker should contain all relevant pull requests related to this release, as well as links to related issues.
