DyNet: The Dynamic Neural Network Toolkit

Overview
DyNet



The Dynamic Neural Network Toolkit

General

DyNet is a neural network library developed by Carnegie Mellon University and many others. It is written in C++ (with bindings in Python) and is designed to be efficient when run on either CPU or GPU, and to work well with networks that have dynamic structures that change for every training instance. For example, these kinds of networks are particularly important in natural language processing tasks, and DyNet has been used to build state-of-the-art systems for syntactic parsing, machine translation, morphological inflection, and many other application areas.

Read the documentation to get started, and feel free to contact the dynet-users group with any questions (if you want to receive email make sure to select "all email" when you sign up). We greatly appreciate any bug reports and contributions, which can be made by filing an issue or making a pull request through the GitHub page.

You can also read more technical details in our technical report.

Getting started

You can find tutorials about using DyNet here (C++), here (Python), and here (EMNLP 2016 tutorial).

One aspect that sets DyNet apart from other toolkits is the auto-batching feature. See the documentation about batching.
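
As a rough illustration, auto-batching is enabled before importing the Python module and then groups identical operations across instances automatically. The snippet below is a hedged sketch: it assumes the dynet_config helper (equivalent to passing --dynet-autobatch 1 on the command line) and DyNet >= 2.1, and the toy data is made up.

# Hedged sketch: turn on auto-batching before importing dynet.
import dynet_config
dynet_config.set(autobatch=True)  # same effect as --dynet-autobatch 1
import dynet as dy

pc = dy.ParameterCollection()
W = pc.add_parameters((1, 10))

dy.renew_cg()
losses = []
for vec, label in [([0.5] * 10, 1.0), ([0.1] * 10, 0.0)]:
    x = dy.inputVector(vec)                    # one instance at a time
    pred = dy.logistic(W * x)                  # Parameter used directly as an Expression (>= 2.1)
    losses.append(dy.binary_log_loss(pred, dy.scalarInput(label)))
# Summing the per-instance losses lets DyNet batch the identical operations lazily.
total = dy.esum(losses)
total.forward()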

The examples folder contains a variety of examples in C++ and Python.
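
For a flavour of the Python API, here is a minimal sketch in the spirit of the xor example. This is an illustrative sketch only, not the actual example code, and it assumes DyNet >= 2.1 so that Parameters can be used directly as Expressions.

import dynet as dy

# A tiny 2-8-1 network trained on the XOR truth table.
pc = dy.ParameterCollection()
W = pc.add_parameters((8, 2))
b = pc.add_parameters((8,))
V = pc.add_parameters((1, 8))
a = pc.add_parameters((1,))
trainer = dy.SimpleSGDTrainer(pc)

data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]
for epoch in range(30):
    for x_val, y_val in data:
        dy.renew_cg()                          # build a fresh graph per instance
        x = dy.inputVector(x_val)
        h = dy.tanh(W * x + b)
        y_pred = dy.logistic(V * h + a)
        loss = dy.binary_log_loss(y_pred, dy.scalarInput(y_val))
        loss.value()                           # run the forward pass
        loss.backward()
        trainer.update()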

Installation

DyNet relies on a number of external programs/libraries including CMake and Eigen. CMake can be installed from standard repositories.

For example on Ubuntu Linux:

sudo apt-get install build-essential cmake

On macOS, first make sure the Apple Command Line Tools are installed, then install CMake with either Homebrew or MacPorts:

xcode-select --install
brew install cmake  # Using homebrew.
sudo port install cmake # Using macports.

On Windows, see the documentation.

To compile DyNet you also need a specific development version of the Eigen library. If you use one of Eigen's released versions instead, you may get assertion failures or compile errors. You can get the required version easily with the following commands:

mkdir eigen
cd eigen
wget https://github.com/clab/dynet/releases/download/2.1/eigen-b2e267dc99d4.zip
unzip eigen-b2e267dc99d4.zip

C++ installation

You can install DyNet for C++ with the following commands:

# Clone the github repository
git clone https://github.com/clab/dynet.git
cd dynet
mkdir build
cd build
# Run CMake
# -DENABLE_BOOST=ON in combination with -DENABLE_CPP_EXAMPLES=ON also
# compiles the multiprocessing c++ examples
cmake .. -DEIGEN3_INCLUDE_DIR=/path/to/eigen -DENABLE_CPP_EXAMPLES=ON
# Compile using 2 processes
make -j 2
# Test with an example
./examples/train_xor

For more details, refer to the documentation.

Python installation

You can install DyNet for Python with the following command:

pip install git+https://github.com/clab/dynet#egg=dynet
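
After installing, you can run a quick sanity check from the command line (DyNet prints an initialization line when the module loads):

python -c "import dynet as dy; pc = dy.ParameterCollection(); print('DyNet OK')"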

For more details, refer to the documentation.

Citing

If you use DyNet for research, please cite this report as follows:

@article{dynet,
  title={DyNet: The Dynamic Neural Network Toolkit},
  author={Graham Neubig and Chris Dyer and Yoav Goldberg and Austin Matthews and Waleed Ammar and Antonios Anastasopoulos and Miguel Ballesteros and David Chiang and Daniel Clothiaux and Trevor Cohn and Kevin Duh and Manaal Faruqui and Cynthia Gan and Dan Garrette and Yangfeng Ji and Lingpeng Kong and Adhiguna Kuncoro and Gaurav Kumar and Chaitanya Malaviya and Paul Michel and Yusuke Oda and Matthew Richardson and Naomi Saphra and Swabha Swayamdipta and Pengcheng Yin},
  journal={arXiv preprint arXiv:1701.03980},
  year={2017}
}

Contributing

We welcome any contribution to DyNet! You can find the contributing guidelines here.

Issues
  • Incorporate cuDNN, add conv2d CPU/GPU version (based on Eigen and cuDNN)

    #229 This is the CPU implementation based on Eigen SpatialConvolution. It is reported as the current fastest (available) CPU version of conv2d. For GPU support, I think implementing a new version using cublas kernels (by hand) is worthless, so I am currently incorporating cudnn into DyNet and will provide a cudnn-based (standard) implementation.

    opened by zhisbug 33
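
    For context, the conv2d operation that came out of this work is exposed in the Python API roughly as sketched below. This is a hedged illustration of typical usage under my understanding of dy.conv2d, not code from this pull request.

    import numpy as np
    import dynet as dy

    pc = dy.ParameterCollection()
    F = pc.add_parameters((3, 3, 1, 8))               # 3x3 filters, 1 input channel, 8 output channels

    dy.renew_cg()
    img = dy.inputTensor(np.random.rand(28, 28, 1))   # H x W x channels
    f = dy.parameter(F)                               # filter bank as an Expression in the current graph
    feat = dy.conv2d(img, f, stride=[1, 1], is_valid=True)
    print(feat.dim())                                 # expect ((26, 26, 8), 1) with 'valid' convolution
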
  • First attempt at Yarin Gal dropout for LSTM

    https://arxiv.org/pdf/1512.05287v5.pdf

    I'm not 100% sure it's correct, and it has some ugliness -- LSTMBuilder now keeps a pointer to ComputationGraph -- but Gal's dropout seems to be the preferred way to do dropout for LSTMs.

    Will appreciate another pair of eyes.

    opened by yoavg 29
  • Support installation through pip

    With this change, DyNet can be installed with the following command line:

    pip install git+https://github.com/clab/dynet#egg=dynet
    

    If Boost is installed in a non-standard location, it has to be set in the environment variable BOOST prior to installation.

    To try this out from my fork before merging the pull request, use:

    pip install git+https://github.com/danielhers/dynet#egg=dynet
    
    opened by danielhers 23
  • Auto-batching 'inf' gradient

    Hi,

    We successfully implemented a seq2seq model with auto-batching (on GPU) and it works great. We wanted to improve the speed by reducing the size of the softmax:

    Expression W = select_rows(p2c, candsInt);
    Expression x = W * v;
    Expression candidates = log_softmax(x);

    When not using auto-batching the code works and behaves as expected; however, when using auto-batching we get a runtime error: what(): Magnitude of gradient is bad: inf

    Thank you, Eli

    major bug fix needs confirmation 
    opened by elikip 22
  • Is there an alternative way to save a model besides Boost?

    Hi,

    Currently I am facing a problem creating a model loader in different languages (e.g. Java). Is there a better way to serialize the model (or parameters) in a more human-readable way? It would help the toolkit be more widely used. Any suggestions will be appreciated!

    Thanks, YJ

    opened by iamyoungjo 21
  • Combine python/setup.py.in into setup.py

    Simplify Python installation process by combining the generated setup.py into the top one, using environment variables to pass information from cmake. Should allow fixing #657 now that the Cython extensions are created by the main setup.py.

    opened by danielhers 20
  • GPU (backend cuda) build problem

    I am having a problem building with BACKEND=cuda. My system is OS X (10.11.6 El Capitan). cmake works fine, but once I do "make -j 4", it returns the following error:

    Undefined symbols for architecture x86_64: ...
    ld: symbol(s) not found for architecture x86_64
    clang: error: linker command failed with exit code 1 (use -v to see invocation)

    Since I use the GPU in TensorFlow without any issues, I doubt that my CUDA setup itself is the problem.

    I searched for similar issues here, but it seems I am the only person having this problem. There is no compilation issue if I don't use the CUDA backend.

    If I missed a significant step here, or if anyone is familiar with this error, please help. I have wasted more than 6 hours because of this.

    make.log.zip

    moderate bug fix needs confirmation 
    opened by iamyoungjo 20
  • Installation issue

    Hello,

    I'm trying to install dynet on my local machine and I keep getting an error while importing dynet in python.

    import dynet as dy
    Traceback (most recent call last):
      File "", line 1, in
      File "dynet.py", line 17, in
        from _dynet import *
    ImportError: dlopen(./_dynet.so, 2): Library not loaded: @rpath/libdynet.dylib
      Referenced from: /dynet-base/dynet/build/python/_dynet.so
      Reason: image not found

    I'm using:

    • MBP w/ MacOS Sierra
    • Eigen's default branch from bitbucket
    • The latest dynet (w/ Today's commit that fixed TravisCI)
    • boost 160
    • python 2.7.10
    • cmake 3.6.3
    • make 3.81 (built for i386-apple-darwin11.3.0)

    The make log also references that file:

    c++ -bundle -undefined dynamic_lookup -arch i386 -arch x86_64 -Wl,-F. build/temp.macosx-10.12-intel-2.7/dynet.o -L. -L/dynet-base/dynet/build/dynet/ -L/dynet-base/dynet/build/dynet/ -ldynet -o /dynet-base/dynet/build/python/_dynet.so
    ld: warning: ignoring file /dynet-base/dynet/build/dynet//libdynet.dylib, file was built for x86_64 which is not the architecture being linked (i386): /dynet-base/dynet/build/dynet//libdynet.dylib

    Could you advise?

    Thanks, Florin.

    moderate bug 
    opened by fmacicasan 19
  • Batch manipulation operations

    It would be nice to have operations that allow you to do things like

    • concat_batch: concatenate multiple expressions into a single batched expression
    • pick_batch_elements: pick only a subset of the elements from a batched expression
    enhancement 
    opened by neubig 19
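
    Operations along these lines exist in later DyNet versions (concatenate_to_batch and pick_batch_elems, as far as I can tell); the following is a hedged sketch of typical usage, not code from this issue.

    import dynet as dy

    dy.renew_cg()
    # Three separate 4-dimensional expressions ...
    exprs = [dy.inputVector([float(i)] * 4) for i in range(3)]
    # ... combined into a single expression with a batch dimension of 3.
    batched = dy.concatenate_to_batch(exprs)
    print(batched.dim())              # ((4,), 3)
    # Pick a subset of the batch elements (here elements 0 and 2).
    subset = dy.pick_batch_elems(batched, [0, 2])
    print(subset.dim())               # ((4,), 2)
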
  • error: Could not download Eigen from 'https://bitbucket.org/eigen/eigen/get/2355b229ea4c.zip'

    Hi, I'm trying to install version 2.0.3 but pip throws an error because it can't download Eigen from bitbucket. Indeed, the link appears broken. Is there a workaround for the missing Eigen package? Thanks

    opened by davidjlemay 0
  • When using mini-batch, there is a potential risk of future information leakage

    When I was training an MLP model with mini-batched input, the result was better than the PyTorch version's. I checked my input and found that my samples are time-continuous data and I forgot to do a global shuffle, so the model can see future information within a mini-batch. I think the main reason is the low-level implementation of the AffineTransform function with mini-batches. One solution is to do a global shuffle before making mini-batches; another may be to optimize the implementation of mini-batched AffineTransform. Thanks for paying attention to this issue.

    opened by initial-d 0
  • Bump pillow from 7.1.0 to 8.3.2 in /examples/variational-autoencoder/basic-image-recon

    Bumps pillow from 7.1.0 to 8.3.2.

    Release notes

    Sourced from pillow's releases.

    8.3.2

    https://pillow.readthedocs.io/en/stable/releasenotes/8.3.2.html

    Security

    • CVE-2021-23437 Raise ValueError if color specifier is too long [hugovk, radarhere]

    • Fix 6-byte OOB read in FliDecode [wiredfool]

    Python 3.10 wheels

    • Add support for Python 3.10 #5569, #5570 [hugovk, radarhere]

    Fixed regressions

    • Ensure TIFF RowsPerStrip is multiple of 8 for JPEG compression #5588 [kmilos, radarhere]

    • Updates for ImagePalette channel order #5599 [radarhere]

    • Hide FriBiDi shim symbols to avoid conflict with real FriBiDi library #5651 [nulano]

    8.3.1

    https://pillow.readthedocs.io/en/stable/releasenotes/8.3.1.html

    Changes

    8.3.0

    https://pillow.readthedocs.io/en/stable/releasenotes/8.3.0.html

    Changes

    ... (truncated)

    Changelog

    Sourced from pillow's changelog.

    8.3.2 (2021-09-02)

    • CVE-2021-23437 Raise ValueError if color specifier is too long [hugovk, radarhere]

    • Fix 6-byte OOB read in FliDecode [wiredfool]

    • Add support for Python 3.10 #5569, #5570 [hugovk, radarhere]

    • Ensure TIFF RowsPerStrip is multiple of 8 for JPEG compression #5588 [kmilos, radarhere]

    • Updates for ImagePalette channel order #5599 [radarhere]

    • Hide FriBiDi shim symbols to avoid conflict with real FriBiDi library #5651 [nulano]

    8.3.1 (2021-07-06)

    • Catch OSError when checking if fp is sys.stdout #5585 [radarhere]

    • Handle removing orientation from alternate types of EXIF data #5584 [radarhere]

    • Make Image.array take optional dtype argument #5572 [t-vi, radarhere]

    8.3.0 (2021-07-01)

    • Use snprintf instead of sprintf. CVE-2021-34552 #5567 [radarhere]

    • Limit TIFF strip size when saving with LibTIFF #5514 [kmilos]

    • Allow ICNS save on all operating systems #4526 [baletu, radarhere, newpanjing, hugovk]

    • De-zigzag JPEG's DQT when loading; deprecate convert_dict_qtables #4989 [gofr, radarhere]

    • Replaced xml.etree.ElementTree #5565 [radarhere]

    ... (truncated)

    Commits
    • 8013f13 8.3.2 version bump
    • 23c7ca8 Update CHANGES.rst
    • 8450366 Update release notes
    • a0afe89 Update test case
    • 9e08eb8 Raise ValueError if color specifier is too long
    • bd5cf7d FLI tests for Oss-fuzz crash.
    • 94a0cf1 Fix 6-byte OOB read in FliDecode
    • cece64f Add 8.3.2 (2021-09-02) [CI skip]
    • e422386 Add release notes for Pillow 8.3.2
    • 08dcbb8 Pillow 8.3.2 supports Python 3.10 [ci skip]
    • Additional commits viewable in compare view

    Dependabot compatibility score

    Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


    Dependabot commands and options

    You can trigger Dependabot actions by commenting on this PR:

    • @dependabot rebase will rebase this PR
    • @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
    • @dependabot merge will merge this PR after your CI passes on it
    • @dependabot squash and merge will squash and merge this PR after your CI passes on it
    • @dependabot cancel merge will cancel a previously requested merge and block automerging
    • @dependabot reopen will reopen this PR if it is closed
    • @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
    • @dependabot ignore this major version will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this minor version will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this dependency will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
    • @dependabot use these labels will set the current labels as the default for future PRs for this repo and language
    • @dependabot use these reviewers will set the current reviewers as the default for future PRs for this repo and language
    • @dependabot use these assignees will set the current assignees as the default for future PRs for this repo and language
    • @dependabot use this milestone will set the current milestone as the default for future PRs for this repo and language

    You can disable automated security fix PRs for this repo from the Security Alerts page.

    dependencies 
    opened by dependabot[bot] 0
  • Published Documentation is Outdated

    The published documentation for Python Manual Installation here: https://dynet.readthedocs.io/en/latest/python.html#manual-installation is outdated, as it includes installing Eigen from https://bitbucket.org/eigen/eigen instead of https://github.com/clab/dynet/releases/download/2.1/eigen-b2e267dc99d4.zip as shown here: https://github.com/clab/dynet/blob/master/doc/source/python.rst

    so the updated version needs to be published so that it is reflected at https://dynet.readthedocs.io/en/latest/python.html#manual-installation

    opened by monagjr 0
  • Eigen for dynet v1.1

    Hi, I'm trying to install DyNet v1.1 manually, which needs the 346ecdb revision of Eigen. However, the Eigen package link for that version no longer works. Do you have this version of the Eigen package, or any alternative solutions?

    Thanks in advance!

    opened by JINGE-ui 0
  • compiled the project on armeabi-v7a arm

    I compiled this project on Linux and libdynet.so is 6.2 MB, but when I compiled the project for armeabi-v7a ARM, libdynet.so is 133 MB. Can anybody help reduce the size of the compiled shared library on armeabi-v7a ARM? Thanks a lot.

    opened by sjtu-cz 0
  • NaN or Inf detected when computing log_softmax

    Immediately when I start training a model I get "NaN or Inf detected" when this line happens:

    logloss = log_softmax(f_i, valid_frames)

    Note this is with immediate_compute and check_validity turned on. If they aren't, then the error seems to happen a little later in the process.

    In the most recent run, the values being passed to log_softmax are:

    f_i = expression 1630/2
    valid_frames = [204, 28]

    Can someone help me understand why this input is returning either inf or nan? I've looked through the issues and it seems to be something different each time.

    Here is an example of what log_softmax returns (not the same run as above though):

    logloss = expression 3429/2

    Thanks!

    opened by eahogue 1
  • Fix 'boost::interprocess::interprocess_exception' Permission denied error in multi-user multi-process setting.

    The following error is often thrown when multiple users run multi-process training on a machine (over time), which happens a lot in a cluster environment.

    terminate called after throwing an instance of 'boost::interprocess::interprocess_exception'
       what():  Permission denied
    

    What happens is: User A runs multi-process training. A queue file named "dynet_mp_work_queue" is created with each multi-process training and is often not deleted properly due to manual termination (like CTRL+C). Now User B tries to run multi-process training on the same machine and cannot write/delete the queue file, so the above error is thrown.

    Fix: Include process id in the queue file name, so different trainings by different users do not step on each other's toes.

    opened by hui-wan 0
  • Builders leak ParameterCollectionStorage memory upon construction

    In part due to the use of a pointer in ParameterCollection for ParameterCollectionStorage and the lack of an assignment operator or copy constructor for ParameterCollection, RnnBuilders like a VanillaLSTMBuilder will leak memory during their construction. In particular, at a line like https://github.com/clab/dynet/blob/93a5cd2d6aabeb8c506f07e51ef3a779506da68b/dynet/lstm.cc#L325 the old pointer will be overwritten and lost.

    This can be demonstrated with a short program like this one extracted from train_rnn-autobatch.cc:

    #include "dynet/lstm.h"
    #include <iostream>
    using namespace dynet;
    
    int main(int argc, char** argv) {
      initialize(argc, argv);
      {
          unsigned int HIDDEN = 200;
          unsigned int EMBED_SIZE = 200;
    
          std::cout << "m: ";
          ParameterCollection m;
          std::cout << "fwR: ";
          VanillaLSTMBuilder fwR = VanillaLSTMBuilder(1, EMBED_SIZE, HIDDEN, m);
      }
      cleanup();
      return 0;
    }
    

    If some debugging output is added like these lines in model.cc,

    ParameterCollection::ParameterCollection() : name("/"),
        storage(DYNET_NEW(ParameterCollectionStorage(default_weight_decay_lambda))),
        parent(nullptr) {
    
        std::cout
            << "Constructing1 " << this
            << " with parent " << parent
            << " with storage " << storage
            << std::endl;
    }
    
    ParameterCollection::ParameterCollection(const string & my_name, ParameterCollection* my_parent, float weight_decay_lambda) :
        name(my_name), storage(DYNET_NEW(ParameterCollectionStorage(weight_decay_lambda))), parent(my_parent) {
    
        std::cout
            << "Constructing2 " << this
            << " with parent " << parent
            << " with storage " << storage
            << std::endl;
    }
    
    ParameterCollection::~ParameterCollection() {
        std::cerr
            << "Destructing " << this
            << " with parent " << parent
            << " with storage " << storage
            << std::endl;
    
        if (parent == nullptr && storage != nullptr)
            delete storage;
    }
    

    and a note added to lstm.cc in VanillaLSTMBuilder::VanillaLSTMBuilder

      std::cout << "local_model: ";
      local_model = model.add_subcollection("vanilla-lstm-builder");
    

    one can get output like this:

    m: Constructing1 000000F8B4EFF8C0 with parent 0000000000000000 with storage 000002411AD0FD40
    fwR: Constructing1 000000F8B4EFF9F8 with parent 0000000000000000 with storage 000002411AD0FF80
    local_model: Constructing2 000000F8B4EFEEF0 with parent 000000F8B4EFF8C0 with storage 000002411AD101C0
    Destructing 000000F8B4EFEEF0 with parent 000000F8B4EFF8C0 with storage 000002411AD101C0
    Destructing 000000F8B4EFF9F8 with parent 000000F8B4EFF8C0 with storage 000002411AD101C0
    Destructing 000000F8B4EFF8C0 with parent 0000000000000000 with storage 000002411AD0FD40
    

    The temporary ParameterCollection displayed in the fwR line and stored in local_model will leak when local_model is overwritten with the value from model.add_subcollection("vanilla-lstm-builder");.

    Probably the ParameterCollection should be made to assign and copy correctly. In this particular case, one can skip that and initialize local_model from the beginning by changing the constructor of VanillaLSTMBuilder to

    VanillaLSTMBuilder::VanillaLSTMBuilder(unsigned layers, unsigned input_dim,
        unsigned hidden_dim, ParameterCollection& model, bool ln_lstm, float forget_bias) :
        // The initialization of local_model has been added here.
        local_model(model.add_subcollection("vanilla-lstm-builder")), layers(layers),
        input_dim(input_dim), hid(hidden_dim), ln_lstm(ln_lstm), forget_bias(forget_bias),
        dropout_masks_valid(false), _cg(nullptr) {
    

    This results in the output

    m: Constructing1 0000006F4795F890 with parent 0000000000000000 with storage 0000018CA3ED0150
    fwR: Constructing2 0000006F4795F9C8 with parent 0000006F4795F890 with storage 0000018CA3ED0390
    Destructing 0000006F4795F9C8 with parent 0000006F4795F890 with storage 0000018CA3ED0390
    Destructing 0000006F4795F890 with parent 0000000000000000 with storage 0000018CA3ED0150
    

    This will still result in a leak because of the condition on the destructor of the ParameterCollection:

        if (parent == nullptr && storage != nullptr)
            delete storage;
    

    This parent doesn't seem to have much to do with anything here. Perhaps it was meant to work around other problems. Removing the condition parent == nullptr will prevent the leak in this case. I was a little more cautious and changed storage to be a shared pointer, instead. It probably helps in cases when ParameterCollections are copied.

    This problem probably exists with all or most builders, but I haven't studied whether a similar modification will be effective for all the others. Thanks for looking into this.

    opened by kwalcock 6
  • DyNet V1.1

    Hi, when I installed DyNet 1.1, I encountered a problem.

     ERROR: Command errored out with exit status 1:
       command: /home/xushiqiang/anaconda3/envs/svmrnn/bin/python -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-install-31z7go6_/dynet_9cbb8ae7d6b44b6f8763a84cf638457c/setup.py'"'"'; __file__='"'"'/tmp/pip-install-31z7go6_/dynet_9cbb8ae7d6b44b6f8763a84cf638457c/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' bdist_wheel -d /tmp/pip-wheel-vj590a3j
           cwd: /tmp/pip-install-31z7go6_/dynet_9cbb8ae7d6b44b6f8763a84cf638457c/
      Complete output (27 lines):
      running bdist_wheel
      running build
      INFO:root:==============================
      INFO:root:CMake path: /usr/bin/cmake
      INFO:root:Make path: /usr/bin/make
      INFO:root:Make flags: -j 4
      INFO:root:Mercurial path: /usr/bin/hg
      INFO:root:C compiler path: /usr/bin/gcc
      INFO:root:CXX compiler path: /usr/bin/g++
      INFO:root:---
      INFO:root:Script directory: /tmp/pip-install-31z7go6_/dynet_9cbb8ae7d6b44b6f8763a84cf638457c
      INFO:root:Build directory: /tmp/pip-install-31z7go6_/dynet_9cbb8ae7d6b44b6f8763a84cf638457c/build/py3.8-64bit
      INFO:root:Library installation directory: /home/xushiqiang/anaconda3/envs/svmrnn/lib/python3.8/site-packages/../../..
      INFO:root:Python executable: /home/xushiqiang/anaconda3/envs/svmrnn/bin/python
      INFO:root:==============================
      cmake version 3.16.3
      
      CMake suite maintained and supported by Kitware (kitware.com/cmake).
      g++ (Ubuntu 9.3.0-17ubuntu1~20.04) 9.3.0
      Copyright (C) 2019 Free Software Foundation, Inc.
      This is free software; see the source for copying conditions.  There is NO
      warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
      
      INFO:root:Creating build directory /tmp/pip-install-31z7go6_/dynet_9cbb8ae7d6b44b6f8763a84cf638457c/build/py3.8-64bit
      INFO:root:Cloning Eigen...
      abort: HTTP Error 404: Not Found
      error: /usr/bin/hg clone https://bitbucket.org/eigen/eigen
      ----------------------------------------
      ERROR: Failed building wheel for dynet
    

    Could you give me some advice? When I used DyNet 2.1.2, I got 'dynet' has no attribute 'Saveable'.

    opened by XuShiqiang9894 0
Releases(2.1.2)
  • 2.1.2(Oct 21, 2020)

  • 2.1.1(Oct 20, 2020)

  • 2.1(Sep 18, 2018)

    DyNet v. 2.1 incorporates the following changes:

    • Parameters are now implicitly cast to Expressions in Python. This changes the API slightly, as there is no need to call dy.parameter or .expr anymore (see the sketch after this list). #1233
    • Python 3.7 support (pre-built binaries on PyPI) #1450 (thanks @danielhers )
    • Advanced Numpy-like slicing #1363 (thanks @msperber)
    • Argmax and straight-through estimators #1208
    • Updated API doc #1312 (thanks @zhechenyan)
    • Fix segmentation fault in RNNs https://github.com/clab/dynet/issues/1371
    • Many other small fixes and QoL improvements (see the full list of merged PRs since the last release for more details)
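
    A minimal sketch of the implicit cast mentioned in the first bullet (the names here are illustrative only):

    import dynet as dy

    pc = dy.ParameterCollection()
    W = pc.add_parameters((3, 5))

    dy.renew_cg()
    x = dy.inputVector([1.0] * 5)
    # Before 2.1:  y = dy.parameter(W) * x
    # From 2.1 on, the Parameter is implicitly cast to an Expression:
    y = W * x
    print(y.dim())                    # ((3,), 1)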

    Link to the 2.1 documentation: https://dynet.readthedocs.io/en/2.1/

    Source code(tar.gz)
    Source code(zip)
    eigen-b2e267dc99d4.zip(3.16 MB)
  • 2.0.3(Feb 16, 2018)

    DyNet v. 2.0.3 incorporates the following changes:

    • On-GPU random number generation (https://github.com/clab/dynet/issues/1059 https://github.com/clab/dynet/pull/1094 https://github.com/clab/dynet/pull/1154)
    • Memory savings through in-place operations (https://github.com/clab/dynet/pull/1103)
    • More efficient inputTensor that doesn't switch memory layout (https://github.com/clab/dynet/issues/1143)
    • More stable sigmoid (https://github.com/clab/dynet/pull/1200)
    • Fix bug in weight decay (https://github.com/clab/dynet/issues/1201)
    • Many other fixes, etc.

    Link to the documentation: Dynet v2.0.3

    Source code(tar.gz)
    Source code(zip)
  • 2.0.2(Dec 21, 2017)

    v 2.0.2 of DyNet includes the following improvements. Thanks to everyone who made them happen!

    Done:

    • Better organized examples: https://github.com/clab/dynet/issues/191
    • Full multi-device support: https://github.com/clab/dynet/issues/952
    • Broadcasting standard elementwise operations: https://github.com/clab/dynet/pull/776
    • Some refactoring: https://github.com/clab/dynet/issues/522
    • Better profiling: https://github.com/clab/dynet/pull/1088
    • Fix performance regression on autobatching: https://github.com/clab/dynet/issues/974
    • Pre-compiled pip binaries
    • A bunch of other small functionality additions and bug fixes

    Source code(tar.gz)
    Source code(zip)
  • 2.0.1(Sep 2, 2017)

    DyNet v2.0.1 made the following major improvements:

    • Simplified training interface: https://github.com/clab/dynet/pull/695
    • Support for multi-device computation (thanks @xunzhang!): https://github.com/clab/dynet/pull/704
    • A memory efficient version of LSTMBuilder (thanks @msperber): https://github.com/clab/dynet/pull/729
    • Scratch memory for better memory efficiency (thanks @zhisbug @Abasyoni!): https://github.com/clab/dynet/pull/692
    • Work towards pre-compiled pip files (thanks @danielhers!)

    Source code(tar.gz)
    Source code(zip)
  • v2.0(Jul 10, 2017)

    This release includes a number of new features that are breaking changes with respect to v1.1.

    • DyNet no longer requires Boost (thanks @xunzhang)! This means that models are no longer saved in Boost format, but instead in a format supported natively by DyNet.
    • Other changes to reading and writing include the ability to read/write only parts of models. There have been a number of changes to the reading/writing interface as well, and examples of how to use it can be found in the examples (see the sketch after this list). (https://github.com/clab/dynet/issues/84)
    • Renaming of "Model" as "ParameterCollection"
    • Removing the dynet::expr namespace in C++ (now expressions are in the dynet:: namespace)
    • Making VanillaLSTMBuilder the default LSTM interface https://github.com/clab/dynet/issues/474
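
    A hedged sketch of the Boost-free save/load interface referred to above, using the ParameterCollection save/populate methods (the file name is illustrative):

    import dynet as dy

    pc = dy.ParameterCollection()
    W = pc.add_parameters((10, 10))
    b = pc.add_parameters((10,))

    # Save all parameters in DyNet's native format.
    pc.save("model.dy")

    # Later: rebuild a collection with the same structure and repopulate the values.
    pc2 = dy.ParameterCollection()
    W2 = pc2.add_parameters((10, 10))
    b2 = pc2.add_parameters((10,))
    pc2.populate("model.dy")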

    Other new features include

    • Autobatching (by @yoavgo and @neubig): https://github.com/clab/dynet/blob/master/examples/python/tutorials/Autobatching.ipynb
    • Scala bindings (thanks @joelgrus!) https://github.com/clab/dynet/pull/357
    • Dynamically increasing memory pools (thanks @yoavgo) https://github.com/clab/dynet/pull/364
    • Convolutions and cuDNN (thanks @zhisbug!): https://github.com/clab/dynet/issues/229 https://github.com/clab/dynet/issues/236
    • Better error handling: https://github.com/clab/dynet/pull/358 https://github.com/clab/dynet/pull/365
    • Better documentation (thanks @pmichel31415!)
    • Gal dropout (thanks @yoavgo and @pmichel31415!): https://github.com/clab/dynet/pull/261
    • Integration into pip (thanks @danielhers !)
    • A cool new logo! (http://dynet.readthedocs.io/en/latest/citing.html)
    • A huge number of other changes by other contributors. Thank you everyone!
    Source code(tar.gz)
    Source code(zip)
  • v1.1(Jun 28, 2017)

  • v1.0-rc1(Oct 12, 2016)

    This is the first release candidate for DyNet version 1.0. Compared to its predecessor, cnn, it supports a number of new features:

    • Full GPU support
    • Simple support of mini-batching
    • Better integration with Python bindings
    • Better efficiency
    • Correct implementation of l2 regularization
    • More supported functions
    • And much more!
    Source code(tar.gz)
    Source code(zip)
Owner
Chris Dyer's lab @ LTI/CMU