DyNet: The Dynamic Neural Network Toolkit

General

DyNet is a neural network library developed by Carnegie Mellon University and many others. It is written in C++ (with bindings in Python) and is designed to be efficient when run on either CPU or GPU, and to work well with networks that have dynamic structures that change for every training instance. For example, these kinds of networks are particularly important in natural language processing tasks, and DyNet has been used to build state-of-the-art systems for syntactic parsing, machine translation, morphological inflection, and many other application areas.
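
To make the "dynamic structure" point concrete, here is a minimal sketch (not taken from the DyNet documentation) of the define-by-run training loop in the Python bindings: a fresh computation graph is built for every instance, so the network shape can differ from example to example. The data, dimensions, and classifier below are made up purely for illustration.

import random
import dynet as dy

pc = dy.ParameterCollection()
emb = pc.add_lookup_parameters((1000, 64))   # toy vocabulary of 1000 ids, 64-dim embeddings
W = pc.add_parameters((2, 64))               # toy binary classifier on top
trainer = dy.SimpleSGDTrainer(pc)

# Made-up data: variable-length sequences of word ids with a 0/1 label.
data = [([random.randrange(1000) for _ in range(random.randrange(3, 10))],
         random.randrange(2)) for _ in range(100)]

for seq, label in data:
    dy.renew_cg()                            # a new computation graph for every instance
    h = dy.average([emb[w] for w in seq])    # the graph's shape depends on len(seq)
    loss = dy.pickneglogsoftmax(W * h, label)
    loss.value()
    loss.backward()
    trainer.update()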

Read the documentation to get started, and feel free to contact the dynet-users group with any questions (if you want to receive email, make sure to select "all email" when you sign up). We greatly appreciate any bug reports and contributions, which can be made by filing an issue or making a pull request through the GitHub page.

You can also read more technical details in our technical report.

Getting started

You can find tutorials about using DyNet here (C++), here (Python), and here (EMNLP 2016 tutorial).

One aspect that sets DyNet apart from other toolkits is its auto-batching feature. See the documentation about batching.
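
As a rough illustration (a sketch, not the official example), auto-batching lets you write per-instance code and have DyNet batch identical operations across instances for you; it is enabled with the --dynet-autobatch 1 command-line flag. The data and dimensions below are invented for the example.

import random
import dynet as dy   # run the script with: python myscript.py --dynet-autobatch 1

pc = dy.ParameterCollection()
W = pc.add_parameters((32, 64))
trainer = dy.AdamTrainer(pc)

# Made-up minibatch: 16 instances of a 64-dim input with a class label in [0, 32).
minibatch = [([random.random() for _ in range(64)], random.randrange(32))
             for _ in range(16)]

def instance_loss(x, y):
    h = dy.tanh(W * dy.inputVector(x))       # written as if for a single instance
    return dy.pickneglogsoftmax(h, y)

dy.renew_cg()
losses = [instance_loss(x, y) for x, y in minibatch]
total = dy.esum(losses)                      # one forward/backward; DyNet batches the repeated ops
total.value()
total.backward()
trainer.update()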

The examples folder contains a variety of examples in C++ and Python.

Installation

DyNet relies on a number of external programs/libraries including CMake and Eigen. CMake can be installed from standard repositories.

For example on Ubuntu Linux:

sudo apt-get install build-essential cmake

Or, on macOS, first make sure the Apple Command Line Tools are installed, then install CMake with either Homebrew or MacPorts:

xcode-select --install
brew install cmake  # Using homebrew.
sudo port install cmake # Using macports.

On Windows, see documentation.

To compile DyNet you also need a specific version of the Eigen library. If you use one of Eigen's released versions instead, you may get assertion failures or compile errors. You can get the required version easily with the following commands:

mkdir eigen
cd eigen
wget https://github.com/clab/dynet/releases/download/2.1/eigen-b2e267dc99d4.zip
unzip eigen-b2e267dc99d4.zip

C++ installation

You can install DyNet for C++ with the following commands:

# Clone the github repository
git clone https://github.com/clab/dynet.git
cd dynet
mkdir build
cd build
# Run CMake
# -DENABLE_BOOST=ON in combination with -DENABLE_CPP_EXAMPLES=ON also
# compiles the multiprocessing C++ examples
cmake .. -DEIGEN3_INCLUDE_DIR=/path/to/eigen -DENABLE_CPP_EXAMPLES=ON
# Compile using 2 processes
make -j 2
# Test with an example
./examples/train_xor

For more details, refer to the documentation.

Python installation

You can install DyNet for Python with the following command:

pip install git+https://github.com/clab/dynet#egg=dynet

For more details, refer to the documentation.
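
A quick way to check that the binding works (an illustrative snippet, not part of the official docs) is to build and evaluate a tiny expression:

import dynet as dy

pc = dy.ParameterCollection()
W = pc.add_parameters((2, 3))

dy.renew_cg()
x = dy.inputVector([1.0, 2.0, 3.0])
y = W * x                # in DyNet 2.1+, Parameters can be used directly as Expressions
print(y.npvalue())       # a length-2 numpy array means the install is working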

Citing

If you use DyNet for research, please cite this report as follows:

@article{dynet,
  title={DyNet: The Dynamic Neural Network Toolkit},
  author={Graham Neubig and Chris Dyer and Yoav Goldberg and Austin Matthews and Waleed Ammar and Antonios Anastasopoulos and Miguel Ballesteros and David Chiang and Daniel Clothiaux and Trevor Cohn and Kevin Duh and Manaal Faruqui and Cynthia Gan and Dan Garrette and Yangfeng Ji and Lingpeng Kong and Adhiguna Kuncoro and Gaurav Kumar and Chaitanya Malaviya and Paul Michel and Yusuke Oda and Matthew Richardson and Naomi Saphra and Swabha Swayamdipta and Pengcheng Yin},
  journal={arXiv preprint arXiv:1701.03980},
  year={2017}
}

Contributing

We welcome contributions to DyNet! You can find the contributing guidelines here.

Comments
  • Incorporate cuDNN, add conv2d CPU/GPU version (based on Eigen and cuDNN)

    Incorporate cuDNN, add conv2d CPU/GPU version (based on Eigen and cuDNN)

    #229 This is the CPU implementation based on Eigen SpatialConvolution. It is reported as the current fastest (available) CPU version of conv2d. For GPU support, I think implementing a new version using cublas kernels (by hand) is worthless, so I am currently incorporating cudnn into DyNet and will provide a cudnn-based (standard) implementation.

    opened by zhisbug 33
  • First attempt at Yarin Gal dropout for LSTM

    First attempt at Yarin Gal dropout for LSTM

    https://arxiv.org/pdf/1512.05287v5.pdf

    I'm not 100% sure it's correct, and it has some ugliness -- LSTMBuilder now keeps a pointer to ComputationGraph -- but Gal's dropout seems to be the preferred way to do dropout for LSTMs.

    Will appreciate another pair of eyes.

    opened by yoavg 29
  • Support installation through pip

    Support installation through pip

    With this change, DyNet can be installed with the following command line:

    pip install git+https://github.com/clab/dynet#egg=dynet
    

    If Boost is installed in a non-standard location, it has to be set in the environment variable BOOST prior to installation.

    To try this out from my fork before merging the pull request, use:

    pip install git+https://github.com/danielhers/dynet#egg=dynet
    
    opened by danielhers 23
  • Auto-batching 'inf' gradient

    Auto-batching 'inf' gradient

    Hi,

    We successfully implemented a seq2seq model with auto-batching (on GPU) and it works great. We wanted to improve the speed by reducing the size of the softmax:

    Expression W = select_rows(p2c, candsInt);
    Expression x = W * v;
    Expression candidates = log_softmax(x);

    When not using auto-batching the code works and behaves as expected; however, when using auto-batching we get a runtime error: what(): Magnitude of gradient is bad: inf

    Thank you, Eli

    major bug fix needs confirmation 
    opened by elikip 22
  • Is there an alternative way to save a model besides Boost?

    Is there an alternative way to save a model besides Boost?

    Hi,

    Currently I am facing a problem creating a model loader in different languages (e.g. Java). Is there a better way to serialize the model (or parameters) in a more human-readable format? That would make it easier to use the models more widely. Any suggestions will be appreciated!

    Thanks, YJ

    opened by iamyoungjo 21
  • Combine python/setup.py.in into setup.py

    Combine python/setup.py.in into setup.py

    Simplify Python installation process by combining the generated setup.py into the top one, using environment variables to pass information from cmake. Should allow fixing #657 now that the Cython extensions are created by the main setup.py.

    opened by danielhers 20
  • GPU (backend cuda) build problem

    GPU (backend cuda) build problem

    I am having a problem building with BACKEND=cuda. My system is OS X 10.11.6 (El Capitan). cmake works fine; once I do "make -j 4", it returns the following error:

    Undefined symbols for architecture x86_64:
    ...
    ld: symbol(s) not found for architecture x86_64
    clang: error: linker command failed with exit code 1 (use -v to see invocation)

    As I use the GPU in TensorFlow without any issues, I doubt that my CUDA setup is incomplete.

    I searched for similar issues here, but it seems I am the only person having this issue. There is no compilation issue if I don't use the CUDA backend.

    If I missed a significant step here, or if anyone is familiar with this error, please help. I have wasted more than 6 hours because of this.

    make.log.zip

    moderate bug fix needs confirmation 
    opened by iamyoungjo 20
  • Batch manipulation operations

    Batch manipulation operations

    It would be nice to have operations that allow you to do things like

    • concat_batch: concatenate multiple expressions into a single batched expression
    • pick_batch_elements: pick only a subset of the elements from a batched expression
    enhancement 
    opened by neubig 19
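
    As a rough sketch of the kind of batch manipulation requested above (hedged: it assumes the batch operations that the Python API exposes in later releases, dy.concatenate_to_batch and dy.pick_batch_elems, behave as their names suggest):

    import dynet as dy

    dy.renew_cg()
    xs = [dy.inputVector([float(i), float(i) + 1.0]) for i in range(4)]
    batched = dy.concatenate_to_batch(xs)          # one expression with batch dimension 4
    subset = dy.pick_batch_elems(batched, [0, 2])  # keep only batch elements 0 and 2
    print(subset.npvalue().shape)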
  • Installation issue

    Installation issue

    Hello,

    I'm trying to install dynet on my local machine and I keep getting an error while importing dynet in python.

    import dynet as dy
    Traceback (most recent call last):
      File "", line 1, in
      File "dynet.py", line 17, in
        from _dynet import *
    ImportError: dlopen(./_dynet.so, 2): Library not loaded: @rpath/libdynet.dylib
      Referenced from: /dynet-base/dynet/build/python/_dynet.so
      Reason: image not found

    I'm using:

    • MBP w/ MacOS Sierra
    • Eigen's default branch from bitbucket
    • The latest dynet (w/ Today's commit that fixed TravisCI)
    • boost 160
    • python 2.7.10
    • cmake 3.6.3
    • make 3.81 (built for i386-apple-darwin11.3.0)

    The make log also references that file:

    c++ -bundle -undefined dynamic_lookup -arch i386 -arch x86_64 -Wl,-F. build/temp.macosx-10.12-intel-2.7/dynet.o -L. -L/dynet-base/dynet/build/dynet/ -L/dynet-base/dynet/build/dynet/ -ldynet -o /dynet-base/dynet/build/python/_dynet.so
    ld: warning: ignoring file /dynet-base/dynet/build/dynet//libdynet.dylib, file was built for x86_64 which is not the architecture being linked (i386): /dynet-base/dynet/build/dynet//libdynet.dylib

    Could you advise?

    Thanks, Florin.

    moderate bug 
    opened by fmacicasan 19
  • Eliminate dependency on libdynet from _dynet.so

    Eliminate dependency on libdynet from _dynet.so

    Currently, the compiled Cython file, _dynet.so depends on libdynet. This has a couple of disadvantages such as:

    • Installation can be clumsy (e.g., setting LD_LIBRARY_PATH in Linux, DYLD_LIBRARY_PATH in macOS to load libdynet)
    • Not easy to deploy to servers.

    This change eliminates the dependency by creating a static library of dynet, making the installation and deployment easier. The idea is to link the static library rather than the shared/dynamic library when generating _dynet.so.

    The static and shared/dynamic libraries are generated from an object library (it's just a collection of object files) [1]. By creating an object library, we can avoid compiling object files for both libraries.

    [1] https://cmake.org/cmake/help/latest/command/add_library.html#object-libraries



    opened by tetsuok 18
  • Scala bindings for DyNet (via swig)

    Scala bindings for DyNet (via swig)

    We have created SWIG bindings so that we can use DyNet from Scala. They are pretty comprehensive, with lots of documentation and tests and examples, and we are actively using DyNet from Scala code.

    Other than a few lines in the top level CMakeLists, all of our changes are under the new swig directory (and are hidden behind a flag which is OFF by default).

    We wanted to contribute this back, as it seems like something that could be useful to a lot of people.

    Incorporating this would require some sort of plan around keeping the bindings in sync with the root C++ code. Presumably that's already required for the Python bindings, so maybe it's not terribly hard.

    Anyway, I know this isn't just a simple "LGTM" change, so let's discuss.

    opened by joelgrus 18
  • while releasing: ValueError: underlying buffer has been detached

    while releasing: ValueError: underlying buffer has been detached

    Trying to tag the repo and trigger a release is currently not working. Attached is a full log (logs_buffer_detached.txt). Here are the interesting snippets.

    copying src\cryptoadvance\specterext\swan\templates\swan\components\swan_menu.jinja -> build\lib\cryptoadvance\specterext\swan\templates\swan\components
    copying src\cryptoadvance\specterext\swan\templates\swan\components\swan_tab.jinja -> build\lib\cryptoadvance\specterext\swan\templates\swan\components
    running install
    C:\gitlab-runner\builds\xPkLDUk2\0\k9ert\specter-desktop\.buildenv\lib\site-packages\setuptools\command\install.py:34: SetuptoolsDeprecationWarning: setup.py install is deprecated. Use build and pip and other standards-based tools.
      warnings.warn(
    --- Logging error ---
    Traceback (most recent call last):
      File "c:\python38\lib\logging\__init__.py", line 1088, in emit
        stream.write(msg + self.terminator)
    ValueError: underlying buffer has been detached
    Call stack:
      File "setup.py", line 39, in <module>
        setup(
      File "C:\gitlab-runner\builds\xPkLDUk2\0\k9ert\specter-desktop\.buildenv\lib\site-packages\setuptools\__init__.py", line 87, in setup
        return distutils.core.setup(**attrs)
      File "C:\gitlab-runner\builds\xPkLDUk2\0\k9ert\specter-desktop\.buildenv\lib\site-packages\setuptools\_distutils\core.py", line 185, in setup
        return run_commands(dist)
      File "C:\gitlab-runner\builds\xPkLDUk2\0\k9ert\specter-desktop\.buildenv\lib\site-packages\setuptools\_distutils\core.py", line 201, in run_commands
        dist.run_commands()
      File "C:\gitlab-runner\builds\xPkLDUk2\0\k9ert\specter-desktop\.buildenv\lib\site-packages\setuptools\_distutils\dist.py", line 968, in run_commands
        self.run_command(cmd)
      File "C:\gitlab-runner\builds\xPkLDUk2\0\k9ert\specter-desktop\.buildenv\lib\site-packages\setuptools\dist.py", line 1217, in run_command
        super().run_command(command)
      File "C:\gitlab-runner\builds\xPkLDUk2\0\k9ert\specter-desktop\.buildenv\lib\site-packages\setuptools\_distutils\dist.py", line 987, in run_command
        cmd_obj.run()
      File "C:\gitlab-runner\builds\xPkLDUk2\0\k9ert\specter-desktop\.buildenv\lib\site-packages\wheel\bdist_wheel.py", line 358, in run
        log.info(f"installing to {self.bdist_dir}")
    Message: 'installing to build\\bdist.win-amd64\\wheel'
    Arguments: ()
    --- Logging error ---
    Traceback (most recent call last):
      File "c:\python38\lib\logging\__init__.py", line 1088, in emit
        stream.write(msg + self.terminator)
    ValueError: underlying buffer has been detached
    Call stack:
      File "setup.py", line 39, in <module>
        setup(
      File "C:\gitlab-runner\builds\xPkLDUk2\0\k9ert\specter-desktop\.buildenv\lib\site-packages\setuptools\__init__.py", line 87, in setup
        return distutils.core.setup(**attrs)
    

    So it seems that the error is thrown for each logging attempt. Later in the logs, the following issue occurred, which is probably more severe:

    56788 INFO: Processing module hooks...
    56792 INFO: Loading module hook 'hook-cryptoadvance.specter.services.py' from 'C:\\gitlab-runner\\builds\\xPkLDUk2\\0\\k9ert\\specter-desktop\\pyinstaller\\hooks'...
    60828 INFO: Collecting subclasses of Service in C:\gitlab-runner\builds\xPkLDUk2\0\k9ert\specter-desktop\.buildenv\Lib\site-packages\cryptoadvance\specterext...
    Traceback (most recent call last):
      File "C:\gitlab-runner\builds\xPkLDUk2\0\k9ert\specter-desktop\.buildenv\lib\site-packages\cryptoadvance\specter\util\reflection.py", line 188, in get_subclasses_for_clazz
        module = import_module(f"{module_name}.service")
      File "c:\python38\lib\importlib\__init__.py", line 127, in import_module
        return _bootstrap._gcd_import(name[level:], package, level)
      File "<frozen importlib._bootstrap>", line 1014, in _gcd_import
      File "<frozen importlib._bootstrap>", line 991, in _find_and_load
      File "<frozen importlib._bootstrap>", line 961, in _find_and_load_unlocked
      File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
      File "<frozen importlib._bootstrap>", line 1014, in _gcd_import
      File "<frozen importlib._bootstrap>", line 991, in _find_and_load
      File "<frozen importlib._bootstrap>", line 973, in _find_and_load_unlocked
    ModuleNotFoundError: No module named 'devhelp'
    
    During handling of the above exception, another exception occurred:
    
    Traceback (most recent call last):
      File "c:\python38\lib\runpy.py", line 194, in _run_module_as_main
        return _run_code(code, main_globals, None,
      File "c:\python38\lib\runpy.py", line 87, in _run_code
        exec(code, run_globals)
      File "C:\gitlab-runner\builds\xPkLDUk2\0\k9ert\specter-desktop\.buildenv\Scripts\pyinstaller.exe\__main__.py", line 7, in <module>
      File "C:\gitlab-runner\builds\xPkLDUk2\0\k9ert\specter-desktop\.buildenv\lib\site-packages\PyInstaller\__main__.py", line 178, in run
        run_build(pyi_config, spec_file, **vars(args))
      File "C:\gitlab-runner\builds\xPkLDUk2\0\k9ert\specter-desktop\.buildenv\lib\site-packages\PyInstaller\__main__.py", line 59, in run_build
        PyInstaller.building.build_main.main(pyi_config, spec_file, **kwargs)
      File "C:\gitlab-runner\builds\xPkLDUk2\0\k9ert\specter-desktop\.buildenv\lib\site-packages\PyInstaller\building\build_main.py", line 842, in main
        build(specfile, distpath, workpath, clean_build)
      File "C:\gitlab-runner\builds\xPkLDUk2\0\k9ert\specter-desktop\.buildenv\lib\site-packages\PyInstaller\building\build_main.py", line 764, in build
        exec(code, spec_namespace)
      File "specterd.spec", line 47, in <module>
        a = Analysis(['specterd.py'],
      File "C:\gitlab-runner\builds\xPkLDUk2\0\k9ert\specter-desktop\.buildenv\lib\site-packages\PyInstaller\building\build_main.py", line 319, in __init__
        self.__postinit__()
      File "C:\gitlab-runner\builds\xPkLDUk2\0\k9ert\specter-desktop\.buildenv\lib\site-packages\PyInstaller\building\datastruct.py", line 173, in __postinit__
        self.assemble()
      File "C:\gitlab-runner\builds\xPkLDUk2\0\k9ert\specter-desktop\.buildenv\lib\site-packages\PyInstaller\building\build_main.py", line 487, in assemble
        self.graph.process_post_graph_hooks(self)
      File "C:\gitlab-runner\builds\xPkLDUk2\0\k9ert\specter-desktop\.buildenv\lib\site-packages\PyInstaller\depend\analysis.py", line 326, in process_post_graph_hooks
        module_hook.post_graph(analysis)
      File "C:\gitlab-runner\builds\xPkLDUk2\0\k9ert\specter-desktop\.buildenv\lib\site-packages\PyInstaller\depend\imphook.py", line 404, in post_graph
        self._load_hook_module()
      File "C:\gitlab-runner\builds\xPkLDUk2\0\k9ert\specter-desktop\.buildenv\lib\site-packages\PyInstaller\depend\imphook.py", line 367, in _load_hook_module
        self._hook_module = importlib_load_source(self.hook_module_name, self.hook_filename)
      File "C:\gitlab-runner\builds\xPkLDUk2\0\k9ert\specter-desktop\.buildenv\lib\site-packages\PyInstaller\compat.py", line 620, in importlib_load_source
        return mod_loader.load_module()
      File "<frozen importlib._bootstrap_external>", line 462, in _check_name_wrapper
      File "<frozen importlib._bootstrap_external>", line 962, in load_module
      File "<frozen importlib._bootstrap_external>", line 787, in load_module
      File "<frozen importlib._bootstrap>", line 265, in _load_module_shim
      File "<frozen importlib._bootstrap>", line 702, in _load
      File "<frozen importlib._bootstrap>", line 671, in _load_unlocked
      File "<frozen importlib._bootstrap_external>", line 783, in exec_module
      File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
      File "C:\gitlab-runner\builds\xPkLDUk2\0\k9ert\specter-desktop\pyinstaller\hooks\hook-cryptoadvance.specter.services.py", line 9, in <module>
        for service_dir in ServiceManager.get_service_x_dirs("templates")
      File "C:\gitlab-runner\builds\xPkLDUk2\0\k9ert\specter-desktop\.buildenv\lib\site-packages\cryptoadvance\specter\managers\service_manager\service_manager.py", line 394, in get_service_x_dirs
        for clazz in get_subclasses_for_clazz(Service)
      File "C:\gitlab-runner\builds\xPkLDUk2\0\k9ert\specter-desktop\.buildenv\lib\site-packages\cryptoadvance\specter\util\reflection.py", line 195, in get_subclasses_for_clazz
        module = import_module(
      File "c:\python38\lib\importlib\__init__.py", line 122, in import_module
        raise TypeError(msg.format(name))
    TypeError: the 'package' argument is required to perform a relative import for '.specterext.devhelp.service'
    Compress-Archive : The path 'dist\specterd.exe' either does not exist or is not a valid file system path.
    At line:1 char:1
    + Compress-Archive -Path dist\specterd.exe release\specterd-v1.13.2-pre ...
    + ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
        + CategoryInfo          : InvalidArgument: (dist\specterd.exe:String) [Compress-Archive], InvalidOperationExceptio 
       n
        + FullyQualifiedErrorId : ArchiveCmdletPathNotFound,Compress-Archive
     
    section_end:1669149038:step_script
    section_start:1669149038:upload_artifacts_on_failure
    Uploading artifacts for failed job
    Version:      14.0.0
    Git revision: 3b6f852e
    Git branch:   14-0-stable
    GO version:   go1.13.8
    Built:        2021-06-19T12:24:44+0000
    OS/Arch:      windows/amd64
    Uploading artifacts...
    Runtime platform                                    arch=amd64 os=windows pid=9744 revision=3b6f852e version=14.0.0
    WARNING: pyinstaller/release/*: no matching files  
    ERROR: No files to upload                          
    section_end:1669149039:upload_artifacts_on_failure
    section_start:1669149039:cleanup_file_variables
    Cleaning up file based variables
    section_end:1669149040:cleanup_file_variables
    ERROR: Job failed: exit status 1
    
    
    
    
    opened by k9ert 0
  • Bump pillow from 9.0.0 to 9.3.0 in /examples/variational-autoencoder/basic-image-recon

    Bump pillow from 9.0.0 to 9.3.0 in /examples/variational-autoencoder/basic-image-recon

    Bumps pillow from 9.0.0 to 9.3.0.

    Release notes

    Sourced from pillow's releases.

    9.3.0

    https://pillow.readthedocs.io/en/stable/releasenotes/9.3.0.html

    Changes

    ... (truncated)

    Changelog

    Sourced from pillow's changelog.

    9.3.0 (2022-10-29)

    • Limit SAMPLESPERPIXEL to avoid runtime DOS #6700 [wiredfool]

    • Initialize libtiff buffer when saving #6699 [radarhere]

    • Inline fname2char to fix memory leak #6329 [nulano]

    • Fix memory leaks related to text features #6330 [nulano]

    • Use double quotes for version check on old CPython on Windows #6695 [hugovk]

    • Remove backup implementation of Round for Windows platforms #6693 [cgohlke]

    • Fixed set_variation_by_name offset #6445 [radarhere]

    • Fix malloc in _imagingft.c:font_setvaraxes #6690 [cgohlke]

    • Release Python GIL when converting images using matrix operations #6418 [hmaarrfk]

    • Added ExifTags enums #6630 [radarhere]

    • Do not modify previous frame when calculating delta in PNG #6683 [radarhere]

    • Added support for reading BMP images with RLE4 compression #6674 [npjg, radarhere]

    • Decode JPEG compressed BLP1 data in original mode #6678 [radarhere]

    • Added GPS TIFF tag info #6661 [radarhere]

    • Added conversion between RGB/RGBA/RGBX and LAB #6647 [radarhere]

    • Do not attempt normalization if mode is already normal #6644 [radarhere]

    ... (truncated)

    Commits

    Dependabot compatibility score

    Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


    Dependabot commands and options

    You can trigger Dependabot actions by commenting on this PR:

    • @dependabot rebase will rebase this PR
    • @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
    • @dependabot merge will merge this PR after your CI passes on it
    • @dependabot squash and merge will squash and merge this PR after your CI passes on it
    • @dependabot cancel merge will cancel a previously requested merge and block automerging
    • @dependabot reopen will reopen this PR if it is closed
    • @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
    • @dependabot ignore this major version will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this minor version will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this dependency will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
    • @dependabot use these labels will set the current labels as the default for future PRs for this repo and language
    • @dependabot use these reviewers will set the current reviewers as the default for future PRs for this repo and language
    • @dependabot use these assignees will set the current assignees as the default for future PRs for this repo and language
    • @dependabot use this milestone will set the current milestone as the default for future PRs for this repo and language

    You can disable automated security fix PRs for this repo from the Security Alerts page.

    dependencies 
    opened by dependabot[bot] 0
  • make: *** No targets specified and no makefile found.

    make: *** No targets specified and no makefile found.

    I followed the Windows installation steps and got stuck with this error; I also tried to install the Python package and got stuck with the same error:

    (base) PS C:\*****\dynet\build> cmake .. -DEIGEN3_INCLUDE_DIR=C:\Users\dilaw\Downloads\eigen -DENABLE_CPP_EXAMPLES=ON
    -- Building for: Visual Studio 17 2022
    -- Selecting Windows SDK version 10.0.19041.0 to target Windows 10.0.22000.
    -- The C compiler identification is MSVC 19.31.31105.0
    -- The CXX compiler identification is MSVC 19.31.31105.0
    -- Detecting C compiler ABI info
    -- Detecting C compiler ABI info - done
    -- Check for working C compiler: C:/Program Files/Microsoft Visual Studio/2022/Community/VC/Tools/MSVC/14.31.31103/bin/Hostx64/x64/cl.exe - skipped
    -- Detecting C compile features
    -- Detecting C compile features - done
    -- Detecting CXX compiler ABI info
    -- Detecting CXX compiler ABI info - done
    -- Check for working CXX compiler: C:/Program Files/Microsoft Visual Studio/2022/Community/VC/Tools/MSVC/14.31.31103/bin/Hostx64/x64/cl.exe - skipped
    -- Detecting CXX compile features
    -- Detecting CXX compile features - done
    CMake Deprecation Warning at CMakeLists.txt:2 (cmake_minimum_required):
      Compatibility with CMake < 2.8.12 will be removed from a future version of
      CMake.
    
      Update the VERSION argument <min> value or use a ...<max> suffix to tell
      CMake that the project does not need compatibility with older versions.
    
    
    -- BACKEND not specified, defaulting to eigen.
    -- Eigen dir is C:/Users/dilaw/Downloads/eigen
    -- Performing Test CMAKE_HAVE_LIBC_PTHREAD
    -- Performing Test CMAKE_HAVE_LIBC_PTHREAD - Failed
    -- Looking for pthread_create in pthreads
    -- Looking for pthread_create in pthreads - not found
    -- Looking for pthread_create in pthread
    -- Looking for pthread_create in pthread - not found
    -- Found Threads: TRUE
    CMake Deprecation Warning at tutorial/CMakeLists.txt:1 (CMAKE_MINIMUM_REQUIRED):
      Compatibility with CMake < 2.8.12 will be removed from a future version of
      CMake.
    
      Update the VERSION argument <min> value or use a ...<max> suffix to tell
      CMake that the project does not need compatibility with older versions.
    
    
    CMake Deprecation Warning at examples/CMakeLists.txt:1 (CMAKE_MINIMUM_REQUIRED):
      Compatibility with CMake < 2.8.12 will be removed from a future version of
      CMake.
    
      Update the VERSION argument <min> value or use a ...<max> suffix to tell
      CMake that the project does not need compatibility with older versions.
    
    
    -- Configuring done
    -- Generating done
    -- Build files have been written to: C:/Users/dilaw/Downloads/dynet/build
    (base) PS C:\*****\dynet\build> make -j 2
    make: *** No targets specified and no makefile found.  Stop.
    
    opened by walidbou6 1
  • Dynet GPU installation failure on Amazon Linux instances

    Dynet GPU installation failure on Amazon Linux instances

    Hi, I was able to install DyNet fine for CPU usage on an Amazon Linux p3.16x instance. For GPU usage, I ran the following command:

    BACKEND=cuda pip install git+https://github.com/clab/dynet#egg=dynet
    

    and I get the following error:

    Looking in indexes: https://pypi.org/simple, https://pip.repos.neuron.amazonaws.com
    Collecting dynet
      Cloning https://github.com/clab/dynet to /tmp/pip-install-ao0f03yi/dynet_9d4d210ad7d640e0823785d1c0e51456
      Running command git clone --filter=blob:none -q https://github.com/clab/dynet /tmp/pip-install-ao0f03yi/dynet_9d4d210ad7d640e0823785d1c0e51456
      Resolved https://github.com/clab/dynet to commit c418b09dfb08be8c797c1403911ddfe0d9f5df77
      Installing build dependencies ... done
      Getting requirements to build wheel ... done
      Preparing metadata (pyproject.toml) ... done
    Requirement already satisfied: numpy in /home/ec2-user/anaconda3/envs/pytorch_p38/lib/python3.8/site-packages (from dynet) (1.21.5)
    Requirement already satisfied: cython in /home/ec2-user/anaconda3/envs/pytorch_p38/lib/python3.8/site-packages (from dynet) (0.29.24)
    Building wheels for collected packages: dynet
      Building wheel for dynet (pyproject.toml) ... error
      ERROR: Command errored out with exit status 1:
       command: /home/ec2-user/anaconda3/envs/pytorch_p38/bin/python3.8 /home/ec2-user/anaconda3/envs/pytorch_p38/lib/python3.8/site-packages/pip/_vendor/pep517/in_process/_in_process.py build_wheel /tmp/tmp5pz9ri55
           cwd: /tmp/pip-install-ao0f03yi/dynet_9d4d210ad7d640e0823785d1c0e51456
      Complete output (33 lines):
      running bdist_wheel
      running build
      INFO:root:CMAKE_PATH='/home/ec2-user/anaconda3/envs/pytorch_p38/bin/cmake'
      INFO:root:MAKE_PATH='/usr/bin/make'
      INFO:root:MAKE_FLAGS='-j 64'
      INFO:root:EIGEN3_INCLUDE_DIR='/tmp/pip-install-ao0f03yi/dynet_9d4d210ad7d640e0823785d1c0e51456/build/py3.8-64bit/eigen'
      INFO:root:EIGEN3_DOWNLOAD_URL='https://github.com/clab/dynet/releases/download/2.1/eigen-b2e267dc99d4.zip'
      INFO:root:CC_PATH='/home/ec2-user/anaconda3/envs/pytorch_p38/bin/x86_64-conda-linux-gnu-cc'
      INFO:root:CXX_PATH='/home/ec2-user/anaconda3/envs/pytorch_p38/bin/x86_64-conda-linux-gnu-c++'
      INFO:root:SCRIPT_DIR='/tmp/pip-install-ao0f03yi/dynet_9d4d210ad7d640e0823785d1c0e51456'
      INFO:root:BUILD_DIR='/tmp/pip-install-ao0f03yi/dynet_9d4d210ad7d640e0823785d1c0e51456/build/py3.8-64bit'
      INFO:root:INSTALL_PREFIX='/home/ec2-user/anaconda3/envs/pytorch_p38/lib/python3.8/site-packages/../../..'
      INFO:root:PYTHON='/home/ec2-user/anaconda3/envs/pytorch_p38/bin/python3.8'
      Traceback (most recent call last):
        File "/home/ec2-user/anaconda3/envs/pytorch_p38/bin/cmake", line 5, in <module>
          from cmake import cmake
      ModuleNotFoundError: No module named 'cmake'
      x86_64-conda-linux-gnu-c++ (crosstool-NG 1.24.0.133_b0863d8_dirty) 9.3.0
      Copyright (C) 2019 Free Software Foundation, Inc.
      This is free software; see the source for copying conditions.  There is NO
      warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
      
      INFO:root:Creating build directory /tmp/pip-install-ao0f03yi/dynet_9d4d210ad7d640e0823785d1c0e51456/build/py3.8-64bit
      INFO:root:Fetching Eigen...
      INFO:root:Unpacking Eigen...
      INFO:root:Configuring...
      Traceback (most recent call last):
        File "/home/ec2-user/anaconda3/envs/pytorch_p38/bin/cmake", line 5, in <module>
          from cmake import cmake
      ModuleNotFoundError: No module named 'cmake'
      /tmp/pip-build-env-5oy0twd_/overlay/lib/python3.8/site-packages/setuptools/dist.py:771: UserWarning: Usage of dash-separated 'description-file' will not be supported in future versions. Please use the underscore name 'description_file' instead
        warnings.warn(
      error: /home/ec2-user/anaconda3/envs/pytorch_p38/bin/cmake /tmp/pip-install-ao0f03yi/dynet_9d4d210ad7d640e0823785d1c0e51456 -DCMAKE_INSTALL_PREFIX='/home/ec2-user/anaconda3/envs/pytorch_p38/lib/python3.8/site-packages/../../..' -DEIGEN3_INCLUDE_DIR='/tmp/pip-install-ao0f03yi/dynet_9d4d210ad7d640e0823785d1c0e51456/build/py3.8-64bit/eigen' -DPYTHON='/home/ec2-user/anaconda3/envs/pytorch_p38/bin/python3.8' -DBACKEND='cuda'
      ----------------------------------------
      ERROR: Failed building wheel for dynet
    Failed to build dynet
    ERROR: Could not build wheels for dynet, which is required to install pyproject.toml-based projects
    

    I started Python with this env activated and ran the command from cmake import cmake, and there was no error. So it's not clear to me why it says it could not find a module named cmake in the pytorch_p38 conda env.

    Python 3.8.12 | packaged by conda-forge | (default, Oct 12 2021, 21:59:51) 
    [GCC 9.4.0] on linux
    Type "help", "copyright", "credits" or "license" for more information.
    >>> from cmake import cmake
    >>>
    

    Any ideas would be greatly appreciated! @neubig

    opened by g-karthik 0
  • Unable to build wheel for dynet on Windows (pip install dynet)

    Unable to build wheel for dynet on Windows (pip install dynet)

    File "C:\Users\User\anaconda3\Scripts\cmake.exe_main_.py", line 4, in ModuleNotFoundError: No module named 'cmake' error: make not found, and MAKE is not set. [end of output]

    note: This error originates from a subprocess, and is likely not a problem with pip.
    ERROR: Failed building wheel for dynet
    Failed to build dynet
    ERROR: Could not build wheels for dynet, which is required to install pyproject.toml-based projects

    Any advice appreciated.

    opened by AndrewBrooks56 3
Releases(2.1.2)
  • 2.1.2(Oct 21, 2020)

  • 2.1.1(Oct 20, 2020)

  • 2.1(Sep 18, 2018)

    DyNet v. 2.1 incorporates the following changes:

    • Parameters are now implicitly cast to Expressions in Python. This changes the API slightly, as there is no need to call dy.parameter or .expr anymore (see the short sketch after this list). #1233
    • Python 3.7 support (pre-built binaries on PyPI) #1450 (thanks @danielhers )
    • Advanced Numpy-like slicing #1363 (thanks @msperber)
    • Argmax and straight-through estimators #1208
    • Updated API doc #1312 (thanks @zhechenyan)
    • Fix segmentation fault in RNNs https://github.com/clab/dynet/issues/1371
    • Many other small fixes and QoL improvements (see the full list of merged PRs since the last release for more details)
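
    As a rough before/after sketch of this API change (illustrative only, with made-up dimensions):

    import dynet as dy

    pc = dy.ParameterCollection()
    W = pc.add_parameters((4, 4))

    dy.renew_cg()
    x = dy.inputVector([1.0, 2.0, 3.0, 4.0])
    # Before 2.1:  w = dy.parameter(W); y = w * x
    # Since 2.1, Parameters behave as Expressions directly:
    y = W * x
    print(y.npvalue())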

    Link to the 2.1 documentation: https://dynet.readthedocs.io/en/2.1/

    Source code(tar.gz)
    Source code(zip)
    eigen-b2e267dc99d4.zip(3.16 MB)
  • 2.0.3(Feb 16, 2018)

    DyNet v. 2.0.3 incorporates the following changes:

    • On-GPU random number generation (https://github.com/clab/dynet/issues/1059 https://github.com/clab/dynet/pull/1094 https://github.com/clab/dynet/pull/1154)
    • Memory savings through in-place operations (https://github.com/clab/dynet/pull/1103)
    • More efficient inputTensor that doesn't switch memory layout (https://github.com/clab/dynet/issues/1143)
    • More stable sigmoid (https://github.com/clab/dynet/pull/1200)
    • Fix bug in weight decay (https://github.com/clab/dynet/issues/1201)
    • Many other fixes, etc.

    Link to the documentation: Dynet v2.0.3

    Source code(tar.gz)
    Source code(zip)
  • 2.0.2(Dec 21, 2017)

    v 2.0.2 of DyNet includes the following improvements. Thanks to everyone who made them happen!

    Done:

    • Better organized examples: https://github.com/clab/dynet/issues/191
    • Full multi-device support: https://github.com/clab/dynet/issues/952
    • Broadcasting standard elementwise operations: https://github.com/clab/dynet/pull/776
    • Some refactoring: https://github.com/clab/dynet/issues/522
    • Better profiling: https://github.com/clab/dynet/pull/1088
    • Fix performance regression on autobatching: https://github.com/clab/dynet/issues/974
    • Pre-compiled pip binaries
    • A bunch of other small functionality additions and bug fixes

    Source code(tar.gz)
    Source code(zip)
  • 2.0.1(Sep 2, 2017)

    DyNet v2.0.1 made the following major improvements:

    • Simplified training interface: https://github.com/clab/dynet/pull/695
    • Support for multi-device computation (thanks @xunzhang!): https://github.com/clab/dynet/pull/704
    • A memory-efficient version of LSTMBuilder (thanks @msperber): https://github.com/clab/dynet/pull/729
    • Scratch memory for better memory efficiency (thanks @zhisbug @Abasyoni!): https://github.com/clab/dynet/pull/692
    • Work towards pre-compiled pip files (thanks @danielhers!)

    Source code(tar.gz)
    Source code(zip)
  • v2.0(Jul 10, 2017)

    This release includes a number of new features that are breaking changes with respect to v1.1.

    • DyNet no longer requires Boost (thanks @xunzhang)! This means that models are no longer saved in Boost format, but instead in a format supported natively by DyNet.
    • Other changes to reading and writing include the ability to read/write only parts of models. There have been a number of changes to the reading/writing interface as well, and examples of how to use it can be found in the "examples". (https://github.com/clab/dynet/issues/84)
    • Renaming of "Model" as "ParameterCollection"
    • Removing the dynet::expr namespace in C++ (now expressions are in the dynet:: namespace)
    • Making VanillaLSTMBuilder the default LSTM interface https://github.com/clab/dynet/issues/474

    Other new features include

    • Autobatching (by @yoavgo and @neubig): https://github.com/clab/dynet/blob/master/examples/python/tutorials/Autobatching.ipynb
    • Scala bindings (thanks @joelgrus!) https://github.com/clab/dynet/pull/357
    • Dynamically increasing memory pools (thanks @yoavgo) https://github.com/clab/dynet/pull/364
    • Convolutions and cuDNN (thanks @zhisbug!): https://github.com/clab/dynet/issues/229 and https://github.com/clab/dynet/issues/236
    • Better error handling: https://github.com/clab/dynet/pull/358 and https://github.com/clab/dynet/pull/365
    • Better documentation (thanks @pmichel31415!)
    • Gal dropout (thanks @yoavgo and @pmichel31415!): https://github.com/clab/dynet/pull/261
    • Integration into pip (thanks @danielhers !)
    • A cool new logo! (http://dynet.readthedocs.io/en/latest/citing.html)
    • A huge number of other changes by other contributors. Thank you everyone!
    Source code(tar.gz)
    Source code(zip)
  • v1.1(Jun 28, 2017)

  • v1.0-rc1(Oct 12, 2016)

    This is the first release candidate for DyNet version 1.0. Compared to the previous cnn, it supports a number of new features:

    • Full GPU support
    • Simple support of mini-batching
    • Better integration with Python bindings
    • Better efficiency
    • Correct implementation of l2 regularization
    • More supported functions
    • And much more!
    Source code(tar.gz)
    Source code(zip)
Owner
Chris Dyer's lab @ LTI/CMU