Official PyTorch implementation of the paper "TEMOS: Generating diverse human motions from textual descriptions"

Overview

TEMOS: TExt to MOtionS

Generating diverse human motions from textual descriptions

Description

Official PyTorch implementation of the paper "TEMOS: Generating diverse human motions from textual descriptions".

Please visit our webpage for more details.


Bibtex

If you find this code useful in your research, please cite:

@article{petrovich22temos,
  title     = {{TEMOS}: Generating diverse human motions from textual descriptions},
  author    = {Petrovich, Mathis and Black, Michael J. and Varol, G{\"u}l},
  journal   = {arXiv},
  month     = {April},
  year      = {2022}
}

You can also give this repository a star if the code is useful to you.

Installation 👷

1. Create conda environment

conda create python=3.9 --name temos
conda activate temos
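
Install PyTorch 1.10 inside the conda environment. For example, with pip (this installs the default PyPI build; pick a CUDA-specific build from pytorch.org if your setup requires it):

pip install torch==1.10.0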

Then install the following packages:

pip install pytorch_lightning --upgrade
pip install torchmetrics==0.7
pip install hydra-core --upgrade
pip install hydra_colorlog --upgrade
pip install shortuuid
pip install tqdm
pip install pandas
pip install transformers
pip install psutil
pip install einops

The code was tested on Python 3.9.7 and PyTorch 1.10.0.

2. Download the datasets

KIT Motion-Language dataset

Be sure to read and follow their license agreements, and cite accordingly.

Use the code from Ghosh et al. or JL2P to download and prepare the KIT dataset (extraction of xyz joint coordinates from the axis-angle Master Motor Map representation). Move or copy all the files ending with "_meta.json", "_annotations.json" and "_fke.csv" into the datasets/kit folder.
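
For example, assuming the prepared files all ended up in one directory (the path below is hypothetical; adjust it to wherever the preparation script wrote its output):

mkdir -p datasets/kit
cp /path/to/kit-processed/*_meta.json /path/to/kit-processed/*_annotations.json /path/to/kit-processed/*_fke.csv datasets/kit/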

AMASS dataset

WIP: instructions to be released soon

3. Download text model dependencies

Download distilbert from Hugging Face

cd deps/
git lfs install
git clone https://huggingface.co/distilbert-base-uncased
cd ..

4. SMPL body model

WIP: instructions to be released soon

5. (Optional) Download pre-trained models

WIP: instructions to be released soon

How to train TEMOS 🚀

The command to launch a training experiment is the following:

python train.py [OPTIONS]

The parsing is done by using the powerful Hydra library. You can override anything in the configuration by passing arguments like foo=value or foo.bar=value.

Experiment path

Each training run creates a unique output directory (referred to as FOLDER below), where logs, configurations and checkpoints are stored.

By default it is defined as outputs/${data.dataname}/${experiment}/${run_id} with data.dataname the name of the dataset (see examples below), experiment=baseline and run_id a unique random 8-character alphanumeric identifier for the run (everything can be overridden if needed).

This folder is printed during logging; it should look like outputs/kit-mmm-xyz/baseline/3gn7h7v6/.

Some optional parameters

Datasets

  • data=kit-mmm-xyz: KIT-ML motions processed by the MMM framework (as in the original data) loaded as xyz joint coordinates (after axis-angle transformation → xyz) (by default)
  • data=kit-amass-rot: KIT-ML motions loaded as SMPL rotations and translations, from AMASS (processed with MoSh++)
  • data=kit-amass-xyz: KIT-ML motions loaded as xyz joint coordinates, from AMASS (processed with MoSh++) after passing through a SMPL layer and regressing the correct joints.

Training

  • trainer=gpu: training with CUDA, on an automatically selected GPU (default)
  • trainer=cpu: training on the CPU
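
For example, a training run combining the overrides above (the experiment name is arbitrary):

python train.py data=kit-amass-rot trainer=gpu experiment=my-baseline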

How to generate motions with TEMOS

Dataset splits

To get results comparable to previous work, we use the same splits as in Language2Pose and Ghosh et al.. To be explicit, and not rely on random seeds, you can find the list of id-files in datasets/kit-splits/ (train/val/test).

When sampling Ghosh et al.'s motions with their code, I noticed that their dataloader is missing some sequences (see the discussion here). In order to compare all the methods on the same test set, we use the 520 sequences produced by Ghosh et al.'s code for the test set (instead of the 587 sequences). This split is referred to as gtest (for "Ghosh test"). It is used by default in the sampling/evaluation/rendering code. You can change this set by specifying split=SPLIT in each command line.

You can also find in datasets/kit-splits/, the split used for the human-study (human-study) and the split used for the visuals of the paper (visu).

Sampling/generating motions

The command line to sample one motion per sequence is the following:

python sample.py folder=FOLDER [OPTIONS]

This command will create the folder FOLDER/samples/SPLIT and save the motions in the npy format.

Some optional parameters

  • mean=true: Take the mean of the latent distribution instead of sampling from it (default is mean=false)
  • number_of_samples=X: Generate X motions (by default it generates only one)
  • fact=X: Multiplies sigma by X during sampling (1.0 by default, diversity can be increased when fact>1)
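
For example, to generate 5 motions per sequence with slightly more diversity (using the illustrative output folder from above):

python sample.py folder=outputs/kit-mmm-xyz/baseline/3gn7h7v6 number_of_samples=5 fact=1.2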

Model trained on SMPL rotations

If your model has been trained with data=kit-amass-rot, it produces SMPL rotations and translations. In this case, you can specify the type of data you want to save after passing through the SMPL layer.

  • jointstype=mmm: Generate xyz joints compatible with the MMM bodies (by default). This gives skeletons comparable to data=kit-mmm-xyz (needed for evaluation).
  • jointstype=vertices: Generate human body meshes (needed for rendering).
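
For example, to generate meshes for rendering from a model trained with data=kit-amass-rot:

python sample.py folder=FOLDER jointstype=vertices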

Evaluating TEMOS (and prior works)

To evaluate TEMOS on the metrics defined in the paper, you must generate motions first (see above), and then run:

python evaluate.py folder=FOLDER [OPTIONS]

This command computes the metrics and stores them in the file FOLDER/samples/metrics_SPLIT in YAML format.

Some optional parameters

The same parameters as in sample.py apply; the script will choose the right directories for you. When evaluating with number_of_samples>1, the script computes two sets of metrics: metrics_gtest_multi_avg (the average of single-sample metrics) and metrics_gtest_multi_best (choosing the best output for each motion). Please check the paper for more details.
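
For example, a sketch of a multi-sample evaluation (assuming the motions were generated with the same number_of_samples beforehand):

python sample.py folder=FOLDER number_of_samples=5
python evaluate.py folder=FOLDER number_of_samples=5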

Model trained on SMPL rotations

Currently, evaluation is only implemented on skeletons in the MMM format. You must therefore use jointstype=mmm during sampling.

Evaluating prior works

WIP: the proper instructions and code will be available soon.

To give an overview:

  1. Generate motions with their code (it is still in the rifke feature space)
  2. Save them in xyz format (I "hack" their render script, to save them in xyz npy format instead of rendering)
  3. Load them into the evaluation code (instead of loading TEMOS motions).

Rendering motions

To produce the visuals of the paper, I use Blender 2.93. The setup (installation and running) is not trivial; I do my best to explain the process below, but don't hesitate to tell me if you have a problem.

Installation

The goal is to install Blender so that it can be used from Python scripts (so that we can use import bpy). There seem to be many different ways to do this; I will explain the one I use and understand (feel free to use other methods or suggest an easier way). Blender is installed as a standalone package. To use my scripts, we run Blender in the background, and the Python executable bundled with Blender runs the script.

In any case, after the installation, please do step 5 to install the dependencies in Blender's Python environment.

  1. Please follow the instructions to install blender 2.93 on your operating system. Please install exactly this version.
  2. Locate the blender executable if it is not in your path. For the following commands, please replace blender with the path to your executable (or create a symbolic link or use an alias).
    • On Linux, it could be in /usr/bin/blender (already in your path).
    • On macOS, it could be in /Applications/Blender.app/Contents/MacOS/Blender (not in your path)
  3. Check that the correct version is installed:
    • blender --background --version should return "Blender 2.93.X".
    • blender --background --python-expr "import sys; print('\nThe version of python is '+sys.version.split(' ')[0])" should return "3.9.X".
  4. Locate the Python installation used by Blender with the following command. I will refer to this path as path/to/blender/python.
blender --background --python-expr "import sys; import os; print('\nThe path to the installation of python of blender can be:'); print('\n'.join(['- '+x for x in sys.path if 'python' in (file:=os.path.split(x)[-1]) and not file.endswith('.zip')]))"
  5. Install these packages in the Python environment of Blender:
path/to/blender/python -m pip install --user numpy
path/to/blender/python -m pip install --user matplotlib
path/to/blender/python -m pip install --user hydra-core --upgrade
path/to/blender/python -m pip install --user hydra_colorlog --upgrade
path/to/blender/python -m pip install --user moviepy
path/to/blender/python -m pip install --user shortuuid
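
As a quick sanity check (this simply imports two of the packages installed above from within Blender's Python; it should print the message if the installation succeeded):

blender --background --python-expr "import hydra; import moviepy; print('Dependencies found')"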

Launch a python script (with arguments) with blender

Now that blender is installed, if we want to run the script script.py with the blender API (the bpy module), we can use:

blender --background --python script.py

If you need to add additional arguments, this will probably fail (as blender will interpret the arguments). Please use the double dash -- to tell blender to ignore the rest of the command. I then only parse the last part of the command (check temos/launch/blender.py if you are interested).

Rendering one sample

To render only one motion, please use this command line:

blender --background --python render.py -- npy=PATH_TO_DATA.npy [OPTIONS]

Rendering all the data

Please use this command line to render all the data of a split (which has to be generated with sample.py beforehand). I suggest using split=visu to render only a small subset.

blender --background --python render.py -- folder=FOLDER [OPTIONS]

SMPL bodies

Don't forget to generate the data with the option jointstype=vertices beforehand. The renderer will automatically detect whether the motion is a sequence of joints or meshes.

Some optional parameters

  • downsample=true: Render only 1 frame every 8 frames, to speed up rendering (by default)
  • canonicalize=true: Make sure the first pose is oriented canonically (by translating and rotating the entire sequence) (by default)
  • mode=XXX: Choose the rendering mode (default is mode=sequence)
    • video: Render all the frames and generate a video (as in the supplementary video)
    • sequence: Render a single frame, with num=8 bodies (sampled equally, as in the figures of the paper)
    • frame: Render a single frame at a specific point in time (e.g. exact_frame=0.5 renders the frame at about 50% of the video)
  • quality=true: Render at a higher resolution and denoise the output (default is false, to speed up rendering)
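
For example, to render full videos at higher quality for the small visu split (combining the options documented above):

blender --background --python render.py -- folder=FOLDER split=visu mode=video quality=true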

License

This code is distributed under an MIT LICENSE.

Note that our code depends on other libraries, including SMPL, SMPL-X, PyTorch3D, Hugging Face, Hydra, and uses datasets which each have their own respective licenses that must also be followed.

Comments
  • File Not Found Error

    Hi, thank you for sharing the official code. I followed the instructions in the README.md file. However, when I tried to run train.py, I got a FileNotFoundError, and I would like to know what causes this error and how to fix it.

    Many thanks for your kind help.

    opened by Dean-UQ 5
  • About non-rescaled version

    Hello, I would like to use the non-scaled version suggested in your paper.

    I'm reading and using the code, but I want to ask whether the approach below is correct.

    1. in temos/model/metrics/compute.py, https://github.com/Mathux/TEMOS/blob/d9d22064882b50a6a581afa44e878b2155cc4c7c/temos/model/metrics/compute.py#L26 I changed the parameter force_in_meter to False
    2. in /temos/transforms/rots2joints/smplh.py, https://github.com/Mathux/TEMOS/blob/d9d22064882b50a6a581afa44e878b2155cc4c7c/temos/transforms/rots2joints/smplh.py#L138-L140 I removed these codes when I use smpl data

    Is this the correct way to obtain the non-rescaled version with SMPL data?

    Thanks!

    opened by dwro0121 5
  • Evaluation with past research

    Hi dear authors, I would like to start by saying thank you for your amazing work. Did you re-implement past research (Lin et al. / JL2P / Ghosh et al.)? How can I evaluate them with your code?

    opened by wooheum-xin 4
  • Rendering SMPLX motion sequence of AMASS

    Hi, thanks for your excellent work. I followed your instructions and successfully visualized the SMPL motion sequences of AMASS. I then tried to visualize an SMPL-X motion sequence, saving the meshes with shape (T, 10475, 3), and rendered them with render.py using Blender. However, the rendering result looks wrong (see the attached screenshot). Do you have any idea about this issue?

    opened by linjing7 2
  • Custom Prompts

    Hello, thanks for releasing the code for this awesome work! Does this repo currently support custom prompts? If not, could you highlight the necessary changes?

    opened by AkbarShah96 2
  • Normalize Values Calculation

    Hi, congrats and thanks for releasing your work! May I double-check how you calculated the mean and std in the normalization step? (e.g. the files under ${path.deps}/transforms/rots2rfeats/smplvelp/${.pose_rep}/${data.dataname}).

    Thanks!

    opened by KevinQian97 2
  • About Evaluation

    Hi dear author, Thanks for the amazing work.

    I tried to test the model so I followed the instructions in README

    1. download the pre-trained models
    2. run sample.py to sample the motions with the pre-trained kit-mmm-xyz model
    3. run evaluate.py to calculate the metrics with motions above

    However, I found that the results were not as good as reported. I am new to this area and don't know what I did wrong. I would be deeply grateful if you could help me find the mistake. Thanks!

    |        | APE_root | APE_traj | APE_mean_pose | APE_mean_joints | AVE_root | AVE_traj | AVE_mean_pose | AVE_mean_joints |
    |--------|----------|----------|---------------|-----------------|----------|----------|---------------|-----------------|
    | Paper  | 0.963    | 0.955    | 0.104         | 0.976           | 0.445    | 0.445    | 0.005         | 0.448           |
    | My run | 1.033    | 1.024    | 0.104         | 1.048           | 0.448    | 0.448    | 0.005         | 0.451           |

    opened by 52PengUin 2
  • About checkpoints

    Hi, thanks for your great work. I have a question about checkpoints.

    Looking at the config files, I see that you used mode=max in latest_checkpoint.yaml, but I can't find it in last_checkpoint.yaml. If you used the same metric for both, I think it should be removed from latest_checkpoint.yaml (if the metric is an error or a loss).

    How do you think about this?

    Additionally, I want to know which one is the best model. Is the last.ckpt the best model with metrics (valid error or loss)?

    thanks.

    opened by dwro0121 2
  • APE and AVE_pose metric calculation

    Hey, Thank you so much for sharing this project. I was trying to run the training code and saw what I think is a little bug here https://github.com/Mathux/TEMOS/blob/ea12cf6b22122aa5be95bbd75fcc374c0f42398a/temos/model/base.py#L45

    Should it have a .mean()? i.e. dico.update({f"Metrics/{metric}": value.mean() for metric, value in metrics_dict.items()}). Otherwise, I get an error (see the attached screenshot).

    Please let me know if I'm right or not :)

    Thanks!

    opened by angelacast135 1
  • Problems related to blender installation

    Thanks for your tutorial of the blender installation.
    There are some problems. The command:

    blender --background --python-expr "import sys; import os; print('\nThe path to the installation of python of blender can be:'); print('\n'.join(['- '+x for x in sys.path if 'python' in (file:=os.path.split(x)[-1]) and not file.endswith('.zip')]))"
    

    does not work correctly on my macOS.

    It returns me a path /Applications/Blender.app/Contents/Resources/3.2/python/lib/python3.10. However, this is not an executable python path. I changed it into /Applications/Blender.app/Contents/Resources/3.2/python/bin/python3.10, which works well on my MacOS. Is it a bug of the provided command?

    This is the problem I encountered; others with the same problem can solve it similarly. I hope the authors can update this command. Thanks!

    opened by LinghaoChan 1
  • How to get demo presented as pictures in the README files

    Hi dear authors,

    I would like to start by saying thank you for your amazing work. I managed to train the model and run the sample.py file, but the results I got were only .npy files. Am I doing something wrong, or are there additional steps I need to take to produce results similar to those presented in the teaser figures?

    opened by yohannes-taye 1
  • when I just set 'vae=False' in temos.yaml, I got the errors below

    Hi authors, I want to try disabling the VAE and saving its weights, but I ran into some problems (sorry, I just started).

      File "/Users/eanson/opt/miniconda3/envs/temos/lib/python3.9/site-packages/pytorch_lightning/strategies/strategy.py", line 370, in validation_step
        return self.model.validation_step(*args, **kwargs)
      File "/Users/eanson/Documents/dl/TEMOS/temos/model/base.py", line 33, in validation_step
        return self.allsplit_step("val", batch, batch_idx)
      File "/Users/eanson/Documents/dl/TEMOS/temos/model/temos.py", line 144, in allsplit_step
        loss = self.losses[split].update(ds_text=datastruct_from_text,
      File "/Users/eanson/Documents/dl/TEMOS/temos/model/losses/compute.py", line 83, in update
        total += self._update_loss("kl_text2motion", dis_text, dis_motion)
      File "/Users/eanson/Documents/dl/TEMOS/temos/model/losses/compute.py", line 105, in _update_loss
        val = self._losses_func[loss](outputs, inputs)
      File "/Users/eanson/Documents/dl/TEMOS/temos/model/losses/kl.py", line 9, in __call__
        div = torch.distributions.kl_divergence(q, p)
      File "/Users/eanson/opt/miniconda3/envs/temos/lib/python3.9/site-packages/torch/distributions/kl.py", line 170, in kl_divergence
        raise NotImplementedError("No KL(p || q) is implemented for p type {} and q type {}"
    NotImplementedError: No KL(p || q) is implemented for p type NoneType and q type NoneType
    

    pip list

    Package                 Version
    ----------------------- -----------
    absl-py                 1.3.0
    aiohttp                 3.8.3
    aiosignal               1.2.0
    antlr4-python3-runtime  4.9.3
    astroid                 2.12.12
    async-timeout           4.0.2
    attrs                   22.1.0
    beautifulsoup4          4.11.1
    cachetools              5.2.0
    certifi                 2022.9.24
    charset-normalizer      2.1.1
    colorlog                6.7.0
    commonmark              0.9.1
    contourpy               1.0.6
    cycler                  0.11.0
    decorator               4.4.2
    dill                    0.3.6
    einops                  0.5.0
    filelock                3.8.0
    fonttools               4.38.0
    frozenlist              1.3.1
    fsspec                  2022.10.0
    google-auth             2.13.0
    google-auth-oauthlib    0.4.6
    grpcio                  1.50.0
    huggingface-hub         0.10.1
    hydra-colorlog          1.2.0
    hydra-core              1.2.0
    idna                    3.4
    imageio                 2.22.2
    imageio-ffmpeg          0.4.7
    importlib-metadata      5.0.0
    isort                   5.10.1
    kiwisolver              1.4.4
    lazy-object-proxy       1.7.1
    Markdown                3.4.1
    MarkupSafe              2.1.1
    matplotlib              3.6.1
    mccabe                  0.7.0
    moviepy                 1.0.3
    multidict               6.0.2
    numpy                   1.23.4
    oauthlib                3.2.2
    omegaconf               2.2.3
    packaging               21.3
    pandas                  1.5.1
    Pillow                  9.3.0
    pip                     22.2.2
    platformdirs            2.5.2
    proglog                 0.1.10
    protobuf                3.19.6
    psutil                  5.9.3
    pyasn1                  0.4.8
    pyasn1-modules          0.2.8
    pyDeprecate             0.3.2
    Pygments                2.13.0
    pylint                  2.15.5
    pyparsing               3.0.9
    PySocks                 1.7.1
    python-dateutil         2.8.2
    pytorch-lightning       1.7.7
    pytz                    2022.5
    PyYAML                  6.0
    regex                   2022.9.13
    requests                2.28.1
    requests-oauthlib       1.3.1
    rich                    12.6.0
    rsa                     4.9
    setuptools              59.5.0
    shortuuid               1.0.9
    six                     1.16.0
    soupsieve               2.3.2.post1
    tensorboard             2.10.1
    tensorboard-data-server 0.6.1
    tensorboard-plugin-wit  1.8.1
    tokenizers              0.13.1
    tomli                   2.0.1
    tomlkit                 0.11.5
    torch                   1.13.0
    torchmetrics            0.7.0
    torchvision             0.14.0
    tqdm                    4.64.1
    transformers            4.23.1
    typing_extensions       4.4.0
    urllib3                 1.26.12
    Werkzeug                2.2.2
    wheel                   0.37.1
    wrapt                   1.14.1
    yarl                    1.8.1
    zipp                    3.9.0
    
    opened by eanson023 1
  • interact.py example gives errors, import fails

    Hi, I am really interested in trying your method. I tried installing and following your advice to use the interact.py script, but I get errors saying that temos.data.kit cannot be found. Is there some special instruction for installing TEMOS, such as an install script in addition to the git clone, or a proper way to add it to sys.path in Python?

    Thanks a lot for your help

     python interact.py folder=pretrained_models/kit-mmm-xyz/3l49g7hv/ saving=kick text="A person kicks with the right foot." length=60
    [10/25/22 15:07:48] INFO     Interaction script. The result will be saved there: kick                   interact.py:52
                        INFO     The sentence is: A person kicks with the right foot.                       interact.py:53
    [10/25/22 15:07:50] INFO     Created a temporary directory at /tmp/tmp4gjoeeh5                      instantiator.py:21
                        INFO     Writing /tmp/tmp4gjoeeh5/_remote_module_non_scriptable.py              instantiator.py:76
                        INFO     Global seed set to 1234                                                        seed.py:71
                        INFO     Loading model                                                              interact.py:71
                        INFO     Loading data module                                                        interact.py:77
    Error executing job with overrides: ['folder=pretrained_models/kit-mmm-xyz/3l49g7hv/', 'saving=kick', 'text=A person kicks with the right foot.', 'length=60']
    Traceback (most recent call last):
      File "/usr/local/lib/python3.7/dist-packages/hydra/_internal/utils.py", line 639, in _locate
        obj = getattr(obj, part)
    AttributeError: module 'temos.data' has no attribute 'kit'
    
    During handling of the above exception, another exception occurred:
    
    Traceback (most recent call last):
      File "/usr/local/lib/python3.7/dist-packages/hydra/_internal/utils.py", line 645, in _locate
        obj = import_module(mod)
      File "/usr/lib/python3.7/importlib/__init__.py", line 127, in import_module
        return _bootstrap._gcd_import(name[level:], package, level)
      File "<frozen importlib._bootstrap>", line 1006, in _gcd_import
      File "<frozen importlib._bootstrap>", line 983, in _find_and_load
      File "<frozen importlib._bootstrap>", line 967, in _find_and_load_unlocked
      File "<frozen importlib._bootstrap>", line 677, in _load_unlocked
      File "<frozen importlib._bootstrap_external>", line 728, in exec_module
      File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
      File "/content/TEMOS/temos/data/kit.py", line 15, in <module>
        from temos.transforms import Transform
      File "/content/TEMOS/temos/transforms/__init__.py", line 2, in <module>
        from .smpl import SMPLTransform
      File "/content/TEMOS/temos/transforms/smpl.py", line 8, in <module>
        from .joints2jfeats import Joints2Jfeats
      File "/content/TEMOS/temos/transforms/joints2jfeats/__init__.py", line 2, in <module>
        from .rifke import Rifke
      File "/content/TEMOS/temos/transforms/joints2jfeats/rifke.py", line 11, in <module>
        class Rifke(Joints2Jfeats):
      File "/content/TEMOS/temos/transforms/joints2jfeats/rifke.py", line 122, in Rifke
        def extract(self, features: Tensor) -> tuple[Tensor]:
    TypeError: 'type' object is not subscriptable
    
    The above exception was the direct cause of the following exception:
    
    Traceback (most recent call last):
      File "/usr/local/lib/python3.7/dist-packages/hydra/_internal/instantiate/_instantiate2.py", line 134, in _resolve_target
        target = _locate(target)
      File "/usr/local/lib/python3.7/dist-packages/hydra/_internal/utils.py", line 655, in _locate
        ) from exc_import
    ImportError: Error loading 'temos.data.kit.KITDataModule':
    TypeError("'type' object is not subscriptable")
    
    The above exception was the direct cause of the following exception:
    
    Traceback (most recent call last):
      File "interact.py", line 146, in <module>
        _interact()
      File "/usr/local/lib/python3.7/dist-packages/hydra/main.py", line 95, in decorated_main
        config_name=config_name,
      File "/usr/local/lib/python3.7/dist-packages/hydra/_internal/utils.py", line 396, in _run_hydra
        overrides=overrides,
      File "/usr/local/lib/python3.7/dist-packages/hydra/_internal/utils.py", line 453, in _run_app
        lambda: hydra.run(
      File "/usr/local/lib/python3.7/dist-packages/hydra/_internal/utils.py", line 216, in run_and_report
        raise ex
      File "/usr/local/lib/python3.7/dist-packages/hydra/_internal/utils.py", line 213, in run_and_report
        return func()
      File "/usr/local/lib/python3.7/dist-packages/hydra/_internal/utils.py", line 456, in <lambda>
        overrides=overrides,
      File "/usr/local/lib/python3.7/dist-packages/hydra/_internal/hydra.py", line 132, in run
        _ = ret.return_value
      File "/usr/local/lib/python3.7/dist-packages/hydra/core/utils.py", line 260, in return_value
        raise self._return_value
      File "/usr/local/lib/python3.7/dist-packages/hydra/core/utils.py", line 186, in run_job
        ret.return_value = task_function(task_cfg)
      File "interact.py", line 14, in _interact
        return interact(cfg)
      File "interact.py", line 78, in interact
        data_module = instantiate(cfg.data)
      File "/usr/local/lib/python3.7/dist-packages/hydra/_internal/instantiate/_instantiate2.py", line 223, in instantiate
        config, *args, recursive=_recursive_, convert=_convert_, partial=_partial_
      File "/usr/local/lib/python3.7/dist-packages/hydra/_internal/instantiate/_instantiate2.py", line 325, in instantiate_node
        _target_ = _resolve_target(node.get(_Keys.TARGET), full_key)
      File "/usr/local/lib/python3.7/dist-packages/hydra/_internal/instantiate/_instantiate2.py", line 139, in _resolve_target
        raise InstantiationException(msg) from e
    hydra.errors.InstantiationException: Error locating target 'temos.data.kit.KITDataModule', see chained exception above.
    full_key: data
    
    opened by nikjetchev 3
  • Training on multi-GPU

    Hi dear author, I would like to train TEMOS on multiple GPUs. As shown in the first screenshot below, it runs well when using one GPU (the default). But when I try to run it on multiple GPUs and set gpus: 4 in the trainer config file, I get a TypeError (second screenshot). I would like to know how to train TEMOS on multiple GPUs.

    One GPU training: [screenshot]

    Four GPU training: [screenshot]

    opened by Lucky-Maximize 1
  • About rendering on a Windows system

    Hi dear authors, I would like to start by saying thank you for your amazing work.

    I tried rendering on my Windows system the way you described, but it didn't work. I cannot install those packages in the Python environment of Blender.

    like "defaulting to user installation because normal site-packages is not writeable"

    opened by wooheum-xin 1