
Overview

AvatarCLIP: Zero-Shot Text-Driven Generation and Animation of 3D Avatars

S-Lab, Nanyang Technological University · SenseTime Research · Shanghai AI Laboratory
*equal contribution  +corresponding author

Accepted to SIGGRAPH 2022 (Journal Track)

TL;DR

AvatarCLIP generates and animates avatars given descriptions of body shapes, appearances and motions.

A tall and skinny female soldier that is arguing. A skinny ninja that is raising both arms. An overweight sumo wrestler that is sitting. A tall and fat Iron Man that is running.

This repository contains the official implementation of AvatarCLIP: Zero-Shot Text-Driven Generation and Animation of 3D Avatars.


[Project Page][arXiv][High-Res PDF (166M)][Supplementary Video][Colab Demo]

Updates

[05/2022] Paper uploaded to arXiv.

[05/2022] Added a Colab demo for avatar generation!

[05/2022] Added support for converting the generated avatar to an animatable FBX format! Check out how to use the FBX models, or see the instructions for the conversion code.

[05/2022] Code released for the avatar generation part!

[04/2022] AvatarCLIP is accepted to SIGGRAPH 2022 (Journal Track) 🥳 !

Citation

If you find our work useful for your research, please consider citing the paper:

@article{hong2022avatarclip,
    title={AvatarCLIP: Zero-Shot Text-Driven Generation and Animation of 3D Avatars},
    author={Hong, Fangzhou and Zhang, Mingyuan and Pan, Liang and Cai, Zhongang and Yang, Lei and Liu, Ziwei},
    journal={ACM Transactions on Graphics (TOG)},
    volume={41},
    number={4},
    articleno={161},
    pages={1--19},
    year={2022},
    publisher={ACM New York, NY, USA},
    doi={10.1145/3528223.3530094},
}

Use Generated FBX Models

Download

Go to our project page and open the 'Avatar Gallery' section. Pick a model you like, click 'Load Model' below it, then click the 'Download FBX' link at the bottom of the pop-up viewer.

Import to Your Favourite 3D Software (e.g. Blender, Unity3D)

The FBX models are already rigged. Use your motion library to animate them!
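
If you prefer to script the import instead of using the GUI, the snippet below is a minimal sketch using Blender's Python API (run it from Blender's scripting console; the file name avatar.fbx is a placeholder for whichever model you downloaded).

import bpy

# Import the downloaded, already-rigged avatar into the current scene.
bpy.ops.import_scene.fbx(filepath='avatar.fbx')

# The imported armature and mesh are selected after import and can be
# driven by your own motion library or retargeted to other animations.
for obj in bpy.context.selected_objects:
    print(obj.name, obj.type)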

Upload to Mixamo

To make use of the rich motion library provided by Mixamo, you can also upload the FBX model to Mixamo. The rigging process is completely automatic!

Installation

We recommend using Anaconda to manage the Python environment. The setup commands below are provided for your reference.

git clone https://github.com/hongfz16/AvatarCLIP.git
cd AvatarCLIP
conda create -n AvatarCLIP python=3.7
conda activate AvatarCLIP
conda install pytorch==1.7.0 torchvision==0.8.0 torchaudio==0.7.0 cudatoolkit=10.1 -c pytorch
pip install -r requirements.txt
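
After installation, the following quick sanity check (illustrative only) confirms that the pinned PyTorch build can see your GPU before moving on.

# Verify the installed PyTorch/torchvision versions and CUDA visibility.
import torch
import torchvision

print('torch', torch.__version__, '| torchvision', torchvision.__version__)
print('CUDA available:', torch.cuda.is_available())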

In addition to the steps above, you also need to install neural_renderer following its instructions. Before compiling neural_renderer (doing it after compiling is also fine), remember to add the following three lines to neural_renderer/perspective.py after line 19.

x[z<=0] = 0
y[z<=0] = 0
z[z<=0] = 0

This quick fix addresses a rendering issue where objects behind the camera would also be rendered. Be careful when using this patched version of neural_renderer in your other projects, because the fix makes the rendering process non-differentiable.

Data Preparation

Download SMPL Models

Register and download SMPL models here. Put the downloaded models in the folder smpl_models. The folder structure should look like

./
├── ...
└── smpl_models/
    └── smpl/
        ├── SMPL_FEMALE.pkl
        ├── SMPL_MALE.pkl
        └── SMPL_NEUTRAL.pkl
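
To check that the SMPL files are readable, the sketch below uses the smplx package (listed in our acknowledgements); this is only an assumption about your setup, and depending on the smplx version you may first need to strip chumpy objects from the original .pkl files as described in the smplx documentation.

# Try to build a neutral SMPL body model from the downloaded files.
import smplx

model = smplx.create('smpl_models', model_type='smpl', gender='neutral')
print(model)  # printing the module confirms the .pkl was found and parsed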

Download Pretrained Models & Other Data

This download is only needed for coarse shape generation; you can skip it if you only want to use the other parts. Download the pretrained weights and other required data here. Put them in the folder AvatarGen so that the folder structure looks like

./
├── ...
└── AvatarGen/
    └── ShapeGen/
        └── data/
            ├── codebook.pth
            ├── model_VAE_16.pth
            ├── nongrey_male_0110.jpg
            ├── smpl_uv.mtl
            └── smpl_uv.obj
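
A quick way to confirm that everything landed in the right place (paths exactly as listed above) is a small existence check such as the following, run from the repository root.

# Check that the ShapeGen data files are present before running coarse shape generation.
import os

required = ['codebook.pth', 'model_VAE_16.pth', 'nongrey_male_0110.jpg', 'smpl_uv.mtl', 'smpl_uv.obj']
for name in required:
    path = os.path.join('AvatarGen', 'ShapeGen', 'data', name)
    print(path, 'ok' if os.path.exists(path) else 'MISSING')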

Avatar Generation

Coarse Shape Generation

Folder AvatarGen/ShapeGen contains the code for this part. Run the following command to generate the coarse shape corresponding to the shape description 'a strong man'. We recommend using the prompt augmentation 'a 3d rendering of xxx in unreal engine' for better results. The generated coarse body mesh will be stored under AvatarGen/ShapeGen/output/coarse_shape.

python main.py --target_txt 'a 3d rendering of a strong man in unreal engine'

Next, render the mesh to initialize the implicit avatar representation, using the following command.

python render.py --coarse_shape_obj output/coarse_shape/a_3d_rendering_of_a_strong_man_in_unreal_engine.obj --output_folder ${RENDER_FOLDER}

Shape Sculpting and Texture Generation

Note that all code was tested on an NVIDIA V100 (32GB memory). To run on GPUs with less memory, scale down the network or reduce max_ray_num in the config files. You can refer to confs/examples_small/example.conf or our Colab demo for a scaled-down version of AvatarCLIP.
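
If you are unsure where max_ray_num lives, the configs follow the NeuS-style HOCON format, so a pyhocon-based peek like the one below should work (pyhocon is an assumption here; check your checkout and confs/examples_small/example.conf for the actual keys).

# Print a config tree to locate max_ray_num and the network sizes before scaling them down.
from pyhocon import ConfigFactory

conf = ConfigFactory.parse_file('confs/examples_small/example.conf')
print(conf)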

Folder AvatarGen/AppearanceGen contains the code for this part. We provide data, a pretrained model and scripts to perform shape sculpting and texture generation on a zero-beta body (the mean shape defined by SMPL). We provide many example scripts under AvatarGen/AppearanceGen/confs/examples. For example, to generate 'Abraham Lincoln', which is defined in the config file confs/examples/abrahamlincoln.conf, use the following command.

python main.py --mode train_clip --conf confs/examples/abrahamlincoln.conf

Results will be stored in AvatarCLIP/AvatarGen/AppearanceGen/exp/smpl/examples/abrahamlincoln.

If you wish to perform shape sculpting and texture generation on the previously generated coarse shape, we also provide example config files in confs/base_models/astrongman.conf and confs/astrongman/*.conf. Two optimization steps are required, as follows.

# Initialization of the implicit avatar
python main.py --mode train --conf confs/base_models/astrongman.conf
# Shape sculpting and texture generation on the initialized implicit avatar
python main.py --mode train_clip --conf confs/astrongman/hulk.conf

Marching Cube

To extract meshes from the generated implicit avatar, one may use the following command.

python main.py --mode validate_mesh --conf confs/examples/abrahamlincoln.conf

The final high-resolution mesh will be stored as AvatarCLIP/AvatarGen/AppearanceGen/exp/smpl/examples/abrahamlincoln/meshes/00030000.ply.
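
For a quick look at the extracted mesh without opening a full 3D package, the following minimal sketch uses the trimesh package (an assumption; any .ply viewer works) and is run from AvatarGen/AppearanceGen.

# Load and inspect the marching-cubes output.
import trimesh

mesh = trimesh.load('exp/smpl/examples/abrahamlincoln/meshes/00030000.ply')
print(mesh.vertices.shape, mesh.faces.shape)
mesh.show()  # opens a simple interactive viewer if a display backend is available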

Convert Avatar to FBX Format

For convenience when using the generated avatar in modern graphics pipelines, we also provide scripts to rig the avatar and convert it to FBX format. See the instructions here.

Motion Generation

TBA

License

Distributed under the MIT License. See LICENSE for more information.

Related Works

There are lots of wonderful works that inspired our work or came around the same time as ours.

Dream Fields enables zero-shot text-driven general 3D object generation using CLIP and NeRF.

Text2Mesh proposes to edit a template mesh by predicting offsets and colors per vertex using CLIP and differentiable rendering.

CLIP-NeRF can manipulate 3D objects represented by NeRF with natural language or exemplar images by leveraging CLIP.

Text to Mesh facilitates zero-shot text-driven general mesh generation by deforming from a sphere mesh guided by CLIP.

MotionCLIP establishes a projection from the CLIP text space to the motion space through supervised training, which leads to amazing text-driven motion generation results.

Acknowledgements

This study is supported by NTU NAP, MOE AcRF Tier 2 (T2EP20221-0033), and under the RIE2020 Industry Alignment Fund – Industry Collaboration Projects (IAF-ICP) Funding Initiative, as well as cash and in-kind contribution from the industry partner(s).

We thank the following repositories for their contributions in our implementation: NeuS, smplx, vposer, Smplx2FBX.

Comments
  • Problem of Motion Generation


    ImportError: ('Unable to load OpenGL library', 'libgcrypt.so.11: cannot open shared object file: No such file or directory', '/home/user/anconda3/envs/AvatarCLIP_1/lib/libOSMesa.so.8', '/home/user/anconda3/envs/AvatarCLIP_1/lib/libOSMesa.so.8')

    There were no problems earlier, but at this point an error appeared that I had not seen before:

    `Traceback (most recent call last): File "/home/user/anconda3/envs/AvatarCLIP_1/lib/python3.8/site-packages/OpenGL/platform/osmesa.py", line 22, in GL return ctypesloader.loadLibrary( File "/home/user/anconda3/envs/AvatarCLIP_1/lib/python3.8/site-packages/OpenGL/platform/ctypesloader.py", line 45, in loadLibrary return dllType( name, mode ) File "/home/user/anconda3/envs/AvatarCLIP_1/lib/python3.8/ctypes/init.py", line 373, in init self._handle = _dlopen(self._name, mode) OSError: ('libgcrypt.so.11: cannot open shared object file: No such file or directory', '/home/user/anconda3/envs/AvatarCLIP_1/lib/libOSMesa.so.8', '/home/user/anconda3/envs/AvatarCLIP_1/lib/libOSMesa.so.8')

    During handling of the above exception, another exception occurred:

    Traceback (most recent call last): File "main.py", line 12, in from visualize import render_pose, render_motion File "/home/user/AvatarCLIP-main/AvatarAnimate/visualize.py", line 9, in import pyrender File "/home/user/anconda3/envs/AvatarCLIP_1/lib/python3.8/site-packages/pyrender/init.py", line 3, in from .light import Light, PointLight, DirectionalLight, SpotLight File "/home/user/anconda3/envs/AvatarCLIP_1/lib/python3.8/site-packages/pyrender/light.py", line 10, in from OpenGL.GL import * File "/home/user/anconda3/envs/AvatarCLIP_1/lib/python3.8/site-packages/OpenGL/GL/init.py", line 3, in from OpenGL import error as _error File "/home/user/anconda3/envs/AvatarCLIP_1/lib/python3.8/site-packages/OpenGL/error.py", line 12, in from OpenGL import platform, _configflags File "/home/user/anconda3/envs/AvatarCLIP_1/lib/python3.8/site-packages/OpenGL/platform/init.py", line 35, in _load() File "/home/user/anconda3/envs/AvatarCLIP_1/lib/python3.8/site-packages/OpenGL/platform/init.py", line 32, in _load plugin.install(globals()) File "/home/user/anconda3/envs/AvatarCLIP_1/lib/python3.8/site-packages/OpenGL/platform/baseplatform.py", line 92, in install namespace[ name ] = getattr(self,name,None) File "/home/user/anconda3/envs/AvatarCLIP_1/lib/python3.8/site-packages/OpenGL/platform/baseplatform.py", line 14, in get value = self.fget( obj ) File "/home/user/anconda3/envs/AvatarCLIP_1/lib/python3.8/site-packages/OpenGL/platform/osmesa.py", line 66, in GetCurrentContext function = self.OSMesa.OSMesaGetCurrentContext File "/home/user/anconda3/envs/AvatarCLIP_1/lib/python3.8/site-packages/OpenGL/platform/baseplatform.py", line 14, in get value = self.fget( obj ) File "/home/user/anconda3/envs/AvatarCLIP_1/lib/python3.8/site-packages/OpenGL/platform/osmesa.py", line 60, in OSMesa def OSMesa( self ): return self.GL File "/home/user/anconda3/envs/AvatarCLIP_1/lib/python3.8/site-packages/OpenGL/platform/baseplatform.py", line 14, in get value = self.fget( obj ) File "/home/user/anconda3/envs/AvatarCLIP_1/lib/python3.8/site-packages/OpenGL/platform/osmesa.py", line 28, in GL raise ImportError("Unable to load OpenGL library", *err.args) ImportError: ('Unable to load OpenGL library', 'libgcrypt.so.11: cannot open shared object file: No such file or directory', '/home/user/anconda3/envs/AvatarCLIP_1/lib/libOSMesa.so.8', '/home/user/anconda3/envs/AvatarCLIP_1/lib/libOSMesa.so.8') `

    Has anyone encountered this situation? Is there any solution?

    opened by fcyx 1
  • Bump numpy from 1.19.2 to 1.21.0


    Bumps numpy from 1.19.2 to 1.21.0.

    Release notes

    Sourced from numpy's releases.

    v1.21.0

    NumPy 1.21.0 Release Notes

    The NumPy 1.21.0 release highlights are

    • continued SIMD work covering more functions and platforms,
    • initial work on the new dtype infrastructure and casting,
    • universal2 wheels for Python 3.8 and Python 3.9 on Mac,
    • improved documentation,
    • improved annotations,
    • new PCG64DXSM bitgenerator for random numbers.

    In addition there are the usual large number of bug fixes and other improvements.

    The Python versions supported for this release are 3.7-3.9. Official support for Python 3.10 will be added when it is released.

    :warning: Warning: there are unresolved problems compiling NumPy 1.21.0 with gcc-11.1 .

    • Optimization level -O3 results in many wrong warnings when running the tests.
    • On some hardware NumPy will hang in an infinite loop.

    New functions

    Add PCG64DXSM BitGenerator

    Uses of the PCG64 BitGenerator in a massively-parallel context have been shown to have statistical weaknesses that were not apparent at the first release in numpy 1.17. Most users will never observe this weakness and are safe to continue to use PCG64. We have introduced a new PCG64DXSM BitGenerator that will eventually become the new default BitGenerator implementation used by default_rng in future releases. PCG64DXSM solves the statistical weakness while preserving the performance and the features of PCG64.

    See upgrading-pcg64 for more details.

    (gh-18906)

    Expired deprecations

    • The shape argument numpy.unravel_index cannot be passed as dims keyword argument anymore. (Was deprecated in NumPy 1.16.)

    ... (truncated)

    Commits
    • b235f9e Merge pull request #19283 from charris/prepare-1.21.0-release
    • 34aebc2 MAINT: Update 1.21.0-notes.rst
    • 493b64b MAINT: Update 1.21.0-changelog.rst
    • 07d7e72 MAINT: Remove accidentally created directory.
    • 032fca5 Merge pull request #19280 from charris/backport-19277
    • 7d25b81 BUG: Fix refcount leak in ResultType
    • fa5754e BUG: Add missing DECREF in new path
    • 61127bb Merge pull request #19268 from charris/backport-19264
    • 143d45f Merge pull request #19269 from charris/backport-19228
    • d80e473 BUG: Removed typing for == and != in dtypes
    • Additional commits viewable in compare view


    dependencies 
    opened by dependabot[bot] 0
  • getting the textures of the avatars from the gallery


    Hi, I've downloaded the avatars from the gallery (tried both .glb and .fbx formats) and I can't find their textures. When loading into Blender or Unity the avatar is rendered without textures. Are the textures in another location I'm missing?

    opened by NaorHaba 0
  • No module named 'skimage'


    Followed instructions, would not run...

    (AvatarCLIP) ➜  ShapeGen git:(main) python main.py --target_txt 'a 3d rendering of a strong man in unreal engine'
    Traceback (most recent call last):
      File "main.py", line 9, in <module>
        import neural_renderer as nr
      File "/home/user/miniconda3/envs/AvatarCLIP/lib/python3.7/site-packages/neural_renderer/__init__.py", line 3, in <module>
        from .load_obj import load_obj
      File "/home/user/miniconda3/envs/AvatarCLIP/lib/python3.7/site-packages/neural_renderer/load_obj.py", line 6, in <module>
        from skimage.io import imread
    ModuleNotFoundError: No module named 'skimage'
    
    opened by chrisbward 0
  • Applying a generated motion on a generated avatar


    Hello,

    I am not able to understand how to apply a generated motion to an avatar I generated.

    I want to be able to get something like the "A skinny ninja that is raising both arms" example that you show.

    I understand how to generate a skinny ninja, and how to generate a generic man that is raising both arms - but how do I get a skinny ninja that is raising both arms?

    Thank you.

    opened by RandomUser999x 1
  • Motion Generation - Unable to load OpenGL library


    Thank you for your work.

    The avatar generation part works great, but I am facing difficulties with the motion generation part.

    For reference:

    (AvatarCLIP) eitan.levy@lambda2:~/AvatarCLIP/AvatarAnimate$ python main.py --conf confs/pose_ablation/pose_optimizer/argue.conf Traceback (most recent call last): File "/home/eitan.levy/anaconda3/envs/AvatarCLIP/lib/python3.7/site-packages/OpenGL/platform/osmesa.py", line 25, in GL mode=ctypes.RTLD_GLOBAL File "/home/eitan.levy/anaconda3/envs/AvatarCLIP/lib/python3.7/site-packages/OpenGL/platform/ctypesloader.py", line 45, in loadLibrary return dllType( name, mode ) File "/home/eitan.levy/anaconda3/envs/AvatarCLIP/lib/python3.7/ctypes/init.py", line 364, in init self._handle = _dlopen(self._name, mode) OSError: ('libgcrypt.so.11: cannot open shared object file: No such file or directory', '/home/eitan.levy/anaconda3/envs/AvatarCLIP/lib/libOSMesa.so.8', '/home/eitan.levy/anaconda3/envs/AvatarCLIP/lib/libOSMesa.so.8')

    During handling of the above exception, another exception occurred:

    Traceback (most recent call last): File "main.py", line 12, in from visualize import render_pose, render_motion File "/home/eitan.levy/AvatarCLIP/AvatarAnimate/visualize.py", line 9, in import pyrender File "/home/eitan.levy/anaconda3/envs/AvatarCLIP/lib/python3.7/site-packages/pyrender/init.py", line 3, in from .light import Light, PointLight, DirectionalLight, SpotLight File "/home/eitan.levy/anaconda3/envs/AvatarCLIP/lib/python3.7/site-packages/pyrender/light.py", line 10, in from OpenGL.GL import * File "/home/eitan.levy/anaconda3/envs/AvatarCLIP/lib/python3.7/site-packages/OpenGL/GL/init.py", line 3, in from OpenGL import error as _error File "/home/eitan.levy/anaconda3/envs/AvatarCLIP/lib/python3.7/site-packages/OpenGL/error.py", line 12, in from OpenGL import platform, _configflags File "/home/eitan.levy/anaconda3/envs/AvatarCLIP/lib/python3.7/site-packages/OpenGL/platform/init.py", line 35, in _load() File "/home/eitan.levy/anaconda3/envs/AvatarCLIP/lib/python3.7/site-packages/OpenGL/platform/init.py", line 32, in _load plugin.install(globals()) File "/home/eitan.levy/anaconda3/envs/AvatarCLIP/lib/python3.7/site-packages/OpenGL/platform/baseplatform.py", line 92, in install namespace[ name ] = getattr(self,name,None) File "/home/eitan.levy/anaconda3/envs/AvatarCLIP/lib/python3.7/site-packages/OpenGL/platform/baseplatform.py", line 14, in get value = self.fget( obj ) File "/home/eitan.levy/anaconda3/envs/AvatarCLIP/lib/python3.7/site-packages/OpenGL/platform/osmesa.py", line 66, in GetCurrentContext function = self.OSMesa.OSMesaGetCurrentContext File "/home/eitan.levy/anaconda3/envs/AvatarCLIP/lib/python3.7/site-packages/OpenGL/platform/baseplatform.py", line 14, in get value = self.fget( obj ) File "/home/eitan.levy/anaconda3/envs/AvatarCLIP/lib/python3.7/site-packages/OpenGL/platform/osmesa.py", line 60, in OSMesa def OSMesa( self ): return self.GL File "/home/eitan.levy/anaconda3/envs/AvatarCLIP/lib/python3.7/site-packages/OpenGL/platform/baseplatform.py", line 14, in get value = self.fget( obj ) File "/home/eitan.levy/anaconda3/envs/AvatarCLIP/lib/python3.7/site-packages/OpenGL/platform/osmesa.py", line 28, in GL raise ImportError("Unable to load OpenGL library", *err.args) ImportError: ('Unable to load OpenGL library', 'libgcrypt.so.11: cannot open shared object file: No such file or directory', '/home/eitan.levy/anaconda3/envs/AvatarCLIP/lib/libOSMesa.so.8', '/home/eitan.levy/anaconda3/envs/AvatarCLIP/lib/libOSMesa.so.8') (AvatarCLIP) eitan.levy@lambda2:~/AvatarCLIP/AvatarAnimate$ python3 main.py --conf confs/pose_ablation/pose_optimizer/argue.conf /home/eitan.levy/anaconda3/envs/AvatarCLIP/lib/python3.7/site-packages/clip/clip.py:24: UserWarning: PyTorch version 1.7.1 or higher is recommended warnings.warn("PyTorch version 1.7.1 or higher is recommended") Traceback (most recent call last): File "/home/eitan.levy/anaconda3/envs/AvatarCLIP/lib/python3.7/site-packages/OpenGL/platform/osmesa.py", line 25, in GL mode=ctypes.RTLD_GLOBAL File "/home/eitan.levy/anaconda3/envs/AvatarCLIP/lib/python3.7/site-packages/OpenGL/platform/ctypesloader.py", line 45, in loadLibrary return dllType( name, mode ) File "/home/eitan.levy/anaconda3/envs/AvatarCLIP/lib/python3.7/ctypes/init.py", line 364, in init self._handle = _dlopen(self._name, mode) OSError: ('libgcrypt.so.11: cannot open shared object file: No such file or directory', '/home/eitan.levy/anaconda3/envs/AvatarCLIP/lib/libOSMesa.so.8', '/home/eitan.levy/anaconda3/envs/AvatarCLIP/lib/libOSMesa.so.8')

    During handling of the above exception, another exception occurred:

    Traceback (most recent call last): File "main.py", line 12, in from visualize import render_pose, render_motion File "/home/eitan.levy/AvatarCLIP/AvatarAnimate/visualize.py", line 9, in import pyrender File "/home/eitan.levy/anaconda3/envs/AvatarCLIP/lib/python3.7/site-packages/pyrender/init.py", line 3, in from .light import Light, PointLight, DirectionalLight, SpotLight File "/home/eitan.levy/anaconda3/envs/AvatarCLIP/lib/python3.7/site-packages/pyrender/light.py", line 10, in from OpenGL.GL import * File "/home/eitan.levy/anaconda3/envs/AvatarCLIP/lib/python3.7/site-packages/OpenGL/GL/init.py", line 3, in from OpenGL import error as _error File "/home/eitan.levy/anaconda3/envs/AvatarCLIP/lib/python3.7/site-packages/OpenGL/error.py", line 12, in from OpenGL import platform, _configflags File "/home/eitan.levy/anaconda3/envs/AvatarCLIP/lib/python3.7/site-packages/OpenGL/platform/init.py", line 35, in _load() File "/home/eitan.levy/anaconda3/envs/AvatarCLIP/lib/python3.7/site-packages/OpenGL/platform/init.py", line 32, in _load plugin.install(globals()) File "/home/eitan.levy/anaconda3/envs/AvatarCLIP/lib/python3.7/site-packages/OpenGL/platform/baseplatform.py", line 92, in install namespace[ name ] = getattr(self,name,None) File "/home/eitan.levy/anaconda3/envs/AvatarCLIP/lib/python3.7/site-packages/OpenGL/platform/baseplatform.py", line 14, in get value = self.fget( obj ) File "/home/eitan.levy/anaconda3/envs/AvatarCLIP/lib/python3.7/site-packages/OpenGL/platform/osmesa.py", line 66, in GetCurrentContext function = self.OSMesa.OSMesaGetCurrentContext File "/home/eitan.levy/anaconda3/envs/AvatarCLIP/lib/python3.7/site-packages/OpenGL/platform/baseplatform.py", line 14, in get value = self.fget( obj ) File "/home/eitan.levy/anaconda3/envs/AvatarCLIP/lib/python3.7/site-packages/OpenGL/platform/osmesa.py", line 60, in OSMesa def OSMesa( self ): return self.GL File "/home/eitan.levy/anaconda3/envs/AvatarCLIP/lib/python3.7/site-packages/OpenGL/platform/baseplatform.py", line 14, in get value = self.fget( obj ) File "/home/eitan.levy/anaconda3/envs/AvatarCLIP/lib/python3.7/site-packages/OpenGL/platform/osmesa.py", line 28, in GL raise ImportError("Unable to load OpenGL library", *err.args) ImportError: ('Unable to load OpenGL library', 'libgcrypt.so.11: cannot open shared object file: No such file or directory', '/home/eitan.levy/anaconda3/envs/AvatarCLIP/lib/libOSMesa.so.8', '/home/eitan.levy/anaconda3/envs/AvatarCLIP/lib/libOSMesa.so.8')

    I cannot seem to solve this problem.

    opened by Eitan229 2
  • The correspondence between the codebook and the codebook_embedding


    Hi,

    I'm glad to read your publication and to try your released demo. As for motion generation, the essential item should be the correspondence between the codebook and the codebook_embedding. However, when I checked your code, I found that the CLIP features of the decoded poses of the codebook are not equivalent to those of the codebook_embedding. From Fig. 8 of the paper, I found that the CLIP feature of one pose is the sum of multiple CLIP features of different views of that pose. Would you mind describing in more detail how the codebook and codebook_embedding are calculated? If you can release the code for extracting codebook_embedding, I would be more than grateful.

    Thank you in advance.

    Best wishes, Jack

    opened by junfanlin 0