Research code for CVPR 2021 paper "End-to-End Human Pose and Mesh Reconstruction with Transformers"

Overview

MeshTransformer

This is our research code for End-to-End Human Pose and Mesh Reconstruction with Transformers.

MEsh TRansfOrmer (METRO) is a simple yet effective transformer-based method for human pose and mesh reconstruction from an input image. In this repository, we provide our research code for training and testing our proposed method on the following tasks:

  • Human pose and mesh reconstruction
  • Hand pose and mesh reconstruction

Installation

Check INSTALL.md for installation instructions.

Model Zoo and Download

Please download our pre-trained models and the other files required to run our code.

Check DOWNLOAD.md for details.

Quick demo

We provide demo code to run end-to-end inference on test images.

Check DEMO.md for details.

Experiments

We provide Python code for training and evaluation.

Check EXP.md for details.

Contributing

We welcome contributions and suggestions. Please check CONTRIBUTE and CODE_OF_CONDUCT for details.

Citations

If you find our work useful in your research, please consider citing:

@inproceedings{lin2021end-to-end,
author = {Lin, Kevin and Wang, Lijuan and Liu, Zicheng},
title = {End-to-End Human Pose and Mesh Reconstruction with Transformers},
booktitle = {CVPR},
year = {2021},
}

License

Our research code is released under the MIT license. See LICENSE for details.

We use submodules from third parties, such as huggingface/transformers and hassony2/manopth. Please see NOTICE for details.

We note that any use of the SMPL and MANO models is subject to the Software Copyright License for non-commercial scientific research purposes. See SMPL-Model License and MANO License for details.

Acknowledgments

Our implementation and experiments are built on top of open-source GitHub repositories. We thank all the authors who made their code public, which tremendously accelerates our project progress. If you find these works helpful, please consider citing them as well.

huggingface/transformers

HRNet/HRNet-Image-Classification

nkolot/GraphCMR

akanazawa/hmr

MandyMo/pytorch_HMR

hassony2/manopth

hongsukchoi/Pose2Mesh_RELEASE

mks0601/I2L-MeshNet_RELEASE

open-mmlab/mmdetection

Comments
  • 3D mesh is not generated

    Hi,

    First of all, thank you so much for your contribution!

    I am facing a problem when running your quick demo code. I expected a 3D mesh to be generated as shown in your demo.

    However, no 3D mesh was generated in my example.

    Could you please give me some advice?

    opened by tqtrunghnvn 19
  • AttributeError: 'ColoredRenderer' object has no attribute 'vbo_verts_face'

    Hello. I ran into an error when running the demo.

    Traceback (most recent call last):
      File "./metro/tools/end2end_inference_bodymesh.py", line 318, in <module>
        main(args)
      File "./metro/tools/end2end_inference_bodymesh.py", line 314, in main
        run_inference(args, image_list, _metro_network, mesh_smpl, renderer, mesh_sampler)
      File "./metro/tools/end2end_inference_bodymesh.py", line 90, in run_inference
        att[-1][0].detach())
      File "./metro/tools/end2end_inference_bodymesh.py", line 121, in visualize_mesh_and_attention
        rend_img = visualize_reconstruction_and_att_local(img, 224, vertices_full, vertices, vertices_2d, cam, renderer, joints_2d, att, color='pink')
      File "/home/user/Desktop/MeshTransformer/metro/utils/renderer.py", line 407, in visualize_reconstruction_and_att_local
        focal_length=focal_length, body_color=color)
      File "/home/user/Desktop/MeshTransformer/metro/utils/renderer.py", line 608, in render
        return self.renderer.r
      File "/home/user/anaconda3/envs/metro/lib/python3.7/site-packages/chumpy/ch.py", line 594, in r
        self._call_on_changed()
      File "/home/user/anaconda3/envs/metro/lib/python3.7/site-packages/chumpy/ch.py", line 589, in _call_on_changed
        self.on_changed(self._dirty_vars)
      File "/home/user/anaconda3/envs/metro/lib/python3.7/site-packages/opendr-0.73-py3.7.egg/opendr/renderer.py", line 1082, in on_changed
        self.vbo_verts_face.set_array(np.array(self.verts_by_face).astype(np.float32))
    AttributeError: 'ColoredRenderer' object has no attribute 'vbo_verts_face'

    opened by CheungBH 5
  • demo error

    MeshTransformer-main/metro/tools/end2end_inference_bodymesh.py --resume_checkpoint ./models/metro_release/metro_3dpw_state_dict.bin --image_file_or_path ./samples/human-body
    2021-12-22 20:25:42,376 METRO Inference INFO: Using 1 GPUs
    anaconda3/envs/metro/lib/python3.7/site-packages/scipy/sparse/_index.py:84: SparseEfficiencyWarning: Changing the sparsity structure of a csr_matrix is expensive. lil_matrix is more efficient.
      self._set_intXint(row, col, x.flat[0])
    anaconda3/envs/metro/lib/python3.7/site-packages/scipy/sparse/_index.py:84: SparseEfficiencyWarning: Changing the sparsity structure of a csr_matrix is expensive. lil_matrix is more efficient.
      self._set_intXint(row, col, x.flat[0])
    anaconda3/envs/metro/lib/python3.7/site-packages/scipy/sparse/_index.py:84: SparseEfficiencyWarning: Changing the sparsity structure of a csr_matrix is expensive. lil_matrix is more efficient.
      self._set_intXint(row, col, x.flat[0])
    Model name 'metro/modeling/bert/bert-base-uncased/' was not found in model name list (bert-base-uncased, bert-large-uncased, bert-base-cased, bert-large-cased, bert-base-multilingual-uncased, bert-base-multilingual-cased, bert-base-chinese, bert-base-german-cased, bert-large-uncased-whole-word-masking, bert-large-cased-whole-word-masking, bert-large-uncased-whole-word-masking-finetuned-squad, bert-large-cased-whole-word-masking-finetuned-squad, bert-base-cased-finetuned-mrpc). We assumed 'metro/modeling/bert/bert-base-uncased/' was a path or url but couldn't find any file associated to this path or url.
    Traceback (most recent call last):
      File "MeshTransformer-main/metro/tools/end2end_inference_bodymesh.py", line 316, in <module>
        main(args)
      File "MeshTransformer-main/metro/tools/end2end_inference_bodymesh.py", line 222, in main
        config.output_attentions = False
    AttributeError: 'NoneType' object has no attribute 'output_attentions'
    2021-12-22 20:25:47,352 METRO Inference INFO: Inference: Loading from checkpoint ./models/metro_release/metro_3dpw_state_dict.bin
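
    A hedged reading of the log above: in this pytorch-transformers version, the config's from_pretrained returns None when the local model directory metro/modeling/bert/bert-base-uncased/ is missing (it is normally downloaded via DOWNLOAD.md), and the failure only surfaces later as the NoneType error. A minimal sanity-check sketch, assuming that diagnosis:

    import os

    model_dir = "metro/modeling/bert/bert-base-uncased/"  # path from the log above
    if not os.path.isdir(model_dir):
        # from_pretrained would silently return None here; fail loudly instead
        raise FileNotFoundError(
            f"{model_dir} is missing; download the BERT files per DOWNLOAD.md"
        )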

    opened by ClimberY 4
  • Quick Demo Problem about opendr

    The following message appeared after running the human body reconstruction script from the Quick Demo.

    Traceback (most recent call last):
      File "./metro/tools/end2end_inference_bodymesh.py", line 28, in <module>
        from metro.utils.renderer import Renderer, visualize_reconstruction, visualize_reconstruction_test, visualize_reconstruction_no_text, visualize_reconstruction_and_att_local
      File "/root/MeshTransformer/metro/utils/renderer.py", line 15, in <module>
        from opendr.renderer import ColoredRenderer, TexturedRenderer
      File "/root/anaconda3/envs/maskrcnn/lib/python3.7/site-packages/opendr/renderer.py", line 25, in <module>
        from .contexts.ctx_mesa import OsContext
      File "opendr/contexts/ctx_base.pyx", line 18, in init opendr.contexts.ctx_mesa
    ModuleNotFoundError: No module named '_constants'
    

    And I have no clue how to fix it. If anyone could come up with a solution, I would really appreciate it.

    opened by AndyVerne 4
  • the estimation result on 3DPW using ResNet50 backbone

    Hi, I'm curious about the quantitative performance of METRO with a ResNet50 backbone on the 3DPW dataset, since the official repo doesn't provide pre-trained models with ResNet50. I'd be grateful for any advice.

    opened by tinatiansjz 4
  • Mapping 3D pose from cropped image patch to the original images!

    To deal with multi-person 3D pose estimation, I've run METRO on cropped patches of the image to get the 3D joints from SMPL (using the get_joints function). So now I have joints of shape 1x24x3 for each patch. Given a cropped bounding box [x, y, w, h], how can I map the joints back to the original image? Thanks
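
    For reference, a minimal sketch (not code from this repo) of one way to undo the crop, assuming the GraphCMR-style convention this codebase follows, in which the weak-perspective camera projects joints into normalized [-1, 1] coordinates over the 224x224 crop. The root-relative 3D joints themselves do not depend on the crop; only their 2D image alignment does, so the mapping below applies to the projected 2D joints.

    import numpy as np

    def crop2d_to_original(joints_2d_norm, bbox, crop_size=224):
        """Map projected joints from normalized crop coords back to image pixels.

        joints_2d_norm: (N, 2) array in [-1, 1] (orthographic_projection output).
        bbox: [bx, by, bw, bh] of the crop in the original image.
        """
        bx, by, bw, bh = bbox
        # normalized [-1, 1] -> crop pixel coordinates [0, crop_size]
        joints_crop = (joints_2d_norm + 1.0) * 0.5 * crop_size
        # crop pixels -> original-image pixels (the bbox region was resized to crop_size)
        return joints_crop * np.array([bw, bh]) / crop_size + np.array([bx, by])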

    opened by thancaocuong 4
  • 3D Pose from vertices

    Hi!

    Once again, kudos for the amazing work. I can't wait for Mesh Graphormer to come out (hopefully it will).

    Is there a part of the code where we can extract the 3D pose (and maybe also the betas) from the vertices, similar to what you did for the 3D joints (get_joints)?

    Cheers!
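
    For reference: in SMPL-style models each 3D joint is a fixed linear combination of the mesh vertices given by a joint regressor matrix, which is essentially what get_joints does (the same idea underlies get_h36m_joints with a different regressor). A minimal sketch, assuming a dense J_regressor of shape (num_joints, num_vertices); note that betas cannot be read off the vertices this directly and would require fitting SMPL to the mesh:

    import torch

    def joints_from_vertices(vertices, J_regressor):
        """vertices: (B, 6890, 3); J_regressor: (J, 6890). Returns (B, J, 3)."""
        # joint j is sum_i J_regressor[j, i] * vertices[:, i, :]
        return torch.einsum('bik,ji->bjk', vertices, J_regressor)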

    opened by boza-wd 4
  • Unable to install OpenDR

    Hi, thanks for your great work. When I install OpenDR, I get the error below. Do you know how to solve this?

    (metro) cezheng@lambda-dual:~/HPE/meshpose/MeshTransformer/apex$ pip install opendr matplotlib
    Collecting opendr
      Using cached opendr-0.78.tar.gz (581 kB)
    Requirement already satisfied: matplotlib in /home/cezheng/anaconda3/envs/metro/lib/python3.8/site-packages (3.3.2)
    Requirement already satisfied: Cython in /home/cezheng/anaconda3/envs/metro/lib/python3.8/site-packages (from opendr) (0.29.21)
    Requirement already satisfied: chumpy>=0.58 in /home/cezheng/anaconda3/envs/metro/lib/python3.8/site-packages (from opendr) (0.69)
    Requirement already satisfied: pillow>=6.2.0 in /home/cezheng/anaconda3/envs/metro/lib/python3.8/site-packages (from matplotlib) (7.2.0)
    Requirement already satisfied: kiwisolver>=1.0.1 in /home/cezheng/anaconda3/envs/metro/lib/python3.8/site-packages (from matplotlib) (1.2.0)
    Requirement already satisfied: pyparsing!=2.0.4,!=2.1.2,!=2.1.6,>=2.0.3 in /home/cezheng/anaconda3/envs/metro/lib/python3.8/site-packages (from matplotlib) (2.4.7)
    Requirement already satisfied: numpy>=1.15 in /home/cezheng/anaconda3/envs/metro/lib/python3.8/site-packages (from matplotlib) (1.19.2)
    Requirement already satisfied: certifi>=2020.06.20 in /home/cezheng/anaconda3/envs/metro/lib/python3.8/site-packages (from matplotlib) (2021.5.30)
    Requirement already satisfied: python-dateutil>=2.1 in /home/cezheng/anaconda3/envs/metro/lib/python3.8/site-packages (from matplotlib) (2.8.1)
    Requirement already satisfied: cycler>=0.10 in /home/cezheng/anaconda3/envs/metro/lib/python3.8/site-packages (from matplotlib) (0.10.0)
    Requirement already satisfied: scipy>=0.13.0 in /home/cezheng/anaconda3/envs/metro/lib/python3.8/site-packages (from chumpy>=0.58->opendr) (1.6.2)
    Requirement already satisfied: six>=1.11.0 in /home/cezheng/anaconda3/envs/metro/lib/python3.8/site-packages (from chumpy>=0.58->opendr) (1.15.0)
    Building wheels for collected packages: opendr
      Building wheel for opendr (setup.py) ... error
      ERROR: Command errored out with exit status 1:
       command: /home/cezheng/anaconda3/envs/metro/bin/python -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-install-kws8g40y/opendr/setup.py'"'"'; __file__='"'"'/tmp/pip-install-kws8g40y/opendr/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' bdist_wheel -d /tmp/pip-wheel-j7sa3lsx
       cwd: /tmp/pip-install-kws8g40y/opendr/
      Complete output (72 lines):
      running bdist_wheel
      running build
      running build_py
      creating build
      creating build/lib.linux-x86_64-3.8
      creating build/lib.linux-x86_64-3.8/opendr
      copying opendr/test_depth_renderer.py -> build/lib.linux-x86_64-3.8/opendr
      copying opendr/occlusion_test.py -> build/lib.linux-x86_64-3.8/opendr
      copying opendr/topology.py -> build/lib.linux-x86_64-3.8/opendr
      copying opendr/lighting.py -> build/lib.linux-x86_64-3.8/opendr
      copying opendr/everything.py -> build/lib.linux-x86_64-3.8/opendr
      copying opendr/utils.py -> build/lib.linux-x86_64-3.8/opendr
      copying opendr/dummy.py -> build/lib.linux-x86_64-3.8/opendr
      copying opendr/version.py -> build/lib.linux-x86_64-3.8/opendr
      copying opendr/camera.py -> build/lib.linux-x86_64-3.8/opendr
      copying opendr/__init__.py -> build/lib.linux-x86_64-3.8/opendr
      copying opendr/simple.py -> build/lib.linux-x86_64-3.8/opendr
      copying opendr/test_renderer.py -> build/lib.linux-x86_64-3.8/opendr
      copying opendr/util_tests.py -> build/lib.linux-x86_64-3.8/opendr
      copying opendr/serialization.py -> build/lib.linux-x86_64-3.8/opendr
      copying opendr/test_sh.py -> build/lib.linux-x86_64-3.8/opendr
      copying opendr/slider_demo.py -> build/lib.linux-x86_64-3.8/opendr
      copying opendr/geometry.py -> build/lib.linux-x86_64-3.8/opendr
      copying opendr/filters.py -> build/lib.linux-x86_64-3.8/opendr
      copying opendr/cvwrap.py -> build/lib.linux-x86_64-3.8/opendr
      copying opendr/test_geometry.py -> build/lib.linux-x86_64-3.8/opendr
      copying opendr/test_camera.py -> build/lib.linux-x86_64-3.8/opendr
      copying opendr/common.py -> build/lib.linux-x86_64-3.8/opendr
      copying opendr/renderer.py -> build/lib.linux-x86_64-3.8/opendr
      creating build/lib.linux-x86_64-3.8/opendr/contexts
      copying opendr/contexts/draw_triangle_shaders_2_1.py -> build/lib.linux-x86_64-3.8/opendr/contexts
      copying opendr/contexts/__init__.py -> build/lib.linux-x86_64-3.8/opendr/contexts
      copying opendr/contexts/constants.py -> build/lib.linux-x86_64-3.8/opendr/contexts
      copying opendr/contexts/autogen.py -> build/lib.linux-x86_64-3.8/opendr/contexts
      copying opendr/contexts/fix_warnings.py -> build/lib.linux-x86_64-3.8/opendr/contexts
      copying opendr/contexts/draw_triangle_shaders_3_2.py -> build/lib.linux-x86_64-3.8/opendr/contexts
      creating build/lib.linux-x86_64-3.8/opendr/test_dr
      copying opendr/test_dr/__init__.py -> build/lib.linux-x86_64-3.8/opendr/test_dr
      running build_ext
      building 'opendr.contexts.ctx_mesa' extension
      creating build/temp.linux-x86_64-3.8
      creating build/temp.linux-x86_64-3.8/opendr
      creating build/temp.linux-x86_64-3.8/opendr/contexts
      gcc -pthread -B /home/cezheng/anaconda3/envs/metro/compiler_compat -Wl,--sysroot=/ -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC -D__OSMESA__=1 -Iopendr/contexts -I. -I/home/cezheng/anaconda3/envs/metro/lib/python3.8/site-packages/numpy/core/include -Iopendr/contexts/OSMesa/include -I/home/cezheng/anaconda3/envs/metro/include/python3.8 -c opendr/contexts/ctx_mesa.c -o build/temp.linux-x86_64-3.8/opendr/contexts/ctx_mesa.o -lstdc++
      In file included from /home/cezheng/anaconda3/envs/metro/lib/python3.8/site-packages/numpy/core/include/numpy/ndarraytypes.h:1822,
                       from /home/cezheng/anaconda3/envs/metro/lib/python3.8/site-packages/numpy/core/include/numpy/ndarrayobject.h:12,
                       from /home/cezheng/anaconda3/envs/metro/lib/python3.8/site-packages/numpy/core/include/numpy/arrayobject.h:4,
                       from opendr/contexts/ctx_mesa.c:657:
      /home/cezheng/anaconda3/envs/metro/lib/python3.8/site-packages/numpy/core/include/numpy/npy_1_7_deprecated_api.h:17:2: warning: #warning "Using deprecated NumPy API, disable it with " "#define NPY_NO_DEPRECATED_API NPY_1_7_API_VERSION" [-Wcpp]
      In file included from opendr/contexts/ctx_mesa.c:663:
      opendr/contexts/OSMesa/include/GL/osmesa.h:258:1: warning: function declaration isn’t a prototype [-Wstrict-prototypes]
        258 | typedef void (*OSMESAproc)();
      opendr/contexts/ctx_mesa.c: In function ‘__pyx_pf_6opendr_8contexts_8ctx_mesa_13OsContextBase_150ShaderSource’:
      opendr/contexts/ctx_mesa.c:13123:50: warning: passing argument 3 of ‘glShaderSource’ from incompatible pointer type [-Wincompatible-pointer-types]
        13123 | glShaderSource(__pyx_v_shader, __pyx_v_count, (&__pyx_v_s), (&__pyx_v_len));
      In file included from opendr/contexts/OSMesa/include/GL/gl.h:2085, from opendr/contexts/gl_includes.h:10, from opendr/contexts/ctx_mesa.c:662:
      opendr/contexts/OSMesa/include/GL/glext.h:5794:82: note: expected ‘const GLchar **’ {aka ‘const char **’} but argument is of type ‘char *’
        5794 | GLAPI void APIENTRY glShaderSource (GLuint shader, GLsizei count, const GLchar *string, const GLint *length);
      gcc -pthread -shared -B /home/cezheng/anaconda3/envs/metro/compiler_compat -L/home/cezheng/anaconda3/envs/metro/lib -Wl,-rpath=/home/cezheng/anaconda3/envs/metro/lib -Wl,--no-as-needed -Wl,--sysroot=/ build/temp.linux-x86_64-3.8/opendr/contexts/ctx_mesa.o -Lopendr/contexts/OSMesa/lib -lOSMesa -lGL -lGLU -o build/lib.linux-x86_64-3.8/opendr/contexts/ctx_mesa.cpython-38-x86_64-linux-gnu.so -lstdc++
      /home/cezheng/anaconda3/envs/metro/compiler_compat/ld: cannot find -lOSMesa
      /home/cezheng/anaconda3/envs/metro/compiler_compat/ld: cannot find -lGLU
      collect2: error: ld returned 1 exit status
      error: command 'gcc' failed with exit status 1
      ----------------------------------------
      ERROR: Failed building wheel for opendr
      Running setup.py clean for opendr
    Failed to build opendr
    DEPRECATION: Could not build wheels for opendr which do not use PEP 517. pip will fall back to legacy 'setup.py install' for these. pip 21.0 will remove support for this functionality. A possible replacement is to fix the wheel build issue reported above. You can find discussion regarding this at https://github.com/pypa/pip/issues/8368.
    Installing collected packages: opendr
        Running setup.py install for opendr ... error
        [the legacy setup.py install retry prints the same 72-line build output and the same linker errors as above]
    ERROR: Command errored out with exit status 1: /home/cezheng/anaconda3/envs/metro/bin/python -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-install-kws8g40y/opendr/setup.py'"'"'; __file__='"'"'/tmp/pip-install-kws8g40y/opendr/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record /tmp/pip-record-2a0woggk/install-record.txt --single-version-externally-managed --compile --install-headers /home/cezheng/anaconda3/envs/metro/include/python3.8/opendr Check the logs for full command output.

    opened by zczcwh 4
  • Questions about training on 3DPW

    Hello. Thanks for your great work. I have some questions about training on 3DPW. According to docs/EXP.md, I fine-tuned your pre-trained model (metro_h36m_state_dict.bin) on the 3DPW training set, but the result is not what you described in the paper. How did you train on the 3DPW dataset?


    opened by wjingdan 3
  • Poor results using end2end_inference_bodymesh

    Hello, thank you very much for this great project. I have done many tests (inference only). The results are very bad, and I want to ask if I am doing something wrong. I did exactly what is suggested in DEMO.md. Should I do something additional? Some poor results are attached as screenshots.

    opened by Gowan1998 3
  • questions on 3DPW

    Hi! Thanks for your great work. I have some questions about training on 3DPW.

    1. According to docs/EXP.md, when training on 3DPW, 3dpw/test_has_gender.yaml is used for evaluation during training:

      --val_yaml 3dpw/test_has_gender.yaml \
      

      When evaluating on 3DPW after training, 3dpw/test.yaml is used instead:

      --val_yaml 3dpw/test.yaml \
      

      I can only find test_has_gender.yaml in the provided 3dpw.tar archive. Are these two files the same? If not, what are the differences?

    2. According to metro/tools/tsv_demo_3dpw.py#L71 and metro/tools/run_metro_bodymesh.py#L395, the neutral SMPL model is used to generate the GT 3D keypoints and GT vertices for both the 3DPW training and test sets. I wonder if this is correct, because 3DPW provides gender attributes, and as far as I know only the model corresponding to the annotated gender gives the correct output (see the sketch below).

    Thanks again and looking forward to your reply!
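
    Regarding question 2, a hedged sketch of what gender-aware GT generation could look like; load_smpl below is a hypothetical loader, since this repo itself ships only the neutral model (which is what tsv_demo_3dpw.py uses):

    def make_gt_vertices_fn(load_smpl):
        """load_smpl(gender) -> callable mapping (pose, betas) to vertices (hypothetical)."""
        # one SMPL instance per gender annotation that 3DPW can provide
        models = {g: load_smpl(g) for g in ('male', 'female', 'neutral')}

        def gt_vertices(pose, betas, gender='neutral'):
            # fall back to the neutral model when no gender is annotated
            smpl = models.get(gender, models['neutral'])
            return smpl(pose, betas)

        return gt_vertices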

    opened by siyuzou 3
  • Project dependencies may have API risk issues

    Hi, in MeshTransformer, inappropriate dependency version constraints can introduce risks.

    Below are the dependencies and version constraints that the project is using

    yacs
    cython
    opencv-python
    tqdm
    nltk
    numpy
    scipy==1.4.1
    chumpy
    boto3
    requests
    

    The == version constraint introduces a risk of dependency conflicts because it is too strict, while constraints with no upper bound (or *) risk missing-API errors, since the latest versions of dependencies may remove some APIs.

    After further analysis, in this project the version constraint for tqdm can be changed to >=4.36.0,<=4.64.0; for nltk to >=3.2.2,<=3.7; for numpy to >=1.8.0,<=1.23.0rc3; and for scipy to >=0.12.0,<=1.7.3.

    These modifications reduce dependency conflicts as much as possible while still allowing the latest versions that do not introduce API errors.

    The current project invokes all of the following methods.

    The calling methods from the tqdm
    tqdm.tqdm
    
    The calling methods from the nltk
    collections.OrderedDict
    
    The calling methods from the numpy
    numpy.linalg.svd
    numpy.linalg.det
    numpy.linalg.norm
    
    The calling methods from the scipy
    scipy.sparse.csr_matrix
    scipy.sparse.coo_matrix
    
    The calling methods from the all methods
    torch.nn.MSELoss
    shape.torch.FloatTensor.view
    rend_img.transpose.transpose
    logging.StreamHandler.emit
    S2.S1_hat.sum
    numpy.issubdtype
    tsv_label_file.format
    extract
    os.path.exists
    batch_size.image_feat.view.expand
    i.self.transition3
    metro.datasets.build.make_hand_data_loader
    img_shuf_list.append
    transform
    ref_vertices_array.copy.tolist
    join.format
    numpy.abs
    self.hw_tsv.num_rows
    os.path.splitext
    rows_hw.append
    os.getcwd
    smpl_shape.torch.FloatTensor.view
    torch.arange
    torch.nn.Upsample
    metro.utils.geometric_layers.rodrigues
    self.flush
    numpy.round
    numpy.where
    ecolors.keys
    block
    self.METRO_Encoder.super.__init__
    V.dot.dot
    gt_keypoints_3d.clone.clone
    numpy.eye
    torchvision.transforms.Resize
    torch.spmm
    main
    run
    torch.utils.data.distributed.DistributedSampler
    rows.append
    self.residual
    torch.nn.MSELoss.cuda
    args.arch.models.__dict__
    metro.utils.image_ops.flip_img
    spmm
    func
    opendr.lighting.LambertianPointLight
    numpy.zeros_like
    joints_2d_transformed.torch.from_numpy.float
    setattr
    self._make_branches
    args.resume_checkpoint.split
    metro.utils.renderer.visualize_reconstruction
    numpy.zeros_like.max
    self.initialize
    os.path.join
    numpy.maximum
    os.path.join.split
    metro.utils.tsv_file.TSVFile
    pred_vertices.transpose.transpose
    images.size
    tensor.cpu.numpy
    torchvision.transforms.CenterCrop
    smpl_pose.torch.FloatTensor.view
    os.system
    numpy.float32.np.eye.astype.tolist
    self.MANO.super.__init__
    numpy.min
    self.MeshTSVYamlDataset.super.__init__
    torch.nn.Sequential.children
    criterion_keypoints
    preproc
    mesh_sampler.downsample
    self.conv1
    pickle.dumps
    torchvision.transforms.ToTensor
    torch.unsqueeze
    z2.y2.x2.w2.yz.wx.wy.xz.wx.yz.z2.y2.x2.w2.xy.wz.xz.wy.wz.xy.z2.y2.x2.w2.torch.stack.view
    self._fp.close
    self.cam_param_fc
    tsvin.fileno
    torch.utils.data.sampler.BatchSampler
    i.pred_camera.cpu.numpy
    pred_keypoints_2d.cpu
    smpl_cam_pose.np.asarray.tolist
    draw_text
    numpy.float32.trans.np.array.reshape
    I_cube.R.view
    torch.FloatTensor.fill_.to
    time.time
    self.bn2
    torch.nn.Sequential
    img.torch.from_numpy.float
    setuptools.setup
    torch.sin
    transition_layers.append
    self.layer.th_faces.numpy
    re.sum.sum
    format.decode
    visualize_mesh
    numpy.fliplr
    tensor_list.append
    numpy.eye.astype
    world2cam
    torch.distributed.is_initialized
    self.LayerNorm
    data_list.append
    self.joints_definition.index
    mean_per_vertex_error
    collections.defaultdict
    labels.append
    pred_vertices_sub.contiguous
    pickle.load.astype
    torch.nn.ModuleList.append
    os.rename
    cv2.warpAffine
    datalist.append
    compute_3d_joints_error
    enumerate
    cv2.line
    torch.cos
    torch.distributed.get_world_size
    metro.modeling._smpl.SMPL.get_joints
    pred_camera.detach
    annotations.detach
    METRO_model
    setuptools.find_packages
    compute_similarity_transform_batch
    torch.manual_seed
    self.get_valid_tsv.num_rows
    self.j2d_processing
    pred_3d_joints_from_smpl.cpu.numpy
    pred_2d_vertices.cpu.numpy
    pred_camera.cpu.numpy
    numpy.load
    metro.utils.logger.setup_logger.info
    self.backbone
    numpy.sum
    features.torch.ones_like.cuda
    torch.nn.functional.avg_pool2d
    HighResolutionNet
    self.prepare_image_keys
    betas.torch.from_numpy.unsqueeze
    w.pow
    numpy.random.random_sample
    round
    metro.modeling._mano.MANO.to
    f.read.strip
    pickle.loads
    metro.utils.tsv_file_ops.find_file_path_in_yaml
    metro.modeling.data.config.H36M_J17_NAME.index
    self.state_dict.keys
    numpy.eye.dot
    gt_3d_joints.shape.batch_size.torch.ones.cuda
    numpy.linalg.det
    gt.pred.sum.torch.sqrt.mean.cpu
    metro.utils.image_ops.rot_aa
    json.dumps
    scipy.sparse.coo_matrix.to
    torch.nn.L1Loss
    SparseMM.apply
    head_mask.to.dim
    metro.modeling.bert.METRO_Body_Network.to
    self.shapedirs.view.expand
    cv2.Rodrigues.dot
    gt_keypoints_2d.pred_keypoints_2d.criterion_keypoints.conf.mean.item
    super
    self.load_state_dict
    HighResolutionNet.init_weights
    shuf_list.append
    yaml.dump
    world_coord.transpose
    list
    self._open
    line.strip.append
    numpy.ones
    torch.distributed.is_available
    pose2mesh_joint_norm
    json.dump
    mano.get_3d_joints.contiguous
    metro.modeling._mano.MANO.to.layer
    metro.modeling.bert.METRO_Body_Network
    torch.ones_like.unsqueeze
    i.self.downsamp_modules
    metro.utils.renderer.Renderer
    y.size.y.F.avg_pool2d.view
    torch.nn.parallel.DistributedDataParallel
    S2.S1_hat.sum.np.sqrt.mean
    open
    numpy.arange
    smpl
    torch.distributed.reduce
    i.pred_camera.cpu
    img.copy
    shutil.copyfileobj
    torch.sparse.FloatTensor
    tqdm.tqdm
    metro.modeling.data.config.JOINT_REGRESSOR_TRAIN_EXTRA.np.load.torch.from_numpy.float
    numpy.float32.self.joint_regressor.shape.range.i.i.np.array.reshape
    self.cfg.get
    metro.modeling._smpl.SMPL
    tsv_linelist_file.format
    numpy.zeros
    t.reshape
    self.conv2
    idx_source.self.tsvs.get_key
    args.hidden_feat_dim.split
    fp.write
    len
    metro.utils.logger.setup_logger
    device.torch.FloatTensor.to.view
    i.pred_vertices.cpu
    run_validate
    images.cpu.numpy
    metro.utils.tsv_file_ops.load_from_yaml_file
    self.seek
    os.path.basename
    pred_keypoints_2d.cpu.numpy
    torch.load
    save_checkpoint
    smpl_shape_tensor.np.asarray.tolist
    img_from_base64
    self.stage4
    self.layer1
    run_eval_and_save
    self.modules
    self.state_dict
    head_mask.to.expand
    self.stage3
    lrotmin.posedirs.torch.matmul.view
    torch.sparse.FloatTensor.copy
    self.get_tsv_file
    scipy_to_pytorch
    torch.distributed.barrier
    metro.utils.image_ops.flip_pose
    gt_keypoints_3d.unsqueeze.clone
    images.cpu.numpy.transpose
    os.path.dirname.split
    metro.modeling.data.config.J_NAME.index
    cam2pixel
    FileHandler.setLevel
    pose.torch.from_numpy.unsqueeze.cuda
    get_matching_parameters
    torch.norm
    self.position_embeddings
    keypoint_2d_loss.item
    torch.FloatTensor.fill_
    betas.torch.from_numpy.unsqueeze.cuda.float
    torch.nn.Linear
    args.input_feat_dim.split
    mesh_smpl.faces.cpu
    torch.cat
    logging.StreamHandler
    metro.utils.tsv_file_ops.load_linelist_file
    self.downsample
    i.self.fuse_layers
    metro.utils.renderer.visualize_reconstruction_and_att_local
    torch.nn.Conv2d
    torch.cuda.set_device
    concat_files
    metro.modeling._smpl.SMPL.to.get_h36m_joints
    PIL.Image.open
    torch.stack
    torch.distributed.init_process_group
    metro.utils.comm.get_world_size
    mano_model.get_3d_joints.contiguous
    metro.modeling.bert.METRO_Hand_Network.load_state_dict
    torchvision.transforms.Compose
    map
    attention_mask.unsqueeze.unsqueeze
    numpy.float32.smpl_3d_joints.numpy.astype.reshape
    total_to_draw.append
    run_inference_hand_mesh
    transforms3d.axangles.mat2axangle
    pred_vertices_full.cpu.numpy
    device.torch.FloatTensor.to.view.expand
    zipfile.ZipFile
    torch.no_grad
    numpy.array.reshape
    numpy.eye.copy
    self.get_line_no
    numpy.asarray.transpose
    fuse_layer.append
    smpl_shape_camera_corrd.tolist
    torch.LongTensor
    device.batch_size.torch.zeros.to.J.torch.cat.view
    pred_vertices_sub2.detach
    smpl_cam_shape.np.asarray.tolist
    G.permute.contiguous.view
    torch.distributed.get_rank
    metro.datasets.build.make_data_loader
    build_hand_dataset
    torch.nn.parallel.DistributedDataParallel.parameters
    ptU.append
    torchvision.utils.make_grid.append
    find_version
    model.module.backbone.body._freeze_backbone
    numpy.asarray.copy
    make_data_sampler
    FileHandler.setFormatter
    gt_keypoints_2d.cpu.numpy.unsqueeze
    get_rank
    torch.sum
    smpl_mesh_model
    pred_vertices_sub2.transpose
    images.cpu
    torch.ByteTensor
    rotateY
    self.reset
    keypoint_3d_loss.item
    self._fp.readline.split
    smpl.get_h36m_joints
    numpy.mean
    json.loads
    os.makedirs
    all_idx.append
    i.pred_vertices.cpu.numpy
    trans.torch.FloatTensor.view
    self.release
    layer.self.encoder.layer.attention.prune_heads
    i.images.cpu
    numpy.max
    tsv_writer
    logging.StreamHandler.__init__
    metro.modeling.bert.METRO_Hand_Network
    cv2.getRotationMatrix2D
    torch.stack.append
    torch.save
    torch.long.seq_length.batch_size.torch.zeros.cuda
    self.METRO.super.__init__
    self.get_img_key
    X1.dot
    self._fp.seek
    numpy.sign
    torch.LongTensor.to
    z.pow
    torch.nn.init.constant_
    head_mask.unsqueeze.unsqueeze.unsqueeze.unsqueeze
    numpy.linalg.svd
    process_bbox
    numpy.vstack
    smpl.faces.int.to
    cv2.putText
    mesh_model.layer
    torch.cat.clone
    metro.utils.geometric_layers.orthographic_projection.detach
    stream.close
    torch.nn.BatchNorm2d
    metro.utils.tsv_file_ops.tsv_writer
    torchvision.utils.make_grid
    modeling_bert.BertPooler
    torch.optim.Adam.step
    self._make_head
    numpy.asarray.astype
    self._ensure_tsv_opened
    torch.div
    all
    V.dot
    self.backbone.view
    fpidx.write
    metro.utils.metric_logger.EvalMetricsLogger.update
    metro.modeling.bert.METRO_Body_Network.load_state_dict
    pred_vertices_full.cpu
    torch.eye
    line.strip
    torch.cat.cpu
    tsv_img_file.format
    self.SMPL.super.__init__
    format
    numpy.array
    type
    metro.datasets.hand_mesh_tsv.HandMeshTSVYamlDataset
    metro.utils.metric_pampjpe.get_alignMesh
    rotations.append
    row.split
    numpy.radians
    rows_label.append
    self.rgb_processing
    metro.utils.comm.is_main_process
    line_list.append
    j.i.self.fuse_layers
    i.self.kintree_table.item
    metro.utils.metric_logger.EvalMetricsLogger
    hasattr.state_dict
    metro.utils.tsv_file.CompositeTSVFile
    os.fspath
    pycocotools.coco.COCO.createIndex
    logging.getLogger
    gt.pred.sum.torch.sqrt.mean.cpu.numpy
    metro.modeling._smpl.SMPL.cuda.get_joints
    regexp.model.get_matching_parameters.items
    torch.ones_like.dim
    pred_vertices.detach
    metro.modeling.data.config.JOINT_REGRESSOR_H36M_correct.np.load.torch.from_numpy.float
    torchvision.transforms.Normalize
    mjm_mask.torch.from_numpy.float
    cam_param.squeeze.squeeze
    out.append
    torch.nn.L1Loss.cuda
    image_list.append
    position_ids.unsqueeze.expand_as
    img_tensor.torch.unsqueeze.cuda
    metro.utils.metric_logger.AverageMeter.update
    logging.getLogger.addHandler
    renderer.render
    data.torch.from_numpy.float
    conv3x3
    get_norm_smpl_mesh
    y.size.y.F.avg_pool2d.view.flatten
    metro.utils.renderer.visualize_reconstruction_test
    att_all.append
    numpy.einsum
    getattr
    pose2mesh_3d_to_2d_joint
    self.HandMeshTSVYamlDataset.super.__init__
    fail_subaction.append
    myimresize
    self.shapedirs.view
    f.read
    run_multiscale_inference
    multiscale_joints.append
    pose.torch.from_numpy.unsqueeze
    FileNotFoundError
    torch.cuda.max_memory_allocated
    pose.view
    logging.StreamHandler.setLevel
    load_linelist_file
    metro.utils.tsv_file_ops.generate_linelist_file
    joint_cam.np.array.reshape
    metro.modeling._mano.MANO
    metro.modeling.hrnet.config.update_config
    tsv_reader
    modules.get_num_inchannels
    torch.nn.Sequential.parameters
    metro.modeling.bert.METRO_Body_Network.eval
    torch.inverse.numpy
    metro.utils.metric_pampjpe.reconstruction_error
    self._fp.readline
    torch.nn.ReLU
    content.keys
    yaml.load
    os.path.dirname
    cv2.imdecode
    opendr.camera.ProjectPoints
    pose2mesh_joints_name.index
    torch.optim.Adam.zero_grad
    metro.modeling._smpl.SMPL.to
    tensor.cpu.numpy.tobytes
    plot_one_line
    self._make_fuse_layers
    sep.join
    pred_2d_joints.cpu
    modeling_bert.BertEmbeddings
    mkdir
    adjust_learning_rate
    torch.distributed.all_gather
    metro.modeling._smpl.Mesh.downsample
    tsv_file.format
    storage.torch.ByteTensor.to
    head_mask.to.to
    pose.torch.FloatTensor.view
    get_graph_params
    range
    extended_attention_mask.to.to
    self.encoder
    numpy.linalg.norm
    torch.device
    smpl_3d_joints.numpy.astype
    numpy.float32.cam_param.np.array.reshape
    batch_size.G.permute.contiguous.view.self.weights.torch.matmul.view.transpose
    position_ids.unsqueeze.expand_as.unsqueeze
    metro.utils.image_ops.flip_kp
    multiscale_fusion
    argparse.ArgumentParser.add_argument
    y.pow
    imgname.split.split.split
    self.HighResolutionNet.super.__init__
    betas.torch.from_numpy.unsqueeze.cuda
    adjmat_sparse
    smpl_pose.view.numpy
    pretrained_dict.items.items
    S1.mean
    numpy.sqrt
    os.path.basename.startswith
    self._ensure_lineidx_loaded
    mano_joint_coord.numpy.reshape
    cfg.merge_from_file
    numpy.ones_like
    betas.tolist
    joints_3d_transformed.torch.from_numpy.float
    hasattr
    numpy.hstack
    cv2.cvtColor
    i.self.transition2
    pickle.load.tocoo
    numpy.dot
    model.named_parameters
    logging.getLogger.setLevel
    manopth.manolayer.ManoLayer
    img_i.npz_imgname.decode.split
    y.flatten.mean
    IterationBasedBatchSampler
    torch.spmm.max
    torch.FloatTensor
    quat2mat
    pred_camera.cpu
    metro.modeling._mano.MANO.to.get_3d_joints
    logging.StreamHandler.setFormatter
    metro.utils.image_ops.transform
    isinstance
    S.astype.astype
    line.split
    tsvin.readline
    pose.ndimension
    i.images.cpu.numpy
    row1.append
    datetime.timedelta
    smpl.get_h36m_joints.cpu
    self._make_stage
    gt_3d_joints.cpu.numpy
    fuse_layers.append
    self.augm_params
    pose.tolist
    os.chdir
    pred_vertices_sub.detach
    next
    gt_keypoints_2d.cpu.numpy.cpu
    transform_visualize
    torch.sparse.FloatTensor.multiply
    p.numel
    metro.utils.miscellaneous.mkdir
    pred_vertices.cpu
    input_image.copy
    gt_3d_joints_norm.mano_joint_coord_norm.np.sum.np.sqrt.mean
    G.permute.contiguous
    gt_keypoints_2d.pred_keypoints_2d.criterion_keypoints.conf.mean.backward
    camera.view.view
    attention.cpu.numpy.detach
    base64.b64encode
    self.joint_regressor.torch.from_numpy.float
    run_inference
    modeling_bert.BertLayerNorm
    mano_mesh_coord.numpy.reshape.numpy
    mano_mesh_coord.numpy.reshape
    imageio.imread
    is_main_process
    smpl_mesh_model.get_joints.cpu
    line.strip.split
    head_mask.unsqueeze.unsqueeze.unsqueeze
    _metro_network
    model.backbone.body._freeze_backbone
    pycocotools.coco.COCO.loadImgs
    ctx.save_for_backward
    shape.X_trans.view.camera.view
    self.j3d_processing
    pickle.load
    torch.einsum
    self.register_buffer
    torch.nn.Conv1d
    error_list.append
    torch.optim.Adam
    pose.torch.FloatTensor.view.view
    re.search
    tsvout.write
    load_list_file
    self.get_annotations
    template_pose.cuda.cuda
    logging.Formatter
    input_image.shape.np.mean.astype
    readme
    torch.zeros.cuda
    os.fstat
    cv2.addWeighted
    pose.torch.from_numpy.unsqueeze.cuda.float
    beta.shapedirs.torch.matmul.view
    torch.stack.ndimension
    cv2.resize
    torch.nn.parallel.DistributedDataParallel.train
    batch_size.torch.zeros.to
    comm.is_main_process
    pose_cube.rodrigues.view
    numpy.trace
    torch.spmm.min
    self.stage2
    self._make_one_branch
    annotations.cuda.cpu
    numpy.random.randn
    cv2.imread
    keypoint_2d_loss
    X_trans.view
    torch.inverse
    sum
    compute_similarity_transform
    smpl.faces.cpu
    metro.utils.comm.synchronize
    numpy.asarray
    smpl_vertices.numpy.astype
    torch.stack.clone
    self.upsampling
    torch.nn.Embedding
    self.branches
    self.conv_learn_tokens
    self.dropout
    fname_output_save.append
    pred_vertices_full.transpose.transpose
    transforms3d.axangles.axangle2mat
    i.self.incre_modules
    x_list.append
    torch.sparse.FloatTensor.sum
    pred_2d_joints.cpu.numpy
    self.renderer.set
    open.read
    HighResolutionModule
    set_matching_error.append
    torch.ByteStorage.from_buffer
    numpy.deg2rad
    cv2.imencode
    total_to_draw.sort
    self.posedirs.view
    ref_vertices.abs.max
    torch.utils.data.DataLoader
    max
    tensor.numel.torch.LongTensor.to
    mano_joint_coord.numpy.reshape.numpy
    graphcmr_joints_name.index
    re.search.group
    x_fuse.append
    torch.cat.numel
    re.compile
    pred_vertices_sub.transpose.transpose
    label_file.format
    gt_keypoints_2d.unsqueeze.clone
    pred_camera.contiguous
    gt_kp.astype
    zipfile.ZipFile.read
    yacs.config.CfgNode
    ref_vertices.expand.expand
    smpl_model.get_h36m_joints.numpy
    print
    metro.modeling._smpl.Mesh
    attention.cpu
    os.path.abspath
    int
    branches.append
    min
    run_eval_general
    self.posedirs.view.expand
    y.size.y.F.avg_pool2d.view.size
    logging.Handler.__init__
    make_batch_data_sampler
    json.load.items
    os.path.isfile
    mano.get_3d_joints
    torch.sqrt
    ValueError
    sorted
    i.self.branches
    join
    read_to_character
    numpy.zeros.tolist
    tsvin.tell
    self.bn1
    smpl_model.get_h36m_joints
    metro.utils.miscellaneous.set_seed
    smpl_model.numpy
    cv2.imwrite
    metro.modeling.bert.METRO_Hand_Network.to
    set_pose2mesh_3djoint_world.append
    metro.utils.metric_logger.AverageMeter
    torch.stack.permute
    metro.utils.comm.all_gather
    conv3x3s.append
    images.cuda.size
    torch.spmm.abs
    self.bert
    os.remove
    self.BasicBlock.super.__init__
    ref_joints_array.copy.tolist
    fp.read
    torch.distributed.gather
    G.permute.contiguous.view.self.weights.torch.matmul.view
    pycocotools.coco.COCO
    SMPL
    re.sum.mean
    f.write
    center.np.asarray.astype
    args.resume_checkpoint.split.split
    U.dot
    str
    build_dataset
    logging.StreamHandler.close
    self.upsampling2
    self.HighResolutionModule.super.__init__
    self.seq.append
    self.METRO_Body_Network.super.__init__
    eval_log.append
    metro.modeling._mano.Mesh.downsample
    mesh_output_save.append
    torch.cuda.manual_seed_all
    size.item
    input_dict.keys
    self.cls_head
    layers.append
    self._make_transition_layer
    json.load
    numpy.transpose.astype
    j.pred_vertices.tolist
    annotations.cuda.expand
    heads_to_prune.items
    mesh_model.root_joint_idx.mano_pose.numpy
    gt.pred.sum.torch.sqrt.mean
    os.strerror
    FileHandler
    archive.read.decode
    os.getpid
    smpl.get_joints.cpu
    metro.modeling._smpl.SMPL.cuda
    smpl_pose_tensor.np.asarray.tolist
    names.append
    self.normalize_img
    numpy.concatenate
    subaction_indentify
    torch._C._get_tracing_state
    parse_args
    gt_3d_joints.copy
    self.cam_param_fc2
    x.strip
    joints.np.round.astype
    self.state_dict.update
    max_size.torch.ByteTensor.to
    self.num_rows
    gt.pred.sum
    torch.ones_like
    numpy.zeros_like.astype
    att_max_value.cpu.detach
    i.strip.split
    kp.astype.astype
    torch.nn.Sequential.append
    R.view.view
    self.get_valid_tsv
    pred_3d_joints.contiguous
    mean_per_joint_position_error
    self.get_layer
    new_root.cv2.Rodrigues.reshape
    pred_vertices.cpu.numpy
    cfg.defrost
    metro.utils.renderer.visualize_reconstruction_no_text
    self.cam_param_fc3.transpose
    tsv_hw_file.format
    collections.OrderedDict
    fname.script_dir.op.join.open.read
    try_delete
    self.Bottleneck.super.__init__
    myimrotate
    attention.cpu.numpy
    os.stat
    self.img_tsv.num_rows
    self.joints_name.index
    reversed
    head_mask.unsqueeze.unsqueeze
    re.compile.match
    visualize_mesh_and_attention.transpose
    t.torch.from_numpy.view
    float
    shape.torch.FloatTensor.view.abs
    self.close
    metro.modeling.bert.METRO_Hand_Network.eval
    logging.getLevelName
    local_size.max_size.torch.ByteTensor.to
    os.path.splitext.strip
    self.batch_sampler.sampler.set_epoch
    gt_keypoints_3d.clone.unsqueeze
    torch.zeros
    self.METRO_Hand_Network.super.__init__
    argparse.ArgumentParser.parse_args
    imgname.split.split
    self.parameters
    flip.rot.pose.self.pose_processing.torch.from_numpy.float
    TSVFile
    reorg_h36m_from_smpl.h36m_joint.np.sum.np.sqrt.mean
    img_visual.torch.unsqueeze.cuda
    att_max_value.cpu
    j.pred_3d_joints_from_mesh.tolist
    self.img_embedding
    i.self.transition1
    torch.zeros_like
    METRO_Encoder
    gt_keypoints_2d.pred_keypoints_2d.criterion_keypoints.conf.mean
    draw_skeleton
    numpy.zeros.sum
    argparse.ArgumentParser
    smpl_model
    metro.utils.image_ops.img_from_base64
    self.incre_modules
    zip
    torch.eye.to
    self.final_layer
    metro.utils.comm.get_rank
    world_coord.transpose.R.np.dot.transpose
    visualize_mesh_and_attention
    J_regressor_shape.v.i.torch.sparse.FloatTensor.to_dense
    metro.modeling.hrnet.hrnet_cls_net.get_cls_net
    any
    cv2.Rodrigues
    images.cuda.cuda
    sparse.t
    x.pow
    config_save_file
    numpy.dot.astype
    torch.utils.data.sampler.RandomSampler
    numpy.random.choice
    os.path.isdir
    self.trans_encoder
    metro.modeling._mano.Mesh
    center.tolist
    torch.ones
    generate_lineidx
    logging.getLogger.error
    torch.matmul
    joint_output_save.append
    random.seed
    dict
    criterion_vertices
    numpy.frombuffer
    smpl_pose.view.view
    numpy.ones.tolist
    opendr.renderer.ColoredRenderer
    self.acquire
    metro.modeling.data.config.J24_NAME.index
    torch.nn.init.kaiming_normal_
    mesh_smpl.faces.cpu.numpy
    logging.getLogger.info
    metro.utils.image_ops.crop
    torch.utils.data.sampler.SequentialSampler
    torch.nn.parallel.DistributedDataParallel.eval
    self.get_image
    scipy.sparse.csr_matrix
    self.pose_processing
    torch.FloatTensor.to
    self.apply
    self.cam_param_fc3
    get_world_size
    logging.info
    metro.utils.geometric_layers.orthographic_projection
    multiscale_vertices.append
    linelist_file.format
    set_pose2mesh_S24.append
    torch.nn.ModuleList
    ptD.append
    numpy.random.seed
    model_class
    numpy.argmin
    torch.cat.size
    run_aml_inference_hand_mesh
    act_name.i.db_subaction.append
    numpy.minimum
    cfg.freeze
    smpl_pose_camera_corrd.tolist
    S2.mean
    self.seek_first_column
    RuntimeError
    smpl.faces.cpu.numpy
    fp.readlines
    numpy.float32.smpl_vertices.numpy.astype.reshape
    self.get_valid_tsv.get_key
    gt_keypoints_2d.cpu.numpy
    numpy.random.uniform
    ref_vertices.abs.max.item
    pose.copy.copy
    torch.from_numpy.unsqueeze
    load_pred_json
    idx_source.self.tsvs.seek
    ipdb.set_trace
    fp.read.index
    cv2.circle
    torch.FloatTensor.fill_.cuda
    scipy.sparse.coo_matrix
    vertices_loss.item
    os.path.dirname.startswith
    metro.datasets.human_mesh_tsv.MeshTSVYamlDataset
    pred_2d_vertices.cpu
    img_i.npz_imgname.decode
    act_name.i.db_subaction.sort
    modeling_bert.BertEncoder
    vertices_loss
    self.layer.th_J_regressor.numpy
    torch.nn.Dropout
    fp.read.strip
    numpy.cos
    self._make_layer
    self.relu
    i.strip
    torch.from_numpy
    db.anns.keys
    cfg.dump
    i.images.cpu.numpy.transpose
    get_transform
    gt_keypoints_3d.pred_keypoints_3d.criterion_keypoints.conf.mean
    head_mask.to.unsqueeze
    gen_rows
    self.bn3
    config_class.from_pretrained
    norm_quat.norm.norm
    file.read.split
    os.listdir
    smpl_shape.abs.any
    betas.torch.from_numpy.float
    self._check_branches
    set_subaction_hypo.append
    self.conv3
    modules.append
    torch.spmm.to
    keypoint_3d_loss
    add_Pose2Mesh_smpl_labels
    numpy.transpose
    box_shuf_list.append
    hw_file.format
    mvm_mask.torch.from_numpy.float
    numpy.cumsum
    metro.modeling.hrnet.hrnet_cls_net_featmaps.get_cls_net
    smpl.faces.int
    filename.endswith
    metro.modeling._smpl.SMPL.to.eval
    mano_model.layer.cuda
    annotations.cuda
    it.self.kintree_table.item
    cv2.cvtColor.astype
    numpy.sin
    base64.b64decode
    image_feat_newview.transpose.transpose
    

    @developer Could you please help me check this issue? May I open a pull request to fix it? Thank you very much.

    opened by PyDeps 0
  • Training on single dataset

    Hi, thank you for the great work! In the results section of your paper, you state results for training on mixed datasets for 200 epochs. I attempted to train from scratch on the 3DPW dataset alone but got unexpected results (as shown in the log below). I'd appreciate it if you could advise me on how to solve this problem.

    Thanks in advance.

    2022-09-27 10:27:33,273 METRO INFO: Using 1 GPUs
    2022-09-27 10:27:37,447 METRO INFO: Update config parameter num_hidden_layers: 12 -> 4
    2022-09-27 10:27:37,447 METRO INFO: Update config parameter hidden_size: 768 -> 1024
    2022-09-27 10:27:37,447 METRO INFO: Update config parameter num_attention_heads: 12 -> 4
    2022-09-27 10:27:38,310 METRO INFO: Init model from scratch.
    2022-09-27 10:27:38,310 METRO INFO: Update config parameter num_hidden_layers: 12 -> 4
    2022-09-27 10:27:38,310 METRO INFO: Update config parameter hidden_size: 768 -> 256
    2022-09-27 10:27:38,310 METRO INFO: Update config parameter num_attention_heads: 12 -> 4
    2022-09-27 10:27:38,486 METRO INFO: Init model from scratch.
    2022-09-27 10:27:38,486 METRO INFO: Update config parameter num_hidden_layers: 12 -> 4
    2022-09-27 10:27:38,486 METRO INFO: Update config parameter hidden_size: 768 -> 128
    2022-09-27 10:27:38,486 METRO INFO: Update config parameter num_attention_heads: 12 -> 4
    2022-09-27 10:27:38,569 METRO INFO: Init model from scratch.
    2022-09-27 10:27:40,009 METRO INFO: => loading hrnet-v2-w64 model
    2022-09-27 10:27:40,012 METRO INFO: Transformers total parameters: 102256646
    2022-09-27 10:27:40,016 METRO INFO: Backbone total parameters: 128059944
    2022-09-27 10:27:40,216 METRO INFO: Training parameters Namespace(data_dir='datasets', train_yaml='pw3d_tsv_reproduce/train.yaml', val_yaml='pw3d_tsv_reproduce/test.yaml', num_workers=4, img_scale_factor=1, model_name_or_path='metro/modeling/bert/bert-base-uncased/', resume_checkpoint=None, output_dir='output/', config_name='', per_gpu_train_batch_size=20, per_gpu_eval_batch_size=30, lr=0.0001, num_train_epochs=30, vertices_loss_weight=100.0, joints_loss_weight=1000.0, vloss_w_full=0.33, vloss_w_sub=0.33, vloss_w_sub2=0.33, drop_out=0.1, arch='hrnet-w64', num_hidden_layers=4, hidden_size=128, num_attention_heads=4, intermediate_size=-1, input_feat_dim='2051,512,128', hidden_feat_dim='1024,256,128', legacy_setting=True, run_eval_only=False, logging_steps=1000, device=device(type='cuda'), seed=88, local_rank=0, num_gpus=1, distributed=False)
    2022-09-27 10:37:39,084 METRO INFO: eta: 5:30:01  epoch: 0  iter: 1000  max mem : 19359  loss: 43.8094, 2d joint loss: 0.0363, 3d joint loss: 0.0242, vertex loss: 0.1603, compute: 0.5986, data: 0.0054, lr: 0.000100
    2022-09-27 10:44:41,439 METRO INFO: Validation epoch: 1  mPVE: 216.89, mPJPE: 163.97, PAmPJPE: 110.12, Data Count: 35515.00
    2022-09-27 10:53:16,153 METRO INFO: eta: 6:50:32  epoch: 1  iter: 2000  max mem : 19359  loss: 32.0019, 2d joint loss: 0.0250, 3d joint loss: 0.0167, vertex loss: 0.1277, compute: 0.7678, data: 0.1754, lr: 0.000100
    2022-09-27 11:01:39,414 METRO INFO: Validation epoch: 2  mPVE: 213.65, mPJPE: 161.82, PAmPJPE: 105.72, Data Count: 35515.00
    2022-09-27 11:08:53,971 METRO INFO: eta: 7:07:05  epoch: 2  iter: 3000  max mem : 19359  loss: 26.3174, 2d joint loss: 0.0201, 3d joint loss: 0.0134, vertex loss: 0.1088, compute: 0.8245, data: 0.2321, lr: 0.000100
    2022-09-27 11:18:37,952 METRO INFO: Validation epoch: 3  mPVE: 204.17, mPJPE: 154.88, PAmPJPE: 102.00, Data Count: 35515.00
    2022-09-27 11:24:28,939 METRO INFO: eta: 7:07:11  epoch: 3  iter: 4000  max mem : 19359  loss: 22.8643, 2d joint loss: 0.0172, 3d joint loss: 0.0115, vertex loss: 0.0963, compute: 0.8521, data: 0.2601, lr: 0.000100
    2022-09-27 11:36:15,641 METRO INFO: Validation epoch: 4  mPVE: 182.91, mPJPE: 147.08, PAmPJPE: 96.03, Data Count: 35515.00
    2022-09-27 11:36:17,768 METRO INFO: Save checkpoint to output/checkpoint-4-4544
    2022-09-27 11:41:03,895 METRO INFO: eta: 7:06:50  epoch: 4  iter: 5000  max mem : 19359  loss: 20.4471, 2d joint loss: 0.0152, 3d joint loss: 0.0102, vertex loss: 0.0874, compute: 0.8807, data: 0.2837, lr: 0.000100
    ......
    2022-09-27 18:08:18,140 METRO INFO: Validation epoch: 27  mPVE: 156.67, mPJPE: 136.94, PAmPJPE: 89.08, Data Count: 35515.00
    2022-09-27 18:08:20,040 METRO INFO: Save checkpoint to output/checkpoint-27-30672
    2022-09-27 18:11:35,319 METRO INFO: eta: 0:46:05  epoch: 27  iter: 31000  max mem : 19359  loss: 7.0103, 2d joint loss: 0.0049, 3d joint loss: 0.0030, vertex loss: 0.0350, compute: 0.8979, data: 0.3039, lr: 0.000010
    2022-09-27 18:25:21,996 METRO INFO: Validation epoch: 28  mPVE: 157.29, mPJPE: 137.61, PAmPJPE: 88.35, Data Count: 35515.00
    2022-09-27 18:25:23,883 METRO INFO: Save checkpoint to output/checkpoint-28-31808
    2022-09-27 18:27:18,918 METRO INFO: eta: 0:31:10  epoch: 28  iter: 32000  max mem : 19359  loss: 6.8592, 2d joint loss: 0.0048, 3d joint loss: 0.0029, vertex loss: 0.0344, compute: 0.8993, data: 0.3053, lr: 0.000010
    2022-09-27 18:42:27,939 METRO INFO: Validation epoch: 29  mPVE: 158.00, mPJPE: 137.04, PAmPJPE: 88.93, Data Count: 35515.00
    2022-09-27 18:43:01,725 METRO INFO: eta: 0:16:12  epoch: 29  iter: 33000  max mem : 19359  loss: 6.7176, 2d joint loss: 0.0047, 3d joint loss: 0.0029, vertex loss: 0.0338, compute: 0.9006, data: 0.3065, lr: 0.000010
    2022-09-27 18:53:01,465 METRO INFO: eta: 0:01:11  epoch: 29  iter: 34000  max mem : 19359  loss: 6.5830, 2d joint loss: 0.0046, 3d joint loss: 0.0028, vertex loss: 0.0333, compute: 0.8918, data: 0.2977, lr: 0.000010
    2022-09-27 18:53:49,660 METRO INFO: eta: 0:00:00  epoch: 30  iter: 34080  max mem : 19359  loss: 6.5728, 2d joint loss: 0.0046, 3d joint loss: 0.0028, vertex loss: 0.0333, compute: 0.8911, data: 0.2970, lr: 0.000001
    2022-09-27 18:59:31,358 METRO INFO: Validation epoch: 30  mPVE: 158.38, mPJPE: 137.46, PAmPJPE: 88.50, Data Count: 35515.00

    opened by fmx789 1
  • Dataset download error

    Dataset download error

    Hello, when I use "wget https://datarelease.blob.core.windows.net/metro/datasets/human3.6m.tar" to download the Human3.6M dataset, it always fails with the problem shown in this picture: image
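    For anyone hitting the same interruption, one workaround is to resume the partial download instead of restarting it (for example, rerunning wget with its `-c` flag). Below is a minimal, hypothetical Python sketch of the same idea using the third-party `requests` package; the URL is copied from the issue, while the local file name is an assumption.

```python
import os
import requests  # third-party; install with: pip install requests

# URL copied from the issue above; the local file name is an assumption.
URL = "https://datarelease.blob.core.windows.net/metro/datasets/human3.6m.tar"
DEST = "human3.6m.tar"

# If a partial file already exists, request only the remaining bytes.
start = os.path.getsize(DEST) if os.path.exists(DEST) else 0
headers = {"Range": f"bytes={start}-"} if start else {}

with requests.get(URL, headers=headers, stream=True, timeout=60) as r:
    r.raise_for_status()
    with open(DEST, "ab" if start else "wb") as f:
        for chunk in r.iter_content(chunk_size=1 << 20):  # 1 MiB chunks
            f.write(chunk)
```

    Azure blob storage generally honors Range requests, so rerunning the script after a dropped connection keeps appending until the tar is complete.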

    opened by wang-zm18 1
  • How to compare with methods using perspective projection? Ignoring the mpjpe and only comparing pa-mpjpe?

    How to compare with methods using perspective projection? Ignoring the mpjpe and only comparing pa-mpjpe?

    I am a newcomer to this field, and I have two questions: 1. When I read the code in run_metro_bodymesh.py, L396 only passes gt_pose and gt_betas to obtain gt_vertices, so the evaluation does not include txyz? 2. How can this be compared with SOTA methods that use perspective projection? Those methods may feed txyz into the SMPL layer to obtain vertex coordinates, and the error between the ground-truth and predicted txyz then ends up inside the mPJPE. Under such conditions, is mPJPE meaningless, with only PA-mPJPE being useful? Many thanks for your kind response!
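    For context on the question: PA-mPJPE (Procrustes-aligned mPJPE) first fits a similarity transform (scale, rotation, translation) from the prediction to the ground truth, so a global translation such as txyz is factored out before the error is measured, whereas plain mPJPE keeps it. The sketch below is a generic numpy implementation of that standard definition, written here for illustration rather than taken from this repository.

```python
import numpy as np

def procrustes_align(pred, gt):
    """Align pred (N x 3 joints) to gt with the optimal similarity
    transform (scale, rotation, translation) -- the standard step
    behind PA-mPJPE, implemented from its usual textbook definition."""
    mu_p, mu_g = pred.mean(axis=0), gt.mean(axis=0)
    p, g = pred - mu_p, gt - mu_g
    U, S, Vt = np.linalg.svd(p.T @ g)  # SVD of the 3x3 covariance
    R = Vt.T @ U.T                     # optimal rotation
    if np.linalg.det(R) < 0:           # fix an accidental reflection
        Vt[-1] *= -1
        S[-1] *= -1
        R = Vt.T @ U.T
    scale = S.sum() / (p ** 2).sum()   # optimal isotropic scale
    return scale * p @ R.T + mu_g

def mpjpe(pred, gt):
    """Mean per-joint position error, no alignment: sensitive to txyz."""
    return np.linalg.norm(pred - gt, axis=1).mean()

def pa_mpjpe(pred, gt):
    """MPJPE after Procrustes alignment: global translation, rotation,
    and scale (and hence any txyz error) no longer contribute."""
    return mpjpe(procrustes_align(pred, gt), gt)
```

    This is why both numbers are usually reported together: mPJPE still reflects global placement, while PA-mPJPE isolates articulated pose quality, so comparisons across different projection models typically lean on PA-mPJPE.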

    opened by miraclebiu 0
Owner
Microsoft
Open source projects and samples from Microsoft
FAIR's research platform for object detection research, implementing popular algorithms like Mask R-CNN and RetinaNet.

Detectron is deprecated. Please see detectron2, a ground-up rewrite of Detectron in PyTorch. Detectron Detectron is Facebook AI Research's software sy

Facebook Research 25.5k Jan 7, 2023
A modular framework for vision & language multimodal research from Facebook AI Research (FAIR)

MMF is a modular framework for vision and language multimodal research from Facebook AI Research. MMF contains reference implementations of state-of-t

Facebook Research 5.1k Jan 4, 2023
[CVPR 2022] CoTTA Code for our CVPR 2022 paper Continual Test-Time Domain Adaptation

CoTTA Code for our CVPR 2022 paper Continual Test-Time Domain Adaptation Prerequisite Please create and activate the following conda environment. To r

Qin Wang 87 Jan 8, 2023
[CVPR 21] Vectorization and Rasterization: Self-Supervised Learning for Sketch and Handwriting, IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), 2021.

Vectorization and Rasterization: Self-Supervised Learning for Sketch and Handwriting, CVPR 2021. Ayan Kumar Bhunia, Pinaki Nath Chowdhury, Yongxin Yan

Ayan Kumar Bhunia 44 Dec 12, 2022
A sample pytorch Implementation of ACL 2021 research paper "Learning Span-Level Interactions for Aspect Sentiment Triplet Extraction".

Span-ASTE-Pytorch This repository is a pytorch version that implements Ali's ACL 2021 research paper Learning Span-Level Interactions for Aspect Senti

来自丹麦的天籁 10 Dec 6, 2022
Code for our CVPR 2021 paper "MetaCam+DSCE"

Joint Noise-Tolerant Learning and Meta Camera Shift Adaptation for Unsupervised Person Re-Identification (CVPR'21) Introduction Code for our CVPR 2021

FlyingRoastDuck 59 Oct 31, 2022
Official code of the paper "ReDet: A Rotation-equivariant Detector for Aerial Object Detection" (CVPR 2021)

ReDet: A Rotation-equivariant Detector for Aerial Object Detection ReDet: A Rotation-equivariant Detector for Aerial Object Detection (CVPR2021), Jiam

csuhan 334 Dec 23, 2022
Official code for the paper: Deep Graph Matching under Quadratic Constraint (CVPR 2021)

QC-DGM This is the official PyTorch implementation and models for our CVPR 2021 paper: Deep Graph Matching under Quadratic Constraint. It also contain

Quankai Gao 55 Nov 14, 2022
Code for CVPR 2021 paper: Anchor-Free Person Search

Introduction This is the implementation for Anchor-Free Person Search in CVPR2021 License This project is released under the Apache 2.0 license. Inst

null 158 Jan 4, 2023
Code of paper "CDFI: Compression-Driven Network Design for Frame Interpolation", CVPR 2021

CDFI (Compression-Driven-Frame-Interpolation) [Paper] (Coming soon...) | [arXiv] Tianyu Ding*, Luming Liang*, Zhihui Zhu, Ilya Zharkov IEEE Conference

Tianyu Ding 95 Dec 4, 2022
Official code for the CVPR 2021 paper "How Well Do Self-Supervised Models Transfer?"

How Well Do Self-Supervised Models Transfer? This repository hosts the code for the experiments in the CVPR 2021 paper How Well Do Self-Supervised Mod

Linus Ericsson 157 Dec 16, 2022
Demo code for paper "Learning optical flow from still images", CVPR 2021.

Depthstillation Demo code for "Learning optical flow from still images", CVPR 2021. [Project page] - [Paper] - [Supplementary] This code is provided t

null 130 Dec 25, 2022
Code for the upcoming CVPR 2021 paper

The Temporal Opportunist: Self-Supervised Multi-Frame Monocular Depth Jamie Watson, Oisin Mac Aodha, Victor Prisacariu, Gabriel J. Brostow and Michael

Niantic Labs 496 Dec 30, 2022
the code for our CVPR 2021 paper Bilateral Grid Learning for Stereo Matching Network [BGNet]

BGNet This repository contains the code for our CVPR 2021 paper Bilateral Grid Learning for Stereo Matching Network [BGNet] Environment Python 3.6.* C

3DCV developer 87 Nov 29, 2022
the code of the paper: Recurrent Multi-view Alignment Network for Unsupervised Surface Registration (CVPR 2021)

RMA-Net This repo is the implementation of the paper: Recurrent Multi-view Alignment Network for Unsupervised Surface Registration (CVPR 2021). Paper

Wanquan Feng 205 Nov 9, 2022
Code for CVPR 2021 oral paper "Exploring Data-Efficient 3D Scene Understanding with Contrastive Scene Contexts"

Exploring Data-Efficient 3D Scene Understanding with Contrastive Scene Contexts The rapid progress in 3D scene understanding has come with growing dem

Facebook Research 182 Dec 30, 2022
Code for the CVPR 2021 paper: Understanding Failures of Deep Networks via Robust Feature Extraction

Welcome to Barlow Barlow is a tool for identifying the failure modes for a given neural network. To achieve this, Barlow first creates a group of imag

Sahil Singla 33 Dec 5, 2022
The code for the CVPR 2021 paper Neural Deformation Graphs, a novel approach for globally-consistent deformation tracking and 3D reconstruction of non-rigid objects.

Neural Deformation Graphs Project Page | Paper | Video Neural Deformation Graphs for Globally-consistent Non-rigid Reconstruction Aljaž Božič, Pablo P

Aljaz Bozic 134 Dec 16, 2022
CVPR 2021 - Official code repository for the paper: On Self-Contact and Human Pose.

selfcontact This repo is part of our project: On Self-Contact and Human Pose. [Project Page] [Paper] [MPI Project Page] It includes the main function

Lea Müller 68 Dec 6, 2022