Out-of-Domain Human Mesh Reconstruction via Dynamic Bilevel Online Adaptation

Overview

DynaBOA

Code repository for the paper:

Out-of-Domain Human Mesh Reconstruction via Dynamic Bilevel Online Adaptation

Shanyan Guan, Jingwei Xu, Michelle Z. He, Yunbo Wang, Bingbing Ni, Xiaokang Yang

[Paper] [Project Page]

Get Started

DynaBOA has been implemented and tested on Ubuntu 18.04 with Python 3.6.

Clone this repo:

git clone https://github.com/syguan96/DynaBOA.git

Install the requirements using miniconda:

conda env create -f dynaboa-env.yaml

Download the required files from this link. Then unzip the archive and move/rename the result to the data folder.

Running on 3DPW

bash run_on_3dpw.sh
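
Under the hood, the script streams the 3DPW frames in order and adapts the base model online at two levels, following the bilevel scheme named in the paper's title. As a rough illustration only, here is a minimal, self-contained, first-order sketch of that pattern; the toy linear model, random data stream, and MSE losses are stand-ins, not the repo's actual API:

    # First-order sketch of bilevel online adaptation (illustrative only; the
    # real code uses an HMR regressor with 2D reprojection and retrieval losses).
    import copy
    import torch
    import torch.nn as nn

    base = nn.Linear(10, 2)                        # stand-in for the mesh regressor
    outer_opt = torch.optim.Adam(base.parameters(), lr=1e-5)
    mse = nn.MSELoss()

    def stream(n=5):                               # stand-in for the 3DPW frame stream
        for _ in range(n):
            yield torch.randn(4, 10), torch.randn(4, 2)

    for x, evidence in stream():                   # `evidence` plays the role of 2D keypoints
        # Lower level: adapt a temporary copy of the model to the current frame.
        learner = copy.deepcopy(base)
        inner_opt = torch.optim.SGD(learner.parameters(), lr=1e-4)
        inner_opt.zero_grad()
        mse(learner(x), evidence).backward()
        inner_opt.step()
        # Upper level: evaluate the adapted copy and push its gradient back into
        # the base weights, keeping the base a good initialization for the next
        # frame (a first-order approximation of the meta-update).
        inner_opt.zero_grad()
        mse(learner(x), evidence).backward()
        outer_opt.zero_grad()
        for p, q in zip(base.parameters(), learner.parameters()):
            p.grad = q.grad.clone()
        outer_opt.step()

The sketch collapses both levels onto one loss for brevity; in DynaBOA the lower level fits 2D keypoint evidence on the target frame while the upper level meta-updates the base weights.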

Results on 3DPW

All errors are in mm; lower is better.

| Method | Protocol | PA-MPJPE | MPJPE | PVE |
|---|---|---|---|---|
| SPIN | #PS | 59.2 | 96.9 | 135.1 |
| PARE | #PS | 46.4 | 79.1 | 94.2 |
| Mesh Graphormer | #PS | 45.6 | 74.7 | 87.7 |
| DynaBOA (Ours) | #PS | 40.4 | 65.5 | 82.0 |

[Qualitative results figure]

Todo

  • DynaBOA for MPI-INF-3DHP and SURREAL
  • DynaBOA for internet data
Comments
  • Stuck while testing in dataloader

    sh run.sh 
    100%|#####################################################################################################| 2018/2018 [00:00<00:00, 14689.63it/s]
    alphapose-results Total Images: 0 , in fact: 2007
    ---> seed has been set
    ---> model and optimizer have been set
    LEN: 2007
    Adapt:   0%|                                                                                                  | 1/2007 [00:05<3:11:31,  5.73s/it]Adapt:   0%|                                                                                               | 1/2007 [03:50<128:29:03, 230.58s/it]
    Traceback (most recent call last):
      File "dynaboa_internet.py", line 184, in <module>
        adaptor.excute()
      File "dynaboa_internet.py", line 76, in excute
        for step, batch in tqdm(enumerate(self.dataloader), total=len(self.dataloader), desc='Adapt'):
      File "/mnt/zhoudeyu/conda/gl_conda2/envs/alphapose/lib/python3.6/site-packages/tqdm/std.py", line 1195, in __iter__
        for obj in iterable:
      File "/mnt/zhoudeyu/conda/gl_conda2/envs/alphapose/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 435, in __next__
        data = self._next_data()
      File "/mnt/zhoudeyu/conda/gl_conda2/envs/alphapose/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 1068, in _next_data
        idx, data = self._get_data()
      File "/mnt/zhoudeyu/conda/gl_conda2/envs/alphapose/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 1034, in _get_data
        success, data = self._try_get_data()
      File "/mnt/zhoudeyu/conda/gl_conda2/envs/alphapose/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 872, in _try_get_data
        data = self._data_queue.get(timeout=timeout)
      File "/mnt/zhoudeyu/conda/gl_conda2/envs/alphapose/lib/python3.6/multiprocessing/queues.py", line 104, in get
        if not self._poll(timeout):
      File "/mnt/zhoudeyu/conda/gl_conda2/envs/alphapose/lib/python3.6/multiprocessing/connection.py", line 257, in poll
        return self._poll(timeout)
      File "/mnt/zhoudeyu/conda/gl_conda2/envs/alphapose/lib/python3.6/multiprocessing/connection.py", line 414, in _poll
        r = wait([self], timeout)
      File "/mnt/zhoudeyu/conda/gl_conda2/envs/alphapose/lib/python3.6/multiprocessing/connection.py", line 911, in wait
        ready = selector.select(timeout)
      File "/mnt/zhoudeyu/conda/gl_conda2/envs/alphapose/lib/python3.6/selectors.py", line 376, in select
        fd_event_list = self._poll.poll(timeout)
    KeyboardInterrupt
    ^C
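
    A hang inside self._data_queue.get(timeout=timeout) like the one above usually means a stuck DataLoader worker process. A standard first diagnostic (general PyTorch practice, not a repo-specific fix) is to load in the main process with num_workers=0, so any error in __getitem__ surfaces directly:

        import torch
        from torch.utils.data import DataLoader, TensorDataset

        dataset = TensorDataset(torch.randn(8, 3))   # stand-in for the repo's dataset
        # num_workers=0 disables worker processes, so an exception or deadlock in
        # __getitem__ shows up immediately instead of stalling the worker queue.
        loader = DataLoader(dataset, batch_size=1, num_workers=0)
        for (x,) in loader:
            pass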
    
    opened by ChawDoe 23
  • Do I need h36m data to run inference on internet data?

    Hi,

    Thanks for your great work! Do I really need to download the entire h36m dataset in order to run the demo on internet data?

    Following your guide, I ran into issues in the lower-level adaptation step that takes an h36m batch. Is this on purpose, or should it be changed?

    Here lower_level_loss, _ = self.lower_level_adaptation(image, gt_keypoints_2d, h36m_batch, learner)
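
    For context, that call suggests the lower-level step mixes a supervised term on a retrieved H36M batch with the unsupervised 2D term on the target frame. A hedged sketch of that structure (toy stand-in losses and an assumed signature, not the repo's exact code):

        import torch
        import torch.nn as nn

        mse = nn.MSELoss()

        def loss_2d(model, batch):        # toy stand-in for the 2D reprojection loss
            x, kp2d = batch
            return mse(model(x), kp2d)

        def loss_src(model, batch):       # toy stand-in for the supervised H36M loss
            x, gt = batch
            return mse(model(x), gt)

        def lower_level_loss(model, target_batch, h36m_batch=None, w_src=1.0):
            # The H36M term anchors adaptation to the source domain; dropping it
            # is possible in principle, but removes that regularization.
            loss = loss_2d(model, target_batch)
            if h36m_batch is not None:
                loss = loss + w_src * loss_src(model, h36m_batch)
            return loss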

    opened by ChristianIngwersen 16
  • Does inference on internet data still need the h36m dataset?

    class BaseAdaptor():
        def __init__(self, options):
            self.options = options
            self.exppath = osp.join(self.options.expdir, self.options.expname)
            os.makedirs(self.exppath+'/mesh', exist_ok=True)
            os.makedirs(self.exppath+'/image', exist_ok=True)
            os.makedirs(self.exppath+'/result', exist_ok=True)
            self.summary_writer = SummaryWriter(self.exppath)
            self.device = torch.device('cuda')
            # set seed
            self.seed_everything(self.options.seed)
    
            self.options.mixtrain = self.options.lower_level_mixtrain or self.options.upper_level_mixtrain
    
            if self.options.retrieval:
                # # load basemodel's feature
                self.load_h36_cluster_res()
    
            # if self.options.retrieval:
            #     self.h36m_dataset = SourceDataset(datapath='data/retrieval_res/h36m_random_sample_center_10_10.pt')
    
            # set model
    

    For inference only, why is the training dataset still needed?

    opened by jinfagang 14
  • "h36m_part.h5" missed

    Hello. Thanks for your excellent work. I met an error while running your code; it seems I am missing the file "h36m_part.h5". Where can I get it? The failing line:

        File "dynaboa.py", line 94, in __init__
            fin = h5py.File('/dev/shm/h36m_part.h5', 'r')

    opened by MooreManor 12
  • Question about 2d annotation during testing in test datasets

    Hello! Thanks for your great work! I have some confusion about the 2D annotations used during online adaptation. Do you use the 2D ground-truth keypoints of the test set, or do you obtain the 2D keypoints by running the frames through an off-the-shelf 2D pose estimator?

    opened by MooreManor 10
  • Results Using Predicted 2D Keypoints

    Thank you so much for your excellent work! But I ran into some problems while trying to test the model on predicted 2D keypoints (from AlphaPose Fast Pose, the same backbone mentioned in the README) on the 3DPW dataset. This is what I tried:

    • Because it is quite hard to map the multi-person results generated by AlphaPose to the 3DPW ground truth, I selected videos with a single person, listed below:
                           "courtyard_backpack_00",
                           "courtyard_basketball_01",
                           "courtyard_bodyScannerMotions_00",
                           "courtyard_box_00",
                           "courtyard_golf_00",
                           "courtyard_jacket_00",
                            "courtyard_jumpBench_01",
                           "courtyard_laceShoe_00",
                           "courtyard_relaxOnBench_00",
                           "courtyard_relaxOnBench_01",
                           "downtown_stairs_00",
                           "downtown_walkDownhill_00",
                           "flat_guitar_01",
                           "flat_packBags_00",
                           "outdoors_climbing_00",
                           "outdoors_climbing_01",
                           "outdoors_climbing_02",
                           "outdoors_crosscountry_00",
                           "outdoors_fencing_01",
                           "outdoors_freestyle_00",
                           "outdoors_freestyle_01",
                           "outdoors_golf_00",
                           "outdoors_parcours_00",
                           "outdoors_parcours_01",
                           "outdoors_slalom_00",
                           "outdoors_slalom_01",
      
    • Then, I ran the Internet video baseline and got predicted cam, rotmat, beta parameters for each frame.
    • After that, I calculated the MPJPE, PA-MPJPE, and PVE at each step.

    The final results are as follows (plus the MPJPE along the X, Y, and Z axes):

    | Metric | DynaBOA w/ GT 2D | DynaBOA w/ pred 2D |
    |---|---|---|
    | MPJPE | 65.56047058105469 | 186.74376 |
    | PA-MPJPE | 40.92316436767578 | 77.56925 |
    | PVE | 83.11467019999202 | 195.08884 |
    | MPJPE (X axis) | 21.0639544272907 | 67.5 |
    | MPJPE (Y axis) | 25.5786684319053 | 57.8 |
    | MPJPE (Z axis) | 50.4342290491508 | 140.7 |

    I was quite confused about why the results were so bad, so I tried adding a Gaussian perturbation to the ground-truth 2D keypoints and running the 3DPW baseline. The code I changed is as follows:

    https://github.com/syguan96/DynaBOA/blob/b8d2bbe9d8e827a36e72bb324a9a6e43f421ae31/boa_dataset/pw3d.py#L58

    changed to (e.g. sigma=1):

    self.smpl_j2ds.append(smpl_j2ds+np.random.normal(0, 1, size=tuple(smpl_j2ds.shape)))
    

    And here is the result: Gaussian Perturbation on ground truth 2d

    Furthermore, I calculated the mean variance between the ground-truth 2D and the AlphaPose-predicted 2D keypoints, and the result is 12.65. Under the assumption that the detected 2D keypoints are the ground-truth 2D plus Gaussian noise, the results would be expected to be worse.

    So does that mean DynaBOA cannot be combined with detected 2D keypoints? Or is this because of some improper operation on my part?

    Thank you so much for your patience in reading my issue.
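
    One way to probe the Gaussian-noise assumption above is to look at the full per-joint error distribution rather than a single variance number: detector errors are typically heavy-tailed (occlusions, joint swaps), which hurts adaptation far more than i.i.d. noise of the same variance. A small sketch with a hypothetical helper, assuming (N, J, 2) keypoint arrays:

        import numpy as np

        def keypoint_error_stats(pred, gt):
            """Per-joint pixel errors between detected and GT 2D keypoints."""
            err = np.linalg.norm(pred - gt, axis=-1)           # (N, J) distances
            return {'mean': float(err.mean()),
                    'median': float(np.median(err)),
                    'p95': float(np.percentile(err, 95))}      # tail reveals outliers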

    opened by juxuan27 9
  • 3DPW

    Hi and thanks for releasing the code!

    I was wondering how the .npz files in data_extras are generated. Is the 'smpl_j2d' used in the adaptation part for 3DPW the ground-truth 2D keypoints, or is it obtained from AlphaPose? If the former, does it mean that GT 2D keypoints are used during adaptation on the 3DPW test domain?

    Cheers!

    opened by mahsaep 9
  • Can I test on single image or video?

    Hi, thanks for sharing this code. Can I test on a single image or video with it? run_on_3dpw.sh tests on various data; can you give me some guidance on doing this?

    opened by woo1 6
  • Results on internet videos

    Hi, thank you for your work. I followed the steps you provided to test an internet video, but the results are wrong. I have some questions and hope you can help.

    My steps were as follows:

    1. Original video: internet_video.zip

    2. The frames extracted from the video: vd_demo02.zip

    3. 2D keypoints were obtained with openpose_body25; after processing with python process_data.py --dataset internet, the resulting files are: vd_demo02_json_npz.zip

    I modified utils/kp_utils.py to add the OpenPose-25 joint names and skeleton connectivity:

    def get_openpose25_joint_names():
        return ['OP Nose', 'OP Neck', 'OP RShoulder', 'OP RElbow', 'OP RWrist',       # 0-4
                'OP LShoulder', 'OP LElbow', 'OP LWrist', 'OP MidHip', 'OP RHip',     # 5-9
                'OP RKnee', 'OP RAnkle', 'OP LHip', 'OP LKnee', 'OP LAnkle',          # 10-14
                'OP REye', 'OP LEye', 'OP REar', 'OP LEar', 'OP LBigToe',             # 15-19
                'OP LSmallToe', 'OP LHeel', 'OP RBigToe', 'OP RSmallToe', 'OP RHeel'] # 20-24

    def get_openpose25_skeleton():
        return np.array([[1, 8], [1, 2], [1, 5], [2, 3], [3, 4], [5, 6], [6, 7],
                         [8, 9], [9, 10], [10, 11], [8, 12], [12, 13], [13, 14],
                         [1, 0], [0, 15], [15, 17], [0, 16], [16, 18], [14, 19],
                         [19, 20], [14, 21], [11, 22], [22, 23], [11, 24]])

    4. Testing h36m and the internet video together:
       (1) I downloaded the images of the h36m dataset; their attribute information is stored in data/retrieval_res, right?
       (2) Checkpoint: data/basemodel.pt
       (3) The log printed during testing:

    (p_366) lina@lina-MS-7C82:~/lxh_3D_pose_estimate/DynaBOA-main$ bash run_on_internet.sh
    ---> seed has been set
    ---> model and optimizer have been set
    1 75
    LEN: 75
    Adapt: 0%| | 0/75 [00:00<?, ?it/s]
    imgname: imageFiles/h36m_s1/images/S7_Sitting_1.60457274_004151.jpg
    imgname: imageFiles/h36m_s1/images/S8_Sitting_1.60457274_001016.jpg
    imgname: imageFiles/h36m_s1/images/S8_Sitting.60457274_001986.jpg
    imgname: imageFiles/h36m_s1/images/S5_Phoning_1.58860488_000841.jpg
    imgname: imageFiles/h36m_s1/images/S5_Discussion_2.55011271_001061.jpg
    imgname: imageFiles/h36m_s1/images/S7_Discussion_1.55011271_004621.jpg
    imgname: imageFiles/h36m_s1/images/S7_Directions_1.60457274_001211.jpg
    imgname: imageFiles/h36m_s1/images/S6_Waiting_3.55011271_001331.jpg
    imgname: imageFiles/h36m_s1/images/S6_Photo_1.60457274_001266.jpg
    Adapt: 1%|█▊ | 1/75 [00:06<08:08, 6.60s/it]
    imgname: imageFiles/h36m_s1/images/S8_Greeting.60457274_000686.jpg
    imgname: imageFiles/h36m_s1/images/S1_Posing.55011271_000126.jpg
    imgname: imageFiles/h36m_s1/images/S8_Discussion.60457274_000446.jpg
    imgname: imageFiles/h36m_s1/images/S6_Greeting_1.60457274_001291.jpg
    imgname: imageFiles/h36m_s1/images/S6_Waiting_3.60457274_000211.jpg
    imgname: imageFiles/h36m_s1/images/S1_Posing_1.60457274_000516.jpg
    imgname: imageFiles/h36m_s1/images/S8_WalkTogether_1.55011271_001301.jpg
    imgname: imageFiles/h36m_s1/images/S6_WalkTogether_1.60457274_001466.jpg
    imgname: imageFiles/h36m_s1/images/S1_Posing.55011271_000106.jpg
    Adapt: 3%|███▋ | 2/75 [00:08<04:41,
    ... ...
    Adapt: 99%|███████████████████████████████████████████████████▏ | 74/75 [02:32<00:02, 2.25s/it]
    imgname: imageFiles/h36m_s1/images/S1_Waiting.55011271_001686.jpg
    imgname: imageFiles/h36m_s1/images/S7_Directions.55011271_001891.jpg
    imgname: imageFiles/h36m_s1/images/S5_Purchases_1.55011271_000126.jpg
    imgname: imageFiles/h36m_s1/images/S5_Photo_2.55011271_002106.jpg
    imgname: imageFiles/h36m_s1/images/S6_Waiting_3.55011271_001941.jpg
    imgname: imageFiles/h36m_s1/images/S5_Waiting_2.55011271_000751.jpg
    imgname: imageFiles/h36m_s1/images/S5_Photo.55011271_003156.jpg
    imgname: imageFiles/h36m_s1/images/S5_Walking_1.60457274_002091.jpg
    imgname: imageFiles/h36m_s1/images/S8_Eating_1.55011271_002041.jpg
    Adapt: 100%|█████████████████████████████████████████████████████| 75/75 [02:34<00:00, 2.06s/it]

    (4) Final reconstruction result (unsatisfactory): video_result.zip

    My questions:

    1. Are the poor results caused by a problem in my steps or my data processing?
    2. When testing with the internet video only, the 75 images take about 1m30s, with the same results as above. Is that speed normal? Is there any way to improve it?

    Thank you for reading this patiently; I hope you can answer my questions. Thanks!

    helpful for Openpose user 
    opened by emojiLee 5
  • File generation under the data/retrieval_res folder

    Thanks for your great work. I want to know how the files in the 'data/retrieval_res' folder are generated. Could you share the scripts that generate them? Thanks!

    opened by guoguangchao 3
  • Will you release training code?

    How many datasets did you use? Can you share some training experience for reaching the performance reported in the paper? E.g.:

    1. whether you used a pretrained model or trained from scratch
    2. which losses were used and their weights
    3. the optimization method
    4. the training datasets and their weights
    5. the number of training epochs

    Best!

    opened by StayYouth1993 3
  • About mixtrain in DynaBOA code

    Good morning,

    Just a question regarding the mixtrain argument in the DynaBOA code (including lower_level_mixtrain and upper_level_mixtrain). I am confused about what exactly mixtrain is. According to the code, it seems to let the model apply a label loss on the h3.6m batch extracted by retrieval. However, the paper does not seem to mention mixtrain, though it does mention an ablation study on "Adapting to non-stationary streaming data with highly mixed 3DPW and 3DHP videos." I am not sure whether the mixtrain procedure really uses 3DHP videos in the code. Could you clarify how to understand mixtrain in the DynaBOA code in relation to the T-PAMI paper?

    opened by dqj5182 3
  • About pretrained HMR model

    Hi! I recently have a question about your pretrained HMR model. The HMR model was also trained on 2D datasets such as LSP, LSP-extended, MPII, and MS COCO; may I ask whether you kept them when training HMR, in addition to Human3.6M? That is, how did you generate the input checkpoint file, data/basemodel.pt?

    Maybe this question seems stupid, but I want to use your processing approach as a reference. I hope you can give me some guidance. Thank you!

    opened by Mirandl 2
  • About 3dpw evaluation

    Hi! Thanks for the good work! I have some questions about how you test on the multi-person sequences of 3DPW. Do you evaluate only one person, or every person one by one?

    opened by isshpan 1
  • TypeError: Caught TypeError in DataLoader worker process 1.

    Hi! I have met a problem:

    ---> seed has been set
    ---> model and optimizer have been set
    pw3d: 37
    WARNING: You are using a SMPL model, with only 10 shape coefficients.
    WARNING: You are using a SMPL model, with only 10 shape coefficients.
    WARNING: You are using a SMPL model, with only 10 shape coefficients.
    1%| | 200/35515 [09:03<23:55:21, 2.44s/it] Step:199: MPJPE:108.14312744140625, PAMPJPE:52.69846725463867, PVE:94.87768046557903
    1%| | 400/35515 [17:31<28:17:38, 2.90s/it] Step:399: MPJPE:99.1795425415039, PAMPJPE:47.766239166259766, PVE:88.05403053760529
    2%|▏ | 600/35515 [25:37<13:24:34, 1.38s/it] Step:599: MPJPE:91.56452178955078, PAMPJPE:47.820899963378906, PVE:84.10537596791983
    2%|▏ | 800/35515 [30:35<18:15:24, 1.89s/it] Step:799: MPJPE:84.9598159790039, PAMPJPE:44.537147521972656, PVE:82.7759880432859
    3%|▎ | 1000/35515 [37:18<28:50:51, 3.01s/it] Step:999: MPJPE:79.5338134765625, PAMPJPE:42.920555114746094, PVE:81.89351376518607
    3%|▎ | 1200/35515 [45:19<19:36:31, 2.06s/it] Step:1199: MPJPE:76.7173080444336, PAMPJPE:42.85281753540039, PVE:82.61778503345947
    4%|▍ | 1400/35515 [50:35<19:12:54, 2.03s/it] Step:1399: MPJPE:74.18508911132812, PAMPJPE:40.78797912597656, PVE:80.56003755224603
    5%|▍ | 1600/35515 [56:14<15:07:32, 1.61s/it] Step:1599: MPJPE:73.13909912109375, PAMPJPE:41.25379943847656, PVE:77.55705753806978
    5%|▌ | 1800/35515 [1:00:38<10:39:45, 1.14s/it] Step:1799: MPJPE:72.77523803710938, PAMPJPE:42.25539016723633, PVE:76.45247981366184
    6%|▌ | 2000/35515 [1:05:01<9:35:38, 1.03s/it] Step:1999: MPJPE:73.12327575683594, PAMPJPE:42.23678970336914, PVE:76.57051902078092
    6%|▌ | 2200/35515 [1:09:42<14:54:10, 1.61s/it] Step:2199: MPJPE:72.85092163085938, PAMPJPE:41.97563171386719, PVE:75.81348557533188
    7%|▋ | 2400/35515 [1:13:13<11:36:43, 1.26s/it] Step:2399: MPJPE:71.45988464355469, PAMPJPE:41.051570892333984, PVE:74.75854421810557
    7%|▋ | 2600/35515 [1:17:23<10:32:56, 1.15s/it] Step:2599: MPJPE:71.4017562866211, PAMPJPE:41.07121276855469, PVE:76.12829965467637
    8%|▊ | 2800/35515 [1:21:17<6:36:03, 1.38it/s] Step:2799: MPJPE:70.69111633300781, PAMPJPE:41.131832122802734, PVE:76.1530753770577
    8%|▊ | 3000/35515 [1:24:56<8:37:23, 1.05it/s] Step:2999: MPJPE:69.84212493896484, PAMPJPE:40.65814971923828, PVE:75.713991060853
    9%|▉ | 3200/35515 [1:29:11<9:58:27, 1.11s/it] Step:3199: MPJPE:68.54174041748047, PAMPJPE:39.8968620300293, PVE:74.9515091557987
    9%|▉ | 3257/35515 [1:30:18<14:54:30, 1.66s/it]
    Traceback (most recent call last):
      File "/home/DynaBOA/dynaboa_benchmark.py", line 292, in <module>
        adaptor.excute()
      File "/home/DynaBOA/dynaboa_benchmark.py", line 89, in excute
        for step, batch in tqdm(enumerate(self.dataloader), total=len(self.dataloader)):
      File "/home/miniconda3/envs/DynaBOA-env/lib/python3.6/site-packages/tqdm/std.py", line 1166, in __iter__
        for obj in iterable:
      File "/home/miniconda3/envs/DynaBOA-env/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 517, in __next__
        data = self._next_data()
      File "/home/miniconda3/envs/DynaBOA-env/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 1199, in _next_data
        return self._process_data(data)
      File "/home/miniconda3/envs/DynaBOA-env/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 1225, in _process_data
        data.reraise()
      File "/home/miniconda3/envs/DynaBOA-env/lib/python3.6/site-packages/torch/_utils.py", line 429, in reraise
        raise self.exc_type(msg)
    TypeError: Caught TypeError in DataLoader worker process 1.
    Original Traceback (most recent call last):
      File "/home/miniconda3/envs/DynaBOA-env/lib/python3.6/site-packages/torch/utils/data/_utils/worker.py", line 202, in _worker_loop
        data = fetcher.fetch(index)
      File "/home/miniconda3/envs/DynaBOA-env/lib/python3.6/site-packages/torch/utils/data/_utils/fetch.py", line 44, in fetch
        data = [self.dataset[idx] for idx in possibly_batched_index]
      File "/home/miniconda3/envs/DynaBOA-env/lib/python3.6/site-packages/torch/utils/data/_utils/fetch.py", line 44, in <listcomp>
        data = [self.dataset[idx] for idx in possibly_batched_index]
      File "/home/DynaBOA/boa_dataset/pw3d.py", line 101, in __getitem__
        image = self.read_image(imgname, index)
      File "/home/DynaBOA/boa_dataset/pw3d.py", line 146, in read_image
        img = cv2.imread(imgname)[:, :, ::-1].copy().astype(np.float32)
    TypeError: 'NoneType' object is not subscriptable

    Process finished with exit code 1

    The metrics all drop quickly, but... have you ever met this problem? I tried but failed to find where this error comes from. It seems that some data are needed but missing. Maybe the labeled data?

    I'm sure I followed all your instructions, and the Human3.6M and 3DPW data have been placed properly and completely. Could you please give me a little guidance? I would really appreciate it. Thank you!
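
    For reference, the last traceback frame shows cv2.imread returning None, which is what OpenCV does (instead of raising) when a path is missing or unreadable, so the failure usually points at wrong image paths. A defensive variant of the failing read, illustrative rather than the repo's code:

        import cv2
        import numpy as np

        def read_image(imgname):
            img = cv2.imread(imgname)   # returns None instead of raising on a bad path
            if img is None:
                raise FileNotFoundError(f'cv2.imread failed for: {imgname}')
            return img[:, :, ::-1].copy().astype(np.float32)  # BGR -> RGB, float32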

    opened by Mirandl 1
  • Could you show the final tree of human3.6m dataset according to your processing code?

    Hi, your work is marvellous and I'm trying to reproduce your code. Since I processed this dataset a while ago, I wonder whether my data structure is exactly the same as yours. It would help me a lot if you could share the structure of your processed Human3.6M dataset! Thank you, and I look forward to your reply.

    opened by Mirandl 4