[CVPR 2022] PoseTriplet: Co-evolving 3D Human Pose Estimation, Imitation, and Hallucination under Self-supervision (Oral)

Overview

PoseTriplet: Co-evolving 3D Human Pose Estimation, Imitation, and Hallucination under Self-supervision

Kehong Gong*, Bingbing Li*, Jianfeng Zhang*, Tao Wang*, Jing Huang, Michael Bi Mi, Jiashi Feng, Xinchao Wang

CVPR 2022 (Oral Presentation, arxiv)

Framework

PoseTriplet consists of three components: an estimator, an imitator, and a hallucinator.

The three components form a dual loop during training, complementing and strengthening one another.
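The following sketch illustrates the dual-loop schedule at a high level. It is a conceptual outline only: every function name below is a placeholder for illustration, not an actual entry point in this repository, and the real components are the estimator, the RL-based imitator, and the hallucinator described above.

# Schematic sketch of the co-evolving dual loop (round-based training).
def train_estimator(paired_2d_3d):
    """Fit the 2D-to-3D pose estimator on (2D, 3D) pairs; returns a predictor."""
    return lambda pose_2d: pose_2d             # placeholder mapping

def run_imitator(raw_3d):
    """Physics-based imitation: replay estimated 3D poses in simulation to obtain physically plausible motion."""
    return raw_3d                              # placeholder

def run_hallucinator(clean_3d):
    """Motion hallucination: synthesize novel, diverse motion from the imitated motion."""
    return clean_3d + clean_3d                 # placeholder: enlarge the data

def project_to_virtual_2d(motions_3d):
    """Project 3D motion through virtual cameras to form paired (2D, 3D) data."""
    return [(m, m) for m in motions_3d]        # placeholder pairs

video_2d = ["seq_a", "seq_b"]                  # 2D poses detected from videos
pairs = project_to_virtual_2d(video_2d)        # bootstrap round

for round_idx in range(1, 4):                  # e.g. rounds 1..3, as in the demo below
    estimator = train_estimator(pairs)                        # (1) estimation
    raw_3d = [estimator(p2d) for p2d, _ in pairs]
    imitated = run_imitator(raw_3d)                           # (2) imitation
    hallucinated = run_hallucinator(imitated)                 # (3) hallucination
    pairs = project_to_virtual_2d(hallucinated)               # supervise the next round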

Improvement through co-evolving

Below is the imitated motion across training rounds. The estimator and imitator improve over successive rounds, so the imitated motion becomes more accurate and realistic from round 1 to round 3.

Video demo

04806-supp.mp4

Comparison

Here we compare our results with two recent works, Yu et al. and Hu et al.

Installation

  • Please refer to README_env.md for the Python environment setup.

Data Preparation

Training

Please refer to script-summary for the training process. We also provide a checkpoint folder here with better performance, which suggests that this framework has the potential to reach the same performance as fully-supervised approaches.
Note: the checkpoint for the RL policy is not included due to the size limitation; please follow the training code to train the policy.

Inference

We provide inference code here. Please follow the instructions and download the pretrained model to run inference on videos.
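For a quick sanity check of the downloaded estimator checkpoint before running the full script, a minimal sketch follows. It assumes the .bin file is a standard PyTorch checkpoint, as the inference log ("Loading checkpoint ./checkpoint/ckpt_ep_045.bin") suggests; the key names inside are not guaranteed.

import torch

ckpt_path = './checkpoint/ckpt_ep_045.bin'     # checkpoint path used by the inference script
checkpoint = torch.load(ckpt_path, map_location='cpu')

# Print the top-level structure so you can see what the file contains
# (e.g. epoch counter, optimizer state, model weights) before using it.
print(type(checkpoint))
if isinstance(checkpoint, dict):
    for key, value in checkpoint.items():
        print(key, type(value))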

Talk

Here is a slides talk (slides in English, spoken in Chinese).

Citation

If you find this code useful for your research, please consider citing the following paper:

@inproceedings{gong2022posetriplet,
  title      = {PoseTriplet: Co-evolving 3D Human Pose Estimation, Imitation, and Hallucination under Self-supervision},
  author     = {Gong, Kehong and Li, Bingbing and Zhang, Jianfeng and Wang, Tao and Huang, Jing and Mi, Michael Bi and Feng, Jiashi and Wang, Xinchao},
  booktitle  = {CVPR},
  year       = {2022}
}
Comments
  • Can this model run inference on a single image?

    I want to find out whether this model can run inference online. Does it need frames from the future when inferring the current frame? Or, as the title asks, can this model run inference on a single image?

    opened by tiger990111 10
  • ValueError in Inference

    When I ran the inference script python videopose-j16-wild-eval_run.py, I got the following errors. Could you help me?

    -------------- prepare video clip spends 0.03 seconds
    -------------- load keypoint spends 0.05 seconds
    Loading checkpoint ./checkpoint/ckpt_ep_045.bin
    -------------- load 3D model spends 3.81 seconds
    -------------- generate reconstruction 3D data spends 0.53 seconds
    Loading checkpoint ./checkpoint/ckpt_ep_045.bin
    -------------- load 3D Traj model spends 0.16 seconds
    -------------- generate reconstruction 3D data spends 0.02 seconds
    Rendering... save to ./wild_eval/333_scale2D_010/bilibili-clip/kunkun_clip_alpha_pose.mp4
    ===========================> This video get 49 frames in total.
      2%|##2                                                                                                          | 1/49 [00:00<00:10,  4.57it/s]Traceback (most recent call last):
      File "videopose-j16-wild-eval_run.py", line 288, in <module>
        Vis.redering()
      File "videopose-j16-wild-eval_run.py", line 44, in redering
        self.visalizatoin(anim_output)
      File "videopose-j16-wild-eval_run.py", line 232, in visalizatoin
        input_video_skip=args.viz_skip)
      File "/mnt/zhoudeyu/project/save_video/dengyuanzhang/posetriplet/PoseTriplet-main/estimator_inference/common/visualization.py", line 195, in render_animation
        anim.save(output, writer=writer)
      File "/root/miniconda3/envs/alphapose/lib/python3.6/site-packages/matplotlib/animation.py", line 1174, in save
        writer.grab_frame(**savefig_kwargs)
      File "/root/miniconda3/envs/alphapose/lib/python3.6/contextlib.py", line 99, in __exit__
        self.gen.throw(type, value, traceback)
      File "/root/miniconda3/envs/alphapose/lib/python3.6/site-packages/matplotlib/animation.py", line 232, in saving
        self.finish()
      File "/root/miniconda3/envs/alphapose/lib/python3.6/site-packages/matplotlib/animation.py", line 358, in finish
        self.cleanup()
      File "/root/miniconda3/envs/alphapose/lib/python3.6/site-packages/matplotlib/animation.py", line 395, in cleanup
        out, err = self._proc.communicate()
      File "/root/miniconda3/envs/alphapose/lib/python3.6/subprocess.py", line 863, in communicate
        stdout, stderr = self._communicate(input, endtime, timeout)
      File "/root/miniconda3/envs/alphapose/lib/python3.6/subprocess.py", line 1525, in _communicate
        selector.register(self.stdout, selectors.EVENT_READ)
      File "/root/miniconda3/envs/alphapose/lib/python3.6/selectors.py", line 351, in register
        key = super().register(fileobj, events, data)
      File "/root/miniconda3/envs/alphapose/lib/python3.6/selectors.py", line 237, in register
        key = SelectorKey(fileobj, self._fileobj_lookup(fileobj), events, data)
      File "/root/miniconda3/envs/alphapose/lib/python3.6/selectors.py", line 224, in _fileobj_lookup
        return _fileobj_to_fd(fileobj)
      File "/root/miniconda3/envs/alphapose/lib/python3.6/selectors.py", line 39, in _fileobj_to_fd
        "{!r}".format(fileobj)) from None
    ValueError: Invalid file object: <_io.BufferedReader name=30>
      6%|######6                                                                                                      | 3/49 [00:00<00:07,  5.99it/s]
    
    opened by ChawDoe 8
  • How to accelerate the policy training

    Hi, I changed the loss function used to train the 3D pose estimator. So I should train the policy starting from iteration helix_1 and cannot use the policy you trained, right? However, I found it time-consuming to train the policy (python pose_imitation/pose_mimic.py --cfg subject_h36m_helix_1 --num-threads 52 --mocap-folder ./checkpoint/exp_h36m_gt2d_v5/helix_1). About 11 hours were needed on my server to get a model such as models/iter_0200.p. Would you please tell me how to accelerate the policy training process? Thank you!

    opened by gyou2021 7
  • About custom training and inference

    Thanks to the authors for the excellent work and for sharing the implementation with everyone! I just watched the replay of your video talk, and it was great! I am building an application on top of 3D pose and have a few questions:

    1. Since this is for an application, I mainly care about the estimator. At inference time, can we use only the estimator, or do we still need the imitator to make the motion sequence more realistic? How large is the performance gap between inference (not training) with and without the imitator?
    2. Regarding the whole framework, my understanding is that the three modules are structurally decoupled and replaceable, right? In other words, can I customize the estimator implementation and then train only the estimator using the pretrained imitator and hallucinator?
    opened by PGogo 5
  • Training issue: the model's joint count (17) does not equal the input 2D joint count (16) when evaluating S911, 3DHP & 3DPW

    estimator/common/model.py", line 66, in forward assert x.shape[-2] == self.num_joints_in AssertionError

    Debugging in def _model_preparation_pos(self) of posegan_basementclass.py: self.poses_valid_2d[0].shape[-2] is 17, while self.dataset.skeleton().num_joints() is 16.

    opened by gyou2021 4
  • Lower body skeleton issue

    Hi, I am trying to get the 3D pose of the upper body only. Is there a way to filter out or remove the lower-body keypoints completely? I am getting the following output where the legs are distorted, but I don't want the lower-body joints to be visible in my output.

    [screenshot: rendered output with distorted lower-body joints]
    opened by Abi5678 4
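A minimal sketch of one possible workaround for the issue above: mask or drop the lower-body joints in the predicted array before rendering. The joint indices below are placeholders only; check the 16-joint skeleton definition used by the estimator for the actual ordering.

import numpy as np

# prediction: (num_frames, num_joints, 3) array produced by the estimator.
prediction = np.random.randn(49, 16, 3)        # dummy data for illustration

# Placeholder indices for hips / knees / ankles; the real indices depend on
# the 16-joint skeleton used by this repository.
LOWER_BODY = [1, 2, 3, 4, 5, 6]

# Option A: drop the lower-body joints entirely before visualization.
upper_only = np.delete(prediction, LOWER_BODY, axis=1)
print(upper_only.shape)                         # (49, 10, 3)

# Option B: keep the array shape but pin the lower body to the root joint,
# so distorted legs collapse into the pelvis instead of being drawn.
pinned = prediction.copy()
pinned[:, LOWER_BODY, :] = pinned[:, :1, :]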
  • no such file or directory: './data_cross/3dhp/3dhp_testset_bySub.pkl'

    Hello, I am very interested in your excellent work! When I run the code in script-summary-gt2d-v5.sh, I get the following error: no such file or directory: './data_cross/3dhp/3dhp_testset_bySub.pkl'. How can I get this file? Thank you! https://github.com/Garfield-kh/PoseTriplet/blob/eb93132f99161bd776dafbcb713e9fb43e501c36/imitator/script-summary-gt2d-v5.sh#L30

    opened by JiahongWu1995 3
  • multi-person 3D pose estimation

    Hi Gong,

    We are using your impressive work for 3D pose reconstruction from video. In general, it works quite well for single-person scenarios. However, when I tried to apply it to multi-person scenarios, it seems to track the pose of only one character, and the tracking is not consistent on one person but jumps around between different people. Is it possible to apply your work to multi-person 3D pose estimation? Thank you!

    opened by dhhjx880713 3
  • Result on H36M

    Hi, in your paper the PoseTriplet result on H36M is 68.2 in terms of MPJPE (P1) and 45.1 for P2, using GT 2D poses. I viewed your training results. Is the 68.2 from s911_flip_p1 (tag: eval_P_epoch_real/s911_flip_p1)? However, I found smaller values on the graph. Where do the P1 results for 3DHP and 3DPW come from (from eval_P_epoch_real)? Thank you!

    good first issue 
    opened by gyou2021 2
  • The parameter value of pose_mimic_eval.py

    Hi, in imitator/script-summary-gt2d-v5.sh, line 159 reads ">> inference the RL result using trained model" and line 160 runs python pose_imitation/pose_mimic_eval.py --cfg subject_h36mrib --data train --num-threads 52 --iter 1200 ..... Why was 1200 selected for --iter? How should this value be chosen? Thank you!

    opened by gyou2021 2
  • Posegan_train

    Hi, in imitator/script-summary-gt2d-v5.sh, line 182 says "# use default setting #5 if it does not crash". Why should we run posegan_train.py four times in lines 175-180? It seems each posegan_train.py run is independent of the others. Is it right that we only need to run line 177, CUDA_VISIBLE_DEVICES=0 python posegan_train.py --note vp_5_wrib --add_random_cam...? Thank you.

    opened by gyou2021 2
  • CVE-2007-4559 Patch

    Patching CVE-2007-4559

    Hi, we are security researchers from the Advanced Research Center at Trellix. We have begun a campaign to patch a widespread bug named CVE-2007-4559. CVE-2007-4559 is a 15-year-old bug in the Python tarfile package. By using extract() or extractall() on a tarfile object without sanitizing the input, a maliciously crafted .tar file could perform a directory path traversal attack. We found at least one unsanitized extractall() in your codebase and are providing a patch for you via pull request. The patch essentially checks whether all tarfile members will be extracted safely and throws an exception otherwise. We encourage you to use this patch or your own solution to secure against CVE-2007-4559. Further technical information about the vulnerability can be found in this blog.

    If you have further questions, you may contact us through this project's lead researcher, Kasimir Schulz.

    opened by TrellixVulnTeam 0
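For reference, the safety check described in the issue above typically looks like the following sketch, a common pattern for guarding tarfile.extractall() against path traversal. It is not the exact code from the pull request.

import os
import tarfile

def safe_extractall(tar: tarfile.TarFile, path: str = '.') -> None:
    """Extract the archive only if every member stays inside the target directory."""
    base = os.path.abspath(path)
    for member in tar.getmembers():
        target = os.path.abspath(os.path.join(path, member.name))
        if target != base and not target.startswith(base + os.sep):
            raise RuntimeError('Blocked path traversal in tar member: ' + member.name)
    tar.extractall(path)

# Usage:
# with tarfile.open('archive.tar.gz') as tar:
#     safe_extractall(tar, './data')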
  • How to project the predicted 3D pose to the 2D pose

    Hi, I want to project the predicted 3D pose back to 2D to see the difference between this 2D pose and the 2D pose from which the 3D pose was derived. I wonder whether it is right to use the project_to_2d(X, camera_params) method in camera.py with the real parameters of the camera that took the images, because the real camera parameters were not used during training. How should I project the predicted 3D pose to a 2D pose? Thank you.

    opened by gyou2021 4
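As a follow-up to the question above, here is a generic pinhole-projection sketch. This is the textbook formulation with made-up intrinsics, not necessarily identical to project_to_2d in camera.py (which may also handle distortion and normalized coordinates). Note that the predicted pose is root-relative, so the predicted root trajectory has to be added and the result expressed in the camera frame before projecting.

import numpy as np

def project_pinhole(points_3d, fx, fy, cx, cy):
    """Project (N, 3) camera-space points to (N, 2) pixel coordinates.

    points_3d must already be expressed in the camera coordinate frame,
    with z > 0 pointing away from the camera.
    """
    x = points_3d[:, 0] / points_3d[:, 2]
    y = points_3d[:, 1] / points_3d[:, 2]
    return np.stack([fx * x + cx, fy * y + cy], axis=-1)

# Example with assumed intrinsics: one 16-joint pose roughly 4 m from the camera.
pose_cam = np.random.randn(16, 3) * 0.3 + np.array([0.0, 0.0, 4.0])
pose_2d = project_pinhole(pose_cam, fx=1145.0, fy=1145.0, cx=512.0, cy=512.0)
print(pose_2d.shape)                            # (16, 2)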
  • Regarding the distance between the camera and the subject

    Hi, Mr. Garfield-kh. Sorry, I have to post a new issue.

    This may be a fundamental question, but is there any particular distance between the subject and the camera required for this system to accurately produce a 3D estimate? I am wondering if it has something to do with the "4m" that Mr. Garfield-kh mentioned.

    Also, I feel that the 3D estimation is worse when the subject is a child. Do you know of any way to adjust for that?

    Thank you.

    opened by jin-tmoya 4
  • How to extract coordinates of 3D keypoints

    Hi, I am not good at English, so I apologize if my English is misleading.

    I want to extract the coordinates of the 3D keypoints (x, y, z), but it doesn't work. As for what I tried, I printed the value of prediction (videopose-j16-wild-eval_run.py, line 199).

    The results are shown here (we used "kunkun_clip.mp4"):

    -------------- prepare video clip spends 0.02 seconds
    -------------- load keypoint spends 0.01 seconds
    Loading checkpoint ./checkpoint/ckpt_ep_045.bin
    -------------- load 3D model spends 0.08 seconds

    [[[ 5.93354198e-05 1.70869516e-05 3.85639858e-08] [-1.03275985e-01 -2.98896129e-03 3.72205190e-02] [-2.07049251e-01 4.06995893e-01 -1.07592411e-01] ... [-1.62893653e-01 -4.70567942e-01 -5.11920266e-02] [-1.94243610e-01 -2.81453490e-01 1.39230907e-01] [-2.94799685e-01 -1.80989355e-01 1.17846029e-02]]

    [[ 6.81496749e-05 1.57090817e-05 4.34617711e-08] [-1.01091936e-01 -2.09802575e-03 4.19775918e-02] [-2.11011797e-01 4.05896515e-01 -9.80924964e-02] ... [-1.53813958e-01 -4.74190831e-01 -3.30972932e-02] [-1.90179735e-01 -3.07276726e-01 1.67530388e-01] [-3.05322289e-01 -2.26756632e-01 6.73069879e-02]]

    [[ 7.13446352e-05 1.38120804e-05 4.19068655e-08] [-1.00091867e-01 -7.69968377e-04 4.70813289e-02] [-2.13834763e-01 4.10951346e-01 -9.52427685e-02] ... [-1.51114359e-01 -4.77976769e-01 -2.20416579e-02] [-2.04482198e-01 -3.37212771e-01 1.90199822e-01] [-3.38903546e-01 -2.81002909e-01 1.29762441e-01]]

    ...

    [[-6.72895112e-06 1.62988144e-05 -1.20641790e-08] [-1.03694782e-01 6.92390744e-03 5.06759100e-02] [-1.96537614e-01 4.33968723e-01 -1.27974689e-01] ... [-3.33037019e-01 -3.92906040e-01 -1.15641028e-01] [-5.39204121e-01 -3.08923870e-01 3.72153409e-02] [-7.58739769e-01 -2.83981621e-01 -3.50418091e-02]]

    [[ 4.52273525e-08 1.58211951e-05 -1.74119279e-08] [-1.01883560e-01 7.35222874e-03 5.44899777e-02] [-2.18860939e-01 4.36264157e-01 -1.25121012e-01] ... [-3.36887985e-01 -3.93161565e-01 -1.23000994e-01] [-5.36568999e-01 -2.95687914e-01 3.57882604e-02] [-7.49117494e-01 -2.61148334e-01 -4.28803191e-02]]

    [[ 1.09871326e-05 1.75449386e-05 -2.58652264e-08] [-1.00538105e-01 9.60641168e-03 5.87101802e-02] [-2.38827333e-01 4.35711741e-01 -1.21683776e-01] ... [-3.37355673e-01 -3.86507511e-01 -1.32569075e-01] [-5.34336150e-01 -2.70994544e-01 2.65108123e-02] [-7.29883015e-01 -2.23541379e-01 -6.84319139e-02]]]

    -------------- generate reconstruction 3D data spends 0.03 seconds
    Loading checkpoint ./checkpoint/ckpt_ep_045.bin
    -------------- load 3D Traj model spends 0.10 seconds

    [[[-0.0608316 0.06176288 4.3910446 ]]

    [[-0.06548614 0.07058202 4.388146 ]]

    [[-0.06756089 0.07601852 4.394839 ]]

    [[-0.06937461 0.07606728 4.4062023 ]]

    [[-0.07935582 0.08046775 4.4164486 ]]

    [[-0.08117009 0.08611105 4.454586 ]]

    [[-0.08033934 0.08394711 4.507877 ]]

    [[-0.08038983 0.08460207 4.5325723 ]]

    [[-0.08086307 0.09273802 4.5473223 ]]

    [[-0.08537923 0.11209127 4.5626073 ]]

    [[-0.0958671 0.14113364 4.5830708 ]]

    [[-0.10283907 0.16764934 4.5974092 ]]

    [[-0.10631857 0.20077157 4.6156845 ]]

    [[-0.11503273 0.22333497 4.6380987 ]]

    [[-0.12763707 0.23823354 4.685437 ]]

    [[-0.14130223 0.24021342 4.7722874 ]]

    [[-0.15757567 0.22182588 4.8492317 ]]

    [[-0.17067264 0.19555132 4.9141035 ]]

    [[-0.1773699 0.16885322 4.966269 ]]

    [[-0.18369074 0.14390603 4.9619675 ]]

    [[-0.18909636 0.13704099 4.9318147 ]]

    [[-0.19336607 0.13380723 4.8802266 ]]

    [[-0.19484845 0.13180962 4.823578 ]]

    [[-0.19598952 0.13649382 4.7753596 ]]

    [[-0.19962612 0.14402375 4.781396 ]]

    [[-0.20210056 0.15149407 4.802383 ]]

    [[-0.20000105 0.1506775 4.8272696 ]]

    [[-0.20134133 0.14728308 4.8650827 ]]

    [[-0.20255381 0.15239106 4.8854446 ]]

    [[-0.19994022 0.16628543 4.8886423 ]]

    [[-0.19105588 0.17989056 4.8842635 ]]

    [[-0.1787905 0.20058975 4.8489604 ]]

    [[-0.16615632 0.22787926 4.804517 ]]

    [[-0.15367337 0.2493041 4.7636037 ]]

    [[-0.14456213 0.25460663 4.743619 ]]

    [[-0.13935232 0.24980387 4.7353373 ]]

    [[-0.13870691 0.23478699 4.73121 ]]

    [[-0.14449558 0.21044308 4.72014 ]]

    [[-0.14755204 0.1812517 4.7154474 ]]

    [[-0.1413371 0.1611051 4.6990433 ]]

    [[-0.1335179 0.1436337 4.6672726 ]]

    [[-0.12924318 0.12828264 4.672592 ]]

    [[-0.1243439 0.11969504 4.647041 ]]

    [[-0.11906209 0.11369721 4.621543 ]]

    [[-0.11781553 0.10911711 4.630979 ]]

    [[-0.11602639 0.10793802 4.638523 ]]

    [[-0.1132953 0.10446329 4.6481276 ]]

    [[-0.11169393 0.09817347 4.6719456 ]]

    [[-0.11350296 0.09695254 4.6916246 ]]]

    I am not sure if this way of outputting the values is correct, or if it should be done differently in the first place. If you know the right way to do it, please let me know. Thank you.

    opened by jin-tmoya 13
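As a follow-up to the issue above: printing prediction does show the root-relative 3D joints. To keep the coordinates for later use, one option is to save them with NumPy. The sketch below assumes the shapes reported by the inference run, (num_frames, num_joints, 3) for the pose and (num_frames, 1, 3) for the trajectory, and that the trajectory is the root position in the camera frame, as in VideoPose3D-style pipelines; these are assumptions, not guarantees about this repository.

import numpy as np

# Shapes follow the printed output above:
#   prediction:      root-relative 3D joints, (num_frames, num_joints, 3)
#   prediction_traj: root trajectory,         (num_frames, 1, 3)
prediction = np.random.randn(49, 16, 3)         # dummy stand-in for the real array
prediction_traj = np.random.randn(49, 1, 3)     # dummy stand-in for the real array

# Global joint positions = per-joint offsets + root trajectory (broadcast over joints).
keypoints_3d = prediction + prediction_traj

np.save('keypoints_3d.npy', keypoints_3d)       # reload later with np.load
print(keypoints_3d[0])                          # (x, y, z) of every joint in frame 0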