[ICRA 2022] CaTGrasp: Learning Category-Level Task-Relevant Grasping in Clutter from Simulation

Overview

This is the official implementation of our paper:

Bowen Wen, Wenzhao Lian, Kostas Bekris, and Stefan Schaal. "CaTGrasp: Learning Category-Level Task-Relevant Grasping in Clutter from Simulation." IEEE International Conference on Robotics and Automation (ICRA) 2022.

Abstract

Task-relevant grasping is critical for industrial assembly, where downstream manipulation tasks constrain the set of valid grasps. Learning how to perform this task, however, is challenging, since task-relevant grasp labels are hard to define and annotate. There is also no consensus yet on proper representations for modeling or on off-the-shelf tools for performing task-relevant grasps. This work proposes a framework to learn task-relevant grasping for industrial objects without the need for time-consuming real-world data collection or manual annotation. To achieve this, the entire framework is trained solely in simulation, including supervised training with synthetic label generation and self-supervised, hand-object interaction. In the context of this framework, this paper proposes a novel, object-centric canonical representation at the category level, which allows establishing dense correspondence across object instances and transferring task-relevant grasps to novel instances. Extensive experiments on task-relevant grasping of densely-cluttered industrial objects are conducted in both simulation and real-world setups, demonstrating the effectiveness of the proposed framework.

Bibtex

@article{wen2021catgrasp,
  title={CaTGrasp: Learning Category-Level Task-Relevant Grasping in Clutter from Simulation},
  author={Wen, Bowen and Lian, Wenzhao and Bekris, Kostas and Schaal, Stefan},
  journal={ICRA 2022},
  year={2022}
}

Supplementary Video

Click to watch

ICRA 2022 Presentation Video

Quick Setup

We provide a Docker environment, so setup takes only a few commands.

  • If you haven't installed Docker, install it first (https://docs.docker.com/get-docker/).

  • Run

    docker pull wenbowen123/catgrasp:latest
    
  • To enter the Docker container, run:

    cd  docker && bash run_container.sh
    cd /home/catgrasp && bash build.sh
    

    Now the environment is ready to run training or testing.

Data

Download the data from https://archive.cs.rutgers.edu/pracsys/catgrasp/ and extract it so that the repository layout looks like this:

  catgrasp
  ├── artifacts
  ├── data
  └── urdf

Testing

python run_grasp_simulation.py

You should see the demo start. You can play with the settings in config_run.yml, including switching to different object instances within the category while using the same framework.
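
A quick way to see what is tunable is to load the config and inspect its keys; for example (the two keys modified below are only hypothetical illustrations, use whichever names config_run.yml actually contains):

    import yaml

    with open('config_run.yml') as f:
        cfg = yaml.safe_load(f)

    print(sorted(cfg.keys()))                     # list all available settings
    # Hypothetical keys: switch to another instance of the same category before re-running the demo.
    cfg['class_name'] = 'nut'
    cfg['test_obj_name'] = 'nut_carr_95505A631'

    with open('config_run.yml', 'w') as f:
        yaml.safe_dump(cfg, f)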

Training

In the following, we take the nut category as an example to walk through the full training pipeline.

  • Compute signed distance function for all objects of the category

    python make_sdf.py --class_name nut
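
    make_sdf.py precomputes a signed distance field (a voxel grid of distances to the object surface, with its own origin and resolution) for each mesh; later steps query it for fast proximity checks. Purely as an illustration of how such a grid is consumed (not the repository's implementation), a trilinear SDF lookup in numpy looks like this:

    import numpy as np

    def query_sdf(sdf_grid, origin, resolution, points):
        """Trilinearly interpolate signed distances at 3D query points.

        sdf_grid   -- (X, Y, Z) signed-distance values on a regular voxel grid
        origin     -- (3,) world coordinates of voxel (0, 0, 0)
        resolution -- voxel size in meters
        points     -- (N, 3) query points in world coordinates
        """
        dims = np.array(sdf_grid.shape)
        g = (np.asarray(points) - origin) / resolution   # continuous grid coordinates
        g = np.clip(g, 0, dims - 1.000001)               # keep queries inside the grid
        g0 = np.floor(g).astype(int)
        t = g - g0
        out = np.zeros(len(points))
        for dx in (0, 1):
            for dy in (0, 1):
                for dz in (0, 1):
                    w = ((t[:, 0] if dx else 1 - t[:, 0]) *
                         (t[:, 1] if dy else 1 - t[:, 1]) *
                         (t[:, 2] if dz else 1 - t[:, 2]))
                    out += w * sdf_grid[g0[:, 0] + dx, g0[:, 1] + dy, g0[:, 2] + dz]
        return out   # sign convention (negative inside vs. outside) depends on the SDF generator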
    
  • Pre-compute grasps for the training objects offline. This generates grasps and evaluates their quality regardless of task-relevance. To visualize and debug the grasp quality evaluation, change to --debug 1:

    python generate_grasp.py --class_name nut --debug 0
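
    The grasps generated here are scored for quality independently of the task. As a simplified illustration of the kind of geometric test involved in scoring a parallel-jaw grasp (not the exact metric used by generate_grasp.py), an antipodal check verifies that both contact normals lie inside the friction cone of the closing axis:

    import numpy as np

    def is_antipodal(p1, n1, p2, n2, mu=0.4):
        """Check the antipodal condition for a two-contact parallel-jaw grasp.

        p1, p2 -- contact points, shape (3,)
        n1, n2 -- outward surface normals at the contacts, shape (3,)
        mu     -- friction coefficient defining the friction cone half-angle
        """
        axis = p2 - p1
        axis = axis / (np.linalg.norm(axis) + 1e-12)       # closing axis from contact 1 to contact 2
        half_angle = np.arctan(mu)
        # The squeezing force at each contact must stay inside that contact's friction cone.
        ang1 = np.arccos(np.clip(np.dot(-n1, axis), -1.0, 1.0))   # force along +axis at contact 1
        ang2 = np.arccos(np.clip(np.dot(n2, axis), -1.0, 1.0))    # force along -axis at contact 2
        return ang1 <= half_angle and ang2 <= half_angle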
    
  • Self-supervised task-relevance discovery in simulation

    python pybullet_env/env_semantic_grasp.py --class_name nut --debug 0
    

    Changing --debug 0 to --debug 1 lets you debug and visualize the process.

    The affordance results will be saved in data/object_models. The heatmap file XXX_affordance_vis visualizes the result: warmer areas mark regions with a higher task-relevant grasping probability P(T|G).
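
    Conceptually, this heatmap estimates, for each point of the canonical model, the probability that grasping near that point leads to success on the downstream task. A minimal sketch of accumulating such a statistic from simulated trials (function and variable names are assumptions for illustration, not the repository's code):

    import numpy as np

    def accumulate_affordance(model_pts, contact_pts, successes, sigma=0.003):
        """Estimate P(task success | grasp at point) over a canonical point cloud.

        model_pts   -- (N, 3) canonical object points
        contact_pts -- (M, 3) grasp contact locations tried in simulation
        successes   -- (M,) bool outcome of the downstream task for each trial
        """
        hit = np.zeros(len(model_pts))
        total = np.zeros(len(model_pts))
        for c, ok in zip(contact_pts, successes):
            # Soft-assign each trial to nearby model points with a Gaussian kernel.
            w = np.exp(-np.sum((model_pts - c) ** 2, axis=1) / (2 * sigma ** 2))
            total += w
            hit += w * float(ok)
        return hit / np.maximum(total, 1e-6)   # warmer values = more task-relevant grasp regions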

  • Make the canonical model that stores category-level knowledge

    python make_canonical.py --class_name nut
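
    The canonical model maps every instance of a category into a shared, per-axis normalized space so that dense correspondences and the task-relevance statistics above can be transferred to novel instances. A rough sketch of such a non-uniform normalization (illustrative only, not the exact construction in make_canonical.py):

    import numpy as np

    def to_nunocs(points):
        """Map an object point cloud into a per-axis normalized unit cube.

        Unlike a single uniform scale, each axis is scaled independently, which keeps
        thin or elongated industrial parts well spread inside [0, 1]^3.
        Returns the normalized points plus the offset/scale needed to invert the map.
        """
        lo = points.min(axis=0)
        extent = np.maximum(points.max(axis=0) - lo, 1e-9)   # per-axis size
        nunocs = (points - lo) / extent                      # now inside the unit cube
        return nunocs, lo, extent

    # points ~= nunocs * extent + lo recovers the original coordinates.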
    

  • Training data generation of piles

    python generate_pile_data.py --class_name nut
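
    Pile scenes are created by dropping instances into the workspace in simulation and letting them settle. A bare-bones pybullet sketch of the idea (the URDF names, object count, and drop ranges below are placeholders, not the repository's settings):

    import numpy as np
    import pybullet as p
    import pybullet_data

    p.connect(p.DIRECT)                       # headless physics; use p.GUI to watch
    p.setAdditionalSearchPath(pybullet_data.getDataPath())
    p.setGravity(0, 0, -9.81)
    p.loadURDF("plane.urdf")                  # placeholder ground; the repo uses its own bin/urdf assets

    rng = np.random.default_rng(0)
    for _ in range(10):                       # drop several instances to form a cluttered pile
        pos = [rng.uniform(-0.05, 0.05), rng.uniform(-0.05, 0.05), 0.2]
        orn = p.getQuaternionFromEuler(rng.uniform(0, np.pi, size=3).tolist())
        p.loadURDF("nut.urdf", basePosition=pos, baseOrientation=orn)   # hypothetical object URDF
        for _ in range(240):                  # let the object settle before dropping the next one
            p.stepSimulation()

    p.disconnect()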
    

  • Process training data, including generating ground-truth labels

    python tool.py
    
  • To train NUNOCS net, examine the settings in config_nunocs.yml, then

    python train_nunocs.py
    
  • To train grasping-Q net, examine the settings in config_grasp.yml, then

    python train_grasp.py
    
  • To train instance segmentation net, examine the settings in PointGroup/config/config_pointgroup.yaml, then

    python train_pointgroup.py
    
Comments
  • Problem with python generate_grasp.py

    Hello, thank you for your work. I cloned your code and ran python generate_grasp.py --class_name nut --debug 0, but there is a bug (see the attached screenshot, 2022-05-11 21-15-43). Have you met this problem, and how can I fix it?

    opened by sunhan1997 13
  • Problem when running the program

    Hi Dr. Wen. I would like to ask a question about running the code: the first time I run python generate_grasp.py --class_name nut --debug 1, it runs normally without errors, but the second time I run it, the error below appears. Do you know how this situation can be handled?

    (catgrasp) root@yqw:/home/catgrasp# python generate_grasp.py --class_name nut --debug 1
    pybullet build time: Dec  1 2021 18:33:04
    Gripper hand_depth: 0.018883
    Gripper init_bite: 0.005
    Gripper max_width: 0.048
    Gripper hand_height: 0.020832
    Gripper finger_width: 0.00586
    Gripper hand_outer_diameter: 0.061398
    Sdf3D self.dims_=[168 168 168], self.resolution_=0.000994335, self.origin_=[-0.0835218 -0.083531   0.0678743], center_sdf=-0.0309976, boundary_sdf=0.0808156
    sdf_dir /home/catgrasp/dexnet/grasping/../../urdf/robotiq_hande/gripper_enclosed_air_tight.sdf
    Sdf3D self.dims_=[168 168 168], self.resolution_=0.000994683, self.origin_=[-0.083551  -0.0835602  0.0678726], center_sdf=-0.0309976, boundary_sdf=0.080857
    obj_dirs:
     /home/catgrasp/data/object_models/nut_LBNR12-screw.obj
    /home/catgrasp/data/object_models/nut_carr_95505A631.obj
    /home/catgrasp/data/object_models/nut_carr_95496A380_MIL_SPEC.obj
    /home/catgrasp/data/object_models/nut_carr_95010A240_GRADE.obj
    /home/catgrasp/data/object_models/nut_carr_92362A160_TYPE_18-8.obj
    /home/catgrasp/data/object_models/nut_carr_91034A423_LOW-STRENGTH.obj
    /home/catgrasp/data/object_models/nut_carr_90580A717_EXTRA-WD.obj
    /home/catgrasp/data/object_models/nut_carr_90566A271_LOW-STRENGTH.obj
    /home/catgrasp/data/object_models/nut_carr_90565A061_GRADE.obj
    /home/catgrasp/data/object_models/nut_carr_90387A512_GLASS-FILLED.obj
    /home/catgrasp/data/object_models/nut_carr_90215A433_NICKEL-PLATED.obj
    /home/catgrasp/data/object_models/nut_carr_6407T760_HIGH-PRESSURE-VACUUM.obj
    obj_dir /home/catgrasp/data/object_models/nut_LBNR12-screw.obj
    estimated resolution=0.0018148186202093218
    #sphere_pts=30
    #sample_ids=157
    begin center_ob_between_gripper...
    Filtering #grasp_poses=113668
    Traceback (most recent call last):
      File "generate_grasp.py", line 152, in <module>
        generate_grasp_one_object_complete_space(obj_dir)
      File "generate_grasp.py", line 97, in generate_grasp_one_object_complete_space
        grasps = ags.sample_grasps(background_pts=np.ones((1,3))*99999,points_for_sample=points_for_sample,normals_for_sample=normals_for_sample,num_grasps=np.inf,max_num_samples=np.inf,n_sphere_dir=30,approach_step=0.005,ee_in_grasp=np.eye(4),cam_in_world=np.eye(4),upper=np.ones((7))*999,lower=-np.ones((7))*999,open_gripper_collision_pts=np.ones((1,3))*999999,center_ob_between_gripper=True,filter_ik=False,filter_approach_dir_face_camera=False,adjust_collision_pose=False)
      File "/home/catgrasp/dexnet/grasping/grasp_sampler.py", line 216, in sample_grasps
        grasp_poses = my_cpp.filterGraspPose(grasp_poses,list(symmetry_tfs),nocs_pose,canonical_to_nocs,cam_in_world,ee_in_grasp,gripper_in_grasp,filter_approach_dir_face_camera,filter_ik,adjust_collision_pose,upper,lower,self.gripper.trimesh.vertices,self.gripper.trimesh.faces,self.gripper.trimesh_enclosed.vertices,self.gripper.trimesh_enclosed.faces,open_gripper_collision_pts,background_pts,resolution,verbose)
    AttributeError: module 'my_cpp' has no attribute 'filterGraspPose'
    

    A similar problem also occurs when running python run_grasp_simulation.py: the first run works fine, but the second run fails with the error below.

    Traceback (most recent call last):
      File "/opt/project/run_grasp_simulation.py", line 30, in <module>
        from predicter import *
      File "/opt/project/predicter.py", line 21, in <module>
        import PointGroup.data.dataset_seg as dataset_seg
      File "/opt/project/PointGroup/data/dataset_seg.py", line 19, in <module>
        from lib.pointgroup_ops.functions import pointgroup_ops
      File "/opt/project/PointGroup/data/../lib/pointgroup_ops/functions/pointgroup_ops.py", line 8, in <module>
        import PG_OP
    ModuleNotFoundError: No module named 'PG_OP'
    
    opened by PoistRXE 8
  • Question about code comprehension

    Hi Dr. Wen, I would like to ask about several transformation and pose matrices. In the implementation of my_cpp.filterGraspPose():

    vectorMatrix4f filterGraspPose(const vectorMatrix4f grasp_poses, const vectorMatrix4f symmetry_tfs, const Eigen::Matrix4f nocs_pose, const Eigen::Matrix4f canonical_to_nocs_transform, const Eigen::Matrix4f cam_in_world, const Eigen::Matrix4f ee_in_grasp, const Eigen::Matrix4f gripper_in_grasp, bool filter_approach_dir_face_camera, bool filter_ik, bool adjust_collision_pose, const std::vector<double> upper, const std::vector<double> lower, const Eigen::MatrixXf gripper_vertices, const Eigen::MatrixXi gripper_faces, const Eigen::MatrixXf gripper_enclosed_vertices, const Eigen::MatrixXi gripper_enclosed_faces, const Eigen::MatrixXf gripper_collision_pts, const Eigen::MatrixXf gripper_enclosed_collision_pts, float octo_resolution, bool verbose)
    {
      vectorMatrix4f out;
      Eigen::Matrix4f canonical_to_cam = nocs_pose*canonical_to_nocs_transform;
      std::cout<<"canonical_to_cam:\n"<<canonical_to_cam<<"\n\n";
    
      int n_approach_dir_rej = 0;
      int n_ik_rej = 0;
      int n_open_gripper_rej = 0;
      int n_close_gripper_rej = 0;
    
    omp_set_num_threads(int(std::thread::hardware_concurrency()));
    #pragma omp parallel firstprivate(grasp_poses,symmetry_tfs,nocs_pose,canonical_to_nocs_transform,cam_in_world,ee_in_grasp,gripper_in_grasp,upper,lower,gripper_vertices,gripper_faces,gripper_enclosed_vertices,gripper_enclosed_faces,gripper_collision_pts,gripper_enclosed_collision_pts,canonical_to_cam)
    {
      vectorMatrix4f out_local;
      int n_approach_dir_rej_local = 0;
      int n_ik_rej_local = 0;
      int n_open_gripper_rej_local = 0;
      int n_close_gripper_rej_local = 0;
      // collision efficiency test
      CollisionManager cm;
      int gripper_id = cm.registerMesh(gripper_vertices,gripper_faces);
      cm.registerPointCloud(gripper_collision_pts,octo_resolution);
    
      CollisionManager cm_bg;
      int gripper_enclosed_id = cm_bg.registerMesh(gripper_enclosed_vertices,gripper_enclosed_faces);
      cm_bg.registerPointCloud(gripper_enclosed_collision_pts,octo_resolution);
    
      #pragma omp for schedule(dynamic)
      for (int i=0;i<grasp_poses.size();i++)
      {
        const auto &grasp_pose = grasp_poses[i];
        for (int j=0;j<symmetry_tfs.size();j++)
        {
          const auto &tf = symmetry_tfs[j];
          Eigen::Matrix4f tmp_grasp_pose = tf*grasp_pose;
          Eigen::Matrix4f grasp_in_cam = canonical_to_cam*tmp_grasp_pose;
    
          for (int col=0;col<3;col++)
          {
            grasp_in_cam.block(0,col,3,1).normalize();
          }
    		// filter_approach_dir_face_camera -- True
          if (filter_approach_dir_face_camera)
          {
          // grasp approach direction
            Eigen::Vector3f approach_dir = grasp_in_cam.block(0,0,3,1);
          // normalize
            approach_dir.normalize();
          // z-axis component of the approach direction
            float dot = approach_dir.dot(Eigen::Vector3f(0,0,1));
            if (dot<0)
            {
                //verbose -- True
              if (verbose)
              {
                n_approach_dir_rej_local++;
              }
              continue;
            }
          }
    		// filter_ik -- False
          if (filter_ik)
          {
            Eigen::Matrix4f ee_in_base = cam_in_world*grasp_in_cam*ee_in_grasp;
            auto sols = get_ik_within_limits(ee_in_base,upper,lower);
            if (sols.size()==0)
            {
              if (verbose)
              {
                n_ik_rej_local++;
              }
              continue;
            }
          }
    		// adjust_collision_pose -- False
          if (!adjust_collision_pose)
          {
            Eigen::Matrix4f gripper_in_cam = grasp_in_cam*gripper_in_grasp;
            cm.setTransform(gripper_in_cam,gripper_id);
            if (cm.isAnyCollision())
            {
                //verbose -- True
              if (verbose)
              {
                n_open_gripper_rej_local++;
              }
              continue;
            }
            // gripper_enclosed_id
            cm_bg.setTransform(gripper_in_cam,gripper_enclosed_id);
            if (cm_bg.isAnyCollision())
            {
                //verbose -- True
              if (verbose)
              {
                n_close_gripper_rej_local++;
              }
              continue;
            }
          }
          else
          {
            Eigen::Vector3f major_dir = grasp_in_cam.block(0,1,3,1);
            bool found = false;
            for (float step=0.0;step<=0.003;step+=0.001)
            {
              std::vector<int> signs = {1,-1};
              if (step==0)
              {
                signs = {1};
              }
              for (auto sign:signs)
              {
                Eigen::Matrix4f cur_grasp_in_cam = grasp_in_cam;
                cur_grasp_in_cam.block(0,3,3,1) += step*major_dir*sign;
                Eigen::Matrix4f cur_gripper_in_cam = cur_grasp_in_cam*gripper_in_grasp;
                cm.setTransform(cur_gripper_in_cam,gripper_id);
                if (cm.isAnyCollision())
                {
                  continue;
                }
    
                cm_bg.setTransform(cur_gripper_in_cam,gripper_enclosed_id);
                if (cm_bg.isAnyCollision())
                {
                  continue;
                }
    
                grasp_in_cam = cur_grasp_in_cam;
                found = true;
                break;
              }
              if (found)
              {
                break;
              }
            }
              
            if (!found)
            {
              grasp_in_cam.setZero();
              n_open_gripper_rej_local++;
            }
          }
    
            // put the collision-free transform matrices into the output
          if (grasp_in_cam!=Eigen::Matrix4f::Zero())
          {
            out_local.push_back(grasp_in_cam);
          }
        }
      }
    
      #pragma omp critical
      {
        n_approach_dir_rej += n_approach_dir_rej_local;
        n_ik_rej += n_ik_rej_local;
        n_open_gripper_rej += n_open_gripper_rej_local;
        n_close_gripper_rej += n_close_gripper_rej_local;
        for (int i=0;i<out_local.size();i++)
        {
          out.push_back(out_local[i]);
        }
      }
    }
    
        // verbose -- True
      if (verbose)
      {
        printf("n_approach_dir_rej=%d, n_ik_rej=%d, n_open_gripper_rej=%d, n_close_gripper_rej=%d\n",n_approach_dir_rej,n_ik_rej,n_open_gripper_rej,n_close_gripper_rej);
      }
      return out;
    }
    

    You use four transformation matrices (canonical_to_nocs_transform, canonical_to_cam, canonical_to_nocs_transform, symmetry_tfs) and two pose matrices (nocs_pose, grasp_pose). I would like to describe my own understanding; could you point out where it is right or wrong?

    1. nocs_pose: equivalent to R_{nocs}^{cam}, i.e., the transform of the normalized object coordinate space relative to the camera frame;
    2. grasp_pose: the grasp pose matrix expressed in the camera coordinate frame;
    3. canonical_to_cam: equivalent to R_{canonical}^{cam}, i.e., the transform of the canonical frame relative to the camera frame;
    4. canonical_to_nocs_transform: the transform of the canonical frame relative to the normalized object coordinate space;
    5. symmetry_tfs: I do not quite understand why this is a 4x4 identity matrix; does it have a specific function?
    6. canonical: which coordinate frame does this correspond to?

    I also have a few data-related questions. In the implementation of my_cpp.filterGraspPose(), you use two sets of gripper vertices (gripper_enclosed_vertices and gripper_vertices) for collision detection. When I plot both as point clouds in MeshLab, gripper_enclosed_vertices has a few fewer points than gripper_vertices, but I do not understand the difference between them or their respective roles. Could you explain?
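
    For reference, the chain of transforms used in the snippet above can be written out in a few lines of numpy (the names mirror the C++ arguments; this is only an illustration of the composition, not the repository's implementation):

    import numpy as np

    # Frames, as used in filterGraspPose above:
    #   canonical_to_nocs -- canonical category space -> normalized (NUNOCS) space
    #   nocs_pose         -- normalized space -> camera frame (the predicted pose)
    #   grasp_pose        -- grasp pose expressed in the canonical space
    #   symmetry_tf       -- a discrete symmetry of the object in canonical space
    #                        (identity when the object has no such symmetry)
    def grasp_to_camera(grasp_pose, symmetry_tf, nocs_pose, canonical_to_nocs):
        canonical_to_cam = nocs_pose @ canonical_to_nocs        # canonical -> camera
        grasp_in_cam = canonical_to_cam @ symmetry_tf @ grasp_pose
        # Re-normalize the rotation columns, since the chain can include non-uniform scale.
        for col in range(3):
            grasp_in_cam[:3, col] /= np.linalg.norm(grasp_in_cam[:3, col])
        return grasp_in_cam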

    opened by PoistRXE 6
  • Environment setup problem

    Hi Dr. Wen, I previously watched the CaTGrasp video you shared on 3D视觉工坊 and would like to learn your method, but I ran into some problems: I installed Docker on Ubuntu 18.04 and pulled the catgrasp image, but bash run_container.sh fails to run successfully. Could you tell me what else needs to be configured to run bash run_container.sh successfully, or what environment I would need so that I can train the networks directly on my own Ubuntu machine without Docker?

    opened by PoistRXE 5
  • size mismatch between ckpt and model

    Hi,

    When I run python run_grasp_simulation.py, I encounter a size-mismatch error when loading PointGroupPredictor from artifacts/artifacts-77:

    size mismatch for input_conv.0.weight: copying a param with shape torch.Size([3, 3, 3, 6, 16]) from checkpoint, the shape in current model is torch.Size([16, 3, 3, 3, 6]).
    
    Full error report (section 1):

    Error(s) in loading state_dict for PointGroup: size mismatch for input_conv.0.weight: copying a param with shape torch.Size([3, 3, 3, 6, 16]) from checkpoint, the shape in current model is torch.Size([16, 3, 3, 3, 6]). size mismatch for unet.blocks.block0.conv_branch.2.weight: copying a param with shape torch.Size([3, 3, 3, 16, 16]) from checkpoint, the shape in current model is torch.Size([16, 3, 3, 3, 16]). size mismatch for unet.blocks.block0.conv_branch.5.weight: copying a param with shape torch.Size([3, 3, 3, 16, 16]) from checkpoint, the shape in current model is torch.Size([16, 3, 3, 3, 16]). size mismatch for unet.blocks.block1.conv_branch.2.weight: copying a param with shape torch.Size([3, 3, 3, 16, 16]) from checkpoint, the shape in current model is torch.Size([16, 3, 3, 3, 16]). size mismatch for unet.blocks.block1.conv_branch.5.weight: copying a param with shape torch.Size([3, 3, 3, 16, 16]) from checkpoint, the shape in current model is torch.Size([16, 3, 3, 3, 16]). size mismatch for unet.conv.2.weight: copying a param with shape torch.Size([2, 2, 2, 16, 32]) from checkpoint, the shape in current model is torch.Size([32, 2, 2, 2, 16]). size mismatch for unet.u.blocks.block0.conv_branch.2.weight: copying a param with shape torch.Size([3, 3, 3, 32, 32]) from checkpoint, the shape in current model is torch.Size([32, 3, 3, 3, 32]). size mismatch for unet.u.blocks.block0.conv_branch.5.weight: copying a param with shape torch.Size([3, 3, 3, 32, 32]) from checkpoint, the shape in current model is torch.Size([32, 3, 3, 3, 32]). size mismatch for unet.u.blocks.block1.conv_branch.2.weight: copying a param with shape torch.Size([3, 3, 3, 32, 32]) from checkpoint, the shape in current model is torch.Size([32, 3, 3, 3, 32]). size mismatch for unet.u.blocks.block1.conv_branch.5.weight: copying a param with shape torch.Size([3, 3, 3, 32, 32]) from checkpoint, the shape in current model is torch.Size([32, 3, 3, 3, 32]). size mismatch for unet.u.conv.2.weight: copying a param with shape torch.Size([2, 2, 2, 32, 48]) from checkpoint, the shape in current model is torch.Size([48, 2, 2, 2, 32]). size mismatch for unet.u.u.blocks.block0.conv_branch.2.weight: copying a param with shape torch.Size([3, 3, 3, 48, 48]) from checkpoint, the shape in current model is torch.Size([48, 3, 3, 3, 48]). size mismatch for unet.u.u.blocks.block0.conv_branch.5.weight: copying a param with shape torch.Size([3, 3, 3, 48, 48]) from checkpoint, the shape in current model is torch.Size([48, 3, 3, 3, 48]). size mismatch for unet.u.u.blocks.block1.conv_branch.2.weight: copying a param with shape torch.Size([3, 3, 3, 48, 48]) from checkpoint, the shape in current model is torch.Size([48, 3, 3, 3, 48]). size mismatch for unet.u.u.blocks.block1.conv_branch.5.weight: copying a param with shape torch.Size([3, 3, 3, 48, 48]) from checkpoint, the shape in current model is torch.Size([48, 3, 3, 3, 48]). size mismatch for unet.u.u.conv.2.weight: copying a param with shape torch.Size([2, 2, 2, 48, 64]) from checkpoint, the shape in current model is torch.Size([64, 2, 2, 2, 48]). size mismatch for unet.u.u.u.blocks.block0.conv_branch.2.weight: copying a param with shape torch.Size([3, 3, 3, 64, 64]) from checkpoint, the shape in current model is torch.Size([64, 3, 3, 3, 64]). size mismatch for unet.u.u.u.blocks.block0.conv_branch.5.weight: copying a param with shape torch.Size([3, 3, 3, 64, 64]) from checkpoint, the shape in current model is torch.Size([64, 3, 3, 3, 64]). 
size mismatch for unet.u.u.u.blocks.block1.conv_branch.2.weight: copying a param with shape torch.Size([3, 3, 3, 64, 64]) from checkpoint, the shape in current model is torch.Size([64, 3, 3, 3, 64]). size mismatch for unet.u.u.u.blocks.block1.conv_branch.5.weight: copying a param with shape torch.Size([3, 3, 3, 64, 64]) from checkpoint, the shape in current model is torch.Size([64, 3, 3, 3, 64]). size mismatch for unet.u.u.u.conv.2.weight: copying a param with shape torch.Size([2, 2, 2, 64, 80]) from checkpoint, the shape in current model is torch.Size([80, 2, 2, 2, 64]). size mismatch for unet.u.u.u.u.blocks.block0.conv_branch.2.weight: copying a param with shape torch.Size([3, 3, 3, 80, 80]) from checkpoint, the shape in current model is torch.Size([80, 3, 3, 3, 80]). size mismatch for unet.u.u.u.u.blocks.block0.conv_branch.5.weight: copying a param with shape torch.Size([3, 3, 3, 80, 80]) from checkpoint, the shape in current model is torch.Size([80, 3, 3, 3, 80]). size mismatch for unet.u.u.u.u.blocks.block1.conv_branch.2.weight: copying a param with shape torch.Size([3, 3, 3, 80, 80]) from checkpoint, the shape in current model is torch.Size([80, 3, 3, 3, 80]). size mismatch for unet.u.u.u.u.blocks.block1.conv_branch.5.weight: copying a param with shape torch.Size([3, 3, 3, 80, 80]) from checkpoint, the shape in current model is torch.Size([80, 3, 3, 3, 80]). size mismatch for unet.u.u.u.u.conv.2.weight: copying a param with shape torch.Size([2, 2, 2, 80, 96]) from checkpoint, the shape in current model is torch.Size([96, 2, 2, 2, 80]). size mismatch for unet.u.u.u.u.u.blocks.block0.conv_branch.2.weight: copying a param with shape torch.Size([3, 3, 3, 96, 96]) from checkpoint, the shape in current model is torch.Size([96, 3, 3, 3, 96]). size mismatch for unet.u.u.u.u.u.blocks.block0.conv_branch.5.weight: copying a param with shape torch.Size([3, 3, 3, 96, 96]) from checkpoint, the shape in current model is torch.Size([96, 3, 3, 3, 96]). size mismatch for unet.u.u.u.u.u.blocks.block1.conv_branch.2.weight: copying a param with shape torch.Size([3, 3, 3, 96, 96]) from checkpoint, the shape in current model is torch.Size([96, 3, 3, 3, 96]). size mismatch for unet.u.u.u.u.u.blocks.block1.conv_branch.5.weight: copying a param with shape torch.Size([3, 3, 3, 96, 96]) from checkpoint, the shape in current model is torch.Size([96, 3, 3, 3, 96]). size mismatch for unet.u.u.u.u.u.conv.2.weight: copying a param with shape torch.Size([2, 2, 2, 96, 112]) from checkpoint, the shape in current model is torch.Size([112, 2, 2, 2, 96]). size mismatch for unet.u.u.u.u.u.u.blocks.block0.conv_branch.2.weight: copying a param with shape torch.Size([3, 3, 3, 112, 112]) from checkpoint, the shape in current model is torch.Size([112, 3, 3, 3, 112]). size mismatch for unet.u.u.u.u.u.u.blocks.block0.conv_branch.5.weight: copying a param with shape torch.Size([3, 3, 3, 112, 112]) from checkpoint, the shape in current model is torch.Size([112, 3, 3, 3, 112]). size mismatch for unet.u.u.u.u.u.u.blocks.block1.conv_branch.2.weight: copying a param with shape torch.Size([3, 3, 3, 112, 112]) from checkpoint, the shape in current model is torch.Size([112, 3, 3, 3, 112]). size mismatch for unet.u.u.u.u.u.u.blocks.block1.conv_branch.5.weight: copying a param with shape torch.Size([3, 3, 3, 112, 112]) from checkpoint, the shape in current model is torch.Size([112, 3, 3, 3, 112]). 
size mismatch for unet.u.u.u.u.u.deconv.2.weight: copying a param with shape torch.Size([2, 2, 2, 112, 96]) from checkpoint, the shape in current model is torch.Size([96, 2, 2, 2, 112]). size mismatch for unet.u.u.u.u.u.blocks_tail.block0.i_branch.0.weight: copying a param with shape torch.Size([1, 1, 1, 192, 96]) from checkpoint, the shape in current model is torch.Size([96, 1, 1, 1, 192]). size mismatch for unet.u.u.u.u.u.blocks_tail.block0.conv_branch.2.weight: copying a param with shape torch.Size([3, 3, 3, 192, 96]) from checkpoint, the shape in current model is torch.Size([96, 3, 3, 3, 192]). size mismatch for unet.u.u.u.u.u.blocks_tail.block0.conv_branch.5.weight: copying a param with shape torch.Size([3, 3, 3, 96, 96]) from checkpoint, the shape in current model is torch.Size([96, 3, 3, 3, 96]). size mismatch for unet.u.u.u.u.u.blocks_tail.block1.conv_branch.2.weight: copying a param with shape torch.Size([3, 3, 3, 96, 96]) from checkpoint, the shape in current model is torch.Size([96, 3, 3, 3, 96]). size mismatch for unet.u.u.u.u.u.blocks_tail.block1.conv_branch.5.weight: copying a param with shape torch.Size([3, 3, 3, 96, 96]) from checkpoint, the shape in current model is torch.Size([96, 3, 3, 3, 96]). size mismatch for unet.u.u.u.u.deconv.2.weight: copying a param with shape torch.Size([2, 2, 2, 96, 80]) from checkpoint, the shape in current model is torch.Size([80, 2, 2, 2, 96]). size mismatch for unet.u.u.u.u.blocks_tail.block0.i_branch.0.weight: copying a param with shape torch.Size([1, 1, 1, 160, 80]) from checkpoint, the shape in current model is torch.Size([80, 1, 1, 1, 160]). size mismatch for unet.u.u.u.u.blocks_tail.block0.conv_branch.2.weight: copying a param with shape torch.Size([3, 3, 3, 160, 80]) from checkpoint, the shape in current model is torch.Size([80, 3, 3, 3, 160]). size mismatch for unet.u.u.u.u.blocks_tail.block0.conv_branch.5.weight: copying a param with shape torch.Size([3, 3, 3, 80, 80]) from checkpoint, the shape in current model is torch.Size([80, 3, 3, 3, 80]). size mismatch for unet.u.u.u.u.blocks_tail.block1.conv_branch.2.weight: copying a param with shape torch.Size([3, 3, 3, 80, 80]) from checkpoint, the shape in current model is torch.Size([80, 3, 3, 3, 80]). size mismatch for unet.u.u.u.u.blocks_tail.block1.conv_branch.5.weight: copying a param with shape torch.Size([3, 3, 3, 80, 80]) from checkpoint, the shape in current model is torch.Size([80, 3, 3, 3, 80]). size mismatch for unet.u.u.u.deconv.2.weight: copying a param with shape torch.Size([2, 2, 2, 80, 64]) from checkpoint, the shape in current model is torch.Size([64, 2, 2, 2, 80]). size mismatch for unet.u.u.u.blocks_tail.block0.i_branch.0.weight: copying a param with shape torch.Size([1, 1, 1, 128, 64]) from checkpoint, the shape in current model is torch.Size([64, 1, 1, 1, 128]). size mismatch for unet.u.u.u.blocks_tail.block0.conv_branch.2.weight: copying a param with shape torch.Size([3, 3, 3, 128, 64]) from checkpoint, the shape in current model is torch.Size([64, 3, 3, 3, 128]). size mismatch for unet.u.u.u.blocks_tail.block0.conv_branch.5.weight: copying a param with shape torch.Size([3, 3, 3, 64, 64]) from checkpoint, the shape in current model is torch.Size([64, 3, 3, 3, 64]). size mismatch for unet.u.u.u.blocks_tail.block1.conv_branch.2.weight: copying a param with shape torch.Size([3, 3, 3, 64, 64]) from checkpoint, the shape in current model is torch.Size([64, 3, 3, 3, 64]). 
size mismatch for unet.u.u.u.blocks_tail.block1.conv_branch.5.weight: copying a param with shape torch.Size([3, 3, 3, 64, 64]) from checkpoint, the shape in current model is torch.Size([64, 3, 3, 3, 64]). size mismatch for unet.u.u.deconv.2.weight: copying a param with shape torch.Size([2, 2, 2, 64, 48]) from checkpoint, the shape in current model is torch.Size([48, 2, 2, 2, 64]). size mismatch for unet.u.u.blocks_tail.block0.i_branch.0.weight: copying a param with shape torch.Size([1, 1, 1, 96, 48]) from checkpoint, the shape in current model is torch.Size([48, 1, 1, 1, 96]). size mismatch for unet.u.u.blocks_tail.block0.conv_branch.2.weight: copying a param with shape torch.Size([3, 3, 3, 96, 48]) from checkpoint, the shape in current model is torch.Size([48, 3, 3, 3, 96]). size mismatch for unet.u.u.blocks_tail.block0.conv_branch.5.weight: copying a param with shape torch.Size([3, 3, 3, 48, 48]) from checkpoint, the shape in current model is torch.Size([48, 3, 3, 3, 48]). size mismatch for unet.u.u.blocks_tail.block1.conv_branch.2.weight: copying a param with shape torch.Size([3, 3, 3, 48, 48]) from checkpoint, the shape in current model is torch.Size([48, 3, 3, 3, 48]). size mismatch for unet.u.u.blocks_tail.block1.conv_branch.5.weight: copying a param with shape torch.Size([3, 3, 3, 48, 48]) from checkpoint, the shape in current model is torch.Size([48, 3, 3, 3, 48]). size mismatch for unet.u.deconv.2.weight: copying a param with shape torch.Size([2, 2, 2, 48, 32]) from checkpoint, the shape in current model is torch.Size([32, 2, 2, 2, 48]). size mismatch for unet.u.blocks_tail.block0.i_branch.0.weight: copying a param with shape torch.Size([1, 1, 1, 64, 32]) from checkpoint, the shape in current model is torch.Size([32, 1, 1, 1, 64]). size mismatch for unet.u.blocks_tail.block0.conv_branch.2.weight: copying a param with shape torch.Size([3, 3, 3, 64, 32]) from checkpoint, the shape in current model is torch.Size([32, 3, 3, 3, 64]). size mismatch for unet.u.blocks_tail.block0.conv_branch.5.weight: copying a param with shape torch.Size([3, 3, 3, 32, 32]) from checkpoint, the shape in current model is torch.Size([32, 3, 3, 3, 32]). size mismatch for unet.u.blocks_tail.block1.conv_branch.2.weight: copying a param with shape torch.Size([3, 3, 3, 32, 32]) from checkpoint, the shape in current model is torch.Size([32, 3, 3, 3, 32]). size mismatch for unet.u.blocks_tail.block1.conv_branch.5.weight: copying a param with shape torch.Size([3, 3, 3, 32, 32]) from checkpoint, the shape in current model is torch.Size([32, 3, 3, 3, 32]). size mismatch for unet.deconv.2.weight: copying a param with shape torch.Size([2, 2, 2, 32, 16]) from checkpoint, the shape in current model is torch.Size([16, 2, 2, 2, 32]). size mismatch for unet.blocks_tail.block0.i_branch.0.weight: copying a param with shape torch.Size([1, 1, 1, 32, 16]) from checkpoint, the shape in current model is torch.Size([16, 1, 1, 1, 32]). size mismatch for unet.blocks_tail.block0.conv_branch.2.weight: copying a param with shape torch.Size([3, 3, 3, 32, 16]) from checkpoint, the shape in current model is torch.Size([16, 3, 3, 3, 32]). size mismatch for unet.blocks_tail.block0.conv_branch.5.weight: copying a param with shape torch.Size([3, 3, 3, 16, 16]) from checkpoint, the shape in current model is torch.Size([16, 3, 3, 3, 16]). size mismatch for unet.blocks_tail.block1.conv_branch.2.weight: copying a param with shape torch.Size([3, 3, 3, 16, 16]) from checkpoint, the shape in current model is torch.Size([16, 3, 3, 3, 16]). 
size mismatch for unet.blocks_tail.block1.conv_branch.5.weight: copying a param with shape torch.Size([3, 3, 3, 16, 16]) from checkpoint, the shape in current model is torch.Size([16, 3, 3, 3, 16]). size mismatch for score_unet.blocks.block0.conv_branch.2.weight: copying a param with shape torch.Size([3, 3, 3, 16, 16]) from checkpoint, the shape in current model is torch.Size([16, 3, 3, 3, 16]). size mismatch for score_unet.blocks.block0.conv_branch.5.weight: copying a param with shape torch.Size([3, 3, 3, 16, 16]) from checkpoint, the shape in current model is torch.Size([16, 3, 3, 3, 16]). size mismatch for score_unet.blocks.block1.conv_branch.2.weight: copying a param with shape torch.Size([3, 3, 3, 16, 16]) from checkpoint, the shape in current model is torch.Size([16, 3, 3, 3, 16]). size mismatch for score_unet.blocks.block1.conv_branch.5.weight: copying a param with shape torch.Size([3, 3, 3, 16, 16]) from checkpoint, the shape in current model is torch.Size([16, 3, 3, 3, 16]). size mismatch for score_unet.conv.2.weight: copying a param with shape torch.Size([2, 2, 2, 16, 32]) from checkpoint, the shape in current model is torch.Size([32, 2, 2, 2, 16]). size mismatch for score_unet.u.blocks.block0.conv_branch.2.weight: copying a param with shape torch.Size([3, 3, 3, 32, 32]) from checkpoint, the shape in current model is torch.Size([32, 3, 3, 3, 32]). size mismatch for score_unet.u.blocks.block0.conv_branch.5.weight: copying a param with shape torch.Size([3, 3, 3, 32, 32]) from checkpoint, the shape in current model is torch.Size([32, 3, 3, 3, 32]). size mismatch for score_unet.u.blocks.block1.conv_branch.2.weight: copying a param with shape torch.Size([3, 3, 3, 32, 32]) from checkpoint, the shape in current model is torch.Size([32, 3, 3, 3, 32]). size mismatch for score_unet.u.blocks.block1.conv_branch.5.weight: copying a param with shape torch.Size([3, 3, 3, 32, 32]) from checkpoint, the shape in current model is torch.Size([32, 3, 3, 3, 32]). size mismatch for score_unet.deconv.2.weight: copying a param with shape torch.Size([2, 2, 2, 32, 16]) from checkpoint, the shape in current model is torch.Size([16, 2, 2, 2, 32]). size mismatch for score_unet.blocks_tail.block0.i_branch.0.weight: copying a param with shape torch.Size([1, 1, 1, 32, 16]) from checkpoint, the shape in current model is torch.Size([16, 1, 1, 1, 32]). size mismatch for score_unet.blocks_tail.block0.conv_branch.2.weight: copying a param with shape torch.Size([3, 3, 3, 32, 16]) from checkpoint, the shape in current model is torch.Size([16, 3, 3, 3, 32]). size mismatch for score_unet.blocks_tail.block0.conv_branch.5.weight: copying a param with shape torch.Size([3, 3, 3, 16, 16]) from checkpoint, the shape in current model is torch.Size([16, 3, 3, 3, 16]). size mismatch for score_unet.blocks_tail.block1.conv_branch.2.weight: copying a param with shape torch.Size([3, 3, 3, 16, 16]) from checkpoint, the shape in current model is torch.Size([16, 3, 3, 3, 16]). size mismatch for score_unet.blocks_tail.block1.conv_branch.5.weight: copying a param with shape torch.Size([3, 3, 3, 16, 16]) from checkpoint, the shape in current model is torch.Size([16, 3, 3, 3, 16]).

    opened by waiyc 3
  • cuda compilation issue

    Hi Dr. Wen,

    I am very interested in your work. I am trying to run the code in my own virtual environment (not the Docker environment). When running bash build.sh, I encountered the following issue:

    /home/gz/workspace/grasp/catgrasp-master/PointGroup/lib/spconv/src/spconv/maxpool.cu(116): error: more than one operator ">" matches these operands: built-in operator "arithmetic > arithmetic" function "operator>(const __half &, const __half &)" /usr/local/cuda-11.3/include/cuda_fp16.hpp(296): here operand types are: c10::Half > c10::Half detected during: instantiation of "void spconv::maxPoolFwdVecBlockKernel<T,Index,NumTLP,NumILP,VecType>(T *, const T *, const Index *, const Index *, int, int) [with T=c10::Half, Index=int, NumTLP=64, NumILP=16, VecType=std::conditional_t<true, int2, int4>]"

    Could you please help check out the issue above? Lots of thanks!

    opened by george66s 3
  • Question about code

    Hi Dr. Wen, is there an error in this line of my_cpp/common.cpp: cur_grasp_in_cam.block(0,3,3,1) += step*major_dir*sign; Should the correct version instead be: cur_grasp_in_cam.block(0,3,3,3) += step*major_dir*sign;?
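
    For context, Eigen's block(i, j, p, q) selects the p-by-q block starting at row i, column j, so block(0,3,3,1) addresses the 3x1 translation column of the 4x4 pose. In numpy terms the line in question is roughly the following (illustrative only, not a statement about which version is intended):

    import numpy as np

    T = np.eye(4)                                # a 4x4 homogeneous grasp pose
    major_dir = np.array([0.0, 1.0, 0.0])        # gripper closing (major) axis
    step, sign = 0.001, 1
    # Eigen: T.block(0, 3, 3, 1) += step * major_dir * sign
    T[0:3, 3] += step * major_dir * sign         # shift the grasp position along major_dir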

    opened by PoistRXE 2
  • How to split train set and test set

    The default settings in config_grasp.yml, config_nunocs.yml, and config_pointgroup.yml show that train_root is "dataset/nut/train" and test_root is "dataset/nut/test". What files should I place in these folders, and how should I split them? Thank you!
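
    One possible way to populate those folders from data you generate yourself (the source directory and file extension below are assumptions about the layout; adapt them to whatever generate_pile_data.py / tool.py actually produce):

    import glob, os, random, shutil

    files = sorted(glob.glob('dataset/nut/*.pkl'))   # hypothetical processed-sample files
    random.seed(0)
    random.shuffle(files)

    n_train = int(0.9 * len(files))                  # e.g. a 90/10 split
    for split, subset in [('train', files[:n_train]), ('test', files[n_train:])]:
        os.makedirs(f'dataset/nut/{split}', exist_ok=True)
        for f in subset:
            shutil.move(f, os.path.join(f'dataset/nut/{split}', os.path.basename(f)))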

    opened by sky23249 2
  • cannot connect to X server

    How can I solve this problem? Thanks very much.

    startThreads creating 1 threads.
    starting thread 0
    started thread 0
    argc=2
    argv[0] = --unused
    argv[1] = --start_demo_name=Physics Server
    ExampleBrowserThreadFunc started
    X11 functions dynamically loaded using dlopen/dlsym OK!
    No protocol specified

    cannot connect to X server
    
    opened by qxtian 1
  • Where to find NUNOCS training data?

    Hi! I am interested in your project! When I clone the repo, I am able to successfully build the code within the Docker container, but when I run python train_nunocs.py no training or validation data is found and the error is:

    (catgrasp) root@XPS:/home/catgrasp# python train_nunocs.py
        phase=train #self.files=0
        phase=val #self.files=0
        Traceback (most recent call last):
          File "train_nunocs.py", line 38, in <module>
            trainer = TrainerNunocs(cfg)
          File "/home/catgrasp/trainer_nunocs.py", line 31, in __init__
            self.train_loader = torch.utils.data.DataLoader(self.train_data, batch_size=self.cfg['batch_size'], shuffle=True, num_workers=self.cfg['n_workers'], pin_memory=False, drop_last=True,worker_init_fn=worker_init_fn)
          File "/opt/conda/envs/catgrasp/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 213, in __init__
            sampler = RandomSampler(dataset)
          File "/opt/conda/envs/catgrasp/lib/python3.7/site-packages/torch/utils/data/sampler.py", line 94, in __init__
            "value, but got num_samples={}".format(self.num_samples))
        ValueError: num_samples should be a positive integer value, but got num_samples=0
    

    I followed the instructions to download the data directory from https://archive.cs.rutgers.edu/pracsys/catgrasp/ but still had no success. Do I need to download more data? Any help would be greatly appreciated!

    In the future I would like to train my own NUNOCS Net for different object classes, so learning to train from scratch is necessary.

    opened by camkisailus 1
  • Gripper model configuration

    Hi Bowen,

    Thanks for your great work! I want to replace your Robotiq Hand-E gripper with a Franka Panda gripper in my project. Your gripper models are in urdf/robotiq_hande and there are many model and configuration files. How and where did you get these files, and could you please give me a hint about how to configure the Franka Panda gripper? Thank you in advance.

    opened by zhaoju37 1
  • Issue with segmentation

    Hi Wen: when I try to run train_pointgroup.py in the container, it raises the ValueError shown in the attached screenshot (2022-07-29 09-56-43). Apart from this, I can run all the other Python files in Docker.

    opened by hanbinliuu 1
  • Question about the camera

    Hi Wenbo, thanks for sharing your code. Could I ask what the specific model of the camera is, and what role the camera plays in the whole project? Looking forward to your reply.

    opened by Jeling-W 0
  • Question about class UBlock

    Hi Dr. Wen, I have a question about the UBlock class used to build the network model in 'train_pointgroup.py'. The class's forward method is implemented as follows:

      def forward(self, input):
          output = self.blocks(input)
          identity = spconv.SparseConvTensor(output.features, output.indices, output.spatial_shape, output.batch_size)
          if len(self.nPlanes) > 1:
              output_decoder = self.conv(output)
              output_decoder = self.u(output_decoder)
              output_decoder = self.deconv(output_decoder)
              output.features = torch.cat((identity.features, output_decoder.features), dim=1)
              output = self.blocks_tail(output)
    
          return output
    

    When len(self.nPlanes) > 1, the dimension of output.features is self.nPlanes[0]*2 while the dimension of output is self.nPlanes[0], but self.blocks_tail() expects both output and output.features to have dimension self.nPlanes[0]. Why is there a mismatch between the expected input dimension of the function and the actual input dimension here?

    opened by PoistRXE 2
  • Question about training environment

    Hi Wenbo, thanks for sharing your code. I am trying to use your code to train my own model. Could I ask what type of GPU and CPU you used when training the model, i.e., how many CPU cores, how much memory, and which GPU (1080, 2060, or 3090)? Also, my GPU is a 3060, which is not compatible with your image environment (CUDA 10): I could not run model.cuda() on my machine. So I changed the cudatoolkit to 11.0; with that, model.cuda() succeeds, but when I run build.sh, pg_op cannot be built successfully. I guess this is because of the CUDA mismatch. Do you have any suggestions on how to import pg_op with CUDA 11.0? Looking forward to your reply :)

    opened by YUHANG-Ma 5