Official page of Patchwork (RA-L'21 w/ IROS'21)

Overview

Patchwork

Official page of "Patchwork: Concentric Zone-based Region-wise Ground Segmentation with Ground Likelihood Estimation Using a 3D LiDAR Sensor", which is accepted by RA-L with IROS'21 option

[Video] [Preprint Paper] [Project Wiki]

Concept of our method (CZM & GLE)

It is an overall updated version of R-GPF from ERASOR [Code] [Paper].


Demo

KITTI 00

Rough Terrain


Characteristics

  • Single hpp file (include/patchwork/patchwork.hpp)

  • Robust ground consistency

As shown in the demo videos and the figure below, our method shows the most robust performance compared with other state-of-the-art methods; in particular, our method shows little perturbation of precision/recall, as shown in this figure.

Please note that the concepts of traversable area and ground are quite different! Please refer to our paper.

Contents

  1. Test Env.
  2. Requirements
  3. How to Run Patchwork
  4. Citation

Test Env.

The code has been tested successfully on

  • Ubuntu 18.04 LTS
  • ROS Melodic

Requirements

ROS Setting

    1. Install ROS on your machine.
    2. Install jsk-visualization, which is required to visualize the Ground Likelihood Estimation status:
sudo apt-get install ros-melodic-jsk-recognition
sudo apt-get install ros-melodic-jsk-common-msgs
sudo apt-get install ros-melodic-jsk-rviz-plugins
    3. Clone and build this package:
mkdir -p ~/catkin_ws/src
cd ~/catkin_ws/src
git clone https://github.com/LimHyungTae/patchwork.git
cd .. && catkin build patchwork
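
After building, source the workspace in each new terminal so that roslaunch can find the patchwork package (a standard catkin step; adjust the path if your workspace lives elsewhere):

source ~/catkin_ws/devel/setup.bash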

How to Run Patchwork

We provide three examples:

  • Offline KITTI dataset
  • Online (ROS Callback) KITTI dataset
  • Own dataset using pcd files

Offline KITTI dataset

  1. Download the SemanticKITTI Odometry dataset (labels are also needed, since we also release the evaluation code!).

  2. Set the data_path in launch/offline_kitti.launch for your machine.

The data_path directory consists of a velodyne folder and a labels folder, as follows:

data_path (e.g. 00, 01, ..., or 10)
_____velodyne
     |___000000.bin
     |___000001.bin
     |___000002.bin
     |...
_____labels
     |___000000.label
     |___000001.label
     |___000002.label
     |...
_____...
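
As a quick sanity check (a minimal sketch, assuming data_path points at one sequence directory laid out as above, with $DATA_PATH as a placeholder), you can verify that the numbers of scans and labels match before launching:

ls $DATA_PATH/velodyne | wc -l
ls $DATA_PATH/labels | wc -l

The two counts should be identical; otherwise, the evaluation will not line up.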
   
  3. Run the launch file:
roslaunch patchwork offline_kitti.launch

You can directly feel the speed of Patchwork! 😉

Online (ROS Callback) KITTI dataset

We also provide a rosbag example. If you want to run Patchwork via rosbag, please refer to this example.

  1. Download the ready-made rosbag:
wget https://urserver.kaist.ac.kr/publicdata/patchwork/kitti_00_xyzilid.bag
  2. After building this package, run roslaunch as follows:
roslaunch patchwork rosbag_kitti.launch
  3. Then play the rosbag file in another terminal:
rosbag play kitti_00_xyzilid.bag
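
If you want to inspect the downloaded bag before launching anything, the standard rosbag info command lists its duration, topics, and message types:

rosbag info kitti_00_xyzilid.bag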

Own dataset using pcd files

Please refer to /nodes/offilne_own_data.cpp.

(Note that your own data may not have ground-truth labels!)

Be sure to set the right parameters. Otherwise, your results may be wrong, as shown below:

With wrong params | After setting the right params

For a better understanding of Patchwork's parameters, please read our wiki page, 4. IMPORTANT: Setting Parameters of Patchwork in Your Own Env.
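
While a launch file is running, you can double-check which parameter values were actually loaded with the standard rosparam tools (the exact parameter names come from the launch/config files, so cross-check them against the wiki page above):

rosparam list
rosparam get /sensor_height    # hypothetical name for illustration; use a name printed by `rosparam list`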

Offline (Using *.pcd or *.bin file)

  1. Utilize /nodes/offilne_own_data.cpp

  2. Please check the output with the following command and the corresponding files:

roslaunch patchwork offline_ouster128.launch

Online (via rosbag)

  1. Utilize rosbag_kitti.launch.

  2. To do so, remap the subscriber's topic, e.g., add a remap line as follows:

<remap from="/node" to="$YOUR_LIDAR_TOPIC_NAME$"/>
  3. In addition, a minor modification of ros_kitti.cpp is necessary; refer to offline_own_data.cpp.
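
Before wiring in your own bag, it also helps to confirm that your LiDAR topic really carries sensor_msgs/PointCloud2 (by default the KITTI rosbag node subscribes to Patchwork's custom message type, which is why both the remap and the code change above are needed; see the md5sum issue in the comments below):

rostopic type $YOUR_LIDAR_TOPIC_NAME    # should print sensor_msgs/PointCloud2 while your bag is playing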

Citation

If you use our code or method in your work, please consider citing the following:

@article{lim2021patchwork,
title={Patchwork: Concentric Zone-based Region-wise Ground Segmentation with Ground Likelihood Estimation Using a 3D LiDAR Sensor},
author={Lim, Hyungtae and Oh, Minho and Myung, Hyun},
journal={IEEE Robotics and Automation Letters},
year={2021}
}

Description

All explanations of the parameters and other experimental results will be uploaded to the wiki.

Contact

If you have any questions, please let me know:

TODO List

  • Add ROS support
  • Add preprint paper
  • Add demo videos
  • Add own dataset examples
  • Update wiki

Comments
  • md5sum mismatch : Connection drop

    I am trying to run rosbag_kitti.launch on a rosbag which contains sensor_msgs::PointCloud2 messages, using Ubuntu 20.04 and ROS Noetic.

    On running the rosbag, I am getting:

    [ERROR] [1645128512.247642039]: Client [/ros_kitti_bhooshan_Legion] wants topic /os_cloud_node/points to have 
    datatype/md5sum [patchwork/node/8ffdb3dcfd475161209f2ce2c04a5bcc], but our version has 
    [sensor_msgs/PointCloud2/1158d486dd51d683ce2f1be655c3c181]. Dropping connection.
    

    According to the internet, there is a mismatch between what the subscriber is asking for and what my rosbag is publishing. The same rosbag works well with pub_for_legoloam.launch. I compared the two files and found the difference to be line 150 in rosbag_kitti.cpp:

        ros::Subscriber NodeSubscriber = nh.subscribe<patchwork::node>("/node", 5000, callbackNode);
    

    and line 90 of pub_for_legoloam.cpp:

        ros::Subscriber NodeSubscriber = nh.subscribe<sensor_msgs::PointCloud2>("/node", 5000, callbackNode);
    

    That's what I think from a first check. Can you help? You can try running any rosbag with PointCloud2 messages using rosbag_kitti.launch. Could it be due to ROS Noetic?

    opened by BhooshanDeshpande 8
  • The difference between the code and the paper

    Thank you for your excellent work. In your paper, if the probability from Ground Likelihood Estimation is larger than 0.5, then Ĝn belongs to the actual ground. However, I noticed you don't do this step. Besides, the calculations of equations (10) and (11) do not happen. I want to know whether this is just some kind of simplification.

    opened by HMX2013 2
  • Patchwork modifications (Dec. 27)

    1. Modified pub_for_legoloam.cpp so that the source / ground / non-ground clouds can be published with the pcl::PointXYZ point type for the KITTI bagfile.
    2. Modified the xy2theta function to prevent sector_idx from becoming negative.
    3. Created comb.msg so that the source and ground messages can be sent together for LeGO-LOAM. Additionally, a launch file and an rviz config were added and documented in the How to run section of the README!
    opened by SeoDU 2
  • Difference between estimated non-ground points and the original KITTI point cloud

    I use offline_kitti.launch to generate the classified point cloud; however, when I put the estimated point cloud and the corresponding KITTI odometry velodyne file in CloudCompare, I found that there is an offset between them, as in the following picture.

    I only changed pcd_savepath and save_flag in the source code. Did I do something wrong? I tried Patchwork++ and got the same results.

    opened by kang-1-2-3 1
  • Skipped empty section pointcloud's ground estimation proposals

    Hello,

    I noticed that we are letting empty sections through and calculating the singular values on them, which leads to NaN values.

    https://github.com/LimHyungTae/patchwork/blob/4b4f2118f706e339c1ed23b5710bc732d8c7c54d/include/patchwork/patchwork.hpp#L444-L462

    1. Here is the output of the point cloud size per ring (section), the singular values, and the Linearity/Planarity results in the original code:
      1. You can clearly see from the last ring (section) that even when we don't have any points in the section, if calculations were done in the step before, the values of the respective features are copied from the last section, which will definitely affect the correctness of the height calculation.
      2. Here is a subset of the wrong output:
    patchwork_ground_seg: SizeOfRing: 33
    patchwork_ground_seg: * Singular Values: 0.0368877, 0.00435487, 1.95469e-05
    patchwork_ground_seg: * Linearity: 0.881942
    patchwork_ground_seg: * Planarity: 0.117528
    patchwork_ground_seg: SizeOfRing: 0
    patchwork_ground_seg: * Singular Values: 0.0368877, 0.00435487, 1.95469e-05
    patchwork_ground_seg: * Linearity: 0.881942
    patchwork_ground_seg: * Planarity: 0.117528
    
    patchwork_ground_seg: SizeOfRing: 0
    patchwork_ground_seg: * Singular Values: 0, 0, 0
    patchwork_ground_seg: * Linearity: -nan
    patchwork_ground_seg: * Planarity: -nan
    patchwork_ground_seg: SizeOfRing: 143
    patchwork_ground_seg: * Singular Values: 0.109471, 0.0290639, 3.99177e-05
    patchwork_ground_seg: * Linearity: 0.734506
    patchwork_ground_seg: * Planarity: 0.26513
    patchwork_ground_seg: SizeOfRing: 279
    patchwork_ground_seg: * Singular Values: 0.294544, 0.0965589, 3.21574e-05
    patchwork_ground_seg: * Linearity: 0.672175
    patchwork_ground_seg: * Planarity: 0.327716
    patchwork_ground_seg: SizeOfRing: 323
    patchwork_ground_seg: * Singular Values: 0.462427, 0.120994, 4.09155e-05
    patchwork_ground_seg: * Linearity: 0.738349
    patchwork_ground_seg: * Planarity: 0.261562
    patchwork_ground_seg: SizeOfRing: 493
    patchwork_ground_seg: * Singular Values: 0.422977, 0.111352, 4.06778e-05
    patchwork_ground_seg: * Linearity: 0.736741
    patchwork_ground_seg: * Planarity: 0.263163
    patchwork_ground_seg: SizeOfRing: 478
    patchwork_ground_seg: * Singular Values: 0.265785, 0.124776, 5.04178e-05
    patchwork_ground_seg: * Linearity: 0.530537
    patchwork_ground_seg: * Planarity: 0.469274
    patchwork_ground_seg: SizeOfRing: 430
    patchwork_ground_seg: * Singular Values: 0.158036, 0.113088, 3.39509e-05
    patchwork_ground_seg: * Linearity: 0.284418
    patchwork_ground_seg: * Planarity: 0.715367
    patchwork_ground_seg: SizeOfRing: 461
    patchwork_ground_seg: * Singular Values: 0.163267, 0.104623, 4.05746e-05
    patchwork_ground_seg: * Linearity: 0.359195
    patchwork_ground_seg: * Planarity: 0.640557
    patchwork_ground_seg: SizeOfRing: 527
    patchwork_ground_seg: * Singular Values: 0.202164, 0.128592, 9.12833e-05
    patchwork_ground_seg: * Linearity: 0.363922
    patchwork_ground_seg: * Planarity: 0.635627
    patchwork_ground_seg: SizeOfRing: 786
    patchwork_ground_seg: * Singular Values: 0.356911, 0.116977, 8.66355e-05
    patchwork_ground_seg: * Linearity: 0.672251
    patchwork_ground_seg: * Planarity: 0.327506
    patchwork_ground_seg: SizeOfRing: 843
    patchwork_ground_seg: * Singular Values: 0.350674, 0.110203, 8.68107e-05
    patchwork_ground_seg: * Linearity: 0.68574
    patchwork_ground_seg: * Planarity: 0.314012
    patchwork_ground_seg: SizeOfRing: 754
    patchwork_ground_seg: * Singular Values: 0.237328, 0.132515, 5.75546e-05
    patchwork_ground_seg: * Linearity: 0.441639
    patchwork_ground_seg: * Planarity: 0.558119
    patchwork_ground_seg: SizeOfRing: 558
    patchwork_ground_seg: * Singular Values: 0.162093, 0.110739, 5.16564e-05
    patchwork_ground_seg: * Linearity: 0.316819
    patchwork_ground_seg: * Planarity: 0.682862
    patchwork_ground_seg: SizeOfRing: 753
    patchwork_ground_seg: * Singular Values: 0.167197, 0.103093, 0.00201635
    patchwork_ground_seg: * Linearity: 0.383403
    patchwork_ground_seg: * Planarity: 0.604537
    patchwork_ground_seg: SizeOfRing: 795
    patchwork_ground_seg: * Singular Values: 0.195423, 0.126148, 0.00168774
    patchwork_ground_seg: * Linearity: 0.354488
    patchwork_ground_seg: * Planarity: 0.636875
    patchwork_ground_seg: SizeOfRing: 702
    patchwork_ground_seg: * Singular Values: 0.35721, 0.123147, 0.00249468
    patchwork_ground_seg: * Linearity: 0.655252
    patchwork_ground_seg: * Planarity: 0.337764
    patchwork_ground_seg: SizeOfRing: 511
    patchwork_ground_seg: * Singular Values: 0.480244, 0.13327, 0.002766
    patchwork_ground_seg: * Linearity: 0.722495
    patchwork_ground_seg: * Planarity: 0.271745
    patchwork_ground_seg: SizeOfRing: 187
    patchwork_ground_seg: * Singular Values: 0.279308, 0.105251, 0.000206188
    patchwork_ground_seg: * Linearity: 0.623173
    patchwork_ground_seg: * Planarity: 0.376089
    patchwork_ground_seg: SizeOfRing: 33
    patchwork_ground_seg: * Singular Values: 0.0368877, 0.00435487, 1.95469e-05
    patchwork_ground_seg: * Linearity: 0.881942
    patchwork_ground_seg: * Planarity: 0.117528
    patchwork_ground_seg: SizeOfRing: 0
    patchwork_ground_seg: * Singular Values: 0.0368877, 0.00435487, 1.95469e-05
    patchwork_ground_seg: * Linearity: 0.881942
    patchwork_ground_seg: * Planarity: 0.117528
    
    2. And here is the output after the code change:
    patchwork_ground_seg: SizeOfRing: 142
    patchwork_ground_seg: * Singular Values: 0.110376, 0.0330322, 3.93967e-05
    patchwork_ground_seg: * Linearity: 0.700729
    patchwork_ground_seg: * Planarity: 0.298914
    patchwork_ground_seg: SizeOfRing: 254
    patchwork_ground_seg: * Singular Values: 0.233747, 0.106727, 3.83107e-05
    patchwork_ground_seg: * Linearity: 0.54341
    patchwork_ground_seg: * Planarity: 0.456426
    patchwork_ground_seg: SizeOfRing: 288
    patchwork_ground_seg: * Singular Values: 0.411811, 0.121756, 4.87517e-05
    patchwork_ground_seg: * Linearity: 0.704341
    patchwork_ground_seg: * Planarity: 0.295541
    patchwork_ground_seg: SizeOfRing: 414
    patchwork_ground_seg: * Singular Values: 0.443487, 0.123922, 4.69576e-05
    patchwork_ground_seg: * Linearity: 0.720574
    patchwork_ground_seg: * Planarity: 0.27932
    patchwork_ground_seg: SizeOfRing: 497
    patchwork_ground_seg: * Singular Values: 0.327631, 0.115926, 3.13969e-05
    patchwork_ground_seg: * Linearity: 0.646168
    patchwork_ground_seg: * Planarity: 0.353736
    patchwork_ground_seg: SizeOfRing: 423
    patchwork_ground_seg: * Singular Values: 0.16464, 0.100782, 2.70376e-05
    patchwork_ground_seg: * Linearity: 0.387868
    patchwork_ground_seg: * Planarity: 0.611967
    patchwork_ground_seg: SizeOfRing: 422
    patchwork_ground_seg: * Singular Values: 0.167495, 0.0648561, 3.43014e-05
    patchwork_ground_seg: * Linearity: 0.612789
    patchwork_ground_seg: * Planarity: 0.387007
    patchwork_ground_seg: SizeOfRing: 487
    patchwork_ground_seg: * Singular Values: 0.229241, 0.104464, 0.000121826
    patchwork_ground_seg: * Linearity: 0.544303
    patchwork_ground_seg: * Planarity: 0.455166
    patchwork_ground_seg: SizeOfRing: 767
    patchwork_ground_seg: * Singular Values: 0.36055, 0.138742, 9.2921e-05
    patchwork_ground_seg: * Linearity: 0.615193
    patchwork_ground_seg: * Planarity: 0.384549
    patchwork_ground_seg: SizeOfRing: 852
    patchwork_ground_seg: * Singular Values: 0.307547, 0.121315, 0.000131403
    patchwork_ground_seg: * Linearity: 0.605542
    patchwork_ground_seg: * Planarity: 0.394031
    patchwork_ground_seg: SizeOfRing: 730
    patchwork_ground_seg: * Singular Values: 0.198971, 0.129057, 6.5772e-05
    patchwork_ground_seg: * Linearity: 0.351377
    patchwork_ground_seg: * Planarity: 0.648292
    patchwork_ground_seg: SizeOfRing: 529
    patchwork_ground_seg: * Singular Values: 0.158642, 0.100017, 4.66582e-05
    patchwork_ground_seg: * Linearity: 0.369541
    patchwork_ground_seg: * Planarity: 0.630164
    patchwork_ground_seg: SizeOfRing: 741
    patchwork_ground_seg: * Singular Values: 0.16636, 0.103954, 0.00170705
    patchwork_ground_seg: * Linearity: 0.375128
    patchwork_ground_seg: * Planarity: 0.614611
    patchwork_ground_seg: SizeOfRing: 804
    patchwork_ground_seg: * Singular Values: 0.201368, 0.123692, 0.00174579
    patchwork_ground_seg: * Linearity: 0.385743
    patchwork_ground_seg: * Planarity: 0.605588
    patchwork_ground_seg: SizeOfRing: 704
    patchwork_ground_seg: * Singular Values: 0.360332, 0.119244, 0.00250925
    patchwork_ground_seg: * Linearity: 0.669073
    patchwork_ground_seg: * Planarity: 0.323963
    patchwork_ground_seg: SizeOfRing: 507
    patchwork_ground_seg: * Singular Values: 0.4803, 0.135758, 0.00307255
    patchwork_ground_seg: * Linearity: 0.717348
    patchwork_ground_seg: * Planarity: 0.276255
    patchwork_ground_seg: SizeOfRing: 184
    patchwork_ground_seg: * Singular Values: 0.278921, 0.10715, 0.000249198
    patchwork_ground_seg: * Linearity: 0.615843
    patchwork_ground_seg: * Planarity: 0.383264
    patchwork_ground_seg: SizeOfRing: 30
    patchwork_ground_seg: * Singular Values: 0.0474286, 0.00591493, 3.23724e-05
    patchwork_ground_seg: * Linearity: 0.875288
    patchwork_ground_seg: * Planarity: 0.12403
    
    
    
    opened by yucedagonurcan 1
  • How to show pointcloud frame by frame?

    Hi, it's wonderful work! I'm a new user of ROS, and I wonder how I can show the result of Patchwork frame by frame, since your work shows the result in a video-like style.

    opened by mc171819 1
  • Feature: Publish ground and non-ground pointcloud

    Issue

    Need the ground and non-ground point clouds for further processing downstream.

    Solution

    Add the previously commented-out publishing lines back in. Also removed the CloudPublisher comment.

    opened by asatria-nix 1
  • Is there a mistake in patchwork.hpp????

    Line 439 if (ground_z_elevation > elevation_thr_[ring_idx + 2 * k]) {

    Is the index 'ring_idx + 2 * k' wrong? The size of elevation_thr_ is 4, so 'ring_idx + 2 * k' may be larger than 4. Can you explain it?

    opened by 512938445 1
  • Why is elevation_threshold different for each zone?

    Hi,

    I read through your paper explaining the patchwork algorithm, but need a little more clarification on why different elevation thresholds are needed for each zone.

    If I am understanding the parameter correctly, the elevation thresholds represent the adaptive midpoint function κ(r). Would I be correct to say that this parameter is simply the maximum height that the ground plane can be? In this case, why would the maximum ground height change depending on the distance of each zone?

    Thanks!

    opened by natashasoon 1
  • Under- and over-segmentation with Ouster 128

    Hello Lim, I was wondering if you could help me with the issue I'm having with my data. It's over- and under-segmenting. I tried adjusting the parameters, but no luck so far. Thanks!

    opened by faisalst 0
  • Explanation needed of consensus_set_based_height_estimation() function

    Hello @LimHyungTae, thank you for making this great work open-source. I have been going through the patchwork.hpp file and am having trouble understanding the consensus_set_based_height_estimation function. Could you explain what the function is doing when you pass the values, ranges, and weights? What is the physical significance of the linearities and planarities vectors and their relation to the ranges and weights?


    opened by anaskhan496 4
  • patchwork.hpp:373

    include/patchwork/patchwork.hpp:373: double PatchWork<PointT>::consensus_set_based_height_estimation(const RowVectorXd&, const RowVectorXd&, const RowVectorXd&) [with PointT = pcl::PointXYZ; Eigen::RowVectorXd = Eigen::Matrix<double, 1, -1>]: Assertion `!only_one_element' failed.

    Hi, when I tried to run my own data, some trouble occurred at this line. I found a comment after this line, i.e., "TODO: admit a trivial solution". Were there any bugs when you debugged it?

    opened by caleb-feng 1
Owner
Hyungtae Lim
Ph.D Candidate of URL lab. @ KAIST, South Korea