Official code for 'Pixel-wise Energy-biased Abstention Learning for Anomaly Segmentation on Complex Urban Driving Scenes'

Overview

PEBAL

This repo contains the PyTorch implementation of our paper:

Pixel-wise Energy-biased Abstention Learning for Anomaly Segmentation on Complex Urban Driving Scenes

Yu Tian*, Yuyuan Liu*, Guansong Pang, Fengbei Liu, Yuanhong Chen, Gustavo Carneiro.

Inference

Checkpoint for anomaly segmentation

After downloading the pre-trained checkpoint, simply run the following command:

python test.py
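Judging from the config shared in the comments below, test.py appears to look for the weights under ckpts/ relative to the repo root; this layout is inferred from those paths, not from official documentation:

ckpts/
├── pebal/
│   └── best_ad_ckpt.pth          # anomaly segmentation checkpoint
└── pretrained_ckpts/
    └── cityscapes_best.pth       # pre-trained Cityscapes segmentation checkpoint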

Training code will be released soon.

Citation

If you find this repo useful for your research, please consider citing our paper:

@misc{tian2021pixelwise,
      title={Pixel-wise Energy-biased Abstention Learning for Anomaly Segmentation on Complex Urban Driving Scenes}, 
      author={Yu Tian and Yuyuan Liu and Guansong Pang and Fengbei Liu and Yuanhong Chen and Gustavo Carneiro},
      year={2021},
      eprint={2111.12264},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}

Comments
  • Can't get good results on the FS datasets

    Hi!

    I tried to run test.py, but I can't get good results on the FS datasets. Here are the results I got:

    [pebal][INFO] validating cityscapes dataset ... 
     labeled: 1966976, correct: 1933282: 100%|█████████████████████████████████████████████████████████████| 500/500 [33:56<00:00,  4.07s/it]
    [pebal][CRITICAL] current mIoU is 0.895736885447597, mAcc is 0.979122587176078 
    
    100%|██████████████████████████████████████████████████████████████████████████████████████████████████| 100/100 [02:17<00:00,  1.37s/it]
    [pebal][CRITICAL] AUROC score for Fishyscapes_ls: 0.5020263181424963                                                                                    
    [pebal][CRITICAL] AUPRC score for Fishyscapes_ls: 0.0025106950112928753 
    [pebal][CRITICAL] FPR@TPR95 for Fishyscapes_ls: 0.7372275091455253 
    
    [pebal][INFO] validating Fishyscapes_static dataset ... 
    100%|████████████████████████████████████████████████████████████████████████████████████████████████████| 30/30 [00:39<00:00,  1.31s/it]
    [pebal][CRITICAL] AUROC score for Fishyscapes_static: 0.4023022576107206 
    [pebal][CRITICAL] AUPRC score for Fishyscapes_static: 0.012901811279743608 
    [pebal][CRITICAL] FPR@TPR95 for Fishyscapes_static: 0.8040210492683544 
    

    The FS datasets were downloaded from the link you provided (the synboost GitHub), and the corresponding folder structure is:

    fishyscapes/
    ├── LostAndFound
    │   ├── entropy
    │   ├── labels
    │   ├── labels_with_ROI
    │   ├── logit_distance
    │   ├── mae_features
    │   ├── original
    │   ├── semantic
    │   └── synthesis
    └── Static
        ├── entropy
        ├── labels
        ├── labels_with_ROI
        ├── logit_distance
        ├── mae_features
        ├── original
        └── semantic
    

    In Static and LostAndFound, I use the images in the labels folder and the original folder.

    Here is the code in my modified data_loader.py:

    def __init__(self, split='Static', root="", transform=None):
        """Load all filenames."""
        self.transform = transform
        self.root = root
        self.split = split  # ['Static', 'LostAndFound']
        self.images = []  # list of all raw input images
        self.targets = []  # list of all ground-truth TrainIds images
        # filenames = os.listdir(os.path.join(root, self.split, 'images'))
        filenames = os.listdir(os.path.join(root, self.split, 'original'))  # final folder
        root = os.path.join(root, self.split)
        for filename in filenames:
            if os.path.splitext(filename)[1] == '.png':
                f_name = os.path.splitext(filename)[0]
                # ======= old ======= #
                # filename_base_img = os.path.join("images", f_name)
                # filename_base_labels = os.path.join("labels", f_name.replace("leftImg8bit", "labels"))
                # ======= final ======= #
                filename_base_img = os.path.join("original", f_name)
                filename_base_labels = os.path.join("labels", f_name)
                # ========================== #
                self.images.append(os.path.join(root, filename_base_img + '.png'))
                self.targets.append(os.path.join(root, filename_base_labels + '.png'))
        # sort once after the loop (rather than on every iteration) so image/label pairs stay aligned
        self.images = sorted(self.images)
        self.targets = sorted(self.targets)
    

    Thanks! Looking forward to your reply.

    opened by StarWanan 8
  • What pixels are predicted as the extra class Y+1?

    Hi,

    I am interested in which kinds of pixels are predicted as the extra class after training. From the paper, it appears to me that the anomaly prediction is mainly based on the energy score, so I am very curious about the extra class the model has learned to predict.

    Also, how is the deep gambler baseline implemented in the paper to get the numbers in Tables 1 and 3? Does it use the outlier dataset? What is the reward if there is no energy function?
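    For concreteness, here is a minimal sketch of the reward-based abstention ("deep gambler") loss as I currently understand it, following Liu et al.'s deep-gamblers formulation; the fixed reward value and tensor shapes are my assumptions, not this repo's implementation:

    import torch
    import torch.nn.functional as F

    def gambler_loss(logits, targets, reward=4.0):
        # logits: (B, Y+1, H, W) with channel Y as the abstention class;
        # `reward` (a fixed scalar here) is my assumption for the baseline without energy
        probs = F.softmax(logits, dim=1)
        class_probs = probs.gather(1, targets.unsqueeze(1)).squeeze(1)  # p(y_w | x)
        abstain = probs[:, -1]                                          # p(Y+1 | x)
        # deep-gamblers objective: -log(p_y + p_abstain / reward)
        return -torch.log(class_probs + abstain / reward + 1e-7).mean()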

    Thanks! Looking forward to your reply.

    opened by zhd2rng 7
  • How to generate the city_scape dataset?

    Hello and thanks for the awesome work!

    I am trying to prepare the data in the format you mentioned on the installation page; however, I don't understand how to get to your structure. When I download the Cityscapes gtFine dataset, I have the following data structure:

    city_scapes
    └── gtFine
        ├── test
        │   ├── berlin
        │   ├── bielefeld
        │   ├── bonn
        │   ├── leverkusen
        │   ├── mainz
        │   └── munich
        ├── train
        │   ├── aachen
        │   ├── bochum
        │   ├── bremen
        │   ├── cologne
        │   ├── darmstadt
        │   ├── dusseldorf
        │   ├── erfurt
        │   ├── hamburg
        │   ├── hanover
        │   ├── jena
        │   ├── krefeld
        │   ├── monchengladbach
        │   ├── strasbourg
        │   ├── stuttgart
        │   ├── tubingen
        │   ├── ulm
        │   ├── weimar
        │   └── zurich
        └── val
            ├── frankfurt
            ├── lindau
            └── munster
    

    I successfully ran the processing step here.

    I also had a look at the synboost preprocessed data and at issue #13, but I still don't see how to easily convert this structure to yours.

    Also, I saw here that you provided the annotation folder. Should I use these files or generate them, and if so, how?

    Hopefully I am not missing something obvious!

    Best, Aldi

    opened by aldipiroli 6
  • Having issues with model training

    Hi, thanks for the fantastic work. But I have some issues with model training: I prepared the dataset according to the instructions, but got this error:

    0%| | 0/818 [00:02<?, ?it/s]
    Traceback (most recent call last):
      File "/home/xxxx/xxxx/xxxx/PEBAL/code/main.py", line 158, in <module>
        main(-1, 1, config=config, args=args)
      File "/home/xxxx/xxxx/xxxx/PEBAL/code/main.py", line 113, in main
        trainer.train(model=model, epoch=curr_epoch, train_sampler=train_sampler, train_loader=train_loader,
      File "/home/xxxx/xxxx/xxxx/PEBAL/code/engine/trainer.py", line 47, in train
        target = minibatch['label'].cuda(non_blocking=True)
    KeyError: 'label'

    Process finished with exit code 1


    I printed out the minibatch and found it doesn't have the 'label' field:

    {'data': tensor([[[[-0.0116, -0.0116, -0.0116, ..., -1.6727, -1.6727, -1.6555],
                       [-0.0116, -0.0116, -0.0116, ..., -1.6555, -1.6727, -1.6555],
                       [-0.0287, -0.0116, -0.0116, ..., -1.6555, -1.6555, -1.6384],
                       ...,
                       [-0.5844, -0.6367, -0.6367, ..., -0.6193, -0.6193, -0.6018],
                       [-0.5844, -0.6193, -0.6193, ..., -0.6367, -0.6193, -0.6018],
                       [-0.5844, -0.5844, -0.6018, ..., -0.6367, -0.6367, -0.6193]]]]),
     'fn': ['dusseldorf_000206_000019', 'dusseldorf_000144_000019', 'tubingen_000062_000019', 'hamburg_000000_089696', 'dusseldorf_000035_000019_unknown_unknown', 'erfurt_000072_000019', 'hamburg_000000_049558', 'bremen_000118_000019'],
     'n': tensor([6545, 6545, 6545, 6545, 6545, 6545, 6545, 6545]),
     'is_ood': tensor([False, False, False, False, False, False, False, False])}


    What files should be included in the annotation folder? I put the original files from Cityscapes and the generated labelTrainIds files there. Is there anything I did wrong?

    Thanks!

    opened by yzpick 6
  • Confused about the PAL loss on the outlier pixels.

    Hi, thanks for your excellent work! I am a little confused about the PAL loss, especially about how it works on the outlier pixels in D_out.

    Main confusion: In eq. (3) of the paper, as I understand it, the label y_w should be Y+1 for outlier pixels, so the term inside the log becomes f(Y+1;x)_w + f(Y+1;x)_w / a_w. However, when I checked the code, the actual implementation of the PAL loss on the outlier pixels (inside the log) seems to be a sum of the inlier class probabilities and f(Y+1;x)_w / a_w, as can be seen in reserve_boosting_energy = torch.add(true_pred, reservation.unsqueeze(1))[mask.unsqueeze(1).repeat(1, 19, 1, 1)].log(). Would you kindly explain this? (P.S. I have also read paper [33]; it does not consider an outlier dataset during training, so I could not find helpful information there.)
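    In code form, my reading of that line is roughly the following (shapes and variable names are my assumptions, not the actual implementation):

    # assuming logits of shape (B, 20, H, W) and a per-pixel reward a_w
    probs = logits.softmax(dim=1)
    true_pred = probs[:, :19]                 # the 19 inlier class probabilities
    reservation = probs[:, 19] / a_w          # abstention probability scaled by 1 / a_w
    # inside the log: the SUM of all inlier probabilities plus the scaled abstention term,
    # rather than f(Y+1; x)_w + f(Y+1; x)_w / a_w, which is what I would expect from eq. (3)
    reserve_boosting_energy = torch.add(true_pred, reservation.unsqueeze(1))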

    Others: I find that the code applies the 'log' operation twice: besides the code above, another 'log' is applied in reserve_boosting_energy = torch.clamp(reserve_boosting_energy, min=1e-7).log(), which is really confusing. Also, there seems to be a minor typo in eq. (3), where a softmax operation is missing on the logits.

    Your response would be fully appreciated!

    opened by gaozhitong 5
  • Some questions about the paper

    Hi,

    So, after carefully reading the paper, I am not sure if I understood it correctly.

    The paper proposes a loss that helps find the abnormal class.

    Steps:

    1. Formulate D_in and D_out; D_in should not overlap with D_out.
    2. Train the model with D_in.
    3. Retrain the model with D_out.

    Question: what do you mean by "fine-tune only the final classification block using the loss in (2)"?
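    Concretely, is it something like the following? (`final_block` is a hypothetical attribute name I made up for the last classification block, not the actual name in this repo; `model` is the segmentation network.)

    import torch

    # freeze the whole segmentation network ...
    for param in model.parameters():
        param.requires_grad = False
    # ... then unfreeze only the final classification block
    for param in model.final_block.parameters():
        param.requires_grad = True

    # optimize only the trainable parameters (lr/momentum taken from the config posted in these issues)
    optimizer = torch.optim.SGD(
        filter(lambda p: p.requires_grad, model.parameters()), lr=1e-5, momentum=0.9)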

    Thanks!

    opened by jyang68sh 4
  • inlier vs outlier

    Hi, I am a bit confused by the definition of the inlier and outlier classes described in your paper.

    For example, for Cityscapes, which are the inlier classes and which are the outlier classes?

    Thanks

    opened by jyang68sh 4
  • Cannot reproduce the paper result and checkpoint result

    Hello, thanks for your work!

    I followed the PEBAL config and trained using batch size 16 (as in the paper) and 40 epochs. However, I cannot reproduce the paper results or the checkpoint results. Are there any tips for reproducing the high AUPRC on the FS LostAndFound dataset? Or is there a way to reproduce a result with performance similar to the checkpoint? Could you help?

    Thanks in advance. Config:

    import os
    import numpy
    from easydict import EasyDict

    C = EasyDict()
    config = C
    cfg = C

    C.seed = 666

    """Root Directory Config"""
    C.repo_name = 'pebal'
    C.root_dir = os.path.realpath(".")

    """Data Dir and Weight Dir"""
    C.city_root_path = '/home/numb7315/PEBAL/code/dataset/city_scape'    # path/to/your/city_scape
    C.coco_root_path = '/home/numb7315/PEBAL/code/dataset/coco'          # path/to/your/coco
    C.fishy_root_path = '/home/numb7315/PEBAL/code/dataset/fishyscapes'  # path/to/your/fishy

    C.pebal_weight_path = os.path.join(C.root_dir, 'ckpts', 'pebal', 'best_ad_ckpt.pth')
    C.pretrained_weight_path = os.path.join(C.root_dir, 'ckpts', 'pretrained_ckpts', 'cityscapes_best.pth')

    """Network Config"""
    C.fix_bias = True
    C.bn_eps = 1e-5
    C.bn_momentum = 0.1

    """Image Config"""
    C.num_classes = 19 + 1  # NOTE: 1 more channel for gambler loss
    C.image_mean = numpy.array([0.485, 0.456, 0.406])
    C.image_std = numpy.array([0.229, 0.224, 0.225])

    C.image_height = 900
    C.image_width = 900

    C.num_train_imgs = 2975
    C.num_eval_imgs = 500

    """Train Config"""
    C.lr = 1e-5
    C.batch_size = 8
    C.lr_power = 0.9
    C.momentum = 0.9
    C.weight_decay = 1e-4

    C.nepochs = 40
    C.niters_per_epoch = C.num_train_imgs // C.batch_size
    C.num_workers = 8
    C.train_scale_array = [0.5, 0.75, 1, 1.5, 1.75, 2.0]
    C.void_number = 5
    C.warm_up_epoch = 0

    """Eval Config"""
    C.eval_epoch = 1
    C.eval_stride_rate = 2 / 3
    C.eval_scale_array = [1, ]  # 0.5, 0.75, 1, 1.5, 1.75
    C.eval_flip = False
    C.eval_base_size = 800
    C.eval_crop_size = 800

    """Display Config"""
    C.record_info_iter = 20
    C.display_iter = 50

    # your project [work_space] name
    C.proj_name = "OoD_Segmentation"

    # your current experiment name
    C.experiment_name = "pebal_baseline"

    # half pretrained_ckpts-loader upload images; loss upload every iteration
    C.upload_image_step = [0, int((C.num_train_imgs / C.batch_size) / 2)]

    # False for debug; True for visualize
    C.wandb_online = True

    """Save Config"""
    C.saved_dir = os.path.join(C.root_dir, 'ckpts', C.experiment_name)

    if not os.path.exists(C.saved_dir):
        os.mkdir(C.saved_dir)

    Result:

    wandb: Fishyscapes_ls_auprc 0.44404
    wandb: Fishyscapes_ls_auroc 0.98498
    wandb: Fishyscapes_ls_fpr95 0.06383
    wandb: Fishyscapes_static_auprc 0.89534
    wandb: Fishyscapes_static_auroc 0.9951
    wandb: Fishyscapes_static_fpr95 0.0199
    wandb: energy_loss 0.03873
    wandb: gambler_loss 0.17864
    wandb: global_step 39

    opened by hyunjunChhoi 3
  • RuntimeError: CUDA error: device-side assert triggered

    What is the problem here?

    The training log is:

    /pytorch/aten/src/ATen/native/cuda/ScatterGatherKernel.cu:115: operator(): block: [20430,0,0], thread: [34,0,0] Assertion `idx_dim >= 0 && idx_dim < index_size && "index out of bounds"` failed.
    epoch (0) | gambler_loss: 10.962 energy_loss: 7.645 : 50%|███████████████████████▌ | 1/2 [00:21<00:21, 21.57s/it]
    Traceback (most recent call last):
      File "code/main.py", line 151, in <module>
        main(-1, 1, config=config, args=args)
      File "code/main.py", line 107, in main
        optimizer=optimizer)
      File "/home/rhdai/workspace/code/PEBAL/code/engine/trainer.py", line 51, in train
        loss = self.loss1(pred=in_logits, targets=in_target, wrong_sample=False)
      File "/home/rhdai/miniconda3/envs/torch/lib/python3.6/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
        result = self.forward(*input, **kwargs)
      File "/home/rhdai/workspace/code/PEBAL/code/losses.py", line 103, in forward
        gambler_loss = gambler_loss[~mask].log()
    RuntimeError: CUDA error: device-side assert triggered

    wandb: Waiting for W&B process to finish... (failed 1). Press Control-C to abort syncing.
    (the same `idx_dim >= 0 && idx_dim < index_size && "index out of bounds"` assertion from ScatterGatherKernel.cu:115 repeats for threads [52,0,0] through [57,0,0] of block [19955,0,0])

    wandb:
    wandb: Run history:
    wandb:   energy_loss ▁
    wandb:   gambler_loss ▁
    wandb:   global_step ▁▁
    wandb:
    wandb: Run summary:
    wandb:   energy_loss 7.64503
    wandb:   gambler_loss 10.96237
    wandb:   global_step 0
    wandb:
    wandb: Synced your_pebal_exp: https://wandb.ai/runist/OoD_Segmentation/runs/359y48y4

    opened by Runist 3
  • Why does a larger a_ω mean lower inlier free energy?

    Thanks for sharing your great work!

    In the paper you mention that a_ω = (−E_θ(x)_ω)². But this is the same as a_ω = (E_θ(x)_ω)², so "larger a_ω means lower inlier free energy" only holds when E_θ(x)_ω < 0. Where can we confirm this condition?
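    A quick numeric check of my concern (the example values -12 and -6 are borrowed from the m_in/m_out discussion in another issue):

    # a_w = (-E)^2 equals E^2, so larger a_w corresponds to lower energy only while E < 0
    for E in (-12.0, -6.0, 6.0, 12.0):
        print(f"E = {E:+.0f} -> a_w = {(-E) ** 2:.0f}")
    # E = -12 -> a_w = 144 and E = -6 -> a_w = 36: more negative energy gives larger a_w,
    # but E = +6 -> a_w = 36 and E = +12 -> a_w = 144: for positive energies the relation flips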

    opened by luoyuchenmlcv 3
  • Can you offer the Fishyscapes and COCO (for outlier exposure) datasets via Baidu Netdisk for training?

    Hello, could you provide the Fishyscapes and COCO (for outlier exposure) datasets on Baidu Netdisk for training? I don't know how to download the Fishyscapes datasets from the website, nor what is necessary for training. Thank you very much!

    opened by liuxubit 3
  • Eq. 3: logits or probabilities?

    In Eq. 3, the PAL loss uses logits (f_θ) instead of probabilities (p_θ). When using logits, the term inside the parentheses can become negative, and when it is negative the log operation cannot be performed. When using probabilities, however, the term inside the parentheses is always positive. Why does Eq. 3 use logits instead of probabilities?
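    A minimal check of this concern (the example values are my own):

    import torch

    x = torch.tensor([2.0, -3.0, 0.5])
    print(x.log())                 # tensor([0.6931, nan, -0.6931]): log of a negative value is NaN
    print(x.softmax(dim=0).log())  # finite everywhere, since softmax outputs are strictly positive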

    opened by Yang-Li-2000 0
  • Training PEBAL with a custom backbone model and dataset

    Hi there! Congrats on such excellent results, and thank you so much for your inspiring work!

    After reading through the paper and implementing PEBAL with the code you provided (again, thank you for open-sourcing it), I am now experimenting with how a custom backbone model and dataset affect PEBAL's performance, since the paper discussed the influence of the segmentation model. A few points were not covered in previous issues, and I would like to confirm them before continuing my experiments.

    1. The layers to fine-tune. As stated in the paper, only "the last block of a segmentation model" needs to be fine-tuned. My understanding is that this is because these layers carry the extra channel, whose weights must be obtained through fine-tuning, so only the layers with the constructed extra channel need to keep requires_grad set to True. Is my understanding of which layers to freeze and which to train correct?

    2. How to pick m_in and m_out values. I suppose m_in and m_out need to be re-picked as well, and I am curious what criteria go into selecting them. In the reply https://github.com/tianyu0207/PEBAL/issues/19#issuecomment-1235030055, it is mentioned that the energy is constrained by the two values -12 and -6. In my experiments I observe a large overlap between the inlier and outlier energies; could you explain a bit more how they are constrained and provide some advice on choosing the two values? (My current reading of the margin is sketched after this list.)

    3. Other parameters. Could you also provide some advice on how to select the other hyperparameters, such as β_1, β_2 and λ?

    4. Loss. When trying to replicate the results from the PEBAL paper, I noticed that both the energy loss and the gambler loss do not converge and keep fluctuating, even though metrics such as AUROC actually improve. Is this behavior normal for PEBAL?
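    Regarding point 2, my current reading of how the energy is constrained is the standard squared-hinge margin regularizer from energy-based OOD detection (Liu et al.); whether PEBAL's pixel-wise version matches this exactly is my assumption:

    import torch
    import torch.nn.functional as F

    def energy_margin_loss(logits_in, logits_out, m_in=-12.0, m_out=-6.0):
        # per-pixel free energy: E(x) = -logsumexp over the inlier logits
        e_in = -torch.logsumexp(logits_in, dim=1)    # inlier pixels, pushed below m_in
        e_out = -torch.logsumexp(logits_out, dim=1)  # outlier pixels, pushed above m_out
        # squared hinge on both margins, as in Liu et al.'s energy-based OOD fine-tuning
        return (F.relu(e_in - m_in) ** 2).mean() + (F.relu(m_out - e_out) ** 2).mean()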

    Thanks!

    opened by mei0824 0
Owner
Yu Tian