You Only Look Once for Panoptic Driving Perception

Overview


by Dong Wu, Manwen Liao, Weitian Zhang, Xinggang Wang 📧, School of EIC, HUST

(📧) corresponding author.

arXiv technical report (arXiv 2108.11250)


Chinese Documentation

The Illustration of YOLOP


Contributions

  • We put forward an efficient multi-task network that can jointly handle three crucial tasks in autonomous driving: object detection, drivable area segmentation and lane detection. Handling them jointly saves computational cost and reduces inference time while improving the performance of each task. Our work is the first to reach real-time on embedded devices while maintaining state-of-the-art performance on the BDD100K dataset.

  • We design ablation experiments to verify the effectiveness of our multi-task scheme. They show that the three tasks can be learned jointly without tedious alternating optimization.

Results


Traffic Object Detection Result

| Model         | Recall(%) | mAP50(%) | Speed(fps) |
|---------------|-----------|----------|------------|
| MultiNet      | 81.3      | 60.2     | 8.6        |
| DLT-Net       | 89.4      | 68.4     | 9.3        |
| Faster R-CNN  | 77.2      | 55.6     | 5.3        |
| YOLOv5s       | 86.8      | 77.2     | 82         |
| YOLOP (ours)  | 89.2      | 76.5     | 41         |

Drivable Area Segmentation Result

| Model        | mIoU(%) | Speed(fps) |
|--------------|---------|------------|
| MultiNet     | 71.6    | 8.6        |
| DLT-Net      | 71.3    | 9.3        |
| PSPNet       | 89.6    | 11.1       |
| YOLOP (ours) | 91.5    | 41         |

Lane Detection Result

| Model        | mIoU(%) | IoU(%) |
|--------------|---------|--------|
| ENet         | 34.12   | 14.64  |
| SCNN         | 35.79   | 15.84  |
| ENet-SAD     | 36.56   | 16.02  |
| YOLOP (ours) | 70.50   | 26.20  |

Ablation Study 1: End-to-end vs. Step-by-step

| Training method | Recall(%) | AP(%) | mIoU(%) | Accuracy(%) | IoU(%) |
|-----------------|-----------|-------|---------|-------------|--------|
| ES-W            | 87.0      | 75.3  | 90.4    | 66.8        | 26.2   |
| ED-W            | 87.3      | 76.0  | 91.6    | 71.2        | 26.1   |
| ES-D-W          | 87.0      | 75.1  | 91.7    | 68.6        | 27.0   |
| ED-S-W          | 87.5      | 76.1  | 91.6    | 68.0        | 26.8   |
| End-to-end      | 89.2      | 76.5  | 91.5    | 70.5        | 26.2   |

Ablation Study 2: Multi-task vs. Single task

| Training method | Recall(%) | AP(%) | mIoU(%) | Accuracy(%) | IoU(%) | Speed(ms/frame) |
|-----------------|-----------|-------|---------|-------------|--------|-----------------|
| Det (only)      | 88.2      | 76.9  | -       | -           | -      | 15.7            |
| Da-Seg (only)   | -         | -     | 92.0    | -           | -      | 14.8            |
| Ll-Seg (only)   | -         | -     | -       | 79.6        | 27.9   | 14.8            |
| Multitask       | 89.2      | 76.5  | 91.5    | 70.5        | 26.2   | 24.4            |

Notes:

  • The works we referenced include MultiNet (paper, code), DLT-Net (paper), Faster R-CNN (paper, code), YOLOv5s (code), PSPNet (paper, code), ENet (paper, code), SCNN (paper, code) and SAD-ENet (paper, code). Thanks for their wonderful works.
  • In Table 4, E, D, S and W refer to the Encoder, the Detect head, the two Segment heads and the Whole network. So the scheme "first train the Encoder and Detect head; then freeze them and train the two Segmentation heads; finally train the entire network jointly on all three tasks" is denoted ED-S-W, and likewise for the others.

Visualization

Traffic Object Detection Result


Drivable Area Segmentation Result

Lane Detection Result

Notes:

  • The visualization of the lane detection result has been post-processed by quadratic fitting; a minimal sketch of this post-processing is shown below.
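
The following is a minimal illustration of such quadratic-fitting post-processing using NumPy and OpenCV. It is a sketch under the assumption of a single-lane binary mask, not the repository's postprocess.py; the function name is illustrative, and in practice each connected lane component would be fitted separately.

import cv2
import numpy as np

def refine_lane_mask(ll_seg_mask):
    """Hedged sketch: fit x = a*y^2 + b*y + c to lane pixels and redraw the curve."""
    refined = np.zeros_like(ll_seg_mask)
    ys, xs = np.nonzero(ll_seg_mask)           # row/column coordinates of lane pixels
    if len(ys) < 3:                            # not enough points for a quadratic fit
        return refined
    a, b, c = np.polyfit(ys, xs, deg=2)        # quadratic in the image row index
    y_fit = np.arange(ys.min(), ys.max() + 1)
    x_fit = a * y_fit ** 2 + b * y_fit + c
    pts = np.stack([x_fit, y_fit], axis=1).astype(np.int32).reshape(-1, 1, 2)
    cv2.polylines(refined, [pts], isClosed=False, color=1, thickness=2)
    return refined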

Project Structure

├─inference
│ ├─images   # inference images
│ ├─output   # inference results
├─lib
│ ├─config/default   # configuration of training and validation
│ ├─core
│ │ ├─activations.py   # activation functions
│ │ ├─evaluate.py   # metric calculation
│ │ ├─function.py   # training and validation of the model
│ │ ├─general.py   # metric calculation, NMS, data-format conversion, visualization
│ │ ├─loss.py   # loss functions
│ │ ├─postprocess.py   # post-processing (refine da-seg and ll-seg, unrelated to paper)
│ ├─dataset
│ │ ├─AutoDriveDataset.py   # superclass dataset, general functions
│ │ ├─bdd.py   # subclass dataset, specific functions
│ │ ├─hust.py   # subclass dataset (campus scene, unrelated to paper)
│ │ ├─convert.py
│ │ ├─DemoDataset.py   # demo dataset (image, video and stream)
│ ├─models
│ │ ├─YOLOP.py   # setup and configuration of the model
│ │ ├─light.py   # model light-weighting (unrelated to paper, zwt)
│ │ ├─common.py   # calculation modules
│ ├─utils
│ │ ├─augmentations.py   # data augmentation
│ │ ├─autoanchor.py   # auto anchor (k-means)
│ │ ├─split_dataset.py   # (campus scene, unrelated to paper)
│ │ ├─utils.py   # logging, device selection, time measurement, optimizer selection, model save & initialization, distributed training
│ ├─run
│ │ ├─dataset/training time   # visualization, logging and model saving
├─tools
│ ├─demo.py   # demo (folder, camera)
│ ├─test.py
│ ├─train.py
├─toolkits
│ ├─deploy    # model deployment
│ ├─datapre   # generation of gt (masks) for the drivable area segmentation task
├─weights    # pretrained models

Requirement

This codebase has been developed with Python 3.7, PyTorch 1.7+ and torchvision 0.8+:

conda install pytorch==1.7.0 torchvision==0.8.0 cudatoolkit=10.2 -c pytorch

See requirements.txt for additional dependencies and version requirements.

pip install -r requirements.txt

Data preparation

Download

We recommend the dataset directory structure to be the following:

# Matching file ids establish the correspondence between images and annotations
├─dataset root
│ ├─images
│ │ ├─train
│ │ ├─val
│ ├─det_annotations
│ │ ├─train
│ │ ├─val
│ ├─da_seg_annotations
│ │ ├─train
│ │ ├─val
│ ├─ll_seg_annotations
│ │ ├─train
│ │ ├─val

Update your dataset paths in ./lib/config/default.py.
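
For reference, the dataset-related keys in ./lib/config/default.py look roughly like the excerpt below (the key names match the configuration dump quoted in the comments further down; the paths are placeholders to replace with your own):

# Illustrative excerpt -- replace the placeholder paths with your dataset root.
_C.DATASET.DATAROOT = '/path/to/dataset/images'               # image folder
_C.DATASET.LABELROOT = '/path/to/dataset/det_annotations'     # detection labels
_C.DATASET.MASKROOT = '/path/to/dataset/da_seg_annotations'   # drivable area masks
_C.DATASET.LANEROOT = '/path/to/dataset/ll_seg_annotations'   # lane line masks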

Training

You can set the training configuration in ./lib/config/default.py, including the loading of a preliminary model, loss, data augmentation, optimizer, warm-up and cosine annealing, auto-anchor, training epochs, and batch_size.

If you want to try alternating optimization or train the model for a single task, set the corresponding configuration in ./lib/config/default.py to True. (By default, all of the following flags are False, which means the multiple tasks are trained end to end.)

# Alternating optimization
_C.TRAIN.SEG_ONLY = False           # Only train the two segmentation branches
_C.TRAIN.DET_ONLY = False           # Only train the detection branch
_C.TRAIN.ENC_SEG_ONLY = False       # Only train the encoder and the two segmentation branches
_C.TRAIN.ENC_DET_ONLY = False       # Only train the encoder and the detection branch

# Single task
_C.TRAIN.DRIVABLE_ONLY = False      # Only train the da_segmentation task
_C.TRAIN.LANE_ONLY = False          # Only train the ll_segmentation task
_C.TRAIN.DET_ONLY = False           # Only train the detection task

Start training:

python tools/train.py
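
Multi-GPU training can be launched with the standard PyTorch distributed launcher, as in the user log quoted in the comments below (adjust --nproc_per_node to your number of GPUs):

python -m torch.distributed.launch --nproc_per_node=2 tools/train.py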

Evaluation

You can set the evaluation configuration in ./lib/config/default.py, including the batch_size and the threshold values for NMS.
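
The corresponding keys look roughly like this (the values shown are the defaults that appear in the configuration dump quoted in the comments below):

_C.TEST.BATCH_SIZE_PER_GPU = 16
_C.TEST.NMS_CONF_THRESHOLD = 0.001   # confidence threshold for NMS
_C.TEST.NMS_IOU_THRESHOLD = 0.6      # IoU threshold for NMS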

Start evaluating:

python tools/test.py --weights weights/End-to-end.pth

Demo Test

We provide two testing methods.

Folder

Store your images or videos under the path given by --source; the inference results will be saved to --save-dir.

python tools/demo.py --source inference/images

Camera

If a camera is connected to your computer, you can set the source to the camera number (the default is 0).

python tools/demo.py --source 0

Demonstration


Deployment

Our model can run inference in real time on a Jetson TX2, with a ZED camera capturing images. We use TensorRT for acceleration. Code for model deployment and inference is provided in ./toolkits/deploy.

Segmentation Label(Mask) Generation

You can generate the labels (masks) for the drivable area segmentation task by running

python toolkits/datasetpre/gen_bdd_seglabel.py
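
Conceptually, the script rasterizes the BDD100K drivable-area polygon annotations into binary PNG masks. The sketch below illustrates the idea; the field names follow the BDD100K label JSON, and the function name and paths are illustrative assumptions rather than the repository's exact code.

import json
import cv2
import numpy as np

def drivable_json_to_mask(label_json, out_png, height=720, width=1280):
    """Hedged sketch: rasterize BDD100K drivable-area polygons into a binary mask."""
    mask = np.zeros((height, width), dtype=np.uint8)
    with open(label_json) as f:
        label = json.load(f)
    # BDD100K labels store objects either under "labels" or under "frames"[0]["objects"]
    objects = label.get("labels") or label.get("frames", [{}])[0].get("objects", [])
    for obj in objects:
        if obj.get("category") == "drivable area" and obj.get("poly2d"):
            pts = np.array([v[:2] for v in obj["poly2d"][0]["vertices"]], dtype=np.int32)
            cv2.fillPoly(mask, [pts], 255)   # foreground = 255
    cv2.imwrite(out_png, mask)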

Model Transfer

Before running inference with the TensorRT C++ API, you need to convert the .pth file into a binary weight file that can be read by C++.

python toolkits/deploy/gen_wts.py

After running the above command, you obtain a binary file named yolop.wts.
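
Conceptually, gen_wts.py serializes every tensor of the PyTorch checkpoint into a plain-text weight file, one line per tensor (name, element count, then the float values in hexadecimal). The sketch below assumes the common tensorrtx-style .wts convention and a checkpoint loadable as a state_dict; it is an illustration, not the repository's exact script.

import struct
import torch

ckpt = torch.load("weights/End-to-end.pth", map_location="cpu")
state_dict = ckpt["state_dict"] if isinstance(ckpt, dict) and "state_dict" in ckpt else ckpt
if not isinstance(state_dict, dict):
    state_dict = state_dict.state_dict()   # checkpoint stored as a full model

with open("yolop.wts", "w") as f:
    f.write(f"{len(state_dict)}\n")                 # number of tensors
    for name, tensor in state_dict.items():
        values = tensor.reshape(-1).cpu().numpy()
        f.write(f"{name} {len(values)}")
        for v in values:
            f.write(" " + struct.pack(">f", float(v)).hex())   # float32 as big-endian hex
        f.write("\n")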

Running Inference

TensorRT needs an engine file for inference. Building an engine is time-consuming, so it is convenient to save the engine file and reuse it on every subsequent run. This process is integrated into main.cpp, which decides whether to build a new engine based on whether your engine file already exists.
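
The same build-once-then-reuse pattern, written with the TensorRT Python API purely for illustration (the repository's deployment does this with the C++ API inside main.cpp; the engine file name and the build_fn callback are assumptions):

import os
import tensorrt as trt

ENGINE_PATH = "yolop.engine"                  # assumed engine file name
TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

def load_or_build_engine(build_fn):
    """Deserialize a cached engine if one exists, otherwise build and cache it.
    build_fn is expected to return a serialized engine (bytes)."""
    runtime = trt.Runtime(TRT_LOGGER)
    if os.path.exists(ENGINE_PATH):
        with open(ENGINE_PATH, "rb") as f:
            return runtime.deserialize_cuda_engine(f.read())
    serialized = build_fn()
    with open(ENGINE_PATH, "wb") as f:
        f.write(serialized)
    return runtime.deserialize_cuda_engine(serialized)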

Third-Party Resources

Citation

If you find our paper and code useful for your research, please consider giving a star ⭐ and a citation 📝:

@misc{2108.11250,
Author = {Dong Wu and Manwen Liao and Weitian Zhang and Xinggang Wang},
Title = {YOLOP: You Only Look Once for Panoptic Driving Perception},
Year = {2021},
Eprint = {arXiv:2108.11250},
}
Comments
  • Training on a custom dataset

    Training on a custom dataset

    Has anyone trained on their own dataset? How were the results? I annotated detection bboxes and lane lines myself, and the loss dropped from 1.0 to 0.18 over 70 epochs.

    Accuracy:

    Lane line Segment: Acc(0.337)    IOU (0.008)  mIOU(0.340)
    Detect: P(0.008)  R(0.665)  mAP@0.5(0.260)  mAP@0.5:0.95(0.085)
    

    These results feel completely wrong.

    opened by ycdhqzhiai 31
  • How to detect multiple objects using YOLOP?

    How to detect multiple objects using YOLOP?

    First of all, thanks for your great work! The YOLOP model currently seems able to detect only cars; if I would like to detect more object classes, what parameters should I modify? I have already tried modifying model.nc in train.py to 41, changing single_cls to False in bdd.py, and uncommenting the bdd_labels dict in convert.py, but I still got an error saying:

    Traceback (most recent call last):
      File "tools/train.py", line 406, in <module>
        main()
      File "tools/train.py", line 333, in main
        train(cfg, train_loader, model, criterion, optimizer, scaler,
      File "/home/roy/Github/YOLOP/lib/core/function.py", line 77, in train
        total_loss, head_losses = criterion(outputs, target, shapes, model)
      File "/home/roy/anaconda3/envs/pytorch/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
        return forward_call(*input, **kwargs)
      File "/home/roy/Github/YOLOP/lib/core/loss.py", line 50, in forward
        total_loss, head_losses = self._forward_impl(head_fields, head_targets, shapes, model)
      File "/home/roy/Github/YOLOP/lib/core/loss.py", line 96, in _forward_impl
        iou = bbox_iou(pbox.T, tbox[i], x1y1x2y2=False, CIoU=True)  # iou(prediction, target)
      File "/home/roy/Github/YOLOP/lib/core/general.py", line 38, in bbox_iou
        print(box1[0] - box1[2] / 2)
      File "/home/roy/anaconda3/envs/pytorch/lib/python3.8/site-packages/torch/_tensor.py", line 249, in __repr__
        return torch._tensor_str._str(self)
      File "/home/roy/anaconda3/envs/pytorch/lib/python3.8/site-packages/torch/_tensor_str.py", line 415, in _str
        return _str_intern(self)
      File "/home/roy/anaconda3/envs/pytorch/lib/python3.8/site-packages/torch/_tensor_str.py", line 390, in _str_intern
        tensor_str = _tensor_str(self, indent)
      File "/home/roy/anaconda3/envs/pytorch/lib/python3.8/site-packages/torch/_tensor_str.py", line 251, in _tensor_str
        formatter = _Formatter(get_summarized_data(self) if summarize else self)
      File "/home/roy/anaconda3/envs/pytorch/lib/python3.8/site-packages/torch/_tensor_str.py", line 90, in __init__
        nonzero_finite_vals = torch.masked_select(tensor_view, torch.isfinite(tensor_view) & tensor_view.ne(0))
    RuntimeError: CUDA error: device-side assert triggered
    CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect. For debugging consider passing CUDA_LAUNCH_BLOCKING=1.

    Please help me out!

    opened by PigLogic-Cyber 8
  • onnx export problem

    onnx export problem

    Hello,

    Thank you for your great work. I was trying to export your trained model to onnx using torch.onnx.export function. Yet I received the following error. RuntimeError: Only tuples, lists and Variables supported as JIT inputs/outputs. Dictionaries and strings are also accepted but their usage is not recommended. But got unsupported type builtin_function_or_method

    Is your model compatible to convert to onnx?

    And, in your activations.py script I observed that you replaced hardsigmoid with hardtanh.

    class Hardswish(nn.Module):  # export-friendly version of nn.Hardswish()
        @staticmethod
        def forward(x):
            # return x * F.hardsigmoid(x)  # for torchscript and CoreML
            return x * F.hardtanh(x + 3, 0., 6.) / 6.  # for torchscript, CoreML and ONNX
    

    Yet when I export the model to torch.jit using trace, there seems to be aten::hardswish functions in the model nonetheless. What am I missing?

    Thanks in Advance, Kind Regards

    opened by kadirbeytorun 6
  • Fine-tuning

    Fine-tuning

    Hello, if I want to detect people, can I fine-tune on the basis of your weights? For example, after we have labeled images of people, we would use End-to-end.pth to continue training the model.

    opened by zhangbaoj 4
  • problem of lane detection

    problem of lane detection

    Hello:

    I use your default model weights, and default parameters.

    I do not get lane detection results as good as in your demo.

    I am wondering whether there is any threshold I should set in the inference process or in demo.py?

    Thank you very much in advance.

    John Feng in Shanghai, China

    opened by luckyjohnfeng 3
  • About dataset preparation

    About dataset preparation

    I use CVAT to annotate my dataset.

    I do not know how to convert my CVAT annotations to the required data format of YOLOP.

    If there is any reference, I will appreciate it.

    John Feng from Shanghai

    opened by luckyjohnfeng 3
  • how to generate the lane segmentation mask picture ?

    how to generate the lane segmentation mask picture ?

    May I know how do you generate the lane segmentation mask png picture ?

    I do not find the corresponding code in YOLOP yet.

    Then how do you make the ll_seg_annotations png pictures?

    Thank you very much !

    John Feng in Shanghai, China

    opened by luckyjohnfeng 3
  • Understanding Resolution / Network Input

    Understanding Resolution / Network Input

    Following [6], we resize images in BDD100k dataset from 1280×720×3 to 640×384×3.

    But in default.py training configuration:

    _C.MODEL.IMAGE_SIZE = [640, 640] # width * height, ex: 192 * 256

    Is this because YOLO network usually resizes image to the longer side by using padding?

    opened by SikandAlex 3
  • Can't pickle generator objects

    Can't pickle generator objects

    Hello, I'm trying to train the YOLOP model. When I executed python tools/train.py, a "can't pickle generator objects" error and a "Ran out of input" error occurred. Please give me some advice.

    opened by Taeng-ioio 3
  • Camera demo has some issue with C920 nor ZED

    Camera demo has some issue with C920 nor ZED

    Setup: Jetson Xavier AGX, JetPack 4.5.1, PyTorch 1.8.0, torchvision 0.9.0, etc. YOLOP works with demo.py on images and mp4 video, but "python tools/demo.py --source 0" has stopped working; the full output with a C920 follows.

    jetson@xavier-agx:~/YOLOP$ python3 tools/demo.py --source 0
    ['/home/jetson/YOLOP/tools', '/usr/lib/python36.zip', '/usr/lib/python3.6', '/usr/lib/python3.6/lib-dynload', '/home/jetson/.local/lib/python3.6/site-packages', '/home/jetson/.local/lib/python3.6/site-packages/torchvision-0.9.0-py3.6-linux-aarch64.egg', '/home/jetson/.local/lib/python3.6/site-packages/Pillow-8.3.1-py3.6-linux-aarch64.egg', '/home/jetson/.local/lib/python3.6/site-packages/scipy-1.4.1-py3.6-linux-aarch64.egg', '/usr/local/lib/python3.6/dist-packages', '/usr/lib/python3/dist-packages', '/usr/lib/python3.6/dist-packages', '/home/jetson/YOLOP']
    => creating runs/BddDataset/_2021-08-31-15-38
    Using torch 1.8.0 CPU

    [ WARN:0] global /home/nvidia/host/build_opencv/nv_opencv/modules/videoio/src/cap_gstreamer.cpp (933) open OpenCV | GStreamer warning: Cannot query video position: status=0, value=-1, duration=-1 1/1: 0... success (inf frames 2304x1536 at 2.00 FPS)

    0%|          | 0/1 [00:00<?, ?it/s]
    /home/jetson/.local/lib/python3.6/site-packages/torch/nn/functional.py:3455: UserWarning: Default upsampling behavior when mode=bilinear is changed to align_corners=False since 0.4.0. Please specify align_corners=True if the old behavior is desired. See the documentation of nn.Upsample for details.
      "See the documentation of nn.Upsample for details.".format(mode)
    0%|          | 0/1 [00:05<?, ?it/s]
    Traceback (most recent call last):
      File "tools/demo.py", line 174, in <module>
        detect(cfg, opt)
      File "tools/demo.py", line 127, in detect
        img_det = show_seg_result(img_det, (da_seg_mask, ll_seg_mask), _, _, is_demo=True)
      File "/home/jetson/YOLOP/lib/utils/plot.py", line 57, in show_seg_result
        img[color_mask != 0] = img[color_mask != 0] * 0.5 + color_seg[color_mask != 0] * 0.5
    IndexError: boolean index did not match indexed array along dimension 0; dimension is 1536 but corresponding boolean dimension is 1284

    opened by Jiroh 3
  • dataset not accessible to opencv methods while training

    dataset not accessible to opencv methods while training

    After downloading the images and annotations/labels, I made the corresponding changes in the default.py config and ran python tools/train.py. The data seemed to load successfully, but then some of the thread workers spat out errors.

    $ python3 tools/train.py
    => creating runs/BddDataset/_2021-08-30-23-52
    Namespace(conf_thres=0.001, dataDir='', iou_thres=0.6, local_rank=-1, logDir='runs/', modelDir='', prevModelDir='', sync_bn=False)
    AUTO_RESUME: False
    CUDNN:
      BENCHMARK: True
      DETERMINISTIC: False
    ...
    ...
    
    load model to device
    begin to load data
    building database...
    100%|████████████████████████████████████████████████████████| 70000/70000 [00:13<00:00, 5352.88it/s]
    database build finish
    building database...
    100%|████████████████████████████████████████████████████████| 10000/10000 [00:01<00:00, 5334.48it/s]
    database build finish
    load data finished
    anchors loaded successfully
    tensor([[[0.3750, 1.1250],
             [0.6250, 1.3750],
             [0.5000, 2.5000]],
    
            [[0.4375, 1.1250],
             [0.3750, 2.4375],
             [0.7500, 1.9375]],
    
            [[0.5938, 1.5625],
             [1.1875, 2.5312],
             [2.1250, 4.9062]]])
    => start training...
    Exception in thread Thread-3:
    Traceback (most recent call last):
      File "/home/hotify/anaconda3/envs/yolop/lib/python3.7/threading.py", line 926, in _bootstrap_inner
        self.run()
      File "/home/hotify/anaconda3/envs/yolop/lib/python3.7/site-packages/prefetch_generator/__init__.py", line 80, in run
        for item in self.generator:
      File "/home/hotify/anaconda3/envs/yolop/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 435, in __next__
        data = self._next_data()
      File "/home/hotify/anaconda3/envs/yolop/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 1085, in _next_data
        return self._process_data(data)
      File "/home/hotify/anaconda3/envs/yolop/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 1111, in _process_data
        data.reraise()
      File "/home/hotify/anaconda3/envs/yolop/lib/python3.7/site-packages/torch/_utils.py", line 428, in reraise
        raise self.exc_type(msg)
    cv2.error: Caught error in DataLoader worker process 0.
    Original Traceback (most recent call last):
      File "/home/hotify/anaconda3/envs/yolop/lib/python3.7/site-packages/torch/utils/data/_utils/worker.py", line 198, in _worker_loop
        data = fetcher.fetch(index)
      File "/home/hotify/anaconda3/envs/yolop/lib/python3.7/site-packages/torch/utils/data/_utils/fetch.py", line 44, in fetch
        data = [self.dataset[idx] for idx in possibly_batched_index]
      File "/home/hotify/anaconda3/envs/yolop/lib/python3.7/site-packages/torch/utils/data/_utils/fetch.py", line 44, in <listcomp>
        data = [self.dataset[idx] for idx in possibly_batched_index]
      File "/home/hotify/YOLOP/lib/dataset/AutoDriveDataset.py", line 100, in __getitem__
        img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
    cv2.error: OpenCV(4.5.3) /tmp/pip-req-build-l1r0y34w/opencv/modules/imgproc/src/color.cpp:182: error: (-215:Assertion failed) !_src.empty() in function 'cvtColor'
    
    
    
    ^CTraceback (most recent call last):
      File "tools/train.py", line 395, in <module>
        main()
      File "tools/train.py", line 323, in main
        epoch, num_batch, num_warmup, writer_dict, logger, device, rank)
      File "/home/hotify/YOLOP/lib/core/function.py", line 51, in train
        for i, (input, target, paths, shapes) in enumerate(train_loader):
      File "/home/hotify/anaconda3/envs/yolop/lib/python3.7/site-packages/prefetch_generator/__init__.py", line 92, in __next__
        return self.next()
      File "/home/hotify/anaconda3/envs/yolop/lib/python3.7/site-packages/prefetch_generator/__init__.py", line 85, in next
        next_item = self.queue.get()
      File "/home/hotify/anaconda3/envs/yolop/lib/python3.7/queue.py", line 170, in get
        self.not_empty.wait()
      File "/home/hotify/anaconda3/envs/yolop/lib/python3.7/threading.py", line 296, in wait
        waiter.acquire()
    KeyboardInterrupt
    ^C
    

    This resulted from OpenCV trying to read an empty file. To further confirm this, I edited AutoDriveDataset.py and tried to print the image shape just before cvtColor()

        def __getitem__(self, idx):
            """
            Get input and groud-truth from database & add data augmentation on input
    
            Inputs:
            -idx: the index of image in self.db(database)(list)
            self.db(list) [a,b,c,...]
            a: (dictionary){'image':, 'information':}
    
            Returns:
            -image: transformed image, first passed the data augmentation in __getitem__ function(type:numpy), then apply self.transform
            -target: ground truth(det_gt,seg_gt)
    
            function maybe useful
            cv2.imread
            cv2.cvtColor(data, cv2.COLOR_BGR2RGB)
            cv2.warpAffine
            """
            data = self.db[idx]
            img = cv2.imread(data["image"], cv2.IMREAD_COLOR | cv2.IMREAD_IGNORE_ORIENTATION)
            print(f'log: {img.shape}') # My edit
            img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
    ...
    

    This verifies the claim. Can this be originating from pytorch ?

    Kindly help me in this. Thanks

    opened by pra-dan 3
  • YOLOP trained on RGB images / BGR images

    YOLOP trained on RGB images / BGR images

    I find that YOLOP is trained on RGB images.

    While loading images for train/val in AutoDriveDataset.py, there is this line: img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)

    But while loading data for demo.py, I don't see a conversion from BGR to RGB.

    Can the author please confirm what the right way is?

    opened by LuthraBhomik 0
  • ๅคšๅก่ฎญ็ปƒ็š„ๆ—ถๅ€™๏ผŒๅกๅœจ=> start training...

    ๅคšๅก่ฎญ็ปƒ็š„ๆ—ถๅ€™๏ผŒๅกๅœจ=> start training...

    ่ฎญ็ปƒ้…็ฝฎๅฆ‚ไธ‹๏ผš (torch171) lpj@252-2titanx:~/csn_work/YOLOP$ python -m torch.distributed.launch --nproc_per_node=2 tools/train.py


    Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed.


    begin to bulid up model... => creating runs/BddDataset/_2022-11-29-15-36 Namespace(conf_thres=0.001, dataDir='', iou_thres=0.6, local_rank=0, logDir='runs/', modelDir='', prevModelDir='', sync_bn=False) AUTO_RESUME: False CUDNN: BENCHMARK: True DETERMINISTIC: False ENABLED: True DATASET: COLOR_RGB: False DATAROOT: /media/new_data4/csn_work/Datasets/BDD/BDD100K/images DATASET: BddDataset DATA_FORMAT: jpg FLIP: True HSV_H: 0.015 HSV_S: 0.7 HSV_V: 0.4 LABELROOT: /media/new_data4/csn_work/Datasets/BDD/BDD100K/det_annotations LANEROOT: /media/new_data4/csn_work/Datasets/BDD/BDD100K/ll_seg_annotations MASKROOT: /media/new_data4/csn_work/Datasets/BDD/BDD100K/da_seg_annotations ORG_IMG_SIZE: [720, 1280] ROT_FACTOR: 10 SCALE_FACTOR: 0.25 SELECT_DATA: False SHEAR: 0.0 TEST_SET: val TRAIN_SET: train TRANSLATE: 0.1 DEBUG: False GPUS: (0, 1) LOG_DIR: runs/ LOSS: BOX_GAIN: 0.05 CLS_GAIN: 0.5 CLS_POS_WEIGHT: 1.0 DA_SEG_GAIN: 0.2 FL_GAMMA: 0.0 LL_IOU_GAIN: 0.2 LL_SEG_GAIN: 0.2 LOSS_NAME: MULTI_HEAD_LAMBDA: None OBJ_GAIN: 1.0 OBJ_POS_WEIGHT: 1.0 SEG_POS_WEIGHT: 1.0 MODEL: EXTRA:

    HEADS_NAME: [''] IMAGE_SIZE: [640, 640] NAME: PRETRAINED: PRETRAINED_DET: STRU_WITHSHARE: False NEED_AUTOANCHOR: False PIN_MEMORY: False PRINT_FREQ: 20 TEST: BATCH_SIZE_PER_GPU: 16 MODEL_FILE: NMS_CONF_THRESHOLD: 0.001 NMS_IOU_THRESHOLD: 0.6 PLOTS: True SAVE_JSON: False SAVE_TXT: False TRAIN: ANCHOR_THRESHOLD: 4.0 BATCH_SIZE_PER_GPU: 16 BEGIN_EPOCH: 0 DET_ONLY: False DRIVABLE_ONLY: False ENC_DET_ONLY: False ENC_SEG_ONLY: True END_EPOCH: 200 GAMMA1: 0.99 GAMMA2: 0.0 IOU_THRESHOLD: 0.2 LANE_ONLY: False LR0: 0.001 LRF: 0.2 MOMENTUM: 0.937 NESTEROV: True OPTIMIZER: adam PLOT: True SEG_ONLY: True SHUFFLE: True VAL_FREQ: 1 WARMUP_BIASE_LR: 0.1 WARMUP_EPOCHS: 3.0 WARMUP_MOMENTUM: 0.8 WD: 0.0005 WORKERS: 8 num_seg_class: 2 begin to bulid up model... Using torch 1.7.1 CUDA:0 (NVIDIA GeForce RTX 3080 Ti, 12053MB) CUDA:1 (NVIDIA GeForce RTX 3080 Ti, 12053MB)

    load model to device load model to device freeze encoder and Det head... freezing model.0.conv.conv.weight freezing model.0.conv.bn.weight freezing model.0.conv.bn.bias freezing model.1.conv.weight freezing model.1.bn.weight freezing model.1.bn.bias freezing model.2.cv1.conv.weight freezing model.2.cv1.bn.weight freezing model.2.cv1.bn.bias freezing model.2.cv2.weight freezing model.2.cv3.weight freezing model.2.cv4.conv.weight freezing model.2.cv4.bn.weight freezing model.2.cv4.bn.bias freezing model.2.bn.weight freezing model.2.bn.bias freezing model.2.m.0.cv1.conv.weight freezing model.2.m.0.cv1.bn.weight freezing model.2.m.0.cv1.bn.bias freezing model.2.m.0.cv2.conv.weight freezing model.2.m.0.cv2.bn.weight freezing model.2.m.0.cv2.bn.bias freezing model.3.conv.weight freezing model.3.bn.weight freezing model.3.bn.bias freezing model.4.cv1.conv.weight freezing model.4.cv1.bn.weight freezing model.4.cv1.bn.bias freezing model.4.cv2.weight freezing model.4.cv3.weight freezing model.4.cv4.conv.weight freezing model.4.cv4.bn.weight freezing model.4.cv4.bn.bias freezing model.4.bn.weight freezing model.4.bn.bias freezing model.4.m.0.cv1.conv.weight freezing model.4.m.0.cv1.bn.weight freezing model.4.m.0.cv1.bn.bias freezing model.4.m.0.cv2.conv.weight freezing model.4.m.0.cv2.bn.weight freezing model.4.m.0.cv2.bn.bias freezing model.4.m.1.cv1.conv.weight freezing model.4.m.1.cv1.bn.weight freezing model.4.m.1.cv1.bn.bias freezing model.4.m.1.cv2.conv.weight freezing model.4.m.1.cv2.bn.weight freezing model.4.m.1.cv2.bn.bias freezing model.4.m.2.cv1.conv.weight freezing model.4.m.2.cv1.bn.weight freezing model.4.m.2.cv1.bn.bias freezing model.4.m.2.cv2.conv.weight freezing model.4.m.2.cv2.bn.weight freezing model.4.m.2.cv2.bn.bias freezing model.5.conv.weight freezing model.5.bn.weight freezing model.5.bn.bias freezing model.6.cv1.conv.weight freezing model.6.cv1.bn.weight freezing model.6.cv1.bn.bias freezing model.6.cv2.weight freezing model.6.cv3.weight freezing model.6.cv4.conv.weight freezing model.6.cv4.bn.weight freezing model.6.cv4.bn.bias freezing model.6.bn.weight freezing model.6.bn.bias freezing model.6.m.0.cv1.conv.weight freezing model.6.m.0.cv1.bn.weight freezing model.6.m.0.cv1.bn.bias freezing model.6.m.0.cv2.conv.weight freezing model.6.m.0.cv2.bn.weight freezing model.6.m.0.cv2.bn.bias freezing model.6.m.1.cv1.conv.weight freezing model.6.m.1.cv1.bn.weight freezing model.6.m.1.cv1.bn.bias freezing model.6.m.1.cv2.conv.weight freezing model.6.m.1.cv2.bn.weight freezing model.6.m.1.cv2.bn.bias freezing model.6.m.2.cv1.conv.weight freezing model.6.m.2.cv1.bn.weight freezing model.6.m.2.cv1.bn.bias freezing model.6.m.2.cv2.conv.weight freezing model.6.m.2.cv2.bn.weight freezing model.6.m.2.cv2.bn.bias freezing model.7.conv.weight freezing model.7.bn.weight freezing model.7.bn.bias freezing model.8.cv1.conv.weight freezing model.8.cv1.bn.weight freezing model.8.cv1.bn.bias freezing model.8.cv2.conv.weight freezing model.8.cv2.bn.weight freezing model.8.cv2.bn.bias freezing model.9.cv1.conv.weight freezing model.9.cv1.bn.weight freezing model.9.cv1.bn.bias freezing model.9.cv2.weight freezing model.9.cv3.weight freezing model.9.cv4.conv.weight freezing model.9.cv4.bn.weight freezing model.9.cv4.bn.bias freezing model.9.bn.weight freezing model.9.bn.bias freezing model.9.m.0.cv1.conv.weight freezing model.9.m.0.cv1.bn.weight freezing model.9.m.0.cv1.bn.bias freezing model.9.m.0.cv2.conv.weight freezing model.9.m.0.cv2.bn.weight freezing model.9.m.0.cv2.bn.bias freezing 
model.10.conv.weight freezing model.10.bn.weight freezing model.10.bn.bias freezing model.13.cv1.conv.weight freezing model.13.cv1.bn.weight freezing model.13.cv1.bn.bias freezing model.13.cv2.weight freezing model.13.cv3.weight freezing model.13.cv4.conv.weight freezing model.13.cv4.bn.weight freezing model.13.cv4.bn.bias freezing model.13.bn.weight freezing model.13.bn.bias freezing model.13.m.0.cv1.conv.weight freezing model.13.m.0.cv1.bn.weight freezing model.13.m.0.cv1.bn.bias freezing model.13.m.0.cv2.conv.weight freezing model.13.m.0.cv2.bn.weight freezing model.13.m.0.cv2.bn.bias freezing model.14.conv.weight freezing model.14.bn.weight freezing model.14.bn.bias freezing model.17.cv1.conv.weight freezing model.17.cv1.bn.weight freezing model.17.cv1.bn.bias freezing model.17.cv2.weight freezing model.17.cv3.weight freezing model.17.cv4.conv.weight freezing model.17.cv4.bn.weight freezing model.17.cv4.bn.bias freezing model.17.bn.weight freezing model.17.bn.bias freezing model.17.m.0.cv1.conv.weight freezing model.17.m.0.cv1.bn.weight freezing model.17.m.0.cv1.bn.bias freezing model.17.m.0.cv2.conv.weight freezing model.17.m.0.cv2.bn.weight freezing model.17.m.0.cv2.bn.bias freezing model.18.conv.weight freezing model.18.bn.weight freezing model.18.bn.bias freezing model.20.cv1.conv.weight freezing model.20.cv1.bn.weight freezing model.20.cv1.bn.bias freezing model.20.cv2.weight freezing model.20.cv3.weight freezing model.20.cv4.conv.weight freezing model.20.cv4.bn.weight freezing model.20.cv4.bn.bias freezing model.20.bn.weight freezing model.20.bn.bias freezing model.20.m.0.cv1.conv.weight freezing model.20.m.0.cv1.bn.weight freezing model.20.m.0.cv1.bn.bias freezing model.20.m.0.cv2.conv.weight freezing model.20.m.0.cv2.bn.weight freezing model.20.m.0.cv2.bn.bias freezing model.21.conv.weight freezing model.21.bn.weight freezing model.21.bn.bias freezing model.23.cv1.conv.weight freezing model.23.cv1.bn.weight freezing model.23.cv1.bn.bias freezing model.23.cv2.weight freezing model.23.cv3.weight freezing model.23.cv4.conv.weight freezing model.23.cv4.bn.weight freezing model.23.cv4.bn.bias freezing model.23.bn.weight freezing model.23.bn.bias freezing model.23.m.0.cv1.conv.weight freezing model.23.m.0.cv1.bn.weight freezing model.23.m.0.cv1.bn.bias freezing model.23.m.0.cv2.conv.weight freezing model.23.m.0.cv2.bn.weight freezing model.23.m.0.cv2.bn.bias freezing model.24.m.0.weight freezing model.24.m.0.bias freezing model.24.m.1.weight freezing model.24.m.1.bias freezing model.24.m.2.weight freezing model.24.m.2.bias freeze Det head... 
freezing model.17.cv1.conv.weight freezing model.17.cv1.bn.weight freezing model.17.cv1.bn.bias freezing model.17.cv2.weight freezing model.17.cv3.weight freezing model.17.cv4.conv.weight freezing model.17.cv4.bn.weight freezing model.17.cv4.bn.bias freezing model.17.bn.weight freezing model.17.bn.bias freezing model.17.m.0.cv1.conv.weight freezing model.17.m.0.cv1.bn.weight freezing model.17.m.0.cv1.bn.bias freezing model.17.m.0.cv2.conv.weight freezing model.17.m.0.cv2.bn.weight freezing model.17.m.0.cv2.bn.bias freezing model.18.conv.weight freezing model.18.bn.weight freezing model.18.bn.bias freezing model.20.cv1.conv.weight freezing model.20.cv1.bn.weight freezing model.20.cv1.bn.bias freezing model.20.cv2.weight freezing model.20.cv3.weight freezing model.20.cv4.conv.weight freezing model.20.cv4.bn.weight freezing model.20.cv4.bn.bias freezing model.20.bn.weight freezing model.20.bn.bias freezing model.20.m.0.cv1.conv.weight freezing model.20.m.0.cv1.bn.weight freezing model.20.m.0.cv1.bn.bias freezing model.20.m.0.cv2.conv.weight freezing model.20.m.0.cv2.bn.weight freezing model.20.m.0.cv2.bn.bias freezing model.21.conv.weight freezing model.21.bn.weight freezing model.21.bn.bias freezing model.23.cv1.conv.weight freezing model.23.cv1.bn.weight freezing model.23.cv1.bn.bias freezing model.23.cv2.weight freezing model.23.cv3.weight freezing model.23.cv4.conv.weight freezing model.23.cv4.bn.weight freezing model.23.cv4.bn.bias freezing model.23.bn.weight freezing model.23.bn.bias freezing model.23.m.0.cv1.conv.weight freezing model.23.m.0.cv1.bn.weight freezing model.23.m.0.cv1.bn.bias freezing model.23.m.0.cv2.conv.weight freezing model.23.m.0.cv2.bn.weight freezing model.23.m.0.cv2.bn.bias freezing model.24.m.0.weight freezing model.24.m.0.bias freezing model.24.m.1.weight freezing model.24.m.1.bias freezing model.24.m.2.weight freezing model.24.m.2.bias begin to load data building database... 100%|โ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆ| 70000/70000 [00:24<00:00, 2912.50it/s] database build finish building database... 100%|โ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆ| 10000/10000 [00:03<00:00, 2907.35it/s] database build finish load data finished anchors loaded successfully tensor([[[0.3750, 1.1250], [0.6250, 1.3750], [0.5000, 2.5000]],

        [[0.4375, 1.1250],
         [0.3750, 2.4375],
         [0.7500, 1.9375]],
    
        [[0.5938, 1.5625],
         [1.1875, 2.5312],
         [2.1250, 4.9062]]], device='cuda:0')
    

    => start training... Start traning. What could be the cause of this, and is there a good way to solve it?

    opened by csn223355 0
  • Error during PTQ

    Error during PTQ

    Hello everyone,

    currently, I'm trying to speed up the inference of yolop. I tried several different containers from NGC with torch-tensorrt support but I always get the same error. When I use Hardswish with return x * F.hardtanh(x + 3, 0., 6.) / 6. # for torchscript, CoreML and ONNX I'm getting the error: '__torch__.lib.models.common.Hardswish' object has no attribute or method '__add__'. Did you forget to initialize an attribute in __init__()? When I try to use Hardswish with return x * F.hardsigmoid(x) # for torchscript and CoreML I'm getting: Expected a value of type 'Tensor' for argument 'input' but instead found type '__torch__.lib.models.common.Hardswish. Any solutions?

    opened by Darianek 0
Owner
Hust Visual Learning Team
The Hust Visual Learning Team belongs to the Artificial Intelligence Research Institute in the School of EIC at HUST.