LaneDet

Introduction

LaneDet is an open source lane detection toolbox based on PyTorch that aims to pull together a wide variety of state-of-the-art lane detection models. Developers can reproduce these SOTA methods and build their own methods.

[demo image]

Table of Contents

  • Introduction
  • Benchmark and model zoo
  • Installation
  • Data preparation
  • Getting Started
  • Contributing
  • Licenses
  • Acknowledgement

Benchmark and model zoo

Supported backbones:

  • ResNet
  • ERFNet
  • VGG
  • DLA (coming soon)

Supported detectors:

  • SCNN
  • RESA
  • LaneATT
  • CondLaneNet

Installation

Clone this repository

git clone https://github.com/turoad/lanedet.git

We refer to this directory as $LANEDET_ROOT.

Create a conda virtual environment and activate it (conda is optional)

conda create -n lanedet python=3.8 -y
conda activate lanedet

Install dependencies

# Install PyTorch first; the cudatoolkit version should match your system's CUDA version.
# (You can also use pip to install pytorch and torchvision.)
conda install pytorch torchvision cudatoolkit=10.1 -c pytorch

# Or you can install via pip
pip install torch torchvision

# Install python packages
python setup.py build develop
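
To sanity-check the build, you can try importing the compiled NMS op (an optional quick check; the lanedet.ops.nms module path matches the import tracebacks quoted in the Comments below):

python -c "import torch; from lanedet.ops import nms; print('CUDA available:', torch.cuda.is_available())"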

Data preparation

CULane

Download CULane, then extract it to $CULANEROOT. Create a link to the data directory.

cd $LANEDET_ROOT
mkdir -p data
ln -s $CULANEROOT data/CULane

For CULane, you should have a structure like this:

$CULANEROOT/driver_xx_xxframe    # data folders x6
$CULANEROOT/laneseg_label_w16    # lane segmentation labels
$CULANEROOT/list                 # data lists

Tusimple

Download Tusimple, then extract it to $TUSIMPLEROOT. Create a link to the data directory.

cd $LANEDET_ROOT
mkdir -p data
ln -s $TUSIMPLEROOT data/tusimple

For Tusimple, you should have a structure like this:

$TUSIMPLEROOT/clips # data folders
$TUSIMPLEROOT/label_data_xxxx.json # label json files x4
$TUSIMPLEROOT/test_tasks_0627.json # test tasks json file
$TUSIMPLEROOT/test_label.json # test label json file

For Tusimple, segmentation annotations are not provided, so we need to generate them from the JSON annotations.

python tools/generate_seg_tusimple.py --root $TUSIMPLEROOT
# this will generate seg_label directory
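
Conceptually, the script rasterizes each lane's points from the JSON labels into a per-pixel mask. Below is a minimal illustrative sketch of that idea, not the actual tools/generate_seg_tusimple.py; it assumes the standard TuSimple label fields (lanes, h_samples) and 1280x720 frames, and label_to_seg_mask is a hypothetical name:

import json

import cv2
import numpy as np

def label_to_seg_mask(label, height=720, width=1280, line_width=16):
    """Rasterize one TuSimple JSON label into a lane-id segmentation mask."""
    mask = np.zeros((height, width), dtype=np.uint8)
    for lane_id, xs in enumerate(label['lanes'], start=1):
        # TuSimple marks missing points with x = -2; keep only valid ones
        pts = [(int(x), int(y)) for x, y in zip(xs, label['h_samples']) if x >= 0]
        for p, q in zip(pts[:-1], pts[1:]):
            cv2.line(mask, p, q, color=lane_id, thickness=line_width)
    return mask

# each line of label_data_xxxx.json is one JSON object
with open('label_data_0313.json') as f:
    label = json.loads(f.readline())
mask = label_to_seg_mask(label)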

Getting Started

Training

For training, run

python main.py [configs/path_to_your_config] --gpus [gpu_ids]

For example, run

python main.py configs/resa/resa50_culane.py --gpus 0 1 2 3

Testing

For testing, run

python main.py [configs/path_to_your_config] --validate --load_from [path_to_your_model] --gpus [gpu_ids]

For example, run

python main.py configs/resa/resa50_culane.py --validate --load_from culane_resnet50.pth --gpus 0 1 2 3

Currently, the testing code can also output visualization results: just add --view. The visualizations will be saved in work_dirs/xxx/xxx/visualization.

For example, run

python main.py configs/resa/resa50_culane.py --validate --load_from culane_resnet50.pth --gpus 0 --view
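
For inference on your own images, several of the issue reports in the Comments below use tools/detect.py; a typical invocation (taken verbatim from one of those reports) is:

python tools/detect.py configs/resa/resa34_culane.py --img images --load_from resa_r34_culane.pth --savedir ./vis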

Contributing

We appreciate all contributions to improve LaneDet. Any pull requests or issues are welcome.

Licenses

This project is released under the Apache 2.0 license.

Acknowledgement

Comments
  • How can I properly change the input image size on CondLane?

    Currently I'm detecting lanes using tools/detect.py.

    For Condlane inference, I changed this

    batch_size=1 # from 8 (for condlane inference)
    

    And tried these configs for an FHD input image:

    img_height = 1080 # from 320
    img_width = 1920 # from 800
    
    ori_img_h = 1080 # from 590
    ori_img_w = 1920 # from 1640
    
    crop_bbox = [0,540,1920,1080] # from [0, 270, 1640, 590]
    

    Changing img_scale = (800, 320) results in:

    The size of tensor a must match the size of tensor b at non-singleton dimension 3
    

    How can I properly change the input image size (e.g. FHD) in the CondLane config file?

    opened by parkjbdev 20
  • curvature estimation

    Hello, I would like to know if there is any way to get real-time lane detection and curvature detection using deep learning. I have seen traditional computer vision algorithms but I am looking for a Deep Learning model that could help me out with this. Any suggestions will be very helpful. Thanks in advance.

    opened by k-nayak 9
  • Really bad inference results

    The inference outputs from the model are really bad even for very easy images.

    1. Using Laneatt_Res18_Culane: [image]

    2. Using SCNN_Res50_Culane: [image]

    Any idea why this is happening? I've just done normal inference without any changes.

    opened by sowmen 9
  • ImportError: cannot import name 'nms_impl' from partially initialized module 'lanedet.ops' (most likely due to a circular import)

    When I run python tools/detect.py configs/resa/resa34_culane.py --img images --load_from resa_r34_culane.pth --savedir ./vis :

    Traceback (most recent call last):
      File "D:/XXX/XXX/XXX/lanedet-main/tools/detect.py", line 8, in <module>
        from lanedet.datasets.process import Process
      File "D:\XXX\XXX\XXX\lanedet-main\lanedet\__init__.py", line 1, in <module>
        from .ops import *
      File "D:\XXX\XXX\XXX\lanedet-main\lanedet\ops\__init__.py", line 1, in <module>
        from .nms import nms
      File "D:\XXX\XXX\XXX\lanedet-main\lanedet\ops\nms.py", line 29, in <module>
        from . import nms_impl
    ImportError: cannot import name 'nms_impl' from partially initialized module 'lanedet.ops' (most likely due to a circular import) (D:\XXX\XXX\XXX\lanedet-main\lanedet\ops\__init__.py)

    opened by readerrubic 8
  • Custom image size for RESA

    Hello,

    I have tried testing on the CULane dataset with RESA and it works well on the example video_example/05081544_0305/, with the following image configuration:

    img_height = 288
    img_width = 800
    cut_height = 240
    ori_img_h = 590
    ori_img_w = 1640

    [image]

    But with a custom image and this configuration:

    img_height = 288
    img_width = 800
    cut_height = 240
    ori_img_h = 1208  # from 590
    ori_img_w = 1920  # from 1640

    With the above parameters: [image]

    With default parameters: [image]

    Could you please advise which parameters need to be tuned?

    Appreciate any response.

    Regards, Ajay

    opened by ajay1606 7
  • Can't convert the model to onnx

    sample_input = torch.rand((32, 3, 3, 3))

    torch.onnx.export(
        net1.module,        # PyTorch Model
        sample_input,       # Input tensor
        '/content/drive/MyDrive/MobileNetV2-model-onnx.onnx',  # Output file (eg. 'output_model.onnx')
        opset_version=12,         # Operator support version
        input_names=['input'],    # Input tensor name (arbitrary)
        output_names=['output']   # Output tensor name (arbitrary)
    )

    Got this error:

    TypeError                                 Traceback (most recent call last)
    <ipython-input> in <module>()
          5     opset_version=12,         # Operator support version
          6     input_names=['input'],    # Input tensor name (arbitrary)
    ----> 7     output_names=['output']   # Output tensor name (arbitrary)
          8 )

         21     def forward(self, batch):
         22         output = {}
    ---> 23         fea = self.backbone(batch['img'])
         24
         25         if self.aggregator:

    TypeError: new(): invalid data type 'str'

    enhancement 
    opened by AbdulFMS 6
  • HELP! A circular import error message appears in nms.py

    from . import nms_impl
    ImportError: cannot import name 'nms_impl' from partially initialized module 'lanedet.ops' (most likely due to a circular import) (D:\lanedet-main\lanedet\ops\__init__.py)

    opened by 13xyz7 6
  • Unable to find model file

    Hello, thank you so much for sharing a very useful repository.

    I have followed the step-by-step instructions and downloaded all the datasets as shown in the image below.

    [image]

    Training: python main.py configs/resa/resa50_culane.py --gpus 0

    After running the above command, I was able to see the following window: [image]

    But I couldn't find any model file such as culane_resnet50.pth or resa_r34_culane.pth, as mentioned in the example run case.

    Alternatively, is it possible to share the pre-trained model file?

    As I am a beginner, I greatly appreciate your understanding and kind response.

    Regards, Ajay

    opened by ajay1606 5
  • TypeError: expected string or bytes-like object

    python setup.py build develop

    File "/home/zzj/anaconda3/envs/Lanedet/lib/python3.8/site-packages/pkg_resources/_vendor/packaging/version.py", line 275, in init match = self._regex.search(version) TypeError: expected string or bytes-like object

    ubuntu20.04 what can i do?

    opened by hzzzzjzyq 5
  • Error

    If I don't change (from .nms import nms) in lanedet/ops/__init__.py to (from . import *), there is an error; and if I don't change (from . import nms_impl) in lanedet/ops/nms.py to (from . import *), there is an error. Also, when running inference, there is no lanedet directory inside the tools directory, resulting in a module error from lanedet/tools/detect.py lines 8~12. Is there any other way to remove these errors?

    opened by gui-hoon 5
  • Mobilenetv2 for condlane got error.

    Hey @Turoad, thanks for your work, it's very useful. I recently customized CondLane to train with a MobileNetV2 backbone but got this error:

    Traceback (most recent call last):
      File "main.py", line 65, in <module>
        main()
      File "main.py", line 35, in main
        runner.train()
      File "/mnt/09a762a6-3f6e-469b-8d6d-e9fa625e24b9/USER/LuanDD/lanedet/lanedet/engine/runner.py", line 94, in train
        self.train_epoch(epoch, train_loader)
      File "/mnt/09a762a6-3f6e-469b-8d6d-e9fa625e24b9/USER/LuanDD/lanedet/lanedet/engine/runner.py", line 67, in train_epoch
        output = self.net(data)
      File "/mnt/09a762a6-3f6e-469b-8d6d-e9fa625e24b9/USER/LuanDD/pyenv/lib/python3.6/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
        result = self.forward(*input, **kwargs)
      File "/mnt/09a762a6-3f6e-469b-8d6d-e9fa625e24b9/USER/LuanDD/pyenv/lib/python3.6/site-packages/mmcv/parallel/data_parallel.py", line 42, in forward
        return super().forward(*inputs, **kwargs)
      File "/mnt/09a762a6-3f6e-469b-8d6d-e9fa625e24b9/USER/LuanDD/pyenv/lib/python3.6/site-packages/torch/nn/parallel/data_parallel.py", line 165, in forward
        return self.module(*inputs[0], **kwargs[0])
      File "/mnt/09a762a6-3f6e-469b-8d6d-e9fa625e24b9/USER/LuanDD/pyenv/lib/python3.6/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
        result = self.forward(*input, **kwargs)
      File "/mnt/09a762a6-3f6e-469b-8d6d-e9fa625e24b9/USER/LuanDD/lanedet/lanedet/models/nets/detector.py", line 29, in forward
        fea = self.neck(fea)
      File "/mnt/09a762a6-3f6e-469b-8d6d-e9fa625e24b9/USER/LuanDD/pyenv/lib/python3.6/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
        result = self.forward(*input, **kwargs)
      File "/mnt/09a762a6-3f6e-469b-8d6d-e9fa625e24b9/USER/LuanDD/lanedet/lanedet/models/necks/fpn.py", line 113, in forward
        assert len(inputs) >= len(self.in_channels)
    AssertionError
    

    Can you help me clarify it? This is my config

    net = dict(
        type='Detector',
    )
    
    backbone = dict(
        type='MobileNet',
        net='MobileNetV2',
        pretrained=True,
        # replace_stride_with_dilation=[False, False, False],
        out_conv=False,
        # in_channels=[64, 128, 256, 512]
    )
    
    featuremap_out_channel = 1280
    featuremap_out_stride = 32 
    
    sample_y = range(590, 270, -8)
    
    batch_size = 8
    aggregator = dict(
        type='TransConvEncoderModule',
        in_dim=1280,
        attn_in_dims=[1280, 64],
        attn_out_dims=[64, 64],
        strides=[1, 1],
        ratios=[4, 4],
        pos_shape=(batch_size, 10, 25),
    )
    
    neck=dict(
        type='FPN',
        in_channels=[64, 128, 256, 64],
        out_channels=64,
        num_outs=4,
        #trans_idx=-1,
    )
    
    loss_weights=dict(
            hm_weight=1,
            kps_weight=0.4,
            row_weight=1.,
            range_weight=1.,
        )
    
    num_lane_classes=1
    heads=dict(
        type='CondLaneHead',
        heads=dict(hm=num_lane_classes),
        in_channels=(64, ),
        num_classes=num_lane_classes,
        head_channels=64,
        head_layers=1,
        disable_coords=False,
        branch_in_channels=64,
        branch_channels=64,
        branch_out_channels=64,
        reg_branch_channels=64,
        branch_num_conv=1,
        hm_idx=2,
        mask_idx=0,
        compute_locations_pre=True,
        location_configs=dict(size=(batch_size, 1, 80, 200), device='cuda:0')
    )
    
    optimizer = dict(type='AdamW', lr=3e-4, betas=(0.9, 0.999), eps=1e-8)
    optimizer = dict(type='SGD', lr=3e-3)
    
    epochs = 40
    total_iter = (88880 // batch_size) * epochs
    total_iter = (3688 // batch_size) * epochs
    
    import math
    scheduler = dict(
        type = 'MultiStepLR',
        milestones=[15, 25, 35],
        gamma=0.1
    )
    
    seg_loss_weight = 1.0
    eval_ep = 1
    save_ep = 1 
    
    img_norm = dict(
        mean=[75.3, 76.6, 77.6],
        std=[50.5, 53.8, 54.3]
    )
    
    img_height = 320 
    img_width = 800
    cut_height = 0 
    ori_img_h = 590
    ori_img_w = 1640
    
    mask_down_scale = 4
    hm_down_scale = 16
    num_lane_classes = 1
    line_width = 3
    radius = 6
    nms_thr = 4
    img_scale = (800, 320)
    crop_bbox = [0, 270, 1640, 590]
    mask_size = (1, 80, 200)
    
    train_process = [
        dict(type='Alaug',
        transforms=[dict(type='Compose', params=dict(bboxes=False, keypoints=True, masks=False)),
        dict(
            type='Crop',
            x_min=crop_bbox[0],
            x_max=crop_bbox[2],
            y_min=crop_bbox[1],
            y_max=crop_bbox[3],
            p=1),
        dict(type='Resize', height=img_scale[1], width=img_scale[0], p=1),
        dict(
            type='OneOf',
            transforms=[
                dict(
                    type='RGBShift',
                    r_shift_limit=10,
                    g_shift_limit=10,
                    b_shift_limit=10,
                    p=1.0),
                dict(
                    type='HueSaturationValue',
                    hue_shift_limit=(-10, 10),
                    sat_shift_limit=(-15, 15),
                    val_shift_limit=(-10, 10),
                    p=1.0),
            ],
            p=0.7),
        dict(type='JpegCompression', quality_lower=85, quality_upper=95, p=0.2),
        dict(
            type='OneOf',
            transforms=[
                dict(type='Blur', blur_limit=3, p=1.0),
                dict(type='MedianBlur', blur_limit=3, p=1.0)
            ],
            p=0.2),
        dict(type='RandomBrightness', limit=0.2, p=0.6),
        dict(
            type='ShiftScaleRotate',
            shift_limit=0.1,
            scale_limit=(-0.2, 0.2),
            rotate_limit=10,
            border_mode=0,
            p=0.6),
        dict(
            type='RandomResizedCrop',
            height=img_scale[1],
            width=img_scale[0],
            scale=(0.8, 1.2),
            ratio=(1.7, 2.7),
            p=0.6),
        dict(type='Resize', height=img_scale[1], width=img_scale[0], p=1),]
        ),
        dict(type='CollectLane',
            down_scale=mask_down_scale,
            hm_down_scale=hm_down_scale,
            max_mask_sample=5,
            line_width=line_width,
            radius=radius,
            keys=['img', 'gt_hm'],
            meta_keys=[
                'gt_masks', 'mask_shape', 'hm_shape',
                'down_scale', 'hm_down_scale', 'gt_points'
            ]
        ),
        #dict(type='Resize', size=(img_width, img_height)),
        dict(type='Normalize', img_norm=img_norm),
        dict(type='ToTensor', keys=['img', 'gt_hm'], collect_keys=['img_metas']),
    ]
    
    
    val_process = [
        dict(type='Alaug',
            transforms=[dict(type='Compose', params=dict(bboxes=False, keypoints=True, masks=False)),
                dict(type='Crop',
                x_min=crop_bbox[0],
                x_max=crop_bbox[2],
                y_min=crop_bbox[1],
                y_max=crop_bbox[3],
                p=1),
            dict(type='Resize', height=img_scale[1], width=img_scale[0], p=1)]
        ),
        #dict(type='Resize', size=(img_width, img_height)),
        dict(type='Normalize', img_norm=img_norm),
        dict(type='ToTensor', keys=['img']),
    ]
    
    # dataset_path = './data/CULane'
    dataset_path = './data/Merge_data'
    # val_path = './data/CULane'
    dataset = dict(
        train=dict(
            type='CULane',
            data_root=dataset_path,
            split='train',
            processes=train_process,
        ),
        val=dict(
            type='CULane',
            data_root=dataset_path,
            split='test',
            processes=val_process,
        ),
        test=dict(
            type='CULane',
            data_root=dataset_path,
            split='test',
            processes=val_process,
        )
    )
    
    
    workers = 6
    log_interval = 100
    lr_update_by_epoch=True
    

    Thank you so much

    opened by luan1412167 4
  • Build error: command '/usr/local/cuda-10.1/bin/nvcc' failed with exit code 1

    [email protected]:~/CARLA/PythonAPI/carla/lanedet$ python setup.py build develop

    running build
    running build_py
    running egg_info
    ...
    running build_ext
    building 'lanedet.ops.nms_impl' extension
    [compiler output trimmed: many warnings about the deprecated Tensor.type() and Tensor.data() APIs while building nms.cpp and nms_kernel.cu]
    ...
    /usr/include/c++/7/bits/basic_string.tcc:1067:16: error: cannot call member function 'void std::basic_string<_CharT, _Traits, _Alloc>::_Rep::_M_set_sharable() [with _CharT = char16_t; _Traits = std::char_traits<char16_t>; _Alloc = std::allocator<char16_t>]' without object
        __p->_M_set_sharable();
    /usr/include/c++/7/bits/basic_string.tcc:1067:16: error: cannot call member function 'void std::basic_string<_CharT, _Traits, _Alloc>::_Rep::_M_set_sharable() [with _CharT = char32_t; _Traits = std::char_traits<char32_t>; _Alloc = std::allocator<char32_t>]' without object
    error: command '/usr/local/cuda-10.1/bin/nvcc' failed with exit code 1

    opened by zzggzz 0
  • Is there any output for score or uncertainty?

    Hello,

    Thanks for the nice project. I wonder whether the lane detection output includes a score or uncertainty that measures the quality of each detection. It would be good to know whether a detection is good or bad. Do you have any idea? Another question: can I remove points that are unseen in the image, i.e. occluded by cars or other objects, but still output by the lane detection?

    Best

    opened by youkely 0
  • Detection failure and ghost point scenarios

    Hello, would you please share some tips to overcome detection failures in the following scenarios? [image]

    Issues:

    1. The method reports a detected lane even though no lane exists.
    2. Road sign markings (arrows) are also detected as lanes.
    3. Lanes are not detected while turning.

    I am currently using the pre-trained model condlane_r101_culane.pth, with the following configuration:

    net = dict(
        type='Detector',
    )
    
    backbone = dict(
        type='ResNetWrapper',
        resnet='resnet101',
        pretrained=True,
        replace_stride_with_dilation=[False, False, False],
        out_conv=False,
        in_channels=[64, 128, 256, 512]
    )
    
    ori_img_h = 2048
    ori_img_w = 2448
    bbox_h_start = 1024
    crop_bbox = [0, bbox_h_start, ori_img_w, ori_img_h]
    sample_y = range(ori_img_h, bbox_h_start, -8)
    
    batch_size = 1
    
    aggregator = dict(
        type='TransConvEncoderModule',
        in_dim=2048,
        attn_in_dims=[2048, 256],
        attn_out_dims=[256, 256],
        strides=[1, 1],
        ratios=[4, 4],
        pos_shape=(batch_size, 10, 25),
    )
    
    neck=dict(
        type='FPN',
        in_channels=[256, 512, 1024, 256],
        out_channels=64,
        num_outs=4,
        #trans_idx=-1,
    )
    
    loss_weights=dict(
            hm_weight=1,
            kps_weight=0.4,
            row_weight=1.,
            range_weight=1.,
        )
    
    num_lane_classes=1
    heads=dict(
        type='CondLaneHead',
        heads=dict(hm=num_lane_classes),
        in_channels=(64, ),
        num_classes=num_lane_classes,
        head_channels=64,
        head_layers=1,
        disable_coords=False,
        branch_in_channels=64,
        branch_channels=64,
        branch_out_channels=64,
        reg_branch_channels=64,
        branch_num_conv=1,
        hm_idx=2,
        mask_idx=0,
        compute_locations_pre=True,
        location_configs=dict(size=(batch_size, 1, 80, 200), device='cuda:0')
    )
    
    optimizer = dict(type='AdamW', lr=3e-4, betas=(0.9, 0.999), eps=1e-8)
    
    epochs = 16
    total_iter = (88880 // batch_size) * epochs
    import math
    scheduler = dict(
        type = 'MultiStepLR',
        milestones=[8, 14],
        gamma=0.1
    )
    
    seg_loss_weight = 1.0
    eval_ep = 1
    save_ep = 1
    
    img_norm = dict(
        mean=[75.3, 76.6, 77.6],
        std=[50.5, 53.8, 54.3]
    )
    
    img_height = 320
    img_width = 800
    cut_height = 0
    
    mask_down_scale = 4
    hm_down_scale = 16
    num_lane_classes = 1
    line_width = 3
    radius = 6
    nms_thr = 4
    img_scale = (800, 320)
    mask_size = (1, 80, 200)
    
    train_process = [
        dict(type='Alaug',
        transforms=[dict(type='Compose', params=dict(bboxes=False, keypoints=True, masks=False)),
        dict(
            type='Crop',
            x_min=crop_bbox[0],
            x_max=crop_bbox[2],
            y_min=crop_bbox[1],
            y_max=crop_bbox[3],
            p=1),
        dict(type='Resize', height=img_scale[1], width=img_scale[0], p=1),
        dict(
            type='OneOf',
            transforms=[
                dict(
                    type='RGBShift',
                    r_shift_limit=10,
                    g_shift_limit=10,
                    b_shift_limit=10,
                    p=1.0),
                dict(
                    type='HueSaturationValue',
                    hue_shift_limit=(-10, 10),
                    sat_shift_limit=(-15, 15),
                    val_shift_limit=(-10, 10),
                    p=1.0),
            ],
            p=0.7),
        dict(type='JpegCompression', quality_lower=85, quality_upper=95, p=0.2),
        dict(
            type='OneOf',
            transforms=[
                dict(type='Blur', blur_limit=3, p=1.0),
                dict(type='MedianBlur', blur_limit=3, p=1.0)
            ],
            p=0.2),
        dict(type='RandomBrightness', limit=0.2, p=0.6),
        dict(
            type='ShiftScaleRotate',
            shift_limit=0.1,
            scale_limit=(-0.2, 0.2),
            rotate_limit=10,
            border_mode=0,
            p=0.6),
        dict(
            type='RandomResizedCrop',
            height=img_scale[1],
            width=img_scale[0],
            scale=(0.8, 1.2),
            ratio=(1.7, 2.7),
            p=0.6),
        dict(type='Resize', height=img_scale[1], width=img_scale[0], p=1),]
    
        ),
        dict(type='CollectLane',
            down_scale=mask_down_scale,
            hm_down_scale=hm_down_scale,
            max_mask_sample=5,
            line_width=line_width,
            radius=radius,
            keys=['img', 'gt_hm'],
            meta_keys=[
                'gt_masks', 'mask_shape', 'hm_shape',
                'down_scale', 'hm_down_scale', 'gt_points'
            ]
        ),
        #dict(type='Resize', size=(img_width, img_height)),
        dict(type='Normalize', img_norm=img_norm),
        dict(type='ToTensor', keys=['img', 'gt_hm'], collect_keys=['img_metas']),
    ]
    
    
    val_process = [
        dict(type='Alaug',
            transforms=[dict(type='Compose', params=dict(bboxes=False, keypoints=True, masks=False)),
                dict(type='Crop',
                x_min=crop_bbox[0],
                x_max=crop_bbox[2],
                y_min=crop_bbox[1],
                y_max=crop_bbox[3],
                p=1),
            dict(type='Resize', height=img_scale[1], width=img_scale[0], p=1)]
        ),
        #dict(type='Resize', size=(img_width, img_height)),
        dict(type='Normalize', img_norm=img_norm),
        dict(type='ToTensor', keys=['img']),
    ]
    
    dataset_path = './data/CULane'
    dataset = dict(
        train=dict(
            type='CULane',
            data_root=dataset_path,
            split='train',
            processes=train_process,
        ),
        val=dict(
            type='CULane',
            data_root=dataset_path,
            split='test',
            processes=val_process,
        ),
        test=dict(
            type='CULane',
            data_root=dataset_path,
            split='test',
            processes=val_process,
        )
    )
    
    
    workers = 12
    log_interval = 1000
    lr_update_by_epoch=True
    

    Appreciate any response.

    Is there any way I can modify the confidence score?

    Regards, Ajay

    opened by ajay1606 0
  • Any ways to increase the line points density

    Hello,

    Can any parameters be tuned to increase the detected lane point density? I would like the detection output to contain considerably denser points.

    Otherwise, I would consider writing an interpolation function to increase the density, along the lines of the sketch below. I just wonder whether any way to do this already exists in the repo!
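
    A minimal linear-interpolation sketch of what I mean (hypothetical; densify_lane is illustrative and not part of LaneDet):

    import numpy as np

    def densify_lane(points, step=1.0):
        """points: (x, y) pairs sorted by increasing y; returns denser samples."""
        pts = np.asarray(points, dtype=float)
        ys = np.arange(pts[:, 1].min(), pts[:, 1].max() + step, step)
        xs = np.interp(ys, pts[:, 1], pts[:, 0])  # linearly interpolate x along y
        return np.stack([xs, ys], axis=1)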

    Appreciate any response.

    Regards, Ajay

    opened by ajay1606 0