PointCloud Annotation Tool — supports labeling object bounding boxes, ground, lane lines, and kerbs

Overview

PCAT Point Cloud Annotation Tool — User Manual

  • Demo project; feel free to modify it for your own needs

  • This is the open source version:

    Author: WenwenDu TEL: 18355180339 E-mail: [email protected]

  • Video tutorials:

  1. https://v.youku.com/v_show/id_XNDYxNjY4MDExMg==.html?spm=a2h0k.11417342.soresults.dtitle

  2. https://v.youku.com/v_show/id_XNDYxNjY4MDI5Mg==.html?spm=a2hzp.8244740.0.0

I. Environment Setup and Installation

  • Requirements: Ubuntu 16.04 + ROS Kinetic (full install)
  • Note: make sure the system uses the native Python 2.7. If Anaconda2 is installed, temporarily disable it in the ~/.bashrc environment variables to avoid conflicts, as sketched below. (If you use ROS long term, it is strongly recommended to run Anaconda inside a virtual environment to avoid conflicts.)
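A minimal sketch of temporarily disabling Anaconda2 in ~/.bashrc; the install path shown is an assumption and may differ on your machine:
# In ~/.bashrc, comment out the Anaconda2 initialization line, for example:
# export PATH="$HOME/anaconda2/bin:$PATH"
# Then reload the shell configuration so the change takes effect:
source ~/.bashrc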

1. Install ROS Kinetic

Refer to the ROS Wiki installation guide; the installation steps are as follows:

Add the ROS package source:
sudo sh -c 'echo "deb http://packages.ros.org/ros/ubuntu $(lsb_release -sc) main" > /etc/apt/sources.list.d/ros-latest.list'
Add the ROS source key:
sudo apt-key adv --keyserver hkp://ha.pool.sks-keyservers.net:80 --recv-key 421C365BD9FF1F717815A3895523BAEEB01FA116
Update the package index:
sudo apt-get update
Install the full ROS desktop version (Rviz, PCL, and other modules are used, so be sure to install the full version):
sudo apt-get install ros-kinetic-desktop-full
sudo apt-cache search ros-kinetic
Initialize ROS:
sudo rosdep init
rosdep update
Add the environment variable:
echo "source /opt/ros/kinetic/setup.bash" >> ~/.bashrc
source ~/.bashrc
Update the ROS environment variables:
source /opt/ros/kinetic/setup.bash
Test whether ROS installed successfully:
Open a new terminal and run:
roscore
Test Rviz:
Open a new terminal and run:
rviz

If the installation succeeded, the rviz window is displayed as follows: (image)

2. Install the PCAT Annotation Tool

(1) Enter the PCAT folder.
(2) Open a terminal and run the install command: sh install.sh
(3) When "install successful" is printed and a lidar_annotation folder appears under your home directory, the installation has succeeded (see the sketch below).
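The same steps as a minimal shell sketch, assuming the repository was downloaded to ~/PCAT (the path is an assumption):
cd ~/PCAT              # enter the PCAT folder (assumed location)
sh install.sh          # should finish by printing "install successful"
ls ~/lidar_annotation  # this folder is created under home on a successful install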

II. Importing PCD Files

  1. Import the point cloud PCD files to be annotated
Copy the .pcd point cloud files to be annotated into the lidar_annotation/pcd/ folder.

Note: by default the annotation tool supports LiDAR point clouds in PCD format with the fields [x, y, z, intensity]. If your PCD files use another format such as XYZRGB, change the value of the pcd_type parameter in src/rviz_cloud_annotation/launch/annotation.launch.
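To check which fields a PCD file actually contains before adjusting pcd_type, you can inspect its header; the file name below is a placeholder:
# The FIELDS line of the PCD header lists the point fields,
# e.g. "FIELDS x y z intensity" or "FIELDS x y z rgb"
head -n 11 ~/lidar_annotation/pcd/your_cloud.pcd | grep FIELDS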

Common issues

[1] How do I support other PCD types or other 3D point formats? Modify the following code:
// src/rviz_cloud_annotation/src/rviz_cloud_annotation_class.cpp
// Loads the point cloud from `filename` into `cloud` (normals taken from `normal_source`);
// adapt this function to read your own point type.
void RVizCloudAnnotation::LoadCloud(const std::string &filename,
                                    const std::string &normal_source,
                                    PointXYZRGBNormalCloud &cloud);

  2. Start annotating
Open a terminal and run: sh run.sh

The annotation interface is displayed as follows: (image)


III. Annotation Manual

Please read this section carefully before first use.

1. Annotation panel overview

The five modules A, B, C, D, and E in the figure above are described in detail below:

  • A. Annotation menu bar
The annotation menu bar consists of five parts: [File], [Edit], [View], [Label], [Select]
File: (1) switch to a new file, (2) clear the labels of the current frame, (3) save
Edit: (1) undo, (2) redo
View: (1) increase point size, (2) decrease point size, (3) reset point size
Label: (1) clear the labels of the current object, (2) switch color, (3) set the obstacle BBox occlusion coefficient, (4) adjust the obstacle BBox orientation, (5) adjust the obstacle BBox size
Select: (1) jump to the next object, (2) jump to the previous object
Notes:
1. Switching to a new file automatically saves the annotations of the current file.
2. Undo/redo is expensive; avoid it where possible.
3. After finishing one object, switch to the next object before labeling a new one, otherwise the current labels will be overwritten; selecting a new color switches to the next object automatically; the object ID is shown on the panel.
4. When labeling obstacles, colors 1~5, 6~10, 11~15, and 16~20 correspond to the labels car, truck, pedestrian, and cyclist, respectively.
5. When labeling obstacles, set the orientation angle and occlusion coefficient according to the actual scene: 0 = not occluded, 1 = fully occluded.
Keep the annotation workflow as simple as possible; mastering the keyboard shortcuts greatly improves annotation speed.

(image) Special note: when a point is labeled as obstacle, kerb, lane line, and ground at the same time, the label priority is (obstacle > kerb/lane line > ground).

2. Annotation procedure

Please watch the video tutorials before reading the annotation instructions.

  • Annotate in the following order: [obstacles --> kerbs --> lane lines --> ground].
(1) Obstacles
Obstacles fall into four classes: cars (passenger cars), trucks (trucks, trams, etc.), pedestrians, and cyclists (e-bikes).
This dataset mainly contains cars and pedestrians, plus a small number of trucks and cyclists. Select the corresponding buttons on the `color panel` for the different obstacle classes.
The color panel is divided into four blocks: colors 1~5, 6~10, 11~15, and 16~20 correspond to car, truck, pedestrian, and cyclist, respectively.
For each point cloud frame, label an obstacle only if it is present. After finishing one obstacle, switch to the next obstacle before starting a new annotation.
(For example, after labeling the first car, press `Shift+N` to switch to the next car, or press `Shift+P` to switch back to the previous obstacle to revise it.)
Selecting a new color automatically switches to the next obstacle.
For every obstacle, the annotator must judge its approximate heading and adjust the orientation accordingly (R and F keys).
For occluded obstacles, set the `occlusion coefficient`; the default is 0 (not occluded), and most obstacles are not occluded.

(image)

(2) Kerbs
Kerbs are the boundaries of the ground within the road, as shown in the figure above. Kerbs can only be labeled by point picking (see the video tutorials for the exact procedure).
A frame usually contains several kerbs. After labeling one kerb, switch to the next kerb before labeling another; switching works the same way as for obstacles.
(3) Lane lines
Lane lines are line segments on the road whose color stands out clearly. They appear rather infrequently; if none appear or they are not clearly visible, they do not need to be labeled. Lane lines are labeled exactly like kerbs.
(4) Ground
The ground is a key part of each point cloud frame. It is usually labeled with polygon selection, with the previously labeled kerbs as its boundary.
The ground can be labeled in several passes and stitched together; selecting too many points at once makes ground generation slow.
*Since version 2.4.0, the annotation tool offers assisted ground labeling: each time the user clicks the `Ground (F2)` button, the system automatically generates about 95% of the ground, and the user refines the details on top of it to obtain the final ground annotation.

3. Annotation results

Result path description

(image)

3D bounding box label

(image)


IV. Notes

1. If you run into problems while using the annotation tool, have questions about the code, or need modifications, contact @杜文文 (WenwenDu, 18355180339 / [email protected]).
2. Video tutorials:
   A. https://v.youku.com/v_show/id_XNDYxNjY4MDExMg==.html?spm=a2h0k.11417342.soresults.dtitle
   B. https://v.youku.com/v_show/id_XNDYxNjY4MDI5Mg==.html?spm=a2hzp.8244740.0.0

V. Copyright

  1. Software copyright: the copyright of this annotation tool belongs to WenwenDu.
  2. Other copyrights: this annotation tool was developed on top of the RIMLab open-source annotation tool rviz_cloud_annotation: https://github.com/RMonica/rviz_cloud_annotation
Original copyright notice:
/*
 * Copyright (c) 2016-2017, Riccardo Monica
 *   RIMLab, Department of Engineering and Architecture
 *   University of Parma, Italy
 *   http://www.rimlab.ce.unipr.it/
 *
 * Redistribution and use in source and binary forms, with or without
 * modification, are permitted provided that the following conditions are met:
 *
 * 1. Redistributions of source code must retain the above copyright notice,
 *    this list of conditions and the following disclaimer.
 *
 * 2. Redistributions in binary form must reproduce the above copyright notice,
 *    this list of conditions and the following disclaimer in the documentation
 *    and/or other materials provided with the distribution.
 *
 * 3. Neither the name of the copyright holder nor the names of its
 *    contributors may be used to endorse or promote products derived from this
 *    software without specific prior written permission.
 *
 * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
 * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
 * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
 * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE
 * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
 * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
 * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
 * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
 * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
 * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
 * POSSIBILITY OF SUCH DAMAGE.
 */