Efficient and intelligent interactive segmentation annotation software

Overview

EISeg

Python 3.6 PaddlePaddle 2.2 License Downloads

简体中文 | English

Latest News

  • The interactive segmentation paper EdgeFlow was accepted by the ICCV 2021 Workshop.
  • EISeg 0.4.0 is released: static-graph inference is supported for much faster interaction, remote sensing and medical annotation verticals have been added, and grid (tile) annotation is now available.

Introduction

EISeg (Efficient Interactive Segmentation) is an efficient and intelligent interactive segmentation annotation tool developed on PaddlePaddle and built on the RITM and EdgeFlow algorithms. It covers high-quality interactive segmentation models for general, portrait, remote sensing, and medical scenarios, helping developers quickly produce semantic and instance labels at a lower annotation cost. Annotations produced with EISeg can also be used to train the other segmentation models provided by PaddleSeg, yielding high-accuracy models for customized scenarios and covering the full pipeline from data annotation to model training and inference.


Model Preparation

Before using EISeg, please download the model parameters first. EISeg 0.4.0 provides four domain-specific models trained on COCO+LVIS, a large-scale portrait dataset, mapping_challenge, and LiTS (Liver Tumor Segmentation Challenge), covering general scenes, portrait scenes, building annotation, and liver annotation in medical imaging. The model architecture corresponds to the network selection module in the EISeg interface, so users should choose the architecture and load the parameters that match their scenario.

Model type  Applicable scenario  Model architecture  Download link
High-precision model  General-purpose image annotation  HRNet18_OCR64  static_hrnet18_ocr64_cocolvis
Lightweight model  General-purpose image annotation  HRNet18s_OCR48  static_hrnet18s_ocr48_cocolvis
High-precision model  Portrait annotation  HRNet18_OCR64  static_hrnet18_ocr64_human
Lightweight model  Portrait annotation  HRNet18s_OCR48  static_hrnet18s_ocr48_human
High-precision model  General-purpose image annotation  EdgeFlow  static_edgeflow_cocolvis
Lightweight model  Remote sensing building annotation  HRNet18s_OCR48  static_hrnet18_ocr48_rsbuilding_instance
Lightweight model  Medical liver annotation  HRNet18s_OCR48  static_hrnet18s_ocr48_lits

NOTE: The downloaded model structure file (*.pdmodel) and the corresponding parameter file (*.pdiparams) must be placed in the same directory. When loading a model, only the *.pdiparams file needs to be selected; the *.pdmodel file is loaded automatically. When using the EdgeFlow model, turn Use Mask off; for all other models, keep Use Mask checked.
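
As a sanity check outside of EISeg, a downloaded model pair can also be loaded directly with the Paddle inference API. The sketch below is only illustrative; the paths are placeholders and should point at the files you actually downloaded.

# Minimal sketch: load a static-graph model pair with the Paddle inference API.
from paddle.inference import Config, create_predictor

# Placeholder paths: the *.pdmodel and *.pdiparams files must sit in the same directory.
model_file = "static_hrnet18_ocr64_cocolvis/model.pdmodel"
params_file = "static_hrnet18_ocr64_cocolvis/model.pdiparams"

config = Config(model_file, params_file)  # the inference API takes both files explicitly
config.disable_gpu()                      # CPU inference; GPU setup is optional

predictor = create_predictor(config)
print("model inputs:", predictor.get_input_names())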

Installation and Usage

EISeg offers several installation methods; installing and running it via pip works on Windows, macOS, and Linux. To avoid environment conflicts, installing inside a conda virtual environment is recommended.

Requirements:

  • PaddlePaddle >= 2.2.0

For PaddlePaddle installation, please refer to the official website.
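
A quick way to confirm that the installed PaddlePaddle satisfies this requirement is to check the version and run Paddle's built-in installation check:

# Sanity check of the PaddlePaddle installation and version.
import paddle

print(paddle.__version__)  # should be 2.2.0 or newer
paddle.utils.run_check()   # runs a small computation to verify the installation works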

Clone the repository

Clone PaddleSeg locally with git:

git clone https://github.com/PaddlePaddle/PaddleSeg.git

After the required environment is installed, enter the EISeg directory; EISeg can be launched by running the eiseg module directly:

cd PaddleSeg\EISeg
python -m eiseg

Alternatively, enter the eiseg directory and run exe.py to launch EISeg:

cd PaddleSeg\EISeg\eiseg
python exe.py

PIP

Installation via pip works as follows:

pip install eiseg

pip installs the dependencies automatically. After installation, type the following on the command line:

eiseg

to launch the software.

Windows exe

EISeg is packaged with QPT. The latest EISeg build can be downloaded here; after unzipping, double-click 启动程序.exe to run the program. On its first run the program initializes and installs the required packages, so please wait a moment.

Usage

After opening the software and before annotating a project, complete the following setup:

  1. Load model parameters

    Choose and load a network and parameters suited to your annotation scenario. In EISeg 0.4.0, dynamic-graph inference has been replaced with static-graph inference, which greatly speeds up every click. After downloading and unzipping the chosen model, place the model structure file (*.pdmodel) and the corresponding parameter file (*.pdiparams) in the same directory; when loading, simply select the *.pdiparams file. Static-graph models take a little longer to initialize, so please wait for loading to finish before moving on. Correctly loaded parameters are recorded under Recent Model Parameters for easy switching, and the parameters in use at exit are loaded automatically the next time the software starts.

  2. Load images

    Open an image or an image folder. You are ready once the image is shown correctly in the main view and its path appears in the data list.

  3. Add/load labels

    Add or load labels. New labels can be created with Add Label; each label has four columns: pixel value, description, color, and delete. A finished label list can be saved as a txt file with Save Label List, and collaborators can import it with Load Label List. Labels imported this way are loaded automatically after the software restarts.

  4. Configure auto-save

    Turn on Auto Save and choose a target folder; when you switch images, finished annotations are then saved automatically (a sketch for spot-checking the saved masks follows this list).
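
As referenced in step 4, the following is a minimal sketch for inspecting one of the auto-saved masks, assuming grayscale saving is enabled so that each pixel stores a label's pixel value; the path below is a placeholder.

# Minimal sketch: inspect an auto-saved grayscale label mask.
import numpy as np
from PIL import Image

# Placeholder path under the auto-save folder.
mask = np.array(Image.open("images/label/example.png"))

print("mask shape:", mask.shape)
# The values printed here should match the pixel values defined in the label list.
print("pixel values present:", np.unique(mask))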

Once setup is complete you can start annotating. The commonly used default keys/shortcuts are listed below; press E to open the shortcut editor if you need to change them.

Key/Shortcut  Function
Left mouse button  Add a positive sample point
Right mouse button  Add a negative sample point
Middle mouse button  Pan the image
Ctrl + middle mouse button (wheel)  Zoom the image
S  Previous image
F  Next image
Space  Finish annotation / switch state
Ctrl+Z  Undo
Ctrl+Shift+Z  Clear
Ctrl+Y  Redo
Ctrl+A  Open image
Shift+A  Open folder
E  Open the shortcut table
Backspace  Delete polygon
Double-click (point)  Delete point
Double-click (edge)  Add point

Notes on Key Features

  • Polygons

    • After the interactive clicks, press Space to finish the interactive annotation; the polygon boundary then appears.
    • To continue interacting inside the polygon, press Space to switch back to interactive mode; in this mode the polygon cannot be selected or edited.
    • Polygons can be deleted. Anchor points can be dragged with the left mouse button; double-clicking an anchor point deletes it, and double-clicking an edge between two anchor points adds a new anchor point on that edge.
    • When Keep Largest Connected Component is enabled, each click keeps only the largest region in the image; smaller regions are neither displayed nor saved.
  • Save formats

    • When Save JSON or Save COCO is enabled, polygons are recorded and reloaded automatically the next time the image is opened (see the sketch after this list for reading the COCO output back).
    • If no save path is set, results are saved by default to a label folder under the current image folder.
    • If images share the same name but differ in extension, enable Use Same Extension for Label and Image.
    • Grayscale, pseudo-color, and cutout saving can also be enabled; see tools 7-9 in the toolbar.
  • Mask generation

    • Labels can be reordered by dragging them by the second column; when the final mask is generated, labels are painted over one another from the top of the label list downward.
  • Interface modules

    • The interface modules to display can be chosen in the Display menu; on a normal exit their state and position are recorded and restored automatically the next time the software opens.
  • Domain-specific segmentation

    EISeg now supports remote sensing and medical image segmentation; using these features requires installing extra dependencies.
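
As noted in the save-format item above, a COCO-format export can be read back with standard COCO tooling. Below is a minimal sketch using pycocotools, which is not an EISeg dependency and must be installed separately; the annotation path is a placeholder.

# Minimal sketch: read polygons and boxes back from a COCO-format export.
from pycocotools.coco import COCO

# Placeholder path: the COCO file written when Save COCO is enabled.
coco = COCO("images/label/annotations.json")

for img_id in coco.getImgIds():
    img_info = coco.loadImgs(img_id)[0]
    anns = coco.loadAnns(coco.getAnnIds(imgIds=img_id))
    for ann in anns:
        # 'segmentation' holds the polygon points; 'bbox' is [x, y, width, height].
        print(img_info["file_name"], ann["category_id"], ann["bbox"])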

Version History

  • 2021.11.16 0.4.0: [1] Converted dynamic-graph inference to static-graph inference, making each click about ten times faster; [2] added remote sensing image annotation with selection of multispectral data channels; [3] added slicing (multi-grid) processing for large images; [4] added medical image annotation with support for reading DICOM data and choosing window width and window level.
  • 2021.09.16 0.3.0: [1] Initial polygon editing, allowing the results of interactive annotation to be edited; [2] Chinese/English interface; [3] saving as grayscale/pseudo-color labels and in COCO format; [4] more flexible interface docking; [5] draggable label list, with masks overlaid from top to bottom.
  • 2021.07.07 0.2.0: Added contrib: EISeg, enabling fast interactive annotation of portraits and general images.

Contributors

Thanks to Yuying Hao, Lin Han, Yizhou Chen, Yiakwy, GT, Zhiliang Yu, and the other developers, as well as to the RITM algorithm for its support.

Citation

If this project helps you academically, please consider citing:

@article{hao2021edgeflow,
  title={EdgeFlow: Achieving Practical Interactive Segmentation with Edge-Guided Flow},
  author={Hao, Yuying and Liu, Yi and Wu, Zewu and Han, Lin and Chen, Yizhou and Chen, Guowei and Chu, Lutao and Tang, Shiyu and Yu, Zhiliang and Chen, Zeyu and others},
  journal={arXiv preprint arXiv:2109.09406},
  year={2021}
}
Comments
  • fix(add bbox) : the output does not give correct bbox info

    1. The output for the COCO dataset type does not contain the correct bbox info.

    Before the fix (note the bbox is [0, 0, 0, 0]): (screenshot)

    After the fix: (screenshot)

    2. It also generates a correct bbox annotation QGraphics widget, which is better than the default "boundingRect":

    (screenshot)

    opened by yiakwy 10
  • Layout of the model's point input

    Could the authors briefly explain what each dimension of the point input to the model's forward pass represents?

    For example, after clicking a single point, the input point tensor has shape (2, 2, 3). I now know that the last dimension holds (y, x, index) and that the second dimension is twice the number of positive points, but I don't understand why the second dimension has to be 2 — what does the other half store? (Judging from the values, y stays the same while x changes.) I also don't understand the first dimension: why is it 2, and what does it mean?

    Thanks for your excellent work!

    solved 
    opened by zhijiejia 9
  • release/0.4.0: some currently known bugs

    [SELF]

    • [x] After switching model parameters, the image must be switched before the new model takes effect
    • [x] Polygons with holes may go out of bounds
    • [x] Occasionally the interface is scrambled after startup and loading the recent model is broken too (current guess: a crash during debugging corrupted the saved settings)
    • [x] #67
    • [x] #66
    • [x] When multiple images are opened individually, the first switch with the S and F shortcuts misbehaves
    • [x] After grid annotation finishes, the mask and polygon are offset
    • [x] In grid mode some saved shp files cannot be displayed in ArcGIS (a projection issue that needs to be solved)
    • [x] Multi-label problem in grid mode (everything is exported as a single label)
    • [ ] #70
    • [ ] #68
    • [x] Show the image's geographic information
    • [x] Error when opening images (when files with names like .xxx.jpg exist)
    • [x] GPU/TensorRT loading problem (framework version issue)
    • [x] With keep-largest-connected-component inference, the generated polygon still contains all components
    • [x] After closing and selecting images several times, the geographic information is cleared
    • [x] #71
    • [x] Exported shp polygons are not closed
    • [x] Exported tif has the wrong number of bands (it uses the original image's band count)
    • [x] [Discussion] The case of exported image names differs from the original image names (on case-sensitive Linux, could files fail to match during training? @linhandev)
    • [x] Crash when adjusting window width and similar settings on jpg and other medical images
    • [x] [Polish] English translation support
    • [x] [Polish] pip packaging
    • [ ] [Polish] exe packaging
    • [x] [Polish] Markdown documentation

    [QA]

    • [x] #74
    • [x] #75
    • [x] #76
    • [x] #77
    • [x] #78
    • [x] #79
    • [x] #80
    • [x] #81
    bug solved 
    opened by geoyee 6
  • Error when opening an image directly

    Opening an image that has no label directly raises an error:

      File "e:\PdCVSIG\github\EISeg\eiseg\app.py", line 1092, in loadLabel
        imgId = self.coco.imgNameToId.get(osp.basename(imgPath), None)
    AttributeError: 'NoneType' object has no attribute 'imgNameToId'
    
    bug 
    opened by geoyee 6
  • Cannot select a folder on a Linux server

    When I click Load Network Parameters, a window pops up asking me to select the model parameters, but nothing happens no matter which folder I click. This is the console output when the window appears (screenshot). It currently looks like a Qt issue; see the following references:

    bug good first issue solved 
    opened by geoyee 5
  • Crash after loading the model

    D:\anaconda3\lib\site-packages\win32\lib\pywintypes.py:2: DeprecationWarning: the imp module is deprecated in favour of importlib; see the module's documentation for alternative uses
      import imp, sys, os
    qt.qpa.fonts: Unable to enumerate family ' "Droid Sans Mono Dotted for Powerline" '
    qt.qpa.fonts: Unable to enumerate family ' "Droid Sans Mono Slashed for Powerline" '
    qt.qpa.fonts: Unable to enumerate family ' "Roboto Mono Medium for Powerline" '
    qt.qpa.fonts: Unable to enumerate family ' "Ubuntu Mono derivative Powerline" '
    OMP: Error #15: Initializing libiomp5md.dll, but found libiomp5md.dll already initialized.
    OMP: Hint This means that multiple copies of the OpenMP runtime have been linked into the program. That is dangerous, since it can degrade performance or cause incorrect results. The best thing to do is to ensure that only a single OpenMP runtime is linked into the process, e.g. by avoiding static linking of the OpenMP runtime in any library. As an unsafe, unsupported, undocumented workaround you can set the environment variable KMP_DUPLICATE_LIB_OK=TRUE to allow the program to continue to execute, but that may cause crashes or silently produce incorrect results. For more information, please see http://www.intel.com/software/products/support/.
    QObject::~QObject: Timers cannot be stopped from another thread

    bug 
    opened by YQisme 4
  • Add sliding bar to adjust window width and window center

    Add a sliding bar to adjust window width and window center. When reading medical images, the window width and window center adapt according to the modality.

    opened by richarddddd198 3
  • Train module problem: ritm_train.py import error.

    Thanks for sharing! An import error occurs in ritm_train.py: from model.model import ( get_hrnet_model, DistMapsHRNetModel, get_deeplab_model, get_shufflenet_model, ). These symbols do not exist in model.py, or anywhere else in this project. Where can I find these modules?

    opened by mengmeng716 3
  • plugin\remotesensing\raster.py line 123: remote sensing image is displayed as black

    Bug description: a float32 tif with values in the 0~1 range has all of its data replaced with 0, so the displayed image is black.

    Fix: rgb.append(np.uint16(self.src_data.read(b))) --> rgb.append(self.src_data.read(b))

    bug solved 
    opened by yangweiguang213 2
  • AttributeError: 'NoneType' object has no attribute 'getGrid' when using the grid feature

    Bug description: when using the grid feature, after clicking "Save the label of each grid" a folder dialog pops up automatically. After selecting a folder (are there any requirements for it?) and then clicking the next grid to annotate, the program exits with the following error: AttributeError: 'NoneType' object has no attribute 'getGrid'

    Screenshot: (attached)

    Runtime environment (please fill in as much as possible; it helps us locate the problem):

    • OS: Windows
    • Installation method: pip
    • Software version: 2.3 (latest)
    bug 
    opened by Yanghanwa 0
  • protobuf error at startup

    Bug description: protobuf raises an error at startup. Would you consider upgrading protobuf, or pinning the protobuf version in the pip package?

    Screenshot:

    # eiseg                                                                                                                                                                                                                                   
    Traceback (most recent call last):
      File "/Users/tachao/miniconda3/bin/eiseg", line 5, in <module>
        from eiseg.run import main
      File "/Users/tachao/miniconda3/lib/python3.9/site-packages/eiseg/run.py", line 25, in <module>
        from app import APP_EISeg  # 导入带槽的界面
      File "/Users/tachao/miniconda3/lib/python3.9/site-packages/eiseg/app.py", line 34, in <module>
        from controller import InteractiveController
      File "/Users/tachao/miniconda3/lib/python3.9/site-packages/eiseg/controller.py", line 23, in <module>
        import paddle
      File "/Users/tachao/miniconda3/lib/python3.9/site-packages/paddle/__init__.py", line 25, in <module>
        from .framework import monkey_patch_variable
      File "/Users/tachao/miniconda3/lib/python3.9/site-packages/paddle/framework/__init__.py", line 17, in <module>
        from . import random  # noqa: F401
      File "/Users/tachao/miniconda3/lib/python3.9/site-packages/paddle/framework/random.py", line 16, in <module>
        import paddle.fluid as fluid
      File "/Users/tachao/miniconda3/lib/python3.9/site-packages/paddle/fluid/__init__.py", line 36, in <module>
        from . import framework
      File "/Users/tachao/miniconda3/lib/python3.9/site-packages/paddle/fluid/framework.py", line 35, in <module>
        from .proto import framework_pb2
      File "/Users/tachao/miniconda3/lib/python3.9/site-packages/paddle/fluid/proto/framework_pb2.py", line 33, in <module>
        _descriptor.EnumValueDescriptor(
      File "/Users/tachao/miniconda3/lib/python3.9/site-packages/google/protobuf/descriptor.py", line 755, in __new__
        _message.Message._CheckCalledFromGeneratedFile()
    TypeError: Descriptors cannot not be created directly.
    If this call came from a _pb2.py file, your generated code is out of date and must be regenerated with protoc >= 3.19.0.
    If you cannot immediately regenerate your protos, some other possible workarounds are:
     1. Downgrade the protobuf package to 3.20.x or lower.
     2. Set PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION=python (but this will use pure-Python parsing and will be much slower).
    
    More information: https://developers.google.com/protocol-buffers/docs/news/2022-05-06#python-updates
    
    

    Runtime environment (please fill in as much as possible; it helps us locate the problem):

    • OS: [macOS/Linux]
    • Installation method: [pip]
    • Software version: [0.5.0]
    bug 
    opened by TaChao 1
  • Hope to add the function of converting json format labels to coco format labels in one click

    Currently, if we have already labeled a batch of images in JSON format, the only way to get COCO-format labels is to re-label the images. This is not very convenient for users. I hope the team can add this function, or at least release a script that implements it. I noticed the only conversion script shipped turns semantic labels into instance labels; so few scripts seem inconsistent with such a powerful piece of software.

    opened by Leon-Brant 0