Powerful and efficient Computer Vision Annotation Tool (CVAT)

Overview


CVAT is a free, online, interactive video and image annotation tool for computer vision. Our team uses it to annotate millions of objects with different properties. Many UI and UX decisions are based on feedback from a professional data annotation team. Try it online at cvat.org.

CVAT screenshot

Documentation

Screencasts

Supported annotation formats

Format selection is possible after clicking the Upload annotation and Dump annotation buttons. The Datumaro dataset framework allows additional dataset transformations via its command-line tool and Python library.

For more information about supported formats, see the documentation.
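
As a minimal sketch of the Python-library route (the paths below are placeholders, and format names can vary between Datumaro versions):

    from datumaro.components.dataset import Dataset

    # Load an annotation dump produced by CVAT and re-export it as MS COCO.
    dataset = Dataset.import_from('path/to/cvat_dump', 'cvat')
    dataset.export('path/to/coco_output', 'coco')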

| Annotation format                  | Import | Export |
| ---------------------------------- | ------ | ------ |
| CVAT for images                    | X      | X      |
| CVAT for video                     | X      | X      |
| Datumaro                           |        | X      |
| PASCAL VOC                         | X      | X      |
| Segmentation masks from PASCAL VOC | X      | X      |
| YOLO                               | X      | X      |
| MS COCO Object Detection           | X      | X      |
| TFRecord                           | X      | X      |
| MOT                                | X      | X      |
| LabelMe 3.0                        | X      | X      |
| ImageNet                           | X      | X      |
| CamVid                             | X      | X      |
| WIDER Face                         | X      | X      |
| VGGFace2                           | X      | X      |
| Market-1501                        | X      | X      |
| ICDAR13/15                         | X      | X      |

Deep learning serverless functions for automatic labeling

| Name                           | Type       | Framework  | CPU | GPU |
| ------------------------------ | ---------- | ---------- | --- | --- |
| Deep Extreme Cut               | interactor | OpenVINO   | X   |     |
| Faster RCNN                    | detector   | OpenVINO   | X   |     |
| Mask RCNN                      | detector   | OpenVINO   | X   |     |
| YOLO v3                        | detector   | OpenVINO   | X   |     |
| Object reidentification        | reid       | OpenVINO   | X   |     |
| Semantic segmentation for ADAS | detector   | OpenVINO   | X   |     |
| Text detection v4              | detector   | OpenVINO   | X   |     |
| SiamMask                       | tracker    | PyTorch    | X   | X   |
| f-BRS                          | interactor | PyTorch    | X   |     |
| HRNet                          | interactor | PyTorch    |     | X   |
| Inside-Outside Guidance        | interactor | PyTorch    | X   |     |
| Faster RCNN                    | detector   | TensorFlow | X   | X   |
| Mask RCNN                      | detector   | TensorFlow | X   | X   |
| RetinaNet                      | detector   | PyTorch    | X   | X   |

Online demo: cvat.org

This is an online demo with the latest version of the annotation tool. Try it without local installation. Users can see only their own or assigned tasks.

Disabled features:

Limitations:

  • No more than 10 tasks per user
  • Uploaded data is limited to 500 MB

Prebuilt Docker images

Prebuilt Docker images for CVAT releases are available on Docker Hub:

LICENSE

Code released under the MIT License.

This software uses LGPL licensed libraries from the FFmpeg project. The exact steps on how FFmpeg was configured and compiled can be found in the Dockerfile.

FFmpeg is an open source framework licensed under LGPL and GPL. See https://www.ffmpeg.org/legal.html. You are solely responsible for determining if your use of FFmpeg requires any additional licenses. Intel is not responsible for obtaining any such licenses, nor liable for any licensing fees due in connection with your use of FFmpeg.

Partners

  • Onepanel is an open source vision AI platform that fully integrates CVAT with scalable data processing and parallelized training pipelines.
  • DataIsKey uses CVAT as their prime data labeling tool to offer annotation services for projects of any size.
  • Human Protocol uses CVAT as a way of adding an annotation service to the Human Protocol.
  • Cogito Tech LLC, a Human-in-the-Loop Workforce Solutions Provider, used CVAT to annotate about 5,000 images for a brand operating in the fashion segment.

Questions

Questions about CVAT usage or unclear concepts can be posted in our Gitter chat for quick replies from contributors and other users.

However, if you have a feature request or a bug report that can be reproduced, feel free to open an issue (with steps to reproduce the bug if it's a bug report) on GitHub issues.

If you are not sure, or just want to browse other users' common questions, the Gitter chat is the way to go.

Other ways to ask questions and get our support:

Links

Comments
  • Cuboid annotation

    Addressing https://github.com/opencv/cvat/issues/147

    Description

    This PR adds fully functional cuboid annotation to CVAT. The cuboids are fully integrated and support the regular features of other shapes, such as copy-pasting, labels, etc.

    Usage

    Cuboids are created just like bounding boxes: simply select the cuboid shape in the UI and draw. Cuboids can be edited by dragging certain edges, points, or faces. Editing is constrained by a two-point perspective model; that is, all non-vertical edges converge on one of two vanishing points.

    You can see these vanishing points in action by checking the cuboid projection lines checkbox in the bottom left of the player.

    Annotation dump

    Points in the dump are ordered by vertical edges, starting with the leftmost edge and moving in counter-clockwise order. The first point of each edge is always the top one.

    For example, the first point is the top point of the leftmost edge, the second point is the bottom point of the leftmost edge, and the third point is the top point of the edge at the front of the cuboid.
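
    A small Python sketch of how this documented ordering could be decoded (the helper name and the flat coordinate list are assumptions for illustration, not part of the dump format itself):

    def cuboid_edges(points):
        # points: flat list [x0, y0, x1, y1, ...] holding the 8 cuboid vertices
        pts = [(points[i], points[i + 1]) for i in range(0, len(points), 2)]
        # Consecutive pairs form the vertical edges, counter-clockwise from the
        # leftmost edge; the first point of each pair is the top one.
        return [(pts[i], pts[i + 1]) for i in range(0, 8, 2)]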

    Known issues

    • Currently, this build only supports dumping and uploading in cvat-xml format.
    • The copy-paste buffer cuboid is just a polyline, but is still usable

    The cuboids were developed with feedback from an in-house annotation team. This feature is fully functional, but of course any feedback or comments are appreciated!

    enhancement 
    opened by HollowTube 71
  • CVAT-3D milestone6


    Hi @bsekachev, @nmanovic, @zhiltsov-max,

    CVAT 3D Milestone 6 changes:

    Added support for Dump, Export, and Upload of annotations in PCD and KITTI formats. The code changes are only in four files: bindings.py and registry.py, plus the new dataset modules pointcloud.py and velodynepoint.py.
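
    For context, a rough sketch of the registration pattern such format modules follow, assuming the exporter/importer decorators exposed by registry.py; the argument names shown are illustrative and may differ from the actual signatures:

    from cvat.apps.dataset_manager.formats.registry import exporter, importer

    @exporter(name='Point Cloud Format', ext='ZIP', version='1.0')
    def _export(dst_file, instance_data, save_images=False):
        ...  # serialize the annotations held by instance_data into dst_file

    @importer(name='Point Cloud Format', ext='ZIP', version='1.0')
    def _import(src_file, instance_data):
        ...  # parse src_file and load its annotations into instance_data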

    The rest of the files are from the M5 base branch; I was waiting for it to be merged, but since comments are still in progress I created this PR for M6.


    • [x] I submit my changes into the develop branch

    • We shall add the changes to the CHANGELOG after the M5 code is merged.

    • CHANGELOG file: https://github.com/opencv/cvat/blob/develop/CHANGELOG.md; Datumaro PR: https://github.com/openvinotoolkit/datumaro/pull/245

    • [x] I submit my code changes under the same MIT License that covers the project. Feel free to contact the maintainers if that's a concern.

    • [x] I have updated the license header for each file

    opened by manasars 69
  • Automatic Annotation


    I deployed my custom model for automatic annotation and everything seems fine: the inference progress bar is shown and the Docker logs look normal. However, the annotations do not show up on my image dataset in CVAT.

    What can I do?

    More info: the following is what I send back to CVAT, i.e. the context.Response part:

    <class 'list'>---[{'confidence': '0.4071217', 'label': '0.0', 'points': [360.0, 50.0, 1263.0, 720.0], 'type': 'rectangle'}]
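
    For reference, a minimal sketch of a nuclio detector handler returning such a payload (the inference itself is elided; note that CVAT maps the returned label values to the task's labels, and results with unmatched labels can silently fail to appear, which may be the symptom here):

    import json

    def handler(context, event):
        # ... decode event.body and run model inference here ...
        results = [{
            'confidence': '0.4071217',
            'label': '0.0',  # must map to a label defined for the task
            'points': [360.0, 50.0, 1263.0, 720.0],  # xtl, ytl, xbr, ybr
            'type': 'rectangle',
        }]
        return context.Response(body=json.dumps(results), headers={},
                                content_type='application/json', status_code=200)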

    bug 
    opened by QuarTerll 64
  • Create multiple tasks when uploading multiple videos


    Motivation and context

    Resolve #916

    How has this been tested?

    Checklist

    • [ ] I submit my changes into the develop branch
    • [ ] I have added a description of my changes into CHANGELOG file
    • [ ] I have updated the documentation accordingly
    • [ ] I have added tests to cover my changes
    • [ ] I have linked related issues (read github docs)
    • [ ] I have increased versions of npm packages if it is necessary (cvat-canvas, cvat-core, cvat-data and cvat-ui)

    License

    • [ ] I submit my code changes under the same MIT License that covers the project. Feel free to contact the maintainers if that's a concern.
    opened by AlexeyAlexeevXperienceAI 61
  • Added Cypress testing for feature: Multiple tasks creating from videos


    Motivation and context

    How has this been tested?

    Checklist

    • [x] I submit my changes into the develop branch
    • [ ] I have added a description of my changes into CHANGELOG file
    • [ ] I have updated the documentation accordingly
    • [ ] I have added tests to cover my changes
    • [ ] I have linked related issues (read github docs)
    • [ ] I have increased versions of npm packages if it is necessary (cvat-canvas, cvat-core, cvat-data and cvat-ui)

    License

    • [x] I submit my code changes under the same MIT License that covers the project. Feel free to contact the maintainers if that's a concern.
    opened by AlexeyAlexeevXperienceAI 60
  • Adding Kubernetes templates and deployment guide


    Motivation and context

    The topic was raised a couple of times in issues like #1087. Since Kubernetes is widely used, easy deployment into a Kubernetes environment would provide great value to the community and help bring CVAT to a wider audience.

    Especially due to changes like #1641, it is now much easier to deploy CVAT in a k8s environment.

    How has this been tested?

    I deployed this in a couple of namespaces within our cluster (with and without NVIDIA GPU). Furthermore, I did not make any changes to the code, so the only real issue was networking. Since I was following the docker-compose.yml closely, there were no real challenges.

    Checklist

    License

    • [X] I submit my code changes under the same MIT License that covers the project. Feel free to contact the maintainers if that's a concern.
    • [X] I have updated the license header for each file (see an example below)
    # Copyright (C) 2020 Intel Corporation
    #
    # SPDX-License-Identifier: MIT
    
    opened by Langhalsdino 51
  • CVAT 3D Milestone-5


    CVAT-3D Milestone 5: implement cuboid operations in the right sidebar and save annotations. Changes include:

    • Implemented displaying the list of annotated objects in the right sidebar.
    • Implemented switching the lock, hidden, pinned, and occluded properties of objects.
    • Implemented removing and saving annotations.
    • Implemented the Appearance tab to change opacity and outlined borders.

    Test cases: manual unit testing done locally. Existing test cases work as expected. System test cases will be shared.

    • [x] I submit my changes into the develop branch

    • [x] I submit my code changes under the same MIT License that covers the project.

    opened by manasars 48
  • Deleted frames


    Resolve #4235 Resolve #3000

    Motivation and context

    How has this been tested?

    Checklist

    • [x] I submit my changes into the develop branch
    • [x] I have added a description of my changes into CHANGELOG file
    • ~~[ ] I have updated the documentation accordingly~~
    • [x] I have added tests to cover my changes
    • [x] I have linked related issues (read github docs)
    • [x] I have increased versions of npm packages if it is necessary (cvat-canvas, cvat-core, cvat-data and cvat-ui)

    License

    • [x] I submit my code changes under the same MIT License that covers the project. Feel free to contact the maintainers if that's a concern.
    • [x] I have updated the license header for each file (see an example below)
    # Copyright (C) 2022 Intel Corporation
    #
    # SPDX-License-Identifier: MIT
    
    opened by ActiveChooN 44
  • Added paint brush tools


    Motivation and context

    Resolved #1849 Resolved #4868

    How has this been tested?

    Checklist

    • [x] I submit my changes into the develop branch
    • [x] I have added a description of my changes into CHANGELOG file
    • [ ] I have updated the documentation accordingly
    • [ ] I have added tests to cover my changes
    • [x] I have linked related issues (read github docs)
    • [x] I have increased versions of npm packages if it is necessary (cvat-canvas, cvat-core, cvat-data and cvat-ui)

    License

    • [x] I submit my code changes under the same MIT License that covers the project. Feel free to contact the maintainers if that's a concern.
    • [x] I have updated the license header for each file (see an example below)
    # Copyright (C) 2022 Intel Corporation
    #
    # SPDX-License-Identifier: MIT
    
    opened by bsekachev 42
  • Project export


    Motivation and context

    This PR provides the ability to export a project as a dataset or annotations, and reworks the export menus.

    Resolve #2911 Resolve #2678 Related #1278


    TODOs:

    • [x] Fix image exporting in some cases
    • [x] Add support for CVAT formats
    • [x] Add server unit tests
    • [x] Add UI support for exporting project and rework export task dataset menus

    How has this been tested?

    Checklist

    • [x] I submit my changes into the develop branch
    • [x] I have added a description of my changes into CHANGELOG file
    • ~~[ ] I have updated the documentation accordingly~~
    • [x] I have added tests to cover my changes
    • [x] I have linked related issues (read github docs)
    • [x] I have increased versions of npm packages if it is necessary (cvat-canvas, cvat-core, cvat-data and cvat-ui)

    License

    • [x] I submit my code changes under the same MIT License that covers the project. Feel free to contact the maintainers if that's a concern.
    • [x] I have updated the license header for each file (see an example below)
    # Copyright (C) 2021 Intel Corporation
    #
    # SPDX-License-Identifier: MIT
    
    opened by ActiveChooN 38
  • Tracking functionality for bounding boxes


    Hi, we are adding several features to CVAT that will be open-sourced. We might need your advice along the way and just wanted to know if you can help. Currently, we are trying to change the interpolation. As of now, interpolation just puts a bounding box in the remaining frames at the same position as in the first frame. We are trying to change that and add tracking. Since the code base is huge, I am unable to understand the exact flow of the process.

    For now, say that instead of constant coordinates I want to shift the box to the right a little bit (i.e., 10 pixels). I guess it's a trivial task; I just need your help regarding this, if possible. Thanks.
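
    As a toy illustration of the behaviour described above (the function and its parameters are made up for this example and are not CVAT internals):

    def propagate(box, num_frames, dx=0):
        # box: (xtl, ytl, xbr, ybr); dx: horizontal shift applied per frame
        xtl, ytl, xbr, ybr = box
        return [(xtl + dx * i, ytl, xbr + dx * i, ybr) for i in range(num_frames)]

    constant = propagate((100, 50, 200, 150), 5)        # current interpolation
    shifted = propagate((100, 50, 200, 150), 5, dx=10)  # desired: 10 px right per frame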

    enhancement 
    opened by savan77 34
  • Adjust Windows Installation Instructions to account for Nuclio issue#1821


    Motivation and context

    In my understanding of https://github.com/nuclio/nuclio/issues/1821, the Nuctl (1.8.14) CLI is looking for a path that is only valid in a Linux environment, which it does not find when running via Git Bash (even when using the Windows version of Nuctl). However, installing CVAT onto a Linux VM allows Nuctl to locate this path and operate normally.

    (I am still learning how GitHub works with regard to pull requests, forks, etc.; sorry if this is not the right way to approach this change. Please let me know if I've missed something important.)

    How has this been tested?

    This is only a change to the instructions, but I did test it on multiple machines. As long as the machine is capable of running a Linux kernel, it shouldn't run into any issues.

    Checklist

    • [x] I submit my changes into the develop branch
    • [x] I have added a description of my changes into CHANGELOG file
    • [x] I have updated the documentation accordingly (Purely documentation changed)
    • [ ] ~~I have added tests to cover my changes~~ (Does not change code)
    • [ ] ~~I have linked related issues (read github docs)~~ (This doesn't resolve the root issue, rather just works around it; doesn't make sense to cause an automatic close)
    • [ ] ~~I have increased versions of npm packages if it is necessary (cvat-canvas, cvat-core, cvat-data and cvat-ui)~~ (Was not necessary)

    License

    • [x] I submit my code changes under the same MIT License that covers the project. Feel free to contact the maintainers if that's a concern.
    opened by AstronomyGuy 0
  • [WIP] Fix pagination in some endpoints


    Motivation and context

    How has this been tested?

    Checklist

    • [ ] I submit my changes into the develop branch
    • [ ] I have added a description of my changes into CHANGELOG file
    • [ ] I have updated the documentation accordingly
    • [ ] I have added tests to cover my changes
    • [ ] I have linked related issues (read github docs)
    • [ ] I have increased versions of npm packages if it is necessary (cvat-canvas, cvat-core, cvat-data and cvat-ui)

    License

    • [ ] I submit my code changes under the same MIT License that covers the project. Feel free to contact the maintainers if that's a concern.
    opened by zhiltsov-max 0
  • [WIP] Improve error messages when limits reached


    Motivation and context

    How has this been tested?

    Checklist

    • [ ] I submit my changes into the develop branch
    • [ ] I have added a description of my changes into CHANGELOG file
    • [ ] I have updated the documentation accordingly
    • [ ] I have added tests to cover my changes
    • [ ] I have linked related issues (read github docs)
    • [ ] I have increased versions of npm packages if it is necessary (cvat-canvas, cvat-core, cvat-data and cvat-ui)

    License

    • [ ] I submit my code changes under the same MIT License that covers the project. Feel free to contact the maintainers if that's a concern.
    opened by kirill-sizov 0
  • It is very slow when more than 10 people do their job together.


    Hi developers,

    CVAT is very slow when more than 10 people work on it together. This problem bothers me very much; how can it be resolved? By the way, the server has enough memory!

    opened by YuanNBB 0
  • YOLOv7 serverless detector feature for auto annotation


    Motivation and context

    Integration of YOLOv7 as a serverless nuclio function that can be used for auto-labeling. YOLOv7 is the state of the art at the time of this PR, so it makes sense to support it in CVAT. The integration into CVAT is quite simple: a Docker-based function modeled on the Ultralytics YOLOv5 one, with a COCO-pretrained model (https://github.com/WongKinYiu/yolov7) and a docker image (https://hub.docker.com/r/ultralytics/yolov5).

    related issue: #5548

    How has this been tested?

    Automatic annotation was run using YOLOv7 on a custom dataset. The serverless function was deployed using

    nuctl deploy --project-name cvat \
      --path serverless/onnx/WongKinYiu/yolov7/nuclio \
      --volume `pwd`/serverless/common:/opt/nuclio/common \
      --platform local
    

    Then the function was tested using the 'Automatic annotation' action, and the auto-generated labels were checked to verify that no coordinate misfit occurs.

    Checklist

    • [x] I submit my changes into the develop branch
    • [x] I have added a description of my changes into CHANGELOG file
    • [x] I have updated the documentation accordingly
    • [x] I have added tests to cover my changes
    • [x] I have linked related issues (read github docs)
    • [x] I have increased versions of npm packages if it is necessary (cvat-canvas, cvat-core, cvat-data and cvat-ui)

    Using a custom model:

    1. Export your model with NMS for an image resolution of 640x640 (preferable).
    2. Copy your custom model yolov7-custom.onnx to /serverless/common
    3. Modify the function.yaml file according to your labels.
    4. Modify model_handler.py as follows:
     self.model_path = "yolov7-custom.onnx"
    

    License

    • [x] I submit my code changes under the same MIT License that covers the project. Feel free to contact the maintainers if that's a concern.
    models 
    opened by hardikdava 2
  • MMDetection Mask R-CNN serverless support for semi-automatic annotation


    I have created serverless support for semi-automatic annotation using the MMDetection implementation of Mask R-CNN. I believe this would be helpful for anyone who would like to use any of MMDetection's implementations to build a serverless function. Do let me know if it is needed. Thank you.

    opened by michael-selasi-dzamesi 2