The image dataset for the 2021 AI project

Overview

face-mask-dataset-ilc-2021

The image dataset for the 2021 AI project. Post your git IDs in the issue to get access rights.

TL;DR:

  • Pick 200 JPEG images, with roughly 1/3 without a mask, 1/3 with a mask, and 1/3 with the mask worn incorrectly
  • Rename each image to the MD5 hash of the file
  • Annotate with labelImg (or any other tool that produces XML files in PASCAL VOC format)
  • Commit on your branch "contrib_NOM1_NOM2"
  • Once all the images are annotated, open a pull request to the VALID branch
  • The ILC Discord is handy for discussion

1. Distribution

The images are sorted into 3 categories:

  • "with_mask", un masque correctment porté et qui recouvre la bouche et le nez
  • "with_incorrect_mask", un masque porté sous le nez, ou de facon pas très covid-friendly
  • "without_mask, Un visage sans masque

The dataset should end up with about 2,300 images, which, split across 23 people, comes to roughly 100 images to annotate per person.
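
Since the annotations are plain PASCAL VOC XML files, it is easy to check that every bounding box uses one of the three category names above. Below is a minimal sketch of such a check; this helper is not part of the repo, and it assumes the XML files live in an annotations folder (as in the file paths cited in the issues).

    # Hypothetical helper (not part of the repo): flag annotation files whose
    # object labels are not one of the three expected categories.
    from pathlib import Path
    import xml.etree.ElementTree as ET

    EXPECTED = {"with_mask", "with_incorrect_mask", "without_mask"}

    def check_labels(annotations_dir="annotations"):
        for xml_path in sorted(Path(annotations_dir).glob("*.xml")):
            root = ET.parse(xml_path).getroot()
            # PASCAL VOC stores each box label in an <object><name> element.
            labels = {obj.findtext("name", default="") for obj in root.iter("object")}
            unexpected = labels - EXPECTED
            if unexpected:
                print(f"{xml_path.name}: unexpected labels {sorted(unexpected)}")

    if __name__ == "__main__":
        check_labels()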

2. Image handling

The images must be handled as follows:

  • The filename is the md5sum of the file (a renaming sketch follows this list)
  • Masks added with photo editing (Photoshop-style) are not allowed, for performance reasons
  • Look for similar images, for example with the compare_images Python script (see section 4)
  • The split across categories must be balanced (roughly the same number of images per category, within about 100)
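
The renaming step is handled by rename_dir_md5.py (see section 4). As a rough illustration of the idea only, renaming a folder of JPEGs to their MD5 hashes could look like the sketch below; it assumes .jpg files sitting in the JPEGImages folder, and the repo script remains the reference.

    # Illustration only: rename each JPEG to the MD5 hash of its contents.
    # The repo's rename_dir_md5.py is the reference implementation.
    import hashlib
    from pathlib import Path

    def rename_to_md5(directory="JPEGImages"):
        for path in sorted(Path(directory).glob("*.jpg")):
            digest = hashlib.md5(path.read_bytes()).hexdigest()
            target = path.with_name(digest + ".jpg")
            if target != path:
                path.rename(target)

    if __name__ == "__main__":
        rename_to_md5()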

3. Committing

The idea is to have a "VALID" branch that collects all images awaiting validation, and to keep the "main" branch only for the final result. Make sure to describe your progress in your commits and pull requests. Each pair adds its images on its own branch "contrib_NOM1_NOM2", then opens a pull request to the "VALID" branch once its 200 images have been added and annotated.
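
For example, a pair's workflow might look like the following sketch (the folder names JPEGImages and annotations are taken from elsewhere in this README, and the commit message is only a placeholder):

    # run inside your clone of the repo
    git checkout -b contrib_NOM1_NOM2
    git add JPEGImages/ annotations/
    git commit -m "Add 200 annotated images (with/without/incorrect mask)"
    git push origin contrib_NOM1_NOM2
    # then open a pull request targeting the VALID branch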

4. Useful tools

  • To annotate the images: labelImg
  • To find duplicates among the images: the "compare_images.py" script (can be run from anywhere); pass it the two folders source (other people's images) and to_add (yours to add). A rough sketch of the idea follows this list.
  • To rename all your images to their MD5 hash (to do before annotating): the "rename_dir_md5.py" script (move it into the JPEGImages folder to run it)
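
As a rough idea of what the duplicate search does, the sketch below flags images in to_add whose bytes are identical to an image already in source. It only does an exact hash comparison; the actual compare_images.py may well use a more tolerant similarity measure.

    # Rough sketch of the duplicate check between the "source" folder
    # (other people's images) and the "to_add" folder (yours).
    # It only flags byte-identical files via their MD5 hashes.
    import hashlib
    from pathlib import Path

    def md5_of(path):
        return hashlib.md5(path.read_bytes()).hexdigest()

    def find_duplicates(source="source", to_add="to_add"):
        known = {md5_of(p) for p in Path(source).glob("*.jpg")}
        for path in sorted(Path(to_add).glob("*.jpg")):
            if md5_of(path) in known:
                print(f"duplicate: {path.name} already exists in {source}")

    if __name__ == "__main__":
        find_duplicates()
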
Comments
  • Adding our 200 pictures with their corresponding annotations

    Collected 3 groups of pictures that depict humans in 3 different situations.

    • The first one, in which they wear their mask correctly
    • The second one, in which they don't wear a mask at all
    • The third and last one, in which they wear a mask incorrectly

    When we talk about people wearing masks incorrectly, we mean masks that do not cover both the nose and the chin at the same time: covering only one of the two, or neither (for example with the mask pulled down over the neck), counts as wearing the mask incorrectly.

    We tried to keep the 3 groups of photos roughly the same size.

    opened by TebaiOsama 3
  • Category name issue

    Some annotations use the category "mask_weared_incorrect" instead of "with_incorrect_mask".

    Example: the file "annotations/0294a2bd4e2f4641e050176d83134542.xml"

    opened by youssef-t 2
Owner: Jonathan Lignier