Fight Recognition from Still Images in the Wild @ WACVW2022, Real-world Surveillance Workshop

Overview

Fight Detection from Still Images in the Wild

Detecting fights from still images is an important task for limiting the distribution of social media images with fight content and preventing the negative effects of such violent media. For this reason, in this study we address the problem of fight detection from still images collected from the web and social media, and explore how well fights can be detected from just a single still image.

In this context, we collected a new image dataset for the task of fight recognition from still images, named the Social Media Fight Images (SMFI) dataset. The dataset samples are gathered from social media (Twitter and Google) and the NTU CCTV-Fights [1] dataset. Since the main concern is recognizing fight actions in the wild, the dataset consists of real-world scenarios, most of which are spontaneous recordings of fight actions. By using different keywords while crawling the data, regional diversity is also maintained, since social media uploads are mostly regional and users share content in their own language. Some example images from the dataset are given below:

[Sample images from the SMFI dataset]

Both fight and non-fight samples are collected from the same domain, so the non-fight samples are also content likely to be shared on social media. Hard non-fight samples are also included in the dataset, depicting actions that might be misinterpreted as fights, such as hugging, throwing a ball, or dancing. This reduces dataset bias, so that trained models focus on the actions and the performers in the scene instead of exploiting other characteristics such as motion blur. The distribution of the dataset samples across classes and sources is given below:

            Twitter   Google   NTU CCTV-Fights   Total
Fight          2247      162               330    2739
Non-fight      2642      146               164    2952
Total          4889      308               494    5691

Due to copyright issues, the dataset images are not shared directly; instead, links to the images/videos are provided. As the dataset samples might be deleted over time by the users or the platforms, the size of the dataset is subject to change.
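As a minimal sketch of how the current class/source distribution could be recomputed from the released links, the snippet below cross-tabulates the CSV described under Dataset Format. The file name smfi.csv is an assumption, not part of the official release.

import pandas as pd

# Assumed file name; use the actual CSV shared with the dataset.
df = pd.read_csv("smfi.csv")

# Cross-tabulate class (fight / nofight) against source
# (twitter_img / twitter_video / google / ntu-cctv).
counts = pd.crosstab(df["Class"], df["Source"], margins=True, margins_name="Total")
print(counts)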

Dataset Format

The dataset samples are shared through a CSV file with the following columns (a retrieval sketch is given after the list):

  • Image ID: Unique ID assigned to each image.
  • Class: The class of the image, as fight / nofight.
  • Source: The source of the image or video, as twitter_img / twitter_video / google / ntu-cctv.
  • URL: The link to the image / video.
    • For Twitter and Google data, image and video URLs are shared.
    • For the NTU CCTV-Fights data, the path to the original video is shared.
  • Frame number: If the image is extracted from a video, this column indicates the frame's position within the video.
    • For Twitter videos, the frame number is the index (0-9) of the frame among 10 uniformly sampled frames from each video.
    • For NTU CCTV-Fights videos, the frame number is the index (0-N) of the frame among all N frames extracted from each video.
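As a rough illustration of how samples could be retrieved from such a CSV, the sketch below downloads still images and grabs the listed frame from videos. The file name smfi.csv, the output directory, and the uniform-sampling convention used for Twitter videos are assumptions, not part of the official release.

# Hypothetical retrieval sketch; column handling follows the format described
# above, everything else (names, sampling convention) is an assumption.
import os

import cv2          # pip install opencv-python
import numpy as np
import pandas as pd
import requests

CSV_PATH = "smfi.csv"        # assumed file name
OUTPUT_DIR = "smfi_images"
os.makedirs(OUTPUT_DIR, exist_ok=True)

df = pd.read_csv(CSV_PATH)

def grab_frame(video_path, source, sample_idx):
    """Read one frame from a video according to the Frame number column."""
    cap = cv2.VideoCapture(video_path)
    total = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
    if source == "twitter_video":
        # Frame number indexes 10 uniformly sampled frames (0-9); the exact
        # sampling used by the authors may differ from this linspace mapping.
        abs_idx = int(np.linspace(0, total - 1, 10).astype(int)[sample_idx])
    else:
        # For NTU CCTV-Fights the frame number is the absolute frame index.
        abs_idx = sample_idx
    cap.set(cv2.CAP_PROP_POS_FRAMES, abs_idx)
    ok, frame = cap.read()
    cap.release()
    return frame if ok else None

for _, row in df.iterrows():
    out_path = os.path.join(OUTPUT_DIR, f"{row['Image ID']}.jpg")
    if row["Source"] in ("twitter_img", "google"):
        # Still images: download directly from the shared URL.
        resp = requests.get(row["URL"], timeout=10)
        if resp.ok:
            with open(out_path, "wb") as f:
                f.write(resp.content)
    else:
        # Video sources: NTU CCTV-Fights videos must already be on disk (the
        # URL column stores a path into that dataset); Twitter video URLs may
        # also need to be downloaded locally before OpenCV can open them.
        frame = grab_frame(row["URL"], row["Source"], int(row["Frame number"]))
        if frame is not None:
            cv2.imwrite(out_path, frame)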

In order to retrieve the dataset, you should first download the NTU CCTV-Fights dataset here.

Citation

TBA

References

[1] Mauricio Perez, Alex C. Kot, and Anderson Rocha, "Detection of Real-world Fights in Surveillance Videos", in IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), 2019.
