camloop

Forget the boilerplate from OpenCV camera loops and get to coding the interesting stuff

Table of Contents

  • Usage
    • Install
    • Quickstart
    • More advanced use cases
    • Configuring the loop
    • Demo
  • About The Project
  • TODO
  • License
  • Contact

Usage

This is a simple project developed to cut down on the complexity and time spent writing boilerplate code when prototyping computer vision applications. Stop worrying about opening and closing video capture objects, handling key presses, etc., and just focus on doing the cool stuff!

The project was developed in Python 3.8 and tested with physical local webcams. If you end up using it in any other context, please consider letting me know whether or not it worked for your use case :)

Install

The project is distributed via PyPI, so just:

$ pip install pycamloop

As usual, conda or venv are recommended to manage your local environments.

Quickstart

To run a webcam loop and process each frame, just define a function that takes as its argument the frame as obtained from cv2.VideoCapture's read() method (i.e., a np.ndarray) and wrap it with the @camloop decorator. Your function only needs to accept the frame as its first argument and return it so the loop can display it:

import cv2

from camloop import camloop

@camloop()
def grayscale_example(frame):
    # convert the captured BGR frame to grayscale before it is displayed
    frame = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    return frame

# calling the function will start the loop and show the results with the cv2.imshow method
grayscale_example()

The window can be exited at any time by pressing "q" on the keyboard. You can also take a screenshot at any time by pressing the "s" key. By default, screenshots are saved to the current directory (see Configuring the loop below for how to customize this and other options).
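For instance, here is a minimal, untested sketch that only uses options documented in the table under Configuring the loop below; the directory name, the remapped keys, and the passthrough function are arbitrary placeholders:

from camloop import camloop

# screenshot/exit behaviour is controlled through the config dict
# (see the options table in "Configuring the loop")
config = {
    "output": "screenshots/",  # directory where screenshots are saved
    "screenshot_key": "p",     # press "p" to capture a frame
    "exit_key": "x",           # press "x" to exit the loop
}

@camloop(config)
def passthrough_example(frame):
    # no processing; just display the raw frames
    return frame

passthrough_example()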

More advanced use cases

Now, let's say that instead of just converting the frame to grayscale and visualizing it, you want to pass some other arguments, perform more complex operations, and/or persist information on every iteration of the loop. All of this can be done inside the function wrapped by the camloop decorator, and external dependencies can be passed as arguments to your function. For example, let's say we want to run a face detector and save the results to a file called "face-detection-results.txt":

import datetime

import cv2

from camloop import camloop

# for simplicity, we use cv2's own Haar cascade face detector
face_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

@camloop()
def face_detection_example(frame, face_cascade, results_fp=None):
    grayscale_frame = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(grayscale_frame, 1.2, 5)
    # draw a bounding box around each detected face
    for bbox in faces:
        x1, y1 = bbox[:2]
        x2 = x1 + bbox[2]
        y2 = y1 + bbox[3]
        cv2.rectangle(frame, (x1, y1), (x2, y2), (180, 0, 180), 5)

    # optionally append the detections to a results file
    if results_fp is not None:
        with open(results_fp, 'a+') as f:
            f.write(f"{datetime.datetime.now().isoformat()} - {len(faces)} face(s) found: {faces}\n")
    return frame

face_detection_example(face_cascade, results_fp="face-detection-results.txt")

Camloop can handle any positional and keyword arguments you define in your function, as long as the frame is the first one. When calling the wrapped function, pass all of the extra arguments yourself; the frame itself is handled implicitly by the loop.

Configuring the loop

Since most of the boilerplate is now hidden, camloop exposes a configuration object that lets you modify several aspects of its behavior. The options are:

| parameter | type | default | description |
| --- | --- | --- | --- |
| source | int | 0 | Index of the camera to use as the source for the loop (passed to cv2.VideoCapture()) |
| mirror | bool | False | Whether to flip the frames horizontally |
| resolution | tuple[int, int] | None | Desired resolution (H, W) of the frames, passed to the cv2.VideoCapture.set method. Default values and support for custom ones depend on the webcam. |
| output | string | '.' | Directory where artifacts (e.g. captured screenshots) are saved by default |
| sequence_format | string | None | Format for rendering the sequence of frames; acceptable values are "gif" or "mp4". If specified, a video/GIF is saved to the output folder. |
| fps | float | None | FPS value used when rendering the sequence of frames. If unspecified, the program will try to estimate it from the length of the recording and the number of frames. |
| exit_key | string | 'q' | Keyboard key used to exit the loop |
| screenshot_key | string | 's' | Keyboard key used to capture a screenshot |

If you want to use something other than the defaults, define a dictionary object with the desired configuration and pass it to the camloop decorator.

For example, here we want to mirror the frames horizontally, and save an MP4 video of the recording at 23.7 FPS to the test directory:

import cv2

from camloop import camloop

config = {
    'mirror': True,
    'output': "test/",
    'fps': 23.7,
    'sequence_format': "mp4",
}

@camloop(config)
def grayscale_example(frame):
    frame = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    return frame

grayscale_example()
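The remaining options follow the same pattern. As another hedged sketch (the keys come from the table above, but the camera index, output directory, and whether your webcam honours the requested resolution are assumptions), this configuration would read from a second camera, request 720p frames, and save the session as a GIF:

from camloop import camloop

config = {
    "source": 1,                # second webcam, i.e. cv2.VideoCapture(1)
    "resolution": (720, 1280),  # desired (H, W); support depends on the camera
    "sequence_format": "gif",   # save the whole session as a GIF
    "output": "recordings/",    # directory where the GIF is written
}

@camloop(config)
def passthrough_example(frame):
    # no processing; just record the raw frames
    return frame

passthrough_example()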

Demo

Included in the repo is a demonstration script that can be run out of the box to verify camloop and see its main functionalities. There are a few different samples you can check out, including the grayscale and face detection examples shown in this README.

To run the demo, install camloop and clone the repo:

$ pip install pycamloop
$ git clone https://github.com/glefundes/pycamloop.git
$ cd pycamloop/

Then run it by specifying which demo you want and passing any of the optional arguments (run python3 demo.py -h for more info on them). In this case, we're mirroring the frames from the "face detection" demo and saving a video of the recording in the "demo-videos" directory:

$ mkdir demo-videos
$ python3 demo.py face-detection --mirror --save-sequence mp4 -o demo-videos/

About The Project

I work as a computer vision engineer and often find myself having to prototype or debug projects locally using my own webcam as a source. This, of course, means I frequently have to write the same boilerplate OpenCV camera loop in multiple places. Eventually I got tired of copy-pasting the same 20 lines from file to file and decided to write a 100-ish line package to make my work a little more efficient, a little less boring, and my code overall less bloated. That's pretty much it. Also, it was a nice chance to practice playing with decorators.

TODO

  • Verify functionality with other types of video sources (video files, streams, etc)

License

Distributed under the MIT License. See LICENSE for more information.

Contact

Gabriel Lefundes Vieira - [email protected]

Comments
  • Long running loop apparently causes OOM error

    Apparently something is being kept in memory during execution of a loop, causing memory usage to explode over time.

    First noticed when using a YOLOv5 object detector in a prototyping loop. The first step should be to determine whether or not the problem is being caused by pycamloop.

    bug
    opened by glefundes 0
  • Gracefully handle exits caused by crashes or by closing the cv2 window

    By default, we use the (configurable) "q" key to handle window closing and the export of the resulting video file. If the program crashes, or if someone closes the window by any other means, the video is not encoded correctly.

    It is suggested to handle this with the method cv2.getWindowProperty('', cv2.WND_PROP_VISIBLE), which returns -1 if the window is no longer visible (see the sketch after this issue).

    Possibly relevant: https://stackoverflow.com/a/45564409

    enhancement good first issue 
    opened by glefundes 0
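For reference, here is a rough standalone sketch of that suggestion in a plain OpenCV loop; it is not camloop's actual implementation, and the window name is hypothetical:

import cv2

WINDOW_NAME = "camloop"  # hypothetical window title; camloop's internal name may differ

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    cv2.imshow(WINDOW_NAME, frame)
    # stop when the user closes the window, not only when the exit key is pressed
    if cv2.getWindowProperty(WINDOW_NAME, cv2.WND_PROP_VISIBLE) < 1:
        break
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()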