Jetson Nano-based smart camera system that measures crowd face mask usage in real-time.

Overview

MaskCam

MaskCam is a prototype reference design for a Jetson Nano-based smart camera system that measures crowd face mask usage in real-time, with all AI computation performed at the edge. MaskCam detects and tracks people in its field of view and determines whether they are wearing a mask via an object detection, tracking, and voting algorithm. It uploads statistics (not videos) to the cloud, where a web GUI can be used to monitor face mask compliance in the field of view. It saves interesting video snippets to local disk (e.g., a sudden influx of people not wearing masks) and can optionally stream video via RTSP.

MaskCam can be run on a Jetson Nano Developer Kit, or on a Jetson Nano module (SOM) with the ConnectTech Photon carrier board. It was designed to use the Raspberry Pi High Quality Camera, but it also works with nearly any USB webcam that is supported on Linux.

The on-device software stack is mostly written in Python and runs under JetPack 4.4.1 or 4.5. Edge AI processing is handled by NVIDIA’s DeepStream video analytics framework, YOLOv4-tiny, and Tryolabs' Norfair tracker. MaskCam reports statistics to and receives commands from the cloud using MQTT and a web-based GUI. The software is containerized; for evaluation, it can be installed on a Jetson Nano DevKit with just a couple of docker commands. For production, MaskCam can run under balenaOS, which makes it easy to manage and deploy multiple devices.

We urge you to try it out! It’s easy to install on a Jetson Nano Developer Kit and requires only a webcam. (The cloud-based statistics server and web GUI are optional, but they are also dockerized and easy to install on any reasonable Linux system.) See below for installation instructions.

MaskCam was developed by Berkeley Design Technology, Inc. (BDTI) and Tryolabs S.A., with development funded by NVIDIA. MaskCam is offered under the MIT License. For more information about MaskCam, please see the report from BDTI. If you have questions, please email us at [email protected]. Thanks!

Start Here!

Running MaskCam from a Container on a Jetson Nano Developer Kit

The easiest and fastest way to get MaskCam running on your Jetson Nano Dev Kit is using our pre-built containers. You will need:

  1. A Jetson Nano Dev Kit running JetPack 4.4.1 or 4.5
  2. An external DC 5 volt, 4 amp power supply connected through the Dev Kit's barrel jack connector (J25). (See these instructions on how to enable barrel jack power.) This software makes full use of the GPU, so it will not run with USB power.
  3. A USB webcam attached to your Nano
  4. Another computer with a program that can display RTSP streams -- we suggest VLC or QuickTime.

First, the MaskCam container needs to be downloaded from Docker Hub. On your Nano, run:

# This will take 10 minutes or more to download
sudo docker pull maskcam/maskcam-beta

Find your local Jetson Nano IP address using ifconfig. This address will be used later to view a live video stream from the camera and to interact with the Nano from a web server.
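For example, the following prints just the IPv4 address lines (the interface name, such as eth0 or wlan0, varies by setup); look for your LAN address:

# Show only the IPv4 address lines from ifconfig output
ifconfig | grep "inet "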

Make sure a USB camera is connected to the Nano, then start MaskCam by running the following command, substituting <your-jetson-ip> with your Nano's IP address.

# Connect USB camera before running this!
sudo docker run --runtime nvidia --privileged --rm -it --env MASKCAM_DEVICE_ADDRESS=<your-jetson-ip> -p 1883:1883 -p 8080:8080 -p 8554:8554 maskcam/maskcam-beta

The MaskCam container should start running the maskcam_run.py script, using the USB camera as the default input device (/dev/video0). It will produce various status output messages (and error messages, if it encounters problems). If there are errors, the process will automatically end after several seconds. Check the Troubleshooting section for tips on resolving errors.

Otherwise, after 30 seconds or so, it should continually generate status messages (such as Processed 100 frames...). Leave it running (don't press Ctrl+C, but be aware that the device will start heating up) and continue to the next section to visualize the video!

Viewing the Live Video Stream

If you scroll through the logs and don't see any errors, you should find a message like:

Streaming at rtsp://aaa.bbb.ccc.ddd:8554/maskcam

where aaa.bbb.ccc.ddd is the address that you provided in MASKCAM_DEVICE_ADDRESS previously. If you didn't provide an address, you'll see a placeholder label there instead, but the streaming will still work.

You can copy-paste that URL into your RTSP streaming viewer (see how to do it with VLC) on another computer. If all goes well, you should be rewarded with streaming video from your Nano, with green boxes around faces wearing masks and red boxes around faces not wearing masks. An example video of the live streaming in action is shown below.
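If VLC is installed on the viewing machine, you can also open the stream directly from the command line instead of using the GUI (substituting your Nano's address):

# Open the MaskCam RTSP stream in VLC
vlc rtsp://<your-jetson-ip>:8554/maskcam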

This video stream gives a general demonstration of how MaskCam works. However, MaskCam also has other features, such as the ability to send mask detection statistics to the cloud and view them through a web browser. If you'd like to see these features in action, you'll need to set up an MQTT server, which is covered in the MQTT Server Setup section.

If you encounter any errors running the live stream, check the Troubleshooting section for tips on resolving errors.

Setting Device Configuration Parameters

MaskCam uses environment variables to configure parameters without having to rebuild the container or manually change the configuration file each time the program is run. For example, in the previous section we set the MASKCAM_DEVICE_ADDRESS variable to indicate our Nano's IP address. A list of configurable parameters is shown in maskcam_config.txt. The mapping between environment variable names and configuration parameters is defined in maskcam/config.py.

This section shows how to set environment variables to change configuration parameters. For example, if you want to use the /dev/video1 camera device rather than /dev/video0, you can define MASKCAM_INPUT when running the container:

# Run with MASKCAM_INPUT and MASKCAM_DEVICE_ADDRESS
sudo docker run --runtime nvidia --privileged --rm -it --env MASKCAM_INPUT=v4l2:///dev/video1 --env MASKCAM_DEVICE_ADDRESS=<your-jetson-ip> -p 1883:1883 -p 8080:8080 -p 8554:8554 maskcam/maskcam-beta

As another example, if you have already set up the MQTT and web server (as shown in the MQTT Server Setup section), you need to define two additional environment variables, MQTT_BROKER_IP and MQTT_DEVICE_NAME. These allow your device to find the MQTT server and identify itself:

# Run with MQTT_BROKER_IP, MQTT_DEVICE_NAME, and MASKCAM_DEVICE_ADDRESS
sudo docker run --runtime nvidia --privileged --rm -it --env MQTT_BROKER_IP=<server IP> --env MQTT_DEVICE_NAME=<a-unique-string-you-like> --env MASKCAM_DEVICE_ADDRESS=<your-jetson-ip> -p 1883:1883 -p 8080:8080 -p 8554:8554 maskcam/maskcam-beta

If you have many --env variables to pass, it might be easier to collect them in a .env file and point to it using the --env-file flag instead, as sketched below.
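For instance, a minimal .env file might look like this (the values below are placeholders; substitute your own):

# maskcam.env -- example values only
MASKCAM_DEVICE_ADDRESS=192.168.0.100
MQTT_BROKER_IP=192.168.0.50
MQTT_DEVICE_NAME=my-jetson-1

# Pass all the variables from the file at once
sudo docker run --runtime nvidia --privileged --rm -it --env-file maskcam.env -p 1883:1883 -p 8080:8080 -p 8554:8554 maskcam/maskcam-beta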

MQTT and Web Server Setup

Running the MQTT Broker and Web Server

MaskCam is intended to be set up with a web server that stores mask detection statistics and allows users to remotely interact with the device. We wrote code for instantiating a server that receives statistics from the device, stores them in a database, and has a web-based GUI frontend to display them. A screenshot of the frontend for an example device is shown below.

You can test out and explore this functionality by starting the server on a PC on your local network and pointing your Jetson Nano MaskCam device to it. This section gives instructions on how to do so. The MQTT broker and web server can be built and run on a Linux or macOS machine; we've tested them on Ubuntu 18.04 LTS and macOS Big Sur. They can also be set up on an AWS EC2 instance if you want to access them from outside your local network.

The server consists of several docker containers that run together using docker-compose. Install docker-compose on your machine by following the installation instructions for your platform before continuing. All other necessary packages and libraries will be automatically installed when you set up the containers in the next steps.

After installing docker-compose, clone this repo:

git clone https://github.com/bdtinc/maskcam.git

Go to the server/ folder, which has all the needed components implemented as four containers: the Mosquitto broker, backend API, database, and Streamlit frontend.

These containers are configured using environment variables, so create the .env files by copying the default templates:

cd server
cp database.env.template database.env
cp frontend.env.template frontend.env
cp backend.env.template backend.env

The only file that needs to be changed is database.env. Open it with a text editor and replace the <DATABASE_USER>, <DATABASE_PASSWORD>, and <DATABASE_NAME> fields with your own values. Here are some example values, but for security reasons you should choose your own:

POSTGRES_USER=postgres
POSTGRES_PASSWORD=some_password
POSTGRES_DB=maskcam

NOTE: If you want to change any of the database.env values after building the containers, the easiest thing to do is to delete the pgdata volume by running docker volume rm pgdata. Be aware that this also deletes all stored database information and statistics.

After editing the database environment file, you're ready to build all the containers and run them with a single command:

sudo docker-compose up -d

Wait a couple of minutes after issuing the command to make sure that all containers are built and running. Then, check the local IP of your computer by running the ifconfig command. (It should be an address that starts with 192.168..., 10..., or 172....) This is the server IP that will be used for connecting to the server (since the server is hosted on this computer).
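You can also confirm that all four containers are up by running the following from the server/ directory:

# List the status of the server's containers
sudo docker-compose ps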

Next, open a web browser and enter the server IP to visit the frontend webpage:

http://<server IP>:8501/

If you see a ConnectionError in the frontend, wait a couple more seconds and reload the page. The backend container can take some time to finish the database setup.

NOTE: If you're setting the server up on a remote instance like an AWS EC2, make sure you have ports 1883 (MQTT) and 8501 (web frontend) open for inbound and outbound traffic.

Setup a Device With Your Server

Once you've got the server set up on a local machine (or in an AWS EC2 instance with a public IP), switch back to the Jetson Nano device. Run the MaskCam container using the following command, where MQTT_BROKER_IP is set to the IP of your server. (If you're using an AWS EC2 server, make sure to configure port 1883 for inbound and outbound traffic before running this command.)

# Run with MQTT_BROKER_IP, MQTT_DEVICE_NAME, and MASKCAM_DEVICE_ADDRESS
sudo docker run --runtime nvidia --privileged --rm -it --env MQTT_BROKER_IP=<server IP> --env MQTT_DEVICE_NAME=my-jetson-1 --env MASKCAM_DEVICE_ADDRESS=<your-jetson-ip> -p 1883:1883 -p 8080:8080 -p 8554:8554 maskcam/maskcam-beta

And that's it. If the device can reach the server's IP, you should see some successful connection messages in the output logs, and your device should appear in the drop-down menu of the frontend (reload the page if you don't see it). In the frontend, select Group data by: Second and hit Refresh status to see how the plot changes when new data arrives.

Check the next section if the MQTT connection is not established from the device to the server.

Checking MQTT Connection

If you're running the MQTT broker on a machine in your local network, make sure its IP is accessible from the Jetson device:

ping <local server IP>

NOTE: Remember to use the network address of the computer you set up the server on, which you can check using the ifconfig command and looking for an address that should start with 192.168..., 10... or 172...

If you're setting up a remote server and using its public IP to connect from your device, chances are that port 1883 has not been properly opened for inbound and outbound traffic. To check that the port is correctly configured, use nc from a local machine or your Jetson:

nc -vz <server IP> 1883
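On success, nc prints a confirmation along these lines (the exact wording depends on your netcat version):

Connection to <server IP> 1883 port [tcp/*] succeeded!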

Remember that you also need to open port 8501 to access the web server frontend from a web browser, as explained in the server configuration section (though that's not relevant to the MQTT communication with the device).

Working With the MaskCam Container

Development Mode: Manually Running MaskCam

If you want to play around with the code, you probably don't want the container to automatically start running the maskcam_run.py script. The easiest way to achieve that is to define the environment variable DEV_MODE=1:

docker run --runtime nvidia --privileged --rm -it --env DEV_MODE=1 -p 1883:1883 -p 8080:8080 -p 8554:8554 maskcam/maskcam-beta

This will cause the container to start a /bin/bash prompt (see docker/start.sh for details), from which you could run the script manually, or any of its sub-modules as standalone processes:

# e.g: Run with a different input instead of default `/dev/video0`
./maskcam_run.py v4l2:///dev/video1

# e.g: Disable tracker to visualize raw detections and scores
MASKCAM_DISABLE_TRACKER=1 ./maskcam_run.py

Debugging: Running MaskCam Modules as Standalone Processes

The script maskcam_run.py, which is the main entrypoint for the MaskCam software, has two roles:

  • Handles all the MQTT communication (sending statistics and receiving commands)
  • Orchestrates all other processes that live under maskcam/maskcam_*.py.

But you can actually run any of those modules as standalone processes, which can be easier for debugging.

You need to set DEV_MODE=1 as explained in the previous section to access the container prompt, and then you can run the Python modules:

# e.g: Run only the static file server process
python3 -m maskcam.maskcam_fileserver
# e.g: Serve another directory to test
python3 -m maskcam.maskcam_fileserver /tmp

# e.g: Run only the inference and streaming processes
python3 -m maskcam.maskcam_streaming &
# Hit enter until you get a prompt and then:
python3 -m maskcam.maskcam_inference

Note: In the last example, maskcam_streaming is running in the background, so it will not terminate if you press Ctrl+C (only maskcam_inference will, since it's running in the foreground).

To check that the streaming is still running and then bring it to foreground to terminate it, run:

jobs
fg %1
# Now you can hit Ctrl+C to terminate streaming

Additional Information

Further information about working with and customizing MaskCam is provided on separate pages in the docs folder. This section gives a brief description and link to each page.

Running on Jetson Nano Developer Kit Using BalenaOS

BalenaOS is a lightweight operating system designed for running containers on embedded devices. It provides several advantages for fleet deployment and management, especially when combined with balena's balenaCloud management system. If you'd like to try running MaskCam with balenaOS instead of JetPack OS on your Jetson Nano, please follow the instructions at BalenaOS-DevKit-Nano-Setup.md.

Custom Container Development

MaskCam is intended to be a reference design for any connected smart camera application. You can create your own application by starting from our pre-built container, modifying it to add the code files and packages needed for your program, and then re-building the container. Custom-Container-Development.md gives instructions on how to build your own container based on MaskCam.

Building From Source on Jetson Nano Developer Kit

Please see How to Build your Own Container from Source on the Jetson Nano for instructions on how to build a custom MaskCam container on your Jetson Nano Developer Kit.

Using Your Own Detection Model

Please see How to Use Your Own Detection Model for instructions on how to use your own detection model rather than our mask detection model.

Installing MaskCam Manually (Without a Container)

MaskCam can also be installed manually, rather than by downloading our pre-built container. Using a manual installation of MaskCam can help with development if you'd prefer not to work with containers. If you'd like to install MaskCam without using containers, please see docs/Manual-Dependencies-Installation.md.

Running on Jetson Nano with Photon Carrier Board

For our hardware prototype of MaskCam, we used a Jetson Nano module and a Connect Tech Photon carrier board, rather than the Jetson Nano Developer Kit. We used the Photon because the Developer Kit is not sold or warrantied for production use. Using the Photon allowed us to quickly create a production-ready prototype using off-the-shelf hardware. If you have a Photon carrier board and Jetson Nano module, you can install MaskCam on them by using the setup instructions at docs/Photon-Nano-Setup.md.

Useful Development Scripts

During development, some scripts were produced which might be useful for other developers to debug or update the software. These include an MQTT sniffer, a script to run the TensorRT model on images, and a script to convert a model trained with the original YOLO Darknet implementation to TensorRT format. Basic usage for all these tools is covered in docs/Useful-Development-Scripts.md.

Troubleshooting Common Errors

If you run into any errors or issues while working with MaskCam, this section gives common errors and their solutions.

MaskCam consists of many different processes running in parallel. As a consequence, when there's an error in a particular process, all of them are sent termination signals so they can finish gracefully. This means that you need to scroll up through the output to find the original error that caused the failure. It should be easy to spot: a red ERROR log entry, followed by the name of the process that failed and a message.

Error: camera not connected/not recognized

If you see an error containing the message Cannot identify device '/dev/video0', among other Gst and v4l messages, it means the program couldn't find the camera device. Make sure your camera is connected to the Nano and recognized by the host Ubuntu OS by issuing ls /dev and checking if /dev/video0 is present in the output.
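For a quick check (note that v4l2-ctl comes from the v4l-utils package, which may not be installed by default):

# List video device nodes; /dev/video0 should appear when the camera is detected
ls /dev/video*

# Optionally, show which device node belongs to which camera
v4l2-ctl --list-devices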

Error: not running in privileged mode

In this case, you'll see a bunch of annoying messages like:

Error: Can't initialize nvrm channel
Couldn't create ddkvic Session: Cannot allocate memory
nvbuf_utils: Could not create Default NvBufferSession

You'll probably see multiple failures in other MaskCam processes as well. To resolve these errors, make sure you're running docker with the --privileged flag, as described in the first section.

Error: reason not negotiated/camera capabilities

If you get an error that looks like v4l-camera-source / reason not-negotiated, the problem is that the USB camera you're using doesn't support the default camera-framerate=30 (frames per second). If you don't have another camera, try running the script under utils/gst_capabilities.sh and find the lines with type video/x-raw ...

Find any suitable framerate=X/1 (with X being an integer like 24, 15, etc.) and set the corresponding configuration parameter with --env MASKCAM_CAMERA_FRAMERATE=X (see the previous section), as sketched below.
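For example, if your camera reports support for 15 frames per second, the run command from the first section becomes (substitute the framerate your camera actually supports):

# Run with a camera framerate of 15 fps
sudo docker run --runtime nvidia --privileged --rm -it --env MASKCAM_CAMERA_FRAMERATE=15 --env MASKCAM_DEVICE_ADDRESS=<your-jetson-ip> -p 1883:1883 -p 8080:8080 -p 8554:8554 maskcam/maskcam-beta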

Error: Streaming or file server is not accessible (nothing else seems to fail)

Make sure you're mapping the right ports from the container, with the -p host_port:container_port parameters indicated in the previous sections. The default port numbers that should be exposed by the container are configured in maskcam_config.txt as:

fileserver-port=8080
streaming-port=8554
mqtt-broker-port=1883

These port mappings are why we use docker run ... -p 1883:1883 -p 8080:8080 -p 8554:8554 ... with the run command. Remember that all the ports can be overridden using environment variables, as described in the previous section. Other ports like udp-port-* are not intended to be accessible from outside the container; they are used for communication between the inference process and the streaming and file-saving processes.
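As a sketch, to serve files on port 8085 instead of 8080 you could override the file server port and adjust the mapping to match. The exact environment variable name below is an assumption based on the MASKCAM_ naming convention; check the mapping in maskcam/config.py for the real name:

# Hypothetical example: move the file server from port 8080 to 8085
sudo docker run --runtime nvidia --privileged --rm -it --env MASKCAM_FILESERVER_PORT=8085 --env MASKCAM_DEVICE_ADDRESS=<your-jetson-ip> -p 1883:1883 -p 8085:8085 -p 8554:8554 maskcam/maskcam-beta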

Other Errors

Sometimes, after restarting the process or the whole docker container many times, some GPU resources can get stuck and cause unexpected errors. If that's the case, try rebooting the device and running the container again. If you find that the container fails consistently after some sequence of actions, please don't hesitate to open an issue with the relevant context and we'll try to reproduce and fix it.

Questions? Need Help?

Email us at [email protected], and be sure to check out our independent report on the development of MaskCam!

Comments
  • Got an error when running sudo docker-compose up -d

    standard_init_linux.go:219: exec user process caused: exec format error The command '/bin/sh -c python -m pip install --upgrade pip && pip install -r requirements.txt' returned a non-zero code: 1 ERROR: Service 'backend' failed to build : Build failed

    opened by AlekZen 8
  • Running maskcam manually

    I am getting this error

    python3 maskcam_run.py | DEBUG Using selector: EpollSelector selector_events.py:54 INFO maskcam-run | Using input from config file: prints.py:48 v4l2:///dev/video0
    WARNING maskcam-run | MQTT is DISABLED since MQTT_BROKER_IP or prints.py:44 MQTT_DEVICE_NAME env vars are not defined

    INFO maskcam-run | Press Ctrl+C to stop all processes prints.py:48 INFO maskcam-run | Process file-server started with PID: prints.py:48 12650
    INFO maskcam-run | Starting streaming prints.py:48 (streaming-start-default is set)
    INFO maskcam-run | Received command: streaming_start prints.py:48 INFO maskcam-run | Process inference started with PID: 12652 prints.py:48 INFO maskcam-run | Processing command: streaming_start prints.py:48 INFO maskcam-run | Process streaming started with PID: 12653 prints.py:48 INFO mqtt | MQTT not connected. Skipping message to topic: prints.py:48 device-status
    | INFO file-server | Serving static files from directory: prints.py:48 /tmp/saved_videos
    INFO file-server | Static server STARTED at prints.py:48 http://:8080
    | INFO streaming | Codec: H264 prints.py:48 INFO streaming | prints.py:48

           Streaming at                                                         
           rtsp://<device-address-not-configured>:8554/maskcam                  
    

    | INFO inference | Auto calculated frames to skip inference: 2 prints.py:48 INFO inference | Creating Pipeline prints.py:48

    INFO inference | Creating Camera input prints.py:48 INFO inference | Creating Convertor src 2 prints.py:48 INFO inference | Creating Camera caps filter prints.py:48 INFO inference | Creating Convertor src 1 prints.py:48 INFO inference | Creating NVMM caps for input stream prints.py:48 INFO inference | Creating NvStreamMux prints.py:48 INFO inference | Creating pgie prints.py:48 INFO inference | Creating Converter NV12->RGBA prints.py:48 INFO inference | Creating OSD (nvosd) prints.py:48 INFO inference | Creating Queue prints.py:48 INFO inference | Creating Converter RGBA->NV12 prints.py:48 INFO inference | Creating capsfilter prints.py:48 INFO inference | Creating H264 stream prints.py:48 INFO inference | Creating Encoder prints.py:48 INFO inference | Creating Code Parser prints.py:48 INFO inference | Creating RTP H264 Payload prints.py:48 INFO inference | Creating Splitter file/UDP prints.py:48 INFO inference | Creating UDP queue prints.py:48 INFO inference | Creating Multi UDP Sink prints.py:48 INFO inference | Creating Fake Sink prints.py:48 INFO inference | Linking elements in the Pipeline prints.py:48

    Opening in BLOCKING MODE Opening in BLOCKING MODE ERROR: Deserialize engine failed because file path: /home/jetsonnano/maskcam/yolo/maskcam_y4t_1024_608_fp16.trt open error 0:00:02.575327981 12652 0x200a86f0 WARN nvinfer gstnvinfer.cpp:616:gst_nvinfer_logger: NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1691> [UID = 1]: deserialize engine from file :/home/jetsonnano/maskcam/yolo/maskcam_y4t_1024_608_fp16.trt failed 0:00:02.575399128 12652 0x200a86f0 WARN nvinfer gstnvinfer.cpp:616:gst_nvinfer_logger: NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1798> [UID = 1]: deserialize backend context from engine from file :/home/jetsonnano/maskcam/yolo/maskcam_y4t_1024_608_fp16.trt failed, try rebuild 0:00:02.575429806 12652 0x200a86f0 INFO nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1716> [UID = 1]: Trying to create engine from model files Yolo type is not defined from config file name: ERROR: Failed to create network using custom network creation function ERROR: Failed to get cuda engine from custom library API 0:00:02.575762260 12652 0x200a86f0 ERROR nvinfer gstnvinfer.cpp:613:gst_nvinfer_logger: NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1736> [UID = 1]: build engine file failed 0:00:02.575795021 12652 0x200a86f0 ERROR nvinfer gstnvinfer.cpp:613:gst_nvinfer_logger: NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1822> [UID = 1]: build backend context failed 0:00:02.575826220 12652 0x200a86f0 ERROR nvinfer gstnvinfer.cpp:613:gst_nvinfer_logger: NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::initialize() <nvdsinfer_context_impl.cpp:1149> [UID = 1]: generate backend failed, check config file settings 0:00:02.576405397 12652 0x200a86f0 WARN nvinfer gstnvinfer.cpp:812:gst_nvinfer_start: error: Failed to create NvDsInferContext instance 0:00:02.576446544 12652 0x200a86f0 WARN nvinfer gstnvinfer.cpp:812:gst_nvinfer_start: error: Config file path: maskcam_config.txt, NvDsInfer Error: NVDSINFER_CONFIG_FAILED ERROR inference | gst-resource-error-quark: Failed to create prints.py:42 NvDsInferContext instance (1): /dvs/git/dirty/git-master
    _linux/deepstream/sdk/src/gst-plugins/gst-nvinfer/gstnvi
    nfer.cpp(812): gst_nvinfer_start ():
    /GstPipeline:pipeline0/GstNvInfer:primary-inference:
    Config file path: maskcam_config.txt, NvDsInfer Error:
    NVDSINFER_CONFIG_FAILED
    INFO inference | prints.py:48 TROUBLESHOOTING HELP

               If the error is like: v4l-camera-source / reason                 
           not-negotiated                                                       
               Solution: configure camera capabilities                          
               Run the script under utils/gst_capabilities.sh and               
           find the lines with type                                             
               video/x-raw ...                                                  
               Find a suitable framerate=X/1 (with X being an                   
           integer like 24, 15, etc.)                                           
               Then edit config_maskcam.txt and change the line:                
               camera-framerate=X                                               
               Or configure using --env MASKCAM_CAMERA_FRAMERATE=X              
           (see README)                                                         
                                                                                
               If the error is like:                                            
               /usr/lib/aarch64-linux-gnu/libgomp.so.1: cannot                  
           allocate memory in static TLS block                                  
               Solution: preload the offending library                          
               export                                                           
           LD_PRELOAD=/usr/lib/aarch64-linux-gnu/libgomp.so.1                   
                                                                                
               END HELP                                                         
    

    INFO inference | Inference main loop ending. prints.py:48 INFO maskcam-run | Sending interrupt to file-server process prints.py:48 INFO file-server | Shutting down static file server prints.py:48 INFO maskcam-run | Waiting for process file-server to prints.py:48 terminate...
    INFO file-server | Server shut down correctly prints.py:48 INFO file-server | Server alive threads: prints.py:48 [<_MainThread(MainThread, started 548193665040)>]
    INFO maskcam-run | Process terminated: file-server prints.py:48

    INFO maskcam-run | Sending interrupt to streaming process prints.py:48 INFO maskcam-run | Waiting for process streaming to prints.py:48 terminate...
    INFO streaming | Ending streaming prints.py:48 INFO maskcam-run | Process terminated: streaming prints.py:48

    opened by Chockaaa 5
  • Error trying to: docker build . -t custom_maskcam

I would like to build a custom_maskcam container. My goal is to get a better understanding of the architecture of the project and to eventually train my own dataset. Right now I simply cloned the repository on my Jetson Nano. When I use the command docker build . -t custom_maskcam inside my maskcam folder, I keep getting the same error:

    Step 12/28 : RUN export GST_CFLAGS="-pthread -I/usr/include/gstreamer-1.0 -I/usr/include/glib-2.0 -I/usr/lib/x86_64-linux-gnu/glib-2.0/include" && export GST_LIBS="-lgstreamer-1.0 -lgobject-2.0 -lglib-2.0" && git clone https://github.com/GStreamer/gst-python.git && cd gst-python && git checkout 1a8f48a && ./autogen.sh PYTHON=python3 && ./configure PYTHON=python3 && make && make install ---> [Warning] The requested image's platform (linux/amd64) does not match the detected host platform (linux/arm64/v8) and no specific platform was requested ---> Running in 3eb0f983b6c4 Cloning into 'gst-python'... Note: checking out '1a8f48a'.

    You are in 'detached HEAD' state. You can look around, make experimental changes and commit them, and you can discard any commits you make in this state without impacting any branches by performing another checkout.

    If you want to create a new branch to retain commits you create, you may do so (now or later) by using -b with the checkout command again. Example:

    git checkout -b

    HEAD is now at 1a8f48a Release 1.14.5

    • Setting up common submodule Submodule 'common' (https://gitlab.freedesktop.org/gstreamer/common.git) registered for path 'common' Cloning into '/gst-python/common'... fatal: unable to access 'https://gitlab.freedesktop.org/gstreamer/common.git/': server certificate verification failed. CAfile: /etc/ssl/certs/ca-certificates.crt CRLfile: none fatal: clone of 'https://gitlab.freedesktop.org/gstreamer/common.git' into submodule path '/gst-python/common' failed Failed to clone 'common'. Retry scheduled Cloning into '/gst-python/common'... fatal: unable to access 'https://gitlab.freedesktop.org/gstreamer/common.git/': server certificate verification failed. CAfile: /etc/ssl/certs/ca-certificates.crt CRLfile: none fatal: clone of 'https://gitlab.freedesktop.org/gstreamer/common.git' into submodule path '/gst-python/common' failed Failed to clone 'common' a second time, aborting There is something wrong with your source tree. You are missing common/gst-autogen.sh The command '/bin/sh -c export GST_CFLAGS="-pthread -I/usr/include/gstreamer-1.0 -I/usr/include/glib-2.0 -I/usr/lib/x86_64-linux-gnu/glib-2.0/include" && export GST_LIBS="-lgstreamer-1.0 -lgobject-2.0 -lglib-2.0" && git clone https://github.com/GStreamer/gst-python.git && cd gst-python && git checkout 1a8f48a && ./autogen.sh PYTHON=python3 && ./configure PYTHON=python3 && make && make install' returned a non-zero code: 1

This error occurs when installing gst-python. Can you help me with this issue? Thank you, Raphael

    opened by Raphenri09 4
  • Camera gets purple on RTSP stream

Hello, I noticed a strange bug. When I launch the RTSP stream, the camera view gets purple and can't recognize faces anymore due to the luminosity (the camera is in front of a window). If I close my shutters, the camera works perfectly. I think you may find this interesting.

    opened by DarkZerba 3
  • Namespace GstRtspServer not available

    Hello, I tried to install the project without the Docker container as indicated in "Installing MaskCam Manually (Without a Container)" but I get the following error

    dlinano@dlinano-desktop:~/maskcam$ python3 maskcam_run.py Traceback (most recent call last): File "maskcam_run.py", line 72, in from maskcam.maskcam_inference import main as inference_main File "/home/dlinano/maskcam/maskcam/maskcam_inference.py", line 41, in gi.require_version("GstRtspServer", "1.0") File "/usr/lib/python3/dist-packages/gi/init.py", line 130, in require_version raise ValueError('Namespace %s not available' % namespace) ValueError: Namespace GstRtspServer not available

    I'm trying to run it on the Jetson nano, the Docker Container version works fine for me.

Thanks for sharing your knowledge.

    opened by ygbaldizon 1
  • Time is wrong in the web server report

Hi guys, amazing work. It's just that when I check the time visible in the web report, it's showing incorrectly. Both my web server and Jetson Nano are already on my timezone: GMT+7. Where can I set the timezone?

    opened by freddi-muliantono-mitrais 1
  • I couldn't run the Web Server

I followed the instructions given in the Running the MQTT Broker and Web Server section and did all the steps; however, I couldn't run the web server, and it gives me some errors (see the attached picture). I tried to refresh the webpage, to no avail. What do you think I am doing wrong here?

FYI: I am running the web server on another laptop, not on my Jetson Nano. I am also using Ubuntu 20.04.

    opened by alrah003 1
  • Face Detection Max Face Size

There seems to be a maximum face size threshold; when I get close enough to the camera, it seems like the detection doesn't see a face. Is there a size guide for the weights of the YOLO detection you are using? Thank you

    opened by manb911 1
  • Problem with Manual-Dependencies-Installation.md

    I am trying to deploy the application without Docker. Seem to be running in to the problem of compiling YOLOv4 plugin for Deepstream. After cd <this repo path>/deepstream_plugin_yolov4 and export CUDA_VER=10.2 I went to execute 'make' next and run to the error: root@seen-desktop:/home/seen/Desktop/maskcam/deepstream_plugin_yolov4# export CUDA_VER=10.2 root@seen-desktop:/home/seen/Desktop/maskcam/deepstream_plugin_yolov4# make g++ -c -o nvdsinfer_yolo_engine.o -Wall -std=c++11 -shared -fPIC -Wno-error=deprecated-declarations -I../../includes -I/usr/local/cuda-10.2/include -I/opt/nvidia/deepstream/deepstream-5.0/sources/includes nvdsinfer_yolo_engine.cpp nvdsinfer_yolo_engine.cpp:23:10: fatal error: nvdsinfer_custom_impl.h: No such file or directory #include "nvdsinfer_custom_impl.h" ^~~~~~~~~~~~~~~~~~~~~~~~~ compilation terminated. Makefile:51: recipe for target 'nvdsinfer_yolo_engine.o' failed make: *** [nvdsinfer_yolo_engine.o] Error 1

    opened by manb911 1
  • link to raw weights, facemask-yolov4-tiny_best.weights

    Hello, I want to test maskcam on Jetson Xavier NX. Could your share the raw weights file (facemask-yolov4-tiny_best.weights, maskcam-yolov4-best.weights?), so as to "Convert weights generated using the original darknet implementation to TRT" (in https://github.com/bdtinc/maskcam/blob/main/docs/Useful-Development-Scripts.md) This seems to be a necessary procedure to test maskcam on Jetson Xavier NX. Thank you in advance

    opened by lmarval 1
  • what's the purpose of adding glib_cb_restart

Hi, I'm trying to write a deepstream-app, and I'm not very familiar with GLib. I met a problem before: if the rtsp sources are not stable or the internet is poor, the pipeline will run without any errors but also without any outputs. So I'm trying to find a way to restart the pipeline under this circumstance. I found your code, and glib_cb_restart confused me. I read your annotations: timeout_add will call glib_cb_restart, and without a return value, the function will add another timeout_add. Your annotations say "Timer to avoid GLoop locking infinitely" and "But we want to check periodically for other events". I also found the explanation about Glib.MainContext.iteration, but I don't get the point of the usage of this function. I'd appreciate it if you could explain a little more or give me some hints. Thanks a lot

    opened by zhouyuchong 0
  • Jetson Nano connection with MQTT server

After I got the server set up on my local machine, I wanted to use my Jetson Nano to run the following command:

# Run with MQTT_BROKER_IP, MQTT_DEVICE_NAME, and MASKCAM_DEVICE_ADDRESS

sudo docker run --runtime nvidia --privileged --rm -it --env MQTT_BROKER_IP= --env MQTT_DEVICE_NAME=my-jetson-1 --env MASKCAM_DEVICE_ADDRESS= -p 1883:1883 -p 8080:8080 -p 8554:8554 maskcam/maskcam-beta

But I can't establish an MQTT connection from the device to the server. Then I used the Jetson Nano to try the following command: ping. I found I can't ping my local server IP. How do I fix this problem?

    opened by NSL-Wx 0
  • ERROR: Failed to create network using custom network creation function ERROR: Failed to get cuda engine from custom library API

    `Opening in BLOCKING MODE ERROR: [TRT]: INVALID_CONFIG: The engine plan file is generated on an incompatible device, expecting compute 7.2 got compute 5.3, please rebuild. ERROR: [TRT]: engine.cpp (1546) - Serialization Error in deserialize: 0 (Core engine deserialization failure) ERROR: [TRT]: INVALID_STATE: std::exception ERROR: [TRT]: INVALID_CONFIG: Deserialize the cuda engine failed. ERROR: Deserialize engine failed from file: /opt/maskcam_1.0/yolo/maskcam_y4t_1024_608_fp16.trt 0:00:03.391191797 89 0x1cf0fd00 WARN nvinfer gstnvinfer.cpp:616:gst_nvinfer_logger: NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1690> [UID = 1]: deserialize engine from file :/opt/maskcam_1.0/yolo/maskcam_y4t_1024_608_fp16.trt failed 0:00:03.391251448 89 0x1cf0fd00 WARN nvinfer gstnvinfer.cpp:616:gst_nvinfer_logger: NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1797> [UID = 1]: deserialize backend context from engine from file :/opt/maskcam_1.0/yolo/maskcam_y4t_1024_608_fp16.trt failed, try rebuild 0:00:03.391282234 89 0x1cf0fd00 INFO nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1715> [UID = 1]: Trying to create engine from model files Yolo type is not defined from config file name: ERROR: Failed to create network using custom network creation function ERROR: Failed to get cuda engine from custom library API 0:00:03.391650410 89 0x1cf0fd00 ERROR nvinfer gstnvinfer.cpp:613:gst_nvinfer_logger: NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1735> [UID = 1]: build engine file failed 0:00:03.391677900 89 0x1cf0fd00 ERROR nvinfer gstnvinfer.cpp:613:gst_nvinfer_logger: NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1821> [UID = 1]: build backend context failed 0:00:03.391722606 89 0x1cf0fd00 ERROR nvinfer gstnvinfer.cpp:613:gst_nvinfer_logger: NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::initialize() <nvdsinfer_context_impl.cpp:1148> [UID = 1]: generate backend failed, check config file settings 0:00:03.392212548 89 0x1cf0fd00 WARN nvinfer gstnvinfer.cpp:809:gst_nvinfer_start: error: Failed to create NvDsInferContext instance 0:00:03.392239590 89 0x1cf0fd00 WARN nvinfer gstnvinfer.cpp:809:gst_nvinfer_start: error: Config file path: maskcam_config.txt, NvDsInfer Error: NVDSINFER_CONFIG_FAILED ERROR inference | gst-resource-error-quark: Failed to create NvDsInferContext instance (1): /dvs/git/dirty/git-master_linux/deepstream/sdk/src/gst-plugins/gst-nvinfer/gstnvinfer.cpp(809): prints.py:42 gst_nvinfer_start (): /GstPipeline:pipeline0/GstNvInfer:primary-inference:
    Config file path: maskcam_config.txt, NvDsInfer Error: NVDSINFER_CONFIG_FAILED
    INFO inference | prints.py:48 TROUBLESHOOTING HELP

               If the error is like: v4l-camera-source / reason not-negotiated                                                                                                                                
               Solution: configure camera capabilities                                                                                                                                                        
               Run the script under utils/gst_capabilities.sh and find the lines with type                                                                                                                    
               video/x-raw ...                                                                                                                                                                                
               Find a suitable framerate=X/1 (with X being an integer like 24, 15, etc.)                                                                                                                      
               Then edit config_maskcam.txt and change the line:                                                                                                                                              
               camera-framerate=X                                                                                                                                                                             
               Or configure using --env MASKCAM_CAMERA_FRAMERATE=X (see README)                                                                                                                               
                                                                                                                                                                                                              
               If the error is like:                                                                                                                                                                          
               /usr/lib/aarch64-linux-gnu/libgomp.so.1: cannot allocate memory in static TLS block                                                                                                            
               Solution: preload the offending library                                                                                                                                                        
               export LD_PRELOAD=/usr/lib/aarch64-linux-gnu/libgomp.so.1                                                                                                                                      
                                                                                                                                                                                                              
               END HELP                                                                                                                                                                                       
    

    INFO inference | Inference main loop ending. prints.py:48 INFO inference | Output file saved: output_Robot.mp4 prints.py:48 INFO maskcam-run | Sending interrupt to streaming process prints.py:48 INFO maskcam-run | Waiting for process streaming to terminate... prints.py:48 INFO streaming | Ending streaming prints.py:48 INFO maskcam-run | Process terminated: streaming prints.py:48 `

    opened by jingruhou 1
  • RTSP error

When I pasted the RTSP URL and tried to play it in a VLC media player on another system, it is not displaying the faces, but the time is progressing. What could be the reason for that? It sometimes shows me an error message as follows.

    opened by dihitha 2
  • RTMP Input Video Source

Hi, instead of using a USB webcam or an mp4 file, I am wondering whether an RTMP or HLS video feed can be used as an input source. Any hints on how to modify the Python script? Thanks.

    opened by davidfungf 1
  • Training Own Neural Network

    Hello, I am having trouble understanding the procedure to train my own detection model. I have a Jetson Nano 2GB and 4GB variant with me. My objective is to detect if a person wears sunglasses or not. To accomplish this objective, my main queries are as follows.

    1. I will have to train a detection model on my own dataset. It is mentioned in the Custom Containers document that I need to have one that is compatible with DeepStream. If I do manage to do this, what change should I make in which codes in the docker container so that it runs this different object detection neural network?
    2. I am under the assumption that if I manage to train a custom object detection neural network following the instructions on the Deep Stream docs page, I will have a compatible neural network. I should then put these weights in a shared drive and run the container, putting the trained weights in a particular folder (which I do not know the location of) and make changes in maskcam_run.py or maskcam_inference.py to point it to the updated weights. Are there flaws in my assumptions? Could you please correct me if I am wrong? I am new to docker as well so I might be missing something fundamental.

    My work flow is the exact same as mask cam, with remote deployment and web server accessing and the rest. I just need to change the object detection mechanism. Even the statistics that it provides will be unchanged.

    Thank you.

    opened by SmellingSalt 8