Social Distancing Detector using deep learning, capable of running on edge AI devices such as NVIDIA Jetson, Google Coral, and more.

Overview

License

Smart Social Distancing

Introduction

Smart Distancing is an open-source application to quantify social distancing measures using edge computer vision systems. Since all computation runs on the device, it requires minimal setup and minimizes privacy and security concerns. It can be used in retail, workplaces, schools, construction sites, healthcare facilities, factories, etc.

You can run this application on edge devices such as NVIDIA's Jetson Nano / TX2 or Google's Coral Edge-TPU. This application measures social distancing rates and gives proper notifications each time someone ignores social distancing rules. By generating and analyzing data, this solution outputs statistics about high-traffic areas that are at high risk of exposure to COVID-19 or any other contagious virus.

If you want to understand more about the architecture, you can read the following post.

Please join our Slack channel or reach out to [email protected] if you have any questions.

Getting Started

You can read the Get Started tutorial on Lanthorn's website. The following instructions will help you get started.

Prerequisites

Hardware

A host edge device. We currently support the following:

  • NVIDIA Jetson Nano
  • NVIDIA Jetson TX2
  • Coral Dev Board
  • AMD64 node with attached Coral USB Accelerator
  • x86 node (also accelerated with OpenVINO)
  • x86 node with NVIDIA GPU

The supported features, detection accuracy, and performance can vary from device to device.

Software

You should have Docker installed on your device.

Download a sample video (Optional)

If you don't have a camera to test the solution, you can use any video as an input source. You can download an example with the following command.

# Download a sample video file from multiview object tracking dataset
# The video is compiled from this dataset: https://researchdatafinder.qut.edu.au/display/n27416
./download_sample_video.sh

Usage

The smart social distancing app consists of two components: the frontend and the processor.

Frontend

The frontend is a public web app provided by Lanthorn where you can sign up for free. This web app allows you to configure some aspects of the processor (such as notifications and camera calibration) using a friendly UI. Moreover, it provides a dashboard that helps you analyze the data that your cameras are processing.

The frontend site uses HTTPS. In order for it to communicate with the processor, the latter must either be running with SSL enabled (see Enabling SSL in this README), or you must edit your site settings for https://app.lanthorn.ai to allow Mixed Content (Insecure Content). Without one of these, communication with the local processor will fail.

Running the processor

Make sure you have Docker installed on your device by following these instructions. The command that you need to execute will depend on the chosen device because each one has an independent Dockerfile.

There are two alternatives to run the processor in your device:

  1. Using git and building the docker image yourself (Follow the guide in this section).
  2. Pulling the (already built) image from Neuralet's Docker Hub repository (Follow the guide in this section).
Running a proof of concept

If you simply want to try the processor out, you only need to do the following:

  1. Select your device and find its Docker image. On x86, without a dedicated edge device, you should use: a. GPU with TensorRT optimization if the device has access to an NVIDIA GPU; b. x86 using OpenVINO if the device has an Intel CPU; c. plain x86 otherwise.
  2. Either build the image or pull it from Docker Hub. Don't forget to run the corresponding script to download the model.
  3. Download the sample video by running ./download_sample_video.sh.
  4. Run the processor using the command listed for your device.

This way you can skip security steps such as enabling HTTPS communication or OAuth2 and get a simple version of the processor running to see if it fits your use case.

Afterwards, if you intend to run the processor while consuming a dedicated video feed, we advise you to return to this README and read it fully.

Running the processor building the image

Make sure your system fulfills the prerequisites and then clone this repository to your local system by running this command:

git clone https://github.com/neuralet/smart-social-distancing.git
cd smart-social-distancing

After that, checkout to the latest release:

git fetch --tags
# Checkout to the latest release tag
git checkout $(git tag | tail -1)
Run on Jetson Nano
  • You need to have JetPack 4.3 installed on your Jetson Nano.
# 1) Download TensorRT engine file built with JetPack 4.3:
./download_jetson_nano_trt.sh

# 2) Build Docker image for Jetson Nano
docker build -f jetson-nano.Dockerfile -t "neuralet/smart-social-distancing:latest-jetson-nano" .

# 3) Run Docker container:
docker run -it --runtime nvidia --privileged -p HOST_PORT:8000 -v "$PWD":/repo -e TZ=`./timezone.sh` neuralet/smart-social-distancing:latest-jetson-nano
Run on Jetson TX2
  • You need to have JetPack 4.4 installed on your Jetson TX2. If you are using OpenPifPaf as a detector, skip the first step, as the TensorRT engine will be generated automatically by the detector when it calls the generate_tensorrt.bash script.
# 1) Download TensorRT engine file built with JetPack 4.4:
./download_jetson_tx2_trt.sh

# 2) Build Docker image for Jetson TX2
docker build -f jetson-tx2.Dockerfile -t "neuralet/smart-social-distancing:latest-jetson-tx2" .

# 3) Run Docker container:
docker run -it --runtime nvidia --privileged -p HOST_PORT:8000 -v "$PWD":/repo -e TZ=`./timezone.sh` neuralet/smart-social-distancing:latest-jetson-tx2
Run on Coral Dev Board
# 1) Build Docker image (This step is optional, you can skip it if you want to pull the container from neuralet dockerhub)
docker build -f coral-dev-board.Dockerfile -t "neuralet/smart-social-distancing:latest-coral-dev-board" .

# 2) Run Docker container:
docker run -it --privileged -p HOST_PORT:8000 -v "$PWD":/repo -e TZ=`./timezone.sh` neuralet/smart-social-distancing:latest-coral-dev-board
Run on AMD64 node with a connected Coral USB Accelerator
# 1) Build Docker image
docker build -f amd64-usbtpu.Dockerfile -t "neuralet/smart-social-distancing:latest-amd64" .

# 2) Run Docker container:
docker run -it --privileged -p HOST_PORT:8000 -v "$PWD":/repo -e TZ=`./timezone.sh` neuralet/smart-social-distancing:latest-amd64
Run on x86
# If you use the OpenPifPaf model, download the model first:
./download-x86-openpifpaf-model.sh

# If you use the MobileNet model run this instead:
# ./download_x86_model.sh

# 1) Build Docker image
docker build -f x86.Dockerfile -t "neuralet/smart-social-distancing:latest-x86_64" .

# 2) Run Docker container:
docker run -it -p HOST_PORT:8000 -v "$PWD":/repo -e TZ=`./timezone.sh` neuralet/smart-social-distancing:latest-x86_64
Run on x86 with GPU

Note that you should have the Nvidia Docker Toolkit installed to run the app with GPU support.

# If you use the OpenPifPaf model, download the model first:
./download-x86-openpifpaf-model.sh

# If you use the MobileNet model run this instead:
# ./download_x86_model.sh

# 1) Build Docker image
docker build -f x86-gpu.Dockerfile -t "neuralet/smart-social-distancing:latest-x86_64_gpu" .

# 2) Run Docker container:
# Notice: you must have Docker >= 19.03 to run the container with the `--gpus` flag.
docker run -it --gpus all -p HOST_PORT:8000 -v "$PWD":/repo -e TZ=`./timezone.sh` neuralet/smart-social-distancing:latest-x86_64_gpu
Run on x86 with GPU using TensorRT optimization

Note that you should have the Nvidia Docker Toolkit installed to run the app with GPU support.

# 1) Build Docker image
docker build -f x86-gpu-tensorrt-openpifpaf.Dockerfile -t "neuralet/smart-social-distancing:latest-x86_64_gpu_tensorrt" .

# 2) Run Docker container:
# Notice: you must have Docker >= 19.03 to run the container with `--gpus` flag.
docker run -it --gpus all -p HOST_PORT:8000 -v "$PWD":/repo -e TZ=`./timezone.sh` neuralet/smart-social-distancing:latest-x86_64_gpu_tensorrt
Run on x86 using OpenVino
# Download the model first:
./download_openvino_model.sh

# 1) Build Docker image
docker build -f x86-openvino.Dockerfile -t "neuralet/smart-social-distancing:latest-x86_64_openvino" .

# 2) Run Docker container:
docker run -it -p HOST_PORT:8000 -v "$PWD":/repo  -e TZ=`./timezone.sh` neuralet/smart-social-distancing:latest-x86_64_openvino
Running the processor from neuralet Docker Hub repository

Before running any of the images available in the Docker repository, you need to follow these steps to have your device ready.

  1. Create a data folder.
  2. Copy the config file (available in this repository) corresponding to your device.
  3. Copy the bash script(s) (available in this repository) required to download the model(s) your device requires.
  4. Optionally, copy the script timezone.sh (available in this repository) to run the processor using your system timezone instead of UTC.

Alternatively, you can simply pull the folder structure from this repository.

Run on Jetson Nano
  • You need to have JetPack 4.3 installed on your Jetson Nano.
# Download TensorRT engine file built with JetPack 4.3:
mkdir data/jetson
./download_jetson_nano_trt.sh

# Run Docker container:
docker run -it --runtime nvidia --privileged -p HOST_PORT:8000 -v $PWD/data:/repo/data -v $PWD/config-jetson-nano.ini:/repo/config-jetson-nano.ini -e TZ=`./timezone.sh` neuralet/smart-social-distancing:latest-jetson-nano
Run on Jetson TX2
  • You need to have JetPack 4.4 installed on your Jetson TX2. If you are using OpenPifPaf as a detector, skip the first step, as the TensorRT engine will be generated automatically by the detector when it calls the generate_tensorrt.bash script.
# Download TensorRT engine file built with JetPack 4.4
mkdir data/jetson
./download_jetson_tx2_trt.sh

# Run Docker container:
docker run -it --runtime nvidia --privileged -p HOST_PORT:8000 -v $PWD/data:/repo/data -v $PWD/config-jetson-tx2.ini:/repo/config-jetson-tx2.ini -e TZ=`./timezone.sh` neuralet/smart-social-distancing:latest-jetson-tx2
Run on Coral Dev Board
# Run Docker container:
docker run -it --privileged -p HOST_PORT:8000 -v $PWD/data:/repo/data -v $PWD/config-coral.ini:/repo/config-coral.ini -e TZ=`./timezone.sh` neuralet/smart-social-distancing:latest-coral-dev-board
Run on AMD64 node with a connected Coral USB Accelerator
# Run Docker container:
docker run -it --privileged -p HOST_PORT:8000 -v $PWD/data:/repo/data -v $PWD/config-coral.ini:/repo/config-coral.ini -e TZ=`./timezone.sh` neuralet/smart-social-distancing:latest-amd64
Run on x86
# Download the models
mkdir data/x86
# If you use the OpenPifPaf model, download the model first:
./download-x86-openpifpaf-model.sh
# If you use the MobileNet model run this instead:
# ./download_x86_model.sh

# Run Docker container:
docker run -it -p HOST_PORT:8000 -v $PWD/data:/repo/data -v $PWD/config-x86.ini:/repo/config-x86.ini -e TZ=`./timezone.sh` neuralet/smart-social-distancing:latest-x86_64
Run on x86 with GPU

Note that you should have the Nvidia Docker Toolkit installed to run the app with GPU support.

# Download the models
mkdir data/x86
# If you use the OpenPifPaf model, download the model first:
./download-x86-openpifpaf-model.sh
# If you use the MobileNet model run this instead:
# ./download_x86_model.sh

# Run Docker container:
# Notice: you must have Docker >= 19.03 to run the container with `--gpus` flag.
docker run -it --gpus all -p HOST_PORT:8000 -v $PWD/data:/repo/data -v $PWD/config-x86-gpu.ini:/repo/config-x86-gpu.ini -e TZ=`./timezone.sh` neuralet/smart-social-distancing:latest-x86_64_gpu
Run on x86 with GPU using TensorRT optimization

Note that you should have the Nvidia Docker Toolkit installed to run the app with GPU support.

# Run Docker container:
# Notice: you must have Docker >= 19.03 to run the container with `--gpus` flag.
docker run -it --gpus all -p HOST_PORT:8000 -v $PWD/data:/repo/data -v $PWD/config-x86-gpu-tensorrt.ini:/repo/config-x86-gpu-tensorrt.ini -e TZ=`./timezone.sh` neuralet/smart-social-distancing:latest-x86_64_gpu_tensorrt
Run on x86 using OpenVino
# Download the model
mkdir data/x86
./download_openvino_model.sh

# Run Docker container:
docker run -it -p HOST_PORT:8000 -v $PWD/data:/repo/data -v $PWD/config-x86-openvino.ini:/repo/config-x86-openvino.ini -e TZ=`./timezone.sh` neuralet/smart-social-distancing:latest-x86_64_openvino

Processor

Optional Parameters

This is a list of optional parameters for the docker run commands. They are included in the examples of the Running the processor section.

Logging in the system's timezone

By default, all Docker containers use UTC as their timezone. Passing the flag -e TZ=`./timezone.sh` will make the container run in your system's timezone.

You may hardcode a value rather than using the timezone.sh script, such as US/Pacific. Changing the processor's timezone gives you better control over when the reports are generated and makes the logged hours correlate with the place where the processor is running.

Please note that the bash script may require permission to execute (run chmod +x timezone.sh).

If you are running the processor directly from the Docker Hub repository, remember to copy/paste the script in the execution folder before adding the flag -e TZ=`./timezone.sh`.
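As a quick illustration of what the TZ variable does, you can observe its effect with the host's standard date command (US/Pacific here is just an example value):

```shell
# The container honours TZ the same way the host shell does when -e TZ=... is passed.
TZ=UTC date +%Z         # prints "UTC"
TZ=US/Pacific date +%Z  # prints "PST" or "PDT", depending on the date
```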

Persisting changes

We recommend adding the project's folder as a mounted volume (-v "$PWD":/repo) if you are building the Docker image. If you are using the already built one, we recommend creating a directory named data and mounting it (-v $PWD/data:/repo/data).

Configuring AWS credentials

Some of the implemented features allow you to upload files into an S3 bucket. To do that, you need to provide the environment variables AWS_BUCKET_REGION, AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY. An easy way to do that is to create a .env file (following the template .env.example) and pass the flag --env-file .env when you run the processor.
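For example, a minimal .env file might look like this (the values below are placeholders; substitute your real credentials):

```shell
# Create a .env file with placeholder AWS credentials (replace before use).
cat > .env <<'EOF'
AWS_BUCKET_REGION=us-east-1
AWS_ACCESS_KEY_ID=YOUR_ACCESS_KEY_ID
AWS_SECRET_ACCESS_KEY=YOUR_SECRET_ACCESS_KEY
EOF

# Then add --env-file .env to your docker run command, e.g.:
# docker run --env-file .env ... neuralet/smart-social-distancing:latest-x86_64
```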

Enabling SSL

We recommend exposing the processor's API using HTTPS. To do that, you need to create a folder named certs with a valid certificate for the processor (with its corresponding private key) and configure it in the config-*.ini file (the SSLCertificateFile and SSLKeyFile configurations).

If you don't have a certificate for the processor, you can create a self-signed one using openssl and the scripts create_ca.sh and create_processor_certificate.sh.

# 1) Create your own CA (certification authority)
./create_ca.sh
# After the script execution, you should have a folder `certs/ca` with the corresponding *.key, *.pem and *.srl files

# 2) Create a certificate for the processor
./create_processor_certificate.sh <PROCESSOR_IP>
# After the script execution, you should have a folder `certs/processor` with the corresponding *.key, *.crt, *.csr and *.ext files

As you are using a self-signed certificate you will need to import the created CA (using the .pem file) in your browser as a trusted CA.
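With the certificate in place, the relevant entries of your config-*.ini would look something like this (a sketch only; <PROCESSOR_IP> is the IP used when creating the certificate, and the exact paths depend on where the scripts placed the files):

```ini
[API]
SSLEnabled = True
SSLCertificateFile = /repo/certs/<your_ip>.crt
SSLKeyFile = /repo/certs/<your_ip>.key
```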

Configuring OAuth2 in the endpoints

By default, all the endpoints exposed by the processor are accessible to everyone with access to the LAN. To avoid this vulnerability, the processor supports OAuth2 to keep your API secure.

To configure OAuth2 in the processor you need to follow these steps:

  1. Enable OAuth2 in the API by setting the parameter UseAuthToken (included in the API section) to True.

  2. Set the environment variable SECRET_ACCESS_KEY in the container. This variable is used to encode the JWT token. An easy way to do that is to create a .env file (following the template .env.example) and pass the flag --env-file .env when you run the processor.

  3. Create an API user. You can do that in two ways:

    1. Using the create_api_user.py script:

    Inside the docker container, execute the script python3 create_api_user.py --user=<USER> --password=<PASSWORD>. For example, if you are using an x86 device, you can execute the following command.

    docker run -it -p HOST_PORT:8000 -v "$PWD":/repo -e TZ=`./timezone.sh` neuralet/smart-social-distancing:latest-x86_64 python3 create_api_user.py --user=<USER> --password=<PASSWORD>
    2. Using the /auth/create_api_user endpoint: Send a POST request to the endpoint http://<PROCESSOR_HOST>:<PROCESSOR_PORT>/auth/create_api_user with the following body:
    {
        "user": <USER>,
        "password": <PASSWORD>
    }
    

    After executing one of these steps, the user and password (hashed) will be stored in the file /repo/data/auth/api_user.txt inside the container. To avoid losing that file when the container is restarted, we recommend mounting the /repo directory as a volume.

  4. Request a valid token. You can obtain one by sending a PUT request to the endpoint http://<PROCESSOR_HOST>:<PROCESSOR_PORT>/auth/access_token with the following body:

    {
        "user": <USER>,
        "password": <PASSWORD>
    }
    

    The obtained token will be valid for 1 week (after that, a new one must be requested from the API) and needs to be sent in the Authorization header on all requests. If you don't send the token (when the UseAuthToken attribute is set to True), you will receive a 401 Unauthorized response.
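Putting it together, requesting and then using a token might look like this with curl (host, port, user and password are placeholders; the Bearer scheme in the second request is an assumption, so check the processor's API docs if it rejects the header):

```shell
# 1) Request a token (valid for one week):
curl -X PUT "http://<PROCESSOR_HOST>:<PROCESSOR_PORT>/auth/access_token" \
     -H "Content-Type: application/json" \
     -d '{"user": "<USER>", "password": "<PASSWORD>"}'

# 2) Send the returned token in the Authorization header on every request:
curl "http://<PROCESSOR_HOST>:<PROCESSOR_PORT>/config" \
     -H "Authorization: Bearer <TOKEN>"
```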

Supported video feeds formats

This processor uses OpenCV VideoCapture, which means that it can process:

  • Video files that are compatible with FFmpeg
  • Any video-stream URL in a public protocol such as RTSP or HTTP (protocol://host:port/script_name?script_params|auth)

Please note that:

  • Although this processor can read and process a video file, this is mostly a development feature. The loggers yield time-dependent statistics that assume a real-time stream is being processed, where, if the processing capacity is lower than the FPS, frames are dropped in favour of processing new ones. With a video file, all frames are processed, which on a slower model may take a while (and yield wrong analytics).
  • Some IP cameras implement their own private protocol that's not compatible with OpenCV.

If you want to integrate an IP camera that uses a private protocol, you should check with the camera provider whether the device supports exporting its stream in a public protocol. For example, WYZE doesn't support RTSP by default, but you have the possibility of installing a firmware that supports it. The same goes for Google Nest cameras, although there a token must be kept alive to access the RTSP stream.
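For instance, the VideoPath parameter of a Source section (described below) accepts either kind of input; both values here are hypothetical:

```ini
[Source_0]
# A video file (development use only)
VideoPath = /repo/data/sample_video.mp4
# An RTSP stream from an IP camera:
# VideoPath = rtsp://user:password@192.168.0.20:554/stream1
```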

Change the default configuration

You can read and modify the configuration in the config-*.ini file corresponding to your device:

config-jetson-nano.ini: for Jetson Nano

config-jetson-tx2.ini: for Jetson TX2

config-coral.ini: for Coral dev board / usb accelerator

config-x86.ini: for plain x86 (cpu) platforms without any acceleration

config-x86-openvino.ini: for x86 systems accelerated with Openvino

Please note that if you modify these values you should also set [App] HasBeenConfigured to "True". This allows for a client to recognize if this processor was previously configured.

You can also modify some of them using the UI. If you choose this option, make sure to mount the config file as a volume to keep the changes after any restart of the container (see section Persisting changes).

All the configurations are grouped in sections and some of them can vary depending on the chosen device.

  • [App]

    • HasBeenConfigured: A boolean parameter that states whether the config.ini was set up or not.
    • Resolution: Specifies the image resolution that the whole processor will use. If you are using a single camera, we recommend using its resolution.
    • Encoder: Specifies the video encoder used by the processing pipeline.
    • MaxProcesses: Defines the number of processes executed in the processor. If you are using multiple cameras per processor we recommend increasing this number.
    • DashboardURL: Sets the URL where the frontend is running. Unless you are using a custom domain, you should keep this value as https://app.lanthorn.ai/.
    • SlackChannel: Configures the Slack channel used by the notifications. The chosen Slack channel must exist in the configured workspace.
    • OccupancyAlertsMinInterval: Sets the desired interval (in seconds) between occupancy alerts.
    • MaxThreadRestarts: Defines the number of restarts allowed per thread.
    • HeatmapResolution: Sets the resolution used by the heatmap report.
    • LogPerformanceMetrics: A boolean parameter to enable/disable the logging of "Performance Metrics" in the default processor log. We recommend enabling it to compare the performance of different devices, models, resolutions, etc. When it's enabled, the processor logs will include the following information every time 100 frames are processed:
      • Frames per second (FPS)
      • Average Detector time
      • Average Classifier time
      • Average Tracker time
      • Post-processing steps:
        • Average Objects Filtering time
        • Average Social Distance time
        • Average Anonymizer time
    • LogPerformanceMetricsDirectory: When LogPerformanceMetrics is enabled, you can store the performance metrics in a CSV file by setting the destination directory.
    • EntityConfigDirectory: Defines the location where the configurations of entities (such as sources and areas) are located.
  • [Api]

    • Host: Configures the host IP of the processor's API (inside docker). We recommend not changing this value and keeping it as 0.0.0.0.
    • Port: Configures the port of the processor's API (inside docker). Take care that if you change the default value (8000) you will need to change the startup command to expose the configured endpoint.
    • UseAuthToken: A boolean parameter to enable/disable OAuth2 in the API. If you set this value to True, remember to follow the steps explained in the section Configuring OAuth2 in the endpoints.
    • SSLEnabled: A boolean parameter to enable/disable https/ssl in the API. We recommend setting this value to True.
    • SSLCertificateFile: Specifies the location of the SSL certificate (required when you have SSL enabled). If you generate it following the steps defined in this README, you should put /repo/certs/<your_ip>.crt.
    • SSLKeyFile: Specifies the location of the SSL key file (required when you have SSL enabled). If you generate it following the steps defined in this README, you should put /repo/certs/<your_ip>.key.
  • [Core]:

    • Host: Sets the host IP of the QueueManager (inside docker).
    • QueuePort: Sets the port of the QueueManager (inside docker).
    • QueueAuthKey: Configures the auth key required to interact with the QueueManager.
  • [Area_N]:

    A single processor can manage multiple areas and all of them must be configured in the config file. You can generate this configuration in 3 different ways: directly in the config file, using the UI or using the API.

    • Id: A string parameter to identify each area. This value must be unique.
    • Name: A string parameter to name each area. Although you can repeat the same name in multiple areas, we recommend not doing so.
    • Cameras: Configures the cameras (using their ids) included in the area. If you are configuring multiple cameras, you should write the ids separated by commas. Each area should have at least one camera.
    • NotifyEveryMinutes and ViolationThreshold: Define the period of time and the number of social distancing violations required to send notifications. For example, if you want to be notified when more than 10 violations occur every 15 minutes, you must set NotifyEveryMinutes to 15 and ViolationThreshold to 10.
    • Emails: Defines the list of emails that will receive the notifications. Multiple emails can be written, separated by commas.
    • EnableSlackNotifications: A boolean parameter to enable/disable the Slack integration for notifications and daily reports. We recommend not editing this parameter directly but managing it from the UI to configure your workspace correctly.
    • OccupancyThreshold: Defines the occupancy violation threshold. For example, if you want to be notified when there are more than 20 people in the area, you must set OccupancyThreshold to 20.
    • DailyReport: When this parameter is set to True, the information of the previous day is sent in a summary report.
    • DailyReportTime: If the daily report is enabled, you can choose the time to receive the report. By default, the report is sent at 06:00.
  • [Source_N]:

    In the config files, we use the source sections to specify the cameras' configurations. Similarly to the areas, a single processor can manage multiple cameras and all of them must be configured in the config file. You can generate this configuration in 3 different ways: directly in the config file, using the UI or using the API.

    • Id: A string parameter to identify each camera. This value must be unique.
    • Name: A string parameter to name each camera. Although you can repeat the same name in multiple cameras, we recommend not doing so.
    • VideoPath: Sets the path or URL required to get the camera's video stream.
    • Tags: List of tags (separated by commas). This field only has an informative purpose; changing its value doesn't affect the processor's behavior.
    • NotifyEveryMinutes and ViolationThreshold: Define the period of time and the number of social distancing violations required to send notifications. For example, if you want to be notified when more than 10 violations occur every 15 minutes, you must set NotifyEveryMinutes to 15 and ViolationThreshold to 10.
    • Emails: Defines the list of emails that will receive the notifications. Multiple emails can be written, separated by commas.
    • EnableSlackNotifications: A boolean parameter to enable/disable the Slack integration for notifications and daily reports. We recommend not editing this parameter directly but managing it from the UI to configure your workspace correctly.
    • DailyReport: When this parameter is set to True, the information of the previous day is sent in a summary report.
    • DailyReportTime: If the daily report is enabled, you can choose the time to receive the report. By default, the report is sent at 06:00.
    • DistMethod: Configures the chosen distance method used by the processor to detect the violations. There are three different values: CalibratedDistance, CenterPointsDistance and FourCornerPointsDistance. If you want to use CalibratedDistance you will need to calibrate the camera from the UI.
    • LiveFeedEnabled: A boolean parameter that enables/disables the video live feed for the source.
  • [Detector]:

    • Device: Specifies the device. The available values are Jetson, EdgeTPU, Dummy, x86 and x86-gpu.
    • Name: Defines the detector model used by the processor. The available models vary from device to device. Information about the supported models is given in a comment in the corresponding config-*.ini file.
    • ImageSize: Configures the model input size. When the image has a different resolution, it is resized to fit the model's input size. The available values of this parameter depend on the chosen model.
    • ModelPath: Some of the supported models allow you to overwrite the default one. For example, if you have a specific model trained for your scenario, you can use it.
    • ClassID: When you are using a multi-class detection model, you can define the class id related to pedestrians in this parameter.
    • MinScore: Defines the person detection threshold. Any person detected by the model with a score less than the threshold will be ignored.
    • TensorrtPrecision: When you are using the TensorRT version of OpenPifPaf with GPU, set TensorrtPrecision to 32 for float32 precision or 16 for float16 precision, based on your GPU. If it supports both, the float32 engine is more accurate and the float16 one is faster.
  • [Classifier]:

    Some of the supported devices include models that allow detecting the body pose of a person. This is a key component of Facemask Detection. If you want to include this feature, you need to uncomment this section and use a model that supports the Classifier. Otherwise, you can delete or comment out this section of the config file to save on CPU usage.

    • Device: Specifies the device. The available values are Jetson, EdgeTPU, Dummy, x86 and x86-gpu.
    • Name: Name of the facemask classifier used.
    • ImageSize: Configures the model input size. When the image has a different resolution, it is resized to fit the model's input size. The available values of this parameter depend on the chosen model.
    • ModelPath: The same behavior as in the Detector section.
    • MinScore: Defines the facemask detection threshold. Any facemask detected by the model with a score less than the threshold will be ignored.
    • TensorrtPrecision: When you are using the TensorRT version of OpenPifPaf with GPU, set TensorrtPrecision to 32 for float32 precision or 16 for float16 precision, based on your GPU. If it supports both, the float32 engine is more accurate and the float16 one is faster.
  • [Tracker]:

    • Name: Name of the tracker used.
    • MaxLost: Defines the number of frames an object must be missing before it is considered lost.
    • TrackerIOUThreshold: Configures the minimum IoU for the IoU tracker to consider boxes in two consecutive frames as referring to the same object.
  • [SourcePostProcessor_N]:

    In the config files, we use the SourcePostProcessor sections to specify additional processing steps run after the detector and face mask classifier (if available) on the video sources. We support 3 different ones (identified by the field Name) that you can enable/disable by uncommenting/commenting them or with the Enabled flag.

    • objects_filtering: Used to remove invalid objects (duplicates or excessively large boxes).
      • NMSThreshold: Configures the threshold of minimum IoU to detect two boxes as referring to the same object.
    • social_distance: Used to measure the distance between objects and detect social distancing violations.
      • DefaultDistMethod: Defines the default distance algorithm for the cameras without a DistMethod configuration.
      • DistThreshold: Configures the distance threshold for the social distancing violations.
    • anonymizer: A step used to enable anonymization of faces in videos and screenshots.
  • [SourceLogger_N]:

    Similar to the SourcePostProcessor_N section, we support multiple loggers (right now 4) that you can enable/disable by uncommenting/commenting them or with the Enabled flag.

    • video_logger: Generates a video stream with the processing results. It is a useful logger for monitoring your sources in real time.
    • s3_logger: Stores a screenshot of all the cameras in an S3 bucket.
      • ScreenshotPeriod: Defines a time period (expressed in minutes) to take a screenshot of all the cameras and store them in S3. If you set the value to 0, no screenshots will be taken.
      • ScreenshotS3Bucket: Configures the S3 Bucket used to store the screenshot.
    • file_system_logger: Stores the processed data in a folder inside the processor.
      • TimeInterval: Sets the desired logging interval for objects detections and violations.
      • LogDirectory: Defines the location where the generated files will be stored.
      • ScreenshotPeriod: Defines a time period (expressed in minutes) to take a screenshot of all the cameras and store them. If you set the value to 0, no screenshots will be taken.
      • ScreenshotsDirectory: Configures the folder dedicated to storing all the images generated by the processor. We recommend setting this folder to a mounted directory (such as /repo/data/processor/static/screenshots).
    • web_hook_logger: Allows you to configure an external endpoint to receive the object detections and violations in real time.
      • Endpoint: Configures an endpoint url.
  • [AreaLogger_N]:

    Similar to the SourceLogger_N section (but for areas instead of cameras), we support multiple loggers (right now only 1, but we plan to include new ones in the future) that you can enable/disable by uncommenting/commenting them or with the Enabled flag.

    • file_system_logger: Stores the occupancy data in a folder inside the processor.
      • LogDirectory: Defines the location where the generated files will be stored.
  • [PeriodicTask_N]:

    The processor also supports the execution of periodic tasks to generate reports, accumulate metrics, back up your files, etc. For now, we support the metrics and s3_backup tasks. You can enable/disable these functionalities by uncommenting/commenting the section or with the Enabled flag.

    • metrics: Generates different reports (hourly, daily, and live) with information about the social distancing infractions, facemask usage, and occupancy in your cameras and areas. It must be enabled to see data in the UI dashboard or use the /metrics endpoints.
      • LiveInterval: Expressed in minutes. Defines the interval at which live information is generated.
    • s3_backup: Backs up all the generated data (raw data and reports) into an S3 bucket. To enable this functionality you need to configure the AWS credentials following the steps explained in the section Configuring AWS credentials.
      • BackupInterval: Expressed in minutes. Defines the interval at which the raw data is backed up.
      • BackupS3Bucket: Configures the S3 Bucket used to store the backups.
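
As a sketch of how these periodic tasks might look in the config file (the section numbers, key names, and values here are illustrative assumptions; check the config file shipped for your device for the real names):

```ini
; Hypothetical example - adapt section numbers and values to your own config file.
[PeriodicTask_0]
Name = metrics
Enabled = True
; Generate live information every 10 minutes
LiveInterval = 10

[PeriodicTask_1]
Name = s3_backup
Enabled = True
; Back up the raw data every 30 minutes into the given bucket
BackupInterval = 30
BackupS3Bucket = my-backup-bucket
```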

API usage

After you run the processor on your node, you can use the exposed API to control the Processor's Core, where all the processing happens.

The available endpoints are grouped into the following subapis:

  • /config: provides a pair of endpoints to retrieve and overwrite the current configuration file.
  • /cameras: provides endpoints to execute all the CRUD operations required by cameras. These endpoints are very useful for editing a camera's configuration without restarting the docker process. Additionally, this subapi exposes the calibration endpoints.
  • /areas: provides endpoints to execute all the CRUD operations required by areas.
  • /app: provides endpoints to retrieve and update the App section in the configuration file.
  • /api: provides endpoints to retrieve the API section in the configuration file.
  • /core: provides endpoints to retrieve and update the CORE section in the configuration file.
  • /detector: provides endpoints to retrieve and update the Detector section in the configuration file.
  • /classifier: provides endpoints to retrieve and update the Classifier section in the configuration file.
  • /tracker: provides endpoints to retrieve and update the Tracker section in the configuration file.
  • /source_post_processors: provides endpoints to retrieve and update the SourcePostProcessor_N sections in the configuration file. You can use these endpoints to enable/disable a post-processor step, change a parameter, etc.
  • /source_loggers: provides endpoints to retrieve and update the SourceLogger_N sections in the configuration file. You can use these endpoints to enable/disable a logger, change a parameter, etc.
  • /area_loggers: provides endpoints to retrieve and update the AreaLogger_N sections in the configuration file. You can use these endpoints to enable/disable a logger, change a parameter, etc.
  • /periodict_tasks: provides endpoints to retrieve and update the PeriodicTask_N sections in the configuration file. You can use these endpoints to enable/disable the metrics generation.
  • /metrics: a set of endpoints to retrieve the data generated by the metrics periodic task.
  • /export: an endpoint to export (in zip format) all the data generated by the processor.
  • /slack: a set of endpoints required to configure Slack correctly in the processor. We recommend using these endpoints from the UI instead of calling them directly.
  • /auth: a set of endpoints required to configure OAuth2 in the processors' endpoints.

Additionally, the API exposes two endpoints to start/stop the video processing:

  • PUT PROCESSOR_IP:PROCESSOR_PORT/start-process-video: Sends the command PROCESS_VIDEO_CFG to the core and returns the response. It starts processing the video specified in the configuration file. If the response is true, the core is going to try to process the video (with no guarantee that it will succeed); if the response is false, the process cannot be started now (e.g. another process was already requested and is running).

  • PUT PROCESSOR_IP:PROCESSOR_PORT/stop-process-video: Sends the command STOP_PROCESS_VIDEO to the core and returns the response. It stops processing the video at hand, returning true if it stopped or false if it could not (e.g. there is no video currently being processed to stop).

The complete list of endpoints, with a short description and the signature specification, is documented (with Swagger) at PROCESSOR_IP:PROCESSOR_PORT/docs.
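
As an illustrative sketch (the host and port below are assumptions; adjust them to your deployment), the two start/stop endpoints can be driven from a small script using only the Python standard library:

```python
import json
import urllib.request


def build_url(host: str, port: int, path: str) -> str:
    """Build a processor endpoint URL, e.g. http://host:port/start-process-video."""
    return f"http://{host}:{port}/{path.lstrip('/')}"


def toggle_processing(host: str, port: int, start: bool = True) -> bool:
    """Send PUT /start-process-video or /stop-process-video and return the
    boolean response described above (true = accepted, false = refused)."""
    path = "start-process-video" if start else "stop-process-video"
    req = urllib.request.Request(build_url(host, port, path), method="PUT")
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())


# Example (requires a running processor; the address is hypothetical):
# ok = toggle_processing("192.168.1.104", 8000, start=True)
```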

NOTE: Most of the endpoints update the config file given in the Dockerfile. If you don't have this file mounted (see the section Persisting changes), these changes will live inside your container and will be lost after stopping it.

Interacting with the processor's generated information

Generated information

The generated information can be split into 3 categories:

  • Raw data: This is the most basic level of information. It only includes the results of the detector, classifier, tracker, and any configured post-processor step.
  • Metrics data: Only written if you have enabled the metrics periodic task (see section). These include metrics related to occupancy, social-distancing, and facemask usage; aggregated by hour and day.
  • Notifications: Situations that require an immediate response (such as surpassing the maximum occupancy threshold for an area) and need to be notified ASAP. The currently supported notification channels are email and Slack.

Accessing and storing the information

All of the information that is generated by the processor is stored (by default) inside the edge device for security reasons. However, the processor provides features to easily export or backup the data to another system if required.

Storing the raw data

The raw data storage is managed by the SourceLogger and AreaLogger steps. By default, only the video_logger and the file_system_logger are enabled. As both steps store the data inside the processor (by default in the folder /repo/data/processor/static/), we strongly recommend mounting that folder to keep the data safe when the process is restarted (see Persisting changes). Moreover, we recommend keeping these steps active, because the frontend and the metrics need them.

If you need to store (or process) the raw data in real-time outside the processor, you can activate the web_hook_logger and implement an endpoint that handles these events. The web_hook_logger step is configured to send an event (a PUT request) using the following format:

{
    "version": ...,
    "timestamp": ...,
    "detected_objects": ...,
    "violating_objects": ...,
    "environment_score": ...,
    "detections": ...,
    "violations_indexes": ...
}

You only need to implement an endpoint that matches the previous signature and configure its URL in the config file; the integration is then complete. We recommend this approach if you want to integrate "Smart social distancing" with another existing system that needs real-time data.
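
To make the event format concrete, here is a minimal sketch of a handler for one such event (the summary logic and function name are our own; in a real deployment you would expose this through a PUT route in the web framework of your choice and point the Endpoint URL in the config file at it):

```python
from typing import Any, Dict


def handle_violation_event(event: Dict[str, Any]) -> str:
    """Summarize one web_hook_logger event (the body of the PUT request).

    Field names are taken from the event format above; what you do with the
    event (store it, alert, forward it) is up to your system.
    """
    detected = event.get("detected_objects", 0)
    violating = event.get("violating_objects", 0)
    score = event.get("environment_score")
    return (f"[{event.get('timestamp', 'unknown time')}] "
            f"{violating}/{detected} detected objects violating distancing "
            f"(environment score: {score})")
```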

Another alternative is to activate the periodic task s3_backup. This task will back up all the generated data (raw data and metrics) inside the configured S3 bucket, according to the time interval defined by the BackupInterval parameter. Before enabling this feature remember to configure AWS following the steps defined in the section Configuring AWS credentials.

Accessing the metrics data

The aggregated metrics data is stored in a set of CSV files inside the device. For now, we haven't implemented any mechanism to store these files outside the processor (the web_hook_logger only sends "raw data" events). However, if you enable the s3_backup task, the previous day's metrics files will be backed up to S3 at the beginning of each day.

You can easily visualize the metrics in the dashboard exposed by the frontend. In addition, you can retrieve the same information through the API (see the metrics section in the API documentation exposed at http://<PROCESSOR_HOST>:<PROCESSOR_PORT>/docs#/Metrics).

Exporting the data

In addition to the previous features, the processor exposes an endpoint to export all the generated data in zip format. The signature of this endpoint can be found at http://<PROCESSOR_HOST>:<PROCESSOR_PORT>/docs#/Export.
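
A minimal sketch of fetching that export with the Python standard library (the exact route, HTTP method, and parameters should be checked against the Swagger docs; this assumes a plain GET on /export, and the address is hypothetical):

```python
import urllib.request
from pathlib import Path


def export_url(host: str, port: int) -> str:
    """URL of the export endpoint (path taken from the docs link above)."""
    return f"http://{host}:{port}/export"


def download_export(host: str, port: int, destination: str = "export.zip") -> Path:
    """Download the zip produced by the processor (requires a running processor)."""
    target = Path(destination)
    with urllib.request.urlopen(export_url(host, port)) as resp:
        target.write_bytes(resp.read())
    return target


# Example (hypothetical address):
# download_export("192.168.1.104", 8000)
```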

Issues and Contributing

The project is under substantial active development; you can find our roadmap at https://github.com/neuralet/neuralet/projects/1. Feel free to open an issue, send a Pull Request, or reach out if you have any feedback.

Contact Us

License

Most of the code in this repo is licensed under the Apache 2 license. However, some sections/classifiers include separate licenses.

These include:

  • Openpifpaf model for x86 (see license)
  • OFM facemask classifier model (see license)
Issues
  • libnvinfer and wrong datadirectory, not writable?

    Hello

    I've been trying to get this to run for use in our facility, but to no avail. I feel I'm close though. I have two issues when trying to run the docker command:

    sudo docker run -it --runtime nvidia --privileged -p 8000:8000 -v "$PWD":/repo neuralet/smart-social-distancing:latest-jetson-nano

    INFO:libs.area_threading:[70] taking on notifications for 1 areas
    ERROR:root:libnvinfer.so.6: cannot open shared object file: No such file or directory
    Traceback (most recent call last):
      File "/repo/libs/engine_threading.py", line 49, in run
        self.engine = CvEngine(self.config, self.source["section"], self.live_feed_enabled)
      File "/repo/libs/cv_engine.py", line 23, in __init__
        self.detector = Detector(self.config)
      File "/repo/libs/detectors/detector.py", line 23, in __init__
        self.detector = JetsonDetector(self.config)
      File "/repo/libs/detectors/jetson/detector.py", line 22, in __init__
        from . import mobilenet_ssd_v2
      File "/repo/libs/detectors/jetson/mobilenet_ssd_v2.py", line 3, in <module>
        import tensorrt as trt
      File "/usr/local/lib/python3.6/dist-packages/tensorrt/__init__.py", line 1, in <module>
        from .tensorrt import *
    ImportError: libnvinfer.so.6: cannot open shared object file: No such file or directory
    100     4  100     4    0     0      8      0 --:--:-- --:--:-- --:--:--     8
    ok video is going to be processed

    The processor keeps going though so I'm not sure this is an actual problem. Then it starts looping this:

    INFO:root:Exception processing area Kitchen
    INFO:root:Restarting the area processing
    INFO:libs.area_engine:Enabled processing area - area0: Kitchen with 1 cameras
    INFO:libs.area_engine:Area reporting on - area0: Kitchen is waiting for reports to be created
    ERROR:root:[Errno 2] No such file or directory: '/repo/data/processor/static/data/sources/default/objects_log/2020-12-30.csv'
    Traceback (most recent call last):
      File "/repo/libs/area_threading.py", line 46, in run
        self.engine.process_area()
      File "/repo/libs/area_engine.py", line 67, in process_area
        with open(os.path.join(camera["file_path"], str(date.today()) + ".csv"), "r") as log:
    FileNotFoundError: [Errno 2] No such file or directory: '/repo/data/processor/static/data/sources/default/objects_log/2020-12-30.csv'
    INFO:root:Exception processing area Kitchen
    ERROR:root:[Errno 2] No such file or directory: '/repo/data/processor/static/data/sources/default/objects_log/2020-12-30.csv'
    Traceback (most recent call last):
      File "/repo/libs/area_threading.py", line 54, in run
        raise e
      File "/repo/libs/area_threading.py", line 46, in run
        self.engine.process_area()
      File "/repo/libs/area_engine.py", line 67, in process_area
        with open(os.path.join(camera["file_path"], str(date.today()) + ".csv"), "r") as log:
    FileNotFoundError: [Errno 2] No such file or directory: '/repo/data/processor/static/data/sources/default/objects_log/2020-12-30.csv'

    The directory that it is looking for 'sources' does not exist. There is a directory data/processor/static/data/default/objects_log, but even when I create the sources directory, the bug persists. When I create the csv manually (which can't be intended), I get this:

    IndexError: deque index out of range
    INFO:root:Exception processing area Kitchen
    INFO:root:Restarting the area processing
    INFO:libs.area_engine:Enabled processing area - area0: Kitchen with 1 cameras
    ERROR:root:deque index out of range
    Traceback (most recent call last):
      File "/repo/libs/area_threading.py", line 46, in run
        self.engine.process_area()
      File "/repo/libs/area_engine.py", line 68, in process_area
        last_log = deque(csv.DictReader(log), 1)[0]
    IndexError: deque index out of range
    INFO:root:Exception processing area Kitchen
    ERROR:root:deque index out of range
    Traceback (most recent call last):
      File "/repo/libs/area_threading.py", line 54, in run
        raise e
      File "/repo/libs/area_threading.py", line 46, in run
        self.engine.process_area()
      File "/repo/libs/area_engine.py", line 68, in process_area
        last_log = deque(csv.DictReader(log), 1)[0]
    IndexError: deque index out of range

    I'm by no measure a Linux or Docker expert so this might be a small issue to resolve, but I need help.

    Thanks in advance and great work so far!

    opened by ictfpcgent 14
  • Face mask classifier

    A structure for a classifier is added to the code. The classifier works with an openpifpaf detector at x86 devices.

    opened by mrn-mln 10
  • Black feed previews when input feed is from IP Camera

    Hello, I'm testing smart-social-distancing using a jetson nano and a D-Link DCS-5222LB IP Camera. The frontend is running locally on my laptop, the Processor is running on the Jetson and the Dlink camera are all on the same subnet.

    Laptop: 192.168.188.20 Dlink camera: 192.168.188.23 Jetson: 192.168.188.81

    Here are my configurations:

    config-frontend.ini

    [App]
    Host: 0.0.0.0
    Port: 8000
    
    [Processor]
    Host: 192.168.188.81
    Port: 8000
    

    config-jetson.ini

    [App]
    VideoPath = rtsp://username:[email protected]/live1.sdp
    Resolution = 640,360
    
    Encoder: videoconvert ! video/x-raw,format=I420 ! x264enc speed-preset=ultrafast
    
    [API]
    Host = 0.0.0.0
    Port = 8000
    
    [CORE]
    Host = 0.0.0.0
    QueuePort = 8010
    QueueAuthKey = shibalba
    

    I'm running the Processor on the Jetson with this command:

    docker build -f jetson-nano.Dockerfile -t "neuralet/smart-social-distancing:latest-jetson-nano" .
    docker run -it --runtime nvidia --privileged -p 8000:8000 -v "$PWD/data":/repo/data neuralet/smart-social-distancing:latest-jetson-nano
    

    and this is the output I get on the console:

    ok video is going to be processed
    [TensorRT] ERROR: Could not register plugin creator:  FlattenConcat_TRT in namespace:
    INFO:libs.detectors.jetson.mobilenet_ssd_v2:allocated buffers
    Device is:  Jetson
    Detector is:  ssd_mobilenet_v2_pedestrian_softbio
    image size:  [300, 300, 3]
    INFO:libs.distancing:opened video rtsp://admin:[email protected]/live2.sdp
    error: XDG_RUNTIME_DIR not set in the environment.
    0:00:02.002293373    68   0x55a7901730 ERROR                default gstvaapi.c:254:plugin_init: Cannot create a VA display
    INFO:libs.distancing:processed frame 1 for rtsp://username:[email protected]/live1.sdp
    INFO:libs.distancing:processed frame 11 for rtsp://username:[email protected]/live1.sdp
    INFO:libs.distancing:processed frame 21 for rtsp://username:[email protected]/live1.sdp
    INFO:libs.distancing:processed frame 31 for rtsp://username:[email protected]/live1.sdp
    INFO:libs.distancing:processed frame 41 for rtsp://username:[email protected]/live1.sdp
    INFO:libs.distancing:processed frame 51 for rtsp://username:[email protected]/live1.sdp
    INFO:libs.distancing:processed frame 61 for rtsp://username:[email protected]/live1.sdp
    INFO:libs.distancing:processed frame 71 for rtsp://username:[email protected]/live1.sdp
    
    [...] and so on
    
    

    I'm running the webapp locally on my laptop with this command:

    docker build -f frontend.Dockerfile -t "neuralet/smart-social-distancing:latest-frontend" .
    docker build -f web-gui.Dockerfile -t "neuralet/smart-social-distancing:latest-web-gui" .
    docker run -it -p 8000:8000 --rm neuralet/smart-social-distancing:latest-web-gui 
    

    This is the output I get from the console:

    Successfully built 0047284f1577
    Successfully tagged neuralet/smart-social-distancing:latest-web-gui
    INFO:     Started server process [1]
    INFO:uvicorn.error:Started server process [1]
    INFO:     Waiting for application startup.
    INFO:uvicorn.error:Waiting for application startup.
    INFO:     Application startup complete.
    INFO:uvicorn.error:Application startup complete.
    INFO:     Uvicorn running on http://0.0.0.0:8000 (Press CTRL+C to quit)
    INFO:uvicorn.error:Uvicorn running on http://0.0.0.0:8000 (Press CTRL+C to quit)
    

    When I browse the frontend locally with the latest Chrome on http://0.0.0.0:8000/panel/#/live I see the Camera Feed and the Bird's View box completely black. See the screenshot below:

    screen_0

    The plot underneath the camera feeds is working because if I step back and forth in front of the camera seems like it's recognizing me.

    If I open Chrome's inspector I can see that there are two errors in the console.

    screen_errors

    And if I try performing a GET to that URL (or any similar path) I get a 404.

    errors details

    and a "Not Found" response.

    screen_not_found

    Has anyone of you experienced something similar when working with IP Cams? Any idea on how to fix this?

    Thank you very much and big ups for the beautiful project!

    Cheers

    opened by ing-norante 6
  • XGD_Runtime Error and Loading Video problem

    Hello, thank you to everyone who contributed to this project. When I try it on Jetson nano, the video feels like loading continuously. It plays after 5 seconds, loads again, plays again for 5 seconds, then pauses. I refresh the page but the same problem continue. It also gives errors related to XGD_Runtime on console.

    I created the docker image with the command below. docker build -f jetson-nano.Dockerfile -t "neuralet/smart-social-distancing:latest-jetson-nano" .

    Also I tried it on my PC, the result is same. What am i missing ?

    Screen Shot 2020-06-26 at 20 45 45
    opened by memina 6
  • Issue with video feeds buffering and becoming out-of-sync with each other

    Hi all,

    I managed to get this up and running on a Jetson Nano a couple of months ago. Coming back to it now everything has changed! 👍

    I'm running both the front-end and the processor on the Nano. I have no experience with docker so was not sure how to get the two parts communicating, eventually I got something though. I detail the steps here just in case.

    I built and run everything on the Nano itself in headless mode as follows:

    Modified the config-frontend.ini file as follows:

    [App]
    Host: 0.0.0.0
    Port: 8000
    
    [Processor]
    ; The IP and Port on which your Processor node is runnning (according to your docker run's -p HOST_PORT:8000 ... for processor's docker run command
    Host: 192.168.1.104 <-- changed this line to reflect the IP address of the Nano
    Port: 8001
    

    Build the docker files:

    docker build -f frontend.Dockerfile -t "neuralet/smart-social-distancing:latest-frontend" .
    docker build -f jetson-web-gui.Dockerfile -t "neuralet/smart-social-distancing:latest-web-gui-jetson" .
    
    docker build -f jetson-nano.Dockerfile -t "neuralet/smart-social-distancing:latest-jetson-nano" .
    

    Start the front-end:

    sudo docker run -it -p 8000:8000 --rm neuralet/smart-social-distancing:latest-web-gui-jetson
    
    # Output
    INFO:     Started server process [1]
    INFO:uvicorn.error:Started server process [1]
    INFO:     Waiting for application startup.
    INFO:uvicorn.error:Waiting for application startup.
    INFO:     Application startup complete.
    INFO:uvicorn.error:Application startup complete.
    INFO:     Uvicorn running on http://0.0.0.0:8000 (Press CTRL+C to quit)
    INFO:uvicorn.error:Uvicorn running on http://0.0.0.0:8000 (Press CTRL+C to quit)
    

    Start the processor:

    sudo docker run -it --runtime nvidia --privileged -p 8001:8000 -v "$PWD/data":/repo/data -v "$PWD/config-jetson.ini":/repo/config-jetson.ini -e TZ=`./timezone.sh` neuralet/smart-social-distancing:latest-jetson-nano
    
    # Output
    video file at  /repo/data/softbio_vid.mp4 not exists, downloading...
    --2020-10-15 10:46:41--  https://social-distancing-data.s3.amazonaws.com/softbio_vid.mp4
    Resolving social-distancing-data.s3.amazonaws.com (social-distancing-data.s3.amazonaws.com)... 52.217.65.116
    Connecting to social-distancing-data.s3.amazonaws.com (social-distancing-data.s3.amazonaws.com)|52.217.65.116|:443... connected.
    HTTP request sent, awaiting response... INFO:__main__:Reporting disabled!
    200 OK
    Length: 25371423 (24M) [video/mp4]
    Saving to: 'data/softbio_vid.mp4'
    
    data/softbio_vid.mp4            2%[>                                                 ] 534.65K   730KB/s               INFO:libs.processor_core:Core's queue has been initiated
    INFO:__main__:Core Started.
    INFO:root:Starting processor core
    INFO:libs.processor_core:Core is listening for commands ...
    data/softbio_vid.mp4            7%[==>                                               ]   1.81M  1.17MB/s               INFO:api.processor_api:Connection established to Core's queue
    data/softbio_vid.mp4            8%[===>                                              ]   2.15M  1.23MB/s               INFO:__main__:API Started.
    data/softbio_vid.mp4           10%[====>                                             ]   2.47M  1.26MB/s               INFO:     Started server process [11]
    INFO:uvicorn.error:Started server process [11]
    INFO:     Waiting for application startup.
    INFO:uvicorn.error:Waiting for application startup.
    INFO:     Application startup complete.
    INFO:uvicorn.error:Application startup complete.
    INFO:     Uvicorn running on http://0.0.0.0:8000 (Press CTRL+C to quit)
    INFO:uvicorn.error:Uvicorn running on http://0.0.0.0:8000 (Press CTRL+C to quit)
    data/softbio_vid.mp4          100%[=================================================>]  24.20M  1.62MB/s    in 16s
    
    2020-10-15 10:46:57 (1.52 MB/s) - 'data/softbio_vid.mp4' saved [25371423/25371423]
    
    running curl 0.0.0.0:8000/process-video-cfg
      % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                     Dload  Upload   Total   Spent    Left  Speed
      0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0INFO:api.processor_api:process-video-cfg requests on api
    INFO:api.processor_api:waiting for core's response...
    INFO:libs.processor_core:command received: Commands.PROCESS_VIDEO_CFG
    INFO:libs.processor_core:Setup scheduled tasks
    INFO:libs.processor_core:should not send notification for camera default
    INFO:libs.processor_core:started to process video ...
    100     4  100     4    0     0     59      0 --:--:-- --:--:-- --:--:--    60
    ok video is going to be processed
    INFO:libs.engine_threading:[68] taking on 1 cameras
    [TensorRT] ERROR: Could not register plugin creator:  FlattenConcat_TRT in namespace:
    INFO:libs.detectors.jetson.mobilenet_ssd_v2:allocated buffers
    Device is:  Jetson
    Detector is:  ssd_mobilenet_v2_coco
    image size:  [300, 300, 3]
    INFO:libs.distancing:opened video /repo/data/softbio_vid.mp4
    error: XDG_RUNTIME_DIR not set in the environment.
    0:00:00.812662754    79   0x55b7599660 ERROR                default gstvaapi.c:254:plugin_init: Cannot create a VA display
    INFO:libs.distancing:processed frame 1 for /repo/data/softbio_vid.mp4
    INFO:libs.distancing:processed frame 101 for /repo/data/softbio_vid.mp4
    INFO:libs.distancing:processed frame 201 for /repo/data/softbio_vid.mp4
    INFO:libs.distancing:processed frame 301 for /repo/data/softbio_vid.mp4
    

    This seems to be good so far, I start getting the graph appearing in my browser when I visit 192.168.1.104:8000.

    However, the video feeds seem to buffer and stutter a lot. Furthermore, the birds-eye view and the video feed quickly become out-of-sync with one another. If I look back at the running processor I observe something along the following lines (which happens an awful lot):

    INFO:libs.distancing:processed frame 3301 for /repo/data/softbio_vid.mp4
    INFO:libs.distancing:processed frame 3401 for /repo/data/softbio_vid.mp4
    ERROR:    Exception in ASGI application
    Traceback (most recent call last):
      File "/usr/local/lib/python3.6/dist-packages/uvicorn/protocols/http/h11_impl.py", line 389, in run_asgi
        result = await app(self.scope, self.receive, self.send)
      File "/usr/local/lib/python3.6/dist-packages/uvicorn/middleware/proxy_headers.py", line 45, in __call__
        return await self.app(scope, receive, send)
      File "/usr/local/lib/python3.6/dist-packages/fastapi/applications.py", line 181, in __call__
        await super().__call__(scope, receive, send)  # pragma: no cover
      File "/usr/local/lib/python3.6/dist-packages/starlette/applications.py", line 111, in __call__
        await self.middleware_stack(scope, receive, send)
      File "/usr/local/lib/python3.6/dist-packages/starlette/middleware/errors.py", line 181, in __call__
        raise exc from None
      File "/usr/local/lib/python3.6/dist-packages/starlette/middleware/errors.py", line 159, in __call__
        await self.app(scope, receive, _send)
      File "/usr/local/lib/python3.6/dist-packages/starlette/middleware/cors.py", line 86, in __call__
        await self.simple_response(scope, receive, send, request_headers=headers)
      File "/usr/local/lib/python3.6/dist-packages/starlette/middleware/cors.py", line 142, in simple_response
        await self.app(scope, receive, send)
      File "/usr/local/lib/python3.6/dist-packages/starlette/exceptions.py", line 82, in __call__
        raise exc from None
      File "/usr/local/lib/python3.6/dist-packages/starlette/exceptions.py", line 71, in __call__
        await self.app(scope, receive, sender)
      File "/usr/local/lib/python3.6/dist-packages/starlette/routing.py", line 566, in __call__
        await route.handle(scope, receive, send)
      File "/usr/local/lib/python3.6/dist-packages/starlette/routing.py", line 376, in handle
        await self.app(scope, receive, send)
      File "/usr/local/lib/python3.6/dist-packages/starlette/staticfiles.py", line 94, in __call__
        await response(scope, receive, send)
      File "/usr/local/lib/python3.6/dist-packages/starlette/responses.py", line 314, in __call__
        "more_body": more_body,
      File "/usr/local/lib/python3.6/dist-packages/starlette/exceptions.py", line 68, in sender
        await send(message)
      File "/usr/local/lib/python3.6/dist-packages/starlette/middleware/cors.py", line 148, in send
        await send(message)
      File "/usr/local/lib/python3.6/dist-packages/starlette/middleware/errors.py", line 156, in _send
        await send(message)
      File "/usr/local/lib/python3.6/dist-packages/uvicorn/protocols/http/h11_impl.py", line 483, in send
        output = self.conn.send(event)
      File "/usr/local/lib/python3.6/dist-packages/h11/_connection.py", line 469, in send
        data_list = self.send_with_data_passthrough(event)
      File "/usr/local/lib/python3.6/dist-packages/h11/_connection.py", line 502, in send_with_data_passthrough
        writer(event, data_list.append)
      File "/usr/local/lib/python3.6/dist-packages/h11/_writers.py", line 78, in __call__
        self.send_data(event.data, write)
      File "/usr/local/lib/python3.6/dist-packages/h11/_writers.py", line 98, in send_data
        raise LocalProtocolError("Too much data for declared Content-Length")
    h11._util.LocalProtocolError: Too much data for declared Content-Length
    ERROR:uvicorn.error:Exception in ASGI application
    Traceback (most recent call last):
      File "/usr/local/lib/python3.6/dist-packages/uvicorn/protocols/http/h11_impl.py", line 389, in run_asgi
        result = await app(self.scope, self.receive, self.send)
      File "/usr/local/lib/python3.6/dist-packages/uvicorn/middleware/proxy_headers.py", line 45, in __call__
        return await self.app(scope, receive, send)
      File "/usr/local/lib/python3.6/dist-packages/fastapi/applications.py", line 181, in __call__
        await super().__call__(scope, receive, send)  # pragma: no cover
      File "/usr/local/lib/python3.6/dist-packages/starlette/applications.py", line 111, in __call__
        await self.middleware_stack(scope, receive, send)
      File "/usr/local/lib/python3.6/dist-packages/starlette/middleware/errors.py", line 181, in __call__
        raise exc from None
      File "/usr/local/lib/python3.6/dist-packages/starlette/middleware/errors.py", line 159, in __call__
        await self.app(scope, receive, _send)
      File "/usr/local/lib/python3.6/dist-packages/starlette/middleware/cors.py", line 86, in __call__
        await self.simple_response(scope, receive, send, request_headers=headers)
      File "/usr/local/lib/python3.6/dist-packages/starlette/middleware/cors.py", line 142, in simple_response
        await self.app(scope, receive, send)
      File "/usr/local/lib/python3.6/dist-packages/starlette/exceptions.py", line 82, in __call__
        raise exc from None
      File "/usr/local/lib/python3.6/dist-packages/starlette/exceptions.py", line 71, in __call__
        await self.app(scope, receive, sender)
      File "/usr/local/lib/python3.6/dist-packages/starlette/routing.py", line 566, in __call__
        await route.handle(scope, receive, send)
      File "/usr/local/lib/python3.6/dist-packages/starlette/routing.py", line 376, in handle
        await self.app(scope, receive, send)
      File "/usr/local/lib/python3.6/dist-packages/starlette/staticfiles.py", line 94, in __call__
        await response(scope, receive, send)
      File "/usr/local/lib/python3.6/dist-packages/starlette/responses.py", line 314, in __call__
        "more_body": more_body,
      File "/usr/local/lib/python3.6/dist-packages/starlette/exceptions.py", line 68, in sender
        await send(message)
      File "/usr/local/lib/python3.6/dist-packages/starlette/middleware/cors.py", line 148, in send
        await send(message)
      File "/usr/local/lib/python3.6/dist-packages/starlette/middleware/errors.py", line 156, in _send
        await send(message)
      File "/usr/local/lib/python3.6/dist-packages/uvicorn/protocols/http/h11_impl.py", line 483, in send
        output = self.conn.send(event)
      File "/usr/local/lib/python3.6/dist-packages/h11/_connection.py", line 469, in send
        data_list = self.send_with_data_passthrough(event)
      File "/usr/local/lib/python3.6/dist-packages/h11/_connection.py", line 502, in send_with_data_passthrough
        writer(event, data_list.append)
      File "/usr/local/lib/python3.6/dist-packages/h11/_writers.py", line 78, in __call__
        self.send_data(event.data, write)
      File "/usr/local/lib/python3.6/dist-packages/h11/_writers.py", line 98, in send_data
        raise LocalProtocolError("Too much data for declared Content-Length")
    h11._util.LocalProtocolError: Too much data for declared Content-Length
    

    My questions are; Is this expected behavior? Is the software not robust/mature enough at this stage? Is there something wrong with how I have configured the docker images or how I am running them? Is it the Nano which is just not powerful enough (e.g. in terms of pfs, or for running both processor and serving front-end)?

    I only ask because I seem to recall that the simpler version I used some months back worked better for me. The front-end was super trivial, but it didn't suffer from these issues. I'm happy to roll back to that older version if required, but I first wanted to check whether this is something that can be addressed.
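For context on the traceback: h11 enforces HTTP/1.1 body framing, so once a response header declares a Content-Length, any attempt to write more body bytes than declared raises LocalProtocolError. A minimal pure-Python sketch of that invariant (a hypothetical write_body helper, not h11's actual code):

```python
class ContentLengthError(Exception):
    """Raised when a body exceeds its declared Content-Length (mirrors h11's check)."""

def write_body(declared_length: int, chunks: list) -> bytes:
    # Accumulate body chunks, refusing to exceed the declared Content-Length --
    # the same invariant h11 enforces when writing a response body.
    sent = b""
    for chunk in chunks:
        if len(sent) + len(chunk) > declared_length:
            raise ContentLengthError("Too much data for declared Content-Length")
        sent += chunk
    return sent
```

One plausible cause here is a static file changing size on disk between the moment the Content-Length header is computed and the moment the body is streamed, which would make this a race in the serving path rather than a Jetson Nano performance limit.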

    Many thanks

    opened by rlsse 5
  • How to use PI camera V2.0 as a video input?

    Thanks for such a great and useful project.

    Recently, I've been trying to test it with my Pi Camera V2.0 (CSI camera) on a Jetson Nano with JetPack 4.3, Ubuntu 18.04, and CUDA 10.0.

    I modified the Nano Dockerfile you provide so that it does not run the default command. I then tried to run the GStreamer command inside the container, but it failed with the error shown in the attached screenshot.

    The same command works on the same machine outside the Docker container.

    Could you help me to get it running with this CSI-Camera?

    opened by mhaboali 5
  • Import error for Detector

    I am trying to run Smart Social Distancing on a Jetson TX2 and am facing the issue below. Please advise. Thank you.

    Env -

    1. Jetson Tx2
    2. Jetpack 4.3
    3. CUDA 10.2

    Steps to reproduce -

    1. Follow all the instructions for the Jetson TX2.
    2. Run the Docker command below.
    docker run -it --runtime nvidia --privileged -p HOST_PORT:8000 -v "$PWD/data":/repo/data neuralet/smart-social-distancing:latest-jetson-nano
    

    Output -

    [screenshots of the error output]

    Expected: a live feed with social distancing overlays for the video present in the data folder. I also tried to correct the download path, as per this PR.

    opened by ajay443 5
  • Face-Mask Detector/Classifier

    Train a face-mask classifier/detector model

    opened by mrn-mln 4
  • docker: invalid publish opts format (should be name=value but got 'HOST_PORT:8000')

    Hi, I am new to Docker and followed the instructions exactly:

    $ sudo docker run -it --runtime nvidia --privileged -p HOST_PORT:8000 -v "$PWD/data":/data neuralet/smart-social-distancing:latest-jetson-nano

    Here is what I got:

    docker: invalid publish opts format (should be name=value but got 'HOST_PORT:8000')

    Please advise on what I might have missed.

    thank you
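For anyone hitting this: HOST_PORT in the README command is a placeholder, not a literal value. Substituting any free port on the host (8080 here, chosen arbitrarily) produces a valid -p mapping:

```shell
# HOST_PORT is a placeholder; pick any free port on the host, e.g. 8080.
HOST_PORT=8080
# -p maps the chosen host port to the container's port 8000:
echo "-p ${HOST_PORT}:8000"
```

With a real port substituted, the flag in the README command becomes, e.g., -p 8080:8000, and the web app is then reachable on that host port.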

    opened by mdinata 4
  • WIP: Remove deprecated features

    • Remove area definition
    • Remove metrics
    • Remove notifications
    opened by pgrill 0
  • Bump fastapi from 0.61.1 to 0.65.2 in /api

    Bumps fastapi from 0.61.1 to 0.65.2.

    Release notes

    Sourced from fastapi's releases.

    0.65.2

    Security fixes

    This change fixes a CSRF security vulnerability when using cookies for authentication in path operations with JSON payloads sent by browsers.

    In versions lower than 0.65.2, FastAPI would try to read the request payload as JSON even if the content-type header sent was not set to application/json or a compatible JSON media type (e.g. application/geo+json).

    So, a request with a content type of text/plain containing JSON data would be accepted and the JSON data would be extracted.

    But requests with content type text/plain are exempt from CORS preflights, for being considered Simple requests. So, the browser would execute them right away including cookies, and the text content could be a JSON string that would be parsed and accepted by the FastAPI application.

    See CVE-2021-32677 for more details.
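The fix described above boils down to checking the request's media type before parsing the body as JSON. A rough sketch of such a check (an illustrative helper, not FastAPI's actual implementation):

```python
def is_json_media_type(content_type: str) -> bool:
    # Strip parameters such as "; charset=utf-8", then accept application/json
    # or any "+json" structured-syntax suffix (e.g. application/geo+json).
    media_type = content_type.split(";")[0].strip().lower()
    return media_type == "application/json" or media_type.endswith("+json")
```

Under such a check, a text/plain body is rejected even if it contains valid JSON, which closes the CSRF vector: browsers send cross-site text/plain requests without a CORS preflight, whereas application/json requests trigger one.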

    Thanks to Dima Boger for the security report! 🙇🔒

    Internal

    0.65.1

    Security fixes

    0.65.0

    Breaking Changes - Upgrade

    • ⬆️ Upgrade Starlette to 0.14.2, including internal UJSONResponse migrated from Starlette. This includes several bug fixes and features from Starlette. PR #2335 by @​hanneskuettner.

    Translations

    Internal

    0.64.0

    Features

    ... (truncated)

    Commits

    Dependabot compatibility score

    Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


    Dependabot commands and options

    You can trigger Dependabot actions by commenting on this PR:

    • @dependabot rebase will rebase this PR
    • @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
    • @dependabot merge will merge this PR after your CI passes on it
    • @dependabot squash and merge will squash and merge this PR after your CI passes on it
    • @dependabot cancel merge will cancel a previously requested merge and block automerging
    • @dependabot reopen will reopen this PR if it is closed
    • @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
    • @dependabot ignore this major version will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this minor version will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this dependency will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
    • @dependabot use these labels will set the current labels as the default for future PRs for this repo and language
    • @dependabot use these reviewers will set the current reviewers as the default for future PRs for this repo and language
    • @dependabot use these assignees will set the current assignees as the default for future PRs for this repo and language
    • @dependabot use this milestone will set the current milestone as the default for future PRs for this repo and language

    You can disable automated security fix PRs for this repo from the Security Alerts page.

    dependencies python 
    opened by dependabot[bot] 0
  • can i host my own copy of the new react dashboard/frontend?

    https://github.com/neuralet/smart-social-distancing/issues/25#issuecomment-762982017

    Where is the code to host my own copy of the frontend or is that totally deprecated and the new dashboard/frontend is closed source?

    • DashboardURL: Sets the URL where the frontend is running. Unless you are using a custom domain, you should keep this value as https://app.lanthorn.ai/.
    • DashboardAuthorizationToken: Configures the Authorization header required to sync the processor and the dashboard.
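In the processor's ini-style config, those two settings would look roughly like this (the section name and token value are illustrative assumptions, not copied from the repo):

```ini
[App]
; Illustrative values only
DashboardURL = https://app.lanthorn.ai/
DashboardAuthorizationToken = <your-token-here>
```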
    opened by saket424 1
  • deploy infrastructure with terraform in AWS

    @pgrill @mats-claassen

    opened by agustinasierralima 0
  • Include Jetpack 4.4 support for jetson nano

    The master branch is broken because the Jetson Nano and Jetson TX2 devices use the same config file but different Dockerfiles. The TX2 image uses Jetpack 4.4, while the Nano image uses Jetpack 4.3.

    To fix the issue, I see two approaches:

    • Split the config file into 2 (https://github.com/neuralet/smart-social-distancing/pull/117)
    • Make the jetson-nano image compatible with Jetpack 4.4.

    @mhejrati , what do you think?

    opened by pgrill 0
  • Alphapose

    opened by alpha-carinae29 0
  • Master is broken for x86-gpu

    Hi,

    Master does not work well for x86-gpu. It seems the prints from the Detector class are not showing (the first thing that made me think it is not working), but I can see some output in the Lanthorn UI, which stops after a few frames, after which the fps drops to zero.

    The weird thing is that lines like:

    INFO:libs.distancing:processed frame 1 for /repo/data/softbio_vid.mp4
    INFO:openpifpaf.decoder.generator.cifcaf:3 annotations: [13, 10, 9]
    ...

    are showing for x86, but not for x86-gpu (even though it processes some frames initially). What has changed?

    opened by JsonSadler 5
  • Adaptive learning

    Add Adaptive Learning support for the USB TPU. For more information about Adaptive Learning, visit here.

    opened by alpha-carinae29 0
  • Add Pose Estimation Support for Jetsons

    I tried to run existing pose estimation algorithms on Jetson devices, as @alpha-carinae29 has done for x86, GPU, and Coral. I'm going to post my results and issues here.

    opened by undefined-references 2
  • Update README.md

    opened by robert-p97 1
Releases(0.7.0)
  • 0.7.0(Apr 13, 2021)

    Added:

    • Partial (read-only) support for global Area (#150)
    • Occupancy rules for business hours (#154, #160)
    • Include processor capabilities in the config/info endpoint (#159)
    • Include authorization header in webhook logger (#161)
    • Support different models per camera (#148)
    • Estimate occupancy with In/Out metrics (#157)

    Fixed:

    • Add .keep in occupancy log directories (#155)
    Source code(tar.gz)
    Source code(zip)
  • 0.6.0(Mar 23, 2021)

    Added:

    • Draw Region of Interest contour in live feed (#136)
    • List overview of camera configuration (#138)
    • In/Out Boundary metric (#135, #139, #144, #147)
    • Unitary tests for the metrics endpoints (#140)
    • Support for Docker Compose (#149)

    Fixed:

    • Bad reference in config files (#137)
    • Area logger sometimes tried to start before processing started (causing error) (#141)
    • Bad reference in accessing the cameras of a config (#142)
    • Crash when logging without a classifier (#143)
    Source code(tar.gz)
    Source code(zip)
  • 0.5.0(Feb 10, 2021)

    Added:

    • Camera IDs created with the API are now generated by the controller (#127)
    • Unitary tests for the API Router (#105)
    • Unitary tests for the Camera Router (#130)
    • Endpoints to define a Region of Interest in a given camera (#131)

    Updated:

    • Readme: Supported video formats (#126)
    • Readme: DockerHub information (#129)
    • Moved references from beta.lanthorn.ai to app.lanthorn.ai (#132)
    • Readme: Added documentation on how to quickly get the processor running for a PoC (#133)
    • Polished configs making all default IDs 0 (instead of default or area0) (#133)

    Fixed:

    • Broken Posenet reference on Coral (#128)
    Source code(tar.gz)
    Source code(zip)
  • 0.4.0(Jan 15, 2021)

    Added:

    • Backup files into S3 (#106)
    • Yolov3 detector for x86 devices (#103)
    • Openpifpaf and OFM face-mask classifier TensorRT support for Jetson TX2 (#101)
    • OAuth in endpoints (#112)
    • Measure performance (#123)

    Updated:

    • Screenshot logger (#111)
    • Video live feed (#110)
    • Readme (#118)
    • Export endpoints (#114)
    • Small refactor in API responses (#120)
    • Split config-jetson.ini into config-jetson-nano.ini and config-jetson-tx2.ini (#117)

    Fixed:

    • Fixed minor issues in classifier (#102)
    • Fixed minor issues in occupancy metrics (#115)
    Source code(tar.gz)
    Source code(zip)
  • 0.3.0(Dec 22, 2020)

    Added:

    • Tracker parameters to GPU config (#89)
    • Parameter reboot_processor on all endpoints that update config (#87)
    • Enable slack notifications per entity (#86)
    • Openpifpaf TensorRT support (#91)
    • Global reporting (#92)
    • Add export_all endpoint (#94)
    • Occupancy metrics (#97, #104)
    • Allow retrieving and updating all the sections in the configuration file using the API (#98)

    Updated:

    • Refactor video processing pipeline (#95)
    • Extend config api (#98)
    • Use tracking information to calculate social distancing metrics (#97)
    • Reports are now generated by hour (#97)

    Fixed:

    • Improved tracking (#91)
    • Fixed minor issues at classifier inference (#96)
    • Camera image endpoints capture default image (#100)
    Source code(tar.gz)
    Source code(zip)
  • 0.2.0(Nov 20, 2020)

    Added:

    • Support for running on x86 with GPU (#72)
    • Endpoint to get version, device and whether the processor has been set up (#84)
    • Endpoints to export raw data (#74)
    • Improve fault tolerance (#82)

    Updated:

    • Documentation in Readme (several PRs, mainly #73)
    • Refactored Endpoints to not end with / (#76)
    • Some improvements in face mask detection like adding a label on top of bounding boxes (#77)
    • Improved Object tracker (IOU tracker added) (#79)

    Fixed:

    • An error in face anonymizer when using PoseNet (#80, #81)

    Removed:

    • Deprecated frontend and ui backend (#73)
    Source code(tar.gz)
    Source code(zip)
  • 0.1.0(Nov 18, 2020)

    This is the first release of the Smart Social Distancing app. The app is dockerized and can run on Coral Dev Board, Coral USB Accelerator, Jetson Nano, x86 or Openvino. It supports close contact detection, occupancy alerts and facemask detection on multiple video sources.

    It also includes a frontend React app and a separate backend that manages some endpoints, both of which have been deprecated and will be removed in future versions.

    Source code(tar.gz)
    Source code(zip)
Owner
Neuralet
Neuralet is an open-source platform for edge deep learning models on GPU, TPU, and more.