This repository allows you to anonymize sensitive information in images and videos. The solution is fully compatible with the DL-based training/inference solutions that we have already published or will publish for Object Detection and Semantic Segmentation.

Overview

BMW-Anonymization-Api

Data privacy and individuals’ anonymity are and always have been a major concern for data-driven companies.

Therefore, we designed and implemented an anonymization API that localizes and obfuscates (i.e. hides) sensitive information in images/videos in order to preserve the individuals' anonymity. The main features of our anonymization tool are the following:

  • Agnostic in terms of localization techniques: our API currently supports semantic segmentation and object detection.
  • Modular in terms of sensitive information: the user can train a Deep Learning (DL) model for object detection or semantic segmentation (the training GUI will be published soon) to localize the sensitive information she/he wishes to protect, e.g., an individual's face or body, personal belongings, vehicles...
  • Scalable in terms of anonymization techniques: our API currently supports pixelating, blurring, and blackening (masking). Additional anonymization techniques can be configured as described below. For the highest level of privacy, we recommend using the blackening technique with degree 1.
  • Supports DL-based models optimized via the Intel® OpenVINO™ toolkit v2021.1 for CPU usage: DL-based models optimized and deployed via the OpenVINO Segmentation Inference API and the OpenVINO Detection Inference API can also be used.
  • Compatible with the BMW Deep Learning tools: DL models trained via our training and deployed via our inference APIs are compatible with this anonymization API.


General Architecture & Deployment Mode:

Our anonymization API receives an image along with a JSON object in which the user mainly specifies:

  • The sensitive information she/he wishes to obfuscate.
  • The anonymization technique.
  • The anonymization degree.
  • The localization technique.

You can deploy the anonymization API either:

  • As a standalone docker container, which can be connected to other inference APIs (object detection or semantic segmentation) that are themselves deployed as standalone docker containers.
  • As a network of docker containers, along with other inference APIs running on the same machine, via docker-compose (please check the following link for the docker-compose deployment).

Prerequisites:

  • docker
  • docker-compose

Check for prerequisites

To check if docker-ce is installed:

docker --version

To check if docker-compose is installed:

docker-compose --version

Install prerequisites

Ubuntu

To install Docker and Docker Compose on Ubuntu, please follow the link.

Windows 10

To install Docker on Windows, please follow the link.

P.S.: For Windows users, open the Docker Desktop menu by clicking the Docker icon in the notifications area. Select Settings, and then the Advanced tab to adjust the resources available to Docker Engine.

Build The Docker Image

As mentioned before, this container can be deployed using either docker or docker-compose.

  • If you wish to deploy this API using docker-compose, please refer to the following link. After deploying the API with docker-compose, please return to this documentation for further information about the API Endpoints and User configuration file sample sections.

  • If you wish to deploy this API using docker, please continue with the following docker build and run commands.

In order to build the project run the following command from the project's root directory:

 docker build -t anonymization_api -f docker/dockerfile .

Build behind a proxy

In order to build the image behind a proxy use the following command in the project's root directory:

docker build --build-arg http_proxy='your_proxy' --build-arg https_proxy='your_proxy' -t anonymization_api -f ./docker/dockerfile .

In case of a build failure, the python:3.6 base image should be updated by pulling its latest version:

docker pull python:3.6

Run the docker container

To run the API, go to the API's directory and run the following:

Using Linux-based docker:

sudo docker run -itv $(pwd)/src/main:/main -v $(pwd)/jsonFiles:/jsonFiles -p <port_of_your_choice>:4343 anonymization_api
Behind a proxy:
sudo docker run -itv $(pwd)/src/main:/main -v $(pwd)/jsonFiles:/jsonFiles  --env HTTP_PROXY="" --env HTTPS_PROXY="" --env http_proxy="" --env https_proxy="" -p 5555:4343 anonymization_api

Using Windows-based docker:

docker run -itv ${PWD}/src/main:/main -v ${PWD}/jsonFiles:/jsonFiles -p <port_of_your_choice>:4343 anonymization_api

The API will start automatically, and the service will listen for HTTP requests on the chosen port.

API Endpoints

To see all available endpoints, open your favorite browser and navigate to:

http://<machine_IP>:<docker_host_port>/docs

Endpoints summary

Configuration

/set_url (POST)

Set the URL of the inference API that you wish to connect to the Anonymization API. If the specified URL is unreachable due to connection problems, it will not be added to the JSON url_configuration file. The URL should be specified in the following format "http://ip:port/".
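As a quick illustration, such a request could be issued with curl. The snippet below is only a sketch: it assumes the target URL is passed as a query parameter named url, so please confirm the exact parameter name and request schema on the interactive /docs page of your deployment.

# Hypothetical example: register an inference API listening on port 5555.
# The "url" query parameter name is an assumption; verify it under /docs.
curl -X POST "http://<machine_IP>:<docker_host_port>/set_url?url=http://<inference_api_ip>:5555/"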

/list_urls (GET)

Returns the URLs of the inference APIs that were already configured via the /set_url POST request.
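For example, assuming the API is reachable at the host and port chosen when running the container:

curl -X GET "http://<machine_IP>:<docker_host_port>/list_urls"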

/remove_url (POST)

Removes the specified URL from the JSON url_configuration file.

/remove_all_urls (POST)

Removes all available URLs from the JSON url_configuration file.

/available_methods/ (GET)

After setting the inference URLs via the /set_url request, the user can view the Anonymization API's configuration by issuing the /available_methods request. In particular, the user can view (i) the supported sensitive information (label_names), (ii) the supported localization techniques, (iii) the inference URLs and (iv) the DL model names configured in the deployed anonymization API.
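A minimal request sketch is shown below; the exact layout of the returned configuration is best inspected via the interactive /docs page.

curl -X GET "http://<machine_IP>:<docker_host_port>/available_methods/"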

Anonymization

/anonymize/ (POST)

Anonymizes the input image based on the user's JSON configuration file.
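A hedged curl sketch of such a request is shown below. The multipart field names image and configuration are assumptions made for illustration; please check the /docs page for the exact field names expected by your deployment.

# Hypothetical example: anonymize test.jpg using the settings in user_configuration.json
# and save the returned result as anonymized.jpg. The field names are assumptions.
curl -X POST "http://<machine_IP>:<docker_host_port>/anonymize/" \
  -F "image=@test.jpg" \
  -F "configuration=@user_configuration.json" \
  -o anonymized.jpg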

/anonymize_video/ (POST)

Anonymizes a video based on the user's sensitive information and saves the anonymized video in src/main/anonymized_videos under <original_video_name>_TIMESTAMP.mp4.
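The sketch below shows how such a request might look with curl. As above, the multipart field names video and configuration are illustrative assumptions; note that the anonymized result is saved server-side in src/main/anonymized_videos rather than returned in the response.

# Hypothetical example: anonymize input.mp4; the output is written to
# src/main/anonymized_videos/<original_video_name>_TIMESTAMP.mp4. The field names are assumptions.
curl -X POST "http://<machine_IP>:<docker_host_port>/anonymize_video/" \
  -F "video=@input.mp4" \
  -F "configuration=@user_configuration.json"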

Video Anonymization Time

Anonymizing a video can take a while. You can estimate the expected processing time with the following formula: Video_Anonymization_Time = Video_Length x Number_Of_Frames_Per_Second x Anonymization_Time_Of_Each_Frame. For example, a 60-second clip at 30 frames per second, at roughly 0.27 s per frame (cf. the object detection benchmark below), would take about 60 x 30 x 0.27 ≈ 486 seconds, i.e. around 8 minutes.

User configuration file sample

In order to anonymize an image, the user should specify the relevant details in the user's JSON configuration file.

Please check a sample in the below image:

Note that the URL field is optional: you can add it if you want to use a specific URL of a running API, as shown for the first sensitive info in this file. If this field is not specified, the URL defined in the url_configuration.json file will be used by default, provided it meets all the requirements.
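For orientation only, a minimal configuration might look like the sketch below. All field names used here (sensitive_info, label_name, localization_technique, technique, degree, url) are illustrative assumptions; the sample image above and the add new technique documentation show the exact schema expected by the API.

# Hypothetical sketch of a user configuration file (field names are assumptions):
cat > user_configuration.json << 'EOF'
{
  "sensitive_info": [
    {
      "label_name": "person",
      "localization_technique": "semantic segmentation",
      "technique": "blackening",
      "degree": 1,
      "url": "http://<inference_api_ip>:<port>/"
    },
    {
      "label_name": "car",
      "localization_technique": "object detection",
      "technique": "blurring",
      "degree": 0.5
    }
  ]
}
EOF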

To add a new technique to the API:

Please refer to the add new technique documentation at the following link for more information on how to add a new anonymization technique to the API with common and custom labels.

Benchmark

Object Detection

GPU | Network | Width | Height | Inference time | Anonymization time | Total
--- | --- | --- | --- | --- | --- | ---
Titan RTX | yolov4 | 640 | 768 | 0.2 s | 0.07 s | 0.27 s
Titan RTX | yolov4 | 1024 | 768 | 0.4 s | 0.14 s | 0.54 s
Titan RTX | yolov4 | 2048 | 1024 | 1.2 s | 0.6 s | 1.8 s
Titan RTX | yolov4 | 3840 | 2160 | 4.8 s | 0.6 s | 5.4 s

Semantic Segmentation

GPU | Network | Width | Height | Inference time | Anonymization time | Total
--- | --- | --- | --- | --- | --- | ---
Titan RTX | psp resnet 101 | 640 | 768 | 0.2 s | 0.8 s | 1 s
Titan RTX | psp resnet 101 | 1024 | 768 | 0.3 s | 0.8 s | 1.1 s
Titan RTX | psp resnet 101 | 2048 | 1024 | 0.9 s | 1 s | 1.9 s
Titan RTX | psp resnet 101 | 3840 | 2160 | 2 s | 3 s | 5 s

Possible Error

  • You may encounter an error at startup when running the docker container, in both the standalone and the docker-compose versions, indicating that a configured URL cannot be reached.

  • In case you do, please make sure that the URLs of the inference APIs listed in jsonFiles/url_configuration.json are still reachable. A possible solution is to empty jsonFiles/url_configuration.json as shown below before starting the container:

    {
    "urls": [
    ]
    }
    

Acknowledgments

Ghenwa Aoun, BMW Innovation Lab, Munich, Germany

Antoine Charbel, inmind.ai, Beirut, Lebanon

Roy Anwar, BMW Innovation Lab, Munich, Germany

Fady Dib, BMW Innovation Lab, Munich, Germany

Jimmy Tekli, BMW Innovation Lab, Munich, Germany
