A tutorial showing how to train, convert, and run TensorFlow Lite object detection models on Android devices, the Raspberry Pi, and more!

Overview

TensorFlow-Lite-Object-Detection-on-Android-and-Raspberry-Pi

A guide showing how to train TensorFlow Lite object detection models and run them on Android, the Raspberry Pi, and more!

Introduction

TensorFlow Lite is an optimized framework for deploying lightweight deep learning models on resource-constrained edge devices. TensorFlow Lite models have faster inference time and require less processing power, so they can be used to obtain faster performance in realtime applications. This guide provides step-by-step instructions for how to train a custom TensorFlow Object Detection model, convert it into an optimized format that can be used by TensorFlow Lite, and run it on Android phones or the Raspberry Pi.

The guide is broken into three major portions. Each portion will have its own dedicated README file in this repository.

  1. How to Train, Convert, and Run Custom TensorFlow Lite Object Detection Models on Windows 10 <--- You are here!
  2. How to Run TensorFlow Lite Object Detection Models on the Raspberry Pi (with optional Coral USB Accelerator)
  3. How to Run TensorFlow Lite Object Detection Models on Android Devices (Still not complete)

This repository also contains Python code for running the newly converted TensorFlow Lite model to perform detection on images, videos, or webcam feeds.

A Note on Versions

I used TensorFlow v1.13 while creating this guide, because TF v1.13 is a stable version that has great support from Anaconda. I will periodically update the guide to make sure it works with newer versions of TensorFlow.

The TensorFlow team is always hard at work releasing updated versions of TensorFlow. I recommend picking one version and sticking with it for all your TensorFlow projects. Every part of this guide should work with newer or older versions, but you may need to use different versions of the tools needed to run or build TensorFlow (CUDA, cuDNN, bazel, etc). Google has provided a list of build configurations for Linux, macOS, and Windows that show which tool versions were used to build and run each version of TensorFlow.

Part 1 - How to Train, Convert, and Run Custom TensorFlow Lite Object Detection Models on Windows 10

Part 1 of this guide gives instructions for training and deploying your own custom TensorFlow Lite object detection model on a Windows 10 PC. The guide is based on the tutorial in the TensorFlow Object Detection repository, but it gives more detailed instructions and is written specifically for Windows. (It will work on Linux too with some minor changes, which I leave as an exercise for the Linux user.)

There are three primary steps to training and deploying a TensorFlow Lite model:

  1. Train a quantized SSD-MobileNet model using TensorFlow, and export frozen graph for TensorFlow Lite
  2. Build TensorFlow from source on your PC
  3. Use TensorFlow Lite Optimizing Converter (TOCO) to create optimized TensorFlow Lite model

This portion is a continuation of my previous guide: How To Train an Object Detection Model Using TensorFlow on Windows 10. I'll assume you have already set up TensorFlow to train a custom object detection model as described in that guide, including:

  • Setting up an Anaconda virtual environment for training
  • Setting up TensorFlow directory structure
  • Gathering and labeling training images
  • Preparing training data (generating TFRecords and label map)

This tutorial uses the same Anaconda virtual environment, files, and directory structure that was set up in the previous one.

Through the course of the guide, I'll use a bird, squirrel, and raccoon detector model I've been working on as an example. The intent of this detection model is to watch a bird feeder, and record videos of birds while triggering an alarm if a squirrel or raccoon is stealing from it! I'll show the steps needed to train, convert, and run a quantized TensorFlow Lite version of the bird/squirrel/raccoon detector.

Parts 2 and 3 of this guide will go on to show how to deploy this newly trained TensorFlow Lite model on the Raspberry Pi or an Android device. If you're not feeling up to training and converting your own TensorFlow Lite model, you can skip Part 1 and use my custom-trained TFLite BSR detection model (which you can download from Dropbox here) or use the TF Lite starter detection model (taken from https://www.tensorflow.org/lite/models/object_detection/overview) for Part 2 or Part 3.

Step 1: Train Quantized SSD-MobileNet Model and Export Frozen TensorFlow Lite Graph

First, we’ll use transfer learning to train a “quantized” SSD-MobileNet model. Quantized models use 8-bit integer values instead of 32-bit floating-point values within the neural network, allowing them to run much more efficiently on CPUs or specialized TPUs (Tensor Processing Units).
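
To make the idea concrete, here's a minimal NumPy sketch of the affine quantization scheme TensorFlow Lite uses, where each float value is stored as an 8-bit integer together with a scale and zero-point. (This is purely illustrative and not a step in the guide.)

import numpy as np

def quantize(x, scale, zero_point):
    # Map float values to uint8: q = round(x / scale) + zero_point
    q = np.round(x / scale) + zero_point
    return np.clip(q, 0, 255).astype(np.uint8)

def dequantize(q, scale, zero_point):
    # Recover an approximation of the original float values
    return scale * (q.astype(np.float32) - zero_point)

weights = np.array([-0.5, 0.0, 0.25, 1.0], dtype=np.float32)
q = quantize(weights, scale=1.0/128, zero_point=128)
print(q)                            # [ 64 128 160 255]
print(dequantize(q, 1.0/128, 128))  # close to the original float values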

You can also use a standard SSD-MobileNet model (V1 or V2), but it will not run quite as fast as the quantized model. Also, you will not be able to run it on the Google Coral TPU Accelerator. If you’re using an SSD-MobileNet model that has already been trained, you can skip to Step 1d of this guide.

If you get any errors during this process, please look at the FAQ section at the bottom of this guide! It gives solutions to common errors that occur.

As I mentioned previously, this guide assumes you have already followed my previous TensorFlow tutorial and set up the Anaconda virtual environment and full directory structure needed for using the TensorFlow Object Detection API. If you've done so, you should have a folder at C:\tensorflow1\models\research\object_detection that has everything needed for training. (If you used a different base folder name than "tensorflow1", that's fine - just make sure you continue to use that name throughout this guide.)

Here's what your \object_detection folder should look like:

If you don't have this folder, please go to my previous tutorial and work through at least Steps 1 and 2. If you'd like to train your own model to detect custom objects, you'll also need to work through Steps 3, 4, and 5. If you don't want to train your own model but want to practice the process for converting a model to TensorFlow Lite, you can download the quantized MobileNet-SSD model (see next paragraph) and then skip to Step 1d.

Step 1a. Download and extract quantized SSD-MobileNet model

Google provides several quantized object detection models in their detection model zoo. This tutorial will use the SSD-MobileNet-V2-Quantized-COCO model. Download the model here. Note: TensorFlow Lite does NOT support RCNN models such as Faster-RCNN! It only supports SSD models.

Move the downloaded .tar.gz file to the C:\tensorflow1\models\research\object_detection folder. (Henceforth, this folder will be referred to as the “\object_detection” folder.) Unzip the .tar.gz file using a file archiver like WinZip or 7-Zip. After the file has been fully unzipped, you should have a folder called "ssd_mobilenet_v2_quantized_300x300_coco_2019_01_03" within the \object_detection folder.
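
If you'd rather not install an archiver, you can do the same extraction from a Python shell in the \object_detection folder using Python's built-in tarfile module (a minimal sketch, assuming the default download filename):

import tarfile

# Extract the downloaded model archive into the current directory
with tarfile.open('ssd_mobilenet_v2_quantized_300x300_coco_2019_01_03.tar.gz') as archive:
    archive.extractall()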

Step 1b. Configure training

If you're training your own TensorFlow Lite model, make sure the following items from my previous guide have been completed:

  • Train and test images and their XML label files are placed in the \object_detection\images\train and \object_detection\images\test folders
  • train_labels.csv and test_labels.csv have been generated and are located in the \object_detection\images folder
  • train.record and test.record have been generated and are located in the \object_detection folder
  • labelmap.pbtxt file has been created and is located in the \object_detection\training folder
  • proto files in \object_detection\protos have been generated

If you have any questions about these files or don’t know how to generate them, Steps 2, 3, 4, and 5 of my previous tutorial show how they are all created.

Copy the ssd_mobilenet_v2_quantized_300x300_coco.config file from the \object_detection\samples\configs folder to the \object_detection\training folder. Then, open the file using a text editor.

Make the following changes to the ssd_mobilenet_v2_quantized_300x300_coco.config file. Note: The paths must be entered with single forward slashes (NOT backslashes), or TensorFlow will give a file path error when trying to train the model! Also, the paths must be in double quotation marks ( " ), not single quotation marks ( ' ).

  • Line 9. Change num_classes to the number of different objects you want the classifier to detect. For my bird/squirrel/raccoon detector example, there are three classes, so I set num_classes: 3

  • Line 141. Change batch_size: 24 to batch_size: 6. The smaller batch size will prevent OOM (Out of Memory) errors during training.

  • Line 156. Change fine_tune_checkpoint to: "C:/tensorflow1/models/research/object_detection/ssd_mobilenet_v2_quantized_300x300_coco_2019_01_03/model.ckpt"

  • Line 175. Change input_path to: "C:/tensorflow1/models/research/object_detection/train.record"

  • Line 177. Change label_map_path to: "C:/tensorflow1/models/research/object_detection/training/labelmap.pbtxt"

  • Line 181. Change num_examples to the number of images you have in the \images\test directory. For my bird/squirrel/raccoon detector example, there are 582 test images, so I set num_examples: 582.

  • Line 189. Change input_path to: "C:/tensorflow1/models/research/object_detection/test.record"

  • Line 191. Change label_map_path to: "C:/tensorflow1/models/research/object_detection/training/labelmap.pbtxt"

Save and exit the training file after the changes have been made.
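
For reference, here is roughly what the edited portions of the config file will look like after the changes. (Exact line numbers may shift slightly between config versions, and the paths assume the C:\tensorflow1 base folder.)

num_classes: 3
...
batch_size: 6
...
fine_tune_checkpoint: "C:/tensorflow1/models/research/object_detection/ssd_mobilenet_v2_quantized_300x300_coco_2019_01_03/model.ckpt"
...
input_path: "C:/tensorflow1/models/research/object_detection/train.record"
label_map_path: "C:/tensorflow1/models/research/object_detection/training/labelmap.pbtxt"
...
num_examples: 582
...
input_path: "C:/tensorflow1/models/research/object_detection/test.record"
label_map_path: "C:/tensorflow1/models/research/object_detection/training/labelmap.pbtxt"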

Step 1c. Run training in Anaconda virtual environment

All that's left to do is train the model! First, move the “train.py” file from the \object_detection\legacy folder into the main \object_detection folder. (See the FAQ for why I am using the legacy train.py script rather than model_main.py for training.)

Then, open a new Anaconda Prompt window by searching for “Anaconda Prompt” in the Start menu and clicking on it. Activate the “tensorflow1” virtual environment (which was set up in my previous tutorial) by issuing:

activate tensorflow1

Then, set the PYTHONPATH environment variable by issuing:

set PYTHONPATH=C:\tensorflow1\models;C:\tensorflow1\models\research;C:\tensorflow1\models\research\slim

Next, change directories to the \object_detection folder:

cd C:\tensorflow1\models\research\object_detection

Finally, train the model by issuing:

python train.py --logtostderr --train_dir=training/ --pipeline_config_path=training/ssd_mobilenet_v2_quantized_300x300_coco.config

If everything was set up correctly, the model will begin training after a couple minutes of initialization.

Allow the model to train until the loss consistently drops below 2. For my bird/squirrel/raccoon detector model, this took about 9000 steps, or 8 hours of training. (Time will vary depending on how powerful your CPU and GPU are. Please see Step 6 of my previous tutorial for more information on training and an explanation of how to view the progress of the training job using TensorBoard.)

Once training is complete (i.e. the loss has consistently dropped below 2), press Ctrl+C to stop training. The latest checkpoint will be saved in the \object_detection\training folder, and we will use that checkpoint to export the frozen TensorFlow Lite graph. Take note of the checkpoint number of the model.ckpt file in the training folder (i.e. model.ckpt-XXXX), as it will be used later.
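
If you're not sure which checkpoint is the highest-numbered one, you can list the checkpoint files from the \object_detection folder by issuing:

dir training\model.ckpt-*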

Step 1d. Export frozen inference graph for TensorFlow Lite

Now that training has finished, the model can be exported for conversion to TensorFlow Lite using the export_tflite_ssd_graph.py script. First, create a folder in \object_detection called “TFLite_model” by issuing:

mkdir TFLite_model

Next, let’s set up some environment variables so the commands are easier to type out. Issue the following commands in Anaconda Prompt. (Note, the XXXX in the second command should be replaced with the highest-numbered model.ckpt file in the \object_detection\training folder.)

set CONFIG_FILE=C:\\tensorflow1\models\research\object_detection\training\ssd_mobilenet_v2_quantized_300x300_coco.config
set CHECKPOINT_PATH=C:\\tensorflow1\models\research\object_detection\training\model.ckpt-XXXX
set OUTPUT_DIR=C:\\tensorflow1\models\research\object_detection\TFLite_model

Now that those are set up, issue this command to export the model for TensorFlow Lite:

python export_tflite_ssd_graph.py --pipeline_config_path=%CONFIG_FILE% --trained_checkpoint_prefix=%CHECKPOINT_PATH% --output_directory=%OUTPUT_DIR% --add_postprocessing_op=true

After the command has executed, there should be two new files in the \object_detection\TFLite_model folder: tflite_graph.pb and tflite_graph.pbtxt.
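
If you'd like to sanity-check the exported graph, you can load it from a Python shell in the "tensorflow1" environment and inspect its nodes. This is an optional check, not a required step, and the path assumes the directory structure used in this guide:

import tensorflow as tf

# Load the frozen TFLite-compatible graph and inspect it
graph_def = tf.GraphDef()
with open(r'C:\tensorflow1\models\research\object_detection\TFLite_model\tflite_graph.pb', 'rb') as f:
    graph_def.ParseFromString(f.read())
print(len(graph_def.node))                         # total number of ops in the graph
print([node.name for node in graph_def.node[:5]])  # first few op names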

That’s it! The new inference graph has been trained and exported. This inference graph's architecture and network operations are compatible with TensorFlow Lite's framework. However, the graph still needs to be converted to an actual TensorFlow Lite model. We'll do that in Step 3. First, we have to build TensorFlow from source. On to Step 2!

Step 2. Build TensorFlow From Source

To convert the frozen graph we just exported into a model that can be used by TensorFlow Lite, it has to be run through the TensorFlow Lite Optimizing Converter (TOCO). Unfortunately, to use TOCO, we have to build TensorFlow from source on our computer. To do this, we’ll create a separate Anaconda virtual environment for building TensorFlow.

This part of the tutorial breaks down step-by-step how to build TensorFlow from source on your Windows PC. It follows the Build TensorFlow From Source on Windows instructions given on the official TensorFlow website, with some slight modifications.

This guide will show how to build either the CPU-only version of TensorFlow or the GPU-enabled version of TensorFlow v1.13. If you would like to build a version other than TF v1.13, you can still use this guide, but check the build configuration list and make sure you use the correct package versions.

If you are only building TensorFlow to convert a TensorFlow Lite object detection model, I recommend building the CPU-only version! It takes very little computational effort to export the model, so your CPU can do it just fine without help from your GPU. If you’d like to build the GPU-enabled version anyway, then you need to have the appropriate version of CUDA and cuDNN installed. The TensorFlow installation guide explains how to install CUDA and cuDNN. Check the build configuration list to see which versions of CUDA and cuDNN are compatible with which versions of TensorFlow.

If you get any errors during this process, please look at the FAQ section at the bottom of this guide! It gives solutions to common errors that occur.

Step 2a. Install MSYS2

MSYS2 has some binary tools needed for building TensorFlow. It also automatically converts Windows-style directory paths to Linux-style paths when using Bazel. The Bazel build won’t work without MSYS2 installed!

First, install MSYS2 by following the instructions on the MSYS2 website. Download the msys2-x86_64 executable file and run it. Use the default options for installation. After installing, open MSYS2 and issue:

pacman -Syu

After it's completed, close the window, re-open it, and then issue the following two commands:

pacman -Su
pacman -S patch unzip

This updates MSYS2’s package manager and downloads the patch and unzip packages. Now, close the MSYS2 window. We'll add the MSYS2 binary to the PATH environment variable in Step 2c.

Step 2b. Install Visual C++ Build Tools 2015

Install Microsoft Build Tools 2015 and Microsoft Visual C++ 2015 Redistributable by visiting the Visual Studio older downloads page. Click the “Redistributables and Build Tools” dropdown at the bottom of the list. Download and install the following two packages:

  • Microsoft Build Tools 2015 Update 3 - Use the default installation options in the install wizard. Once you begin installing, it goes through a fairly large download, so it will take a while if you have a slow internet connection. It may give you some warnings saying build tools or redistributables have already been installed. If so, that's fine; just click through them.
  • Microsoft Visual C++ 2015 Redistributable Update 3 – This may give you an error saying the redistributable has already been installed. If so, that’s fine.

Restart your PC after installation has finished.

Step 2c. Update Anaconda and create tensorflow-build environment

Now that the Visual Studio tools are installed and your PC is freshly restarted, open a new Anaconda Prompt window. First, update Anaconda to make sure its package list is up to date. In the Anaconda Prompt window, issue these two commands:

conda update -n base -c defaults conda
conda update --all

The update process may take up to an hour, depending on how long it's been since you installed or updated Anaconda. Next, create a new Anaconda virtual environment called “tensorflow-build”. We’ll work in this environment for the rest of the build process. Create and activate the environment by issuing:

conda create -n tensorflow-build pip python=3.6
conda activate tensorflow-build

After the environment is activated, you should see (tensorflow-build) before the active path in the command window.

Update pip by issuing:

python -m pip install --upgrade pip

We'll use Anaconda's git package to download the TensorFlow repository, so install git using:

conda install -c anaconda git

Next, add the MSYS2 binaries to this environment's PATH variable by issuing:

set PATH=%PATH%;C:\msys64\usr\bin

(If MSYS2 is installed in a different location than C:\msys64, use that location instead.) You’ll have to re-issue this PATH command if you ever close and re-open the Anaconda Prompt window.

Step 2d. Download Bazel and Python package dependencies

Next, we’ll install Bazel and some other Python packages that are used for building TensorFlow. Install the necessary Python packages by issuing:

pip install six numpy wheel
pip install keras_applications==1.0.6 --no-deps
pip install keras_preprocessing==1.0.5 --no-deps

Then install Bazel v0.21.0 by issuing the following command. (If you are building a version of TensorFlow other than v1.13, you may need to use a different version of Bazel.)

conda install -c conda-forge bazel=0.21.0

Step 2e. Download TensorFlow source and configure build

Time to download TensorFlow’s source code from GitHub! Issue the following commands to create a new folder directly in C:\ called “tensorflow-build” and cd into it:

mkdir C:\tensorflow-build
cd C:\tensorflow-build

Then, clone the TensorFlow repository and cd into it by issuing:

git clone https://github.com/tensorflow/tensorflow.git 
cd tensorflow 

Next, check out the branch for TensorFlow v1.13:

git checkout r1.13

The version you check out should match the TensorFlow version you used to train your model in Step 1. If you used a different version than TF v1.13, then replace "1.13" with the version you used. See the FAQs section for instructions on how to check the TensorFlow version you used for training.

Next, we’ll configure the TensorFlow build using the configure.py script. From the C:\tensorflow-build\tensorflow directory, issue:

python ./configure.py

This will initiate a Bazel configuration session. As I mentioned before, you can build either the CPU-only version of TensorFlow or the GPU-enabled version of TensorFlow. If you're only using this TensorFlow build to convert your TensorFlow Lite model, I recommend building the CPU-only version. If you’d still like to build the GPU-enabled version for some other reason, then you need to have the appropriate version of CUDA and cuDNN installed. This guide doesn't cover building the GPU-enabled version of TensorFlow, but you can try following the official build instructions on the TensorFlow website.

Here’s what the configuration session will look like if you are building for CPU only. Basically, press Enter to select the default option for each question.

You have bazel 0.21.0- (@non-git) installed. 

Please specify the location of python. [Default is C:\ProgramData\Anaconda3\envs\tensorflow-build\python.exe]: 
  
Found possible Python library paths: 

  C:\ProgramData\Anaconda3\envs\tensorflow-build\lib\site-packages 

Please input the desired Python library path to use.  Default is [C:\ProgramData\Anaconda3\envs\tensorflow-build\lib\site-packages] 

Do you wish to build TensorFlow with XLA JIT support? [y/N]: N 
No XLA JIT support will be enabled for TensorFlow. 

Do you wish to build TensorFlow with ROCm support? [y/N]: N 
No ROCm support will be enabled for TensorFlow. 
  
Do you wish to build TensorFlow with CUDA support? [y/N]: N 
No CUDA support will be enabled for TensorFlow. 

Once the configuration is finished, TensorFlow is ready to be built!

Step 2f. Build TensorFlow package

Next, use Bazel to create the package builder for TensorFlow. To create the CPU-only version, issue the following command. The build process took about 70 minutes on my computer.

bazel build --config=opt //tensorflow/tools/pip_package:build_pip_package 

Now that the package builder has been created, let’s use it to build the actual TensorFlow wheel file. Issue the following command (it took about 5 minutes to complete on my computer):

bazel-bin\tensorflow\tools\pip_package\build_pip_package C:/tmp/tensorflow_pkg 

This creates the wheel file and places it in C:\tmp\tensorflow_pkg.

Step 2g. Install TensorFlow and test it out!

TensorFlow is finally ready to be installed! Open File Explorer and browse to the C:\tmp\tensorflow_pkg folder. Copy the full filename of the .whl file, and paste it in the following command:

pip3 install C:/tmp/tensorflow_pkg/<Paste full .whl filename here>

That's it! TensorFlow is installed! Let's make sure it installed correctly by opening a Python shell:

python

Once the shell is opened, issue these commands:

>>> import tensorflow as tf
>>> tf.__version__

If everything was installed properly, it will respond with the installed version of TensorFlow. Note: You may get some deprecation warnings after the "import tensorflow as tf" command. As long as they are warnings and not actual errors, you can ignore them! Exit the shell by issuing:

exit()

With TensorFlow installed, we can finally convert our trained model into a TensorFlow Lite model. On to the last step: Step 3!

Step 3. Use TOCO to Create Optimized TensorFlow Lite Model, Create Label Map, Run Model

Although we've already exported a frozen graph of our detection model for TensorFlow Lite, we still need to run it through the TensorFlow Lite Optimizing Converter (TOCO) before it will work with the TensorFlow Lite interpreter. TOCO converts models into an optimized FlatBuffer format that allows them to run efficiently on TensorFlow Lite. We also need to create a new label map before running the model.

Step 3a. Create optimized TensorFlow Lite model

First, we’ll run the model through TOCO to create an optimized TensorFlow Lite model. The TOCO tool lives deep in the C:\tensorflow-build directory, and it will be run from the “tensorflow-build” Anaconda virtual environment that we created and used during Step 2. Meanwhile, the model we trained in Step 1 lives inside the C:\tensorflow1\models\research\object_detection\TFLite_model directory. We’ll create an environment variable called OUTPUT_DIR that points at the correct model directory to make it easier to enter the TOCO command.

If you don't already have an Anaconda Prompt window open with the "tensorflow-build" environment active and working in C:\tensorflow-build, open a new Anaconda Prompt window and issue:

activate tensorflow-build
cd C:\tensorflow-build

Create the OUTPUT_DIR environment variable by issuing:

set OUTPUT_DIR=C:\\tensorflow1\models\research\object_detection\TFLite_model

Next, use Bazel to run the model through the TOCO tool by issuing this command:

bazel run --config=opt tensorflow/lite/toco:toco -- --input_file=%OUTPUT_DIR%/tflite_graph.pb --output_file=%OUTPUT_DIR%/detect.tflite --input_shapes=1,300,300,3 --input_arrays=normalized_input_image_tensor --output_arrays=TFLite_Detection_PostProcess,TFLite_Detection_PostProcess:1,TFLite_Detection_PostProcess:2,TFLite_Detection_PostProcess:3 --inference_type=QUANTIZED_UINT8 --mean_values=128 --std_values=128 --change_concat_input_ranges=false --allow_custom_ops 

Note: If you are using a floating, non-quantized SSD model (e.g. the ssdlite_mobilenet_v2_coco model rather than the ssd_mobilenet_v2_quantized_coco model), the Bazel TOCO command must be modified slightly:

bazel run --config=opt tensorflow/lite/toco:toco -- --input_file=%OUTPUT_DIR%/tflite_graph.pb --output_file=%OUTPUT_DIR%/detect.tflite --input_shapes=1,300,300,3 --input_arrays=normalized_input_image_tensor --output_arrays=TFLite_Detection_PostProcess,TFLite_Detection_PostProcess:1,TFLite_Detection_PostProcess:2,TFLite_Detection_PostProcess:3 --inference_type=FLOAT --allow_custom_ops

If you are using Linux, make sure to use the commands given in the official TensorFlow instructions here. I removed the ' characters from the command, because for some reason they cause errors on Windows!

After the command finishes running, you should see a file called detect.tflite in the \object_detection\TFLite_model directory. This is the model that can be used with TensorFlow Lite!
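
As an optional sanity check, you can load detect.tflite with the TensorFlow Lite interpreter from a Python shell and print its input and output details. (The path assumes the directory structure used in this guide; for the quantized model, the input should be a 1x300x300x3 uint8 tensor.)

import tensorflow as tf

# Load the converted model and verify its input/output tensors
interpreter = tf.lite.Interpreter(model_path=r'C:\tensorflow1\models\research\object_detection\TFLite_model\detect.tflite')
interpreter.allocate_tensors()
print(interpreter.get_input_details())   # expect one 1x300x300x3 uint8 input
print(interpreter.get_output_details())  # expect four outputs: boxes, classes, scores, count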

Step 3b. Create new label map

For some reason, TensorFlow Lite uses a different label map format than classic TensorFlow. The classic TensorFlow label map format looks like this (you can see an example in the \object_detection\data\mscoco_label_map.pbtxt file):

item { 
  name: "/m/01g317" 
  id: 1 
  display_name: "person" 
} 
item { 
  name: "/m/0199g" 
  id: 2 
  display_name: "bicycle" 
} 
item { 
  name: "/m/0k4j" 
  id: 3 
  display_name: "car" 
} 
item { 
  name: "/m/04_sv" 
  id: 4 
  display_name: "motorcycle" 
} 
And so on...

However, the label map provided with the example TensorFlow Lite object detection model looks like this:

person 
bicycle 
car 
motorcycle 
And so on...

Basically, rather than explicitly stating the name and ID number for each class like the classic TensorFlow label map format does, the TensorFlow Lite format just lists each class. To stay consistent with the example provided by Google, I’m going to stick with the TensorFlow Lite label map format for this guide.

Thus, we need to create a new label map that matches the TensorFlow Lite style. Open a text editor and list each class in order of its class number. Then, save the file as “labelmap.txt” in the TFLite_model folder. As an example, here's what the labelmap.txt file for my bird/squirrel/raccoon detector looks like:

bird
squirrel
raccoon
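
If your model has many classes, you don't have to type them out by hand. A short script like the following (a hypothetical helper, assuming your labelmap.pbtxt lists its name: entries in class-ID order) can generate labelmap.txt from the existing label map:

import re

# Read the classic TensorFlow label map
with open(r'C:\tensorflow1\models\research\object_detection\training\labelmap.pbtxt') as f:
    pbtxt = f.read()

# Pull out each name: '...' (or name: "...") entry, in the order it appears
names = re.findall(r"name:\s*['\"]([^'\"]+)['\"]", pbtxt)

# Write one class name per line, TensorFlow Lite style
with open(r'C:\tensorflow1\models\research\object_detection\TFLite_model\labelmap.txt', 'w') as f:
    f.write('\n'.join(names))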

Now we’re ready to run the model!

Step 3c. Run the TensorFlow Lite model!

I wrote three Python scripts to run the TensorFlow Lite object detection model on an image, video, or webcam feed: TFLite_detection_image.py, TFLite_detection_video.py, and TFLite_detection_webcam.py. The scripts are based on the label_image.py example given in the TensorFlow Lite examples GitHub repository.

We’ll download the Python scripts directly from this repository. First, install wget for Anaconda by issuing:

conda install -c menpo wget

Once it's installed, download the scripts by issuing:

wget https://raw.githubusercontent.com/EdjeElectronics/TensorFlow-Lite-Object-Detection-on-Android-and-Raspberry-Pi/master/TFLite_detection_image.py --no-check-certificate
wget https://raw.githubusercontent.com/EdjeElectronics/TensorFlow-Lite-Object-Detection-on-Android-and-Raspberry-Pi/master/TFLite_detection_video.py --no-check-certificate
wget https://raw.githubusercontent.com/EdjeElectronics/TensorFlow-Lite-Object-Detection-on-Android-and-Raspberry-Pi/master/TFLite_detection_webcam.py --no-check-certificate

The following instructions show how to run the webcam, video, and image scripts. These instructions assume your .tflite model file and labelmap.txt file are in the “TFLite_model” folder in your \object_detection directory as per the instructions given in this guide.

If you’d like try using the sample TFLite object detection model provided by Google, simply download it here and unzip it into the \object_detection folder. Then, use --modeldir=coco_ssd_mobilenet_v1_1.0_quant_2018_06_29 rather than --modeldir=TFLite_model when running the script.

For more information on options that can be used while running the scripts, use the -h option when calling the script. For example:

python TFLite_detection_image.py -h

Webcam

Make sure you have a USB webcam plugged into your computer. If you’re on a laptop with a built-in camera, you don’t need to plug in a USB webcam.

From the \object_detection directory, issue:

python TFLite_detection_webcam.py --modeldir=TFLite_model 

After a few moments of initializing, a window will appear showing the webcam feed. Detected objects will have bounding boxes and labels displayed on them in real time.

Video stream

To run the script to detect objects in a video stream (e.g. a remote security camera), issue:

python TFLite_detection_stream.py --modeldir=TFLite_model --streamurl="http://ipaddress:port/stream/video.mjpeg" 

After a few moments of initializing, a window will appear showing the video stream. Detected objects will have bounding boxes and labels displayed on them in real time.

Make sure to update the URL parameter to the one used by your security camera. If the stream is secured, the URL must also include the authentication information.

If the bounding boxes do not match the detected objects, the stream resolution probably wasn't detected correctly. In that case, you can set it explicitly by using the --resolution parameter:

python TFLite_detection_stream.py --modeldir=TFLite_model --streamurl="http://ipaddress:port/stream/video.mjpeg" --resolution=1920x1080

Video

To run the video detection script, issue:

python TFLite_detection_video.py --modeldir=TFLite_model

A window will appear showing consecutive frames from the video, with each object in the frame labeled. Press 'q' to close the window and end the script. By default, the video detection script will open a video named 'test.mp4'. To open a specific video file, use the --video option:

python TFLite_detection_video.py --modeldir=TFLite_model --video='birdy.mp4'

Note: Video detection will run at a slower FPS than realtime webcam detection. This is mainly because loading a frame from a video file requires more processor I/O than receiving a frame from a webcam.

Image

To run the image detection script, issue:

python TFLite_detection_image.py --modeldir=TFLite_model

The image will appear with all objects labeled. Press 'q' to close the image and end the script. By default, the image detection script will open an image named 'test1.jpg'. To open a specific image file, use the --image option:

python TFLite_detection_image.py --modeldir=TFLite_model --image=squirrel.jpg

It can also open an entire folder full of images and perform detection on each one. The folder must contain only image files, or errors will occur. To specify which folder has images to perform detection on, use the --imagedir option:

python TFLite_detection_image.py --modeldir=TFLite_model --imagedir=squirrels

Press any key (other than 'q') to advance to the next image. Do not use both the --image option and the --imagedir option when running the script, or it will throw an error.

If you encounter errors while running these scripts, please check the FAQ section of this guide. It has a list of common errors and their solutions. If you can successfully run the script, but your object isn’t detected, it is most likely because your model isn’t accurate enough. The FAQ has further discussion on how to resolve this.
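
If you're curious how these scripts work at their core, here is a stripped-down sketch of running detection on a single image. (Illustrative only: the actual scripts in this repository add label lookup, box drawing, and command-line options. It assumes the quantized 300x300 model from this guide and an image named test1.jpg.)

import cv2
import numpy as np
import tensorflow as tf

# Load the TFLite model and allocate its tensors
interpreter = tf.lite.Interpreter(model_path='TFLite_model/detect.tflite')
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Load the image and resize it to the model's expected 300x300 input
image = cv2.imread('test1.jpg')
resized = cv2.resize(cv2.cvtColor(image, cv2.COLOR_BGR2RGB), (300, 300))
input_data = np.expand_dims(resized, axis=0)  # uint8, shape (1, 300, 300, 3)

# Run inference
interpreter.set_tensor(input_details[0]['index'], input_data)
interpreter.invoke()

# The SSD postprocessing op outputs boxes, classes, and scores
boxes = interpreter.get_tensor(output_details[0]['index'])[0]    # normalized [ymin, xmin, ymax, xmax]
classes = interpreter.get_tensor(output_details[1]['index'])[0]  # class indices into labelmap.txt
scores = interpreter.get_tensor(output_details[2]['index'])[0]   # confidence scores
for box, cls, score in zip(boxes, classes, scores):
    if score > 0.5:
        print(int(cls), float(score), box)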

Next Steps

This concludes Part 1 of my TensorFlow Lite guide! You now have a trained TensorFlow Lite model and the scripts needed to run it on a PC.

But who cares about running it on a PC? The whole reason we’re using TensorFlow Lite is so we can run our models on lightweight devices that are more portable and less power-hungry than a PC! The next two parts of my guide show how to run this TFLite model on a Raspberry Pi or an Android Device.

Links to be added when these are completed!

Frequently Asked Questions and Common Errors

Why does this guide use train.py rather than model_main.py for training?

This guide uses "train.py" to run training on the TFLite detection model. The train.py script is deprecated, but the model_main.py script that replaced it doesn't log training progress by default, and it requires pycocotools to be installed. Using model_main.py requires a few extra setup steps, and I want to keep this guide as simple as possible. Since there are no major differences between train.py and model_main.py that will affect training (see TensorFlow Issue #6100), I use train.py for this guide.
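
For reference, if you do install pycocotools and want to use model_main.py instead, the training command looks roughly like this (same config file as in Step 1b; flags per the TensorFlow Object Detection API):

python model_main.py --pipeline_config_path=training/ssd_mobilenet_v2_quantized_300x300_coco.config --model_dir=training/ --alsologtostderr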

How do I check which TensorFlow version I used to train my detection model?

Here’s how you can check the version of TensorFlow you used for training.

  1. Open a new Anaconda Prompt window and issue activate tensorflow1 (or whichever environment name you used)
  2. Open a python shell by issuing python
  3. Within the Python shell, import TensorFlow by issuing import tensorflow as tf
  4. Check the TensorFlow version by issuing tf.__version__. It will respond with the version of TensorFlow. This is the version that you used for training.
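
Equivalently, you can do all of this in a single command from the Anaconda Prompt:

python -c "import tensorflow as tf; print(tf.__version__)"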

Building TensorFlow from source

In case you run into the error C2100: illegal indirection during TensorFlow compilation, simply edit the file tensorflow-build\tensorflow\tensorflow\core\framework\op_kernel.h, go to line 405, and change reference operator*() { return (*list_)[i_]; } to reference operator*() const { return (*list_)[i_]; }. Credits go to: https://github.com/tensorflow/tensorflow/issues/15925#issuecomment-499569928

Comments
  • Error while building tensorflow with bazel

    Hello, I got this error:

    ERROR: C:/tensorflow-build/tensorflow/tensorflow/core/BUILD:2515:1: Executing genrule //tensorflow/core:version_info_gen failed (Exit 2): bash.exe failed: error executing command
      cd C:/users/*user*/_bazel_*user*/j7bi4x5j/execroot/org_tensorflow
      SET PATH=C:\msys64\usr\bin;C:\msys64\bin
        SET PYTHON_BIN_PATH=C:/Users/*user*/Anaconda3/envs/tensorflow-build/python.exe
        SET PYTHON_LIB_PATH=C:/tensorflow1/models/research/slim
        SET TF_DOWNLOAD_CLANG=0
        SET TF_NEED_CUDA=0
        SET TF_NEED_OPENCL_SYCL=0
        SET TF_NEED_ROCM=0
      C:/msys64/usr/bin/bash.exe -c source external/bazel_tools/tools/genrule/genrule-setup.sh; bazel-out/x64_windows-opt/bin/tensorflow/tools/git/gen_git_source.exe --generate external/local_config_git/gen/spec.json external/local_config_git/gen/head external/local_config_git/gen/branch_ref "bazel-out/x64_windows-opt/genfiles/tensorflow/core/util/version_info.cc" --git_tag_override=${GIT_TAG_OVERRIDE:-}
    Execution platform: @bazel_tools//platforms:host_platform
    C:/Users/*user*/Anaconda3/envs/tensorflow-build/python.exe: can't open file 'C:\users\*user*': [Errno 2] No such file or directory
    Target //tensorflow/tools/pip_package:build_pip_package failed to build
    INFO: Elapsed time: 45,888s, Critical Path: 3,68s
    INFO: 7 processes: 7 local.
    FAILED: Build did NOT complete successfully
    

    Please help.

    opened by AndrejPiecka 35
  • About cv2

    Hi, I executed python3 TFLite_detection_webcam.py --modeldir=Sample_TFLite_model but the result is:

    Traceback (most recent call last):
      File "TFLite_detection_webcam.py", line 19, in <module>
        import cv2
    ImportError: No module named 'cv2'

    Please help me..

    opened by SeongeunYang 27
  • Help wanted: Test Colab notebook for training TFLite models

    It's been a while since I've worked on this repository, but I'm diving back into it to make some improvements! Today I added a Google Colab notebook that allows you to use Google's servers to train, convert, test, and export a TensorFlow Lite model. It makes training a custom TFLite detection model easy! It uses the TensorFlow Object Detection API, which provides high configurability and is best for training with large datasets.

    I'm looking for help with testing the Colab notebook to confirm it works for all users. Have you been looking to train an SSD-MobileNet or EfficientDet model and deploy it with TFLite? If so, can you try stepping through this notebook to see if it works for you and post in this thread on how it turned out? Let me know if you run into any errors or have questions.

    Here's a link to the Colab notebook: Train_TFLite2_Object_Detction_Model.ipynb

    If you need a dataset to use, I uploaded my "bird, squirrel, raccoon" dataset of 900 images to Dropbox. I included a command to download this dataset into the Colab as one of the options in Step 3.

    I'm planning to make a video that shows how to go through the notebook step-by-step, but it will be a few weeks until it's ready. For now, hopefully the instructions inside the Colab doc do a good enough job explaining.

    Feedback I'm looking for:

    • Were you able to successfully train a ssd-mobilenet-v2 model, test it, and see detection results on the test images? (i.e. were you able to make it through Steps 1 - 7 without errors?)
    • If you ran into errors, what were they?
    • Was anything in the notebook confusing or unclear? Were there any points where you weren't sure what to do next?
    • Do you have any suggestions for improving the notebook?

    Known issues:

    • The centernet-mobilenet model still doesn't work with TensorFlow Lite; I'm still trying to get that one figured out.
    • Accuracy drops significantly when quantizing the ssd-mobilenet-v2 model
    • efficientdet, centernet-mobilenet, and ssd-mobilenet-fpnlite models can't be quantized
    • In other words, quantization still isn't really working. But the unquantized TFLite models seem to work well!
    opened by EdjeElectronics 24
  • FAILED: Build did NOT complete successfully

    I am following the instructions laid out in this tutorial (starting at Step 2) to convert my model for use by tflite: https://github.com/EdjeElectronics/TensorFlow-Lite-Object-Detection-on-Android-and-Raspberry-Pi#part-1---how-to-train-convert-and-run-custom-tensorflow-lite-object-detection-models-on-windows-10

    My model works correctly on my Windows machine, but when attempting to convert it to a tflite model I cannot get tensorflow to build; it errors every time. I am following the directions very carefully and even restarted the tensorflow-build process from scratch just to be sure.

    System information

    • OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Windows 10
    • TensorFlow installed from (source or binary): Source
    • TensorFlow version: 1.13
    • Python version: 3.6
    • Installed using conda?: Conda
    • Bazel version (if compiling from source): 0.21.0
    • CUDA/cuDNN version: just building for CPU
    • GPU model and memory: GTX 980

    everything seems to work fine until I try to build using bazel with this command:

    bazel build --config=opt //tensorflow/tools/pip_package:build_pip_package

    It starts off fine and runs for a while, but then fails with this output:

    INFO: From Linking tensorflow/python/gen_lookup_ops_py_wrappers_cc.exe: Creating library bazel-out/x64_windows-opt/bin/tensorflow/python/gen_lookup_ops_py_wrappers_cc.lib and object bazel-out/x64_windows-opt/bin/tensorflow/python/gen_lookup_ops_py_wrappers_cc.exp INFO: From Linking tensorflow/python/gen_script_ops_py_wrappers_cc.exe: Creating library bazel-out/x64_windows-opt/bin/tensorflow/python/gen_script_ops_py_wrappers_cc.lib and object bazel-out/x64_windows-opt/bin/tensorflow/python/gen_script_ops_py_wrappers_cc.exp INFO: From Linking tensorflow/python/gen_bitwise_ops_py_wrappers_cc.exe: Creating library bazel-out/x64_windows-opt/bin/tensorflow/python/gen_bitwise_ops_py_wrappers_cc.lib and object bazel-out/x64_windows-opt/bin/tensorflow/python/gen_bitwise_ops_py_wrappers_cc.exp INFO: From Linking tensorflow/python/gen_ragged_conversion_ops_py_wrappers_cc.exe: Creating library bazel-out/x64_windows-opt/bin/tensorflow/python/gen_ragged_conversion_ops_py_wrappers_cc.lib and object bazel-out/x64_windows-opt/bin/tensorflow/python/gen_ragged_conversion_ops_py_wrappers_cc.exp INFO: From Linking tensorflow/python/gen_ragged_math_ops_py_wrappers_cc.exe: Creating library bazel-out/x64_windows-opt/bin/tensorflow/python/gen_ragged_math_ops_py_wrappers_cc.lib and object bazel-out/x64_windows-opt/bin/tensorflow/python/gen_ragged_math_ops_py_wrappers_cc.exp ERROR: C:/tensorflow-build/tensorflow/tensorflow/python/BUILD:293:1: C++ compilation of rule '//tensorflow/python:bfloat16_lib' failed (Exit 2): cl.exe failed: error executing command cd C:/users/james/bazel_james/j7bi4x5j/execroot/org_tensorflow SET INCLUDE=C:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\INCLUDE;C:\Program Files (x86)\Windows Kits\10\include\10.0.10240.0\ucrt;C:\Program Files (x86)\Windows Kits\NETFXSDK\4.6\include\um;C:\Program Files (x86)\Windows Kits\8.1\include\shared;C:\Program Files (x86)\Windows Kits\8.1\include\um;C:\Program Files (x86)\Windows Kits\8.1\include\winrt; SET PATH=C:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\BIN\amd64;C:\WINDOWS\Microsoft.NET\Framework64\v4.0.30319;C:\WINDOWS\Microsoft.NET\Framework64;C:\Program Files (x86)\Windows Kits\8.1\bin\x64;C:\Program Files (x86)\Windows Kits\8.1\bin\x86;C:\Program Files (x86)\Microsoft SDKs\Windows\v10.0A\bin\NETFX 4.6 Tools\x64;;C:\WINDOWS\system32 SET PWD=/proc/self/cwd SET PYTHON_BIN_PATH=C:/ProgramData/Anaconda3/envs/tensorflow-build/python.exe SET PYTHON_LIB_PATH=C:/ProgramData/Anaconda3/envs/tensorflow-build/lib/site-packages SET TEMP=C:\Users\james\AppData\Local\Temp SET TF_DOWNLOAD_CLANG=0 SET TF_NEED_CUDA=0 SET TF_NEED_OPENCL_SYCL=0 SET TF_NEED_ROCM=0 SET TMP=C:\Users\james\AppData\Local\Temp C:/Program Files (x86)/Microsoft Visual Studio 14.0/VC/bin/amd64/cl.exe /nologo /DCOMPILER_MSVC /DNOMINMAX /D_WIN32_WINNT=0x0601 /D_CRT_SECURE_NO_DEPRECATE /D_CRT_SECURE_NO_WARNINGS /bigobj /Zm500 /EHsc /wd4351 /wd4291 /wd4250 /wd4996 /I. 
/Ibazel-out/x64_windows-opt/genfiles /Ibazel-out/x64_windows-opt/bin /Iexternal/com_google_absl /Ibazel-out/x64_windows-opt/genfiles/external/com_google_absl /Ibazel-out/x64_windows-opt/bin/external/com_google_absl /Iexternal/eigen_archive /Ibazel-out/x64_windows-opt/genfiles/external/eigen_archive /Ibazel-out/x64_windows-opt/bin/external/eigen_archive /Iexternal/local_config_sycl /Ibazel-out/x64_windows-opt/genfiles/external/local_config_sycl /Ibazel-out/x64_windows-opt/bin/external/local_config_sycl /Iexternal/nsync /Ibazel-out/x64_windows-opt/genfiles/external/nsync /Ibazel-out/x64_windows-opt/bin/external/nsync /Iexternal/gif_archive /Ibazel-out/x64_windows-opt/genfiles/external/gif_archive /Ibazel-out/x64_windows-opt/bin/external/gif_archive /Iexternal/jpeg /Ibazel-out/x64_windows-opt/genfiles/external/jpeg /Ibazel-out/x64_windows-opt/bin/external/jpeg /Iexternal/protobuf_archive /Ibazel-out/x64_windows-opt/genfiles/external/protobuf_archive /Ibazel-out/x64_windows-opt/bin/external/protobuf_archive /Iexternal/com_googlesource_code_re2 /Ibazel-out/x64_windows-opt/genfiles/external/com_googlesource_code_re2 /Ibazel-out/x64_windows-opt/bin/external/com_googlesource_code_re2 /Iexternal/farmhash_archive /Ibazel-out/x64_windows-opt/genfiles/external/farmhash_archive /Ibazel-out/x64_windows-opt/bin/external/farmhash_archive /Iexternal/fft2d /Ibazel-out/x64_windows-opt/genfiles/external/fft2d /Ibazel-out/x64_windows-opt/bin/external/fft2d /Iexternal/highwayhash /Ibazel-out/x64_windows-opt/genfiles/external/highwayhash /Ibazel-out/x64_windows-opt/bin/external/highwayhash /Iexternal/zlib_archive /Ibazel-out/x64_windows-opt/genfiles/external/zlib_archive /Ibazel-out/x64_windows-opt/bin/external/zlib_archive /Iexternal/double_conversion /Ibazel-out/x64_windows-opt/genfiles/external/double_conversion /Ibazel-out/x64_windows-opt/bin/external/double_conversion /Iexternal/snappy /Ibazel-out/x64_windows-opt/genfiles/external/snappy /Ibazel-out/x64_windows-opt/bin/external/snappy /Iexternal/local_config_python /Ibazel-out/x64_windows-opt/genfiles/external/local_config_python /Ibazel-out/x64_windows-opt/bin/external/local_config_python /Iexternal/local_config_cuda /Ibazel-out/x64_windows-opt/genfiles/external/local_config_cuda /Ibazel-out/x64_windows-opt/bin/external/local_config_cuda /Iexternal/lmdb /Ibazel-out/x64_windows-opt/genfiles/external/lmdb /Ibazel-out/x64_windows-opt/bin/external/lmdb /Iexternal/org_sqlite /Ibazel-out/x64_windows-opt/genfiles/external/org_sqlite /Ibazel-out/x64_windows-opt/bin/external/org_sqlite /Iexternal/png_archive /Ibazel-out/x64_windows-opt/genfiles/external/png_archive /Ibazel-out/x64_windows-opt/bin/external/png_archive /Iexternal/icu /Ibazel-out/x64_windows-opt/genfiles/external/icu /Ibazel-out/x64_windows-opt/bin/external/icu /Iexternal/grpc /Ibazel-out/x64_windows-opt/genfiles/external/grpc /Ibazel-out/x64_windows-opt/bin/external/grpc /Iexternal/com_github_nanopb_nanopb /Ibazel-out/x64_windows-opt/genfiles/external/com_github_nanopb_nanopb /Ibazel-out/x64_windows-opt/bin/external/com_github_nanopb_nanopb /Iexternal/boringssl /Ibazel-out/x64_windows-opt/genfiles/external/boringssl /Ibazel-out/x64_windows-opt/bin/external/boringssl /Iexternal/eigen_archive /Ibazel-out/x64_windows-opt/genfiles/external/eigen_archive /Ibazel-out/x64_windows-opt/bin/external/eigen_archive /Iexternal/nsync/public /Ibazel-out/x64_windows-opt/genfiles/external/nsync/public /Ibazel-out/x64_windows-opt/bin/external/nsync/public /Iexternal/gif_archive/lib 
/Ibazel-out/x64_windows-opt/genfiles/external/gif_archive/lib /Ibazel-out/x64_windows-opt/bin/external/gif_archive/lib /Iexternal/gif_archive/windows /Ibazel-out/x64_windows-opt/genfiles/external/gif_archive/windows /Ibazel-out/x64_windows-opt/bin/external/gif_archive/windows /Iexternal/protobuf_archive/src /Ibazel-out/x64_windows-opt/genfiles/external/protobuf_archive/src /Ibazel-out/x64_windows-opt/bin/external/protobuf_archive/src /Iexternal/farmhash_archive/src /Ibazel-out/x64_windows-opt/genfiles/external/farmhash_archive/src /Ibazel-out/x64_windows-opt/bin/external/farmhash_archive/src /Iexternal/zlib_archive /Ibazel-out/x64_windows-opt/genfiles/external/zlib_archive /Ibazel-out/x64_windows-opt/bin/external/zlib_archive /Iexternal/double_conversion /Ibazel-out/x64_windows-opt/genfiles/external/double_conversion /Ibazel-out/x64_windows-opt/bin/external/double_conversion /Iexternal/local_config_python/numpy_include /Ibazel-out/x64_windows-opt/genfiles/external/local_config_python/numpy_include /Ibazel-out/x64_windows-opt/bin/external/local_config_python/numpy_include /Iexternal/local_config_python/python_include /Ibazel-out/x64_windows-opt/genfiles/external/local_config_python/python_include /Ibazel-out/x64_windows-opt/bin/external/local_config_python/python_include /Iexternal/local_config_cuda/cuda /Ibazel-out/x64_windows-opt/genfiles/external/local_config_cuda/cuda /Ibazel-out/x64_windows-opt/bin/external/local_config_cuda/cuda /Iexternal/local_config_cuda/cuda/cuda/include /Ibazel-out/x64_windows-opt/genfiles/external/local_config_cuda/cuda/cuda/include /Ibazel-out/x64_windows-opt/bin/external/local_config_cuda/cuda/cuda/include /Iexternal/local_config_cuda/cuda/cuda/include/crt /Ibazel-out/x64_windows-opt/genfiles/external/local_config_cuda/cuda/cuda/include/crt /Ibazel-out/x64_windows-opt/bin/external/local_config_cuda/cuda/cuda/include/crt /Iexternal/png_archive /Ibazel-out/x64_windows-opt/genfiles/external/png_archive /Ibazel-out/x64_windows-opt/bin/external/png_archive /Iexternal/icu/icu4c/source/common /Ibazel-out/x64_windows-opt/genfiles/external/icu/icu4c/source/common /Ibazel-out/x64_windows-opt/bin/external/icu/icu4c/source/common /Iexternal/grpc/include /Ibazel-out/x64_windows-opt/genfiles/external/grpc/include /Ibazel-out/x64_windows-opt/bin/external/grpc/include /Iexternal/grpc/third_party/address_sorting/include /Ibazel-out/x64_windows-opt/genfiles/external/grpc/third_party/address_sorting/include /Ibazel-out/x64_windows-opt/bin/external/grpc/third_party/address_sorting/include /Iexternal/boringssl/src/include /Ibazel-out/x64_windows-opt/genfiles/external/boringssl/src/include /Ibazel-out/x64_windows-opt/bin/external/boringssl/src/include /D__CLANG_SUPPORT_DYN_ANNOTATION_ /DEIGEN_MPL2_ONLY /DEIGEN_MAX_ALIGN_BYTES=64 /DEIGEN_HAS_TYPE_TRAITS=0 /DTF_USE_SNAPPY /DSQLITE_OMIT_DEPRECATED /DGRPC_ARES=0 /DPB_FIELD_32BIT=1 /showIncludes /MD /O2 /Oy- /DNDEBUG /wd4117 -D__DATE__="redacted" -D__TIMESTAMP__="redacted" -D__TIME__="redacted" /Gy /Gw -w /arch:AVX /Fobazel-out/x64_windows-opt/bin/tensorflow/python/_objs/bfloat16_lib/bfloat16.obj /c tensorflow/python/lib/core/bfloat16.cc Execution platform: @bazel_tools//platforms:host_platform **tensorflow/python/lib/core/bfloat16.cc(634): error C2664: 'bool tensorflow::anonymous-namespace'::Initialize::<lambda_d0df84676ae54709cf34c880a157d294>::operator ()(const char *,PyUFuncGenericFunction,const std::array<int,3> &) const': cannot convert argument 2 from 'void (__cdecl *)(char **,npy_intp *,npy_intp *,void *)' to 
'PyUFuncGenericFunction' tensorflow/python/lib/core/bfloat16.cc(634): note: None of the functions with this name in scope match the target type** tensorflow/python/lib/core/bfloat16.cc(638): error C2664: 'bool tensorflow::anonymous-namespace'::Initialize::<lambda_d0df84676ae54709cf34c880a157d294>::operator ()(const char *,PyUFuncGenericFunction,const std::array<int,3> &) const': cannot convert argument 2 from 'void (__cdecl *)(char **,npy_intp *,npy_intp *,void *)' to 'PyUFuncGenericFunction' tensorflow/python/lib/core/bfloat16.cc(638): note: None of the functions with this name in scope match the target type tensorflow/python/lib/core/bfloat16.cc(641): error C2664: 'bool tensorflow::anonymous-namespace'::Initialize::<lambda_d0df84676ae54709cf34c880a157d294>::operator ()(const char *,PyUFuncGenericFunction,const std::array<int,3> &) const': cannot convert argument 2 from 'void (__cdecl *)(char **,npy_intp *,npy_intp *,void *)' to 'PyUFuncGenericFunction' tensorflow/python/lib/core/bfloat16.cc(641): note: None of the functions with this name in scope match the target type tensorflow/python/lib/core/bfloat16.cc(645): error C2664: 'bool tensorflow::anonymous-namespace'::Initialize::<lambda_d0df84676ae54709cf34c880a157d294>::operator ()(const char *,PyUFuncGenericFunction,const std::array<int,3> &) const': cannot convert argument 2 from 'void (__cdecl *)(char **,npy_intp *,npy_intp *,void *)' to 'PyUFuncGenericFunction' tensorflow/python/lib/core/bfloat16.cc(645): note: None of the functions with this name in scope match the target type tensorflow/python/lib/core/bfloat16.cc(649): error C2664: 'bool tensorflow::anonymous-namespace'::Initialize::<lambda_d0df84676ae54709cf34c880a157d294>::operator ()(const char *,PyUFuncGenericFunction,const std::array<int,3> &) const': cannot convert argument 2 from 'void (__cdecl *)(char **,npy_intp *,npy_intp *,void *)' to 'PyUFuncGenericFunction' tensorflow/python/lib/core/bfloat16.cc(649): note: None of the functions with this name in scope match the target type tensorflow/python/lib/core/bfloat16.cc(653): error C2664: 'bool tensorflow::anonymous-namespace'::Initialize::<lambda_d0df84676ae54709cf34c880a157d294>::operator ()(const char *,PyUFuncGenericFunction,const std::array<int,3> &) const': cannot convert argument 2 from 'void (__cdecl *)(char **,npy_intp *,npy_intp *,void *)' to 'PyUFuncGenericFunction' tensorflow/python/lib/core/bfloat16.cc(653): note: None of the functions with this name in scope match the target type Target //tensorflow/tools/pip_package:build_pip_package failed to build INFO: Elapsed time: 107.347s, Critical Path: 59.11s INFO: 206 processes: 206 local. FAILED: Build did NOT complete successfully

    opened by contractorwolf 15
  • Problems while training ssd_mobilenet_v2_quantized_coco on GPU but same works well for faster_rcnn_inception_v2_coco

    I am trying to train a custom model that I will later use on a Raspberry Pi for object detection. The model I want to train using GPU TensorFlow is ssd_mobilenet_v2_quantized_coco, but when I try to run the training, it loads all the GPU files successfully and then runs into a memory error, which surprisingly does not happen when training faster_rcnn_inception_v2_coco. The CPU version of TensorFlow works fine with both models.

    My system specifications are:

    • Operating System: Windows 10
    • Graphics Card: Nvidia MX250
    • RAM: 16GB
    • Processor: Intel Core i7 (10th Gen)
    • TensorFlow version: TensorFlow GPU 1.15.0
    • CUDA: 10.0
    • cuDNN: 7.4.6 (as the tensorflow models-master was compiled using this version)

    Any recommendations that help me speed up my SSD network training process will be highly appreciated.

    opened by saqibshakeel035 14
  • Edge TPU program not working

    Thanks for the great tutorial. I followed instructions in "Section 1 - How to Set Up and Run TensorFlow Lite Object Detection Models on the Raspberry Pi" and got it working without any problems.

    However, in "Section 2 - Run Edge TPU Object Detection Models on the Raspberry Pi Using the Coral USB Accelerator", got the following error when running the demo script:

    (tflite1-env) pi@raspberrypi:~/tflite1 $ python3 TFLite_detection_webcam.py --modeldir=Sample_TFLite_model --edgetpu
    INFO: Initialized TensorFlow Lite runtime.
    /home/pi/tflite1/Sample_TFLite_model/edgetpu.tflite
    Traceback (most recent call last):
      File "TFLite_detection_webcam.py", line 140, in <module>
        interpreter.allocate_tensors()
      File "/home/pi/tflite1/tflite1-env/lib/python3.7/site-packages/tensorflow_core/lite/python/interpreter.py", line 244, in allocate_tensors
        return self._interpreter.AllocateTensors()
      File "/home/pi/tflite1/tflite1-env/lib/python3.7/site-packages/tensorflow_core/lite/python/interpreter_wrapper/tensorflow_wrap_interpreter_wrapper.py", line 106, in AllocateTensors
        return _tensorflow_wrap_interpreter_wrapper.InterpreterWrapper_AllocateTensors(self)
    RuntimeError: Internal: Unsupported data type in custom op handler: 0
    Node number 2 (EdgeTpuDelegateForCustomOp) failed to prepare.

    Is it possible that this is related to recent update as noted here? https://github.com/google-coral/edgetpu/issues/44#issuecomment-579787546

    There appears to be a fix: https://github.com/google-coral/edgetpu/issues/44#issuecomment-579905056 I followed the fix but this did not solve the problem. Thank you so much again.

    My system info:

    (tflite1-env) pi@raspberrypi:~/tflite1 $ sudo dpkg -l | grep edge
    ii  libedgetpu1-std:armhf  13.0  armhf  Support library for Edge TPU

    (tflite1-env) pi@raspberrypi:~/tflite1 $ cat /etc/os-release
    PRETTY_NAME="Raspbian GNU/Linux 10 (buster)"
    NAME="Raspbian GNU/Linux"
    VERSION_ID="10"
    VERSION="10 (buster)"
    VERSION_CODENAME=buster
    ID=raspbian
    ID_LIKE=debian
    HOME_URL="http://www.raspbian.org/"
    SUPPORT_URL="http://www.raspbian.org/RaspbianForums"
    BUG_REPORT_URL="http://www.raspbian.org/RaspbianBugs"

    opened by thepiam 10
  • IndexError: list index out of range

    Hello. Thanks for this great tutorial. I was able to deploy TensorFlow Lite on the RasPi 3+, and this works well:

    python3 TFLite_detection_webcam.py --modeldir=Sample_TFLite_model

    But when I try to create my own model and use it the same way (with TFLite_detection_webcam.py), I get the IndexError: list index out of range error.

    I created my own model in Debian Linux 9 with TF2.

    $ pip3 list | grep tensorflow
    tensorflow                   2.0.0     
    tensorflow-estimator         2.0.1     
    tensorflow-hub               0.7.0     
    

    I created the model this way:

    $ make_image_classifier \
    --image_dir ~/tensorflow/images_train/ \
    --tfhub_module https://tfhub.dev/google/tf2-preview/mobilenet_v2/feature_vector/4 \
    --image_size 224 \
    --saved_model_dir /tmp/mynewmodel \
    --labels_output_file /tmp/mynewmodel/labelmap.txt \
    --tflite_output_file /tmp/mynewmodel/detect.tflite
    

    This is the output:

    $ ls -lh /tmp/mynewmodel/
    total 11M
    drwxr-xr-x 2 bnc bnc 4,0K Jan 10 22:55 assets
    -rw-r--r-- 1 bnc bnc 8,5M Jan 11 00:02 detect.tflite
    -rw-r--r-- 1 bnc bnc   20 Jan 11 00:02 labelmap.txt
    -rw-r--r-- 1 bnc bnc 2,0M Jan 11 00:02 saved_model.pb
    drwxr-xr-x 2 bnc bnc 4,0K Jan 11 00:02 variables
    

    When I try to use the self-generated tensorflow lite model on the RasPi 3+, I get this error message:

    $ python3 TFLite_detection_webcam.py --modeldir=/home/pi/mynewmodel
    INFO: Initialized TensorFlow Lite runtime.
    Traceback (most recent call last):
      File "TFLite_detection_webcam.py", line 186, in <module>
        classes = interpreter.get_tensor(output_details[1]['index'])[0] # Class index of detected objects
    IndexError: list index out of range
    

    Every 1-2 hours, I get one of these lines as additional output:

    Corrupt JPEG data: premature end of data segment
    

    What can I do to fix this? Does anyone have the same issue?
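
    One thing worth checking here (my own diagnostic sketch, not from the tutorial): the detection scripts expect the four output tensors of an SSD detection model (boxes, classes, scores, count), while make_image_classifier produces a classification model with a single output tensor, so indexing output_details[1] goes out of range. Printing the output details makes this visible; the model path below is the one from the report.

    # Minimal sketch: list a .tflite model's output tensors. A detection
    # model shows four outputs; a classification model shows only one,
    # which would explain the IndexError above.
    from tensorflow.lite.python.interpreter import Interpreter

    interpreter = Interpreter(model_path="/home/pi/mynewmodel/detect.tflite")
    interpreter.allocate_tensors()
    for detail in interpreter.get_output_details():
        print(detail['name'], detail['shape'])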

    opened by christianbaun 10
  • ImportError: /home/pi/tflite1/tflite1-env/lib/python3.5/site-packages/tensorflow/lite/python/interpreter_wrapper/_tensorflow_wrap_interpreter_wrapper.so: undefined symbol: _ZN6tflite12tensor_utils24NeonVectorScalarMultiplyEPKaifPf

    I have observed one more thing today: TensorFlow v2.0 is not available for the RPi 3B+ on Debian stretch, as tensorflow-2.0.0-cp35-none-linux_armv7l.whl does not exist. :( When I try to install TensorFlow 2.0.0 using the following command:

    sudo pip install tensorflow-2.0.0-cp35-none-linux_armv71.whl

    It gives me the following error:

    Requirement tensorflow-2.0.0-cp35-none-linux_armv71.whl looks like a filename, but the file does not exist

    opened by saqibshakeel035 10
  • Cumulative Counting Mode integration?

    Hi,

    I just want to say: great work!

    Question:

    How would I go about integrating the Cumulative Counting Mode API into TFLite_detection_webcam.py?

    https://github.com/ahmetozlu/tensorflow_object_counting_api

    1.2) For detecting, tracking and counting the vehicles with enabled color prediction

    Usage of "Cumulative Counting Mode" for the "vehicle counting" case:

    fps = 24 # change it with your input video fps
    width = 640 # change it with your input video width
    height = 352 # change it with your input video height
    is_color_recognition_enabled = 0 # set it to 1 for enabling the color prediction for the detected objects
    roi = 200 # roi line position
    deviation = 3 # the constant that represents the object counting area
    
    object_counting_api.cumulative_object_counting_y_axis(input_video, detection_graph, category_index, is_color_recognition_enabled, fps, width, height, roi, deviation) # counting all the objects
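
    For what it's worth, here is a rough sketch of one way to graft that idea onto the detection loop (an illustration, not a tested integration; the boxes, scores, and imH variables are the ones TFLite_detection_webcam.py already computes per frame, and everything else is made up for the example):

    # Hypothetical cumulative counting added inside the per-frame loop of
    # TFLite_detection_webcam.py, after boxes/classes/scores are extracted.
    roi_y = 200        # y-position of the counting line, in pixels
    deviation = 3      # tolerance band around the line, in pixels
    total_count = 0    # initialize once, before the frame loop

    for i in range(len(scores)):
        if scores[i] > 0.5:
            ymin = int(boxes[i][0] * imH)
            ymax = int(boxes[i][2] * imH)
            center_y = (ymin + ymax) // 2
            # Count an object when its center falls within the ROI band.
            if abs(center_y - roi_y) <= deviation:
                total_count += 1

    Note that without object tracking this will count the same object on several consecutive frames; the tensorflow_object_counting_api handles that with tracking, which is the harder part of a real integration.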
    
    opened by airqualityanthony 10
  • Some things wrong when building the CPU-only version.

    Hey, I ran into some problems when trying to build the CPU-only version. The configuration session looks like this:

    You have bazel 0.21.0- (@non-git) installed.
    Please specify the location of python. [Default is C:\Users\YT\Anaconda3\envs\tensorflow-build\python.exe]:

    Found possible Python library paths:
      C:\Users\YT\Anaconda3\envs\tensorflow-build\lib\site-packages
    Please input the desired Python library path to use. Default is [C:\Users\YT\Anaconda3\envs\tensorflow-build\lib\site-packages]

    Do you wish to build TensorFlow with XLA JIT support? [y/N]: n
    No XLA JIT support will be enabled for TensorFlow.

    Do you wish to build TensorFlow with ROCm support? [y/N]: n
    No ROCm support will be enabled for TensorFlow.

    Do you wish to build TensorFlow with CUDA support? [y/N]: n
    No CUDA support will be enabled for TensorFlow.

    Please specify optimization flags to use during compilation when bazel option "--config=opt" is specified [Default is /arch:AVX]:

    Would you like to override eigen strong inline for some C++ compilation to reduce the compilation time? [Y/n]: y
    Eigen strong inline overridden.

    Preconfigured Bazel build configs. You can use any of the below by adding "--config=<>" to your build command. See .bazelrc for more details.
      --config=mkl             # Build with MKL support.
      --config=monolithic      # Config for mostly static monolithic build.
      --config=gdr             # Build with GDR support.
      --config=verbs           # Build with libverbs support.
      --config=ngraph          # Build with Intel nGraph support.
      --config=dynamic_kernels # (Experimental) Build kernels into separate shared objects.
    Preconfigured Bazel build configs to DISABLE default on features:
      --config=noaws           # Disable AWS S3 filesystem support.
      --config=nogcp           # Disable GCP support.
      --config=nohdfs          # Disable HDFS support.
      --config=noignite        # Disable Apache Ignite support.
      --config=nokafka         # Disable Apache Kafka support.
      --config=nonccl          # Disable NVIDIA NCCL support.

    Then it runs very well at first, but it encounters some errors in the middle, like this:

    ERROR: C:/tensorflow-build/tensorflow/tensorflow/core/kernels/BUILD:3221:1: C++ compilation of rule '//tensorflow/core/kernels:scan_ops' failed (Exit 2): cl.exe failed: error executing command

    and this:

    ERROR: C:/tensorflow-build/tensorflow/tensorflow/tools/pip_package/BUILD:241:1 C++ compilation of rule '//tensorflow/core/kernels:batch_matmul_op' failed (Exit 2): cl.exe failed: error executing command

    I don't know if this error message is sufficient; if you need more details, just ask me.

    opened by YTGhost 9
  • Comparison Accuracy Tensorflow vs Tensorflow Lite

    Hello everyone,

    I feel that the performance of my converted TFLite models is really low, even though the full model achieves a mAP of 0.96 in evaluation.

    Is there a way to get the mAP of the TFLite model as well?

    Thanks in advance
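
    One approach (a sketch under my own assumptions, not a feature of this repository: the output tensor order below matches TF1 SSD exports, and all paths are placeholders) is to run the converted model over the evaluation images, dump detections in the plain-text format that standalone mAP scripts consume, and compare against the ground-truth labels:

    # Minimal sketch: run detect.tflite over evaluation images and write one
    # detection file per image as "class score xmin ymin xmax ymax" lines,
    # a format common mAP calculators accept. Verify the output tensor order
    # for your model before trusting the numbers.
    import glob, os
    import cv2
    import numpy as np
    from tensorflow.lite.python.interpreter import Interpreter

    interpreter = Interpreter(model_path="detect.tflite")
    interpreter.allocate_tensors()
    in_d = interpreter.get_input_details()[0]
    out_d = interpreter.get_output_details()
    _, height, width, _ = in_d['shape']

    for path in glob.glob("eval_images/*.jpg"):
        image = cv2.imread(path)
        imH, imW, _ = image.shape
        rgb = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
        input_data = np.expand_dims(cv2.resize(rgb, (width, height)), axis=0)
        if in_d['dtype'] == np.float32:
            input_data = (np.float32(input_data) - 127.5) / 127.5
        interpreter.set_tensor(in_d['index'], input_data)
        interpreter.invoke()
        boxes = interpreter.get_tensor(out_d[0]['index'])[0]
        classes = interpreter.get_tensor(out_d[1]['index'])[0]
        scores = interpreter.get_tensor(out_d[2]['index'])[0]
        with open(os.path.splitext(path)[0] + ".txt", "w") as f:
            for i in range(len(scores)):
                ymin, xmin, ymax, xmax = boxes[i]
                f.write("%d %.4f %d %d %d %d\n" % (
                    classes[i], scores[i], int(xmin * imW), int(ymin * imH),
                    int(xmax * imW), int(ymax * imH)))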

    opened by Natriumpikant 6
  • TF1 Colab - RuntimeError: module compiled against API version 0xe but this version of numpy is 0xd

    Hey there, I've been trying to use the TF1 Colab to train a dataset for the Edge TPU. I'm getting the following error: RuntimeError: module compiled against API version 0xe but this version of numpy is 0xd. This occurs in the "Create CSV data files and TFRecord files" step. I've tried a few things but have been unable to figure it out. Your help would be very appreciated. Thanks!

    opened by joshfox10 1
  • Run training use tf.cast instead

    In the Run Training step, when "Use tf.cast instead." is returned, the code does not process anything and keeps loading (or processing) forever.


    I ran it in Colab for 64h 37m 22s and it didn't go well.

    opened by jinjindoli 0
  • Error: Corrupt JPEG data

    I tried to follow your explanation for setting up the Raspberry Pi with a USB webcam, and I think it works well. But it only works when I use the sample TFLite model. If I use my own TensorFlow Lite model (trained with TF Model Maker), the window for the video stream does not open. For both TFLite models, however, I get thousands of lines of "Corrupt JPEG data: 1 extraneous bytes before marker". What did I do wrong?
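
    On the "Corrupt JPEG data" lines specifically (a workaround sometimes suggested elsewhere, not something from this guide, and whether your camera supports YUYV at the resolution you want is an assumption): those warnings come from libjpeg decoding the webcam's MJPEG stream, and asking OpenCV for raw frames sidesteps the decoder entirely.

    # Hypothetical tweak near where the video stream is opened: request
    # uncompressed YUYV frames so libjpeg never sees the camera's
    # occasionally malformed JPEG data.
    import cv2

    cap = cv2.VideoCapture(0)
    cap.set(cv2.CAP_PROP_FOURCC, cv2.VideoWriter_fourcc(*"YUYV"))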

    opened by Alfrs28 0
  • Custom model from Azure Custom Vision

    Hi, I would like to ask about the compatibility of these examples with other TFLite models, such as one exported from Azure Custom Vision.

    To train my custom dataset, I used MS Azure Custom Vision. It makes it pretty easy to label, train, and export. Leaving aside the model's correctness, I cannot use it with your detection examples.

    It returns IndexError: list index out of range at
    classes = interpreter.get_tensor(output_details[classes_idx]['index'])[0]

    I tried TF2 index order for boxes_idx, classes_idx and scores_idx.

    opened by mipsan 0
  • When will a TensorFlow Lite 1 notebook be added?

    Hello :) Thank you for sharing the nice Colab notebook for TensorFlow v2! https://github.com/EdjeElectronics/TensorFlow-Lite-Object-Detection-on-Android-and-Raspberry-Pi/blob/master/Train_TFLite2_Object_Detction_Model.ipynb

    I'm having a problem with low accuracy from my quantized TFLite model, and the notebook says quantizing with TensorFlow v1 works better than v2. But as far as I know, Colab does not support TensorFlow v1 anymore... so how can I use an ssd-mobilenet-v1-quantized model?

    Thank you

    opened by kotran88 2