ailia-models-tflite

Overview

Quantized TFLite models for ailia TFLite Runtime.

About ailia TFLite Runtime

ailia TFLite Runtime is a TensorFlow Lite compatible inference engine. Written in C99, it supports inference on OS-less (bare-metal) and RTOS targets. It also supports high-speed inference using Intel MKL on PC, and runs 29 times faster than the official TensorFlow Lite.

Install

Obtain the ailia TFLite Runtime package from ax Inc., then run the following commands:

cd ailia_tflite_runtime/python
python3 bootstrap.py
pip3 install .
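
Once installed, the runtime can be called directly from Python. The following is a minimal inference sketch; it assumes the package exposes a TensorFlow Lite compatible Interpreter class under an ailia_tflite module and that a quantized model file is available locally (both are assumptions based on the runtime's stated TensorFlow Lite compatibility, not confirmed by this README):

# Minimal inference sketch. Assumptions (not confirmed by this README):
# - the installed package exposes a TensorFlow Lite compatible Interpreter
#   class in an "ailia_tflite" module;
# - a quantized model file "model.tflite" exists in the working directory.
import numpy as np
import ailia_tflite

# Load the model and allocate its tensors.
interpreter = ailia_tflite.Interpreter(model_path="model.tflite")
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Build a dummy input matching the model's expected shape and dtype
# (fully quantized models usually take uint8 or int8 tensors).
dummy = np.zeros(input_details[0]["shape"], dtype=input_details[0]["dtype"])

interpreter.set_tensor(input_details[0]["index"], dummy)
interpreter.invoke()

output = interpreter.get_tensor(output_details[0]["index"])
print(output.shape)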

Models

Face detection

Model | Reference | Exported From | Netron
BlazeFace | PINTO_model_zoo | TensorFlow | Netron

Face recognition

Model | Reference | Exported From | Netron
Face Mesh | PINTO_model_zoo | TensorFlow | Netron

Hand recognition

Model | Reference | Exported From | Netron
Blaze Hand | PINTO_model_zoo | TensorFlow | Netron

Image classification

Model | Reference | Exported From | Netron
MobileNet | MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications | Keras | Netron
MobileNetV2 | MobileNetV2: Inverted Residuals and Linear Bottlenecks | Keras | Netron
ResNet50 | tf.keras.applications.resnet50.ResNet50 | Keras | Netron

Image segmentation

Model | Reference | Exported From | Netron
DeepLabv3+ | PINTO_model_zoo | TensorFlow | Netron

Object detection

Model | Reference | Exported From | Netron
MobileNetV2-SSDLite | PINTO_model_zoo | TensorFlow | Netron
YOLOv3 tiny | tensorflow-yolov4-tflite | TensorFlow | Netron

Options

You can benchmark with the -b option, and you can run with the official TensorFlow Lite interpreter instead of ailia TFLite Runtime by passing the --tflite option.
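
For instance, assuming the repository's per-model directory layout (the image_classification/mobilenetv2 path and script name below are illustrative, not confirmed):

cd image_classification/mobilenetv2
python3 mobilenetv2.py -b          # benchmark inference with ailia TFLite Runtime
python3 mobilenetv2.py --tflite    # run with the official TensorFlow Lite interpreter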
