Human Pose Detection on EdgeTPU

Overview

Coral PoseNet

Pose estimation refers to computer vision techniques that detect human figures in images and video, so that one could determine, for example, where someone’s elbow, shoulder or foot shows up in an image. PoseNet does not recognize who is in an image; it simply estimates where key body joints are.

This repo contains a set of PoseNet models that are quantized and optimized for use on Coral's Edge TPU, together with some example code that shows how to run it on a camera stream.

Why PoseNet?

Pose estimation has many uses, from interactive installations that react to the body, to augmented reality, animation, fitness applications, and more. We hope the accessibility of this model inspires more developers and makers to experiment and apply pose detection to their own unique projects, and to demonstrate how machine learning can be deployed in ways that are anonymous and private.

How does it work?

At a high level pose estimation happens in two phases:

  1. An input RGB image is fed through a convolutional neural network. In our case this is a MobileNet V1 architecture. Instead of a classification head, however, there is a specialized head that produces a set of heatmaps (one for each kind of keypoint) and some offset maps. This step runs on the EdgeTPU. The results are then fed into step 2.

  2. A special multi-pose decoding algorithm is used to decode poses, pose confidence scores, keypoint positions, and keypoint confidence scores. Note that, unlike the TensorFlow.js version, we have created a custom op in TensorFlow Lite and appended it to the network graph itself. This custom op does the decoding (on the CPU) as a post-processing step. The advantage is that we don't have to deal with the heatmaps directly; when we call this network through the Coral Python API we simply get a series of keypoints back from the network. A conceptual sketch of this decoding step is shown below.
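
To make the heatmap-plus-offset idea concrete, here is a minimal single-pose decode in numpy. This is not the multi-pose algorithm the custom op implements, and the tensor shapes and output stride are assumptions for illustration only:

import numpy as np

OUTPUT_STRIDE = 16  # spacing, in input pixels, of the output grid (an assumption)

def decode_single_pose(heatmaps, offsets):
    """Decode one pose. heatmaps: [H, W, 17] keypoint scores; offsets: [H, W, 34],
    y-offsets in the first 17 channels and x-offsets in the remaining 17."""
    num_keypoints = heatmaps.shape[-1]
    keypoints, scores = [], []
    for k in range(num_keypoints):
        # Pick the highest-scoring cell in this keypoint's heatmap.
        y, x = np.unravel_index(np.argmax(heatmaps[:, :, k]), heatmaps.shape[:2])
        # Refine the coarse grid position with the learned offset vector.
        dy = offsets[y, x, k]
        dx = offsets[y, x, k + num_keypoints]
        keypoints.append((y * OUTPUT_STRIDE + dy, x * OUTPUT_STRIDE + dx))
        scores.append(float(heatmaps[y, x, k]))
    return keypoints, scores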

If you're interested in the gory details of the decoding algorithm and how PoseNet works under the hood, I recommend you take a look at the original research paper or this medium post, which describes the raw heatmaps produced by the convolutional model.

Important concepts

Pose: at the highest level, PoseNet will return a pose object that contains a list of keypoints and an instance-level confidence score for each detected person.

Keypoint: a part of a person’s pose that is estimated, such as the nose, right ear, left knee, right foot, etc. It contains both a position and a keypoint confidence score. PoseNet currently detects 17 keypoints illustrated in the following diagram:

pose keypoints

Keypoint Confidence Score: this is the confidence that an estimated keypoint position is accurate. It ranges between 0.0 and 1.0, and can be used to hide keypoints that are not deemed strong enough.

Keypoint Position: 2D x and y coordinates in the original input image where a keypoint has been detected.
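
The following sketch shows one way to model these concepts in Python; the class and field names are illustrative, not the exact ones defined in pose_engine.py:

from dataclasses import dataclass
from typing import Dict

@dataclass
class Keypoint:
    x: float      # keypoint position in original input image coordinates
    y: float
    score: float  # keypoint confidence score, 0.0 to 1.0

@dataclass
class Pose:
    keypoints: Dict[str, Keypoint]  # e.g. 'NOSE' -> Keypoint, up to 17 entries
    score: float                    # instance-level pose confidence score

def strong_keypoints(pose, threshold=0.5):
    # Hide keypoints that are not deemed strong enough.
    return {label: kp for label, kp in pose.keypoints.items()
            if kp.score >= threshold}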

Examples in this repo

NOTE: PoseNet relies on the latest PyCoral API, tflite_runtime API, and libedgetpu1-std or libedgetpu1-max:

Please also update your system before running these examples. For more information on updating see:

To install all other requirements for third-party libraries, simply run

sh install_requirements.sh

simple_pose.py

A minimal example that simply downloads an image and prints the pose keypoints.

python3 simple_pose.py

pose_camera.py

A camera example that streams the camera image through PoseNet and draws the pose on top as an overlay. This is a great first example to run to familiarize yourself with the network and its outputs.

Run a simple demo like this:

python3 pose_camera.py

If the camera and monitor are both facing you, consider adding the --mirror flag:

python3 pose_camera.py --mirror

In this repo we have included three PoseNet model files for different input resolutions. The larger resolutions are slower, of course, but allow a wider field of view, or for further-away poses to be processed correctly. A sketch after the following list shows one way to pick between them.

posenet_mobilenet_v1_075_721_1281_quant_decoder_edgetpu.tflite
posenet_mobilenet_v1_075_481_641_quant_decoder_edgetpu.tflite
posenet_mobilenet_v1_075_353_481_quant_decoder_edgetpu.tflite
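
As a sketch, here is one way to map a requested camera resolution to the matching model file. The helper itself is hypothetical (it is not part of pose_camera.py's interface), but the pairing of resolutions to files mirrors the list above:

MODELS = {
    (480, 360): 'posenet_mobilenet_v1_075_353_481_quant_decoder_edgetpu.tflite',
    (640, 480): 'posenet_mobilenet_v1_075_481_641_quant_decoder_edgetpu.tflite',
    (1280, 720): 'posenet_mobilenet_v1_075_721_1281_quant_decoder_edgetpu.tflite',
}

def model_for_resolution(width, height):
    # Each camera resolution gets the model whose input size matches it best.
    try:
        return 'models/mobilenet/' + MODELS[(width, height)]
    except KeyError:
        raise ValueError('supported resolutions: %s' % sorted(MODELS))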

You can change the camera resolution by using the --res parameter:

python3 pose_camera.py --res 480x360  # fast but low res
python3 pose_camera.py --res 640x480  # default
python3 pose_camera.py --res 1280x720 # slower but high res

anonymizer.py

A fun little app that demonstrates how Coral and PoseNet can be used to analyze human behavior in an anonymous and privacy-preserving way.

PoseNet converts an image of a human into a mere skeleton, which captures their position and movement over time but discards any precisely identifying features and the original camera image. Because Coral devices run all the image analysis locally, the actual image is never streamed anywhere and is immediately discarded. The poses can be safely stored or analyzed.

For example, a store owner may want to study the behavior of customers as they move through the store, in order to optimize flow and improve product placement. A museum may want to track which areas are busiest at which times, so as to give guidance on which exhibits currently have the shortest waiting times.

With Coral this is possible without recording anybody's image directly or streaming data to a cloud service - instead the images are immediately discarded.

The anonymizer is a small app that demonstrates this in a fun way. To use the anonymizer, set up your camera in a sturdy position. Launch the app and walk out of the frame. This demo waits until no one is in the frame, then stores the 'background' image. Now, step back in. You'll see your current pose overlaid over a static image of the background. A sketch of this background-freeze logic follows.
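
A minimal sketch of that background-freeze logic, assuming a per-frame callback that receives the camera frame and the detected poses (the names are illustrative; see anonymizer.py for the real implementation):

background = None  # locked in once the frame is empty

def on_frame(frame, poses, draw_pose):
    """frame: RGB numpy array; poses: detected poses; draw_pose: overlay helper."""
    global background
    if background is None:
        if any(p.score > 0.3 for p in poses):
            return frame           # someone is still in view; keep waiting
        background = frame.copy()  # frame is empty: lock it as the background
        return frame
    out = background.copy()        # the person's actual pixels are never kept
    for p in poses:
        draw_pose(out, p)          # draw only the skeleton over the background
    return out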

python3 anonymizer.py

(If the camera and monitor are both facing you, consider adding the --mirror flag.)

video of three people interacting with the anonymizer demo

synthesizer.py

This demo allows people to control musical synthesizers with their arms. Up to 3 people are each assigned a different instrument and octave, and control the pitch with their right wrists and the volume with their left wrists.

You'll need to install FluidSynth and a General Midi SoundFont:

apt install fluidsynth fluid-soundfont-gm
pip3 install pyfluidsynth

Now you can run the demo like this:

python3 synthesizer.py
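
As a rough sketch of the idea, here is how wrist keypoints might drive pyfluidsynth. The note and volume mappings and the SoundFont path are assumptions; the real synthesizer.py differs in its details:

import fluidsynth

fs = fluidsynth.Synth()
fs.start()  # pass driver='alsa' (for example) if the default audio driver fails
sfid = fs.sfload('/usr/share/sounds/sf2/FluidR3_GM.sf2')  # from fluid-soundfont-gm
fs.program_select(0, sfid, 0, 0)  # channel 0, bank 0, preset 0

def play_from_wrists(right_wrist_y, left_wrist_y, frame_height):
    # Raising the right wrist raises the pitch (one octave above middle C);
    # raising the left wrist raises the volume.
    note = 60 + int(12 * (1.0 - right_wrist_y / frame_height))
    velocity = int(127 * (1.0 - left_wrist_y / frame_height))
    fs.noteon(0, note, velocity)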

The PoseEngine class

The PoseEngine class (defined in pose_engine.py) allows easy access to the PoseNet network from Python, using the EdgeTPU API.

You simply initialize the class with the location of the model .tflite file and then call DetectPosesInImage, passing a numpy object that contains the image. The numpy object should be in uint8, [Y,X,RGB] format.

A minimal example might be:

import os

from PIL import Image
from pose_engine import PoseEngine

# Download a test image.
os.system('wget https://upload.wikimedia.org/wikipedia/commons/thumb/3/38/'
          'Hindu_marriage_ceremony_offering.jpg/'
          '640px-Hindu_marriage_ceremony_offering.jpg -O /tmp/couple.jpg')
pil_image = Image.open('/tmp/couple.jpg').convert('RGB')

# Load the model and detect poses in the image.
engine = PoseEngine(
    'models/mobilenet/posenet_mobilenet_v1_075_481_641_quant_decoder_edgetpu.tflite')
poses, _ = engine.DetectPosesInImage(pil_image)

# Print every keypoint of each sufficiently confident pose.
for pose in poses:
    if pose.score < 0.4:
        continue
    print('\nPose Score: ', pose.score)
    for label, keypoint in pose.keypoints.items():
        print('  %-20s x=%-4d y=%-4d score=%.1f' %
              (label, keypoint.point[0], keypoint.point[1], keypoint.score))

To try this, run

python3 simple_pose.py

You should see output like this:

Inference time: 14 ms

Pose Score:  0.60698134
  NOSE                 x=211  y=152  score=1.0
  LEFT_EYE             x=224  y=138  score=1.0
  RIGHT_EYE            x=199  y=136  score=1.0
  LEFT_EAR             x=245  y=135  score=1.0
  RIGHT_EAR            x=183  y=129  score=0.8
  LEFT_SHOULDER        x=269  y=169  score=0.7
  RIGHT_SHOULDER       x=160  y=173  score=1.0
  LEFT_ELBOW           x=281  y=255  score=0.6
  RIGHT_ELBOW          x=153  y=253  score=1.0
  LEFT_WRIST           x=237  y=333  score=0.6
  RIGHT_WRIST          x=163  y=305  score=0.5
  LEFT_HIP             x=256  y=318  score=0.2
  RIGHT_HIP            x=171  y=311  score=0.2
  LEFT_KNEE            x=221  y=342  score=0.3
  RIGHT_KNEE           x=209  y=340  score=0.3
  LEFT_ANKLE           x=188  y=408  score=0.2
  RIGHT_ANKLE          x=189  y=410  score=0.2

Comments
  • Request PoseNet *.tflite file (not compiled with edgetpu_compiler yet)

    Hi Team, currently this repository only contains the *_edgetpu.tflite file for PoseNet. I would like to request the *.tflite file (not yet compiled with edgetpu_compiler), for the purpose of co-compiling two models.

    Can you please help me?

    opened by AlexTraan 14
  • AttributeError: 'Delegate' object has no attribute '_library' when trying to execute quant_decoder

    @ivelin I have gone through precisely this a while back and am currently trying to update the implementation. I will do my best to describe what I did.

    Does that mean EdgeTPU knows how to resolve the CustomOp reference in the graph and execute it?

    Not quite, at least not as far as I can tell. It appears to be provided by the EdgeTPU library.

    Is there a way to help TFLite resolve the CustomOp reference in the graph or that's an EdgeTPU feature only?

    I was previously unable to find a way.

    Looks like one way to inform TFLite of custom ops is to rebuild it from source. However that requires the CustomOp implementation to be available at build time.

    Correct. Fortunately, it is. As I mentioned above, it appears to be part of the Edge TPU library. I simply build this code into my binary. You can find the op here: https://github.com/google-coral/edgetpu/blob/master/src/cpp/posenet/posenet_decoder_op.cc

    Related code that is likely to be needed is available there as well. I have successfully used this CPU-only on Linux, and am working on a build for Windows. I am not fond of the Bazel build system, particularly because, for whatever reason, TensorFlow does not use up-to-date versions of Bazel, and building on Windows is not easy. Frankly, neither is building on Linux, but there is at least a Docker image available for that.

    Hello jwoolston! I'm trying to do the same right now, but I'm facing the following problem:

    AttributeError: 'Delegate' object has no attribute '_library'

    (decoder) [ec2-user@ip-172-31-5-112 mobilenet]$ vim run_model.py
    (decoder) [ec2-user@ip-172-31-5-112 mobilenet]$ python run_model.py
    Traceback (most recent call last):
      File "run_model.py", line 3, in <module>
        tpu = tflite.load_delegate('libedgetpu.so.1')
      File "/home/ec2-user/decoder/lib/python3.7/site-packages/tflite_runtime/interpreter.py", line 166, in load_delegate
        delegate = Delegate(library, options)
      File "/home/ec2-user/decoder/lib/python3.7/site-packages/tflite_runtime/interpreter.py", line 90, in __init__
        self._library = ctypes.pydll.LoadLibrary(library)
      File "/usr/local/lib/python3.7/ctypes/__init__.py", line 442, in LoadLibrary
        return self._dlltype(name)
      File "/usr/local/lib/python3.7/ctypes/__init__.py", line 364, in __init__
        self._handle = _dlopen(self._name, mode)
    OSError: libedgetpu.so.1: cannot open shared object file: No such file or directory
    Exception ignored in: <function Delegate.__del__ at 0x7fa6194d70e0>
    Traceback (most recent call last):
      File "/home/ec2-user/decoder/lib/python3.7/site-packages/tflite_runtime/interpreter.py", line 125, in __del__
        if self._library is not None:
    AttributeError: 'Delegate' object has no attribute '_library'

    When executing this code:

    import tflite_runtime.interpreter as tflite

    tpu = tflite.load_delegate('../posenet-tflite-convert/data/edgetpu/libedgetpu/direct/aarch64/libedgetpu.so.1')
    # posenet = tflite.load_delegate('posenet_decoder.so')
    interpreter = tflite.Interpreter(model_path='posenet_mobilenet_v1_075_481_641_quant_decoder.tflite')
    interpreter.allocate_tensors()

    I used this Docker image to generate the required .so libraries: https://github.com/muka/posenet-tflite-convert

    But I'm not sure how to connect the libraries with my code or with the tflite_runtime.

    Please, could you help me get a clearer picture of this problem, or tell me more about the solution you used?

    I'm working on Amazon Linux 2.

    Greetings!

    Originally posted by @lupitia1 in https://github.com/google-coral/project-posenet/issues/36#issuecomment-748903000
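
    For reference, a minimal sketch of loading both the Edge TPU delegate and the PoseNet decoder library with tflite_runtime. The paths are platform-dependent assumptions, and libedgetpu.so.1 must be installed where the dynamic loader can find it:

    import tflite_runtime.interpreter as tflite

    # Both shared libraries must be resolvable at runtime; paths are illustrative.
    edgetpu_delegate = tflite.load_delegate('libedgetpu.so.1')
    decoder_delegate = tflite.load_delegate('posenet_lib/x86_64/posenet_decoder.so')

    interpreter = tflite.Interpreter(
        model_path='models/mobilenet/posenet_mobilenet_v1_075_481_641_quant_decoder_edgetpu.tflite',
        experimental_delegates=[edgetpu_delegate, decoder_delegate])
    interpreter.allocate_tensors()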

    opened by lupitia1 12
  • project-posenet and others not working with enterprise-eagle-20200724205123

    We have 32 Coral Dev Boards, and most of the demos now fail after flashing enterprise-eagle-20200724205123, because edgetpu (https://github.com/google-coral/edgetpu) has been branded legacy and replaced by libedgetpu.

    mendel@k3s-tpu-09:~/project-posenet$ python3 simple_pose.py
    Traceback (most recent call last):
      File "simple_pose.py", line 18, in <module>
        from pose_engine import PoseEngine
      File "/home/mendel/project-posenet/pose_engine.py", line 20, in <module>
        from edgetpu import __version__ as edgetpu_version
    ModuleNotFoundError: No module named 'edgetpu'

    Are there any plans to update the example code soon? Can I substitute libedgetpu easily?

    type:bug comp:API 
    opened by jjgraham 11
  • PoseNet Crashing

    Description

    Hi! We're interested in running PoseNet to track physical marker points continuously (or at least for long periods of time). When we run simple_pose.py, the Coral device consistently crashes after running for 2-5 minutes. How can we fix this problem? Thanks for the help!

    Issue Type: Performance
    Operating System: Mendel Linux
    Coral Device: Dev Board

    type:performance subtype:Mendel Linux Hardware:Dev Board 
    opened by nihalraman 10
  • Can't run simple_pose.py

    OSError: /lib/x86_64-linux-gnu/libm.so.6: version `GLIBC_2.29' not found (required by ./posenet_lib/x86_64/posenet_decoder.so)

    Any idea on this?

    I compiled my own posenet_decoder as well, which results in a different error. Does this .so only work with Ubuntu 20?

    opened by anoop54 10
  • The latest Edge TPU runtime (v15.0) possibly breaks old models?

    Running the models against the latest edgetpu runtime gave me an error that previously had been caused by package upgrades.

    python simple_pose.py
    
    Traceback (most recent call last):
      File "simple_pose.py", line 25, in <module>
        engine = PoseEngine('models/mobilenet/posenet_mobilenet_v1_075_481_641_quant_decoder_edgetpu.tflite')
      File "/home/pi/WorkingDirectory/opensource/project-posenet/pose_engine.py", line 85, in __init__
        BasicEngine.__init__(self, model_path)
      File "/home/pi/.pyenv/versions/dash_app/lib/python3.7/site-packages/edgetpu/basic/basic_engine.py", line 92, in __init__
        self._engine = BasicEnginePythonWrapper.CreateFromFile(model_path)
    RuntimeError: Internal: Unsupported data type in custom op handler: 1006632960Node number 0 (edgetpu-custom-op) failed to prepare.
    Failed to allocate tensors.
    

    Here's the setup on my Pi 4 running Raspbian Buster.

    libedgetpu1-max:armhf==15.0
    pycoral==1.0.0
    tflite-runtime==2.5.0
    

    That was not an issue previously under

    libedgetpu1-max:armhf==14.1
    python3-edgetpu==??
    tflite-runtime==2.1.0
    

    which I can't figure out how to downgrade to.

    type:bug comp:API comp:thirdparty 
    opened by jingw222 10
  • Is it possible to run the Custom OP on CPU with TFLite?

    Coral team, thank you for the great example.

    We are working on an open source project that allows users to take advantage of EdgeTPU when available. Otherwise inference falls back to the CPU.

    The README for this PoseNet example states that the CustomOP is embedded in the graph itself. Does that mean EdgeTPU knows how to resolve the CustomOp reference in the graph and execute it?

    Trying to run the graph on TFLite on CPU without EdgeTPU produces the following error, which is a known limitation of TFLite:

    def AllocateTensors(self):
    >       return _interpreter_wrapper.InterpreterWrapper_AllocateTensors(self)
    E       RuntimeError: Encountered unresolved custom op: PosenetDecoderOp.Node number 32 (PosenetDecoderOp) failed to prepare.
    

    Is there a way to help TFLite resolve the CustomOp reference in the graph, or is that an EdgeTPU feature only?

    Looks like one way to inform TFLite of custom ops is to rebuild it from source. However, that requires the CustomOp implementation to be available at build time.

    Any guidance would be appreciated.

    Thank you,

    Ivelin

    type:support comp:model 
    opened by ivelin 9
  • Excuse me, I want to ask how to input JPEG

    mendel@deft-dog:~/project-posenet$ python3 pose_camera.py --jpeg
    /home/mendel/project-posenet/gstreamer.py:15: PyGIWarning: Gst was imported without specifying a version first. Use gi.require_version('Gst', '1.0') before import to ensure that the right version gets loaded.
      from gi.repository import GLib, GObject, Gst, GstBase, GstVideo, Gtk
    /home/mendel/project-posenet/gstreamer.py:15: PyGIWarning: GstBase was imported without specifying a version first. Use gi.require_version('GstBase', '1.0') before import to ensure that the right version gets loaded.
      from gi.repository import GLib, GObject, Gst, GstBase, GstVideo, Gtk
    /home/mendel/project-posenet/gstreamer.py:15: PyGIWarning: GstVideo was imported without specifying a version first. Use gi.require_version('GstVideo', '1.0') before import to ensure that the right version gets loaded.
      from gi.repository import GLib, GObject, Gst, GstBase, GstVideo, Gtk
    /home/mendel/project-posenet/gstreamer.py:15: PyGIWarning: Gtk was imported without specifying a version first. Use gi.require_version('Gtk', '3.0') before import to ensure that the right version gets loaded.
      from gi.repository import GLib, GObject, Gst, GstBase, GstVideo, Gtk
    Loading model:  models/mobilenet/posenet_mobilenet_v1_075_481_641_quant_decoder_edgetpu.tflite
    Gstreamer pipeline:  v4l2src device=/dev/video0 ! image/jpeg,width=640,height=480,framerate=30/1 ! decodebin ! videoflip video-direction=identity ! tee name=t
                   t. ! queue max-size-buffers=1 leaky=downstream ! videoconvert ! freezer name=freezer ! rsvgoverlay name=overlay
                      ! videoconvert ! autovideosink
                   t. ! queue max-size-buffers=1 leaky=downstream ! videoconvert ! videoscale ! video/x-raw,width=641,height=480 ! videobox name=box autocrop=true
                      ! video/x-raw,format=RGB,width=641,height=481 ! appsink name=appsink emit-signals=true max-buffers=1 drop=true
    
    Error: gst-stream-error-quark: Internal data stream error. (1): gstbasesrc.c(3055): gst_base_src_loop (): /GstPipeline:pipeline0/GstV4l2Src:v4l2src0:
    streaming stopped, reason not-negotiated (-4)
    

    If I enter the file path:

    python3 pose_camera.py --jpeg /home/mendel/model/omodel/o32.jpeg
    /home/mendel/project-posenet/gstreamer.py:15: PyGIWarning: Gst was imported without specifying a version first. Use gi.require_version('Gst', '1.0') before import to ensure that the right version gets loaded.
      from gi.repository import GLib, GObject, Gst, GstBase, GstVideo, Gtk
    /home/mendel/project-posenet/gstreamer.py:15: PyGIWarning: GstBase was imported without specifying a version first. Use gi.require_version('GstBase', '1.0') before import to ensure that the right version gets loaded.
      from gi.repository import GLib, GObject, Gst, GstBase, GstVideo, Gtk
    /home/mendel/project-posenet/gstreamer.py:15: PyGIWarning: GstVideo was imported without specifying a version first. Use gi.require_version('GstVideo', '1.0') before import to ensure that the right version gets loaded.
      from gi.repository import GLib, GObject, Gst, GstBase, GstVideo, Gtk
    /home/mendel/project-posenet/gstreamer.py:15: PyGIWarning: Gtk was imported without specifying a version first. Use gi.require_version('Gtk', '3.0') before import to ensure that the right version gets loaded.
      from gi.repository import GLib, GObject, Gst, GstBase, GstVideo, Gtk
    usage: pose_camera.py [-h] [--mirror] [--model MODEL]
                          [--res {480x360,640x480,1280x720}] [--videosrc VIDEOSRC]
                          [--h264] [--jpeg]
    pose_camera.py: error: unrecognized arguments: /home/mendel/model/omodel/o32.jpeg
    

    So, I want to ask about the basic problem here... Thank you!

    type:support Hardware:Dev Board stat:awaiting response 
    opened by XuTZ0912 7
  • Error running ResNet50 on Pi 4 with USB Accelerator

    The model I'm currently testing is posenet_resnet_50_640_480_16_quant_edgetpu_decoder.tflite. Loading the model with the PoseEngine API seems to work just fine, but as soon as the model starts to run inferences, it aborts and throws this error:

    F :842] transfer on tag 2 failed. Abort. Deadline exceeded: USB transfer error 2 [LibUsbDataOutCallback]
    

    And that's not the case with the MobileNet version posenet_mobilenet_v1_075_481_641_quant_decoder_edgetpu.tflite at all. What gives?

    Hardware:

    • Raspberry Pi 4
    • Raspberry Pi Camera Module v2
    • Coral USB Accelerator (connected to a USB 3 port on the Pi)
    • Official USB-C power supply (3A)

    Software:

    • python3-edgetpu v13.0
    • libedgetpu1-std:armhf v13.0
    opened by jingw222 7
  • Security Policy violation Binary Artifacts

    This issue was automatically created by Allstar.

    Security Policy Violation: the project is out of compliance with the Binary Artifacts policy: binaries are present in the source code.

    Rule Description: Binary artifacts are an increased security risk in your repository. Binary artifacts cannot be reviewed, allowing the introduction of possibly obsolete or maliciously subverted executables. For more information see the Security Scorecards documentation for Binary Artifacts.

    Remediation Steps: To remediate, remove the generated executable artifacts from the repository.

    Artifacts Found

    • posenet_lib/aarch64/posenet_decoder.so
    • posenet_lib/armv7a/posenet_decoder.so
    • posenet_lib/x86_64/posenet_decoder.so

    Additional Information: This policy is drawn from Security Scorecards, a tool that scores a project's adherence to security best practices. You may wish to run a Scorecards scan directly on this repository for more details.


    Allstar has been installed on all Google managed GitHub orgs. Policies are gradually being rolled out and enforced by the GOSST and OSPO teams. Learn more at http://go/allstar

    This issue will auto resolve when the policy is in compliance.

    Issue created by Allstar. See https://github.com/ossf/allstar/ for more information. For questions specific to the repository, please contact the owner or maintainer.

    allstar 
    opened by google-allstar-prod[bot] 6
  • Probably naive question

    Description

    ...Would it be possible to estimate the 6DOF pose of the camera from the pose detections of the Coral? I mean rotation and translation vectors.

    Issue Type: Support
    Operating System: Linux
    Coral Device: USB Accelerator
    Other Devices: Raspberry Pi 4
    Programming Language: Python 3.9

    type:support subtype:ubuntu/linux Hardware:USB Accelerator comp:thirdparty 
    opened by neilyoung 6
  • frames overlaying each other

    Description

    Hi there, I wanted to save multiple frames from the anonymizer.py file for training a model. Is there a way to completely get rid of the overlaid poses and capture only the current frame in real time? I did try --mirror; it did in fact reduce the number of overlays, but it never got rid of all of them to allow capturing only one wireframe of the individual.

    Issue Type: Bug, Feature Request
    Operating System: Mendel Linux
    Coral Device: Dev Board Mini
    Other Devices: Raspberry Pi 3
    Programming Language: Python 3.7

    type:feature type:bug subtype:Mendel Linux Hardware:Dev Board Mini comp:model comp:thirdparty 
    opened by zain-altaf 5
  • view output on streaming server

    Can I stream outputs to a remote server?

    Hi, really cool stuff!

    I am currently running pose_camera.py and I have a monitor connected to the Coral via HDMI, which is great for seeing the segments and keypoints being picked up by the model. Is it possible to send these outputs straight to a streaming server I could view in Chrome? It would be great not to need an extra monitor for this.

    I.e. I want to have pose_camera.py stream its outputs just like this demo: https://coral.ai/docs/dev-board/camera/#view-with-a-streaming-server, which I run by calling edgetpu_demo --stream

    Issue Type: Performance, Support, Feature Request
    Operating System: Mendel Linux, Mac OS
    Coral Device: Dev Board
    Programming Language: Python 3.8

    type:feature type:support type:performance subtype:Mendel Linux subtype:macOS Hardware:Dev Board comp:demo comp:model 
    opened by ilonadem 5
  • Anonymizer background overlays all previous poses

    Description

    The anonymizer is continually overlaying poses onto the base image: pose overlays just keep getting added to the locked background. I would like it to show only one pose on the background at a time.

    Issue Type: Bug
    Operating System: Mendel Linux
    Coral Device: Dev Board
    Programming Language: Python 3.5

    type:bug subtype:Mendel Linux Hardware:Dev Board 
    opened by amandalam2000 2
  • Not able to run pose_camera.py on a video

    Description

    I am trying to run the following command on the Dev Board:

    mendel@silly-llama:~/project-posenet$ python3 pose_camera.py --videosrc BenVid.MOV

    The goal is to map out the poses of the people in the video, but I am getting the error shown in the log output below. Does anyone have an idea of what could be missing? I am running headless, connecting via mdt shell, and downloaded a fresh project repo without any changes.

    Issue Type: Bug, Build/Install, Support
    Operating System: Mac OS
    Coral Device: Dev Board

    Relevant Log Output

    Traceback (most recent call last):
      File "pose_camera.py", line 166, in <module>
        main()
      File "pose_camera.py", line 162, in main
        run(run_inference, render_overlay)
      File "pose_camera.py", line 127, in run
        jpeg=args.jpeg
      File "/home/mendel/project-posenet/gstreamer.py", line 366, in run_pipeline
        pipeline.run()
      File "/home/mendel/project-posenet/gstreamer.py", line 74, in run
        sinkelement.set_property('sync', False)
    AttributeError: 'NoneType' object has no attribute 'set_property'
    
    type:build/install type:support type:bug subtype:macOS Hardware:Dev Board 
    opened by pakcodee 3
  • Work in kafka with pose_camera.py

    Description

    Good day. I have the following question: I want to send video from pose_camera.py to a Kafka broker. For this I need to understand where in pose_camera.py (or any of the project files) I can grab the output video stream with the pose drawn on top. For example, with the cv2 library it would be something like:

    video = cv2.VideoCapture(path_to_video)
    while video.isOpened():
        success, frame = video.read()
        if not success:
            break
        cv2.imencode('.jpg', frame)[1].tobytes()

    Maybe you can help me with this problem? P.S. I wrote my own code for the cv2 case with face recognition, and now I want to compare the speed of writing to a Kafka topic between cv2 and GStreamer. Thanks for helping.

    Issue Type: Support
    Operating System: Ubuntu
    Coral Device: USB Accelerator
    Programming Language: Python 3.8

    type:support subtype:ubuntu/linux Hardware:USB Accelerator stat:community support 
    opened by Govraskirill 0