The Narya API lets you track soccer players from camera inputs and evaluate them with an Expected Discounted Goal (EDG) agent.

Overview

Narya

The Narya API lets you track soccer players from camera inputs and evaluate them with an Expected Discounted Goal (EDG) agent. This repository contains the implementation of the paper cited below. We also make available all of our pretrained agents, as well as the datasets we used.

The goal of this repository is to allow anyone, even without access to soccer data, to produce their own data and to analyse it with powerful tools. We also hope that by releasing our training procedures and datasets, better models will emerge and improve this tool iteratively.

We also built 4 notebooks to explain how to use our models, along with a Colab notebook, and released a blog post version of these notebooks here.

We tried to make everything easy to reuse; we hope anyone will be able to:

  • Use our datasets to train other models
  • Finetune some of our trained models
  • Use our trackers
  • Evaluate players with our EDG Agent
  • and much more

At the bottom of the readme you can find links to our models and datasets, as well as to tools and models trained by the community.

Installation

You can install narya from source:

git clone https://github.com/DonsetPG/narya.git && cd narya && pip3 install -r requirements.txt

Google Football:

Google Football needs to be installed differently. Please see their repository for instructions.

Google Football Repo

Player tracking:

The installed version is directly compatible with the player tracking models. However, some errors might occur with keras.load_model when the architecture of the model is contained in the .h5 file. If in doubt, Python 3.7 always works with our installation.
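If keras.load_model fails on an .h5 file that embeds the architecture (for example with "ValueError: bad marshal data"), a common workaround is to rebuild the architecture in code and load a weights-only file instead. A minimal sketch, assuming the wrapped Keras model is exposed as a .model attribute:

from narya.models.keras_models import DeepHomoModel

# Build the architecture in code instead of deserializing it from the .h5,
# then load the weights-only file (deep_homo_model.h5 in the links below).
# Note: the .model attribute is an assumption about the wrapper's internals.
direct_homography_model = DeepHomoModel(pretrained=False)
direct_homography_model.model.load_weights('deep_homo_model.h5')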

EDG:

As the Google Football API does not currently support TensorFlow 2, you need to manually downgrade your TensorFlow version in order to use our EDG agent:

pip3 install tensorflow==1.13.1
pip3 install tensorflow_probability==0.5.0

Models & Datasets:

The models will be downloaded automatically with the library. If needed, they can be accessed via the links at the end of the readme. The datasets are also available below.

Tracking Players Models:

Each model can be used on its own, or you can use the full tracker directly.

Single Model

Each pretrained model is built on the same interface to make it as easy as possible to use: you import it, and you use it. Preprocessing functions and framework differences are handled internally.

Let's import an image:

import numpy as np
import cv2
image = cv2.imread('test_image.jpg')
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)

Now, let's create our models:

from narya.models.keras_models import DeepHomoModel
from narya.models.keras_models import KeypointDetectorModel
from narya.models.gluon_models import TrackerModel

direct_homography_model = DeepHomoModel()

keypoint_model = KeypointDetectorModel(
    backbone='efficientnetb3', num_classes=29, input_shape=(320, 320),
)

tracking_model = TrackerModel(pretrained=True, backbone='ssd_512_resnet50_v1_coco')

We can now directly make predictions:

homography_1 = direct_homography_model(image)
keypoints_masks = keypoint_model(image)
cid, score, bbox = tracking_model(image)
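The tracking model returns class ids, confidence scores, and bounding boxes for the detected players and ball. A minimal sketch of filtering out low-confidence detections, assuming the outputs are NumPy arrays of shape (N, 1), (N, 1), and (N, 4), the usual SSD output convention:

import numpy as np

# Keep only detections above a confidence threshold (0.5 is an arbitrary choice);
# adapt the conversion if the wrapper returns framework tensors instead of arrays.
keep = np.squeeze(score) > 0.5
boxes = bbox[keep]
classes = np.squeeze(cid)[keep]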

In the tracking class, we also process the estimated homographies with interpolation and filters. This ensures a smooth estimation over the entire video.
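This is handled internally, but the idea is easy to illustrate. A minimal sketch of smoothing consecutive 3x3 homography estimates with an exponential moving average (an illustration, not narya's exact filter):

import numpy as np

def smooth_homography(h_prev, h_new, alpha=0.9):
    """Blend consecutive homography estimates to reduce jitter between frames."""
    h_new = h_new / h_new[2, 2]  # normalize so successive matrices are comparable
    if h_prev is None:
        return h_new
    return alpha * h_prev + (1.0 - alpha) * h_new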

Processing:

We can now visualize or use each of these predictions. For example, to visualize the predicted keypoints:

from narya.utils.vizualization import visualize

visualize(
    image=denormalize(image.squeeze()),  # denormalize (from narya's utils) undoes the input preprocessing
    pr_mask=keypoints_masks[..., -1].squeeze(),
)

Full Tracker:

Given a list of images, one can easily apply our tracking algorithm:

from narya.tracker.full_tracker import FootballTracker

This tracker combines the 3 models seen above with a tracking/re-identification model. You can create it by specifying your frame rate and the size of the frame memory buffer:

tracker = FootballTracker(frame_rate=24.7, track_buffer=60)

Given a list of images, the full tracking is computed using:

trajectories = tracker(img_list, split_size=512, save_tracking_folder='test_tracking/',
                       template=template, skip_homo=None)  # template: a top-view image of the pitch
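The img_list argument is simply a list of frames. A minimal sketch for building one from a video file with OpenCV (the file name here is hypothetical):

import cv2

img_list = []
cap = cv2.VideoCapture('match_clip.mp4')  # hypothetical input video
while True:
    ret, frame = cap.read()
    if not ret:
        break
    # The models expect RGB images, as in the single-model example above.
    img_list.append(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
cap.release()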

We also built post-processing functions to handle the mistakes the tracker can make, as well as visualization tools to plot the data.

EDG:

The best way to use our EDG agent is to first convert your tracking data to the Google Football format, using the utils functions:

from narya.utils.google_football_utils import _save_data, _build_obs_stacked

data_google = _save_data(df, 'test_temo.dump')  # df: the tracking data produced above
observations = {
    'frame_count': [],
    'obs': [],
    'obs_count': [],
    'value': []
}
for i in range(len(data_google)):
    obs, obs_count = _build_obs_stacked(data_google, i)
    observations['frame_count'].append(i)
    observations['obs'].append(obs)
    observations['obs_count'].append(obs_count)

You can now easily load a pretrained agent and use it to get the value of any action:

from narya.analytics.edg_agent import AgentValue

agent = AgentValue(checkpoints=checkpoints)  # checkpoints: path to a pretrained agent checkpoint
value = agent.get_value([obs])

Processing:

You can use these values to plot the value of an action over time, or to plot a map of values at a given moment. You can use:

map_value = agent.get_edg_map(observations['obs'][20], observations['obs_count'][20], 79, 57, entity='ball')

and

import pandas as pd

for indx, obs in enumerate(observations['obs']):
    value = agent.get_value([obs])
    observations['value'].append(value)

df_dict = {
    'frame_count': observations['frame_count'],
    'value': observations['value']
}
df_ = pd.DataFrame(df_dict)

to compute an EDG map and the EDG over time of an action.
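For example, a minimal sketch of plotting the EDG of the action over time from the df_ built above (get_value may return an array, so each entry is squeezed to a scalar first):

import matplotlib.pyplot as plt
import numpy as np

values = [float(np.squeeze(v)) for v in df_['value']]
plt.plot(df_['frame_count'], values)
plt.xlabel('frame')
plt.ylabel('EDG')
plt.title('EDG of the action over time')
plt.show()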

Open Source

Our goal with this project was to build a powerful tool to analyse soccer plays, which led us to build a soccer player tracking model on top of it. We hope that by releasing our code, weights, and datasets, more people will be able to carry out amazing projects related to soccer/sport analysis.

If you find any bug, please open an issue. If you see possible improvements, or have trained a model you want to share, please open a pull request!

Thanks

A special thanks to Last Row for providing tracking data at the beginning to test our agent, and to Soccermatics for providing visualisation tools (and some motivation to start this project).

Citation

If you use Narya in your research and would like to cite it, we suggest you use the following citation:

@misc{garnier2021evaluating,
      title={Evaluating Soccer Player: from Live Camera to Deep Reinforcement Learning}, 
      author={Paul Garnier and Théophane Gregoir},
      year={2021},
      eprint={2101.05388},
      archivePrefix={arXiv},
      primaryClass={cs.LG}
}

Links:

Links to the models and datasets from the original paper:

| Model | Description | Link |
| --- | --- | --- |
| 11_vs_11_selfplay_last | EDG agent | https://storage.googleapis.com/narya-bucket-1/models/11_vs_11_selfplay_last |
| deep_homo_model.h5 | Direct homography estimation weights | https://storage.googleapis.com/narya-bucket-1/models/deep_homo_model.h5 |
| deep_homo_model_1.h5 | Direct homography estimation architecture | https://storage.googleapis.com/narya-bucket-1/models/deep_homo_model_1.h5 |
| keypoint_detector.h5 | Keypoint detection weights | https://storage.googleapis.com/narya-bucket-1/models/keypoint_detector.h5 |
| player_reid.pth | Player embedding weights | https://storage.googleapis.com/narya-bucket-1/models/player_reid.pth |
| player_tracker.params | Player & ball detection weights | https://storage.googleapis.com/narya-bucket-1/models/player_tracker.params |

The datasets can be downloaded at:

| Dataset | Description | Link |
| --- | --- | --- |
| homography_dataset.zip | Homography dataset (image, homography) | https://storage.googleapis.com/narya-bucket-1/dataset/homography_dataset.zip |
| keypoints_dataset.zip | Keypoint dataset (image, list of masks) | https://storage.googleapis.com/narya-bucket-1/dataset/keypoints_dataset.zip |
| tracking_dataset.zip | Tracking dataset in VOC format (image, bounding boxes for players/ball) | https://storage.googleapis.com/narya-bucket-1/dataset/tracking_dataset.zip |

Links to models trained by the community

Experimental data for vertical pitches:

| Model | Description | Link |
| --- | --- | --- |
| vertical_HomographyModel_0.0001_32.h5 | Direct homography estimation weights | https://storage.googleapis.com/narya-bucket-1/models/vertical_HomographyModel_0.0001_32.h5 |
| vertical_FPN_efficientnetb3_0.0001_32.h5 | Keypoint detection weights | https://storage.googleapis.com/narya-bucket-1/models/vertical_FPN_efficientnetb3_0.0001_32.h5 |

| Dataset | Description | Link |
| --- | --- | --- |
| vertical_samples_direct_homography.zip | Homography dataset (image, homography) | https://storage.googleapis.com/narya-bucket-1/dataset/vertical_samples_direct_homography.zip |
| vertical_samples_keypoints.zip | Keypoint dataset (image, list of masks) | https://storage.googleapis.com/narya-bucket-1/dataset/vertical_samples_keypoints.zip |

Tools

Tool for efficient creation of training labels:

Tool built by @larsmaurath to label football images: https://github.com/larsmaurath/narya-label-creator

Tool for creation of keypoints datasets:

Tool built by @kkoripl to create keypoints datasets - xml files and images resizing: https://github.com/kkoripl/NaryaKeyPointsDatasetCreator

Comments
  • Pitch keypoints id's description

    Hi,

    I'd like to train the homography model with new keypoint data, so I looked into your scripts and into the dataset. There are xml files with keypoint ids in there (29 of them or so). Are they the same for every dataset picture? I mean, is id = 0 always the middle of the pitch? Maybe you could make a graphic showing which ids correspond to which points on the pitch?

    opened by kkoripl 11
  • Application of DeepHomoModel to vertical pitches

    Hi,

    I have been trying to train the DeepHomoModel to identify vertical pitches, which was pretty straightforward with your excellent documentation.

    What I have noticed is that the training on vertical pitch data seems to be slower than for the horizontal pitch data you trained on in your paper. I have adapted "https://github.com/DonsetPG/narya/blob/master/narya/trainer/homography_train.py" to my new training data and also played around with the default pitch specs in "get_default_corners" to use a larger share of the image.

    I have currently trained on around 400 images, but results are significantly worse than when I train the model on 400 horizontal images from your data set.

    Do you have any sense why the model may not work as well for vertical pitches? Of course there may be many reasons why it doesn't train as well in my case, but wanted to see if you have any no-brainer reasons before I generate more training data.

    Below one of the best results I could generate:

    [image: narya_vert_pitch]

    Thanks a lot already!

    opened by larsmaurath 6
  • ValueError: bad marshal data (unknown type code)

    When I try to run models_examples.ipynb on my local machine, I get the error above. It happens at the line deep_homo_model = DeepHomoModel().

    I installed all the requirements. What can be the possible source of error here?

    Sorry for this long error.

    ---------------------------------------------------------------------------
    ValueError                                Traceback (most recent call last)
    <ipython-input-6-3f60b79e4c6b> in <module>
          1 from narya.models.keras_models import DeepHomoModel
          2 
    ----> 3 deep_homo_model = DeepHomoModel()
          4 
          5 # WEIGHTS_PATH = (
    
    ~/Documents/narya/narya/models/keras_models.py in __init__(self, pretrained, input_shape)
         59         self.pretrained = pretrained
         60 
    ---> 61         self.resnet_18 = _build_resnet18()
         62 
         63         inputs = tf.keras.layers.Input((self.input_shape[0], self.input_shape[1], 3))
    
    ~/Documents/narya/narya/models/keras_models.py in _build_resnet18()
         34     )
         35 
    ---> 36     resnet18 = tf.keras.models.load_model(resnet18_path_to_file)
         37     resnet18.compile()
         38 
    
    ~/anaconda3/envs/narya/lib/python3.8/site-packages/tensorflow/python/keras/saving/save.py in load_model(filepath, custom_objects, compile)
        182     if (h5py is not None and (
        183         isinstance(filepath, h5py.File) or h5py.is_hdf5(filepath))):
    --> 184       return hdf5_format.load_model_from_hdf5(filepath, custom_objects, compile)
        185 
        186     if sys.version_info >= (3, 4) and isinstance(filepath, pathlib.Path):
    
    ~/anaconda3/envs/narya/lib/python3.8/site-packages/tensorflow/python/keras/saving/hdf5_format.py in load_model_from_hdf5(filepath, custom_objects, compile)
        175       raise ValueError('No model found in config file.')
        176     model_config = json.loads(model_config.decode('utf-8'))
    --> 177     model = model_config_lib.model_from_config(model_config,
        178                                                custom_objects=custom_objects)
        179 
    
    ~/anaconda3/envs/narya/lib/python3.8/site-packages/tensorflow/python/keras/saving/model_config.py in model_from_config(config, custom_objects)
         53                     '`Sequential.from_config(config)`?')
         54   from tensorflow.python.keras.layers import deserialize  # pylint: disable=g-import-not-at-top
    ---> 55   return deserialize(config, custom_objects=custom_objects)
         56 
         57 
    
    ~/anaconda3/envs/narya/lib/python3.8/site-packages/tensorflow/python/keras/layers/serialization.py in deserialize(config, custom_objects)
        103     config['class_name'] = _DESERIALIZATION_TABLE[layer_class_name]
        104 
    --> 105   return deserialize_keras_object(
        106       config,
        107       module_objects=globs,
    
    ~/anaconda3/envs/narya/lib/python3.8/site-packages/tensorflow/python/keras/utils/generic_utils.py in deserialize_keras_object(identifier, module_objects, custom_objects, printable_module_name)
        367 
        368       if 'custom_objects' in arg_spec.args:
    --> 369         return cls.from_config(
        370             cls_config,
        371             custom_objects=dict(
    
    ~/anaconda3/envs/narya/lib/python3.8/site-packages/tensorflow/python/keras/engine/network.py in from_config(cls, config, custom_objects)
        984         ValueError: In case of improperly formatted config dict.
        985     """
    --> 986     input_tensors, output_tensors, created_layers = reconstruct_from_config(
        987         config, custom_objects)
        988     model = cls(inputs=input_tensors, outputs=output_tensors,
    
    ~/anaconda3/envs/narya/lib/python3.8/site-packages/tensorflow/python/keras/engine/network.py in reconstruct_from_config(config, custom_objects, created_layers)
       2017   # First, we create all layers and enqueue nodes to be processed
       2018   for layer_data in config['layers']:
    -> 2019     process_layer(layer_data)
       2020   # Then we process nodes in order of layer depth.
       2021   # Nodes that cannot yet be processed (if the inbound node
    
    ~/anaconda3/envs/narya/lib/python3.8/site-packages/tensorflow/python/keras/engine/network.py in process_layer(layer_data)
       1999       from tensorflow.python.keras.layers import deserialize as deserialize_layer  # pylint: disable=g-import-not-at-top
       2000 
    -> 2001       layer = deserialize_layer(layer_data, custom_objects=custom_objects)
       2002       created_layers[layer_name] = layer
       2003 
    
    ~/anaconda3/envs/narya/lib/python3.8/site-packages/tensorflow/python/keras/layers/serialization.py in deserialize(config, custom_objects)
        103     config['class_name'] = _DESERIALIZATION_TABLE[layer_class_name]
        104 
    --> 105   return deserialize_keras_object(
        106       config,
        107       module_objects=globs,
    
    ~/anaconda3/envs/narya/lib/python3.8/site-packages/tensorflow/python/keras/utils/generic_utils.py in deserialize_keras_object(identifier, module_objects, custom_objects, printable_module_name)
        367 
        368       if 'custom_objects' in arg_spec.args:
    --> 369         return cls.from_config(
        370             cls_config,
        371             custom_objects=dict(
    
    ~/anaconda3/envs/narya/lib/python3.8/site-packages/tensorflow/python/keras/layers/core.py in from_config(cls, config, custom_objects)
        988   def from_config(cls, config, custom_objects=None):
        989     config = config.copy()
    --> 990     function = cls._parse_function_from_config(
        991         config, custom_objects, 'function', 'module', 'function_type')
        992 
    
    ~/anaconda3/envs/narya/lib/python3.8/site-packages/tensorflow/python/keras/layers/core.py in _parse_function_from_config(cls, config, custom_objects, func_attr_name, module_attr_name, func_type_attr_name)
       1040     elif function_type == 'lambda':
       1041       # Unsafe deserialization from bytecode
    -> 1042       function = generic_utils.func_load(
       1043           config[func_attr_name], globs=globs)
       1044     elif function_type == 'raw':
    
    ~/anaconda3/envs/narya/lib/python3.8/site-packages/tensorflow/python/keras/utils/generic_utils.py in func_load(code, defaults, closure, globs)
        469   except (UnicodeEncodeError, binascii.Error):
        470     raw_code = code.encode('raw_unicode_escape')
    --> 471   code = marshal.loads(raw_code)
        472   if globs is None:
        473     globs = globals()
    
    ValueError: bad marshal data (unknown type code)
    
    opened by PareshKamble 5
  • full-tracking.ipynb colab crashes

    Hi, I'm trying to run full-tracking.ipynb and every time I run this cell: tracker = FootballTracker(frame_rate=24.7, track_buffer=60) the notebook just crashes and restarts. The only error I can get is app.log:

    {"name":"app","hostname":"0272c32279bc","pid":1,"type":"jupyter","level":30,"msg":"The Jupyter Notebook is running at:","time":"2021-02-23T20:01:27.113Z","v":0}
    {"name":"app","hostname":"0272c32279bc","pid":1,"type":"jupyter","level":30,"msg":"http://172.28.0.12:9000/","time":"2021-02-23T20:01:27.114Z","v":0}
    {"name":"app","hostname":"0272c32279bc","pid":1,"type":"jupyter","level":30,"msg":"Use Control-C to stop this server and shut down all kernels (twice to skip confirmation).","time":"2021-02-23T20:01:27.114Z","v":0}
    {"name":"app","hostname":"dd02dc2c4d71","pid":1,"type":"jupyter","level":30,"msg":"google.colab serverextension initialized.","time":"2021-02-23T20:01:27.111Z","v":0}
    {"name":"app","hostname":"dd02dc2c4d71","pid":1,"type":"jupyter","level":30,"msg":"Serving notebooks from local directory: /","time":"2021-02-23T20:01:27.115Z","v":0}
    {"name":"app","hostname":"dd02dc2c4d71","pid":1,"type":"jupyter","level":30,"msg":"0 active kernels","time":"2021-02-23T20:01:27.115Z","v":0}
    {"name":"app","hostname":"dd02dc2c4d71","pid":1,"type":"jupyter","level":30,"msg":"The Jupyter Notebook is running at:","time":"2021-02-23T20:01:27.115Z","v":0}
    {"name":"app","hostname":"dd02dc2c4d71","pid":1,"type":"jupyter","level":30,"msg":"http://172.28.0.2:9000/","time":"2021-02-23T20:01:27.115Z","v":0}
    {"name":"app","hostname":"dd02dc2c4d71","pid":1,"type":"jupyter","level":30,"msg":"Use Control-C to stop this server and shut down all kernels (twice to skip confirmation).","time":"2021-02-23T20:01:27.115Z","v":0}
    {"name":"app","hostname":"dd02dc2c4d71","pid":1,"type":"jupyter","level":30,"msg":"Kernel started: f1e57aae-7e38-0d90a92821e0","time":"2021-02-23T20:01:30.178Z","v":0}
    {"name":"app","hostname":"dd02dc2c4d71","pid":1,"type":"jupyter","level":30,"msg":"Adapting to protocol v5.1 for kernel f1e57aae-7e38-0d90a92821e0","time":"2021-02-23T20:01:31.374Z","v":0}
    {"name":"app","hostname":"dd02dc2c4d71","pid":1,"type":"jupyter","level":30,"msg":"Kernel restarted: f1e57aae-7e38-0d90a92821e0","time":"2021-02-23T20:08:30.051Z","v":0}
    {"name":"app","hostname":"dd02dc2c4d71","pid":1,"type":"jupyter","level":40,"msg":"[20:09:16] src/imperative/./imperative_utils.h:92: GPU support is disabled. Compile MXNet with USE_CUDA=1 to enable GPU support.","time":"2021-02-23T20:09:16.738Z","v":0}
    {"name":"app","hostname":"dd02dc2c4d71","pid":1,"type":"jupyter","level":40,"msg":"terminate called after throwing an instance of 'dmlc::Error'","time":"2021-02-23T20:09:16.739Z","v":0}
    {"name":"app","hostname":"dd02dc2c4d71","pid":1,"type":"jupyter","level":40,"msg":" what(): [20:09:16] src/imperative/imperative.cc:81: Operator _zeros is not implemented for GPU.","time":"2021-02-23T20:09:16.739Z","v":0}
    {"name":"app","hostname":"dd02dc2c4d71","pid":1,"type":"jupyter","level":40,"msg":"Stack trace:","time":"2021-02-23T20:09:16.740Z","v":0}
    {"name":"app","hostname":"dd02dc2c4d71","pid":1,"type":"jupyter","level":40,"msg":" [bt] (0) /usr/local/lib/python3.7/dist-packages/mxnet/libmxnet.so(+0x307d3b) [0x7f6cb840cd3b]","time":"2021-02-23T20:09:16.740Z","v":0}
    {"name":"app","hostname":"dd02dc2c4d71","pid":1,"type":"jupyter","level":40,"msg":" [bt] (1) /usr/local/lib/python3.7/dist-packages/mxnet/libmxnet.so(mxnet::Imperative::InvokeOp(mxnet::Context const&, nnvm::NodeAttrs const&, std::vector<mxnet::NDArray*, std::allocator<mxnet::NDArray*> > const&, std::vector<mxnet::NDArray*, std::allocator<mxnet::NDArray*> > const&, std::vector<mxnet::OpReqType, std::allocator<mxnet::OpReqType> > const&, mxnet::DispatchMode, mxnet::OpStatePtr)+0x6bb) [0x7f6cbb5aec5b]","time":"2021-02-23T20:09:16.740Z","v":0}
    {"name":"app","hostname":"dd02dc2c4d71","pid":1,"type":"jupyter","level":30,"msg":"KernelRestarter: restarting kernel (1/5), keep random ports","time":"2021-02-23T20:09:18.050Z","v":0}
    {"name":"app","hostname":"dd02dc2c4d71","pid":1,"type":"jupyter","level":40,"msg":"**WARNING:root:kernel f1e57aae-7e38-0d90a92821e0 restarted**","time":"2021-02-23T20:09:18.050Z","v":0}

    opened by alenm10 5
  • Training time for KeypointDetectorModel

    Is there a reason why training KeypointDetectorModel is so much slower now? I used to train 100 epochs in about 3 hours a few months ago on Google Colab, but now a single epoch takes about 40 minutes (67 hours for all 100 epochs). Does anyone have a similar problem?

    opened by alenm10 4
  • Error in training keypoints

    I faced a "tuple index out of range" error while training the keypoints model (line 111).

    Here is the complete trace

    https://gist.github.com/Rit-ctrl/13b448b1ac115297d1abf43db9f18eed

    I get this error irrespective of the dataset I use for training

    opened by Rit-ctrl 4
  • Module 'albumentations' has no attribute 'Lambda'

    Hi - this time no questions...

    Using the script in keypoints_train.py I tried to get an overview of the training time for the KeyPointModel on Google Colab, but... got into trouble.

    I copied your whole code from keypoints_train.py, with just a small change to the argument loading, but when creating a KeyPointDataset object there is an error from the albumentations module.

    I don't know if this is an issue with Colab itself (maybe some library version is not suitable), but I cloned this whole repository and installed all requirements in Colab.

    opened by kkoripl 4
  • Training KeypointDetectorModel on Google Colab

    I have a problem with training KeypointDetectorModel on Google Colab. Something mysterious happens and the cell ends with ^C output.

    ... [omitted logs for readability]
    Total params: 13,945,158
    Trainable params: 13,855,558
    Non-trainable params: 89,600
    __________________________________________________________________________________________________
    ----------
    Building dataset
    ----------
    ----------
    Launching the training
    ----------
    Epoch 1/100
    2021-02-11 19:22:11.500514: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudnn.so.7
    ^C
    

    I have no idea where the problem is as there are no logs. Here is a colab notebook with minimal code to reproduce this error.

    opened by karlosos 3
  • GoogleAPI 403 error : API delinquent ?

    I found that the code for downloading the pre-trained models through googleapis.com didn't work today. The error was a 403, and I wasn't able to access the data through the individual URLs either.

    I'm afraid I guess it's caused by GCP billing. Does anybody have any ideas? I haven't downloaded the pre-trained models, so I would be glad to have the weights available not only through the API but also on GitHub or somewhere similar.

    program error code

    Exception: URL fetch failure on https://storage.googleapis.com/narya-bucket-1/models/deep_homo_model_1.h5: 403 -- Forbidden
    Exception: URL fetch failure on https://storage.googleapis.com/narya-bucket-1/models/player_tracker.params: 403 -- Forbidden

    The HTML page once I access each URL listed in https://donsetpg.github.io/naryaDoc/models/index.html:

    This XML file does not appear to have any style information associated with it. The document tree is shown below. UserProjectAccountProblem The project to be billed is associated with a delinquent billing account.

    The billing account for the owning project is disabled in state delinquent
    opened by yoshi2210 2
  • About performance

    Hello,

    First of all, thank you for this amazing work. I am interested in tracking players and projecting their coordinates onto a 2D plane using a live camera. I am not interested in the analysis part, just the projection. Could your detection, homography and re-id models be used with a live video stream and achieve good fps performance?

    opened by burakkaraceylan 2
  • Keras Models for Homography and Key Points Estimation not using GPU

    I am a PyTorch user and don't know much about TensorFlow. I am trying to run the Keras models (homography, keypoints) on GPU. I have an RTX 3090 with CUDA 11.2. I installed tensorflow-gpu==2.2.0 instead of plain tensorflow and also tried adding with tf.device('/GPU:0'): before loading the models. Can you please guide me on how to load these models on GPU instead of CPU?

    opened by Muaz65 2
  • Bump tensorflow from 2.2.0 to 2.9.3 in /narya

    Bumps tensorflow from 2.2.0 to 2.9.3.

    Release notes

    Sourced from tensorflow's releases.

    TensorFlow 2.9.3

    Release 2.9.3

    This release introduces several vulnerability fixes:

    TensorFlow 2.9.2

    Release 2.9.2

    This release introduces several vulnerability fixes:

    ... (truncated)

    Changelog

    Sourced from tensorflow's changelog.

    Release 2.9.3

    This release introduces several vulnerability fixes:

    Release 2.8.4

    This release introduces several vulnerability fixes:

    ... (truncated)

    Commits
    • a5ed5f3 Merge pull request #58584 from tensorflow/vinila21-patch-2
    • 258f9a1 Update py_func.cc
    • cd27cfb Merge pull request #58580 from tensorflow-jenkins/version-numbers-2.9.3-24474
    • 3e75385 Update version numbers to 2.9.3
    • bc72c39 Merge pull request #58482 from tensorflow-jenkins/relnotes-2.9.3-25695
    • 3506c90 Update RELEASE.md
    • 8dcb48e Update RELEASE.md
    • 4f34ec8 Merge pull request #58576 from pak-laura/c2.99f03a9d3bafe902c1e6beb105b2f2417...
    • 6fc67e4 Replace CHECK with returning an InternalError on failing to create python tuple
    • 5dbe90a Merge pull request #58570 from tensorflow/r2.9-7b174a0f2e4
    • Additional commits viewable in compare view

    Dependabot compatibility score

    Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


    Dependabot commands and options

    You can trigger Dependabot actions by commenting on this PR:

    • @dependabot rebase will rebase this PR
    • @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
    • @dependabot merge will merge this PR after your CI passes on it
    • @dependabot squash and merge will squash and merge this PR after your CI passes on it
    • @dependabot cancel merge will cancel a previously requested merge and block automerging
    • @dependabot reopen will reopen this PR if it is closed
    • @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
    • @dependabot ignore this major version will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this minor version will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this dependency will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
    • @dependabot use these labels will set the current labels as the default for future PRs for this repo and language
    • @dependabot use these reviewers will set the current reviewers as the default for future PRs for this repo and language
    • @dependabot use these assignees will set the current assignees as the default for future PRs for this repo and language
    • @dependabot use this milestone will set the current milestone as the default for future PRs for this repo and language

    You can disable automated security fix PRs for this repo from the Security Alerts page.

    dependencies 
    opened by dependabot[bot] 0
  • Bump tensorflow from 2.2.0 to 2.9.3

    Bumps tensorflow from 2.2.0 to 2.9.3.

    Release notes

    Sourced from tensorflow's releases.

    TensorFlow 2.9.3

    Release 2.9.3

    This release introduces several vulnerability fixes:

    TensorFlow 2.9.2

    Release 2.9.2

    This release introduces several vulnerability fixes:

    ... (truncated)

    Changelog

    Sourced from tensorflow's changelog.

    Release 2.9.3

    This release introduces several vulnerability fixes:

    Release 2.8.4

    This release introduces several vulnerability fixes:

    ... (truncated)

    Commits
    • a5ed5f3 Merge pull request #58584 from tensorflow/vinila21-patch-2
    • 258f9a1 Update py_func.cc
    • cd27cfb Merge pull request #58580 from tensorflow-jenkins/version-numbers-2.9.3-24474
    • 3e75385 Update version numbers to 2.9.3
    • bc72c39 Merge pull request #58482 from tensorflow-jenkins/relnotes-2.9.3-25695
    • 3506c90 Update RELEASE.md
    • 8dcb48e Update RELEASE.md
    • 4f34ec8 Merge pull request #58576 from pak-laura/c2.99f03a9d3bafe902c1e6beb105b2f2417...
    • 6fc67e4 Replace CHECK with returning an InternalError on failing to create python tuple
    • 5dbe90a Merge pull request #58570 from tensorflow/r2.9-7b174a0f2e4
    • Additional commits viewable in compare view

    Dependabot compatibility score

    Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


    Dependabot commands and options

    You can trigger Dependabot actions by commenting on this PR:

    • @dependabot rebase will rebase this PR
    • @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
    • @dependabot merge will merge this PR after your CI passes on it
    • @dependabot squash and merge will squash and merge this PR after your CI passes on it
    • @dependabot cancel merge will cancel a previously requested merge and block automerging
    • @dependabot reopen will reopen this PR if it is closed
    • @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
    • @dependabot ignore this major version will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this minor version will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this dependency will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
    • @dependabot use these labels will set the current labels as the default for future PRs for this repo and language
    • @dependabot use these reviewers will set the current reviewers as the default for future PRs for this repo and language
    • @dependabot use these assignees will set the current assignees as the default for future PRs for this repo and language
    • @dependabot use this milestone will set the current milestone as the default for future PRs for this repo and language

    You can disable automated security fix PRs for this repo from the Security Alerts page.

    dependencies 
    opened by dependabot[bot] 0
  • versions incompatibility problems

    I'm trying to run the file in Colab and I always run into version incompatibility problems.

    Even after several attempts to change the requirements, I still can't complete the execution of the file. Has anyone managed to run it successfully lately?

    I also cannot finish the execution of the remaining code on my personal computer... Any news about the correct versions needed to run this code? It would be extremely useful to me...

    opened by rmarcelino4 1
  • Colab crashes when running import TrackingDatasetBuilder

    I tried to run tracker_train in Colab with GPU enabled. While running this line:

    from narya.datasets.tracking_dataset import TrackingDatasetBuilder

    my Colab crashes.

    opened by Rit-ctrl 0
  • Issue in from narya.narya.models.keras_models import KeypointDetectorModel


    AttributeError                            Traceback (most recent call last)
    <ipython-input> in <module>()
          2
          3 kp_model = KeypointDetectorModel(
          4     backbone='efficientnetb3', num_classes=29, input_shape=(320, 320),
          5 )
          6

    6 frames
    /usr/local/lib/python3.7/dist-packages/efficientnet/model.py in EfficientNet(width_coefficient, depth_coefficient, default_resolution, dropout_rate, drop_connect_rate, depth_divisor, blocks_args, model_name, include_top, weights, input_tensor, input_shape, pooling, classes, **kwargs)
        468     file_name = model_name + '_weights_tf_dim_ordering_tf_kernels_autoaugment_notop.h5'
        469     file_hash = WEIGHTS_HASHES[model_name][1]
    --> 470     weights_path = keras_utils.get_file(file_name,
        471                                         BASE_WEIGHTS_PATH + file_name,
        472                                         cache_subdir='models',

    AttributeError: module 'keras.utils' has no attribute 'get_file'

    opened by sportistats 0
Owner

Paul Garnier
Currently building flaneer.com by day, sport analytics at night.