PyTorch Live is an easy-to-use library of tools for creating on-device ML demos on Android and iOS.

Overview

Build your AI-powered mobile apps in minutes: Get Started · Tutorials · API

 



PyTorch Live is released under the MIT license.


PyTorch Live is a set of tools to build AI-powered experiences for mobile.

This monorepo includes the PyTorch Live command line interface (i.e., torchlive-cli), a React Native package to run on-device inference with PyTorch Mobile, and a React Native template with examples ready to be deployed on mobile devices.

Contents

📋 Requirements

PyTorch Live apps may target Android 10.0 (API 29) and iOS 12.0 or newer. You may use Windows, macOS, or Linux as your development operating system, though building and running the PyTorch Live CLI is limited to macOS.

🎉 Building your first PyTorch Live app

Follow the Getting Started guide. PyTorch Live offers a CLI with convenient commands to install development dependencies and initialize new projects. We also have a few tutorials to help you keep going after getting started.

📖 Documentation

The full documentation for PyTorch Live can be found on our website.

๐Ÿ‘ How to Contribute

The main purpose of this repository is to continue evolving PyTorch Live. We want to make contributing to this project as easy and transparent as possible, and we are grateful to the community for contributing bug fixes and improvements. Read below to learn how you can take part in improving PyTorch Live.

Code of Conduct

Facebook has adopted a Code of Conduct that we expect project participants to adhere to. Please read the full text so that you can understand what actions will and will not be tolerated.

Contributing Guide

Read our Contributing Guide to learn about our development process, how to propose bugfixes and improvements, and how to build and test your changes to PyTorch Live.

License

PyTorch Live is MIT licensed, as found in the LICENSE file.

Comments
  • how to implement this: torchvision.transforms.functional.perspective


    Area Select

    react-native-pytorch-core (core package)

    Description

    Hello! Thanks for your contributions!

    I have a problem while developing my project: I need a function like torchvision.transforms.functional.perspective.

    Could you add an implementation of torchvision.transforms.functional.perspective, or can I implement this function myself? There is no implementation of a perspective function in the PlayTorch docs.

    Another approach I tried is building a PyTorch Mobile model for this function, an idea that came from @raedle in this issue, but it raises an error in the React Native app:

    {"message": "Calling torch.linalg.lstsq on a CPU tensor requires compiling PyTorch with LAPACK. Please use PyTorch built with LAPACK support.
    
      Debug info for handle(s): debug_handles:{-1}, was not found.
    
    Exception raised from apply_lstsq at ../aten/src/ATen/native/BatchLinearAlgebraKernel.cpp:559 (most recent call first):
    (no backtrace available)"}
    

    Should I ask about this error on the PyTorch GitHub?

    My perspective model is below; it works correctly in Python:

    import torch
    from typing import List
    import torchvision.transforms.functional as F

    class WrapPerspectiveCrop(torch.nn.Module):
        def forward(self, inputs: torch.Tensor, points: List[List[int]]):
            # Map the given quadrilateral onto the full image rectangle.
            size_points = [[0, 0], [inputs.shape[2], 0],
                           [inputs.shape[2], inputs.shape[1]], [0, inputs.shape[1]]]
            return F.perspective(inputs, points, size_points)

    crop = WrapPerspectiveCrop()
    scripted_model = torch.jit.script(crop)
    scripted_model.save("wrap_perspective.pt")

    from torch.utils.mobile_optimizer import optimize_for_mobile

    optimized_scripted_module = optimize_for_mobile(scripted_model)
    optimized_scripted_module._save_for_lite_interpreter("wrap_perspective.ptl")
    

    How can I solve this problem? Many thanks for any help!

    ✨ enhancement 🆘 help wanted 😇 wontfix 🤖 android 🍎 ios
    opened by nh9k 25
  • yolov5s.torchscript.ptl


    Version

    1.1.0

    Problem Area

    react-native-pytorch-core (core package)

    Steps to Reproduce

    run model yolov5s.torchscript.ptl

    Expected Results

    get results (bbox, class)

    Code example, screenshot, or link to repository

    Error Possible Unhandled Promise Rejection (id: 0): Error: End of input at character 0 of promiseMethodWrapper

    I followed these instructions to convert .pt to .ptl: https://github.com/pytorch/android-demo-app/tree/master/ObjectDetection. Or am I doing something wrong? I ran the android-demo-app project, and the PyTorch Mobile model worked there.

    As I understand it, you need a live.spec.json, but where do I get one for yolov5?

    Is it this one? I used it, but it still gives an error. Or do I need to generate it for my project? {'config.txt': '{"shape": [1, 3, 640, 640], "stride": 32, "names": ["person", "bicycle", "car", "motorcycle", "airplane", "bus", "train", "truck", "boat", "traffic light", "fire hydrant", "stop sign", "parking meter", "bench", "bird", "cat", "dog", "horse", "sheep", "cow", "elephant", "bear", "zebra", "giraffe", "backpack", "umbrella", "handbag", "tie", "suitcase", "frisbee", "skis", "snowboard", "sports ball", "kite", "baseball bat", "baseball glove", "skateboard", "surfboard", "tennis racket", "bottle", "wine glass", "cup", "fork", "knife", "spoon", "bowl", "banana", "apple", "sandwich", "orange", "broccoli", "carrot", "hot dog", "pizza", "donut", "cake", "chair", "couch", "potted plant", "bed", "dining table", "toilet", "tv", "laptop", "mouse", "remote", "keyboard", "cell phone", "microwave", "oven", "toaster", "sink", "refrigerator", "book", "clock", "vase", "scissors", "teddy bear", "hair drier", "toothbrush"]}'}

    Here is the link to the issue.
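
For context, the linked ObjectDetection instructions boil down to three steps: script (or trace) the model, run optimize_for_mobile, and save for the lite interpreter. The config.txt shown above is plain JSON, and it can be bundled into the .ptl via the _extra_files argument, which mirrors torch.jit.save's extra-files mechanism. A sketch of the flow with a stand-in module rather than yolov5 itself (yolov5 ships its own export script; TinyDetector and model.ptl are illustrative names):

```python
import json

import torch
from torch.utils.mobile_optimizer import optimize_for_mobile

class TinyDetector(torch.nn.Module):
    """Stand-in for a real detector, just to show the export flow."""
    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x * 2.0

scripted = torch.jit.script(TinyDetector())
optimized = optimize_for_mobile(scripted)

# Bundle metadata the app can read back (shape/names, as in config.txt above).
extra_files = {"config.txt": json.dumps({"shape": [1, 3, 640, 640],
                                         "names": ["person", "bicycle"]})}
optimized._save_for_lite_interpreter("model.ptl", _extra_files=extra_files)
```

Note that in the 0.2.x core package the Live Spec JSON flow was replaced by direct JavaScript tensor APIs (see the release notes below), so a live.spec.json is only needed by the deprecated MobileModel.execute path.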

    ✨ enhancement 🆘 help wanted
    opened by bairock 19
  • Image Classification Tutorial Snack: Can't find variable __torchlive__


    Version

    0.2.0

    Problem Area

    Website/Documentation

    Steps to Reproduce

    1. https://playtorch.dev/docs/tutorials/snacks/image-classification/
    2. Scan QR Code
    3. Can't find variable: __torchlive__ evaluating react-native-pytorch-core.js

    Expected Results

    No error message

    Code example, screenshot, or link to repository

    Hi,

    I noticed there was an issue for __torchlive__ that was opened for the CLI, but I wanted to also address the broken Snack demo in your documentation. Thanks!

    Using an iPhone 13 Pro

    https://snack.expo.dev/@playtorch/image-classification

    opened by peterpme 17
  • Create an AVD for the M1 Macs (avoids emulator failure)


    Summary

    The purpose of this pull request is to also create an AVD for the ARM architecture in order to adapt TorchLive for the M1 Macs.

    We can see that the file AndroidEmulatorDeviceInstaller.ts will create an x86 Android Emulator:

    Previous command to create the AVD:

    const cmd = `echo "no" | ${cltPath} create avd --name "${AndroidVirtualDeviceName}" --device "pixel" --force --abi google_apis/x86_64 --package "system-images;android-29;google_apis;x86_64"`;
    

    Previous keys in the config.ini file of the pytorch_live device:

    'abi.type': 'x86_64',
    'hw.cpu.arch': 'x86_64',
    

    When we execute the command npx torchlive-cli setup-dev we don't get an error, but an unusable AVD is created instead (note that the PyTorch Live AVD has a size of just 1 MB):

    Screenshot_2022-01-02_at_18 07 06

    Therefore, if we execute the command npx torchlive-cli run-android, the terminal prints the Android emulator version but execution stops, because it cannot run a device image built for the x86 CPU architecture:

    Screenshot_2022-01-03_at_18 34 46

    Several changes were made to make this adaptation possible.

    Changelog

    [TORCHLIVE-CLI][SETUP-DEV] - Create an AVD for the M1 Macs (avoids emulator failure)

    Test Plan

    After cloning the repository, you can go to the torchlive-cli folder, install the NPM packages and run the setup-dev command:

    $ cd torchlive-cli
    $ npm install
    $ npm run start setup-dev
    

    Now, if you open Android Studio and go to the AVD Manager, you will see that the CPU/ABI column of the pytorch_live virtual device is set according to your CPU architecture.

    To verify that the emulator really works, initialize a project using torchlive-cli and run it on Android:

    $ npx torchlive-cli init MyFirstProject
    $ cd MyFirstProject
    $ npx torchlive-cli run-android
    

    After doing this, you can go to the Android Studio AVD Manager and you will see that the pytorch_live emulator has a size of 17 GB (instead of 1 MB) and its CPU/ABI is correctly set to arm64:

    Screenshot_2022-01-03_at_17 57 13

    Remarks

    Newer MacBooks identification issue

    One way to identify that the current device is an M1 MacBook is to check the following condition:

    if(os.cpus()[0].model === 'Apple M1') {}
    

    However, this only applies to the M1 MacBooks and I could not test this condition on the new M1 Pro & M1 Max MacBooks.

    CLA Signed 
    opened by aaronespasa 12
  • Simple custom model is not working on android


    Tutorial Select

    Prepare Custom Model

    Feedback

    Hello, and thanks as always for your contributions! This is a somewhat custom problem, I think, so I'm not sure this is the right place to ask. My simple tensor-manipulation model is not working in my React Native app, but it works successfully in my Python code.

    Across several trials, adb logcat sometimes reports a memory-consumption problem, my JavaScript code sometimes logs [Error: Exception in HostFunction: vector] or std::bad_alloc, or the app just crashes.

    Can I get some help?

    Model export (Python code):

    import torch

    class PostCD(torch.nn.Module):
        def forward(self, inputs: torch.Tensor, iouThreshold: float):
            # shape e.g., "shape": [1, 25200, 16]
            inputs = inputs[inputs[:, :, 4] > iouThreshold]
            max_class_tensor = torch.transpose(torch.argmax(inputs[:, 5:], dim=1).unsqueeze(0), 0, 1)
            outputs = torch.cat((inputs[:, :5], max_class_tensor), 1).unsqueeze(0)
            return outputs

    pcd = PostCD()
    scripted_model = torch.jit.script(pcd)
    
    data = [[247.37535095214844, 525.0855712890625, 129.0428924560547, 31.98747444152832, 0.8668637275695801, 0.0027807741425931454, 0.002542165108025074, 0.00628146156668663, 0.0026534167118370533, 0.002410427201539278, 0.0035515064373612404, 0.00348650268279016, 0.006360366474837065, 0.002872081473469734, 0.0036974542308598757, 0.9905574321746826],
            [247.37535095214844, 525.0855712890625, 129.0428924560547, 31.98747444152832, 0.8668637275695801, 0.9027807741425931454, 0.002542165108025074, 0.00628146156668663, 0.0026534167118370533, 0.002410427201539278, 0.0035515064373612404, 0.00348650268279016, 0.006360366474837065, 0.002872081473469734, 0.0036974542308598757, 0.005574321746826],
            [247.37535095214844, 525.0855712890625, 129.0428924560547, 31.98747444152832, 0.1, 0.0027807741425931454, 0.002542165108025074, 0.00628146156668663, 0.0026534167118370533, 0.002410427201539278, 0.0035515064373612404, 0.00348650268279016, 0.006360366474837065, 0.002872081473469734, 0.0036974542308598757, 0.9905574321746826],
            [247.37535095214844, 525.0855712890625, 129.0428924560547, 31.98747444152832, 0.1, 0.0027807741425931454, 0.9, 0.00628146156668663, 0.0026534167118370533, 0.002410427201539278, 0.0035515064373612404, 0.00348650268279016, 0.006360366474837065, 0.002872081473469734, 0.0036974542308598757, 0.0004000],
            [247.37535095214844, 525.0855712890625, 129.0428924560547, 31.98747444152832, 0.1, 0.0027807741425931454, 0.9, 0.00628146156668663, 0.0026534167118370533, 0.002410427201539278, 0.0035515064373612404, 0.00348650268279016, 0.006360366474837065, 0.002872081473469734, 0.0036974542308598757, 0.0004000],
            [247.37535095214844, 525.0855712890625, 129.0428924560547, 31.98747444152832, 0.1, 0.0027807741425931454, 0.9, 0.00628146156668663, 0.0026534167118370533, 0.002410427201539278, 0.0035515064373612404, 0.00348650268279016, 0.006360366474837065, 0.002872081473469734, 0.0036974542308598757, 0.0004000],
            ]
    x_data = torch.tensor(data).unsqueeze(0)
    
    print(x_data)
    print(x_data.shape, end='\n\n')
        
    outputs = scripted_model(x_data, 0.3)
    
    print(outputs)
    print(outputs.shape)
    
    scripted_model.save("post_cd.pt")

    from torch.utils.mobile_optimizer import optimize_for_mobile

    optimized_scripted_module = optimize_for_mobile(scripted_model)
    optimized_scripted_module._save_for_lite_interpreter("post_cd.ptl")
    

    Python output:

    tensor([[[2.4738e+02, 5.2509e+02, 1.2904e+02, 3.1987e+01, 8.6686e-01,
              2.7808e-03, 2.5422e-03, 6.2815e-03, 2.6534e-03, 2.4104e-03,
              3.5515e-03, 3.4865e-03, 6.3604e-03, 2.8721e-03, 3.6975e-03,
              9.9056e-01],
             [2.4738e+02, 5.2509e+02, 1.2904e+02, 3.1987e+01, 8.6686e-01,
              9.0278e-01, 2.5422e-03, 6.2815e-03, 2.6534e-03, 2.4104e-03,
              3.5515e-03, 3.4865e-03, 6.3604e-03, 2.8721e-03, 3.6975e-03,
              5.5743e-03],
             [2.4738e+02, 5.2509e+02, 1.2904e+02, 3.1987e+01, 1.0000e-01,
              2.7808e-03, 2.5422e-03, 6.2815e-03, 2.6534e-03, 2.4104e-03,
              3.5515e-03, 3.4865e-03, 6.3604e-03, 2.8721e-03, 3.6975e-03,
              9.9056e-01],
             [2.4738e+02, 5.2509e+02, 1.2904e+02, 3.1987e+01, 1.0000e-01,
              2.7808e-03, 9.0000e-01, 6.2815e-03, 2.6534e-03, 2.4104e-03,
              3.5515e-03, 3.4865e-03, 6.3604e-03, 2.8721e-03, 3.6975e-03,
              4.0000e-04],
             [2.4738e+02, 5.2509e+02, 1.2904e+02, 3.1987e+01, 1.0000e-01,
              2.7808e-03, 9.0000e-01, 6.2815e-03, 2.6534e-03, 2.4104e-03,
              3.5515e-03, 3.4865e-03, 6.3604e-03, 2.8721e-03, 3.6975e-03,
              4.0000e-04],
             [2.4738e+02, 5.2509e+02, 1.2904e+02, 3.1987e+01, 1.0000e-01,
              2.7808e-03, 9.0000e-01, 6.2815e-03, 2.6534e-03, 2.4104e-03,
              3.5515e-03, 3.4865e-03, 6.3604e-03, 2.8721e-03, 3.6975e-03,
              4.0000e-04]]])
    torch.Size([1, 6, 16])
    
    tensor([[[247.3754, 525.0856, 129.0429,  31.9875,   0.8669,  10.0000],
             [247.3754, 525.0856, 129.0429,  31.9875,   0.8669,   0.0000]]])
    torch.Size([1, 2, 6])
    

    This is the app code:

    const MODEL_URL = localPath;
    let pcd_model = null;
    
    async function testPCD(){
        if (pcd_model == null) {
            const filePath = await MobileModel.download(require(MODEL_URL));
            pcd_model = await torch.jit._loadForMobile(filePath);
            console.log('Model successfully loaded');
        }
    
    
        var data = [[247.37535095214844, 525.0855712890625, 129.0428924560547, 31.98747444152832, 0.8668637275695801, 0.0027807741425931454, 0.002542165108025074, 0.00628146156668663, 0.0026534167118370533, 0.002410427201539278, 0.0035515064373612404, 0.00348650268279016, 0.006360366474837065, 0.002872081473469734, 0.0036974542308598757, 0.9905574321746826],
                    [247.37535095214844, 525.0855712890625, 129.0428924560547, 31.98747444152832, 0.8668637275695801, 0.9027807741425931454, 0.002542165108025074, 0.00628146156668663, 0.0026534167118370533, 0.002410427201539278, 0.0035515064373612404, 0.00348650268279016, 0.006360366474837065, 0.002872081473469734, 0.0036974542308598757, 0.005574321746826],
                    [247.37535095214844, 525.0855712890625, 129.0428924560547, 31.98747444152832, 0.1, 0.0027807741425931454, 0.002542165108025074, 0.00628146156668663, 0.0026534167118370533, 0.002410427201539278, 0.0035515064373612404, 0.00348650268279016, 0.006360366474837065, 0.002872081473469734, 0.0036974542308598757, 0.9905574321746826],
                    [247.37535095214844, 525.0855712890625, 129.0428924560547, 31.98747444152832, 0.1, 0.0027807741425931454, 0.9, 0.00628146156668663, 0.0026534167118370533, 0.002410427201539278, 0.0035515064373612404, 0.00348650268279016, 0.006360366474837065, 0.002872081473469734, 0.0036974542308598757, 0.0004000],
                    [247.37535095214844, 525.0855712890625, 129.0428924560547, 31.98747444152832, 0.1, 0.0027807741425931454, 0.9, 0.00628146156668663, 0.0026534167118370533, 0.002410427201539278, 0.0035515064373612404, 0.00348650268279016, 0.006360366474837065, 0.002872081473469734, 0.0036974542308598757, 0.0004000],
                    [247.37535095214844, 525.0855712890625, 129.0428924560547, 31.98747444152832, 0.1, 0.0027807741425931454, 0.9, 0.00628146156668663, 0.0026534167118370533, 0.002410427201539278, 0.0035515064373612404, 0.00348650268279016, 0.006360366474837065, 0.002872081473469734, 0.0036974542308598757, 0.0004000],
                    ]
        var x_data = torch.tensor(data).unsqueeze(0);
        console.log(x_data);
        console.log(x_data.shape);
    
    try{
        const startInferenceTime = global.performance.now();

        const outputs = await pcd_model.forward(x_data, 0.3);

        console.log(outputs);
        console.log(outputs.shape);

        const inferenceTime = global.performance.now() - startInferenceTime;
        console.log(`inference time ${inferenceTime.toFixed(3)} ms`);
    }
        catch(err){
            console.log(err);
        }
    }
    

    Node.js output:

     LOG  Model successfully loaded
     LOG  {"abs": [Function abs], "add": [Function add], "argmax": [Function argmax], "argmin": [Function argmin], "clamp": [Function clamp], "contiguous": [Function contiguous], "data": [Function data], "div": [Function div], "dtype": "float32", "expand": [Function expand], "flip": [Function flip], "item": [Function item], "mul": [Function mul], "permute": [Function permute], "reshape": [Function reshape], "shape": [1, 6, 16], "size": [Function size], "softmax": [Function softmax], "sqrt": [Function sqrt], "squeeze": [Function squeeze], "stride": [Function stride], "sub": [Function sub], "sum": [Function sum], "to": [Function to], "toString": [Function toString], "topk": [Function topk], "unsqueeze": [Function unsqueeze]}
     LOG  [1, 6, 16]
    

    -> App crashes
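
For anyone hitting the same symptoms: std::bad_alloc and [Error: Exception in HostFunction: vector] suggest an op the mobile runtime handles poorly. One experiment worth trying (an assumption, not a confirmed fix) is rewriting the boolean mask indexing with nonzero and index_select, which lowers to simpler operators while producing the same [1, n, 6] result for a batch of one. PostCDAlt is a hypothetical variant of the PostCD module above:

```python
import torch

class PostCDAlt(torch.nn.Module):
    """Same post-processing as PostCD above, without boolean advanced indexing.

    Assumes a batch size of 1, which matches the shapes in the example above.
    """

    def forward(self, inputs: torch.Tensor, iouThreshold: float) -> torch.Tensor:
        # Keep rows whose confidence (column 4) exceeds the threshold.
        keep = (inputs[0, :, 4] > iouThreshold).nonzero().squeeze(1)
        det = inputs[0].index_select(0, keep)                        # [n, 16]
        # Column index of the best class score, appended as a float column.
        cls = det[:, 5:].argmax(dim=1, keepdim=True).to(det.dtype)   # [n, 1]
        return torch.cat((det[:, :5], cls), dim=1).unsqueeze(0)      # [1, n, 6]

scripted = torch.jit.script(PostCDAlt())

# Quick shape check with a [1, 2, 16] dummy batch; threshold -1.0 keeps all rows.
dummy = torch.rand(1, 2, 16)
print(scripted(dummy, -1.0).shape)  # torch.Size([1, 2, 6])
```

Whether this avoids the crash on device is untested here; it only narrows the problem down to the indexing op if the behavior changes.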

    opened by nh9k 11
  • libreactnativejni.so is missing when adding the live package


    Version

    No response

    Problem Area

    react-native-pytorch-core (core package)

    Steps to Reproduce

    1. Test it on Android
    2. Do all the steps from here https://pytorch.org/live/docs/tutorials/add-package/
    3. Use Android Studio: run a clean build, rebuild, and run
    4. You will see the following errors

    Related: https://stackoverflow.com/questions/44485941/lib-so-missing-and-no-known-rule-to-make-it

    Expected Results

    1. Build command failed. Error while executing process D:\ADK\cmake\3.10.2.4988404\bin\ninja.exe with arguments {-C D:\XXXXXXXXXXX\Main\Project\Project\XXXXXXX\src\mobile\node_modules\react-native-pytorch-core\android.cxx\cmake\debug\armeabi-v7a torchlive} ninja: Entering directory `D:\XXXXXXXXXXX\Project\Project\XXXXXXX\src\mobile\node_modules\react-native-pytorch-core\android.cxx\cmake\debug\armeabi-v7a'

    ninja: error: '../../../../build/react-native-0.64.3.aar/jni/armeabi-v7a/libreactnativejni.so', needed by '../../../../build/intermediates/cmake/debug/obj/armeabi-v7a/libtorchlive.so', missing and no known rule to make it

    1. If you run CMake in CLion, you will see this warning. I am not sure it is the cause.

    CMake Error: The following variables are used in this project, but they are set to NOTFOUND. Please set them or make sure they are set and tested correctly in the CMake files: FBJNI_LIBRARY linked by target "torchlive" in directory /cygdrive/d/XXXXXXXXXXX Main/Project/Project/XXXXXXX/src/mobile/node_modules/react-native-pytorch-core/android PYTORCH_LIBRARY linked by target "torchlive" in directory /cygdrive/d/XXXXXXXXXXX Main/Project/Project/XXXXXXX/src/mobile/node_modules/react-native-pytorch-core/android REACT_NATIVE_JNI_LIB linked by target "torchlive" in directory /cygdrive/d/XXXXXXXXXXX Main/Project/Project/XXXXXXX/src/mobile/node_modules/react-native-pytorch-core/android REACT_NATIVE_UTILS_LIB linked by target "torchlive" in directory /cygdrive/d/XXXXXXXXXXX Main/Project/Project/XXXXXXX/src/mobile/node_modules/react-native-pytorch-core/android

    Code example, screenshot, or link to repository

    No response

    opened by JonathanSum 11
  • Repeated rendering to Canvas crashes app on iOS


    Version

    0.2.1

    Problem Area

    react-native-pytorch-core (core package)

    Steps to Reproduce

    1. Repeatedly render to a Canvas

    Error message: Message from debugger: Terminated due to memory issue

    Just before the crash: W0821 11:40:23.873222 1881387008 JSIExecutor.cpp:381] Memory warning (pressure level: 1) received by JS VM, unrecognized pressure level

    The crash doesn't occur when this commit is reverted, and v0.2.0 doesn't crash the app.

    What kind of measures should I take?

    Expected Results

    No response

    Code example, screenshot, or link to repository

    No response

    ๐Ÿ› bug ๐Ÿ ios 
    opened by SomaKishimoto 10
  • U2-net cloth segmentation model


    Tutorial Select

    Prepare Custom Model

    Feedback

    Hi Playtorch community,

    I am trying to implement this model. It is based on U2-net but performs clothes segmentation. I converted it in the same way as the usual U2-net model, using the tutorial I previously posted.

    I am using the U2-net snack as the core, which works perfectly on my device in the PlayTorch app. Then I changed the path to the converted model (https://cdn-128.anonfiles.com/v5l75ez1yc/431fccf2-1658318807/cloth_segm_live.ptl) in ImageMask.ts. When I take a picture, nothing happens; I just see the camera UI.

    Here is the link to my expo snack for cloth segmentation model.

    I would appreciate any help with this issue.

    opened by lavandaboy 10
  • Can't find variable: __torchlive__


    After installing the package, I got this error when running yarn start. I added the package to an existing app using bare React Native. Current RN version: 0.66.0, React version: 17.0.2

    (screenshot)

    opened by andydam452 10
  • app crashes on android


    Version

    No response

    Problem Area

    react-native-pytorch-core (core package)

    Steps to Reproduce

    I am trying to run the Image Classification model on Android. I followed the manual setup instructions on a Linux machine. Everything is fine until the last step of installation, and the app is successfully installed on my Android phone running Android 9 (Pie). But the app crashes as soon as it is launched.

    Expected Results

    No response

    Code example, screenshot, or link to repository

    No response

    🤖 android
    opened by yMayanand 9
  • First Project fails on Windows


    Version

    No response

    Problem Area

    react-native-pytorch-core (core package)

    Steps to Reproduce

    I am trying to install PyTorch Live on a Windows 10 machine.

    1. I already have Python installed.
    2. I followed the steps in https://pytorch.org/live/docs/tutorials/get-started-manually/ to set up the React Native development environment.
    3. I am able to successfully run the React Native sample application.
    4. npx react-native init MyFirstProject --template react-native-template-pytorch-live gives the following error: Welcome to React Native! Learn once, write anywhere

    √ Downloading template √ Copying template √ Processing template √ Executing post init script × Installing dependencies

    error Error: Command failed: npm install
    npm ERR! code ERESOLVE
    npm ERR! ERESOLVE unable to resolve dependency tree
    npm ERR!
    npm ERR! While resolving: [email protected]
    npm ERR! Found: [email protected]
    npm ERR! node_modules/react
    npm ERR!   react@"17.0.1" from the root project
    npm ERR!
    npm ERR! Could not resolve dependency:
    npm ERR! peer react@"^16.0" from @react-native-community/[email protected]
    npm ERR! node_modules/@react-native-community/masked-view
    npm ERR!   @react-native-community/masked-view@"^0.1.10" from the root project
    npm ERR!
    npm ERR! Fix the upstream dependency conflict, or retry
    npm ERR! this command with --force, or --legacy-peer-deps
    npm ERR! to accept an incorrect (and potentially broken) dependency resolution.
    

    Expected Results

    No response

    Code example, screenshot, or link to repository

    No response

    🔅 good first issue 🆘 help wanted 💻 cli
    opened by rmadhira86 9
  • Big sizes of camera resolution can cause slow capture.


    Area Select

    react-native-pytorch-core (core package)

    Description

    Hello, and thanks as always for your contributions.

    My test app needs an image with good resolution, but if the camera resolution is larger than the one in the PlayTorch tutorial, switching from the camera screen to the loading screen after pressing the capture button becomes slower. I also tested this in the tutorial on Android: I modified targetResolution={{ width: 1080, height: 1920 }} to targetResolution={{ width: 3000, height: 4000 }}. The resulting image resolution is good, but the camera screen changes to the loading screen slowly. How can I improve the switching speed?

    I looked at the code in CameraView.tsx, but it doesn't seem to help me.

    opened by nh9k 2
  • Install PyTorch Tutorial


    Stack created with Sapling. Best reviewed with ReviewStack.

    • #180
    • -> #179

    Install PyTorch Tutorial: a basic tutorial for how to install the Python PyTorch dependency.

    CLA Signed 
    opened by raedle 1
  • Basic instructions to build the PlayTorch app


    Stack created with Sapling. Best reviewed with ReviewStack.

    • -> #178

    Adds basic instructions to build the PlayTorch app locally.

    CLA Signed 
    opened by raedle 1
  • Upgrade to Expo SDK 47


    Summary: Upgrade app to use Expo SDK 47.

    Additional Changes:

    • Updated minimum deployment target to iOS 13
    • Updated snack-runtime, which required using the custom build raedle/[email protected] (see changes in forked expo/snack repo: https://github.com/raedle/snack/tree/playtorch-expo-sdk-47)

    Differential Revision: D41605720

    CLA Signed fb-exported 
    opened by raedle 2
  • Can't merge user_target_xcconfig for pod targets


    Version

    LibTorch-Lite (1.12.0), react-native-pytorch-core (0.2.2)

    Problem Area

    react-native-pytorch-core (core package)

    Steps to Reproduce

    Environment:

    • MacBook Air m1 Ventura 13.0
    • Darwin MacBook-Air.local 22.1.0 Darwin Kernel Version 22.1.0: Sun Oct 9 20:14:30 PDT 2022; root:xnu-8792.41.9~2/RELEASE_ARM64_T8103 arm64
    • npx --version 8.3.1
    • package.json
    {
      "name": "myapp",
      "version": "0.0.1",
      "private": true,
      "scripts": {
        "android": "react-native run-android",
        "ios": "react-native run-ios",
        "start": "react-native start",
        "test": "jest",
        "lint": "eslint ."
      },
      "dependencies": {
        "react": "18.1.0",
        "react-native": "0.70.5",
        "react-native-pytorch-core": "^0.2.2"
      },
      "devDependencies": {
        "@babel/core": "^7.12.9",
        "@babel/runtime": "^7.12.5",
        "@react-native-community/eslint-config": "^2.0.0",
        "babel-jest": "^26.6.3",
        "eslint": "^7.32.0",
        "jest": "^26.6.3",
        "metro-react-native-babel-preset": "0.72.3",
        "react-test-renderer": "18.1.0"
      },
      "jest": {
        "preset": "react-native"
      }
    }
    
    1. npx react-native init myapp
    2. cd myapp/ios && pod install
    3. cd ..
    4. npm install react-native-pytorch-core
    5. cd ios && pod install

    Integrating client project Pod installation complete! There are 60 dependencies from the Podfile and 51 total pods installed.

    [!] Can't merge user_target_xcconfig for pod targets: ["LibTorch-Lite", "Core", "Torch", "hermes-engine"]. Singular build setting CLANG_CXX_LANGUAGE_STANDARD has different values.

    [!] Can't merge user_target_xcconfig for pod targets: ["LibTorch-Lite", "Core", "Torch", "hermes-engine"]. Singular build setting CLANG_CXX_LIBRARY has different values.

    [!] Can't merge user_target_xcconfig for pod targets: ["LibTorch-Lite", "Core", "Torch", "hermes-engine"]. Singular build setting CLANG_CXX_LANGUAGE_STANDARD has different values.

    [!] Can't merge user_target_xcconfig for pod targets: ["LibTorch-Lite", "Core", "Torch", "hermes-engine"]. Singular build setting CLANG_CXX_LIBRARY has different values.

    [!] Can't merge user_target_xcconfig for pod targets: ["LibTorch-Lite", "Core", "Torch", "hermes-engine"]. Singular build setting CLANG_CXX_LANGUAGE_STANDARD has different values.

    [!] Can't merge user_target_xcconfig for pod targets: ["LibTorch-Lite", "Core", "Torch", "hermes-engine"]. Singular build setting CLANG_CXX_LIBRARY has different values.

    Expected Results

    Don't expect to see the "Can't merge user_target_xcconfig for pod targets" messages.

    Code example, screenshot, or link to repository

    No response

    opened by thomaskwan 6
Releases (v0.2.4)
  • v0.2.4(Dec 17, 2022)

    0.2.4 contains the following changes

    Learn more about the PlayTorch name and the PlayTorch app in our announcement blog post: https://pytorch.org/blog/introducing-the-playtorch-app/

    react-native-pytorch-core

    • Tensor Indexing API for set tensor (68d976f by @raedle)
    • Fix issue with JNI Env not available (0723cc0 by @raedle)
    • Fix issue with JNI Env not available for audio (c0b2026 by @raedle)

    Full Changelog: https://github.com/facebookresearch/playtorch/compare/v0.2.3...v0.2.4

    Source code(tar.gz)
    Source code(zip)
  • v0.2.3(Nov 20, 2022)

    0.2.3 contains the following notable changes plus many smaller improvements and fixes.

    Learn more about the PlayTorch name and the PlayTorch app in our announcement blog post: https://pytorch.org/blog/introducing-the-playtorch-app/

    💫 The 0.2.0 series introduced new JavaScript interfaces to PyTorch APIs for flexible data processing and inference that replace MobileModel.execute and Live Spec JSON. See the core package's README for example usage.

    react-native-pytorch-core

    • Add tensor.matmul to PyTorch SDK for JSI (7f633ca4 by @zrfisher)
    • Update model urls from previous GitHub repo pytorch/live to facebookresearch/playtorch (edf43de1 by @raedle)
    • Fix minor compiler warnings (bd9d13dc by @raedle)
    • Add pragma marks to ignore deprecated-declarations warnings surfaced by the C++17 compiler. This was needed because PyTorch Mobile is C++14 (20842f64 by @raedle)
    • Introduce new image toBlob API function signature (#130) (ffa4ed63 by @raedle)
    • torch.jit._load_for_mobile with device and extra files options (#141) (e249116f by @raedle)
    • Patch failing issue with RN 0.64.3 (#157) (5e6b2c05 by @raedle)
    • Enforce 3 argument requirement for randint (02c37a34 by @neildhar)
    • upgrade prettier to 2.7.1 (74a3ed81 by @bradzacher)

    Breaking Changes

    • TypeScript type updates to improve handling of IValue generics (#144) (ef4465f4 by @raedle)

    react-native-template-pytorch-live

    • Update model urls from previous GitHub repo pytorch/live to facebookresearch/playtorch (bfd6a979 by @raedle)

    • Note: react-native-template-pytorch-live is deprecated. Instead, follow React Native's Environment Setup guide. To use react-native-pytorch-core in an existing app, follow the Add PlayTorch to Existing App guide, or in an Expo managed app run npx expo install react-native-pytorch-core.

    Thank you to our contributors @zrfisher, @neildhar, and @bradzacher.

    Full Changelog: https://github.com/facebookresearch/playtorch/compare/v0.2.2...v0.2.3

    Source code(tar.gz)
    Source code(zip)
  • v0.2.2(Sep 15, 2022)

    0.2.2 contains the following improvements and removes some deprecated features.

    💫 The 0.2.0 series introduced new JavaScript interfaces to PyTorch APIs for flexible data processing and inference that replace MobileModel.execute and Live Spec JSON. See the core package's README for example usage.

    react-native-pytorch-core

    • Remove "Live Spec" APIs. Instead, use the JavaScript interfaces to PyTorch APIs introduced with 0.2.0 (#111)
    • Force same orientation on ImageUtil.fromFile (#124)
    • Extend Tensor.topk() function parameters to match PyTorch API
    • Fix a bug with Tensor indexing APIs (#118)

    react-native-template-pytorch-live

    • Remove the related "slim template" package react-native-template-ptl. Instead, get started with the standard React Native CLI, Expo, or Expo EAS

    torchlive-cli

    • torchlive-cli was deprecated in 0.2.1 and has been removed in this release. Instead, follow the instructions in the Manual Environment Setup guide (#121)
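For anyone migrating off the removed CLI, here is a minimal sketch of starting a fresh project with the standard React Native CLI instead. The project name is hypothetical, and the steps assume you have already completed the Manual Environment Setup guide:

```shell
# Create a plain React Native project (no torchlive-cli needed)
npx react-native init MyPlayTorchApp
cd MyPlayTorchApp

# Add the PlayTorch core package directly
npm install react-native-pytorch-core

# iOS only: install the native pods
cd ios && pod install && cd ..
```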

    Thank you to our contributors @cjfghk5697, @liuyinglao, @raedle, @reedless, and Zeinab Sadeghipour Kermani.

    Full Changelog: https://github.com/facebookresearch/playtorch/compare/v0.2.1...v0.2.2

    Source code(tar.gz)
    Source code(zip)
    midas.ptl(82.34 MB)
  • v0.2.1(Aug 16, 2022)

    0.2.1 contains the following notable changes plus many smaller improvements and fixes.

    This is the first release under the new name "PlayTorch" which replaces "PyTorch Live". The repository now lives at https://github.com/facebookresearch/playtorch

    Learn more about the PlayTorch name and the PlayTorch app in our announcement blog post: https://pytorch.org/blog/introducing-the-playtorch-app/

    💫 The 0.2.0 series introduced new JavaScript interfaces to PyTorch APIs for flexible data processing and inference that replace MobileModel.execute and Live Spec JSON. See the core package's README for example usage.

    react-native-pytorch-core

    • Support passing general JavaScript types (e.g., Array, String, Object) to Module.forward, and automatically unpack result types (IValue in C++) to JavaScript types
    • Support for React Native 0.68, 0.69
    • Fix Expo Config Plugin: now the apps generated by expo prebuild are ready to compile
    • Fix iOS <Canvas> element to match the (correct) Android behavior when calling CanvasRenderingContext2D.invalidate() to repaint
    • Add new PyTorch JavaScript API wrappers: torch.cat, torch.full, torch.linspace, torch.logspace, torch.randperm, torch.randn, Tensor.argmin, Tensor.expand, and Tensor.flip
    • Add support for RGBA and grayscale to media.imageFromTensor
    • Deprecate media.imageFromBlob in favor of media.imageFromTensor
    • Deprecate MobileModel.execute, MobileModel.preload, MobileModel.unload, and Live Spec in general. Migrate away from Live Spec to use torch.jit._loadForMobile
    • Fix toDictStringKey (#99) in legacy Live Spec
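To illustrate the migration away from MobileModel.execute, here is a hedged sketch of loading and running a model through the torch.jit._loadForMobile path. The model path and the assumption that the output is a classification tensor are hypothetical; exact behavior depends on your model (this requires the React Native native runtime, so it is not runnable in plain Node):

```typescript
import {torch} from 'react-native-pytorch-core';

// Hypothetical path to a TorchScript lite (.ptl) model on device
const MODEL_PATH = '/path/to/model.ptl';

export async function runInference(input: number[]): Promise<number> {
  // Load the model directly (replaces MobileModel.preload/execute)
  const model = await torch.jit._loadForMobile(MODEL_PATH);
  // Wrap plain JavaScript data in a tensor before calling forward
  const tensor = torch.tensor([input]);
  const output = await model.forward(tensor);
  // Assuming a classification model: take the highest-scoring index
  return output.argmax().item();
}
```

The key difference from the Live Spec flow is that pre- and post-processing now happen in ordinary JavaScript, with no JSON spec describing the model's inputs and outputs.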

    react-native-template-pytorch-live

    • Update documentation references from PyTorch Live to PlayTorch

    torchlive-cli

    • Note: torchlive-cli is deprecated. Instead, follow React Native's Environment Setup guide. To use the default torchlive-cli template, pass the --template flag during setup: npx react-native init AwesomeProject --template react-native-template-pytorch-live

    Thank you to our contributors @ansonsyfang, @bhadreshpsavani, @chrisklaiber, @cjfghk5697, @clarksandholtz, @justinhaaheim, @liuyinglao, @michaelkulinich, @raedle, @simpleton, @ta211, Kyle Into, Lucca Bertoncini, Prakhar Sahay, Shushanth Madhubalan, Vladimir Pinchuk, and Zonggen Yi.

    Full Changelog: https://github.com/facebookresearch/playtorch/compare/v0.2.0...v0.2.1

    Source code(tar.gz)
    Source code(zip)
  • v0.2.0(Jul 7, 2022)

    0.2.0 is a major update with various bug fixes and the following notable changes. For an introduction to the new platform's capabilities, see our PlayTorch announcement blog post and new home at https://playtorch.dev

    react-native-pytorch-core

    • 💫 New JavaScript interfaces to PyTorch APIs for flexible data processing and inference. This replaces MobileModel.execute and Live Spec JSON. See the core package's README for example usage
    • Example code now uses the JavaScript interfaces to PyTorch APIs
    • New image APIs to convert from image to blob to tensor and back
    • Expand model support to include, for example, YOLOv5, DeepLabV3, Fast Neural Style, Wav2Vec2
    • Update PyTorch Mobile dependency from 1.10 to 1.12
    • Improve support for React Native 0.66+
    • Improve support for Apple Silicon
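As a sketch of the new image-to-blob-to-tensor path, the snippet below prepares a camera image for an ImageNet-style model. It follows the pattern in the core package's README; the 224-pixel size and normalization constants are typical ImageNet values, not requirements, and the code needs the React Native native runtime to actually run:

```typescript
import {media, torch, torchvision, Image} from 'react-native-pytorch-core';

const T = torchvision.transforms;

// Convert an Image (e.g., from the camera) into a model-ready tensor
export function imageToInput(image: Image) {
  // Raw RGB bytes of the image as a Blob
  const blob = media.toBlob(image);
  // HWC uint8 tensor built from those bytes
  let tensor = torch.fromBlob(blob, [image.getHeight(), image.getWidth(), 3]);
  // CHW float in [0, 1], as most vision models expect
  tensor = tensor.permute([2, 0, 1]).div(255);
  // Resize and normalize with ImageNet statistics (illustrative values)
  tensor = T.resize(224)(tensor);
  tensor = T.normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])(tensor);
  // Add the batch dimension expected by forward()
  return tensor.unsqueeze(0);
}
```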

    react-native-template-pytorch-live

    • Remove dependency on Python and locally created models via make_models.py. Instead, the pre-exported models are loaded from the network
    • Example code now uses the JavaScript interfaces to PyTorch APIs

    torchlive-cli

    • Note: torchlive-cli will be deprecated in an upcoming release. Instead, follow React Native's Environment Setup guide. To use the default torchlive-cli template, pass the --template flag during setup: npx react-native init AwesomeProject --template react-native-template-pytorch-live

    Thank you to our contributors @ansonsyfang, @ap190, @chrisklaiber, @clarksandholtz, @justinhaaheim, @krazykalman, @liuyinglao, @metawaku, @pavlos-chatzisavvas, @pd21989, @raedle, @ta211, @williamngan, Alp Genc, Bode Sule, Isaac Mosebrook, Minji Kim, and Thiago Roscia.

    Full Changelog: https://github.com/pytorch/live/compare/v0.1.3...v0.2.0

    Source code(tar.gz)
    Source code(zip)
    yolov5s.ptl(28.05 MB)
  • v0.2.0-rc.3(Jul 5, 2022)

  • v0.2.0-rc.2(Jun 24, 2022)

  • v0.2.0-rc.1(Jun 10, 2022)

  • v0.2.0-rc.0(Jun 3, 2022)

  • v0.1.3(Jan 19, 2022)

    0.1.3 is out with the following changes:

    Fixed

    • Remove unused experimental annotation (bee9aec) @raedle
    • Fix the detox test in the template app (239070f) @liuyinglao
    • Hardcoding length constraint into bert_qa model spec (804f413) @clarksandholtz
    • Create an AVD for the M1 Macs (avoids emulator failure) (#24) (3c096ea) @aaronespasa
    • Some cleanups to the model spec documentation (#27) (620b4d5) @mdwelsh
    • NPM publish to @latest only when release type is released (f9a399d) @liuyinglao
    • Fix encoding issue on Windows 11 (698132e) @raedle

    Features

    • Add fromJSRef method to ImageUtil (#31) (4825d28) @Sxela
    • Add e2e tests for JSI Configuration (9f5318a) @liuyinglao
    • Created slim template (#28) (a52ab6d) @clarksandholtz
    • Set up PyTorch Live to run C++ functions with JSI (80e7bef) @liuyinglao
    • Run make_models.py in post-init script on Windows (b580458) @raedle

    Full Changelog: https://github.com/pytorch/live/compare/v0.1.2...v0.1.3

    Source code(tar.gz)
    Source code(zip)
  • v0.1.2(Dec 20, 2021)

    0.1.2 is out with fixes:

    Fixed

    • New homepage for pytorch core npm module (#5) (9c44fa3) @HugoGresse
    • Display error message when post-init scripts exit with non-zero code (39b8775) @liuyinglao
    • RuntimeError related to quantization solved for ARM architecture (#8) (b8ab157) @aaronespasa
    • Update entry file for variant release (e0d014d) @raedle
    • Fix model loading in Android release build (#11) (394ba1b) @chrisklaiber
    • Delete error.log if make_models.py exits with 0 (c1bfd39) @raedle
    • Fix YarnInstaller skipping if installed (2bf671b) @raedle

    Features

    • Add detox e2e test for complex change (5099a06) @liuyinglao
    • Option to use gem/brew to install CocoaPods (d6bef4f) @raedle
    • Adds "yes" option to setup-dev (1f2e685) @raedle
    • Adds "cocoapods-installer" option (be1ba16) @raedle

    Full Changelog: https://github.com/pytorch/live/compare/v0.1.1...v0.1.2

    Source code(tar.gz)
    Source code(zip)
  • v0.1.1(Dec 1, 2021)

  • v0.1.0(Dec 1, 2021)

    This is our initial release of PyTorch Live (v0.1.0), including:

    • PyTorch Live website: https://pytorch.org/live
    • PyTorch Live CLI: https://www.npmjs.com/package/torchlive-cli
    • React Native PyTorch Core package: https://www.npmjs.com/package/react-native-pytorch-core
    • React Native PyTorch Live template: https://www.npmjs.com/package/react-native-template-pytorch-live

    Special thanks to @zrfisher and @HugoGresse for their contributions to this initial release!

    Source code(tar.gz)
    Source code(zip)
    BERTVocab.json(255.90 KB)
    bert_qa.ptl(131.95 MB)
    CoCoClasses.json(1.09 KB)
    deeplabv3.ptl(160.19 MB)
    deeplabv3_mobilenet.ptl(42.60 MB)
    detr_resnet50.ptl(158.80 MB)
    ImageNetClasses.json(28.00 KB)
    mnist.ptl(4.58 MB)
    mobilenet_v3_large.ptl(20.92 MB)
    mobilenet_v3_small.ptl(9.72 MB)
    pytorch_mobile_1_12_x86_64.zip(94.09 MB)
    pytorch_mobile_install_arm64.zip(34.11 MB)
    pytorch_mobile_install_x86_64.zip(71.76 MB)
    resnet18.ptl(44.59 MB)
    wav2vec2.ptl(197.40 MB)