QED-C: The Quantum Economic Development Consortium provides these computer programs and software for use in the fields of quantum science and engineering.

Overview

Application-Oriented Performance Benchmarks for Quantum Computing

This repository contains a collection of prototypical application- and algorithm-centric benchmark programs, designed to characterize how end users perceive the performance of current-generation quantum computers.

The repository is maintained by members of the Quantum Economic Development Consortium (QED-C) Technical Advisory Committee on Standards and Performance Metrics (Standards TAC).

Important Note -- The examples maintained in this repository are not intended to be viewed as "performance standards". Rather, they are offered as simple "prototypes", designed to make it as easy as possible for users to execute simple "reference applications" across multiple quantum computing APIs and platforms. The application / algorithmic examples are structured using a uniform pattern for defining circuits, executing across different platforms, collecting results, and measuring the performance and fidelity in useful ways.

A wide variety of "reference applications" are provided. At the current stage in the evolution of quantum computing hardware, some applications will perform better on one hardware target, while a completely different set may execute better on another. They are designed to give users a quantum "jump start", eliminating the need to develop, on their own, the uniform code patterns that facilitate quick development, deployment, and experimentation.

See the Implementation Status section below for the latest report on benchmarks implemented to date.

Notes on Repository Organization

The repository is organized at the highest level by specific reference application names. There is a directory for each application or algorithmic example, e.g. quantum-fourier-transform, which contains the bulk of the code for that application.

Within each application directory, there is a second level directory, one for each of the target programming environments that are supported. The repository is organized in this way to emphasize the application first and the target environment second, to encourage full support across platforms.

The directory names and the currently supported environments are:

    qiskit      -- IBM Qiskit
    cirq        -- Google Cirq
    braket      -- Amazon Braket

The goal has been to make the implementation of each algorithm identical across the different target environments, with processing and reporting of results as similar as possible. Each application directory includes a README file with information specific to that application or algorithm. Below we list the benchmarks we have implemented, with a suggested order of approach; the benchmarks in levels 1 and 2 are simpler and a good place to start for beginners, while levels 3 and 4 are more complicated and build on intuition and reasoning developed in the earlier algorithms.

Complexity of Benchmark Algorithms (Increasing Difficulty)

    1: Deutsch-Jozsa, Bernstein-Vazirani, Hidden Shift
    2: Quantum Fourier Transform, Grover's Search
    3: Phase Estimation, Amplitude Estimation
    4: Monte Carlo, Hamiltonian Simulation, Variational Quantum Eigensolver, Shor's Order Finding

In addition to the application directories at the highest level, there are several other directories and files with specific purposes:

    _common                      -- collection of shared routines, used by all the application examples
    _doc                         -- detailed DESIGN_NOTES, and other reference materials
    _containerbuildfiles         -- build files and instructions for creating Docker images (optional)
    _setup                       -- information on setting up all environments
    
    benchmarks-*.ipynb.template  -- Jupyter Notebook templates

Setup and Configuration

The prototype benchmark applications are easy to run and contain few dependencies. The primary dependency is on the Python packages needed for the target environment in which you would like to execute the examples.

In the _setup folder you will find a subdirectory for each of the target environments that contains a README with everything you need to know to install and configure the specific environment in which you would like to run.

Important Note:

The suite of application benchmarks is configured by default to run on the simulators
that are typically included with the quantum programming environments.
Certain program parameters, such as maximum numbers of qubits, number of circuits
to execute for each qubit width and the number of shots, are defaulted to values that 
can run on the simulators easily.

However, when running on hardware, it is important to reduce these values to account 
for the capabilities of the machine on which you are executing. This is especially 
important for systems on which one could incur high billing costs if running large circuits.
See the _setup folder, described above, for more information about each programming environment.
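
For example, here is a hedged sketch of dialing the defaults down for a hardware run. It assumes the Bernstein-Vazirani benchmark's run() accepts the same parameters as the Deutsch-Jozsa run() signature quoted later in this document, and the backend id shown is only a placeholder:

    # run from within bernstein-vazirani/qiskit after configuring the environment per _setup
    import bv_benchmark

    bv_benchmark.run(min_qubits=3,
                     max_qubits=6,        # fewer qubit widths than the simulator defaults
                     max_circuits=1,      # a single circuit per qubit width
                     num_shots=100,       # modest shot count to limit cost
                     backend_id='your_hardware_backend_id')   # placeholder backend id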

Executing the Application Benchmark Programs from a Shell Window

The benchmark programs may be run manually in a command shell. In a command window or shell, change directory to the application you would like to execute, then run commands similar to the following to begin execution of the main program for the application:

    cd bernstein-vazirani/qiskit
    python bv_benchmark.py

This will run the program, construct and execute multiple circuits, analyze results, and produce a set of bar charts to report on the results. The program executes random circuits constructed for a specific number of qubits, in a loop that ranges from min_qubits to max_qubits (with defaults that can be overridden by passing parameters). The number of random circuits generated for each qubit size can be controlled by the max_circuits parameter.
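
The defaults can be overridden by passing arguments to run(). For example (a sketch based on the run() signature in the Deutsch-Jozsa source quoted later in this document), the last line of the benchmark file could be changed to:

    # sweep widths 3..8, two random circuits per width, 1000 shots each (illustrative values)
    if __name__ == '__main__': run(min_qubits=3, max_qubits=8, max_circuits=2, num_shots=1000)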

As each benchmark program is executed, you should see output that looks like the following, showing the average circuit creation and execution time along with a measure of the quality of the result, for each circuit width executed by the benchmark program:

Sample Output

Executing the Application Benchmark Programs in a Jupyter Notebook

Alternatively, you may use the Jupyter Notebook templates provided in this repository. Simply copy a template file and remove the .template extension from the copy. There is one template file provided for each of the supported API environments.

In the top level of this repo, start your jupyter-notebook process. When the browser listing appears, select the desired notebook .ipynb file to launch the notebook. There you will have access to a cell for each of the benchmarks in the repository, and may "Run" any one of them independently and see the results presented there.

Container Deployment of the Application Benchmark Programs

Applications are often deployed into Container Management Frameworks such as Docker, Kubernetes, and the like.

The Prototype Benchmarks repository includes support for the creation of a unique 'container image' for each of the supported API environments. You can find the instructions and all the necessary build files in a folder at the top level named _containerbuildfiles. The benchmark program image can be deployed into a container management framework and executed as any other application in that framework.

Once built, deployed and launched, the container process invokes a Jupyter Notebook from which you can run all the available benchmarks.

Interpreting Metrics

  • Creation Time: time spent on the classical machine creating the circuit and transpiling it.
  • Execution Time: time spent on the quantum simulator or hardware backend running the circuit. This only includes the time during which the algorithm is being run, and does not include any time waiting in a queue for Qiskit and Cirq. Braket does not currently report execution time separately, and therefore the queue time is included as well.
  • Fidelity: a measure of how well the simulator or hardware runs a particular benchmark, on a scale from 0 to 1, with 0 being a completely useless result and 1 being perfect execution of the algorithm. The math of how we calculate the fidelity is outlined in the file _doc/POLARIZATION_FIDELITY.md.
  • Circuit/Transpiled Depth: the number of layers of gates needed to apply a particular algorithm. The circuit depth is the depth if all of the gates used for the algorithm were native, while the transpiled depth is the depth when only a restricted set of gates is allowed; we default to ['rx', 'ry', 'rz', 'cx']. Note: this gate set is used only to provide a normalized transpiled depth across all hardware and simulator platforms; we separately transpile to the native gate set of the hardware. The depth can help explain why one algorithm is harder to run than another at the same circuit width. This metric is currently only available in the Qiskit implementations of the algorithms.
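
As an illustration of how the two depth numbers differ, here is a short sketch using Qiskit's public transpile API; the repository's own depth calculation lives in _common/qiskit/execute.py, so treat this only as an approximation of the idea:

    from qiskit import QuantumCircuit, transpile

    qc = QuantumCircuit(3)
    qc.h([0, 1, 2])
    qc.cz(0, 2)
    qc.cz(1, 2)

    # circuit depth: depth of the algorithm as written, treating every gate as native
    print("circuit depth:", qc.depth())

    # normalized transpiled depth: depth after rewriting into the fixed basis set
    qc_norm = transpile(qc, basis_gates=['rx', 'ry', 'rz', 'cx'], seed_transpiler=0)
    print("normalized transpiled depth:", qc_norm.depth())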

Implementation Status

Below is a table showing the degree to which the benchmarks have been implemented in each of the target platforms (as of the last update to this branch):

Prototype Benchmarks - Implementation Status

Comments
  • No ability to specify which qubits used in Qiskit transpiler

    The benchmarking suite has no way to specify which qubits are used in the execution of a given circuit, i.e. one cannot define an initial_layout here:

    https://github.com/SRI-International/QC-App-Oriented-Benchmarks/blob/5e1f68eed96d667b9bcc08d3b04b2004f4c04643/_common/qiskit/execute.py#L269

    This would be nice to have because, for example, in Fig. 11 of https://arxiv.org/abs/2110.03137 you look at dynamic Bernstein-Vazirani on the Lagos system, but the 0-1 edge of the coupling map is actually not the best (it is also not the worst). On that machine the 3-5 edge is the best in terms of fidelity:

    [3, 5] 0.7861328125
    [2, 1] 0.77197265625
    [5, 4] 0.7678222656250001
    [5, 3] 0.7481689453125
    [3, 1] 0.7360839843750001
    [4, 5] 0.7303466796875
    [0, 1] 0.7076416015625001
    [1, 2] 0.665771484375
    [6, 5] 0.64208984375
    [1, 0] 0.6323242187500001
    [1, 3] 0.6124267578125001
    [5, 6] 0.5809326171875001
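
    For context, the Qiskit transpiler itself does accept an initial placement via initial_layout; a minimal sketch of what exposing that could look like (the circuit and backend here are illustrative stand-ins, not repository code):

        from qiskit import QuantumCircuit, transpile

        # two-qubit toy circuit standing in for the BV kernel
        qc = QuantumCircuit(2, 2)
        qc.h(0)
        qc.cx(0, 1)
        qc.measure([0, 1], [0, 1])

        # 'backend' is assumed to be the target device object; initial_layout pins
        # the circuit onto physical qubits 3 and 5 (the best edge in the list above)
        qc_mapped = transpile(qc, backend=backend, initial_layout=[3, 5])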
    
    opened by nonhermitian 10
  • Problem in executing the VQE code

    Hello Sir, I am having a problem executing the VQE code. The error I am getting is below: [image] I tried installing these packages, but the error is still not resolved. Need your help.

    opened by manu123416 6
  • Deutsch-Jozsa benchmarking test is throwing an error

    Hi, I tried running the below code from your Deutsch-Jozsa algorithm:

    """
    Deutsch-Jozsa Benchmark Program - Qiskit
    """
    
    import sys
    import time
    
    import numpy as np
    from qiskit import QuantumCircuit, QuantumRegister, ClassicalRegister
    
    sys.path[1:1] = ["_common", "_common/qiskit"]
    sys.path[1:1] = ["../../_common", "../../_common/qiskit"]
    import execute as ex
    import metrics as metrics
    
    np.random.seed(0)
    
    verbose = False
    
    # saved circuits for display
    QC_ = None
    C_ORACLE_ = None
    B_ORACLE_ = None
    
    
    ############### Circuit Definition
    
    # Create a constant oracle, appending gates to given circuit
    def constant_oracle(input_size, num_qubits):
        # Initialize first n qubits and single ancilla qubit
        qc = QuantumCircuit(num_qubits, name=f"Uf")
    
        output = np.random.randint(2)
        if output == 1:
            qc.x(input_size)
    
        global C_ORACLE_
        if C_ORACLE_ == None or num_qubits <= 6:
            if num_qubits < 9: C_ORACLE_ = qc
    
        return qc
    
    
    # Create a balanced oracle.
    # Perform CNOTs with each input qubit as a control and the output bit as the target.
    # Vary the input states that give 0 or 1 by wrapping some of the controls in X-gates.
    def balanced_oracle(input_size, num_qubits):
        # Initialize first n qubits and single ancilla qubit
        qc = QuantumCircuit(num_qubits, name=f"Uf")
    
        b_str = "10101010101010101010"  # permit input_string up to 20 chars
        for qubit in range(input_size):
            if b_str[qubit] == '1':
                qc.x(qubit)
    
        qc.barrier()
    
        for qubit in range(input_size):
            qc.cx(qubit, input_size)
    
        qc.barrier()
    
        for qubit in range(input_size):
            if b_str[qubit] == '1':
                qc.x(qubit)
    
        global B_ORACLE_
        if B_ORACLE_ == None or num_qubits <= 6:
            if num_qubits < 9: B_ORACLE_ = qc
    
        return qc
    
    
    # Create benchmark circuit
    def DeutschJozsa(num_qubits, type):
        # Size of input is one less than available qubits
        input_size = num_qubits - 1
    
        # allocate qubits
        qr = QuantumRegister(num_qubits);
        cr = ClassicalRegister(input_size);
        qc = QuantumCircuit(qr, cr, name="main")
    
        for qubit in range(input_size):
            qc.h(qubit)
        qc.x(input_size)
        qc.h(input_size)
    
        qc.barrier()
    
        # Add a constant or balanced oracle function
        if type == 0:
            Uf = constant_oracle(input_size, num_qubits)
        else:
            Uf = balanced_oracle(input_size, num_qubits)
        qc.append(Uf, qr)
    
        qc.barrier()
    
        for qubit in range(num_qubits):
            qc.h(qubit)
    
        # uncompute ancilla qubit, not necessary for algorithm
        qc.x(input_size)
    
        qc.barrier()
    
        for i in range(input_size):
            qc.measure(i, i)
    
        # save smaller circuit and oracle subcircuit example for display
        global QC_
        if QC_ == None or num_qubits <= 6:
            if num_qubits < 9: QC_ = qc
    
        # return a handle to the circuit
        return qc
    
    
    ############### Result Data Analysis
    
    # Analyze and print measured results
    # Expected result is always the type, so fidelity calc is simple
    def analyze_and_print_result(qc, result, num_qubits, type, num_shots):
        # Size of input is one less than available qubits
        input_size = num_qubits - 1
    
        # obtain counts from the result object
        counts = result.get_counts(qc)
        if verbose: print(f"For type {type} measured: {counts}")
    
        # create the key that is expected to have all the measurements (for this circuit)
        if type == 0:
            key = '0' * input_size
        else:
            key = '1' * input_size
    
        # correct distribution is measuring the key 100% of the time
        correct_dist = {key: 1.0}
    
        # use our polarization fidelity rescaling
        fidelity = metrics.polarization_fidelity(counts, correct_dist)
    
        return counts, fidelity
    
    
    ################ Benchmark Loop
    
    # Execute program with default parameters
    def run(min_qubits=3, max_qubits=8, max_circuits=3, num_shots=100,
            backend_id='qasm_simulator', provider_backend=None,
            hub="ibm-q", group="open", project="main", exec_options=None):
        print("Deutsch-Jozsa Benchmark Program - Qiskit")
    
        # validate parameters (smallest circuit is 3 qubits)
        max_qubits = max(3, max_qubits)
        min_qubits = min(max(3, min_qubits), max_qubits)
        # print(f"min, max qubits = {min_qubits} {max_qubits}")
    
        # Initialize metrics module
        metrics.init_metrics()
    
        # Define custom result handler
        def execution_handler(qc, result, num_qubits, type, num_shots):
    
            # determine fidelity of result set
            num_qubits = int(num_qubits)
            counts, fidelity = analyze_and_print_result(qc, result, num_qubits, int(type), num_shots)
            metrics.store_metric(num_qubits, type, 'fidelity', fidelity)
    
        # Initialize execution module using the execution result handler above and specified backend_id
        ex.init_execution(execution_handler)
        ex.set_execution_target(backend_id, provider_backend=provider_backend,
                                hub=hub, group=group, project=project, exec_options=exec_options)
    
        # Execute Benchmark Program N times for multiple circuit sizes
        # Accumulate metrics asynchronously as circuits complete
        for num_qubits in range(min_qubits, max_qubits + 1):
    
            input_size = num_qubits - 1
    
            # determine number of circuits to execute for this group
            num_circuits = min(2, max_circuits)
    
            print(f"************\nExecuting [{num_circuits}] circuits with num_qubits = {num_qubits}")
    
            # loop over only 2 circuits
            for type in range(num_circuits):
                # create the circuit for given qubit size and secret string, store time metric
                ts = time.time()
                qc = DeutschJozsa(num_qubits, type)
                metrics.store_metric(num_qubits, type, 'create_time', time.time() - ts)
    
                # collapse the sub-circuit levels used in this benchmark (for qiskit)
                qc2 = qc.decompose()
    
                # submit circuit for execution on target (simulator, cloud simulator, or hardware)
                ex.submit_circuit(qc2, num_qubits, type, num_shots)
    
            # Wait for some active circuits to complete; report metrics when groups complete
            ex.throttle_execution(metrics.finalize_group)
    
        # Wait for all active circuits to complete; report metrics when groups complete
        ex.finalize_execution(metrics.finalize_group)
    
        # print a sample circuit
        print("Sample Circuit:");
        print(QC_ if QC_ != None else "  ... too large!")
        print("\nConstant Oracle 'Uf' =");
        print(C_ORACLE_ if C_ORACLE_ != None else " ... too large or not used!")
        print("\nBalanced Oracle 'Uf' =");
        print(B_ORACLE_ if B_ORACLE_ != None else " ... too large or not used!")
    
        # Plot metrics for all circuit sizes
        metrics.plot_metrics("Benchmark Results - Deutsch-Jozsa - Qiskit")
    
    
    # if main, execute method
    if __name__ == '__main__': run()
    

    I am getting the below error:

    C:\Users\manuc\Documents\Pytorch_Study_Workspace\Benchmark_dj\venv\Scripts\python.exe C:/Users/manuc/Documents/Pytorch_Study_Workspace/Benchmark_dj/main.py
    Traceback (most recent call last):
      File "C:\Users\manuc\Documents\Pytorch_Study_Workspace\Benchmark_dj\main.py", line 219, in <module>
        if __name__ == '__main__': run()
      File "C:\Users\manuc\Documents\Pytorch_Study_Workspace\Benchmark_dj\main.py", line 161, in run
        metrics.init_metrics()
    AttributeError: module 'metrics' has no attribute 'init_metrics'
    Deutsch-Jozsa Benchmark Program - Qiskit
    
    Process finished with exit code 1
    

    Could you please guide me on how to correct this error?

    opened by manu123416 5
  • Figure 9 from paper does not use proper depth

    In the paper https://arxiv.org/abs/2110.03137 it is said that the depth calculation for circuits is done using the basis set ['rx', 'ry', 'rz', 'cx']. However, when looking at Fig. 9 of the paper, the reported depth does not match the depth obtained when the circuit is decomposed to the indicated basis. Namely, the routine just does a decompose

    https://github.com/SRI-International/QC-App-Oriented-Benchmarks/blob/5e1f68eed96d667b9bcc08d3b04b2004f4c04643/quantum-fourier-transform/qiskit/qft_benchmark.py#L313

    before passing on to execute that computes the depth of this decomposition:

    https://github.com/SRI-International/QC-App-Oriented-Benchmarks/blob/5e1f68eed96d667b9bcc08d3b04b2004f4c04643/_common/qiskit/execute.py#L237

    This does not decompose to the correct basis, and the circuits are much shorter than they should be. At 7 qubits, the decomposed depth is 31, which matches Fig. 9, but the actual depth with the correct basis is 78. The same is true for the circuits that are swap-mapped to the Casablanca system, where I get an average depth of 117.

    opened by nonhermitian 5
  • more changes to optgap plotting

    • Added function to metrics.py for computing the empirical probability distribution of cut sizes.
    • Added function that plots the cactus plots
    • Added an mplstyle file
    • Rather than using mplstyle globally, using it with a context manager
    • Restructuring in plot_metrics_optgaps to allow choosing which metrics to plot.
    opened by PratikSathe 3
  • hamiltonian-simulation is throwing an error

    Hi, I tried running the hamiltonian-simulation code in PyCharm. I am getting the below error: [image] The directory structure of my program looks as below: [image] Please give a suggestion on how to resolve this error.

    opened by manu123416 3
  • Added `q_sum` check

    When running some benchmarks, if the expected distribution is too small, q_sum will be zero, resulting in a ZeroDivisionError when calculating hellinger_fidelity_with_expected.

    I wasn't sure on what the printed Error message should say. Please let me know if you have any suggestions.

    opened by japanavi 3
  • add multi transpile code and example

    Modify the execute.py module for Qiskit to allow multiple transpilation passes. This is useful because the transpiled circuit is found stochastically and can vary across runs of the transpiler. Running the transpiler multiple times and selecting the transpiled circuit with the lowest CX count can improve the fidelity of circuits run with Qiskit.
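
    A minimal sketch of the idea, using Qiskit's public transpile API (illustrative only; the actual change is in the _common/qiskit/execute.py module):

        from qiskit import transpile

        # transpile the same circuit several times with different seeds and keep
        # the candidate with the fewest CX gates ('qc' and 'backend' assumed defined)
        candidates = [transpile(qc, backend=backend, seed_transpiler=s) for s in range(5)]
        best = min(candidates, key=lambda c: c.count_ops().get('cx', 0))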

    opened by miamico 3
  • Added custom theta initialization & small refactor

    Added custom theta initialization parameter custom_theta_init: dict[float]. To initialize custom angles, pass a dict of floats with the angle name as the key, e.g.

    maxcut_benchmark.run(custom_theta_init={"beta": 0.5, "gamma": 0.5})

    Slightly refactored the code to initialize new thetas in the main loop; this means we no longer need the method and rounds parameters in MaxCut, since we always pass thetas_init.

    Also, fixed title issue in optimality gap plot (was missing the f for the fstring).

    opened by japanavi 2
  • Cleaned up notebook templates

    • Added markdown headings above each benchmark cell for easier navigation.
    • Deleted run magic commands because they were redundant and didn't allow user to specify any new arguments.
    opened by japanavi 2
  • Fixed opt time computation; Uniform random sampling for maxcut

    Get the distribution of cut sizes for uniform random sampling. Changes to metrics.py. Next, we will add a function for plotting the empirical probability distribution of cut sizes, both for the QAOA output and for uniform random sampling.

    opened by PratikSathe 1
  • change system path additions

    Use __file__ in any module that is imported. This method allows one to import, for example, maxcut/qiskit/maxcut_benchmark.py from a location other than the maxcut/qiskit/ directory. The previous way of adding to the system path did not work in this case.

    opened by PratikSathe 0
  • Generating invalid expected distribution

    In this PR, it's found that it's possible for the maxcut benchmark to produce expected distributions with norm=0. At line 139 of get_expectation, there is a step:

    # scale to number of shots
    for k, v in counts.items():
        counts[k] = round(v * num_shots)
    

    Correct me if I'm wrong, but this is used to compare the results against a discrete approximation to the theoretical distribution, possibly to avoid penalizing a result list that does not contain any counts for bitstrings with very small probability mass (so that you'd expect 0 appearances at the given shot count), and also to peg an integer number of results for each bitstring as ideal. This means that, for the case you describe, the only way to run the problem instance that throws the error is to increase the number of shots until the discretized expected distribution has at least a single nonzero element. I worry this has its own issues, because a significant distortion between the original distribution and the discretized distribution causes a distortion in the actual fidelity calculation. You could conceivably be comparing the results to a discrete distribution with wacky finite-size effects that make it look very different from the distribution that an ideal quantum computer is pulling from. This should only happen when the theoretical distribution is very wide and mostly but not perfectly flat (I think), but it's worth considering.

    Just wanted to share these thoughts... I don't think we use this kind of step in other benchmarks? It seems odd that we'd calculate fidelity against discretized distributions for some benchmarks and continuous exact distributions for others. Maybe we should perform a check that something like this doesn't happen in maxcut_benchmark.py or pass the exact distribution even if the fidelity moderately underperforms at low shot counts?
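
    A small worked example of that failure mode (illustrative numbers only): with a wide, nearly flat expected distribution and a modest shot count, every rounded entry can be zero, leaving a distribution with norm 0.

        # ~0.002 probability on each of 512 bitstrings, discretized at 100 shots
        counts = {format(k, '09b'): 1 / 512 for k in range(512)}
        num_shots = 100

        discretized = {k: round(v * num_shots) for k, v in counts.items()}
        print(sum(discretized.values()))   # 0 -> the "expected" distribution is empty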

    opened by necaisej 0
  • polarization fidelity is not a valid comparator

    The QED-C benchmarks, and the paper, use the polarization fidelity as the comparison metric across applications and differing numbers of qubits. This is given by Eq. (2) of the paper (https://arxiv.org/abs/2110.03137). In the plots this fidelity is reported on the interval [0,1] (see the screenshot attached to the issue). However, the polarization fidelity is not defined over the interval [0,1]. Instead, the lower bound is negative and is given by
    -(1/2**N)/(1-1/2**N)
    

    where N is the number of qubits. The range [0,1] is therefore only valid in the large-N limit. More importantly, this lower bound is qubit-number specific, so this fidelity cannot be used as a comparator across differing numbers of qubits, as is done in the tests and the paper.

    The polarization fidelity should, for example, be shifted and rescaled so that the range [0,1] is valid across all numbers of qubits.
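
    One way to read Eq. (2) together with this lower bound (a sketch only; the repository's actual definition is in _doc/POLARIZATION_FIDELITY.md):

        def polarization_rescale(f, num_qubits):
            # shift and rescale a raw fidelity f in [0, 1]; d = 2**N is the Hilbert-space dimension
            d = 2 ** num_qubits
            return (f - 1 / d) / (1 - 1 / d)

        # f = 1 -> 1.0 (perfect), f = 1/d -> 0.0 (uniformly random),
        # f = 0 -> -(1/d)/(1 - 1/d), the qubit-number-dependent lower bound quoted above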

    opened by nonhermitian 4
  • Chaseklvk/add amplitude estimation braket

    This PR adds an Amazon Braket implementation of the Amplitude Estimation benchmark.

    Here are a couple notes about this implementation:

    1. I found that Braket just doesn't support certain features natively, namely, adjoint circuits, multi-cnot gates, etc. (at least not obviously from the documentation), so I had to implement many of those functions by hand. It sounds like it might be useful to compile a general set of Braket utils useful across all benchmarks.

    2. The main implementation is located in braket/ae_benchmark.py and the utilities mentioned in point 1 are located in braket/ae_utils.py.

    3. I tried to keep the benchmark itself as uniform as possible with respect to the existing benchmarks with some small differences due to Braket's limitations.

    • When generating Q, instead of returning cQ and Q, the function Q_Unitary returns the Q circuit object as well as the unitary matrix of Q which is later supplied to controlled_unitary() when creating the general circuit.
    • I kept a vanilla Python list of qubit numbers to use, similar to QuantumRegister in Qiskit.

    Let me know if you have any questions! I'm not sure if there's any established workflow, so I just assumed based on the branches that I should PR into develop. Please let me know if there's a different established flow.

    opened by chaseklvk 1