Official repo for QHack—the quantum machine learning hackathon

Overview

Note: This repository has been frozen while we consider the submissions for the QHack Open Hackathon. We hope you enjoyed the event!


Welcome to QHack, the quantum machine learning hackathon! We're thrilled to have the opportunity to meet and work with such a large and diverse group of participants, and we look forward to interacting with you all during the event.

This year's event consists of three main components:

The up-to-date event schedule can be found here.

Power Ups and Prizes

QHack has some amazing goodies and prizes available to be won, courtesy of our sponsors.

Credits for AWS

  • Earn $250 in AWS credits: At the conclusion of our Feb 19 live stream, the top 80 teams on the scoreboard will receive $250 in AWS credits to help them build their Open Hackathon solutions on AWS. Teams can apply credits to any AWS service, including Amazon Braket, where they can showcase their ideas on Rigetti, IonQ, and D-Wave hardware or with high-performance simulators in the cloud.

  • Earn $4000 in AWS credits: Teams who open an issue by Feb 24 on this GitHub repository with a description of their (in progress) Open Hackathon project are eligible for $4000 in additional AWS credits to use towards their hackathon project.

Access Sandbox's Floq Simulator

  • Alpha access to TPU-based quantum simulators: The top 50 teams in the challenge will each receive an API key for the alpha of Sandbox@Alphabet's Floq API. Discover more details about Floq@QHack here.

  • Floq Cash Prize: The team with the best usage of Floq by the end of the Open Hackathon will be eligible to receive a $2500 cash prize. See here for more details.

Grand Prize

  • Win a summer internship at CERN: The top overall team (judged by QML Challenge scoreboard ranking and Open Hackathon project) will receive up to 3 summer internship positions at CERN.

Please read our terms and conditions for official eligibility and evaluation criteria. Entry void in Quebec.

Participants in the event agree to abide by the QHack Code of Conduct.

Comments
  • [Power Up] Variational Language Model


    Team Name:

    TeamX

    Project Description:

    In this project, we developed a variational quantum algorithm for natural language processing. Our goal is to train a quantum circuit so that it can process and recognize words. Applications range from word matching to sentence completion, sentence generation, and more. We use state-of-the-art deep-learning word embeddings and amplitude-encoded quantum registers, with a new ansatz and training methodology based on the swap test between words.
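
    As an illustration of the swap-test primitive this project builds on, here is a minimal PennyLane sketch (our own, assuming a toy size of 2 qubits per word; not the team's code). The ancilla's Z expectation equals the squared overlap between the two encoded words:

    ```python
    import numpy as np
    import pennylane as qml

    n_qubits = 2  # qubits per word (illustrative choice)
    dev = qml.device("default.qubit", wires=2 * n_qubits + 1)

    @qml.qnode(dev)
    def swap_test(word1, word2):
        # Amplitude-encode each (normalized) word embedding into its own register.
        qml.MottonenStatePreparation(word1, wires=range(1, n_qubits + 1))
        qml.MottonenStatePreparation(word2, wires=range(n_qubits + 1, 2 * n_qubits + 1))
        # Standard swap test: Hadamard, controlled swaps, Hadamard on an ancilla.
        qml.Hadamard(wires=0)
        for i in range(n_qubits):
            qml.CSWAP(wires=[0, 1 + i, 1 + n_qubits + i])
        qml.Hadamard(wires=0)
        # <Z> on the ancilla equals the squared overlap |<word1|word2>|^2.
        return qml.expval(qml.PauliZ(0))

    w = np.random.rand(2 ** n_qubits)
    w = w / np.linalg.norm(w)  # MottonenStatePreparation expects a normalized vector
    print(swap_test(w, w))  # ~1.0 for identical words
    ```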

    Source code:

    https://github.com/Slimane33/qhack_project

    Resource Estimate:

    We can use AWS SV1 to parallelize the gradient computation during training, but the computational cost remains high due to the number of sentences and the total number of words in the dictionary.

    With the currently available resources, we estimate the training times to be:

    • For 10k sentences with 10 words per sentence / 2 qubits per word / 2 layers -> 4 days

    • For 10k sentences with 7 words per sentence / 3 qubits per word / 2 layers -> 10 days

    • We have started to generate a synthetic dataset to limit resource consumption. In any case, we might need more resources from AWS.

    • Number of qubits required: the quantum circuit to train corresponds to one sentence plus an extra word and an ancillary qubit, i.e., Q*(N+1)+1 qubits, where N is the number of words per sentence and Q the number of qubits per word. E.g., a 4-word sentence with 3 qubits per word requires 16 qubits; a 5-word sentence with 4 qubits per word requires 25 qubits.

    • Number of trainable parameters: the number of trainable parameters in the ansatz is around Q*(1+N/2)*L on average, where L is the number of layers (the exact count depends on the parity of the number of words and the number of qubits). E.g., a 4-word sentence with 3 qubits per word and 3 layers requires 27 parameters. (See the sketch below.)
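
    As a quick numerical check of the two formulas above, here is an illustrative helper (ours, not from the team's repo):

    ```python
    # Resource formulas quoted in this issue, reproduced for checking.
    def n_qubits(N, Q):
        # One sentence of N words + 1 extra word, Q qubits each, + 1 ancilla.
        return Q * (N + 1) + 1

    def n_params(N, Q, L):
        # Approximate trainable-parameter count per the estimate above.
        return Q * (1 + N / 2) * L

    print(n_qubits(4, 3))     # 16
    print(n_qubits(5, 4))     # 25
    print(n_params(4, 3, 3))  # 27.0
    ```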

    Power Up 
    opened by JonasLandman 8
  • [Power Up] Event classification with data-reuploading in High Energy Physics


    Team Name:

    Entangled_Nets

    Project Description:

    The large experiments conducted in the field of particle physics require the detection and analysis of data produced in particle collisions at high-energy accelerators such as the LHC [2]. In these experiments, particles created by collisions are observed by layers of high-precision detectors surrounding the collision points, which produce large amounts of data about the collision. This has motivated the use of "classical" machine learning techniques in several areas to improve the performance of the analysis. Moreover, these techniques are also being adapted to quantum computing, e.g., unfolding measurement distributions via quantum annealing [3]. Intending to take advantage of both fields, we will use techniques from quantum machine learning, which is considered one of the areas of quantum computing that could bring a quantum advantage over classical methods [4][5].

    Furthermore, since the development of quantum hardware with a sufficient number of qubits is still in progress, circuits that use fewer qubits are more plausible to consider. Besides, such circuits may prove relevant even if they do not provide any quantum advantage, since they may be useful parts of larger circuits. We will use the idea of data re-uploading discussed by Pérez-Salinas et al. [6], where it is shown that a single qubit can be loaded with arbitrary-dimensional data and then used as a universal quantum classifier.

    This project aims to use the method of data re-uploading, where qubits are used as quantum classifiers to classify a certain dataset with high accuracy, together with a parametrized quantum circuit whose parameters are used to construct a cost function that is minimized classically (a minimal sketch follows the references below). For our model, the SUSY dataset [1] will be considered.

    [1] SUSY Data Set - UCI Machine Learning Repository

    [2] Event Classification with Quantum Machine Learning in High-Energy Physics

    [3] Unfolding measurement distributions via quantum annealing

    [4] Quantum Computing in the NISQ era and beyond

    [5] Quantum Machine Learning in High Energy Physics

    [6] Data re-uploading for a universal quantum classifier, Adrián Pérez-Salinas, Alba Cervera-Lierta, Elies Gil-Fuster, José I. Latorre
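
    A minimal single-qubit data re-uploading classifier in the spirit of [6] (our own sketch with assumed hyperparameters, not the team's model):

    ```python
    import pennylane as qml
    from pennylane import numpy as np

    dev = qml.device("default.qubit", wires=1)

    @qml.qnode(dev)
    def classifier(params, x):
        # Re-upload the (3-dimensional) data point once per layer,
        # interleaved with trainable rotations.
        for layer_weights in params:
            qml.Rot(*x, wires=0)              # data-encoding rotation
            qml.Rot(*layer_weights, wires=0)  # trainable rotation
        return qml.expval(qml.PauliZ(0))

    params = np.random.uniform(0, 2 * np.pi, (4, 3), requires_grad=True)  # 4 layers
    x = np.array([0.1, 0.7, -0.3])
    print(classifier(params, x))  # output in [-1, 1]; its sign gives the predicted class
    ```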

    Source code:

    The draft source code

    Note: This is draft code for the initial entry for the AWS Power-up. The final source code will be modified and submitted before the final deadline.

    Resource Estimate:

    We intend to use the power-up prize to further investigate the algorithms and try different approaches to increase the accuracy of our model using simulators, as well as to test the developed model on the quantum hardware access provided by AWS.

    Aspen-8 task charges: 1 task × $0.30/task = $0.30. Shot charges: 1,000 shots × $0.00035/shot = $0.35. Total charges per task: $0.65 = $0.30 + $0.35.

    • 1-qubit testing: 1,000 tasks → total charges $650 = 1,000 × $0.65

    • 2-qubit testing: 1,000 tasks → total charges $650 = 1,000 × $0.65

    • 1-qubit training: 200 × 10 = 2,000 tasks (10 epochs, 200 tasks/epoch) → total charges $1,300 = 2,000 × $0.65

    • 2-qubit training: 200 × 10 = 2,000 tasks (10 epochs, 200 tasks/epoch) → total charges $1,300 = 2,000 × $0.65

    Total resource estimate for all objectives: $3,900
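
    The arithmetic above, reproduced as a short script (prices as quoted in this issue, not authoritative Amazon Braket pricing):

    ```python
    # Prices as quoted in this issue (not authoritative Braket pricing).
    PER_TASK = 0.30      # $ per task on Aspen-8
    PER_SHOT = 0.00035   # $ per shot
    SHOTS = 1000

    cost_per_task = PER_TASK + SHOTS * PER_SHOT      # $0.65
    testing_1q = 1000 * cost_per_task                # $650
    testing_2q = 1000 * cost_per_task                # $650
    training_1q = 200 * 10 * cost_per_task           # 10 epochs x 200 tasks = $1,300
    training_2q = 200 * 10 * cost_per_task           # $1,300
    total = testing_1q + testing_2q + training_1q + training_2q
    print(f"${total:,.2f}")  # $3,900.00
    ```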

    Power Up 
    opened by t0gan 7
  • [Power Up] Performance Evaluation of Hybrid Quantum-Classical Object Detection Networks


    Team Name:

    QuantumTunnelers

    Project Description:

    Our project aims to create a hybrid model of popular object detection networks. Primarily, we are focusing on RetinaNet with a MobileNet (and possibly ResNet-18) feature extraction backbone. Our goal is to introduce quantum layers and measure various performance statistics such as mean Average Precision (mAP) and the number of epochs taken to reach a comparable Loss value.

    The main layer we are focusing on is the convolutional layer. Using a modification of both the original Quanvolutional layer model introduced in Henderson et al. (2019) and the demo found on PennyLane, we custom-built a quantum convolutional layer that takes any kernel size and output layer depth as parameters, automatically determines the number of qubits needed, and outputs the appropriate feature map using a quantum circuit as its base.

    We plan to replace key convolutional layers within RetinaNet with our custom quanvolutional layer and measure the aforementioned performance statistics. We hope to see improvement within the statistics and hope to extend this project to other popular networks after this Hackathon.

    Currently, we have trained and evaluated a custom-made backbone to test our quanvolutional layer, since MobileNet's architecture is too resource-consuming for our laptops. We plan to use AWS servers to properly train our hybrid backbones. For more details and information about our progress, please visit our GitHub repository.
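
    To make the described layer concrete, here is a minimal quanvolution sketch (our own illustration, assuming one qubit per patch pixel, pixels in [0, 1], and a fixed random circuit; the team's actual trainable layer lives in their repository):

    ```python
    import numpy as np
    import pennylane as qml

    def make_quanv(kernel_size, out_channels, seed=0):
        """Build a quanvolution function for a given kernel size and output depth."""
        n_qubits = kernel_size ** 2        # one qubit per pixel in the patch (assumption)
        assert out_channels <= n_qubits    # one measured qubit per output channel
        rng = np.random.default_rng(seed)
        weights = rng.uniform(0, 2 * np.pi, (1, n_qubits))  # fixed random layer
        dev = qml.device("default.qubit", wires=n_qubits)

        @qml.qnode(dev)
        def circuit(patch):
            qml.AngleEmbedding(np.pi * patch, wires=range(n_qubits))
            qml.RandomLayers(weights, wires=list(range(n_qubits)), seed=seed)
            return [qml.expval(qml.PauliZ(w)) for w in range(out_channels)]

        def quanv(image):
            h, w = image.shape
            out = np.empty((h - kernel_size + 1, w - kernel_size + 1, out_channels))
            for i in range(out.shape[0]):
                for j in range(out.shape[1]):
                    patch = image[i:i + kernel_size, j:j + kernel_size].flatten()
                    out[i, j] = np.array(circuit(patch))
            return out

        return quanv

    quanv = make_quanv(kernel_size=3, out_channels=4)
    features = quanv(np.random.rand(10, 10))  # -> shape (8, 8, 4)
    ```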

    Source code:

    Our GitHub Repository: QuobileNet

    Resource Estimate:

    We have a hybrid model that costs too much time and too many resources to train on our current hardware. Therefore, we plan to train a 30-qubit QCNN hybrid model using the Floq service. We plan to use the AWS service to test the quality of our results by comparing the inference performance of our QuobileNet with that of the classic RetinaNet (+ MobileNetV2 backbone). The resource estimates for inference are as follows:

    Inference with QCNN: kernel size 3×3; input image 10×10; number of executions per QCNN layer: (10−3+1)² = 64; number of input images: 50; cost of a 30-qubit circuit execution with 1,000 shots (Aspen-9): $0.35 + $0.30 = $0.65; cost per QCNN layer: 64 × 50 × $0.65 = $2,080.

    We can afford two QCNN layers, which add up to $4,160 in total. We haven't used the initial $250 credit yet, as we planned to use it for our final model. With the $4,000 bonus credit we will be able to test our model.

    Good luck to everyone!

    Power Up 
    opened by RKHashmani 7
  • [Power Up] Telling quantum DoQs and quantum Qats apart


    Team Name:

    Quant'ronauts

    Project Description:

    Idea: we classify regions of the Hilbert space of quantum states of n qubits. There are 2 categories, "Qat" and "DoQ". As an example, for n=1, one hemisphere of the Bloch sphere could be labelled "Qat" and the other hemisphere "DoQ". The state vectors to classify are generated as the output of a sensor, which is then fed into a classifier circuit of M layers. Note that we are NOT classifying the sensor's classical parameter vector, as we could use any other sensor with a different parameterization as long as it's capable of producing Qat and DoQ states. Also, we take the sensor as-is; we don't try to "optimize" it.

    Catch: during operation, the sensor can only produce its output once. Thus, when we calculate the accuracy on the test set, we are not allowed to make use of expectation values resulting from many shots: there is only 1 shot. (In the training phase, we can optimize using expectation values, as training is done in our laboratory where we can recreate the sensor outputs of the training set at will.) We'd like to explore how much the accuracy drops due to this 1-shot limitation, whether it differs between a simulator and real quantum hardware, and what kind of cost function would reduce this impact.


    Extra: if multiple shots are allowed, how much would a data re-uploading scheme improve the accuracy? E.g. imagine there are M identical sensors located very close to each other. When a certain physical event happens, it sets all the parameters of the M sensors at once, identically for each sensor. Then, the parameters don't change until the next event. Furthermore, there may be exponentially many parameters of the sensor, inaccessible to us. So again, we are classifying quantum states.

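    A minimal sketch of the 1-shot constraint (our own illustration, with a hypothetical one-parameter "sensor" and a single-qubit classifier): the same circuit is evaluated analytically for training and with a single shot for testing.

    ```python
    import pennylane as qml
    from pennylane import numpy as np

    def make_classifier(shots):
        dev = qml.device("default.qubit", wires=1, shots=shots)

        @qml.qnode(dev)
        def classify(weights, sensor_param):
            qml.RY(sensor_param, wires=0)  # stands in for the sensor's output state
            qml.Rot(*weights, wires=0)     # one classifier layer
            # Single shot during operation; analytic expectation during training.
            return qml.sample(qml.PauliZ(0)) if shots == 1 else qml.expval(qml.PauliZ(0))

        return classify

    weights = np.array([0.3, 1.2, -0.4])
    train_eval = make_classifier(shots=None)  # analytic expectation for training
    test_eval = make_classifier(shots=1)      # single shot, as during operation
    print(train_eval(weights, 0.8), test_eval(weights, 0.8))
    ```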

    Source code:

    https://github.com/mickahell/qhack21

    Resource Estimate:

    We need the extra credit to see in more detail how the use of real quantum hardware influences the accuracy of the classifiers, as well as the accuracy gap between the different options mentioned above.

    After simulation, we plan to try 4 candidate circuits, of 1, 2, 5, and 10 wires, respectively, using a Rigetti device. We'll use gradient descent for training, with 50 steps in batches of 10, calculating expectation values from 30 shots. So, assuming an average of 60 variables per circuit to optimize, 1 training session will require 50 × 10 × 30 × 60 × 2 = 1,800,000 shots (the ×2 is due to the parameter-shift rule). There will be 4 different systems to train, so a total of 4 × 1,800,000 = 7,200,000 shots for training.

    Our test set has 200 items. For each of the 4 circuits, we'll compare 2 options: one using expectation values from 30 shots, and one using only a single shot. So the total number of shots required is 4 × 200 × 30 + 4 × 200 = 24,800.

    This estimation still has a buffer for the case when the simulation phase makes us change some of the figures, and/or if we want to try the IonQ device as well.

    Alternatively, we might train the circuits locally and do ONLY the testing phase on real quantum hardware; that would enable us to try many more than 4 already-trained circuits.

    Power Up 
    opened by muttley2k 5
  • [Power Up] Quantum enhanced convolutional filter


    Team Name:

    CCH

    Project Description:

    The emerging field of hybrid quantum-classical algorithms joins CPUs and QPUs to speed-up/improve specific calculations within a classical algorithm. This allows for shorter quantum executions that are less susceptible to the cumulative effects of noise and that run well on today’s devices. This is why we intend to explore the performance of a hybrid convolutional neural network model that incorporates a trainable quantum layer, effectively replacing a convolutional filter, in both quantum simulators and QPU.

    Our team proposes to design a trainable quantum convolutional filter in a quantum-classical hybrid neural network, appealing for the NISQ era, inspired by these papers: Hybrid quantum-classical Convolutional Neural Networks [1] and Quanvolutional Neural Networks [2], but generalizing these previous works to use cloud-based QPUs.

    Here is a list of the expected outcomes and questions this project aims to address:

    • Complete benchmarking of a quantum convolutional filter (Encoding of data + variational ansatz) embedded in a classical neural network, in the context of an image classification task with the MNIST dataset.

    • Example of a complete workflow for training a quantum-classical CNN, interfacing PennyLane with TensorFlow/PyTorch for automatic differentiation of the quantum and classical layers, and Amazon Braket for running the workflow on a QPU (a minimal interface sketch follows this list).

    • With the current noise level in cloud-based QPUs, what size/depth of parametrized quantum circuit is expressive enough without its performance being buried by noise? Can we achieve a significant advantage (in terms of evaluation metrics for a fixed number of quantum vs. classical parameters/weights) with today's QPUs?

    • Visual exploration of convolved features (outputs of filters) with both quantum and classical convolutional filters.
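
    As a minimal sketch of the PennyLane/PyTorch interface mentioned in the second item above (our own illustration under assumed circuit choices, not the project's code):

    ```python
    import torch
    import pennylane as qml

    n_qubits = 4
    dev = qml.device("default.qubit", wires=n_qubits)

    @qml.qnode(dev)
    def qfilter(inputs, weights):
        qml.AngleEmbedding(inputs, wires=range(n_qubits))
        qml.BasicEntanglerLayers(weights, wires=range(n_qubits))
        return [qml.expval(qml.PauliZ(w)) for w in range(n_qubits)]

    weight_shapes = {"weights": (2, n_qubits)}  # 2 variational layers
    qlayer = qml.qnn.TorchLayer(qfilter, weight_shapes)

    model = torch.nn.Sequential(
        qlayer,                       # quantum "filter" acting on a 4-pixel patch
        torch.nn.Linear(n_qubits, 2)  # classical classification head
    )
    out = model(torch.rand(8, n_qubits))  # batch of 8 patches, differentiable end to end
    ```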

    Source code:

    https://github.com/KetpuntoG/QFilters/blob/main/Qfilter4_enhanced%20(1).ipynb

    Resource Estimate:

    There are a few bottlenecks to explore in quantum-classical hybrid models: the number of learnable parameters in the ansatz is tied to the depth of the quantum circuits, and the number of convolutions grows with image size. The quantum filters will need qubit registers in the range of 9 to 30 qubits (equivalent to an N×N kernel window; 3×3 and 5×5 are typical sizes in CNNs). Mainly, shallow quantum circuits will be executed on both simulator and hardware backends (LocalSimulator and the Rigetti QPU should be good enough) with a reasonable number of shots; many quantum computations will be performed during training if the number of epochs and the dataset size are large. For the multiple translations of the kernel around the image, we expect to parallelize this workload on Amazon Braket during the training phase to speed it up. Another aspect is to keep the classical layers not too deep, to allow for efficient classical training. We also aim to run multiple benchmarks, such as the trade-off between the number of epochs and accuracy, the complexity/expressive power of the ansatz versus accuracy, the number of quantum vs. classical parameters, and a time-complexity benchmark of the hybrid training loop.

    References

    [1] https://arxiv.org/abs/1911.02998 [2] https://arxiv.org/abs/1904.04767

    Power Up 
    opened by RicardoGaGu 4
  • [Power Up] Quantum Spectral Graph Convolutional Neural Networks


    Team Name:

    QUACKQUACKQUACK

    Project Description:

    Over recent years, a large influx of interest has been observed in classical machine learning regarding the research into and usage of Graph Neural Networks (GNNs). Part of the reason for this interest is their innate ability to model vast physical phenomena through pair-wise interactions between the elements of a system. Similarly, interest in quantum machine learning models is increasing, as such architectures can leverage the computational efficiency of quantum computers and offer problem-tailored solutions by handcrafting ansätze guided by physical interactions. Consequently, we believe that combining these separate ideas will offer mutual benefits, improve model performance, and advance research in both fields. Seeing how GNNs are used to solve combinatorial tasks (e.g., Combinatorial optimisation and reasoning with graph neural networks by Cappart et al., included in workshops such as "Deep Learning and Combinatorial Optimisation" held at IPAM UCLA), we would argue that it is the right time to start thinking more about Quantum Graph Neural Networks (QGNNs).

    We propose to implement Quantum Spectral Graph Convolutional Neural Networks (QSGCNNs) as described in Verdon et al. We are planning to use the PennyLane documentation on Quantum Graph Recurrent Neural Networks (QGRNNs) as a guideline, and we will replace the RNN layer with a spectral convolutional layer. In particular, we want to perform unsupervised graph clustering as described in Verdon et al. We specifically want to compare the performance and inference speed between classical GNN models and their quantum counterparts on simple datasets, such as the one in Verdon et al. or k-core-distilled popular GNN benchmark datasets (e.g., Cora or Citeseer). This would primarily include the most popular and basic models based on SGCNNs and, as a stretch goal, also GraphSAGE. The results would then be compared with standard graph partitioning algorithms.

    Source code:

    https://github.com/bossemel/QHack_Project/tree/main

    Resource Estimate:

    We expect that the clustering performed by these models on these very small datasets won’t be insightful enough. Therefore, in order to obtain meaningful results from this experiment, we will need to train quantum models on graphs with a reasonable number of vertices and edges. We observe that the number of qubits required in the ansatz for QGNNs scales linearly with the number of vertices of the graph, and consequently it would be infeasible for us to demonstrate a meaningful application of the QGSCNN without using either a high-tech simulator, or by using an actual quantum device. Therefore, while we are unable to provide explicit costing projections at this time, we can say with certainty that having access to AWS credits will allow us to produce a much more impactful project on this topic.

    Power Up 
    opened by DanielPolatajko 4
  • Investigating the effects of quantum layers in machine learning by building a custom PennyLane wrapper.


    Team Name:

    Cabriella

    Project Description:

    Here we investigate how adding quantum layers to a machine learning model affects its results. To do this, we employ a custom-made Python library that builds on PennyLane.

    The aim of this library is to make quantum machine learning easier by removing the need to hand-code hardware details such as the circuit, device, QNode, etc.; our library configures these automatically according to the input. Our motivation: classical machine learning practitioners don't have to think about hardware, so why should quantum machine learners?

    Equipped with this library, we will be able to efficiently test different types of quantum models to understand how the results are affected.

    Source code:

    https://github.com/SaadNaeem96/QHack-2021-by-XanaduAI/tree/main/Hackathon

    Resource Estimate:

    The method suggested includes 1) building the Python library and 2) investigating how quantum layers change classical machine learning results.

    Our usage of resources will include:

    1. For 1), we will study 500 existing kinds of circuits for each type of quantum machine learning model (CNN, ANN, decision tree, LSTM, etc.).
    2. For 2), we will test 2,000 different types of classical machine learning results, adding a variable number of quantum machine learning layers/nodes.

    We intend to use the power-up prize to further investigate the algorithms and try different approaches to increase the accuracy of our model using simulators and quantum hardware provided by AWS.

    1. Tensor-network-simulator-based training (Floq TPU / Braket TN1 simulator).
    2. Training on Braket SV1.
    Power Up 
    opened by SaadNaeem96 4
  • [Power Up] DQN with Quantum Variational Circuits


    Team Name:

    DAC

    Project Description:

    The algorithm implements the pseudocode described in Reinforcement Learning With Quantum Variational Circuits: a reinforcement learning algorithm based on a quantum DNN, more specifically DQN. The network architecture is the same as the one used in the paper, and the algorithm follows the classical DQN structure, using PennyLane to build and measure our qubits. Training is done in the Blackjack environment for simplicity, but we would like to extend that to more challenging environments.
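
    A minimal sketch of such a variational Q-network for Blackjack (our own illustration, not the paper's exact architecture; the encoding and entangler choices are assumptions):

    ```python
    import pennylane as qml
    from pennylane import numpy as np

    n_qubits = 3  # one qubit per Blackjack state variable (assumption)
    dev = qml.device("default.qubit", wires=n_qubits)

    @qml.qnode(dev)
    def q_values(weights, state):
        qml.AngleEmbedding(state, wires=range(n_qubits))
        qml.StronglyEntanglingLayers(weights, wires=range(n_qubits))
        # Two measured expectations play the role of Q(s, hit) and Q(s, stick).
        return [qml.expval(qml.PauliZ(0)), qml.expval(qml.PauliZ(1))]

    shape = qml.StronglyEntanglingLayers.shape(n_layers=2, n_wires=n_qubits)
    weights = np.random.uniform(0, 2 * np.pi, shape, requires_grad=True)
    state = np.array([0.5, 0.2, 1.0])  # e.g. normalized (player sum, dealer card, usable ace)
    action = int(np.argmax(np.array(q_values(weights, state))))  # greedy action
    ```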

    Source code:

    https://github.com/carlosamds/qhack-dac

    Resource Estimate:

    If awarded the additional AWS credits, we intend to run many more tests of the circuit architecture and try harder Gym environments. Using the Amazon Braket services, more of those tests would be possible, and they could be finished much more quickly.

    Power Up 
    opened by carlosamds 3
  • [Power Up] QNN-for-Thermodynamic-correlation


    Team Name:

    Coherence

    Project Description:

    This work proposes a quantum-neural-network-based methodology to estimate the frictional pressure drop during boiling in mini-channels of non-azeotropic mixtures including nitrogen, methane, ethane, and propane. The methodology can assist in the thermal analysis or design of heat exchangers used in cryogenic applications. The architecture of the proposed model takes the local quality, roughness, mass flux, and Reynolds number as inputs and the frictional pressure drop as output. It will be compared with a paper in which my colleagues and I used the same data to create an ANN-based correlation for pressure drop estimation in microchannels [1].

    [1] Barroso-Maldonado, J. M., Montañez-Barrera, J. A., Belman-Flores, J. M., and Aceves, S. M. (2019). ANN-based correlation for frictional pressure drop of non-azeotropic mixtures during cryogenic forced boiling. Applied Thermal Engineering, 149(August 2018), 492-501. https://doi.org/10.1016/j.applthermaleng.2018.12.082

    Source code:

    https://github.com/alejomonbar/QNN-for-Thermodynamic-correlation

    Resource Estimate:

    My project consists of two stages. The first is to explore different circuit configurations to encode the inputs and the number of parameters used to determine the pressure drop correlation; I need to run at least 20 different configurations, which should be about 2 hours of SV1 ($9). Second, I would like to see how noise affects the model, so I need to use one of the quantum devices available in Braket. Training on hardware is impractical because I have almost 5,000 training samples. Therefore, I'm going to use the test set of 693 samples to compare the solution from one of the quantum devices against the ideal pressure drop once the parameters are optimized. This means (693 tasks × 100 shots × $0.01/shot) + (693 tasks × $0.30/task) = $693 + $207.90 = $900.90.

    My first training gives me better results (8% error) than those we obtained in the paper presented above (9-9.5% error). This gives me the intuition that with the correct layer configuration we can outperform the ANN results.

    Power Up 
    opened by alejomonbar 3
  • [Power Up] [ENTRY] Qountry Songs


    Team Name:

    QUANTIFY

    Project Description:

    The diagrammatic approach to quantum computation pioneered in [1,2] has been extended to quantum circuit compilation and optimisation [3]. The latter has been successfully applied to QNLP on NISQ machines [4,7] instead of Grover-like, QRAM-based approaches [5]. It has been shown that QAOA methods are approximators of universal computations such as the ones expressed in the ZX calculus [6]. There is a similarity between the QAOA exponentiated ZZ gates and parameterised Ry gates and the trained circuits from [4] ("language diagrams into quantum circuits with phase-gates and CNOT-gates"). QNLP as a closest-vector problem [4,5] shares some similarities with skip-grams and the word2vec model.

    We investigate the applicability of QNLP using QAOA (implemented in PennyLane) to verify the theory from [4]. We use a somewhat reverse approach to the one from [4]: instead of starting from language diagrams, we start from skip-grams and train context using windows of two words extracted from sentences (see the sketch after the sample song below).

    We generate country songs using the trained models. Country songs are good candidates because they repeat somewhat straightforward concepts: the corpus includes a lot of redundancy and many contexts in which the words appear.

    Our trained model will reflect the original language diagram of the corpus we started from. The semantics is embedded in the trained QAOA weights: the strengths of the ZZ and Ry gates encode the grammatical relations.

    The feasibility of our QNLP approach is tested, for the moment, using [8]. The model can predict a corpus of 31 words with an accuracy of 65% after 200 training rounds (10 minutes) and using only 28 variables. A corpus of 84 words (61 unique) achieves 45% accuracy after 60 minutes of training. We use Google Colab. Below is a sample song ("She kicks") from the latter experiment (we added the punctuation).

    Work bartender knows week
    Same mind … she kicks. Don’t,
    Same work end of
    Dive name, but I … My … 
     
    The bartender knows week drink,
    My she kick. Don't mind...
    Dive, but I, my name
    The/she kicks … Don’t
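
    For concreteness, a minimal sketch of the two-word skip-gram windowing we start from (our own illustration):

    ```python
    # Extract adjacent word pairs (windows of two words) from a sentence.
    def skip_grams(sentence, window=2):
        words = sentence.lower().split()
        return [tuple(words[i:i + window]) for i in range(len(words) - window + 1)]

    print(skip_grams("the bartender knows my name"))
    # [('the', 'bartender'), ('bartender', 'knows'), ('knows', 'my'), ('my', 'name')]
    ```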
    
    1. Abramsky S, Coecke B. A categorical semantics of quantum protocols. In Proceedings of the 19th Annual IEEE Symposium on Logic in Computer Science, 2004. 2004 Jul 17 (pp. 415-425). IEEE.
    2. Abramsky S. Petri nets, discrete physics, and distributed quantum computation. In Concurrency, Graphs and Models 2008 (pp. 527-543). Springer, Berlin, Heidelberg.
    3. van de Wetering J. ZX-calculus for the working quantum computer scientist. arXiv preprint arXiv:2012.13966. 2020 Dec 27.
    4. Coecke B, de Felice G, Meichanetzidis K, Toumi A. Foundations for Near-Term Quantum Natural Language Processing. arXiv preprint arXiv:2012.03755. 2020 Dec 7.
    5. Zeng W, Coecke B. Quantum algorithms for compositional natural language processing. arXiv preprint arXiv:1608.01406. 2016 Aug 4.
    6. Lloyd S. Quantum approximate optimization is computationally universal. arXiv preprint arXiv:1812.11075. 2018 Dec 28.
    7. QNLP 2019 videos on YouTube, e.g. https://www.youtube.com/watch?v=Osu2SPtCvfU
    8. https://www.azlyrics.com/lyrics/jonpardi/heartachemedication.html

    Source code:

    https://github.com/oumjunior/Qountry-songs

    Resource Estimate:

    We would greatly benefit from the PowerUp, as it would enable us to implement the training in a scalable manner.

    • We expect the performance of the model to increase with larger context. For the moment we use vectors spanning multiple qubits, so a 30-qubit simulator would be useful. E.g., with window size 3 and 512 words, 27 qubits are required.

    • Reduce inference cost: use circuit identities to simplify the QAOA circuit, potentially using a) pyZX for ZX simplification and b) the in-house QUANTIFY tool for brute-forcing circuit identities.

    • Learn a full song and compose a completely new song

    We will use both the IonQ machine (good connectivity) for small QAOAs, and classical simulators (SV1 and TN1) with noise for models that are too large. We estimate that $500 would be sufficient for classical simulation purposes. Training with the IonQ QPU would cost around $50 once we have converged on the architecture of the circuit; around $250 for all the experiments and failures should be sufficient. Writing the song would cost another $100 due to the number of repetitions and shots. The previous estimate is a minimum: the more free credits we receive, the bigger the circuits and songs we will try.

    Power Up 
    opened by oumjunior 3
  • [Power Up] [submission] Quantum-Aided Medical Image Diagnosis


    Team Name:

    qt

    Project Description:

    Quantum-Aided Medical Image Diagnosis

    Objective

    Invasive ductal carcinoma (IDC) is, with ~80% of cases, one of the most common types of breast cancer. It is malignant and able to form metastases, which makes it especially dangerous. Often a biopsy is done to remove small tissue samples. Then a pathologist has to decide whether a patient has IDC, another type of breast cancer, or is healthy. In addition, sick cells need to be located to find out how advanced the disease is and which grade should be assigned. This has to be done manually and is a time-consuming process. Furthermore, the decision depends on the expertise of the pathologist and his or her equipment. Here, I'm proposing to use a Quantum Genetic Algorithm (QGA) and Support Vector Machines (SVMs). I hope this method will give effective results compared to some of the standard approaches. This way one would be able to overcome the dependence on the pathologist, which would be especially useful in regions where no experts are available. Also, after classifying images using the QGA and SVMs, I will use Quanvolutional Neural Networks (QNNs) or a hybrid quantum-classical model, which may have an advantage over the classical approach, and make a comparative analysis with standard approaches like convolutional neural networks (CNNs). Note: finally, I will test this approach on different quantum devices and simulators and come up with final results.

    Dataset:

    Context

    Invasive Ductal Carcinoma (IDC) is the most common subtype of all breast cancers. To assign an aggressiveness grade to a whole mount sample, pathologists typically focus on the regions which contain the IDC. As a result, one of the common pre-processing steps for automatic aggressiveness grading is to delineate the exact regions of IDC inside of a whole-mount slide.

    Content

    The original dataset consisted of 162 whole mount slide images of breast cancer (BCa) specimens scanned at 40x. From these, 277,524 patches of size 50 x 50 were extracted (198,738 IDC-negative and 78,786 IDC-positive). Each patch's file name is of the format u_xX_yY_classC.png, e.g., 10253_idx5_x1351_y1101_class0.png, where u is the patient ID (10253_idx5), X is the x-coordinate of where the patch was cropped from, Y is the y-coordinate of where the patch was cropped from, and C indicates the class, with 0 being non-IDC and 1 being IDC.
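
    A hypothetical helper for parsing these patch file names (our own illustration; the pattern follows the format described above):

    ```python
    import re

    # Matches names like "10253_idx5_x1351_y1101_class0.png".
    PATTERN = re.compile(r"(?P<u>.+)_x(?P<x>\d+)_y(?P<y>\d+)_class(?P<c>[01])\.png")

    def parse_patch_name(fname):
        m = PATTERN.match(fname)
        return m["u"], int(m["x"]), int(m["y"]), int(m["c"])

    print(parse_patch_name("10253_idx5_x1351_y1101_class0.png"))
    # ('10253_idx5', 1351, 1101, 0)
    ```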

    Source code:

    Code

    References:

    • https://www.researchgate.net/publication/342391570_Quantum_neural_network_for_quicker_clinical_prognostic_analysis_An_application_and_experimental_study_using_CT_scan_images_of_COVID-19_patients
    • https://deepai.org/publication/quantum-medical-imaging-algorithms
    • https://link.springer.com/article/10.1007/s00521-020-05518-x
    • https://arxiv.org/abs/2011.02831
    • https://pennylane.ai/qml/demos/tutorial_quanvolution.html
    • https://pennylane.ai/qml/demos/tutorial_multiclass_classification.html

    Resource Estimate:

    We intend to use the power-up prize to further investigate the algorithms and try different approaches to increase the accuracy of our model using simulators and quantum hardware provided by AWS

    • Original dataset size: 278k

    • Number of records considered in the base model: 5k

    • Number of shots: 500

    • Number of iterations: 2 or 3

    Cost Estimation:

    Rough cost breakup of $250 USD (received as a top-40 team) + $4,000 USD (Power Up, if granted):

    | Hardware | Estimated Cost |
    | ------- | ------- |
    | D-Wave | $950 = 5000 × 500 × 2 × $0.00019 |
    | Rigetti | $2,625 = 5000 × 500 × 3 × $0.00035 |
    | Simulation | $675 |

    First, design and test the model on the simulator, for which I'm taking a rough estimate of around $650+; after that, run the code on quantum devices to get actual results and compare. Note: this is just a rough estimate; the actual cost may increase or decrease based on usage.

    Future Work:

    I'm pursuing my MS, so I will take forward this research as my final dissertation and will:

    • Develop a novel Quantum Algorithm that can be used for medical image diagnosis and test the results with different datasets available publicly
    • Optimize the circuit and see which approach works better with which set of datasets and circuit combination
    • Design API and web app to provide services in healthcare image analysis
    Power Up 
    opened by techwithshadab 3
  • Sound-Classification-using-Quanvolutional-Neural-Networks


    Team Name: Two Bits in a Box

    Project Description: Sound classification is one of the popular topics in the classical machine learning literature, e.g., [1], [2]. One common method is applying CNNs to spectrograms of the sound samples. Nevertheless, we couldn't find similar applications in the quantum machine learning literature.

    In this project we aim to use Quanvolutional Neural Networks to classify sound using this kaggle dataset. We will mainly compare the performance of the Quanvolutional Neural Networks to the equivalent classical CNN implementation, and explore techniques in the Quantum Machine Learning literature that can enhance the existing classical ML techniques.

    Source code: https://github.com/heba0/Sound-Classification-using-Quanvolutional-Neural-Networks

    Resource Estimate: The AWS credit will help us experiment better with quantum computing resources. Our model will use around 3 layers with 3×3 kernels → 3×3×3 = 27 qubits per task. We would like to use the Rigetti device with 2,000 shots. The training and testing datasets have around 9,700 samples (which can be subsampled into smaller datasets).

    References: [1] Jaiswal, K. and Kalpeshbhai Patel, D., 2018. Sound Classification Using Convolutional Neural Networks. 2018 IEEE International Conference on Cloud Computing in Emerging Markets (CCEM).

    [2] Davis, N. and Suresh, K., 2018. Environmental Sound Classification Using Deep Convolutional Neural Networks and Data Augmentation. 2018 IEEE Recent Advances in Intelligent Computational Systems (RAICS).


    Presentation:

    1. https://github.com/heba0/Sound-Classification-using-Quanvolutional-Neural-Networks/blob/master/Dataset%20Info%20%26%20Preparation.ipynb

    2. https://github.com/heba0/Sound-Classification-using-Quanvolutional-Neural-Networks/blob/master/Preprocessing.ipynb


    submission 
    opened by Abou-el-ela 1
  • [ENTRY][Floq] Performance Evaluation of Hybrid Quantum-Classical Object Detection Network


    Team Name:

    QuantumTunnelers

    Project Description:

    Our project aims to create a hybrid model of popular object detection networks. Primarily, we are focusing on RetinaNet with a MobileNet (and possibly ResNet-18) feature extraction backbone. Our goal is to introduce quantum layers and measure various performance statistics such as mean Average Precision (mAP) and the number of epochs taken to reach a comparable Loss value.

    The main layer we are focusing on is the convolutional layer. Using a modification of both the original quanvolutional layer model introduced in Henderson et al. (2019) and the demo found on PennyLane, we custom built a quantum convolutional layer that takes in any kernel size and output layer depth as parameters, automatically determines the correct number of qubits needed, and outputs the appropriate feature map using a quantum circuit as its base.

    We plan to replace key convolutional layers within RetinaNet with our custom quanvolutional layer and measure the aforementioned performance statistics. We hope to see improvement within the statistics and hope to extend this project to other popular networks after this Hackathon.

    Updates

    Our initial plan was to modify MobileNetV2 to strike a balance between accuracy and the number of qubits required, using Floq as a way to speed up the quanvolution when the number of qubits was within Floq's allowed range. This proved more difficult than anticipated to accomplish within 2 days, since a number of hyperparameters (kernel sizes, feature map input and output depths for convolutions, etc.) had to be modified to find a good balance. Unfortunately, due to sudden personal delays, we were unable to devote most of the last 2 days to this project.

    However, we managed to create a working MobileNetV2-based Hybrid Quantum-Classical feature extraction backbone with an easy-to-use support for Floq. Our limited time due to the personal delays prevented us from training and evaluating this hybrid model. Instead, we decided to focus on our own MNIST-focused feature extractor (QuanvNet) and applied a classification head on top (2 fully-connected layers).

    We tweaked our quanvolutional layer code to fix a bug where quanvolution layers using a quantum circuit with a single layer, dubbed quantum-1, were outperforming similar layers using a quantum circuit with double layers, dubbed quantum-2. In addition, we added support for the automatic use of Floq if an API-key is sent as an argument.

    Future

    We fully intend to continue working on this project and hope to create a hybrid version of an established backbone in order to compare with literature. We also intend to continue to modify our version of the quanvolutional layer in order to better understand what type of quantum circuit would lead to the most improvement over the classical version. If you are interested in working with us, please do not hesitate to reach out, and for more information, please visit our GitHub repository.

    Thank you very much!

    Presentation:

    Please visit our GitHub repository for more details about our project, validation results, and instructions on how to run our code.

    Source code:

    Our GitHub Repository: QuobileNet

    submission floq 
    opened by RKHashmani 1
  • [ENTRY] Quantum enhanced convolutional filter


    Team Name:

    CCH

    Project Description:

    The emerging field of hybrid quantum-classical algorithms joins CPUs and QPUs to speed-up/improve specific calculations within a classical algorithm. This allows for shorter quantum executions that are less susceptible to the cumulative effects of noise and that run well on today’s devices. This is why we intend to explore the performance of a hybrid convolutional neural network model that incorporates a trainable quantum layer, effectively replacing a convolutional filter, in both quantum simulators and QPU.

    Our team proposes to design a trainable quantum convolutional filter in a quantum-classical hybrid neural network, appealing for the NISQ era, inspired by these papers: Hybrid quantum-classical Convolutional Neural Networks [1] and Quanvolutional Neural Networks [2] , but generalizing these previous works to use cloud based QPU.

    Here is a list of the expected outcomes and questions this project aims to address:

    • Complete benchmarking of a quantum convolutional filter (encoding of data + variational ansatz) embedded in a classical neural network, in the context of an image classification task with the MNIST dataset.

    • Example of a complete workflow for training a quantum-classical CNN, interfacing PennyLane with TensorFlow/PyTorch for automatic differentiation of the quantum and classical layers, and Amazon Braket for running the workflow on a QPU.

    • With the current noise level in cloud-based QPUs, what size/depth of parametrized quantum circuit is expressive enough without its performance being buried by noise? Can we achieve a significant advantage (in terms of evaluation metrics for a fixed number of quantum vs. classical parameters/weights) with today's QPUs?

    • Visual exploration of convolved features (outputs of filters) with both quantum and classical convolutional filters.

    Presentation:

    https://github.com/KetpuntoG/QFilters

    Source code:

    https://github.com/KetpuntoG/QFilters

    submission 
    opened by RicardoGaGu 1
  • [ENTRY] Feeding many trolls


    Team Name:

    Team qumulus nimbus (Praveen Jayakumar, [email protected])

    Project Description:

    We provide a PennyLane implementation of a single-qubit universal quantum classifier similar to that presented in [1] and [2]. We then provide an efficient method to process classical data in parallel using a QRAM setup for the universal single-qubit classifier.

    We then attempt data re-uploading classifiers on quantum data, for experiments where we have copies of the quantum state, and show their performance. We observe that this does not help improve the performance of the classifier.

    We use the universal quantum classifier method described earlier and the fidelity-based measurement strategies described in [1] to demonstrate a method of quantum music learning and generation by recasting the classifier into a Markov-chain-like setup. We provide implementations for 7-note scales, 12-note full octaves, and 5-note pentatonic scales. This method can be extended to different mappings too, which is left for future work.

    Presentation:

    Jupyter notebook

    Source code:

    GitHub Repository

    submission 
    opened by Praveen91299 1
  • [ENTRY] Event Classification with Layer-wise Learning for Data Re-uploading Classifier in High Energy Physics


    Team name:

    Entangled_Nets

    @eraraya-ricardo @VoicuTomut @T0gan

    Project Description:

    The large experiments conducted in the field of particle physics require the detection and analysis of data produced in particle collisions at high-energy accelerators such as the LHC. In these experiments, particles created by collisions are observed by layers of high-precision detectors surrounding the collision points, which produce large amounts of data about the collision. This has motivated the use of "classical" machine learning techniques in several areas to improve the performance of the analysis.

    Accordingly, these techniques are also being adapted to quantum computing, e.g., unfolding measurement distributions via quantum annealing. Intending to take advantage of both fields, we will use techniques from quantum machine learning, which is considered one of the areas of quantum computing that could bring quantum advantages over classical methods. We use QML algorithms developed for gate-based quantum hardware, in particular algorithms based on variational quantum circuits. In variational quantum circuits, the classical data input is encoded into quantum states, and a QPU is used to prepare and measure quantum states that vary with some parameters, exploiting a complex Hilbert space that grows exponentially with the number of qubits.

    This project aims to use the method of data re-uploading, where qubits are used as quantum classifiers to classify a certain dataset with high accuracy, together with a parametrized quantum circuit whose parameters are used to construct a cost function that is minimized classically. For our model, the SUSY dataset will be considered.

    Project presentation:

    Video presentation

    Jupyter notebook

    Final source code:

    GitHub repo

    submission 
    opened by t0gan 1
  • [ENTRY] Image Generation and Distribution Loading with Quantum Generative Adversarial Networks


    Team Name:

    Penn Ave Fish Company

    Project Description:

    Introduction

    A prerequisite for quantum algorithms to outperform their classical counterparts lies in the ability to efficiently load the classical input of the algorithms into quantum states. However, preparing a generic quantum state exactly requires O(2^n) gates [1], which can impede the power of exponential-speedup quantum algorithms before they come into play. For practical purposes, Quantum Machine Learning (QML) can be adopted to approximate the desired loading channel via training. The Quantum Generative Adversarial Network (qGAN) in particular has shown great promise in accomplishing the task with O(poly(n)) gates [2]. Similar to its classical counterpart, a qGAN consists of both a generator for synthesizing data to match the real data and a discriminator for discerning real data from the product of the generator. The difference between the two is that a qGAN uses a quantum generator to approximate the quantum state, and the discriminator can be either classical or quantum depending on whether the input data is classical or quantum [3]. Generally, the qGAN trains its generator and discriminator alternately in the form of a zero-sum game, and ends the training when the relative entropy (i.e., the difference between the real data and the synthesized one, one measure of the training performance) converges to ~0 [2].

    For our project, we aim to demonstrate the efficient loading of multi-dimensional classical distribution using qGAN with a classical discriminator. To better present our result and offer a potential generalization of our project, we choose images with multi-dimensional features as our classical datasets. The distributions of such images can be engineered into the multi-dimensional distributions of multiple qubit states given an explicit formulation. Meanwhile, with the demonstration of image recognition, our work also explores the power of qGAN for real-world learning problems.

    Highlights

    In the implementation, we use PennyLane to construct the quantum circuit for the task. The classical discriminator is trained using Keras layers on TensorFlow. We successfully demonstrate reliable learning of images with multimodal distributions. It is well documented that arbitrary images and distributions with high-frequency features are difficult to train. For these instances, we devised a remapping routine that utilizes an array automorphism to simplify the target distribution to a unimodal one. Compared to the state-of-the-art work [4] on qGAN for image generation, our method shows significant improvement in parameter complexity, circuit depth and training time. Under remapping, when configured with 8 qubits, 16 parameters and a circuit depth of 5 and trained on 16x16 MNIST images, our model is able to reduce the cross-entropy between generated distribution and target distribution to below 1.5x10^{-2}.
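
    For illustration, a minimal qGAN generator sketch (our own, greatly simplified; it matches the quoted 8-qubit, 16-parameter configuration but is not the team's circuit). The generator's measured probability distribution over basis states is what the classical discriminator compares against the real data:

    ```python
    import pennylane as qml
    from pennylane import numpy as np

    n_qubits = 8  # as in the configuration quoted above
    dev = qml.device("default.qubit", wires=n_qubits)

    @qml.qnode(dev)
    def generator(weights):
        # First layer of trainable single-qubit rotations.
        for w in range(n_qubits):
            qml.RY(weights[0, w], wires=w)
        # Shallow entangling layer.
        for w in range(n_qubits - 1):
            qml.CNOT(wires=[w, w + 1])
        # Second layer of trainable rotations: 2 x 8 = 16 parameters total.
        for w in range(n_qubits):
            qml.RY(weights[1, w], wires=w)
        return qml.probs(wires=range(n_qubits))  # 2^8 = 256 outcome probabilities

    weights = np.random.uniform(0, np.pi, (2, n_qubits), requires_grad=True)
    fake_distribution = generator(weights)  # fed to the classical discriminator
    ```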

    Presentation:

    https://allenator.github.io/quantum-gan-image-generation/

    Source code:

    https://github.com/Allenator/quantum-gan-image-generation

    References

    [1] Grover, L. K. Synthesis of quantum superpositions by quantum computation. Phys. Rev. Lett. 85, 1334–1337 (2000).
    [2] Zoufal, C., Lucchi, A. & Woerner, S. Quantum Generative Adversarial Networks for learning and loading random distributions. npj Quantum Inf 5, 103 (2019).
    [3] PennyLane dev team, Quantum Generative Adversarial Networks with Cirq + TensorFlow (2021).
    [4] H. Huang et al., Experimental Quantum Generative Adversarial Networks for Image Generation. arxiv-preprint, (2020).

    submission floq 
    opened by Allenator 1