A PyTorch-centric hybrid classical-quantum machine learning framework

Overview

torchquantum

A PyTorch-centric framework for hybrid classical-quantum dynamic neural networks.

MIT License

News

  • Added a simple example script that uses quantum gates to perform MNIST classification.
  • v0.0.1 is available. Feedback is highly welcome!

Installation

git clone https://github.com/Hanrui-Wang/pytorch-quantum.git
cd pytorch-quantum
pip install --editable .

Usage

Constructing a quantum NN model is as simple as constructing a normal PyTorch model.

import torch.nn as nn
import torch.nn.functional as F 
import torchquantum as tq
import torchquantum.functional as tqf

class QFCModel(nn.Module):
  def __init__(self):
    super().__init__()
    self.n_wires = 4
    self.q_device = tq.QuantumDevice(n_wires=self.n_wires)
    self.measure = tq.MeasureAll(tq.PauliZ)
    
    self.encoder_gates = [tqf.rx] * 4 + [tqf.ry] * 4 + \
                         [tqf.rz] * 4 + [tqf.rx] * 4
    self.rx0 = tq.RX(has_params=True, trainable=True)
    self.ry0 = tq.RY(has_params=True, trainable=True)
    self.rz0 = tq.RZ(has_params=True, trainable=True)
    self.crx0 = tq.CRX(has_params=True, trainable=True)

  def forward(self, x):
    bsz = x.shape[0]
    # down-sample the image
    x = F.avg_pool2d(x, 6).view(bsz, 16)
    
    # reset qubit states
    self.q_device.reset_states(bsz)
    
    # encode the classical image to quantum domain
    for k, gate in enumerate(self.encoder_gates):
      gate(self.q_device, wires=k % self.n_wires, params=x[:, k])
    
    # add some trainable gates (need to instantiate ahead of time)
    self.rx0(self.q_device, wires=0)
    self.ry0(self.q_device, wires=1)
    self.rz0(self.q_device, wires=3)
    self.crx0(self.q_device, wires=[0, 2])
    
    # add some more non-parameterized gates (add on-the-fly)
    tqf.hadamard(self.q_device, wires=3)
    tqf.sx(self.q_device, wires=2)
    tqf.cnot(self.q_device, wires=[3, 0])
    tqf.qubitunitary(self.q_device, wires=[1, 2], params=[[1, 0, 0, 0],
                                                          [0, 1, 0, 0],
                                                          [0, 0, 0, 1j],
                                                          [0, 0, -1j, 0]])
    
    # perform measurement to get expectations (back to classical domain)
    x = self.measure(self.q_device).reshape(bsz, 2, 2)
    
    # classification
    x = x.sum(-1).squeeze()
    x = F.log_softmax(x, dim=1)

    return x
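
The model above can then be trained like any other PyTorch module. Below is a minimal sketch, with random tensors standing in for the MNIST data pipeline (the batch size and labels are placeholders):

import torch
import torch.nn.functional as F

model = QFCModel()
images = torch.randn(4, 1, 28, 28)   # dummy batch of four 28x28 grayscale images
log_probs = model(images)            # shape [4, 2]: log-probabilities for two classes
loss = F.nll_loss(log_probs, torch.tensor([0, 1, 1, 0]))
loss.backward()                      # gradients flow back through the quantum gates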

Features

  • Easy construction of parameterized quantum circuits in PyTorch.
  • Support for batched inference and training on CPU/GPU.
  • Support for dynamic computation graphs for easy debugging.
  • Support for easy deployment on real quantum devices such as IBMQ.

TODOs

  • Support more gates.
  • Support compiling a unitary from a description to speed up training.
  • Support measurement methods other than the analytic method.
  • In einsum, support multiple qubits sharing one letter so that more than 26 qubits can be simulated.
  • Support a bmm-based implementation to resolve the scalability issue.
  • Support conversion from torchquantum to Qiskit.

Dependencies

  • Python >= 3.7
  • PyTorch >= 1.8.0
  • configargparse >= 0.14
  • GPU model training requires NVIDIA GPUs

MNIST Example

Train a quantum circuit to perform the MNIST classification task and deploy it on the real IBM Yorktown quantum computer, as in the mnist_example.py script:

python mnist_example.py

Files

File             Description
devices.py       QuantumDevice class, which stores the statevector
encoding.py      Encoding layers to encode classical values into the quantum domain
functional.py    Quantum gate functions
operators.py     Quantum gate classes
layers.py        Layer templates such as RandomLayer
measure.py       Measurement of quantum states to obtain classical values
graph.py         Quantum gate graph used in static mode
super_layer.py   Layer templates for SuperCircuits
plugins/qiskit*  Converters and processors for easy deployment on IBMQ
examples/        More examples for training QML and VQE models

More Examples

The examples/ folder contains more examples for training QML and VQE models. Example usage for a QML circuit:

# train the circuit with 36 params in the U3+CU3 space
python examples/train.py examples/configs/mnist/four0123/train/baseline/u3cu3_s0/rand/param36.yml

# evaluate the circuit with torchquantum
python examples/eval.py examples/configs/mnist/four0123/eval/tq/all.yml --run-dir=runs/mnist.four0123.train.baseline.u3cu3_s0.rand.param36

# evaluate the circuit with real IBMQ-Yorktown quantum computer
python examples/eval.py examples/configs/mnist/four0123/eval/x2/real/opt2/300.yml --run-dir=runs/mnist.four0123.train.baseline.u3cu3_s0.rand.param36

Example usage for a VQE circuit:

# Train the VQE circuit for h2
python examples/train.py examples/configs/vqe/h2/train/baseline/u3cu3_s0/human/param12.yml

# evaluate the VQE circuit with torchquantum
python examples/eval.py examples/configs/vqe/h2/eval/tq/all.yml --run-dir=runs/vqe.h2.train.baseline.u3cu3_s0.human.param12/

# evaluate the VQE circuit with real IBMQ-Yorktown quantum computer
python examples/eval.py examples/configs/vqe/h2/eval/x2/real/opt2/all.yml --run-dir=runs/vqe.h2.train.baseline.u3cu3_s0.human.param12/

Detailed documentation coming soon.

Contact

Hanrui Wang ([email protected])

Comments
  • Cannot use qiskit simulation when running mnist_example.py

I tried to run mnist_example.py with my IBM Q token already set, but I ran into trouble with the Qiskit simulation. The offending line is

    valid_test(dataflow, 'test', model, device, qiskit=True)
    

    I think the execution should be fast, but it got stuck after the following messages:

    Test with Qiskit Simulator
    [2022-03-22 22:36:14.573] Before transpile: {'depth': 32, 'size': 77, 'width': 8, 'n_single_gates': 62, 'n_two_gates': 11, 'n_three_more_gates': 0, 'n_gates_dict': {'ry': 19, 'rz': 24, 'rx': 17, 'cx': 10, 'crx': 1, 'h': 1, 'sx': 1, 'measure': 4}}
    [2022-03-22 22:36:14.864] After transpile: {'depth': 23, 'size': 49, 'width': 8, 'n_single_gates': 33, 'n_two_gates': 12, 'n_three_more_gates': 0, 'n_gates_dict': {'ry': 8, 'rz': 7, 'rx': 4, 'cx': 12, 'u1': 2, 'u3': 11, 'u2': 1, 'measure': 4}}
    

I interrupted the program with Ctrl+C after 2-3 minutes and got a very long error log from the interrupted multiprocessing. I want to know what causes this trouble and how to deal with it.

Thanks!


I am using the torchquantum master branch with qiskit 0.19.2.

    error_log.txt

    opened by royess 10
  • Always print “The qiskit parameterization bug is already fixed!” when running mnist_example.py

I tried to run mnist_example.py, and I commented out the later part about Qiskit.

    I tried to run it with the following command:

    python mnist_example.py --epochs 1

But it always prints "The qiskit parameterization bug is already fixed!" in the terminal. I would like a way to stop the printing, but I haven't found one yet.

Thanks!

    print_log.txt

    opened by ex193170 5
  • Testing of mnist_example_no_binding.py file produces error

Hi, I tried testing the mnist_example_no_binding.py file. I keep getting the error below:

      File "C:\Users\manuc\AppData\Local\Programs\Python\Python39\lib\site-packages\torch\__init__.py", line 126, in <module>
        raise err
    OSError: [WinError 1455] The paging file is too small for this operation to complete. Error loading "C:\Users\manuc\AppData\Local\Programs\Python\Python39\
    lib\site-packages\torch\lib\cusparse64_11.dll" or one of its dependencies.
    Traceback (most recent call last):
    Traceback (most recent call last):
      File "<string>", line 1, in <module>
      File "<string>", line 1, in <module>
    
    

Please suggest a way to resolve this. Because of this error, my testing is not yet complete.

    opened by manu123416 4
  • Cannot use Qiskit simulation when running example1

I tried to run torchquantum-master\artifact\example1\mnist_example.py, but I also ran into trouble with the Qiskit simulation.

Because the file "examples.core.datasets" is missing, I copied it from https://zenodo.org/record/5787244#.YbunmBPMJhE (torchquantum-master\examples\core\datasets). To avoid a BrokenPipeError, I set "num_workers" in line 152 to 0.

    After these messages:

    Test with Qiskit Simulator
    [2022-10-20 14:33:05.579] Before transpile: {'depth': 36, 'size': 77, 'width': 8, 'n_single_gates': 52, 'n_two_gates': 21, 'n_three_more_gates': 0, 'n_gates_dict': {'ry': 19, 'rz': 18, 'rx': 13, 'cx': 20, 'crx': 1, 'h': 1, 'sx': 1, 'measure': 4}}
    [2022-10-20 14:33:06.257] After transpile: {'depth': 31, 'size': 61, 'width': 8, 'n_single_gates': 37, 'n_two_gates': 20, 'n_three_more_gates': 0, 'n_gates_dict': {'ry': 11, 'rz': 7, 'rx': 6, 'cx': 20, 'u3': 11, 'u1': 2, 'measure': 4}}
    

an error occurred, saying "need at least one array to stack". The details are in the file "errorlog1".

I also tried adding ", parallel=False" and modifying the file qiskit/assembler/assemble_circuits.py as Issue #9 does, but another error occurred; the details are in the file "errorlog2".

The version information is as follows:

    >>> import qiskit
    >>> qiskit.version.QiskitVersion()
    {'qiskit-terra': '0.19.2', 'qiskit-aer': '0.10.3', 'qiskit-ignis': '0.7.0', 'qiskit-ibmq-provider': '0.18.3', 'qiskit-aqua': '0.9.5', 'qiskit': '0.34.2', 'qiskit-nature': None, 'qiskit-finance': None, 'qiskit-optimization': None, 'qiskit-machine-learning': None}
    

    and I'm running the code under python 3.9.

By the way, I tried the code at https://zenodo.org/record/5787244#.YbunmBPMJhE. The same problem occurred. I wonder how to deal with it. Any help would be greatly appreciated.

    errorlog1.txt errorlog2.txt

    opened by yzr-mint 2
  • Code for reproducing QuantumNAS results missing

    The .py and .yml files referenced in the shell scripts used to reproduce the results from the QuantumNAS paper seem to be missing - when running the Colab notebooks, I always run into this error:

    can't open file 'examples/train.py': [Errno 2] No such file or directory

    I tried searching for the files in the repo manually, but could not find the .py or the .yml files anywhere.

    opened by SashwatAnagolum 2
  • GPU is not utilized during VQE training

I tried the code in the VQE examples but found that the GPU was not utilized, even though 2 GB of GPU memory is allocated.

    My configuration:

    [2022-05-31 13:54:52.758] /home/yuxuan/.julia/conda/3/envs/qtorch39/bin/python  examples/vqe/xxz_noncritical_configs.yml --gpu 0
    [2022-05-31 13:54:52.758] Training started: "runs/vqe.xxz_noncritical_configs".
    dataset:
      name: vqe
      input_name: input
      target_name: target
    trainer:
      name: params_shift_trainer
    run:
      steps_per_epoch: 10
      workers_per_gpu: 8
      n_epochs: 10
      bsz: 1
      device: gpu
    model:
      transpile_before_run: False
      load_op_list: False
      hamil_filename: examples/vqe/h2.txt
      arch:
        n_wires: 6
        n_layers_per_block: 6
        q_layer_name: seth_0
        n_blocks: 6
      name: vqe_0
    qiskit:
      use_qiskit: False
      use_qiskit_train: True
      use_qiskit_valid: True
      use_real_qc: False
      backend_name: ibmq_quito
      noise_model_name: None
      n_shots: 8192
      initial_layout: None
      optimization_level: 0
      max_jobs: 1
    ckpt:
      load_ckpt: False
      load_trainer: False
      name: checkpoints/min-loss-valid.pt
    debug:
      pdb: False
      set_seed: False
    optimizer:
      name: adam
      lr: 0.05
      weight_decay: 0.0001
      lambda_lr: 0.01
    criterion:
      name: minimize
    scheduler:
      name: cosine
    callbacks: [{'callback': 'InferenceRunner', 'split': 'valid', 'subcallbacks': [{'metrics': 'MinError', 'name': 'loss/valid'}]}, {'callback': 'MinSaver', 'name': 'loss/valid'}, {'callback': 'Saver', 'max_to_keep': 10}]
    regularization:
      unitary_loss: False
    legalization:
      legalize: False
    

    GPU status by Nvitop: (The last line is for VQE training process.)

    Version information:

    python                    3.9.11               h12debd9_2
    tensorboard               2.9.0                    pypi_0    pypi
    tensorboard-data-server   0.6.1                    pypi_0    pypi
    tensorboard-plugin-wit    1.8.1                    pypi_0    pypi
    tensorflow                2.9.1                    pypi_0    pypi
    tensorflow-estimator      2.9.0                    pypi_0    pypi
    tensorflow-io-gcs-filesystem 0.26.0                   pypi_0    pypi
    tensorpack                0.11                     pypi_0    pypi
    torch                     1.11.0+cu113             pypi_0    pypi
    torchaudio                0.11.0+cu113             pypi_0    pypi
    torchpack                 0.3.1                    pypi_0    pypi
    torchquantum              0.1.0                     dev_0    <develop>
    torchvision               0.12.0+cu113             pypi_0    pypi
    
    opened by royess 2
  • inconvenient to run VQE example

I find it inconvenient to run the VQE example. If I run python examples/vqe/train.py directly, I get an error message:

    ......
        from examples.vqe import builder
    ModuleNotFoundError: No module named 'examples.vqe'
    

    But python -c "from examples.vqe import builder" works fine, which is strange to me.

My current way of running the script is to open a Python REPL and run

    from examples.vqe import train
    import sys
    sys.argv.append('examples/vqe/vqe_configs.yml')
    train.main()
    

I wonder whether I can do this in a simpler way, or whether the code needs to be modified.
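
A common workaround for this kind of import error (an assumption based on standard Python packaging, not verified against this repo) is to run the script as a module from the repository root, so that the top-level examples package is importable:

    python -m examples.vqe.train examples/vqe/vqe_configs.yml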

    opened by royess 2
  • Got stuck while running .\artifact\example2

I tried to run torchquantum-master\artifact\example2\quantumnas\1_train_supercircuit.sh, but I got stuck.

The program seems to get stuck after it begins to train. After the message "0% 0/92 [00:00<?, ?it/s]" came out, I waited for hours but nothing happened. The output is in the file "errorlog.log".

The version information is as follows:

    >>> import qiskit
    >>> qiskit.version.QiskitVersion()
    {'qiskit-terra': '0.19.2', 'qiskit-aer': '0.10.3', 'qiskit-ignis': '0.7.0', 'qiskit-ibmq-provider': '0.18.3', 'qiskit-aqua': '0.9.5', 'qiskit': '0.34.2', 'qiskit-nature': None, 'qiskit-finance': None, 'qiskit-optimization': None, 'qiskit-machine-learning': None}
    

    and I'm running the code under python 3.9.

    I wonder how to deal with it. Any help would be greatly appreciated.

    errorlog.log

    opened by yzr-mint 1
  • Apple Silicon Mac needs one more step: install hdf5

    git clone https://github.com/mit-han-lab/torchquantum.git
    cd torchquantum
    brew install hdf5
    export HDF5_DIR="$(brew --prefix hdf5)"
    pip install --editable .
    python fix_qiskit_parameterization.py
    

    reference: https://stackoverflow.com/questions/66741778/how-to-install-h5py-needed-for-keras-on-macos-with-m1

    opened by frogcjn 1
  • (feat/github-ci) ensure python style consistency, add pre-commit hooks

    Hello 👋

This small PR adds new GitHub CI workflows and a flake8 configuration to the torchquantum project to ensure Python style consistency and protect the codebase from common mistakes.

    opened by almostagi 1
  • How to save the QNN model like a normal pytorch model?

    Hi,

How can I save the QNN model in such a way that it can be loaded back the same way we load a normal PyTorch model? Basically, I want to load it for this use case.

I did check the saving example in the examples section, but it saves only a checkpoint rather than the entire model.
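
Since TorchQuantum models are built on nn.Module, the standard PyTorch state-dict pattern should apply. A sketch under that assumption, reusing the QFCModel defined earlier on this page (the file name is arbitrary):

    import torch

    model = QFCModel()
    torch.save(model.state_dict(), "qfc_model.pt")   # save the trained parameters

    # later: rebuild the same architecture, then load the parameters back
    restored = QFCModel()
    restored.load_state_dict(torch.load("qfc_model.pt"))
    restored.eval()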

    opened by sauravtii 1
  • A simple way to improve the regression example

    https://github.com/mit-han-lab/torchquantum/blob/7122388a3d58d5b6c48db44fbd4b27198941ed2f/examples/regression/run_regression.py#L138

In this example, after the measurement you get 3 numbers (output_all.shape = [bsz, 3]). However, the loss function uses only the 2nd number, i.e., [:, 1]. This leads to poor performance. A simple fix significantly improves the performance (I have already tested it).

1. Add res = self.linear(res) as the last step of the self.forward() function, where self.linear = torch.nn.Linear(self.n_wires, 1)
2. Unsqueeze the targets with unsqueeze(dim=-1) so that the dimensions of outputs_all and targets match (see the sketch below)
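
A sketch of the suggested fix (LinearReadout is a hypothetical name; res, outputs_all, targets, and n_wires follow the description above):

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class LinearReadout(nn.Module):
        # maps all n_wires expectation values to a single regression output
        def __init__(self, n_wires: int):
            super().__init__()
            self.n_wires = n_wires
            self.linear = nn.Linear(self.n_wires, 1)

        def forward(self, res: torch.Tensor) -> torch.Tensor:
            # res: [bsz, n_wires] measurement results -> [bsz, 1]
            return self.linear(res)

    # shape check: unsqueeze targets to [bsz, 1] so they match outputs_all
    head = LinearReadout(n_wires=3)
    outputs_all = head(torch.randn(8, 3))
    targets = torch.randn(8)
    loss = F.mse_loss(outputs_all, targets.unsqueeze(dim=-1))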

    BTW: I have been playing around with torchquantum recently. It is a very good tool.

    opened by caitaozhan 0
  • Support for fake backends

    First of all, I appreciate your effort! This framework is so helpful for new learners!

I think it would be great if this framework also supported fake backends, for reproducibility!

    Thank you.

    opened by j0807s 1
  • Train data label and image are different

Hi, I tried testing your quantum neural network code in a Jupyter notebook. I think there is a bug in the training data.

dataset = MNIST(root='../Data_Manu',
                train_valid_split_ratio=[0.9, 0.1],
                digits_of_interest=[3, 5],
                # n_train_samples=75,
                n_test_samples=75)

data_train = dataset['train'][0]

{'image': tensor([[[-0.4242, -0.4242, -0.4242,  ..., -0.4242, -0.4242, -0.4242],
                   [-0.4242, -0.4242, -0.4242,  ..., -0.4242, -0.4242, -0.4242],
                   ...,
                   [-0.4242, -0.4242, -0.4242,  ..., -0.4242, -0.4242, -0.4242],
                   [-0.4242, -0.4242, -0.4242,  ..., -0.4242, -0.4242, -0.4242]]]),
 'digit': 1}

(28x28 image tensor abridged for brevity)

The tensor matrix contains a 5, but the label shows 1?
    
    opened by manu123416 3
  • Density matrix and mixed state

    Hi,

we are currently using Torchquantum to implement hybrid models, and we're wondering whether Torchquantum plans to support mixed states and density-matrix simulation in the near future, since we'd like to implement something like qiskit.quantum_info.partial_trace.

    Without density matrix/mixed states, is something like https://quantumai.google/reference/python/cirq/partial_trace_of_state_vector_as_mixture currently doable with Torchquantum?
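
In the meantime, the partial trace of a pure state can be computed from the statevector with plain PyTorch. A sketch that assumes access to the flattened statevector and traces out the trailing qubits (partial_trace_pure is a hypothetical helper, not a Torchquantum API):

    import torch

    def partial_trace_pure(state: torch.Tensor, n_keep: int) -> torch.Tensor:
        # reduced density matrix of the first n_keep qubits of an n-qubit pure state
        n = state.numel().bit_length() - 1          # assumes state has 2**n elements
        psi = state.reshape(2 ** n_keep, 2 ** (n - n_keep))
        # Tr_B |psi><psi| = psi @ psi^dagger after grouping the traced-out indices
        return psi @ psi.conj().T                   # [2**n_keep, 2**n_keep]

    # example: tracing out one qubit of a Bell state gives the maximally mixed state
    bell = torch.tensor([1, 0, 0, 1], dtype=torch.cfloat) / 2 ** 0.5
    print(partial_trace_pure(bell, n_keep=1))       # ~0.5 * identity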

    Thanks for making such an awesome library available!

    opened by wcqc 6
Releases (v0.1.5)
  • v0.1.2 (Sep 14, 2022)

    1. Add support for state.gate such as state.h
    2. Add more examples

    What's Changed

    • RZ gate by @jessding in https://github.com/mit-han-lab/torchquantum/pull/1
    • [major] merge pruning branch to master by @Hanrui-Wang in https://github.com/mit-han-lab/torchquantum/pull/2
    • Jiaqi by @JeremieMelo in https://github.com/mit-han-lab/torchquantum/pull/3
    • Jiaqi by @JeremieMelo in https://github.com/mit-han-lab/torchquantum/pull/4
    • Params shift by @abclzr in https://github.com/mit-han-lab/torchquantum/pull/6
    • Corrected module name to import MNIST by @googlercolin in https://github.com/mit-han-lab/torchquantum/pull/23
    • modify doc conf to init docstring publish task by @frogcjn in https://github.com/mit-han-lab/torchquantum/pull/24
    • refine class template by @frogcjn in https://github.com/mit-han-lab/torchquantum/pull/25
    • [minor] update format and theme by @frogcjn in https://github.com/mit-han-lab/torchquantum/pull/26
    • [minor] adjust dark theme code block and add function template by @frogcjn in https://github.com/mit-han-lab/torchquantum/pull/27
    • add customized furo doc theme by @frogcjn in https://github.com/mit-han-lab/torchquantum/pull/28
    • [doc] add ipynb and md support into doc, add one example by @frogcjn in https://github.com/mit-han-lab/torchquantum/pull/29
    • [major] fix bugs in torchquantum/measure.py by @abclzr in https://github.com/mit-han-lab/torchquantum/pull/30
    • [doc] Fix examples page in examples/index.rst by @frogcjn in https://github.com/mit-han-lab/torchquantum/pull/31

    New Contributors

    • @jessding made their first contribution in https://github.com/mit-han-lab/torchquantum/pull/1
    • @Hanrui-Wang made their first contribution in https://github.com/mit-han-lab/torchquantum/pull/2
    • @JeremieMelo made their first contribution in https://github.com/mit-han-lab/torchquantum/pull/3
    • @abclzr made their first contribution in https://github.com/mit-han-lab/torchquantum/pull/6
    • @googlercolin made their first contribution in https://github.com/mit-han-lab/torchquantum/pull/23
    • @frogcjn made their first contribution in https://github.com/mit-han-lab/torchquantum/pull/24

    Full Changelog: https://github.com/mit-han-lab/torchquantum/commits/v0.1.2

Owner

MIT HAN Lab: Accelerating Deep Learning Computing