Generative Adversarial Networks for High Energy Physics extended to a multi-layer calorimeter simulation

Overview

CaloGAN

Simulating 3D High Energy Particle Showers in Multi-Layer Electromagnetic Calorimeters with Generative Adversarial Networks.

This repository contains everything you'll need to reproduce M. Paganini (@mickypaganini), L. de Oliveira (@lukedeo), and B. Nachman (@bnachman), CaloGAN: Simulating 3D High Energy Particle Showers in Multi-Layer Electromagnetic Calorimeters with Generative Adversarial Networks [arXiv:1705.02355].

You are more than welcome to use the open data and open-source software provided here for any of your projects, but we kindly ask that you cite them using the DOIs provided below:

Asset                                       | Location
Training Data (GEANT4 showers, ⟂ to center) | DOI
Source Code (this repo!)                    | DOI

For any use of the paper's ideas and results, please cite:

@article{paganini_calogan,
      author         = "Paganini, Michela and de Oliveira, Luke and Nachman,
                        Benjamin",
      title          = "{CaloGAN: Simulating 3D High Energy Particle Showers in
                        Multi-Layer Electromagnetic Calorimeters with Generative
                        Adversarial Networks}",
      year           = "2017",
      eprint         = "1705.02355",
      archivePrefix  = "arXiv",
      primaryClass   = "hep-ex",
}

Goal

The goal of this project is to help physicists at CERN speed up their simulations by encoding the most computationally expensive portion of the simulation process (i.e., showering in the EM calorimeter) in a deep generative model.

The challenges come from the fact that this portion of the detector is longitudinally segmented into three layers with heterogeneous granularity. For simplicity, we can visualize the energy depositions of particles passing through the detector as a series of three images per shower, while keeping in mind the sequential nature of their relationship in the generator.

To download and better understand the training dataset, visit the Mendeley Data repository.

3D shower in the EM calorimeter

Getting Started

This repository contains three main folders: generation, models, and analysis, which represent the three main stages of this project. You may decide to engage with only a subset of them, so we provide everything you need to jump right in at whichever stage interests you.

generation contains the code to build the electromagnetic calorimeter geometry and shoot particles at it with a given energy. This is all based on Geant4. For more instructions, see GEANT4 Generation on PDSF below.

models contains the core ML code for this project. The file train.py takes care of loading the data and training the GAN.

analysis contains Jupyter Notebooks used to evaluate performance and produce all plots for the paper.

GEANT4 Generation on PDSF

On PDSF, you should run all generation code that outputs big files in a scratch space. To make a scratch environment, run mkdir /global/projecta/projectdirs/atlas/{your-username}, and, for convenience, link it to your home directory via ln -s /global/projecta/projectdirs/atlas/{your-username} ~/scratch.

To build the generation code on PDSF, simply run source cfg/pdsf-env.sh from the generation/ folder in the repository. This loads the required modules.

Next, you can type make, which should build an executable called generate. Because of how Geant4 works, this executable gets deposited in $HOME/geant4_workdir/bin/Linux-g++/, which is in your $PATH when the modules from cfg/pdsf-env.sh are loaded.

To run the generation script, run generate -m cfg/run2.mac. You can change generation parameters inside cfg/run2.mac (follow these instructions).
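For reference, a run macro is just a list of Geant4 UI commands. Below is a sketch of what cfg/run2.mac might contain; only /run/beamOn is taken from this README, while the particle-gun lines are standard Geant4 commands and are assumptions about the actual macro:

# species and energy of the incident particle (assumptions, may differ from cfg/run2.mac)
/gun/particle e+
/gun/energy 10 GeV
# number of showers to generate (matches the /run/beamOn 1000 example mentioned below)
/run/beamOn 1000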

This will output a file called plz_work_kthxbai.root with a TTree named fancy_tree, which will contain a branch for each calorimeter cell (cell_#) with histograms of the energy deposited in that cell across the various shower events. The last three cells (numbered 504, 505, and 506) actually represent the overflow for each calorimeter layer. Finally, a branch called TotalEnergy is added for bookkeeping.
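If you prefer to peek at the ROOT output directly from Python before converting it, something along these lines should work (a minimal sketch assuming the third-party uproot package, which is not a dependency of this repo):

import uproot

# open the generation output and inspect the TTree
tree = uproot.open('plz_work_kthxbai.root')['fancy_tree']
print(tree.keys()[:5])                   # cell_0, cell_1, ...
print(tree['TotalEnergy'].array()[:10])  # per-event bookkeeping branch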

We provide you with a convenient script to convert the ROOT file into a more manageable HDF5 archive. The convert.py script is located in the generation/ folder and can be used as follows:

usage: convert.py [-h] --in-file IN_FILE --out-file OUT_FILE --tree TREE

Convert GEANT4 output files into ML-able HDF5 files

optional arguments:
  -h, --help            show this help message and exit
  --in-file IN_FILE, -i IN_FILE
                        input ROOT file
  --out-file OUT_FILE, -o OUT_FILE
                        output HDF5 file
  --tree TREE, -t TREE  input tree for the ROOT file

So you can run, for example, python convert.py --in-file plz_work_kthxbai.root --out-file test.h5 --tree fancy_tree. Assuming you specified /run/beamOn 1000 in your run.mac file, the structure of the output HDF5 should look like this:

energy                   Dataset {1000, 1}
layer_0                  Dataset {1000, 3, 96}
layer_1                  Dataset {1000, 12, 12}
layer_2                  Dataset {1000, 12, 6}
overflow                 Dataset {1000, 3}
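A quick way to sanity-check the converted archive is to list its datasets and their shapes with h5py (a minimal sketch using the file name from the example above):

import h5py

# print every dataset in the converted archive along with its shape
with h5py.File('test.h5', 'r') as f:
    for name, dataset in f.items():
        print(name, dataset.shape)  # e.g. layer_0 (1000, 3, 96)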

To launch a batch job on PDSF, simply run ./launch <num_jobs> to start <num_jobs> concurrent tasks in a job array.

The CaloGAN Model

This work builds on the solution presented in arXiv/1701.05927, which we named LAGAN, or Location-Aware Generative Adversarial Network. This is a physics-specific modification of the more common DCGAN and ACGAN frameworks, designed to handle the sparsity, location dependence, and high dynamic range that characterize physics images.

Generator

The generator contains three parallel LAGAN-style streams with an in-painting mechanism to handle the sequential nature of these shower images.
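Schematically, the three-stream layout can be sketched in Keras as follows (an illustration only: the plain dense streams below are hypothetical stand-ins for the actual LAGAN-style streams and in-painting connections in models/architectures.py, and the latent size is illustrative):

from keras.layers import Dense, Input, Reshape
from keras.models import Model

# one stream per calorimeter layer, all fed from a shared latent vector;
# the real generator also conditions later streams on earlier ones (in-painting)
latent = Input(shape=(1024,), name='z')
layer_0 = Reshape((3, 96))(Dense(3 * 96, activation='relu')(latent))
layer_1 = Reshape((12, 12))(Dense(12 * 12, activation='relu')(latent))
layer_2 = Reshape((12, 6))(Dense(12 * 6, activation='relu')(latent))
generator = Model(latent, [layer_0, layer_1, layer_2])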

Discriminator

The discriminator uses convolutional features combined with ad-hoc notions of sparsity and reconstructed energy, both to classify images as real or fake and to compare the reconstructed energy with the condition requested by the user.
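The ad-hoc quantities mentioned above can be expressed with simple Keras backend ops, roughly as follows (a sketch, not the exact code in models/ops.py):

from keras import backend as K
from keras.layers import Lambda

# fraction of cells with non-zero energy in a layer image (sparsity)
sparsity = Lambda(lambda x: K.mean(K.cast(x > 0.0, K.floatx()), axis=[1, 2]))
# energy reconstructed from a layer image, i.e. the sum over all its cells
reconstructed_energy = Lambda(lambda x: K.sum(x, axis=[1, 2]))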

Training the CaloGAN model

To begin the training process, create a YAML file following the specification in models/particles.yaml that lists all the particles you want to train on. In the paper, we train one GAN per particle type; if you specify more than one particle type and dataset in the YAML file, an ACGAN will be trained instead, which we found to be less performant. Assuming you have a folder called data/ in the root directory of the project containing a file called eplus.h5 (downloaded from Mendeley, perhaps?), you can train your own CaloGAN by running

python -m models.train models/particles.yaml
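For reference, the YAML file is simply a mapping from particle names to dataset paths. A minimal single-particle example (keys and paths are placeholders following the data/eplus.h5 setup described above) looks like:

# put the paths to the actual data here
positron: 'data/eplus.h5'
# pion: 'data/piplus.h5'
# gamma: 'data/gamma.h5'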

We recommend running python -m models.train -h at least once to see all the parameters one can change.

Performance Analysis

Performance evaluation is done from both a qualitative and a quantitative standpoint. The Jupyter notebooks available in the analysis folder will guide you through our plotting conventions.

For quick handling, we have pre-extracted the shower shape variables into a pandas dataframe (stored as HDF5) and made it available on S3. To load it, you can simply do pd.read_hdf('path/to/shower-shapes.h5').
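For example, loading and inspecting the dataframe takes a couple of lines (a minimal sketch; replace the placeholder path with the location of your download):

import pandas as pd

# load the pre-extracted shower-shape variables into a dataframe
shapes = pd.read_hdf('path/to/shower-shapes.h5')
print(shapes.columns)  # one column per shower-shape variable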

Docker

Running locally:

$ docker run -it --rm -v $PWD/CaloGAN:/home/CaloGAN engineren/calogan-docker python -m models.train models/particles.yaml

Running on naf-ilc-gpu:

$ singularity pull docker://engineren/calogan-docker:latest

$ singularity instance start --bind data:/home/CaloGAN/data --nv calogan-docker_latest.sif caloGAN

$ singularity run instance://caloGAN python -m models.train models/particles.yaml

Copyright Notice

“CaloGAN” Copyright (c) 2017, The Regents of the University of California, through Lawrence Berkeley National Laboratory (subject to receipt of any required approvals from the U.S. Dept. of Energy). All rights reserved.

If you have questions about your rights to use or distribute this software, please contact Berkeley Lab's Innovation & Partnerships Office at [email protected].

NOTICE. This Software was developed under funding from the U.S. Department of Energy and the U.S. Government consequently retains certain rights. As such, the U.S. Government has been granted for itself and others acting on its behalf a paid-up, nonexclusive, irrevocable, worldwide license in the Software to reproduce, distribute copies to the public, prepare derivative works, and perform publicly and display publicly, and to permit others to do so.
