l2hmc-qcd
📊 Slides
- Recent talk, Training Topological Samplers for Lattice Gauge Theory, given at the Machine Learning for High Energy Physics, on and off the Lattice workshop @ ECT* Trento (09/30/2021)
📒 Example Notebook
- Accepted to the Deep Learning for Simulation (SimDL) Workshop at ICLR 2021
  - 📚 arXiv:2105.03418
  - 📊 poster
Overview
The L2HMC algorithm aims to improve upon HMC by optimizing a carefully chosen loss function designed to minimize autocorrelations within the Markov chain, thereby improving the efficiency of the sampler.
This work is based on the original implementation: brain-research/l2hmc/.
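For intuition, the core of that loss can be written down compactly. The sketch below follows the expected-squared-jump-distance loss from the Levy et al. paper (their Eq. 7); it is a minimal illustration rather than necessarily the exact loss used in this repo, and the names `scale` (playing the role of λ²) and the guard `eps` are illustrative choices:

```python
import tensorflow as tf

def l2hmc_loss(x_init, x_prop, accept_prob, scale=1.0, eps=1e-4):
    """Minimal sketch of the L2HMC loss (Levy et al., Eq. 7).

    Rewards proposals that both move far and are accepted, via the
    expected squared jump distance  d = A(x'|x) * ||x' - x||^2.
    """
    # Expected squared jump distance, per chain in the batch.
    esjd = accept_prob * tf.reduce_sum((x_prop - x_init) ** 2, axis=-1)
    # scale/d penalizes small jumps; -d/scale rewards large ones.
    return tf.reduce_mean(scale / (esjd + eps) - esjd / scale)
```

Minimizing this term pushes the sampler toward large, accepted moves, which is exactly what suppresses autocorrelations in the chain.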
A detailed description of the L2HMC algorithm can be found in the paper:
Generalizing Hamiltonian Monte Carlo with Neural Networks
by Daniel Levy, Matthew D. Hoffman, and Jascha Sohl-Dickstein.
Broadly, given an analytically described target distribution, π(x), L2HMC provides a statistically exact sampler that:
- Quickly converges to the target distribution (fast burn-in).
- Quickly produces uncorrelated samples (fast mixing).
- Is able to efficiently mix between energy levels.
- Is capable of traversing low-density zones to mix between modes (often difficult for generic HMC).
L2HMC for Lattice QCD
Goal: Use L2HMC to efficiently generate gauge configurations for calculating observables in lattice QCD.
A detailed description of the (ongoing) work to apply this algorithm to simulations in lattice QCD (specifically, a 2D U(1) lattice gauge theory model) can be found in doc/main.pdf.
Organization
Dynamics / Network
The base class for the augmented L2HMC leapfrog integrator is implemented in `BaseDynamics` (a `tf.keras.Model` object). `GaugeDynamics` is a subclass of `BaseDynamics` containing modifications for the 2D U(1) pure gauge theory.
The network is defined in `l2hmc-qcd/network/functional_net.py`.
Network Architecture
An illustration of the leapfrog layer updating (x, v) --> (x', v') is shown below.
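In code, a single generalized leapfrog step has roughly the shape sketched below. This is a simplified illustration: it omits the binary masks (which update x in two partial steps) and the leapfrog-step index fed to the networks; `s_fn`, `q_fn`, `t_fn`, `grad_U`, and `eps` are hypothetical stand-ins for the scale (S), transformation (Q), and translation (T) network outputs, the gradient of the potential, and the step size, and the same placeholders are reused for brevity where the real algorithm uses separate networks for the x and v updates:

```python
import tensorflow as tf

def leapfrog_step(x, v, grad_U, eps, s_fn, q_fn, t_fn):
    """Simplified generalized leapfrog update (x, v) --> (x', v').

    Setting S = Q = T = 0 recovers the standard HMC leapfrog update;
    the learned networks let the integrator rescale and translate the
    updates instead of applying them uniformly.
    """
    # Half-step momentum update: rescale v, kick with a transformed gradient.
    v = (v * tf.exp(0.5 * eps * s_fn(x))
         - 0.5 * eps * (grad_U(x) * tf.exp(eps * q_fn(x)) + t_fn(x)))

    # Full-step position update: rescale x, translate with the transformed v.
    x = (x * tf.exp(eps * s_fn(v))
         + eps * (v * tf.exp(eps * q_fn(v)) + t_fn(v)))

    # Second half-step momentum update at the new position.
    v = (v * tf.exp(0.5 * eps * s_fn(x))
         - 0.5 * eps * (grad_U(x) * tf.exp(eps * q_fn(x)) + t_fn(x)))
    return x, v
```

Because every update is either a pointwise rescaling or a translation, the Jacobian of each step is tractable, which is what keeps the sampler statistically exact.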
Lattice
Lattice code can be found in `lattice.py`, specifically the `GaugeLattice` object that provides the base structure on which our target distribution exists. Additionally, the `GaugeLattice` object implements a variety of methods for calculating physical observables such as the average plaquette, ɸₚ, and the topological charge, Q.
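For concreteness, here is a hedged NumPy sketch of both observables for a 2D U(1) lattice, assuming link angles are stored in an array `x` of shape `(2, L, L)` indexed as `x[mu, i, j]` (the layout and function names are illustrative, not the `GaugeLattice` API):

```python
import numpy as np

def plaq_phases(x):
    """Plaquette angles phi_P(n) = x_0(n) + x_1(n+e0) - x_0(n+e1) - x_1(n),
    with periodic boundaries handled by np.roll."""
    return (x[0]
            + np.roll(x[1], -1, axis=0)   # x_1(n + e0)
            - np.roll(x[0], -1, axis=1)   # x_0(n + e1)
            - x[1])

def avg_plaquette(x):
    """Average plaquette <cos(phi_P)>."""
    return np.mean(np.cos(plaq_phases(x)))

def topological_charge(x):
    """Q = (1 / 2pi) * sum_P proj(phi_P), with each plaquette angle
    projected into [-pi, pi); Q is integer-valued on the torus."""
    phi = plaq_phases(x)
    proj = phi - 2 * np.pi * np.floor((phi + np.pi) / (2 * np.pi))
    return np.sum(proj) / (2 * np.pi)
```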
Training
The training loop is implemented in `l2hmc-qcd/utils/training_utils.py`.
To train the sampler on a 2D U(1) gauge model using the parameters specified in `bin/train_configs.json`:

```bash
$ python3 /path/to/l2hmc-qcd/l2hmc-qcd/train.py --json_file=/path/to/l2hmc-qcd/bin/train_configs.json
```
Alternatively, use the `bin/train.sh` script provided in `bin/`.
Features
- Distributed training (via `horovod`): if `horovod` is installed, the model can be trained across multiple GPUs (or CPUs) by:

```bash
#!/bin/bash

TRAINER=/path/to/l2hmc-qcd/l2hmc-qcd/train.py
JSON_FILE=/path/to/l2hmc-qcd/bin/train_configs.json

horovodrun -np ${PROCS} python3 ${TRAINER} --json_file=${JSON_FILE}
```
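Under the hood, data-parallel training with Horovod follows a standard TensorFlow pattern: each process is pinned to one GPU, gradients are averaged across workers, and the initial state is broadcast from rank 0. The sketch below shows that generic pattern (not the repo's actual `train.py`); `model`, `optimizer`, `loss_fn`, and `batch` are placeholders:

```python
import tensorflow as tf
import horovod.tensorflow as hvd

hvd.init()

# Pin each process to a single GPU (one process per GPU).
gpus = tf.config.experimental.list_physical_devices('GPU')
if gpus:
    tf.config.experimental.set_visible_devices(gpus[hvd.local_rank()], 'GPU')

@tf.function
def train_step(model, optimizer, loss_fn, batch, first_batch):
    with tf.GradientTape() as tape:
        loss = loss_fn(model, batch)
    # Average gradients across all workers before applying them.
    tape = hvd.DistributedGradientTape(tape)
    grads = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
    # After the first step, sync all workers to rank 0's initial state.
    if first_batch:
        hvd.broadcast_variables(model.variables, root_rank=0)
        hvd.broadcast_variables(optimizer.variables(), root_rank=0)
    return loss
```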
Contact
Code author: Sam Foreman
Pull requests and issues should be directed to: saforem2
Citation
If you use this code or find this work interesting, please cite our work along with the original paper:
```bibtex
@misc{foreman2021deep,
    title={Deep Learning Hamiltonian Monte Carlo},
    author={Sam Foreman and Xiao-Yong Jin and James C. Osborn},
    year={2021},
    eprint={2105.03418},
    archivePrefix={arXiv},
    primaryClass={hep-lat}
}

@article{levy2017generalizing,
    title={Generalizing Hamiltonian Monte Carlo with Neural Networks},
    author={Levy, Daniel and Hoffman, Matthew D. and Sohl-Dickstein, Jascha},
    journal={arXiv preprint arXiv:1711.09268},
    year={2017}
}
```
Acknowledgement
This research used resources of the Argonne Leadership Computing Facility, which is a DOE Office of Science User Facility supported under contract DE-AC02-06CH11357. This work describes objective technical results and analysis. Any subjective views or opinions that might be expressed in the work do not necessarily represent the views of the U.S. DOE or the United States Government. Declaration of Interests: None.