Bayesian Modeling and Uncertainty Quantification for Learning to Optimize: What, Why, and How

TensorFlow implementation for "Bayesian Modeling and Uncertainty Quantification for Learning to Optimize: What, Why, and How".

Yuning You, Yue Cao, Tianlong Chen, Zhangyang Wang, Yang Shen

In ICLR 2022.

Overview

In this repository, we introduce Bayesian modeling into learning-to-optimize (L2O) techniques to address the practical need for assessing and quantifying optimization uncertainty. Experiments are conducted on optimizing test functions, privacy attacks, and protein docking.

Environments

Create conda environment via:

conda env create -f environment.yml
cd sonnet_modified_files

and then copy the modified Sonnet files basic.py and gated_rnn.py into the conda environment directory:

cp gated_rnn.py $CONDAENV_PATH/envs/tf_gpu_1.14/lib/python3.7/site-packages/sonnet/python/modules/
cp basic.py $CONDAENV_PATH/envs/tf_gpu_1.14/lib/python3.7/site-packages/sonnet/python/modules/
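Here $CONDAENV_PATH is assumed to point to the root of your conda installation; the exact path depends on your local setup, for example:

export CONDAENV_PATH=$HOME/miniconda3
# verify that the target Sonnet directory exists before copying
ls $CONDAENV_PATH/envs/tf_gpu_1.14/lib/python3.7/site-packages/sonnet/python/modules/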

Training & Evaluation

mkdir ./weights; mkdir ./logs; cd src

Stage 1 training:

python train_dm_rs_cl.py --problem $problem_name --stage 1 --save_path ../weights/${problem_name}_stage1.ckpt

Stage 2 Bayesian training:

python train_dm_rs_cl.py --problem $problem_name --stage 2 --restore_path ../weights/${problem_name}_stage1.ckpt --save_path ../weights/${problem_name}_stage2.ckpt --lambda1 0.1

Evaluation:

python evaluate.py --problem $problem_name --path ../weights/${problem_name}_stage2.ckpt --output ../logs/${problem_name}.log --mode test

where

  • $problem_name = rastrigin06, rastrigin12, rastrigin18, rastrigin24, rastrigin30 trains on the Rastrigin test function with dim = 6, 12, 18, 24, 30, respectively.
  • $problem_name = ackley06, ackley12, ackley18, ackley24, ackley30 does the same for the Ackley test function.
  • $problem_name = griewank06, griewank12, griewank18, griewank24, griewank30 does the same for the Griewank test function.
  • $problem_name = privacy_attack runs the privacy attack experiment.
  • $problem_name = protein_dock runs the protein docking experiment.

and $lambda1 can be selected from {10, 1, 0.1, 0.01, 0.001}.
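For example, a complete run on the 12-dimensional Rastrigin function, using only the commands and options listed above, would look like:

cd src
problem_name=rastrigin12
python train_dm_rs_cl.py --problem $problem_name --stage 1 --save_path ../weights/${problem_name}_stage1.ckpt
python train_dm_rs_cl.py --problem $problem_name --stage 2 --restore_path ../weights/${problem_name}_stage1.ckpt --save_path ../weights/${problem_name}_stage2.ckpt --lambda1 0.1
python evaluate.py --problem $problem_name --path ../weights/${problem_name}_stage2.ckpt --output ../logs/${problem_name}.log --mode test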

Citation

If you use this code for your research, please cite our paper.

TBD