Code for "Universal inference meets random projections: a scalable test for log-concavity"

Overview

How to use this repository

This repository contains code to replicate the results of "Universal inference meets random projections: a scalable test for log-concavity" by Robin Dunn, Larry Wasserman, and Aaditya Ramdas.

Folder contents

  • batch_scripts: Contains SLURM batch scripts to run the simulations. Scripts are labeled by the figure for which their simulations produce data. These scripts run the code in sim_code, using the parameters in sim_params (see the sketch after this list).
  • data: Output of simulations.
  • plot_code: Reads simulation output from the data folder and reproduces all figures in the paper. Plots are saved to the plots folder.
  • plots: Contains all plots in the paper.
  • sim_code: R code to run the simulations. Simulation output is saved to the data folder.
  • sim_params: Parameters for simulations. Each row contains a single choice of parameters. The scripts in sim_code read in these files, and the scripts in batch_scripts loop through all choices of parameters.
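
For reference, here is a minimal sketch of what one of these batch scripts might look like, borrowing the Figure 1 script name used below. The SLURM headers are illustrative assumptions; the actual array ranges and time limits vary from script to script.

    #!/bin/bash
    #SBATCH --array=1-30          # one array task per row of the parameter file (assumed range)
    #SBATCH --time=04:00:00       # illustrative limit; actual allocations vary per script

    # Each array task runs one parameter setting, selected by its index.
    Rscript sim_code/fig01_fully_NP_randproj.R $SLURM_ARRAY_TASK_ID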

How do I ...

Produce the simulations for a given figure?

In the batch_scripts folder, scripts are labeled by the figure for which they simulate data. Run all batch scripts corresponding to the figure of interest. Each script's allocated run time is based on the parameter choice with the longest run time, so many jobs will finish well within the allocation. The files in sim_code each display a progress bar that estimates the remaining run time; you may wish to run these files outside of a batch submission first to gauge the run time on your computing system.

Alternatively, to run the code without a job submission system, open any .sh file. Its Rscript lines can be run in a terminal, replacing $SLURM_ARRAY_TASK_ID with each of the indices in the batch array, as in the sketch below.
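
For example, a minimal sketch for the Figure 1 script and its 30 parameter settings, assuming (as the Rscript lines in the .sh files suggest) that each R script takes the array index as a command-line argument:

    # Run each of the 30 parameter settings in sequence, passing the index
    # in place of $SLURM_ARRAY_TASK_ID.
    for i in $(seq 1 30); do
      Rscript sim_code/fig01_fully_NP_randproj.R $i
    done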

The simulation output will be stored in the data folder, with one dataset per choice of parameters. To combine these datasets into a single dataset (as they currently appear in data), run the code in sim_code/combine_datasets.R.

Example: batch_scripts/fig01_fully_NP_randproj.sh

This script reproduces the universal test simulations for Figure 1. To do so, it runs the R script at sim_code/fig01_fully_NP_randproj.R, which reads the parameters from sim_params/fig01_fully_NP_randproj_params.csv. There are 30 sets of parameters in total. The results will be stored in the data folder, with names such as fig01_fully_NP_randproj_1.csv, ..., fig01_fully_NP_randproj_30.csv. To combine these files into a single .csv file, run the code at sim_code/combine_datasets.R.
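
Put together, a sketch of the full workflow for this example, assuming sbatch is available and that combine_datasets.R can be run non-interactively (otherwise, open it in an R session and run it there):

    # Submit the 30-task array job for the Figure 1 universal test simulations.
    sbatch batch_scripts/fig01_fully_NP_randproj.sh

    # After all tasks finish, merge fig01_fully_NP_randproj_1.csv, ...,
    # fig01_fully_NP_randproj_30.csv from the data folder into one .csv file.
    Rscript sim_code/combine_datasets.R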

Examine the code for a given simulation?

The R scripts in sim_code are labeled by the figures for which they simulate data. Open all files corresponding to the figure of interest.

Reproduce a figure without rerunning the simulations?

The R scripts in plot_code are labeled by their corresponding plots. They read in the necessary simulated data from the data folder and output the figures to the plots folder.
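
For example, with a hypothetical script name (use the plot_code file labeled with the figure of interest):

    # Hypothetical file name: regenerates Figure 1 from the saved simulation
    # output in the data folder and writes the figure to the plots folder.
    Rscript plot_code/fig01_plot.R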
