Repo for "Physion: Evaluating Physical Prediction from Vision in Humans and Machines" submission to NeurIPS 2021 (Datasets & Benchmarks track)

Overview

Physion: Evaluating Physical Prediction from Vision in Humans and Machines

Animation of the 8 scenarios

This repo contains code and data to reproduce the results in our paper, Physion: Evaluating Physical Prediction from Vision in Humans and Machines. Please see below for details on how to download the Physion dataset, replicate our modeling and human experiments, and run the statistical analyses that reproduce our results.

  1. Downloading the Physion dataset
  2. Dataset generation
  3. Modeling experiments
  4. Human experiments
  5. Comparing models and humans

Downloading the Physion dataset

Downloading the Physion test set (a.k.a. stimuli)

PhysionTest-Core (270 MB)

PhysionTest-Core is all you need to evaluate humans and models on exactly the same test stimuli used in our paper.

It contains eight directories, one for each scenario type: collide, contain, dominoes, drape, drop, link, roll, and support.

Each of these directories contains three subdirectories:

  • maps: Contains PNG segmentation maps for each test stimulus, indicating the location of the agent object in red and the patient object in yellow.
  • mp4s: Contains the MP4 video files presented to human participants. The agent and patient objects appear in random colors.
  • mp4s-redyellow: Contains the MP4 video files passed into models. The agent and patient objects consistently appear in red and yellow, respectively.

Download URL: https://physics-benchmarking-neurips2021-dataset.s3.amazonaws.com/Physion.zip.
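
For orientation, here is a minimal sketch (not part of the released code) that walks the extracted PhysionTest-Core directories and pulls agent/patient masks out of a segmentation map. It assumes the zip has been unpacked to ./Physion and that NumPy and Pillow are installed; the color thresholds are illustrative.

    # Minimal sketch: browse PhysionTest-Core and read one segmentation map per scenario.
    # Assumes Physion.zip has been extracted to ./Physion; thresholds are illustrative.
    from pathlib import Path

    import numpy as np
    from PIL import Image

    root = Path("Physion")
    for scenario_dir in sorted(p for p in root.iterdir() if p.is_dir()):
        mp4s = sorted((scenario_dir / "mp4s").glob("*.mp4"))
        maps = sorted((scenario_dir / "maps").glob("*.png"))
        print(f"{scenario_dir.name}: {len(mp4s)} videos, {len(maps)} segmentation maps")
        if maps:
            seg = np.array(Image.open(maps[0]).convert("RGB"))
            # Agent is drawn in red, patient in yellow (red + green channels).
            agent = (seg[..., 0] > 200) & (seg[..., 1] < 100) & (seg[..., 2] < 100)
            patient = (seg[..., 0] > 200) & (seg[..., 1] > 200) & (seg[..., 2] < 100)
            print(f"  agent pixels: {agent.sum()}, patient pixels: {patient.sum()}")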

PhysionTest-Complete (380 GB)

PhysionTest-Complete is what you want if you need more detailed metadata for each test stimulus.

Each stimulus is encoded in an HDF5 file containing comprehensive information regarding depth, surface normals, optical flow, and segmentation maps associated with each frame of each trial, as well as other information about the physical states of objects at each time step.

Download URL: https://physics-benchmarking-neurips2021-dataset.s3.amazonaws.com/PhysionTestHDF5.tar.gz.
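
If you want a quick look inside one of these files, the sketch below (assuming h5py is installed; the file path is a placeholder) walks the HDF5 hierarchy and prints every dataset, which is a convenient way to discover the depth, normal, flow, and segmentation arrays without relying on any particular key names.

    # Minimal sketch: list the contents of one PhysionTest-Complete trial file.
    # No particular group/dataset names are assumed; visititems walks the whole hierarchy.
    import h5py

    def describe(name, obj):
        if isinstance(obj, h5py.Dataset):
            print(f"{name}: shape={obj.shape}, dtype={obj.dtype}")

    with h5py.File("path/to/one_trial.hdf5", "r") as f:
        f.visititems(describe)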

You can also download the testing data for individual scenarios from the table in the next section.

Downloading the Physion training set

Downloading PhysionTrain-Dynamics

PhysionTrain-Dynamics contains the full dataset used to train the dynamics module of models benchmarked in our paper. It consists of approximately 2K stimuli per scenario type.

Download URL (770 MB): https://physics-benchmarking-neurips2021-dataset.s3.amazonaws.com/PhysionTrainMP4s.tar.gz
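
As a rough illustration of how these MP4s might be turned into model inputs (this is not the loader used in physopt), the following sketch decodes and resizes the frames of one training clip with OpenCV; the file path is hypothetical.

    # Minimal sketch (not physopt's data loader): decode one training MP4 into a frame array.
    import cv2
    import numpy as np

    def load_frames(path, size=(64, 64)):
        cap = cv2.VideoCapture(path)
        frames = []
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            frame = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
            frames.append(cv2.resize(frame, size))
        cap.release()
        return np.stack(frames)  # (T, H, W, 3), uint8

    clip = load_frames("PhysionTrainMP4s/dominoes/example_trial.mp4")  # hypothetical path
    print(clip.shape)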

Downloading PhysionTrain-Readout

PhysionTrain-Readout contains a separate dataset used for training the object-contact prediction (OCP) module for models pretrained on the PhysionTrain-Dynamics dataset. It consists of 1K stimuli per scenario type.

The agent and patient objects in each of these readout stimuli consistently appear in red and yellow, respectively (as in the mp4s-redyellow examples from PhysionTest-Core above).

NB: Code for using these readout sets to benchmark any pretrained model (not just models trained on the Physion training sets) will be released prior to publication.

Download URLs for complete PhysionTrain-Dynamics and PhysionTrain-Readout:

Scenario | Dynamics Training Set            | Readout Training Set            | Test Set
Dominoes | Dominoes_dynamics_training_HDF5s | Dominoes_readout_training_HDF5s | Dominoes_testing_HDF5s
Support  | Support_dynamics_training_HDF5s  | Support_readout_training_HDF5s  | Support_testing_HDF5s
Collide  | Collide_dynamics_training_HDF5s  | Collide_readout_training_HDF5s  | Collide_testing_HDF5s
Contain  | Contain_dynamics_training_HDF5s  | Contain_readout_training_HDF5s  | Contain_testing_HDF5s
Drop     | Drop_dynamics_training_HDF5s     | Drop_readout_training_HDF5s     | Drop_testing_HDF5s
Roll     | Roll_dynamics_training_HDF5s     | Roll_readout_training_HDF5s     | Roll_testing_HDF5s
Link     | Link_dynamics_training_HDF5s     | Link_readout_training_HDF5s     | Link_testing_HDF5s
Drape    | Drape_dynamics_training_HDF5s    | Drape_readout_training_HDF5s    | Drape_testing_HDF5s

Dataset generation

This repo depends on outputs from tdw_physics.

Specifically, tdw_physics is used to generate the dataset of physical scenarios (a.k.a. stimuli), including both the training datasets used to train physical-prediction models and the test datasets used to measure prediction accuracy in both physical-prediction models and human participants.

Instructions for using the ThreeDWorld simulator to regenerate datasets used in our work can be found here. Links for downloading the Physion testing, training, and readout fitting datasets can be found here.

Modeling experiments

The modeling component of this repo depends on the physopt repo. The physopt repo implements an interface through which a wide variety of physics prediction models from the literature (be they neural networks or otherwise) can be adapted to accept the inputs provided by our training and testing datasets and produce outputs for comparison with our human measurements.

The physopt repo also contains code for model training and evaluation. Specifically, physopt implements three train/test protocols; a schematic of the resulting train/test splits follows the list below:

  • The only protocol, in which each candidate physics model architecture is trained -- using that model's native loss function, as specified by the model's authors -- separately on each of the scenarios listed above (e.g., "dominoes", "support", etc.). This produces eight separately trained models per candidate architecture (one per scenario). Each of these models is then tested in comparison to humans on the testing data for that scenario.
  • An all protocol, in which each candidate physics architecture is trained on mixed data from all of the scenarios simultaneously (again, using that model's native loss function). This single model is then tested and compared to humans separately on each scenario.
  • An all-but-one protocol, in which each candidate physics architecture is trained on mixed data drawn from all but one scenario -- separately for each possible choice of held-out scenario. This produces eight separately trained models per candidate architecture (one per held-out scenario). Each of these models is then tested in comparison to humans on the testing data for the held-out scenario.
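
The sketch below is a schematic of the train/test splits these protocols imply. The scenario names come from the paper; the function and its return format are illustrative, not physopt's actual configuration interface.

    # Schematic of the (train scenarios, test scenario) pairs implied by each protocol.
    SCENARIOS = ["dominoes", "support", "collide", "contain", "drop", "roll", "link", "drape"]

    def protocol_splits(protocol):
        if protocol == "only":        # train and test on the same single scenario
            return [([s], s) for s in SCENARIOS]
        if protocol == "all":         # one model trained on everything, tested per scenario
            return [(list(SCENARIOS), s) for s in SCENARIOS]
        if protocol == "all-but-one": # hold out the test scenario during training
            return [([t for t in SCENARIOS if t != s], s) for s in SCENARIOS]
        raise ValueError(f"unknown protocol: {protocol}")

    for train, test in protocol_splits("all-but-one"):
        print(f"train on {len(train)} scenarios, test on {test}")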

Results from each of the three protocols are separately compared to humans (as described below in the section on comparing models and humans). All model-human comparisons are carried out using a representation-learning paradigm, in which models are trained on their native loss functions (as specified by the original authors of each model). Trained models are then evaluated on the specific Physion red-object-contacts-yellow-zone prediction task. This evaluation is carried out by further training a "readout", implemented as a linear logistic regression. Readouts are always trained in a per-scenario fashion.
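
Conceptually, the readout stage looks like the sketch below: a logistic regression fit on features extracted by a pretrained model from the red/yellow readout stimuli, then applied to features from the test stimuli. The feature arrays here are random placeholders; how features are extracted is model-specific and handled inside physopt.

    # Sketch of a per-scenario OCP readout: linear logistic regression on model features.
    # The feature arrays below are random placeholders standing in for model outputs.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    readout_feats = rng.normal(size=(1000, 256))    # features from the readout-fitting stimuli
    readout_labels = rng.integers(0, 2, size=1000)  # did the red object contact the yellow zone?
    test_feats = rng.normal(size=(150, 256))        # features from the test stimuli shown to humans

    readout = LogisticRegression(max_iter=1000)
    readout.fit(readout_feats, readout_labels)
    model_predictions = readout.predict_proba(test_feats)[:, 1]  # compared against human judgments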

Currently, physopt implements the following specific physics prediction models:

Model Name | Original Paper               | Description
SVG        | Denton and Fergus 2018       | Image-like latent
OP3        | Veerapaneni et al. 2020      |
CSWM       | Kipf et al. 2020             |
RPIN       | Qi et al. 2021               |
pVGG-mlp   |                              |
pVGG-lstm  |                              |
pDEIT-mlp  | Touvron et al. 2020          |
pDEIT-lstm |                              |
GNS        | Sanchez-Gonzalez et al. 2020 |
GNS-R      |                              |
DPI        | Li et al. 2019               |

Human experiments

This repo contains code to conduct the human behavioral experiments reported in this paper, as well as analyze the resulting data from both human and modeling experiments.

The details of the experimental design and analysis plan are documented in our study preregistration, which is contained within this repository. The format of this preregistration is adapted from the templates provided by the Open Science Framework and is kept under the same version control as the rest of the codebase for this project.

Here is what each main directory in this repo contains:

  • experiments: This directory contains code to run the online human behavioral experiments reported in this paper. More detailed documentation of this code can be found in the README file nested within the experiments subdirectory.
  • analysis (aka notebooks): This directory contains our analysis Jupyter/Rmd notebooks. This repo assumes you have also imported model evaluation results from physopt.
  • results: This directory contains "intermediate" results of modeling/human experiments. It contains three subdirectories: csv, plots, and summary.
    • /results/csv/ contains CSV files with tidy dataframes of "raw" data.
    • /results/plots/ contains .pdf/.png plots, a selection of which are then polished and formatted for inclusion in the paper using Adobe Illustrator.
    • Important: Before pushing any CSV files containing human behavioral data to a public code repository, triple-check that these data are properly anonymized. This means no bare AMT Worker IDs or Prolific participant IDs.
  • stimuli: This directory contains download/preprocessing scripts for the data (a.k.a. stimuli) that serve as inputs to the human behavioral experiments. This repo assumes you have generated stimuli using tdw_physics. This repo uses code in this directory to upload stimuli to AWS S3 and to generate the metadata that controls the timeline of stimulus presentation in the human behavioral experiments (a minimal upload sketch follows this list).
  • utils: This directory is meant to contain any files containing general helper functions.
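
As mentioned in the stimuli item above, this pattern generally amounts to uploading each video to S3 and writing out per-stimulus metadata; the sketch below shows one way to do that with boto3. The bucket name, paths, and metadata fields are placeholders, not the repo's actual configuration.

    # Minimal sketch (not the repo's upload script): push stimulus MP4s to S3 with boto3
    # and record per-stimulus metadata. Bucket name, paths, and fields are placeholders.
    import json
    from pathlib import Path

    import boto3

    BUCKET = "my-physion-stimuli"  # placeholder bucket name
    s3 = boto3.client("s3")

    metadata = []
    for mp4 in sorted(Path("stimuli/dominoes").glob("*.mp4")):
        key = f"dominoes/{mp4.name}"
        s3.upload_file(str(mp4), BUCKET, key, ExtraArgs={"ContentType": "video/mp4"})
        metadata.append({"stim_name": mp4.stem,
                         "url": f"https://{BUCKET}.s3.amazonaws.com/{key}"})

    Path("metadata_dominoes.json").write_text(json.dumps(metadata, indent=2))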

Comparing models and humans

The results reported in this paper can be reproduced by running the Jupyter notebooks contained in the analysis directory.

  1. Downloading results. To download the "raw" human and model prediction behavior, please navigate to the analysis directory and execute the following command at the command line: python download_results.py. This script will fetch several CSV files and download them to subdirectories within results/csv. If this does not work, please download this zipped folder (csv) and move it to the results directory: https://physics-benchmarking-neurips2021-dataset.s3.amazonaws.com/model_human_results.zip.
  2. Reproducing analyses. To reproduce the key analyses reported in the paper, please run the following notebooks in this sequence:
    • summarize_human_model_behavior.ipynb: The purpose of this notebook is to:
      • Apply preprocessing to human behavioral data
      • Visualize distribution and compute summary statistics over human physical judgments
      • Visualize distribution and compute summary statistics over model physical judgments
      • Conduct human-model comparisons
      • Output summary CSVs that can be used for further statistical modeling & create publication-quality visualizations
    • inference_human_model_behavior.ipynb: The purpose of this notebook is to:
      • Visualize human and model prediction accuracy (proportion correct)
      • Visualize average-human and model agreement (RMSE)
      • Visualize human-human and model-human agreement (Cohen's kappa); a small sketch of these agreement metrics follows this list
      • Compare performance between models
    • paper_plots.ipynb: The purpose of this notebook is to create publication-quality figures for inclusion in the paper.
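
To make the agreement metrics concrete, here is a small sketch of how proportion correct, RMSE, and Cohen's kappa can be computed from a tidy dataframe of per-stimulus responses. The column names and toy values are illustrative and do not reflect the repo's actual CSV schema.

    # Sketch of the three agreement metrics, computed over a toy per-stimulus dataframe.
    # Column names and values are illustrative, not the repo's CSV schema.
    import numpy as np
    import pandas as pd
    from sklearn.metrics import cohen_kappa_score

    df = pd.DataFrame({
        "human_prob": [0.9, 0.2, 0.7, 0.4],   # proportion of participants predicting "hit"
        "model_prob": [0.8, 0.3, 0.4, 0.6],   # readout probability from one model
        "ground_truth": [1, 0, 1, 0],         # simulator outcome
    })

    human_acc = ((df.human_prob > 0.5).astype(int) == df.ground_truth).mean()
    model_acc = ((df.model_prob > 0.5).astype(int) == df.ground_truth).mean()
    rmse = np.sqrt(((df.human_prob - df.model_prob) ** 2).mean())
    kappa = cohen_kappa_score((df.human_prob > 0.5).astype(int),
                              (df.model_prob > 0.5).astype(int))
    print(f"human acc={human_acc:.2f}, model acc={model_acc:.2f}, "
          f"RMSE={rmse:.2f}, kappa={kappa:.2f}")
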
Comments
  • Any plan for releasing Modeling experiments codes?

    Hi, thanks for open-sourcing this great work! I'm currently working on a CSWM-like object-centric model and want to test its performance on the Physion dataset. It would be great if you could release the code for training/testing CSWM on this dataset, so that I can easily integrate my model into the train/test protocols here. So I'm just wondering: is there any plan to release any of the baseline models benchmarked in your paper? Thanks!

    EDIT: okay so I found the code for particle models #45. But they're quite different from e.g. object-centric models because particle models are directly evaluated by calculating the min distance between patient/agent points. They don't involve the readout phase, while my question is mainly regarding the readout phase...

    opened by Wuziyi616 5
  • Error in feedback on familiarization trial for dominoes

    Spotted this error in the feedback on the second familiarization trial of this dominoes demo (https://cogtoolslab.org:8882/index.html). Problem: it says that I got it right when I predicted that "red" would not hit "yellow", but then says "The red object did indeed hit the yellow area."

    opened by judithfan 3
  • Instructions for Creating / Connecting to MongoDB

    @felixbinder , could you augment the README for the human experiments to explain the necessary steps to create the MongoDB and ensure the JS code can connect to it?

    opened by RylanSchaeffer 2
  • Analysis documentation

    • [x] Create requirements.txt?

    Check

    • [x] Can we download all necessary data to reproduce analysis?
    • [x] Given the data, can we reproduce the analysis using the notebooks?
    opened by eliaszwang 2
  • Verify compliance with NeurIPS 2021 D&B formatting requirements

    Link to details: https://neurips.cc/Conferences/2021/CallForDatasetsBenchmarks

    Main submission

    • [x] Verify that submission is <= 9 content pages in NeurIPS format, including all figures and tables; additional pages containing the paper checklist and references are allowed.
    Please carefully follow the Latex template for this track when preparing proposals. We follow the NeurIPS format, but with the appropriate headings, and without hiding the names of the authors.
    
    neurips_data_2021.tex -- LaTeX Template
    
    neurips_data_2021.sty -- style file for LaTeX 2e
    
    neurips_data_2021.pdf -- example PDF output
    
    • [x] Papers should be submitted via OpenReview (click to start your submission). Reviewing is single-blind, hence the paper should not be anonymized.
    • [x] Include public link to the dataset or benchmark data. Set S3 ACL settings to public-read now. @judithfan

    Supplementary materials

    • [x] Dataset documentation and intended uses. Recommended documentation frameworks include datasheets for datasets, dataset nutrition labels, data statements for NLP, and accountability frameworks. @yamins81 / @judithfan
    • [x] Create URL to website/platform where the dataset/benchmark can be viewed and downloaded by the reviewers. If the dataset can only be released later, you must include instructions for reviewers on how to access the dataset. This can only be done after the first submission: after submission, there will be an 'add dataset or benchmark' button where you can leave information for reviewers. We highly recommend making the dataset publicly available immediately or before the start of the NeurIPS conference. In select cases, requiring solid motivation, the release date can be stretched up to a year after the submission deadline. @judithfan
      • [ ] Link to model URLs: appendix.tex see section A.2: Computational details
      • [x] Training data generation
      • [x] Test data hdf5s - uploading the hdf5s
      • [x] Test data mp4 - write script that fetches the "new data" (videos)
      • [x] Human psychophysics - pointer to repo & analysis notebooks/scripts
    • [x] Add Author statement that they bear all responsibility in case of violation of rights, etc., and confirmation of the data license. @yamins81
    • [x] Hosting, licensing, and maintenance plan. The choice of hosting platform is yours, as long as you ensure access to the data (possibly through a curated interface) and will provide the necessary maintenance. @judithfan To ensure accessibility, we largely follow the NeurIPS guidelines for data submission, but also allowing more freedom for non static datasets. The supplementary materials for datasets must include the following:
    • [x] Links to access the dataset and its metadata. This can be hidden upon submission if the dataset is not yet publicly available but must be added in the camera-ready version. In select cases, e.g when the data can only be released at a later date, this can be added afterward. Simulation environments should link to (open source) code repositories. @judithfan
    • [x] The dataset itself should ideally use an open and widely used data format. Provide a detailed explanation on how the dataset can be read. For simulation environments, use existing frameworks or explain how they can be used.
    • [x] Long-term preservation: It must be clear that the dataset will be available for a long time, either by uploading to a data repository or by explaining how the authors themselves will ensure this
    • [x] Explicit license: Authors must choose a license, ideally a CC license for datasets, or an open source license for code (e.g. RL environments). An overview of licenses can be found here: https://paperswithcode.com/datasets/license @yamins81
    • [x] Add structured metadata to a dataset's meta-data page using Web standards (like schema.org and DCAT): This allows it to be discovered and organized by anyone. A guide can be found here: https://developers.google.com/search/docs/data-types/dataset. If you use an existing data repository, this is often done automatically. @yamins81
    • [x] Highly recommended: a persistent dereferenceable identifier (e.g. a DOI minted by a data repository or a prefix on identifiers.org) for datasets, or a code repository (e.g. GitHub, GitLab,...) for code. If this is not possible or useful, please explain why. @yamins81
    • [x] For benchmarks, the supplementary materials must ensure that all results are easily reproducible. Where possible, use a reproducibility framework such as the ML reproducibility checklist, or otherwise guarantee that all results can be easily reproduced, i.e. all necessary datasets, code, and evaluation procedures must be accessible and documented. @judithfan
    opened by judithfan 2
  • enable automatic dataset uploading / downloading

    Similar to: https://github.com/dicarlolab/dldata/blob/2c6cfd038861bbcff94be6c8897768179aff74fc/dldata/stimulus_sets/rust_datasets.py

    TODO: replace with pointers to public repos

    enhancement 
    opened by judithfan 2
  • Uniformize Occluders and Distractors

    Draw occluders and distractors from a common subset of the model library, using a common sampling distribution across controllers.

    IMPORTANT: occluders, distractors, and "quirky" objects should all be models with use_flex=True. This is because only these models have easily accessible meshes that can be used for modeling.

    For FLEX-based controllers, such as clothiness (@arty-p), all objects should have use_flex=True.

    A way to get all the models that are FLEX-enabled is:

    
    from tdw.librarian import ModelLibrarian

    # Print the name of every Flex-enabled model in the full TDW model library.
    lib = ModelLibrarian("models_full.json")
    for record in lib.records:
        if record.flex:
            print(record.name)
    
    
    opened by danielbear 2
  • Test for data saving

    Write a test that automatically retrieves a single record from all collections where behavioral data are being saved & cross-checks that the variables being saved are consistent

    opened by felixbinder 2
  • Human experiments documentation

    • [x] write markdown with steps
    • [x] Link osf ~~add documentation for server-side code~~ ~~add documentation for metadata generation notebooks & S3 upload~~
    • [x] update readme on main page, point to more detailed readme inside ./experiments

    Check

    • [x] Can we follow these instructions to replicate our human experimental setup using exactly the same infrastructure @sradwan15 ~~Can we generate the input files needed for analysis, given experimental data inside a mongoDB?~~
    • [x] Can we use this codebase to extend to variants of our human experiments?
    opened by eliaszwang 1
  • Second pass on human error analysis (aka "adversarial trial" analysis)

    • [x] Naming consistency: Only 80-90 matching trials between human and modeling experiments? Need to debug the stimulus matching issue; there are known discrepancies between the stimulus naming conventions used for model & human evaluations
    • [x] Replicate RMSE analysis for all trials (not just adversarial subset) to verify that we can recover the same pattern in model-human consistency across models (with particle models doing the best, convnets worse, etc.)
    • [x] Visualize raw correlation between human & model predictions on adversarial trials, contextualized among all trials.
    • [x] Compute the noise ceiling on the adversarial trials only (same as in the paper), then normalize these metrics by the noise-ceiling estimates.
    opened by judithfan 1
  • Update wording for overlay

    • [x] Maybe the wording of "red object" is ambiguous, since naturally occurring objects in the scene are also red.
    • [x] We'll need updated familiarization trial instructions
    opened by felixbinder 1
  • from neuralphys.models.rpin import Net import error

    Hi

    I was trying to run the RPIN model and bumped into this import error:

    ModuleNotFoundError: No module named 'neuralphys'

    It seems like neuralphys is not an open-source package that I can download.

    Also would you please explain how I can run the DPI model?

    It seems like the model takes as input some preprocessed dataset that is different from other H5 files.

    Any help is appreciated.

    opened by Cheng-Xue 2
  • Readme improvement to help better find controllers for stimuli generation

    - Tell users which set of controllers to use (the ones under tdw_physics/tdw_physics/target_controller or physics-benchmarking-neurips2021/stimuli/generation/controllers/)

    • If we want them to use tdw_physics ones, we need
      • clear instructions for where to find physion controllers (i.e., maybe rename target_controller to physion_controller, or bold target_controller to emphasize it; the controller folder can easily be mistaken for the right one)
      • better pointer to command_line_args for regeneration (say it's under "physics-benchmarking-neurips2021/stimuli/generation/configs/" but not in tdw)
    • If we want them to use tdw_physics ones, we need to fix
      • a clear pointer to stimuli controllers (currently the right controller is in tdw_physics/tdw_physics/target_controller and there is also a ./controllers folder)
    documentation 
    opened by DylanTao 0
  • Model Evaluation of DPI-Net and other particle representation based work

    Hi,

    This is very impressive work! I believe it will be like ImageNet for physical prediction. However, I am facing many difficulties trying to follow it. The problems are mostly about model evaluation of DPI-Net/GNS, etc. My questions are the following:

    1. Does this dataset offer particle representations for evaluation and training of DPI-Net/GNS/HRN? Can we download them directly, like the mp4 data? (That would be very helpful!) How many particles are used? Does each particle contain information like mass, velocity, stiffness, etc.? How can I obtain the particle data?

    2. DPI-Net only takes one G_0 as the initial state to produce future simulations. For Physion, how many time steps are used as inputs? How many time steps are used as predictions? (In the human accuracy evaluation as well.) Is the last frame used as the initial state for DPI-Net?

    3. This code offers physopt as a tool to train the DPI-Net model, but I find it quite complex. Could you provide the script to train DPI-Net and other particle-based models? More detailed explanations of this part would help other researchers follow this work.

    Thank you very much if you would like to help!

    opened by LittleFlyFish 3
  • Modeling documentation

    • [ ] move utils from physion to physopt.model.utils, clean up physion
    • [ ] Update dataprovider with new file structure
    • [ ] Document paths, naming of artifacts, make sure aligns with analysis code
    • [ ] Add particle-based models to physopt
    • [ ] In physopt, make it easy to run only parts of the full pipeline. Document in README
    • [ ] Update README for running each model
    • [ ] In "Modeling Experiments" section of README: describe how to reproduce physion modeling results, provide final model checkpoints?
    • [ ] create private+public repo setup

    Check

    • [ ] Can we train each model?
    • [ ] Can we train the readout and evaluate on human test set?
    • [ ] Can we generate the input files needed for analysis?
    • [ ] Can we add new models easily?
    opened by eliaszwang 0
Owner: Cognitive Tools Lab (reverse engineering the human cognitive toolkit)