Epistemic Model Checker

A model checker for verifying properties in epistemic models.

Overview

The goal of the model checker is to check for pluralistic ignorance in complex scenarios and to probe its robustness. The motivation comes from the paper [1].

Run the model checker by executing run.py with your favorite Python interpreter. The script takes two arguments, a model file and an epistemic logic formula:

python run.py model_file.em "(~p) \/ p"
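
The same check can also be launched from another Python script. Below is a minimal sketch, assuming run.py sits in the current working directory and that plain python starts a suitable interpreter; the model file name and formula are simply the ones from the example above:

import subprocess

# Invoke the checker exactly as on the command line above; the model file
# name and formula are placeholders from the example, not real inputs.
result = subprocess.run(
    ["python", "run.py", "model_file.em", r"(~p) \/ p"],
    capture_output=True,
    text=True,
)
print(result.stdout)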

Model file format:

{states}
{agents}
{agent}:({state}<={state}),...
...
{agent}:({state}<={state}),...
{state}:{proposition},...
...
{state}:{proposition},...

The agents' relations are made transitive and reflexive upon parsing, so the user does not have to specify these pairs explicitly. Here is an example model file:

s0,s1,s2
a,b
a:(s0<=s1),(s1<=s2)
b:(s2<=s0)
s1:q
s1:p
s2:p,r

This model contains three states (s0, s1, and s2) and two agents (a and b). The last three lines give the valuation: q and p hold in s1, and p and r hold in s2.
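
To make the format concrete, here is a rough sketch of how such a file could be read into plain Python data structures, including the reflexive and transitive closure described above. The function below is hypothetical and only illustrates the format; it is not the checker's actual parser:

import re
from collections import defaultdict

def parse_model(path):
    # Hypothetical reader for the model file format sketched above.
    with open(path) as f:
        lines = [line.strip() for line in f if line.strip()]

    states = lines[0].split(",")
    agents = lines[1].split(",")

    relations = defaultdict(set)   # agent -> set of (state, state) pairs
    valuations = defaultdict(set)  # state -> set of propositions
    for line in lines[2:]:
        name, rest = line.split(":", 1)
        if name in agents:
            # Relation lines look like "a:(s0<=s1),(s1<=s2)"
            relations[name].update(re.findall(r"\((\w+)<=(\w+)\)", rest))
        else:
            # Valuation lines look like "s2:p,r"
            valuations[name].update(rest.split(","))

    # Close each relation reflexively and transitively, as the checker
    # does when parsing a model.
    for agent in agents:
        rel = relations[agent]
        rel.update((s, s) for s in states)
        changed = True
        while changed:
            changed = False
            for (a, b) in list(rel):
                for (c, d) in list(rel):
                    if b == c and (a, d) not in rel:
                        rel.add((a, d))
                        changed = True
    return states, agents, relations, valuations

For the example file above, such a reader would, for instance, add (s0, s2) to agent a's relation after taking the transitive closure.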

The formulas are written using the following operators:

  • K = knowledge
  • B = belief
  • S = safe belief
  • W = weakly safe belief
  • T = strong belief
  • I = ignorance
  • D = doubt
  • /\ = AND
  • \/ = OR
  • => = implication
  • ~ = NOT
  • (f1) ! (f2) = [!f1]f2 (announcement)

Some example formulas are shown below:

p /\ (~q)
I_a (D_b (p))
(K_a p) => (B_a p)
K_a ((B_b (~p)) ! (~(B_c (B_b (~p)))))
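
To show how such formulas are checked in principle, here is a rough sketch of a recursive evaluator over the data structures from the parsing sketch above. The tuple-based formula representation and the function name are hypothetical, and the clause for K follows the usual plausibility-model reading (a formula is known at a state if it holds at every state comparable to it in the agent's order) rather than necessarily the checker's exact implementation; the belief, safe belief, strong belief, ignorance, and doubt operators would get analogous clauses taken from [1]:

def holds(state, formula, relations, valuations):
    # Formulas are nested tuples, e.g. ("p",), ("not", f), ("and", f, g),
    # ("or", f, g), ("implies", f, g), ("K", "a", f).
    op = formula[0]
    if op == "not":
        return not holds(state, formula[1], relations, valuations)
    if op == "and":
        return all(holds(state, f, relations, valuations) for f in formula[1:])
    if op == "or":
        return any(holds(state, f, relations, valuations) for f in formula[1:])
    if op == "implies":
        return (not holds(state, formula[1], relations, valuations)
                or holds(state, formula[2], relations, valuations))
    if op == "K":
        agent, sub = formula[1], formula[2]
        # Knowledge: the subformula holds at every state the agent cannot
        # rule out, i.e. every state comparable to the current one in the
        # agent's plausibility order.
        comparable = {t for (s, t) in relations[agent] if s == state}
        comparable |= {s for (s, t) in relations[agent] if t == state}
        return all(holds(t, sub, relations, valuations) for t in comparable)
    # Otherwise the formula is an atomic proposition such as ("p",).
    return op in valuations[state]

For instance, under this reading, K_a p would fail at s1 in the example model above, because s0 is comparable to s1 for agent a and p does not hold there.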

Please note that the parsing of parentheses can be tricky. If you encounter an error in the parse function, try adding or removing parentheses in the formula.

Future work

The following features would be nice to have:

  • Better parsing of logic formulas
  • Caching of results from intermediate steps
  • Define formulas in files instead of in the command line

Bibliography

[1] Hansen, Jens Ulrik. "A logic-based approach to pluralistic ignorance." Proceedings of PhDs in Logic III (2012), to appear.
