Logical Neural Networks
LNNs are a novel neuro-symbolic framework designed to seamlessly provide key properties of both neural networks (learning) and symbolic logic (knowledge and reasoning).
- Every neuron has a meaning as a component of a formula in a weighted real-valued logic, yielding a highly interpretable disentangled representation.
- Inference is omnidirectional rather than focused on predefined target variables, and corresponds to logical reasoning, including classical first-order logic (FOL) theorem proving as a special case.
- The model is end-to-end differentiable, and learning minimizes a novel loss function capturing logical contradiction, yielding resilience to inconsistent knowledge.
- It also enables the open-world assumption by maintaining bounds on truth values which can have probabilistic semantics, yielding resilience to incomplete knowledge.
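The weighted real-valued logic and truth-value bounds above can be illustrated with a small sketch. This is not the library's implementation; it assumes a Łukasiewicz-style weighted conjunction of the form described in the LNN paper, with hypothetical helper names `weighted_and` and `weighted_and_bounds`:

```python
# Sketch of a weighted real-valued AND neuron with [lower, upper] truth
# bounds, loosely following the weighted Lukasiewicz logic of the LNN
# paper. Illustrative only -- not the library's actual API.

def weighted_and(inputs, weights, beta=1.0):
    """Weighted Lukasiewicz AND: clamp beta - sum(w * (1 - x)) to [0, 1]."""
    s = beta - sum(w * (1.0 - x) for x, w in zip(inputs, weights))
    return max(0.0, min(1.0, s))

def weighted_and_bounds(bounds, weights, beta=1.0):
    """Propagate [lower, upper] truth bounds through the AND neuron.

    The activation is monotone in each input, so the lower (upper)
    output bound follows from the lower (upper) input bounds.
    """
    lowers = [l for l, _ in bounds]
    uppers = [u for _, u in bounds]
    return (weighted_and(lowers, weights, beta),
            weighted_and(uppers, weights, beta))

# Fully true inputs yield a true conjunction...
print(weighted_and([1.0, 1.0], [1.0, 1.0]))  # 1.0
# ...while unknown inputs (bounds [0, 1]) leave the output unknown,
# reflecting the open-world assumption.
print(weighted_and_bounds([(0.0, 1.0), (0.0, 1.0)], [1.0, 1.0]))  # (0.0, 1.0)
```

Because the bounds stay wide when knowledge is incomplete, the network can distinguish "unknown" from "false" rather than forcing a point estimate.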
Quickstart
To install the LNN:
- Install GraphViz
- Run:

  ```shell
  pip install git+https://github.com/IBM/LNN.git
  ```
Documentation
| Read the Docs | Academic Papers | Educational Resources | Neuro-Symbolic AI | API Overview | Python Module |
| --- | --- | --- | --- | --- | --- |
Citation
If you use Logical Neural Networks for research, please consider citing the reference paper:
@article{riegel2020logical,
  title={Logical neural networks},
  author={Riegel, Ryan and Gray, Alexander and Luus, Francois and Khan, Naweed and Makondo, Ndivhuwo and Akhalwaya, Ismail Yunus and Qian, Haifeng and Fagin, Ronald and Barahona, Francisco and Sharma, Udit and others},
  journal={arXiv preprint arXiv:2006.13155},
  year={2020}
}