B-cos Networks: Alignment is All we Need for Interpretability
M. Böhle, M. Fritz, B. Schiele. B-cos Networks: Alignment is All we Need for Interpretability. CVPR, 2022.
Overview
- Qualitative Examples
- Apart from the examples shown above, we additionally present comparisons to post-hoc explanation methods.
- To highlight the stability of the explanations, we also present contribution maps evaluated on videos.
- For the latter, see VideoEvaluation.ipynb; for the others, check out the Jupyter notebook Qualitative Examples.
Evaluation on videos
To highlight the stability of the contribution-based explanations of the B-cos-Nets, we provide explanations on videos here; for more information, see VideoEvaluation.ipynb.
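For orientation, below is a minimal sketch of how such per-frame contribution maps could be computed. It assumes a pretrained B-cos model is already loaded as `model` and that frames are given as a `(T, C, H, W)` tensor; these names are illustrative and not part of the repository's API. It also relies on the dynamic-linear view of B-cos networks (the output is a dynamic linear mapping W(x) x of the input), so reading the contributions off as input × gradient is only an approximation unless the dynamic weights are treated as constants.

```python
import torch

def contribution_map(model, frame, target_class):
    """Contribution map for one frame under the dynamic-linear view:
    if the model output behaves like W(x) x with W(x) treated as constant,
    the per-pixel contributions to the target logit are x * d(logit)/dx.
    NOTE: illustrative sketch, not the repository's exact implementation."""
    x = frame.detach().clone().unsqueeze(0).requires_grad_(True)  # (1, C, H, W)
    logit = model(x)[0, target_class]
    grad, = torch.autograd.grad(logit, x)
    # Sum the contributions over the colour channels -> (H, W) map.
    return (x * grad).sum(dim=1).squeeze(0).detach()

def video_contribution_maps(model, frames, target_class):
    # Frame-by-frame evaluation; stacking preserves the temporal order,
    # which makes the stability of the maps easy to inspect as a video.
    return torch.stack([contribution_map(model, f, target_class) for f in frames])
```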
Comparison to post-hoc methods
Quantitative interpretability results
To reproduce these plots, check out the Jupyter notebook Quantitative results. For more information, see the paper and the code at interpretability/.
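As a rough illustration of this kind of evaluation, the sketch below implements a generic pixel-perturbation protocol: pixels are removed in order of attributed importance and the model's confidence in the target class is recorded after each removal step. The function name and the exact protocol (removal by zeroing, number of steps) are assumptions for illustration and do not necessarily match the metrics implemented under interpretability/.

```python
import torch

def pixel_perturbation_curve(model, image, attribution, target_class, steps=20):
    """Remove the most highly attributed pixels in `steps` increments and
    record the softmax confidence of `target_class` after each removal.
    `image`: (C, H, W) tensor, `attribution`: (H, W) importance map.
    NOTE: illustrative sketch of a pixel-perturbation metric."""
    order = attribution.flatten().argsort(descending=True)  # most important first
    n_pixels = order.numel()
    confidences = []
    x = image.clone()
    for step in range(steps + 1):
        with torch.no_grad():
            probs = torch.softmax(model(x.unsqueeze(0)), dim=1)
        confidences.append(probs[0, target_class].item())
        # Zero out the next chunk of most important pixels (all channels).
        start = step * n_pixels // steps
        end = (step + 1) * n_pixels // steps
        idx = order[start:end]
        x.view(x.shape[0], -1)[:, idx] = 0.0
    return confidences
```

A faithful attribution method should yield a curve whose confidence drops quickly as the most important pixels are removed.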
Copyright and license
Copyright (c) 2022 Moritz Böhle, Max-Planck-Gesellschaft
This code is licensed under the BSD License 2.0; see license.
Further, if you use any of the code in this repository for your research, please cite it as:
@inproceedings{Boehle2022CVPR,
  author    = {Moritz Böhle and Mario Fritz and Bernt Schiele},
  title     = {B-cos Networks: Alignment is All we Need for Interpretability},
  booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition ({CVPR})},
  year      = {2022}
}