A library for benchmarking, developing and deploying deep learning anomaly detection algorithms

Overview



Introduction

Anomalib is a deep learning library that aims to collect state-of-the-art anomaly detection algorithms for benchmarking on both public and private datasets. Anomalib provides several ready-to-use implementations of anomaly detection algorithms described in the recent literature, as well as a set of tools that facilitate the development and implementation of custom models. The library has a strong focus on image-based anomaly detection, where the goal of the algorithm is to identify anomalous images, or anomalous pixel regions within images in a dataset. Anomalib is constantly updated with new algorithms and training/inference extensions, so keep checking!

Sample Image

Key features:

  • The largest public collection of ready-to-use deep learning anomaly detection algorithms and benchmark datasets.
  • PyTorch Lightning based model implementations to reduce boilerplate code and limit the implementation efforts to the bare essentials.
  • All models can be exported to OpenVINO Intermediate Representation (IR) for accelerated inference on Intel hardware.
  • A set of inference tools for quick and easy deployment of the standard or custom anomaly detection models.

Getting Started

To get an overview of all the devices where anomalib has been tested thoroughly, look at the Supported Hardware section in the documentation.

PyPI Install

You can get started with anomalib by just using pip.

pip install anomalib

Local Install

It is highly recommended to use a virtual environment when installing anomalib. For instance, with Anaconda, anomalib can be installed as follows:

yes | conda create -n anomalib_env python=3.8
conda activate anomalib_env
git clone https://github.com/openvinotoolkit/anomalib.git
cd anomalib
pip install -e .
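
A quick way to check that the editable install worked is to import the package. This assumes anomalib exposes a __version__ attribute, which recent releases do; adjust if your version differs.

# Sanity check for the editable install; __version__ is assumed to exist in recent releases.
import anomalib

print(anomalib.__version__)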

Training

By default, python tools/train.py runs the PADIM model on the leather category of the MVTec dataset.

python tools/train.py    # Train PADIM on MVTec leather

Training a model on a specific dataset and category requires further configuration. Each model has its own configuration file, config.yaml, which contains the data, model and training configurable parameters. To train a specific model on a specific dataset and category, the config file must be provided:

python tools/train.py --model_config_path <path/to/model/config.yaml>

For example, to train PADIM you can use

python tools/train.py --model_config_path anomalib/models/padim/config.yaml

Alternatively, a model name can be provided as an argument, in which case the script automatically finds the corresponding config file:

python tools/train.py --model padim

The currently available model names correspond to the models benchmarked below (for example, padim, patchcore, cflow, stfpm, dfm and dfkde).
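
The same training flow can also be driven from Python. The sketch below mirrors what tools/train.py does at the time of writing; the helper names (get_configurable_parameters, get_datamodule, get_model, get_callbacks) are assumed from the current anomalib API and may differ between releases, so treat this as a rough outline rather than a guaranteed interface.

# Rough sketch of training PADIM programmatically, mirroring tools/train.py.
# Helper names are assumed from the current anomalib API and may change between releases.
from pytorch_lightning import Trainer

from anomalib.config import get_configurable_parameters
from anomalib.data import get_datamodule
from anomalib.models import get_model
from anomalib.utils.callbacks import get_callbacks

config = get_configurable_parameters(model_name="padim")  # loads anomalib/models/padim/config.yaml
datamodule = get_datamodule(config)                        # dataset and category come from the config
model = get_model(config)
callbacks = get_callbacks(config)

trainer = Trainer(**config.trainer, callbacks=callbacks)
trainer.fit(model=model, datamodule=datamodule)
trainer.test(model=model, datamodule=datamodule)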

Inference

Anomalib contains several tools that can be used to perform inference with a trained model. The script in tools/inference contains an example of how the inference tools can be used to generate a prediction for an input image.

If the specified weight path points to a PyTorch Lightning checkpoint file (.ckpt), inference will run in PyTorch. If the path points to an ONNX graph (.onnx) or OpenVINO IR (.bin or .xml), inference will run in OpenVINO.

The following command can be used to run inference from the command line:

python tools/inference.py \
    --model_config_path <path/to/model/config.yaml> \
    --weight_path <path/to/weight/file> \
    --image_path <path/to/image>

As a quick example:

python tools/inference.py \
    --model_config_path anomalib/models/padim/config.yaml \
    --weight_path results/padim/mvtec/bottle/weights/model.ckpt \
    --image_path datasets/MVTec/bottle/test/broken_large/000.png
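
The inferencers can also be used programmatically. Below is a rough sketch with the PyTorch inferencer; the TorchInferencer(config=..., model_source=...) constructor arguments are taken from the traceback quoted further down this page, and predict() is assumed from the current deploy API, so verify the exact interface against your installed version.

# Rough sketch of programmatic inference with a trained checkpoint (PyTorch backend).
# Constructor arguments follow TorchInferencer(config=..., model_source=...) as seen
# in the traceback later on this page; verify against your anomalib version.
from anomalib.deploy.inferencers.torch_inferencer import TorchInferencer

inferencer = TorchInferencer(
    config="anomalib/models/padim/config.yaml",
    model_source="results/padim/mvtec/bottle/weights/model.ckpt",
)
predictions = inferencer.predict(image="datasets/MVTec/bottle/test/broken_large/000.png")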

If you want to run the OpenVINO model, ensure that compression apply is set to true in the respective model config.yaml:

optimization:
  compression:
    apply: true

Example OpenVINO Inference:

python tools/inference.py \
    --model_config_path anomalib/models/padim/config.yaml \
    --weight_path results/padim/mvtec/bottle/compressed/compressed_model.xml \
    --image_path datasets/MVTec/bottle/test/broken_large/000.png \
    --meta_data results/padim/mvtec/bottle/compressed/meta_data.json

Ensure that you provide the path to meta_data.json if you want the normalization to be applied correctly.


Datasets

MVTec Dataset

Image-Level AUC

| Model | Avg | Carpet | Grid | Leather | Tile | Wood | Bottle | Cable | Capsule | Hazelnut | Metal Nut | Pill | Screw | Toothbrush | Transistor | Zipper |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| PatchCore Wide ResNet-50 | 0.980 | 0.984 | 0.959 | 1.000 | 1.000 | 0.989 | 1.000 | 0.990 | 0.982 | 1.000 | 0.994 | 0.924 | 0.960 | 0.933 | 1.000 | 0.982 |
| PatchCore ResNet-18 | 0.973 | 0.970 | 0.947 | 1.000 | 0.997 | 0.997 | 1.000 | 0.986 | 0.965 | 1.000 | 0.991 | 0.916 | 0.943 | 0.931 | 0.996 | 0.953 |
| CFlow Wide ResNet-50 | 0.962 | 0.986 | 0.962 | 1.0 | 0.999 | 0.993 | 1.0 | 0.893 | 0.945 | 1.0 | 0.995 | 0.924 | 0.908 | 0.897 | 0.943 | 0.984 |
| PaDiM Wide ResNet-50 | 0.950 | 0.995 | 0.942 | 1.0 | 0.974 | 0.993 | 0.999 | 0.878 | 0.927 | 0.964 | 0.989 | 0.939 | 0.845 | 0.942 | 0.976 | 0.882 |
| PaDiM ResNet-18 | 0.891 | 0.945 | 0.857 | 0.982 | 0.950 | 0.976 | 0.994 | 0.844 | 0.901 | 0.750 | 0.961 | 0.863 | 0.759 | 0.889 | 0.920 | 0.780 |
| STFPM Wide ResNet-50 | 0.876 | 0.957 | 0.977 | 0.981 | 0.976 | 0.939 | 0.987 | 0.878 | 0.732 | 0.995 | 0.973 | 0.652 | 0.825 | 0.5 | 0.875 | 0.899 |
| STFPM ResNet-18 | 0.893 | 0.954 | 0.982 | 0.989 | 0.949 | 0.961 | 0.979 | 0.838 | 0.759 | 0.999 | 0.956 | 0.705 | 0.835 | 0.997 | 0.853 | 0.645 |
| DFM Wide ResNet-50 | 0.891 | 0.978 | 0.540 | 0.979 | 0.977 | 0.974 | 0.990 | 0.891 | 0.931 | 0.947 | 0.839 | 0.809 | 0.700 | 0.911 | 0.915 | 0.981 |
| DFM ResNet-18 | 0.894 | 0.864 | 0.558 | 0.945 | 0.984 | 0.946 | 0.994 | 0.913 | 0.871 | 0.979 | 0.941 | 0.838 | 0.761 | 0.95 | 0.911 | 0.949 |
| DFKDE Wide ResNet-50 | 0.774 | 0.708 | 0.422 | 0.905 | 0.959 | 0.903 | 0.936 | 0.746 | 0.853 | 0.736 | 0.687 | 0.749 | 0.574 | 0.697 | 0.843 | 0.892 |
| DFKDE ResNet-18 | 0.762 | 0.646 | 0.577 | 0.669 | 0.965 | 0.863 | 0.951 | 0.751 | 0.698 | 0.806 | 0.729 | 0.607 | 0.694 | 0.767 | 0.839 | 0.866 |

Pixel-Level AUC

| Model | Avg | Carpet | Grid | Leather | Tile | Wood | Bottle | Cable | Capsule | Hazelnut | Metal Nut | Pill | Screw | Toothbrush | Transistor | Zipper |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| PatchCore Wide ResNet-50 | 0.980 | 0.988 | 0.968 | 0.991 | 0.961 | 0.934 | 0.984 | 0.988 | 0.988 | 0.987 | 0.989 | 0.980 | 0.989 | 0.988 | 0.981 | 0.983 |
| PatchCore ResNet-18 | 0.976 | 0.986 | 0.955 | 0.990 | 0.943 | 0.933 | 0.981 | 0.984 | 0.986 | 0.986 | 0.986 | 0.974 | 0.991 | 0.988 | 0.974 | 0.983 |
| CFlow Wide ResNet-50 | 0.971 | 0.986 | 0.968 | 0.993 | 0.968 | 0.924 | 0.981 | 0.955 | 0.988 | 0.990 | 0.982 | 0.983 | 0.979 | 0.985 | 0.897 | 0.980 |
| PaDiM Wide ResNet-50 | 0.979 | 0.991 | 0.970 | 0.993 | 0.955 | 0.957 | 0.985 | 0.970 | 0.988 | 0.985 | 0.982 | 0.966 | 0.988 | 0.991 | 0.976 | 0.986 |
| PaDiM ResNet-18 | 0.968 | 0.984 | 0.918 | 0.994 | 0.934 | 0.947 | 0.983 | 0.965 | 0.984 | 0.978 | 0.970 | 0.957 | 0.978 | 0.988 | 0.968 | 0.979 |
| STFPM Wide ResNet-50 | 0.903 | 0.987 | 0.989 | 0.980 | 0.966 | 0.956 | 0.966 | 0.913 | 0.956 | 0.974 | 0.961 | 0.946 | 0.988 | 0.178 | 0.807 | 0.980 |
| STFPM ResNet-18 | 0.951 | 0.986 | 0.988 | 0.991 | 0.946 | 0.949 | 0.971 | 0.898 | 0.962 | 0.981 | 0.942 | 0.878 | 0.983 | 0.983 | 0.838 | 0.972 |

Image F1 Score

| Model | Avg | Carpet | Grid | Leather | Tile | Wood | Bottle | Cable | Capsule | Hazelnut | Metal Nut | Pill | Screw | Toothbrush | Transistor | Zipper |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| PatchCore Wide ResNet-50 | 0.976 | 0.971 | 0.974 | 1.000 | 1.000 | 0.967 | 1.000 | 0.968 | 0.982 | 1.000 | 0.984 | 0.940 | 0.943 | 0.938 | 1.000 | 0.979 |
| PatchCore ResNet-18 | 0.970 | 0.949 | 0.946 | 1.000 | 0.98 | 0.992 | 1.000 | 0.978 | 0.969 | 1.000 | 0.989 | 0.940 | 0.932 | 0.935 | 0.974 | 0.967 |
| CFlow Wide ResNet-50 | 0.944 | 0.972 | 0.932 | 1.0 | 0.988 | 0.967 | 1.0 | 0.832 | 0.939 | 1.0 | 0.979 | 0.924 | 0.971 | 0.870 | 0.818 | 0.967 |
| PaDiM Wide ResNet-50 | 0.951 | 0.989 | 0.930 | 1.0 | 0.960 | 0.983 | 0.992 | 0.856 | 0.982 | 0.937 | 0.978 | 0.946 | 0.895 | 0.952 | 0.914 | 0.947 |
| PaDiM ResNet-18 | 0.916 | 0.930 | 0.893 | 0.984 | 0.934 | 0.952 | 0.976 | 0.858 | 0.960 | 0.836 | 0.974 | 0.932 | 0.879 | 0.923 | 0.796 | 0.915 |
| STFPM Wide ResNet-50 | 0.926 | 0.973 | 0.973 | 0.974 | 0.965 | 0.929 | 0.976 | 0.853 | 0.920 | 0.972 | 0.974 | 0.922 | 0.884 | 0.833 | 0.815 | 0.931 |
| STFPM ResNet-18 | 0.932 | 0.961 | 0.982 | 0.989 | 0.930 | 0.951 | 0.984 | 0.819 | 0.918 | 0.993 | 0.973 | 0.918 | 0.887 | 0.984 | 0.790 | 0.908 |
| DFM Wide ResNet-50 | 0.918 | 0.960 | 0.844 | 0.990 | 0.970 | 0.959 | 0.976 | 0.848 | 0.944 | 0.913 | 0.912 | 0.919 | 0.859 | 0.893 | 0.815 | 0.961 |
| DFM ResNet-18 | 0.919 | 0.895 | 0.844 | 0.926 | 0.971 | 0.948 | 0.977 | 0.874 | 0.935 | 0.957 | 0.958 | 0.921 | 0.874 | 0.933 | 0.833 | 0.943 |
| DFKDE Wide ResNet-50 | 0.875 | 0.907 | 0.844 | 0.905 | 0.945 | 0.914 | 0.946 | 0.790 | 0.914 | 0.817 | 0.894 | 0.922 | 0.855 | 0.845 | 0.722 | 0.910 |
| DFKDE ResNet-18 | 0.872 | 0.864 | 0.844 | 0.854 | 0.960 | 0.898 | 0.942 | 0.793 | 0.908 | 0.827 | 0.894 | 0.916 | 0.859 | 0.853 | 0.756 | 0.916 |
Comments
  • 1 epoch

    1 epoch

    Is it meant to be only one epoch of training? Your config files state 1 epoch; is that just a quick example? I tried to train PADIM for 10 epochs on MVTec leather and wood and the metrics stay the same anyway, so it seems nothing is gained by training more. The Lightning module also warns that there is no optimizer, so I guess training only finds the correct thresholds, and that takes 1 epoch.
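
    For context, a rough sketch of why the warning appears: Lightning prints the "will run with no optimizer" message whenever configure_optimizers returns nothing, which is the pattern feature-based models such as PaDiM and PatchCore follow. The module below is hypothetical and is not anomalib's actual class:

    # Hypothetical sketch: a feature-collecting LightningModule with no optimizer.
    # Lightning emits the "will run with no optimizer" warning for exactly this pattern.
    import pytorch_lightning as pl
    import torch

    class FeatureCollector(pl.LightningModule):
        def __init__(self):
            super().__init__()
            self.embeddings = []

        def training_step(self, batch, batch_idx):
            with torch.no_grad():
                # stand-in for extracting backbone features from the batch
                self.embeddings.append(batch["image"].mean(dim=(2, 3)))
            return None  # no loss, so there is nothing to optimize

        def configure_optimizers(self):
            return None  # triggers the "no optimizer" warning; one pass over the data is enough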

    opened by sequoiagrove 20
  • Improving the result for custom dataset

    Improving the result for custom dataset

    Hi, I am able to run the code, but when working on my custom dataset it does not manage to detect the defects. I tried changing the threshold values, epoch numbers, etc. Do you have any advice for getting good results on my custom dataset? Thanks.

    Metrics 
    opened by ZeynepRuveyda 18
  • add option to load metrics with kwargs

    add option to load metrics with kwargs

    Description

    • Fixes #687

    Changes

    • [ ] Bug fix (non-breaking change which fixes an issue)
    • [ ] Refactor (non-breaking change which refactors the code base)
    • [X] New feature (non-breaking change which adds functionality)
    • [ ] Breaking change (fix or feature that would cause existing functionality to not work as expected)
    • [X] This change requires a documentation update

    Checklist

    • [X] My code follows the pre-commit style and check guidelines of this project.
    • [X] I have performed a self-review of my code
    • [X] I have commented my code, particularly in hard-to-understand areas
    • [ ] I have made corresponding changes to the documentation
    • [ ] My changes generate no new warnings
    • [x] I have added tests that prove my fix is effective or that my feature works
    • [x] New and existing tests pass locally with my changes
    Metrics Callbacks Tests Docs 
    opened by jpcbertoldo 15
  • Fix #699

    Fix #699

    • Fixes #699

    Changes

    • [X] Bug fix (non-breaking change which fixes an issue)
    • [ ] Refactor (non-breaking change which refactors the code base)
    • [ ] New feature (non-breaking change which adds functionality)
    • [ ] Breaking change (fix or feature that would cause existing functionality to not work as expected)
    • [ ] This change requires a documentation update

    Checklist

    • [ ] My code follows the pre-commit style and check guidelines of this project.
    • [ ] I have performed a self-review of my code
    • [ ] I have commented my code, particularly in hard-to-understand areas
    • [ ] I have made corresponding changes to the documentation
    • [ ] My changes generate no new warnings
    • [ ] I have added tests that prove my fix is effective or that my feature works
    • [ ] New and existing tests pass locally with my changes
    opened by jpcbertoldo 14
  • TypeError: unsupported operand type(s) for /: 'NoneType' and 'float'

    TypeError: unsupported operand type(s) for /: 'NoneType' and 'float'

    I want to train CFLOW model on custom dataset.

    config.yaml

    dataset:
      name: Concrete_Crack #options: [mvtec, btech, folder, concrete_crack]
      format: folder # mvtec
      path: ./datasets/Concrete_Crack/ # ./datasets/MVTec
      normal_dir: 'train/Negative'
      abnormal_dir: 'test/Positive'
      normal_test_dir: 'test/Negative'
      task: segmentation
      mask: ./datasets/Concrete_Crack/ground_truth/
      extensions: '.jpg'
      split_ratio: 0.1
      seed: 0
    #  category: bottle
      image_size: 227
      train_batch_size: 8 # 16
      test_batch_size: 8 # 16
      inference_batch_size: 8 # 16
      fiber_batch_size: 64
      num_workers: 8
      transform_config:
        train: null
        val: null
      create_validation_set: false
    

    My dataset has the same structure as MVTec.

    To Reproduce

    python3 tools/train.py --model_config_path anomalib/models/cflow/config.yaml

    When I start the training, I get an error:

    File "/home/Projects/Anomalib/anomalib/anomalib/data/folder.py", line 282, in __getitem__
        mask = cv2.imread(mask_path, flags=0) / 255.0
    TypeError: unsupported operand type(s) for /: 'NoneType' and 'float'
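
    For context: cv2.imread returns None (rather than raising) when the file at mask_path is missing or unreadable, so this error usually means the ground-truth mask path does not line up with the image being loaded. A hypothetical defensive check along these lines makes the failure explicit:

    # Hypothetical defensive check; cv2.imread returns None for missing/unreadable files.
    import cv2

    mask_path = "./datasets/Concrete_Crack/ground_truth/example.png"  # illustrative path
    mask = cv2.imread(mask_path, flags=0)
    if mask is None:
        raise FileNotFoundError(f"Could not read ground-truth mask: {mask_path}")
    mask = mask / 255.0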
    
    Data 
    opened by andriy-onufriyenko 14
  • PatchCore results are much worse than reported

    PatchCore results are much worse than reported

    Describe the bug

    • A clear and concise description of what the bug is.

    To Reproduce

    Steps to reproduce the behavior:

    1. Go to the main directory
    2. Run python tools/train.py --model patchcore

    Expected behavior

    The image AUROC is expected to be 0.98 for the carpet category of the MVTec dataset, but it is very low. FastFlow works as expected, so the problem seems to be PatchCore.

    Screenshots

    image

    Hardware and Software Configuration

    • OS: [Ubuntu]
    • NVIDIA Driver Version [470.141.03]
    • CUDA Version [11.4]
    • CUDNN Version [e.g. v11.4.120]

    Log

    WARNING: CPU random generator seem to be failing, disabling hardware random number generation WARNING: RDRND generated: 0xffffffff 0xffffffff 0xffffffff 0xffffffff ----------------------------------/anomalib/config/config.py:166: UserWarning: config.project.unique_dir is set to False. This does not ensure that your results will be written in an empty directory and you may overwrite files. warn( 2022-11-16 11:52:49,662 - anomalib.data - INFO - Loading the datamodule 2022-11-16 11:52:49,662 - anomalib.pre_processing.pre_process - WARNING - Transform configs has not been provided. Images will be normalized using ImageNet statistics. 2022-11-16 11:52:49,663 - anomalib.pre_processing.pre_process - WARNING - Transform configs has not been provided. Images will be normalized using ImageNet statistics. 2022-11-16 11:52:49,663 - anomalib.models - INFO - Loading the model. 2022-11-16 11:52:49,667 - torch.distributed.nn.jit.instantiator - INFO - Created a temporary directory at /tmp/tmpk5fh8j6r 2022-11-16 11:52:49,667 - torch.distributed.nn.jit.instantiator - INFO - Writing /tmp/tmpk5fh8j6r/_remote_module_non_scriptable.py 2022-11-16 11:52:49,674 - anomalib.models.components.base.anomaly_module - INFO - Initializing PatchcoreLightning model. /home/-/code/anomalib/lib/python3.8/site-packages/torchmetrics/utilities/prints.py:36: UserWarning: Metric PrecisionRecallCurve will save all targets and predictions in buffer. For large datasets this may lead to large memory footprint. warnings.warn(*args, **kwargs) 2022-11-16 11:52:50,882 - timm.models.helpers - INFO - Loading pretrained weights from url (https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/wide_resnet50_racm-8234f177.pth) 2022-11-16 11:52:51,009 - anomalib.utils.loggers - INFO - Loading the experiment logger(s) 2022-11-16 11:52:51,009 - anomalib.utils.callbacks - INFO - Loading the callbacks /home/-/code/anomalib/src/anomalib/anomalib/utils/callbacks/init.py:141: UserWarning: Export option: None not found. Defaulting to no model export warnings.warn(f"Export option: {config.optimization.export_mode} not found. Defaulting to no model export") 2022-11-16 11:52:51,012 - pytorch_lightning.utilities.rank_zero - INFO - GPU available: True, used: True 2022-11-16 11:52:51,012 - pytorch_lightning.utilities.rank_zero - INFO - TPU available: False, using: 0 TPU cores 2022-11-16 11:52:51,012 - pytorch_lightning.utilities.rank_zero - INFO - IPU available: False, using: 0 IPUs 2022-11-16 11:52:51,012 - pytorch_lightning.utilities.rank_zero - INFO - HPU available: False, using: 0 HPUs 2022-11-16 11:52:51,012 - pytorch_lightning.utilities.rank_zero - INFO - Trainer(limit_train_batches=1.0) was configured so 100% of the batches per epoch will be used.. 2022-11-16 11:52:51,012 - pytorch_lightning.utilities.rank_zero - INFO - Trainer(limit_val_batches=1.0) was configured so 100% of the batches will be used.. 2022-11-16 11:52:51,012 - pytorch_lightning.utilities.rank_zero - INFO - Trainer(limit_test_batches=1.0) was configured so 100% of the batches will be used.. 2022-11-16 11:52:51,012 - pytorch_lightning.utilities.rank_zero - INFO - Trainer(limit_predict_batches=1.0) was configured so 100% of the batches will be used.. 2022-11-16 11:52:51,012 - pytorch_lightning.utilities.rank_zero - INFO - Trainer(val_check_interval=1.0) was configured so validation will run at the end of the training epoch.. 2022-11-16 11:52:51,012 - anomalib - INFO - Training the model. 2022-11-16 11:52:51,016 - anomalib.data.mvtec - INFO - Found the dataset. 
2022-11-16 11:52:51,018 - anomalib.data.mvtec - INFO - Setting up train, validation, test and prediction datasets. /-/-/code/anomalib/lib/python3.8/site-packages/torchmetrics/utilities/prints.py:36: UserWarning: Metric ROC will save all targets and predictions in buffer. For large datasets this may lead to large memory footprint. warnings.warn(*args, **kwargs) 2022-11-16 11:52:52,479 - pytorch_lightning.accelerators.gpu - INFO - LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0,1] /-/-/code/anomalib/lib/python3.8/site-packages/pytorch_lightning/core/optimizer.py:183: UserWarning: LightningModule.configure_optimizers returned None, this fit will run with no optimizer rank_zero_warn( 2022-11-16 11:52:52,482 - pytorch_lightning.callbacks.model_summary - INFO - | Name | Type | Params

    0 | image_threshold | AnomalyScoreThreshold | 0
    1 | pixel_threshold | AnomalyScoreThreshold | 0
    2 | model | PatchcoreModel | 24.9 M
    3 | image_metrics | AnomalibMetricCollection | 0
    4 | pixel_metrics | AnomalibMetricCollection | 0
    5 | normalization_metrics | MinMax | 0

    24.9 M Trainable params 0 Non-trainable params 24.9 M Total params 99.450 Total estimated model params size (MB) Epoch 0: 8%|▊ | 1/13 [00:01<00:16, 1.37s/it, loss=nan]/-/-/code/anomalib/lib/python3.8/site-packages/pytorch_lightning/loops/optimization/optimizer_loop.py:137: UserWarning: training_step returned None. If this was on purpose, ignore this warning... self.warning_cache.warn("training_step returned None. If this was on purpose, ignore this warning...") Epoch 0: 69%|██████▉ | 9/13 [00:01<00:00, 4.67it/s, loss=nan] Validation: 0it [00:00, ?it/s]2022-11-16 11:52:54,414 - anomalib.models.patchcore.lightning_model - INFO - Aggregating the embedding extracted from the training set. 2022-11-16 11:52:54,415 - anomalib.models.patchcore.lightning_model - INFO - Applying core-set subsampling to get the embedding. Epoch 0: 69%|██████▉ | 9/13 [00:20<00:08, 2.22s/it, loss=nan] Validation: 0%| | 0/4 [00:00<?, ?it/s] Validation DataLoader 0: 0%| | 0/4 [00:00<?, ?it/s] Validation DataLoader 0: 25%|██▌ | 1/4 [00:00<00:00, 4.13it/s] Epoch 0: 77%|███████▋ | 10/13 [00:59<00:17, 5.94s/it, loss=nan] Validation DataLoader 0: 50%|█████ | 2/4 [00:00<00:00, 4.04it/s] Epoch 0: 85%|████████▍ | 11/13 [00:59<00:10, 5.42s/it, loss=nan] Validation DataLoader 0: 75%|███████▌ | 3/4 [00:00<00:00, 4.02it/s] Epoch 0: 92%|█████████▏| 12/13 [00:59<00:04, 4.99s/it, loss=nan] Validation DataLoader 0: 100%|██████████| 4/4 [00:00<00:00, 4.63it/s] Epoch 0: 100%|██████████| 13/13 [01:00<00:00, 4.67s/it, loss=nan, pixel_F1Score=0.548, pixel_AUROC=0.986] Epoch 0: 100%|██████████| 13/13 [01:01<00:00, 4.69s/it, loss=nan, pixel_F1Score=0.548, pixel_AUROC=0.986] 2022-11-16 11:53:53,628 - anomalib.utils.callbacks.timer - INFO - Training took 61.15 seconds 2022-11-16 11:53:53,628 - anomalib - INFO - Loading the best model weights. 2022-11-16 11:53:53,628 - anomalib - INFO - Testing the model. 2022-11-16 11:53:53,632 - anomalib.data.mvtec - INFO - Found the dataset. 2022-11-16 11:53:53,633 - anomalib.data.mvtec - INFO - Setting up train, validation, test and prediction datasets. /-/code/anomalib/lib/python3.8/site-packages/torchmetrics/utilities/prints.py:36: UserWarning: Metric ROC will save all targets and predictions in buffer. For large datasets this may lead to large memory footprint. warnings.warn(*args, **kwargs) 2022-11-16 11:53:53,716 - pytorch_lightning.accelerators.gpu - INFO - LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0,1] 2022-11-16 11:53:53,718 - anomalib.utils.callbacks.model_loader - INFO - Loading the model from /home/-/code/anomalib/src/anomalib/results/patchcore/mvtec/carpet/run/weights/model.ckpt Testing DataLoader 0: 100%|██████████| 4/4 [00:19<00:00, 4.65s/it]2022-11-16 11:54:14,762 - anomalib.utils.callbacks.timer - INFO - Testing took 20.9255051612854 seconds Throughput (batch_size=32) : 5.591262867883519 FPS Testing DataLoader 0: 100%|██████████| 4/4 [00:19<00:00, 4.97s/it] ──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────── Test metric DataLoader 0 ──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────── image_AUROC 0.4036917984485626 image_F1Score 0.8640776872634888 pixel_AUROC 0.9860672950744629 pixel_F1Score 0.5481611490249634 ────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────

    Process finished with exit code 0

    Bug 
    opened by spoorgholi74 13
  • Migrate old configs to new CLI format

    Migrate old configs to new CLI format

    Description

    • Migrate old configs to match the new CLI design

    TODO

    • [x] Compare performance after migration (ensure that all the optimizers and callbacks are loaded correctly)
    • [x] Remove LightningModels (they exist for backward compatibility) with the new design we can use the same one as LightningCLI
    • [x] Extend to cflow, dfkde, dfm, draem, fastflow, ganomaly, strpm, patchcore, reverse_distillation
    • [x] Modify tests
    Callbacks Data Logger Config CLI Inference Tests Docs Benchmarking HPO Tools Notebooks 
    opened by ashwinvaidya17 13
  • Error(s) in loading state_dict for PatchcoreLightning

    Error(s) in loading state_dict for PatchcoreLightning

    Describe the bug

    Running tools/inference/torch_inference.py throws:

    Traceback (most recent call last):
      File "/workspace/tools/inference/torch_inference.py", line 97, in <module>
        infer()
      File "/workspace/tools/inference/torch_inference.py", line 73, in infer
        inferencer = TorchInferencer(config=args.config, model_source=args.weights)
      File "/workspace/anomalib/deploy/inferencers/torch_inferencer.py", line 54, in __init__
        self.model = self.load_model(model_source)
      File "/workspace/anomalib/deploy/inferencers/torch_inferencer.py", line 85, in load_model
        model.load_state_dict(torch.load(path)["state_dict"])
      File "/opt/conda/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1497, in load_state_dict
        raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
    RuntimeError: Error(s) in loading state_dict for PatchcoreLightning:
    	Unexpected key(s) in state_dict: "normalization_metrics.min", "normalization_metrics.max".
    

    To Reproduce

    1. Using default patchcore/config.yaml
    2. Train the model: python tools/train.py --config anomalib/models/patchcore/config.yaml
    3. Use trained model for torch_inference:
    python tools/inference/torch_inference.py \
    --config anomalib/models/patchcore/config.yaml \
    --weights results/patchcore/mvtec/bottle/weights/model-v14.ckpt \
    --input datasets/MVTec/bottle/test/broken_large/000.png \
    --output results/patchcore/mvtec/bottle/images
    

    Additional context

    Error is present since commit d6951ebaaf36477bf245b17b5b6a563d35892e81

    opened by Luksalos 13
  • Issues with heatmap scaling when training without masks

    Issues with heatmap scaling when training without masks

    I ran into an issue when training on a dataset (subset of MVTec), where I set all ground truth masks to zero (to simulate training on a dataset for which I have no ground truth masks). When training with the actual ground truth masks, the model produces heatmaps as expected as in the first image below (produced with tools/inference.py). However, when training with the zero masks, the heatmaps seem to be scaled differently as in the second image below. The confidence score seems unaffected.

    ground_truth_masks

    zero_masks

    This behaviour is the same for both PADIM and PatchCore. I haven't tested the other models.

    This is my model config for PADIM

    dataset:
      name: mvtec_test
      format: folder
      path: ./datasets/mvtec_test/images
      normal: normal
      abnormal: abnormal
      task: segmentation
      mask: ./datasets/mvtec_test/masks_orig
      extensions: null
      seed: 0  
      image_size: 224
      train_batch_size: 32
      test_batch_size: 1
      num_workers: 16
      transform_config: null
      split_ratio: 0.2
      create_validation_set: true
      tiling:
        apply: false
        tile_size: null
        stride: null
        remove_border_count: 0
        use_random_tiling: False
        random_tile_count: 16
    
    model:
      name: padim
      backbone: resnet18
      layers:
        - layer1
        - layer2
        - layer3
      metric: auc
      normalization_method: min_max # options: [none, min_max, cdf]
      threshold:
        image_default: 3
        pixel_default: 3
        adaptive: true
    
    project:
      seed: 42
      path: ./results
      log_images_to: ["local"]
      logger: false
      save_to_csv: false
    
    optimization:
      openvino:
        apply: false
    
    # PL Trainer Args. Don't add extra parameter here.
    trainer:
      accelerator: null
      accumulate_grad_batches: 1
      amp_backend: native
      auto_lr_find: false
      auto_scale_batch_size: false
      auto_select_gpus: false
      benchmark: false
      check_val_every_n_epoch: 1 # Don't validate before extracting features.
      checkpoint_callback: true
      default_root_dir: null
      deterministic: false
      fast_dev_run: false
      gpus: 1
      gradient_clip_val: 0
      limit_predict_batches: 1.0
      limit_test_batches: 1.0
      limit_train_batches: 1.0
      limit_val_batches: 1.0
      log_every_n_steps: 50
      log_gpu_memory: null
      max_epochs: 1
      max_steps: -1
      min_epochs: null
      min_steps: null
      move_metrics_to_cpu: false
      multiple_trainloader_mode: max_size_cycle
      num_nodes: 1
      num_processes: 1
      num_sanity_val_steps: 0
      overfit_batches: 0.0
      plugins: null
      precision: 32
      prepare_data_per_node: true
      process_position: 0
      profiler: null
      progress_bar_refresh_rate: null
      replace_sampler_ddp: true
      stochastic_weight_avg: false
      sync_batchnorm: false
      terminate_on_nan: false
      tpu_cores: null
      track_grad_norm: -1
      val_check_interval: 1.0 # Don't validate before extracting features.
      weights_save_path: null
      weights_summary: top
    
    opened by LukasBommes 13
  • Support Kaggle and Colab Environments

    Support Kaggle and Colab Environments

    Is your feature request related to a problem? Please describe.

    • Kaggle and Colab are free and great resources for quick experiments. Currently, I've found that it's not possible to run anomalib in these environments. I guess it will become possible once the pip package is ready. Supporting these environments would be a great opportunity for end users for sure.
    opened by innat 13
  • How to train in a custom dataset?

    How to train in a custom dataset?

    I am having a bit of trouble training on a custom dataset. I reproduced the MVTec structure but I still get some errors like:

    IndexError: too many indices for tensor of dimension 0

    opened by opassos 13
  • CVE-2007-4559 Patch

    CVE-2007-4559 Patch

    Patching CVE-2007-4559

    Hi, we are security researchers from the Advanced Research Center at Trellix. We have begun a campaign to patch a widespread bug named CVE-2007-4559. CVE-2007-4559 is a 15-year-old bug in the Python tarfile package. By using extract() or extractall() on a tarfile object without sanitizing the input, a maliciously crafted .tar file could perform a directory path traversal attack. We found at least one unsanitized extractall() in your codebase and are providing a patch for you via pull request. The patch essentially checks whether all tarfile members will be extracted safely and throws an exception otherwise. We encourage you to use this patch or your own solution to secure against CVE-2007-4559. Further technical information about the vulnerability can be found in this blog.
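
    For reference, the usual mitigation is to validate every member path before extraction; the sketch below illustrates the idea and is not the exact patch submitted in this PR:

    # Rough sketch of path-traversal-safe tar extraction (not the exact submitted patch).
    import os
    import tarfile

    def safe_extractall(tar: tarfile.TarFile, path: str = ".") -> None:
        base = os.path.abspath(path)
        for member in tar.getmembers():
            target = os.path.abspath(os.path.join(path, member.name))
            if not (target == base or target.startswith(base + os.sep)):
                raise RuntimeError(f"Blocked path traversal in tar member: {member.name}")
        tar.extractall(path)

    with tarfile.open("dataset.tar.gz") as archive:  # hypothetical archive name
        safe_extractall(archive, "./datasets")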

    If you have further questions you may contact us through this project's lead researcher, Kasimir Schulz.

    Data 
    opened by TrellixVulnTeam 0
  • [Detection] Compute box score when generating boxes from masks

    [Detection] Compute box score when generating boxes from masks

    Description

    Small PR that adds an anomaly score for bounding boxes generated from anomaly maps, which is needed for the OTX task.

    • Extends masks_to_boxes to take anomaly maps as an optional additional input, and computes an anomaly score for each bounding box as the max value of the anomaly map within the pixel region covered by the bounding box (a rough sketch of the idea follows after this list).
    • Update the logic in AnomalyModule.
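
    A rough, hypothetical sketch of the box-scoring idea described above (the maximum anomaly-map value inside each box); this is not the PR's actual implementation:

    # Hypothetical sketch: score each (x1, y1, x2, y2) box by the max anomaly-map value it covers.
    import torch

    def score_boxes(boxes: torch.Tensor, anomaly_map: torch.Tensor) -> torch.Tensor:
        scores = []
        for x1, y1, x2, y2 in boxes.long():
            region = anomaly_map[y1 : y2 + 1, x1 : x2 + 1]
            scores.append(region.max())
        return torch.stack(scores)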

    Changes

    • [ ] Bug fix (non-breaking change which fixes an issue)
    • [ ] Refactor (non-breaking change which refactors the code base)
    • [x] New feature (non-breaking change which adds functionality)
    • [ ] Breaking change (fix or feature that would cause existing functionality to not work as expected)
    • [ ] This change requires a documentation update

    Checklist

    • [x] My code follows the pre-commit style and check guidelines of this project.
    • [x] I have performed a self-review of my code
    • [ ] I have commented my code, particularly in hard-to-understand areas
    • [ ] I have made corresponding changes to the documentation
    • [x] My changes generate no new warnings
    • [x] I have added tests that prove my fix is effective or that my feature works
    • [x] New and existing tests pass locally with my changes
    • [ ] I have added a summary of my changes to the CHANGELOG (not for minor changes, docs and tests).
    Data Inference Tests 
    opened by djdameln 0
  • [WIP] CLI refactor

    [WIP] CLI refactor

    Description

    • In progress: marked as draft, as this contains changes from https://github.com/openvinotoolkit/anomalib/pull/713 and is targeting that branch for now.

    Done

    • [x] Add HPO and Benchmarking to the CLI
    • [x] Migrate old configs to the new format
    • [x] Basic train,test,hpo etc work with the new CLI

    TODO

    • [ ] Expose internal configuration in benchmarking and hpo scripts.
    • [ ] Add tests for CLI
    • [ ] Test logging from CLI
    • [ ] Test if all notebooks run with these changes
    • [ ] Use the new @tiler decorator in the models
    • [ ] Refactor code (especially the new callback stuff)
    • [ ] Potentially investigate replacing lightning cli with custom one based on json argparse
    • [ ] Add custom loops

    Changes

    • [ ] Bug fix (non-breaking change which fixes an issue)
    • [ ] Refactor (non-breaking change which refactors the code base)
    • [ ] New feature (non-breaking change which adds functionality)
    • [ ] Breaking change (fix or feature that would cause existing functionality to not work as expected)
    • [ ] This change requires a documentation update

    Checklist

    • [ ] My code follows the pre-commit style and check guidelines of this project.
    • [ ] I have performed a self-review of my code
    • [ ] I have commented my code, particularly in hard-to-understand areas
    • [ ] I have made corresponding changes to the documentation
    • [ ] My changes generate no new warnings
    • [ ] I have added tests that prove my fix is effective or that my feature works
    • [ ] New and existing tests pass locally with my changes
    • [ ] I have added a summary of my changes to the CHANGELOG (not for minor changes, docs and tests).
    Callbacks Pre-Processing Config CLI Inference Tests Benchmarking HPO Tools 
    opened by ashwinvaidya17 0
  • [Bug]: how to use tiler in training / inference

    [Bug]: how to use tiler in training / inference

    Describe the bug

    We try to enable tiling in the training / inference, by setting configs like this:

      tiling:
        apply: true #false
        tile_size: 224 #null
    

    however, setting this doesn't actually enable the tiler. Reading the code closely, we've seen that in anomalib/models/patchcore/torch_model.py the tiler is not actually set at all:

        def __init__(
            self,
            input_size: Tuple[int, int],
            layers: List[str],
            backbone: str = "wide_resnet50_2",
            pre_trained: bool = True,
            num_neighbors: int = 9,
        ) -> None:
            super().__init__()
            self.tiler: Optional[Tiler] = None   ######### here is None
    
            self.backbone = backbone
            self.layers = layers
            self.input_size = input_size
            self.num_neighbors = num_neighbors
    

    could you please suggest how to enable tiling? Thanks!
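
    For context, a rough sketch of what the standalone tiler does on its own; the import path and constructor arguments are assumed from the current anomalib code and should be verified against your version, and whether the model wires a tiler up automatically is exactly what this issue is about:

    # Rough sketch of tiling/untiling a batch with anomalib's Tiler.
    # Import path and constructor arguments are assumptions; verify against your version.
    import torch
    from anomalib.pre_processing import Tiler

    tiler = Tiler(tile_size=224, stride=224)
    images = torch.rand(2, 3, 336, 336)   # batch of 336x336 images, as in the config above
    tiles = tiler.tile(images)            # split each image into 224x224 tiles
    restored = tiler.untile(tiles)        # stitch the tiles back together
    print(tiles.shape, restored.shape)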

    Dataset

    MVTec

    Model

    PatchCore

    Steps to reproduce the behavior

    N/A

    OS information

    OS information:

    • OS: [e.g. Ubuntu 20.04]
    • Python version: [e.g. 3.8.10]
    • Anomalib version: [e.g. 0.3.6]
    • PyTorch version: [e.g. 1.9.0]
    • CUDA/cuDNN version: [e.g. 11.1]
    • GPU models and configuration: [e.g. 2x GeForce RTX 3090]
    • Any other relevant information: [e.g. I'm using a custom dataset] N/A

    Expected behavior

    Enable tiling in train/test/inference.

    Screenshots

    No response

    Pip/GitHub

    pip

    What version/branch did you use?

    No response

    Configuration YAML

    dataset:
      name: mvtec #options: [mvtec, btech, folder]
      format: mvtec
      path: ./datasets/MVTec
      task: segmentation
      category: bottle   
      image_size: 336 #224
      train_batch_size: 2 #32
      test_batch_size: 2 #32
      num_workers: 8
      transform_config:
        train: null
        val: null
      create_validation_set: false
      tiling:
        apply: true #false
        tile_size: 224 #null
        stride: null
        remove_border_count: 0
        use_random_tiling: False
        random_tile_count: 16
    
    model:
      name: patchcore
      backbone: wide_resnet50_2
      pre_trained: true
      layers:
        - layer2
        - layer3
      coreset_sampling_ratio: 0.1
      num_neighbors: 9
      normalization_method: min_max # options: [null, min_max, cdf]
    
    metrics:
      image:
        - F1Score
        - AUROC
      pixel:
        - F1Score
        - AUROC
      threshold:
        #method: adaptive #options: [adaptive, manual]
        method: manual
        manual_image: 0.2 #null
        manual_pixel: 0.2 # null
    
    visualization:
      show_images: False # show images on the screen
      save_images: True # save images to the file system
      log_images: True # log images to the available loggers (if any)
      image_save_path: null # path to which images will be saved
      mode: full # options: ["full", "simple"]
    
    project:
      seed: 0
      path: ./results
    
    logging:
      logger: [] # options: [comet, tensorboard, wandb, csv] or combinations.
      log_graph: false # Logs the model graph to respective logger.
    
    optimization:
      export_mode: null # options: onnx, openvino
    
    # PL Trainer Args. Don't add extra parameter here.
    trainer:
      accelerator: auto # <"cpu", "gpu", "tpu", "ipu", "hpu", "auto">
      accumulate_grad_batches: 1
      amp_backend: native
      auto_lr_find: false
      auto_scale_batch_size: false
      auto_select_gpus: false
      benchmark: false
      check_val_every_n_epoch: 1 # Don't validate before extracting features.
      default_root_dir: null
      detect_anomaly: false
      deterministic: false
      devices: 1
      enable_checkpointing: true
      enable_model_summary: true
      enable_progress_bar: true
      fast_dev_run: false
      gpus: null # Set automatically
      gradient_clip_val: 0
      ipus: null
      limit_predict_batches: 1.0
      limit_test_batches: 1.0
      limit_train_batches: 1.0
      limit_val_batches: 1.0
      log_every_n_steps: 50
      log_gpu_memory: null
      max_epochs: 1
      max_steps: -1
      max_time: null
      min_epochs: null
      min_steps: null
      move_metrics_to_cpu: false
      multiple_trainloader_mode: max_size_cycle
      num_nodes: 1
      num_processes: null
      num_sanity_val_steps: 0
      overfit_batches: 0.0
      plugins: null
      precision: 32
      profiler: null
      reload_dataloaders_every_n_epochs: 0
      replace_sampler_ddp: true
      strategy: null
      sync_batchnorm: false
      tpu_cores: null
      track_grad_norm: -1
      val_check_interval: 1.0 # Don't validate before extracting features.
    

    Logs

    N/A
    

    Code of Conduct

    • [X] I agree to follow this project's Code of Conduct
    opened by frankmanbb 0
  • [Dataset] Add VisA dataset

    [Dataset] Add VisA dataset

    Description

    • This PR adds the Visual Anomaly (VisA) dataset.

    • The dataset follows the same format as MVTec, so we could re-use the make_mvtec_dataset function.

    • The make_mvtec_dataset function was slightly modified to make the mask file naming convention a bit more flexible (mvtec uses "000_mask.png", while visa uses "000.png").

    • There was a lot of duplication in the download and extract functionality of the different datasets, so this was moved to a shared location.

    Currently targeted to feature branch, but will re-target to main once #822 has been merged.

    Some examples: 004 020

    Known Issues

    • ~~CI will probably fail, because the dataset is not yet installed on the CI machine.~~

    Changes

    • [ ] Bug fix (non-breaking change which fixes an issue)
    • [ ] Refactor (non-breaking change which refactors the code base)
    • [x] New feature (non-breaking change which adds functionality)
    • [ ] Breaking change (fix or feature that would cause existing functionality to not work as expected)
    • [ ] This change requires a documentation update

    Checklist

    • [x] My code follows the pre-commit style and check guidelines of this project.
    • [x] I have performed a self-review of my code
    • [x] I have commented my code, particularly in hard-to-understand areas
    • [ ] I have made corresponding changes to the documentation
    • [x] My changes generate no new warnings
    • [x] I have added tests that prove my fix is effective or that my feature works
    • [x] New and existing tests pass locally with my changes
    • [x] I have added a summary of my changes to the CHANGELOG (not for minor changes, docs and tests).
    Data Config Tests Docs 
    opened by djdameln 0
Releases
  • rkde-weights(Jan 2, 2023)

  • v0.3.7(Oct 28, 2022)

    What's Changed

    New Contributors

    Full Changelog: https://github.com/openvinotoolkit/anomalib/compare/v0.3.6...v0.3.7

  • v0.3.6(Sep 2, 2022)

    What's Changed

    • Add publish workflow + update references to main by @ashwinvaidya17 in https://github.com/openvinotoolkit/anomalib/pull/480
    • Fix Dockerfile by @ORippler in https://github.com/openvinotoolkit/anomalib/pull/478
    • Fix onnx export by rewriting GaussianBlur by @ORippler in https://github.com/openvinotoolkit/anomalib/pull/476
    • DFKDE refactor to accept any layer name like other models by @ashishbdatta in https://github.com/openvinotoolkit/anomalib/pull/482
    • 🐞 Log benchmarking results in sub folder by @ashwinvaidya17 in https://github.com/openvinotoolkit/anomalib/pull/483
    • 🐞 Fix Visualization keys in new CLI by @ashwinvaidya17 in https://github.com/openvinotoolkit/anomalib/pull/487
    • fix Perlin augmenter for non divisible image sizes by @djdameln in https://github.com/openvinotoolkit/anomalib/pull/490
    • 📝 Update the license headers by @samet-akcay in https://github.com/openvinotoolkit/anomalib/pull/491
    • change default parameter values for DRAEM by @djdameln in https://github.com/openvinotoolkit/anomalib/pull/495
    • Add reset methods to metrics by @ashwinvaidya17 in https://github.com/openvinotoolkit/anomalib/pull/488
    • Feature Extractor Refactor by @ashishbdatta in https://github.com/openvinotoolkit/anomalib/pull/451
    • Convert AnomalyMapGenerator to nn.Module by @samet-akcay in https://github.com/openvinotoolkit/anomalib/pull/497
    • Add github pr labeler to automatically label PRs by @samet-akcay in https://github.com/openvinotoolkit/anomalib/pull/498
    • Add coverage by @ashwinvaidya17 in https://github.com/openvinotoolkit/anomalib/pull/499
    • 🐞 Change if check by @ashwinvaidya17 in https://github.com/openvinotoolkit/anomalib/pull/501
    • SSPCAB implementation by @djdameln in https://github.com/openvinotoolkit/anomalib/pull/500
    • 🛠 Refactor Normalization by @ashwinvaidya17 in https://github.com/openvinotoolkit/anomalib/pull/496
    • Enable generic exporting of a trained model to ONNX or OpenVINO IR by @ashishbdatta in https://github.com/openvinotoolkit/anomalib/pull/509
    • Updated documentation to add examples for exporting model by @ashishbdatta in https://github.com/openvinotoolkit/anomalib/pull/515
    • Ignore pixel metrics in classification task by @djdameln in https://github.com/openvinotoolkit/anomalib/pull/516
    • Update export documentation by @djdameln in https://github.com/openvinotoolkit/anomalib/pull/521
    • FIX: PaDiM didn't use config.model.pre_trained. by @jingt2ch in https://github.com/openvinotoolkit/anomalib/pull/514
    • Reset adaptive threshold between epochs by @djdameln in https://github.com/openvinotoolkit/anomalib/pull/527
    • Add PRO metric by @djdameln in https://github.com/openvinotoolkit/anomalib/pull/508
    • Set full_state_update attribute in custom metrics by @djdameln in https://github.com/openvinotoolkit/anomalib/pull/531
    • 🐞 Set normalization method from anomaly module by @ashwinvaidya17 in https://github.com/openvinotoolkit/anomalib/pull/530

    New Contributors

    • @ashishbdatta made their first contribution in https://github.com/openvinotoolkit/anomalib/pull/482
    • @jingt2ch made their first contribution in https://github.com/openvinotoolkit/anomalib/pull/514

    Full Changelog: https://github.com/openvinotoolkit/anomalib/compare/v0.3.5...v0.3.6

  • v0.3.5(Aug 2, 2022)

  • v0.3.4(Aug 1, 2022)

    What's Changed

    New Contributors

    Full Changelog: https://github.com/openvinotoolkit/anomalib/compare/0.3.3...0.3.4

  • 0.3.3(Jul 5, 2022)

    What's Changed

    • Move initialization log message to base class by @djdameln in https://github.com/openvinotoolkit/anomalib/pull/363
    • 🚚 Move logging from train.py to the getter functions by @samet-akcay in https://github.com/openvinotoolkit/anomalib/pull/365
    • 🚜 Refactor loss computation by @djdameln in https://github.com/openvinotoolkit/anomalib/pull/364
    • 📝 Add a technical blog post to explain how to run anomalib. by @ashwinvaidya17 in https://github.com/openvinotoolkit/anomalib/pull/359
    • 📚 Add datamodule jupyter notebooks. by @samet-akcay in https://github.com/openvinotoolkit/anomalib/pull/357
    • 📝 Add benchmarking notebook by @ashwinvaidya17 in https://github.com/openvinotoolkit/anomalib/pull/353
    • ➕ Add PyPI downloads badge to the readme. by @samet-akcay in https://github.com/openvinotoolkit/anomalib/pull/370
    • Update README.md by @innat in https://github.com/openvinotoolkit/anomalib/pull/382
    • Create Anomalib CLI by @samet-akcay in https://github.com/openvinotoolkit/anomalib/pull/378
    • 🛠 Fix configs to remove logging heatmaps from classification models. by @samet-akcay in https://github.com/openvinotoolkit/anomalib/pull/387
    • Add FastFlow model training testing inference via Anomalib API by @samet-akcay in https://github.com/openvinotoolkit/anomalib/pull/386
    • PaDim occasionally NaNs in anomaly map by @VdLMV in https://github.com/openvinotoolkit/anomalib/pull/392
    • Inference + Visualization by @djdameln in https://github.com/openvinotoolkit/anomalib/pull/390

    New Contributors

    • @innat made their first contribution in https://github.com/openvinotoolkit/anomalib/pull/382
    • @VdLMV made their first contribution in https://github.com/openvinotoolkit/anomalib/pull/392

    Full Changelog: https://github.com/openvinotoolkit/anomalib/compare/v.0.3.2...0.3.3

  • 0.2.4a(Jun 14, 2022)

  • v.0.3.2(Jun 9, 2022)

    What's Changed

    • Refactor AnomalyModule and LightningModules to explicitly define class arguments. by @samet-akcay in https://github.com/openvinotoolkit/anomalib/pull/315
    • 🐞 Fix inferencer in Gradio by @ashwinvaidya17 in https://github.com/openvinotoolkit/anomalib/pull/332
    • fix too many open images warning by @djdameln in https://github.com/openvinotoolkit/anomalib/pull/334
    • Upgrade wandb version by @ashwinvaidya17 in https://github.com/openvinotoolkit/anomalib/pull/340
    • Minor fix: Update folder dataset + notebooks link by @ashwinvaidya17 in https://github.com/openvinotoolkit/anomalib/pull/338
    • Upgrade TorchMetrics version by @djdameln in https://github.com/openvinotoolkit/anomalib/pull/342
    • 🚀 Set pylint version in tox.ini by @ashwinvaidya17 in https://github.com/openvinotoolkit/anomalib/pull/345
    • Add metrics configuration callback to benchmarking by @ashwinvaidya17 in https://github.com/openvinotoolkit/anomalib/pull/346
    • ➕ Add FastFlow Model by @samet-akcay in https://github.com/openvinotoolkit/anomalib/pull/336
    • ✨ Add toy dataset to the repository by @ashwinvaidya17 in https://github.com/openvinotoolkit/anomalib/pull/350
    • Add DRAEM Model by @djdameln in https://github.com/openvinotoolkit/anomalib/pull/344
    • 📃Update documentation by @ashwinvaidya17 in https://github.com/openvinotoolkit/anomalib/pull/280
    • 🏷️ Refactor Datamodule names by @samet-akcay in https://github.com/openvinotoolkit/anomalib/pull/354
    • ✨ Add Reverse Distillation by @ashwinvaidya17 in https://github.com/openvinotoolkit/anomalib/pull/343

    Full Changelog: https://github.com/openvinotoolkit/anomalib/compare/v.0.3.1...v.0.3.2

  • v.0.3.1(May 17, 2022)

    What's Changed

    • 🔧 Properly assign values to dataframe in folder dataset. by @samet-akcay in https://github.com/openvinotoolkit/anomalib/pull/272
    • ➕ Add warnings ⚠️ for inproper task setting in config files. by @samet-akcay in https://github.com/openvinotoolkit/anomalib/pull/274
    • Updated CHANGELOG.md by @samet-akcay in https://github.com/openvinotoolkit/anomalib/pull/276
    • ➕ Add long description to setup.py to make README.md PyPI friendly. by @samet-akcay in https://github.com/openvinotoolkit/anomalib/pull/279
    • ✨ Add hash check to data download by @ashwinvaidya17 in https://github.com/openvinotoolkit/anomalib/pull/284
    • ➕ Add Gradio by @julien-blanchon in https://github.com/openvinotoolkit/anomalib/pull/283
    • 🔨 Fix nncf key issue in nightly job by @ashwinvaidya17 in https://github.com/openvinotoolkit/anomalib/pull/238
    • Visualizer improvements pt1 by @djdameln in https://github.com/openvinotoolkit/anomalib/pull/293
    • 🧪 Fix nightly by @ashwinvaidya17 in https://github.com/openvinotoolkit/anomalib/pull/299
    • 🧪 Add tests for benchmarking script by @ashwinvaidya17 in https://github.com/openvinotoolkit/anomalib/pull/297
    • ➕ add input_info to nncf config when not defined by user by @djdameln in https://github.com/openvinotoolkit/anomalib/pull/307
    • 🐞 Increase tolerance + nightly path fix by @ashwinvaidya17 in https://github.com/openvinotoolkit/anomalib/pull/318
    • ➕ Add jupyter notebooks directory and first tutorial for getting-started by @samet-akcay in https://github.com/openvinotoolkit/anomalib/pull/292

    New Contributors

    • @julien-blanchon made their first contribution in https://github.com/openvinotoolkit/anomalib/pull/283

    Full Changelog: https://github.com/openvinotoolkit/anomalib/compare/v0.3.0...v.0.3.1

  • v0.3.0(Apr 25, 2022)

    What's Changed

    • 🛠 Fix get_version in setup.py to avoid hard-coding version. by @samet-akcay in https://github.com/openvinotoolkit/anomalib/pull/229
    • 🐞 Fix image loggers by @ashwinvaidya17 in https://github.com/openvinotoolkit/anomalib/pull/233
    • Configurable metrics by @djdameln in https://github.com/openvinotoolkit/anomalib/pull/230
    • Make OpenVINO throughput optional in benchmarking by @ashwinvaidya17 in https://github.com/openvinotoolkit/anomalib/pull/239
    • Fix configs to properly use pytorch-lightning==1.6 with GPU by @samet-akcay in https://github.com/openvinotoolkit/anomalib/pull/234
    • 🔨 Minor fix: Ensure docs build runs only on isea-server by @ashwinvaidya17 in https://github.com/openvinotoolkit/anomalib/pull/245
    • 🏷 Rename --model_config_path to config by @samet-akcay in https://github.com/openvinotoolkit/anomalib/pull/246
    • Revert "🏷 Rename --model_config_path to config" by @samet-akcay in https://github.com/openvinotoolkit/anomalib/pull/247
    • ➕ Add --model_config_path deprecation warning to inference.py by @samet-akcay in https://github.com/openvinotoolkit/anomalib/pull/248
    • Add console logger by @samet-akcay in https://github.com/openvinotoolkit/anomalib/pull/241
    • Add segmentation mask to inference output by @ashwinvaidya17 in https://github.com/openvinotoolkit/anomalib/pull/242
    • 🛠 Fix broken mvtec link, and split url to fit to 120 by @samet-akcay in https://github.com/openvinotoolkit/anomalib/pull/264
    • 🛠 Fix mask filenames in folder dataset by @samet-akcay in https://github.com/openvinotoolkit/anomalib/pull/249

    Full Changelog: https://github.com/openvinotoolkit/anomalib/compare/v0.2.6...v0.3.0

  • v0.2.6(Apr 12, 2022)

    What's Changed

    • ✏️ Add torchtext==0.9.1 to support Kaggle environments. by @samet-akcay in https://github.com/openvinotoolkit/anomalib/pull/165
    • 🛠 Fix KeyError:'label' in classification folder dataset by @samet-akcay in https://github.com/openvinotoolkit/anomalib/pull/175
    • 📝 Added MVTec license to the repo by @samet-akcay in https://github.com/openvinotoolkit/anomalib/pull/177
    • load best model from checkpoint by @djdameln in https://github.com/openvinotoolkit/anomalib/pull/195
    • Replace SaveToCSVCallback with PL CSVLogger by @samet-akcay in https://github.com/openvinotoolkit/anomalib/pull/198
    • WIP Refactor test by @ashwinvaidya17 in https://github.com/openvinotoolkit/anomalib/pull/197
    • 🔧 Dockerfile enhancements by @LukasBommes in https://github.com/openvinotoolkit/anomalib/pull/172
    • 🛠 Fix visualization issue for fully defected images by @djdameln in https://github.com/openvinotoolkit/anomalib/pull/194
    • ✨ Add hpo search using wandb by @ashwinvaidya17 in https://github.com/openvinotoolkit/anomalib/pull/82
    • Separate train and validation transformations by @alexriedel1 in https://github.com/openvinotoolkit/anomalib/pull/168
    • 🛠 Fix docs workflow by @ashwinvaidya17 in https://github.com/openvinotoolkit/anomalib/pull/200
    • 🔄 CFlow: Switch soft permutation to false by default to speed up training. by @samet-akcay in https://github.com/openvinotoolkit/anomalib/pull/201
    • Return only image, path and label for classification tasks in Mvtec and Btech datasets. by @samet-akcay in https://github.com/openvinotoolkit/anomalib/pull/196
    • 🗑 Remove freia as dependency and include it in anomalib/models/components by @samet-akcay in https://github.com/openvinotoolkit/anomalib/pull/174
    • Visualizer show classification and segmentation by @alexriedel1 in https://github.com/openvinotoolkit/anomalib/pull/178
    • ↗️ Bump up pytorch-lightning version to 1.6.0 or higher by @samet-akcay in https://github.com/openvinotoolkit/anomalib/pull/193
    • 🛠 Refactor DFKDE model by @ashwinvaidya17 in https://github.com/openvinotoolkit/anomalib/pull/207
    • 🛠 Minor fixes: Update callbacks to AnomalyModule by @ashwinvaidya17 in https://github.com/openvinotoolkit/anomalib/pull/208
    • 🛠 Minor update: Update pre-commit docs by @ashwinvaidya17 in https://github.com/openvinotoolkit/anomalib/pull/206
    • ✨ Directory streaming by @ashwinvaidya17 in https://github.com/openvinotoolkit/anomalib/pull/210
    • ✏️ Updated documentation for development on Docker by @LukasBommes in https://github.com/openvinotoolkit/anomalib/pull/217
    • 🏷 Fix Mac M1 dependency conflicts by @dreaquil in https://github.com/openvinotoolkit/anomalib/pull/158
    • 🐞 Set tiling off in pathcore to correctly reproduce the stats. by @samet-akcay in https://github.com/openvinotoolkit/anomalib/pull/222
    • 🐞fix support for non-square images by @ashwinvaidya17 in https://github.com/openvinotoolkit/anomalib/pull/204
    • Allow specifying feature layer and pool factor in DFM by @nahuja-intel in https://github.com/openvinotoolkit/anomalib/pull/215
    • 📝 Add GANomaly metrics to readme by @ashwinvaidya17 in https://github.com/openvinotoolkit/anomalib/pull/224
    • ↗️ Bump the version to 0.2.6 by @samet-akcay in https://github.com/openvinotoolkit/anomalib/pull/223
    • 📝 🛠 Fix inconsistent benchmarking throughput/time by @ashwinvaidya17 in https://github.com/openvinotoolkit/anomalib/pull/221
    • assign test split for folder dataset by @alexriedel1 in https://github.com/openvinotoolkit/anomalib/pull/220
    • 🛠 Refactor model implementations by @djdameln in https://github.com/openvinotoolkit/anomalib/pull/225

    New Contributors

    • @LukasBommes made their first contribution in https://github.com/openvinotoolkit/anomalib/pull/172
    • @dreaquil made their first contribution in https://github.com/openvinotoolkit/anomalib/pull/158
    • @nahuja-intel made their first contribution in https://github.com/openvinotoolkit/anomalib/pull/215

    Full Changelog: https://github.com/openvinotoolkit/anomalib/compare/v.0.2.5...v0.2.6

  • v.0.2.5(Mar 25, 2022)

    What's Changed

    • Bugfix: fix random val/test split issue by @djdameln in https://github.com/openvinotoolkit/anomalib/pull/48
    • Fix Readmes by @ashwinvaidya17 in https://github.com/openvinotoolkit/anomalib/pull/46
    • Updated changelog by @samet-akcay in https://github.com/openvinotoolkit/anomalib/pull/49
    • add distinction between image and pixel threshold in postprocessor by @djdameln in https://github.com/openvinotoolkit/anomalib/pull/50
    • Fix docstrings by @ashwinvaidya17 in https://github.com/openvinotoolkit/anomalib/pull/22
    • Fix networkx requirement by @LeonidBeynenson in https://github.com/openvinotoolkit/anomalib/pull/52
    • Add min-max normalization by @djdameln in https://github.com/openvinotoolkit/anomalib/pull/53
    • Change hardcoded dataset path to environ variable by @ashwinvaidya17 in https://github.com/openvinotoolkit/anomalib/pull/51
    • Added cflow algorithm by @blakshma in https://github.com/openvinotoolkit/anomalib/pull/47
    • perform metric computation on cpu by @djdameln in https://github.com/openvinotoolkit/anomalib/pull/64
    • Fix Inferencer by @ashwinvaidya17 in https://github.com/openvinotoolkit/anomalib/pull/60
    • Updated readme for cflow and change default config to reflect results by @blakshma in https://github.com/openvinotoolkit/anomalib/pull/68
    • Fixed issue with model loading by @blakshma in https://github.com/openvinotoolkit/anomalib/pull/69
    • Docs/sa/fix readme by @samet-akcay in https://github.com/openvinotoolkit/anomalib/pull/71
    • Updated coreset subsampling method to improve accuracy by @blakshma in https://github.com/openvinotoolkit/anomalib/pull/73
    • Revert "Updated coreset subsampling method to improve accuracy" by @samet-akcay in https://github.com/openvinotoolkit/anomalib/pull/79
    • Replace SupportIndex with int by @samet-akcay in https://github.com/openvinotoolkit/anomalib/pull/76
    • Added reference to official CFLOW repo by @blakshma in https://github.com/openvinotoolkit/anomalib/pull/81
    • Fixed issue with k_greedy method by @blakshma in https://github.com/openvinotoolkit/anomalib/pull/80
    • Fix Mix Data type issue on inferencer by @samet-akcay in https://github.com/openvinotoolkit/anomalib/pull/77
    • Create CODE_OF_CONDUCT.md by @samet-akcay in https://github.com/openvinotoolkit/anomalib/pull/86
    • ✨ Add GANomaly by @ashwinvaidya17 in https://github.com/openvinotoolkit/anomalib/pull/70
    • Reorder auc only when needed by @djdameln in https://github.com/openvinotoolkit/anomalib/pull/91
    • Bump up the pytorch lightning to master branch due to vulnerability issues by @samet-akcay in https://github.com/openvinotoolkit/anomalib/pull/55
    • 🚀 CI: Nightly Build by @ashwinvaidya17 in https://github.com/openvinotoolkit/anomalib/pull/66
    • Refactor by @ashwinvaidya17 in https://github.com/openvinotoolkit/anomalib/pull/87
    • Benchmarking Script by @ashwinvaidya17 in https://github.com/openvinotoolkit/anomalib/pull/17
    • 🐞 Fix tensor detach and gpu count issues in benchmarking script by @ashwinvaidya17 in https://github.com/openvinotoolkit/anomalib/pull/100
    • Return predicted masks in predict step by @djdameln in https://github.com/openvinotoolkit/anomalib/pull/103
    • Add Citation to the Readme by @samet-akcay in https://github.com/openvinotoolkit/anomalib/pull/106
    • Nightly build by @ashwinvaidya17 in https://github.com/openvinotoolkit/anomalib/pull/104
    • c_idx cast to LongTensor in random sparse projection by @alexriedel1 in https://github.com/openvinotoolkit/anomalib/pull/113
    • Update Nightly by @ashwinvaidya17 in https://github.com/openvinotoolkit/anomalib/pull/126
    • Updated logos by @samet-akcay in https://github.com/openvinotoolkit/anomalib/pull/131
    • Add third-party-programs.txt file and update license by @LeonidBeynenson in https://github.com/openvinotoolkit/anomalib/pull/132
    • 🔨 Increase inference + openvino support by @ashwinvaidya17 in https://github.com/openvinotoolkit/anomalib/pull/122
    • Fix/da/image size bug by @djdameln in https://github.com/openvinotoolkit/anomalib/pull/135
    • Fix/da/image size bug by @djdameln in https://github.com/openvinotoolkit/anomalib/pull/140
    • optimize compute_anomaly_score by using torch native functions by @alexriedel1 in https://github.com/openvinotoolkit/anomalib/pull/141
    • Fix IndexError in adaptive threshold computation by @djdameln in https://github.com/openvinotoolkit/anomalib/pull/146
    • Feature/data/btad by @samet-akcay in https://github.com/openvinotoolkit/anomalib/pull/120
    • Update for nncf_task by @AlexanderDokuchaev in https://github.com/openvinotoolkit/anomalib/pull/145
    • fix non-adaptive thresholding bug by @djdameln in https://github.com/openvinotoolkit/anomalib/pull/152
    • Calculate feature map shape patchcore by @alexriedel1 in https://github.com/openvinotoolkit/anomalib/pull/148
    • Add transform_config to the main config.yaml file. by @samet-akcay in https://github.com/openvinotoolkit/anomalib/pull/156
    • Add Custom Dataset Training Support by @samet-akcay in https://github.com/openvinotoolkit/anomalib/pull/154
    • Added extension as an option when saving the result images. by @samet-akcay in https://github.com/openvinotoolkit/anomalib/pull/162
    • Update anomalib version and requirements by @samet-akcay in https://github.com/openvinotoolkit/anomalib/pull/163

    New Contributors

    • @LeonidBeynenson made their first contribution in https://github.com/openvinotoolkit/anomalib/pull/52
    • @blakshma made their first contribution in https://github.com/openvinotoolkit/anomalib/pull/47
    • @alexriedel1 made their first contribution in https://github.com/openvinotoolkit/anomalib/pull/113
    • @AlexanderDokuchaev made their first contribution in https://github.com/openvinotoolkit/anomalib/pull/145

    Full Changelog: https://github.com/openvinotoolkit/anomalib/compare/v.0.2.4...v.0.2.5

  • v.0.2.4(Dec 22, 2021)

    What's Changed

    • Bump up the version to 0.2.4 by @samet-akcay in https://github.com/openvinotoolkit/anomalib/pull/45
    • fix heatmap color scheme by @djdameln in https://github.com/openvinotoolkit/anomalib/pull/44

    Full Changelog: https://github.com/openvinotoolkit/anomalib/compare/v.0.2.3...v.0.2.4

  • v.0.2.3(Dec 23, 2021)

    What's Changed

    • Address docs build failing issue by @ashwinvaidya17 in https://github.com/openvinotoolkit/anomalib/pull/39
    • Fix docs pipeline 📄 by @ashwinvaidya17 in https://github.com/openvinotoolkit/anomalib/pull/41
    • Feature/dick/anomaly score normalization by @djdameln in https://github.com/openvinotoolkit/anomalib/pull/35
    • Shuffle train dataloader by @samet-akcay in https://github.com/openvinotoolkit/anomalib/pull/42

    Full Changelog: https://github.com/openvinotoolkit/anomalib/compare/v0.2.2...v.0.2.3

  • v0.2.2(Dec 20, 2021)

    What's Changed

    • Add PR and Issue Templates by @samet-akcay in https://github.com/openvinotoolkit/anomalib/pull/30
    • Organize anomalib dependencies by @samet-akcay in https://github.com/openvinotoolkit/anomalib/pull/32
    • Limit parallel runners by @ashwinvaidya17 in https://github.com/openvinotoolkit/anomalib/pull/38
    • Bump lxml from 4.6.3 to 4.6.5 in /requirements by @dependabot in https://github.com/openvinotoolkit/anomalib/pull/37

    Full Changelog: https://github.com/openvinotoolkit/anomalib/compare/v0.2.1...v0.2.2

  • v0.2.1(Dec 16, 2021)

    What's Changed

    • Bump up anomalib version
    • Docs/dick/root readme by @samet-akcay in https://github.com/openvinotoolkit/anomalib/pull/31
    • Add wandb logger by @ashwinvaidya17 in https://github.com/openvinotoolkit/anomalib/pull/23

    Full Changelog: https://github.com/openvinotoolkit/anomalib/compare/v0.2.0...v0.2.1

  • v0.2.0(Dec 14, 2021)

    What's Changed

    • Address compatibility issues with OTE that are caused by the legacy code by @samet-akcay in https://github.com/openvinotoolkit/anomalib/pull/24
    • Initial docs string by @ashwinvaidya17 in https://github.com/openvinotoolkit/anomalib/pull/9
    • Load model did not work correctly as DFMModel did not inherit by @ashwinvaidya17 in https://github.com/openvinotoolkit/anomalib/pull/5
    • Refactor/samet/data by @samet-akcay in https://github.com/openvinotoolkit/anomalib/pull/8
    • Delete make.bat by @samet-akcay in https://github.com/openvinotoolkit/anomalib/pull/11
    • TorchMetrics by @djdameln in https://github.com/openvinotoolkit/anomalib/pull/7
    • ONNX node naming by @djdameln in https://github.com/openvinotoolkit/anomalib/pull/13
    • Add FPS counter to TimerCallback by @ashwinvaidya17 in https://github.com/openvinotoolkit/anomalib/pull/12

    New Contributors

    • @samet-akcay made their first contribution in https://github.com/openvinotoolkit/anomalib/pull/8
    • @djdameln made their first contribution in https://github.com/openvinotoolkit/anomalib/pull/7

    Full Changelog: https://github.com/openvinotoolkit/anomalib/commits/v0.2.0
