
Detect waste

AI4Good project for detecting waste in the environment. www.detectwaste.ml.

Our latest results were published in the Waste Management journal in the article titled Deep learning-based waste detection in natural and urban environments.

You can find more technical details in our technical report Waste detection in Pomerania: non-profit project for detecting waste in environment.

Did you know that we produce 300 million tons of plastic every year? And only a fraction of it is properly recycled.

The idea of the Detect Waste project is to use Artificial Intelligence to detect plastic waste in the environment. Our solution is applicable to both video and photography. Our goal is to use AI for Good.

Datasets

In the Detect Waste in Pomerania project we used nine publicly available datasets, plus additional data collected using Google Images Download.

For more details about the data we used, check our Jupyter notebooks with exploratory data analysis.
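
For a quick numeric summary of any COCO-format annotation file (for example, the files generated later in annotations/), a minimal pycocotools sketch could look like the one below; the path is an assumption:

    # Minimal sketch: summarize a COCO-format annotation file with pycocotools.
    # The path is an assumption -- point it at any JSON file in annotations/.
    from pycocotools.coco import COCO

    coco = COCO('annotations/annotations_train.json')

    cats = coco.loadCats(coco.getCatIds())
    print('categories:', [c['name'] for c in cats])
    print('images:', len(coco.getImgIds()))
    print('annotations:', len(coco.getAnnIds()))

    # Objects per category -- useful for spotting class imbalance.
    for c in cats:
        print(c['name'], len(coco.getAnnIds(catIds=[c['id']])))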

Data download (WIP)

  • TACO bboxes - in progress. The TACO dataset can be downloaded here. TACO bboxes will be available for download soon.

    Clone the TACO repository: git clone https://github.com/pedropro/TACO.git

    Install the requirements: pip3 install -r requirements.txt

    Download the annotated data: python3 download.py

  • UAVVaste

    Clone the UAVVaste repository: git clone https://github.com/UAVVaste/UAVVaste.git

    Install the requirements: pip3 install -r requirements.txt

    Download the annotated data: python3 main.py

  • TrashCan 1.0

    Download directly from the web (quote the URL so the shell does not interpret the &): wget 'https://conservancy.umn.edu/bitstream/handle/11299/214865/dataset.zip?sequence=12&isAllowed=y'

  • TrashICRA

    Download directly from the web (quote the URL so the shell does not interpret the &): wget 'https://conservancy.umn.edu/bitstream/handle/11299/214366/trash_ICRA19.zip?sequence=12&isAllowed=y'

  • MJU-Waste

    Download directly from Google Drive

  • Drinking Waste Classification

    To download it, you must first authenticate using a Kaggle API token. Read about it here

    kaggle datasets download -d arkadiyhacks/drinking-waste-classification

  • Wade-ai

    Clone the wade-ai repository: git clone https://github.com/letsdoitworld/wade-ai.git

    For COCO annotations check: majsylw/wade-ai/tree/coco-annotation

  • TrashNet - The dataset spans six classes: glass, paper, cardboard, plastic, metal, and trash.

    Clone the trashnet repository: git clone https://github.com/garythung/trashnet

  • waste_pictures - The dataset contains ~24k images grouped into 34 waste classes, for classification purposes.

    To download it, you must first authenticate using a Kaggle API token. Read about it here

    kaggle datasets download -d wangziang/waste-pictures

For more datasets check: waste-datasets-review
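
Most of the archives above are plain zip files. Below is a small sketch (with placeholder paths) that unpacks one and counts the images inside as a sanity check:

    # Sketch: extract a downloaded archive and count the images inside.
    # 'dataset.zip' and 'data/trashcan' are placeholder paths.
    import zipfile
    from pathlib import Path

    archive, target = Path('dataset.zip'), Path('data/trashcan')
    with zipfile.ZipFile(archive) as zf:
        zf.extractall(target)

    exts = {'.jpg', '.jpeg', '.png'}
    n_images = sum(1 for p in target.rglob('*') if p.suffix.lower() in exts)
    print(f'extracted {n_images} images to {target}')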

Data preprocessing

Multiclass training

To train only on the TACO dataset with detect-waste classes:

  • run annotations_preprocessing.py

    python3 annotations_preprocessing.py

    The new annotations will be saved in annotations/annotations_train.json and annotations/annotations_test.json.

    For binary detection (litter vs. background), also check the generated annotations saved in annotations/annotations_binary_train.json and annotations/annotations_binary_test.json (see the sketch below).
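
The binary conversion itself is simple. Here is an illustrative sketch (not the repo script) of collapsing every COCO category into a single litter class:

    # Illustrative sketch (not the repo script): collapse every COCO category
    # into a single 'litter' class, as in the binary annotation files.
    import json

    with open('annotations/annotations_train.json') as f:
        coco = json.load(f)

    coco['categories'] = [{'id': 1, 'name': 'litter', 'supercategory': 'litter'}]
    for ann in coco['annotations']:
        ann['category_id'] = 1  # every object becomes class 1: litter

    with open('annotations/annotations_binary_train.json', 'w') as f:
        json.dump(coco, f)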

Single class training

To train on one or more datasets with a single class:

  • run annotations_preprocessing_multi.py

    python3 annotations_preprocessing_multi.py

    The new annotations will be split and saved in annotations/binary_mixed_train.json and annotations/binary_mixed_test.json.

    An example bash file is in annotations_preprocessing_multi.sh and can be run with

    bash annotations_preprocessing_multi.sh

The script will automatically split all datasets into train and test sets with MultilabelStratifiedShuffleSplit (sketched below). Then it will convert the datasets to a single class - litter. Finally, all datasets are concatenated to form single train and test files: annotations/binary_mixed_train.json and annotations/binary_mixed_test.json.
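
A toy demonstration of the split, assuming the iterative-stratification package that provides MultilabelStratifiedShuffleSplit:

    # Toy demo of MultilabelStratifiedShuffleSplit, assuming the
    # iterative-stratification package (pip install iterative-stratification).
    import numpy as np
    from iterstrat.ml_stratifiers import MultilabelStratifiedShuffleSplit

    X = np.arange(6).reshape(-1, 1)        # one row per image
    y = np.array([[1, 0], [1, 1], [0, 1],  # which categories appear in each image
                  [1, 0], [0, 1], [1, 1]])

    msss = MultilabelStratifiedShuffleSplit(n_splits=1, test_size=0.2, random_state=2020)
    train_idx, test_idx = next(msss.split(X, y))
    print('train images:', train_idx, 'test images:', test_idx)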

For more details, check the annotations directory.

Models

To read more about previous work on waste detection, check litter-detection-review.

  • EfficientDet

    To train EfficientDet check efficientdet/README.md

    To train EfficientDet implemented in Pytorch Lightning check branch effdet_lightning

    We based our implementation on efficientdet-pytorch by Ross Wightman.

  • DETR

    To train DETR check detr/README.md (WIP)

    PyTorch training code and pretrained models for DETR (DEtection TRansformer). Its authors replaced the full, complex, hand-crafted object detection pipeline with a Transformer and matched Faster R-CNN with a ResNet-50 backbone, obtaining 42 AP on COCO while using half the computation (FLOPs) and the same number of parameters. Inference in 50 lines of PyTorch.

    For implementation details see End-to-End Object Detection with Transformers by Facebook.

  • Mask R-CNN

    To train Mask R-CNN check MaskRCNN/README.md

    Our implementation is based on this tutorial.

  • Faster R-CNN

    To train Faster R-CNN on TACO dataset check FastRCNN/README.md

  • Classification with ResNet50 and EfficientNet

    To train the chosen model check classifier/README.md

Example usage - model training

  1. Waste detection using EfficientDet

Our GitHub repository contains EfficientDet code already adjusted for our mixed dataset. To run single-class training, clone the repository, move to the efficientdet directory, install the necessary dependencies, and launch the train.py script with adjusted parameters: the path to images, the path to the directory with annotations (you can use ours, provided in the annotations directory), the model parameters, and its specific name. It can be done as in the example below.

python3 train.py path_to_all_images \
--ann_name ../annotations/binary_mixed --model tf_efficientdet_d2 \
--batch-size 4 --decay-rate 0.95 --lr .001 --workers 4 --warmup-epochs 5 \
--model-ema --dataset multi --pretrained --num-classes 1 --color-jitter 0.1 \
--reprob 0.2 --epochs 20 --device cuda:0
  2. Waste classification using EfficientNet

In this step, switch to the classifier directory. First, crop the waste objects from the waste images (the same images as in the previous step).

python3 cut_bbox_litter.py --src_img path_to_whole_images \
                           --dst_img path_to_destination_directory_for_images \
                           --square --zoom 1
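
What --square and --zoom do, roughly: each bounding box is expanded to a square and scaled around its center before cropping. An illustrative sketch (the actual logic lives in cut_bbox_litter.py and may differ):

    # Illustrative square crop with a zoom margin; the real logic lives in
    # cut_bbox_litter.py and may differ. bbox is COCO-style [x, y, w, h].
    from PIL import Image

    def crop_square(img, bbox, zoom=1.0):
        x, y, w, h = bbox
        cx, cy = x + w / 2, y + h / 2   # bbox center
        side = max(w, h) * zoom         # square side, scaled by the zoom factor
        left, top = int(cx - side / 2), int(cy - side / 2)
        return img.crop((left, top, left + int(side), top + int(side)))

    img = Image.open('example.jpg')     # placeholder image path
    crop_square(img, bbox=[40, 60, 120, 80], zoom=1.1).save('crop.jpg')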

If you use the unlabelled OpenLitterMap dataset, make pseudo-predictions using the previously trained EfficientDet and map them to the original OpenLitterMap annotations.

python3 sort_openlittermap.py \
                        --src_ann path_to_original_openlittermap_annotations \
                        --coco path_to_our_openlittermap_annotations \
                        --src_img path_to_whole_images \
                        --dst_img path_to_destination_directory_for_images

To run the classifier training from the command line, just type:

python train_effnet.py --data_img path/to/images/train/ \
                       --save path/to/checkpoint.ckpt \
                       --model efficientnet-b2 \
                       --gpu 0 \
                       --pseudolabel_mode per-batch
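
Once trained, the classifier can be applied to a single crop. A hedged inference sketch, assuming the efficientnet_pytorch package and a plain state_dict checkpoint (the repo's checkpoint format may differ):

    # Hedged inference sketch: classify one crop with a trained EfficientNet.
    # Assumes the efficientnet_pytorch package and a plain state_dict
    # checkpoint; the repo's checkpoint format may differ.
    import torch
    from PIL import Image
    from torchvision import transforms
    from efficientnet_pytorch import EfficientNet

    model = EfficientNet.from_name('efficientnet-b2', num_classes=8)
    model.load_state_dict(torch.load('path/to/checkpoint.ckpt', map_location='cpu'))
    model.eval()

    preprocess = transforms.Compose([
        transforms.Resize((260, 260)),  # B2 input resolution
        transforms.ToTensor(),
        transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
    ])

    x = preprocess(Image.open('crop.jpg').convert('RGB')).unsqueeze(0)
    with torch.no_grad():
        probs = model(x).softmax(dim=1)
    print('predicted class id:', probs.argmax(dim=1).item())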

Evaluation

We provide a make_predictions.py script to draw bounding boxes on a chosen image. For example, the script can be run on a GPU (id=0) with the following arguments:

    python make_predictions.py --save directory/to/save/image.png \
                               --detector path/to/detector/checkpoint.pth \
                               --classifier path/to/classifier/checkpoint.pth \
                               --img path/or/url/to/image --device cuda:0

or on a video with the --video argument:

    python make_predictions.py --save directory/to/save/frames \
                               --detector path/to/detector/checkpoint.pth \
                               --classifier path/to/classifier/checkpoint.pth \
                               --img path/to/video.mp4 --device cuda:0 --video \
                               --classes label0 label1 label2

If you managed to process all the frames, just run the following command from the directory where you saved the results:

    ffmpeg -i img%08d.jpg movie.mp4

Tracking experiments

For experiment tracking we mostly used neptune.ai. To use Neptune, follow the official Neptune tutorial on their website:

  • Log in to your account

  • Find and set your Neptune API token as an environment variable on your system (NEPTUNE_API_TOKEN should be added to ~/.bashrc)

  • Add your project_qualified_name in the train_*.py scripts

      neptune.init(project_qualified_name = 'YOUR_PROJECT_NAME/detect-waste')

    Currently it is set to our private detect-waste Neptune space.

  • Install the neptune-client library

      pip install neptune-client

For more details, check LINK.
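
A minimal logging sketch with the legacy neptune-client API shown above; the experiment name and parameters are placeholders:

    # Minimal sketch with the legacy neptune-client API used here;
    # experiment name and params are placeholders. NEPTUNE_API_TOKEN
    # must be set in the environment.
    import neptune

    neptune.init(project_qualified_name='YOUR_PROJECT_NAME/detect-waste')
    neptune.create_experiment(name='effdet-d2-binary',
                              params={'lr': 1e-3, 'batch_size': 4})

    for epoch in range(20):
        train_loss = 1.0 / (epoch + 1)        # stand-in for a real training step
        neptune.log_metric('train_loss', train_loss)

    neptune.stop()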

Our results

Detection/Segmentation task

| model | backbone | dataset | # classes | bbox AP@0.5 | bbox AP@0.5:0.95 | mask AP@0.5 | mask AP@0.5:0.95 |
|-------|----------|---------|-----------|-------------|------------------|-------------|------------------|
| DETR | ResNet-50 | TACO bboxes | 1 | 46.50 | 24.35 | x | x |
| DETR | ResNet-50 | TACO bboxes | 7 | 12.03 | 6.69 | x | x |
| DETR | ResNet-50 | *Multi | 1 | 50.68 | 27.69 | **54.80 | **32.17 |
| DETR | ResNet-101 | *Multi | 1 | 51.63 | 29.65 | 37.02 | 19.33 |
| Mask R-CNN | ResNet-50 | *Multi | 1 | 27.95 | 16.49 | 23.05 | 12.94 |
| Mask R-CNN | ResNeXt-101 | *Multi | 1 | 19.70 | 6.20 | 24.70 | 13.20 |
| EfficientDet-D2 | EfficientNet-B2 | TACO bboxes | 1 | 61.05 | x | x | x |
| EfficientDet-D2 | EfficientNet-B2 | TACO bboxes | 7 | 18.78 | x | x | x |
| EfficientDet-D2 | EfficientNet-B2 | Drink-waste | 4 | 99.60 | x | x | x |
| EfficientDet-D2 | EfficientNet-B2 | MJU-Waste | 1 | 97.74 | x | x | x |
| EfficientDet-D2 | EfficientNet-B2 | TrashCan v1 | 8 | 91.28 | x | x | x |
| EfficientDet-D2 | EfficientNet-B2 | Wade-AI | 1 | 33.03 | x | x | x |
| EfficientDet-D2 | EfficientNet-B2 | UAVVaste | 1 | 79.90 | x | x | x |
| EfficientDet-D2 | EfficientNet-B2 | Trash ICRA19 | 7 | 9.47 | x | x | x |
| EfficientDet-D2 | EfficientNet-B2 | *Multi | 1 | 74.81 | x | x | x |
| EfficientDet-D3 | EfficientNet-B3 | *Multi | 1 | 74.53 | x | x | x |
  • *Multi - name for the mixed open dataset (combining the datasets listed above) for the detection/segmentation task
  • ** results achieved with frozen weights from the detection task (after the addition of a mask head)

Classification task

| model | # classes | ACC | sampler | pseudolabeling |
|-------|-----------|-----|---------|----------------|
| EfficientNet-B2 | 8 | 73.02 | Weighted | per batch |
| EfficientNet-B2 | 8 | 74.61 | Random | per epoch |
| EfficientNet-B2 | 8 | 72.84 | Weighted | per epoch |
| EfficientNet-B4 | 7 | 71.02 | Random | per epoch |
| EfficientNet-B4 | 7 | 67.62 | Weighted | per epoch |
| EfficientNet-B2 | 7 | 72.66 | Random | per epoch |
| EfficientNet-B2 | 7 | 68.31 | Weighted | per epoch |
| EfficientNet-B2 | 7 | 74.43 | Random | None |
| ResNet-50 | 8 | 60.60 | Weighted | None |
  • 8 classes - the 8th class is an additional background category
  • we provide two methods to update pseudo-labels: per batch and per epoch (see the sketch below)
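
For illustration, a per-epoch pseudo-label refresh could look like the sketch below; this is an assumption about the mechanism, not the repo's train_effnet.py implementation:

    # Illustrative per-epoch pseudo-label refresh; an assumption about the
    # mechanism, not the repo's train_effnet.py implementation.
    import torch

    def update_pseudo_labels(model, unlabeled_loader, threshold=0.9, device='cuda:0'):
        """Keep only predictions the model is confident about."""
        model.eval()
        pseudo = []
        with torch.no_grad():
            for images in unlabeled_loader:          # batches of unlabeled crops
                probs = model(images.to(device)).softmax(dim=1)
                conf, labels = probs.max(dim=1)
                keep = (conf > threshold).cpu()      # confidence filter
                pseudo.append((images[keep], labels.cpu()[keep]))
        return pseudo

    # 'per epoch' refreshes pseudo-labels once per epoch and then trains on
    # labeled + pseudo-labeled crops; 'per batch' refreshes inside the loop.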

Citation

@article{MAJCHROWSKA2022274,
      title = {Deep learning-based waste detection in natural and urban environments},
      journal = {Waste Management},
      volume = {138},
      pages = {274-284},
      year = {2022},
      issn = {0956-053X},
      doi = {10.1016/j.wasman.2021.12.001},
      url = {https://www.sciencedirect.com/science/article/pii/S0956053X21006474},
      author = {Sylwia Majchrowska and Agnieszka Mikołajczyk and Maria Ferlin and Zuzanna Klawikowska
                and Marta A. Plantykow and Arkadiusz Kwasigroch and Karol Majek},
      keywords = {Object detection, Semi-supervised learning, Waste classification benchmarks,
                  Waste detection benchmarks, Waste localization, Waste recognition},
}

@misc{majchrowska2021waste,
      title={Waste detection in Pomerania: non-profit project for detecting waste in environment}, 
      author={Sylwia Majchrowska and Agnieszka Mikołajczyk and Maria Ferlin and Zuzanna Klawikowska
              and Marta A. Plantykow and Arkadiusz Kwasigroch and Karol Majek},
      year={2021},
      eprint={2105.06808},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}

Project Organization (WIP)


├── LICENSE
├── README.md          <- The top-level README for developers using this project.
├── annotations        <- annotations in json
│
├── classifier         <- implementation of CNNs for litter classification
│
├── detr               <- implementation of DETR for litter detection
│
├── efficientdet       <- implementation of EfficientDet for litter detection
│
├── fastrcnn           <- implementation of Faster R-CNN for litter detection
│
├── maskrcnn           <- implementation of Mask R-CNN for litter segmentation
│
├── notebooks          <- Jupyter notebooks
│
├── utils              <- source code with useful functions
│
├── requirements.txt   <- The requirements file for reproducing the analysis environment, e.g.
│                         generated with `pip freeze > requirements.txt`
├── setup.py           <- makes the project pip installable (pip install -e .) so src can be imported
├── src                <- Source code for use in this project.

Comments
  • About clasifiy-waste dataset


    Dear wimlds, thanks for your great effort.

    I tried the source code, but how do I create the classify-waste dataset? I can't reproduce it now. Can you share your classify-waste dataset for training?

    Thanks, Xuan Tran

    opened by XuanTr 5
  • How to use the data?


    Hello. I am building an object detector based on the YOLO family of models, and I would like to know how to use this dataset, specifically the TACO bboxes annotations, but I'm confused about how to use them. Could you help me? Thanks!

    opened by superfast852 2
  • Path to all images


    To run train.py of EfficientDet, does path_to_all_images mean that I have to take the images of each dataset and join them in just one folder?

    python3 train.py path_to_all_images
    --ann_name ../annotations/binary_mixed --model tf_efficientdet_d2
    --batch-size 4 --decay-rate 0.95 --lr .001 --workers 4 --warmup-epochs 5
    --model-ema --dataset multi --pretrained --num-classes 1 --color-jitter 0.1
    --reprob 0.2 --epochs 20 --device cuda:0

    opened by ver0z 2
  • classify-waste and Detect-waste


    Hi, thanks for your wonderful work in the domain. I have worked on underwater waste detection and now I want to use your datasets (classify-waste and detect-waste) for my model. Can you guide me on how and where to get these datasets?

    Thanks again

    opened by aliman80 1
  • Waste detection model loading error


    Dear wimlds, first of all, thanks for the great work.

    I trained an EfficientNet model for waste detection with "tf_efficientdet_d2" (multiclass training).

    But when I evaluate the model with demo.py, an error occurs during model loading.

    The error is as below,

    $> python3 demo.py --save ./results/image.png --checkpoint /home/jupyter/detect-waste/efficientdet/output/train/20220415-025926-tf_efficientdet_d2/checkpoint-0.pth.tar

    Traceback (most recent call last):
      File "demo.py", line 268, in <module>
        main(args)
      File "demo.py", line 235, in main
        model = set_model("tf_efficientdet_d2", num_classes, args.checkpoint, args.device)
      File "demo.py", line 221, in set_model
        checkpoint_path=checkpoint_path
      File "/home/jupyter/detect-waste/efficientdet/effdet/factory.py", line 14, in create_model
        checkpoint_path=checkpoint_path, checkpoint_ema=checkpoint_ema, **kwargs)
      File "/home/jupyter/detect-waste/efficientdet/effdet/factory.py", line 47, in create_model_from_config
        load_checkpoint(model, checkpoint_path, use_ema=checkpoint_ema)
      File "/opt/conda/lib/python3.7/site-packages/timm/models/helpers.py", line 64, in load_checkpoint
        model.load_state_dict(state_dict, strict=strict)
      File "/opt/conda/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1498, in load_state_dict
        self.__class__.__name__, "\n\t".join(error_msgs)))
    RuntimeError: Error(s) in loading state_dict for EfficientDet:
        size mismatch for class_net.predict.conv_pw.weight: copying a param with shape torch.Size([63, 112, 1, 1]) from checkpoint, the shape in current model is torch.Size([9, 112, 1, 1]).
        size mismatch for class_net.predict.conv_pw.bias: copying a param with shape torch.Size([63]) from checkpoint, the shape in current model is torch.Size([9]).

    Can you give some advice on how to solve this error?

    Thanks. Jong Wuk Son

    opened by jwson97 1
  • Changes in misc.py


    To prevent the error ImportError: cannot import name '_new_empty_tensor' from 'torchvision.ops', I just changed 2 lines that I believe came with the error, following the master branch of the DETR repository. Issue #417 is solved in the same way as #404.

    opened by ver0z 1
  • Upload weights?


    Thanks for the great work.

    Would you be willing to add trained weights (and possibly a single-file out-of-sample classification demo) to the repo? This way people could directly apply your best classifier to a photo of their choice without having to retrain the whole model themselves.

    opened by simonheb 1
  • openlittermap_downloader


    Thanks for your great project and for sharing the code of your great work! I ran openlittermap_downloader.py, but I can't download the database. Can you give me some advice? Thanks very much!

    opened by a07913838438 1
  • Bump numpy from 1.21.0 to 1.22.0 in /classifier

    Bumps numpy from 1.21.0 to 1.22.0.

    dependencies 
    opened by dependabot[bot] 0
  • Update numpy


    A Buffer Overflow vulnerability exists in NumPy 1.9.x in the PyArray_NewFromDescr_int function of ctors.c when specifying arrays of large dimensions (over 32) from Python code, which could let a malicious user cause a Denial of Service.

    opened by AgaMiko 0
  • Bump pillow from 9.0.0 to 9.0.1 in /classifier

    Bumps pillow from 9.0.0 to 9.0.1 (security fixes for CVE-2022-24303 and CVE-2022-22817).

    dependencies 
    opened by dependabot[bot] 0
  • Suggest to loosen the dependency on funcy


    Hi, your project detect-waste requires "funcy==1.15" as a dependency. After analyzing the source code, we found that some other versions of funcy are also suitable without affecting your project, i.e., funcy 1.16 and 1.17. Therefore, we suggest loosening the dependency on funcy from "funcy==1.15" to "funcy>=1.15,<=1.17" to avoid any possible conflict when importing more packages or in downstream projects that may use detect-waste.

    May I pull a request to loosen the dependency on funcy?

    By the way, could you please tell us whether such dependency analysis might be helpful for making dependency maintenance easier during your development?



    For your reference, here are details in our analysis.

    Your project detect-waste (commit id: 6113385147a68b5e0929c98ff970283bd98cb730) directly uses 3 APIs from the funcy package.

    funcy.seqs.lfilter, funcy.seqs.lremove, funcy.seqs.lmap
    
    

    From these, 14 functions are then indirectly called, including 9 of funcy's internal APIs and 5 outside APIs, as follows (neglecting some repeated function occurrences).

    [/wimlds-trojmiasto/detect-waste]
    +--funcy.seqs.lfilter
    |      +--funcy.compat.lfilter
    |      |      +--itertools.ifilter
    |      +--funcy.funcmakers.make_pred
    |      |      +--funcy.funcmakers.make_func
    |      |      |      +--funcy.strings.re_tester
    |      |      |      |      +--re.compile
    |      |      |      +--funcy.strings.re_finder
    |      |      |      |      +--funcy.strings._prepare
    |      |      |      |      |      +--re.compile
    |      |      |      |      |      +--funcy.strings._make_getter
    |      |      |      |      |      |      +--operator.methodcaller
    |      |      |      +--operator.itemgetter
    +--funcy.seqs.lremove
    |      +--funcy.seqs.remove
    |      |      +--funcy.funcmakers.make_pred
    +--funcy.seqs.lmap
    |      +--funcy.compat.lmap
    |      |      +--itertools.imap
    |      +--funcy.funcmakers.make_func
    

    We scanned funcy versions 1.15 through 1.17; the changed functions (diffs listed below) have no intersection with any function or API mentioned above (whether directly or indirectly called by this project).

    diff: 1.15(original) 1.16
    ['funcy.colls.has_path', 'funcy.calc.CacheMemory.__setitem__', 'funcy.decorators.has_1pos_and_kwonly', 'funcy.calc.CacheMemory.expire', 'funcy.calc.memoize', 'funcy.calc.SkipMemoization', 'funcy.flow.throttle', 'funcy.calc.CacheMemory', 'funcy._inspect.get_spec', 'funcy.decorators.decorator', 'funcy.calc.CacheMemory.clear', 'funcy.flow._ensure_exceptable', 'funcy.flow.reraise', 'funcy.funcs.curry', 'funcy.decorators.has_single_arg', 'funcy.calc.CacheMemory.__init__', 'funcy.calc._memory_decorator', 'funcy.funcs.autocurry', 'funcy.calc.SkipMemory', 'funcy.calc.CacheMemory.__getitem__', 'funcy.calc.cache', 'funcy.flow._is_exception_type', 'funcy.funcs.rcurry']
    
    diff: 1.15(original) 1.17
    ['funcy.colls.has_path', 'funcy.calc.CacheMemory.__setitem__', 'funcy.colls.zip_values', 'funcy.decorators.Call.__str__', 'funcy.colls.zip_dicts', 'funcy.decorators.has_1pos_and_kwonly', 'funcy.calc.CacheMemory.expire', 'funcy.calc.memoize', 'funcy.calc.SkipMemoization', 'funcy.flow.throttle', 'funcy.calc.CacheMemory', 'funcy._inspect.get_spec', 'funcy._inspect._sig_to_spec', 'funcy.decorators.decorator', 'funcy.calc.CacheMemory.clear', 'funcy.flow._ensure_exceptable', 'funcy.flow.reraise', 'funcy.funcs.curry', 'funcy.decorators.has_single_arg', 'funcy.calc.CacheMemory.__init__', 'funcy.flow.limit_error_rate', 'funcy.calc._memory_decorator', 'funcy.funcs.autocurry', 'funcy.decorators.Call', 'funcy.calc.SkipMemory', 'funcy.calc.CacheMemory.__getitem__', 'funcy.colls.del_in', 'funcy.decorators.Call.__repr__', 'funcy.calc.cache', 'funcy.flow._is_exception_type', 'funcy.funcs.rcurry']
    
    

    As for other packages, the APIs of @outside_package_name are called by funcy in the call graph and the dependencies on these packages also stay the same in our suggested versions, thus avoiding any outside conflict.

    Therefore, we believe that it is quite safe to loosen your dependency on funcy from "funcy==1.15" to "funcy>=1.15,<=1.17". This will improve the applicability of detect-waste and reduce the possibility of any further dependency conflict with other projects/packages.

    opened by Agnes-U 0
  • Bump pillow from 9.0.1 to 9.3.0 in /classifier

    Bumps pillow from 9.0.1 to 9.3.0.

    dependencies 
    opened by dependabot[bot] 0