This repo provides the source code for "Cross-Domain Adaptive Teacher for Object Detection".

Overview

License: CC BY-NC 4.0

This is the PyTorch implementation of our paper:
Cross-Domain Adaptive Teacher for Object Detection
Yu-Jhe Li, Xiaoliang Dai, Chih-Yao Ma, Yen-Cheng Liu, Kan Chen, Bichen Wu, Zijian He, Kris Kitani, Peter Vajda
IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2022

[Paper] [Project]

Installation

Prerequisites

  • Python ≥ 3.6
  • PyTorch ≥ 1.5 and torchvision that matches the PyTorch installation.
  • Detectron2 == 0.3 (the version used to develop and test this code)

Install the Python environment

To install the required dependencies in a Python virtual environment (e.g., venv for Python 3), run the following commands at the root of this repo:

$ python3 -m venv /path/to/new/virtual/environment
$ source /path/to/new/virtual/environment/bin/activate

For example:

$ mkdir python_env
$ python3 -m venv python_env/
$ source python_env/bin/activate
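
With the environment activated, install a PyTorch/torchvision pair that satisfies the prerequisites above. The pins below are only a hedged example for a CUDA 10.2 setup; pick the pair that matches your CUDA toolkit:

$ pip install torch==1.5.1 torchvision==0.6.1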

Build Detectron2 from Source

Follow the INSTALL.md to install Detectron2.
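
If you would rather use a prebuilt wheel than build from source, something like the following matches the Detectron2 == 0.3 requirement above. The index URL is an assumption keyed to CUDA 10.2 and PyTorch 1.5; adjust it per INSTALL.md for your setup:

$ python -m pip install detectron2==0.3 \
      -f https://dl.fbaipublicfiles.com/detectron2/wheels/cu102/torch1.5/index.html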

Dataset download

  1. Download the datasets

  2. Organize the datasets following the Cityscapes and PASCAL VOC formats, as shown below:

adaptive_teacher/
└── datasets/
    ├── cityscapes/
    │   ├── gtFine/
    │   │   ├── train/
    │   │   ├── test/
    │   │   └── val/
    │   └── leftImg8bit/
    │       ├── train/
    │       ├── test/
    │       └── val/
    ├── cityscapes_foggy/
    │   ├── gtFine/
    │   │   ├── train/
    │   │   ├── test/
    │   │   └── val/
    │   └── leftImg8bit/
    │       ├── train/
    │       ├── test/
    │       └── val/
    ├── VOC2012/
    │   ├── Annotations/
    │   ├── ImageSets/
    │   └── JPEGImages/
    ├── clipart/
    │   ├── Annotations/
    │   ├── ImageSets/
    │   └── JPEGImages/
    └── watercolor/
        ├── Annotations/
        ├── ImageSets/
        └── JPEGImages/
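
Note that the foggy images are rendered from the clear Cityscapes images, so the gtFine annotations are identical for both copies (see the "Why 2 separate annotation files?" comment below). A hedged sketch for populating the second copy, assuming the layout above:

$ cp -r datasets/cityscapes/gtFine datasets/cityscapes_foggy/gtFine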
    

Training

  • Train the Adaptive Teacher under PASCAL VOC (source) and Clipart1k (target):

python train_net.py \
      --num-gpus 8 \
      --config configs/faster_rcnn_R101_cross_clipart.yaml

  • Train the Adaptive Teacher under Cityscapes (source) and Foggy Cityscapes (target):

python train_net.py \
      --num-gpus 8 \
      --config configs/faster_rcnn_VGG_cross_city.yaml \
      OUTPUT_DIR output/exp_city
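
Both commands assume 8 GPUs with the batch sizes baked into the configs (16 labeled + 16 unlabeled, as in the tables below). A hedged single-GPU sketch that scales the batch sizes down via the SOLVER keys used in this repo's configs (the output directory name is illustrative); you will likely also want to scale the learning rate accordingly:

python train_net.py \
      --num-gpus 1 \
      --config configs/faster_rcnn_R101_cross_clipart.yaml \
      SOLVER.IMG_PER_BATCH_LABEL 4 SOLVER.IMG_PER_BATCH_UNLABEL 4 \
      OUTPUT_DIR output/exp_clipart_1gpu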

Resume the training

python train_net.py \
      --resume \
      --num-gpus 8 \
      --config configs/faster_rcnn_R101_cross_clipart.yaml \
      MODEL.WEIGHTS <your weight>.pth

Evaluation

python train_net.py \
      --eval-only \
      --num-gpus 8 \
      --config configs/faster_rcnn_R101_cross_clipart.yaml \
      MODEL.WEIGHTS <your weight>.pth

Results and Model Weights

Real to Artistic Adaptation:

| Backbone | Source set (labeled) | Target set (unlabeled) | Batch size                | AP@0.5 | Model Weights      | Comment                |
|----------|----------------------|------------------------|---------------------------|--------|--------------------|------------------------|
| R101     | VOC12                | Clipart1k              | 16 labeled + 16 unlabeled | 40.6   | link (coming soon) | Ours w/o discriminator |
| R101     | VOC12                | Clipart1k              | 16 labeled + 16 unlabeled | 49.3   | link (coming soon) | Ours in the paper      |
| R101+FPN | VOC12                | Clipart1k              | 16 labeled + 16 unlabeled | 51.2   | link (coming soon) | For future work        |

Weather Adaptation:

| Backbone  | Source set (labeled) | Target set (unlabeled) | Batch size                | AP@0.5 | Model Weights      | Comment                |
|-----------|----------------------|------------------------|---------------------------|--------|--------------------|------------------------|
| VGG16     | Cityscapes           | Foggy Cityscapes       | 16 labeled + 16 unlabeled | 48.7   | link (coming soon) | Ours w/o discriminator |
| VGG16     | Cityscapes           | Foggy Cityscapes       | 16 labeled + 16 unlabeled | 50.9   | link (coming soon) | Ours in the paper      |
| VGG16+FPN | Cityscapes           | Foggy Cityscapes       | 16 labeled + 16 unlabeled | 57.4   | link (coming soon) | For future work        |

Citation

If you use Adaptive Teacher in your research or wish to refer to the results published in the paper, please use the following BibTeX entry.

@inproceedings{li2022cross,
    title={Cross-Domain Adaptive Teacher for Object Detection},
    author={Li, Yu-Jhe and Dai, Xiaoliang and Ma, Chih-Yao and Liu, Yen-Cheng and Chen, Kan and Wu, Bichen and He, Zijian and Kitani, Kris and Vajda, Peter},
    booktitle={IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
    year={2022}
} 

Also, if you use Detectron2 in your research, please use the following BibTeX entry.

@misc{wu2019detectron2,
  author =       {Yuxin Wu and Alexander Kirillov and Francisco Massa and
                  Wan-Yen Lo and Ross Girshick},
  title =        {Detectron2},
  howpublished = {\url{https://github.com/facebookresearch/detectron2}},
  year =         {2019}
}

License

This project is licensed under the CC BY-NC 4.0 License, as found in the LICENSE file.

Comments
  • Distributed training failure

    Hi,

    When running the training code, I encountered the following issue.

    Exception during training:

    Traceback (most recent call last):
      File "/research/cbim/vast/tl601/projects/adaptive_teacher/adapteacher/engine/trainer.py", line 402, in train_loop
        self.run_step_full_semisup()
      File "/research/cbim/vast/tl601/projects/adaptive_teacher/adapteacher/engine/trainer.py", line 597, in run_step_full_semisup
        all_label_data, branch="supervised"
      File "/research/cbim/vast/tl601/anaconda3/envs/adapteacher/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl
        return forward_call(*input, **kwargs)
      File "/research/cbim/vast/tl601/anaconda3/envs/adapteacher/lib/python3.7/site-packages/torch/nn/parallel/distributed.py", line 787, in forward
        if torch.is_grad_enabled() and self.reducer._rebuild_buckets():
    RuntimeError: Expected to have finished reduction in the prior iteration before starting a new one. This error indicates that your module has parameters that were not used in producing loss. You can enable unused parameter detection by passing the keyword argument find_unused_parameters=True to torch.nn.parallel.DistributedDataParallel, and by making sure all forward function outputs participate in calculating loss. If you already have done the above, then the distributed data parallel module wasn't able to locate the output tensors in the return value of your module's forward function. Please include the loss function and the structure of the return value of forward of your module when reporting this issue (e.g. list, dict, iterable). Parameter indices which did not receive grad for rank 1: 66 67 68 69 70 71 72 73
    In addition, you can set the environment variable TORCH_DISTRIBUTED_DEBUG to either INFO or DETAIL to print out information about which particular parameters did not receive gradient on this rank as part of this error

    Then I added find_unused_parameters=True to the DistributedDataParallel() call, and the problem was solved.
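
    A minimal sketch of that workaround, assuming model and local_rank are already set up by the usual distributed launcher:

    from torch.nn.parallel import DistributedDataParallel

    model = DistributedDataParallel(
        model,
        device_ids=[local_rank],
        broadcast_buffers=False,
        find_unused_parameters=True,  # tolerate parameters unused in a given forward pass
    )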

    But now I have another issue.

    Exception during training:

    Traceback (most recent call last):
      File "/research/cbim/vast/tl601/projects/adaptive_teacher/adapteacher/engine/trainer.py", line 403, in train_loop
        self.run_step_full_semisup()
      File "/research/cbim/vast/tl601/projects/adaptive_teacher/adapteacher/engine/trainer.py", line 657, in run_step_full_semisup
        losses.backward()
      File "/research/cbim/vast/tl601/anaconda3/envs/adapteacher/lib/python3.7/site-packages/torch/_tensor.py", line 255, in backward
        torch.autograd.backward(self, gradient, retain_graph, create_graph, inputs=inputs)
      File "/research/cbim/vast/tl601/anaconda3/envs/adapteacher/lib/python3.7/site-packages/torch/autograd/__init__.py", line 149, in backward
        allow_unreachable=True, accumulate_grad=True)  # allow_unreachable flag
    RuntimeError: Expected to mark a variable ready only once. This error is caused by one of the following reasons: 1) Use of a module parameter outside the forward function. Please make sure model parameters are not shared across multiple concurrent forward-backward passes, or try to use _set_static_graph() as a workaround if this module graph does not change during training loop. 2) Reused parameters in multiple reentrant backward passes. For example, if you use multiple checkpoint functions to wrap the same part of your model, it would result in the same set of parameters been used by different reentrant backward passes multiple times, and hence marking a variable ready multiple times. DDP does not support such use cases in default. You can try to use _set_static_graph() as a workaround if your module graph does not change over iterations. Parameter at index 65 with name roi_heads.box_predictor.bbox_pred.bias has been marked as ready twice. This means that multiple autograd engine hooks have fired for this particular parameter during this iteration.

    Answers I found online suggest setting find_unused_parameters=False, but that brings back the previous error.

    I was wondering if you have a better solution.

    My environment: Detectron2 v0.5, PyTorch 1.9.0, CUDA 11.1.

    Thanks

    opened by litingfeng 9
  • Why 2 separate annotation files?

    In readme.md, it is instructed to put the gtFine folder into both the clear and foggy data directories. But the foggy images are created from the clear ones, so their annotations should be exactly the same. Indeed, there is only one annotation file in the Cityscapes download link. Are we supposed to put exactly the same gtFine folder in two different places? Or is it structured like this to support custom datasets? Because unsupervised images don't have to be created synthetically from clean images. Thanks

    opened by victor00070 4
  • AP NaN

    Hello,

    I formed a new target dataset in Pascal VOC format. As I understand it, the target dataset should be unlabeled, so I did not add .xml files to the Annotations folder of the target dataset. But how does evaluation on the unlabeled images work in the teacher model if there are no ground-truth boxes?

    Specifically, at every EVAL_PERIOD iteration this line returns NaN: https://github.com/facebookresearch/adaptive_teacher/blob/d57d20640ae314a42c43dd82b1c1e26e90fa4b95/adapteacher/evaluation/pascal_voc_evaluation.py#L305

    What should be done instead? Thanks!
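
    (For context: the NaN is consistent with a zero ground-truth count in VOC-style evaluation. A hedged illustration of the failure mode, not the repo's actual code:)

    import numpy as np

    npos = 0                # no .xml annotations -> zero ground-truth boxes for a class
    tp = np.array([0.0])    # cumulative true positives over detections
    rec = tp / float(npos)  # 0.0 / 0.0 -> nan, which then propagates into the AP
    print(rec)              # [nan]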

    opened by darkhan-s 4
  • some puzzles about using "branch.startswith("supervised")" in adapteacher/modeling/meta_arch/rcnn.py

    Hi, I see you write "if branch.startswith("supervised")" in line 217 of adapteacher/modeling/meta_arch/rcnn.py, and I am confused by it. I think there may be a problem when computing the loss on unlabeled data with pseudo-labels (line 605 of adapteacher/engine/trainer.py), which should run in the "supervised_target" branch. I think this will result in a wrong label for loss_D_img_s_pesudo. Please check it.
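
    (A minimal illustration of the concern: str.startswith matches both branch names, so a "supervised_target" pass also satisfies this check:)

    for branch in ("supervised", "supervised_target", "domain"):
        print(branch, branch.startswith("supervised"))
    # supervised True
    # supervised_target True
    # domain False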

    opened by gedebachichi 3
  • possible to wrap the teacher model by DistributedDataParallel?

    Hello, I'm trying to use your idea in my thesis work; thanks for your great idea and code! I set requires_grad=False for all the parameters in the teacher model and wrapped it in DistributedDataParallel. But with my own code, training gets stuck at loss.backward(), even though the losses are not NaN. If I lower the batch size and run on just 1 GPU, the code works fine, but with DistributedDataParallel the training gets stuck immediately.

    Would you have an idea about it? Is it because the exponential moving average somehow affects the computation graph? Thanks
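
    (For context, a minimal sketch of how an EMA teacher update typically stays out of the autograd graph; the keep rate matches SEMISUPNET.EMA_KEEP_RATE in the config quoted below, but this is not necessarily the repo's exact code:)

    import torch

    @torch.no_grad()  # the EMA update runs outside the autograd graph
    def update_teacher(student, teacher, keep_rate=0.9996):
        # teacher <- keep_rate * teacher + (1 - keep_rate) * student
        for t_param, s_param in zip(teacher.parameters(), student.parameters()):
            t_param.data.mul_(keep_rate).add_(s_param.data, alpha=1.0 - keep_rate)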

    opened by Weijiang-Xiong 2
  • Evaluation for each category

    Hi authors,

    Great work! I tried to reproduce the Adaptive Teacher, but I find the evaluation script only reports COCO-style metrics. Do you have a script that outputs per-category AP so that we can compare with the results in the paper?

    Thanks!

    opened by helq2612 2
  • Why discriminator is trained in supervised and target branch?

    I noticed that in rcnn.py, loss_D_img_s and loss_D_img_t are trained with a small weight. What is the meaning of these two lines of code?

    Is this a way to initialize the discriminator? Will it prevent the model from suffering model collapse caused by the discriminator?

    losses["loss_D_img_s"] = loss_D_img_s*0.001 losses["loss_D_img_t"] = loss_D_img_t*0.001

    Will the performance of the model be affected if the two lines above are removed and the model is trained with only the following two lines in the domain branch?

    losses["loss_D_img_s"] = loss_D_img_s losses["loss_D_img_t"] = loss_D_img_t

    opened by Pandaxia8 2
  • Regarding the Watercolor Dataset Config

    Hello, I am wondering whether there is a specific split file we have to run to obtain the VOC split for the watercolor dataset. The burn-in training for watercolor only trains on the 7 overlapping classes that span both VOC and watercolor. Is that part of the code missing, or is it assumed to be done by ourselves? Many thanks.

    opened by michaelku1 2
  • Question about Figure 4

    Thank you for your work. But I have a question about Figure 4 in the main paper.

    It seems that with only 10k iterations of source-only pre-training, the model achieves around 33.0 mAP, which significantly outperforms the well-trained source-only result (28.8). Does this mean the detectron2-implemented FRCNN works better?

    opened by tmp12316 2
  • how to change backbone to resnet

    The current code here uses Faster R-CNN with a VGG16 backbone, right? The paper mentions Mask R-CNN, but that would require segmentation annotations, and I only have bounding boxes for my dataset.

    Also, as the title says, how can I change the VGG backbone to ResNet-101 or ResNet-50?

    Thanks
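
    (The repo already ships an R101 config for the clipart task, so one hedged option is to start from it and override the backbone keys on the command line; whether a ResNet-50 then trains well is exactly the open question here:)

    python train_net.py \
          --num-gpus 8 \
          --config configs/faster_rcnn_R101_cross_clipart.yaml \
          MODEL.RESNETS.DEPTH 50 \
          MODEL.WEIGHTS detectron2://ImageNetPretrained/MSRA/R-50.pkl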

    opened by victor00070 1
  • Memory Leak

    Hi, thanks for your work.

    https://github.com/facebookresearch/adaptive_teacher/blob/d57d20640ae314a42c43dd82b1c1e26e90fa4b95/adapteacher/data/datasets/cityscapes_foggy.py#L79-L85

    Here, if we don't close the pool, it may cause a memory leak.

    pool.close()
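
    (A hedged sketch of the pattern rather than the repo's exact loader: using the pool as a context manager releases it once mapping finishes; load_annotation is a hypothetical worker function:)

    import multiprocessing as mp

    def load_all(files, num_workers=4):
        # __exit__ terminates the pool, so worker processes don't linger
        with mp.Pool(processes=num_workers) as pool:
            return pool.map(load_annotation, files)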
    
    opened by GeoffreyChen777 1
  • Error while loading pretrained model weights from `detectron2://ImageNetPretrained/MSRA/R-101.pkl` for training with custom dataset

    Hello authors, thanks for the code.

    I was trying to adapt the model to our custom dataset and faced the following issue.

    Traceback (most recent call last):
      File "models/adaptive_teacher/train_net.py", line 84, in <module>
        args=(args,),
      File "/no_backups/s1437/.pyenv/versions/adaptiveteacher/lib/python3.7/site-packages/detectron2/engine/launch.py", line 59, in launch
        daemon=False,
      File "/no_backups/s1437/.pyenv/versions/adaptiveteacher/lib/python3.7/site-packages/torch/multiprocessing/spawn.py", line 199, in spawn
        return start_processes(fn, args, nprocs, join, daemon, start_method='spawn')
      File "/no_backups/s1437/.pyenv/versions/adaptiveteacher/lib/python3.7/site-packages/torch/multiprocessing/spawn.py", line 157, in start_processes
        while not context.join():
      File "/no_backups/s1437/.pyenv/versions/adaptiveteacher/lib/python3.7/site-packages/torch/multiprocessing/spawn.py", line 118, in join
        raise Exception(msg)
    Exception: 
    
    -- Process 0 terminated with the following error:
    Traceback (most recent call last):
      File "/no_backups/s1437/.pyenv/versions/adaptiveteacher/lib/python3.7/site-packages/torch/multiprocessing/spawn.py", line 19, in _wrap
        fn(i, *args)
      File "/no_backups/s1437/.pyenv/versions/adaptiveteacher/lib/python3.7/site-packages/detectron2/engine/launch.py", line 94, in _distributed_worker
        main_func(*args)
      File "/misc/no_backups/s1437/DA-Multimodal-OD/models/adaptive_teacher/train_net.py", line 69, in main
        trainer.resume_or_load(resume=args.resume)
      File "/misc/no_backups/s1437/DA-Multimodal-OD/models/adaptive_teacher/adapteacher/engine/trainer.py", line 377, in resume_or_load
        self.cfg.MODEL.WEIGHTS, resume=resume
      File "/no_backups/s1437/.pyenv/versions/adaptiveteacher/lib/python3.7/site-packages/fvcore/common/checkpoint.py", line 229, in resume_or_load
        return self.load(path, checkpointables=[])
      File "/no_backups/s1437/.pyenv/versions/adaptiveteacher/lib/python3.7/site-packages/fvcore/common/checkpoint.py", line 158, in load
        incompatible = self._load_model(checkpoint)
      File "/misc/no_backups/s1437/DA-Multimodal-OD/models/adaptive_teacher/adapteacher/checkpoint/detection_checkpoint.py", line 28, in _load_model
        incompatible = self._load_student_model(checkpoint)
      File "/misc/no_backups/s1437/DA-Multimodal-OD/models/adaptive_teacher/adapteacher/checkpoint/detection_checkpoint.py", line 70, in _load_student_model
        self._convert_ndarray_to_tensor(checkpoint_state_dict)
      File "/no_backups/s1437/.pyenv/versions/adaptiveteacher/lib/python3.7/site-packages/fvcore/common/checkpoint.py", line 368, in _convert_ndarray_to_tensor
        for k in list(state_dict.keys()):
    AttributeError: 'NoneType' object has no attribute 'keys'
    

    Configuration used:

    _BASE_: "./Base-RCNN-C4.yaml"
    MODEL:
      META_ARCHITECTURE: "DAobjTwoStagePseudoLabGeneralizedRCNN"
      WEIGHTS: "detectron2://ImageNetPretrained/MSRA/R-101.pkl"
      MASK_ON: False
      RESNETS:
        DEPTH: 101
      PROPOSAL_GENERATOR:
        NAME: "PseudoLabRPN"
      PIXEL_MEAN: [87.0, 91.0, 95.0]
      # RPN:
      #   POSITIVE_FRACTION: 0.25
      ROI_HEADS:
        NAME: "StandardROIHeadsPseudoLab"
        LOSS: "CrossEntropy" # variant: "CrossEntropy"
        NUM_CLASSES: 4 # this doesn't include background.
      ROI_BOX_HEAD:
        NAME: "FastRCNNConvFCHead"
        NUM_FC: 2
        POOLER_RESOLUTION: 7
    SOLVER:
      LR_SCHEDULER_NAME: "WarmupTwoStageMultiStepLR"
      STEPS: (60000, 80000, 90000, 360000)
      FACTOR_LIST: (1, 1, 1, 1, 1)
      MAX_ITER: 100000
      IMG_PER_BATCH_LABEL: 4
      IMG_PER_BATCH_UNLABEL: 4
      BASE_LR: 0.04
      CHECKPOINT_PERIOD: 1000
    DATALOADER:
      SUP_PERCENT: 100.0
    DATASETS:
      CROSS_DATASET: True
      TRAIN_LABEL: ("train_clear_day",) #voc_2012_train
      TRAIN_UNLABEL: ("train_dense_fog_day",) #Clipart1k_train
      TEST: ("test_clear_day",) #Clipart1k_test
    SEMISUPNET:
      Trainer: "ateacher"
      BBOX_THRESHOLD: 0.8
      TEACHER_UPDATE_ITER: 1
      BURN_UP_STEP: 20000
      EMA_KEEP_RATE: 0.9996
      UNSUP_LOSS_WEIGHT: 1.0
      SUP_LOSS_WEIGHT: 1.0
      DIS_TYPE: "res4" #["concate","p2","multi"]
    TEST:
      EVAL_PERIOD: 1000
    

    Please have a look into it.

    opened by bitbeast18 0
  • ResNet backbone doesn't converge

    I changed the backbone from VGG to ResNet-50. First, I have to use a very low learning rate, otherwise training diverges. Second, the mAP is usually lower than with the VGG-backbone Adaptive Teacher, even when I train longer. How can I achieve good performance with a ResNet-50 backbone? I am using gradient clipping and a learning rate of 0.001 to stop the divergence.
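
    (A hedged sketch of those settings expressed as detectron2 command-line overrides; the clip value is illustrative:)

    python train_net.py \
          --num-gpus 8 \
          --config configs/faster_rcnn_R101_cross_clipart.yaml \
          SOLVER.BASE_LR 0.001 \
          SOLVER.CLIP_GRADIENTS.ENABLED True \
          SOLVER.CLIP_GRADIENTS.CLIP_TYPE value \
          SOLVER.CLIP_GRADIENTS.CLIP_VALUE 1.0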

    opened by victor00070 1
  • mAP predicted by student_model is sometimes higher than teacher

    I trained the model on my dataset, but the AP50 is unstable: I can get very different results with the same parameters. Moreover, the teacher's AP50 is sometimes lower than the student's AP50. Is this phenomenon normal in DAOD?


    opened by firekeepers 7
  • Why were the mAP and AP50 getting smaller with the increase of iterations

    Using detectron2 for the first time, there is a problem I didn't understand. When I trained the source-only AT model, I loaded the pretrained model from the model zoo, and the mAP and AP50 kept getting smaller as the number of iterations increased.

    opened by sysuzgg 1
Owner

Meta Research