
Self Supervised Learning with Fastai

Implementation of popular SOTA self-supervised learning algorithms as Fastai Callbacks.

Install

pip install self-supervised

Documentation

Please read the documentation here.

To go back to the GitHub repo, please click here.

Algorithms

Please read the papers or blog posts before getting started with an algorithm; you may also check out the documentation page of each algorithm to get a better understanding.

Here is the list of implemented self_supervised.vision algorithms:

  • SimCLR (V1 & V2)
  • MoCo
  • BYOL
  • SwAV
  • Barlow Twins
  • DINO

Here is the list of implemented self_supervised.multimodal algorithms:

  • CLIP
  • CLIP-MoCo (No paper, own idea)

For vision algorithms, all models from timm and fastai can be used as encoders.

For multimodal training, CLIP currently supports ViT-B/32 and ViT-L/14, following the best-performing architectures from the paper.

Simple Usage

Vision

SimCLR

from self_supervised.vision.simclr import *
dls = get_dls(resize, bs)  # user-defined DataLoaders; resize = initial image size, bs = batch size
# encoder = create_encoder("xresnet34", n_in=3, pretrained=False) # a fastai encoder
encoder = create_encoder("tf_efficientnet_b4_ns", n_in=3, pretrained=False) # a timm encoder
model = create_simclr_model(encoder, hidden_size=2048, projection_size=128)
aug_pipelines = get_simclr_aug_pipelines(size=size)  # size = crop size used for the augmented views
learn = Learner(dls, model, cbs=[SimCLR(aug_pipelines, temp=0.07)])
learn.fit_flat_cos(100, 1e-2)

MoCo

from self_supervised.vision.moco import *
dls = get_dls(resize, bs)
# encoder = create_encoder("xresnet34", n_in=3, pretrained=False) # a fastai encoder
encoder = create_encoder("tf_efficientnet_b4_ns", n_in=3, pretrained=False) # a timm encoder
model = create_moco_model(encoder, hidden_size=2048, projection_size=128)
aug_pipelines = get_moco_aug_pipelines(size=size)
learn = Learner(dls, model, cbs=[MOCO(aug_pipelines=aug_pipelines, K=128)])
learn.fit_flat_cos(100, 1e-2)

BYOL

from self_supervised.vision.byol import *
dls = get_dls(resize, bs)
# encoder = create_encoder("xresnet34", n_in=3, pretrained=False) # a fastai encoder
encoder = create_encoder("tf_efficientnet_b4_ns", n_in=3, pretrained=False) # a timm encoder
model = create_byol_model(encoder, hidden_size=2048, projection_size=128)
aug_pipelines = get_byol_aug_pipelines(size=size)
learn = Learner(dls, model, cbs=[BYOL(aug_pipelines=aug_pipelines)])
learn.fit_flat_cos(100, 1e-2)

SWAV

from self_supervised.vision.swav import *
dls = get_dls(resize, bs)
encoder = create_encoder("xresnet34", n_in=3, pretrained=False) # a fastai encoder
encoder = create_encoder("tf_efficientnet_b4_ns", n_in=3, pretrained=False) # a timm encoder
model = create_swav_model(encoder, hidden_size=2048, projection_size=128)
aug_pipelines = get_swav_aug_pipelines(num_crops=[2,6],
                                       crop_sizes=[128,96], 
                                       min_scales=[0.25,0.05],
                                       max_scales=[1.0,0.3])
learn = Learner(dls, model, cbs=[SWAV(aug_pipelines=aug_pipelines, crop_assgn_ids=[0,1], K=bs*2**6, queue_start_pct=0.5)])
learn.fit_flat_cos(100, 1e-2)

Barlow Twins

from self_supervised.vision.simclr import *
dls = get_dls(resize, bs)
# encoder = create_encoder("xresnet34", n_in=3, pretrained=False) # a fastai encoder
encoder = create_encoder("tf_efficientnet_b4_ns", n_in=3, pretrained=False) # a timm encoder
model = create_barlow_twins_model(encoder, hidden_size=2048, projection_size=128)
aug_pipelines = get_barlow_twins_aug_pipelines(size=size)
learn = Learner(dls, model, cbs=[BarlowTwins(aug_pipelines, lmb=5e-3)])
learn.fit_flat_cos(100, 1e-2)

DINO

from self_supervised.models.vision_transformer import *
from self_supervised.vision.dino import *
dls = get_dls(resize, bs)

deits16 = MultiCropWrapper(deit_small(patch_size=16, drop_path_rate=0.1))
dino_head = DINOHead(deits16.encoder.embed_dim, 2**16, norm_last_layer=True)
student_model = nn.Sequential(deits16, dino_head)

deits16 = MultiCropWrapper(deit_small(patch_size=16))
dino_head = DINOHead(deits16.encoder.embed_dim, 2**16, norm_last_layer=True)
teacher_model = nn.Sequential(deits16, dino_head)

dino_model = DINOModel(student_model, teacher_model)
aug_pipelines = get_dino_aug_pipelines(num_crops=[2,6],
                                       crop_sizes=[128,96], 
                                       min_scales=[0.25,0.05],
                                       max_scales=[1.0,0.3])
learn = Learner(dls, dino_model, cbs=[DINO(aug_pipelines=aug_pipelines)])
learn.fit_flat_cos(100, 1e-2)

Multimodal

CLIP

from self_supervised.multimodal.clip import *
dls = get_dls(...)
clip_tokenizer = ClipTokenizer()
vitb32_config_dict = vitb32_config(224, clip_tokenizer.context_length, clip_tokenizer.vocab_size)
clip_model = CLIP(**vitb32_config_dict, checkpoint=False, checkpoint_nchunks=0)
learn = Learner(dls, clip_model, loss_func=noop, cbs=[CLIPTrainer()])
learn.fit_flat_cos(100, 1e-2)

CLIP-MoCo

from self_supervised.multimodal.clip_moco import *
dls = get_dls(...)
clip_tokenizer = ClipTokenizer()
vitb32_config_dict = vitb32_config(224, clip_tokenizer.context_length, clip_tokenizer.vocab_size)
clip_model = CLIPMOCO(K=4096, m=0.999, **vitb32_config_dict, checkpoint=False, checkpoint_nchunks=0)
learn = Learner(dls, clip_model, loss_func=noop, cbs=[CLIPMOCOTrainer()])
learn.fit_flat_cos(100, 1e-2)

ImageWang Benchmarks

All of the algorithms implemented in this library have been evaluated on the ImageWang Leaderboard.

Overall, the algorithms rank as SwAV > MoCo > BYOL > SimCLR in most of the benchmarks. For details, you may inspect the history of the ImageWang Leaderboard on GitHub.

BarlowTwins is still under testing on ImageWang.

It should be noted that during these experiments no hyperparameter selection/tuning was done beyond using learn.lr_find() or making sanity checks over the data augmentations by visualizing batches, as sketched below. So there is still room for improvement, and the overall rankings of the algorithms may change based on your setup. Yet, the overall rankings are on par with the papers.
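
For reference, that entire tuning workflow amounts to something like the following sketch (assuming a learn object built as in the Simple Usage examples above):

learn.lr_find()           # pick a learning rate from the loss curve
# after at least one (short) fit, visually sanity-check the augmented views:
learn.sim_clr.show(n=5)   # SimCLR callback's show() method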

Contributing

Contributions and/or requests for new self-supervised algorithms are welcome. This repo will try to keep itself up to date with recent SOTA self-supervised algorithms.

Before raising a PR, please create a new branch with the name <self-supervised-algorithm>. You may refer to previous notebooks before implementing your Callback.

Please refer to the sections Developers Guide, Abbreviations Guide, and Style Guide from https://docs.fast.ai/dev-setup and note that the same rules apply for this library.

Comments
  • Self Supervised v.1.0.0 : Major-Breaking Update

    Current

    • [x] New augmentations module with kornia, torchvision, fastai pipes.
    • [x] New layers modules to support all fastai and timm models.
    • [x] Refactor all modules with new modules and fastai version >=2.2.5. Callback changes.
    • [x] Support for native to_fp16 with correct callback order.
    • [x] Multi sample augmentation visualizations with show() method for SimCLR and BYOL.
    • [x] Update all ImageWang example notebooks.
    • [x] Embedding extraction
    • [x] Custom Random Blur augmentation until kornia version 0.4.2. (current version == 0.4.1)
    • [x] Medical domain experiments/examples (NIH-CXR POC)
    • [x] MoCo
    • [x] Merge SimCLR and SimCLR-V2
    • [x] Distributed InfoNCE loss for SimCLR
    • [x] Identify remaining implementation details for each algorithm
    • [x] Allow queue in SwAV
    • [x] Put all current modules to new parent module vision
    • [x] Gradient Checkpointing for timm models (resnet, efficientnet) and fastai models
    • [x] New parent multimodal module - starting with CLIP / Distributed CLIP / MoCo CLIP
    • [x] Separate aug_pipelines.
    • [x] ZeRO optimizer in CLIP examples.
    • [x] Update README / examples / docs for newest version

    After release

    • Better docs
    • Embedding visualization
    • Update to latest kornia version 0.4.2.
    • Rand Augment pipe from timm.
    • Microsoft Hexa
    • ViewMaker
    v1.0.0 
    opened by KeremTurgutlu 26
  • SwAV: Difference to original implementation

    Hi Kerem, Thanks for sharing your code and results on imagewang. It was motivating to see positive results on a smaller dataset.

    In the last week, I was comparing your SwAV implementation with the official one from FAIR (https://github.com/facebookresearch/swav), mainly because I get much better (and faster) results with yours. Unfortunately, I was not able to find the relevant bit that makes your implementation better.

    Differences that I already integrated into the original implementation and that didn't make a difference are:

    • xresnet
    • LabelSmoothingCrossEntropy during fine-tuning
    • ranger optimizer
    • same training hyperparameters
    • same augmentation hyperparameters
    • disable extra features like queue, LARC, lr schedule

    I was wondering if you have an idea what else could make the difference? I am running out of ideas...

    Thanks in advance!

    discussion 
    opened by RGring 12
  • Is there any way to train a CLIP model without installing PyTorch from source?

    Hi,

    I have been trying to train a CLIP model from scratch. After editing the data loader functions in this code, when I start training with the following command, I get the following error.

    Running parameters: python -m fastai.launch "D:\Kariyer\Projects\YTU\YTU_Multi_Modal_Contrastive_Learning\Multi_Modal_Contrastive_Learning\Kerem_Turgutlu\examples\training_clip.py" --arch vitb32 --size 224 --bs 360 --epochs 24 --lr 1e-4 --use_grad_check True --grad_check_nchunks 2

    Error :

    Dataframe is read
    1533 10000
    Distributed training mode
    vitb32 True <class 'bool'> 2
    Traceback (most recent call last):
      File "D:\Kariyer\Projects\YTU\YTU_Multi_Modal_Contrastive_Learning\Multi_Modal_Contrastive_Learning\Kerem_Turgutlu\examples\training_clip.py", line 126, in <module>
        def main(
      File "C:\Users\Yusuf\anaconda3\lib\site-packages\fastcore\script.py", line 110, in call_parse
        return _f()
      File "C:\Users\Yusuf\anaconda3\lib\site-packages\fastcore\script.py", line 105, in _f
        tfunc(**merge(args, args_from_prog(func, xtra)))
      File "D:\Kariyer\Projects\YTU\YTU_Multi_Modal_Contrastive_Learning\Multi_Modal_Contrastive_Learning\Kerem_Turgutlu\examples\training_clip.py", line 212, in main
        learner.fit_flat_cos(epochs, lr, pct_start=0.25)
      File "C:\Users\Yusuf\anaconda3\lib\site-packages\fastai\callback\schedule.py", line 131, in fit_flat_cos
        if self.opt is None: self.create_opt()
      File "C:\Users\Yusuf\anaconda3\lib\site-packages\fastai\learner.py", line 149, in create_opt
        self.opt = self.opt_func(self.splitter(self.model), lr=self.lr)
      File "D:\Kariyer\Projects\YTU\YTU_Multi_Modal_Contrastive_Learning\Multi_Modal_Contrastive_Learning\Kerem_Turgutlu\examples\training_clip.py", line 167, in zero
        return OptimWrapper(ZeroRedundancyOptimizer(params, optimizer_class=torch.optim.Adam, lr=lr))
      File "D:\Kariyer\Projects\YTU\YTU_Multi_Modal_Contrastive_Learning\Multi_Modal_Contrastive_Learning\Kerem_Turgutlu\examples\zero_optimizer.py", line 173, in __init__
        self.world_size = dist.get_world_size(self.group)
      File "C:\Users\Yusuf\anaconda3\lib\site-packages\torch\distributed\distributed_c10d.py", line 638, in get_world_size
        return _get_group_size(group)
      File "C:\Users\Yusuf\anaconda3\lib\site-packages\torch\distributed\distributed_c10d.py", line 220, in _get_group_size
        _check_default_pg()
      File "C:\Users\Yusuf\anaconda3\lib\site-packages\torch\distributed\distributed_c10d.py", line 210, in _check_default_pg
        assert _default_pg is not None, \
    AssertionError: Default process group is not initialized
    

    After some googling that error, I found this solution. So I added the following code after line 166 in the zero_optimizer.py file.

    The code I added: dist.init_process_group(backend="mpi", group_name="main")

    It looked like this after I added it:

            ...
            self._all_params = params
            self._reference_is_trainable_mask = list(map(_is_trainable, self._all_params))
            #print(torch.distributed.is_available())
            #print(torch.distributed.get_backend(group=None))
            dist.init_process_group(backend="mpi", group_name="main")
    
            # Build the wrapped optimizer, responsible for a shard of the params
            self.group = group if group is not None else dist.group.WORLD
            ...
    

    After applying that solution, I encountered another error, and I understood that I must install PyTorch from source.

    Error:

    Dataframe is read
    1533 10000
    Distributed training mode
    vitb32 True <class 'bool'> 2
    Traceback (most recent call last):
      File "D:\Kariyer\Projects\YTU\YTU_Multi_Modal_Contrastive_Learning\Multi_Modal_Contrastive_Learning\Kerem_Turgutlu\examples\training_clip.py", line 126, in <module>
        def main(
      File "C:\Users\Yusuf\anaconda3\lib\site-packages\fastcore\script.py", line 110, in call_parse
        return _f()
      File "C:\Users\Yusuf\anaconda3\lib\site-packages\fastcore\script.py", line 105, in _f
        tfunc(**merge(args, args_from_prog(func, xtra)))
      File "D:\Kariyer\Projects\YTU\YTU_Multi_Modal_Contrastive_Learning\Multi_Modal_Contrastive_Learning\Kerem_Turgutlu\examples\training_clip.py", line 212, in main
        learner.fit_flat_cos(epochs, lr, pct_start=0.25)
      File "C:\Users\Yusuf\anaconda3\lib\site-packages\fastai\callback\schedule.py", line 131, in fit_flat_cos
        if self.opt is None: self.create_opt()
      File "C:\Users\Yusuf\anaconda3\lib\site-packages\fastai\learner.py", line 149, in create_opt
        self.opt = self.opt_func(self.splitter(self.model), lr=self.lr)
      File "D:\Kariyer\Projects\YTU\YTU_Multi_Modal_Contrastive_Learning\Multi_Modal_Contrastive_Learning\Kerem_Turgutlu\examples\training_clip.py", line 167, in zero
        return OptimWrapper(ZeroRedundancyOptimizer(params, optimizer_class=torch.optim.Adam, lr=lr))
      File "D:\Kariyer\Projects\YTU\YTU_Multi_Modal_Contrastive_Learning\Multi_Modal_Contrastive_Learning\Kerem_Turgutlu\examples\zero_optimizer.py", line 169, in __init__
        dist.init_process_group(backend="mpi", group_name="main")
      File "C:\Users\Yusuf\anaconda3\lib\site-packages\torch\distributed\distributed_c10d.py", line 422, in init_process_group
        _default_pg = _new_process_group_helper(
      File "C:\Users\Yusuf\anaconda3\lib\site-packages\torch\distributed\distributed_c10d.py", line 495, in _new_process_group_helper
        raise RuntimeError(
    RuntimeError: Distributed package doesn't have MPI built in. MPI is only included if you build PyTorch from source on a host that has MPI installed.
    

    I have tried many times to install PyTorch from source on my Windows machine, but I haven't managed it yet. I also tried the same steps on Google Colab, but that didn't work either.

    Is there any way to train CLIP with a normal PyTorch installation, or am I missing something?

    Can you share an example Colab notebook for CLIP?

    opened by yusufani 7
  • Proposal: Remove implicit augmentation-building pipeline in callbacks

    This is definitely a topic where it will come down to your personal preference. In my opinion, I would remove the initialization of the augmentation functions in the callbacks.

    From a new user perspective

    Let's take a look from the perspective of a new user. I am a developer who has never really done anything with self-supervised learning. I notice your repository because it uses fastai and is awesome. ;) After skimming the documentation, I decided to test it out with SimCLR and saw that I only need to use your Callback. If I've already worked with MixUp, the callback style is familiar to me. So I build up my item_tfm and batch_tfm pipeline. I use your SWAV() callback without any changes, as it already has default values for everything. At this point, I may still not have realized that these models have a special augmentation pipeline and that the augmentation is crucial for the performance. If the user skips to the Example usage section, they may miss the step of building a data-augmentation pipeline.

    I understand the desire to use a single transformation pipeline for all models and to give the user the ability to very quickly train the model in an unsupervised manner, but I do believe that the user should always provide the pipeline or a list of pipeline transformations.

    From my perspective

    I would like to use your code with wandb sweeps. In this scenario, I rewrite parts of my CLI code just so that the wandb sweep scheduler can supply the parameters without any modifications. Specifically, I usually end up having CLI arguments such as resize-scale-min and resize-scale-max, just so that it is easier to optimize both values independently with wandb. This is just one example of why I would like to use my own transformation pipeline (I also have other reasons, but I believe that some customization of the augmentation function will be done by more experienced developers). Now let's take a look at SWAV:

    for nc, size, mins, maxs in zip(num_crops, crop_sizes, min_scales, max_scales):
        self.augs += [aug_func(size, resize_scale=(mins, maxs), **aug_kwargs) for i in range(nc)]
    if print_augs:
        for aug in self.augs: print(aug)
    

    Here we can see that even if you allow me to use a custom aug_func, I still need to look up the source code and modify my function signature to comply with this calling signature. Sure, we could document this necessity, but then, like in my example above, maybe I have reasons why I would like a different signature?

    It also couples parts together that, in my opinion, do not belong together. The multi-crop augmentation trick was introduced in the SwAV paper, but maybe I would like to test it without the multi-crop technique because my images are very low resolution?

    Plus, I will always have to look up what arguments I can pass into your augmentation function. Here it would be nice to have some auto-complete features.

    Proposal

    Instead of supplying the function and assembling it inside the init function, let's get the pipeline directly. I would recommend making this parameter non-optional:

    • The user will be aware that this pipeline will be used to generate the different views
    • The augmentation pipeline stays completely customizable for the user
    • The normal user who wants to use your augmentation pipeline can simply call your function and will get full auto-completion + documentation from it.
      • Requires an extra wrapper that returns the pipeline, but that shouldn't be a problem with delegates
      • Yes, for the SWAV scenario the user would maybe need to call another function that generates the different view-pipelines, but then it is obvious that the multi-crop is strictly speaking not necessary for the SWAV callback
    • The main difference for the examples would only be a single line, where the pipeline/list of pipelines is generated, but again, I believe that this makes the general flow more obvious

    In general, we would then have to double-check that pipeline.idx is set to 0 and that either a full list for each branch is provided or a single pipeline for all branches.

    I would really like to hear your comments on this, and I am happy to provide some suggestions on how I would implement it if you consider this to be a reasonable change :)
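
    A rough sketch of what I have in mind (hypothetical code, names are only illustrative, not the library's actual implementation):

    from fastai.vision.all import *

    class SWAV(Callback):
        "Sketch: the callback receives ready-made pipelines instead of building them"
        def __init__(self, aug_pipelines, crop_assgn_ids=[0,1], K=None, queue_start_pct=0.25):
            # aug_pipelines: a user-built list of Pipelines, one per view/crop.
            # No aug_func is called internally, so any custom signature works.
            assert len(aug_pipelines) > 0, "provide at least one augmentation Pipeline"
            store_attr()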

    opened by kai-tub 5
  • Wrong `max_n` value in SimCLR fastai_update branch

    Thanks for updating the show method so quickly! But I think you've made a typo in SimCLR:

        @torch.no_grad()
        def show(self, n=1):
            bs = self.learn.x.size(0)//2
            x1,x2  = torch.split(self.learn.x, [bs,bs])
            idxs = np.random.choice(range(bs),n,False)
            x1 = self.aug1.decode(x1[idxs].to('cpu').clone()).clamp(0,1)
            x2 = self.aug2.decode(x2[idxs].to('cpu').clone()).clamp(0,1)
            images = []
            for i in range(n): images += [x1[i],x2[i]] 
            return show_batch(x1[0], None, images, max_n=n * n, ncols=None, nrows=n)
    

    max_n shouldn't be n * n but 2n. This will not impact the functionality but is technically wrong. I would suggest simply using max_n=len(images). Then it will never be out of sync, and it would be more obvious.
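
    Applied to the snippet above, the fix would change only the return line:

        return show_batch(x1[0], None, images, max_n=len(images), ncols=None, nrows=n)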

    Sadly it still doesn't work for my special tensor class, because torch.split seems to remove the metadata information of the fastai tensor classes... But that is a bug in torch/fastai, not in your code base. :)

    opened by kai-tub 5
  • Hotfix for kornia augmentation import format

    This PR changes the importing of the kornia augmentations, which fixes the issue raised here: https://github.com/KeremTurgutlu/self_supervised/issues/72

    hotfix 
    opened by jimmiemunyi 4
  • ImportError: cannot import name 'augmentation' from 'kornia.augmentation'

    Please confirm your versions of fastai, timm, kornia, fastcore, and nbdev prior to reporting a bug

    fastai - 2.5.3
    timm - 0.5.4
    kornia - 0.6.3
    fastcore - 1.3.27
    nbdev - 1.1.23

    Describe the bug

    Cannot get past the imports; I get the error ImportError: cannot import name 'augmentation' from 'kornia.augmentation'. I installed the package as advised, pip install self-supervised. I am actually running the tutorials from your documentation as they are, in Colab.

    To Reproduce

    Steps to reproduce the behavior:

    1. pip install -Uqq self-supervised
    2. The necessary import:
    
    from self_supervised.augmentations import *
    from self_supervised.layers import *
    from self_supervised.vision.simclr import *
    

    Expected behavior

    Import successful.

    Error with full stack trace

    ImportError                               Traceback (most recent call last)
    [<ipython-input-3-5df5ab89f4c7>](https://localhost:8080/#) in <module>()
    ----> 1 from self_supervised.augmentations import *
          2 from self_supervised.layers import *
          3 from self_supervised.vision.simclr import *
    
    [/usr/local/lib/python3.7/dist-packages/self_supervised/augmentations.py](https://localhost:8080/#) in <module>()
          6 # Cell
          7 from fastai.vision.all import *
    ----> 8 from kornia.augmentation import augmentation as korniatfm
          9 import torchvision.transforms as tvtfm
         10 import kornia
    
    ImportError: cannot import name 'augmentation' from 'kornia.augmentation' (/usr/local/lib/python3.7/dist-packages/kornia/augmentation/__init__.py)
    

    opened by jimmiemunyi 4
  • Barlow twins

    • Adding Barlow Twins to vision module.
    • Minor changes in docs, utility function arguments for mlp module creation.
    • Removed GaussianBlur code, now it's available with kornia v0.5
    • Finalized Barlow Twins Imagewang experiments.
    barlow-twins 
    opened by KeremTurgutlu 4
  • Loss not decreasing in SWAV pretext task

    The discussion link is not working, so I posted here. I am using SwAV with efficientnet-b4, the Adam optimizer, and lr = 0.311. My dataset is small (around 1200 images), 370x300 in dimension, and black and white (1-channel). For some reason the train and valid loss don't decrease beyond the 1st epoch.

    What I have already tried :

    1. Removing normalization
    2. Increasing / decreasing batch size
    3. Setting pretrained = False
    4. Changing projection_size = 128

    None gave any improvement.

    Here is the code to load data and prepare the model

    
    from self_supervised.vision.swav import *
    from self_supervised.layers import create_encoder
    
    item_tfms = [Resize((376, 300), method='squish')]
    batch_tfms = [Normalize.from_stats(*imagenet_stats)]
    bs = 8
    
    dls = ImageDataLoaders.from_df(all_data, valid_col = 'is_valid', item_tfms = item_tfms, fn_col = 'img',
                                   batch_tfms = None, bs = bs, label_col = 'label')

    encoder = create_encoder("tf_efficientnet_b4_ns", n_in=3, pretrained=True)

    model = create_swav_model(encoder, hidden_size=2048, projection_size=512)
    aug_pipelines = get_swav_aug_pipelines(num_crops=[2,6],
                                           crop_sizes=[128,96], 
                                           min_scales=[0.25,0.05],
                                           max_scales=[1.0,0.3])
    
    learn = Learner(dls, model, cbs=[SWAV(aug_pipelines=aug_pipelines, crop_assgn_ids=[0,1], K=bs*2**6, queue_start_pct=0.5)])
    

    The data cannot be shared but is similar in nature to medical scans.

    opened by trancenoid 3
  • learn.summary() raises exception with MOCO trainer

    Please confirm your versions of fastai, timm, kornia, fastcore, and nbdev prior to reporting a bug

    • torch 1.7.0, torchvision 0.8.1, fastai 2.2.5, using latest self_supervised build as of 4/15.

    Describe the bug

    • learn.summary() raises an exception when fitting; see MOCO(Callback): raise Exception("Key encoder and queue is already defined")

    To Reproduce

    • Use the notebook in the git repo and insert learn.summary() before learn.fit() to reproduce the bug

    Expected behavior

    • learn.summary() should not cause errors in fitting
    • A possible workaround is to remove the exception, just reinitialize the encoder and queue, and issue a warning when the encoder and queue have been defined before. Not sure if this will cause training issues though...
    opened by ai-padawan 3
  • `TypeError: RAdam() got an unexpected keyword argument 'sqrmom'`

    Describe the bug

    The fine-tuning steps raise the error TypeError: RAdam() got an unexpected keyword argument 'sqrmom' in at least the SimCLR and SwAV tutorials (e.g. finetune(size, epochs=5, arch='xresnet34', encoder_path=f'models/swav_iwang_sz{size}_epc100_encoder.pth') in https://keremturgutlu.github.io/self_supervised/training_swav_iwang.html).

    To Reproduce

    Steps to reproduce the behavior:

    1. Open https://colab.research.google.com/github/KeremTurgutlu/self_supervised/blob/master/examples/training_swav_iwang.ipynb
    2. Go to Runtime > Change runtime type in the menu and set Hardware accelerator to GPU.
    3. Create a new code cell at the top with contents !pip install self-supervised.
    4. Comment out from fastai.callback.wandb import WandbCallback and import wandb.
    5. Change bs, resize, size = 96, 256, 224 to bs, resize, size = 48, 256, 224 to avoid a CUDA out of memory error.
    6. Optional: change lr,wd,epochs=1e-2,1e-2,100 to lr,wd,epochs=1e-2,1e-2,1 and finetune(size, epochs=5, arch='xresnet34', encoder_path=f'models/swav_iwang_sz{size}_epc100_encoder.pth' to finetune(size, epochs=5, arch='xresnet34', encoder_path=f'models/swav_iwang_sz{size}_epc1_encoder.pth' to get to the error much faster.
    7. Run all cells.

    Expected behavior

    I expect the model to train without errors.

    Error with full stack trace

    ---------------------------------------------------------------------------
    TypeError                                 Traceback (most recent call last)
    <ipython-input-27-30032d6c1d48> in <module>()
          1 acc = []
          2 runs = 5
    ----> 3 for i in range(runs): acc += [finetune(size, epochs=5, arch='xresnet34', encoder_path=f'models/swav_iwang_sz{size}_epc1_encoder.pth')]
    
    4 frames
    /usr/local/lib/python3.7/dist-packages/fastai/optimizer.py in ranger(p, lr, mom, wd, eps, **kwargs)
        319 def ranger(p, lr, mom=0.95, wd=0.01, eps=1e-6, **kwargs):
        320     "Convenience method for `Lookahead` with `RAdam`"
    --> 321     return Lookahead(RAdam(p, lr=lr, mom=mom, wd=wd, eps=eps, **kwargs))
        322 
        323 # Cell
    
    TypeError: RAdam() got an unexpected keyword argument 'sqrmom'
    

    Additional context

    The notebook runs fine if you remove sqrmom from optdict = dict(sqrmom=0.99,mom=0.95,beta=0.,eps=1e-4), but I suspect the preferred solution involves calling a different function that accepts sqrmom.

    I also get this error when I run the notebook locally after pip install self-supervised in a fresh pyenv-virtualenv environment.

    My library versions look right to me.

    opened by gsganden 3
  • TResnet models are not working as encoder for SimCLR

    I wanted to try:

    arch = 'tresnet_m_miil_in21k'
    encoder = create_encoder(arch, pretrained=True, n_in=3)

    But got this error:

    Traceback (most recent call last):
      File "train_affectnet_simclr.py", line 58, in <module>
        model = create_simclr_model(encoder, hidden_size=2048, projection_size=128)
      File "/usr/local/lib/python3.7/dist-packages/self_supervised/vision/simclr.py", line 20, in create_simclr_model
        with torch.no_grad(): representation = encoder(torch.randn((2,n_in,128,128)))
      File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py", line 1130, in _call_impl
        return forward_call(*input, **kwargs)
      File "/usr/local/lib/python3.7/dist-packages/timm/models/tresnet.py", line 274, in forward
        x = self.forward_features(x)
      File "/usr/local/lib/python3.7/dist-packages/timm/models/tresnet.py", line 268, in forward_features
        return self.body(x)
      File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py", line 1130, in _call_impl
        return forward_call(*input, **kwargs)
      File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/container.py", line 139, in forward
        input = module(input)
      File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py", line 1130, in _call_impl
        return forward_call(*input, **kwargs)
      File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/container.py", line 139, in forward
        input = module(input)
      File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py", line 1130, in _call_impl
        return forward_call(*input, **kwargs)
      File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/conv.py", line 457, in forward
        return self._conv_forward(input, self.weight, self.bias)
      File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/conv.py", line 454, in _conv_forward
        self.padding, self.dilation, self.groups)
    RuntimeError: Given groups=1, weight of size [64, 48, 3, 3], expected input[2, 768, 32, 32] to have 48 channels, but got 768 channels instead

    opened by kankanar 0
  • Bump nokogiri from 1.13.2 to 1.13.9 in /docs

    Bumps nokogiri from 1.13.2 to 1.13.9.


    Changelog

    Sourced from nokogiri's changelog.

    1.13.9 / 2022-10-18

    Security

    Dependencies

    • [CRuby] Vendored libxml2 is updated to v2.10.3 from v2.9.14.
    • [CRuby] Vendored libxslt is updated to v1.1.37 from v1.1.35.
    • [CRuby] Vendored zlib is updated from 1.2.12 to 1.2.13. (See LICENSE-DEPENDENCIES.md for details on which packages redistribute this library.)

    Fixed

    • [CRuby] Nokogiri::XML::Namespace objects, when compacted, update their internal struct's reference to the Ruby object wrapper. Previously, with GC compaction enabled, a segmentation fault was possible after compaction was triggered. [#2658] (Thanks, @​eightbitraptor and @​peterzhu2118!)
    • [CRuby] Document#remove_namespaces! now defers freeing the underlying xmlNs struct until the Document is GCed. Previously, maintaining a reference to a Namespace object that was removed in this way could lead to a segfault. [#2658]

    1.13.8 / 2022-07-23

    Deprecated

    • XML::Reader#attribute_nodes is deprecated due to incompatibility between libxml2's xmlReader memory semantics and Ruby's garbage collector. Although this method continues to exist for backwards compatibility, it is unsafe to call and may segfault. This method will be removed in a future version of Nokogiri, and callers should use #attribute_hash instead. [#2598]

    Improvements

    • XML::Reader#attribute_hash is a new method to safely retrieve the attributes of a node from XML::Reader. [#2598, #2599]

    Fixed

    • [CRuby] Calling XML::Reader#attributes is now safe to call. In Nokogiri <= 1.13.7 this method may segfault. [#2598, #2599]

    1.13.7 / 2022-07-12

    Fixed

    XML::Node objects, when compacted, update their internal struct's reference to the Ruby object wrapper. Previously, with GC compaction enabled, a segmentation fault was possible after compaction was triggered. [#2578] (Thanks, @​eightbitraptor!)

    1.13.6 / 2022-05-08

    Security

    • [CRuby] Address CVE-2022-29181, improper handling of unexpected data types, related to untrusted inputs to the SAX parsers. See GHSA-xh29-r2w5-wx8m for more information.

    ... (truncated)

    Commits
    • 897759c version bump to v1.13.9
    • aeb1ac3 doc: update CHANGELOG
    • c663e49 Merge pull request #2671 from sparklemotion/flavorjones-update-zlib-1.2.13_v1...
    • 212e07d ext: hack to cross-compile zlib v1.2.13 on darwin
    • 76dbc8c dep: update zlib to v1.2.13
    • 24e3a9c doc: update CHANGELOG
    • 4db3b4d Merge pull request #2668 from sparklemotion/flavorjones-namespace-scopes-comp...
    • 73d73d6 fix: Document#remove_namespaces! use-after-free bug
    • 5f58b34 fix: namespace nodes behave properly when compacted
    • b08a858 test: repro namespace_scopes compaction issue
    • Additional commits viewable in compare view


    dependencies 
    opened by dependabot[bot] 0
  • Bump addressable from 2.7.0 to 2.8.1 in /docs

    Bumps addressable from 2.7.0 to 2.8.1.

    Changelog

    Sourced from addressable's changelog.

    Addressable 2.8.1

    • refactor Addressable::URI.normalize_path to address linter offenses (#430)
    • remove redundant colon in Addressable::URI::CharacterClasses::AUTHORITY regex (#438)
    • update gemspec to reflect supported Ruby versions (#466, #464, #463)
    • compatibility w/ public_suffix 5.x (#466, #465, #460)
    • fixes "invalid byte sequence in UTF-8" exception when unencoding URLs containing non UTF-8 characters (#459)
    • Ractor compatibility (#449)
    • use the whole string instead of a single line for template match (#431)
    • force UTF-8 encoding only if needed (#341)

    #460: sporkmonger/addressable#460
    #463: sporkmonger/addressable#463
    #464: sporkmonger/addressable#464
    #465: sporkmonger/addressable#465
    #466: sporkmonger/addressable#466

    Addressable 2.8.0

    • fixes ReDoS vulnerability in Addressable::Template#match
    • no longer replaces + with spaces in queries for non-http(s) schemes
    • fixed encoding ipv6 literals
    • the :compacted flag for normalized_query now dedupes parameters
    • fix broken escape_component alias
    • dropping support for Ruby 2.0 and 2.1
    • adding Ruby 3.0 compatibility for development tasks
    • drop support for rack-mount and remove Addressable::Template#generate
    • performance improvements
    • switch CI/CD to GitHub Actions
    Commits
    • 8657465 Update version, gemspec, and CHANGELOG for 2.8.1 (#474)
    • 4fc5bb6 CI: remove Ubuntu 18.04 job (#473)
    • 860fede Force UTF-8 encoding only if needed (#341)
    • 99810af Merge pull request #431 from ojab/ct-_do_not_parse_multiline_strings
    • 7ce0f48 Merge branch 'main' into ct-_do_not_parse_multiline_strings
    • 7ecf751 Merge pull request #449 from okeeblow/freeze_concatenated_strings
    • 41f12dd Merge branch 'main' into freeze_concatenated_strings
    • 068f673 Merge pull request #459 from jarthod/iso-encoding-problem
    • b4c9882 Merge branch 'main' into iso-encoding-problem
    • 08d27e8 Merge pull request #471 from sporkmonger/sporkmonger-enable-codeql
    • Additional commits viewable in compare view


    dependencies 
    opened by dependabot[bot] 0
  • Bump tzinfo from 1.2.7 to 1.2.10 in /docs

    Bumps tzinfo from 1.2.7 to 1.2.10.


    Changelog

    Sourced from tzinfo's changelog.

    Version 1.2.10 - 19-Jul-2022

    Version 1.2.9 - 16-Dec-2020

    • Fixed an incorrect InvalidTimezoneIdentifier exception raised when loading a zoneinfo file that includes rules specifying an additional transition to the final defined offset (for example, Africa/Casablanca in version 2018e of the Time Zone Database). #123.

    Version 1.2.8 - 8-Nov-2020

    • Added support for handling "slim" format zoneinfo files that are produced by default by zic version 2020b and later. The POSIX-style TZ string is now used to calculate DST transition times after the final defined transition in the file. The 64-bit section is now always used regardless of whether Time has support for 64-bit times. #120.
    • Rubinius is no longer supported.
    Commits
    • 0814dcd Fix the release date.
    • fd05e2a Preparing v1.2.10.
    • b98c32e Merge branch 'fix-directory-traversal-1.2' into 1.2
    • ac3ee68 Remove unnecessary escaping of + within regex character classes.
    • 9d49bf9 Fix relative path loading tests.
    • 394c381 Remove private_constant for consistency and compatibility.
    • 5e9f990 Exclude Arch Linux's SECURITY file from the time zone index.
    • 17fc9e1 Workaround for 'Permission denied - NUL' errors with JRuby on Windows.
    • 6bd7a51 Update copyright years.
    • 9905ca9 Fix directory traversal in Timezone.get when using Ruby data source
    • Additional commits viewable in compare view


    dependencies 
    opened by dependabot[bot] 0
  • intro_tutorial.ipynb not running on google Colab (fastai v 2.7.6)

    Thanks for the library, I'm really looking forward to using it, but I ran into some problems with the intro tutorial. I tried running intro_tutorial.ipynb on Google Colab. There are a few places where the code runs into bugs. This is because of the fastai version: fastai v2.7.6, fastcore v1.4.5.

    All the bugs I describe below are fixed when downgrading fastai to e.g. v2.3.0

    Describe the bug

    1. Encoder bug

    e.g. in block 9: encoder = create_encoder("xresnet34", n_in=3, pretrained=False)

    yields this error

    ---------------------------------------------------------------------------
    AttributeError                            Traceback (most recent call last)
    [<ipython-input-9-b3399242d3af>](https://localhost:8080/#) in <module>()
    ----> 1 encoder = create_encoder("xresnet34", n_in=3, pretrained=False)
          2 model = create_simclr_model(encoder, hidden_size=2048, projection_size=128)
          3 aug_pipelines = get_simclr_aug_pipelines(size=size, rotate=True, jitter=True, bw=True, blur=True, blur_s=(4,16), blur_p=0.25, cuda=False)
          4 learn = Learner(dls, model,loss_func=noop,cbs=[SimCLR(aug_pipelines, temp=0.07, print_augs=True),ShortEpochCallback(0.001)])
    
    2 frames
    [/usr/local/lib/python3.7/dist-packages/self_supervised/layers.py](https://localhost:8080/#) in create_encoder(arch, pretrained, n_in, pool_type)
         34 def create_encoder(arch:str, pretrained=True, n_in=3, pool_type=PoolingType.CatAvgMax):
         35     "A utility for creating encoder without specifying the package"
    ---> 36     if arch in globals(): return create_fastai_encoder(globals()[arch], pretrained, n_in, pool_type)
         37     else:                 return create_timm_encoder(arch, pretrained, n_in, pool_type)
         38 
    
    [/usr/local/lib/python3.7/dist-packages/self_supervised/layers.py](https://localhost:8080/#) in create_fastai_encoder(arch, pretrained, n_in, pool_type)
         20 def create_fastai_encoder(arch:str, pretrained=True, n_in=3, pool_type=PoolingType.CatAvgMax):
         21     "Create timm encoder from a given arch backbone"
    ---> 22     encoder = create_body(arch, n_in, pretrained, cut=None)
         23     pool = AdaptiveConcatPool2d() if pool_type == "catavgmax" else nn.AdaptiveAvgPool2d(1)
         24     return nn.Sequential(*encoder, pool, Flatten())
    
    [/usr/local/lib/python3.7/dist-packages/fastai/vision/learner.py](https://localhost:8080/#) in create_body(model, n_in, pretrained, cut)
         81     _update_first_layer(model, n_in, pretrained)
         82     if cut is None:
    ---> 83         ll = list(enumerate(model.children()))
         84         cut = next(i for i,o in reversed(ll) if has_pool_type(o))
         85     return cut_model(model, cut)
    
    AttributeError: 'function' object has no attribute 'children'
    


    This is fixed by replacing the encoder as suggested in the README.md:

    # encoder = create_encoder("xresnet34", n_in=3, pretrained=False) # a fastai encoder
    encoder = create_encoder("tf_efficientnet_b4_ns", n_in=3, pretrained=False) # a timm encoder
    
    2. Show bug

    Then, in block [19], learn.sim_clr.show(n=10); yields the following error:
    ---------------------------------------------------------------------------
    AttributeError                            Traceback (most recent call last)
    [<ipython-input-19-1dbc22d0ad4a>](https://localhost:8080/#) in <module>()
    ----> 1 learn.sim_clr.show(n=10);
    
    3 frames
    [/usr/local/lib/python3.7/dist-packages/torch/autograd/grad_mode.py](https://localhost:8080/#) in decorate_context(*args, **kwargs)
         25         def decorate_context(*args, **kwargs):
         26             with self.clone():
    ---> 27                 return func(*args, **kwargs)
         28         return cast(F, decorate_context)
         29 
    
    [/usr/local/lib/python3.7/dist-packages/self_supervised/vision/simclr.py](https://localhost:8080/#) in show(self, n)
         69         images = []
         70         for i in range(n): images += [x1[i],x2[i]]
    ---> 71         return show_batch(x1[0], None, images, max_n=len(images), nrows=n)
         72 
         73 # Cell
    
    [/usr/local/lib/python3.7/dist-packages/fastcore/dispatch.py](https://localhost:8080/#) in __call__(self, *args, **kwargs)
        121         elif self.inst is not None: f = MethodType(f, self.inst)
        122         elif self.owner is not None: f = MethodType(f, self.owner)
    --> 123         return f(*args, **kwargs)
        124 
        125     def __get__(self, inst, owner):
    
    [/usr/local/lib/python3.7/dist-packages/fastai/data/core.py](https://localhost:8080/#) in show_batch(x, y, samples, ctxs, max_n, **kwargs)
         29     else:
         30         for i in range_of(samples[0]):
    ---> 31             ctxs = [b.show(ctx=c, **kwargs) for b,c,_ in zip(samples.itemgot(i),ctxs,range(max_n))]
         32     return ctxs
         33 
    
    AttributeError: 'list' object has no attribute 'itemgot'
    

    A similar error is raised by the BYOL model, but the MOCO model works without error.


    Thanks in advance

    opened by abauville 1
Releases(v.1.0.4)
  • v.1.0.4(Mar 5, 2022)

  • v.1.0.3(May 18, 2021)

  • v.1.0.2(Mar 20, 2021)

  • v.1.0.1(Mar 14, 2021)

    Self Supervised v1.0.1

    A Fast, Performant and Accessible Library for Training SOTA Self Supervised Algorithms!

    Algorithms

    Now, there are two main modules with implementations of popular vision and multimodal self-supervised algorithms:

    • Vision: SimCLR V1/V2, MoCo, BYOL, and SwAV.
    • Multimodal: CLIP and CLIP-MoCo.

    Augmentations

    The new augmentations module offers helpers for easily constructing augmentation pipelines for self-supervised algorithms. It's fast, extensible, and provides all the proven augmentations from the papers out of the box. It also provides an optimal combination of torchvision/kornia/fastai batch augmentations for improving performance and speed. It lives under a new module called self_supervised.augmentations.
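
    For example, a sketch of building the SimCLR pipelines with the helper (the argument values here mirror the intro tutorial; size is the target crop size):

    from self_supervised.augmentations import *
    from self_supervised.vision.simclr import *

    size = 224
    # two augmented views per image, with the proven paper augmentations enabled
    aug_pipelines = get_simclr_aug_pipelines(size=size, rotate=True, jitter=True, bw=True,
                                             blur=True, blur_s=(4,16), blur_p=0.25, cuda=False)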

    Layers

    This is a new module which allows all the timm and fastai vision architectures to be used as backbones for training any vision self-supervised algorithm in the library. It supports gradient checkpointing for all fastai models, and for any resnet and efficientnet architectures from timm. It also makes it possible to create layers for the downstream classification task and to easily modify the MLP module, which is commonly used in self-supervised training frameworks.
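
    For instance, a minimal sketch of creating encoders from either package (the create_encoder signature with pool_type is taken from self_supervised.layers):

    from self_supervised.layers import *

    # any fastai architecture by name...
    fastai_encoder = create_encoder("xresnet34", pretrained=False, n_in=3)
    # ...or any timm architecture, via the same utility
    timm_encoder = create_encoder("tf_efficientnet_b4_ns", pretrained=False, n_in=3,
                                  pool_type=PoolingType.CatAvgMax)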

    Train your own CLIP Model

    Support for training CLIP models either from scratch or by finetuning them from open-source OpenAI checkpoints. Currently ViT-B/32 and ViT-L/14 are supported (ResNets are not included due to inferior performance).

    Just a Thought: CLIP-MoCo

    A custom implementation which combines CLIP with MoCo's queue implementation to reduce the need for large batch sizes during training.

    Distributed Training

    The CLIP and SimCLR algorithms have distributed training versions, which simply use a distributed implementation of the underlying InfoNCE loss. This allows an increase in effective batch size/negative samples during loss calculation. In experiments, the regular CLIPTrainer() callback achieves faster and better convergence than DistributedCLIPTrainer(). Distributed callbacks should be used with DistributedDataParallel.
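
    A sketch of how the distributed callback might be launched (assuming a CLIP Learner built as in the README; distrib_ctx is fastai's DistributedDataParallel context manager, and the script would be started with python -m fastai.launch):

    from fastai.distributed import *

    learn = Learner(dls, clip_model, loss_func=noop, cbs=[DistributedCLIPTrainer()])
    with learn.distrib_ctx():          # wraps the model in DistributedDataParallel
        learn.fit_flat_cos(100, 1e-2)  # InfoNCE negatives gathered across GPUs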

    Fastai Upgrade

    Changes are compatible with the latest fastai release. The library requires the latest timm and fastai to keep up with the current improvements in both, and tests are written based on that.

    Large Scale and Large Batch Training

    Now, Learner.to_fp16() is supported using the Callback order attribute, allowing you to double the batch size and decrease training time by roughly 50%. Gradient checkpointing allows 25%-40% gains in GPU memory. Although gradient checkpointing trades memory for computation, when the batch size is increased to reclaim the freed GPU memory one can achieve a ~10% decrease in training time. ZeRO can also be used to gain ~40% GPU memory; in experiments it led to neither faster nor slower training (unlike gradient checkpointing), and it's mainly useful for increasing batch size or training larger models.
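
    Combining these, a sketch (to_fp16() is fastai's mixed-precision switch; checkpoint/checkpoint_nchunks are the gradient-checkpointing arguments shown in the README CLIP example):

    # gradient checkpointing on, split into 2 chunks, plus fp16 training
    clip_model = CLIP(**vitb32_config_dict, checkpoint=True, checkpoint_nchunks=2)
    learn = Learner(dls, clip_model, loss_func=noop, cbs=[CLIPTrainer()]).to_fp16()
    learn.fit_flat_cos(100, 1e-2)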

    SimCLR V1 & V2

    The library provides all the utilities and helpers to choose from any augmentation pipeline, any timm or fastai vision model as the backbone, any custom MLP layers, and more. In short, it has all the capability needed to switch from SimCLR V1 to V2, or to your own experimental V3.

    MoCo V1 & V2 (Single and Multi GPU Support)

    Similar to SimCLR, it's pretty simple to switch from MoCo v1 to v2 using the parts of the library, since the core algorithm/loss function stays the same. Also, the MoCo implementation in this library differs from the official one in that it doesn't use Shuffle BN and instead uses both positives and negatives from the current batch. Experiments show success with this change; you can also read this issue for more detail. Since Shuffle BN depends on DistributedDataParallel it requires a multi-GPU environment, but without it users can train on either a single GPU or multiple GPUs.

    SwAV Queue

    The queue implementation for SwAV is complete.
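
    The queue is controlled through the SWAV callback arguments, as in the README example (K is the queue size and queue_start_pct appears to be the fraction of training after which the queue kicks in):

    learn = Learner(dls, model, cbs=[SWAV(aug_pipelines=aug_pipelines, crop_assgn_ids=[0,1],
                                          K=bs*2**6, queue_start_pct=0.5)])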

    • Pypi link: https://pypi.org/project/self-supervised/1.0.1/
    • Changes can be found here: https://github.com/KeremTurgutlu/self_supervised/pull/19.
  • v1.0.1-doi(Mar 15, 2021)
