
# Decoupled-Contrastive-Learning

This repository is an implementation for the loss function proposed in Decoupled Contrastive Loss paper.

## Requirements

- PyTorch
- NumPy

## Usage Example

```python
import torch
import torchvision.models as models

from loss import dcl

resnet18 = models.resnet18()
random_input = torch.rand((10, 3, 244, 244))
output = resnet18(random_input)

# for DCL
loss_fn = dcl.DCL(temperature=0.5)
loss = loss_fn(output, output)  # loss = tensor(-0.2726, grad_fn=<...>)

# for DCLW
loss_fn = dcl.DCLW(temperature=0.5, sigma=0.5)
loss = loss_fn(output, output)  # loss = tensor(38.8402, grad_fn=<...>)
```
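For readers who want to see what "decoupled" means in code, here is a from-scratch sketch of the decoupled contrastive objective as described in the paper (one direction, `z1 -> z2`). This is my own illustrative reimplementation under stated assumptions, not the repository's `loss/dcl.py`; the decoupling is that the positive pair is removed from the denominator of the standard InfoNCE loss.

```python
import torch
import torch.nn.functional as F

def dcl_loss(z1, z2, temperature=0.1):
    """Illustrative decoupled contrastive loss, one direction (z1 -> z2).

    Sketch only: the positive pair is excluded from the denominator,
    which is the "decoupling" the paper proposes. Not the repo's code.
    """
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    n = z1.size(0)
    # cross-view similarities: sim12[i, j] = <z1_i, z2_j> / t
    sim12 = z1 @ z2.t() / temperature
    # within-view similarities, used only as negatives
    sim11 = z1 @ z1.t() / temperature
    pos = sim12.diagonal()                      # positives <z1_i, z2_i> / t
    neg_mask = ~torch.eye(n, dtype=torch.bool)  # drop the i == j entries
    # denominator runs over negatives only: positives are decoupled out
    neg = torch.cat([sim12[neg_mask].view(n, -1),
                     sim11[neg_mask].view(n, -1)], dim=1)
    return (-pos + torch.logsumexp(neg, dim=1)).mean()

z1 = torch.rand(10, 128)
z2 = torch.rand(10, 128)
print(dcl_loss(z1, z2))  # scalar tensor
```

In practice `z1` and `z2` would be embeddings of two augmented views of the same batch, not random tensors.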

## Issues

#### Implementation of DCLW

Hi,

I saw that your implementation of the numerator of the DCLW `weight_fn` uses element-wise multiplication of `z1` and `z2`. But in the paper, the numerator of the DCLW `weight_fn` formula is exp(&lt;z1, z2&gt; / sigma). Can you tell me why you use element-wise multiplication of `z1` and `z2` instead of `torch.mm(z1, z2)` or a dot product?

Thanks.

opened by wqtwjt1996
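For context on the question above (an illustrative check, not from the repository): summing the element-wise product over the feature dimension is the batched form of the per-pair dot product, i.e. it yields exactly the diagonal of the full similarity matrix that `torch.mm` would compute.

```python
import torch

torch.manual_seed(0)
z1 = torch.rand(4, 8)
z2 = torch.rand(4, 8)

# (z1 * z2).sum(dim=1) gives the per-row dot products <z1_i, z2_i>,
# i.e. only the similarities between matched pairs i == j ...
rowwise = (z1 * z2).sum(dim=1)

# ... which is the diagonal of the full n x n similarity matrix
# that torch.mm(z1, z2.t()) produces.
full = torch.mm(z1, z2.t())
print(torch.allclose(rowwise, full.diagonal()))  # True
```

So the element-wise form avoids computing the off-diagonal similarities when only matched pairs are needed.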
#### About the optimization setting

Hello, thanks for your good work! I'd like to know whether the cosine annealing schedule is also applied to the small-scale dataset experiments on CIFAR and STL10.

opened by YEARNLL
#### Was trying to work with the formula

Hi there,

Great work! I am trying to learn from the formula and walk through the proofs of the propositions. While deriving the gradients of the loss function, I got stuck on the partial derivative of the cosine similarity term. Comparing my answer with the steps in the paper, I got the result shown in the attached image.

I am not quite sure whether that is correct, or how it works. Would you mind helping me a bit with this?

opened by mikelmh025
#### DCLW bug?

In DCLW, your code is:

```python
weight_fn = lambda z1, z2: 2 - z1.size(0) * torch.nn.functional.softmax((z1 * z2).sum(dim=1) / sigma, dim=0).squeeze()
```

I think the right way should be:

```python
weight_fn = lambda z1, z2: 2 - torch.nn.functional.softmax((z1 * z2).sum(dim=1) / sigma, dim=0).squeeze()
```

The `z1.size(0)` factor is not a variable introduced in the original paper.

What do you think?

opened by tangzhy
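One plausible reading of the `z1.size(0)` factor questioned above (my own check, not an author statement): multiplying the softmax by the batch size turns the division by the *sum* of exponentials into a division by their *mean*, which matches the paper's normalization by an expectation and makes the weights average to 1 over the batch. A quick numeric check:

```python
import torch

torch.manual_seed(0)
sigma = 0.5
n = 8
z1 = torch.nn.functional.normalize(torch.rand(n, 16), dim=1)
z2 = torch.nn.functional.normalize(torch.rand(n, 16), dim=1)

sim = (z1 * z2).sum(dim=1) / sigma            # per-pair similarities
w_with_n = 2 - n * torch.softmax(sim, dim=0)  # as in the repo
w_without_n = 2 - torch.softmax(sim, dim=0)   # proposed alternative

# softmax sums to 1 over the batch, so n * softmax has mean exactly 1,
# and the weights w_with_n average to 2 - 1 = 1 (up to float error).
print(w_with_n.mean())
# Without the factor, the mean is 2 - 1/n and drifts toward 2 as n grows.
print(w_without_n.mean())
```

Whether that is the intended reading is for the authors to confirm; the check only shows what each variant normalizes to.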
#### Follow-up Issue of DCLW Implementation

Hi @raminnakhli, thanks for the reply.

As you can see from the two pictures, the formulas are similar in that both include exp(&lt;z_1, z_2&gt;). But in your code, for formulas (5) and (6) you use matrix multiplication, while for the formula of w(z_1, z_2) you use element-wise multiplication.

Could you please explain why? Thanks!

Originally posted by @wqtwjt1996 in https://github.com/raminnakhli/Decoupled-Contrastive-Learning/issues/10#issuecomment-1293916353

opened by wqtwjt1996
## Author

Ramin Nakhli: Ph.D. @ UBC (self-supervised models / GNNs); M.Sc. CS, University of Tehran; B.Sc. EE, Sharif University of Tehran.