Contrastive Loss Gradient Attack (CLGA)

Overview

Official implementation of "Unsupervised Graph Poisoning Attack via Contrastive Loss Back-propagation" (WWW 2022).

Built on top of GCA and DeepRobust.

Requirements

Tested with PyTorch 1.7.1 and torch_geometric 1.6.3.

Usage

1. To produce poisoned graphs with CLGA

python CLGA.py --dataset Cora --num_epochs 3000 --device cuda:0

It will automatically save three poisoned adjacency matrices in ./poisoned_adj, with 1%/5%/10% of the edges perturbed, respectively. You may reduce the number of epochs for faster training.
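The saved files are pickled PyTorch tensors (CLGA.py moves the adjacency matrix to the CPU before dumping it, as the traceback in the comments below shows), and the script writes directly into ./poisoned_adj, so that directory must exist before running. A minimal loading sketch, assuming the Cora_CLGA_0.100000_adj.pkl naming pattern reported in the comments and a dense num_nodes x num_nodes 0/1 adjacency:

import pickle as pkl
import torch

# Assumed file name, following the pattern reported in the comments below;
# it may differ between versions of the repository.
path = 'poisoned_adj/Cora_CLGA_0.100000_adj.pkl'

with open(path, 'rb') as f:
    poisoned_adj = pkl.load(f)  # torch tensor, saved on the CPU by CLGA.py

print(poisoned_adj.shape)              # assumed (num_nodes, num_nodes)
print(int(poisoned_adj.sum().item()))  # entry count = number of edges if the adjacency is 0/1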

2. To produce poisoned graphs with baseline attack methods

python baseline_attacks.py --dataset Cora --method dice --rate 0.10 --device cuda:0

It will save one poisoned adjacency matrix in ./poisoned_adj.
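The baseline script produces a single perturbation rate per run, so to mirror the three rates produced by CLGA it can simply be invoked once per rate; for example, with the dice method from above:

python baseline_attacks.py --dataset Cora --method dice --rate 0.01 --device cuda:0
python baseline_attacks.py --dataset Cora --method dice --rate 0.05 --device cuda:0
python baseline_attacks.py --dataset Cora --method dice --rate 0.10 --device cuda:0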

3. To train the graph contrastive model with the poisoned graph

python train_GCA.py --dataset Cora --perturb --attack_method CLGA --attack_rate 0.10 --device cuda:0

It will load and train on the poisoned adjacency matrix that matches the specified --dataset, --attack_method and --attack_rate.
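Since this step only loads a file produced in step 1 or 2, a quick existence check can catch mismatched arguments before training starts. A minimal sketch, assuming (this is an assumption, not confirmed by the repository) that the poisoned files follow the dataset_method_rate_adj.pkl pattern observed for CLGA in the comments below:

import os

# Hypothetical pre-flight check before running train_GCA.py.
# The file-name pattern is an assumption based on the CLGA output path
# reported in the comments below; adjust it if your version differs.
dataset, attack_method, attack_rate = 'Cora', 'CLGA', 0.10
path = 'poisoned_adj/%s_%s_%f_adj.pkl' % (dataset, attack_method, attack_rate)
if not os.path.exists(path):
    raise FileNotFoundError('Run CLGA.py or baseline_attacks.py first: %s is missing' % path)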

Comments
  • Runtime error

    Hello author: I got the following error while running CLGA.py. How should I fix it? Looking forward to your reply!

    Perturbing edges: 50/527. Finished in 8.24s
    Perturbing edges: 51/527. Finished in 8.91s
    Perturbing edges: 52/527. Finished in 9.18s
    Traceback (most recent call last):
      File "C:\Users\zhangqi\Desktop\CLGA-main\CLGA.py", line 243, in <module>
        poisoned_adj = model.attack()
      File "C:\Users\zhangqi\Desktop\CLGA-main\CLGA.py", line 147, in attack
        pkl.dump(output_adj.to(torch.device('cpu')), open('poisoned_adj/%s_CLGA_0.010000_adj.pkl' % args.dataset, 'wb'))
    FileNotFoundError: [Errno 2] No such file or directory: 'poisoned_adj/Cora_CLGA_0.010000_adj.pkl'

    opened by xiangxinchufa 1
  • About poisoned graphs on polblogs

    Hi, RinneSz

    I've already run your code, but I don't know how to generate poisoned graphs for the polblogs dataset (Cora and CiteSeer worked fine). How can I generate them?

    opened by lizehaodashuaibi 5