KIDA: Knowledge Inheritance in Dataset Aggregation
This project releases our 1st-place solution for the dual task of the NeurIPS 2021 ML4CO competition.
Slides and model weights are available.
Paper and training code will be released soon.
Hi Zixuan, I trained the model using your code on "Item Placement", evaluated the rewards at 10, 20, 30, 40, and 50 iterations, and chose the models with the top-3 rewards. But the reward on the test dataset is only 4400, so I wonder whether there are any other tricks I haven't added. Also, I found that the seed is not set in the script; does this affect the final result?
python generate_data.py item_placement --file_count ${i} --njobs 5 --train_size 10000 --valid_size 4000
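For reference on the seed question, a minimal way to fix the seeds in a PyTorch script would look like the following (seed_everything is a hypothetical helper, not part of the released code); without fixed seeds, some run-to-run variation in the final reward is expected:

import random

import numpy as np
import torch

def seed_everything(seed: int = 0) -> None:
    # Fix the common sources of randomness for reproducibility.
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)
    # Deterministic cuDNN kernels trade some speed for reproducibility.
    torch.backends.cudnn.deterministic = True
    torch.backends.cudnn.benchmark = False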
Hello, I am a master's student studying A.I.
First, congratulations on winning the ML4CO dual task.
I recently read your paper "ML4CO-KIDA: Knowledge Inheritance in Dataset Aggregation". Since I am a beginner in this field (solving combinatorial optimization problems with ML), I have some questions about the paper.
1. Is there a reason why KIDA works better than EWA? (My thought is that KIDA can average the policies evenly; see the sketch after this list for what I mean.)
2. How can KIDA work so well? I expected it to have limitations, because it uses strong branching, which performed badly on the test cases, as its training data, and trains with a loss between the strong-branching labels and the policy predictions.
3. In Figure 3, there is a big drop between steps 0 and 1000. Is that a coincidence, or does it represent something, e.g., the training accuracy at step 0?
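To make question 1 concrete, here is a minimal sketch of the "even" parameter averaging I have in mind, assuming all checkpoints share one architecture (kida_average is a hypothetical name, not taken from the released code):

import torch

def kida_average(state_dicts):
    # Uniform average over all checkpoints, in contrast to EWA,
    # which weights recent checkpoints exponentially.
    return {
        key: torch.stack([sd[key].float() for sd in state_dicts]).mean(dim=0)
        for key in state_dicts[0]
    }

The averaged dict would then be loaded with model.load_state_dict(...) to initialize the next round of training.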
Hello, I found that the efficiency of training-data generation is a big bottleneck when running your code. Although multi-threading is used here, as far as I know, Python multi-threading is not much faster than single-threading because of the GIL. Do you have any other methods to speed up the generation of training data? For example, would process-based parallelism (sketched below) help? Thank you!
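A minimal sketch of what I mean, using only the standard library (generate_one_instance is a hypothetical worker; the real per-instance collection logic would go inside it):

from concurrent.futures import ProcessPoolExecutor

def generate_one_instance(index: int) -> str:
    # Hypothetical worker: collect strong-branching samples for one
    # instance and dump them to disk, returning the output path.
    out_path = f"samples/sample_{index}.pkl"
    # ... per-instance data collection would go here ...
    return out_path

if __name__ == "__main__":
    # Separate processes side-step the GIL, so CPU-bound collection
    # can scale with the number of cores (cf. --njobs 5 above).
    with ProcessPoolExecutor(max_workers=5) as pool:
        for path in pool.map(generate_one_instance, range(10000)):
            print("wrote", path)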