Code and data for paper "Deep Photo Style Transfer"

Overview

deep-photo-styletransfer

Code and data for paper "Deep Photo Style Transfer"

Disclaimer

This software is published for academic and non-commercial use only.

Setup

This code is based on torch. It has been tested on Ubuntu 14.04 LTS.

Dependencies:

  • Torch (with matio and loadcaffe)
  • Matlab or Octave

CUDA backend:

  • CUDA
  • cudnn

Download VGG-19:

sh models/download_models.sh

Compile cuda_utils.cu (Adjust PREFIX and NVCC_PREFIX in makefile for your machine):

make clean && make

Usage

Quick start

To generate all results (in examples/) using the provided scripts, simply run

run('gen_laplacian/gen_laplacian.m')

in Matlab or Octave and then

python gen_all.py

in Python. The final output will be in examples/final_results/.

Basic usage

  1. Given input and style images with semantic segmentation masks, put them in examples/ respectively. They will have the following filename form: examples/input/in<id>.png, examples/style/tar<id>.png and examples/segmentation/in<id>.png, examples/segmentation/tar<id>.png;
  2. Compute the matting Laplacian matrix using gen_laplacian/gen_laplacian.m in Matlab. The output matrix will have the following filename form: gen_laplacian/Input_Laplacian_3x3_1e-7_CSR<id>.mat;

Note: Please make sure that the content image resolution is consistent between the matting Laplacian computation in Matlab and the style transfer in Torch, otherwise the result won't be correct (a quick consistency check is sketched after these steps).

  3. Run the following script to generate the segmented intermediate result:
th neuralstyle_seg.lua -content_image <input_image> -style_image <style_image> -content_seg <content_mask> -style_seg <style_mask> -index <id> -num_iterations 1000 -save_iter 100 -print_iter 1 -gpu <gpu_id> -serial <intermediate_folder>
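
A quick way to verify the resolution consistency mentioned above (a minimal sketch, assuming SciPy and Pillow are available and using image pair index 1 as an example; it reads the .mat file as the (row, col, value) triplets that gen_laplacian.m saves):

    import scipy.io
    from PIL import Image

    # Check for image pair index 1: the matting Laplacian of an H x W content
    # image must have H*W rows/columns, otherwise the Matlab and Torch stages
    # were run at different resolutions.
    w, h = Image.open('examples/input/in1.png').size
    csr = scipy.io.loadmat('gen_laplacian/Input_Laplacian_3x3_1e-7_CSR1.mat')['CSR']

    # The .mat file stores one (row, col, value) triplet per nonzero entry with
    # 1-based indices, so the largest row index equals the number of pixels.
    n = int(csr[:, 0].max())
    assert n == h * w, 'Laplacian built for %d pixels, but the image has %d' % (n, h * w)
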
Comments
  • Is it possible to run this code without a GPU?

    Hi, thanks for the project - it's mind blowing to say the least!

    I would like to run this script inside a VPS with no GPU and I was wondering how much of the code is tied to CUDA...

    opened by neslinesli93 20
  • OSX "Segmentation fault: 11" when loading libcuda_utils.so in torch or executing the two lua files

    Hello, it's me again :-)

    Unfortunately the compiled libcuda_utils.so is still not working as intended.

    When I import it via torch:

    th
    
      ______             __   |  Torch7
     /_  __/__  ________/ /   |  Scientific computing for Lua.
      / / / _ \/ __/ __/ _ \  |  Type ? for help
     /_/  \___/_/  \__/_//_/  |  https://github.com/torch
                              |  http://torch.ch
    
    th> require 'libcuda_utils'
    Segmentation fault: 11
    

    torch abruptly quits with the error Segmentation fault: 11.

    When I try to run the first lua script, the same thing happens:

    th neuralstyle_seg.lua -content_image examples/input/in1.png -style_image examples/style/tar1.png -content_seg examples/segmentation/in1.png -style_seg examples/segmentation/tar1.png -index 1 -num_iterations 1000 -save_iter 100 -print_iter 1 -gpu 0 -serial examples/tmp_results
    Segmentation fault: 11
    

    And the process luajit also crashes.

    If it is of any help, I've tried to get some info from my libcuda_utils.so via nm:

    Nm displays the name list (symbol table) of each object file in the argument list. If an argument is an archive, a listing for each object file in the archive will be produced. File can be of the form libx.a(x.o), in which case only symbols from that member of the object file are listed.

    I tried the -a flag:

    -a Display all symbol table entries, including those inserted for use by debuggers.

    This is the output (too long to post it here): http://pastebin.com/J2PKBFvX

    Could you please run nm -a [pathTo-libcuda_utils.so] on your file and compare your output to mine? But only if this is not too tedious for you to check and it actually helps in this case.

    Do you have any ideas how I can check what is causing the segmentation fault? I guess that OSX needs more specific compiler options... something is still going wrong when compiling libcuda_utils.so, even though I get no errors.

    Man, I was so excited to finally test it with my own images. The Matlab code was executing fine; the 60 Laplacian .mat files are ready to test. The only missing thing is getting the Lua code to run in torch without crashing...

    PS: My first idea to test the code would be to use, say, 60-120 still images from a 24h timelapse sequence as style images and then apply them to a similar image that I have taken. Any kind of landscape – it doesn't matter, as long as input and style are somewhat similar. Then take the transformed output images, animate them and watch how my image changes through the different lighting scenarios from the timelapse. I won't give up till I see the result of this :-).

    opened by subzerofun 16
  • Make a product based on this white paper?

    Some friends and I are excited to try to build a real iOS/Android app based on the ideas in this paper. We don't have to limit our scope to it, but we are impressed by this demo. If you are interested in participating, drop me an email at [email protected] and I'll add you to our Trello/Slack.

    opened by nike-17 14
  • make error on OSX 10.12 / CUDA 8

    Hey, thanks for the code. Unfortunately I get a compile error when I try to build cuda_utils.cu:

    make clean && make
    find . -type f | xargs -n 5 touch
    rm -f libcuda_utils.so
    /usr/local/cuda/bin/nvcc -arch sm_35 -O3 -DNDEBUG --compiler-options '-fPIC' -o libcuda_utils.so --shared cuda_utils.cu -I/Users/david/torch/install/include/THC -I/Users/david/torch/install/include/TH -I/Users/david/torch/install/include -L/Users/david/torch/install/lib -Xlinker -rpath,/Users/david/torch/install/lib -lluaT -lTHC -lTH -lpng
    Undefined symbols for architecture x86_64:
      "_luaL_checknumber", referenced from:
          matting_laplacian(lua_State*) in tmpxft_0001773d_00000000-16_cuda_utils.o
          smooth_local_affine(lua_State*) in tmpxft_0001773d_00000000-16_cuda_utils.o
      "_luaL_error", referenced from:
          checkCudaError(lua_State*) in tmpxft_0001773d_00000000-16_cuda_utils.o
          matting_laplacian(lua_State*) in tmpxft_0001773d_00000000-16_cuda_utils.o
          smooth_local_affine(lua_State*) in tmpxft_0001773d_00000000-16_cuda_utils.o
      "_luaL_openlib", referenced from:
          _luaopen_libcuda_utils in tmpxft_0001773d_00000000-16_cuda_utils.o
      "_lua_call", referenced from:
          getCutorchState(lua_State*) in tmpxft_0001773d_00000000-16_cuda_utils.o
          matting_laplacian(lua_State*) in tmpxft_0001773d_00000000-16_cuda_utils.o
          smooth_local_affine(lua_State*) in tmpxft_0001773d_00000000-16_cuda_utils.o
      "_lua_getfield", referenced from:
          getCutorchState(lua_State*) in tmpxft_0001773d_00000000-16_cuda_utils.o
          matting_laplacian(lua_State*) in tmpxft_0001773d_00000000-16_cuda_utils.o
          smooth_local_affine(lua_State*) in tmpxft_0001773d_00000000-16_cuda_utils.o
      "_lua_settop", referenced from:
          getCutorchState(lua_State*) in tmpxft_0001773d_00000000-16_cuda_utils.o
          matting_laplacian(lua_State*) in tmpxft_0001773d_00000000-16_cuda_utils.o
          smooth_local_affine(lua_State*) in tmpxft_0001773d_00000000-16_cuda_utils.o
      "_lua_touserdata", referenced from:
          getCutorchState(lua_State*) in tmpxft_0001773d_00000000-16_cuda_utils.o
          matting_laplacian(lua_State*) in tmpxft_0001773d_00000000-16_cuda_utils.o
          smooth_local_affine(lua_State*) in tmpxft_0001773d_00000000-16_cuda_utils.o
    ld: symbol(s) not found for architecture x86_64
    clang: error: linker command failed with exit code 1 (use -v to see invocation)
    make: *** [libcuda_utils.so] Error 1
    

    I've adjusted the paths in the makefile and checked my torch install – it should be working. Am I missing something, or does OSX need some specific compiler arguments?

    opened by subzerofun 9
  • Is removing the Matlab dependency possible?

    I'd love to experiment with this project, but I'd rather not have to buy Matlab first as it is not free like Torch, Tensorflow, Theano, Caffe, etc...

    So is removing the Matlab dependency possible? And if so, how difficult would it be?

    opened by ProGamerGov 8
  • Add info to Readme about mask colors (RGB or hex values) that are recognized by neuralstyle_seg.lua & deepmatting_seg.lua

    I've already mentioned this here: https://github.com/luanfujun/deep-photo-styletransfer/issues/28, but I thought it would be better to add a separate issue for it.


    Mask colors

    A quick question about the colors you can use for your mask pngs: In neuralstyle_seg.lua and deepmatting_seg.lua you define that these colors are processed from the segmentation images:

    `local color_codes = {'blue', 'green', 'black', 'white', 'red', 'yellow', 'grey', 'lightblue', 'purple'}`
    

    The problem for me (and maybe for other users too) was recognizing that you are restricted to these colors when you "paint" your masks. At first I didn't look at the code and just selected random colors in Photoshop – I only realized that something was wrong when the generated output didn't match my selections.

    So could you maybe include a short info in your Readme about which colors you can use (maybe RGB + hex values?) when you manually create your masks?

    I'm not sure if everyone automatically knows what colors are actually processed.

    Thanks!
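
    For anyone hitting the same issue, a minimal sketch of painting a mask with one of the recognized color names. The RGB values below are an assumption (pure primaries/secondaries plus mid-grey), not taken from the repo, and the output path is hypothetical; treat the ExtractMask thresholds in neuralstyle_seg.lua / deepmatting_seg.lua as the authoritative definition.

    from PIL import Image

    # Assumed RGB values for the color names listed in neuralstyle_seg.lua /
    # deepmatting_seg.lua. These are an assumption: check ExtractMask in the
    # Lua files for the authoritative thresholds.
    MASK_COLORS = {
        'blue':      (0, 0, 255),
        'green':     (0, 255, 0),
        'black':     (0, 0, 0),
        'white':     (255, 255, 255),
        'red':       (255, 0, 0),
        'yellow':    (255, 255, 0),
        'grey':      (128, 128, 128),
        'lightblue': (0, 255, 255),
        'purple':    (255, 0, 255),
    }

    # Example: fill a mask of the content image's size with one recognized color
    # and save it to a hypothetical path.
    size = Image.open('examples/input/in1.png').size
    Image.new('RGB', size, MASK_COLORS['blue']).save('my_mask.png')
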

    opened by subzerofun 7
  • make clean && make includes THC.h but it's not found

    The structure of torch seems to have changed: THC isn't found at install/include/THC anymore; it has moved to extra/cutorch/lib/THC.

    Including the now-found THC.h file results in another issue I already reported here: https://github.com/torch/cutorch/issues/737

    opened by Milaine 6
  • wrong CSR format

    Hi @luanfujun

    The following code in the file deep-photo-styletransfer/gen_laplacian/gen_laplacian.m is really confusing me.

        disp('Save to disk');
        n = nnz(A);
        [Ai, Aj, Aval] = find(A);
        CSC = [Ai, Aj, Aval];
        %save(['Input_Laplacian_3x3_1e-7_CSC' int2str(i) '.mat'], 'CSC');
        
        [rp ci ai] = sparse_to_csr(A);
        Ai = sort(Ai);
        Aj = ci;
        Aval = ai;
        CSR = [Ai, Aj, Aval];
        save(['Input_Laplacian_3x3_1e-7_CSR' int2str(i) '.mat'], 'CSR');
     
    

    The rp variable, which is the row-pointer array that defines the CSR format, is not used at all. So I think the saved .mat file may not really be in CSR format.

    However, you use the matio.load function to load the .mat file in the Lua code:

    -- load matting laplacian
      local CSR_fn = 'gen_laplacian/Input_Laplacian_'..tostring(params.patch)..'x'..tostring(params.patch)..'_1e-7_CSR' .. tostring(index) .. '.mat'
      print('loading matting laplacian...', CSR_fn)
      local CSR = matio.load(CSR_fn).CSR:cuda()
    

    Could you please explain this conflict?
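
    For reference, one way to read the snippet above is that it saves row-sorted (row, col, value) triplets rather than the compressed row-pointer array itself; a small SciPy sketch of why such triplets still describe the full matrix (a toy random matrix stands in for the Laplacian):

    import numpy as np
    import scipy.sparse as sp

    # Toy matrix standing in for the matting Laplacian (assumption for illustration).
    A = sp.random(6, 6, density=0.4, format='csr', random_state=0)

    # Analogue of the Matlab code: sort(Ai) yields the row index of every nonzero
    # in row-major order, and ci/ai are already the CSR column indices and values,
    # so the three columns line up even though rp itself is never written out.
    coo = A.tocoo()
    order = np.lexsort((coo.col, coo.row))   # row-major ordering of the nonzeros
    triplets = np.stack([coo.row[order], coo.col[order], coo.data[order]], axis=1)

    # Rebuilding from the triplets recovers the original matrix exactly, which is
    # why the Lua side can consume the "CSR" .mat file as a plain triplet list.
    B = sp.coo_matrix((triplets[:, 2],
                       (triplets[:, 0].astype(int), triplets[:, 1].astype(int))),
                      shape=A.shape).tocsr()
    assert (A != B).nnz == 0
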

    opened by John-HW-Cao 4
  • Issue regarding compiling cuda_utils.cu

    When I try to compile cuda_utils.cu, I get this message:

    sacha@scriabin:~/deep-photo-styletransfer-master$ make clean && make

    find . -type f | xargs -n 5 touch
    rm -f libcuda_utils.so
    /usr/local/cuda-8.0/bin/nvcc -arch sm_35 -O3 -DNDEBUG --compiler-options '-fPIC' -o libcuda_utils.so --shared cuda_utils.cu -I/home/torch/install/include/THC -I/home/torch/install/include/TH -I/home/torch/install/include -L/home/torch/install/lib -Xlinker -rpath,/home/torch/install/lib -lluaT -lTHC -lTH -lpng
    cuda_utils.cu:2:18: fatal error: lua.h: No such file or directory
     #include "lua.h"
     ^
    compilation terminated.
    make: *** [libcuda_utils.so] Error 1

    I changed my makefile to the following:

    PREFIX=/home/torch/install
    NVCC_PREFIX=/usr/local/cuda-8.0/bin

    lua.h, lualib.h and lauxlib.h are all in the same folder: /home/torch/install/include.

    I already tried to add -lluajit like proposed in #5 and 21 - doesn't work either.

    I'm on Ubuntu 14.04.5 LTS, and after a lot of reading everything up to this point worked just fine (that wasn't the case with my first attempt on another computer; now I'm being much more careful about each step). I kept track of all the command lines I used and the files I downloaded where required. The luarocks I used was the one bundled with torch.

    Thanks for your help!

    opened by M4lchik 4
  • Out of memory with 1 image and Titan X Pascal

    I have only one image in the input folder (style has a different size):

    $ identify examples/*/*
    examples/input/in1.png PNG 1000x750 1000x750+0+0 8-bit sRGB 970KB 0.000u 0:00.009
    examples/segmentation/in1.png PNG 1000x750 1000x750+0+0 8-bit sRGB 4.09KB 0.000u 0:00.000
    examples/segmentation/tar1.png PNG 700x393 700x393+0+0 8-bit sRGB 938B 0.000u 0:00.000
    examples/style/tar1.png PNG 700x393 700x393+0+0 8-bit sRGB 586KB 0.000u 0:00.000
    

    The segmentation masks are completely black, just like the ones for example in9, and I believe I should have enough GPU horsepower for this:

    $ nvidia-smi
    Wed Mar 29 11:26:09 2017       
    +-----------------------------------------------------------------------------+
    | NVIDIA-SMI 375.26                 Driver Version: 375.26                    |
    |-------------------------------+----------------------+----------------------+
    | GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
    | Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
    |===============================+======================+======================|
    |   0  TITAN X (Pascal)    Off  | 0000:02:00.0     Off |                  N/A |
    | 23%   24C    P8     9W / 250W |      0MiB / 12189MiB |      0%      Default |
    +-------------------------------+----------------------+----------------------+
    |   1  TITAN X (Pascal)    Off  | 0000:03:00.0     Off |                  N/A |
    | 23%   30C    P8    10W / 250W |      0MiB / 12189MiB |      0%      Default |
    +-------------------------------+----------------------+----------------------+
    |   2  TITAN X (Pascal)    Off  | 0000:82:00.0     Off |                  N/A |
    | 23%   23C    P8     8W / 250W |      0MiB / 12189MiB |      0%      Default |
    +-------------------------------+----------------------+----------------------+
    |   3  TITAN X (Pascal)    Off  | 0000:83:00.0     Off |                  N/A |
    | 23%   27C    P8     9W / 250W |      0MiB / 12189MiB |      0%      Default |
    +-------------------------------+----------------------+----------------------+
                                                                                   
    +-----------------------------------------------------------------------------+
    | Processes:                                                       GPU Memory |
    |  GPU       PID  Type  Process name                               Usage      |
    |=============================================================================|
    |  No running processes found                                                 |
    +-----------------------------------------------------------------------------+
    

    But I get out of memory errors whenever trying to run gen_all.py (with modified variables for 1 image and 1 GPU) or the th script directly:

    $ python gen_all.py 
    working on image pair index = 1
    gpu, idx = 	0	1	
    [libprotobuf WARNING google/protobuf/io/coded_stream.cc:604] Reading dangerously large protocol message.  If the message turns out to be larger than 1073741824 bytes, parsing will be halted for security reasons.  To increase the limit (or to disable these warnings), see CodedInputStream::SetTotalBytesLimit() in google/protobuf/io/coded_stream.h.
    [libprotobuf WARNING google/protobuf/io/coded_stream.cc:81] The total number of bytes read was 574671192
    Successfully loaded models/VGG_ILSVRC_19_layers.caffemodel
    conv1_1: 64 3 3 3
    conv1_2: 64 64 3 3
    conv2_1: 128 64 3 3
    conv2_2: 128 128 3 3
    conv3_1: 256 128 3 3
    conv3_2: 256 256 3 3
    conv3_3: 256 256 3 3
    conv3_4: 256 256 3 3
    conv4_1: 512 256 3 3
    conv4_2: 512 512 3 3
    conv4_3: 512 512 3 3
    conv4_4: 512 512 3 3
    conv5_1: 512 512 3 3
    conv5_2: 512 512 3 3
    conv5_3: 512 512 3 3
    conv5_4: 512 512 3 3
    fc6: 1 1 25088 4096
    fc7: 1 1 4096 4096
    fc8: 1 1 4096 1000
    Exp serial:	examples/tmp_results	
    Setting up style layer  	2	:	relu1_1	
    Setting up style layer  	7	:	relu2_1	
    Setting up style layer  	12	:	relu3_1	
    Setting up style layer  	21	:	relu4_1	
    Setting up content layer	23	:	relu4_2	
    Setting up style layer  	30	:	relu5_1	
    WARNING: Skipping content loss	
    THCudaCheck FAIL file=/tmp/luarocks_cutorch-scm-1-6378/cutorch/lib/THC/generic/THCStorage.cu line=66 error=2 : out of memory
    .../torch/install/bin/luajit: .../torch/install/share/lua/5.1/nn/Container.lua:67: 
    In 4 module of nn.Sequential:
    cuda runtime error (2) : out of memory at /tmp/luarocks_cutorch-scm-1-6378/cutorch/lib/THC/generic/THCStorage.cu:66
    stack traceback:
    	[C]: at 0x7f1263b53940
    	[C]: in function 'cmul'
    	neuralstyle_seg.lua:465: in function <neuralstyle_seg.lua:456>
    	[C]: in function 'xpcall'
    	.../torch/install/share/lua/5.1/nn/Container.lua:63: in function 'rethrowErrors'
    	.../torch/install/share/lua/5.1/nn/Sequential.lua:55: in function 'updateGradInput'
    	neuralstyle_seg.lua:232: in function 'opfunc'
    	.../torch/install/share/lua/5.1/optim/lbfgs.lua:66: in function 'lbfgs'
    	neuralstyle_seg.lua:253: in function 'main'
    	neuralstyle_seg.lua:546: in main chunk
    	[C]: in function 'dofile'
    	.../torch/install/lib/luarocks/rocks/trepl/scm-1/bin/th:150: in main chunk
    	[C]: at 0x55b757411610
    
    WARNING: If you see a stack trace below, it doesn't point to the place where this error occurred. Please use only the one above.
    stack traceback:
    	[C]: in function 'error'
    	.../torch/install/share/lua/5.1/nn/Container.lua:67: in function 'rethrowErrors'
    	.../torch/install/share/lua/5.1/nn/Sequential.lua:55: in function 'updateGradInput'
    	neuralstyle_seg.lua:232: in function 'opfunc'
    	.../torch/install/share/lua/5.1/optim/lbfgs.lua:66: in function 'lbfgs'
    	neuralstyle_seg.lua:253: in function 'main'
    	neuralstyle_seg.lua:546: in main chunk
    	[C]: in function 'dofile'
    	.../torch/install/lib/luarocks/rocks/trepl/scm-1/bin/th:150: in main chunk
    	[C]: at 0x55b757411610
    

    As far as I know I followed the procedure in the README, so this seems to be either a bug, or there is something undocumented that I should do before running gen_all.py.
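
    One generic thing worth trying (an assumption, not a documented fix): memory use grows with image resolution, so downscaling the inputs before running can avoid the OOM; the matting Laplacian then has to be regenerated in Matlab at the new size so both stages stay consistent. A minimal Pillow sketch:

    from PIL import Image

    # Halve the resolution of the inputs for image pair index 1 (in place).
    def downscale(path, resample):
        img = Image.open(path)
        img.resize((img.width // 2, img.height // 2), resample).save(path)

    for name in ('input/in1.png', 'style/tar1.png'):
        downscale('examples/' + name, Image.LANCZOS)   # photos: smooth filter
    for name in ('segmentation/in1.png', 'segmentation/tar1.png'):
        downscale('examples/' + name, Image.NEAREST)   # masks: keep exact colors
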

    opened by MicaelCarvalho 4
  • Loading VGG model fails

    Running

    th neuralstyle_seg.lua -content_image examples/input/in1.png -style_image examples/style/tar1.png -content_seg examples/segmentation/in1.png -style_seg examples/segmentation/tar1.png
    

    yields

    gpu, idx =      0       1
    Couldn't load models/VGG_ILSVRC_19_layers.caffemodel
    /home/ubuntu/torch/install/bin/luajit: neuralstyle_seg.lua:98: attempt to index a nil value
    stack traceback:
            neuralstyle_seg.lua:98: in function 'main'
            neuralstyle_seg.lua:546: in main chunk
            [C]: in function 'dofile'
            ...untu/torch/install/lib/luarocks/rocks/trepl/scm-1/bin/th:150: in main chunk
            [C]: at 0x00406670
    

    loadcaffe and matio are installed.
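
    As a side note, the "Couldn't load models/VGG_ILSVRC_19_layers.caffemodel" line usually means the weights file is missing or truncated; a minimal sanity-check sketch (the ~575 MB expected size is taken from the 574671192 bytes reported on a successful load in the out-of-memory thread above, so treat it as approximate):

    import os

    # Check that the VGG-19 weights fetched by models/download_models.sh exist
    # and are not obviously truncated. The ~575 MB threshold is approximate.
    path = 'models/VGG_ILSVRC_19_layers.caffemodel'
    if not os.path.exists(path):
        print('%s is missing -- run `sh models/download_models.sh` first' % path)
    else:
        size_mb = os.path.getsize(path) / 1e6
        print('%s: %.0f MB %s' % (path, size_mb,
              '(looks complete)' if size_mb > 500 else '(looks truncated)'))
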

    opened by themightyoarfish 4
  • Try Running on Kaggle ?

    Hi everyone,

    Has anyone tried running this project on Google's Kaggle (similar to Colab)? Thanks.

    Amber Ji

    @luanfujun Hi Dr. Luan! I admire your great work (this + deep-painterly-harmonization)! May I add you on QQ please? Mine is 1543386340.

    opened by galoisgroupcn 0
  • Licence Problem

    You say this is published for "non-commercial use only", yet it is published on a commercial site. Is the site allowed to make money off it, but we ordinary users are not?

    opened by ldo 3
  • The result is not good when I run my example

    Hi @luanfujun, I ran this code on my own example. The temporary results are okay, but the final result is not good enough. Could you give me some advice? Here is my result:

    [result image]

    Help is greatly appreciated!

    opened by cccusername 0
  • Some confusion about the covariance formula

    I have read your code. I am confused by this line: why not compute the covariance using the built-in cov function directly? Can you explain why the covariance is written this way? Thanks.
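
    For reference, a covariance written out by hand and a built-in call give the same matrix up to the normalization convention; a small NumPy sketch of that equivalence (the 3×N feature matrix is only a stand-in, not the repo's actual data):

    import numpy as np

    # Stand-in feature matrix: 3 channels x N pixels (assumption for illustration).
    rng = np.random.default_rng(0)
    X = rng.standard_normal((3, 1000))

    # Covariance written out "by hand": subtract the per-channel mean, then
    # average the outer product over pixels.
    Xc = X - X.mean(axis=1, keepdims=True)
    cov_manual = Xc @ Xc.T / X.shape[1]

    # np.cov defaults to the unbiased 1/(N-1) normalization, so pass bias=True
    # to match the 1/N convention used above.
    cov_builtin = np.cov(X, bias=True)
    assert np.allclose(cov_manual, cov_builtin)
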

    opened by chaomaer 0
Owner
Fujun Luan