Awesome-Visual-Captioning

This repository focuses on Image Captioning, Video Captioning, Seq-to-Seq Learning, and NLP.

Paper Roadmap

ACL-2021

Image Captioning

  • Control Image Captioning Spatially and Temporally
  • SMURF: SeMantic and linguistic UndeRstanding Fusion for Caption Evaluation via Typicality Analysis [paper] [code]
  • Enhancing Descriptive Image Captioning with Natural Language Inference
  • UMIC: An Unreferenced Metric for Image Captioning via Contrastive Learning [paper]
  • Semantic Relation-aware Difference Representation Learning for Change Captioning [paper](https://aclanthology.org/2021.findings-acl.6.pdf) [code](https://github.com/tuyunbin/SRDRL)

Video Captioning

  • Hierarchical Context-aware Network for Dense Video Event Captioning
  • Video Paragraph Captioning as a Text Summarization Task
  • O2NA: An Object-Oriented Non-Autoregressive Approach for Controllable Video Captioning

CVPR-2021

Image Captioning

  • Connecting What to Say With Where to Look by Modeling Human Attention Traces. [paper] [code]
  • Multiple Instance Captioning: Learning Representations from Histopathology Textbooks and Articles. [paper]
  • Improving OCR-Based Image Captioning by Incorporating Geometrical Relationship. [paper]
  • Image Change Captioning by Learning From an Auxiliary Task. [paper]
  • Scan2Cap: Context-aware Dense Captioning in RGB-D Scans. [paper] [code]
  • Towards Bridging Event Captioner and Sentence Localizer for Weakly Supervised Dense Event Captioning. [paper]
  • TAP: Text-Aware Pre-Training for Text-VQA and Text-Caption. [paper]
  • Towards Accurate Text-Based Image Captioning With Content Diversity Exploration. [paper]
  • FAIEr: Fidelity and Adequacy Ensured Image Caption Evaluation. [paper]
  • RSTNet: Captioning With Adaptive Attention on Visual and Non-Visual Words. [paper]
  • Human-Like Controllable Image Captioning With Verb-Specific Semantic Roles. [paper]

Video Captioning

  • Open-Book Video Captioning With Retrieve-Copy-Generate Network. [paper]
  • Towards Diverse Paragraph Captioning for Untrimmed Videos. [paper]

AAAI-2021

Image Captioning

  • Partially Non-Autoregressive Image Captioning. [code]
  • Improving Image Captioning by Leveraging Intra- and Inter-layer Global Representation in Transformer Network. [paper]
  • Object Relation Attention for Image Paragraph Captioning [paper]
  • Dual-Level Collaborative Transformer for Image Captioning. [paper] [code]
  • Memory-Augmented Image Captioning [paper]
  • Image Captioning with Context-Aware Auxiliary Guidance. [paper]
  • Consensus Graph Representation Learning for Better Grounded Image Captioning. [paper]
  • FixMyPose: Pose Correctional Captioning and Retrieval. [paper] [code] [website]
  • VIVO: Visual Vocabulary Pre-Training for Novel Object Captioning [paper]

Video Captioning

  • Non-Autoregressive Coarse-to-Fine Video Captioning. [paper]
  • Semantic Grouping Network for Video Captioning. [paper] [code]
  • Augmented Partial Mutual Learning with Frame Masking for Video Captioning. [paper]

ACMMM-2020

Image Captioning

  • Structural Semantic Adversarial Active Learning for Image Captioning. oral [paper]
  • Iterative Back Modification for Faster Image Captioning. [paper]
  • Bridging the Gap between Vision and Language Domains for Improved Image Captioning. [paper]
  • Hierarchical Scene Graph Encoder-Decoder for Image Paragraph Captioning. [paper]
  • Improving Intra- and Inter-Modality Visual Relation for Image Captioning. [paper]
  • ICECAP: Information Concentrated Entity-aware Image Captioning. [paper]
  • Attacking Image Captioning Towards Accuracy-Preserving Target Words Removal. [paper]
  • Multimodal Attention with Image Text Spatial Relationship for OCR-Based Image Captioning. [paper]

Video Captioning

  • Controllable Video Captioning with an Exemplar Sentence. oral [paper]
  • Poet: Product-oriented Video Captioner for E-commerce. oral [paper]
  • Learning Semantic Concepts and Temporal Alignment for Narrated Video Procedural Captioning. [paper]
  • Relational Graph Learning for Grounded Video Description Generation. [paper]

NeurIPS-2020

  • Prophet Attention: Predicting Attention with Future Attention for Improved Image Captioning. [paper]
  • RATT: Recurrent Attention to Transient Tasks for Continual Image Captioning. [paper]
  • Diverse Image Captioning with Context-Object Split Latent Spaces. [paper]

ECCV-2020

Image Captioning

  • Compare and Reweight: Distinctive Image Captioning Using Similar Images Sets. oral [paper]
  • In-Home Daily-Life Captioning Using Radio Signals. oral [paper] [website]
  • TextCaps: a Dataset for Image Captioning with Reading Comprehension. oral [paper] [website] [code]
  • Towards Unique and Informative Captioning of Images. [paper]
  • Learning Visual Representations with Caption Annotations. [paper] [website]
  • Fashion Captioning: Towards Generating Accurate Descriptions with Semantic Rewards. [paper]
  • Length Controllable Image Captioning. [paper] [code]
  • Comprehensive Image Captioning via Scene Graph Decomposition. [paper] [website]
  • Finding It at Another Side: A Viewpoint-Adapted Matching Encoder for Change Captioning. [paper]
  • Captioning Images Taken by People Who Are Blind. [paper]
  • Learning to Generate Grounded Visual Captions without Localization Supervision. [paper] [code]

Video Captioning

  • Learning Modality Interaction for Temporal Sentence Localization and Event Captioning in Videos. Spotlight [paper] [code]
  • Character Grounding and Re-Identification in Story of Videos and Text Descriptions. Spotlight [paper] [code]
  • Identity-Aware Multi-Sentence Video Description. [paper]
  • SODA: Story Oriented Dense Video Captioning Evaluation Framework. [paper]

CVPR-2020

Image Captioning

  • Context-Aware Group Captioning via Self-Attention and Contrastive Features [paper]
    Zhuowan Li, Quan Tran, Long Mai, Zhe Lin, Alan L. Yuille
  • More Grounded Image Captioning by Distilling Image-Text Matching Model [paper] [code]
    Yuanen Zhou, Meng Wang, Daqing Liu, Zhenzhen Hu, Hanwang Zhang
  • Show, Edit and Tell: A Framework for Editing Image Captions [paper] [code]
    Fawaz Sammani, Luke Melas-Kyriazi
  • Say As You Wish: Fine-Grained Control of Image Caption Generation With Abstract Scene Graphs [paper] [code]
    Shizhe Chen, Qin Jin, Peng Wang, Qi Wu
  • Normalized and Geometry-Aware Self-Attention Network for Image Captioning [paper]
    Longteng Guo, Jing Liu, Xinxin Zhu, Peng Yao, Shichen Lu, Hanqing Lu
  • Meshed-Memory Transformer for Image Captioning [paper] [code]
    Marcella Cornia, Matteo Stefanini, Lorenzo Baraldi, Rita Cucchiara
  • X-Linear Attention Networks for Image Captioning [paper] [code]
    Yingwei Pan, Ting Yao, Yehao Li, Tao Mei
  • Transform and Tell: Entity-Aware News Image Captioning [paper] [code] [website]
    Alasdair Tran, Alexander Mathews, Lexing Xie

Video Captioning

  • Object Relational Graph With Teacher-Recommended Learning for Video Captioning [paper]
    Ziqi Zhang, Yaya Shi, Chunfeng Yuan, Bing Li, Peijin Wang, Weiming Hu, Zheng-Jun Zha

  • Spatio-Temporal Graph for Video Captioning With Knowledge Distillation [paper] [code]
    Boxiao Pan, Haoye Cai, De-An Huang, Kuan-Hui Lee, Adrien Gaidon, Ehsan Adeli, Juan Carlos Niebles

  • Better Captioning With Sequence-Level Exploration [paper]
    Jia Chen, Qin Jin

  • Syntax-Aware Action Targeting for Video Captioning [code]
    Qi Zheng, Chaoyue Wang, Dacheng Tao

ACL-2020

Image Captioning

  • Clue: Cross-modal Coherence Modeling for Caption Generation [paper]
    Malihe Alikhani, Piyush Sharma, Shengjie Li, Radu Soricut and Matthew Stone

  • Improving Image Captioning Evaluation by Considering Inter References Variance [paper]
    Yanzhi Yi, Hangyu Deng and Jinglu Hu

  • Improving Image Captioning with Better Use of Caption [paper] [code]
    Zhan Shi, Xu Zhou, Xipeng Qiu and Xiaodan Zhu

Video Captioning

  • MART: Memory-Augmented Recurrent Transformer for Coherent Video Paragraph Captioning [paper] [code]
    Jie Lei, Liwei Wang, Yelong Shen, Dong Yu, Tamara Berg and Mohit Bansal

AAAI-2020

Image Captioning

  • Unified VLP: Unified Vision-Language Pre-Training for Image Captioning and VQA [paper]
    Luowei Zhou (University of Michigan); Hamid Palangi (Microsoft Research); Lei Zhang (Microsoft); Houdong Hu (Microsoft AI and Research); Jason Corso (University of Michigan); Jianfeng Gao (Microsoft Research)

  • OffPG: Reinforcing an Image Caption Generator using Off-line Human Feedback [paper]
    Paul Hongsuck Seo (POSTECH); Piyush Sharma (Google Research); Tomer Levinboim (Google); Bohyung Han (Seoul National University); Radu Soricut (Google)

  • MemCap: Memorizing Style Knowledge for Image Captioning [paper]
    Wentian Zhao (Beijing Institute of Technology); Xinxiao Wu (Beijing Institute of Technology); Xiaoxun Zhang (Alibaba Group)

  • C-R Reasoning: Joint Commonsense and Relation Reasoning for Image and Video Captioning [paper]
    Jingyi Hou (Beijing Institute of Technology); Xinxiao Wu (Beijing Institute of Technology); Xiaoxun Zhang (Alibaba Group); Yayun Qi (Beijing Institute of Technology); Yunde Jia (Beijing Institute of Technology); Jiebo Luo (University of Rochester)

  • MHTN: Learning Long- and Short-Term User Literal-Preference with Multimodal Hierarchical Transformer Network for Personalized Image Caption [paper]
    Wei Zhang (East China Normal University); Yue Ying (East China Normal University); Pan Lu (University of California, Los Angeles); Hongyuan Zha (Georgia Tech)

  • Show, Recall, and Tell: Image Captioning with Recall Mechanism [paper]
    Li Wang (MoE Key Lab of Artificial Intelligence, AI Institute, Shanghai Jiao Tong University, China); Zechen Bai (Institute of Software, Chinese Academy of Sciences, China); Yonghua Zhang (Bytedance); Hongtao Lu (Shanghai Jiao Tong University)

  • Interactive Dual Generative Adversarial Networks for Image Captioning
    Junhao Liu (Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences); Kai Wang (Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences); Chunpu Xu (Huazhong University of Science and Technology); Zhou Zhao (Zhejiang University); Ruifeng Xu (Harbin Institute of Technology (Shenzhen)); Ying Shen (Peking University Shenzhen Graduate School); Min Yang (Chinese Academy of Sciences)

  • FDM-net: Feature Deformation Meta-Networks in Image Captioning of Novel Objects [paper]
    Tingjia Cao (Fudan University); Ke Han (Fudan University); Xiaomei Wang (Fudan University); Lin Ma (Tencent AI Lab); Yanwei Fu (Fudan University); Yu-Gang Jiang (Fudan University); Xiangyang Xue (Fudan University)

Video Captioning

  • An Efficient Framework for Dense Video Captioning
    Maitreya Suin (Indian Institute of Technology Madras); Rajagopalan Ambasamudram (Indian Institute of Technology Madras)

ACL-2019

  • Informative Image Captioning with External Sources of Information [paper]
    Sanqiang Zhao, Piyush Sharma, Tomer Levinboim and Radu Soricut

  • Dense Procedure Captioning in Narrated Instructional Videos [paper]
    Botian Shi, Lei Ji, Yaobo Liang, Nan Duan, Peng Chen, Zhendong Niu and Ming Zhou

  • Bridging by Word: Image Grounded Vocabulary Construction for Visual Captioning [paper]
    Zhihao Fan, Zhongyu Wei, Siyuan Wang and Xuanjing Huang

  • Generating Question Relevant Captions to Aid Visual Question Answering [paper]
    Jialin Wu, Zeyuan Hu and Raymond Mooney

NeurIPS-2019

Image Captioning

  • AAT: Adaptively Aligned Image Captioning via Adaptive Attention Time [paper] [code]
    Lun Huang, Wenmin Wang, Yaxian Xia, Jie Chen
  • ObjRel Transf: Image Captioning: Transforming Objects into Words [paper] [code]
    Simao Herdade, Armin Kappeler, Kofi Boakye, Joao Soares
  • VSSI-cap: Variational Structured Semantic Inference for Diverse Image Captioning [paper]
    Fuhai Chen, Rongrong Ji, Jiayi Ji, Xiaoshuai Sun, Baochang Zhang, Xuri Ge, Yongjian Wu, Feiyue Huang

ICCV-2019

Video Captioning

  • VATEX: A Large-Scale, High-Quality Multilingual Dataset for Video-and-Language Research [paper] [challenge]
    Xin Wang, Jiawei Wu, Junkun Chen, Lei Li, Yuan-Fang Wang, William Yang Wang
    ICCV 2019 Oral

  • POS+CG: Controllable Video Captioning With POS Sequence Guidance Based on Gated Fusion Network [paper]
    Bairui Wang, Lin Ma, Wei Zhang, Wenhao Jiang, Jingwen Wang, Wei Liu

  • POS: Joint Syntax Representation Learning and Visual Cue Translation for Video Captioning [paper]
    Jingyi Hou, Xinxiao Wu, Wentian Zhao, Jiebo Luo, Yunde Jia

  • MUTAN: Watch, Listen and Tell: Multi-Modal Weakly Supervised Dense Event Captioning [paper]
    Tanzila Rahman, Bicheng Xu, Leonid Sigal

Image Captioning

  • DUDA: Robust Change Captioning [paper]
    Dong Huk Park, Trevor Darrell, Anna Rohrbach
    ICCV 2019 Oral

  • AoANet: Attention on Attention for Image Captioning [paper]
    Lun Huang, Wenmin Wang, Jie Chen, Xiao-Yong Wei
    ICCV 2019 Oral

  • MaBi-LSTMs: Exploring Overall Contextual Information for Image Captioning in Human-Like Cognitive Style [paper]
    Hongwei Ge, Zehang Yan, Kai Zhang, Mingde Zhao, Liang Sun

  • Align2Ground: Weakly Supervised Phrase Grounding Guided by Image-Caption Alignment [paper]
    Samyak Datta, Karan Sikka, Anirban Roy, Karuna Ahuja, Devi Parikh, Ajay Divakaran

  • GCN-LSTM+HIP: Hierarchy Parsing for Image Captioning [paper]
    Ting Yao, Yingwei Pan, Yehao Li, Tao Mei

  • IR+Tdiv: Generating Diverse and Descriptive Image Captions Using Visual Paraphrases [paper]
    Lixin Liu, Jiajun Tang, Xiaojun Wan, Zongming Guo

  • CNM+SGAE: Learning to Collocate Neural Modules for Image Captioning [paper]
    Xu Yang, Hanwang Zhang, Jianfei Cai

  • Seq-CVAE: Sequential Latent Spaces for Modeling the Intention During Diverse Image Captioning [paper]
    Jyoti Aneja, Harsh Agrawal, Dhruv Batra, Alexander Schwing

  • Towards Unsupervised Image Captioning With Shared Multimodal Embeddings [paper]
    Iro Laina, Christian Rupprecht, Nassir Navab

  • Human Attention in Image Captioning: Dataset and Analysis [paper]
    Sen He, Hamed R. Tavakoli, Ali Borji, Nicolas Pugeault

  • RDN: Reflective Decoding Network for Image Captioning [paper]
    Lei Ke, Wenjie Pei, Ruiyu Li, Xiaoyong Shen, Yu-Wing Tai

  • PSST: Joint Optimization for Cooperative Image Captioning [paper]
    Gilad Vered, Gal Oren, Yuval Atzmon, Gal Chechik

  • ETA: Entangled Transformer for Image Captioning [paper]
    Guang Li, Linchao Zhu, Ping Liu, Yi Yang

  • nocaps: novel object captioning at scale [paper]
    Harsh Agrawal, Karan Desai, Yufei Wang, Xinlei Chen, Rishabh Jain, Mark Johnson, Dhruv Batra, Devi Parikh, Stefan Lee, Peter Anderson

  • Cap2Det: Learning to Amplify Weak Caption Supervision for Object Detection [paper]
    Keren Ye, Mingda Zhang, Adriana Kovashka, Wei Li, Danfeng Qin, Jesse Berent

  • Graph-Align: Unpaired Image Captioning via Scene Graph Alignments [paper]
    Jiuxiang Gu, Shafiq Joty, Jianfei Cai, Handong Zhao, Xu Yang, Gang Wang

  • Learning to Caption Images Through a Lifetime by Asking Questions [paper]
    Tingke Shen, Amlan Kar, Sanja Fidler

CVPR-2019

Image Captioning

  • SGAE: Auto-Encoding Scene Graphs for Image Captioning [paper] [code]
    Xu Yang (Nanyang Technological University); Kaihua Tang (Nanyang Technological University); Hanwang Zhang (Nanyang Technological University); Jianfei Cai (Nanyang Technological University)
    CVPR 2019 Oral

  • POS: Fast, Diverse and Accurate Image Captioning Guided by Part-Of-Speech [paper]
    Aditya Deshpande (University of Illinois at Urbana-Champaign); Jyoti Aneja (University of Illinois at Urbana-Champaign); Liwei Wang (Tencent AI Lab); Alexander Schwing (UIUC); David Forsyth (University of Illinois at Urbana-Champaign)
    CVPR 2019 Oral

  • Unsupervised Image Captioning [paper] [code]
    Yang Feng (University of Rochester); Lin Ma (Tencent AI Lab); Wei Liu (Tencent); Jiebo Luo (U. Rochester)

  • Exact Adversarial Attack to Image Captioning via Structured Output Learning With Latent Variables [paper]
    Yan Xu (UESTC); Baoyuan Wu (Tencent AI Lab); Fumin Shen (UESTC); Yanbo Fan (Tencent AI Lab); Yong Zhang (Tencent AI Lab); Heng Tao Shen (University of Electronic Science and Technology of China (UESTC)); Wei Liu (Tencent)

  • Describing like Humans: On Diversity in Image Captioning [paper]
    Qingzhong Wang (Department of Computer Science, City University of Hong Kong); Antoni Chan (City University of Hong Kong)

  • MSCap: Multi-Style Image Captioning With Unpaired Stylized Text [paper]
    Longteng Guo (Institute of Automation, Chinese Academy of Sciences); Jing Liu (National Lab of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences); Peng Yao (University of Science and Technology Beijing); Jiangwei Li (Huawei); Hanqing Lu (NLPR, Institute of Automation, CAS)

  • CapSal: Leveraging Captioning to Boost Semantics for Salient Object Detection [paper] [code]
    Lu Zhang (Dalian University of Technology); Huchuan Lu (Dalian University of Technology); Zhe Lin (Adobe Research); Jianming Zhang (Adobe Research); You He (Naval Aviation University)

  • Context and Attribute Grounded Dense Captioning [paper]
    Guojun Yin (University of Science and Technology of China); Lu Sheng (The Chinese University of Hong Kong); Bin Liu (University of Science and Technology of China); Nenghai Yu (University of Science and Technology of China); Xiaogang Wang (Chinese University of Hong Kong, Hong Kong); Jing Shao (Sensetime)

  • Dense Relational Captioning: Triple-Stream Networks for Relationship-Based Captioning [paper]
    Dong-Jin Kim (KAIST); Jinsoo Choi (KAIST); Tae-Hyun Oh (MIT CSAIL); In So Kweon (KAIST)

  • Show, Control and Tell: A Framework for Generating Controllable and Grounded Captions [paper]
    Marcella Cornia (University of Modena and Reggio Emilia); Lorenzo Baraldi (University of Modena and Reggio Emilia); Rita Cucchiara (University of Modena and Reggio Emilia)

  • Self-Critical N-step Training for Image Captioning [paper]
    Junlong Gao (Peking University Shenzhen Graduate School); Shiqi Wang (CityU); Shanshe Wang (Peking University); Siwei Ma (Peking University, China); Wen Gao (PKU)

  • Look Back and Predict Forward in Image Captioning [paper]
    Yu Qin (Shanghai Jiao Tong University); Jiajun Du (Shanghai Jiao Tong University); Hongtao Lu (Shanghai Jiao Tong University); Yonghua Zhang (Bytedance)

  • Intention Oriented Image Captions with Guiding Objects [paper]
    Yue Zheng (Tsinghua University); Ya-Li Li (THU); Shengjin Wang (Tsinghua University)

  • Adversarial Semantic Alignment for Improved Image Captions [paper]
    Pierre Dognin (IBM); Igor Melnyk (IBM); Youssef Mroueh (IBM Research); Jarret Ross (IBM); Tom Sercu (IBM Research AI)

  • Good News, Everyone! Context driven entity-aware captioning for news images [paper] [code]
    Ali Furkan Biten (Computer Vision Center); Lluis Gomez (Universitat Autónoma de Barcelona); Marçal Rusiñol (Computer Vision Center, UAB); Dimosthenis Karatzas (Computer Vision Centre)

  • Pointing Novel Objects in Image Captioning [paper]
    Yehao Li (Sun Yat-Sen University); Ting Yao (JD AI Research); Yingwei Pan (JD AI Research); Hongyang Chao (Sun Yat-sen University); Tao Mei (AI Research of JD.com)

  • Engaging Image Captioning via Personality [paper]
    Kurt Shuster (Facebook); Samuel Humeau (Facebook); Hexiang Hu (USC); Antoine Bordes (Facebook); Jason Weston (FAIR)

Video Captioning

  • SDVC: Streamlined Dense Video Captioning [paper]
    Jonghwan Mun (POSTECH); Linjie Yang (ByteDance AI Lab); Zhou Ren (Snap Inc.); Ning Xu (Snap); Bohyung Han (Seoul National University)
    CVPR 2019 Oral

  • GVD: Grounded Video Description [paper]
    Luowei Zhou (University of Michigan); Yannis Kalantidis (Facebook Research); Xinlei Chen (Facebook AI Research); Jason J Corso (University of Michigan); Marcus Rohrbach (Facebook AI Research)
    CVPR 2019 Oral

  • HybridDis: Adversarial Inference for Multi-Sentence Video Description [paper]
    Jae Sung Park (UC Berkeley); Marcus Rohrbach (Facebook AI Research); Trevor Darrell (UC Berkeley); Anna Rohrbach (UC Berkeley)
    CVPR 2019 Oral

  • OA-BTG: Object-aware Aggregation with Bidirectional Temporal Graph for Video Captioning [paper]
    Junchao Zhang (Peking University); Yuxin Peng (Peking University)

  • MARN: Memory-Attended Recurrent Network for Video Captioning [paper]
    Wenjie Pei (Tencent); Jiyuan Zhang (Tencent YouTu); Xiangrong Wang (Delft University of Technology); Lei Ke (Tencent); Xiaoyong Shen (Tencent); Yu-Wing Tai (Tencent)

  • GRU-EVE: Spatio-Temporal Dynamics and Semantic Attribute Enriched Visual Encoding for Video Captioning [paper]
    Nayyer Aafaq (The University of Western Australia); Naveed Akhtar (The University of Western Australia); Wei Liu (University of Western Australia); Syed Zulqarnain Gilani (The University of Western Australia); Ajmal Mian (University of Western Australia)

AAAI-2019

Image Captioning

  • Improving Image Captioning with Conditional Generative Adversarial Nets [paper]
    Chen Chen (Tencent); Shuai Mu (Tencent); Wanpeng Xiao (Tencent); Zexiong Ye (Tencent); Liesi Wu (Tencent); Qi Ju (Tencent)
    AAAI 2019 Oral
  • PAGNet: Connecting Language to Images: A Progressive Attention-Guided Network for Simultaneous Image Captioning and Language Grounding [paper]
    Lingyun Song (Xi'an Jiaotong University); Jun Liu (Xi'an Jiaotong University); Buyue Qian (Xi'an Jiaotong University); Yihe Chen (University of Toronto)
    AAAI 2019 Oral
  • Meta Learning for Image Captioning [paper]
    Nannan Li (Wuhan University); Zhenzhong Chen (WHU); Shan Liu (Tencent America)
  • DA: Deliberate Residual based Attention Network for Image Captioning [paper]
    Lianli Gao (The University of Electronic Science and Technology of China); Kaixuan Fan (University of Electronic Science and Technology of China); Jingkuan Song (UESTC); Xianglong Liu (Beihang University); Xing Xu (University of Electronic Science and Technology of China); Heng Tao Shen (University of Electronic Science and Technology of China (UESTC))
  • HAN: Hierarchical Attention Network for Image Captioning [paper]
    Weixuan Wang (School of Electronic and Information Engineering, Sun Yat-sen University); Zhihong Chen (School of Electronic and Information Engineering, Sun Yat-sen University); Haifeng Hu (School of Electronic and Information Engineering, Sun Yat-sen University)
  • COCG: Learning Object Context for Dense Captioning [paper]
    Xiangyang Li (Institute of Computing Technology, Chinese Academy of Sciences); Shuqiang Jiang (ICT, Chinese Academy of Sciences); Jungong Han (Lancaster University)

Video Captioning

  • TAMoE: Learning to Compose Topic-Aware Mixture of Experts for Zero-Shot Video Captioning [code] [paper]
    Xin Wang (University of California, Santa Barbara); Jiawei Wu (University of California, Santa Barbara); Da Zhang (UC Santa Barbara); Yu Su (OSU); William Wang (UC Santa Barbara)
    AAAI 2019 Oral

  • TDConvED: Temporal Deformable Convolutional Encoder-Decoder Networks for Video Captioning [paper]
    Jingwen Chen (Sun Yat-sen University); Yingwei Pan (JD AI Research); Yehao Li (Sun Yat-Sen University); Ting Yao (JD AI Research); Hongyang Chao (Sun Yat-sen University); Tao Mei (AI Research of JD.com)
    AAAI 2019 Oral

  • FCVC-CF&IA: Fully Convolutional Video Captioning with Coarse-to-Fine and Inherited Attention [paper]
    Kuncheng Fang (Fudan University); Lian Zhou (Fudan University); Cheng Jin (Fudan University); Yuejie Zhang (Fudan University); Kangnian Weng (Shanghai University of Finance and Economics); Tao Zhang (Shanghai University of Finance and Economics); Weiguo Fan (University of Iowa)

  • MGSA: Motion Guided Spatial Attention for Video Captioning [paper]
    Shaoxiang Chen (Fudan University); Yu-Gang Jiang (Fudan University)
