
Overview

SCAI-QReCC-21

[leaderboards] [registration] [forum] [contact] [SCAI]

Answer a series of contextually dependent questions as they may occur in natural human-to-human conversations.

  • Submission deadline: September 8, 2021 Extended: September 15, 2021
  • Results announcement: September 30, 2021
  • Workshop presentations: October 8, 2021

Data

[Zenodo] [original]

File names here refer to the respective files hosted on [Zenodo].

The passage collection (passages.zip) is 27.5GB with 54M passages!

The input format for the task (scai-qrecc21-[toy,training,test]-questions[,-rewritten].json) is a JSON file:

, "Turn_no": X, "Question": " " }, ... ]">
[
  {
    "Conversation_no": 
    
     ,
    "Turn_no": X,
    "Question": "
     
      "
  }, ...
]

     
    

Here, X is the number of the question within its conversation. Questions with the same Conversation_no are from the same conversation.

The questions-rewritten.json files contain human-rewritten questions that can be used by systems that do not want to participate in question rewriting.
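
For orientation, the following is a minimal sketch (in Python; the file name is one of the Zenodo files listed above) of loading a questions file and grouping its turns by Conversation_no, so that the earlier questions of a conversation are available as context for the current one:

import json
from collections import defaultdict

# Load one of the question files described above.
with open("scai-qrecc21-training-questions.json") as f:
    turns = json.load(f)

# Group turns by conversation.
conversations = defaultdict(list)
for turn in turns:
    conversations[turn["Conversation_no"]].append(turn)

for conv_turns in conversations.values():
    conv_turns.sort(key=lambda t: t["Turn_no"])
    # For each turn i, conv_turns[:i] holds the preceding questions (its context).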

Submission

Register for the task using this form. We will then send you your TIRA login once it is ready.

The challenge is hosted on TIRA. Participants are encouraged to upload their code and run the evaluation on the VMs provided by the platform to ensure reproducibility of the results. It is also possible to upload the submission as a single JSON file.

The submission format for the task is a JSON file similar to the input (all Model_xxx fields are optional and can be omitted from the submission; e.g., provide only Conversation_no, Turn_no, and Model_answer to get the EM and F1 scores for the generated answers):

, "Turn_no": X, "Model_rewrite": " ", "Model_passages": { " ": , ... }, "Model_answer": " " }, ... ]">
[
  {
    "Conversation_no": 
       
        ,
    "Turn_no": X,
    "Model_rewrite": "
        
         ",
    "Model_passages": { 
      "
         
          ": 
          
           , ...
    },
    "Model_answer": "
           
            " }, ... ] 
           
          
         
        
       
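Since all Model_xxx fields are optional, a run that contains only Model_answer entries is already a valid submission for the question answering scores. A minimal sketch in Python (the answer function is a hypothetical stand-in for your own model):

import json

def answer(question):
    # Hypothetical placeholder; replace with your own answer generation.
    return "some answer"

with open("scai-qrecc21-test-questions.json") as f:
    questions = json.load(f)

run = [
    {
        "Conversation_no": q["Conversation_no"],
        "Turn_no": q["Turn_no"],
        "Model_answer": answer(q["Question"]),
    }
    for q in questions
]

with open("run.json", "w") as f:
    json.dump(run, f, indent=2)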

Example: scai-qrecc21-naacl-baseline.zip

You can use the code of our simple baseline to get started.

Software Submission

We recommend that participants upload (through SSH or RDP) their software/system to their dedicated TIRA virtual machine (assigned after registration), so that their runs can be reproduced and so that they can be easily applied to different data (of the same format) in the future. The mail sent to you after registration contains the credentials to access the TIRA web interface and your VM. If you cannot connect to your VM, ensure it is powered on in the TIRA web interface.

Your software is expected to accept two arguments (a sketch of this contract follows the list):

  • An input directory (named $inputDataset in TIRA) that contains the questions.json input file and the passages-index-anserini directory. The latter contains a full Anserini index of the passage collection. Note that you need to install openjdk-11-jdk-headless to use it. We may be able to add more such indices on request.
  • An output directory (named $outputDir in TIRA) into which your software needs to place the submission as run.json.
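
A minimal sketch of this contract in Python (using pyserini to query the Anserini index is our assumption here, not a requirement; the retrieval is a plain BM25 search over the unmodified question, similar in spirit to the simple baseline described below):

import argparse
import json
import os

from pyserini.search import SimpleSearcher  # assumes pyserini is installed on the VM

def main():
    parser = argparse.ArgumentParser()
    parser.add_argument("--input", required=True, help="the $inputDataset directory")
    parser.add_argument("--output", required=True, help="the $outputDir directory")
    args = parser.parse_args()

    with open(os.path.join(args.input, "questions.json")) as f:
        questions = json.load(f)

    searcher = SimpleSearcher(os.path.join(args.input, "passages-index-anserini"))

    run = []
    for q in questions:
        hits = searcher.search(q["Question"], k=10)  # BM25 over the raw question
        run.append({
            "Conversation_no": q["Conversation_no"],
            "Turn_no": q["Turn_no"],
            "Model_passages": {hit.docid: float(hit.score) for hit in hits},
        })

    with open(os.path.join(args.output, "run.json"), "w") as f:
        json.dump(run, f)

if __name__ == "__main__":
    main()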

Install your software to your VM. Then go to the TIRA web interface and click "Add software". Specify the command to run your software (see the image for the simple baseline).

IMPORTANT: To ensure reproducibility, create a "Software" in the TIRA web interface for each parameter setting that you consider a submission to the challenge.

Click on "Run" to execute your software for the selected input dataset. Your VM will not be accessible while your system is running, be detached from the internet (to ensure your software is fully installed in your virtual machine), and afterwards restored to the state before the run. Since the test set is rather large (the simple baseline takes nearly 11 hours to complete), we highly recommend you first test your software on the scai-qrecc21-toy-dataset-2021-07-20 input dataset. This dataset contains the first conversation (6 turns/questions) only. For the test-dataset, send us a mail at [email protected] so that we unblind your results.

TIRA Interface: VM status and submission

Then go to the "Runs" section below and click on the blue (i)-icon of the software run to check the software output. You can also download the run from there.

NOTE: By submitting your software you retain full copyrights. You agree to grant us usage rights for evaluation of the corresponding data generated by your software. We agree not to share your software with a third party or use it for any purpose other than research.

Run Submission

You can upload a JSON file as a submission at https://www.tira.io/run-upload-scai-qrecc21.

TIRA Interface: VM status and submission

Please specify the name and a description of your run in the form. After a successful upload, the page will redirect you to the overview of all your submissions, where you should evaluate your run to verify that it is valid. In the "Runs" section, you can click on the blue (i)-icon to double-check your upload. You can also download the run from there.

Evaluation

[script]

Once you run your software or uploaded your run, "Run" the evaluator on that run through the TIRA web interface (below the software; works out-of-the-box).

TIRA Interface: Evaluation

Then go to the "Runs" section below and click on the blue (i)-icon of the evaluator run to see your scores.

Ground truth

We use the QReCC paper annotations in the initial phase, and will update them with alternative answer spans and passages by pooling and crowdsourcing the relevance judgements over the results submitted by the challenge participants (similar to the TREC evaluation setup).

Metrics

We use the same metrics as the QReCC paper, but may add more for the final evaluation: ROUGE1-R for question rewriting, Mean Reciprocal Rank (MRR) for passage retrieval, and F1 and Exact Match for question answering.
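
The official evaluation [script] is linked above; purely as an illustration, the answer metrics are commonly computed as SQuAD-style Exact Match and token-level F1 over normalized strings, roughly as in this sketch (the normalization details of the official script may differ):

import re
import string
from collections import Counter

def normalize(text):
    # Lowercase, drop punctuation and articles, collapse whitespace (SQuAD-style).
    text = text.lower()
    text = "".join(c for c in text if c not in string.punctuation)
    text = re.sub(r"\b(a|an|the)\b", " ", text)
    return " ".join(text.split())

def exact_match(prediction, reference):
    return float(normalize(prediction) == normalize(reference))

def token_f1(prediction, reference):
    pred_tokens = normalize(prediction).split()
    ref_tokens = normalize(reference).split()
    overlap = sum((Counter(pred_tokens) & Counter(ref_tokens)).values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(ref_tokens)
    return 2 * precision * recall / (precision + recall)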

Baselines

We provide the following baselines for comparison:

  • scai-qrecc21-simple-baseline: BM25 baseline for passage retrieval using the original conversational questions without rewriting. We recommend using this code as a boilerplate to kickstart your own submission using the VM.
  • scai-qrecc21-naacl-baseline: results for the end-to-end approach using supervised question rewriting and QA models reported in the QReCC paper (accepted at NAACL'21). This sample run is available on Zenodo as scai-qrecc21-naacl-baseline.zip.

Note that the baseline results differ from the ones reported in the paper since we made several corrections to the evaluation script and the ground truth annotations:

  • We excluded the samples for which the ground truth is missing from the evaluation (i.e., no relevant passages or no answer text or no rewrite provided by the human annotators)

  • We removed 5,251 passage judgements that the heuristic annotated as relevant for short answers with length <= 5, since these matches are often trivial and unrelated, e.g., the same noun phrase appearing in different contexts.

Resources

Some useful links to get you started on a new conversational open-domain QA system:

Conversational Passage Retrieval

Answer Generation

Passage Retrieval

Conversational Question Reformulation
