The official code of "SCROLLS: Standardized CompaRison Over Long Language Sequences".

Overview

SCROLLS

This repository contains the official code of the paper: "SCROLLS: Standardized CompaRison Over Long Language Sequences".

Links

Paper: https://arxiv.org/abs/2201.03533
Dataset: https://huggingface.co/datasets/tau/scrolls

Citation

@misc{shaham2022scrolls,
      title={SCROLLS: Standardized CompaRison Over Long Language Sequences}, 
      author={Uri Shaham and Elad Segal and Maor Ivgi and Avia Efrat and Ori Yoran and Adi Haviv and Ankit Gupta and Wenhan Xiong and Mor Geva and Jonathan Berant and Omer Levy},
      year={2022},
      eprint={2201.03533},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}

Loading the SCROLLS Benchmark Datasets
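
The datasets are hosted on the Hugging Face Hub under tau/scrolls. Below is a minimal sketch using the datasets library; the config names are taken from the tau/scrolls dataset card and are listed here as assumptions rather than a quote of the repository's documented instructions.

    # Minimal sketch: load one SCROLLS task with the Hugging Face datasets library.
    # Config names ("gov_report", "summ_screen_fd", "qmsum", "narrative_qa",
    # "qasper", "quality", "contract_nli") follow the tau/scrolls dataset card.
    from datasets import load_dataset

    qmsum = load_dataset("tau/scrolls", "qmsum")

    print(qmsum)                    # DatasetDict with train / validation / test splits
    example = qmsum["validation"][0]
    print(example["input"][:500])   # the long input (for QA tasks, query + source document)
    print(example["output"])        # the reference output (hidden for the test split)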

Comments
  • Evaluating the results

    I have trained a baseline model and run prediction on the validation split according to the instructions in the baseline README. However, the command-line output didn't seem to point me to a destination folder containing the generated predictions for the validation dataset. I was hoping to find a JSON file with the validation-split predictions so that I can pass it to the evaluator. Is there a way to find those predictions?

    Moreover, is there a way for me to evaluate the results on the test split? The README in the evaluator folder lists the following options: Evaluate predictions for a single dataset (validation only), Evaluate predictions for the entire benchmark (validation only), Prepare Submission File, and Verify Submission File.
    I want to evaluate the metrics on the test dataset (to see if the resulting numbers match the paper), but I don't want to generate a submission file since I'm just running the baseline models. Is there a way to do that? Thank you very much!


    EDIT: I'm currently only running the QMSum dataset, not the others.
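
    (Not an answer from the authors, just an illustrative sketch.) For a rough local sanity check on QMSum validation outputs, one can score a predictions file against the validation references with the rouge_score package. The predictions.json name and its id-to-text layout are assumptions, and the scores will not exactly reproduce the official evaluator, which applies its own normalization.

        # Rough ROUGE sanity check for QMSum validation predictions.
        # NOT the official SCROLLS evaluator; "predictions.json" (id -> generated
        # text) is a hypothetical file produced by your own prediction run.
        import json

        from datasets import load_dataset
        from rouge_score import rouge_scorer

        with open("predictions.json") as f:
            predictions = json.load(f)

        validation = load_dataset("tau/scrolls", "qmsum")["validation"]
        references = {example["id"]: example["output"] for example in validation}

        scorer = rouge_scorer.RougeScorer(["rouge1", "rouge2", "rougeLsum"], use_stemmer=True)
        scores = [scorer.score(references[i], text) for i, text in predictions.items()]
        for key in ("rouge1", "rouge2", "rougeLsum"):
            print(key, 100 * sum(s[key].fmeasure for s in scores) / len(scores))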

    opened by Leonard907 4
  • Prompts for tasks

    For your tasks, is the "input" column the full input passed to the models? Did you add any additional prompting for the models reported on the leaderboard?

    For example, for GovReport, did you (or were the teams who submitted allowed to) do something like the following?

    Original Text:
    <"input" column of GovReport>
    
    Summary:
    <"output" column of GovReport / output of model>
    

    Or is there no additional prompting:

    <"input" column of GovReport>
    <"output" column of GovReport / output of model>
    
    opened by yulonglin 2
  • Lengths of inputs and outputs

    1. From the paper, it seems you truncate inputs to 16,384 tokens for your leaderboard; is that right?
    2. As n-gram metrics are affected by the length of outputs, how do you determine the target length of outputs? I notice that the default max_target_length in baselines/src/run.py is 128 tokens. Do you train your models with an EOS token such that the generated output may terminate much earlier?
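
    (An illustrative sketch, not the authors' exact setup.) With Hugging Face transformers, the two knobs in question look roughly like this: truncation bounds the encoder input, while max_length only caps generation, which stops earlier once the model emits its EOS token. The model name and length values below are placeholders.

        # Sketch: input truncation + EOS-terminated generation with transformers.
        # "facebook/bart-base" and the length values are illustrative placeholders;
        # BART's own encoder is limited to 1024 positions, so a 16,384-token budget
        # only applies to long-input models.
        from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

        tokenizer = AutoTokenizer.from_pretrained("facebook/bart-base")
        model = AutoModelForSeq2SeqLM.from_pretrained("facebook/bart-base")

        long_document = "Replace with a long meeting transcript or report."
        inputs = tokenizer(long_document, truncation=True, max_length=1024, return_tensors="pt")

        # max_length caps the output at 128 tokens, but generation terminates as
        # soon as the EOS token is produced, so outputs are often much shorter.
        summary_ids = model.generate(**inputs, max_length=128, num_beams=4)
        print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
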
    opened by yulonglin 2
  • Predict command fails

    First I want to thank the authors for this great work! I might find it useful for my research.

    I encountered 3 problems:

    1. In evaluator/dataset_evaluator.py, the call to hf_hub_download raised an exception because it was invoked as hf_hub_download(repo_id="datasets/tau/scrolls", filename="metrics/scrolls.py") instead of hf_hub_download(repo_id="tau/scrolls", filename="metrics/scrolls.py", repo_type="dataset"). I don't know why it worked for you; perhaps there was a recent breaking change in the datasets library. Would you like me to open a PR for that? (A runnable form of the fix is shown after this list.)
    2. The generate script (python scripts/execute.py scripts/commands/generate.py {dataset}_{model}_{split} --checkpoint_path path/to/model/folder) took a very long time, much longer than fine-tuning 256-bart. A warning that might be related said:

    Was asked to gather along dimension 0, but all input tensors were scalars; will instead unsqueeze and return a vector

    Edit: I now noticed that this warning is emitted only when I use more than one GPU. However, it is still slower than expected.

    3. It failed with the following exception:

    Traceback (most recent call last):
      File "/home/liranringel/scrolls/baselines/scripts/execute.py", line 53, in <module>
        main(command_dict, unknown)
      File "/home/liranringel/scrolls/baselines/scripts/execute.py", line 33, in main
        runpy.run_module(module_name, run_name="__main__")
      File "/home/liranringel/miniconda3/envs/mem/lib/python3.9/runpy.py", line 228, in run_module
        return _run_code(code, {}, init_globals, run_name, mod_spec)
      File "/home/liranringel/miniconda3/envs/mem/lib/python3.9/runpy.py", line 87, in _run_code
        exec(code, run_globals)
      File "/home/liranringel/scrolls/baselines/src/run.py", line 789, in <module>
        main()
      File "/home/liranringel/scrolls/baselines/src/run.py", line 656, in main
        metrics = trainer.evaluate(metric_key_prefix="eval")
      File "/home/liranringel/miniconda3/envs/mem/lib/python3.9/site-packages/transformers/trainer_seq2seq.py", line 131, in evaluate
        eval_preds = self._post_process_function(untokenized_eval_dataset, eval_loop_output.predictions)
      File "/home/liranringel/miniconda3/envs/mem/lib/python3.9/site-packages/transformers/trainer_seq2seq.py", line 326, in _post_process_function
        assert len(untokenized_eval_dataset) == len(self.eval_dataset)
    AssertionError
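
    For reference, here is a runnable form of the corrected call from point 1 (the metric_path variable name is just for illustration):

        # Corrected hf_hub_download usage: pass the bare repo id and mark the repo
        # as a dataset via repo_type, rather than prefixing the id with "datasets/".
        from huggingface_hub import hf_hub_download

        metric_path = hf_hub_download(
            repo_id="tau/scrolls",
            filename="metrics/scrolls.py",
            repo_type="dataset",
        )
        print(metric_path)  # local cache path of the downloaded metric script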

    opened by liranringel 1