# Summarizing Text on Any Aspects
This repo contains the preliminary code for the following paper:

**Summarizing Text on Any Aspects: A Knowledge-Informed Weakly-Supervised Approach**
Bowen Tan, Lianhui Qin, Eric P. Xing, Zhiting Hu
*EMNLP 2020*

[ArXiv] [Slides]
## Getting Started
- Given a document and a target aspect (e.g., a topic of interest), aspect-based abstractive summarization attempts to generate a summary with respect to the aspect.
- In this work, we study summarizing on arbitrary aspects relevant to the document.
- Due to the lack of supervision data, we develop a new weak supervision construction method integrating rich external knowledge sources such as ConceptNet and Wikipedia.
## Requirements
Our Python version is 3.8. Required packages can be installed with:

```
pip install -r requirements.txt
```

Our code runs on a single GTX 1080Ti GPU.
## Datasets & Knowledge Sources
### Weakly Supervised Dataset
Our constructed weakly supervised dataset can be downloaded with:

```
bash data_utils/download_weaksup.sh
```

Downloaded data will be saved into `data/weaksup/`.
We also provide the code to construct this dataset; for more details, see the dataset construction scripts in this repo.
### MA-News Dataset
MA-News is an aspect-based summarization dataset constructed by Frermann et al. Its aspects are restricted to only six coarse-grained topics. We use MA-News for our automatic evaluation. Scripts to construct MA-News are here.
A JSON version processed by us can be downloaded with:

```
bash data_utils/download_manews.sh
```

Downloaded data will be saved into `data/manews/`.
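The JSON schema isn't documented here, so a quick schema-agnostic way to inspect what landed in `data/manews/` is sketched below (the `*.json` glob is an assumption; adjust it to the actual file names):

```python
import json
from pathlib import Path

# Inspect whatever the download script produced under data/manews/.
# The *.json glob is an assumption; adjust to the actual file names.
for path in sorted(Path("data/manews").glob("*.json")):
    with open(path) as f:
        data = json.load(f)
    print(path.name, type(data).__name__, len(data))
    # If it's a list of examples, peek at the keys of the first one.
    if isinstance(data, list) and data:
        print("  example keys:", list(data[0].keys()))
```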
### Knowledge Graph - ConceptNet
ConceptNet is a large multilingual commonsense knowledge graph. We extract an English subset, which can be downloaded with:

```
bash data_utils/download_concept_net.sh
```
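To get a feel for the kind of (concept, relation, concept) edges the subset contains, you can query ConceptNet's public REST API; this online lookup is just an illustration, not how our code consumes the downloaded subset:

```python
import requests

# Look up edges around a single English concept via the public API.
# The offline subset downloaded above holds the same kind of triples.
obj = requests.get("https://api.conceptnet.io/c/en/newspaper").json()
for edge in obj["edges"][:5]:
    print(edge["start"]["label"], "--", edge["rel"]["label"], "->", edge["end"]["label"])
```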
### Knowledge Base - Wikipedia
Wikipedia is an encyclopedic knowledge base. We use its Python API to access it online, so make sure your network connection is stable when running our code.
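For reference, here is a minimal sketch of such an online lookup using the popular `wikipedia` PyPI package; whether our code uses this exact client is an implementation detail, but the access pattern is the same:

```python
import wikipedia  # pip install wikipedia

# Search for pages about a concept and fetch a short summary.
# Both calls hit Wikipedia's servers, hence the note above about
# a stable connection.
titles = wikipedia.search("text summarization")
print(titles[:3])
print(wikipedia.summary(titles[0], sentences=2))
```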
## Weakly Supervised Model
### Train
Run this command to finetune a weakly supervised model from a pretrained BART model (Lewis et al.):

```
python finetune.py --dataset_name weaksup --train_docs 100000 --n_epochs 1
```

Training logs and checkpoints will be saved into `logs/weaksup/docs100000/`.

Training takes ~48h on a single GTX 1080Ti GPU, so you may want to directly download the training log and the trained model here.
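For orientation, the pretrained BART backbone is available through Hugging Face `transformers`. The sketch below only shows plain (non-aspect) summarization with an assumed `facebook/bart-large` checkpoint; it is not a substitute for what `finetune.py` does internally:

```python
from transformers import BartForConditionalGeneration, BartTokenizer

# Load a pretrained BART model (Lewis et al.). The exact checkpoint
# used by finetune.py is an assumption here.
tokenizer = BartTokenizer.from_pretrained("facebook/bart-large")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-large")

doc = "The city council met on Tuesday to discuss the new transit plan ..."
inputs = tokenizer(doc, return_tensors="pt", truncation=True, max_length=1024)
summary_ids = model.generate(**inputs, num_beams=4, max_length=60)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```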
### Generation
Run this command to generate on the MA-News test set with the weakly supervised model:

```
python generate.py --log_path logs/weaksup/docs100000/
```

Source texts, target texts, and generated texts will be saved as `test.source`, `test.gold`, and `test.hypo`, respectively, into the log dir `logs/weaksup/docs100000/`.
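Assuming the conventional one-example-per-line layout, the three files are line-aligned, so you can eyeball a single example like this:

```python
# Print the first test example: source, reference, and generation.
# Assumes the conventional one-example-per-line layout.
log_dir = "logs/weaksup/docs100000/"
for name in ["test.source", "test.gold", "test.hypo"]:
    with open(log_dir + name) as f:
        print(f"{name}: {f.readline().strip()[:200]}")
```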
### Evaluation
To run evaluation, make sure you have installed `java` and `files2rouge` on your device.

First, download Stanford CoreNLP:

```
python data_utils/download_stanford_core_nlp.py
```

Then run:

```
bash evaluate.sh logs/weaksup/docs100000/
```

to get ROUGE scores. Results will be saved in `logs/weaksup/docs100000/rouge_scores.txt`.
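If installing java and files2rouge is inconvenient, the `rouge-score` PyPI package gives a quick sanity check; its preprocessing differs from files2rouge, so expect small deviations from the official numbers produced by `evaluate.sh`:

```python
from rouge_score import rouge_scorer  # pip install rouge-score

# Approximate check only: files2rouge (used by evaluate.sh) applies
# different preprocessing, so scores will differ slightly.
scorer = rouge_scorer.RougeScorer(["rouge1", "rouge2", "rougeL"], use_stemmer=True)
log_dir = "logs/weaksup/docs100000/"
with open(log_dir + "test.gold") as refs, open(log_dir + "test.hypo") as hyps:
    scores = [scorer.score(ref.strip(), hyp.strip()) for ref, hyp in zip(refs, hyps)]
for key in ["rouge1", "rouge2", "rougeL"]:
    print(key, sum(s[key].fmeasure for s in scores) / len(scores))
```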
## Finetune with MA-News Training Data
### Baseline
Run these commands to finetune a BART model with 1K MA-News training examples:

```
python finetune.py --dataset_name manews --train_docs 1000 --wiki_sup False
python generate.py --log_path logs/manews/docs1000/ --wiki_sup False
bash evaluate.sh logs/manews/docs1000/
```

Results will be saved in `logs/manews/docs1000/`.
### + Weak Supervision
Run these commands to finetune with 1K MA-News training examples, starting from our weakly supervised model:

```
python finetune.py --dataset_name manews --train_docs 1000 --pretrained_ckpt logs/weaksup/docs100000/best_model.ckpt
python generate.py --log_path logs/manews_plus/docs1000/
bash evaluate.sh logs/manews_plus/docs1000/
```

Results will be saved in `logs/manews_plus/docs1000/`.
## Results
Results on the MA-News dataset are shown below (same setting as Table 2 in the paper). All detailed logs, including the training log, generated texts, and ROUGE scores, are available here.

(Note: the numbers may differ slightly from those in the paper due to minor differences in implementation details and random seeds; the improvements over comparison methods are consistent.)
| Model | ROUGE-1 | ROUGE-2 | ROUGE-L |
|---|---|---|---|
| Weak-Sup Only | 28.41 | 10.18 | 25.34 |
| MA-News-Sup 1K | 24.34 | 8.62 | 22.40 |
| MA-News-Sup 1K + Weak-Sup | 34.10 | 14.64 | 31.45 |
| MA-News-Sup 3K | 26.38 | 10.09 | 24.37 |
| MA-News-Sup 3K + Weak-Sup | 37.40 | 16.87 | 34.51 |
| MA-News-Sup 10K | 38.71 | 18.02 | 35.78 |
| MA-News-Sup 10K + Weak-Sup | 39.92 | 18.87 | 36.98 |
## Demo
We provide a demo on a real news article from Feb. 2021 (see `demo_input.json`).

To run the demo, download our trained model here, and run:

```
python demo.py --ckpt_path logs/weaksup/docs100000/best_model.ckpt
```