Weakly-supervised Text Classification Based on Keyword Graph

Overview

Code for the EMNLP 2021 paper "Weakly-supervised Text Classification Based on Keyword Graph" (ClassKG).

How to run?

Download data

Our datasets follow previous works: for long texts, we follow ConWea; for short texts, we follow LOTClass.
We transform all of their data into a unified JSON format.

  1. Download datasets from: https://drive.google.com/drive/folders/1D8E9T-vuBE-YdAd9OBy-yS4UW4AptA58?usp=sharing

    • Long-text datasets (following ConWea):

      • 20Newsgroup Fine (20NF)
      • 20Newsgroup Coarse (20NC)
      • NYT Fine (NYT_25)
      • NYT Coarse (NYT_5)
    • Short-text datasets (following LOTClass):

      • AGNews
      • DBpedia
      • IMDB
      • Amazon
  2. Unzip the data into './data/processed'.

Another way to obtain the data (not recommended):
You can download the long-text data from ConWea and the short-text data from LOTClass, then transform it into JSON format using our code in 'preprocess_data/process_long.py' (or 'process_short.py'). You need to edit the preprocessing code to change the dataset path to your download location and to set the task name. The processed data is written to 'data/processed'. We also provide preprocessing code for X-Class: 'process_x_class.py'.
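
For example, after editing the paths and task name inside the script, the long-text preprocessing can be run from the repository root:

    python preprocess_data/process_long.py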

Requirements

This project is based on python==3.8. The dependencies are as follows:

pytorch
DGL
yacs
visdom
transformers
scikit-learn
numpy
scipy
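
A minimal environment setup might look like the following; this is only a sketch, since the repository does not pin exact versions, and the PyTorch and DGL builds must match your CUDA version:

    # hypothetical setup; pick torch/dgl builds matching your CUDA version
    conda create -n classkg python=3.8
    conda activate classkg
    pip install torch dgl yacs visdom transformers scikit-learn numpy scipy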

Train and Eval

  • We recommend starting Visdom to display the results:

visdom -p 8888

Open server_ip:8888 in a browser to view the Visdom panel.

  • Train:
    • First edit 'task/pipeline.py' to specify the config file and the CUDA devices to use.
      Some configuration files are provided in the 'config' folder.

    • Start training:

      python task/pipeline.py
      
    • Our code is written for multi-GPU training and currently may not run on a single GPU.
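
If you prefer not to hard-code GPU ids, the standard CUDA_VISIBLE_DEVICES environment variable can restrict which devices PyTorch sees (a general PyTorch/CUDA convention, not something this repository documents):

    CUDA_VISIBLE_DEVICES=0,1 python task/pipeline.py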

Run on your custom dataset

  1. Provide the following files under 'data/processed':

    • keywords.json
      Keywords for each class. Type: dict; key: class index; value: list containing all keywords for that class. See the provided datasets for details.

    • unlabeled.json
      The unlabeled sentences used in our paper. Type: list; each item is a 2-element list ([sentence_i, label_i]).
      To facilitate evaluation, we follow a setting similar to ConWea's, where sentence labels are provided; the labels are used only for evaluation. A sketch of both files follows this list.

  2. Provide a config file in the 'config' directory. You can copy one of the existing config files and change some fields, such as number_classes, classifier.type, data_dir_name, etc.

  3. Specify the config file name in pipeline.py and run the pipeline code.
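
A minimal sketch of the two files, based only on the descriptions above; the dataset directory name, class labels, and sentences are hypothetical, so check the provided datasets for the authoritative format (e.g. whether class indices are serialized as strings or integers):

    import json
    import os

    # Hypothetical dataset directory; the real name must match the
    # data_dir_name field in your config file.
    data_dir = 'data/processed/my_dataset'
    os.makedirs(data_dir, exist_ok = True)

    # keywords.json -- dict mapping class index to the list of seed
    # keywords for that class.
    keywords = {
        0: ['basketball', 'coach', 'tournament'],   # class 0, e.g. sports
        1: ['stocks', 'earnings', 'market'],        # class 1, e.g. business
    }
    with open(os.path.join(data_dir, 'keywords.json'), 'w') as f:
        json.dump(keywords, f)   # json serializes the integer keys as strings

    # unlabeled.json -- list of [sentence_i, label_i] pairs; the labels
    # are used only for evaluation.
    unlabeled = [
        ['The team clinched the title in overtime.', 0],
        ['Shares slid after the quarterly earnings report.', 1],
    ]
    with open(os.path.join(data_dir, 'unlabeled.json'), 'w') as f:
        json.dump(unlabeled, f)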

Citation

Please cite the following paper if you find our code helpful! Thank you very much.

Lu Zhang, Jiandong Ding, Yi Xu, Yingyao Liu and Shuigeng Zhou. "Weakly-supervised Text Classification Based on Keyword Graph". EMNLP 2021.
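
A BibTeX entry along these lines may be convenient (the entry key and field formatting are our own; verify against the official EMNLP 2021 proceedings):

    @inproceedings{zhang2021weakly,
        title     = {Weakly-supervised Text Classification Based on Keyword Graph},
        author    = {Zhang, Lu and Ding, Jiandong and Xu, Yi and Liu, Yingyao and Zhou, Shuigeng},
        booktitle = {Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing (EMNLP)},
        year      = {2021}
    }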

Comments
  • EOFError: Ran out of input

    Hello, I have run this code on the 20NF dataset and on my own dataset Yahoo; both are long-text datasets, and both hit the same problem. (Running this code on AGNews works fine.)

    Eval Model On Eval Set:

    2022-05-07 05:37:49,796 yahoo INFO: rank:0, Loading model from:yahoo/itr_0_ST_3, model save time:Sat May 7 05:37:48 2022
    INFO:yahoo:rank:0, Loading model from:yahoo/itr_0_ST_3, model save time:Sat May 7 05:37:48 2022
    2022-05-07 05:37:49,875 yahoo INFO: rank:0, Load model success, other info:{}
    INFO:yahoo:rank:0, Load model success, other info:{}
    2022-05-07 05:37:49,883 yahoo INFO: rank:0, start eval
    INFO:yahoo:rank:0, start eval
    Traceback (most recent call last):
      File "yahoo.py", line 99, in <module>
        spawn(main, args = (), nprocs = world_size, join = True)
      File "/root/miniconda3/lib/python3.8/site-packages/torch/multiprocessing/spawn.py", line 230, in spawn
        return start_processes(fn, args, nprocs, join, daemon, start_method='spawn')
      File "/root/miniconda3/lib/python3.8/site-packages/torch/multiprocessing/spawn.py", line 188, in start_processes
        while not context.join():
      File "/root/miniconda3/lib/python3.8/site-packages/torch/multiprocessing/spawn.py", line 150, in join
        raise ProcessRaisedException(msg, error_index, failed_process.pid)
    torch.multiprocessing.spawn.ProcessRaisedException:

    -- Process 1 terminated with the following error:
    Traceback (most recent call last):
      File "/root/miniconda3/lib/python3.8/site-packages/torch/multiprocessing/spawn.py", line 59, in _wrap
        fn(i, *args)
      File "/root/autodl-tmp/ClassKG-main/task/yahoo.py", line 81, in main
        sentences, labels = classifier_trainer.train_model(sentences = res_dict['sentences'],
      File "/root/autodl-tmp/ClassKG-main/task/../Models/Longformer_Classify/trainer_longformer_ST.py", line 53, in train_model
        sentences, labels, global_best = self.do_train(dataloader = dataloader_train, ITR = ITR,
      File "/root/autodl-tmp/ClassKG-main/task/../Models/Longformer_Classify/trainer_longformer_ST.py", line 189, in do_train
        self.checkpointer.load_from_filename(model = self.model,
      File "/root/autodl-tmp/ClassKG-main/task/../compent/checkpoint.py", line 92, in load_from_filename
        self.load_from_file(model, path, strict = strict)
      File "/root/autodl-tmp/ClassKG-main/task/../compent/checkpoint.py", line 95, in load_from_file
        data = torch.load(path, map_location = 'cpu')
      File "/root/miniconda3/lib/python3.8/site-packages/torch/serialization.py", line 593, in load
        return _legacy_load(opened_file, map_location, pickle_module, **pickle_load_args)
      File "/root/miniconda3/lib/python3.8/site-packages/torch/serialization.py", line 762, in _legacy_load
        magic_number = pickle_module.load(f, **pickle_load_args)
    EOFError: Ran out of input

    opened by xiatingyu 13
  • Paper experiment configurations

    Hi, I see the dataset configurations are different from the parameters described in the paper. Could you share the configuration files used for the paper? Thanks!

    opened by arielge 2
  • RuntimeError: Stop_waiting response is expected

    Hello, when I run pipeline.py, I get this error:

    Traceback (most recent call last): File "pipeline.py", line 99, in spawn(main, args = (), nprocs = world_size, join = True) File "/root/miniconda3/lib/python3.8/site-packages/torch/multiprocessing/spawn.py", line 200, in spawn return start_processes(fn, args, nprocs, join, daemon, start_method='spawn') File "/root/miniconda3/lib/python3.8/site-packages/torch/multiprocessing/spawn.py", line 158, in start_processes while not context.join(): File "/root/miniconda3/lib/python3.8/site-packages/torch/multiprocessing/spawn.py", line 119, in join raise Exception(msg) Exception:

    -- Process 0 terminated with the following error:
    Traceback (most recent call last):
      File "/root/miniconda3/lib/python3.8/site-packages/torch/multiprocessing/spawn.py", line 20, in _wrap
        fn(i, *args)
      File "/root/autodl-tmp/ClassKG-main/task/pipeline.py", line 41, in main
        set_multi_GPUs_envs(rank, world_size)
      File "/root/autodl-tmp/ClassKG-main/task/../compent/set_multi_GPUs.py", line 11, in set_multi_GPUs_envs
        dist.init_process_group('nccl', rank = rank, world_size = world_size)
      File "/root/miniconda3/lib/python3.8/site-packages/torch/distributed/distributed_c10d.py", line 393, in init_process_group
        store, rank, world_size = next(rendezvous_iterator)
      File "/root/miniconda3/lib/python3.8/site-packages/torch/distributed/rendezvous.py", line 172, in _env_rendezvous_handler
        store = TCPStore(master_addr, master_port, world_size, start_daemon, timeout)
    RuntimeError: Stop_waiting response is expected

    Maybe it is caused by the CUDA or PyTorch version. Could you please share your PyTorch version? Do you have any other solutions for this bug?

    opened by xiatingyu 1
  • Mismatched Device

    I've selected your work as part of a reproducibility study that I am conducting, but I'm having difficulties running your code.

    There appears to be a missing dependency on TensorFlow (imported in this file: https://github.com/zhanglu-cst/ClassKG/blob/e66ad7f9aa89bff57c75adc7bfd5e5063b2958ea/compent/checkpoint.py).

    Aside from that, I'm getting this error:

    Using backend: pytorch
    Using backend: pytorch
    Using backend: pytorch
    Setting up a new session...
    /workspace/task/../keyword_sentence/keywords.py:125: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor).
      label_one_hot = torch.tensor(label_one_hot).float()
    /workspace/task/../keyword_sentence/keywords.py:126: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor).
      index_one_hot = torch.tensor(index_one_hot).float()
    /workspace/task/../keyword_sentence/keywords.py:125: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor).
      label_one_hot = torch.tensor(label_one_hot).float()
    /workspace/task/../keyword_sentence/keywords.py:126: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor).
      index_one_hot = torch.tensor(index_one_hot).float()
    Traceback (most recent call last):
      File "pipeline.py", line 102, in <module>
        spawn(main, args = (), nprocs = world_size, join = True)
      File "/opt/conda/lib/python3.8/site-packages/torch/multiprocessing/spawn.py", line 240, in spawn
        return start_processes(fn, args, nprocs, join, daemon, start_method='spawn')
      File "/opt/conda/lib/python3.8/site-packages/torch/multiprocessing/spawn.py", line 198, in start_processes
        while not context.join():
      File "/opt/conda/lib/python3.8/site-packages/torch/multiprocessing/spawn.py", line 160, in join
        raise ProcessRaisedException(msg, error_index, failed_process.pid)
    torch.multiprocessing.spawn.ProcessRaisedException: 
    
    -- Process 1 terminated with the following error:
    Traceback (most recent call last):
      File "/opt/conda/lib/python3.8/site-packages/torch/multiprocessing/spawn.py", line 69, in _wrap
        fn(i, *args)
      File "/workspace/task/pipeline.py", line 76, in main
        res_dict = GCN_trainer.train_model(sentences = voted_sentences,
      File "/workspace/task/../Models/Graph_SSL/trainer_gcn.py", line 90, in train_model
        self.pretrain_model(dataloader_train.dataset.Large_G, self.model)
      File "/workspace/task/../Models/Graph_SSL/trainer_gcn.py", line 46, in pretrain_model
        trainer.do_train()
      File "/workspace/task/../Models/SSL/trainer_SSL.py", line 60, in do_train
        output = self.model(batch['batch_graphs'])
      File "/opt/conda/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1110, in _call_impl
        return forward_call(*input, **kwargs)
      File "/opt/conda/lib/python3.8/site-packages/torch/nn/parallel/distributed.py", line 930, in forward
        output = self.module(*inputs[0], **kwargs[0])
      File "/opt/conda/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1110, in _call_impl
        return forward_call(*input, **kwargs)
      File "/workspace/task/../Models/Graph_SSL/GIN_model.py", line 217, in forward
        h = self.ginlayers[i](graphs, h)
      File "/opt/conda/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1110, in _call_impl
        return forward_call(*input, **kwargs)
      File "/opt/conda/lib/python3.8/site-packages/dgl/nn/pytorch/conv/ginconv.py", line 133, in forward
        rst = (1 + self.eps) * feat_dst + graph.dstdata['neigh']
    RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:1 and cpu!
    
    

    STDOUT:

    loading unlabeled sentence:11527
    test not exist
    class:0, GT count:956.0, Pred count:1175.0
    class:1, GT count:858.0, Pred count:931.0
    class:2, GT count:81.0, Pred count:395.0
    class:3, GT count:5872.0, Pred count:5441.0
    class:4, GT count:535.0, Pred count:360.0
    rank:1,analyse of keywords:
                  precision    recall  f1-score   support
    
               0       0.76      0.94      0.84       956
               1       0.77      0.84      0.80       858
               2       0.18      0.89      0.30        81
               3       1.00      0.92      0.96      5872
               4       0.80      0.54      0.64       535
    
        accuracy                           0.89      8302
       macro avg       0.70      0.82      0.71      8302
    weighted avg       0.92      0.89      0.90      8302
    
    rank:1,analyse of keywords:, keywords count:25
    rank:1,cover:8302, all:11527
    rank:1, cover%:0.7202220872733582
    2022-06-23 15:52:18,324 NYT_5 INFO: rank:1, rank:1, keywords f1_micro:0.8900264996386414
    2022-06-23 15:52:18,324 NYT_5 INFO: rank:1, rank:1, keywords f1_macro:0.7091776614199704
    loading unlabeled sentence:11527
    test not exist
    class:0, GT count:956.0, Pred count:1175.0
    class:1, GT count:858.0, Pred count:931.0
    class:2, GT count:81.0, Pred count:395.0
    class:3, GT count:5872.0, Pred count:5441.0
    class:4, GT count:535.0, Pred count:360.0
    rank:0,analyse of keywords:
                  precision    recall  f1-score   support
    
               0       0.76      0.94      0.84       956
               1       0.77      0.84      0.80       858
               2       0.18      0.89      0.30        81
               3       1.00      0.92      0.96      5872
               4       0.80      0.54      0.64       535
    
        accuracy                           0.89      8302
       macro avg       0.70      0.82      0.71      8302
    weighted avg       0.92      0.89      0.90      8302
    
    rank:0,analyse of keywords:, keywords count:25
    rank:0,cover:8302, all:11527
    rank:0, cover%:0.7202220872733582
    2022-06-23 15:52:18,695 NYT_5 INFO: rank:0, rank:0, keywords f1_micro:0.8900264996386414
    2022-06-23 15:52:18,695 NYT_5 INFO: rank:0, rank:0, keywords f1_macro:0.7091776614199704
    2022-06-23 15:52:19,188 NYT_5 INFO: rank:1, iteration:0, start
    2022-06-23 15:52:19,542 NYT_5 INFO: rank:0, iteration:0, start
    2022-06-23 15:52:22,563 NYT_5 INFO: rank:1, vote generate sentences:8302. total count:11527, cover:0.7202220872733582
    2022-06-23 15:52:22,565 NYT_5 INFO: rank:1, labels:0, count:1175
    2022-06-23 15:52:22,565 NYT_5 INFO: rank:1, labels:1, count:931
    2022-06-23 15:52:22,565 NYT_5 INFO: rank:1, labels:2, count:395
    2022-06-23 15:52:22,565 NYT_5 INFO: rank:1, labels:3, count:5441
    2022-06-23 15:52:22,565 NYT_5 INFO: rank:1, labels:4, count:360
    2022-06-23 15:52:22,565 NYT_5 INFO: rank:1, build graphs, total number keywords:25
    2022-06-23 15:52:22,841 NYT_5 INFO: rank:0, vote generate sentences:8302. total count:11527, cover:0.7202220872733582
    2022-06-23 15:52:22,851 NYT_5 INFO: rank:0, labels:0, count:1175
    2022-06-23 15:52:22,851 NYT_5 INFO: rank:0, labels:1, count:931
    2022-06-23 15:52:22,851 NYT_5 INFO: rank:0, labels:2, count:395
    2022-06-23 15:52:22,852 NYT_5 INFO: rank:0, labels:3, count:5441
    2022-06-23 15:52:22,852 NYT_5 INFO: rank:0, labels:4, count:360
    2022-06-23 15:52:22,864 NYT_5 INFO: rank:0, build graphs, total number keywords:25 edge_number:527
    2022-06-23 15:52:27,405 NYT_5 INFO: rank:1, berfor balance, sample number each class:[1175  931  395 5441  360]
    2022-06-23 15:52:27,420 NYT_5 INFO: rank:1, after balance, sample number each class:[5441 5441 5441 5441 5441]
    2022-06-23 15:52:27,420 NYT_5 INFO: rank:1, build graphs, total number keywords:25
    edge_number:527
    2022-06-23 15:52:27,545 NYT_5 INFO: rank:0, berfor balance, sample number each class:[1175  931  395 5441  360]
    2022-06-23 15:52:27,570 NYT_5 INFO: rank:0, after balance, sample number each class:[5441 5441 5441 5441 5441]
    2022-06-23 15:52:27,577 NYT_5 INFO: rank:0, build graphs, total number keywords:25
    edge_number:527
    2022-06-23 15:52:44,330 NYT_5 INFO: rank:1, start SSL
    edge_number:527
    2022-06-23 15:52:44,330 NYT_5 INFO: rank:0, start SSL
    
    

    I'm using nvcr.io/nvidia/pytorch:22.02-py3 as the base docker image. I'm trying to test it on NYT5. Let me know if you need more info.

    opened by mo-arvan 4
  • ConnectionRefusedError: [Errno 111] Connection refused

    Using backend: pytorch
    Using backend: pytorch
    Using backend: pytorch
    Setting up a new session...
    Traceback (most recent call last):
      File "/home/ubuntu/anaconda3/envs/chen_py/lib/python3.7/site-packages/urllib3/connection.py", line 175, in _new_conn
        (self._dns_host, self.port), self.timeout, **extra_kw
      File "/home/ubuntu/anaconda3/envs/chen_py/lib/python3.7/site-packages/urllib3/util/connection.py", line 96, in create_connection
        raise err
      File "/home/ubuntu/anaconda3/envs/chen_py/lib/python3.7/site-packages/urllib3/util/connection.py", line 86, in create_connection
        sock.connect(sa)
    ConnectionRefusedError: [Errno 111] Connection refused

    During handling of the above exception, another exception occurred:

    Traceback (most recent call last): File "/home/ubuntu/anaconda3/envs/chen_py/lib/python3.7/site-packages/urllib3/connectionpool.py", line 706, in urlopen chunked=chunked, File "/home/ubuntu/anaconda3/envs/chen_py/lib/python3.7/site-packages/urllib3/connectionpool.py", line 394, in _make_request conn.request(method, url, **httplib_request_kw) File "/home/ubuntu/anaconda3/envs/chen_py/lib/python3.7/site-packages/urllib3/connection.py", line 239, in request super(HTTPConnection, self).request(method, url, body=body, headers=headers) File "/home/ubuntu/anaconda3/envs/chen_py/lib/python3.7/http/client.py", line 1281, in request self._send_request(method, url, body, headers, encode_chunked) File "/home/ubuntu/anaconda3/envs/chen_py/lib/python3.7/http/client.py", line 1327, in _send_request self.endheaders(body, encode_chunked=encode_chunked) File "/home/ubuntu/anaconda3/envs/chen_py/lib/python3.7/http/client.py", line 1276, in endheaders self._send_output(message_body, encode_chunked=encode_chunked) File "/home/ubuntu/anaconda3/envs/chen_py/lib/python3.7/http/client.py", line 1036, in _send_output self.send(msg) File "/home/ubuntu/anaconda3/envs/chen_py/lib/python3.7/http/client.py", line 976, in send self.connect() File "/home/ubuntu/anaconda3/envs/chen_py/lib/python3.7/site-packages/urllib3/connection.py", line 205, in connect conn = self._new_conn() File "/home/ubuntu/anaconda3/envs/chen_py/lib/python3.7/site-packages/urllib3/connection.py", line 187, in _new_conn self, "Failed to establish a new connection: %s" % e urllib3.exceptions.NewConnectionError: <urllib3.connection.HTTPConnection object at 0x7f1c32cc6bd0>: Failed to establish a new connection: [Errno 111] Connection refused

    I need help! Thank you very much.

    opened by CodingPerson 0
Owner
Hello_World
Computer Science at Fudan University.