The Easy-to-use Dialogue Response Selection Toolkit for Researchers

SimpleReDial-v1
Overview

Easy-to-use toolkit for retrieval-based Chatbot

Our released data can be found at this link. Follow the steps below to use our code.

How to Use

  1. Init the repo

    Before using the repo, please run the following commands to initialize it:

    # create the necessary folders
    python init.py
    
    # prepare the environment
    # if a package cannot be installed, search for it and install it another way
    pip install -r requirements.txt
  2. Train the model

    ./scripts/train.sh <dataset_name> <model_name> <cuda_ids>
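
    For intuition, dual-bert style retrieval models are usually trained with in-batch negatives: every context is scored against every response in the batch, and the diagonal pairs are the positives. A minimal sketch of that objective (illustrative only; the checkpoint name and [CLS] pooling are assumptions, not necessarily the toolkit's exact setup):

    # in-batch negative objective for a dual encoder (illustrative sketch)
    import torch
    import torch.nn.functional as F
    from transformers import AutoModel, AutoTokenizer

    tok = AutoTokenizer.from_pretrained("bert-base-uncased")   # assumption: any BERT checkpoint
    ctx_encoder = AutoModel.from_pretrained("bert-base-uncased")
    rsp_encoder = AutoModel.from_pretrained("bert-base-uncased")

    contexts = ["how are you ?", "what time is it ?"]
    responses = ["i am fine , thanks", "it is nine o'clock"]

    ctx_inp = tok(contexts, padding=True, return_tensors="pt")
    rsp_inp = tok(responses, padding=True, return_tensors="pt")

    # [CLS] pooling for both towers
    ctx_emb = ctx_encoder(**ctx_inp).last_hidden_state[:, 0]
    rsp_emb = rsp_encoder(**rsp_inp).last_hidden_state[:, 0]

    # similarity matrix: every context against every response in the batch
    logits = ctx_emb @ rsp_emb.t()
    labels = torch.arange(len(contexts))   # diagonal pairs are the positives
    loss = F.cross_entropy(logits, labels)
    loss.backward()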
  3. Test the model [rerank]

    ./scripts/test_rerank.sh <dataset_name> <model_name> <cuda_id>
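
    In the rerank stage each candidate response is scored jointly with its context, cross-encoder style. A minimal sketch of that scoring step (illustrative only; the checkpoint and the single-logit classification head are assumptions, not the toolkit's trained model):

    # cross-encoder rerank sketch (illustrative only)
    import torch
    from transformers import AutoTokenizer, AutoModelForSequenceClassification

    tok = AutoTokenizer.from_pretrained("bert-base-uncased")   # assumption
    model = AutoModelForSequenceClassification.from_pretrained(
        "bert-base-uncased", num_labels=1)                     # one relevance score per pair

    context = "how are you ? [SEP] i am great , and you ?"
    candidates = ["pretty good , thanks", "the weather is nice today"]

    # score every (context, candidate) pair and sort by score
    inp = tok([context] * len(candidates), candidates,
              padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        scores = model(**inp).logits.squeeze(-1)
    ranked = sorted(zip(candidates, scores.tolist()), key=lambda x: -x[1])
    print(ranked)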
  4. Test the model [recall]

    # different recall_modes are available: q-q, q-r
    ./scripts/test_recall.sh <dataset_name> <model_name> <cuda_id>
  5. Run inference on the responses and save them into the FAISS index

    Sometimes inference will miss some data samples; please use 1 GPU (faiss-gpu search with a single GPU is fast).

    It should be noted that: 1. For the writer dataset, use the extract_inference.py script to generate inference.txt. 2. For the other datasets (douban, ecommerce, ubuntu), just copy train.txt to inference.txt. The dataloader will automatically read test.txt to supply the corpus.

    # work_mode=response: inference the responses and save them into faiss (for q-r matching) [dual-bert/dual-bert-fusion]
    # work_mode=context: inference the contexts to do q-q matching
    # work_mode=gray: inference the contexts, read the faiss index (work_mode=response must have been run already), and search the top-k hard negative samples; remember to set BERTDualInferenceContextDataloader in config/base.yaml
    ./scripts/inference.sh <dataset_name> <model_name> <cuda_ids>
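
    Conceptually, the response work mode encodes every response with the dual encoder and writes the vectors into a FAISS index so that the later recall and gray steps can search them. A minimal sketch of that indexing step (the index type, file name, and embedding dimension are assumptions):

    # build and save a FAISS index over response embeddings (illustrative sketch)
    import faiss
    import numpy as np

    dim = 768                                                   # assumption: BERT-base hidden size
    embeddings = np.random.rand(10000, dim).astype("float32")   # stand-in for dual-bert response vectors
    faiss.normalize_L2(embeddings)                              # cosine similarity via inner product

    index = faiss.IndexFlatIP(dim)
    index.add(embeddings)
    faiss.write_index(index, "response.faiss")                  # assumption: any output path works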

    If you want to generate the gray (hard negative) dataset:

    # 1. set the mode to **response** to generate the response faiss index; corresponding dataset name: BERTDualInferenceDataset
    ./scripts/inference.sh <dataset_name> response <cuda_ids>
    
    # 2. set the mode to **gray** to inference the contexts in train.txt and search the top-k candidates as the gray (hard negative) samples; corresponding dataset name: BERTDualInferenceContextDataset
    ./scripts/inference.sh <dataset_name> gray <cuda_ids>
    
    # 3. set the mode to **gray-one2many** if you want to generate extra positive samples for each context in the train set; the requirements of this mode are the same as for the **gray** work mode
    ./scripts/inference.sh <dataset_name> gray-one2many <cuda_ids>
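
    The gray mode then embeds each training context, searches the saved response index, and keeps the top-k hits (minus the ground-truth response) as hard negatives. A sketch of that search, continuing the indexing example above (the value of k and the filtering rule are assumptions):

    # retrieve top-k hard negatives from the saved response index (illustrative sketch)
    import faiss
    import numpy as np

    index = faiss.read_index("response.faiss")             # index written in the response work mode
    ctx_emb = np.random.rand(32, 768).astype("float32")    # stand-in for context embeddings
    faiss.normalize_L2(ctx_emb)

    topk = 10                                               # assumption
    scores, ids = index.search(ctx_emb, topk + 1)           # +1 so the ground truth can be dropped
    # drop the ground-truth response when it is retrieved; the rest are the gray samples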

    If you want to generate the pseudo positive pairs, run the following commands:

    # make sure the dual-bert inference dataset name is BERTDualInferenceDataset
    ./scripts/inference.sh <dataset_name> unparallel <cuda_ids>
  6. Deploy the rerank and recall models

    # load the model on cuda:0 (can be changed in the deploy.sh script)
    ./scripts/deploy.sh <cuda_id>

    At the same time, you can test the deployed model by using:

    # test_mode: recall, rerank, pipeline
    ./scripts/test_api.sh <test_mode> <dataset>
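
    Under the hood, the API test just posts requests to the deployed service, so you can also query it from Python. The URL, port, and payload fields below are assumptions; adjust them to whatever deploy.sh actually exposes:

    # query the deployed rerank service (illustrative sketch; endpoint and payload are assumptions)
    import requests

    payload = {
        "context": "how are you ?",
        "candidates": ["i am fine , thanks", "no idea what you mean"],
    }
    resp = requests.post("http://127.0.0.1:22335/rerank", json=payload, timeout=10)
    print(resp.json())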
  7. Test the recall performance of Elasticsearch

    Before testing the ES recall, make sure the ES index has been built:

    # recall_mode: q-q/q-r
    ./scripts/build_es_index.sh <dataset_name> <recall_mode>
    # recall_mode: q-q/q-r
    ./scripts/test_es_recall.sh <dataset_name> <recall_mode> 0
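
    For reference, q-q recall with Elasticsearch indexes historical contexts (each paired with its response) and retrieves them with a full-text match on the new context. A minimal sketch with the official Python client, 8.x style (the index and field names are assumptions):

    # q-q recall against Elasticsearch (illustrative sketch; index/field names are assumptions)
    from elasticsearch import Elasticsearch

    es = Elasticsearch("http://localhost:9200")
    es.index(index="douban_q_q", document={"context": "how are you ?", "response": "i am fine , thanks"})
    es.indices.refresh(index="douban_q_q")

    hits = es.search(index="douban_q_q", query={"match": {"context": "how are you"}}, size=10)
    for hit in hits["hits"]["hits"]:
        print(hit["_score"], hit["_source"]["response"])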
  8. Use SimCSE to generate the gray responses

    # train the simcse model
    ./scripts/train.sh <dataset_name> simcse <cuda_ids>
    # generate the faiss index, dataset name: BERTSimCSEInferenceDataset
    ./scripts/inference_response.sh <dataset_name> simcse <cuda_ids>
    # generate the context index
    ./scripts/inference_simcse_response.sh <dataset_name> simcse <cuda_ids>
    # generate the test set for the unlikelyhood-gen dataset
    ./scripts/inference_simcse_unlikelyhood_response.sh <dataset_name> simcse <cuda_ids>
    # generate the gray responses
    ./scripts/inference_gray_simcse.sh <dataset_name> simcse <cuda_ids>
    # generate the test set for the unlikelyhood-gen dataset
    ./scripts/inference_gray_simcse_unlikelyhood.sh <dataset_name> simcse <cuda_ids>
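
    SimCSE builds positive pairs by encoding the same sentence twice with different dropout masks and training with in-batch negatives; the resulting sentence embeddings are what the steps above use to retrieve gray responses. A minimal sketch of the objective (illustrative only, not the toolkit's implementation):

    # SimCSE-style objective: two dropout views of the same batch (illustrative sketch)
    import torch
    import torch.nn.functional as F
    from transformers import AutoModel, AutoTokenizer

    tok = AutoTokenizer.from_pretrained("bert-base-uncased")    # assumption
    encoder = AutoModel.from_pretrained("bert-base-uncased")
    encoder.train()                                             # keep dropout active

    sentences = ["how are you ?", "it is raining outside"]
    inp = tok(sentences, padding=True, return_tensors="pt")

    z1 = encoder(**inp).last_hidden_state[:, 0]    # first dropout view
    z2 = encoder(**inp).last_hidden_state[:, 0]    # second dropout view of the same inputs

    sim = F.cosine_similarity(z1.unsqueeze(1), z2.unsqueeze(0), dim=-1) / 0.05   # temperature
    labels = torch.arange(len(sentences))          # matching views are the positives
    loss = F.cross_entropy(sim, labels)
    loss.backward()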