SpeechMix

Explore different ways to mix speech models (wav2vec2, HuBERT) with NLP models (BART, T5, GPT).

Introduction

Every configuration below takes the same input, prepared as follows:

from datasets import load_dataset
import soundfile as sf


# define function to read in sound file
def map_to_array(batch):
    speech, _ = sf.read(batch["file"])
    batch["speech"] = speech
    return batch


# load dummy dataset and read soundfiles
ds = load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean", split="validation")
ds = ds.map(map_to_array)

transcript = ds['text'][0]
speech = ds["speech"][0]

Speech encoder + NLP decoder

# SpeechMixED and SpeechMixEED are provided by the speechmix package (pip install speechmix)
from speechmix import SpeechMixED, SpeechMixEED

model = SpeechMixED("facebook/wav2vec2-base-960h", "facebook/bart-large")

transcript_tensor = model.tokenizer(transcript, return_tensors="pt").input_ids
speech_tensor = model.processor(speech, return_tensors="pt").input_values

model(speech_tensor, transcript_tensor)

Speech encoder + NLP decoder, fine-tuning only the cross-attention, projection, and decoder embedding layers

model = SpeechMixED("facebook/wav2vec2-base-960h", "facebook/bart-large", ftl=True)

transcript_tensor = model.tokenizer(transcript, return_tensors="pt").input_ids
speech_tensor = model.processor(speech, return_tensors="pt").input_values

model(speech_tensor, transcript_tensor)

Speech encoder + NLP encoder-decoder

model = SpeechMixEED("facebook/wav2vec2-base-960h", "facebook/bart-large")

transcript_tensor = model.tokenizer(transcript, return_tensors="pt").input_ids
speech_tensor = model.processor(speech, return_tensors="pt").input_values

model(speech_tensor, transcript_tensor)

Speech encoder + NLP encoder-decoder, fine-tuning only layer norm and attention

model = SpeechMixEED("facebook/wav2vec2-base-960h", "facebook/bart-large", lna=True)

transcript_tensor = model.tokenizer(transcript, return_tensors="pt").input_ids
speech_tensor = model.processor(speech, return_tensors="pt").input_values

model(speech_tensor, transcript_tensor)

Speech encoder + NLP encoder-decoder, fine-tuning only the speech encoder

model = SpeechMixEED("facebook/wav2vec2-base-960h", "facebook/bart-large", fne=True)

transcript_tensor = model.tokenizer(transcript, return_tensors="pt").input_ids
speech_tensor = model.processor(speech, return_tensors="pt").input_values

model(speech_tensor, transcript_tensor)
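
The ftl, lna, and fne switches above only change which parts of the combined model stay trainable. Assuming the combined model is a standard torch.nn.Module (which the tokenizer/processor attributes above suggest), you can verify what a given configuration will actually update with a short, framework-level check; nothing here is specific to SpeechMix:

from collections import Counter

# Tally trainable vs. frozen parameter counts per top-level submodule.
trainable, frozen = Counter(), Counter()
for name, param in model.named_parameters():
    bucket = trainable if param.requires_grad else frozen
    bucket[name.split(".")[0]] += param.numel()

print("trainable:", dict(trainable))
print("frozen:", dict(frozen))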

Installation

pip install

pip install speechmix

Build from source

Clone this repository, change into its directory, and install in editable mode:

pip install -e .
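
For example, using the repository URL from the project's GitHub page:

git clone https://github.com/voidful/SpeechMix.git
cd SpeechMix
pip install -e .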