---
language: en
datasets:
- librispeech_asr
tags:
- speech
- audio
- automatic-speech-recognition
- hf-asr-leaderboard
license: apache-2.0
model-index:
- name: wav2vec2-large-960h-lv60
  results:
  - task:
      name: Automatic Speech Recognition
      type: automatic-speech-recognition
    dataset:
      name: LibriSpeech (clean)
      type: librispeech_asr
      config: clean
      split: test
      args:
        language: en
    metrics:
    - name: Test WER
      type: wer
      value: 1.9
  - task:
      name: Automatic Speech Recognition
      type: automatic-speech-recognition
    dataset:
      name: LibriSpeech (other)
      type: librispeech_asr
      config: other
      split: test
      args:
        language: en
    metrics:
    - name: Test WER
      type: wer
      value: 3.9
---
# Wav2Vec2-Large-960h-Lv60 + Self-Training
[Facebook's Wav2Vec2](https://ai.facebook.com/blog/wav2vec-20-learning-the-structure-of-speech-from-raw-audio/)
The large model pretrained and fine-tuned on 960 hours of Libri-Light and LibriSpeech on 16 kHz sampled speech audio. The model was trained with the [self-training objective](https://arxiv.org/abs/2010.11430). When using the model, make sure that your speech input is also sampled at 16 kHz.
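If your recordings use a different sampling rate, resample them before passing them to the model. A minimal sketch using torchaudio (`audio.wav` is a hypothetical input file):

```python
import torchaudio

# load a waveform and resample it to the 16 kHz the model expects
waveform, sample_rate = torchaudio.load("audio.wav")  # hypothetical input file
if sample_rate != 16_000:
    waveform = torchaudio.functional.resample(waveform, orig_freq=sample_rate, new_freq=16_000)
```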
[Paper](https://arxiv.org/abs/2006.11477)
Authors: Alexei Baevski, Henry Zhou, Abdelrahman Mohamed, Michael Auli
**Abstract**
We show for the first time that learning powerful representations from speech audio alone followed by fine-tuning on transcribed speech can outperform the best semi-supervised methods while being conceptually simpler. wav2vec 2.0 masks the speech input in the latent space and solves a contrastive task defined over a quantization of the latent representations which are jointly learned. Experiments using all labeled data of Librispeech achieve 1.8/3.3 WER on the clean/other test sets. When lowering the amount of labeled data to one hour, wav2vec 2.0 outperforms the previous state of the art on the 100 hour subset while using 100 times less labeled data. Using just ten minutes of labeled data and pre-training on 53k hours of unlabeled data still achieves 4.8/8.2 WER. This demonstrates the feasibility of speech recognition with limited amounts of labeled data.
The original model can be found under https://github.com/pytorch/fairseq/tree/master/examples/wav2vec#wav2vec-20.
# Usage
To transcribe audio files, the model can be used as a standalone acoustic model as follows:
```python
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC
from datasets import load_dataset
import torch

# load model and processor
processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-large-960h-lv60-self")
model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-large-960h-lv60-self")

# load dummy dataset and read sound files
ds = load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean", split="validation")

# tokenize; passing the sampling rate avoids the processor's warning
input_values = processor(ds[0]["audio"]["array"], sampling_rate=16_000, return_tensors="pt", padding="longest").input_values

# retrieve logits
logits = model(input_values).logits

# take argmax and decode
predicted_ids = torch.argmax(logits, dim=-1)
transcription = processor.batch_decode(predicted_ids)
```
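Alternatively, the high-level `pipeline` API wraps the same processor and model and is often more convenient for transcribing files directly. A minimal sketch (`audio.wav` is a hypothetical local file; ffmpeg is required for decoding):

```python
from transformers import pipeline

# the pipeline bundles feature extraction, the forward pass, and CTC decoding
asr = pipeline("automatic-speech-recognition", model="facebook/wav2vec2-large-960h-lv60-self")
print(asr("audio.wav")["text"])  # hypothetical input file
```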
## Evaluation
This code snippet shows how to evaluate **facebook/wav2vec2-large-960h-lv60-self** on LibriSpeech's "clean" and "other" test data.
```python
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import torch
from jiwer import wer

librispeech_eval = load_dataset("librispeech_asr", "clean", split="test")

model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-large-960h-lv60-self").to("cuda")
processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-large-960h-lv60-self")

def map_to_pred(batch):
    inputs = processor(batch["audio"]["array"], sampling_rate=16_000, return_tensors="pt", padding="longest")
    input_values = inputs.input_values.to("cuda")
    attention_mask = inputs.attention_mask.to("cuda")

    with torch.no_grad():
        logits = model(input_values, attention_mask=attention_mask).logits

    predicted_ids = torch.argmax(logits, dim=-1)
    # batch_decode returns a list; keep the single transcription as a string
    batch["transcription"] = processor.batch_decode(predicted_ids)[0]
    return batch

result = librispeech_eval.map(map_to_pred, remove_columns=["audio"])

print("WER:", wer(result["text"], result["transcription"]))
```
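The same score can be computed with the Hugging Face Evaluate library instead of calling `jiwer` directly; a minimal sketch, reusing `result` from the snippet above:

```python
import evaluate

# the "wer" metric wraps jiwer under the hood
wer_metric = evaluate.load("wer")
print("WER:", wer_metric.compute(references=result["text"], predictions=result["transcription"]))
```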
*Result (WER)*:
| "clean" | "other" |
|---|---|
| 1.9 | 3.9 |