---
language: en
datasets:
- librispeech_asr
tags:
- audio
- automatic-speech-recognition
- hf-asr-leaderboard
license: apache-2.0
widget:
- example_title: Librispeech sample 1
  src: https://cdn-media.huggingface.co/speech_samples/sample1.flac
- example_title: Librispeech sample 2
  src: https://cdn-media.huggingface.co/speech_samples/sample2.flac
model-index:
- name: wav2vec2-base-960h
  results:
  - task:
      name: Automatic Speech Recognition
      type: automatic-speech-recognition
    dataset:
      name: LibriSpeech (clean)
      type: librispeech_asr
      config: clean
      split: test
      args:
        language: en
    metrics:
    - name: Test WER
      type: wer
      value: 3.4
  - task:
      name: Automatic Speech Recognition
      type: automatic-speech-recognition
    dataset:
      name: LibriSpeech (other)
      type: librispeech_asr
      config: other
      split: test
      args:
        language: en
    metrics:
    - name: Test WER
      type: wer
      value: 8.6
---

# Wav2Vec2-Base-960h

Facebook's Wav2Vec2

The base model, pretrained and fine-tuned on 960 hours of LibriSpeech 16kHz sampled speech audio. When using the model, make sure that your speech input is also sampled at 16kHz.
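
If your audio is stored at a different sampling rate, the `datasets` library can resample it on the fly via its `Audio` feature. A minimal sketch (the file name `my_audio.wav` is a placeholder):

```python
from datasets import Dataset, Audio

# hypothetical local file; any original sampling rate is fine
ds = Dataset.from_dict({"audio": ["my_audio.wav"]})

# casting the column makes every example decode as 16kHz audio
ds = ds.cast_column("audio", Audio(sampling_rate=16_000))

sample = ds[0]["audio"]  # {"path": ..., "array": ..., "sampling_rate": 16000}
assert sample["sampling_rate"] == 16_000
```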

[Paper](https://arxiv.org/abs/2006.11477)

Authors: Alexei Baevski, Henry Zhou, Abdelrahman Mohamed, Michael Auli

**Abstract**

We show for the first time that learning powerful representations from speech audio alone followed by fine-tuning on transcribed speech can outperform the best semi-supervised methods while being conceptually simpler. wav2vec 2.0 masks the speech input in the latent space and solves a contrastive task defined over a quantization of the latent representations which are jointly learned. Experiments using all labeled data of Librispeech achieve 1.8/3.3 WER on the clean/other test sets. When lowering the amount of labeled data to one hour, wav2vec 2.0 outperforms the previous state of the art on the 100 hour subset while using 100 times less labeled data. Using just ten minutes of labeled data and pre-training on 53k hours of unlabeled data still achieves 4.8/8.2 WER. This demonstrates the feasibility of speech recognition with limited amounts of labeled data.

The original model can be found under https://github.com/pytorch/fairseq/tree/master/examples/wav2vec#wav2vec-20.

## Usage

To transcribe audio files the model can be used as a standalone acoustic model as follows:

```python
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC
from datasets import load_dataset
import torch

# load model and processor
processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-base-960h")
model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-base-960h")

# load dummy dataset and read audio files
ds = load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean", split="validation")

# preprocess the raw 16kHz waveform into model inputs
input_values = processor(ds[0]["audio"]["array"], sampling_rate=16_000, return_tensors="pt", padding="longest").input_values  # Batch size 1

# retrieve logits
logits = model(input_values).logits

# take argmax and decode
predicted_ids = torch.argmax(logits, dim=-1)
transcription = processor.batch_decode(predicted_ids)
```
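
To transcribe a local file instead of the dummy dataset, you can load the waveform yourself, for example with `torchaudio`. A sketch under the same model setup as above (`path/to/audio.wav` is a placeholder; the resampling step only applies when the file is not already 16kHz):

```python
import torch
import torchaudio
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-base-960h")
model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-base-960h")

# torchaudio returns a (channels, samples) tensor and the file's sampling rate
waveform, sample_rate = torchaudio.load("path/to/audio.wav")

# resample to the 16kHz the model expects, if necessary
if sample_rate != 16_000:
    waveform = torchaudio.functional.resample(waveform, sample_rate, 16_000)

# the processor expects a 1-D array per example, so take the first channel
input_values = processor(waveform[0].numpy(), sampling_rate=16_000, return_tensors="pt").input_values

with torch.no_grad():
    logits = model(input_values).logits

transcription = processor.batch_decode(torch.argmax(logits, dim=-1))[0]
print(transcription)
```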

## Evaluation

This code snippet shows how to evaluate **facebook/wav2vec2-base-960h** on LibriSpeech's *"clean"* and *"other"* test data.

```python
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import torch
from jiwer import wer

# use "other" instead of "clean" to evaluate on the harder test split
librispeech_eval = load_dataset("librispeech_asr", "clean", split="test")

model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-base-960h").to("cuda")
processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-base-960h")

def map_to_pred(batch):
    # batch["audio"] is a list of decoded audio dicts because the map call below is batched
    audio_arrays = [audio["array"] for audio in batch["audio"]]
    input_values = processor(audio_arrays, sampling_rate=16_000, return_tensors="pt", padding="longest").input_values
    with torch.no_grad():
        logits = model(input_values.to("cuda")).logits

    predicted_ids = torch.argmax(logits, dim=-1)
    transcription = processor.batch_decode(predicted_ids)
    batch["transcription"] = transcription
    return batch

result = librispeech_eval.map(map_to_pred, batched=True, batch_size=1, remove_columns=["audio"])

print("WER:", wer(result["text"], result["transcription"]))
```

*Result (WER)*:

| "clean" | "other" |
|---------|---------|
| 3.4     | 8.6     |
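
For reference, `jiwer.wer` returns the word error rate as a fraction (word edits per reference word), while the table above reports percentages. A tiny sanity check (the sentences are made-up examples):

```python
from jiwer import wer

# WER = (substitutions + deletions + insertions) / reference words;
# here: 1 substitution ("fox" -> "dog") over 4 reference words -> 0.25
error_rate = wer("the quick brown fox", "the quick brown dog")
print(error_rate)        # 0.25
print(100 * error_rate)  # 25.0 -- as a percentage, like the 3.4 / 8.6 above
```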