---
language: en
datasets:
- LIUM/tedlium
tags:
- speech
- audio
- automatic-speech-recognition
---

Fine-tuned from [facebook/wav2vec2-large-960h-lv60-self](https://huggingface.co/facebook/wav2vec2-large-960h-lv60-self) on the [LIUM/tedlium](https://huggingface.co/datasets/LIUM/tedlium) dataset.

# Installation

1. Install PyTorch: https://pytorch.org/
2. Install Transformers: https://huggingface.co/docs/transformers/installation

For example, installation with conda:

```bash
conda create -n wav2vec2 python=3.8
conda activate wav2vec2
conda install pytorch cudatoolkit=11.3 -c pytorch
conda install -c conda-forge transformers
```
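
After installation, a quick sanity check (a minimal sketch using only standard PyTorch and Transformers attributes) confirms that both packages import and reports whether a GPU is visible:

```python
# Sanity check: verify that PyTorch and Transformers are importable.
import torch
import transformers

print('torch:', torch.__version__, '| CUDA available:', torch.cuda.is_available())
print('transformers:', transformers.__version__)
```
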
# Usage

```python
# Load the model and processor
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC
import numpy as np
import torch

model = Wav2Vec2ForCTC.from_pretrained('yongjian/wav2vec2-large-a')  # Note: PyTorch model
processor = Wav2Vec2Processor.from_pretrained('yongjian/wav2vec2-large-a')

# Load input: a 1-second dummy waveform; change it to your own 16 kHz sample
np_wav = np.random.normal(size=16000).clip(-1, 1)

# Inference
sample_rate = processor.feature_extractor.sampling_rate
with torch.no_grad():
    model_inputs = processor(np_wav, sampling_rate=sample_rate, return_tensors="pt", padding=True)
    logits = model(model_inputs.input_values, attention_mask=model_inputs.attention_mask).logits  # use .cuda() for GPU acceleration
    pred_ids = torch.argmax(logits, dim=-1).cpu()
    pred_text = processor.batch_decode(pred_ids)
print('Transcription:', pred_text)
```
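
To transcribe a real recording rather than the random waveform above, the audio must be a mono waveform at the processor's 16 kHz sampling rate. Below is a minimal sketch that assumes the optional librosa package (`pip install librosa`) and a placeholder file path `speech.wav`, and reuses the `model`, `processor`, and `sample_rate` from the snippet above:

```python
import librosa
import torch

# Load an audio file and resample it to the 16 kHz mono format the processor expects.
# 'speech.wav' is a placeholder path; point it at your own recording.
speech, _ = librosa.load('speech.wav', sr=sample_rate, mono=True)

with torch.no_grad():
    inputs = processor(speech, sampling_rate=sample_rate, return_tensors="pt", padding=True)
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
    pred_ids = torch.argmax(logits, dim=-1)
    pred_text = processor.batch_decode(pred_ids)

print('Transcription:', pred_text)
```
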
# Code

GitHub Repo: https://github.com/CassiniHuy/wav2vec2_finetune