diff --git a/README.md b/README.md
index 117abe2..429ada6 100644
--- a/README.md
+++ b/README.md
@@ -30,7 +30,7 @@ The original model can be found under https://github.com/pytorch/fairseq/tree/ma
 To transcribe audio files the model can be used as a standalone acoustic model as follows:
 
 ```python
-from transformers import Wav2Vec2Tokenizer, Wav2Vec2Model
+from transformers import Wav2Vec2Tokenizer, Wav2Vec2ForMaskedLM
 from datasets import load_dataset
 import soundfile as sf
 import torch