wav2vec2-lg-xlsr-en-speech-emotion-recognition

Enrique Hernández Calabrés 2021-07-14 09:21:38 +00:00
parent 42814ffaa5
commit be3f459aeb
10 changed files with 208 additions and 2 deletions

.gitignore (vendored, new file)

@@ -0,0 +1 @@
checkpoint-*/

README.md

@@ -1,3 +1,74 @@
- Speech Emotion Recognition model created by fine-tuning the Wav2Vec2 [model](https://huggingface.co/jonatasgrosman/wav2vec2-large-xlsr-53-english) pre-trained on xlsr for English.
- The dataset used to fine-tune this model is the RAVDESS dataset that can be found [here](https://zenodo.org/record/1188976#.YO6jYOgzaUk).
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model_index:
- name: wav2vec2-lg-xlsr-en-speech-emotion-recognition
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-lg-xlsr-en-speech-emotion-recognition
This model is a fine-tuned version of [jonatasgrosman/wav2vec2-large-xlsr-53-english](https://huggingface.co/jonatasgrosman/wav2vec2-large-xlsr-53-english) on the [RAVDESS](https://zenodo.org/record/1188976#.YO6jYOgzaUk) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5023
- Accuracy: 0.8223
## Model description
More information needed
## Intended uses & limitations
More information needed
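Pending a fuller write-up, here is a minimal inference sketch. It is an assumption rather than the author's script: the config in this commit declares a custom `Wav2Vec2ForEmotionRecognition` head, so the stock `Wav2Vec2ForSequenceClassification` class (available in newer `transformers` releases) only approximates it, and the hub id and audio path are placeholders.

```python
# Hedged sketch, not the author's code: the checkpoint was trained with a
# custom Wav2Vec2ForEmotionRecognition head, so the stock classification
# class may not map its head weights one-to-one.
import torch
import torchaudio
from transformers import Wav2Vec2FeatureExtractor, Wav2Vec2ForSequenceClassification

model_id = "ehcalabres/wav2vec2-lg-xlsr-en-speech-emotion-recognition"  # assumed hub id
extractor = Wav2Vec2FeatureExtractor.from_pretrained(model_id)
model = Wav2Vec2ForSequenceClassification.from_pretrained(model_id)

# Resample a clip to the 16 kHz rate the feature extractor expects.
waveform, sr = torchaudio.load("speech.wav")  # placeholder path
waveform = torchaudio.functional.resample(waveform, sr, 16000).mean(dim=0)

inputs = extractor(waveform.numpy(), sampling_rate=16000, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# id2label in config.json maps the eight class indices to RAVDESS emotions.
print(model.config.id2label[int(logits.argmax(dim=-1))])
```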
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (see the `TrainingArguments` sketch after the list):
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
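Note that `total_train_batch_size` is derived rather than set directly: 4 (`train_batch_size`) × 2 (`gradient_accumulation_steps`) = 8. Below is a hedged reconstruction of how the values above map onto `transformers.TrainingArguments`; the `output_dir` is an assumption, and the listed Adam betas and epsilon are the library defaults.

```python
# Illustrative mapping of the card's hyperparameters onto TrainingArguments;
# a reconstruction, not the author's original training script.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="wav2vec2-lg-xlsr-en-speech-emotion-recognition",  # assumed
    learning_rate=1e-4,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    gradient_accumulation_steps=2,  # effective train batch size: 4 * 2 = 8
    num_train_epochs=3,
    seed=42,
    lr_scheduler_type="linear",
    fp16=True,  # "Native AMP" mixed-precision training
)
```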
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.0752 | 0.21 | 30 | 2.0505 | 0.1359 |
| 2.0119 | 0.42 | 60 | 1.9340 | 0.2474 |
| 1.8073 | 0.63 | 90 | 1.5169 | 0.3902 |
| 1.5418 | 0.84 | 120 | 1.2373 | 0.5610 |
| 1.1432 | 1.05 | 150 | 1.1579 | 0.5610 |
| 0.9645 | 1.26 | 180 | 0.9610 | 0.6167 |
| 0.8811 | 1.47 | 210 | 0.8063 | 0.7178 |
| 0.8756 | 1.68 | 240 | 0.7379 | 0.7352 |
| 0.8208 | 1.89 | 270 | 0.6839 | 0.7596 |
| 0.7118 | 2.1 | 300 | 0.6664 | 0.7735 |
| 0.4261 | 2.31 | 330 | 0.6058 | 0.8014 |
| 0.4394 | 2.52 | 360 | 0.5754 | 0.8223 |
| 0.4581 | 2.72 | 390 | 0.4719 | 0.8467 |
| 0.3967 | 2.93 | 420 | 0.5023 | 0.8223 |
### Framework versions
- Transformers 4.8.2
- Pytorch 1.9.0+cu102
- Datasets 1.9.0
- Tokenizers 0.10.3

config.json (new file)

@@ -0,0 +1,107 @@
{
"_name_or_path": "jonatasgrosman/wav2vec2-large-xlsr-53-english",
"activation_dropout": 0.05,
"apply_spec_augment": true,
"architectures": [
"Wav2Vec2ForEmotionRecognition"
],
"attention_dropout": 0.1,
"bos_token_id": 1,
"codevector_dim": 256,
"contrastive_logits_temperature": 0.1,
"conv_bias": true,
"conv_dim": [
512,
512,
512,
512,
512,
512,
512
],
"conv_kernel": [
10,
3,
3,
3,
3,
2,
2
],
"conv_stride": [
5,
2,
2,
2,
2,
2,
2
],
"ctc_loss_reduction": "mean",
"ctc_zero_infinity": true,
"diversity_loss_weight": 0.1,
"do_stable_layer_norm": true,
"eos_token_id": 2,
"feat_extract_activation": "gelu",
"feat_extract_dropout": 0.0,
"feat_extract_norm": "layer",
"feat_proj_dropout": 0.05,
"feat_quantizer_dropout": 0.0,
"final_dropout": 0.0,
"finetuning_task": "wav2vec2_clf",
"gradient_checkpointing": true,
"hidden_act": "gelu",
"hidden_dropout": 0.05,
"hidden_size": 1024,
"id2label": {
"0": "angry",
"1": "calm",
"2": "disgust",
"3": "fearful",
"4": "happy",
"5": "neutral",
"6": "sad",
"7": "surprised"
},
"initializer_range": 0.02,
"intermediate_size": 4096,
"label2id": {
"angry": 0,
"calm": 1,
"disgust": 2,
"fearful": 3,
"happy": 4,
"neutral": 5,
"sad": 6,
"surprised": 7
},
"layer_norm_eps": 1e-05,
"layerdrop": 0.05,
"mask_channel_length": 10,
"mask_channel_min_space": 1,
"mask_channel_other": 0.0,
"mask_channel_prob": 0.0,
"mask_channel_selection": "static",
"mask_feature_length": 10,
"mask_feature_prob": 0.0,
"mask_time_length": 10,
"mask_time_min_space": 1,
"mask_time_other": 0.0,
"mask_time_prob": 0.05,
"mask_time_selection": "static",
"model_type": "wav2vec2",
"num_attention_heads": 16,
"num_codevector_groups": 2,
"num_codevectors_per_group": 320,
"num_conv_pos_embedding_groups": 16,
"num_conv_pos_embeddings": 128,
"num_feat_extract_layers": 7,
"num_hidden_layers": 24,
"num_negatives": 100,
"pad_token_id": 0,
"pooling_mode": "mean",
"problem_type": "single_label_classification",
"proj_codevector_dim": 256,
"transformers_version": "4.8.2",
"vocab_size": 33
}
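One detail worth pulling out of this config: the seven `conv_stride` values determine how aggressively the convolutional feature extractor downsamples raw audio. A quick sanity check (standard wav2vec 2.0 arithmetic, not part of the commit):

```python
# Each encoder frame covers the product of the conv strides in samples:
# 5 * 2**6 = 320 samples, i.e. 20 ms at the 16 kHz input rate.
import math

conv_stride = [5, 2, 2, 2, 2, 2, 2]
downsample = math.prod(conv_stride)
print(downsample)                      # 320
print(16000 / downsample, "frames/s")  # 50.0
```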

preprocessor_config.json (new file)

@@ -0,0 +1,9 @@
{
"do_normalize": true,
"feature_extractor_type": "Wav2Vec2FeatureExtractor",
"feature_size": 1,
"padding_side": "right",
"padding_value": 0.0,
"return_attention_mask": true,
"sampling_rate": 16000
}
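In practice this config means the feature extractor normalizes each waveform, right-pads batches with `padding_value`, and returns an attention mask separating real samples from padding. A minimal sketch, assuming the file sits in the current directory and using dummy clips:

```python
# Hedged sketch: batch two dummy 16 kHz clips of different lengths through
# a feature extractor built from this preprocessor_config.json.
from transformers import Wav2Vec2FeatureExtractor

extractor = Wav2Vec2FeatureExtractor.from_pretrained(".")  # assumed local dir

batch = extractor(
    [[0.1] * 16000, [0.1] * 8000],  # placeholder 1.0 s and 0.5 s waveforms
    sampling_rate=16000,
    padding=True,  # right-pads the shorter clip with padding_value (0.0)
    return_tensors="pt",
)
print(batch.input_values.shape)         # torch.Size([2, 16000])
print(batch.attention_mask.sum(dim=1))  # tensor([16000, 8000])
```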

pytorch_model.bin (new file, stored with Git LFS)

Binary file not shown.

training_args.bin (new file, stored with Git LFS)

Binary file not shown.