diff --git a/README.md b/README.md
new file mode 100644
index 0000000..ad7d49e
--- /dev/null
+++ b/README.md
@@ -0,0 +1,63 @@
+---
+license: mit
+tags:
+- generated_from_trainer
+metrics:
+- accuracy
+- f1
+model-index:
+- name: xlm-roberta-base-language-detection
+  results: []
+---
+
+# xlm-roberta-base-language-detection
+
+This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unspecified dataset.
+It achieves the following results on the evaluation set:
+- Loss: 0.0103
+- Accuracy: 0.9977
+- F1: 0.9977
+
+## Model description
+
+More information needed
+
+## Intended uses & limitations
+
+More information needed
+
+## Training and evaluation data
+
+More information needed
+
+## Training procedure
+
+### Training hyperparameters
+
+The following hyperparameters were used during training:
+- learning_rate: 2e-05
+- train_batch_size: 64
+- eval_batch_size: 128
+- seed: 42
+- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
+- lr_scheduler_type: linear
+- num_epochs: 2
+- mixed_precision_training: Native AMP
+
+### Training results
+
+| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1     |
+|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
+| 0.2492        | 1.0   | 1094 | 0.0149          | 0.9969   | 0.9969 |
+| 0.0101        | 2.0   | 2188 | 0.0103          | 0.9977   | 0.9977 |
+
+### Framework versions
+
+- Transformers 4.12.5
+- Pytorch 1.10.0+cu111
+- Datasets 1.15.1
+- Tokenizers 0.10.3
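The `lr_scheduler_type: linear` hyperparameter above means the learning rate decays linearly from the base rate to zero over the run. A minimal sketch of that schedule in plain Python, using the base rate (2e-05) and total step count (2188) from the card; the function name `linear_lr` and the zero-warmup assumption are hypothetical, and the actual Transformers scheduler may differ in detail:

```python
def linear_lr(step, total_steps, base_lr=2e-05, warmup_steps=0):
    """Linear warmup (if any) to base_lr, then linear decay to 0 at total_steps."""
    if step < warmup_steps:
        return base_lr * step / max(1, warmup_steps)
    remaining = max(0.0, total_steps - step)
    return base_lr * remaining / max(1, total_steps - warmup_steps)

# With no warmup, the rate starts at the full base value and hits 0 at the end.
start = linear_lr(0, 2188)       # 2e-05
halfway = linear_lr(1094, 2188)  # 1e-05 (epoch 1 boundary in the results table)
end = linear_lr(2188, 2188)      # 0.0
```

Under this schedule, the first results row (step 1094, end of epoch 1) was logged when the learning rate had already decayed to half its starting value.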
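The card reports both accuracy and F1 with identical values (0.9977), which is common for a multi-class classifier evaluated on fairly balanced data. A minimal pure-Python sketch of accuracy and macro-averaged F1; the card does not state which F1 averaging was used, so the macro choice, the function name, and the toy language labels below are illustrative assumptions:

```python
def accuracy_and_macro_f1(y_true, y_pred):
    """Return (accuracy, macro F1) for parallel lists of class labels."""
    labels = sorted(set(y_true) | set(y_pred))
    acc = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
    f1_scores = []
    for c in labels:
        tp = sum(t == c and p == c for t, p in zip(y_true, y_pred))
        fp = sum(t != c and p == c for t, p in zip(y_true, y_pred))
        fn = sum(t == c and p != c for t, p in zip(y_true, y_pred))
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        denom = precision + recall
        f1_scores.append(2 * precision * recall / denom if denom else 0.0)
    return acc, sum(f1_scores) / len(f1_scores)

# Toy example with hypothetical language labels:
acc, f1 = accuracy_and_macro_f1(
    ["en", "fr", "en", "de"],
    ["en", "fr", "de", "de"],
)  # acc = 0.75, macro F1 ≈ 0.778
```

Macro averaging weights every class equally, so a language-detection model must do well on rare languages too, not just the dominant ones.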