papluca/xlm-roberta-base-language-detection is a repository forked from Hugging Face. License: MIT
Latest commit: 714fcf8b61 "Add some details to README file" by Luca Papariello, 2021-11-23 20:13:39 +00:00
.gitattributes           initial commit                    2021-11-23 09:28:18 +00:00
README.md                Add some details to README file   2021-11-23 20:13:39 +00:00
config.json              add model                         2021-11-23 09:28:49 +00:00
pytorch_model.bin        add model                         2021-11-23 09:28:49 +00:00
sentencepiece.bpe.model  add tokenizer                     2021-11-23 09:45:34 +00:00
special_tokens_map.json  add tokenizer                     2021-11-23 09:45:34 +00:00
tokenizer.json           add tokenizer                     2021-11-23 09:45:34 +00:00
tokenizer_config.json    add tokenizer                     2021-11-23 09:45:34 +00:00

README.md

license: mit
tags: generated_from_trainer
metrics: accuracy, f1
model-index name: xlm-roberta-base-language-detection

xlm-roberta-base-language-detection

This model is a fine-tuned version of xlm-roberta-base, trained to identify the language of a given text.

Intended uses & limitations

You can directly use this model as a language detector, i.e. for sequence classification tasks. Currently, it supports the following 20 languages:

arabic (ar), bulgarian (bg), german (de), modern greek (el), english (en), spanish (es), french (fr), hindi (hi), italian (it), japanese (ja), dutch (nl), polish (pl), portuguese (pt), russian (ru), swahili (sw), thai (th), turkish (tr), urdu (ur), vietnamese (vi), and chinese (zh)
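For inference, the checkpoint can be loaded with the Transformers text-classification pipeline. A minimal sketch follows; the example sentence and the printed output are illustrative, not taken from the card:

```python
from transformers import pipeline

# Load the fine-tuned checkpoint as a text-classification
# (sequence classification) pipeline.
detector = pipeline(
    "text-classification",
    model="papluca/xlm-roberta-base-language-detection",
)

# The pipeline returns the predicted language code and a confidence score.
print(detector("Brevity is the soul of wit."))
# e.g. [{'label': 'en', 'score': 0.99}]  # the exact score will vary
```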

Training and evaluation data

It achieves the following results on the evaluation set:

  • Loss: 0.0103
  • Accuracy: 0.9977
  • F1: 0.9977
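The card does not show how these metrics were computed. A hedged sketch of a typical compute_metrics callback for the Trainer, assuming scikit-learn metrics (the macro averaging is an assumption; the card reports a single F1 value):

```python
import numpy as np
from sklearn.metrics import accuracy_score, f1_score

def compute_metrics(eval_pred):
    """Hypothetical metric callback; not the author's verified code."""
    logits, labels = eval_pred
    # Take the highest-scoring class per example as the prediction.
    preds = np.argmax(logits, axis=-1)
    return {
        "accuracy": accuracy_score(labels, preds),
        # The averaging mode is an assumption, not stated in the card.
        "f1": f1_score(labels, preds, average="macro"),
    }
```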

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 2e-05
  • train_batch_size: 64
  • eval_batch_size: 128
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 2
  • mixed_precision_training: Native AMP
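As a hedged sketch, these settings map onto transformers.TrainingArguments roughly as follows; output_dir is a placeholder, and the Adam betas and epsilon listed above are the library defaults:

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="xlm-roberta-base-language-detection",  # placeholder path
    learning_rate=2e-5,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=128,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=2,
    fp16=True,  # "Native AMP" mixed-precision training
)
```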

Training results

Training Loss  Epoch  Step  Validation Loss  Accuracy  F1
0.2492         1.0    1094  0.0149           0.9969    0.9969
0.0101         2.0    2188  0.0103           0.9977    0.9977

Framework versions

  • Transformers 4.12.5
  • Pytorch 1.10.0+cu111
  • Datasets 1.15.1
  • Tokenizers 0.10.3