---
language: en
license: apache-2.0
---
# DistilRoBERTa base model
This model is a distilled version of the RoBERTa-base model. It follows the same training procedure as DistilBERT. The code for the distillation process can be found here. This model is case-sensitive: it makes a difference between english and English.
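You can use the model directly for masked language modeling. Here is a minimal sketch with the 🤗 Transformers `fill-mask` pipeline; the `distilroberta-base` checkpoint name is assumed from the repository this card belongs to:

```python
from transformers import pipeline

# Load a fill-mask pipeline backed by this checkpoint
unmasker = pipeline("fill-mask", model="distilroberta-base")

# RoBERTa-style tokenizers use <mask> as the mask token
print(unmasker("Hello, I'm a <mask> model."))
```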
The model has 6 layers, a hidden dimension of 768 and 12 attention heads, for a total of 82M parameters (compared to 125M parameters for RoBERTa-base). On average, DistilRoBERTa is twice as fast as RoBERTa-base.
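These figures can be checked directly against the model configuration. A small sketch, assuming the `distilroberta-base` checkpoint and PyTorch weights:

```python
from transformers import AutoModel

model = AutoModel.from_pretrained("distilroberta-base")

print(model.config.num_hidden_layers)    # layers: 6
print(model.config.hidden_size)          # hidden dimension: 768
print(model.config.num_attention_heads)  # attention heads: 12

# Total parameter count (~82M)
print(sum(p.numel() for p in model.parameters()))
```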
We encourage you to check the RoBERTa-base model card to learn more about usage, limitations and potential biases.
## Training data
DistilRoBERTa was pre-trained on OpenWebTextCorpus, a reproduction of OpenAI's WebText dataset (roughly 4 times less training data than the teacher model RoBERTa was trained on).
## Evaluation results
When fine-tuned on downstream tasks, this model achieves the following results:
GLUE test results:
| Task | MNLI | QQP  | QNLI | SST-2 | CoLA | STS-B | MRPC | RTE  |
|------|------|------|------|-------|------|-------|------|------|
|      | 84.0 | 89.4 | 90.8 | 92.5  | 59.3 | 88.3  | 86.6 | 67.9 |
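As a rough illustration of how such fine-tuning starts, the sketch below loads the checkpoint with a fresh sequence-classification head; the binary label count and example sentence are placeholders, not part of the original evaluation setup:

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("distilroberta-base")
model = AutoModelForSequenceClassification.from_pretrained(
    "distilroberta-base",
    num_labels=2,  # e.g. a binary task such as SST-2
)

# The classification head is randomly initialized and must be fine-tuned
inputs = tokenizer("This movie was great!", return_tensors="pt")
logits = model(**inputs).logits
print(logits.shape)  # torch.Size([1, 2])
```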
## BibTeX entry and citation info
```bibtex
@article{Sanh2019DistilBERTAD,
  title={DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter},
  author={Victor Sanh and Lysandre Debut and Julien Chaumond and Thomas Wolf},
  journal={ArXiv},
  year={2019},
  volume={abs/1910.01108}
}
```
