---
language: en
license: apache-2.0
---
# DistilBERT base model (cased)
This model is a distilled version of the BERT base model. It was introduced in [this paper](https://arxiv.org/abs/1910.01108). The code for the distillation process can be found in the [Transformers repository](https://github.com/huggingface/transformers). This model is cased: it does make a difference between english and English.
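A quick way to see the effect of casing is to tokenize both forms. This is a minimal sketch using the Transformers `AutoTokenizer`; the exact token splits depend on the vocabulary:

```python
from transformers import AutoTokenizer

# Sketch: the cased vocabulary treats "english" and "English" differently.
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-cased")
print(tokenizer.tokenize("english"))  # lowercase form
print(tokenizer.tokenize("English"))  # capitalized form tokenizes differently
```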
All the details on the pre-training, intended uses, limitations and potential biases are the same as for [DistilBERT-base-uncased](https://huggingface.co/distilbert-base-uncased); we highly encourage you to check that model card if you want to know more.
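As with the uncased model, the checkpoint can be used directly for masked language modeling. Here is a minimal sketch with the `fill-mask` pipeline (the example sentence is ours, not from the original card):

```python
from transformers import pipeline

# Load the checkpoint in a masked-language-modeling pipeline.
unmasker = pipeline("fill-mask", model="distilbert-base-cased")

# Returns the top candidate tokens for the masked position.
print(unmasker("Hello, I'm a [MASK] model."))
```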
## Evaluation results
When fine-tuned on downstream tasks, this model achieves the following results:
GLUE test results:
| Task  | MNLI | QQP  | QNLI | SST-2 | CoLA | STS-B | MRPC | RTE  |
|:-----:|:----:|:----:|:----:|:-----:|:----:|:-----:|:----:|:----:|
| Score | 81.5 | 87.8 | 88.2 | 90.4  | 47.2 | 85.5  | 85.6 | 60.6 |
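For reference, fine-tuning on such tasks starts from the pretrained checkpoint with a task-specific classification head. A minimal sketch follows; `num_labels=2` and the binary-task choice (e.g. SST-2) are assumptions for illustration, not part of the reported setup:

```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Sketch: load the checkpoint with a freshly initialized classification head
# for a downstream GLUE-style task; num_labels=2 assumes a binary task.
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-cased")
model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-cased", num_labels=2
)
```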
## BibTeX entry and citation info
```bibtex
@article{Sanh2019DistilBERTAD,
  title={DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter},
  author={Victor Sanh and Lysandre Debut and Julien Chaumond and Thomas Wolf},
  journal={ArXiv},
  year={2019},
  volume={abs/1910.01108}
}
```