huggingface/distilbert-base-cased · License: apache-2.0
| File | Last commit | Date |
|------|-------------|------|
| .gitattributes | initial commit | 2020-02-07 19:16:00 +00:00 |
| README.md | Migrate model card from transformers-repo | 2020-12-11 22:23:53 +01:00 |
| config.json | Update config.json | 2020-04-24 15:57:51 +00:00 |
| pytorch_model.bin | Update pytorch_model.bin | 2020-02-07 19:16:01 +00:00 |
| tf_model.h5 | Update tf_model.h5 | 2020-02-07 19:16:01 +00:00 |
| tokenizer.json | Update tokenizer.json | 2020-10-12 12:56:41 +00:00 |
| tokenizer_config.json | Add tokenizer configuration | 2020-11-25 15:15:22 +01:00 |
| vocab.txt | Add vocab | 2020-11-25 15:36:48 +01:00 |

README.md

---
language: en
license: apache-2.0
datasets:
- bookcorpus
- wikipedia
---

DistilBERT base model (cased)

This model is a distilled version of the BERT base model. It was introduced in the paper "DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter" (https://arxiv.org/abs/1910.01108; see the citation below). The code for the distillation process is available in the Hugging Face Transformers repository. This model is cased: it does make a difference between english and English.
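
As a minimal usage sketch (not part of the original card, and assuming the transformers library with a PyTorch or TensorFlow backend is installed), the checkpoint can be loaded for masked-language modeling through the fill-mask pipeline:

```python
from transformers import pipeline

# Load this checkpoint for masked-language modeling.
# "distilbert-base-cased" is the model id of this repository on the Hugging Face Hub.
unmasker = pipeline("fill-mask", model="distilbert-base-cased")

# The pipeline predicts the token hidden behind [MASK]; capitalization in the
# input is preserved because the tokenizer is cased.
for prediction in unmasker("Hello, I am a [MASK] model."):
    print(f"{prediction['token_str']!r}  score={prediction['score']:.3f}")
```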

All the training details on the pre-training, the uses, limitations and potential biases are the same as for DistilBERT-base-uncased. We highly encourage you to check it if you want to know more.

Evaluation results

When fine-tuned on downstream tasks, this model achieves the following results:

GLUE test results:

| Task  | MNLI | QQP  | QNLI | SST-2 | CoLA | STS-B | MRPC | RTE  |
|-------|------|------|------|-------|------|-------|------|------|
| Score | 81.5 | 87.8 | 88.2 | 90.4  | 47.2 | 85.5  | 85.6 | 60.6 |
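
As a sketch of how such fine-tuning could be set up (the hyperparameters below are illustrative assumptions, not the settings used to obtain the numbers above; the transformers and datasets packages are assumed), the checkpoint can be loaded with a sequence-classification head for a GLUE task such as SST-2:

```python
from datasets import load_dataset
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

# Load the cased DistilBERT checkpoint with a freshly initialized classification head.
model_id = "distilbert-base-cased"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id, num_labels=2)

# SST-2 is one of the GLUE tasks reported in the table above.
dataset = load_dataset("glue", "sst2")
encoded = dataset.map(
    lambda batch: tokenizer(
        batch["sentence"], truncation=True, padding="max_length", max_length=128
    ),
    batched=True,
)

# Illustrative hyperparameters only; the card does not specify the fine-tuning setup.
trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="distilbert-base-cased-sst2",
        num_train_epochs=3,
        per_device_train_batch_size=32,
        learning_rate=2e-5,
    ),
    train_dataset=encoded["train"],
    eval_dataset=encoded["validation"],
)
trainer.train()
```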

BibTeX entry and citation info

@article{Sanh2019DistilBERTAD,
  title={DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter},
  author={Victor Sanh and Lysandre Debut and Julien Chaumond and Thomas Wolf},
  journal={ArXiv},
  year={2019},
  volume={abs/1910.01108}
}