Update README.md (#2)
- Update README.md (c66c58b6d926dd0d37987e69cb6083e4e40afb89) Co-authored-by: Bartek Szmelczynski <Bearnardd@users.noreply.huggingface.co>
parent 8d708decd7
commit 9d7568e4b2
@@ -11,7 +11,7 @@ datasets:
 This model is a distilled version of the [BERT base model](https://huggingface.co/bert-base-cased).
 It was introduced in [this paper](https://arxiv.org/abs/1910.01108).
 The code for the distillation process can be found
-[here](https://github.com/huggingface/transformers/tree/master/examples/distillation).
+[here](https://github.com/huggingface/transformers/tree/main/examples/research_projects/distillation).
 This model is cased: it does make a difference between english and English.
 
 All the training details on the pre-training, the uses, limitations and potential biases (included below) are the same as for [DistilBERT-base-uncased](https://huggingface.co/distilbert-base-uncased).