From 9851938d078d6b457355cdbe866dd9fa2ed3b210 Mon Sep 17 00:00:00 2001
From: Julien Chaumond
Date: Fri, 11 Dec 2020 22:24:14 +0100
Subject: [PATCH] Migrate model card from transformers-repo

Read announcement at https://discuss.huggingface.co/t/announcement-all-model-cards-will-be-migrated-to-hf-co-model-repos/2755

Original file history: https://github.com/huggingface/transformers/commits/master/model_cards/distilgpt2-README.md
---
 README.md | 21 +++++++++++++++++++++
 1 file changed, 21 insertions(+)
 create mode 100644 README.md

diff --git a/README.md b/README.md
new file mode 100644
index 0000000..41e1a5a
--- /dev/null
+++ b/README.md
@@ -0,0 +1,21 @@
+---
+language: en
+tags:
+- exbert
+
+license: apache-2.0
+datasets:
+- openwebtext
+---
+
+# DistilGPT2
+
+DistilGPT2 is an English-language model pretrained with the supervision of [GPT2](https://huggingface.co/gpt2) (the smallest version of GPT2) on [OpenWebTextCorpus](https://skylion007.github.io/OpenWebTextCorpus/), a reproduction of OpenAI's WebText dataset. The model has 6 layers, a hidden size of 768 and 12 attention heads, for a total of 82M parameters (compared to 124M parameters for GPT2). On average, DistilGPT2 is twice as fast as GPT2.
+
+On the [WikiText-103](https://blog.einstein.ai/the-wikitext-long-term-dependency-language-modeling-dataset/) benchmark, GPT2 reaches a test-set perplexity of 16.3, compared to 21.1 for DistilGPT2 (after fine-tuning on the train set).
+
+We encourage you to check out [GPT2](https://huggingface.co/gpt2) to learn more about usage, limitations and potential biases.
+
+<a href="https://huggingface.co/exbert/?model=distilgpt2">
+	<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
+</a>
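
For readers of the migrated card, a minimal sketch of generating text with this checkpoint, assuming the Hugging Face `transformers` library, its `pipeline` API, and the Hub model id `distilgpt2`:

```python
# Minimal sketch: text generation with DistilGPT2 via the transformers pipeline.
# Assumes `pip install transformers torch` and the Hub model id "distilgpt2".
from transformers import pipeline, set_seed

generator = pipeline("text-generation", model="distilgpt2")
set_seed(42)  # make the sampled continuations reproducible

outputs = generator(
    "Hello, I'm a language model,",
    max_length=30,            # total length of prompt plus generated tokens
    num_return_sequences=2,   # sample two continuations
)
for out in outputs:
    print(out["generated_text"])
```

The same checkpoint can also be loaded with `AutoTokenizer` and `AutoModelForCausalLM` (or the GPT-2 classes directly) for finer control over decoding.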