Add note that this is the smallest version of the model (#18)

- Add note that this is the smallest version of the model (611838ef095a5bb35bf2027d05e1194b7c9d37ac)


Co-authored-by: helen <mathemakitten@users.noreply.huggingface.co>
Sylvain Gugger 2022-11-23 12:55:26 +00:00 committed by system
parent 0dd7bcc7a6
commit f27b190eea
1 changed file with 4 additions and 0 deletions


@ -34,6 +34,10 @@ This way, the model learns an inner representation of the English language that
useful for downstream tasks. The model is best at what it was pretrained for however, which is generating texts from a
prompt.
This is the **smallest** version of GPT-2, with 124M parameters.
**Related Models:** [GPT-Large](https://huggingface.co/gpt2-large), [GPT-Medium](https://huggingface.co/gpt2-medium) and [GPT-XL](https://huggingface.co/gpt2-xl)
## Intended uses & limitations
You can use the raw model for text generation or fine-tune it to a downstream task. See the
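The "124M parameters" figure added in this diff can be sanity-checked directly from GPT-2 small's published hyperparameters (12 layers, hidden size 768, vocabulary 50257, 1024 positions). The sketch below is illustrative: the function name and argument names are assumptions, not part of the model card, and it counts the standard GPT-2 weight shapes (tied input/output embedding counted once).

```python
# Hedged sketch: estimate GPT-2 "small" parameter count from its
# architecture hyperparameters. Names (gpt2_param_count, d_model, ...)
# are hypothetical, chosen for illustration only.

def gpt2_param_count(n_layer=12, d_model=768, vocab=50257, n_ctx=1024):
    # Token and position embedding tables (output head is tied to wte).
    embeddings = vocab * d_model + n_ctx * d_model
    per_layer = (
        d_model * 3 * d_model + 3 * d_model    # fused QKV projection + bias
        + d_model * d_model + d_model          # attention output projection
        + d_model * 4 * d_model + 4 * d_model  # MLP up-projection (4x width)
        + 4 * d_model * d_model + d_model      # MLP down-projection
        + 2 * (2 * d_model)                    # two LayerNorms (scale + shift)
    )
    final_ln = 2 * d_model                     # final LayerNorm
    return embeddings + n_layer * per_layer + final_ln

print(gpt2_param_count())  # 124439808, i.e. the ~124M quoted above
```

Running the larger configurations through the same formula (e.g. 24 layers at width 1024 for gpt2-medium) reproduces the parameter counts of the related models linked above.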