Update README.md
commit 9130025bb7
parent 058e8e2c95
@@ -15,7 +15,7 @@ datasets:
## Model Description
-GPT-Neo 2.7B is a transformer model designed using EleutherAI's replication of the GPT-3 architecture. GPT-Neo refers to the class of models, while 2.7B represents the number of parameters of this particular pre-trained model. This model is the same size as OpenAI's "Ada" model.
+GPT-Neo 2.7B is a transformer model designed using EleutherAI's replication of the GPT-3 architecture. GPT-Neo refers to the class of models, while 2.7B represents the number of parameters of this particular pre-trained model.
## Training data
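The model described above can be run directly with the Hugging Face `transformers` library; a minimal sketch, assuming a standard `transformers` install (the model id `EleutherAI/gpt-neo-2.7B` is the one this card documents; the prompt and generation settings are arbitrary examples):

```python
# Minimal sketch: text generation with GPT-Neo 2.7B via the
# transformers pipeline API. Note: the first call downloads
# roughly 10 GB of model weights from the Hugging Face Hub.
from transformers import pipeline

generator = pipeline("text-generation", model="EleutherAI/gpt-neo-2.7B")
out = generator("EleutherAI has trained", max_new_tokens=20, do_sample=True)
print(out[0]["generated_text"])
```

Sampled output varies between runs since `do_sample=True` draws tokens stochastically rather than decoding greedily.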
@@ -60,7 +60,7 @@ EleutherAI is currently in the process of carrying out further evaluations of GP
| GPT-3 1.3B | ------ | ----- | ----- |
| GPT-2 1.5B | 1.0468 | ----- | 17.48 |
| **GPT-Neo 2.7B** | **0.7165** | **5.646** | **11.39** |
-| GPT-3 Ada 2.7B | 0.9631 | ----- | ----- |
+| GPT-3 2.7B | 0.9631 | ----- | ----- |
| GPT-3 175B | 0.7177 | ----- | ----- |
All GPT-2 and GPT-3 scores are from their respective papers, except for the Pile test results which are from the Pile paper.