diff --git a/README.md b/README.md
index 5b5e996..710fadf 100644
--- a/README.md
+++ b/README.md
@@ -23,7 +23,7 @@ GPT-Neo 2.7B was trained on the Pile, a large scale curated dataset created by E
 ## Training procedure
 
-This model was trained for 400,000 steps on the Pile. It was trained as a masked autoregressive language model, using cross-entropy loss.
+This model was trained on the Pile for 420 billion tokens over 400,000 steps. It was trained as an autoregressive language model, using cross-entropy loss.
 
 ## Intended Use and Limitations