From 7cb542864af1b5dae14834da62f232cad74e95f4 Mon Sep 17 00:00:00 2001
From: Stella Biderman
Date: Fri, 6 Aug 2021 19:54:23 +0000
Subject: [PATCH] Update README.md

---
 README.md | 10 ++++++----
 1 file changed, 6 insertions(+), 4 deletions(-)

diff --git a/README.md b/README.md
index 9d34cc7..3a52668 100644
--- a/README.md
+++ b/README.md
@@ -52,12 +52,14 @@ GPT-J learns an inner representation of the English language that can be used to
 
 ### How to use
 
-You can use this model directly with a pipeline for text generation. This example generates a different sequence each time it's run:
+This model can be easily loaded using the `AutoModelForCausalLM` functionality:
 
 ```python
->>> from transformers import pipeline
->>> generator = pipeline('text-generation', model='EleutherAI/gpt-j-6B')
->>> generator("EleutherAI has", do_sample=True, min_length=50)
+from transformers import AutoTokenizer, AutoModelForCausalLM
+
+tokenizer = AutoTokenizer.from_pretrained("gpt2")
+model = AutoModelForCausalLM.from_pretrained("gpt-j-6B")
+
 [{'generated_text': 'EleutherAI has made a commitment to create new software packages for each of its major clients and has'}]
 ```
 
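
For context, here is a minimal end-to-end sketch of the loading path this patch introduces. It is illustrative and not part of the committed diff: it assumes the full Hub ID `EleutherAI/gpt-j-6B` (the identifier used in the removed pipeline example), a PyTorch-backed `transformers` installation, and enough memory for the 6B-parameter checkpoint.

```python
# Illustrative sketch only -- not part of the patch above.
# Assumes the full Hub ID "EleutherAI/gpt-j-6B" (as used in the removed
# pipeline example) and a PyTorch-backed transformers install with enough
# RAM for the 6B-parameter checkpoint.
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-j-6B")
model = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-j-6B")

# Sample a continuation, mirroring the removed pipeline call.
inputs = tokenizer("EleutherAI has", return_tensors="pt")
outputs = model.generate(**inputs, do_sample=True, max_length=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```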