From 65f67317a3f70fc834f1f7e4ea0a688fc672aa2c Mon Sep 17 00:00:00 2001
From: Niels Rogge
Date: Tue, 7 Feb 2023 16:09:47 +0000
Subject: [PATCH] Update README.md

---
 README.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/README.md b/README.md
index 0acca7d..250d368 100644
--- a/README.md
+++ b/README.md
@@ -9,7 +9,7 @@ pipeline_tag: image-to-text
 
 # BLIP-2, OPT-6.7b, pre-trained only
 
-BLIP-2 model, leveraging [OPT-2.7b](https://huggingface.co/facebook/opt-6.7b) (a large language model with 2.7 billion parameters).
+BLIP-2 model, leveraging [OPT-6.7b](https://huggingface.co/facebook/opt-6.7b) (a large language model with 6.7 billion parameters).
 It was introduced in the paper [BLIP-2: Bootstrapping Language-Image Pre-training with Frozen Image Encoders and Large Language Models](https://arxiv.org/abs/2301.12597) by Li et al. and first released in [this repository](https://github.com/salesforce/LAVIS/tree/main/projects/blip2).
 
 Disclaimer: The team releasing BLIP-2 did not write a model card for this model so this model card has been written by the Hugging Face team.
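
For reference, the model card this patch edits describes an image-to-text model. Below is a minimal captioning sketch using the `transformers` BLIP-2 classes (`Blip2Processor`, `Blip2ForConditionalGeneration`); the checkpoint id `Salesforce/blip2-opt-6.7b` and the sample image URL are assumptions, as the patch itself does not name the repository:

```python
# Minimal BLIP-2 captioning sketch. Assumes the checkpoint id
# "Salesforce/blip2-opt-6.7b" (not stated in the patch itself).
import requests
from PIL import Image
from transformers import Blip2Processor, Blip2ForConditionalGeneration

processor = Blip2Processor.from_pretrained("Salesforce/blip2-opt-6.7b")
model = Blip2ForConditionalGeneration.from_pretrained("Salesforce/blip2-opt-6.7b")

# Example image URL (an arbitrary COCO validation image), used only for illustration.
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

# Preprocess the image and generate a caption with the frozen OPT-6.7b decoder.
inputs = processor(images=image, return_tensors="pt")
generated_ids = model.generate(**inputs)
caption = processor.batch_decode(generated_ids, skip_special_tokens=True)[0].strip()
print(caption)
```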