From 4ab7411f923bd547e574fed1ef7c84771dccdd47 Mon Sep 17 00:00:00 2001
From: Niels Rogge
Date: Thu, 9 Feb 2023 10:29:33 +0000
Subject: [PATCH] Create README.md

---
 README.md | 43 +++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 43 insertions(+)
 create mode 100644 README.md

diff --git a/README.md b/README.md
new file mode 100644
index 0000000..aeeed71
--- /dev/null
+++ b/README.md
@@ -0,0 +1,43 @@
---
language: en
license: mit
tags:
- vision
- image-to-text
pipeline_tag: image-to-text
---

# BLIP-2, Flan T5-xxl, pre-trained only

BLIP-2 model, leveraging [Flan T5-xxl](https://huggingface.co/google/flan-t5-xxl) (a large language model).
It was introduced in the paper [BLIP-2: Bootstrapping Language-Image Pre-training with Frozen Image Encoders and Large Language Models](https://arxiv.org/abs/2301.12597) by Li et al. and first released in [this repository](https://github.com/salesforce/LAVIS/tree/main/projects/blip2).

Disclaimer: The team releasing BLIP-2 did not write a model card for this model, so this model card has been written by the Hugging Face team.

## Model description

BLIP-2 consists of three models: a CLIP-like image encoder, a Querying Transformer (Q-Former) and a large language model.

The authors initialize the weights of the image encoder and the large language model from pre-trained checkpoints and keep them frozen while training the Querying Transformer. The Q-Former is a BERT-like Transformer encoder that maps a set of "query tokens" to query embeddings, which bridge the gap between the embedding space of the image encoder and that of the large language model.

The training objective is simply to predict the next text token, given the query embeddings and the previous text.

This allows the model to be used for tasks like:

- image captioning
- visual question answering (VQA)
- chat-like conversations, by feeding the image and the previous conversation as a prompt to the model

## Intended uses & limitations

You can use the raw model for conditional text generation given an image and optional text. See the [model hub](https://huggingface.co/models?search=Salesforce/blip) to look for
fine-tuned versions on a task that interests you.

### How to use

For code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/main/model_doc/blip_2).
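
As an illustration, below is a minimal sketch of image captioning and prompted visual question answering with the `transformers` BLIP-2 classes. The checkpoint name (`Salesforce/blip2-flan-t5-xxl`) and the sample image URL are assumptions made for this example; the documentation linked above remains the authoritative reference.

```python
# Minimal sketch; assumes the Salesforce/blip2-flan-t5-xxl checkpoint name and a sample COCO image.
import requests
import torch
from PIL import Image
from transformers import Blip2Processor, Blip2ForConditionalGeneration

checkpoint = "Salesforce/blip2-flan-t5-xxl"  # assumed checkpoint name
device = "cuda" if torch.cuda.is_available() else "cpu"
dtype = torch.float16 if device == "cuda" else torch.float32  # half precision on GPU to save memory

processor = Blip2Processor.from_pretrained(checkpoint)
model = Blip2ForConditionalGeneration.from_pretrained(checkpoint, torch_dtype=dtype).to(device)

# Load any RGB image; here we fetch a sample image from COCO.
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw).convert("RGB")

# Image captioning: no text prompt, the model generates a caption for the image.
inputs = processor(images=image, return_tensors="pt").to(device, dtype)
generated_ids = model.generate(**inputs, max_new_tokens=20)
print(processor.batch_decode(generated_ids, skip_special_tokens=True)[0].strip())

# Visual question answering: condition generation on a question prompt.
prompt = "Question: how many cats are there? Answer:"
inputs = processor(images=image, text=prompt, return_tensors="pt").to(device, dtype)
generated_ids = model.generate(**inputs, max_new_tokens=10)
print(processor.batch_decode(generated_ids, skip_special_tokens=True)[0].strip())
```

Note that Flan T5-xxl is an 11B-parameter language model, so the full BLIP-2 checkpoint requires tens of gigabytes of memory; half precision (as above) or quantized loading can help on smaller GPUs.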