---
language: en
license: mit
tags:
- vision
- image-to-text
- image-captioning
- visual-question-answering
pipeline_tag: image-to-text
inference: false
---

# BLIP-2, Flan T5-xxl, pre-trained only

BLIP-2 model, leveraging [Flan T5-xxl](https://huggingface.co/google/flan-t5-xxl) (a large language model).
It was introduced in the paper [BLIP-2: Bootstrapping Language-Image Pre-training with Frozen Image Encoders and Large Language Models](https://arxiv.org/abs/2301.12597) by Li et al. and first released in [this repository](https://github.com/salesforce/LAVIS/tree/main/projects/blip2).

Disclaimer: The team releasing BLIP-2 did not write a model card for this model, so this model card has been written by the Hugging Face team.

## Model description

BLIP-2 consists of 3 models: a CLIP-like image encoder, a Querying Transformer (Q-Former) and a large language model.

The authors initialize the weights of the image encoder and large language model from pre-trained checkpoints and keep them frozen
while training the Querying Transformer, which is a BERT-like Transformer encoder that maps a set of "query tokens" to query embeddings,
which bridge the gap between the embedding space of the image encoder and the large language model.

The goal for the model is simply to predict the next text token, given the query embeddings and the previous text.

<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/blip2_architecture.jpg"
alt="drawing" width="600"/>

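To see how these three components show up in the `transformers` implementation, the minimal sketch below inspects the checkpoint's configuration only (it downloads the config file, not the weights); the attribute names follow the `Blip2Config` API, and the comments are illustrative.

```python
from transformers import Blip2Config

# Load only the configuration of the checkpoint (a small JSON file, not the full weights).
config = Blip2Config.from_pretrained("Salesforce/blip2-flan-t5-xxl")

print(type(config.vision_config).__name__)   # config of the CLIP-like image encoder
print(type(config.qformer_config).__name__)  # config of the Querying Transformer (Q-Former)
print(type(config.text_config).__name__)     # config of the language model (Flan T5-xxl)
print(config.num_query_tokens)               # number of learned query tokens fed to the Q-Former
```
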
This allows the model to be used for tasks like:

- image captioning
- visual question answering (VQA)
- chat-like conversations by feeding the image and the previous conversation as prompt to the model (see the prompt sketch after this list)

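As a minimal sketch of that chat-style usage, the snippet below flattens previous turns into a single text prompt; the `Question: ... Answer: ...` template is an assumption borrowed from common BLIP-2 prompting examples, not a requirement of this checkpoint.

```python
# Hypothetical conversation history; the template below is an assumption, adapt it as needed.
history = [
    ("how many dogs are in the picture?", "one"),
    ("what is the dog doing?", "sitting on the beach"),
]
question = "what color is the dog?"

prompt = " ".join(f"Question: {q} Answer: {a}" for q, a in history)
prompt = f"{prompt} Question: {question} Answer:"
print(prompt)
# The resulting string is passed to the processor together with the image,
# just like the single-question snippets below.
```
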
## Intended uses & limitations

You can use the raw model for conditional text generation given an image and optional text. See the [model hub](https://huggingface.co/models?search=Salesforce/blip) to look for
fine-tuned versions on a task that interests you.

### How to use

For code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/main/en/model_doc/blip-2#transformers.Blip2ForConditionalGeneration.forward.example), or use one of the snippets below depending on your use case:

#### Running the model on CPU

<details>
<summary> Click to expand </summary>

```python
import requests
from PIL import Image
from transformers import Blip2Processor, Blip2ForConditionalGeneration

processor = Blip2Processor.from_pretrained("Salesforce/blip2-flan-t5-xxl")
model = Blip2ForConditionalGeneration.from_pretrained("Salesforce/blip2-flan-t5-xxl")

img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg'
raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB')

question = "how many dogs are in the picture?"
inputs = processor(raw_image, question, return_tensors="pt")

out = model.generate(**inputs)
print(processor.decode(out[0], skip_special_tokens=True))
```

</details>

#### Running the model on GPU

##### In full precision

<details>
<summary> Click to expand </summary>

```python
# pip install accelerate
import requests
from PIL import Image
from transformers import Blip2Processor, Blip2ForConditionalGeneration

processor = Blip2Processor.from_pretrained("Salesforce/blip2-flan-t5-xxl")
model = Blip2ForConditionalGeneration.from_pretrained("Salesforce/blip2-flan-t5-xxl", device_map="auto")

img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg'
raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB')

question = "how many dogs are in the picture?"
inputs = processor(raw_image, question, return_tensors="pt").to("cuda")

out = model.generate(**inputs)
print(processor.decode(out[0], skip_special_tokens=True))
```

</details>

##### In half precision (`float16`)

<details>
<summary> Click to expand </summary>

```python
# pip install accelerate
import torch
import requests
from PIL import Image
from transformers import Blip2Processor, Blip2ForConditionalGeneration

processor = Blip2Processor.from_pretrained("Salesforce/blip2-flan-t5-xxl")
model = Blip2ForConditionalGeneration.from_pretrained("Salesforce/blip2-flan-t5-xxl", torch_dtype=torch.float16, device_map="auto")

img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg'
raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB')

question = "how many dogs are in the picture?"
inputs = processor(raw_image, question, return_tensors="pt").to("cuda", torch.float16)

out = model.generate(**inputs)
print(processor.decode(out[0], skip_special_tokens=True))
```

</details>

##### In 8-bit precision (`int8`)

<details>
<summary> Click to expand </summary>

```python
# pip install accelerate bitsandbytes
import torch
import requests
from PIL import Image
from transformers import Blip2Processor, Blip2ForConditionalGeneration

processor = Blip2Processor.from_pretrained("Salesforce/blip2-flan-t5-xxl")
model = Blip2ForConditionalGeneration.from_pretrained("Salesforce/blip2-flan-t5-xxl", load_in_8bit=True, device_map="auto")

img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg'
raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB')

question = "how many dogs are in the picture?"
inputs = processor(raw_image, question, return_tensors="pt").to("cuda", torch.float16)

out = model.generate(**inputs)
print(processor.decode(out[0], skip_special_tokens=True))
```

</details>