Vision-Language Pre-training (VLP) has advanced the performance for many vision-language tasks. However, most existing pre-trained models only excel in either understanding-based tasks or generation-based tasks. Furthermore, performance improvement has been largely achieved by scaling up the dataset with noisy image-text pairs collected from the web, which is a suboptimal source of supervision. In this paper, we propose BLIP, a new VLP framework which transfers flexibly to both vision-language understanding and generation tasks. BLIP effectively utilizes the noisy web data by bootstrapping the captions, where a captioner generates synthetic captions and a filter removes the noisy ones. We achieve state-of-the-art results on a wide range of vision-language tasks, such as image-text retrieval (+2.7% in average recall@1), image captioning (+2.8% in CIDEr), and VQA (+1.6% in VQA score). BLIP also demonstrates strong generalization ability when directly transferred to video-language tasks in a zero-shot manner. Code, models, and datasets are released.
Usage
You can use this model for visual question answering: given an image and a natural-language question, the model generates an answer.
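For quick experiments, the high-level `pipeline` API from `transformers` can also be used. The snippet below is a minimal sketch, not part of the original card: it assumes the `visual-question-answering` pipeline task supports this checkpoint and that `transformers` and `Pillow` are installed.

```python
# Minimal sketch using the transformers "visual-question-answering" pipeline
# (assumes pipeline support for this checkpoint; not part of the original card).
from transformers import pipeline

vqa = pipeline("visual-question-answering", model="Salesforce/blip-vqa-base")

result = vqa(
    image="https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg",
    question="how many dogs are in the picture?",
)
# Expected: a list of {"answer": ..., "score": ...} dicts, e.g. answer "1"
print(result)
```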
Using the PyTorch model
Running the model on CPU
```python
import requests
from PIL import Image
from transformers import BlipProcessor, BlipForQuestionAnswering

# Load the processor and the BLIP VQA model
processor = BlipProcessor.from_pretrained("Salesforce/blip-vqa-base")
model = BlipForQuestionAnswering.from_pretrained("Salesforce/blip-vqa-base")

# Download a demo image
img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg'
raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB')

# Prepare the image-question pair and generate an answer
question = "how many dogs are in the picture?"
inputs = processor(raw_image, question, return_tensors="pt")

out = model.generate(**inputs)
print(processor.decode(out[0], skip_special_tokens=True))
# >>> 1
```
Running the model on GPU
In full precision
```python
import requests
from PIL import Image
from transformers import BlipProcessor, BlipForQuestionAnswering

# Load the processor and move the model to the GPU
processor = BlipProcessor.from_pretrained("Salesforce/blip-vqa-base")
model = BlipForQuestionAnswering.from_pretrained("Salesforce/blip-vqa-base").to("cuda")

img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg'
raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB')

# Inputs must be moved to the same device as the model
question = "how many dogs are in the picture?"
inputs = processor(raw_image, question, return_tensors="pt").to("cuda")

out = model.generate(**inputs)
print(processor.decode(out[0], skip_special_tokens=True))
# >>> 1
```
In half precision (float16)
```python
import torch
import requests
from PIL import Image
from transformers import BlipProcessor, BlipForQuestionAnswering

# Load the model weights in float16 and move them to the GPU
processor = BlipProcessor.from_pretrained("ybelkada/blip-vqa-base")
model = BlipForQuestionAnswering.from_pretrained("ybelkada/blip-vqa-base", torch_dtype=torch.float16).to("cuda")

img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg'
raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB')

# Cast the inputs to float16 so they match the model weights
question = "how many dogs are in the picture?"
inputs = processor(raw_image, question, return_tensors="pt").to("cuda", torch.float16)

out = model.generate(**inputs)
print(processor.decode(out[0], skip_special_tokens=True))
# >>> 1
```
BibTeX and citation info
```bibtex
@misc{https://doi.org/10.48550/arxiv.2201.12086,
  doi       = {10.48550/ARXIV.2201.12086},
  url       = {https://arxiv.org/abs/2201.12086},
  author    = {Li, Junnan and Li, Dongxu and Xiong, Caiming and Hoi, Steven},
  keywords  = {Computer Vision and Pattern Recognition (cs.CV), FOS: Computer and information sciences},
  title     = {BLIP: Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation},
  publisher = {arXiv},
  year      = {2022},
  copyright = {Creative Commons Attribution 4.0 International}
}
```