ydshieh/vit-gpt2-coco-en is a forked repository on the Hugging Face Hub. License: None

README.md

---
tags:
- image-to-text
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/football-match.jpg
  example_title: Football Match
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/dog-cat.jpg
  example_title: Dog & Cat
---

## Example

The model is by no means state-of-the-art, but it nevertheless produces reasonable image captioning results. It was fine-tuned mainly as a proof of concept for the 🤗 FlaxVisionEncoderDecoder framework.

The model can be used as follows:

### In PyTorch

```python
import torch
import requests
from PIL import Image
from transformers import ViTFeatureExtractor, AutoTokenizer, VisionEncoderDecoderModel


loc = "ydshieh/vit-gpt2-coco-en"

feature_extractor = ViTFeatureExtractor.from_pretrained(loc)
tokenizer = AutoTokenizer.from_pretrained(loc)
model = VisionEncoderDecoderModel.from_pretrained(loc)
model.eval()


def predict(image):
    pixel_values = feature_extractor(images=image, return_tensors="pt").pixel_values

    with torch.no_grad():
        output_ids = model.generate(pixel_values, max_length=16, num_beams=4, return_dict_in_generate=True).sequences

    preds = tokenizer.batch_decode(output_ids, skip_special_tokens=True)
    preds = [pred.strip() for pred in preds]

    return preds


# We will verify our results on an image of cute cats
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
with Image.open(requests.get(url, stream=True).raw) as image:
    preds = predict(image)

print(preds)
# should produce
# ['a cat laying on top of a couch next to another cat']
```
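If your installed version of 🤗 Transformers ships the generic `image-to-text` pipeline task, the same checkpoint can also be run in a couple of lines. This is a sketch rather than part of the original model card, and it assumes a Transformers release recent enough to include that task:

```python
# Hedged sketch: the image-to-text pipeline bundles the feature extractor,
# model, and tokenizer from the snippet above into a single callable.
from transformers import pipeline

captioner = pipeline("image-to-text", model="ydshieh/vit-gpt2-coco-en")

# The pipeline accepts an image URL, a local file path, or a PIL.Image.
preds = captioner("http://images.cocodataset.org/val2017/000000039769.jpg")
print(preds)  # a list of dicts, each with a "generated_text" key
```

The pipeline hides the manual `generate`/`batch_decode` steps, at the cost of less control over generation parameters than the explicit code above.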

### In Flax

```python
import jax
import requests
from PIL import Image
from transformers import ViTFeatureExtractor, AutoTokenizer, FlaxVisionEncoderDecoderModel


loc = "ydshieh/vit-gpt2-coco-en"

feature_extractor = ViTFeatureExtractor.from_pretrained(loc)
tokenizer = AutoTokenizer.from_pretrained(loc)
model = FlaxVisionEncoderDecoderModel.from_pretrained(loc)

gen_kwargs = {"max_length": 16, "num_beams": 4}


# This takes some time when compiling the first time, but subsequent inference will be much faster
@jax.jit
def generate(pixel_values):
    output_ids = model.generate(pixel_values, **gen_kwargs).sequences
    return output_ids


def predict(image):
    pixel_values = feature_extractor(images=image, return_tensors="np").pixel_values
    output_ids = generate(pixel_values)
    preds = tokenizer.batch_decode(output_ids, skip_special_tokens=True)
    preds = [pred.strip() for pred in preds]

    return preds


# We will verify our results on an image of cute cats
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
with Image.open(requests.get(url, stream=True).raw) as image:
    preds = predict(image)

print(preds)
# should produce
# ['a cat laying on top of a couch next to another cat']
```
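Both snippets caption a single image, but the feature extractor also accepts a list of images, so a whole batch can be captioned with one `generate` call. A minimal PyTorch sketch (the repeated URL is just for illustration, not part of the original model card):

```python
import torch
import requests
from PIL import Image
from transformers import ViTFeatureExtractor, AutoTokenizer, VisionEncoderDecoderModel

loc = "ydshieh/vit-gpt2-coco-en"
feature_extractor = ViTFeatureExtractor.from_pretrained(loc)
tokenizer = AutoTokenizer.from_pretrained(loc)
model = VisionEncoderDecoderModel.from_pretrained(loc)
model.eval()

urls = [
    "http://images.cocodataset.org/val2017/000000039769.jpg",
    "http://images.cocodataset.org/val2017/000000039769.jpg",  # same image twice, for illustration
]
images = [Image.open(requests.get(u, stream=True).raw) for u in urls]

# The feature extractor stacks all images into one (N, 3, 224, 224) tensor,
# so a single generate() call produces one caption per image.
pixel_values = feature_extractor(images=images, return_tensors="pt").pixel_values
with torch.no_grad():
    output_ids = model.generate(pixel_values, max_length=16, num_beams=4)
preds = [p.strip() for p in tokenizer.batch_decode(output_ids, skip_special_tokens=True)]
print(preds)
```

Note that in the Flax version, `jax.jit` recompiles `generate` for every new input shape, so batched inference is fastest when all batches share the same size.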