Compare commits

...

10 Commits

- nielsr d0a1f6ab88 Better prompts for widget samples (#2) (2022-08-02 13:03:04 +00:00)
  - Better prompts for widget samples (5a1d3ba58fe705a59c0d26d94e7e10e1dbcce846)
  - What's the animal doing (67d4cf273bc9974aee5d87d3f768f590a408e5ce)
  - Co-authored-by: Mishig Davaadorj <mishig@users.noreply.huggingface.co>
- nielsr a08ecc1591 Add widget samples (#1) (2022-07-29 13:47:48 +00:00)
  - Add widget samples (78f1ead6e5ba5c39a35cff67fab0fd4e2a3d0b38)
  - Co-authored-by: Mishig Davaadorj <mishig@users.noreply.huggingface.co>
- Niels Rogge 62cb70a235 Update README.md (2022-07-27 08:05:37 +00:00)
- Niels Rogge 4355f59b0b Add code example (2022-01-23 09:40:33 +00:00)
- Niels Rogge 1d0521195d Update tokenizer_config.json (2022-01-19 17:24:36 +00:00)
- Niels Rogge 18d30d29a8 Fix tokenizer files (2022-01-19 17:03:47 +01:00)
- Niels Rogge dcf3033e95 Delete preprocessor_config.json (2022-01-19 16:02:08 +00:00)
- Niels Rogge 2cc62bf3d7 Upload tokenizer.json (2022-01-19 15:49:17 +00:00)
- Niels Rogge 4bba75a1b0 Upload preprocessor_config.json (2021-11-28 18:56:19 +00:00)
- Niels Rogge a5faec5a6d Upload special_tokens_map.json (2021-11-28 18:55:28 +00:00)
5 changed files with 52 additions and 6 deletions

README.md

@@ -1,5 +1,12 @@
---
tags:
- visual-question-answering
license: apache-2.0
widget:
- text: "What's the animal doing?"
  src: "https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg"
- text: "What is on top of the building?"
  src: "https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg"
---
# Vision-and-Language Transformer (ViLT), fine-tuned on VQAv2
@@ -9,17 +16,36 @@ Without Convolution or Region Supervision](https://arxiv.org/abs/2102.03334) by
Disclaimer: The team releasing ViLT did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
(to do)
## Intended uses & limitations
You can use the raw model for visual question answering.
### How to use
(to do)
Here is how to use this model in PyTorch:
```python
from transformers import ViltProcessor, ViltForQuestionAnswering
import requests
from PIL import Image
# prepare image + question
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
text = "How many cats are there?"
processor = ViltProcessor.from_pretrained("dandelin/vilt-b32-finetuned-vqa")
model = ViltForQuestionAnswering.from_pretrained("dandelin/vilt-b32-finetuned-vqa")
# prepare inputs
encoding = processor(image, text, return_tensors="pt")
# forward pass
outputs = model(**encoding)
logits = outputs.logits
idx = logits.argmax(-1).item()
print("Predicted answer:", model.config.id2label[idx])
```
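The snippet above decodes the answer with a plain argmax over the classifier logits and `model.config.id2label`. As a self-contained illustration of that last step (the logits and label map below are toy values, not real model outputs), the logits can also be softmaxed to rank candidate answers by confidence:

```python
import math

# Toy stand-ins for outputs.logits and model.config.id2label.
logits = [0.1, 3.2, -1.0, 0.7]
id2label = {0: "1", 1: "2", 2: "cat", 3: "yes"}

# Numerically stable softmax to turn logits into readable confidences.
m = max(logits)
exps = [math.exp(x - m) for x in logits]
total = sum(exps)
probs = [e / total for e in exps]

# Rank answers by probability, highest first.
ranked = sorted(zip(probs, id2label.values()), reverse=True)
for p, answer in ranked[:3]:
    print(f"{answer}: {p:.3f}")
```

The argmax answer is simply `ranked[0]`; inspecting the runners-up is useful because VQA models often split probability mass across near-synonymous answers.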
## Training data

preprocessor_config.json (new file, 18 lines)

@@ -0,0 +1,18 @@
{
"do_normalize": true,
"do_resize": true,
"feature_extractor_type": "ViltFeatureExtractor",
"image_mean": [
0.5,
0.5,
0.5
],
"image_std": [
0.5,
0.5,
0.5
],
"resample": 3,
"size": 384,
"size_divisor": 32
}
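Read as a recipe, this config says: resize so the shorter edge is `size` (384), make both dimensions divisible by `size_divisor` (32), resample bicubically (`resample: 3` is PIL's code for BICUBIC), and normalize each channel with mean and std 0.5, which maps [0, 1] pixel values to [-1, 1]. A rough sketch of the arithmetic, simplified from what `ViltFeatureExtractor` actually does (the real implementation also caps the longer edge):

```python
import math

def target_size(h, w, shorter=384, size_divisor=32):
    # Scale so the shorter edge becomes `shorter`.
    scale = shorter / min(h, w)
    h, w = round(h * scale), round(w * scale)
    # Pad each dimension up to the next multiple of `size_divisor`,
    # so patch embedding divides the image evenly.
    pad = lambda x: math.ceil(x / size_divisor) * size_divisor
    return pad(h), pad(w)

def normalize(pixel, mean=0.5, std=0.5):
    # image_mean/image_std of 0.5 map a [0, 1] pixel to [-1, 1].
    return (pixel - mean) / std

print(target_size(480, 640))           # (384, 512)
print(normalize(0.0), normalize(1.0))  # -1.0 1.0
```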

special_tokens_map.json (new file, 1 line)

@@ -0,0 +1 @@
{"unk_token": "[UNK]", "sep_token": "[SEP]", "pad_token": "[PAD]", "cls_token": "[CLS]", "mask_token": "[MASK]"}
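These are the standard BERT special tokens that ViLT's text side inherits. A toy sketch of where they end up in an encoded question (the helper `frame_question` is hypothetical; `ViltProcessor` handles all of this internally):

```python
def frame_question(tokens, max_len, special):
    # BERT-style input: [CLS] question tokens [SEP], padded out with [PAD].
    seq = [special["cls_token"]] + tokens[: max_len - 2] + [special["sep_token"]]
    seq += [special["pad_token"]] * (max_len - len(seq))
    return seq

special = {"unk_token": "[UNK]", "sep_token": "[SEP]", "pad_token": "[PAD]",
           "cls_token": "[CLS]", "mask_token": "[MASK]"}
print(frame_question(["how", "many", "cats"], 8, special))
# ['[CLS]', 'how', 'many', 'cats', '[SEP]', '[PAD]', '[PAD]', '[PAD]']
```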

tokenizer.json (new file)

File diff suppressed because one or more lines are too long

tokenizer_config.json

@@ -1 +1 @@
-{"do_lower_case": true, "do_basic_tokenize": true, "never_split": null, "unk_token": "[UNK]", "sep_token": "[SEP]", "pad_token": "[PAD]", "cls_token": "[CLS]", "mask_token": "[MASK]", "tokenize_chinese_chars": true, "strip_accents": null, "model_max_length": 512, "special_tokens_map_file": null, "tokenizer_file": "/root/.cache/huggingface/transformers/534479488c54aeaf9c3406f647aa2ec13648c06771ffe269edabebd4c412da1d.7f2721073f19841be16f41b0a70b600ca6b880c8f3df6f3535cbc704371bdfa4", "name_or_path": "bert-base-uncased", "tokenizer_class": "BertTokenizer"}
+{"do_lower_case": true, "unk_token": "[UNK]", "sep_token": "[SEP]", "pad_token": "[PAD]", "cls_token": "[CLS]", "mask_token": "[MASK]", "tokenize_chinese_chars": true, "strip_accents": null, "model_max_length": 40, "special_tokens_map_file": null, "name_or_path": "bert-base-uncased", "tokenizer_class": "BertTokenizer"}
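The substantive change in this file is `model_max_length` dropping from 512 (BERT's default) to 40: ViLT encodes questions with at most 40 text tokens, so longer inputs get truncated. A toy illustration of the effect (a hypothetical helper, not the tokenizer's actual code):

```python
def truncate(token_ids, model_max_length=40):
    # With truncation enabled, anything past model_max_length is dropped.
    return token_ids[:model_max_length]

ids = list(range(100))
print(len(truncate(ids)))        # 40  (ViLT's limit)
print(len(truncate(ids, 512)))   # 100 (fits within BERT's default)
```

Forty tokens is plenty for typical VQA questions ("How many cats are there?"), which is why the limit can be so much tighter than BERT's.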