---
language: ja
thumbnail: https://github.com/rinnakk/japanese-pretrained-models/blob/master/rinna.png
tags:
- ja
- japanese
- clip
- vision
---
# rinna/japanese-clip-vit-b-16

This repository provides a Japanese [CLIP (Contrastive Language-Image Pre-Training)](https://arxiv.org/abs/2103.00020) model. The model was trained by [rinna Co., Ltd.](https://corp.rinna.co.jp/)
# How to use the model
1. Install package
```shell
$ pip install git+https://github.com/rinnakk/japanese-clip.git
```
2. Run
```python
import io
import requests
from PIL import Image
import torch
import japanese_clip as ja_clip

device = "cuda" if torch.cuda.is_available() else "cpu"

# Load the pretrained model, its image preprocessing pipeline, and the tokenizer.
model, preprocess = ja_clip.load("rinna/japanese-clip-vit-b-16", cache_dir="/tmp/japanese_clip", device=device)
tokenizer = ja_clip.load_tokenizer()

# Download a sample image and preprocess it into a batch of size 1.
img = Image.open(io.BytesIO(requests.get('https://images.pexels.com/photos/2253275/pexels-photo-2253275.jpeg?auto=compress&cs=tinysrgb&dpr=3&h=750&w=1260').content))
image = preprocess(img).unsqueeze(0).to(device)

# Tokenize the candidate labels: "dog", "cat", "elephant".
encodings = ja_clip.tokenize(
    texts=["犬", "猫", "象"],
    max_seq_len=77,
    device=device,
    tokenizer=tokenizer,  # optional; if omitted, the tokenizer is loaded on each call
)

with torch.no_grad():
    image_features = model.get_image_features(image)
    text_features = model.get_text_features(**encodings)

    # Scale the image–text similarity scores and normalize them into label probabilities.
    text_probs = (100.0 * image_features @ text_features.T).softmax(dim=-1)

print("Label probs:", text_probs)  # prints: [[1.0, 0.0, 0.0]]
```
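
To read the prediction back as a label, you can pair each probability with its candidate text. A minimal sketch, reusing `text_probs` from the example above; the `labels` list is illustrative and simply repeats the texts passed to `ja_clip.tokenize`:

```python
# Map each probability back to its candidate label.
# `labels` repeats the texts passed to ja_clip.tokenize above.
labels = ["犬", "猫", "象"]  # "dog", "cat", "elephant"
for label, prob in zip(labels, text_probs.squeeze(0).tolist()):
    print(f"{label}: {prob:.4f}")
# For the sample dog photo, "犬" should receive essentially all of the probability mass.
```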