microsoft/xclip-base-patch16-zero-shot is a repository forked from Hugging Face. License: MIT.
---
language: en
license: mit
tags:
- vision
- video-classification
model-index:
- name: nielsr/xclip-base-patch16-zero-shot
  results:
  - task:
      type: video-classification
    dataset:
      name: HMDB-51
      type: hmdb-51
    metrics:
    - type: top-1 accuracy
      value: 44.6
  - task:
      type: video-classification
    dataset:
      name: UCF101
      type: ucf101
    metrics:
    - type: top-1 accuracy
      value: 72.0
  - task:
      type: video-classification
    dataset:
      name: Kinetics-600
      type: kinetics600
    metrics:
    - type: top-1 accuracy
      value: 65.2
---

# X-CLIP (base-sized model)

X-CLIP model (base-sized, patch resolution of 16) trained on Kinetics-400. It was introduced in the paper Expanding Language-Image Pretrained Models for General Video Recognition by Ni et al. and first released in this repository.

This model was trained using 32 frames per video, at a resolution of 224x224.

Disclaimer: The team releasing X-CLIP did not write a model card for this model, so this model card has been written by the Hugging Face team.

## Model description

X-CLIP is a minimal extension of CLIP for general video-language understanding. The model is trained in a contrastive way on (video, text) pairs.

*Figure: X-CLIP architecture.*

This allows the model to be used for tasks like zero-shot, few-shot or fully supervised video classification and video-text retrieval.

## Intended uses & limitations

You can use the raw model to determine how well a text description matches a given video. See the model hub to look for fine-tuned versions on a task that interests you.

### How to use

For code examples, we refer to the documentation.
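As a rough illustration (not taken from the official documentation), the sketch below runs zero-shot video classification with the `transformers` X-CLIP classes. The candidate labels are made up, and random frames stand in for a real clip; in practice you would decode and sample 32 frames of a video at 224x224, matching the training setup described above.

```python
# Minimal sketch: zero-shot video classification with this checkpoint.
# Assumes `transformers` and `torch` are installed; the random frames below
# are placeholders for 32 real frames sampled from a video.
import numpy as np
import torch
from transformers import XCLIPModel, XCLIPProcessor

model_name = "microsoft/xclip-base-patch16-zero-shot"
processor = XCLIPProcessor.from_pretrained(model_name)
model = XCLIPModel.from_pretrained(model_name)

# 32 frames of 224x224 RGB, as expected by this checkpoint.
video = list(np.random.randint(0, 256, size=(32, 224, 224, 3), dtype=np.uint8))

# Hypothetical candidate labels for zero-shot classification.
candidate_labels = ["playing basketball", "cooking", "walking the dog"]

inputs = processor(text=candidate_labels, videos=video,
                   return_tensors="pt", padding=True)

with torch.no_grad():
    outputs = model(**inputs)

# logits_per_video has shape (num_videos, num_texts); softmax over the text
# dimension turns the similarity scores into label probabilities.
probs = outputs.logits_per_video.softmax(dim=1)
print(dict(zip(candidate_labels, probs[0].tolist())))
```

The highest-probability label is the model's zero-shot prediction for the clip.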

## Training data

This model was trained on Kinetics-400.

## Preprocessing

The exact details of preprocessing during training can be found here.

The exact details of preprocessing during validation can be found here.

During validation, the shorter edge of each frame is resized, after which a center crop is taken at a fixed resolution (such as 224x224). Frames are then normalized across the RGB channels using the ImageNet mean and standard deviation.
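For illustration only, here is a per-frame sketch of those validation-time steps written with torchvision transforms; this is not the model's actual preprocessing code, and in practice the processor bundled with the checkpoint is intended to handle these steps for you.

```python
# Illustrative sketch of the validation-time preprocessing described above,
# applied to a single PIL frame; not the original implementation.
from torchvision import transforms

IMAGENET_MEAN = [0.485, 0.456, 0.406]  # standard ImageNet statistics
IMAGENET_STD = [0.229, 0.224, 0.225]

val_transform = transforms.Compose([
    transforms.Resize(224),             # resize the shorter edge to 224
    transforms.CenterCrop(224),         # fixed-size center crop
    transforms.ToTensor(),              # HWC uint8 -> CHW float in [0, 1]
    transforms.Normalize(mean=IMAGENET_MEAN, std=IMAGENET_STD),
])
```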

## Evaluation results

This model achieves a zero-shot top-1 accuracy of 44.6% on HMDB-51, 72.0% on UCF-101 and 65.2% on Kinetics-600.