Update README.md
parent ae026a85a9
commit 422e706f9b
@@ -7,7 +7,7 @@ tags:
 
 # VideoMAE (base-sized model, fine-tuned on Kinetics-400)
 
-VideoMAE model fine-tuned on Kinetics-400 in a self-supervised way. It was introduced in the paper [VideoMAE: Masked Autoencoders are Data-Efficient Learners for Self-Supervised Video Pre-Training](https://arxiv.org/abs/2203.12602) by Tong et al. and first released in [this repository](https://github.com/MCG-NJU/VideoMAE).
+VideoMAE model pre-trained for 1600 epochs in a self-supervised way and fine-tuned in a supervised way on Kinetics-400. It was introduced in the paper [VideoMAE: Masked Autoencoders are Data-Efficient Learners for Self-Supervised Video Pre-Training](https://arxiv.org/abs/2203.12602) by Tong et al. and first released in [this repository](https://github.com/MCG-NJU/VideoMAE).
 
 Disclaimer: The team releasing VideoMAE did not write a model card for this model so this model card has been written by the Hugging Face team.
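For reference, below is a minimal sketch of using the fine-tuned checkpoint described in the updated paragraph with the 🤗 Transformers VideoMAE classes for video classification. It is not part of the commit; the repository id `MCG-NJU/videomae-base-finetuned-kinetics` and the dummy 16-frame input are assumptions for illustration.

```python
# Minimal sketch (assumed checkpoint id, dummy frames) of Kinetics-400 video classification.
import numpy as np
import torch
from transformers import VideoMAEImageProcessor, VideoMAEForVideoClassification

ckpt = "MCG-NJU/videomae-base-finetuned-kinetics"  # assumption: repo id of this model card
processor = VideoMAEImageProcessor.from_pretrained(ckpt)
model = VideoMAEForVideoClassification.from_pretrained(ckpt)

# Dummy clip: 16 RGB frames of 224x224; replace with frames decoded from a real video.
video = [np.random.randint(0, 256, (224, 224, 3), dtype=np.uint8) for _ in range(16)]

inputs = processor(video, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # shape (1, 400): one score per Kinetics-400 class

predicted_class = logits.argmax(-1).item()
print(model.config.id2label[predicted_class])
```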