Update README.md
parent 482837aca0
commit a3c7fa2af3
@@ -7,7 +7,7 @@ tags:
 
 # VideoMAE (base-sized model, pre-trained only)
 
-VideoMAE model pre-trained on Kinetic400 in a self-supervised way. It was introduced in the paper [VideoMAE: Masked Autoencoders are Data-Efficient Learners for Self-Supervised Video Pre-Training](https://arxiv.org/abs/2203.12602) by Tong et al. and first released in [this repository](https://github.com/MCG-NJU/VideoMAE).
+VideoMAE model pre-trained on Kinetics-400 in a self-supervised way. It was introduced in the paper [VideoMAE: Masked Autoencoders are Data-Efficient Learners for Self-Supervised Video Pre-Training](https://arxiv.org/abs/2203.12602) by Tong et al. and first released in [this repository](https://github.com/MCG-NJU/VideoMAE).
 
 Disclaimer: The team releasing VideoMAE did not write a model card for this model so this model card has been written by the Hugging Face team.
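For context on the paragraph touched by this diff: a minimal sketch of using this pre-trained-only checkpoint for masked video modeling with the Hugging Face transformers library. The hub id MCG-NJU/videomae-base and the random dummy video are assumptions for illustration, not part of this commit.

```python
import numpy as np
import torch
from transformers import VideoMAEImageProcessor, VideoMAEForPreTraining

# Dummy clip of 16 random frames standing in for a real video.
num_frames = 16
video = list(np.random.randint(0, 256, (num_frames, 3, 224, 224)))

# Assumed hub id for the base-sized, pre-trained-only checkpoint.
processor = VideoMAEImageProcessor.from_pretrained("MCG-NJU/videomae-base")
model = VideoMAEForPreTraining.from_pretrained("MCG-NJU/videomae-base")

pixel_values = processor(video, return_tensors="pt").pixel_values

# Randomly mask tube tokens, mirroring the self-supervised pre-training objective.
num_patches_per_frame = (model.config.image_size // model.config.patch_size) ** 2
seq_length = (num_frames // model.config.tubelet_size) * num_patches_per_frame
bool_masked_pos = torch.randint(0, 2, (1, seq_length)).bool()

outputs = model(pixel_values, bool_masked_pos=bool_masked_pos)
print(outputs.loss)  # reconstruction loss on the masked tube tokens
```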