Migrate model card from transformers-repo
Read the announcement at https://discuss.huggingface.co/t/announcement-all-model-cards-will-be-migrated-to-hf-co-model-repos/2755
Original file history: https://github.com/huggingface/transformers/commits/master/model_cards/roberta-large-mnli-README.md
parent 503b110166 · commit 9d255b1a9c
---
license: mit
widget:
- text: "I like you. </s></s> I love you."
---

## roberta-large-mnli

Trained by Facebook: RoBERTa large fine-tuned on the Multi-Genre Natural Language Inference (MNLI) corpus. [Original source](https://github.com/pytorch/fairseq/tree/master/examples/roberta).

```bibtex
@article{liu2019roberta,
  title   = {RoBERTa: A Robustly Optimized BERT Pretraining Approach},
  author  = {Yinhan Liu and Myle Ott and Naman Goyal and Jingfei Du and
             Mandar Joshi and Danqi Chen and Omer Levy and Mike Lewis and
             Luke Zettlemoyer and Veselin Stoyanov},
  journal = {arXiv preprint arXiv:1907.11692},
  year    = {2019},
}
```