Read the announcement at https://discuss.huggingface.co/t/announcement-all-model-cards-will-be-migrated-to-hf-co-model-repos/2755. Original file history: https://github.com/huggingface/transformers/commits/master/model_cards/roberta-large-mnli-README.md
---
license: mit
---
# roberta-large-mnli
Trained by Facebook, original source
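This checkpoint is a RoBERTa-large model fine-tuned on MNLI, so it predicts the relation between a premise and a hypothesis. A minimal usage sketch with the `transformers` library (the premise/hypothesis sentences here are illustrative examples, not from the original card):

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Load the fine-tuned NLI checkpoint from the Hugging Face Hub
tokenizer = AutoTokenizer.from_pretrained("roberta-large-mnli")
model = AutoModelForSequenceClassification.from_pretrained("roberta-large-mnli")

# MNLI inputs are sentence pairs: a premise and a hypothesis
premise = "A soccer game with multiple males playing."
hypothesis = "Some men are playing a sport."
inputs = tokenizer(premise, hypothesis, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

# The model outputs one of three labels: CONTRADICTION, NEUTRAL, ENTAILMENT
pred = logits.argmax(dim=-1).item()
print(model.config.id2label[pred])
```

The same checkpoint is commonly used for zero-shot classification by phrasing candidate labels as hypotheses and scoring the entailment probability.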
```bibtex
@article{liu2019roberta,
    title   = {RoBERTa: A Robustly Optimized BERT Pretraining Approach},
    author  = {Yinhan Liu and Myle Ott and Naman Goyal and Jingfei Du and
               Mandar Joshi and Danqi Chen and Omer Levy and Mike Lewis and
               Luke Zettlemoyer and Veselin Stoyanov},
    journal = {arXiv preprint arXiv:1907.11692},
    year    = {2019},
}
```