license: apache-2.0
language: en

BART (large-sized model)

BART model pre-trained on the English language. It was introduced in the paper BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension by Lewis et al. and first released in this repository.

Disclaimer: The team releasing BART did not write a model card for this model, so this model card has been written by the Hugging Face team.

Model description

BART is a transformer encoder-decoder (seq2seq) model with a bidirectional (BERT-like) encoder and an autoregressive (GPT-like) decoder. BART is pre-trained by (1) corrupting text with an arbitrary noising function, and (2) learning a model to reconstruct the original text.
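
To make the pre-training objective concrete, here is a minimal sketch of a single denoising step using the Transformers library. The corrupted/original sentence pair is purely illustrative and is not taken from the actual pre-training data or pipeline:

from transformers import BartTokenizer, BartForConditionalGeneration

tokenizer = BartTokenizer.from_pretrained('facebook/bart-large')
model = BartForConditionalGeneration.from_pretrained('facebook/bart-large')

# Corrupted input (a span replaced by the <mask> token) and the original text it came from
corrupted = "My friends are <mask> but they eat too many carbs."
original = "My friends are good people, but they eat too many carbs."

inputs = tokenizer(corrupted, return_tensors="pt")
labels = tokenizer(original, return_tensors="pt").input_ids

# Passing labels makes the model return the cross-entropy loss between its
# reconstruction and the original text
loss = model(**inputs, labels=labels).loss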

BART is particularly effective when fine-tuned for text generation (e.g. summarization, translation) but also works well for comprehension tasks (e.g. text classification, question answering).

Intended uses & limitations

You can use the raw model for text infilling. However, the model is mostly meant to be fine-tuned on a supervised dataset. See the model hub to look for fine-tuned versions for the task that interests you.
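
As a sketch of text infilling with the raw checkpoint (the example sentence and the max_length setting are illustrative choices, not prescribed by the model card), the model can fill a <mask> span by generating a reconstruction of the input:

from transformers import BartTokenizer, BartForConditionalGeneration

tokenizer = BartTokenizer.from_pretrained('facebook/bart-large')
model = BartForConditionalGeneration.from_pretrained('facebook/bart-large')

# Replace a span of the input with the tokenizer's mask token
text = "UN Chief Says There Is No <mask> in Syria"
inputs = tokenizer(text, return_tensors="pt")

# Generate a reconstruction of the input with the masked span filled in
generated_ids = model.generate(inputs["input_ids"], max_length=20)
print(tokenizer.decode(generated_ids[0], skip_special_tokens=True))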

How to use

Here is how to use this model in PyTorch:

from transformers import BartTokenizer, BartModel

# Load the tokenizer and the base encoder-decoder model (no task-specific head)
tokenizer = BartTokenizer.from_pretrained('facebook/bart-large')
model = BartModel.from_pretrained('facebook/bart-large')

# Tokenize the input text and run a forward pass
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
outputs = model(**inputs)

# Hidden states of the decoder's last layer, shape (batch_size, sequence_length, hidden_size)
last_hidden_states = outputs.last_hidden_state
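
The repository also hosts TensorFlow and Flax weights. A minimal TensorFlow sketch, assuming the tensorflow package is installed, mirrors the PyTorch example above:

from transformers import BartTokenizer, TFBartModel

tokenizer = BartTokenizer.from_pretrained('facebook/bart-large')
model = TFBartModel.from_pretrained('facebook/bart-large')

# Tokenize to TensorFlow tensors and run a forward pass
inputs = tokenizer("Hello, my dog is cute", return_tensors="tf")
outputs = model(inputs)

last_hidden_states = outputs.last_hidden_state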

BibTeX entry and citation info

@article{DBLP:journals/corr/abs-1910-13461,
  author    = {Mike Lewis and
               Yinhan Liu and
               Naman Goyal and
               Marjan Ghazvininejad and
               Abdelrahman Mohamed and
               Omer Levy and
               Veselin Stoyanov and
               Luke Zettlemoyer},
  title     = {{BART:} Denoising Sequence-to-Sequence Pre-training for Natural Language
               Generation, Translation, and Comprehension},
  journal   = {CoRR},
  volume    = {abs/1910.13461},
  year      = {2019},
  url       = {http://arxiv.org/abs/1910.13461},
  eprinttype = {arXiv},
  eprint    = {1910.13461},
  timestamp = {Thu, 31 Oct 2019 14:02:26 +0100},
  biburl    = {https://dblp.org/rec/journals/corr/abs-1910-13461.bib},
  bibsource = {dblp computer science bibliography, https://dblp.org}
}