typo: encoder-encoder -> encoder-decoder (#1)
- typo: encoder-encoder -> encoder-decoder (58509d68aee6f8131fbafbf0d4881c71cbe457d3)

Co-authored-by: Daniel Levenson <dleve123@users.noreply.huggingface.co>
parent 030bb1bda8
commit cb48c1365b
@@ -11,7 +11,7 @@ Disclaimer: The team releasing BART did not write a model card for this model so
 
 ## Model description
 
-BART is a transformer encoder-encoder (seq2seq) model with a bidirectional (BERT-like) encoder and an autoregressive (GPT-like) decoder. BART is pre-trained by (1) corrupting text with an arbitrary noising function, and (2) learning a model to reconstruct the original text.
+BART is a transformer encoder-decoder (seq2seq) model with a bidirectional (BERT-like) encoder and an autoregressive (GPT-like) decoder. BART is pre-trained by (1) corrupting text with an arbitrary noising function, and (2) learning a model to reconstruct the original text.
 
 BART is particularly effective when fine-tuned for text generation (e.g. summarization, translation) but also works well for comprehension tasks (e.g. text classification, question answering).
 
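The corrected paragraph describes BART's denoising pre-training: corrupt the input, then let the bidirectional encoder and autoregressive decoder reconstruct it. A minimal sketch of that objective follows, assuming the card's checkpoint is published on the Hugging Face Hub as `facebook/bart-large` (the exact repo id is not shown in this diff) and that `transformers` and PyTorch are installed:

```python
from transformers import BartTokenizer, BartForConditionalGeneration

# Assumed Hub id: the diff edits a BART model card but does not name the repo;
# "facebook/bart-large" is used here as an illustrative checkpoint.
tokenizer = BartTokenizer.from_pretrained("facebook/bart-large")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-large")

# Mimic the denoising objective: corrupt the input with <mask> and let the
# encoder-decoder (seq2seq) model generate a reconstruction of the text.
text = "UN Chief Says There Is No <mask> in Syria"
inputs = tokenizer(text, return_tensors="pt")
generated_ids = model.generate(inputs["input_ids"], max_length=20)
print(tokenizer.decode(generated_ids[0], skip_special_tokens=True))
```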
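For the fine-tuned text-generation use the card mentions (e.g. summarization), a short usage sketch; the checkpoint `facebook/bart-large-cnn` (a summarization fine-tune of BART) is assumed for illustration and is not named anywhere in this commit:

```python
from transformers import pipeline

# Hypothetical downstream checkpoint, chosen only to illustrate the
# summarization use case described in the model card.
summarizer = pipeline("summarization", model="facebook/bart-large-cnn")

article = (
    "BART combines a bidirectional encoder with an autoregressive decoder. "
    "It is pre-trained by corrupting text with a noising function and "
    "learning to reconstruct the original sequence, which makes it a strong "
    "starting point for generation tasks such as summarization."
)
print(summarizer(article, max_length=40, min_length=10)[0]["summary_text"])
```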