From cb48c1365bd826bd521f650dc2e0940aee54720c Mon Sep 17 00:00:00 2001
From: patrickvonplaten
Date: Fri, 3 Jun 2022 10:00:20 +0000
Subject: [PATCH] typo: encoder-encoder -> encoder-decoder (#1)

- typo: encoder-encoder -> encoder-decoder (58509d68aee6f8131fbafbf0d4881c71cbe457d3)

Co-authored-by: Daniel Levenson
---
 README.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/README.md b/README.md
index 28ffe11..1910fd8 100644
--- a/README.md
+++ b/README.md
@@ -11,7 +11,7 @@ Disclaimer: The team releasing BART did not write a model card for this model so
 
 ## Model description
 
-BART is a transformer encoder-encoder (seq2seq) model with a bidirectional (BERT-like) encoder and an autoregressive (GPT-like) decoder. BART is pre-trained by (1) corrupting text with an arbitrary noising function, and (2) learning a model to reconstruct the original text.
+BART is a transformer encoder-decoder (seq2seq) model with a bidirectional (BERT-like) encoder and an autoregressive (GPT-like) decoder. BART is pre-trained by (1) corrupting text with an arbitrary noising function, and (2) learning a model to reconstruct the original text.
 
 BART is particularly effective when fine-tuned for text generation (e.g. summarization, translation) but also works well for comprehension tasks (e.g. text classification, question answering).
 
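
The corrected paragraph describes BART as an encoder-decoder trained on a denoising objective. As a minimal sketch of what that looks like in practice (not part of the patch itself), the snippet below loads a BART checkpoint with the Hugging Face `transformers` library and asks the seq2seq model to reconstruct a masked input; the checkpoint name `facebook/bart-large` is an assumption, since the patched model card does not name one here.

```python
# Minimal sketch, assuming the "facebook/bart-large" checkpoint; substitute
# whichever BART checkpoint this model card actually describes.
from transformers import BartForConditionalGeneration, BartTokenizer

tokenizer = BartTokenizer.from_pretrained("facebook/bart-large")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-large")

# Corrupt the input with a <mask> token and let the encoder-decoder
# reconstruct it, mirroring the denoising pre-training objective.
inputs = tokenizer("UN Chief Says There Is No <mask> in Syria", return_tensors="pt")
generated_ids = model.generate(inputs["input_ids"], max_length=20)
print(tokenizer.decode(generated_ids[0], skip_special_tokens=True))
```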