---
language: en
datasets:
- xsum
tags:
- summarization
---

### Pegasus Models
See docs: [here](https://huggingface.co/transformers/master/model_doc/pegasus.html)

Original TF 1 code [here](https://github.com/google-research/pegasus)

Authors: Jingqing Zhang, Yao Zhao, Mohammad Saleh and Peter J. Liu on Dec 18, 2019

Maintained by: [@sshleifer](https://twitter.com/sam_shleifer)

Task: Summarization

The following is copied from the authors' README.

# Mixed & Stochastic Checkpoints

We train a pegasus model with sampled gap sentence ratios on both C4 and HugeNews, and stochastically sample important sentences. The updated results (ROUGE-1/ROUGE-2/ROUGE-L) are reported in this table.

| dataset | C4 | HugeNews | Mixed & Stochastic |
| ---- | ---- | ---- | ---- |
| xsum | 45.20/22.06/36.99 | 47.21/24.56/39.25 | 47.60/24.83/39.64 |
| cnn_dailymail | 43.90/21.20/40.76 | 44.17/21.47/41.11 | 44.16/21.56/41.30 |
| newsroom | 45.07/33.39/41.28 | 45.15/33.51/41.33 | 45.98/34.20/42.18 |
| multi_news | 46.74/17.95/24.26 | 47.52/18.72/24.91 | 47.65/18.75/24.95 |
| gigaword | 38.75/19.96/36.14 | 39.12/19.86/36.24 | 39.65/20.47/36.76 |
| wikihow | 43.07/19.70/34.79 | 41.35/18.51/33.42 | 46.39/22.12/38.41 * |
| reddit_tifu | 26.54/8.94/21.64 | 26.63/9.01/21.60 | 27.99/9.81/22.94 |
| big_patent | 53.63/33.16/42.25 | 53.41/32.89/42.07 | 52.29/33.08/41.66 * |
| arxiv | 44.70/17.27/25.80 | 44.67/17.18/25.73 | 44.21/16.95/25.67 |
| pubmed | 45.49/19.90/27.69 | 45.09/19.56/27.42 | 45.97/20.15/28.25 |
| aeslc | 37.69/21.85/36.84 | 37.40/21.22/36.45 | 37.68/21.25/36.51 |
| billsum | 57.20/39.56/45.80 | 57.31/40.19/45.82 | 59.67/41.58/47.59 |

The "Mixed & Stochastic" model has the following changes (from pegasus-large in the paper):
- trained on both C4 and HugeNews (the dataset mixture is weighted by the number of examples in each).
- trained for 1.5M steps instead of 500k (we observe slower convergence on pretraining perplexity).
- the model uniformly samples a gap sentence ratio between 15% and 45%.
- important sentences are sampled by adding 20% uniform noise to the importance scores.
- the SentencePiece tokenizer is updated to be able to encode newline characters.

(*) the numbers for the wikihow and big_patent datasets are not comparable because of changes in tokenization and data:
- the wikihow dataset contains newline characters, which are useful for paragraph segmentation; the SentencePiece tokenizer of the C4 and HugeNews models doesn't encode newlines and loses this information.
- we updated the BigPatent dataset to preserve casing; some format cleanings were also changed (please refer to the change in TFDS).
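The card itself doesn't include a usage snippet, so here is a minimal sketch of running this checkpoint for summarization with the `transformers` library. The checkpoint name `google/pegasus-xsum` and the example input are assumptions (based on the `xsum` dataset tag); substitute the actual model ID for this card.

```python
# Minimal usage sketch. The model ID below is an assumption; replace it with
# the checkpoint name that this card is published under.
from transformers import PegasusForConditionalGeneration, PegasusTokenizer

model_name = "google/pegasus-xsum"  # assumed checkpoint name
tokenizer = PegasusTokenizer.from_pretrained(model_name)
model = PegasusForConditionalGeneration.from_pretrained(model_name)

text = (
    "PG&E stated it scheduled the blackouts in response to forecasts for high "
    "winds amid dry conditions. The aim is to reduce the risk of wildfires."
)

# Tokenize the article, generate a summary with beam search, and decode it.
batch = tokenizer(text, truncation=True, padding="longest", return_tensors="pt")
summary_ids = model.generate(**batch, num_beams=4, max_length=64)
print(tokenizer.batch_decode(summary_ids, skip_special_tokens=True)[0])
```

This follows the generic `transformers` seq2seq API; generation settings such as `num_beams` and `max_length` are illustrative defaults, not values prescribed by the authors.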
Citation
```
@misc{zhang2019pegasus,
    title={PEGASUS: Pre-training with Extracted Gap-sentences for Abstractive Summarization},
    author={Jingqing Zhang and Yao Zhao and Mohammad Saleh and Peter J. Liu},
    year={2019},
    eprint={1912.08777},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}
```