# google/pegasus-xsum

A summarization model repository from Hugging Face. License: none specified.

## Evaluation results

All results below are verified metrics for `google/pegasus-xsum` on the summarization task (language: en; tags: summarization).

**samsum** (config: `samsum`, split: `train`)

| Metric | Value |
| --- | --- |
| ROUGE-1 | 21.8096 |
| ROUGE-2 | 4.2525 |
| ROUGE-L | 17.4469 |
| ROUGE-LSUM | 18.8907 |
| loss | 3.0317161083221436 |
| gen_len | 20.3122 |

**xsum** (config: `default`, split: `test`)

| Metric | Value |
| --- | --- |
| ROUGE-1 | 46.8623 |
| ROUGE-2 | 24.4533 |
| ROUGE-L | 39.0548 |
| ROUGE-LSUM | 39.0994 |
| loss | 1.5717021226882935 |
| gen_len | 22.8821 |

**cnn_dailymail** (config: `3.0.0`, split: `test`)

| Metric | Value |
| --- | --- |
| ROUGE-1 | 22.2062 |
| ROUGE-2 | 7.6701 |
| ROUGE-L | 15.4046 |
| ROUGE-LSUM | 19.2182 |
| loss | 2.681241273880005 |
| gen_len | 25.0234 |
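The ROUGE scores above come from Hugging Face's verified evaluation harness. For intuition only, here is a minimal pure-Python sketch of ROUGE-1 F1 (unigram overlap, no stemming or tokenization refinements); it is an illustration, not the harness that produced the numbers above:

```python
from collections import Counter

def rouge1_f1(reference: str, candidate: str) -> float:
    """Illustrative ROUGE-1 F1: unigram overlap between a reference
    summary and a candidate summary (lowercased whitespace tokens)."""
    ref_tokens = reference.lower().split()
    cand_tokens = candidate.lower().split()
    if not ref_tokens or not cand_tokens:
        return 0.0
    # Clipped overlap: each unigram counts at most as often as it
    # appears in either string.
    overlap = sum((Counter(ref_tokens) & Counter(cand_tokens)).values())
    precision = overlap / len(cand_tokens)
    recall = overlap / len(ref_tokens)
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

print(rouge1_f1("the cat sat on the mat", "the cat lay on the mat"))
```

Real evaluations typically use a library implementation (with stemming and ROUGE-2/ROUGE-L variants) rather than a hand-rolled metric.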

## Pegasus Models

See Docs: here

Original TF 1 code here

Authors: Jingqing Zhang, Yao Zhao, Mohammad Saleh and Peter J. Liu on Dec 18, 2019

Maintained by: @sshleifer

Task: Summarization

The following is copied from the authors' README.

## Mixed & Stochastic Checkpoints

We train a pegasus model with sampled gap sentence ratios on both C4 and HugeNews, and stochastically sample important sentences. The updated results are reported in the table below.

Scores are ROUGE-1/ROUGE-2/ROUGE-L.

| dataset | C4 | HugeNews | Mixed & Stochastic |
| --- | --- | --- | --- |
| xsum | 45.20/22.06/36.99 | 47.21/24.56/39.25 | 47.60/24.83/39.64 |
| cnn_dailymail | 43.90/21.20/40.76 | 44.17/21.47/41.11 | 44.16/21.56/41.30 |
| newsroom | 45.07/33.39/41.28 | 45.15/33.51/41.33 | 45.98/34.20/42.18 |
| multi_news | 46.74/17.95/24.26 | 47.52/18.72/24.91 | 47.65/18.75/24.95 |
| gigaword | 38.75/19.96/36.14 | 39.12/19.86/36.24 | 39.65/20.47/36.76 |
| wikihow | 43.07/19.70/34.79 | 41.35/18.51/33.42 | 46.39/22.12/38.41 * |
| reddit_tifu | 26.54/8.94/21.64 | 26.63/9.01/21.60 | 27.99/9.81/22.94 |
| big_patent | 53.63/33.16/42.25 | 53.41/32.89/42.07 | 52.29/33.08/41.66 * |
| arxiv | 44.70/17.27/25.80 | 44.67/17.18/25.73 | 44.21/16.95/25.67 |
| pubmed | 45.49/19.90/27.69 | 45.09/19.56/27.42 | 45.97/20.15/28.25 |
| aeslc | 37.69/21.85/36.84 | 37.40/21.22/36.45 | 37.68/21.25/36.51 |
| billsum | 57.20/39.56/45.80 | 57.31/40.19/45.82 | 59.67/41.58/47.59 |

The "Mixed & Stochastic" model has the following changes:

  • trained on both C4 and HugeNews (the dataset mixture is weighted by the number of examples in each).
  • trained for 1.5M steps instead of 500k (we observed slower convergence on pretraining perplexity).
  • the model uniformly samples a gap sentence ratio between 15% and 45%.
  • important sentences are sampled with 20% uniform noise added to the importance scores.
  • the SentencePiece tokenizer is updated to be able to encode the newline character.
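
The sampling described above can be sketched as follows. This is an illustration only, not the authors' pretraining code: `select_gap_sentences` is a hypothetical helper, the importance scores are assumed to be given (e.g. ROUGE of each sentence against the rest of the document), and "20% uniform noise" is read here as a multiplicative factor drawn from [0.8, 1.2]:

```python
import random

def select_gap_sentences(sentences, scores, rng=None):
    """Sketch of the 'Mixed & Stochastic' gap-sentence selection:
    draw a gap-sentence ratio uniformly from [0.15, 0.45], perturb
    each importance score with 20% uniform noise, then mask the
    top-scoring sentences."""
    rng = rng or random.Random(0)
    gsr = rng.uniform(0.15, 0.45)                  # sampled gap-sentence ratio
    n_masked = max(1, round(gsr * len(sentences)))
    noisy = [s * rng.uniform(0.8, 1.2) for s in scores]  # 20% uniform noise
    ranked = sorted(range(len(sentences)), key=lambda i: noisy[i], reverse=True)
    return sorted(ranked[:n_masked])               # indices of sentences to mask

sents = ["s0", "s1", "s2", "s3", "s4", "s5"]
scores = [0.9, 0.1, 0.7, 0.3, 0.8, 0.2]
print(select_gap_sentences(sents, scores))
```

The randomness in both the ratio and the scores means different pretraining examples mask different sentences, which is the "stochastic" part of the checkpoint's name.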

(*) the numbers for the wikihow and big_patent datasets are not comparable because of changes in tokenization and data:

  • the wikihow dataset contains newline characters, which are useful for paragraph segmentation; the C4 and HugeNews models' SentencePiece tokenizer doesn't encode newlines and therefore loses this information.
  • we updated the BigPatent dataset to preserve casing; some format cleaning was also changed (please refer to the change in TFDS).

## Citation



```bibtex
@misc{zhang2019pegasus,
    title={PEGASUS: Pre-training with Extracted Gap-sentences for Abstractive Summarization},
    author={Jingqing Zhang and Yao Zhao and Mohammad Saleh and Peter J. Liu},
    year={2019},
    eprint={1912.08777},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}
```