philschmid/bart-large-cnn-samsum is a model repository hosted on the Hugging Face Hub. License: MIT
README.md


language: en
tags:
- sagemaker
- bart
- summarization
datasets:
- samsum
widget:
- text: |
    Jeff: Can I train a 🤗 Transformers model on Amazon SageMaker?
    Philipp: Sure you can use the new Hugging Face Deep Learning Container.
    Jeff: ok.
    Jeff: and how can I get started?
    Jeff: where can I find documentation?
    Philipp: ok, ok you can find everything here. https://huggingface.co/blog/the-partnership-amazon-sagemaker-and-hugging-face
model-index:
- name: bart-large-cnn-samsum
  results:
  - task: {name: Abstractive Text Summarization, type: abstractive-text-summarization}
    dataset: {name: 'SAMSum Corpus: A Human-annotated Dialogue Dataset for Abstractive Summarization', type: samsum}
    metrics:
    - {name: Validation ROUGE-1, type: rouge-1, value: 42.621}
    - {name: Validation ROUGE-2, type: rouge-2, value: 21.9825}
    - {name: Validation ROUGE-L, type: rouge-l, value: 33.034}
    - {name: Test ROUGE-1, type: rouge-1, value: 41.3174}
    - {name: Test ROUGE-2, type: rouge-2, value: 20.8716}
    - {name: Test ROUGE-L, type: rouge-l, value: 32.1337}
  - task: {type: summarization, name: Summarization}
    dataset: {name: samsum, type: samsum, config: samsum, split: test}
    metrics:
    - {name: ROUGE-1, type: rouge, value: 41.3282, verified: true}
    - {name: ROUGE-2, type: rouge, value: 20.8755, verified: true}
    - {name: ROUGE-L, type: rouge, value: 32.1353, verified: true}
    - {name: ROUGE-LSUM, type: rouge, value: 38.401, verified: true}
    - {name: loss, type: loss, value: 1.4297215938568115, verified: true}
    - {name: gen_len, type: gen_len, value: 60.0757, verified: true}

bart-large-cnn-samsum

This model was trained using Amazon SageMaker and the new Hugging Face Deep Learning container.

For more information on training Hugging Face models with Amazon SageMaker, see the partnership announcement: https://huggingface.co/blog/the-partnership-amazon-sagemaker-and-hugging-face

Hyperparameters

{
    "dataset_name": "samsum",
    "do_eval": true,
    "do_predict": true,
    "do_train": true,
    "fp16": true,
    "learning_rate": 5e-05,
    "model_name_or_path": "facebook/bart-large-cnn",
    "num_train_epochs": 3,
    "output_dir": "/opt/ml/model",
    "per_device_eval_batch_size": 4,
    "per_device_train_batch_size": 4,
    "predict_with_generate": true,
    "seed": 7
}
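
The card lists only the hyperparameters, not the launch code. As a rough sketch, they could be passed to a SageMaker HuggingFace estimator running the Transformers run_summarization.py example; the entry point, source directory, instance type, and framework versions below are assumptions, not values taken from this repository.

# Hypothetical launch sketch; entry point, source dir, instance type, and
# DLC versions are illustrative assumptions, not from this model card.
import sagemaker
from sagemaker.huggingface import HuggingFace

role = sagemaker.get_execution_role()  # assumes an existing SageMaker execution role

hyperparameters = {
    "dataset_name": "samsum",
    "do_eval": True,
    "do_predict": True,
    "do_train": True,
    "fp16": True,
    "learning_rate": 5e-05,
    "model_name_or_path": "facebook/bart-large-cnn",
    "num_train_epochs": 3,
    "output_dir": "/opt/ml/model",
    "per_device_eval_batch_size": 4,
    "per_device_train_batch_size": 4,
    "predict_with_generate": True,
    "seed": 7,
}

huggingface_estimator = HuggingFace(
    entry_point="run_summarization.py",             # assumed entry point
    source_dir="./examples/pytorch/summarization",  # assumed source dir
    instance_type="ml.p3.2xlarge",                  # assumed instance type
    instance_count=1,
    role=role,
    transformers_version="4.6.1",                   # assumed framework versions
    pytorch_version="1.7.1",
    py_version="py36",
    hyperparameters=hyperparameters,
)

huggingface_estimator.fit()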

Usage

from transformers import pipeline

# Load the summarization pipeline with this model
summarizer = pipeline("summarization", model="philschmid/bart-large-cnn-samsum")

conversation = '''Jeff: Can I train a 🤗 Transformers model on Amazon SageMaker?
Philipp: Sure you can use the new Hugging Face Deep Learning Container.
Jeff: ok.
Jeff: and how can I get started?
Jeff: where can I find documentation?
Philipp: ok, ok you can find everything here. https://huggingface.co/blog/the-partnership-amazon-sagemaker-and-hugging-face
'''
summarizer(conversation)
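
For more control over generation (beam search, output length, and so on), the model can also be loaded directly with the Auto classes. This is the generic Transformers pattern rather than anything specific to this card; the generation parameters are illustrative, and the snippet reuses the conversation string from above.

# Sketch of direct usage without the pipeline helper.
# num_beams and max_length are illustrative choices.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("philschmid/bart-large-cnn-samsum")
model = AutoModelForSeq2SeqLM.from_pretrained("philschmid/bart-large-cnn-samsum")

inputs = tokenizer(conversation, return_tensors="pt", truncation=True)
summary_ids = model.generate(**inputs, num_beams=4, max_length=128)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))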

Results

key              value
eval_rouge1      42.621
eval_rouge2      21.9825
eval_rougeL      33.034
eval_rougeLsum   39.6783
test_rouge1      41.3174
test_rouge2      20.8716
test_rougeL      32.1337
test_rougeLsum   38.4149
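
The eval_ and test_ scores above come from the training run's own evaluation. A rough way to sanity-check the test-set numbers is to run the model over the samsum test split and score the outputs with the evaluate library; the snippet below is a sketch under that assumption, and the subset size and generation settings are illustrative, so exact figures will differ from the table.

# Sketch: re-scoring the model on the samsum test split with ROUGE.
from datasets import load_dataset
from transformers import pipeline
import evaluate

summarizer = pipeline("summarization", model="philschmid/bart-large-cnn-samsum")
rouge = evaluate.load("rouge")

# Loading samsum may require the py7zr package; a 100-example subset keeps this quick.
test_set = load_dataset("samsum", split="test").select(range(100))

predictions = [
    out["summary_text"]
    for out in summarizer(test_set["dialogue"], truncation=True, batch_size=8)
]

print(rouge.compute(predictions=predictions, references=test_set["summary"]))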