Update README.md

Hartmann 2022-10-28 05:20:11 +00:00 committed by huggingface-web
parent d238071737
commit 22171ebc93
1 changed file with 8 additions and 2 deletions

@@ -61,7 +61,7 @@ b) Run emotion model on multiple examples and full datasets (e.g., .csv files) o
 Please reach out to [j.p.hartmann@rug.nl](mailto:j.p.hartmann@rug.nl) if you have any questions or feedback.
-Thanks to Samuel Domdey and chrsiebert for their support in making this model available.
+Thanks to Samuel Domdey and [chrsiebert](https://huggingface.co/siebert) for their support in making this model available.
 # Reference ✅
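
For context on the hunk above: the README step it references ("Run emotion model on multiple examples and full datasets (e.g., .csv files)") corresponds to a standard `transformers` text-classification pipeline. A minimal sketch, assuming the Hub model id `j-hartmann/emotion-english-distilroberta-base` (the id itself does not appear in this diff):

```python
# Minimal usage sketch (not part of this commit): classify emotions with
# the transformers pipeline. The Hub model id below is an assumption; it
# is not shown in the diff itself.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="j-hartmann/emotion-english-distilroberta-base",  # assumed model id
    top_k=None,  # return scores for every emotion class, not just the top label
)

# a) single example
print(classifier("I love this!"))

# b) multiple examples, e.g. a text column loaded from a .csv file
texts = ["This is such great news!", "I'm not sure how I feel about this."]
for text, scores in zip(texts, classifier(texts)):
    print(text, scores)
```

`top_k=None` requests scores for all emotion classes (the 1/7 random-chance baseline in the hunk below implies seven labels) rather than only the top prediction.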
@@ -95,4 +95,10 @@ Please find an overview of the datasets used for training below. All datasets co
 |MELD, Poria et al. (2019)|Yes|Yes|Yes|Yes|Yes|Yes|Yes|
 |SemEval-2018, EI-reg, Mohammad et al. (2018) |Yes|-|Yes|Yes|-|Yes|-|
 The model is trained on a balanced subset from the datasets listed above (2,811 observations per emotion, i.e., nearly 20k observations in total). 80% of this balanced subset is used for training and 20% for evaluation. The evaluation accuracy is 66% (vs. the random-chance baseline of 1/7 = 14%).
+# Scientific Applications 📖
+Below you can find a list of papers using "Emotion English DistilRoBERTa-base". If you would like your paper to be added to the list, please send me an email.
+Rozado, D., Hughes, R., & Halberstadt, J. (2022). Longitudinal analysis of sentiment and emotion in news media headlines using automated labelling with Transformer language models. PLoS ONE, 17(10), e0276367.