From 25a9e2455e0cd9066837f3ff1b23696e65080cc9 Mon Sep 17 00:00:00 2001
From: Jose Camacho Collados
Date: Tue, 3 Jan 2023 17:25:55 +0000
Subject: [PATCH] Update README 2022

---
 README.md | 8 +++++---
 1 file changed, 5 insertions(+), 3 deletions(-)

diff --git a/README.md b/README.md
index 953ff46..dda849a 100644
--- a/README.md
+++ b/README.md
@@ -5,10 +5,10 @@ widget:
 ---
 
 
-# Twitter-roBERTa-base for Sentiment Analysis - UPDATED (2021)
+# Twitter-roBERTa-base for Sentiment Analysis - UPDATED (2022)
 
-This is a roBERTa-base model trained on ~124M tweets from January 2018 to December 2021 (see [here](https://huggingface.co/cardiffnlp/twitter-roberta-base-2021-124m)), and finetuned for sentiment analysis with the TweetEval benchmark.
-The original roBERTa-base model can be found [here](https://huggingface.co/cardiffnlp/twitter-roberta-base-2021-124m) and the original reference paper is [TweetEval](https://github.com/cardiffnlp/tweeteval). This model is suitable for English.
+This is a RoBERTa-base model trained on ~124M tweets from January 2018 to December 2021, and finetuned for sentiment analysis with the TweetEval benchmark.
+The original Twitter-based RoBERTa model can be found [here](https://huggingface.co/cardiffnlp/twitter-roberta-base-2021-124m) and the original reference paper is [TweetEval](https://github.com/cardiffnlp/tweeteval). This model is suitable for English.
 
 - Reference Paper: [TimeLMs paper](https://arxiv.org/abs/2202.03829).
 - Git Repo: [TimeLMs official repository](https://github.com/cardiffnlp/timelms).
@@ -18,6 +18,8 @@ The original roBERTa-base model can be found [here](https://huggingface.co/cardi
 1 -> Neutral;
 2 -> Positive
 
+This sentiment analysis model has been integrated into [TweetNLP](https://github.com/cardiffnlp/tweetnlp). You can access the demo [here](https://tweetnlp.org).
+
 ## Example Pipeline
 ```python
 from transformers import pipeline
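
The second hunk ends just inside the README's "Example Pipeline" block, which is truncated in this patch. For reference, a minimal sketch of how that `transformers` pipeline is typically invoked is shown below. The `model_path` value is an assumption (the model ID is not spelled out in the patch), and the exact example text in the README may differ.

```python
from transformers import pipeline

# Assumed model ID for the repository this README belongs to; adjust if needed.
model_path = "cardiffnlp/twitter-roberta-base-sentiment-latest"

# Build a sentiment-analysis pipeline backed by the fine-tuned RoBERTa model.
sentiment_task = pipeline("sentiment-analysis", model=model_path, tokenizer=model_path)

# Returns a list of dicts with 'label' and 'score' keys for the input text.
print(sentiment_task("Looking forward to the weekend!"))
```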