From e0e58cada69244830751866611eb3c71bc4e4266 Mon Sep 17 00:00:00 2001
From: Moritz Laurer
Date: Sat, 12 Mar 2022 21:30:41 +0000
Subject: [PATCH] Update README.md

---
 README.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/README.md b/README.md
index 02cac90..c45a9cf 100644
--- a/README.md
+++ b/README.md
@@ -34,7 +34,7 @@ widget:
 
 # Multilingual mDeBERTa-v3-base-mnli-xnli
 ## Model description
 This multilingual model can perform natural language inference (NLI) on 100 languages and is therefore also suitable for multilingual zero-shot classification. The underlying model was pre-trained by Microsoft on the [CC100 multilingual dataset](https://huggingface.co/datasets/cc100). It was then fine-tuned on the [XNLI dataset](https://huggingface.co/datasets/xnli), which contains hypothesis-premise pairs from 15 languages, as well as the English [MNLI dataset](https://huggingface.co/datasets/multi_nli).
-As of December 2021, mDeBERTa-base is the best performing multilingual transformer (base) model, introduced by Microsoft in [this paper](https://arxiv.org/pdf/2111.09543.pdf).
+As of December 2021, mDeBERTa-base is the best performing multilingual base-sized transformer model, introduced by Microsoft in [this paper](https://arxiv.org/pdf/2111.09543.pdf).
 ## Intended uses & limitations
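The README text in this patch notes that an NLI model is "also suitable for multilingual zero-shot classification". As a rough sketch of that mechanism (not code from the model card itself): each candidate label is turned into a hypothesis such as "This example is about {label}", the model scores each premise–hypothesis pair for entailment, and the entailment logits are normalized across labels with a softmax. The function and the logit values below are hypothetical stand-ins for real model outputs:

```python
import math

def zero_shot_scores(entailment_logits):
    """Turn per-label entailment logits into class probabilities.

    In NLI-based zero-shot classification, the input text (the premise)
    is paired with one hypothesis per candidate label, and the model's
    entailment logit for each pair is collected into this list. A
    numerically stable softmax then normalizes them across labels.
    """
    m = max(entailment_logits)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in entailment_logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical entailment logits for labels ["politics", "sports", "cooking"]
scores = zero_shot_scores([2.1, -0.3, 0.4])
```

The label with the highest entailment logit (here the first one) receives the highest probability, which is how a model fine-tuned only on NLI pairs can classify into arbitrary, unseen label sets.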