Update README.md

# Multilingual mDeBERTa-v3-base-mnli-xnli

## Model description

This multilingual model can perform natural language inference (NLI) in 100 languages and is therefore also suitable for multilingual zero-shot classification. The underlying model was pre-trained by Microsoft on the [CC100 multilingual dataset](https://huggingface.co/datasets/cc100). It was then fine-tuned on the [XNLI dataset](https://huggingface.co/datasets/xnli), which contains hypothesis-premise pairs from 15 languages, as well as the English [MNLI dataset](https://huggingface.co/datasets/multi_nli).

As of December 2021, mDeBERTa-v3-base is the best-performing multilingual base-sized transformer model, introduced by Microsoft in [this paper](https://arxiv.org/pdf/2111.09543.pdf).
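
For multilingual zero-shot classification, the model can be loaded with the Hugging Face `transformers` zero-shot classification pipeline, which turns each candidate label into an NLI hypothesis and uses the entailment probability as the label score. The snippet below is a minimal sketch, assuming the model is published under a repository ID like `MoritzLaurer/mDeBERTa-v3-base-mnli-xnli`; the example sentence and candidate labels are purely illustrative.

```python
# Minimal zero-shot classification sketch.
# The repository ID below is an assumption; substitute the actual model path if it differs.
from transformers import pipeline

classifier = pipeline(
    "zero-shot-classification",
    model="MoritzLaurer/mDeBERTa-v3-base-mnli-xnli",
)

# Illustrative German input with English candidate labels; the XNLI/MNLI fine-tuning
# transfers across the languages the model was pre-trained on.
sequence = "Angela Merkel ist eine Politikerin in Deutschland und Vorsitzende der CDU"
candidate_labels = ["politics", "economy", "entertainment", "environment"]

output = classifier(sequence, candidate_labels, multi_label=False)
print(output)  # {'sequence': ..., 'labels': [...], 'scores': [...]}
```

Because the candidate labels are plain text, they can be phrased in English or in the language of the input, whichever fits the use case.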

## Intended uses & limitations
|
|