Update README.md
This commit is contained in:
parent
574dbac1d7
commit
7420447fdf
# Multilingual mDeBERTa-v3-base-mnli-xnli
## Model description
This multilingual model can perform natural language inference (NLI) on 100 languages and is therefore also suitable for multilingual zero-shot classification. The underlying model was pre-trained by Microsoft on the [CC100 multilingual dataset](https://huggingface.co/datasets/cc100). It was then fine-tuned on the [XNLI dataset](https://huggingface.co/datasets/xnli), which contains hypothesis-premise pairs from 15 languages, as well as the English [MNLI dataset](https://huggingface.co/datasets/multi_nli).
As of December 2021, mDeBERTa-base, introduced by Microsoft in [this paper](https://arxiv.org/pdf/2111.09543.pdf), is the best-performing multilingual base-sized transformer model.
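Because of the NLI fine-tuning described above, the model can score a premise–hypothesis pair directly. A minimal sketch, assuming the `transformers` library is installed and the Hub model ID matches this repository (the example premise/hypothesis strings are illustrative):

```python
# Sketch: score a premise-hypothesis pair with the NLI head.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "MoritzLaurer/mDeBERTa-v3-base-mnli-xnli"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

premise = "Angela Merkel ist eine Politikerin in Deutschland und Vorsitzende der CDU"
hypothesis = "Emmanuel Macron is the President of France"

# Tokenize the pair together so the model sees premise and hypothesis jointly.
inputs = tokenizer(premise, hypothesis, truncation=True, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Convert logits to probabilities over the three NLI classes.
prediction = torch.softmax(logits[0], dim=-1)
label_names = ["entailment", "neutral", "contradiction"]
result = {name: round(float(p), 3) for name, p in zip(label_names, prediction)}
print(result)
```

Note that the multilingual pre-training means premise and hypothesis do not need to be in the same language.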
If you are looking for a smaller, faster (but less performant) model, you can try [multilingual-MiniLMv2-L6-mnli-xnli](https://huggingface.co/MoritzLaurer/multilingual-MiniLMv2-L6-mnli-xnli).
### How to use the model
#### Simple zero-shot classification pipeline
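The quickest way to use the model is the `zero-shot-classification` pipeline. A minimal sketch, assuming the `transformers` library is installed (the example text and candidate labels are illustrative):

```python
# Sketch: zero-shot classification via the transformers pipeline.
from transformers import pipeline

classifier = pipeline(
    "zero-shot-classification",
    model="MoritzLaurer/mDeBERTa-v3-base-mnli-xnli",
)

sequence_to_classify = "Angela Merkel ist eine Politikerin in Deutschland und Vorsitzende der CDU"
candidate_labels = ["politics", "economy", "entertainment", "environment"]

# multi_label=False treats the labels as mutually exclusive classes.
output = classifier(sequence_to_classify, candidate_labels, multi_label=False)
print(output)  # dict with "sequence", "labels" (sorted by score), and "scores"
```

Because the input text and the candidate labels can be in different languages, this works for multilingual zero-shot classification out of the box.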