Compare commits
10 Commits: dada2f5164...9086d78af1
| Author | SHA1 | Date |
|---|---|---|
| | 9086d78af1 | |
| | e81df37a43 | |
| | 7420447fdf | |
| | 574dbac1d7 | |
| | 297ea39590 | |
| | 0d3425fc2e | |
| | 9a6ead55b4 | |
| | 6dc726042b | |
| | ef7c55665d | |
| | 49c996995e | |
@@ -25,3 +25,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zstandard filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text
model.safetensors filter=lfs diff=lfs merge=lfs -text
README.md (45 lines changed)
@@ -14,8 +14,9 @@ language:
- th
- tr
- ur
- vi
- zh
license: mit
tags:
- zero-shot-classification
- text-classification
@@ -33,15 +34,31 @@ widget:
---

# Multilingual mDeBERTa-v3-base-mnli-xnli

## Model description
This multilingual model can perform natural language inference (NLI) on 100 languages and is therefore also suitable for multilingual zero-shot classification. The underlying model was pre-trained by Microsoft on the [CC100 multilingual dataset](https://huggingface.co/datasets/cc100). It was then fine-tuned on the [XNLI dataset](https://huggingface.co/datasets/xnli), which contains hypothesis-premise pairs from 15 languages, as well as the English [MNLI dataset](https://huggingface.co/datasets/multi_nli).

As of December 2021, mDeBERTa-v3-base is the best-performing multilingual base-sized transformer model, introduced by Microsoft in [this paper](https://arxiv.org/pdf/2111.09543.pdf).
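Zero-shot classification with this model is essentially NLI in disguise: each candidate label is rewritten as a hypothesis and the entailment probability serves as the label score. The minimal sketch below illustrates that mechanism by hand; the hypothesis wording used here is an illustrative assumption, not necessarily the template the zero-shot pipeline uses internally.

```python
# Minimal sketch of NLI-based zero-shot classification (illustrative only).
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "MoritzLaurer/mDeBERTa-v3-base-mnli-xnli"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name).eval()

premise = "Angela Merkel ist eine Politikerin in Deutschland und Vorsitzende der CDU"
labels = ["politics", "economy", "entertainment", "environment"]

scores = {}
for label in labels:
    # Hypothetical template; the pipeline's internal template may differ.
    hypothesis = f"This example is about {label}."
    inputs = tokenizer(premise, hypothesis, truncation=True, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits[0]
    # Label order for this model: [entailment, neutral, contradiction]
    probs = torch.softmax(logits, -1)
    scores[label] = probs[0].item()  # entailment probability as the label score

print(sorted(scores.items(), key=lambda x: x[1], reverse=True))
```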

If you are looking for a smaller, faster (but less performant) model, you can try [multilingual-MiniLMv2-L6-mnli-xnli](https://huggingface.co/MoritzLaurer/multilingual-MiniLMv2-L6-mnli-xnli).

## Intended uses & limitations

### How to use the model

#### Simple zero-shot classification pipeline
```python
from transformers import pipeline
classifier = pipeline("zero-shot-classification", model="MoritzLaurer/mDeBERTa-v3-base-mnli-xnli")

sequence_to_classify = "Angela Merkel ist eine Politikerin in Deutschland und Vorsitzende der CDU"
candidate_labels = ["politics", "economy", "entertainment", "environment"]
output = classifier(sequence_to_classify, candidate_labels, multi_label=False)
print(output)
```
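The pipeline also accepts a custom `hypothesis_template`, which can sometimes help when the input or the labels are not in English (this has not been benchmarked here). A small sketch, reusing the `classifier`, `sequence_to_classify`, and `candidate_labels` from the snippet above; the German template is only an example:

```python
# Optional: customize the hypothesis template (sketch).
# "In diesem Text geht es um {}." is German for "This text is about {}."
hypothesis_template = "In diesem Text geht es um {}."
output = classifier(
    sequence_to_classify,
    candidate_labels,
    hypothesis_template=hypothesis_template,
    multi_label=False,
)
print(output)
```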
#### NLI use-case
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch
device = torch.device("cuda") if torch.cuda.is_available() else torch.device("cpu")

model_name = "MoritzLaurer/mDeBERTa-v3-base-mnli-xnli"
tokenizer = AutoTokenizer.from_pretrained(model_name)
```
@@ -53,10 +70,8 @@ hypothesis = "Emmanuel Macron is the President of France"

```python
input = tokenizer(premise, hypothesis, truncation=True, return_tensors="pt")
output = model(input["input_ids"].to(device))  # device = "cuda:0" or "cpu"
prediction = torch.softmax(output["logits"][0], -1).tolist()

label_names = ["entailment", "neutral", "contradiction"]
prediction = {name: round(float(pred) * 100, 1) for pred, name in zip(prediction, label_names)}

print(prediction)
```
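For more than one premise-hypothesis pair, the tokenizer can batch the pairs directly. A self-contained sketch; the example pairs below are made up for illustration:

```python
# Batched NLI inference (sketch; example pairs are invented for illustration).
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "MoritzLaurer/mDeBERTa-v3-base-mnli-xnli"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name).eval()

premises = [
    "Angela Merkel ist eine Politikerin in Deutschland und Vorsitzende der CDU",
    "The new movie was released last week",
]
hypotheses = [
    "Emmanuel Macron is the President of France",
    "This text is about entertainment",
]

# premises[i] is paired with hypotheses[i]; padding makes the batch rectangular.
batch = tokenizer(premises, hypotheses, truncation=True, padding=True, return_tensors="pt")
with torch.no_grad():
    logits = model(**batch).logits
probs = torch.softmax(logits, -1)

label_names = ["entailment", "neutral", "contradiction"]
for row in probs:
    print({name: round(float(p) * 100, 1) for name, p in zip(label_names, row)})
```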
@@ -80,18 +95,18 @@ The model was evaluated on the XNLI test set on 15 languages (5010 texts per language).

Also note that if other multilingual models on the model hub claim performance of around 90% on languages other than English, the authors have most likely made a mistake during testing, since none of the latest papers show a multilingual average performance of more than a few points above 80% on XNLI (see [here](https://arxiv.org/pdf/2111.09543.pdf) or [here](https://arxiv.org/pdf/1911.02116.pdf)).
average | ar | bg | de | el | en | es | fr | hi | ru | sw | th | tr | ur | vi | zh
--------|----|----|----|----|----|----|----|----|----|----|----|----|----|----|----
0.808 | 0.802 | 0.829 | 0.825 | 0.826 | 0.883 | 0.845 | 0.834 | 0.771 | 0.813 | 0.748 | 0.793 | 0.807 | 0.740 | 0.795 | 0.8116
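As a rough illustration of how such per-language scores can be computed, the sketch below evaluates the model on one XNLI test split with the `datasets` library. The exact evaluation setup behind the table above is not documented here, so treat this as an assumption-laden example rather than the original procedure:

```python
# Rough sketch of per-language XNLI accuracy (not the exact setup used for the table above).
import torch
from datasets import load_dataset
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "MoritzLaurer/mDeBERTa-v3-base-mnli-xnli"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name).eval()

# XNLI labels: 0 = entailment, 1 = neutral, 2 = contradiction (same order as this model's outputs).
dataset = load_dataset("xnli", "de", split="test")  # one language; 5010 examples

correct = 0
for example in dataset:
    inputs = tokenizer(example["premise"], example["hypothesis"], truncation=True, return_tensors="pt")
    with torch.no_grad():
        pred = model(**inputs).logits.argmax(-1).item()
    correct += int(pred == example["label"])

print("accuracy:", correct / len(dataset))
```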
## Limitations and bias
Please consult the original DeBERTa-V3 paper and literature on different NLI datasets for potential biases.

## Citation
If you use this model, please cite: Laurer, Moritz, Wouter van Atteveldt, Andreu Salleras Casas, and Kasper Welbers. 2022. ‘Less Annotating, More Classifying – Addressing the Data Scarcity Issue of Supervised Machine Learning with Deep Transfer Learning and BERT-NLI’. Preprint, June. Open Science Framework. https://osf.io/74b8k.

## Ideas for cooperation or questions?
If you have questions or ideas for cooperation, contact me at m{dot}laurer{at}vu{dot}nl or [LinkedIn](https://www.linkedin.com/in/moritz-laurer/)

## Debugging and issues
Note that DeBERTa-v3 was released in late 2021, and older versions of HF Transformers seem to have issues running the model (e.g. resulting in an issue with the tokenizer). Using Transformers >= 4.13 might solve some issues. Also note that mDeBERTa currently does not support FP16; see here: https://github.com/microsoft/DeBERTa/issues/77
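A quick way to check the installed version before debugging tokenizer errors (a minimal sketch; the 4.13 threshold comes from the note above):

```python
# Sanity check for the Transformers version mentioned above (sketch).
import transformers
from packaging import version

print(transformers.__version__)
assert version.parse(transformers.__version__) >= version.parse("4.13.0"), \
    "Older Transformers versions may fail to load the DeBERTa-v3 tokenizer; consider upgrading."

# When fine-tuning, keep FP16 disabled (e.g. TrainingArguments(fp16=False)),
# since mDeBERTa currently does not support FP16.
```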