Compare commits
No commits in common. "9086d78af18a62546cf05a768934d291684d6309" and "dada2f51640768682a07331e7c244c0d25e9dc85" have entirely different histories.
9086d78af1 ... dada2f5164
.gitattributes
@@ -25,4 +25,3 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zstandard filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text
model.safetensors filter=lfs diff=lfs merge=lfs -text
README.md (43 changed lines)
@@ -14,9 +14,8 @@ language:
- th
- tr
- ur
- vi
- vu
- zh
license: mit
tags:
- zero-shot-classification
- text-classification
@@ -34,31 +33,15 @@ widget:
---
# Multilingual mDeBERTa-v3-base-mnli-xnli
## Model description
This multilingual model can perform natural language inference (NLI) on 100 languages and is therefore also suitable for multilingual zero-shot classification. The underlying model was pre-trained by Microsoft on the [CC100 multilingual dataset](https://huggingface.co/datasets/cc100). It was then fine-tuned on the [XNLI dataset](https://huggingface.co/datasets/xnli), which contains hypothesis-premise pairs from 15 languages, as well as the English [MNLI dataset](https://huggingface.co/datasets/multi_nli).
As of December 2021, mDeBERTa-base is the best performing multilingual base-sized transformer model, introduced by Microsoft in [this paper](https://arxiv.org/pdf/2111.09543.pdf).
If you are looking for a smaller, faster (but less performant) model, you can try [multilingual-MiniLMv2-L6-mnli-xnli](https://huggingface.co/MoritzLaurer/multilingual-MiniLMv2-L6-mnli-xnli).
### How to use the model
#### Simple zero-shot classification pipeline
```python
from transformers import pipeline
classifier = pipeline("zero-shot-classification", model="MoritzLaurer/mDeBERTa-v3-base-mnli-xnli")

sequence_to_classify = "Angela Merkel ist eine Politikerin in Deutschland und Vorsitzende der CDU"
candidate_labels = ["politics", "economy", "entertainment", "environment"]
output = classifier(sequence_to_classify, candidate_labels, multi_label=False)
print(output)
```
#### NLI use-case
## Intended uses & limitations
#### How to use the model
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch
device = torch.device("cuda") if torch.cuda.is_available() else torch.device("cpu")

model_name = "MoritzLaurer/mDeBERTa-v3-base-mnli-xnli"
tokenizer = AutoTokenizer.from_pretrained(model_name)
@@ -70,8 +53,10 @@ hypothesis = "Emmanuel Macron is the President of France"
input = tokenizer(premise, hypothesis, truncation=True, return_tensors="pt")
output = model(input["input_ids"].to(device)) # device = "cuda:0" or "cpu"
prediction = torch.softmax(output["logits"][0], -1).tolist()

label_names = ["entailment", "neutral", "contradiction"]
prediction = {name: round(float(pred) * 100, 1) for pred, name in zip(prediction, label_names)}

print(prediction)
```
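Because the diff above omits a few lines of the NLI example (the model-loading call and the premise definition), here is a self-contained sketch of the same flow. The model-loading line and the example premise are assumptions added for illustration; only the hypothesis comes from the hunk context.

```python
# Self-contained sketch of the NLI snippet above. The from_pretrained() call for the
# model and the premise string are assumptions (the diff does not show them); the
# hypothesis is taken from the hunk context.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

device = torch.device("cuda") if torch.cuda.is_available() else torch.device("cpu")
model_name = "MoritzLaurer/mDeBERTa-v3-base-mnli-xnli"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name).to(device)

premise = "Emmanuel Macron is the head of state of France."  # assumed example premise
hypothesis = "Emmanuel Macron is the President of France"

inputs = tokenizer(premise, hypothesis, truncation=True, return_tensors="pt").to(device)
with torch.no_grad():
    logits = model(**inputs).logits
probs = torch.softmax(logits[0], -1).tolist()

label_names = ["entailment", "neutral", "contradiction"]
print({name: round(float(p) * 100, 1) for p, name in zip(probs, label_names)})
```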
@@ -95,18 +80,18 @@ The model was evaluated on the XNLI test set on 15 languages (5010 texts per lan

Also note that if other multilingual models on the model hub claim performance of around 90% on languages other than English, the authors have most likely made a mistake during testing, since none of the latest papers shows a multilingual average performance of more than a few points above 80% on XNLI (see [here](https://arxiv.org/pdf/2111.09543.pdf) or [here](https://arxiv.org/pdf/1911.02116.pdf)).
average | ar | bg | de | el | en | es | fr | hi | ru | sw | th | tr | ur | vi | zh
average | ar | bg | de | el | en | es | fr | hi | ru | sw | th | tr | ur | vu | zh
---------|----------|---------|----------|----------|----------|----------|----------|----------|----------|----------|----------|----------|----------|----------
0.808 | 0.802 | 0.829 | 0.825 | 0.826 | 0.883 | 0.845 | 0.834 | 0.771 | 0.813 | 0.748 | 0.793 | 0.807 | 0.740 | 0.795 | 0.8116
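For readers who want to check numbers like these, the following is a minimal evaluation sketch rather than the authors' original script: it assumes the `xnli` dataset configs on the Hugging Face Hub and the label order (entailment, neutral, contradiction) stated above, and computes plain accuracy for one language.

```python
# Minimal sketch of a per-language XNLI accuracy check (assumed setup, not the
# evaluation code used for the table above).
import torch
from datasets import load_dataset
from transformers import AutoTokenizer, AutoModelForSequenceClassification

device = "cuda" if torch.cuda.is_available() else "cpu"
model_name = "MoritzLaurer/mDeBERTa-v3-base-mnli-xnli"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name).to(device).eval()

data = load_dataset("xnli", "de", split="test")  # 5010 premise/hypothesis pairs per language

correct = 0
for start in range(0, len(data), 32):  # simple fixed-size batching
    batch = data[start:start + 32]
    inputs = tokenizer(batch["premise"], batch["hypothesis"], truncation=True,
                       padding=True, return_tensors="pt").to(device)
    with torch.no_grad():
        preds = model(**inputs).logits.argmax(dim=-1).cpu()
    # XNLI uses 0 = entailment, 1 = neutral, 2 = contradiction, matching the model's label order
    correct += (preds == torch.tensor(batch["label"])).sum().item()

print(f"de accuracy: {correct / len(data):.3f}")
```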
## Limitations and bias
Please consult the original DeBERTa-V3 paper and literature on different NLI datasets for potential biases.
### BibTeX entry and citation info
If you want to cite this model, please cite the original DeBERTa paper, the respective NLI datasets and include a link to this model on the Hugging Face hub.
## Citation
If you use this model, please cite: Laurer, Moritz, Wouter van Atteveldt, Andreu Salleras Casas, and Kasper Welbers. 2022. ‘Less Annotating, More Classifying – Addressing the Data Scarcity Issue of Supervised Machine Learning with Deep Transfer Learning and BERT-NLI’. Preprint, June. Open Science Framework. https://osf.io/74b8k.
## Ideas for cooperation or questions?
### Ideas for cooperation or questions?
If you have questions or ideas for cooperation, contact me at m{dot}laurer{at}vu{dot}nl or [LinkedIn](https://www.linkedin.com/in/moritz-laurer/).
## Debugging and issues
Note that DeBERTa-v3 was released in late 2021 and older versions of HF Transformers seem to have issues running the model (e.g. resulting in an issue with the tokenizer). Using Transformers>=4.13 might solve some issues. Note that mDeBERTa currently does not support FP16, see here: https://github.com/microsoft/DeBERTa/issues/77
### Debugging and issues
Note that DeBERTa-v3 was released recently and older versions of HF Transformers seem to have issues running the model (e.g. resulting in an issue with the tokenizer). Using Transformers==4.13 might solve some issues. Note that mDeBERTa currently does not support FP16, see here: https://github.com/microsoft/DeBERTa/issues/77
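A quick environment check along the lines of these notes; the version floor comes from this card, and loading with `torch_dtype=torch.float32` is simply one way to make the no-FP16 constraint explicit.

```python
# Sanity-check sketch for the issues mentioned above (assumed, not an official check).
import torch
import transformers
from packaging import version

# Older Transformers releases have tokenizer problems with (m)DeBERTa-v3; the card
# recommends Transformers>=4.13. The tokenizer also needs sentencepiece installed.
assert version.parse(transformers.__version__) >= version.parse("4.13.0"), transformers.__version__

from transformers import AutoModelForSequenceClassification

# Load in full precision: mDeBERTa currently does not support FP16 (see the linked issue).
model = AutoModelForSequenceClassification.from_pretrained(
    "MoritzLaurer/mDeBERTa-v3-base-mnli-xnli",
    torch_dtype=torch.float32,
)
```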
model.safetensors (Stored with Git LFS)
Binary file not shown.