Compare commits

...

10 Commits

| Author | SHA1 | Message | Date |
|--------|------|---------|------|
| Moritz Laurer | 9086d78af1 | Update README.md | 2023-03-22 08:35:38 +00:00 |
| Moritz Laurer | e81df37a43 | Adding `safetensors` variant of this model (#2) (60b7789df913c4c92a3595dd298670ff7c7edb9e); Co-authored-by: Safetensors convertbot <SFconvertbot@users.noreply.huggingface.co> | 2023-03-20 08:26:01 +00:00 |
| Moritz Laurer | 7420447fdf | Update README.md | 2023-02-14 12:51:03 +00:00 |
| Moritz Laurer | 574dbac1d7 | update readme with easier zeroshot code | 2022-09-25 18:34:23 +00:00 |
| Moritz Laurer | 297ea39590 | Update README.md | 2022-08-15 05:16:54 +00:00 |
| Moritz Laurer | 0d3425fc2e | Update README.md | 2022-08-15 05:16:02 +00:00 |
| Moritz Laurer | 9a6ead55b4 | Update README.md | 2022-08-14 21:25:10 +00:00 |
| Moritz Laurer | 6dc726042b | fixed typo for 'vi' vietnamese | 2022-07-30 15:10:27 +00:00 |
| Moritz Laurer | ef7c55665d | Update README.md | 2022-07-28 16:23:58 +00:00 |
| Moritz Laurer | 49c996995e | Update README.md | 2022-07-28 15:52:46 +00:00 |
3 changed files with 34 additions and 15 deletions

.gitattributes

@@ -25,3 +25,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zstandard filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text
+model.safetensors filter=lfs diff=lfs merge=lfs -text

README.md

@@ -14,8 +14,9 @@ language:
- th
- tr
- ur
-- vu
-- zh
+- vi
+- zh
license: mit
tags:
- zero-shot-classification
- text-classification
@@ -33,15 +34,31 @@ widget:
---
# Multilingual mDeBERTa-v3-base-mnli-xnli
## Model description
-This multilingual model can perform natural language inference (NLI) on 100 languages and is therefore also suitable for multilingual zero-shot classification. The underlying model was pre-trained by Microsoft on the [CC100 multilingual dataset](https://huggingface.co/datasets/cc100). It was then fine-tuned on the [XNLI dataset](https://huggingface.co/datasets/xnli), which contains hypothesis-premise pairs from 15 languages, as well as the English [MNLI dataset](https://huggingface.co/datasets/multi_nli).
-As of December 2021, mDeBERTa-base is the best performing multilingual base-sized transformer model, introduced by Microsoft in [this paper](https://arxiv.org/pdf/2111.09543.pdf).
+This multilingual model can perform natural language inference (NLI) on 100 languages and is therefore also suitable for multilingual
+zero-shot classification. The underlying model was pre-trained by Microsoft on the
+[CC100 multilingual dataset](https://huggingface.co/datasets/cc100). It was then fine-tuned on the [XNLI dataset](https://huggingface.co/datasets/xnli), which contains hypothesis-premise pairs from 15 languages, as well as the English [MNLI dataset](https://huggingface.co/datasets/multi_nli).
+As of December 2021, mDeBERTa-base is the best performing multilingual base-sized transformer model,
+introduced by Microsoft in [this paper](https://arxiv.org/pdf/2111.09543.pdf).
+If you are looking for a smaller, faster (but less performant) model, you can
+try [multilingual-MiniLMv2-L6-mnli-xnli](https://huggingface.co/MoritzLaurer/multilingual-MiniLMv2-L6-mnli-xnli).
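The step from NLI to zero-shot classification deserves one concrete illustration before the usage sections below: each candidate label is wrapped into a hypothesis and scored against the input text as a premise. A minimal sketch of this mechanism; the hypothesis template and the entailment-vs-contradiction scoring are illustrative assumptions (roughly what the zero-shot pipeline does with multi_label=True), not something fixed by the model itself:
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

model_name = "MoritzLaurer/mDeBERTa-v3-base-mnli-xnli"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

premise = "Angela Merkel ist eine Politikerin in Deutschland und Vorsitzende der CDU"
scores = {}
for label in ["politics", "economy", "entertainment", "environment"]:
    # hypothetical template; any natural-language phrasing of the label works
    hypothesis = f"This example is about {label}."
    inputs = tokenizer(premise, hypothesis, truncation=True, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits[0]
    # this model's label order: [entailment, neutral, contradiction]
    entail_vs_contra = torch.softmax(logits[[0, 2]], dim=-1)  # drop "neutral", renormalize
    scores[label] = entail_vs_contra[0].item()  # entailment probability = label score
print(scores)
```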
## Intended uses & limitations
-#### How to use the model
+### How to use the model
#### Simple zero-shot classification pipeline
```python
from transformers import pipeline
classifier = pipeline("zero-shot-classification", model="MoritzLaurer/mDeBERTa-v3-base-mnli-xnli")
# German example input: the model handles any of its 100 pre-training languages
sequence_to_classify = "Angela Merkel ist eine Politikerin in Deutschland und Vorsitzende der CDU"
candidate_labels = ["politics", "economy", "entertainment", "environment"]
# multi_label=False makes the label scores sum to 1 (exactly one label is assumed true)
output = classifier(sequence_to_classify, candidate_labels, multi_label=False)
print(output)
```
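For orientation, the pipeline returns the input sequence plus the labels sorted by descending score; the numbers below are invented, only the shape is real:
```python
# Illustrative output shape (scores made up, not actual model output):
# {'sequence': 'Angela Merkel ist eine Politikerin in Deutschland und Vorsitzende der CDU',
#  'labels': ['politics', 'economy', 'environment', 'entertainment'],
#  'scores': [0.9, 0.05, 0.03, 0.02]}
top_label = output["labels"][0]  # labels are sorted, so index 0 is the best match
```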
#### NLI use-case
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch
device = torch.device("cuda") if torch.cuda.is_available() else torch.device("cpu")
model_name = "MoritzLaurer/mDeBERTa-v3-base-mnli-xnli"
tokenizer = AutoTokenizer.from_pretrained(model_name)
@@ -53,10 +70,8 @@ hypothesis = "Emmanuel Macron is the President of France"
input = tokenizer(premise, hypothesis, truncation=True, return_tensors="pt")
output = model(input["input_ids"].to(device)) # device = "cuda:0" or "cpu"
prediction = torch.softmax(output["logits"][0], -1).tolist()
label_names = ["entailment", "neutral", "contradiction"]
prediction = {name: round(float(pred) * 100, 1) for pred, name in zip(prediction, label_names)}
print(prediction)
```
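The diff view elides the model-loading and premise lines of the snippet above (they fall between the two hunks). For convenience, a self-contained reconstruction; the filled-in lines are assumptions based on the visible code and the pipeline example:
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

device = torch.device("cuda") if torch.cuda.is_available() else torch.device("cpu")
model_name = "MoritzLaurer/mDeBERTa-v3-base-mnli-xnli"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name).to(device)

# premise assumed to be the German sentence reused from the pipeline example
premise = "Angela Merkel ist eine Politikerin in Deutschland und Vorsitzende der CDU"
hypothesis = "Emmanuel Macron is the President of France"

inputs = tokenizer(premise, hypothesis, truncation=True, return_tensors="pt")
output = model(inputs["input_ids"].to(device))
prediction = torch.softmax(output["logits"][0], -1).tolist()
label_names = ["entailment", "neutral", "contradiction"]
prediction = {name: round(float(pred) * 100, 1) for pred, name in zip(prediction, label_names)}
print(prediction)  # the pair is unrelated, so "neutral" or "contradiction" should dominate
```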
@@ -80,18 +95,18 @@ The model was evaluated on the XNLI test set on 15 languages (5010 texts per language)
Also note that if other multilingual models on the model hub claim performance of around 90% on languages other than English, the authors have most likely made a mistake during testing, since none of the latest papers shows a multilingual average performance of more than a few points above 80% on XNLI (see [here](https://arxiv.org/pdf/2111.09543.pdf) or [here](https://arxiv.org/pdf/1911.02116.pdf)).
-average | ar | bg | de | el | en | es | fr | hi | ru | sw | th | tr | ur | vu | zh
+average | ar | bg | de | el | en | es | fr | hi | ru | sw | th | tr | ur | vi | zh
---------|----------|---------|----------|----------|----------|----------|----------|----------|----------|----------|----------|----------|----------|----------|----------
0.808 | 0.802 | 0.829 | 0.825 | 0.826 | 0.883 | 0.845 | 0.834 | 0.771 | 0.813 | 0.748 | 0.793 | 0.807 | 0.740 | 0.795 | 0.8116
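To sanity-check a single column of this table, here is a hedged sketch of the evaluation loop; it assumes the XNLI label order (0=entailment, 1=neutral, 2=contradiction) matches the model's label order, which the `label_names` list in the code above suggests:
```python
import torch
from datasets import load_dataset
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "MoritzLaurer/mDeBERTa-v3-base-mnli-xnli"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name).eval()

ds = load_dataset("xnli", "de", split="test")  # 5010 premise-hypothesis pairs per language
correct = 0
for ex in ds:
    inputs = tokenizer(ex["premise"], ex["hypothesis"], truncation=True, return_tensors="pt")
    with torch.no_grad():
        pred = model(**inputs).logits.argmax(-1).item()
    correct += int(pred == ex["label"])
print(f"de accuracy: {correct / len(ds):.3f}")  # the table reports 0.825 for de
```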
## Limitations and bias
Please consult the original DeBERTa-V3 paper and literature on different NLI datasets for potential biases.
-### BibTeX entry and citation info
-If you want to cite this model, please cite the original DeBERTa paper, the respective NLI datasets and include a link to this model on the Hugging Face hub.
-### Ideas for cooperation or questions?
+## Citation
+If you use this model, please cite: Laurer, Moritz, Wouter van Atteveldt, Andreu Salleras Casas, and Kasper Welbers. 2022. "Less Annotating, More Classifying: Addressing the Data Scarcity Issue of Supervised Machine Learning with Deep Transfer Learning and BERT-NLI." Preprint, June. Open Science Framework. https://osf.io/74b8k.
+## Ideas for cooperation or questions?
If you have questions or ideas for cooperation, contact me at m{dot}laurer{at}vu{dot}nl or [LinkedIn](https://www.linkedin.com/in/moritz-laurer/)
-### Debugging and issues
-Note that DeBERTa-v3 was released recently and older versions of HF Transformers seem to have issues running the model (e.g. resulting in an issue with the tokenizer). Using Transformers==4.13 might solve some issues. Note that mDeBERTa currently does not support FP16, see here: https://github.com/microsoft/DeBERTa/issues/77
+## Debugging and issues
+Note that DeBERTa-v3 was released in late 2021 and older versions of HF Transformers seem to have issues running the model (e.g. an error from the tokenizer). Using Transformers>=4.13 might solve some issues. Note that mDeBERTa currently does not support FP16; see here: https://github.com/microsoft/DeBERTa/issues/77
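A small guard against both pitfalls; the version floor comes from the note above, and keeping the model in full precision is my reading of the FP16 issue, not an official recommendation:
```python
import transformers
from packaging import version  # packaging ships as a transformers dependency

# Older releases reportedly break on the DeBERTa-v3 tokenizer.
assert version.parse(transformers.__version__) >= version.parse("4.13"), \
    "upgrade transformers: DeBERTa-v3 needs >=4.13"

# mDeBERTa does not support FP16, so keep the model in float32; if you
# fine-tune with Trainer, also leave fp16=False in TrainingArguments.
model = transformers.AutoModelForSequenceClassification.from_pretrained(
    "MoritzLaurer/mDeBERTa-v3-base-mnli-xnli"
).float()
```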

model.safetensors (new file, binary, stored with Git LFS)

Binary file not shown.
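Since this compare adds model.safetensors to the repo, here is a hedged sketch of loading it explicitly; the `use_safetensors` flag exists in recent Transformers releases, so treat it as an assumption if you are on an older version:
```python
from transformers import AutoModelForSequenceClassification

# With .safetensors weights in the repo, recent transformers versions prefer
# them automatically; use_safetensors=True makes that choice explicit instead
# of silently falling back to the pickle-based .bin weights.
model = AutoModelForSequenceClassification.from_pretrained(
    "MoritzLaurer/mDeBERTa-v3-base-mnli-xnli",
    use_safetensors=True,
)
```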