Update README.md

Moritz Laurer 2022-07-28 16:23:58 +00:00 committed by huggingface-web
parent 49c996995e
commit ef7c55665d
1 changed file with 4 additions and 4 deletions


@@ -16,6 +16,7 @@ language:
 - ur
 - vu
 - zh
+license: mit
 tags:
 - zero-shot-classification
 - text-classification
@@ -42,6 +43,7 @@ As of December 2021, mDeBERTa-base is the best performing multilingual base-size
 ```python
 from transformers import AutoTokenizer, AutoModelForSequenceClassification
 import torch
+device = torch.device("cuda") if torch.cuda.is_available() else torch.device("cpu")
 
 model_name = "MoritzLaurer/mDeBERTa-v3-base-mnli-xnli"
 tokenizer = AutoTokenizer.from_pretrained(model_name)
@@ -53,10 +55,8 @@ hypothesis = "Emmanuel Macron is the President of France"
 input = tokenizer(premise, hypothesis, truncation=True, return_tensors="pt")
 output = model(input["input_ids"].to(device))  # device = "cuda:0" or "cpu"
 prediction = torch.softmax(output["logits"][0], -1).tolist()
 label_names = ["entailment", "neutral", "contradiction"]
 prediction = {name: round(float(pred) * 100, 1) for pred, name in zip(prediction, label_names)}
 print(prediction)
 ```
@@ -87,7 +87,7 @@ average | ar | bg | de | el | en | es | fr | hi | ru | sw | th | tr | ur | vu |
 ## Limitations and bias
 Please consult the original DeBERTa-V3 paper and literature on different NLI datasets for potential biases.
 
-## BibTeX entry and citation info
+## Citation
 If you use this model, please cite: Laurer, Moritz, Wouter van Atteveldt, Andreu Salleras Casas, and Kasper Welbers. 2022. Less Annotating, More Classifying - Addressing the Data Scarcity Issue of Supervised Machine Learning with Deep Transfer Learning and BERT-NLI. Preprint, June. Open Science Framework. https://osf.io/74b8k.
 
 ## Ideas for cooperation or questions?
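
For reference, here is the usage snippet as it reads after this commit, assembled from the hunks above into one runnable block. The model-loading line and the example premise fall outside the visible diff context, so both are assumptions here and are marked as such in comments; the model is also moved to `device` so that weights and inputs end up on the same device when CUDA is available.

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

# Added in this commit: pick the GPU when one is available, else fall back to CPU.
device = torch.device("cuda") if torch.cuda.is_available() else torch.device("cpu")

model_name = "MoritzLaurer/mDeBERTa-v3-base-mnli-xnli"
tokenizer = AutoTokenizer.from_pretrained(model_name)
# Assumption: the model-loading line sits between the visible hunks; this is the
# standard pattern. Moving the model to `device` keeps it alongside the inputs.
model = AutoModelForSequenceClassification.from_pretrained(model_name).to(device)

# Assumption: the original premise is not visible in the diff; this is an example input.
premise = "Angela Merkel is a politician in Germany and leader of the CDU"
hypothesis = "Emmanuel Macron is the President of France"

input = tokenizer(premise, hypothesis, truncation=True, return_tensors="pt")
output = model(input["input_ids"].to(device))
prediction = torch.softmax(output["logits"][0], -1).tolist()
label_names = ["entailment", "neutral", "contradiction"]
prediction = {name: round(float(pred) * 100, 1) for pred, name in zip(prediction, label_names)}
print(prediction)  # a dict mapping each NLI label to a percentage
```

With CUDA available, both the weights and the tokenized inputs land on the GPU; on a CPU-only machine the same code runs unchanged, which is the point of the `device` line this commit adds.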