Update README.md
parent 49c996995e
commit ef7c55665d
@@ -16,6 +16,7 @@ language:
- ur
- vi
- zh
license: mit
tags:
- zero-shot-classification
- text-classification

@@ -42,6 +43,7 @@ As of December 2021, mDeBERTa-base is the best performing multilingual base-size
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

# run on the GPU when one is available, otherwise fall back to the CPU
device = torch.device("cuda") if torch.cuda.is_available() else torch.device("cpu")

model_name = "MoritzLaurer/mDeBERTa-v3-base-mnli-xnli"
tokenizer = AutoTokenizer.from_pretrained(model_name)
@@ -53,10 +55,8 @@ hypothesis = "Emmanuel Macron is the President of France"
input = tokenizer(premise, hypothesis, truncation=True, return_tensors="pt")
output = model(input["input_ids"].to(device))  # device = "cuda:0" or "cpu"
# convert the three NLI logits into probabilities
prediction = torch.softmax(output["logits"][0], -1).tolist()

label_names = ["entailment", "neutral", "contradiction"]
# map each probability (as a percentage) to its NLI label
prediction = {name: round(float(pred) * 100, 1) for pred, name in zip(prediction, label_names)}

print(prediction)
```
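The card is also tagged `zero-shot-classification`, so the same checkpoint can be dropped into the Transformers zero-shot pipeline. A minimal sketch follows; the example sentence and candidate labels are illustrative placeholders, not part of the original README:

```python
from transformers import pipeline

# load the NLI checkpoint as a zero-shot classifier
classifier = pipeline("zero-shot-classification",
                      model="MoritzLaurer/mDeBERTa-v3-base-mnli-xnli")

# hypothetical input: any text plus a free-form list of candidate labels
sequence = "Angela Merkel ist eine Politikerin in Deutschland und Vorsitzende der CDU"
candidate_labels = ["politics", "economy", "entertainment", "environment"]

# multi_label=False makes the scores over the candidate labels sum to 1
output = classifier(sequence, candidate_labels, multi_label=False)
print(output)  # dict with "sequence", "labels" ranked by score, and "scores"
```

Under the hood the pipeline turns each candidate label into an NLI hypothesis ("This example is about politics.") and ranks labels by their entailment probability, which is the same computation the manual snippet above performs for one premise/hypothesis pair.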
@@ -87,7 +87,7 @@ average | ar | bg | de | el | en | es | fr | hi | ru | sw | th | tr | ur | vi |
## Limitations and bias
Please consult the original DeBERTa-V3 paper and literature on different NLI datasets for potential biases.

-## BibTeX entry and citation info
+## Citation
If you use this model, please cite: Laurer, Moritz, Wouter van Atteveldt, Andreu Salleras Casas, and Kasper Welbers. 2022. ‘Less Annotating, More Classifying – Addressing the Data Scarcity Issue of Supervised Machine Learning with Deep Transfer Learning and BERT-NLI’. Preprint, June. Open Science Framework. https://osf.io/74b8k.

## Ideas for cooperation or questions?