---
language: en
pipeline_tag: zero-shot-classification
tags:
- distilbert
datasets:
- multi_nli
metrics:
- accuracy
---
# DistilBERT base model (uncased)
This is the [uncased DistilBERT model](https://huggingface.co/distilbert-base-uncased) fine-tuned on the [Multi-Genre Natural Language Inference](https://huggingface.co/datasets/multi_nli) (MNLI) dataset for the zero-shot classification task. The model is not case-sensitive, i.e., it makes no distinction between "english" and "English".
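
For reference, here is a minimal usage sketch with the Transformers `pipeline` API; the model identifier is assumed to match this repository's ID:

```python
from transformers import pipeline

# Zero-shot classification pipeline built on this checkpoint.
# The model ID below is an assumption based on this repository's name.
classifier = pipeline(
    "zero-shot-classification",
    model="distilbert-base-uncased-mnli",
)

sequence = "The new movie was a thrilling ride from start to finish."
candidate_labels = ["cinema", "politics", "sports"]

result = classifier(sequence, candidate_labels)
print(result["labels"][0], result["scores"][0])  # highest-scoring label
```

Under the hood, the pipeline turns each candidate label into an NLI hypothesis and ranks the labels by the model's entailment probability.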
## Training
Training was done on a [p3.2xlarge](https://aws.amazon.com/ec2/instance-types/p3/) AWS EC2 instance (one NVIDIA Tesla V100 GPU), using the `run_glue.py` example script with the following hyperparameters:
```bash
$ python run_glue.py \
    --model_name_or_path distilbert-base-uncased \
    --task_name mnli \
    --do_train \
    --do_eval \
    --max_seq_length 128 \
    --per_device_train_batch_size 16 \
    --learning_rate 2e-5 \
    --num_train_epochs 5 \
    --output_dir /tmp/distilbert-base-uncased_mnli/
```
## Evaluation results
| Task | MNLI | MNLI-mm |
|:------------:|:----:|:-------:|
| Accuracy (%) | 82.0 | 82.0 |
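
Since the fine-tuned checkpoint is a standard three-way NLI classifier, a premise/hypothesis pair can also be scored directly, without the zero-shot wrapper. A minimal sketch (model ID again assumed to match this repository; the label order is read from the model config, since it varies between NLI checkpoints):

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Model ID assumed to match this repository's name.
model_name = "distilbert-base-uncased-mnli"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

premise = "A soccer game with multiple males playing."
hypothesis = "Some men are playing a sport."

# Encode the premise/hypothesis pair as a single input sequence.
inputs = tokenizer(premise, hypothesis, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Read label names from the config rather than hard-coding an order.
probs = logits.softmax(dim=-1)[0]
for idx, label in model.config.id2label.items():
    print(f"{label}: {probs[idx]:.3f}")
```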