The training data is the French split of the [XNLI](https://research.fb.com/publications/xnli-evaluating-cross-lingual-sentence-representations/) dataset, released in 2018 by Facebook. <br>
It can be loaded easily with the ```datasets``` library:
```python
from datasets import load_dataset
dataset = load_dataset('xnli', 'fr')
```
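Each split contains French premise/hypothesis pairs with an NLI label. As a quick sanity check, one can inspect an example and the label names (a minimal sketch; field names follow the ```datasets``` version of XNLI):
```python
# Inspect one training example: a French premise/hypothesis pair and its label
# (0 = entailment, 1 = neutral, 2 = contradiction in the datasets version of XNLI)
print(dataset['train'][0])
print(dataset['train'].features['label'].names)
```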
## Training/Fine-Tuning procedure
The training procedure is fairly standard and was run in the cloud on a single GPU. <br>
Main training parameters (see the configuration sketch after this list):
- ```lr = 2e-5``` with ```lr_scheduler_type = "linear"```
- ```num_train_epochs = 4```
- ```batch_size = 12``` (limited by GPU memory)
- ```weight_decay = 0.01```
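In terms of the ```transformers``` ```Trainer``` API, these parameters correspond roughly to the following configuration (a minimal sketch; ```output_dir``` and the commented-out model/dataset objects are placeholders, not taken from the original run):
```python
from transformers import TrainingArguments

# Hypothetical reconstruction of the training configuration listed above;
# output_dir is a placeholder and model/tokenizer setup is omitted.
training_args = TrainingArguments(
    output_dir="./results",
    learning_rate=2e-5,
    lr_scheduler_type="linear",
    num_train_epochs=4,
    per_device_train_batch_size=12,
    weight_decay=0.01,
)
# trainer = Trainer(model=model, args=training_args,
#                   train_dataset=tokenized_train, eval_dataset=tokenized_valid)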
## Eval results
We obtain the following results on the ```validation``` and ```test``` sets: