initial commit

jeanpoll 2022-01-05 11:49:20 -05:00
parent d1834bb803
commit 12cca13ad4
8 changed files with 50173 additions and 0 deletions

README.md Normal file (120 lines)

@@ -0,0 +1,120 @@
---
language: en
datasets:
- conll2003
widget:
- text: "My name is jean-baptiste and I live in montreal"
- text: "My name is clara and I live in berkeley, california."
- text: "My name is wolfgang and I live in berlin"
---
# roberta-large-ner: model fine-tuned from roberta-large for the NER task
## Introduction
roberta-large-ner is a NER model fine-tuned from roberta-large on the conll2003 dataset.
The model was validated on email/chat data, where it outperformed other models on this type of data specifically.
In particular, it seems to work better on entities that do not start with an uppercase letter.
## Training data
Training data was classified as follows:
Abbreviation|Description
-|-
O | Outside of a named entity
MISC | Miscellaneous entity
PER | Person's name
ORG | Organization
LOC | Location
In order to simplify, the B- and I- prefixes from the original conll2003 tags were removed.
I used the train and test splits of the original conll2003 for training and the "validation" split for validation (a sketch of this preparation is shown below). This resulted in datasets of size:

Split | Number of sentences
-|-
Train | 17494
Validation | 3250
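
The snippet below is a minimal sketch, using the `datasets` library, of how such a split could be assembled; it is not necessarily the exact preprocessing used here, but it illustrates the two steps described above (merging the original train and test splits, and stripping the B-/I- prefixes from the tags).

```python
from datasets import load_dataset, concatenate_datasets

raw = load_dataset("conll2003")
train = concatenate_datasets([raw["train"], raw["test"]])  # used for training
validation = raw["validation"]                             # used for validation

# Map tag ids to names and strip the B-/I- prefixes, e.g. 'B-PER' -> 'PER', 'O' -> 'O'.
tag_names = raw["train"].features["ner_tags"].feature.names
simplified = [name.split("-")[-1] for name in tag_names]

def simplify(example):
    example["labels"] = [simplified[t] for t in example["ner_tags"]]
    return example

train = train.map(simplify)
validation = validation.map(simplify)
```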
## How to use roberta-large-ner with HuggingFace
##### Load roberta-large-ner and its sub-word tokenizer:
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
tokenizer = AutoTokenizer.from_pretrained("Jean-Baptiste/roberta-large-ner")
model = AutoModelForTokenClassification.from_pretrained("Jean-Baptiste/roberta-large-ner")
# Process a text sample (from Wikipedia)
from transformers import pipeline
nlp = pipeline('ner', model=model, tokenizer=tokenizer, aggregation_strategy="simple")
nlp("Apple was founded in 1976 by Steve Jobs, Steve Wozniak and Ronald Wayne to develop and sell Wozniak's Apple I personal computer")
[{'entity_group': 'ORG',
'score': 0.99381506,
'word': ' Apple',
'start': 0,
'end': 5},
{'entity_group': 'PER',
'score': 0.99970853,
'word': ' Steve Jobs',
'start': 29,
'end': 39},
{'entity_group': 'PER',
'score': 0.99981767,
'word': ' Steve Wozniak',
'start': 41,
'end': 54},
{'entity_group': 'PER',
'score': 0.99956465,
'word': ' Ronald Wayne',
'start': 59,
'end': 71},
{'entity_group': 'PER',
'score': 0.9997918,
'word': ' Wozniak',
'start': 92,
'end': 99},
{'entity_group': 'MISC',
'score': 0.99956393,
'word': ' Apple I',
'start': 102,
'end': 109}]
```
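
For reference, the same tags can be obtained without the pipeline helper by running the model directly and mapping the argmax of the logits through `model.config.id2label`. This is a minimal sketch of that lower-level path; it prints per-token tags and does not perform the span aggregation that the pipeline's `aggregation_strategy="simple"` provides.

```python
import torch

text = "My name is jean-baptiste and I live in montreal"
inputs = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, sequence_length, num_labels)

predicted_ids = logits.argmax(dim=-1)[0].tolist()
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
for token, label_id in zip(tokens, predicted_ids):
    print(token, model.config.id2label[label_id])
```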
## Model performance
Performance computed on the conll2003 validation dataset (token-level predictions):
```
entity | precision | recall | f1
- | - | - | -
PER | 0.9914 | 0.9927 | 0.9920
ORG | 0.9627 | 0.9661 | 0.9644
LOC | 0.9795 | 0.9862 | 0.9828
MISC | 0.9292 | 0.9262 | 0.9277
Overall | 0.9740 | 0.9766 | 0.9753
```
On a private dataset (emails, chat, informal discussion), computed on word-level predictions:
```
entity | precision | recall | f1
- | - | - | -
PER | 0.8823 | 0.9116 | 0.8967
ORG | 0.7694 | 0.7292 | 0.7487
LOC | 0.8619 | 0.7768 | 0.8171
```
For comparison, spaCy (en_core_web_trf-3.2.0) on the same private dataset gives:
```
entity | precision | recall | f1
- | - | - | -
PER | 0.9146 | 0.8287 | 0.8695
ORG | 0.7655 | 0.6437 | 0.6993
LOC | 0.8727 | 0.6180 | 0.7236
```
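
The per-entity scores in these tables are ordinary precision/recall/F1 over the predicted tags. As an illustration only (not the exact evaluation script used here), token- or word-level numbers of this kind can be computed with scikit-learn once the gold and predicted tags are flattened into parallel lists:

```python
from sklearn.metrics import precision_recall_fscore_support

# Toy example: flat per-token gold and predicted tags (the real evaluation uses the full dataset).
y_true = ["O", "PER", "PER", "O", "ORG", "LOC", "MISC", "O"]
y_pred = ["O", "PER", "O",   "O", "ORG", "LOC", "MISC", "O"]

labels = ["PER", "ORG", "LOC", "MISC"]
precision, recall, f1, _ = precision_recall_fscore_support(
    y_true, y_pred, labels=labels, average=None, zero_division=0
)
for name, p, r, f in zip(labels, precision, recall, f1):
    print(f"{name}: precision={p:.4f} recall={r:.4f} f1={f:.4f}")
```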

config.json Normal file (40 lines)

@@ -0,0 +1,40 @@
{
"_name_or_path": "roberta-large",
"architectures": [
"RobertaForTokenClassification"
],
"attention_probs_dropout_prob": 0.1,
"bos_token_id": 0,
"eos_token_id": 2,
"gradient_checkpointing": false,
"hidden_act": "gelu",
"hidden_dropout_prob": 0.1,
"hidden_size": 1024,
"id2label": {
"0": "O",
"1": "LOC",
"2": "PER",
"3": "MISC",
"4": "ORG"
},
"initializer_range": 0.02,
"intermediate_size": 4096,
"label2id": {
"LOC": 1,
"MISC": 3,
"O": 0,
"ORG": 4,
"PER": 2
},
"layer_norm_eps": 1e-05,
"max_position_embeddings": 514,
"model_type": "roberta",
"num_attention_heads": 16,
"num_hidden_layers": 24,
"pad_token_id": 1,
"position_embedding_type": "absolute",
"transformers_version": "4.3.2",
"type_vocab_size": 1,
"use_cache": true,
"vocab_size": 50265
}
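
The `id2label` / `label2id` maps above are what the token-classification head and the pipeline use to turn class indices into the O/LOC/PER/MISC/ORG tags. A quick way to inspect them (a minimal sketch):

```python
from transformers import AutoConfig

config = AutoConfig.from_pretrained("Jean-Baptiste/roberta-large-ner")
print(config.num_labels)  # 5
print(config.id2label)    # {0: 'O', 1: 'LOC', 2: 'PER', 3: 'MISC', 4: 'ORG'}
```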

merges.txt Normal file (50001 lines)

File diff suppressed because it is too large.

pytorch_model.bin Normal file (3 lines)

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:0e77a9cef4873df5643217b672929b3f8d3113b4a177bf593096d7b9db7e03f4
size 1417433007

results.csv Normal file (6 lines)

@@ -0,0 +1,6 @@
,precision,recall,f1,entity
0,0.9795249795249795,0.9862561847168774,0.9828790576633339,LOC
1,0.9914318668643928,0.9927404718693285,0.9920857378400659,PER
2,0.9292274446245273,0.9262250942380184,0.9277238403451995,MISC
3,0.9627007895453308,0.966120218579235,0.9644074730669576,ORG
4,0.9740825890497252,0.9766692954784437,0.9753719894698967,Overall
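
results.csv holds the same per-entity scores as the README table (the unnamed first column is the row index). A minimal sketch for reloading it with pandas:

```python
import pandas as pd

results = pd.read_csv("results.csv", index_col=0)
print(results[["entity", "precision", "recall", "f1"]])
```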

special_tokens_map.json Normal file (1 line)

@@ -0,0 +1 @@
{"bos_token": "<s>", "eos_token": "</s>", "unk_token": "<unk>", "sep_token": "</s>", "pad_token": "<pad>", "cls_token": "<s>", "mask_token": {"content": "<mask>", "single_word": false, "lstrip": true, "rstrip": false, "normalized": false}}

tokenizer_config.json Normal file (1 line)

@@ -0,0 +1 @@
{"unk_token": "<unk>", "bos_token": "<s>", "eos_token": "</s>", "add_prefix_space": true, "errors": "replace", "sep_token": "</s>", "cls_token": "<s>", "pad_token": "<pad>", "mask_token": "<mask>", "model_max_length": 512, "name_or_path": "roberta-large"}

vocab.json Normal file (1 line)

File diff suppressed because one or more lines are too long