Compare commits


10 Commits

Author SHA1 Message Date
Mohammed Rakib c0171973e2 Pushed the best model based on paper 2023-01-18 12:18:53 +00:00
Mohammed Rakib 0211a421fd Update README.md 2023-01-18 12:06:01 +00:00
Mohammed Rakib 0048c696fb Update README.md 2023-01-18 11:51:04 +00:00
Mohammed Rakib fc98e7ea7e Update README.md 2023-01-18 11:48:18 +00:00
Mohammed Rakib 6e5c6c1769 Update README.md 2023-01-18 11:46:20 +00:00
Mohammed Rakib 61ed0ae78a Update README.md 2023-01-18 11:30:56 +00:00
Mohammed Rakib 2adacb01ca Update README.md 2023-01-18 11:25:40 +00:00
Mohammed Rakib d0e02132b2 model documentation (#3)
- model documentation (cf62096c14217b919f6063eaca3bae07f4f6b6de)


Co-authored-by: Nazneen Rajani <nazneen@users.noreply.huggingface.co>
2022-11-03 22:32:17 +00:00
Mohammed Rakib 489c045834 model documentation (#2)
- model documentation (31e1ed57ea6cdd7128a93750f3c2f125ac65b6d1)


Co-authored-by: Nazneen Rajani <nazneen@users.noreply.huggingface.co>
2022-10-31 23:23:15 +00:00
MohammedRakib bc60334996 add tokenizer 2021-07-03 18:10:33 +00:00
11 changed files with 50176 additions and 4 deletions

1
.gitattributes vendored

@@ -14,3 +14,4 @@
 *.pb filter=lfs diff=lfs merge=lfs -text
 *.pt filter=lfs diff=lfs merge=lfs -text
 *.pth filter=lfs diff=lfs merge=lfs -text
+nbest_predictions.json filter=lfs diff=lfs merge=lfs -text

160
README.md Normal file

@@ -0,0 +1,160 @@
---
language:
- en
license: mit
datasets:
- cuad
pipeline_tag: question-answering
tags:
- legal-contract-review
- roberta
- cuad
library_name: transformers
---
# Model Card for roberta-base-on-cuad
# Model Details
## Model Description
- **Developed by:** Mohammed Rakib
- **Shared by [Optional]:** More information needed
- **Model type:** Question Answering
- **Language(s) (NLP):** en
- **License:** MIT
- **Related Models:**
  - **Parent Model:** RoBERTa
- **Resources for more information:**
  - GitHub Repo: [defactolaw](https://github.com/afra-tech/defactolaw)
  - Associated Paper: [An Open Source Contractual Language Understanding Application Using Machine Learning](https://aclanthology.org/2022.lateraisse-1.6/)
# Uses
## Direct Use
This model can be used for question answering on legal documents, such as contract review.
# Training Details
See [An Open Source Contractual Language Understanding Application Using Machine Learning](https://aclanthology.org/2022.lateraisse-1.6/) for detailed information on the training procedure, dataset preprocessing, and evaluation.
## Training Data
See [CUAD dataset card](https://huggingface.co/datasets/cuad) for more information.
## Training Procedure
### Preprocessing
More information needed
### Speeds, Sizes, Times
More information needed
# Evaluation
## Testing Data, Factors & Metrics
### Testing Data
See [CUAD dataset card](https://huggingface.co/datasets/cuad) for more information.
### Factors
More information needed
### Metrics
More information needed
## Results
More information needed
# Model Examination
More information needed
# Environmental Impact
- **Hardware Type:** More information needed
- **Hours used:** More information needed
- **Cloud Provider:** More information needed
- **Compute Region:** More information needed
- **Carbon Emitted:** More information needed
# Technical Specifications [optional]
## Model Architecture and Objective
More information needed
## Compute Infrastructure
More information needed
### Hardware
V100/P100 GPUs from Google Colab Pro
### Software
Python, Transformers
# Citation
**BibTeX:**
```
@inproceedings{nawar-etal-2022-open,
title = "An Open Source Contractual Language Understanding Application Using Machine Learning",
author = "Nawar, Afra and
Rakib, Mohammed and
Hai, Salma Abdul and
Haq, Sanaulla",
booktitle = "Proceedings of the First Workshop on Language Technology and Resources for a Fair, Inclusive, and Safe Society within the 13th Language Resources and Evaluation Conference",
month = jun,
year = "2022",
address = "Marseille, France",
publisher = "European Language Resources Association",
url = "https://aclanthology.org/2022.lateraisse-1.6",
pages = "42--50",
abstract = "Legal field is characterized by its exclusivity and non-transparency. Despite the frequency and relevance of legal dealings, legal documents like contracts remain elusive to non-legal professionals for the copious usage of legal jargon. There has been little advancement in making legal contracts more comprehensible. This paper presents how Machine Learning and NLP can be applied to solve this problem, further considering the challenges of applying ML to the high length of contract documents and training in a low resource environment. The largest open-source contract dataset so far, the Contract Understanding Atticus Dataset (CUAD) is utilized. Various pre-processing experiments and hyperparameter tuning have been carried out and we successfully managed to eclipse SOTA results presented for models in the CUAD dataset trained on RoBERTa-base. Our model, A-type-RoBERTa-base achieved an AUPR score of 46.6{\%} compared to 42.6{\%} on the original RoBERTa-base. This model is utilized in our end to end contract understanding application which is able to take a contract and highlight the clauses a user is looking to find along with their descriptions to aid due diligence before signing. Alongside digital, i.e. searchable, contracts the system is capable of processing scanned, i.e. non-searchable, contracts using tesseract OCR. This application is aimed to not only make contract review a comprehensible process to non-legal professionals, but also to help lawyers and attorneys more efficiently review contracts.",
}
```
# Glossary [optional]
More information needed
# More Information [optional]
More information needed
# Model Card Authors [optional]
Mohammed Rakib in collaboration with Ezi Ozoani and the Hugging Face team
# Model Card Contact
More information needed
# How to Get Started with the Model
Use the code below to get started with the model.
<details>
<summary> Click to expand </summary>

```python
from transformers import AutoTokenizer, AutoModelForQuestionAnswering

tokenizer = AutoTokenizer.from_pretrained("Rakib/roberta-base-on-cuad")
model = AutoModelForQuestionAnswering.from_pretrained("Rakib/roberta-base-on-cuad")
```

</details>
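The checkpoint is an extractive question-answering head: given a question and contract text, it produces start and end logits over the input tokens, and the answer is the token span bounded by the best-scoring start/end pair. A minimal sketch of that decoding step, using made-up tokens and logits in place of a real forward pass (illustration only, not the exact decoding used by `transformers`):

```python
# Sketch of extractive-QA span decoding. The tokens and logits below are
# hypothetical stand-ins for a real model forward pass.

def decode_span(tokens, start_logits, end_logits, max_answer_len=30):
    """Pick the (start, end) pair with the highest combined logit score."""
    best_score, best_span = float("-inf"), (0, 0)
    for start in range(len(tokens)):
        for end in range(start, min(start + max_answer_len, len(tokens))):
            score = start_logits[start] + end_logits[end]
            if score > best_score:
                best_score, best_span = score, (start, end)
    start, end = best_span
    return " ".join(tokens[start:end + 1])

tokens = ["The", "agreement", "is", "governed", "by", "Delaware", "law", "."]
start_logits = [0.1, 0.2, 0.1, 0.3, 0.2, 4.0, 0.5, 0.1]  # peak at "Delaware"
end_logits   = [0.1, 0.1, 0.2, 0.1, 0.3, 0.9, 3.8, 0.2]  # peak at "law"

print(decode_span(tokens, start_logits, end_logits))  # Delaware law
```

In practice the `question-answering` pipeline in `transformers` performs this search (plus masking of question tokens and probability normalization) automatically.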

config.json

@@ -1,5 +1,5 @@
 {
-  "_name_or_path": "/content/drive/MyDrive/models/C10_roberta-base-100%-using-CUAD-trained-on-Only-Has-Ans-dataset",
+  "_name_or_path": "roberta-base",
   "architectures": [
     "RobertaForQuestionAnswering"
   ],
@@ -19,7 +19,7 @@
   "num_hidden_layers": 12,
   "pad_token_id": 1,
   "position_embedding_type": "absolute",
-  "transformers_version": "4.8.2",
+  "transformers_version": "4.7.0",
   "type_vocab_size": 1,
   "use_cache": true,
   "vocab_size": 50265
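The fields visible in this diff pin down the architecture: a 12-layer RoBERTa-base encoder with a question-answering head and the standard 50,265-entry BPE vocabulary. A quick sanity check over a fragment reconstructed from the hunks above (values copied from the diff; this is not the complete config.json):

```python
import json

# Fragment of config.json reconstructed from the diff hunks above
# (not the full file).
config_text = """{
  "_name_or_path": "roberta-base",
  "architectures": ["RobertaForQuestionAnswering"],
  "num_hidden_layers": 12,
  "pad_token_id": 1,
  "position_embedding_type": "absolute",
  "type_vocab_size": 1,
  "use_cache": true,
  "vocab_size": 50265
}"""

config = json.loads(config_text)
assert config["architectures"] == ["RobertaForQuestionAnswering"]
assert config["vocab_size"] == 50265  # standard RoBERTa BPE vocabulary
print(config["num_hidden_layers"])  # 12
```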

50001
merges.txt Normal file

File diff suppressed because it is too large

BIN
nbest_predictions.json (Stored with Git LFS) Normal file

Binary file not shown.

BIN
pytorch_model.bin (Stored with Git LFS)

Binary file not shown.

1
special_tokens_map.json Normal file

@@ -0,0 +1 @@
{"bos_token": "<s>", "eos_token": "</s>", "unk_token": "<unk>", "sep_token": "</s>", "pad_token": "<pad>", "cls_token": "<s>", "mask_token": {"content": "<mask>", "single_word": false, "lstrip": true, "rstrip": false, "normalized": false}}
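The map above follows RoBERTa's convention: the classifier token doubles as BOS (`<s>`), the separator doubles as EOS (`</s>`), and `<mask>` carries `lstrip: true` so it absorbs the space before it during tokenization. A small check of those invariants, parsing the JSON line shown above:

```python
import json

# The special_tokens_map.json content from the diff above.
special_tokens = json.loads(
    '{"bos_token": "<s>", "eos_token": "</s>", "unk_token": "<unk>", '
    '"sep_token": "</s>", "pad_token": "<pad>", "cls_token": "<s>", '
    '"mask_token": {"content": "<mask>", "single_word": false, '
    '"lstrip": true, "rstrip": false, "normalized": false}}'
)

# RoBERTa reuses <s>/</s> both as sentence boundaries and as CLS/SEP.
assert special_tokens["cls_token"] == special_tokens["bos_token"] == "<s>"
assert special_tokens["sep_token"] == special_tokens["eos_token"] == "</s>"
# lstrip=True lets "<mask>" consume the preceding space when tokenizing.
assert special_tokens["mask_token"]["lstrip"] is True
print("special token map is consistent")
```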

1
tokenizer.json Normal file

File diff suppressed because one or more lines are too long

1
tokenizer_config.json Normal file

@@ -0,0 +1 @@
{"unk_token": "<unk>", "bos_token": "<s>", "eos_token": "</s>", "add_prefix_space": false, "errors": "replace", "sep_token": "</s>", "cls_token": "<s>", "pad_token": "<pad>", "mask_token": "<mask>", "model_max_length": 512, "special_tokens_map_file": null, "name_or_path": "roberta-base"}

BIN
training_args.bin (Stored with Git LFS) Normal file

Binary file not shown.

1
vocab.json Normal file

File diff suppressed because one or more lines are too long