deepset/bert-large-uncased-whole-word-masking-squad2 is a repository forked from Hugging Face. License: cc-by-4.0


---
language: en
license: cc-by-4.0
datasets:
- squad_v2
model-index:
- name: deepset/bert-large-uncased-whole-word-masking-squad2
  results:
  - task:
      type: question-answering
      name: Question Answering
    dataset:
      name: squad_v2
      type: squad_v2
      config: squad_v2
      split: validation
    metrics:
    - type: exact_match
      value: 80.8846
      name: Exact Match
      verified: true
      verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiY2E5ZGNkY2ExZWViZGEwNWE3OGRmMWM2ZmE4ZDU4ZDQ1OGM3ZWE0NTVmZjFmYmZjZmJmNjJmYTc3NTM3OTk3OSIsInZlcnNpb24iOjF9.aSblF4ywh1fnHHrN6UGL392R5KLaH3FCKQlpiXo_EdQ4XXEAENUCjYm9HWDiFsgfSENL35GkbSyz_GAhnefsAQ
    - type: f1
      value: 83.8765
      name: F1
      verified: true
      verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNGFlNmEzMTk2NjRkNTI3ZTk3ZTU1NWNlYzIyN2E0ZDFlNDA2ZjYwZWJlNThkMmRmMmE0YzcwYjIyZDM5NmRiMCIsInZlcnNpb24iOjF9.-rc2_Bsp_B26-o12MFYuAU0Ad2Hg9PDx7Preuk27WlhYJDeKeEr32CW8LLANQABR3Mhw2x8uTYkEUrSDMxxLBw
---

# bert-large-uncased-whole-word-masking-squad2

This is a bert-large model, fine-tuned on the SQuAD 2.0 dataset for the task of question answering.
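The model-index metadata above reports 80.8846 exact match and 83.8765 F1 on the SQuAD 2.0 validation split. As a rough illustration of how these SQuAD-style metrics are computed, here is a simplified sketch (the official SQuAD 2.0 evaluation script additionally handles multiple gold answers and unanswerable questions):

```python
import re
import string
from collections import Counter

def normalize(text):
    """SQuAD-style answer normalization: lowercase, strip punctuation,
    drop English articles, and collapse whitespace."""
    text = text.lower()
    text = "".join(ch for ch in text if ch not in set(string.punctuation))
    text = re.sub(r"\b(a|an|the)\b", " ", text)
    return " ".join(text.split())

def exact_match(prediction, gold):
    """1.0 if the normalized prediction equals the normalized gold answer."""
    return float(normalize(prediction) == normalize(gold))

def f1_score(prediction, gold):
    """Token-overlap F1 between normalized prediction and gold answer."""
    pred_tokens = normalize(prediction).split()
    gold_tokens = normalize(gold).split()
    common = Counter(pred_tokens) & Counter(gold_tokens)
    num_same = sum(common.values())
    if num_same == 0:
        return 0.0
    precision = num_same / len(pred_tokens)
    recall = num_same / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)
```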

## Overview

**Language model:** bert-large
**Language:** English
**Downstream-task:** Extractive QA
**Training data:** SQuAD 2.0
**Eval data:** SQuAD 2.0
**Code:** See an example QA pipeline on Haystack
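Extractive QA means the answer is a literal span of the context rather than generated text: the model scores every token as a potential answer start and end, and the highest-scoring valid span wins. A minimal sketch of that span selection (illustrative only; the real pipeline operates on logits over sub-word tokens and applies further constraints):

```python
def best_span(start_scores, end_scores, max_len=15):
    """Return (start, end) maximizing start_scores[s] + end_scores[e]
    subject to s <= e < s + max_len."""
    best = (0, 0)
    best_score = float("-inf")
    for s, s_score in enumerate(start_scores):
        for e in range(s, min(s + max_len, len(end_scores))):
            score = s_score + end_scores[e]
            if score > best_score:
                best_score = score
                best = (s, e)
    return best
```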

## Usage

### In Haystack

Haystack is an NLP framework by deepset. You can use this model in a Haystack pipeline to do question answering at scale (over many documents). To load the model in Haystack:

```python
from haystack.nodes import FARMReader, TransformersReader

reader = FARMReader(model_name_or_path="deepset/bert-large-uncased-whole-word-masking-squad2")
# or
reader = TransformersReader(model_name_or_path="deepset/bert-large-uncased-whole-word-masking-squad2",
                            tokenizer="deepset/bert-large-uncased-whole-word-masking-squad2")
```

### In Transformers

```python
from transformers import AutoModelForQuestionAnswering, AutoTokenizer, pipeline

model_name = "deepset/bert-large-uncased-whole-word-masking-squad2"

# a) Get predictions
nlp = pipeline('question-answering', model=model_name, tokenizer=model_name)
QA_input = {
    'question': 'Why is model conversion important?',
    'context': 'The option to convert models between FARM and transformers gives freedom to the user and lets people easily switch between frameworks.'
}
res = nlp(QA_input)

# b) Load model & tokenizer
model = AutoModelForQuestionAnswering.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
```
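Because the model is fine-tuned on SQuAD 2.0, it must also decide when a question has no answer in the context. The standard BERT-for-SQuAD-2.0 approach treats the span at the `[CLS]` token (index 0) as the "no answer" prediction and compares its score against the best real span. A simplified sketch of that decision (illustrative only; the `question-answering` pipeline's `handle_impossible_answer` option implements the production version):

```python
def is_answerable(start_logits, end_logits, null_threshold=0.0):
    """SQuAD 2.0-style no-answer decision (illustrative).
    Index 0 stands for the [CLS] token; its span score is the 'no answer' score.
    The question counts as answerable only if the best non-null span beats
    the null score by more than the threshold."""
    null_score = start_logits[0] + end_logits[0]
    best_non_null = max(
        start_logits[s] + end_logits[e]
        for s in range(1, len(start_logits))
        for e in range(s, len(end_logits))
    )
    return best_non_null - null_score > null_threshold
```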

## About us

deepset is the company behind the open-source NLP framework Haystack, which is designed to help you build production-ready NLP systems for question answering, summarization, ranking, and more.

Some of our other work:

## Get in touch and join the Haystack community

For more info on Haystack, visit our GitHub repo and Documentation.

We also have a Discord community open to everyone!

Twitter | LinkedIn | Discord | GitHub Discussions | Website

By the way: we're hiring!