diff --git a/README.md b/README.md
index 570187f..5a957d5 100644
--- a/README.md
+++ b/README.md
@@ -24,3 +24,72 @@ model-index:
       value: 83.8765
       verified: true
 ---
+
+# bert-large-uncased-whole-word-masking-squad2
+
+This is a bert-large model, fine-tuned on the SQuAD 2.0 dataset for the task of extractive question answering.
+
+## Overview
+**Language model:** bert-large
+**Language:** English
+**Downstream task:** Extractive QA
+**Training data:** SQuAD 2.0
+**Eval data:** SQuAD 2.0
+**Code:** See [an example QA pipeline on Haystack](https://haystack.deepset.ai/tutorials/first-qa-system)
+
+## Usage
+
+### In Haystack
+Haystack is an NLP framework by deepset. You can use this model in a Haystack pipeline to do question answering at scale (over many documents). To load the model in [Haystack](https://github.com/deepset-ai/haystack/):
+```python
+from haystack.nodes import FARMReader, TransformersReader
+
+reader = FARMReader(model_name_or_path="deepset/bert-large-uncased-whole-word-masking-squad2")
+# or
+reader = TransformersReader(model_name_or_path="deepset/bert-large-uncased-whole-word-masking-squad2", tokenizer="deepset/bert-large-uncased-whole-word-masking-squad2")
+```
+
+### In Transformers
+```python
+from transformers import AutoModelForQuestionAnswering, AutoTokenizer, pipeline
+
+model_name = "deepset/bert-large-uncased-whole-word-masking-squad2"
+
+# a) Get predictions with the question-answering pipeline
+nlp = pipeline('question-answering', model=model_name, tokenizer=model_name)
+QA_input = {
+    'question': 'Why is model conversion important?',
+    'context': 'The option to convert models between FARM and transformers gives freedom to the user and lets people easily switch between frameworks.'
+}
+res = nlp(QA_input)
+
+# b) Load the model & tokenizer directly
+model = AutoModelForQuestionAnswering.from_pretrained(model_name)
+tokenizer = AutoTokenizer.from_pretrained(model_name)
+```
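+
+Because the model is fine-tuned on SQuAD 2.0, which contains unanswerable questions, it can also abstain when the context holds no answer. A minimal sketch of this, reusing the `nlp` pipeline from the snippet above together with the pipeline's `handle_impossible_answer` flag (the question/context pair below is made up for illustration):
+```python
+# With handle_impossible_answer=True, the pipeline may return an empty
+# answer string when the context does not answer the question.
+# The input below is illustrative only.
+no_answer_input = {
+    'question': 'Who founded deepset?',
+    'context': 'The option to convert models between FARM and transformers gives freedom to the user.'
+}
+res = nlp(no_answer_input, handle_impossible_answer=True)
+print(res['answer'])  # an empty string signals "no answer found in context"
+```
+
+To go from a single context string to search over a whole document collection, the reader can be combined with a retriever in a Haystack pipeline, as the "In Haystack" section above suggests. The following is only a sketch, assuming Haystack 1.x APIs (`InMemoryDocumentStore`, `TfidfRetriever`, `ExtractiveQAPipeline`); the documents and query are illustrative:
+```python
+# Minimal end-to-end extractive QA sketch, assuming Haystack 1.x APIs.
+from haystack.document_stores import InMemoryDocumentStore
+from haystack.nodes import FARMReader, TfidfRetriever
+from haystack.pipelines import ExtractiveQAPipeline
+
+# Index a few toy documents (illustrative only)
+document_store = InMemoryDocumentStore()
+document_store.write_documents([
+    {"content": "Haystack pipelines let you run question answering over many documents."},
+    {"content": "Models can be converted between FARM and transformers."},
+])
+
+retriever = TfidfRetriever(document_store=document_store)
+reader = FARMReader(model_name_or_path="deepset/bert-large-uncased-whole-word-masking-squad2")
+pipe = ExtractiveQAPipeline(reader, retriever)
+
+prediction = pipe.run(
+    query="What do Haystack pipelines let you do?",
+    params={"Retriever": {"top_k": 2}, "Reader": {"top_k": 1}},
+)
+print(prediction["answers"])  # ranked extractive answers with scores and context
+```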
+
+## About us
+
+[deepset](http://deepset.ai/) is the company behind the open-source NLP framework [Haystack](https://haystack.deepset.ai/), which is designed to help you build production-ready NLP systems for question answering, summarization, ranking, and more.
+
+Some of our other work:
+- [Distilled roberta-base-squad2 (aka "tinyroberta-squad2")](https://huggingface.co/deepset/tinyroberta-squad2)
+- [German BERT (aka "bert-base-german-cased")](https://deepset.ai/german-bert)
+- [GermanQuAD and GermanDPR datasets and models (aka "gelectra-base-germanquad", "gbert-base-germandpr")](https://deepset.ai/germanquad)
+
+## Get in touch and join the Haystack community
+
+For more info on Haystack, visit our [GitHub](https://github.com/deepset-ai/haystack) repo and [Documentation](https://docs.haystack.deepset.ai).
+
+We also have a Discord community open to everyone!
+
+[Twitter](https://twitter.com/deepset_ai) | [LinkedIn](https://www.linkedin.com/company/deepset-ai/) | [Discord](https://haystack.deepset.ai/community/join) | [GitHub Discussions](https://github.com/deepset-ai/haystack/discussions) | [Website](https://deepset.ai)
+
+By the way: [we're hiring!](http://www.deepset.ai/jobs)
\ No newline at end of file