This is a bert-large model, fine-tuned on the SQuAD 2.0 dataset for the task of extractive question answering.
## Overview
**Language model:** bert-large
**Language:** English
**Downstream-task:** Extractive QA
**Training data:** SQuAD 2.0
**Eval data:** SQuAD 2.0
**Code:** See [an example QA pipeline on Haystack](https://haystack.deepset.ai/tutorials/first-qa-system)
## Usage
### In Haystack
Haystack is an NLP framework by deepset. You can use this model in a Haystack pipeline to do question answering at scale (over many documents). To load the model in [Haystack](https://github.com/deepset-ai/haystack/):
[deepset](http://deepset.ai/) is the company behind the open-source NLP framework [Haystack](https://haystack.deepset.ai/), which is designed to help you build production-ready NLP systems for question answering, summarization, ranking, and more. Other resources from deepset include:
- [GermanQuAD and GermanDPR datasets and models (aka "gelectra-base-germanquad", "gbert-base-germandpr")](https://deepset.ai/germanquad)
## Get in touch and join the Haystack community
<p>For more info on Haystack, visit our <strong><a href="https://github.com/deepset-ai/haystack">GitHub</a></strong> repo and <strong><a href="https://docs.haystack.deepset.ai">Documentation</a></strong>.
We also have a <strong><a href="https://haystack.deepset.ai/community">Discord community open to everyone!</a></strong></p>