From b2cff6133591550e5f39270a18068075eddb7483 Mon Sep 17 00:00:00 2001
From: julianrisch
Date: Thu, 4 Aug 2022 10:36:41 +0000
Subject: [PATCH] Updating the model card (#1)

- Updating the model card (9c83ec53678e2b05338970246c0aebcc820c0399)

Co-authored-by: Tuana Celik
---
 README.md | 76 ++++++++++++++++++++++++++-----------------------------
 1 file changed, 36 insertions(+), 40 deletions(-)

diff --git a/README.md b/README.md
index ebfaebe..b6a082a 100644
--- a/README.md
+++ b/README.md
@@ -1,4 +1,7 @@
 ---
+language: en
+datasets:
+- squad_v2
 license: cc-by-4.0
 ---
 
@@ -44,6 +47,14 @@ This model is the model obtained from the **third** fold of the cross-validation
 
 ## Usage
 
+### In Haystack
+For doing QA at scale (i.e. over many documents instead of a single paragraph), you can also load the model in [Haystack](https://github.com/deepset-ai/haystack/):
+```python
+reader = FARMReader(model_name_or_path="deepset/roberta-base-squad2-covid")
+# or
+reader = TransformersReader(model_name_or_path="deepset/roberta-base-squad2-covid", tokenizer="deepset/roberta-base-squad2-covid")
+```
+
 ### In Transformers
 ```python
 from transformers import AutoModelForQuestionAnswering, AutoTokenizer, pipeline
@@ -64,52 +75,37 @@ model = AutoModelForQuestionAnswering.from_pretrained(model_name)
 tokenizer = AutoTokenizer.from_pretrained(model_name)
 ```
 
-### In FARM
-```python
-from farm.modeling.adaptive_model import AdaptiveModel
-from farm.modeling.tokenization import Tokenizer
-from farm.infer import Inferencer
-
-model_name = "deepset/roberta-base-squad2-covid"
-
-# a) Get predictions
-nlp = Inferencer.load(model_name, task_type="question_answering")
-QA_input = [{"questions": ["Why is model conversion important?"],
-             "text": "The option to convert models between FARM and transformers gives freedom to the user and let people easily switch between frameworks."}]
-res = nlp.inference_from_dicts(dicts=QA_input, rest_api_schema=True)
-
-# b) Load model & tokenizer
-model = AdaptiveModel.convert_from_transformers(model_name, device="cpu", task_type="question_answering")
-tokenizer = Tokenizer.load(model_name)
-```
-
-### In haystack
-For doing QA at scale (i.e. many docs instead of single paragraph), you can load the model also in [haystack](https://github.com/deepset-ai/haystack/):
-```python
-reader = FARMReader(model_name_or_path="deepset/roberta-base-squad2-covid")
-# or
-reader = TransformersReader(model="deepset/roberta-base-squad2",tokenizer="deepset/roberta-base-squad2-covid")
-```
-
 ## Authors
-Branden Chan: `branden.chan [at] deepset.ai`
-Timo Möller: `timo.moeller [at] deepset.ai`
-Malte Pietsch: `malte.pietsch [at] deepset.ai`
-Tanay Soni: `tanay.soni [at] deepset.ai`
-Bogdan Kostić: `bogdan.kostic [at] deepset.ai`
+**Branden Chan:** branden.chan@deepset.ai
+**Timo Möller:** timo.moeller@deepset.ai
+**Malte Pietsch:** malte.pietsch@deepset.ai
+**Tanay Soni:** tanay.soni@deepset.ai
+**Bogdan Kostić:** bogdan.kostic@deepset.ai
 
 ## About us
-![deepset logo](https://workablehr.s3.amazonaws.com/uploads/account/logo/476306/logo)
-We bring NLP to the industry via open source!
-Our focus: Industry specific language models & large scale QA systems.
-
-Some of our work: 
+
+[deepset](http://deepset.ai/) is the company behind the open-source NLP framework [Haystack](https://haystack.deepset.ai/), which is designed to help you build production-ready NLP systems for question answering, summarization, ranking, and more.
+
+Some of our other work:
+- [Distilled roberta-base-squad2 (aka "tinyroberta-squad2")](https://huggingface.co/deepset/tinyroberta-squad2)
 - [German BERT (aka "bert-base-german-cased")](https://deepset.ai/german-bert)
 - [GermanQuAD and GermanDPR datasets and models (aka "gelectra-base-germanquad", "gbert-base-germandpr")](https://deepset.ai/germanquad)
-- [FARM](https://github.com/deepset-ai/FARM)
-- [Haystack](https://github.com/deepset-ai/haystack/)
 
-Get in touch:
+## Get in touch and join the Haystack community
+
+For more info on Haystack, visit our GitHub repo and Documentation.
+
+We also have a Slack community open to everyone!
+
+[Twitter](https://twitter.com/deepset_ai) | [LinkedIn](https://www.linkedin.com/company/deepset-ai/) | [Slack](https://haystack.deepset.ai/community/join) | [GitHub Discussions](https://github.com/deepset-ai/haystack/discussions) | [Website](https://deepset.ai)
+
+By the way: [we're hiring!](http://www.deepset.ai/jobs)