`impira/layoutlm-document-qa` is a repository forked from Hugging Face. License: MIT.
Initial commit by Ankur Goyal (73a21c855a), 2022-08-07 17:07:34 -07:00.

README.md

---
language: en
thumbnail: https://uploads-ssl.webflow.com/5e3898dff507782a6580d710/614a23fcd8d4f7434c765ab9_logo.png
license: mit
---

# LayoutLM for Visual Question Answering

This is a fine-tuned version of the multi-modal LayoutLM model for the task of question answering on documents. It has been fine-tuned using both the SQuAD2.0 and DocVQA datasets.

## Model details

The LayoutLM model was developed at Microsoft (paper) as a general-purpose tool for understanding documents. This model is a fine-tuned checkpoint of LayoutLM-Base-Cased, trained on both the SQuAD2.0 and DocVQA datasets.

## Getting started with the model
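A minimal usage sketch via the `transformers` document-question-answering pipeline. This assumes `transformers`, `pytesseract`, and `Pillow` are installed (the pipeline runs OCR on raw images); the invoice image URL below is an illustrative placeholder, not part of this repository.

```python
from transformers import pipeline

# Load this checkpoint through the document-question-answering pipeline.
# pytesseract and Pillow are needed so the pipeline can OCR the image
# and extract the word boxes LayoutLM expects.
nlp = pipeline(
    "document-question-answering",
    model="impira/layoutlm-document-qa",
)

# Ask a question about a document image (a URL or a local file path).
# Returns a list of answer dicts with "answer", "score", "start", "end".
result = nlp(
    "https://templates.invoicehome.com/invoice-template-us-neat-750px.png",
    "What is the invoice number?",
)
print(result)
```

You can pass any document image the same way, e.g. a scanned contract with the question "What is the effective date?".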