---
license: gpl-3.0
---
# LayoutLMv3 base fine-tuned on MP-DocVQA
This is the pretrained LayoutLMv3 model from the Microsoft hub, fine-tuned on the Multipage DocVQA (MP-DocVQA) dataset.
This model was used as a baseline in [Hierarchical multimodal transformers for Multi-Page DocVQA](https://arxiv.org/abs/2212.05935).
- Results on the MP-DocVQA dataset are reported in Table 2.
- Training hyperparameters can be found in Table 8 of Appendix D.
## How to use
Here is how to use this model to perform extractive question answering on a document image in PyTorch:
```python
import torch
from PIL import Image
from transformers import LayoutLMv3Processor, LayoutLMv3ForQuestionAnswering

processor = LayoutLMv3Processor.from_pretrained("rubentito/layoutlmv3-base-mpdocvqa", apply_ocr=False)
model = LayoutLMv3ForQuestionAnswering.from_pretrained("rubentito/layoutlmv3-base-mpdocvqa")

image = Image.open("example.jpg").convert("RGB")
question = "Is this a question?"
context = ["Example"]
boxes = [[0, 0, 1000, 1000]]  # One normalized (0-1000) box per context word; this example box covers the whole page.

document_encoding = processor(image, question, context, boxes=boxes, return_tensors="pt")
outputs = model(**document_encoding)

# Get the answer: take the most likely start and end positions and decode the token span between them.
start_idx = torch.argmax(outputs.start_logits, dim=1).item()
end_idx = torch.argmax(outputs.end_logits, dim=1).item()
answer = processor.tokenizer.decode(document_encoding["input_ids"][0][start_idx: end_idx + 1]).strip()
```
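The snippet above supplies the words and bounding boxes yourself (`apply_ocr=False`). As a minimal alternative sketch, the processor can also run OCR for you when loaded with `apply_ocr=True`; this assumes the `pytesseract` package and the Tesseract binary are installed, and in that case you pass only the image and the question:

```python
import torch
from PIL import Image
from transformers import LayoutLMv3Processor, LayoutLMv3ForQuestionAnswering

# Assumes pytesseract and Tesseract are installed so the processor can run OCR itself.
processor = LayoutLMv3Processor.from_pretrained("rubentito/layoutlmv3-base-mpdocvqa", apply_ocr=True)
model = LayoutLMv3ForQuestionAnswering.from_pretrained("rubentito/layoutlmv3-base-mpdocvqa")

image = Image.open("example.jpg").convert("RGB")
question = "Is this a question?"

# The processor extracts words and boxes via OCR, so none are passed explicitly.
encoding = processor(image, question, return_tensors="pt")
outputs = model(**encoding)

start_idx = torch.argmax(outputs.start_logits, dim=1).item()
end_idx = torch.argmax(outputs.end_logits, dim=1).item()
answer = processor.tokenizer.decode(encoding["input_ids"][0][start_idx: end_idx + 1]).strip()
```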
## BibTeX entry
```bibtex
@article{tito2022hierarchical,
  title={Hierarchical multimodal transformers for Multi-Page DocVQA},
  author={Tito, Rub{\`e}n and Karatzas, Dimosthenis and Valveny, Ernest},
  journal={arXiv preprint arXiv:2212.05935},
  year={2022}
}
```