jinhybr/OCR-DocVQA-Donut is a repository forked from Hugging Face. License: MIT.


---
license: mit
pipeline_tag: document-question-answering
tags:
- donut
- image-to-text
- vision
widget:
- text: "What is the invoice number?"
  src: "2359223c18/invoice.png"
- text: "What is the purchase amount?"
  src: "2359223c18/contract.jpeg"
---

# Donut (base-sized model, fine-tuned on DocVQA)

Donut model fine-tuned on DocVQA. It was introduced in the paper OCR-free Document Understanding Transformer by Geewook Kim et al. and first released in this repository.

Disclaimer: The team releasing Donut did not write a model card for this model, so this model card has been written by the Hugging Face team.

## Model description

Donut consists of a vision encoder (Swin Transformer) and a text decoder (BART). Given an image, the encoder first encodes it into a tensor of embeddings of shape `(batch_size, seq_len, hidden_size)`, after which the decoder autoregressively generates text, conditioned on the encoder's output.

*(Figure: Donut model architecture.)*
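
As a rough illustration of that encoder-decoder flow, the sketch below loads this checkpoint and inspects the encoder embeddings that the decoder conditions on. It assumes `transformers`, `torch`, and `Pillow` are installed, and `document.png` is a hypothetical local document image.

```python
import torch
from PIL import Image
from transformers import DonutProcessor, VisionEncoderDecoderModel

processor = DonutProcessor.from_pretrained("jinhybr/OCR-DocVQA-Donut")
model = VisionEncoderDecoderModel.from_pretrained("jinhybr/OCR-DocVQA-Donut")

# Any document image will do; "document.png" is a placeholder path.
image = Image.open("document.png").convert("RGB")
pixel_values = processor(image, return_tensors="pt").pixel_values

with torch.no_grad():
    encoder_outputs = model.encoder(pixel_values)

# Tensor of shape (batch_size, seq_len, hidden_size) that the BART decoder attends to.
print(encoder_outputs.last_hidden_state.shape)
```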

## Intended uses & limitations

This model is fine-tuned on DocVQA, a document visual question answering dataset.

We refer to the documentation, which includes code examples.
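
As a quick reference, here is a minimal document-VQA inference sketch adapted from the Transformers Donut documentation. It assumes `transformers`, `torch`, and `Pillow` are installed and that a local document image (a hypothetical `invoice.png`) is available; the question and image are placeholders.

```python
import re

import torch
from PIL import Image
from transformers import DonutProcessor, VisionEncoderDecoderModel

processor = DonutProcessor.from_pretrained("jinhybr/OCR-DocVQA-Donut")
model = VisionEncoderDecoderModel.from_pretrained("jinhybr/OCR-DocVQA-Donut")

device = "cuda" if torch.cuda.is_available() else "cpu"
model.to(device)

image = Image.open("invoice.png").convert("RGB")  # placeholder document image
pixel_values = processor(image, return_tensors="pt").pixel_values

# The question is wrapped in Donut's DocVQA task prompt and fed to the decoder.
question = "What is the invoice number?"
task_prompt = f"<s_docvqa><s_question>{question}</s_question><s_answer>"
decoder_input_ids = processor.tokenizer(
    task_prompt, add_special_tokens=False, return_tensors="pt"
).input_ids

outputs = model.generate(
    pixel_values.to(device),
    decoder_input_ids=decoder_input_ids.to(device),
    max_length=model.decoder.config.max_position_embeddings,
    pad_token_id=processor.tokenizer.pad_token_id,
    eos_token_id=processor.tokenizer.eos_token_id,
    use_cache=True,
    bad_words_ids=[[processor.tokenizer.unk_token_id]],
    return_dict_in_generate=True,
)

# Strip special tokens and the task start token, then convert to JSON.
sequence = processor.batch_decode(outputs.sequences)[0]
sequence = sequence.replace(processor.tokenizer.eos_token, "").replace(
    processor.tokenizer.pad_token, ""
)
sequence = re.sub(r"<.*?>", "", sequence, count=1).strip()
print(processor.token2json(sequence))
```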