Update README.md
This commit is contained in: parent 414f455b6c, commit 9b6b447418
@@ -7,12 +7,15 @@ tags:
# Donut (base-sized model, pre-trained only)
Donut model, pre-trained only. It was introduced in the paper [OCR-free Document Understanding Transformer](https://arxiv.org/abs/2111.15664) by Geewook Kim et al. and first released in [this repository](https://github.com/clovaai/donut).
Disclaimer: The team releasing Donut did not write a model card for this model, so this model card has been written by the Hugging Face team.
## Model description
Donut consists of a vision encoder (Swin Transformer) and a text decoder (BART). Given an image, the encoder first encodes the image into a tensor of embeddings of shape `(batch_size, seq_len, hidden_size)`, after which the decoder autoregressively generates text, conditioned on the encoder's output.

## Intended uses & limitations
You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=google/vit) to look for fine-tuned versions on a task that interests you.
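
As a hedged sketch of that workflow, the snippet below loads a fine-tuned version and prompts it. The checkpoint name, task-prompt format, and question are illustrative assumptions rather than part of this card; the pre-trained-only model is normally fine-tuned before being used this way.

```python
import torch
from PIL import Image
from transformers import DonutProcessor, VisionEncoderDecoderModel

# Example fine-tuned checkpoint (document visual question answering);
# the name is illustrative, swap in whichever fine-tuned version fits your task.
checkpoint = "naver-clova-ix/donut-base-finetuned-docvqa"
processor = DonutProcessor.from_pretrained(checkpoint)
model = VisionEncoderDecoderModel.from_pretrained(checkpoint)

image = Image.open("document.png").convert("RGB")  # placeholder path
pixel_values = processor(image, return_tensors="pt").pixel_values

# Fine-tuned Donut checkpoints are steered with a task-specific prompt;
# this DocVQA-style prompt and question are only examples.
prompt = "<s_docvqa><s_question>What is the total amount?</s_question><s_answer>"
decoder_input_ids = processor.tokenizer(
    prompt, add_special_tokens=False, return_tensors="pt"
).input_ids

with torch.no_grad():
    outputs = model.generate(
        pixel_values,
        decoder_input_ids=decoder_input_ids,
        max_length=512,
        pad_token_id=processor.tokenizer.pad_token_id,
        eos_token_id=processor.tokenizer.eos_token_id,
    )
print(processor.batch_decode(outputs, skip_special_tokens=True)[0])
```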