Update model card

This commit is contained in:
Niels Rogge 2021-06-01 10:06:43 +00:00 committed by huggingface-web
parent f7bfb7db01
commit 073311fe2e
1 changed file with 7 additions and 1 deletion


@@ -5,7 +5,7 @@ tags:
# DETR (End-to-End Object Detection) model with ResNet-50 backbone
DEtection TRansformer (DETR) model trained end-to-end on COCO 2017 object detection (118k annotated images). It was introduced in the paper [End-to-End Object Detection with Transformers](https://arxiv.org/abs/2005.12872) by Carion et al. and first released in [this repository](https://github.com/facebookresearch/detr).
Disclaimer: The team releasing DETR did not write a model card for this model, so this model card has been written by the Hugging Face team.
@@ -13,6 +13,12 @@ Disclaimer: The team releasing DETR did not write a model card for this model so
The DETR model is an encoder-decoder transformer with a convolutional backbone.
First, an image is sent through a CNN backbone, outputting a lower-resolution feature map, typically of shape (1, 2048, height/32, width/32). This is then projected to match the hidden dimension of the Transformer, which is 256 by default, using an nn.Conv2d layer. Next, the feature map is flattened and transposed to obtain a tensor of shape (batch_size, seq_len, d_model) = (1, width/32*height/32, 256).
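The projection and flattening step above can be sketched with random tensors; this is a minimal illustration, not the model's actual code, and the 800x1216 input size is an assumption chosen to make the shapes concrete:

```python
import torch
import torch.nn as nn

# Hypothetical input size; the real backbone is a ResNet-50.
batch_size, height, width = 1, 800, 1216
feature_map = torch.randn(batch_size, 2048, height // 32, width // 32)  # (1, 2048, 25, 38)

# 1x1 convolution projecting 2048 backbone channels to the Transformer hidden size.
projection = nn.Conv2d(2048, 256, kernel_size=1)
projected = projection(feature_map)  # (1, 256, 25, 38)

# Flatten spatial dimensions and transpose -> (batch_size, seq_len, d_model).
tokens = projected.flatten(2).transpose(1, 2)  # (1, 25*38, 256) = (1, 950, 256)
print(tokens.shape)  # torch.Size([1, 950, 256])
```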
This is sent through the encoder, outputting encoder_hidden_states of the same shape. Next, so-called object queries are sent through the decoder. These are just a tensor of shape (batch_size, num_queries, d_model), with num_queries typically set to 100, initialized with zeros. Each object query looks for a particular object in the image. The decoder then updates these object queries through multiple self-attention and encoder-decoder attention layers to output decoder_hidden_states of the same shape: (batch_size, num_queries, d_model). Finally, two heads are added on top for object detection: a linear layer for classifying each object query into one of the objects or "no object", and an MLP to predict a bounding box for each query. So the number of queries actually determines the maximum number of objects the model can detect in an image.
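The two heads on top of the decoder can be sketched as follows. This is a toy stand-in under stated assumptions: the 91 COCO classes, the 3-layer MLP depth, and the sigmoid over normalized (cx, cy, w, h) coordinates mirror the DETR paper but are not taken from this model card; the decoder output is faked with a random tensor:

```python
import torch
import torch.nn as nn

d_model, num_queries, num_classes = 256, 100, 91  # num_classes is an assumption (COCO)

# Classification head: each query is assigned a class or "no object" (hence +1).
class_head = nn.Linear(d_model, num_classes + 1)

# Box head: a small MLP predicting normalized (cx, cy, w, h) per query.
bbox_head = nn.Sequential(
    nn.Linear(d_model, d_model), nn.ReLU(),
    nn.Linear(d_model, d_model), nn.ReLU(),
    nn.Linear(d_model, 4), nn.Sigmoid(),
)

# Stand-in for the decoder output of shape (batch_size, num_queries, d_model).
decoder_hidden_states = torch.randn(1, num_queries, d_model)
logits = class_head(decoder_hidden_states)  # (1, 100, 92)
boxes = bbox_head(decoder_hidden_states)    # (1, 100, 4)
```

With 100 queries and one class (or "no object") per query, at most 100 objects can be detected per image, as the text notes.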
The model is trained using a "bipartite matching loss": the predicted classes and bounding boxes of each of the N = 100 object queries are compared to the ground truth annotations, padded up to the same length N (so if an image only contains 4 objects, the remaining 96 annotations have "no object" as class and "no bounding box" as bounding box). The Hungarian matching algorithm is used to create an optimal one-to-one mapping between each of the N queries and each of the N annotations. Next, standard cross-entropy for the classes and L1 regression loss for the bounding boxes are used to optimize the parameters of the model.
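The Hungarian matching step can be illustrated with SciPy's `linear_sum_assignment`. The cost matrix here is random, standing in for the actual matching cost (a combination of class probability and box distance); a small N = 5 is used instead of 100 for readability:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Hypothetical cost matrix: rows = predictions, columns = padded ground-truth
# slots (real objects plus "no object" pads). Lower cost = better match.
rng = np.random.default_rng(0)
cost = rng.random((5, 5))

# Optimal one-to-one assignment minimizing the total matching cost.
row_ind, col_ind = linear_sum_assignment(cost)
print(list(zip(row_ind.tolist(), col_ind.tolist())))
```

Once each query is matched to a ground-truth slot, the per-pair classification and box losses are computed against that assignment.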
## Intended uses & limitations
You can use the raw model for object detection. See the [model hub](https://huggingface.co/models?search=facebook/detr) to look for all available DETR models.
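A minimal inference sketch using the Hugging Face `transformers` API (assuming a recent version that provides `DetrImageProcessor`; the checkpoint name matches this model card, and the COCO sample image URL is an illustrative choice):

```python
import requests
from PIL import Image
from transformers import DetrForObjectDetection, DetrImageProcessor

# Illustrative sample image from the COCO validation set.
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

processor = DetrImageProcessor.from_pretrained("facebook/detr-resnet-50")
model = DetrForObjectDetection.from_pretrained("facebook/detr-resnet-50")

inputs = processor(images=image, return_tensors="pt")
outputs = model(**inputs)

# Convert raw logits/boxes to thresholded detections in (x0, y0, x1, y1) pixels.
target_sizes = [image.size[::-1]]  # (height, width)
results = processor.post_process_object_detection(
    outputs, target_sizes=target_sizes, threshold=0.9
)[0]
for score, label, box in zip(results["scores"], results["labels"], results["boxes"]):
    print(model.config.id2label[label.item()], round(score.item(), 2), box.tolist())
```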