Compare commits

...

10 Commits

Author SHA1 Message Date
Niels Rogge b5b9655a8a change image_processor_type (#1) 2023-01-25 09:46:59 +00:00
- change image_processor_type (3d20e9a42718e9ca2db32a3680db5a09ef5077ae)
- Co-authored-by: Shivalika Singh <shivi@users.noreply.huggingface.co>
Alara Dirik d14773b926 Upload Mask2FormerForUniversalSegmentation 2023-01-16 13:17:30 +00:00
Alara Dirik fdb6d2c3da Upload processor 2023-01-16 13:17:20 +00:00
Alara Dirik be0d8ae664 Upload Mask2FormerForUniversalSegmentation 2023-01-16 11:29:47 +00:00
Alara Dirik 0b9da41c08 Upload processor 2023-01-16 11:29:40 +00:00
Alara Dirik 69472d84a6 Update README.md 2023-01-04 14:44:52 +00:00
Alara Dirik dd965256b4 Update preprocessor_config.json 2023-01-04 14:43:34 +00:00
Alara Dirik 404f86156d Create README.md 2023-01-03 11:51:09 +00:00
Alara Dirik c11a7dd0c1 Upload Mask2FormerForUniversalSegmentation 2023-01-02 17:11:26 +00:00
Alara Dirik 6375d73223 Upload processor 2023-01-02 17:11:17 +00:00
1 changed file with 68 additions and 0 deletions

README.md (new file, +68 lines)

@@ -0,0 +1,68 @@
---
license: other
tags:
- vision
- image-segmentation
datasets:
- coco
widget:
- src: http://images.cocodataset.org/val2017/000000039769.jpg
example_title: Cats
- src: http://images.cocodataset.org/val2017/000000039770.jpg
example_title: Castle
---
# Mask2Former
Mask2Former model trained on COCO panoptic segmentation (base-sized version, Swin backbone). It was introduced in the paper [Masked-attention Mask Transformer for Universal Image Segmentation](https://arxiv.org/abs/2112.01527) and first released in [this repository](https://github.com/facebookresearch/Mask2Former/).
Disclaimer: The team releasing Mask2Former did not write a model card for this model, so this model card has been written by the Hugging Face team.
## Model description
Mask2Former addresses instance, semantic and panoptic segmentation with the same paradigm: by predicting a set of masks and corresponding labels. Hence, all 3 tasks are treated as if they were instance segmentation. Mask2Former outperforms the previous SOTA,
[MaskFormer](https://arxiv.org/abs/2107.06278), both in terms of performance and efficiency, by (i) replacing the pixel decoder with a more advanced multi-scale deformable attention Transformer, (ii) adopting a Transformer decoder with masked attention to boost performance
without introducing additional computation and (iii) improving training efficiency by calculating the loss on subsampled points instead of whole masks.
![model image](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/mask2former_architecture.png)
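To make the mask-classification paradigm concrete, here is a minimal sketch of how a set of predicted masks and class logits can be combined into a per-pixel semantic map. It uses random tensors with assumed shapes (100 queries, 133 classes) purely for illustration, not real model outputs:

```python
import torch

# illustrative shapes only (assumptions): 100 queries, 133 classes, small output resolution
batch_size, num_queries, num_labels, height, width = 1, 100, 133, 96, 96
class_queries_logits = torch.randn(batch_size, num_queries, num_labels + 1)  # extra class = "no object"
masks_queries_logits = torch.randn(batch_size, num_queries, height, width)

class_probs = class_queries_logits.softmax(dim=-1)[..., :-1]  # drop the "no object" class
mask_probs = masks_queries_logits.sigmoid()
# per-pixel class scores: each query contributes its mask weighted by its class probabilities
scores = torch.einsum("bqc,bqhw->bchw", class_probs, mask_probs)
semantic_map = scores.argmax(dim=1)  # (batch_size, height, width)
```

This is roughly what the semantic post-processing does under the hood; panoptic and instance post-processing additionally assign pixels to individual segments rather than only to classes.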
## Intended uses & limitations
You can use this particular checkpoint for panoptic segmentation. See the [model hub](https://huggingface.co/models?search=mask2former) to look for other fine-tuned versions for a task that interests you.
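For example, here is a small sketch (assuming the `huggingface_hub` library is installed) for listing related Mask2Former checkpoints programmatically instead of browsing the hub:

```python
from huggingface_hub import list_models

# list Mask2Former checkpoints published under the facebook organization
for model_info in list_models(search="mask2former", author="facebook", limit=10):
    print(model_info.id)
```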
### How to use
Here is how to use this model:
```python
import requests
import torch
from PIL import Image
from transformers import AutoImageProcessor, Mask2FormerForUniversalSegmentation
# load Mask2Former fine-tuned on COCO panoptic segmentation
processor = AutoImageProcessor.from_pretrained("facebook/mask2former-swin-base-coco-panoptic")
model = Mask2FormerForUniversalSegmentation.from_pretrained("facebook/mask2former-swin-base-coco-panoptic")
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
outputs = model(**inputs)
# model predicts class_queries_logits of shape `(batch_size, num_queries, num_labels + 1)`
# and masks_queries_logits of shape `(batch_size, num_queries, height, width)`
class_queries_logits = outputs.class_queries_logits
masks_queries_logits = outputs.masks_queries_logits
# you can pass them to processor for postprocessing
result = processor.post_process_panoptic_segmentation(outputs, target_sizes=[image.size[::-1]])[0]
# we refer to the demo notebooks for visualization (see "Resources" section in the Mask2Former docs)
predicted_panoptic_map = result["segmentation"]
```
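Building on the snippet above, here is a minimal sketch (assuming `matplotlib` is installed) of inspecting and visualizing the panoptic result:

```python
import matplotlib.pyplot as plt

# each entry in `segments_info` describes one predicted segment (id, label_id, score)
for segment in result["segments_info"]:
    label = model.config.id2label[segment["label_id"]]
    print(f"segment {segment['id']}: {label} (score {segment['score']:.2f})")

# `predicted_panoptic_map` holds a segment id per pixel
plt.imshow(predicted_panoptic_map)
plt.axis("off")
plt.show()
```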
For more code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/master/en/model_doc/mask2former).