From 404f86156d0fffacb5412b7606ba9ebf44b6af04 Mon Sep 17 00:00:00 2001
From: Alara Dirik
Date: Tue, 3 Jan 2023 11:51:09 +0000
Subject: [PATCH] Create README.md

---
 README.md | 68 +++++++++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 68 insertions(+)
 create mode 100644 README.md

diff --git a/README.md b/README.md
new file mode 100644
index 0000000..0311256
--- /dev/null
+++ b/README.md
@@ -0,0 +1,68 @@
---
license: other
tags:
- vision
- image-segmentation
datasets:
- coco
widget:
- src: http://images.cocodataset.org/val2017/000000039769.jpg
  example_title: Cats
- src: http://images.cocodataset.org/val2017/000000039770.jpg
  example_title: Castle
---

# Mask2Former

Mask2Former model trained on COCO panoptic segmentation (base-sized version, Swin backbone). It was introduced in the paper [Masked-attention Mask Transformer for Universal Image Segmentation](https://arxiv.org/abs/2112.01527) and first released in [this repository](https://github.com/facebookresearch/Mask2Former/).

Disclaimer: The team releasing Mask2Former did not write a model card for this model, so this model card has been written by the Hugging Face team.

## Model description

Mask2Former addresses instance, semantic and panoptic segmentation with the same paradigm: by predicting a set of masks and corresponding labels. Hence, all 3 tasks are treated as if they were instance segmentation. Mask2Former outperforms the previous SOTA, [MaskFormer](https://arxiv.org/abs/2107.06278), both in terms of performance and efficiency by (i) replacing the pixel decoder with a more advanced multi-scale deformable attention Transformer, (ii) adopting a Transformer decoder with masked attention to boost performance without introducing additional computation and (iii) improving training efficiency by calculating the loss on subsampled points instead of whole masks.

![model image](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/mask2former_architecture.png)

## Intended uses & limitations

You can use this particular checkpoint for panoptic segmentation. See the [model hub](https://huggingface.co/models?search=mask2former) to look for other fine-tuned versions on a task that interests you.

### How to use

Here is how to use this model:

```python
import requests
import torch
from PIL import Image
from transformers import Mask2FormerImageProcessor, Mask2FormerForUniversalSegmentation

# load Mask2Former fine-tuned on COCO panoptic segmentation
processor = Mask2FormerImageProcessor.from_pretrained("facebook/mask2former-swin-base-coco-panoptic")
model = Mask2FormerForUniversalSegmentation.from_pretrained("facebook/mask2former-swin-base-coco-panoptic")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = processor(images=image, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# the model predicts class_queries_logits of shape `(batch_size, num_queries, num_labels + 1)`
# and masks_queries_logits of shape `(batch_size, num_queries, height, width)`
class_queries_logits = outputs.class_queries_logits
masks_queries_logits = outputs.masks_queries_logits

# you can pass them to the processor for postprocessing
result = processor.post_process_panoptic_segmentation(outputs, target_sizes=[image.size[::-1]])[0]
# we refer to the demo notebooks for visualization (see the "Resources" section in the Mask2Former docs)
predicted_panoptic_map = result["segmentation"]
```

For more code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/master/en/model_doc/mask2former).
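
In addition to the panoptic map, `post_process_panoptic_segmentation` returns a `segments_info` list describing each predicted segment. The snippet below is a minimal sketch, not part of the original card, that continues from the example above (it assumes the `model` and `result` objects defined there) and prints the class label and confidence score of every detected segment:

```python
# continues from the example above: `model` and `result` are assumed to be defined
# each entry of result["segments_info"] describes one segment in result["segmentation"]
for segment in result["segments_info"]:
    segment_id = segment["id"]                           # id used in the panoptic map
    label = model.config.id2label[segment["label_id"]]   # human-readable class name
    score = segment["score"]                             # confidence of the prediction
    print(f"segment {segment_id}: {label} (score: {score:.3f})")
```

The values in `predicted_panoptic_map` correspond to the `id` fields in `segments_info`, so the map can be colored per segment for visualization.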