---
language: en
license: mit
tags:
- vision
- image-segmentation
model_name: openmmlab/upernet-convnext-small
---
# UperNet, ConvNeXt small-sized backbone

This model uses the UperNet framework for semantic segmentation with a ConvNeXt backbone. UperNet was introduced in the paper Unified Perceptual Parsing for Scene Understanding by Xiao et al.

Combining UperNet with a ConvNeXt backbone was introduced in the paper A ConvNet for the 2020s.

Disclaimer: The team releasing UperNet + ConvNeXt did not write a model card for this model, so this model card has been written by the Hugging Face team.

## Model description

UperNet is a framework for semantic segmentation. It consists of several components, including a backbone, a Feature Pyramid Network (FPN) and a Pyramid Pooling Module (PPM).

Any visual backbone can be plugged into the UperNet framework. The framework predicts a semantic label per pixel.
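For illustration, the sketch below (not part of the original card) shows how a backbone configuration can be plugged into `UperNetConfig` from 🤗 Transformers to build a randomly initialized UperNet model; the specific parameter values are illustrative assumptions, not this checkpoint's actual configuration.

```python
# Minimal sketch: plugging a backbone config into the UperNet framework.
# Assumes a transformers version with UperNet support (>= 4.26).
from transformers import ConvNextConfig, UperNetConfig, UperNetForSemanticSegmentation

# ConvNeXt backbone config; UperNet's FPN expects multi-scale feature maps,
# so all four stages are exposed. Any compatible backbone config could be used here.
backbone_config = ConvNextConfig(out_features=["stage1", "stage2", "stage3", "stage4"])

# num_labels=150 matches ADE20k and is only an example value.
config = UperNetConfig(backbone_config=backbone_config, num_labels=150)
model = UperNetForSemanticSegmentation(config)  # randomly initialized, not this checkpoint
```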

*Figure: UperNet architecture.*

## Intended uses & limitations

You can use the raw model for semantic segmentation. See the model hub to look for fine-tuned versions (with various backbones) on a task that interests you.

### How to use

For code examples, we refer to the documentation.
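
Below is a minimal usage sketch (not taken verbatim from the original card). It assumes a transformers version with UperNet support (>= 4.26) and, purely for illustration, loads an ADE20k example image from the `hf-internal-testing/fixtures_ade20k` dataset repo.

```python
# Minimal inference sketch for openmmlab/upernet-convnext-small.
import torch
from PIL import Image
from huggingface_hub import hf_hub_download
from transformers import AutoImageProcessor, UperNetForSemanticSegmentation

processor = AutoImageProcessor.from_pretrained("openmmlab/upernet-convnext-small")
model = UperNetForSemanticSegmentation.from_pretrained("openmmlab/upernet-convnext-small")

# Example ADE20k image (illustrative; replace with your own image).
filepath = hf_hub_download(
    repo_id="hf-internal-testing/fixtures_ade20k",
    filename="ADE_val_00000001.jpg",
    repo_type="dataset",
)
image = Image.open(filepath).convert("RGB")

inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Logits of shape (batch_size, num_labels, height, width)
logits = outputs.logits

# Optionally resize and argmax the logits into a per-pixel segmentation map
# at the original image resolution (target size is (height, width)).
segmentation = processor.post_process_semantic_segmentation(
    outputs, target_sizes=[image.size[::-1]]
)[0]
```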