Create README.md
parent 7a3c9a9128
commit 7e2532816a

@@ -0,0 +1,19 @@
---
license: apache-2.0
tags:
- vision
- image-segmentation
inference: false
---

# CLIPSeg model

CLIPSeg model with a reduce dimension of 64, refined (using a more complex convolution). It was introduced in the paper [Image Segmentation Using Text and Image Prompts](https://arxiv.org/abs/2112.10003) by Lüddecke et al. and first released in [this repository](https://github.com/timojl/clipseg).

# Intended use cases

This model is intended for zero-shot and one-shot image segmentation.

# Usage

Refer to the [documentation](https://huggingface.co/docs/transformers/main/en/model_doc/clipseg).
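
A minimal sketch of text-prompted (zero-shot) segmentation with the Transformers `CLIPSegProcessor` and `CLIPSegForImageSegmentation` classes is shown below. The checkpoint id `CIDAS/clipseg-rd64-refined` and the image path `example.jpg` are illustrative assumptions; substitute this repository's model id and your own image.

```python
import torch
from PIL import Image
from transformers import CLIPSegProcessor, CLIPSegForImageSegmentation

# Checkpoint id is an assumption for illustration; use this repository's model id.
checkpoint = "CIDAS/clipseg-rd64-refined"
processor = CLIPSegProcessor.from_pretrained(checkpoint)
model = CLIPSegForImageSegmentation.from_pretrained(checkpoint)

image = Image.open("example.jpg").convert("RGB")  # any RGB image (hypothetical path)
prompts = ["a cat", "a remote control"]           # one text prompt per target

# Repeat the image once per prompt; the processor resizes the image and tokenizes the text.
inputs = processor(
    text=prompts,
    images=[image] * len(prompts),
    padding="max_length",
    return_tensors="pt",
)

with torch.no_grad():
    outputs = model(**inputs)

# logits has one low-resolution map per prompt; sigmoid turns them into per-pixel mask probabilities.
masks = torch.sigmoid(outputs.logits)
```

Thresholding `masks` (for example at 0.5) and resizing back to the input resolution gives a binary segmentation per prompt.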