diff --git a/README.md b/README.md
index e054203..fd36ed0 100644
--- a/README.md
+++ b/README.md
@@ -2,22 +2,14 @@
 license: apache-2.0
 tags:
 - vision
-- image-segmentatiom
-
+- image-segmentation
 datasets:
 - ade-20k
-
-widget:
-- src: https://huggingface.co/datasets/hf-internal-testing/fixtures_ade20k/resolve/main/ADE_val_00000001.jpg
-  example_title: House
-- src: https://huggingface.co/datasets/hf-internal-testing/fixtures_ade20k/resolve/main/ADE_val_00000002.jpg
-  example_title: Castle
-
 ---
 
-# Mask
+# MaskFormer
 
-Mask model trained on ade-20k. It was introduced in the paper [Per-Pixel Classification is Not All You Need for Semantic Segmentation](https://arxiv.org/abs/2107.06278) and first released in [this repository](https://github.com/facebookresearch/MaskFormer/blob/da3e60d85fdeedcb31476b5edd7d328826ce56cc/mask_former/modeling/criterion.py#L169). 
+MaskFormer model trained on the ADE20k dataset. It was introduced in the paper [Per-Pixel Classification is Not All You Need for Semantic Segmentation](https://arxiv.org/abs/2107.06278) and first released in [this repository](https://github.com/facebookresearch/MaskFormer/blob/da3e60d85fdeedcb31476b5edd7d328826ce56cc/mask_former/modeling/criterion.py#L169).
 
-Disclaimer: The team releasing Mask did not write a model card for this model so this model card has been written by the Hugging Face team.
+Disclaimer: The team releasing MaskFormer did not write a model card for this model, so this model card has been written by the Hugging Face team.
 
@@ -59,6 +51,4 @@ Here is how to use this model:
 >>> output = feature_extractor.post_process_panoptic_segmentation(outputs)
 ```
 
-
-
 For more code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/master/en/model_doc/maskformer).
\ No newline at end of file