Compare commits

...

10 Commits

Author SHA1 Message Date
Patrick John Chia afacf5e585 Adding `safetensors` variant of this model (#7)
- Adding `safetensors` variant of this model (6e4ddb8f8df76f08859d9022a45085133895bce6)


Co-authored-by: Safetensors convertbot <SFconvertbot@users.noreply.huggingface.co>
2023-03-20 22:13:21 +00:00
patrickjohncyh 19ae1be57c update README.md 2023-03-20 11:18:50 -04:00
patrickjohncyh 9a435bd7f7 update link 2023-03-10 15:29:47 -05:00
patrickjohncyh a2ede6f241 add link 2023-03-10 15:21:44 -05:00
patrickjohncyh 6785030834 update year 2023-03-10 15:20:49 -05:00
patrickjohncyh 12b28acfbf update model card to reflect new model 2023-03-10 15:11:09 -05:00
patrickjohncyh 83cb9b65be Merge branch 'main' of https://huggingface.co/patrickjohncyh/fashion-clip 2023-03-10 13:24:40 -05:00
patrickjohncyh 4d23c8d9f5 release FashionCLIP 2.0, fine-tuning off laion/CLIP-ViT-B-32-laion2B-s34B-b79K checkpoint 2023-03-10 13:21:55 -05:00
Patrick John Chia ff0d54f09a update README.md 2023-03-09 03:46:58 +00:00
Patrick John Chia b76c536b48 Update README.md (#5)
- Update README.md (e07a559baa9b9c942e4126f54c1b56abc35860e3)


Co-authored-by: Federico Bianchi <vinid@users.noreply.huggingface.co>
2023-03-09 03:37:44 +00:00
3 changed files with 22 additions and 3 deletions


@@ -8,11 +8,15 @@ tags:
library_name: transformers
language:
- en
widget:
- src: https://cdn-images.farfetch-contents.com/19/76/05/56/19760556_44221665_1000.jpg
candidate_labels: black shoe, red shoe, a cat
example_title: Black Shoe
---
[![Youtube Video](https://img.shields.io/badge/youtube-video-red)](https://www.youtube.com/watch?v=uqRSc-KSA1Y)
[![HuggingFace Model](https://img.shields.io/badge/HF%20Model-Weights-yellow)](https://huggingface.co/patrickjohncyh/fashion-clip)
[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1n--D_CBPEEO7fCcebHIxbbY4dr5incaD?usp=sharing)
[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1Z1hAxBnWjF76bEi9KQ6CMBBEmI_FVDrW?usp=sharing)
[![Medium Blog Post](https://raw.githubusercontent.com/aleen42/badges/master/src/medium.svg)](https://towardsdatascience.com/teaching-clip-some-fashion-3005ac3fdcc3)
# Model Card: Fashion CLIP
@@ -21,6 +25,18 @@ Disclaimer: The model card adapts the model card from [here](https://huggingface
## Model Details
UPDATE (10/03/23): We have updated the model! We found that the [laion/CLIP-ViT-B-32-laion2B-s34B-b79K](https://huggingface.co/laion/CLIP-ViT-B-32-laion2B-s34B-b79K) checkpoint (thanks [Bin](https://www.linkedin.com/in/bin-duan-56205310/)!) worked better than the original OpenAI CLIP on fashion data. We thus fine-tuned a newer (and better!) version of FashionCLIP (henceforth FashionCLIP 2.0), keeping the architecture the same. We postulate that the performance gains afforded by `laion/CLIP-ViT-B-32-laion2B-s34B-b79K` are due to its increased training data (5x the OpenAI CLIP data). Our [thesis](https://www.nature.com/articles/s41598-022-23052-9), however, remains the same: fine-tuning `laion/CLIP` on our fashion dataset improved zero-shot performance across our benchmarks. See the table below comparing weighted macro F1 scores across models.
| Model | FMNIST | KAGL | DEEP |
| ------------- | ------------- | ------------- | ------------- |
| OpenAI CLIP | 0.66 | 0.63 | 0.45 |
| FashionCLIP | 0.74 | 0.67 | 0.48 |
| Laion CLIP | 0.78 | 0.71 | 0.58 |
| FashionCLIP 2.0 | __0.83__ | __0.73__ | __0.62__ |
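The weighted macro F1 reported above averages per-class F1 scores weighted by class support. A minimal sketch of the metric in plain Python (not the authors' evaluation code, which is not included in this diff):

```python
from collections import Counter

def weighted_f1(y_true, y_pred):
    """Per-class F1, averaged with weights proportional to each
    class's support (its count in the true labels)."""
    classes = set(y_true) | set(y_pred)
    support = Counter(y_true)
    total = len(y_true)
    score = 0.0
    for c in classes:
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1 = (2 * precision * recall / (precision + recall)
              if precision + recall else 0.0)
        score += (support[c] / total) * f1
    return score
```

This matches scikit-learn's `f1_score(..., average="weighted")` convention, where more frequent classes contribute more to the aggregate score.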
---
FashionCLIP is a CLIP-based model developed to produce general product representations for fashion concepts. Leveraging the pre-trained checkpoint (ViT-B/32) released by [OpenAI](https://github.com/openai/CLIP), we train FashionCLIP on a large, high-quality novel fashion dataset to study whether domain-specific fine-tuning of CLIP-like models is sufficient to produce product representations that are zero-shot transferable to entirely new datasets and tasks. FashionCLIP was not developed for model deployment; before deploying it, researchers first need to carefully study its capabilities in relation to the specific context it will be deployed in.
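The zero-shot transfer described above works the CLIP way: the model embeds an image and each candidate label text into a shared space, then ranks labels by scaled cosine similarity. A minimal numpy sketch of that scoring step, using toy placeholder embeddings rather than actual FashionCLIP outputs:

```python
import numpy as np

def zero_shot_scores(image_emb, text_embs, logit_scale=100.0):
    """CLIP-style zero-shot scoring: L2-normalise the image embedding and
    each candidate-label text embedding, take scaled cosine similarities,
    and softmax them into a probability per label."""
    img = image_emb / np.linalg.norm(image_emb)
    txt = text_embs / np.linalg.norm(text_embs, axis=1, keepdims=True)
    logits = logit_scale * txt @ img          # one logit per candidate label
    exp = np.exp(logits - logits.max())       # subtract max for stability
    return exp / exp.sum()

# Toy example: an "image" embedding aligned with the first of two labels.
probs = zero_shot_scores(np.array([1.0, 0.0]),
                         np.array([[1.0, 0.0], [0.0, 1.0]]))
```

With the real model, the embeddings would come from FashionCLIP's image and text encoders (e.g. for candidate labels like "black shoe" vs. "red shoe" in the widget above); `logit_scale=100.0` stands in for CLIP's learned temperature.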
### Model Date

BIN
model.safetensors (Stored with Git LFS) Normal file

Binary file not shown.

BIN
pytorch_model.bin (Stored with Git LFS)

Binary file not shown.