From 3c4c4a19536c6034e447763981982e89bc674b7b Mon Sep 17 00:00:00 2001
From: patrickjohncyh
Date: Thu, 2 Mar 2023 17:30:07 -0500
Subject: [PATCH] Fix typo in model card

---
 README.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/README.md b/README.md
index 22c4eb7..ebbd71b 100644
--- a/README.md
+++ b/README.md
@@ -42,4 +42,4 @@ We acknowledge certain limitations of FashionCLIP and expect that it inherits ce
 
 Our investingations also suggests that the data used introduces certain limitaions in FashionCLIP. From the textual modality, given that most captions dervied from the Farfetch dataset are long, we observe that FashionCLIP maybe more performant in longer queries than shorter ones. From the image modality, FashionCLIP is also biased towards standard product images (centered, white background).
 
-Model selection, i.e. selecting an appropariate stopping critera during fine-tuning, remains an open challenge. We observed that using loss on an in-domain (i.e. same distribution as test) validation dataset is a poor selection critera when out-of-domain generalization (i.e. across different datasets) is desired, even when the dataset usdd is relatively diverse and large.
+Model selection, i.e. selecting an appropariate stopping critera during fine-tuning, remains an open challenge. We observed that using loss on an in-domain (i.e. same distribution as test) validation dataset is a poor selection critera when out-of-domain generalization (i.e. across different datasets) is desired, even when the dataset used is relatively diverse and large.