# Improve dataset card for CLIP-SVD (Singular Value Few-shot Adaptation of Vision-Language Models)

PR #2, opened by nielsr (HF Staff)
This PR updates the dataset card to better reflect its connection to the "Singular Value Few-shot Adaptation of Vision-Language Models" (CLIP-SVD) paper.
Specifically, it:
- Links the CLIP-SVD paper badge to its Hugging Face papers page: https://huggingface.co/papers/2509.03740.
- Adds the `image-text-to-text` task category to the metadata.
- Integrates the abstract, method overview, and key results from the CLIP-SVD paper's GitHub README to provide more comprehensive context.
- Renames the original "Overview" section to "Overview (BiomedCoOp)" for clarity, acknowledging both research efforts utilizing this dataset.
- Adds a "Sample Usage" section, referring users to the `RUN.md` file in the associated CLIP-SVD code repository for detailed instructions.