Commit fa2a95e by prithivMLmods · verified · 1 Parent(s): ee9863d

Update README.md

Files changed (1): README.md (+5 -4)
README.md CHANGED
@@ -15,12 +15,13 @@ tags:
 
 This notebook demonstrates how to fine-tune SigLIP 2, a robust multilingual vision-language model, for single-label image classification tasks. The fine-tuning process incorporates advanced techniques such as captioning-based pretraining, self-distillation, and masked prediction, unified within a streamlined training pipeline. The workflow supports datasets in both structured and unstructured forms, making it adaptable to various domains and resource levels.
 
-The notebook outlines two data handling scenarios. In the first, datasets include predefined train and test splits, enabling conventional supervised learning and generalization evaluation. In the second scenario, only a training split is available; in such cases, the training set is either partially reserved for validation or reused entirely for evaluation. This flexibility supports experimentation in constrained or domain-specific settings, where standard test annotations may not exist.
-
 | Notebook Name | Description | Notebook Link |
 |-------------------------------------|--------------------------------------------------|----------------|
-| notebook-siglip2-finetune-type1 | Train/Test Splits | [Download](https://github.com/PRITHIVSAKTHIUR/FineTuning-SigLIP-2/blob/main/Finetune-SigLIP2-Image-Classification/1.SigLIP2_Finetune_ImageClassification_TrainTest_Splits.ipynb) |
-| notebook-siglip2-finetune-type2 | Only Train Split | [Download](https://github.com/PRITHIVSAKTHIUR/FineTuning-SigLIP-2/blob/main/Finetune-SigLIP2-Image-Classification/2.SigLIP2_Finetune_ImageClassification_OnlyTrain_Splits.ipynb) |
+| notebook-siglip2-finetune-type1 | Train/Test Splits | [Download](https://huggingface.co/prithivMLmods/FineTuning-SigLIP2-Notebook/blob/main/Finetune-SigLIP2-Image-Classification/1.SigLIP2_Finetune_ImageClassification_TrainTest_Splits.ipynb) |
+| notebook-siglip2-finetune-type2 | Only Train Split | [Download](https://huggingface.co/prithivMLmods/FineTuning-SigLIP2-Notebook/blob/main/Finetune-SigLIP2-Image-Classification/2.SigLIP2_Finetune_ImageClassification_OnlyTrain_Splits.ipynb) |
+
+The notebook outlines two data handling scenarios. In the first, datasets include predefined train and test splits, enabling conventional supervised learning and generalization evaluation. In the second scenario, only a training split is available; in such cases, the training set is either partially reserved for validation or reused entirely for evaluation. This flexibility supports experimentation in constrained or domain-specific settings, where standard test annotations may not exist.
+
 
 ```
 last updated : jul 2025
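
For readers skimming the diff above, the following is a minimal sketch of what a SigLIP 2 single-label fine-tune looks like with the `transformers` Trainer API. It is not the notebooks' exact code; the checkpoint id, label names, and hyperparameters are illustrative assumptions.

```python
# Minimal sketch of a SigLIP 2 single-label fine-tune (not the notebooks' code).
# The checkpoint id, label names, and hyperparameters are illustrative assumptions.
from transformers import (
    AutoImageProcessor,
    AutoModelForImageClassification,
    Trainer,
    TrainingArguments,
)

checkpoint = "google/siglip2-base-patch16-224"  # assumed SigLIP 2 checkpoint
labels = ["class_a", "class_b"]                 # hypothetical single-label classes

processor = AutoImageProcessor.from_pretrained(checkpoint)
model = AutoModelForImageClassification.from_pretrained(
    checkpoint,
    num_labels=len(labels),
    id2label=dict(enumerate(labels)),
    label2id={name: i for i, name in enumerate(labels)},
    ignore_mismatched_sizes=True,  # swap the pretrained head for a fresh classifier
)

def preprocess(batch):
    # Turn PIL images into pixel_values; keep the integer labels as-is.
    out = processor(images=batch["image"], return_tensors="pt")
    out["labels"] = batch["label"]
    return out

args = TrainingArguments(
    output_dir="siglip2-finetune",
    per_device_train_batch_size=32,
    learning_rate=5e-5,
    num_train_epochs=3,
    remove_unused_columns=False,  # keep the raw "image" column for preprocess
)
# With train_ds / eval_ds prepared as in the split-handling sketch below:
# trainer = Trainer(model=model, args=args,
#                   train_dataset=train_ds.with_transform(preprocess),
#                   eval_dataset=eval_ds.with_transform(preprocess))
# trainer.train()
```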
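
The relocated paragraph in the diff describes two data-handling scenarios. A short sketch of both with the `datasets` library, assuming a hypothetical dataset id:

```python
# Sketch of the two data-handling scenarios; the dataset id is a placeholder.
from datasets import load_dataset

# Scenario 1: the dataset ships with predefined train and test splits.
ds = load_dataset("username/your-image-dataset")  # hypothetical dataset id
train_ds, eval_ds = ds["train"], ds["test"]

# Scenario 2: only a train split exists. Reserve part of it for validation...
split = ds["train"].train_test_split(test_size=0.1, seed=42)
train_ds, eval_ds = split["train"], split["test"]

# ...or, in very constrained settings, reuse the full training set for
# evaluation (optimistic numbers, but enough to smoke-test the pipeline).
# eval_ds = train_ds
```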