prithivMLmods committed
Commit df87b3c · verified · 1 Parent(s): 5c6364b

Update README.md

Files changed (1):
  1. README.md +2 -0
README.md CHANGED
@@ -31,6 +31,8 @@ This notebook demonstrates how to fine-tune SigLIP 2, a robust multilingual visi
 | notebook-siglip2-finetune-type1 | Train/Test Splits | [⬇️Download](https://huggingface.co/prithivMLmods/FineTuning-SigLIP2-Notebook/blob/main/Finetune-SigLIP2-Image-Classification/1.SigLIP2_Finetune_ImageClassification_TrainTest_Splits.ipynb) |
 | notebook-siglip2-finetune-type2 | Only Train Split | [⬇️Download](https://huggingface.co/prithivMLmods/FineTuning-SigLIP2-Notebook/blob/main/Finetune-SigLIP2-Image-Classification/2.SigLIP2_Finetune_ImageClassification_OnlyTrain_Splits.ipynb) |
 
+> [!warning]
+> To avoid notebook loading errors, please download and use the notebook.
 ---
 
 The notebook outlines two data-handling scenarios. In the first, datasets include predefined train and test splits, enabling conventional supervised learning and generalization evaluation. In the second, only a training split is available; in that case, the training set is either partially reserved for validation or reused entirely for evaluation. This flexibility supports experimentation in constrained or domain-specific settings, where standard test annotations may not exist.
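The second scenario described above (a train-only dataset, with part of it reserved for validation) can be sketched in plain Python. This is a minimal illustration, not code from the notebook itself; the sample data and the `train_val_split` helper are hypothetical stand-ins for whatever image-classification corpus is actually loaded.

```python
import random

def train_val_split(examples, val_fraction=0.2, seed=42):
    """Reserve a fraction of a train-only dataset for validation.

    Shuffles indices with a fixed seed so the split is reproducible,
    then partitions the examples into (train, validation) lists.
    """
    rng = random.Random(seed)
    idx = list(range(len(examples)))
    rng.shuffle(idx)
    n_val = int(len(examples) * val_fraction)
    val = [examples[i] for i in idx[:n_val]]
    train = [examples[i] for i in idx[n_val:]]
    return train, val

# Toy stand-in for a dataset that ships with only a train split.
samples = [{"image_id": i, "label": i % 2} for i in range(10)]
train_set, val_set = train_val_split(samples)
print(len(train_set), len(val_set))  # prints "8 2"
```

With Hugging Face `datasets`, the same idea is typically expressed via `Dataset.train_test_split(test_size=...)` on the train split; the pure-Python version above just makes the mechanics explicit.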