AvitoTech1 committed
Commit 46c95bc · verified · 1 Parent(s): ce3af54

Upload 3 files

Files changed (3):
1. README.md +163 -0
2. config.json +24 -0
3. model.safetensors +3 -0

README.md ADDED
---
library_name: transformers
tags:
- siglip
- siglip2
- vision
- image-embeddings
- pet-recognition
model_id: AvitoTech/SigLIP2-Base-for-animal-identification
pipeline_tag: image-feature-extraction
---

# SigLIP2-Base Fine-tuned for Animal Identification

Fine-tuned SigLIP2-Base model for individual animal identification, specializing in distinguishing between unique cats and dogs. The model produces robust image embeddings optimized for pet recognition, re-identification, and verification tasks.

## Model Details

- **Base Model**: google/siglip2-base-patch16-224
- **Input**: Images (224×224)
- **Output**: Image embeddings (768-dimensional)
- **Task**: Individual animal identification and verification

## Training Data

The model was trained on a comprehensive dataset combining multiple sources:

- **[PetFace Dataset](https://arxiv.org/abs/2407.13555)**: Large-scale animal face dataset with 257,484 unique individuals across 13 animal families
- **[Dogs-World](https://www.kaggle.com/datasets/lextoumbourou/dogs-world)**: Kaggle dataset for dog breed and individual identification
- **[LCW (Labeled Cats in the Wild)](https://www.kaggle.com/datasets/dseidli/lcwlabeled-cats-in-the-wild)**: Cat identification dataset
- **Web-scraped Data**: Additional curated images from various sources

**Total Dataset Statistics:**
- **1,904,157** total photographs
- **695,091** unique individual animals (cats and dogs)

## Training Details

**Training Configuration:**
- **Batch Size**: 116 samples (58 unique identities × 2 photos each)
- **Optimizer**: Adam with learning rate 1e-4
- **Training Duration**: 10 epochs
- **Transfer Learning**: Final 5 transformer blocks unfrozen; lower layers frozen to preserve pre-trained features (see the sketch below)
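
A minimal sketch of how this partial unfreezing could be set up with the Hugging Face `SiglipModel`; the attribute path `vision_model.encoder.layers` follows the current transformers layout, and the exact set of modules trained in the original run is an assumption:

```python
from transformers import SiglipModel

model = SiglipModel.from_pretrained("google/siglip2-base-patch16-224")

# Freeze the whole backbone first
for p in model.parameters():
    p.requires_grad = False

# Unfreeze only the final 5 vision transformer blocks
for layer in model.vision_model.encoder.layers[-5:]:
    for p in layer.parameters():
        p.requires_grad = True

trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
print(f"Trainable parameters: {trainable:,}")
```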

**Loss Function:**
The model is trained using a combined loss function consisting of:
1. **Triplet Loss** (margin α = 0.45): Encourages separation between different animal identities
2. **Intra-Pair Variance Regularization** (ε = 0.01): Promotes consistency across multiple photos of the same animal

Combined as: `L_total = 1.0 · L_triplet + 0.5 · L_var`

This approach creates compact feature clusters for each individual animal while maintaining large separation between different identities.
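
A minimal sketch of this combined objective, assuming L2-normalized embeddings and squared Euclidean distances; the exact form of the variance regularizer used in training is an assumption:

```python
import torch.nn.functional as F

def combined_loss(anchor, positive, negative,
                  margin=0.45, eps=0.01, w_triplet=1.0, w_var=0.5):
    """Triplet loss plus intra-pair variance regularization (sketch)."""
    anchor, positive, negative = (F.normalize(x, dim=1)
                                  for x in (anchor, positive, negative))
    d_pos = (anchor - positive).pow(2).sum(dim=1)  # same identity
    d_neg = (anchor - negative).pow(2).sum(dim=1)  # different identities

    # Triplet term: push negatives at least `margin` farther than positives
    l_triplet = F.relu(d_pos - d_neg + margin).mean()

    # Variance term (assumed form): tolerate at most `eps` spread between
    # photos of the same animal
    l_var = F.relu(d_pos - eps).mean()

    return w_triplet * l_triplet + w_var * l_var
```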

## Performance Metrics

The model has been benchmarked against various vision encoders on multiple pet recognition datasets:

### [Cat Individual Images Dataset](https://www.kaggle.com/datasets/timost1234/cat-individuals)

| Model | ROC AUC | EER | Top-1 | Top-5 | Top-10 |
|-------|---------|-----|-------|-------|--------|
| CLIP-ViT-Base | 0.9821 | 0.0604 | 0.8359 | 0.9579 | 0.9711 |
| DINOv2-Small | 0.9904 | 0.0422 | 0.8547 | 0.9660 | 0.9764 |
| SigLIP-Base | 0.9899 | 0.0390 | 0.8649 | 0.9757 | 0.9842 |
| **SigLIP2-Base** | **0.9894** | **0.0388** | **0.8660** | **0.9772** | **0.9863** |
| Zer0int CLIP-L | 0.9881 | 0.0509 | 0.8768 | 0.9767 | 0.9845 |
| SigLIP2-Giant | 0.9940 | 0.0344 | 0.8899 | 0.9868 | 0.9921 |
| SigLIP2-Giant + E5-Small-v2 + gating | 0.9929 | 0.0344 | 0.8952 | 0.9872 | 0.9932 |

### [DogFaceNet Dataset](https://www.springerprofessional.de/en/a-deep-learning-approach-for-dog-face-verification-and-recogniti/17094782)

| Model | ROC AUC | EER | Top-1 | Top-5 | Top-10 |
|-------|---------|-----|-------|-------|--------|
| CLIP-ViT-Base | 0.9739 | 0.0772 | 0.4350 | 0.6417 | 0.7204 |
| DINOv2-Small | 0.9829 | 0.0571 | 0.5581 | 0.7540 | 0.8139 |
| SigLIP-Base | 0.9792 | 0.0606 | 0.5848 | 0.7746 | 0.8319 |
| **SigLIP2-Base** | **0.9776** | **0.0672** | **0.5925** | **0.7856** | **0.8422** |
| Zer0int CLIP-L | 0.9814 | 0.0625 | 0.6289 | 0.8092 | 0.8597 |
| SigLIP2-Giant | 0.9926 | 0.0326 | 0.7475 | 0.9009 | 0.9316 |
| SigLIP2-Giant + E5-Small-v2 + gating | 0.9920 | 0.0314 | 0.7818 | 0.9233 | 0.9482 |

### Combined Test Dataset (Overall Performance)

| Model | ROC AUC | EER | Top-1 | Top-5 | Top-10 |
|-------|---------|-----|-------|-------|--------|
| CLIP-ViT-Base | 0.9752 | 0.0729 | 0.6511 | 0.8122 | 0.8555 |
| DINOv2-Small | 0.9848 | 0.0546 | 0.7180 | 0.8678 | 0.9009 |
| SigLIP-Base | 0.9811 | 0.0572 | 0.7359 | 0.8831 | 0.9140 |
| **SigLIP2-Base** | **0.9793** | **0.0631** | **0.7400** | **0.8889** | **0.9197** |
| Zer0int CLIP-L | 0.9842 | 0.0565 | 0.7626 | 0.8994 | 0.9267 |
| SigLIP2-Giant | 0.9912 | 0.0378 | 0.8243 | 0.9471 | 0.9641 |
| SigLIP2-Giant + E5-Small-v2 + gating | 0.9882 | 0.0422 | 0.8428 | 0.9576 | 0.9722 |

**Metrics Explanation:**
- **ROC AUC**: Area under the receiver operating characteristic curve; measures the model's ability to distinguish between different individuals
- **EER**: Equal error rate; the error rate at the operating point where the false acceptance and false rejection rates are equal
- **Top-K**: Accuracy of correct identification within the top K predictions
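
For reference, ROC AUC and EER can be computed from pairwise similarity scores in the standard way, for example with scikit-learn (a generic sketch, not the exact evaluation script used for the tables above):

```python
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

def auc_and_eer(labels, scores):
    """labels: 1 = same animal, 0 = different; scores: cosine similarities."""
    auc = roc_auc_score(labels, scores)
    fpr, tpr, _ = roc_curve(labels, scores)
    fnr = 1 - tpr
    # EER sits where false acceptance (FPR) and false rejection (FNR) cross
    idx = np.nanargmin(np.abs(fpr - fnr))
    return auc, (fpr[idx] + fnr[idx]) / 2
```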

## Basic Usage

### Installation

```bash
pip install transformers torch pillow safetensors huggingface_hub
```

### Get Image Embedding

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from PIL import Image
from transformers import SiglipModel, SiglipProcessor
from safetensors.torch import load_file
from huggingface_hub import hf_hub_download

class FaceRecognizer(nn.Module):
    def __init__(self):
        super().__init__()
        ckpt = "google/siglip2-base-patch16-224"
        self.clip = SiglipModel.from_pretrained(ckpt)
        self.processor = SiglipProcessor.from_pretrained(ckpt)

    def forward(self, images):
        # Preprocess PIL images and return SigLIP image features
        clip_inputs = self.processor(images=images, return_tensors="pt").to(self.clip.device)
        return self.clip.get_image_features(**clip_inputs)

model = FaceRecognizer()

# Download the fine-tuned weights and load them into the wrapper
weights_path = hf_hub_download(
    repo_id="AvitoTech/SigLIP2-Base-for-animal-identification",
    filename="model.safetensors",
)
state_dict = load_file(weights_path)
model.load_state_dict(state_dict)

device = "cuda" if torch.cuda.is_available() else "cpu"
model = model.to(device).eval()

# Get an embedding for a single image
image = Image.open("your_image.jpg").convert("RGB")

with torch.no_grad():
    embedding = model([image])
    embedding = F.normalize(embedding, dim=1)  # L2-normalize for cosine similarity

print(f"Embedding shape: {embedding.shape}")  # torch.Size([1, 768])
```
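
To verify whether two photos show the same animal, compare the normalized embeddings by cosine similarity. The snippet below continues from the code above; the 0.7 threshold is purely illustrative and should be calibrated on your own data (for example at the EER operating point):

```python
img_a = Image.open("pet_a.jpg").convert("RGB")
img_b = Image.open("pet_b.jpg").convert("RGB")

with torch.no_grad():
    emb = F.normalize(model([img_a, img_b]), dim=1)

similarity = (emb[0] @ emb[1]).item()  # cosine similarity in [-1, 1]
print(f"Cosine similarity: {similarity:.4f}")
print("Same animal" if similarity > 0.7 else "Different animals")  # threshold is illustrative
```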

## Citation

If you use this model in your research or applications, please cite our work:

```
BibTeX citation will be added upon paper publication.
```

## Use Cases

- Individual pet identification and re-identification
- Lost and found pet matching systems
- Veterinary record management
- Animal behavior monitoring
- Wildlife conservation and tracking

config.json ADDED

```json
{
  "architectures": [
    "SiglipModel"
  ],
  "model_type": "siglip",
  "projection_dim": 768,
  "text_config": {
    "hidden_size": 768,
    "intermediate_size": 3072,
    "num_attention_heads": 12,
    "num_hidden_layers": 12,
    "model_type": "siglip_text_model"
  },
  "vision_config": {
    "hidden_size": 768,
    "image_size": 224,
    "intermediate_size": 3072,
    "num_attention_heads": 12,
    "num_channels": 3,
    "num_hidden_layers": 12,
    "patch_size": 16,
    "model_type": "siglip_vision_model"
  }
}
```

model.safetensors ADDED

```
version https://git-lfs.github.com/spec/v1
oid sha256:e03eb9324a95c217a581a322bd8e62f95a7fb7a07e4fd10e493cf297178f14a7
size 1500802912
```