buddhadeb33 committed (verified)
Commit f8bf623 · 1 Parent(s): 92239df

32health/non-ada-classification-dinov3

Files changed (4):
  1. README.md +78 -0
  2. model.safetensors +3 -0
  3. preprocessor_config.json +24 -0
  4. training_args.bin +3 -0
README.md ADDED
---
library_name: transformers
license: other
base_model: facebook/dinov3-vitb16-pretrain-lvd1689m
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
model-index:
- name: outputs_dinov3_448
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# outputs_dinov3_448

This model is a fine-tuned version of [facebook/dinov3-vitb16-pretrain-lvd1689m](https://huggingface.co/facebook/dinov3-vitb16-pretrain-lvd1689m) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0585
- Precision: 0.9585
- Recall: 0.9716
- F1: 0.9650

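The reported F1 is the harmonic mean of precision and recall, F1 = 2·P·R / (P + R), so the three numbers above can be cross-checked directly:

```python
# Sanity-check the reported evaluation metrics:
# F1 is the harmonic mean of precision and recall.
precision = 0.9585
recall = 0.9716

f1 = 2 * precision * recall / (precision + recall)
print(round(f1, 4))  # 0.965
```

The result matches the reported F1 of 0.9650 to four decimal places.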
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: adamw_torch_fused with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 0.05
- num_epochs: 10
- mixed_precision_training: Native AMP

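The listed total_train_batch_size follows from the per-device batch size and gradient accumulation (a single device is assumed, since no multi-GPU settings are listed), and together with the 145 optimizer steps per epoch reported in the results table it suggests a training set of roughly 4,640 images:

```python
# Effective (total) train batch size = per-device batch * grad accumulation steps
train_batch_size = 4
gradient_accumulation_steps = 8
total_train_batch_size = train_batch_size * gradient_accumulation_steps
print(total_train_batch_size)  # 32

# 145 optimizer steps per epoch at this effective batch size
# implies approximately 145 * 32 = 4640 training samples.
steps_per_epoch = 145
print(steps_per_epoch * total_train_batch_size)  # 4640
```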
### Training results

| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1     |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|
| No log        | 1.0   | 145  | 0.3163          | 0.6908    | 0.6712 | 0.6809 |
| No log        | 2.0   | 290  | 0.1855          | 0.8418    | 0.9223 | 0.8802 |
| No log        | 3.0   | 435  | 0.1311          | 0.8984    | 0.9664 | 0.9312 |
| 0.2777        | 4.0   | 580  | 0.1046          | 0.9063    | 0.9548 | 0.9299 |
| 0.2777        | 5.0   | 725  | 0.0824          | 0.9481    | 0.9590 | 0.9535 |
| 0.2777        | 6.0   | 870  | 0.0735          | 0.9316    | 0.9727 | 0.9517 |
| 0.0778        | 7.0   | 1015 | 0.0657          | 0.9430    | 0.9727 | 0.9576 |
| 0.0778        | 8.0   | 1160 | 0.0633          | 0.9603    | 0.9643 | 0.9623 |
| 0.0778        | 9.0   | 1305 | 0.0598          | 0.9595    | 0.9706 | 0.9650 |
| 0.0778        | 10.0  | 1450 | 0.0585          | 0.9585    | 0.9716 | 0.9650 |

### Framework versions

- Transformers 5.0.0
- Pytorch 2.10.0+cu128
- Datasets 4.5.0
- Tokenizers 0.22.2
model.safetensors ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:a83f7d49700ac6827fa42e818ed8eeadd6a265178235361e51d5cc191e927908
size 345051000
preprocessor_config.json ADDED
{
  "data_format": "channels_first",
  "default_to_square": true,
  "do_normalize": true,
  "do_rescale": true,
  "do_resize": true,
  "image_mean": [
    0.485,
    0.456,
    0.406
  ],
  "image_processor_type": "DINOv3ViTImageProcessorFast",
  "image_std": [
    0.229,
    0.224,
    0.225
  ],
  "resample": 2,
  "rescale_factor": 0.00392156862745098,
  "size": {
    "height": 224,
    "width": 224
  }
}
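The config above resizes inputs to 224x224, rescales pixel values by 1/255 (0.00392156862745098), and normalizes with the ImageNet mean/std in channels-first layout. A minimal numpy sketch of the equivalent transform (the hypothetical `preprocess` helper below stands in for the actual `DINOv3ViTImageProcessorFast`; resizing is omitted, the input is assumed already 224x224):

```python
import numpy as np

# Normalization constants taken from preprocessor_config.json
IMAGE_MEAN = np.array([0.485, 0.456, 0.406])
IMAGE_STD = np.array([0.229, 0.224, 0.225])
RESCALE_FACTOR = 1 / 255  # == 0.00392156862745098

def preprocess(image_hwc: np.ndarray) -> np.ndarray:
    """Rescale and normalize an HWC uint8 image, return CHW float32
    ("channels_first"). Assumes the input is already 224x224."""
    x = image_hwc.astype(np.float32) * RESCALE_FACTOR  # do_rescale
    x = (x - IMAGE_MEAN) / IMAGE_STD                   # do_normalize
    return x.transpose(2, 0, 1).astype(np.float32)     # data_format

img = np.zeros((224, 224, 3), dtype=np.uint8)  # dummy black image
out = preprocess(img)
print(out.shape)  # (3, 224, 224)
```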
training_args.bin ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:7d75242fcc579827c4d1a1024a16226b9b718024e826b1fa8e8edf6729413b51
size 5201