buddhadeb33 committed
Commit a764f6a · verified · 1 Parent(s): d0bf0cf

32health/non-ada-classification-dinov3

Files changed (4):
  1. README.md +78 -0
  2. model.safetensors +3 -0
  3. preprocessor_config.json +24 -0
  4. training_args.bin +3 -0
README.md ADDED
@@ -0,0 +1,78 @@
+ ---
+ library_name: transformers
+ license: other
+ base_model: facebook/dinov3-vitb16-pretrain-lvd1689m
+ tags:
+ - generated_from_trainer
+ metrics:
+ - precision
+ - recall
+ - f1
+ model-index:
+ - name: output_dinov3_baseline
+   results: []
+ ---
+
+ <!-- This model card has been generated automatically according to the information the Trainer had access to. You
+ should probably proofread and complete it, then remove this comment. -->
+
+ # output_dinov3_baseline
+
+ This model is a fine-tuned version of [facebook/dinov3-vitb16-pretrain-lvd1689m](https://huggingface.co/facebook/dinov3-vitb16-pretrain-lvd1689m) on an unknown dataset.
+ It achieves the following results on the evaluation set:
+ - Loss: 0.0540
+ - Precision: 0.9657
+ - Recall: 0.9748
+ - F1: 0.9702
+
+ ## Model description
+
+ More information needed
+
+ ## Intended uses & limitations
+
+ More information needed
+
+ ## Training and evaluation data
+
+ More information needed
+
+ ## Training procedure
+
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training:
+ - learning_rate: 5e-06
+ - train_batch_size: 4
+ - eval_batch_size: 4
+ - seed: 42
+ - gradient_accumulation_steps: 8
+ - total_train_batch_size: 32
+ - optimizer: adamw_torch_fused with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
+ - lr_scheduler_type: linear
+ - lr_scheduler_warmup_steps: 0.1
+ - num_epochs: 10
+ - mixed_precision_training: Native AMP
+
+ ### Training results
+
+ | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 |
+ |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|
+ | 0.4394 | 1.0 | 145 | 0.4057 | 0.6110 | 0.5578 | 0.5832 |
+ | 0.2090 | 2.0 | 290 | 0.2168 | 0.8060 | 0.8992 | 0.8500 |
+ | 0.1433 | 3.0 | 435 | 0.1430 | 0.9130 | 0.9265 | 0.9197 |
+ | 0.1036 | 4.0 | 580 | 0.1033 | 0.9238 | 0.9674 | 0.9451 |
+ | 0.0768 | 5.0 | 725 | 0.0816 | 0.9542 | 0.9632 | 0.9587 |
+ | 0.0623 | 6.0 | 870 | 0.0682 | 0.9585 | 0.9706 | 0.9645 |
+ | 0.0553 | 7.0 | 1015 | 0.0606 | 0.9606 | 0.9727 | 0.9666 |
+ | 0.0490 | 8.0 | 1160 | 0.0576 | 0.9596 | 0.9727 | 0.9661 |
+ | 0.0480 | 9.0 | 1305 | 0.0554 | 0.9656 | 0.9737 | 0.9697 |
+ | 0.0482 | 10.0 | 1450 | 0.0540 | 0.9657 | 0.9748 | 0.9702 |
+
+
+ ### Framework versions
+
+ - Transformers 5.0.0
+ - Pytorch 2.10.0+cu128
+ - Datasets 4.5.0
+ - Tokenizers 0.22.2
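As a quick sanity check on the card's numbers (editorial note, not part of the generated file): the reported F1 is the harmonic mean of the reported precision and recall, and `total_train_batch_size` is `train_batch_size × gradient_accumulation_steps`:

```python
# Values copied from the evaluation results and hyperparameters above.
precision, recall = 0.9657, 0.9748

# F1 is the harmonic mean of precision and recall.
f1 = 2 * precision * recall / (precision + recall)
print(round(f1, 4))  # 0.9702

# Effective (total) batch size = per-device batch size x gradient accumulation steps.
train_batch_size, grad_accum_steps = 4, 8
print(train_batch_size * grad_accum_steps)  # 32
```

Both results match the card (F1 = 0.9702, total_train_batch_size = 32), as does the step count in the table (145 steps per epoch × 10 epochs = 1450).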
model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:f8f1668f55a8fe2861888d36ee4794fff2c896059db088a446b42fa3b903a2db
+ size 345051000
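The file committed above is a Git LFS pointer, not the 345 MB weights file itself; the real content is fetched by the LFS filter on checkout. A minimal sketch of reading such a pointer (the `parse_lfs_pointer` helper is hypothetical, written here for illustration):

```python
def parse_lfs_pointer(text: str) -> dict:
    """Parse a git-lfs pointer file into a {key: value} dict (hypothetical helper)."""
    fields = {}
    for line in text.strip().splitlines():
        # Each pointer line is "<key> <value>", separated by a single space.
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields

# The pointer content from the diff above.
pointer = """version https://git-lfs.github.com/spec/v1
oid sha256:f8f1668f55a8fe2861888d36ee4794fff2c896059db088a446b42fa3b903a2db
size 345051000"""

info = parse_lfs_pointer(pointer)
print(info["size"])  # 345051000
```

The `size` field (roughly 345 MB) is consistent with a ViT-B/16 checkpoint stored in safetensors.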
preprocessor_config.json ADDED
@@ -0,0 +1,24 @@
+ {
+   "data_format": "channels_first",
+   "default_to_square": true,
+   "do_normalize": true,
+   "do_rescale": true,
+   "do_resize": true,
+   "image_mean": [
+     0.485,
+     0.456,
+     0.406
+   ],
+   "image_processor_type": "DINOv3ViTImageProcessorFast",
+   "image_std": [
+     0.229,
+     0.224,
+     0.225
+   ],
+   "resample": 2,
+   "rescale_factor": 0.00392156862745098,
+   "size": {
+     "height": 224,
+     "width": 224
+   }
+ }
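The rescale and normalize steps this config describes can be sketched in plain NumPy (editorial example; resizing and interpolation are omitted, and the dummy image stands in for real input):

```python
import numpy as np

# Constants from the preprocessor config above (ImageNet mean/std),
# reshaped for broadcasting over a channels_first (C, H, W) array.
image_mean = np.array([0.485, 0.456, 0.406]).reshape(3, 1, 1)
image_std = np.array([0.229, 0.224, 0.225]).reshape(3, 1, 1)
rescale_factor = 1 / 255  # == 0.00392156862745098 in the config

# Dummy uint8 image in the channels_first layout the config specifies.
img = np.full((3, 224, 224), 128, dtype=np.uint8)

pixels = img.astype(np.float32) * rescale_factor  # do_rescale: bytes -> [0, 1]
pixels = (pixels - image_mean) / image_std        # do_normalize: per-channel standardize
print(pixels.shape)  # (3, 224, 224)
```

`rescale_factor` is simply 1/255, so the two steps map raw bytes into [0, 1] and then standardize each channel with the ImageNet statistics listed in the config.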
training_args.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:93906609772f2b1afb026d8c1ddf7d7629227115361e3a69b9d45a25dca008c9
+ size 5201