---
library_name: transformers
license: other
base_model: facebook/mask2former-swin-tiny-coco-instance
tags:
- image-segmentation
- instance-segmentation
- vision
- generated_from_trainer
model-index:
- name: finetune-instance-segmentation-ade20k-mini-mask2former_augmentation
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# finetune-instance-segmentation-ade20k-mini-mask2former_augmentation

This model is a fine-tuned version of [facebook/mask2former-swin-tiny-coco-instance](https://huggingface.co/facebook/mask2former-swin-tiny-coco-instance) on the yeray142/kitti-mots-instance dataset.
It achieves the following results on the evaluation set:
- Loss: 21.8491
- mAP: 0.2024
- mAP@50: 0.3976
- mAP@75: 0.1846
- mAP (small): 0.1131
- mAP (medium): 0.4171
- mAP (large): 0.9371
- mAR@1: 0.098
- mAR@10: 0.2621
- mAR@100: 0.3113
- mAR (small): 0.2456
- mAR (medium): 0.5068
- mAR (large): 0.9545
- mAP (car): 0.3761
- mAR@100 (car): 0.5206
- mAP (person): 0.0288
- mAR@100 (person): 0.102
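
The metric names above match the output keys of a COCO-style mask-IoU `MeanAveragePrecision` metric from `torchmetrics`, which is the usual choice for Hugging Face instance-segmentation fine-tuning scripts. The sketch below is only an illustration of how such numbers can be computed; the masks, scores, and label ids (0 = car, 1 = person) are placeholders, not the actual evaluation pipeline.

```python
import torch
from torchmetrics.detection.mean_ap import MeanAveragePrecision


def rect_mask(h, w, y0, y1, x0, x1):
    """Helper: boolean mask with a filled rectangle (placeholder data)."""
    m = torch.zeros((h, w), dtype=torch.bool)
    m[y0:y1, x0:x1] = True
    return m


# COCO-style mAP/mAR over instance masks, with per-class breakdown.
metric = MeanAveragePrecision(iou_type="segm", class_metrics=True)

# One dummy image with two perfectly predicted instances; in practice these
# dicts come from post-processed Mask2Former outputs on the validation set.
pred_masks = torch.stack([
    rect_mask(256, 256, 10, 60, 10, 120),
    rect_mask(256, 256, 100, 200, 150, 180),
])
preds = [{"masks": pred_masks, "scores": torch.tensor([0.9, 0.6]),
          "labels": torch.tensor([0, 1])}]          # label ids are illustrative
targets = [{"masks": pred_masks.clone(), "labels": torch.tensor([0, 1])}]

metric.update(preds, targets)
results = metric.compute()
print(results["map"], results["map_50"], results["mar_100"])   # overall metrics
print(results["map_per_class"], results["mar_100_per_class"])  # per-class values
```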

## Model description

More information needed

## Intended uses & limitations

More information needed
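
The checkpoint exposes the standard Mask2Former instance-segmentation interface in `transformers`, so inference can follow the usual pattern shown below. This is a minimal sketch: the repository id is assumed from the card metadata, and the image path and score threshold are placeholders.

```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, Mask2FormerForUniversalSegmentation

# Assumed Hub repo id for this checkpoint (adjust if the namespace differs).
model_id = "yeray142/finetune-instance-segmentation-ade20k-mini-mask2former_augmentation"
processor = AutoImageProcessor.from_pretrained(model_id)
model = Mask2FormerForUniversalSegmentation.from_pretrained(model_id)
model.eval()

# Run instance segmentation on a single image (path is a placeholder).
image = Image.open("example.jpg").convert("RGB")
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Post-process into per-instance masks and labels at the original resolution.
result = processor.post_process_instance_segmentation(
    outputs, target_sizes=[image.size[::-1]], threshold=0.5
)[0]
for segment in result["segments_info"]:
    label = model.config.id2label[segment["label_id"]]
    print(f"instance {segment['id']}: {label} (score={segment['score']:.3f})")
```

`result["segmentation"]` additionally holds an (H, W) map assigning each pixel to one of the reported instance ids.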

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: AdamW (torch implementation) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: constant
- num_epochs: 10.0
- mixed_precision_training: Native AMP
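
As a rough guide, these settings correspond to `transformers.TrainingArguments` along the lines of the sketch below; the output directory is a placeholder, and `fp16=True` is an assumption for "Native AMP" (it could equally have been `bf16` depending on hardware).

```python
from transformers import TrainingArguments

# Mirror of the reported hyperparameters (output_dir is a placeholder).
training_args = TrainingArguments(
    output_dir="finetune-instance-segmentation-ade20k-mini-mask2former_augmentation",
    learning_rate=1e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=2,   # effective train batch size: 8 * 2 = 16
    seed=42,
    optim="adamw_torch",             # AdamW with default betas=(0.9, 0.999), eps=1e-8
    lr_scheduler_type="constant",
    num_train_epochs=10.0,
    fp16=True,                       # assumption: native AMP via fp16
)
```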

### Training results

| Training Loss | Epoch  | Step | Validation Loss | mAP    | mAP@50 | mAP@75 | mAP (small) | mAP (medium) | mAP (large) | mAR@1  | mAR@10 | mAR@100 | mAR (small) | mAR (medium) | mAR (large) | mAP (car) | mAR@100 (car) | mAP (person) | mAR@100 (person) |
|:-------------:|:------:|:----:|:---------------:|:------:|:------:|:------:|:-----------:|:------------:|:-----------:|:------:|:------:|:-------:|:-----------:|:------------:|:-----------:|:---------:|:-------------:|:------------:|:----------------:|
| 32.1236       | 1.0    | 315  | 25.0016         | 0.172  | 0.3235 | 0.1677 | 0.0881    | 0.3483     | 0.8944    | 0.0877 | 0.2218 | 0.2617  | 0.1935    | 0.4549     | 0.9347    | 0.3376  | 0.4775      | 0.0065     | 0.0459         |
| 25.8006       | 2.0    | 630  | 23.8844         | 0.1836 | 0.3505 | 0.1708 | 0.0973    | 0.3691     | 0.9132    | 0.09   | 0.2324 | 0.276   | 0.2074    | 0.4765     | 0.9434    | 0.3541  | 0.4901      | 0.0131     | 0.0619         |
| 24.098        | 3.0    | 945  | 23.1822         | 0.1892 | 0.3583 | 0.1751 | 0.0975    | 0.3823     | 0.9297    | 0.0904 | 0.2392 | 0.283   | 0.215     | 0.4814     | 0.9495    | 0.3616  | 0.4967      | 0.0168     | 0.0693         |
| 23.0237       | 4.0    | 1260 | 22.7127         | 0.1913 | 0.3692 | 0.1751 | 0.1017    | 0.3846     | 0.9289    | 0.0933 | 0.2437 | 0.289   | 0.2225    | 0.4827     | 0.9486    | 0.3635  | 0.5003      | 0.0191     | 0.0778         |
| 22.25         | 5.0    | 1575 | 22.5918         | 0.1933 | 0.3765 | 0.1754 | 0.1053    | 0.3951     | 0.9267    | 0.0934 | 0.2477 | 0.2916  | 0.2253    | 0.4829     | 0.9474    | 0.3648  | 0.5         | 0.0218     | 0.0832         |
| 21.7056       | 6.0    | 1890 | 21.9666         | 0.2019 | 0.3913 | 0.1833 | 0.1101    | 0.4037     | 0.9311    | 0.0965 | 0.256  | 0.2998  | 0.235     | 0.4911     | 0.9497    | 0.3775  | 0.5145      | 0.0263     | 0.0852         |
| 21.218        | 7.0    | 2205 | 22.1376         | 0.2002 | 0.3859 | 0.1841 | 0.1087    | 0.412      | 0.9299    | 0.0974 | 0.255  | 0.3003  | 0.2331    | 0.5004     | 0.9524    | 0.3751  | 0.5113      | 0.0254     | 0.0892         |
| 20.7151       | 8.0    | 2520 | 21.7431         | 0.2013 | 0.3953 | 0.1819 | 0.1105    | 0.411      | 0.9349    | 0.0973 | 0.2595 | 0.3059  | 0.2401    | 0.5016     | 0.9533    | 0.375   | 0.5178      | 0.0277     | 0.094          |
| 20.4197       | 9.0    | 2835 | 21.8546         | 0.2024 | 0.3925 | 0.184  | 0.1112    | 0.4136     | 0.9325    | 0.0971 | 0.2589 | 0.3044  | 0.2387    | 0.4965     | 0.9531    | 0.3781  | 0.5164      | 0.0267     | 0.0925         |
| 20.1339       | 9.9698 | 3140 | 21.8491         | 0.2024 | 0.3976 | 0.1846 | 0.1131    | 0.4171     | 0.9371    | 0.098  | 0.2621 | 0.3113  | 0.2456    | 0.5068     | 0.9545    | 0.3761  | 0.5206      | 0.0288     | 0.102          |


### Framework versions

- Transformers 4.50.0.dev0
- Pytorch 2.6.0+cu124
- Datasets 3.3.2
- Tokenizers 0.21.0