HichTala committed on
Commit e6a6ef1 · verified · 1 parent: 7ac60e4

Upload DiffusionDet

Browse files
Files changed (4)
  1. README.md +199 -0
  2. config.json +129 -0
  3. configuration_diffusiondet.py +167 -0
  4. model.safetensors +3 -0
README.md ADDED
@@ -0,0 +1,199 @@
+ ---
+ library_name: transformers
+ tags: []
+ ---
+
+ # Model Card for DiffusionDet
+
+ <!-- Provide a quick summary of what the model is/does. -->
+
+
+
+ ## Model Details
+
+ ### Model Description
+
+ <!-- Provide a longer summary of what this model is. -->
+
+ This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
+
+ - **Developed by:** [More Information Needed]
+ - **Funded by [optional]:** [More Information Needed]
+ - **Shared by [optional]:** [More Information Needed]
+ - **Model type:** [More Information Needed]
+ - **Language(s) (NLP):** [More Information Needed]
+ - **License:** [More Information Needed]
+ - **Finetuned from model [optional]:** [More Information Needed]
+
+ ### Model Sources [optional]
+
+ <!-- Provide the basic links for the model. -->
+
+ - **Repository:** [More Information Needed]
+ - **Paper [optional]:** [More Information Needed]
+ - **Demo [optional]:** [More Information Needed]
+
+ ## Uses
+
+ <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
+
+ ### Direct Use
+
+ <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
+
+ [More Information Needed]
+
+ ### Downstream Use [optional]
+
+ <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
+
+ [More Information Needed]
+
+ ### Out-of-Scope Use
+
+ <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
+
+ [More Information Needed]
+
+ ## Bias, Risks, and Limitations
+
+ <!-- This section is meant to convey both technical and sociotechnical limitations. -->
+
+ [More Information Needed]
+
+ ### Recommendations
+
+ <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
+
+ Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed for further recommendations.
+
+ ## How to Get Started with the Model
+
+ Use the code below to get started with the model.
+
+ [More Information Needed]
+
+ ## Training Details
+
+ ### Training Data
+
+ <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
+
+ [More Information Needed]
+
+ ### Training Procedure
+
+ <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
+
+ #### Preprocessing [optional]
+
+ [More Information Needed]
+
+
+ #### Training Hyperparameters
+
+ - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
+
+ #### Speeds, Sizes, Times [optional]
+
+ <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
+
+ [More Information Needed]
+
+ ## Evaluation
+
+ <!-- This section describes the evaluation protocols and provides the results. -->
+
+ ### Testing Data, Factors & Metrics
+
+ #### Testing Data
+
+ <!-- This should link to a Dataset Card if possible. -->
+
+ [More Information Needed]
+
+ #### Factors
+
+ <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
+
+ [More Information Needed]
+
+ #### Metrics
+
+ <!-- These are the evaluation metrics being used, ideally with a description of why. -->
+
+ [More Information Needed]
+
+ ### Results
+
+ [More Information Needed]
+
+ #### Summary
+
+
+
+ ## Model Examination [optional]
+
+ <!-- Relevant interpretability work for the model goes here -->
+
+ [More Information Needed]
+
+ ## Environmental Impact
+
+ <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
+
+ Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
+
+ - **Hardware Type:** [More Information Needed]
+ - **Hours used:** [More Information Needed]
+ - **Cloud Provider:** [More Information Needed]
+ - **Compute Region:** [More Information Needed]
+ - **Carbon Emitted:** [More Information Needed]
+
+ ## Technical Specifications [optional]
+
+ ### Model Architecture and Objective
+
+ [More Information Needed]
+
+ ### Compute Infrastructure
+
+ [More Information Needed]
+
+ #### Hardware
+
+ [More Information Needed]
+
+ #### Software
+
+ [More Information Needed]
+
+ ## Citation [optional]
+
+ <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
+
+ **BibTeX:**
+
+ [More Information Needed]
+
+ **APA:**
+
+ [More Information Needed]
+
+ ## Glossary [optional]
+
+ <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
+
+ [More Information Needed]
+
+ ## More Information [optional]
+
+ [More Information Needed]
+
+ ## Model Card Authors [optional]
+
+ [More Information Needed]
+
+ ## Model Card Contact
+
+ [More Information Needed]
config.json ADDED
@@ -0,0 +1,129 @@
+ {
+   "activation": "relu",
+   "alpha": 0.25,
+   "architectures": [
+     "DiffusionDet"
+   ],
+   "auto_map": {
+     "AutoConfig": "configuration_diffusiondet.DiffusionDetConfig",
+     "AutoModelForObjectDetection": "modeling_diffusiondet.DiffusionDet"
+   },
+   "backbone": "resnet50",
+   "backbone_config": null,
+   "backbone_kwargs": {
+     "in_chans": 3,
+     "out_indices": [
+       1,
+       2,
+       3,
+       4
+     ]
+   },
+   "backbone_multiplier": 1.0,
+   "class_weight": 2.0,
+   "deep_supervision": true,
+   "dilation": false,
+   "dim_dynamic": 64,
+   "dim_feedforward": 2048,
+   "dropout": 0.0,
+   "fpn_out_channels": 256,
+   "gamma": 2.0,
+   "giou_weight": 2.0,
+   "hidden_dim": 256,
+   "id2label": {
+     "0": "plane",
+     "1": "ship",
+     "2": "storage-tank",
+     "3": "baseball-diamond",
+     "4": "tennis-court",
+     "5": "basketball-court",
+     "6": "ground-track-field",
+     "7": "harbor",
+     "8": "bridge",
+     "9": "small-vehicle",
+     "10": "large-vehicle",
+     "11": "roundabout",
+     "12": "swimming-pool",
+     "13": "helicopter",
+     "14": "soccer-ball-field",
+     "15": "container-crane"
+   },
+   "l1_weight": 5.0,
+   "label2id": {
+     "baseball-diamond": 3,
+     "basketball-court": 5,
+     "bridge": 8,
+     "container-crane": 15,
+     "ground-track-field": 6,
+     "harbor": 7,
+     "helicopter": 13,
+     "large-vehicle": 10,
+     "plane": 0,
+     "roundabout": 11,
+     "ship": 1,
+     "small-vehicle": 9,
+     "soccer-ball-field": 14,
+     "storage-tank": 2,
+     "swimming-pool": 12,
+     "tennis-court": 4
+   },
+   "model_type": "diffusiondet",
+   "no_object_weight": 0.1,
+   "num_attn_heads": 8,
+   "num_channels": 3,
+   "num_cls": 1,
+   "num_dynamic": 2,
+   "num_heads": 6,
+   "num_proposals": 300,
+   "num_reg": 3,
+   "optimizer": "ADAMW",
+   "ota_k": 5,
+   "pixel_mean": [
+     123.675,
+     116.28,
+     103.53
+   ],
+   "pixel_std": [
+     58.395,
+     57.12,
+     57.375
+   ],
+   "pooler_resolution": 7,
+   "prior_prob": 0.01,
+   "resnet_in_features": [
+     "res2",
+     "res3",
+     "res4",
+     "res5"
+   ],
+   "resnet_out_features": [
+     "res2",
+     "res3",
+     "res4",
+     "res5"
+   ],
+   "roi_head_in_features": [
+     "p2",
+     "p3",
+     "p4",
+     "p5"
+   ],
+   "sample_step": 1,
+   "sampling_ratio": 2,
+   "snr_scale": 2.0,
+   "swin_out_features": [
+     0,
+     1,
+     2,
+     3
+   ],
+   "swin_size": "B",
+   "torch_dtype": "float32",
+   "transformers_version": "4.52.0.dev0",
+   "use_fed_loss": false,
+   "use_focal": true,
+   "use_nms": true,
+   "use_pretrained_backbone": true,
+   "use_swin_checkpoint": false,
+   "use_timm_backbone": true
+ }
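
The config above carries matching `id2label` / `label2id` tables for the 16 DOTA aerial-object categories. A minimal sanity check (a sketch I wrote for illustration, not part of the upload) is that the two tables are exact inverses:

```python
# Class names copied from id2label in the config.json above.
id2label = {
    0: "plane", 1: "ship", 2: "storage-tank", 3: "baseball-diamond",
    4: "tennis-court", 5: "basketball-court", 6: "ground-track-field",
    7: "harbor", 8: "bridge", 9: "small-vehicle", 10: "large-vehicle",
    11: "roundabout", 12: "swimming-pool", 13: "helicopter",
    14: "soccer-ball-field", 15: "container-crane",
}

# Invert the mapping and check that every label maps back to its original id.
label2id = {name: idx for idx, name in id2label.items()}
assert all(label2id[name] == idx for idx, name in id2label.items())
assert len(label2id) == 16  # 16 DOTA categories, no duplicate names
```

The derived `label2id` agrees entry-for-entry with the one stored in the config (e.g. `container-crane` → 15).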
configuration_diffusiondet.py ADDED
@@ -0,0 +1,167 @@
+ from transformers import PretrainedConfig
+
+ from transformers.models.auto import CONFIG_MAPPING
+ from transformers.utils.backbone_utils import verify_backbone_config_arguments
+
+ from transformers.utils import logging
+
+ logger = logging.get_logger(__name__)
+
+ class DiffusionDetConfig(PretrainedConfig):
+
+     model_type = "diffusiondet"
+
+     def __init__(
+         self,
+         use_timm_backbone=True,
+         backbone_config=None,
+         num_channels=3,
+         pixel_mean=(123.675, 116.280, 103.530),
+         pixel_std=(58.395, 57.120, 57.375),
+         resnet_out_features=("res2", "res3", "res4", "res5"),
+         resnet_in_features=("res2", "res3", "res4", "res5"),
+         roi_head_in_features=("p2", "p3", "p4", "p5"),
+         fpn_out_channels=256,
+         pooler_resolution=7,
+         sampling_ratio=2,
+         num_proposals=300,
+         num_attn_heads=8,
+         dropout=0.0,
+         dim_feedforward=2048,
+         activation="relu",
+         hidden_dim=256,
+         num_cls=1,
+         num_reg=3,
+         num_heads=6,
+         num_dynamic=2,
+         dim_dynamic=64,
+         class_weight=2.0,
+         giou_weight=2.0,
+         l1_weight=5.0,
+         deep_supervision=True,
+         no_object_weight=0.1,
+         use_focal=True,
+         use_fed_loss=False,
+         alpha=0.25,
+         gamma=2.0,
+         prior_prob=0.01,
+         ota_k=5,
+         snr_scale=2.0,
+         sample_step=1,
+         use_nms=True,
+         swin_size="B",
+         use_swin_checkpoint=False,
+         swin_out_features=(0, 1, 2, 3),
+         optimizer="ADAMW",
+         backbone_multiplier=1.0,
+         backbone="resnet50",
+         use_pretrained_backbone=True,
+         backbone_kwargs=None,
+         dilation=False,
+         **kwargs,
+     ):
+         # We default to values which were previously hard-coded in the model. This enables
+         # configurability of the config while keeping the default behavior the same.
+         if use_timm_backbone and backbone_kwargs is None:
+             backbone_kwargs = {}
+             if dilation:
+                 backbone_kwargs["output_stride"] = 16
+             backbone_kwargs["out_indices"] = [1, 2, 3, 4]
+             backbone_kwargs["in_chans"] = num_channels
+         # Backwards compatibility
+         elif not use_timm_backbone and backbone in (None, "resnet50"):
+             if backbone_config is None:
+                 logger.info("`backbone_config` is `None`. Initializing the config with the default `ResNet` backbone.")
+                 backbone_config = CONFIG_MAPPING["resnet"](out_features=["stage4"])
+             elif isinstance(backbone_config, dict):
+                 backbone_model_type = backbone_config.get("model_type")
+                 config_class = CONFIG_MAPPING[backbone_model_type]
+                 backbone_config = config_class.from_dict(backbone_config)
+             backbone = None
+             # set timm attributes to None
+             dilation = None
+
+         verify_backbone_config_arguments(
+             use_timm_backbone=use_timm_backbone,
+             use_pretrained_backbone=use_pretrained_backbone,
+             backbone=backbone,
+             backbone_config=backbone_config,
+             backbone_kwargs=backbone_kwargs,
+         )
+
+         # Auto mapping
+         self.auto_map = {
+             "AutoConfig": "configuration_diffusiondet.DiffusionDetConfig",
+             "AutoModelForObjectDetection": "modeling_diffusiondet.DiffusionDet",
+         }
+
+         # Backbone.
+         self.use_timm_backbone = use_timm_backbone
+         self.backbone_config = backbone_config
+         self.num_channels = num_channels
+         self.backbone = backbone
+         self.use_pretrained_backbone = use_pretrained_backbone
+         self.backbone_kwargs = backbone_kwargs
+         self.dilation = dilation
+         self.fpn_out_channels = fpn_out_channels
+
+         # Model.
+         self.pixel_mean = pixel_mean
+         self.pixel_std = pixel_std
+         self.resnet_out_features = resnet_out_features
+         self.resnet_in_features = resnet_in_features
+         self.roi_head_in_features = roi_head_in_features
+         self.pooler_resolution = pooler_resolution
+         self.sampling_ratio = sampling_ratio
+         self.num_proposals = num_proposals
+
+         # RCNN Head.
+         self.num_attn_heads = num_attn_heads
+         self.dropout = dropout
+         self.dim_feedforward = dim_feedforward
+         self.activation = activation
+         self.hidden_dim = hidden_dim
+         self.num_cls = num_cls
+         self.num_reg = num_reg
+         self.num_heads = num_heads
+
+         # Dynamic Conv.
+         self.num_dynamic = num_dynamic
+         self.dim_dynamic = dim_dynamic
+
+         # Loss.
+         self.class_weight = class_weight
+         self.giou_weight = giou_weight
+         self.l1_weight = l1_weight
+         self.deep_supervision = deep_supervision
+         self.no_object_weight = no_object_weight
+
+         # Focal Loss.
+         self.use_focal = use_focal
+         self.use_fed_loss = use_fed_loss
+         self.alpha = alpha
+         self.gamma = gamma
+         self.prior_prob = prior_prob
+
+         # Dynamic K
+         self.ota_k = ota_k
+
+         # Diffusion
+         self.snr_scale = snr_scale
+         self.sample_step = sample_step
+
+         # Inference
+         self.use_nms = use_nms
+
+         # Swin Backbones
+         self.swin_size = swin_size
+         self.use_swin_checkpoint = use_swin_checkpoint
+         self.swin_out_features = swin_out_features
+
+         # Optimizer.
+         self.optimizer = optimizer
+         self.backbone_multiplier = backbone_multiplier
+
+         self.num_labels = 80
+
+         # Forward the remaining kwargs (e.g. id2label, torch_dtype) to PretrainedConfig.
+         super().__init__(**kwargs)
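
The timm-backbone branch at the top of `__init__` fills in `backbone_kwargs` defaults when none are given. The behavior can be sketched as a standalone function (the function name is mine, for illustration only):

```python
def default_backbone_kwargs(use_timm_backbone, backbone_kwargs, dilation, num_channels):
    """Standalone sketch of the backbone_kwargs defaulting logic in
    DiffusionDetConfig.__init__ (not part of the uploaded file)."""
    if use_timm_backbone and backbone_kwargs is None:
        backbone_kwargs = {}
        if dilation:
            # A dilated final stage is emulated via a reduced output stride.
            backbone_kwargs["output_stride"] = 16
        backbone_kwargs["out_indices"] = [1, 2, 3, 4]  # feature maps res2..res5
        backbone_kwargs["in_chans"] = num_channels
    return backbone_kwargs

print(default_backbone_kwargs(True, None, False, 3))
# → {'out_indices': [1, 2, 3, 4], 'in_chans': 3}
```

This matches the `backbone_kwargs` stored in the uploaded config.json (`in_chans: 3`, `out_indices: [1, 2, 3, 4]`); user-supplied kwargs are passed through untouched.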
model.safetensors ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:e517fed9e7068145013593fed1b44f53526265d171c484d525571a38dbd251dc
3
+ size 442808528
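
The `model.safetensors` entry is not the weights themselves but a Git LFS pointer file: three `key value` lines giving the spec version, the SHA-256 object id, and the byte size. A minimal parser (my sketch, not an official LFS tool) shows how to read it:

```python
def parse_lfs_pointer(text):
    """Parse a Git LFS pointer file into a dict of its key/value fields."""
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")  # split on the first space only
        fields[key] = value
    return fields

# The pointer content from the commit above.
pointer = """version https://git-lfs.github.com/spec/v1
oid sha256:e517fed9e7068145013593fed1b44f53526265d171c484d525571a38dbd251dc
size 442808528"""

info = parse_lfs_pointer(pointer)
print(int(info["size"]) / 1e6)  # size in MB: a ~443 MB checkpoint
```

The `size` field here (442808528 bytes) is what the Hub displays as the checkpoint size; the actual tensor data is fetched from LFS storage by `oid`.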