ilessio-aiflowlab committed on
Commit e875c7d · verified · 1 Parent(s): 17166c7

Upload folder using huggingface_hub
.gitattributes CHANGED
@@ -33,3 +33,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  *.zip filter=lfs diff=lfs merge=lfs -text
  *.zst filter=lfs diff=lfs merge=lfs -text
  *tfevents* filter=lfs diff=lfs merge=lfs -text
+ onnx/nott_v1.onnx.data filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,94 @@
+ ---
+ tags:
+ - robotics
+ - anima
+ - thermal-slam
+ - depth-estimation
+ - thermal-refinement
+ - uav
+ - robot-flow-labs
+ library_name: pytorch
+ pipeline_tag: image-to-image
+ license: apache-2.0
+ ---
+
+ # NÓTT — Thermal Image Refinement for Monocular ORB-SLAM3
+
+ Part of the [ANIMA Perception Suite](https://github.com/RobotFlow-Labs) by Robot Flow Labs.
+
+ ## Paper
+
+ **Thermal Image Refinement with Depth Estimation using Recurrent Networks for Monocular ORB-SLAM3**
+ Hürkan Şahin, Huy Xuan Pham, Van Huyen Dang, Alper Yegenoglu, Erdal Kayacan
+ [arXiv:2603.14998](https://arxiv.org/abs/2603.14998) (2026)
+
+ ## Architecture
+
+ **T-RefNet** — Lightweight U-Net encoder-decoder with ConvGRU recurrent bottleneck:
+ - Encoder: 3 levels (32 → 64 → 128 channels), BatchNorm + ReLU + MaxPool
+ - Bottleneck: 2x ConvGRU cells for temporal coherence
+ - Decoder: 3 levels with skip connections + bilinear upsampling
+ - Output: Sigmoid-activated refined thermal image
+
+ **Parameters:** 2,048,320 (~8MB)
+ **Input:** Single-channel thermal (1, H, W), tested at 256x320
+
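The recurrent bottleneck can be illustrated with a minimal ConvGRU cell: GRU gating computed with 3x3 convolutions instead of matrix multiplies, so the hidden state keeps its spatial layout across frames. This is a hedged sketch, not the released T-RefNet code; the `ConvGRUCell` name, gate wiring, and kernel size are assumptions.

```python
import torch
import torch.nn as nn

class ConvGRUCell(nn.Module):
    """Minimal ConvGRU cell (illustrative): update/reset gates and candidate
    state are all 3x3 convolutions over [input, hidden] concatenations."""
    def __init__(self, in_ch: int, hid_ch: int, k: int = 3):
        super().__init__()
        pad = k // 2
        self.gates = nn.Conv2d(in_ch + hid_ch, 2 * hid_ch, k, padding=pad)  # z and r
        self.cand = nn.Conv2d(in_ch + hid_ch, hid_ch, k, padding=pad)       # candidate h~
        self.hid_ch = hid_ch

    def forward(self, x, h=None):
        if h is None:  # first frame: zero-initialised hidden state
            h = x.new_zeros(x.size(0), self.hid_ch, x.size(2), x.size(3))
        z, r = torch.sigmoid(self.gates(torch.cat([x, h], dim=1))).chunk(2, dim=1)
        h_tilde = torch.tanh(self.cand(torch.cat([x, r * h], dim=1)))
        return (1 - z) * h + z * h_tilde

cell = ConvGRUCell(128, 128)
feat = torch.randn(1, 128, 32, 40)  # bottleneck features: 256x320 input after 3 MaxPools
h = cell(feat)                      # first frame
h = cell(feat, h)                   # later frames carry the hidden state forward
```

Carrying `h` across consecutive frames is what gives the refinement temporal coherence.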
+ ## Results
+
+ | Metric | Value | Paper Target |
+ |--------|-------|--------------|
+ | Val Loss (L1+SSIM) | **0.037** | — |
+ | Absolute Relative Error | **0.090** | < 0.10 |
+
+ Trained on the VIVID++ dataset (71,917 paired thermal/depth frames, 24 sequences).
+
+ ## Exported Formats
+
+ | Format | File | Size | Use Case |
+ |--------|------|------|----------|
+ | PyTorch (.pth) | `pytorch/nott_v2.pth` | 8.2MB | Training, fine-tuning |
+ | SafeTensors | `pytorch/nott_v2.safetensors` | 8.2MB | Fast safe loading |
+ | ONNX | `onnx/nott_v2.onnx` | 8.2MB | Cross-platform inference |
+ | Checkpoint | `checkpoints/best.pth` | 24MB | Resume training |
+
+ ## Usage
+
+ ```python
+ import torch
+ from anima_nott.thermal_refinement import ThermalRefinementNet
+
+ model = ThermalRefinementNet(in_channels=1, base_channels=32, num_levels=3, gru_layers=2)
+ state = torch.load("pytorch/nott_v2.pth", weights_only=True)
+ model.load_state_dict(state)
+ model.eval()
+
+ thermal = torch.rand(1, 1, 256, 320)  # dummy frame, normalized to [0,1]
+ refined, hidden = model(thermal)
+ ```
+
+ ## Training
+
+ - **Dataset:** VIVID++ (FLIR Boson+ thermal, 24 sequences, bright/dark/dim/aggressive)
+ - **Hardware:** NVIDIA L4 (23GB), bf16 mixed precision
+ - **Optimizer:** Adam (lr=1e-3, weight_decay=1e-5)
+ - **Schedule:** Cosine annealing with linear warmup
+ - **Loss:** L1 + 0.1 x SSIM
+
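The L1 + 0.1 x SSIM objective can be sketched as below. This uses a simplified single-scale SSIM with 3x3 average-pooled local statistics; the training code's exact SSIM window (e.g. Gaussian) is an assumption, as is the `refinement_loss` name.

```python
import torch
import torch.nn.functional as F

def ssim(x, y, c1=0.01 ** 2, c2=0.03 ** 2):
    """Simplified single-scale SSIM with 3x3 average-pool local statistics."""
    mu_x = F.avg_pool2d(x, 3, 1, 1)
    mu_y = F.avg_pool2d(y, 3, 1, 1)
    sigma_x = F.avg_pool2d(x * x, 3, 1, 1) - mu_x ** 2
    sigma_y = F.avg_pool2d(y * y, 3, 1, 1) - mu_y ** 2
    sigma_xy = F.avg_pool2d(x * y, 3, 1, 1) - mu_x * mu_y
    num = (2 * mu_x * mu_y + c1) * (2 * sigma_xy + c2)
    den = (mu_x ** 2 + mu_y ** 2 + c1) * (sigma_x + sigma_y + c2)
    return (num / den).clamp(0, 1).mean()

def refinement_loss(pred, target, lambda_ssim=0.1):
    # SSIM is a similarity in [0, 1], so it enters the loss as (1 - SSIM).
    return F.l1_loss(pred, target) + lambda_ssim * (1.0 - ssim(pred, target))

pred = torch.rand(2, 1, 256, 320)
target = torch.rand(2, 1, 256, 320)
loss = refinement_loss(pred, target)
```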
+ ## Defense Application
+
+ Low-cost thermal SLAM for GPS-denied, low-light UAV navigation using non-radiometric thermal cameras (~$150 FLIR Lepton 3.5). Target: <0.4m trajectory error, 25+ FPS on Jetson Xavier.
+
+ ## Citation
+
+ ```bibtex
+ @article{sahin2026thermal,
+   title={Thermal Image Refinement with Depth Estimation using Recurrent Networks for Monocular ORB-SLAM3},
+   author={Sahin, Hurkan and Pham, Huy Xuan and Dang, Van Huyen and Yegenoglu, Alper and Kayacan, Erdal},
+   journal={arXiv preprint arXiv:2603.14998},
+   year={2026}
+ }
+ ```
+
+ ## License
+
+ Apache-2.0 — Robot Flow Labs / AIFLOW LABS LIMITED
checkpoints/best.pth ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:7974cde31862f10d606e51d23292a1a5db4de5d4410ab5dbbc3d851ce5d5af1d
+ size 24623721
configs/depth.yaml ADDED
@@ -0,0 +1,64 @@
+ model:
+   encoder_channels: [32, 64, 128]
+   depth_range: [0.1, 10.0]
+   predict_uncertainty: false
+
+ training:
+   epochs: 150
+   batch_size: 32
+   num_workers: 4
+   pin_memory: true
+   seed: 42
+   mixed_precision: true
+   mode: joint
+   alpha_refine: 1.0
+   beta_depth: 0.5
+
+ optimizer:
+   type: adam
+   lr: 1.0e-3
+   betas: [0.9, 0.999]
+   weight_decay: 1.0e-5
+   gradient_clip_norm: 1.0
+
+ scheduler:
+   type: cosine_warmup
+   warmup_fraction: 0.05
+   min_lr: 1.0e-6
+
+ loss:
+   lambda_l1: 1.0
+   lambda_ssim: 0.5
+   use_uncertainty: false
+
+ checkpointing:
+   save_every: 5
+   save_best: true
+   keep_last_n: 2
+   checkpoint_dir: /mnt/artifacts-datai/checkpoints/project_nott
+
+ early_stopping:
+   enabled: true
+   patience: 10
+   min_delta: 1.0e-4
+
+ logging:
+   log_every: 10
+   tensorboard_dir: /mnt/artifacts-datai/tensorboard/project_nott
+   log_dir: /mnt/artifacts-datai/logs/project_nott
+
+ transfer:
+   pretrained_checkpoint: /mnt/artifacts-datai/checkpoints/project_nott/
+   freeze_encoder: false
+   finetune_lr: 1.0e-4
+   finetune_epochs: 50
+
+ data:
+   dataset: vivid_plus_plus
+   root: /mnt/forge-data/datasets/vivid_plus_plus
+   modality: depth
+   resolution: null  # native 256x320
+   max_depth: 10.0
+   noise_sigma: 0.03
+   augmentation: true
+   seed: 42
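In `mode: joint`, the `alpha_refine`/`beta_depth` weights above combine the two task losses into one objective. A minimal sketch, assuming plain L1 terms for both tasks (the actual per-task losses in the repo may differ):

```python
import torch
import torch.nn.functional as F

def joint_loss(refined, thermal_gt, depth_pred, depth_gt,
               alpha_refine=1.0, beta_depth=0.5):
    # Weighted sum of refinement and depth terms; plain L1 for both is an assumption.
    return (alpha_refine * F.l1_loss(refined, thermal_gt)
            + beta_depth * F.l1_loss(depth_pred, depth_gt))

thermal = torch.rand(2, 1, 256, 320)
depth = torch.rand(2, 1, 256, 320) * 10.0  # matches depth_range [0.1, 10.0]
loss = joint_loss(torch.rand_like(thermal), thermal, torch.rand_like(depth), depth)
```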
configs/training.yaml ADDED
@@ -0,0 +1,57 @@
+ # NÓTT Training Config — Paper-aligned (arXiv:2603.14998)
+ # Loss weights from Section III-B: 0.9·L_SIlog + 0.4·L_SSIM + 0.1·L_ord + 0.1·L_sm
+
+ training:
+   epochs: 150
+   batch_size: 220  # Peak 20.3GB/23GB (88%) — tested with bf16 forward+backward
+   num_workers: 2
+   pin_memory: true
+   seed: 42
+   mixed_precision: true  # bf16 on CUDA
+
+ optimizer:
+   type: adam
+   lr: 5.0e-4  # Mid-range — v15 epoch 0 worked at ~3e-4, cosine will decay to 1e-6
+   betas: [0.9, 0.999]
+   weight_decay: 1.0e-5
+   gradient_clip_norm: 1.0
+
+ scheduler:
+   type: cosine_warmup
+   warmup_fraction: 0.0  # No warmup — resume already at good weights
+   min_lr: 1.0e-6
+
+ loss:
+   lambda_l1: 1.0
+   lambda_perceptual: 0.0
+   lambda_ssim: 0.1
+   use_perceptual: false
+
+ checkpointing:
+   save_every: 5  # Save periodic checkpoint every 5 epochs (for resume)
+   save_best: true
+   keep_last_n: 2  # Keep top 2 best checkpoints by val_loss
+   checkpoint_dir: /mnt/artifacts-datai/checkpoints/project_nott
+
+ early_stopping:
+   enabled: true
+   patience: 30
+   min_delta: 1.0e-4
+
+ logging:
+   backend: console
+   project: anima-nott
+   log_every: 10
+   log_images_every: 50
+   tensorboard_dir: /mnt/artifacts-datai/tensorboard/project_nott
+   log_dir: /mnt/artifacts-datai/logs/project_nott
+
+ data:
+   dataset: vivid_plus_plus
+   root: /mnt/forge-data/datasets/vivid_plus_plus
+   modality: refinement
+   resolution: null  # native 256x320
+   max_depth: 10.0
+   noise_sigma: 0.03
+   augmentation: true
+   seed: 42
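The `cosine_warmup` schedule above maps to a linear warmup followed by cosine decay toward `min_lr`. A sketch with `LambdaLR` using this config's values (lr 5e-4, no warmup, min_lr 1e-6); the repo's scheduler implementation is not shown here, so treat this as illustrative:

```python
import math

import torch

def cosine_warmup_lambda(step, total_steps, warmup_steps, min_lr_ratio):
    """LR multiplier: linear warmup to 1.0, then cosine decay to min_lr_ratio."""
    if warmup_steps > 0 and step < warmup_steps:
        return step / warmup_steps
    t = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    return min_lr_ratio + (1.0 - min_lr_ratio) * 0.5 * (1.0 + math.cos(math.pi * t))

model = torch.nn.Linear(4, 4)  # stand-in for T-RefNet
opt = torch.optim.Adam(model.parameters(), lr=5.0e-4, weight_decay=1.0e-5)
sched = torch.optim.lr_scheduler.LambdaLR(
    opt, lambda s: cosine_warmup_lambda(s, 150, 0, 1.0e-6 / 5.0e-4))

lrs = []
for _ in range(150):  # one scheduler step per epoch
    opt.step()
    sched.step()
    lrs.append(opt.param_groups[0]["lr"])
# lrs decays monotonically from ~5e-4 toward min_lr = 1e-6
```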
configs/training_sol.yaml ADDED
@@ -0,0 +1,59 @@
+ # NÓTT Fine-tuning Config — SOL Thermal Synthetic + VIVID++ Combined
+ # Resume from VIVID++ checkpoint, fine-tune on combined dataset
+ # DDP-safe: BatchNorm frozen (no running stat divergence)
+
+ training:
+   epochs: 50  # Fine-tuning, not from scratch
+   batch_size: 220  # Per-GPU (same as proven single-GPU config)
+   num_workers: 4  # Per process
+   pin_memory: true
+   seed: 42
+   mixed_precision: true
+   freeze_bn: true  # CRITICAL: freeze BatchNorm for DDP compatibility
+
+ optimizer:
+   type: adam
+   lr: 1.0e-4  # 10x lower than pretraining — gentle fine-tuning
+   betas: [0.9, 0.999]
+   weight_decay: 1.0e-5
+   gradient_clip_norm: 1.0
+
+ scheduler:
+   type: cosine_warmup
+   warmup_fraction: 0.04  # ~2 epochs warmup
+   min_lr: 1.0e-7
+
+ loss:
+   lambda_l1: 1.0
+   lambda_perceptual: 0.0
+   lambda_ssim: 0.1
+   use_perceptual: false
+
+ checkpointing:
+   save_every: 5
+   save_best: true
+   keep_last_n: 2
+   checkpoint_dir: /mnt/artifacts-datai/checkpoints/project_nott
+
+ early_stopping:
+   enabled: true
+   patience: 15
+   min_delta: 1.0e-4
+
+ logging:
+   backend: console
+   project: anima-nott-sol
+   log_every: 10
+   tensorboard_dir: /mnt/artifacts-datai/tensorboard/project_nott_sol
+   log_dir: /mnt/artifacts-datai/logs/project_nott
+
+ data:
+   dataset: combined
+   vivid_root: /mnt/forge-data/datasets/vivid_plus_plus
+   sol_root: /mnt/artifacts-datai/datasets/sol_thermal_synthetic
+   modality: refinement
+   resolution: null  # native 256x320
+   max_depth: 10.0
+   noise_sigma: 0.03
+   augmentation: true
+   seed: 42
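`freeze_bn: true` above implies putting every BatchNorm layer in eval mode and freezing its affine parameters, so running statistics can no longer drift apart across DDP replicas. A hypothetical helper sketching that behaviour (the repo's actual implementation isn't shown here):

```python
import torch.nn as nn

def freeze_batchnorm(model: nn.Module) -> None:
    """Put all BatchNorm layers in eval mode (stored running stats are used,
    not updated) and stop gradients on their affine weight/bias."""
    for m in model.modules():
        if isinstance(m, nn.modules.batchnorm._BatchNorm):
            m.eval()
            for p in m.parameters():
                p.requires_grad_(False)

net = nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.BatchNorm2d(8), nn.ReLU())
net.train()
freeze_batchnorm(net)  # note: model.train() re-enables BN, so re-apply after every train() call
```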
export_manifest.json ADDED
@@ -0,0 +1,21 @@
+ {
+   "module": "project_nott",
+   "version": "v2",
+   "paper": "arXiv:2603.14998",
+   "best_checkpoint": "v15_epoch0",
+   "val_loss": 0.0366,
+   "val_ARE": 0.0899,
+   "architecture": "T-RefNet (ConvGRU encoder-decoder)",
+   "parameters": 2048320,
+   "input_shape": [
+     1,
+     1,
+     256,
+     320
+   ],
+   "formats": [
+     "pth",
+     "safetensors",
+     "onnx"
+   ]
+ }
logs/training_history.json ADDED
@@ -0,0 +1,212 @@
+ [
+   {
+     "epoch": 1,
+     "train_loss": 0.08995985405735013,
+     "val_loss": 0.11606864236733493,
+     "val_are": 0.24683485486928156,
+     "lr": 0.00055
+   },
+   {
+     "epoch": 2,
+     "train_loss": 0.12767553086183508,
+     "val_loss": 0.10721209130304701,
+     "val_are": 0.2314927643712829,
+     "lr": 0.0007750000000000001
+   },
+   {
+     "epoch": 3,
+     "train_loss": 0.12751515166593247,
+     "val_loss": 0.10882455242030761,
+     "val_are": 0.2113809901125291,
+     "lr": 0.001
+   },
+   {
+     "epoch": 4,
+     "train_loss": 0.12657773051233517,
+     "val_loss": 0.10856413468718529,
+     "val_are": 0.21922510774696574,
+     "lr": 0.0009998843667389555
+   },
+   {
+     "epoch": 5,
+     "train_loss": 0.12593891963261325,
+     "val_loss": 0.10219317510285798,
+     "val_are": 0.23196267468087814,
+     "lr": 0.0009995375204935638
+   },
+   {
+     "epoch": 6,
+     "train_loss": 0.12425504074919791,
+     "val_loss": 0.10901784069617004,
+     "val_are": 0.23595025083597967,
+     "lr": 0.0009989596218522635
+   },
+   {
+     "epoch": 7,
+     "train_loss": 0.12599383525195576,
+     "val_loss": 0.10752283779027708,
+     "val_are": 0.19789807936724493,
+     "lr": 0.0009981509383798367
+   },
+   {
+     "epoch": 8,
+     "train_loss": 0.12460020296022195,
+     "val_loss": 0.11346339686390232,
+     "val_are": 0.22122144085519455,
+     "lr": 0.000997111844493529
+   },
+   {
+     "epoch": 9,
+     "train_loss": 0.12462475721021088,
+     "val_loss": 0.11580582938211806,
+     "val_are": 0.252922221141703,
+     "lr": 0.0009958428212896954
+   },
+   {
+     "epoch": 10,
+     "train_loss": 0.1240629677717783,
+     "val_loss": 0.11680787418256788,
+     "val_are": 0.29679954051971436,
+     "lr": 0.0009943444563210542
+   },
+   {
+     "epoch": 11,
+     "train_loss": 0.1235267119798936,
+     "val_loss": 0.13542297526317484,
+     "val_are": 0.40350808641489816,
+     "lr": 0.0009926174433246525
+   },
+   {
+     "epoch": 12,
+     "train_loss": 0.12494298311419227,
+     "val_loss": 0.10566405962933512,
+     "val_are": 0.20771524529246724,
+     "lr": 0.000990662581900669
+   },
+   {
+     "epoch": 13,
+     "train_loss": 0.1240543836892462,
+     "val_loss": 0.11496848021360005,
+     "val_are": 0.2263655903584817,
+     "lr": 0.0009884807771422025
+   },
+   {
+     "epoch": 14,
+     "train_loss": 0.12463467063851097,
+     "val_loss": 0.11092703652513378,
+     "val_are": 0.21460236871943755,
+     "lr": 0.0009860730392162163
+   },
+   {
+     "epoch": 15,
+     "train_loss": 0.12369378600396266,
+     "val_loss": 0.10656596610651296,
+     "val_are": 0.20960291957153993,
+     "lr": 0.000983440482895836
+   },
+   {
+     "epoch": 16,
+     "train_loss": 0.12360544479927238,
+     "val_loss": 0.10888551471426207,
+     "val_are": 0.2213323598398882,
+     "lr": 0.0009805843270442142
+   },
+   {
+     "epoch": 17,
+     "train_loss": 0.1213240637158861,
+     "val_loss": 0.11186534889480647,
+     "val_are": 0.27077462743310365,
+     "lr": 0.0009775058940502
+   },
+   {
+     "epoch": 18,
+     "train_loss": 0.12229459575649833,
+     "val_loss": 0.11027245683705106,
+     "val_are": 0.24529138558051167,
+     "lr": 0.0009742066092160797
+   },
+   {
+     "epoch": 19,
+     "train_loss": 0.12246036856454245,
+     "val_loss": 0.11329149583573728,
+     "val_are": 0.22803892940282822,
+     "lr": 0.0009706880000976672
+   },
+   {
+     "epoch": 20,
+     "train_loss": 0.1211167235865074,
+     "val_loss": 0.106539951856522,
+     "val_are": 0.21184978108195698,
+     "lr": 0.0009669516957970512
+   },
+   {
+     "epoch": 21,
+     "train_loss": 0.12227428233136936,
+     "val_loss": 0.10863438640337657,
+     "val_are": 0.21262053531758926,
+     "lr": 0.0009629994262083282
+   },
+   {
+     "epoch": 22,
+     "train_loss": 0.12078849018431034,
+     "val_loss": 0.10464029233245288,
+     "val_are": 0.20529338323018131,
+     "lr": 0.0009588330212166673
+   },
+   {
+     "epoch": 23,
+     "train_loss": 0.12076749657692552,
+     "val_loss": 0.10519357121494763,
+     "val_are": 0.1972528174519539,
+     "lr": 0.0009544544098510819
+   },
+   {
+     "epoch": 24,
+     "train_loss": 0.1205340421625546,
+     "val_loss": 0.11209521026295774,
+     "val_are": 0.23057624040281072,
+     "lr": 0.0009498656193912957
+   },
+   {
+     "epoch": 25,
+     "train_loss": 0.12104494641630018,
+     "val_loss": 0.10841717346407034,
+     "val_are": 0.21237122092176886,
+     "lr": 0.0009450687744291213
+   },
+   {
+     "epoch": 26,
+     "train_loss": 0.12009178565777077,
+     "val_loss": 0.11279275761369396,
+     "val_are": 0.2372470002840547,
+     "lr": 0.0009400660958847813
+   },
+   {
+     "epoch": 27,
+     "train_loss": 0.12139157487117515,
+     "val_loss": 0.10750346915686831,
+     "val_are": 0.22311256606789195,
+     "lr": 0.0009348598999786324
+   },
+   {
+     "epoch": 28,
+     "train_loss": 0.12134746196014541,
+     "val_loss": 0.10868307148270748,
+     "val_are": 0.2574108821504256,
+     "lr": 0.0009294525971587638
+   },
+   {
+     "epoch": 29,
+     "train_loss": 0.12045418344387392,
+     "val_loss": 0.11045770336161642,
+     "val_are": 0.2446950358503005,
+     "lr": 0.0009238466909849694
+   },
+   {
+     "epoch": 30,
+     "train_loss": 0.11914502350347382,
+     "val_loss": 0.10676391389878358,
+     "val_are": 0.2178314670043833,
+     "lr": 0.0009180447769696094
+   }
+ ]
logs/training_history_depth.json ADDED
@@ -0,0 +1,22 @@
+ [
+   {
+     "epoch": 0,
+     "train_loss": 1.5525195367874638,
+     "train_refine": 0.22647310408853716,
+     "train_depth": 2.6520928798183316,
+     "val_loss": 1.5210160527910506,
+     "val_are_refine": 0.22593858412333898,
+     "val_are_depth": 0.8450886181422642,
+     "lr": 0.001
+   },
+   {
+     "epoch": 1,
+     "train_loss": 1.658759770854827,
+     "train_refine": 0.4102027195115243,
+     "train_depth": 2.497114127682101,
+     "val_loss": 1.7499962193625314,
+     "val_are_refine": 0.6625208514077323,
+     "val_are_depth": 0.793292156287602,
+     "lr": 1e-06
+   }
+ ]
onnx/nott_v1.onnx ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:10e25c93cff20ec650b4dada8617f000ab34698b22f5e99e5bdd23c5383a5dae
+ size 9035361
onnx/nott_v1.onnx.data ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:f2059c0151c8870b2c0acba3b7de2cb8c84c8ffdc74d0d542dfd5d0303f05017
+ size 9043968
onnx/nott_v2.onnx ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:7b3ecabd4b6d5b92b6e0ac53a369276969e4fd5c7b2ac376dd318db78c588e41
+ size 8207290
pytorch/nott_v1.pth ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:b95d48205e45ab83f355c465388d8f17ea06509d82b16b5442423c00b3d55d90
+ size 9030828
pytorch/nott_v1.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:6723765aa74b399ae5679ff540bd08af3d8e03b589206f50e644cf5cde5a57a3
+ size 9015116
pytorch/nott_v2.pth ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:b241697b2acb980901ca91458ab8fde78b037225c32d76b9b41f9643a49fdb63
+ size 8209243
pytorch/nott_v2.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:7c0fd337dc9bd8d437cfef565a27492b49b89b8962bae06ffb3dfcb278e80026
+ size 8199512