AbstractPhil committed
Commit cbf67dc · verified · 1 Parent(s): 2521c6e

Upload GeoDavidCollective Enhanced (Epoch 40)

.gitattributes CHANGED
@@ -33,3 +33,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 *.zip filter=lfs diff=lfs merge=lfs -text
 *.zst filter=lfs diff=lfs merge=lfs -text
 *tfevents* filter=lfs diff=lfs merge=lfs -text
+prompts_enhanced.jsonl filter=lfs diff=lfs merge=lfs -text
README.md ADDED
---
license: mit
tags:
- geometric-deep-learning
- diffusion
- stable-diffusion
- projective-geometry
- multi-expert
- classification
library_name: pytorch
---

# GeoDavidCollective Enhanced - ProjectiveHead Architecture

**A geometric classification system trained on Stable Diffusion features**

## 🎯 Model Overview

GeoDavidCollective Enhanced is a multi-expert geometric classification system that learns from Stable Diffusion 1.5's internal representations. Built on a ProjectiveHead architecture with Cayley-Menger geometry, it performs pattern recognition across both timestep and semantic spaces.

### Key Features

- **ProjectiveHead Multi-Expert Architecture**: Auto-configured expert systems per block
- **Geometric Loss Functions**: Rose, Cayley-Menger, and Cantor coherence losses
- **9-Block Processing**: Full SD1.5 UNet feature extraction (down, mid, up)
- **Scale**: 690,925,542 parameters
- **100 Timestep Bins** × **10 Patterns** = 1,000 semantic-temporal classes

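The bin/pattern factorization above implies a simple flattening into joint class indices. A minimal sketch of that mapping (the helper names are illustrative, not part of the package API):

```python
# Illustrative sketch: how 100 timestep bins x 10 patterns
# factor into 1000 joint semantic-temporal classes.
NUM_TIMESTEP_BINS = 100
NUM_PATTERNS_PER_BIN = 10  # 100 * 10 = 1000 joint classes

def to_joint_class(timestep_bin: int, pattern: int) -> int:
    """Flatten (bin, pattern) into a single class index in [0, 999]."""
    return timestep_bin * NUM_PATTERNS_PER_BIN + pattern

def from_joint_class(joint: int) -> tuple:
    """Invert the flattening back to (timestep_bin, pattern)."""
    return divmod(joint, NUM_PATTERNS_PER_BIN)

# A diffusion timestep t in [0, 1000) falls into bin t // 10.
assert to_joint_class(42, 7) == 427
assert from_joint_class(427) == (42, 7)
```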
## 📊 Model Statistics

- **Parameters**: 690,925,542
- **Trained Epochs**: 40
- **Base Model**: Stable Diffusion 1.5
- **Dataset Size**: 10,000 synthetic prompts
- **Training Date**: 2025-10-28

## 🏗️ Architecture Details

### Block Configuration

```
Down Blocks:
- down_0: 320 → 128 (3 experts, 3 gates)
- down_1: 640 → 192 (3 experts, 3 gates)
- down_2: 1280 → 256 (3 experts, 3 gates)
- down_3: 1280 → 256 (3 experts, 3 gates)

Mid Block (Highest Capacity):
- mid: 1280 → 256 (4 experts, 4 gates)

Up Blocks:
- up_0: 1280 → 256 (3 experts, 3 gates)
- up_1: 1280 → 256 (3 experts, 3 gates)
- up_2: 640 → 192 (3 experts, 3 gates)
- up_3: 320 → 128 (3 experts, 3 gates)
```

### Loss Components

| Component | Weight | Purpose |
|-----------|--------|---------|
| Feature Similarity | 0.40 | Alignment with SD1.5 features |
| Rose Loss | 0.25 | Geometric pattern emergence |
| Cross-Entropy | 0.15 | Classification accuracy |
| Cayley-Menger | 0.10 | 5D geometric structure |
| Pattern Diversity | 0.05 | Prevent mode collapse |
| Cantor Coherence | 0.05 | Temporal consistency |

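The table corresponds to a weighted sum of the individual loss terms, with weights matching `loss_config` in `config.json`. A hedged sketch of the combination (the per-component loss values are stand-ins; the actual loss functions live in the training repo):

```python
# Loss weights mirroring loss_config in config.json.
LOSS_WEIGHTS = {
    "feature_similarity": 0.40,
    "rose": 0.25,
    "cross_entropy": 0.15,
    "cayley_menger": 0.10,
    "pattern_diversity": 0.05,
    "cantor_coherence": 0.05,
}

def total_loss(components: dict) -> float:
    """Weighted sum of per-component loss values (floats or tensors)."""
    return sum(LOSS_WEIGHTS[name] * value for name, value in components.items())

# The weights sum to 1.0, so the objective is a convex combination
# of the six terms.
assert abs(sum(LOSS_WEIGHTS.values()) - 1.0) < 1e-9
```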
## 💻 Usage

```python
from geovocab2.train.model.core.geo_david_collective import GeoDavidCollective
from safetensors.torch import load_file
import torch

# Load model
state_dict = load_file("model.safetensors")
collective = GeoDavidCollective(
    block_configs={...},  # See config.json
    num_timestep_bins=100,
    num_patterns_per_bin=10
)
collective.load_state_dict(state_dict)
collective.eval()

# Extract features from SD1.5 and classify
with torch.no_grad():
    results = collective(features_dict, timesteps)
    predictions = results['predictions']  # Timestep + pattern class
```

## 🔬 Training Details

- **Optimizer**: AdamW (lr=1e-3, weight_decay=0.001)
- **Batch Size**: 16
- **Data**: Symbolic prompt synthesis (complexity 1-5)
- **Feature Extraction**: SD1.5 UNet blocks (spatial, not pooled)
- **Pool Mode**: Mean spatial pooling

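Mean spatial pooling over hooked UNet activations can be sketched as follows (the pooling helper and hook factory are illustrative, and the UNet module path in the comment is an assumption; the real extraction code lives in the training repo):

```python
import torch

def mean_spatial_pool(features: torch.Tensor) -> torch.Tensor:
    """Collapse a (B, C, H, W) activation map to a (B, C) feature vector."""
    return features.mean(dim=(2, 3))

# Hook factory: records the pooled activation of each hooked block.
captured = {}

def make_hook(name: str):
    def hook(module, inputs, output):
        captured[name] = mean_spatial_pool(output.detach().float())
    return hook

# Illustrative registration on an SD1.5 UNet (module path is an assumption):
#   unet.down_blocks[0].register_forward_hook(make_hook("down_0"))

x = torch.randn(2, 320, 64, 64)  # shaped like a down_0 activation
assert mean_spatial_pool(x).shape == (2, 320)
```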
## 📈 Training Metrics

Final metrics from epoch 40:
- Cayley Loss: 0.1018
- Timestep Accuracy: 39.08%
- Pattern Accuracy: 44.25%
- Full Accuracy: 26.57%

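These values are the last entries of the per-epoch arrays in `training_history.json`. A small sketch of reading them out (the inline dict below is a stand-in with the same layout as the shipped file):

```python
def final_metrics(history: dict) -> dict:
    """Take the last per-epoch value of each metric series."""
    return {metric: values[-1] for metric, values in history.items()}

# In this repo: history = json.load(open("training_history.json"))
# Tiny inline stand-in with the same layout:
history = {
    "avg_timestep_acc": [0.063, 0.3908],
    "avg_pattern_acc": [0.082, 0.4425],
    "avg_full_acc": [0.018, 0.2657],
}
final = final_metrics(history)
print(f"final joint accuracy: {final['avg_full_acc']:.2%}")
```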
## 🎓 Research Context

This model is part of geometric deep learning research exploring:
- 5D simplex-based neural representations (pentachora)
- Geometric alternatives to traditional transformers
- Consciousness-informed AI architectures
- Universal mathematical principles in neural networks

## 📦 Files Included

- `model.safetensors` - Model weights (~2.8GB)
- `config.json` - Complete architecture configuration
- `training_history.json` - Full training metrics
- `prompts_enhanced.jsonl` - All training prompts with metadata
- `tensorboard/` - TensorBoard logs (optional)

## 🔗 Related Work

- [Geometric Vocabulary System](https://huggingface.co/datasets/AbstractPhil/geometric-vocab-frozen-v1)
- [PentachoraViT](https://huggingface.co/AbstractPhil/pentachora-vit-cifar100)
- [Crystal-Beeper Language Models](https://huggingface.co/AbstractPhil)

## 📜 License

MIT License - free for research and commercial use.

## 🙏 Acknowledgments

Built with:
- PyTorch & Diffusers
- Stable Diffusion 1.5 (Runway ML)
- Geometric algebra principles from the 1800s
- Dream-inspired mathematical insights

## 👤 Author

**AbstractPhil** - AI researcher specializing in geometric deep learning

*"Working with universal mathematical principles, not against them"*

---

For questions, issues, or collaborations: [GitHub](https://github.com/AbstractEyes) | [HuggingFace](https://huggingface.co/AbstractPhil)
config.json ADDED
{
  "model_type": "GeoDavidCollective",
  "architecture": "ProjectiveHead Enhanced Multi-Expert System",
  "framework": "pytorch",
  "version": "1.0",
  "trained_epoch": 24,
  "training_date": "2025-10-28T21:07:48.816441",
  "num_blocks": 9,
  "total_parameters": 690925542,
  "num_timestep_bins": 100,
  "num_patterns_per_bin": 10,
  "block_configs": {
    "down_0": {"input_dim": 320, "scale_dim": 64, "use_belly": true, "belly_expand": 2.0, "num_experts": 3, "num_gate_heads": 3, "projective_head": "auto"},
    "down_1": {"input_dim": 640, "scale_dim": 96, "use_belly": true, "belly_expand": 2.0, "num_experts": 3, "num_gate_heads": 3, "projective_head": "auto"},
    "down_2": {"input_dim": 1280, "scale_dim": 128, "use_belly": true, "belly_expand": 2.0, "num_experts": 3, "num_gate_heads": 3, "projective_head": "auto"},
    "down_3": {"input_dim": 1280, "scale_dim": 128, "use_belly": true, "belly_expand": 2.0, "num_experts": 3, "num_gate_heads": 3, "projective_head": "auto"},
    "mid": {"input_dim": 1280, "scale_dim": 256, "use_belly": true, "belly_expand": 1.5, "num_experts": 4, "num_gate_heads": 4, "projective_head": "custom"},
    "up_0": {"input_dim": 1280, "scale_dim": 128, "use_belly": true, "belly_expand": 2.0, "num_experts": 3, "num_gate_heads": 3, "projective_head": "auto"},
    "up_1": {"input_dim": 1280, "scale_dim": 128, "use_belly": true, "belly_expand": 2.0, "num_experts": 3, "num_gate_heads": 3, "projective_head": "auto"},
    "up_2": {"input_dim": 640, "scale_dim": 96, "use_belly": true, "belly_expand": 2.0, "num_experts": 3, "num_gate_heads": 3, "projective_head": "auto"},
    "up_3": {"input_dim": 320, "scale_dim": 64, "use_belly": true, "belly_expand": 1.5, "num_experts": 3, "num_gate_heads": 3, "projective_head": "auto"}
  },
  "block_weights": {
    "down_0": 0.8, "down_1": 1.0, "down_2": 1.2, "down_3": 1.3, "mid": 1.5,
    "up_0": 1.3, "up_1": 1.2, "up_2": 1.0, "up_3": 0.8
  },
  "loss_config": {
    "feature_similarity_weight": 0.4,
    "rose_weight": 0.25,
    "ce_weight": 0.15,
    "pattern_diversity_weight": 0.05,
    "cayley_weight": 0.1,
    "cantor_coherence_weight": 0.05,
    "use_soft_assignment": true,
    "temperature": 0.1,
    "cayley_volume_floor": 0.0001,
    "cayley_chaos_scale": 1.0,
    "cayley_edge_weight": 0.5,
    "cayley_gram_weight": 0.1
  },
  "training": {
    "base_model": "runwayml/stable-diffusion-v1-5",
    "sd_blocks_used": ["down_0", "down_1", "down_2", "down_3", "mid", "up_0", "up_1", "up_2", "up_3"],
    "dataset": {
      "type": "SymbolicPromptDataset",
      "num_samples": 50000,
      "complexity_distribution": {"1": 0.05, "2": 0.15, "3": 0.4, "4": 0.25, "5": 0.15},
      "seed": 42
    },
    "batch_size": 16,
    "num_epochs": 10,
    "optimizer": {"type": "AdamW", "learning_rate": 0.001, "weight_decay": 0.001},
    "pool_mode": "mean",
    "checkpoint_interval": 2,
    "num_workers": 2,
    "pin_memory": true
  },
  "feature_extraction": {
    "method": "SD1.5 UNet Hooks",
    "spatial_features": true,
    "pooling": "mean",
    "dtype": "float32"
  },
  "capabilities": {
    "timestep_classification": true,
    "pattern_classification": true,
    "joint_classification": true,
    "num_classes": 1000,
    "geometric_constraints": true,
    "multi_expert_routing": true
  },
  "companions": {
    "type": "GeoDavidCompanion",
    "timestep_head": "ProjectiveHead",
    "pattern_head": "ProjectiveHead",
    "geometric_features": ["cayley_menger_volume", "edge_lengths", "gram_matrix"],
    "loss_functions": ["rose", "cayley", "cantor"]
  }
}
model.safetensors ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:c9e0bf44df94d39a19cc2e6a7dc9e974339b93593d970169702da1bfed7d7015
size 2763785644
prompts_enhanced.jsonl ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:279a2b52284c976b7f19ed016128d1d0f916dac4a39f045ee7924995aacf454e
size 228089722
tensorboard/events.out.tfevents.1761656195.f89433d759fd.684.0 ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:2fdd1e24e95cfd7a1c0d2428da6dd233fc3abd8fd92f43b198a598242a0c57c3
size 2333248
tensorboard/events.out.tfevents.1761660572.f89433d759fd.684.1 ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:2116b7421c0c924f5245c07fd56c584122041a4b7fa5b3d026b7041cf460abf4
size 1166652
tensorboard/events.out.tfevents.1761662663.f89433d759fd.28594.0 ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:380c041ec0ed2f87eb8576f533c3d4c3294c71756b7c23bce0670be9cf107c39
size 1169096
tensorboard/events.out.tfevents.1761674326.f89433d759fd.76744.0 ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:2eddf19802bf88c55d9792485cc85dec1542f3d9ad40654a0a2561bd543989c5
size 2950188
training_history.json ADDED
{
  "total_loss": [
    11.719756037139893, 7.8648234016418455, 7.125033406066895, 6.716861158752441,
    6.446229772949219, 6.233362403869629, 6.067157294464112, 5.95510294342041,
    5.876095720672607, 5.85686838684082, 5.794893002098215, 5.784234556721635,
    5.571503051406587, 5.488801901297801, 5.413781613645042, 5.351544192380003,
    5.296270893662787, 5.240060384316212, 5.192795874212709, 5.140646377182983,
    5.105855121027173, 5.058046649484074, 5.018269288875258, 4.987630459963513,
    4.944288519642237, 4.908310048720416, 4.870066664712813, 4.837560897592998,
    4.802918886589577, 4.768753634694288, 4.748480112656303, 4.718195804244722,
    4.703543686196017, 4.6877408161797485, 4.668632875622996, 4.6690439812057765,
    4.665792374964565, 4.675015425133278, 4.678463229742806, 4.6863949548862776
  ],
  "avg_cayley": [
    0.10170330944458651, 0.10205006939172744, 0.10206027217706042, 0.10202216378582846,
    0.10192394400835038, 0.10182618655363716, 0.10171649098263842, 0.10162741211652761,
    0.10155284156666863, 0.10150773673322458, 0.10152638941397346, 0.10162004937349044,
    0.10152639062245744, 0.10155124912786366, 0.10159211712708087, 0.1016362269609345,
    0.10167814686398982, 0.10171816758673848, 0.10175300719394846, 0.10178235071174253,
    0.10180282750508093, 0.10183623000771932, 0.10184790959206451, 0.1018604352926286,
    0.10186575625211419, 0.10186681535246354, 0.10186645624464155, 0.10185816364680955,
    0.10185301550734129, 0.10184590464061415, 0.101841584112655, 0.10183086264248634,
    0.10182470228903062, 0.10181715409291277, 0.10181007499966166, 0.10180566860264952,
    0.10180918714833495, 0.10181159597672534, 0.10181339752033865, 0.10181575899869082
  ],
  "avg_timestep_acc": [
    0.0634111111111111, 0.12052222222222221, 0.15123333333333336, 0.1704222222222223,
    0.18192222222222218, 0.19726666666666665, 0.21104444444444476, 0.21638888888888896,
    0.22354444444444438, 0.2249666666666668, 0.22400831733845142, 0.22758272908224908,
    0.2424756678185975, 0.2526103829365288, 0.26092684357452206, 0.270631305083662,
    0.2805648799408697, 0.2908878587810925, 0.29875319694364955, 0.30826895071061783,
    0.31431159422830546, 0.32387974213795, 0.3312522201087196, 0.33577854149244674,
    0.34667607986069954, 0.35246563300757167, 0.3598207942665084, 0.36531995953772345,
    0.370214372001002, 0.37626767194078, 0.3801066531956143, 0.3851977657123398,
    0.3871119281265947, 0.3888049694532424, 0.39303646277520926, 0.3933912333303232,
    0.39369449774017723, 0.3920374218714306, 0.3919082125773245, 0.3907981671098624
  ],
  "avg_pattern_acc": [
    0.0820111111111111, 0.12741111111111114, 0.16217777777777784, 0.1818666666666667,
    0.19773333333333343, 0.22198888888888887, 0.24277777777777756, 0.2632222222222222,
    0.27948888888888873, 0.2828666666666664, 0.2836403462003267, 0.270406803156323,
    0.2920560706327435, 0.2990009590885998, 0.30693467605872815, 0.3138888889109082,
    0.3184254227264864, 0.32430999575521036, 0.3302860365448501, 0.3374125284332551,
    0.3410730143692995, 0.3501296533410879, 0.3571389244255881, 0.3601613562235474,
    0.3715575270081624, 0.3805164819534068, 0.3889594878043327, 0.3966454426210419,
    0.40604930380150206, 0.4170005683551358, 0.4230809534060138, 0.4323662617525682,
    0.4380084008495048, 0.4446069551025747, 0.4499560422215078, 0.45174543552352586,
    0.4510718599304824, 0.4495226804693164, 0.4463372939933459, 0.4425151854304638
  ],
  "avg_full_acc": [
    0.01841111111111113, 0.04238888888888888, 0.061477777777777715, 0.07117777777777788,
    0.07523333333333333, 0.08498888888888892, 0.09247777777777781, 0.09476666666666668,
    0.09685555555555558, 0.09683333333333316, 0.09546500675339441, 0.10136756238003854,
    0.10941229753448183, 0.11789482097588974, 0.12584985081772287, 0.13452108198859936,
    0.1418522662766602, 0.1523577365754307, 0.16068574169644853, 0.16868428532357027,
    0.17564293834491693, 0.18700536375409607, 0.1945732097271389, 0.19920831558785004,
    0.21028834542841277, 0.21756269538239945, 0.22606875178538885, 0.23244485295472683,
    0.23901943024856592, 0.24671648551571548, 0.2522915778774265, 0.25830136402979864,
    0.26286942313411205, 0.2663607381603941, 0.2700722861752387, 0.27051319623999587,
    0.2707520780085029, 0.26951903951679623, 0.2671724033922287, 0.26574444089483956
  ]
}