---
pretty_name: KEY Neuroevolution Dataset
license: mit
tags:
- neuroevolution
- lora
- genetic-algorithms
- provenance
- world-model
language:
- en
configs:
- config_name: comm_events
data_files:
- split: train
path: data/comm_events/train.jsonl
- config_name: crossovers
data_files:
- split: train
path: data/crossovers/train.jsonl
- config_name: selection
data_files:
- split: train
path: data/selection/train.jsonl
- config_name: mutations
data_files:
- split: train
path: data/mutations/train.jsonl
- config_name: fitness
data_files:
- split: train
path: data/fitness/train.jsonl
- config_name: performance
data_files:
- split: train
path: data/performance/train.jsonl
- config_name: errors
data_files:
- split: train
path: data/errors/train.jsonl
- config_name: evolution_events
data_files:
- split: train
path: data/evolution_events/train.jsonl
---
# KEY: Neuroevolution Dataset
**40,000+ logged events from real evolutionary runs**: every mutation, crossover, selection, and fitness evaluation.
KEY evolves LoRA adapters on frozen base models (MiniLM-L6, DreamerV3) using NEAT-style neuroevolution. This dataset captures the complete evolutionary history.
---
## Links
| | |
|---|---|
| **[Live Demo](https://huggingface.co/spaces/tostido/Cascade-Hyperlattice)** | Watch evolution in action |
| **[Champion Model](https://huggingface.co/datasets/tostido/key-data/tree/main/models)** | The evolved DreamerV3 model |
---
## Loading the Dataset
```python
from datasets import load_dataset
# Available configs:
ds = load_dataset("tostido/key-data", "comm_events") # 16,968 rows - pod communication
ds = load_dataset("tostido/key-data", "crossovers") # 8,878 rows - breeding events
ds = load_dataset("tostido/key-data", "selection") # 4,266 rows - tournament selection
ds = load_dataset("tostido/key-data", "mutations") # 3,848 rows - mutation events
ds = load_dataset("tostido/key-data", "fitness") # 2,121 rows - fitness evaluations
ds = load_dataset("tostido/key-data", "performance") # 2,121 rows - runtime telemetry
ds = load_dataset("tostido/key-data", "errors") # 2,070 rows - errors/warnings
ds = load_dataset("tostido/key-data", "evolution_events") # event bus stream
```
---
## Example: Evolving Semantic Similarity
**Task**: Adapt MiniLM embeddings to preserve semantic relationships
**Test Pair**: "The cat sat on the mat" vs. "A feline rested on the rug"
| Generation | Cosine Similarity | Fitness |
|------------|-------------------|---------|
| 0 | 0.42 (random) | 0.35 |
| 50 | 0.76 | 0.64 |
| 100 | 0.89 | 0.82 |
The evolved adapter learned to preserve semantic similarity while improving output quality.
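The similarity column above is plain cosine similarity between the two sentence embeddings. A minimal, dependency-free sketch; the toy vectors here stand in for real MiniLM embeddings:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy 3-d vectors standing in for the two sentence embeddings
emb_cat = [0.9, 0.1, 0.3]
emb_feline = [0.8, 0.2, 0.35]
print(round(cosine_similarity(emb_cat, emb_feline), 3))  # 0.989
```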
---
## What Gets Evolved
KEY freezes the base model and evolves only the adapter:
```
┌──────────────────────────────────────┐
│            Evolvable Brain           │
│  ┌────────────────────────────────┐  │
│  │     Base Model (FROZEN)        │  │ ← MiniLM (22M) or DreamerV3 (200M)
│  └───────────────┬────────────────┘  │
│                  ▼                   │
│  ┌────────────────────────────────┐  │
│  │   LoRA Adapter      (~12K)     │  │ ← EVOLVED
│  │   Projection Head   (~99K)     │  │ ← EVOLVED
│  └────────────────────────────────┘  │
└──────────────────────────────────────┘

Total evolved parameters: ~111K (vs 22M-200M frozen)
```
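The ~12K/~99K/~111K figures are consistent with, for example, a rank-16 LoRA on MiniLM's 384-dimensional hidden state plus a 384→256 projection head. The rank and head width below are illustrative assumptions, not values confirmed by the dataset:

```python
# Illustrative parameter accounting; rank and head width are assumptions
hidden = 384          # MiniLM-L6 hidden size
rank = 16             # assumed LoRA rank
proj_out = 256        # assumed projection head width

lora_params = 2 * hidden * rank              # A (hidden x rank) + B (rank x hidden)
head_params = hidden * proj_out + proj_out   # weight matrix + bias
total = lora_params + head_params
print(lora_params, head_params, total)  # 12288 98560 110848 -> ~12K, ~99K, ~111K
```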
---
## Fitness Functions
What evolution optimized for (from `fitness.jsonl`):
### AdapterFitness (Interface Quality)
- **Preservation (40%)**: Does adapter maintain semantic structure?
- **Signal Quality (30%)**: Is output well-conditioned? (not collapsed/exploded)
- **Consistency (30%)**: Similar inputs → similar outputs?
### EmbeddingKleeneFitness (Semantic Convergence)
- **Coherence**: Similar pairs should have high cosine similarity
- **Separation**: Dissimilar pairs should be far apart
- **Convergence**: Embedding variance stays bounded
### DreamerFitness (World Model Quality)
- **Prediction**: How well does imagination match reality?
- **Stability**: Do trajectories stay bounded?
- **Reward**: Can the model anticipate outcomes?
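As a sketch, the AdapterFitness components could be blended with the 40/30/30 weights listed above. The function name is hypothetical and the real scorer may add normalization or clipping:

```python
def adapter_fitness(preservation, signal, consistency):
    """Weighted blend matching the 40/30/30 split described above.

    Component scores are assumed to be pre-normalized to [0, 1].
    """
    return 0.40 * preservation + 0.30 * signal + 0.30 * consistency

# Components from the fitness.jsonl example record later in this card
print(round(adapter_fitness(0.85, 0.79, 0.84), 3))  # 0.829
```

Plugging in the example record's components gives 0.829, close to but not exactly its recorded `raw_fitness` of 0.823, so treat the weights as descriptive rather than the literal formula.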
---
## Schema Reference
### `mutations.jsonl`
```json
{
"timestamp": 1737403521.234,
"event": "mutation",
"generation": 42,
"parent_id": "node_abc123",
"child_id": "node_def456",
"parent_fitness": 0.72,
"mutation_rate": 0.1,
"mutated_traits": ["exploration", "caution"],
"deltas": {"exploration": 0.05, "caution": -0.02}
}
```
### `crossovers.jsonl`
```json
{
"event": "crossover",
"generation": 42,
"parent1_id": "node_abc",
"parent2_id": "node_xyz",
"child_id": "node_new",
"parent1_fitness": 0.72,
"parent2_fitness": 0.68,
"contribution_p1": 0.55
}
```
### `fitness.jsonl`
```json
{
"event": "fitness_evaluation",
"generation": 42,
"node_id": "node_abc123",
"fitness_function": "AdapterFitness",
"raw_fitness": 0.823,
"components": {
"preservation": 0.85,
"signal": 0.79,
"consistency": 0.84
},
"eval_time_ms": 45.2
}
```
### `selection.jsonl`
```json
{
"event": "selection",
"generation": 42,
"method": "tournament",
"survivors": ["node_a", "node_b", "node_c"],
"eliminated": ["node_d", "node_e"],
"elites_preserved": 2
}
```
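Because every config is plain JSON Lines, the records can also be analyzed without the `datasets` library. A sketch that averages `raw_fitness` per generation from a locally downloaded `fitness` split, with field names taken from the schema above:

```python
import json
from collections import defaultdict

def mean_fitness_by_generation(path):
    """Average raw_fitness per generation from a fitness JSONL file."""
    totals = defaultdict(lambda: [0.0, 0])  # generation -> [sum, count]
    with open(path) as f:
        for line in f:
            rec = json.loads(line)
            totals[rec["generation"]][0] += rec["raw_fitness"]
            totals[rec["generation"]][1] += 1
    return {gen: s / n for gen, (s, n) in sorted(totals.items())}

# Usage (assumes the file has been downloaded locally):
# print(mean_fitness_by_generation("data/fitness/train.jsonl"))
```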
---
## Why Evolve Instead of Gradient Descent?
Neuroevolution works when:
- ✅ Your objective **isn't differentiable** (human preference, discrete outputs)
- ✅ You want **population diversity** (speciation prevents local optima)
- ✅ You're optimizing for **interface quality**, not task loss
- ✅ You need **full auditability** (every mutation logged with provenance)
---
## FAQ
**Q: What's a "quine brain"?**
> A brain that can serialize its weights → mutate → deserialize. This enables genetic algorithms to evolve neural networks. Think "self-modifying adapter."
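A toy illustration of that serialize → mutate → deserialize loop; this is not the KEY implementation, and the shapes and mutation parameters are made up:

```python
import random

def serialize(weights):
    """Flatten a dict of per-layer weight lists into one flat genome."""
    return [w for layer in sorted(weights) for w in weights[layer]]

def mutate(genome, rate=0.1, scale=0.05, rng=None):
    """Gaussian-perturb each gene with probability `rate` (seeded by default)."""
    rng = rng or random.Random(0)
    return [g + rng.gauss(0, scale) if rng.random() < rate else g for g in genome]

def deserialize(genome, template):
    """Rebuild the layer dict from a flat genome using the template's shapes."""
    out, i = {}, 0
    for layer in sorted(template):
        n = len(template[layer])
        out[layer] = genome[i:i + n]
        i += n
    return out

# Round trip: parent weights -> genome -> mutated child weights
parent = {"lora_A": [0.1, -0.2], "lora_B": [0.3]}
child = deserialize(mutate(serialize(parent)), parent)
```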
**Q: Why not just use backprop?**
> Backprop requires differentiable objectives. Evolution works with any fitness function: human ratings, game scores, discrete metrics.
**Q: Is this real data?**
> Yes. This dataset contains 40K+ events from actual evolutionary runs.
---
## Get Full Source Access
| Tier | Price | What You Get |
|------|-------|--------------|
| **Source Access** | $100 one-time | Full codebase, private repo invite |
| **Hands-On** | $50/hour | I coach you through wiring your own model |
| **Done-For-You** | $500 flat | I wire up your custom model for you |
| **Speaking** | $2,000 | Talk at your company on gradient-free optimization |
### **[Sponsor on GitHub](https://github.com/sponsors/Yufok1)**
---
## Contact
**DM on X: [@Toasteedo](https://x.com/Toasteedo)**
---
## License
MIT