---
pretty_name: KEY Neuroevolution Dataset
license: mit
tags:
  - neuroevolution
  - lora
  - genetic-algorithms
  - provenance
  - world-model
language:
  - en
configs:
  - config_name: comm_events
    data_files:
      - split: train
        path: data/comm_events/train.jsonl
  - config_name: crossovers
    data_files:
      - split: train
        path: data/crossovers/train.jsonl
  - config_name: selection
    data_files:
      - split: train
        path: data/selection/train.jsonl
  - config_name: mutations
    data_files:
      - split: train
        path: data/mutations/train.jsonl
  - config_name: fitness
    data_files:
      - split: train
        path: data/fitness/train.jsonl
  - config_name: performance
    data_files:
      - split: train
        path: data/performance/train.jsonl
  - config_name: errors
    data_files:
      - split: train
        path: data/errors/train.jsonl
  - config_name: evolution_events
    data_files:
      - split: train
        path: data/evolution_events/train.jsonl
---
# KEY: Neuroevolution Dataset

40,000+ logged events from real evolutionary runs: every mutation, crossover, selection, and fitness evaluation.

KEY evolves LoRA adapters on frozen base models (MiniLM-L6, DreamerV3) using NEAT-style neuroevolution. This dataset captures the complete evolutionary history of those runs.
## Links

| Resource | Description |
|---|---|
| Live Demo | Watch evolution in action |
| Champion Model | The evolved DreamerV3 model |
## Loading the Dataset

```python
from datasets import load_dataset

# Available configs:
ds = load_dataset("tostido/key-data", "comm_events")       # 16,968 rows - pod communication
ds = load_dataset("tostido/key-data", "crossovers")        #  8,878 rows - breeding events
ds = load_dataset("tostido/key-data", "selection")         #  4,266 rows - tournament selection
ds = load_dataset("tostido/key-data", "mutations")         #  3,848 rows - mutation events
ds = load_dataset("tostido/key-data", "fitness")           #  2,121 rows - fitness evaluations
ds = load_dataset("tostido/key-data", "performance")       #  2,121 rows - runtime telemetry
ds = load_dataset("tostido/key-data", "errors")            #  2,070 rows - errors/warnings
ds = load_dataset("tostido/key-data", "evolution_events")  # event bus stream
```
## Example: Evolving Semantic Similarity

**Task:** Adapt MiniLM embeddings to preserve semantic relationships.

**Test pair:** "The cat sat on the mat" ↔ "A feline rested on the rug"

| Generation | Cosine Similarity | Fitness |
|---|---|---|
| 0 | 0.42 (random) | 0.35 |
| 50 | 0.76 | 0.64 |
| 100 | 0.89 | 0.82 |

The evolved adapter learned to preserve semantic similarity while improving output quality.
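The cosine-similarity column above is the standard metric over embedding vectors. A minimal pure-Python sketch (the toy vectors below stand in for real MiniLM embeddings):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

print(cosine_similarity([1.0, 0.0], [1.0, 0.0]))  # 1.0 (identical direction)
print(cosine_similarity([1.0, 0.0], [0.0, 1.0]))  # 0.0 (orthogonal)
```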
## What Gets Evolved

KEY freezes the base model and evolves only the adapter:

```
┌──────────────────────────────────────┐
│           Evolvable Brain            │
│  ┌────────────────────────────────┐  │
│  │      Base Model (FROZEN)       │  │ ← MiniLM (22M) or DreamerV3 (200M)
│  └──────────────┬─────────────────┘  │
│                 ▼                    │
│  ┌────────────────────────────────┐  │
│  │     LoRA Adapter    (~12K)     │  │ ← EVOLVED
│  │     Projection Head (~99K)     │  │ ← EVOLVED
│  └────────────────────────────────┘  │
└──────────────────────────────────────┘
```

Total evolved parameters: ~111K (vs. 22M-200M frozen)
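The ~12K adapter figure is consistent with standard LoRA accounting: a rank-`r` adapter on a `d_out × d_in` weight stores `r * (d_in + d_out)` parameters. As a sketch, assuming MiniLM-L6's 384-dim hidden size and a rank of 16 on a single projection (the rank and layer count are assumptions, not stated in this card):

```python
def lora_param_count(d_in, d_out, rank):
    # LoRA factorizes a d_out x d_in weight update as B (d_out x r) @ A (r x d_in),
    # so the trainable parameter count is r * (d_in + d_out).
    return rank * (d_in + d_out)

# MiniLM-L6 uses 384-dim hidden states; rank 16 is an assumed value
# that happens to reproduce the ~12K figure for one adapted projection:
print(lora_param_count(384, 384, 16))  # 12288
```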
## Fitness Functions

What evolution optimized for (from `fitness.jsonl`):

### AdapterFitness (Interface Quality)

- **Preservation (40%):** Does the adapter maintain semantic structure?
- **Signal Quality (30%):** Is the output well-conditioned (not collapsed or exploded)?
- **Consistency (30%):** Do similar inputs produce similar outputs?
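The percentages above suggest a weighted combination of the three component scores. A minimal sketch, assuming a plain linear combination (the exact aggregation isn't specified here; the `fitness.jsonl` example's `raw_fitness` of 0.823 is slightly below this linear sum, so the real function likely applies extra normalization):

```python
# Component weights from the AdapterFitness description (assumed linear mix):
WEIGHTS = {"preservation": 0.40, "signal": 0.30, "consistency": 0.30}

def adapter_fitness(components):
    """Weighted combination of AdapterFitness component scores."""
    return sum(WEIGHTS[name] * score for name, score in components.items())

# Component scores taken from the fitness.jsonl schema example:
score = adapter_fitness({"preservation": 0.85, "signal": 0.79, "consistency": 0.84})
print(round(score, 3))  # 0.829
```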
### EmbeddingKleeneFitness (Semantic Convergence)

- **Coherence:** Similar pairs should have high cosine similarity.
- **Separation:** Dissimilar pairs should be far apart.
- **Convergence:** Embedding variance stays bounded.
### DreamerFitness (World Model Quality)

- **Prediction:** How well does imagination match reality?
- **Stability:** Do trajectories stay bounded?
- **Reward:** Can the model anticipate outcomes?
## Schema Reference

### mutations.jsonl

```json
{
  "timestamp": 1737403521.234,
  "event": "mutation",
  "generation": 42,
  "parent_id": "node_abc123",
  "child_id": "node_def456",
  "parent_fitness": 0.72,
  "mutation_rate": 0.1,
  "mutated_traits": ["exploration", "caution"],
  "deltas": {"exploration": 0.05, "caution": -0.02}
}
```
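A mutation record can be replayed by adding each delta to the parent's trait values. A minimal sketch (the parent trait values below are hypothetical; the deltas come from the example record):

```python
def apply_mutation(parent_traits, deltas):
    """Replay a mutation event: add each recorded delta to the parent's trait."""
    child = dict(parent_traits)
    for trait, delta in deltas.items():
        child[trait] = child.get(trait, 0.0) + delta
    return child

# Hypothetical parent trait values:
parent = {"exploration": 0.50, "caution": 0.30}
child = apply_mutation(parent, {"exploration": 0.05, "caution": -0.02})
print(child)  # exploration rises to ~0.55, caution drops to ~0.28
```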
### crossovers.jsonl

```json
{
  "event": "crossover",
  "generation": 42,
  "parent1_id": "node_abc",
  "parent2_id": "node_xyz",
  "child_id": "node_new",
  "parent1_fitness": 0.72,
  "parent2_fitness": 0.68,
  "contribution_p1": 0.55
}
```
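The `contribution_p1` field records parent 1's share of the child. One crossover operator consistent with that field is a per-trait blend; this is an assumption about the operator, not confirmed by this card (parent trait values below are hypothetical):

```python
def blend_crossover(traits1, traits2, contribution_p1):
    """Blend two parents' traits, weighted by parent 1's contribution share."""
    w1 = contribution_p1
    w2 = 1.0 - contribution_p1
    return {name: w1 * traits1[name] + w2 * traits2[name] for name in traits1}

# contribution_p1 taken from the example record; traits are hypothetical:
child = blend_crossover({"exploration": 0.6}, {"exploration": 0.2}, 0.55)
print(child)  # exploration blends to ~0.42
```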
### fitness.jsonl

```json
{
  "event": "fitness_evaluation",
  "generation": 42,
  "node_id": "node_abc123",
  "fitness_function": "AdapterFitness",
  "raw_fitness": 0.823,
  "components": {
    "preservation": 0.85,
    "signal": 0.79,
    "consistency": 0.84
  },
  "eval_time_ms": 45.2
}
```
### selection.jsonl

```json
{
  "event": "selection",
  "generation": 42,
  "method": "tournament",
  "survivors": ["node_a", "node_b", "node_c"],
  "eliminated": ["node_d", "node_e"],
  "elites_preserved": 2
}
```
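Each config is newline-delimited JSON, so it can also be processed with the standard library alone. A sketch that streams `fitness.jsonl`-shaped records and averages `raw_fitness` per generation (field names from the schemas above; the sample records are synthetic):

```python
import io
import json
from collections import defaultdict

def mean_fitness_by_generation(jsonl_stream):
    """Stream fitness events and average raw_fitness per generation."""
    totals = defaultdict(lambda: [0.0, 0])
    for line in jsonl_stream:
        event = json.loads(line)
        totals[event["generation"]][0] += event["raw_fitness"]
        totals[event["generation"]][1] += 1
    return {gen: total / count for gen, (total, count) in totals.items()}

# Two synthetic records in the fitness.jsonl shape:
sample = io.StringIO(
    '{"event": "fitness_evaluation", "generation": 42, "raw_fitness": 0.823}\n'
    '{"event": "fitness_evaluation", "generation": 42, "raw_fitness": 0.777}\n'
)
print(mean_fitness_by_generation(sample))  # mean for generation 42 is ~0.8
```

With a real dump, replace `sample` with `open("data/fitness/train.jsonl")`.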
## Why Evolve Instead of Gradient Descent?

Neuroevolution works when:

- ✅ Your objective isn't differentiable (human preference, discrete outputs)
- ✅ You want population diversity (speciation prevents local optima)
- ✅ You're optimizing for interface quality, not task loss
- ✅ You need full auditability (every mutation is logged with provenance)
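The `selection.jsonl` records above use tournament selection, the classic gradient-free operator. A minimal sketch of how one tournament picks a survivor (the population representation is assumed; KEY's actual implementation may differ):

```python
import random

def tournament_select(population, fitness, k=3, rng=None):
    """Draw k random candidates and return the fittest one."""
    rng = rng or random.Random()
    contenders = rng.sample(list(population), k)
    return max(contenders, key=lambda node: fitness[node])

# Hypothetical node IDs and fitness scores:
fitness = {"node_a": 0.9, "node_b": 0.5, "node_c": 0.7}
winner = tournament_select(fitness.keys(), fitness, k=3, rng=random.Random(0))
print(winner)  # node_a (with k equal to the population size, the best always wins)
```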
## FAQ

**Q: What's a "quine brain"?**

A brain that can serialize its weights, mutate them, and deserialize the result. This enables genetic algorithms to evolve neural networks. Think "self-modifying adapter."

**Q: Why not just use backprop?**

Backprop requires differentiable objectives. Evolution works with any fitness function: human ratings, game scores, discrete metrics.

**Q: Is this real data?**

Yes. This dataset contains 40K+ events from actual evolutionary runs.
## Get Full Source Access

| Tier | Price | What You Get |
|---|---|---|
| Source Access | $100 one-time | Full codebase, private repo invite |
| Hands-On | $50/hour | I coach you through wiring up your own model |
| Done-For-You | $500 flat | I wire up your custom model for you |
| Speaking | $2,000 | Talk at your company on gradient-free optimization |

Sponsor on GitHub
## Contact

DM on X: @Toasteedo

## License

MIT