---
pretty_name: KEY Neuroevolution Dataset
license: mit
tags:
- neuroevolution
- lora
- genetic-algorithms
- provenance
- world-model
language:
- en
configs:
- config_name: comm_events
  data_files:
  - split: train
    path: data/comm_events/train.jsonl
- config_name: crossovers
  data_files:
  - split: train
    path: data/crossovers/train.jsonl
- config_name: selection
  data_files:
  - split: train
    path: data/selection/train.jsonl
- config_name: mutations
  data_files:
  - split: train
    path: data/mutations/train.jsonl
- config_name: fitness
  data_files:
  - split: train
    path: data/fitness/train.jsonl
- config_name: performance
  data_files:
  - split: train
    path: data/performance/train.jsonl
- config_name: errors
  data_files:
  - split: train
    path: data/errors/train.jsonl
- config_name: evolution_events
  data_files:
  - split: train
    path: data/evolution_events/train.jsonl
---

# 🔑 KEY: Neuroevolution Dataset

**40,000+ logged events from real evolutionary runs** — every mutation, crossover, selection, and fitness evaluation.

KEY evolves LoRA adapters on frozen base models (MiniLM-L6, DreamerV3) using NEAT-style neuroevolution. This dataset captures the complete evolutionary history.
---

## 🎮 Links

| | |
|---|---|
| **[🌌 Live Demo](https://huggingface.co/spaces/tostido/Cascade-Hyperlattice)** | Watch evolution in action |
| **[🧠 Champion Model](https://huggingface.co/datasets/tostido/key-data/tree/main/models)** | The evolved DreamerV3 model |

---

## Loading the Dataset

```python
from datasets import load_dataset

# Available configs:
ds = load_dataset("tostido/key-data", "comm_events")       # 16,968 rows - pod communication
ds = load_dataset("tostido/key-data", "crossovers")        # 8,878 rows  - breeding events
ds = load_dataset("tostido/key-data", "selection")         # 4,266 rows  - tournament selection
ds = load_dataset("tostido/key-data", "mutations")         # 3,848 rows  - mutation events
ds = load_dataset("tostido/key-data", "fitness")           # 2,121 rows  - fitness evaluations
ds = load_dataset("tostido/key-data", "performance")       # 2,121 rows  - runtime telemetry
ds = load_dataset("tostido/key-data", "errors")            # 2,070 rows  - errors/warnings
ds = load_dataset("tostido/key-data", "evolution_events")  # event bus stream
```

---

## Example: Evolving Semantic Similarity

**Task**: Adapt MiniLM embeddings to preserve semantic relationships

**Test Pair**: "The cat sat on the mat" ↔ "A feline rested on the rug"

| Generation | Cosine Similarity | Fitness |
|------------|-------------------|---------|
| 0 | 0.42 (random) | 0.35 |
| 50 | 0.76 | 0.64 |
| 100 | 0.89 | 0.82 |

The evolved adapter learned to preserve semantic similarity while improving output quality.
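A progression like the table above can be recomputed directly from the `fitness` config. A minimal sketch, run here on inline rows shaped like `fitness.jsonl` so it executes without a download — in practice, iterate `load_dataset("tostido/key-data", "fitness", split="train")` instead:

```python
from collections import defaultdict

# Sample rows shaped like fitness.jsonl; swap in the real dataset rows.
rows = [
    {"generation": 0, "raw_fitness": 0.35},
    {"generation": 0, "raw_fitness": 0.31},
    {"generation": 50, "raw_fitness": 0.64},
    {"generation": 100, "raw_fitness": 0.82},
]

def mean_fitness_by_generation(rows):
    """Average raw_fitness per generation, in generation order."""
    by_gen = defaultdict(list)
    for row in rows:
        by_gen[row["generation"]].append(row["raw_fitness"])
    return {gen: sum(v) / len(v) for gen, v in sorted(by_gen.items())}

for gen, mean in mean_fitness_by_generation(rows).items():
    print(f"gen {gen:4d}  mean fitness {mean:.3f}")
```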
---

## What Gets Evolved

KEY freezes the base model and evolves only the adapter:

```
┌──────────────────────────────────────┐
│          Evolvable Brain             │
│  ┌────────────────────────────────┐  │
│  │   Base Model (FROZEN)          │  │ ← MiniLM (22M) or DreamerV3 (200M)
│  └─────────────┬──────────────────┘  │
│                ▼                     │
│  ┌────────────────────────────────┐  │
│  │   LoRA Adapter (~12K)          │  │ ← EVOLVED
│  │   Projection Head (~99K)       │  │ ← EVOLVED
│  └────────────────────────────────┘  │
└──────────────────────────────────────┘

Total evolved parameters: ~111K (vs 22M-200M frozen)
```

---

## Fitness Functions

What evolution optimized for (from `fitness.jsonl`):

### AdapterFitness (Interface Quality)

- **Preservation (40%)**: Does the adapter maintain semantic structure?
- **Signal Quality (30%)**: Is the output well-conditioned (not collapsed or exploded)?
- **Consistency (30%)**: Do similar inputs produce similar outputs?

### EmbeddingKleeneFitness (Semantic Convergence)

- **Coherence**: Similar pairs should have high cosine similarity
- **Separation**: Dissimilar pairs should be far apart
- **Convergence**: Embedding variance stays bounded

### DreamerFitness (World Model Quality)

- **Prediction**: How well does imagination match reality?
- **Stability**: Do trajectories stay bounded?
- **Reward**: Can the model anticipate outcomes?
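The AdapterFitness weighting reads as a plain weighted sum of its three components. A hypothetical sketch — the actual scoring code is not part of this dataset; only the 40/30/30 split above and the component fields in `fitness.jsonl` are from the source:

```python
def adapter_fitness(preservation, signal, consistency):
    """Hypothetical weighted sum using the 40/30/30 AdapterFitness split."""
    return 0.4 * preservation + 0.3 * signal + 0.3 * consistency

# Component values from a typical fitness.jsonl "components" object:
score = adapter_fitness(preservation=0.85, signal=0.79, consistency=0.84)
print(round(score, 3))
```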
---

## Schema Reference

### `mutations.jsonl`

```json
{
  "timestamp": 1737403521.234,
  "event": "mutation",
  "generation": 42,
  "parent_id": "node_abc123",
  "child_id": "node_def456",
  "parent_fitness": 0.72,
  "mutation_rate": 0.1,
  "mutated_traits": ["exploration", "caution"],
  "deltas": {"exploration": 0.05, "caution": -0.02}
}
```

### `crossovers.jsonl`

```json
{
  "event": "crossover",
  "generation": 42,
  "parent1_id": "node_abc",
  "parent2_id": "node_xyz",
  "child_id": "node_new",
  "parent1_fitness": 0.72,
  "parent2_fitness": 0.68,
  "contribution_p1": 0.55
}
```

### `fitness.jsonl`

```json
{
  "event": "fitness_evaluation",
  "generation": 42,
  "node_id": "node_abc123",
  "fitness_function": "AdapterFitness",
  "raw_fitness": 0.823,
  "components": {
    "preservation": 0.85,
    "signal": 0.79,
    "consistency": 0.84
  },
  "eval_time_ms": 45.2
}
```

### `selection.jsonl`

```json
{
  "event": "selection",
  "generation": 42,
  "method": "tournament",
  "survivors": ["node_a", "node_b", "node_c"],
  "eliminated": ["node_d", "node_e"],
  "elites_preserved": 2
}
```

---

## Why Evolve Instead of Gradient Descent?

Neuroevolution works when:

- ✅ Your objective **isn't differentiable** (human preference, discrete outputs)
- ✅ You want **population diversity** (speciation prevents local optima)
- ✅ You're optimizing for **interface quality**, not task loss
- ✅ You need **full auditability** (every mutation logged with provenance)

---

## FAQ

**Q: What's a "quine brain"?**

> A brain that can serialize its weights → mutate → deserialize. This enables genetic algorithms to evolve neural networks. Think "self-modifying adapter."

**Q: Why not just use backprop?**

> Backprop requires differentiable objectives. Evolution works with any fitness function: human ratings, game scores, discrete metrics.

**Q: Is this real data?**

> Yes. This dataset contains 40K+ events from actual evolutionary runs.
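The schemas above share node IDs, so events can be joined across configs — for example, asking whether mutations actually improved fitness. A minimal sketch on inline sample rows; `mutation_gains` is a hypothetical helper, not part of the dataset tooling, and in practice you would load the `mutations` and `fitness` configs with `load_dataset`:

```python
# Join mutations.jsonl with fitness.jsonl via child_id / node_id and
# compare each evaluated child's fitness against its parent's.
mutations = [
    {"generation": 42, "parent_id": "node_abc123",
     "child_id": "node_def456", "parent_fitness": 0.72},
]
fitness_by_node = {"node_def456": 0.78}  # node_id -> raw_fitness

def mutation_gains(mutations, fitness_by_node):
    """(child_id, fitness delta) for each mutation whose child was evaluated."""
    gains = []
    for m in mutations:
        child_fitness = fitness_by_node.get(m["child_id"])
        if child_fitness is not None:
            gains.append((m["child_id"], child_fitness - m["parent_fitness"]))
    return gains

print(mutation_gains(mutations, fitness_by_node))
```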
---

## 🔐 Get Full Source Access

| Tier | Price | What You Get |
|------|-------|--------------|
| **🔑 Source Access** | $100 one-time | Full codebase, private repo invite |
| **🤝 Hands-On** | $50/hour | I coach you through wiring your own model |
| **🛠️ Done-For-You** | $500 flat | I wire up your custom model for you |
| **🎤 Speaking** | $2,000 | Talk at your company on gradient-free optimization |

### **[→ Sponsor on GitHub](https://github.com/sponsors/Yufok1)**

---

## Contact

**DM on X: [@Toasteedo](https://x.com/Toasteedo)**

---

## License

MIT