---
tags:
- memory-augmented
- decision-trees
- associative-memory
- classics-revival
- experimental
license: apache-2.0
library_name: pytorch
---

# Memory Forest - The Classics Revival

**Associative Memory with Learned Routing Through Decision Trees**

**Experimental Research Code** - Functional but unoptimized, expect rough edges

## What Is This?

Memory Forest combines decision tree routing with associative hash buckets to create a scalable memory system. Instead of searching all memory linearly, learned decision trees route queries to the most relevant memory buckets.

**Core Innovation**: Trees learn to route based on retrieval success, creating adaptive memory organization that gets better over time.
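As a rough picture of the routing idea (a hypothetical sketch with made-up names, not this repository's actual internals), each tree can be seen as a stack of learned hyperplanes: a query descends left or right by the sign of a dot product until it lands in one of `2**depth` buckets.

```python
import torch

class RoutingTreeSketch:
    """Illustrative depth-d routing tree: one learned hyperplane per internal node."""

    def __init__(self, input_dim: int, depth: int):
        self.depth = depth
        # 2**depth - 1 internal nodes, each with a weight vector and bias.
        self.weights = torch.randn(2 ** depth - 1, input_dim)
        self.bias = torch.zeros(2 ** depth - 1)

    def route(self, x: torch.Tensor) -> int:
        """Return the bucket index (leaf) for a single query vector."""
        node = 0
        for _ in range(self.depth):
            go_right = (self.weights[node] @ x + self.bias[node]) > 0
            node = 2 * node + (2 if go_right else 1)  # heap-style child index
        return node - (2 ** self.depth - 1)  # leaf offset -> bucket id

tree = RoutingTreeSketch(input_dim=64, depth=4)
bucket = tree.route(torch.randn(64))  # an int in [0, 16)
```

Because each query only visits `depth` nodes, routing cost grows with the tree depth rather than the number of stored items.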

## Architecture Highlights

- **Learned Routing**: Decision trees adapt based on retrieval performance
- **Associative Storage**: Hash buckets with similarity-based clustering  
- **Ensemble Retrieval**: Multiple trees vote on best memories
- **Eviction Policies**: Automatic memory management
- **Scalable Design**: O(log n) routing instead of O(n) search
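The ensemble-retrieval step can be pictured as follows. This is a hedged sketch under the assumption that each tree contributes the contents of the bucket it routed the query to, and the pooled candidates are re-ranked by cosine similarity; the function names are illustrative, not the repo's API.

```python
import torch

def ensemble_retrieve(query, candidate_sets, top_k=3):
    """Re-rank the union of per-tree candidates by cosine similarity.

    candidate_sets: list of (n_i, d) tensors, one per tree's routed bucket.
    """
    pool = torch.cat(candidate_sets, dim=0)  # union of all proposals
    sims = torch.nn.functional.cosine_similarity(pool, query.unsqueeze(0))
    k = min(top_k, pool.shape[0])
    scores, idx = sims.topk(k)  # sorted best-first
    return pool[idx], scores

q = torch.randn(64)
sets = [torch.randn(5, 64) for _ in range(3)]  # pretend bucket contents
items, scores = ensemble_retrieve(q, sets, top_k=3)
```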

## Quick Start
```python
import torch
from memory_forest import MemoryForest

# Create memory system
forest = MemoryForest(
    input_dim=64,
    num_trees=3,
    max_depth=4,
    bucket_size=32
)

# Store some memories
features = torch.randn(100, 64)
forest.store(features)

# Retrieve similar items
query = torch.randn(5, 64)
results = forest.retrieve(query, top_k=3)
```
## Current Status
- **Working**: Hierarchical storage, associative clustering, tree routing, ensemble retrieval
- **Rough Edges**: Adaptive learning can be overly aggressive, bucket utilization optimization needed
- **Still Missing**: Learning rate scheduling, advanced eviction policies, distributed routing
- **Performance**: Excellent retrieval quality (0.95+ similarity), needs learning component tuning
- **Memory Usage**: Not optimized, expect high RAM usage
- **Speed**: Baseline implementation, significant optimization possible

## Mathematical Foundation
The decision trees learn routing functions f: R^d -> {0,1,...,B-1} where B is the number of buckets. Trees update based on retrieval success using simple gradient-free optimization:
```
success_rate = retrieved_similarity / max_possible_similarity
tree_update ∝ success_rate * path_taken
```
Associative buckets use learnable hash functions with Hebbian-style updates for similarity clustering.
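A minimal sketch of that gradient-free update, with illustrative parameter names rather than the repo's actual API. One assumption made here: the success rate is centered at 0.5 so that poor retrievals push the routing decisions in the opposite direction.

```python
import torch

def update_path(weights, bias, query, path, retrieved_sim, max_sim, lr=0.01):
    """Reinforce (or weaken) the routing decisions taken on `path`.

    path: list of (node_index, went_right) pairs recorded during routing.
    """
    success_rate = retrieved_sim / max(max_sim, 1e-8)
    signal = success_rate - 0.5  # assumption: center so failures repel
    for node, went_right in path:
        direction = 1.0 if went_right else -1.0
        # Nudge the hyperplane so the same decision becomes more/less likely.
        weights[node] += lr * signal * direction * query
        bias[node] += lr * signal * direction

# Toy usage: a successful retrieval reinforces the path taken.
W, b = torch.zeros(15, 64), torch.zeros(15)
q = torch.ones(64)
update_path(W, b, q, path=[(0, True), (2, False)], retrieved_sim=0.9, max_sim=1.0)
```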

## Research Applications
- **Large-scale retrieval systems**
- **Adaptive memory architectures**
- **Decision tree optimization**
- **Associative memory research**
- **Hierarchical clustering**

## Installation
```bash
pip install torch numpy
# Download memory_forest.py from this repo
```
## The Classics Revival Collection

Memory Forest is part of a larger exploration of foundational algorithms enhanced with modern neural techniques:

- Evolutionary Turing Machine
- Hebbian Bloom Filter
- Hopfield Decision Graph
- Liquid Bayes Chain
- Liquid State Space Model
- **Memory Forest** ← You are here

## Citation
```bibtex
@misc{memoryforest2025,
  title={Memory Forest: Associative Memory with Learned Routing},
  author={Jae Parker 𓅸 1990two},
  year={2025},
  note={Part of The Classics Revival Collection}
}
```