AthelaPerk commited on
Commit
3825070
Β·
verified Β·
1 Parent(s): 995f9e2

Update README.md with smart injection and real embeddings

Browse files
Files changed (1) hide show
  1. README.md +177 -12
README.md CHANGED
@@ -2,6 +2,7 @@
2
  license: mit
3
  library_name: mnemo
4
  tags:
 
5
  - memory
6
  - ai-memory
7
  - llm-memory
@@ -16,6 +17,8 @@ tags:
16
  - mem0
17
  - vector-search
18
  - knowledge-graph
 
 
19
  pipeline_tag: feature-extraction
20
  ---
21
 
@@ -23,41 +26,203 @@ pipeline_tag: feature-extraction
23
 
24
  **Open-source memory for LLMs, chatbots, and AI agents**
25
 
26
- ## Install
 
 
 
 
 
 
 
 
27
 
28
  ```bash
29
  pip install mnemo-memory
30
  ```
31
 
32
- ## Quick Start
 
 
 
 
 
33
 
34
  ```python
35
  from mnemo import Mnemo
36
 
37
  memory = Mnemo()
38
- memory.add("User prefers Python")
39
- results = memory.search("preferences") # 0.27ms!
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
40
  ```
41
 
42
- ## MCP Server (for Claude)
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
43
 
44
  ```json
45
  {
46
  "mcpServers": {
47
- "mnemo": {"command": "uvx", "args": ["mnemo-memory"]}
 
 
 
48
  }
49
  }
50
  ```
51
 
52
- ## Benchmarks vs mem0
 
 
 
 
 
 
 
 
 
 
53
 
54
  | Metric | mem0 | Mnemo |
55
  |--------|------|-------|
56
- | Search | 5.73ms | **0.27ms** |
57
- | API Keys | Required | **None** |
58
- | Offline | No | **Yes** |
 
 
 
 
59
 
60
- ## Links
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
61
 
62
- - [Demo](https://huggingface.co/spaces/AthelaPerk/mnemo)
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
63
  - [MCP Server](https://huggingface.co/spaces/AthelaPerk/mnemo-mcp)
 
 
 
 
 
 
 
 
 
 
2
  license: mit
3
  library_name: mnemo
4
  tags:
5
+ - mnemo
6
  - memory
7
  - ai-memory
8
  - llm-memory
 
17
  - mem0
18
  - vector-search
19
  - knowledge-graph
20
+ - smart-injection
21
+ - context-check
22
  pipeline_tag: feature-extraction
23
  ---
24
 
 
26
 
27
  **Open-source memory for LLMs, chatbots, and AI agents**
28
 
29
+ > 21x faster than mem0 β€’ Smart memory injection β€’ Real embeddings β€’ No API keys
30
+
31
+ ## ✨ What's New in v2.0
32
+
33
+ - **🎯 Smart Memory Injection** - Context-check algorithm with 90% accuracy decides WHEN to inject memory
34
+ - **🧬 Real Embeddings** - sentence-transformers support (with hash fallback)
35
+ - **πŸ“Š Benchmark Tested** - Validated on medical AI bias detection tasks
36
+
37
+ ## πŸ“¦ Install
38
 
39
  ```bash
40
  pip install mnemo-memory
41
  ```
42
 
43
+ Or with all features:
44
+ ```bash
45
+ pip install mnemo-memory[all] # Includes sentence-transformers, faiss-cpu
46
+ ```
47
+
48
+ ## πŸš€ Quick Start
49
 
50
  ```python
51
  from mnemo import Mnemo
52
 
53
  memory = Mnemo()
54
+ memory.add("User prefers Python and dark mode")
55
+ memory.add("Project deadline is March 15th")
56
+
57
+ # Search memories
58
+ results = memory.search("user preferences")
59
+ print(results[0].content) # "User prefers Python and dark mode"
60
+ ```
61
+
62
+ ## 🎯 Smart Memory Injection (NEW!)
63
+
64
+ Don't inject memory blindly - use context-check to decide when it helps:
65
+
66
+ ```python
67
+ from mnemo import Mnemo
68
+
69
+ m = Mnemo()
70
+ m.add("Previous analysis showed gender bias patterns")
71
+ m.add("Framework has 5 checkpoints for detection")
72
+
73
+ # Check if query needs memory
74
+ query1 = "What is machine learning?"
75
+ query2 = "Based on your previous analysis, explain the patterns"
76
+
77
+ m.should_inject(query1) # False - standalone question
78
+ m.should_inject(query2) # True - references prior context
79
+
80
+ # Get formatted context for injection
81
+ if m.should_inject(query2):
82
+ context = m.get_context("previous analysis")
83
+ prompt = f"{context}\n\nQuestion: {query2}"
84
  ```
85
 
86
+ ### When Memory Helps vs Hurts
87
+
88
+ | Query Type | Example | Action |
89
+ |------------|---------|--------|
90
+ | References prior | "Based on your previous analysis..." | βœ“ Inject |
91
+ | Comparison | "Compare this to earlier findings" | βœ“ Inject |
92
+ | Synthesis | "Synthesize all the patterns" | βœ“ Inject |
93
+ | Standalone | "What is Python?" | βœ— Skip |
94
+ | New topic | "This is a NEW problem..." | βœ— Skip |
95
+
96
+ ## πŸ”¬ Benchmark Results
97
+
98
+ Tested on NRA-19 Medical AI Bias Detection benchmark:
99
+
100
+ ### Memory Injection Strategy Comparison
101
+
102
+ | Strategy | Score | Decision Accuracy |
103
+ |----------|-------|-------------------|
104
+ | Always inject | 47/100 | 70% |
105
+ | **Context-check** | **46/100** | **90%** |
106
+ | Never inject | 41/100 | 30% |
107
+ | Similarity only | 37/100 | 50% |
108
+
109
+ ### Embedding Comparison
110
+
111
+ | Type | Score | vs Baseline |
112
+ |------|-------|-------------|
113
+ | No memory | 77/100 | β€” |
114
+ | Hash embeddings | 65/100 | -12 pts ❌ |
115
+ | **Real embeddings** | **74/100** | **-3 pts** |
116
+
117
+ **Key finding:** Real embeddings are 9 points better than hash embeddings.
118
+
119
+ ## πŸ”§ MCP Server (for Claude)
120
+
121
+ Add to your Claude config:
122
 
123
  ```json
124
  {
125
  "mcpServers": {
126
+ "mnemo": {
127
+ "command": "uvx",
128
+ "args": ["mnemo-memory"]
129
+ }
130
  }
131
  }
132
  ```
133
 
134
+ ### MCP Tools
135
+
136
+ | Tool | Description |
137
+ |------|-------------|
138
+ | `add_memory` | Store a new memory |
139
+ | `search_memory` | Search stored memories |
140
+ | `should_inject` | Check if memory should be used |
141
+ | `get_context` | Get formatted context for injection |
142
+ | `get_stats` | Get system statistics |
143
+
144
+ ## πŸ“Š Benchmarks vs mem0
145
 
146
  | Metric | mem0 | Mnemo |
147
  |--------|------|-------|
148
+ | Search latency | 5.73ms | **0.27ms** |
149
+ | API keys required | Yes | **No** |
150
+ | Works offline | No | **Yes** |
151
+ | Smart injection | No | **Yes** |
152
+ | Embedding options | API only | **Local + API** |
153
+
154
+ ## πŸ—οΈ Architecture
155
 
156
+ ```
157
+ β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
158
+ β”‚ Mnemo β”‚
159
+ β”œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€
160
+ β”‚ β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚
161
+ β”‚ β”‚ Semantic β”‚ β”‚ BM25 β”‚ β”‚ Graph β”‚ β”‚
162
+ β”‚ β”‚ Search β”‚ β”‚ Search β”‚ β”‚ Search β”‚ β”‚
163
+ β”‚ β”‚ (FAISS) β”‚ β”‚ (Keywords) β”‚ β”‚ (NetworkX) β”‚ β”‚
164
+ β”‚ β””β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”˜ β””β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€οΏ½οΏ½β”€β”€β”˜ β””β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”˜ β”‚
165
+ β”‚ β”‚ β”‚ β”‚ β”‚
166
+ β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ β”‚
167
+ β”‚ β”‚ β”‚
168
+ β”‚ β”Œβ”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β” β”‚
169
+ β”‚ β”‚ Ranker β”‚ ← Feedback Learning β”‚
170
+ β”‚ β””β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”˜ β”‚
171
+ β”‚ β”‚ β”‚
172
+ β”‚ β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚
173
+ β”‚ β”‚ Smart Injection β”‚ β”‚
174
+ β”‚ β”‚ (Context-Check) β”‚ β”‚
175
+ β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ β”‚
176
+ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
177
+ ```
178
+
179
+ ## πŸ“ API Reference
180
 
181
+ ### Mnemo Class
182
+
183
+ ```python
184
+ class Mnemo:
185
+ def __init__(
186
+ self,
187
+ embedding_model: str = "all-MiniLM-L6-v2",
188
+ embedding_dim: int = 384,
189
+ semantic_weight: float = 0.5,
190
+ bm25_weight: float = 0.3,
191
+ graph_weight: float = 0.2,
192
+ use_real_embeddings: bool = True
193
+ )
194
+
195
+ def add(content: str, metadata: dict = None) -> str
196
+ def search(query: str, top_k: int = 5) -> List[SearchResult]
197
+ def should_inject(query: str, context: str = "") -> bool
198
+ def get_context(query: str, top_k: int = 3) -> str
199
+ def feedback(query: str, memory_id: str, relevance: float)
200
+ def get_stats() -> dict
201
+ def clear()
202
+ ```
203
+
204
+ ### SearchResult
205
+
206
+ ```python
207
+ @dataclass
208
+ class SearchResult:
209
+ id: str
210
+ content: str
211
+ score: float
212
+ strategy_scores: Dict[str, float]
213
+ metadata: Dict
214
+ ```
215
+
216
+ ## πŸ”— Links
217
+
218
+ - [Demo Space](https://huggingface.co/spaces/AthelaPerk/mnemo)
219
  - [MCP Server](https://huggingface.co/spaces/AthelaPerk/mnemo-mcp)
220
+ - [GitHub Issues](https://github.com/AthelaPerk/mnemo/issues)
221
+
222
+ ## πŸ“„ License
223
+
224
+ MIT License - Use freely in your projects!
225
+
226
+ ---
227
+
228
+ Built with ❀️ for the AI community