---
license: mit
tags:
- mneme
- memory
- weight-injection
- qwen
---

# Mneme: Neural Episodic Weight Injection Encoder

A trained encoder for the Mneme memory system: it injects facts directly into LLM weights.

## Usage

```bash
# Clone the repo
git clone https://github.com/Yusuffarhan13/Mneme-v1-mvp.git
cd Mneme-v1-mvp

# Download the encoder
pip install huggingface_hub
python -c "from huggingface_hub import hf_hub_download; hf_hub_download(repo_id='yusuffarhan/qwen-memory', filename='best_encoder.pt', local_dir='mneme_trained')"

# Run
python qwen.py --encoder mneme_trained/best_encoder.pt
```

## Training Config

- **Delta rank**: 16
- **Target layers**: [4, 8, 12, 16, 20, 24]
- **Encoder**: 768 hidden, 4 layers
- **Base model**: Qwen/Qwen3-4B
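
The "delta rank 16" config above implies each target layer receives a low-rank weight update. As a minimal sketch (the real encoder's architecture lives in the Mneme repo; the matrices here are random placeholders, and the 64-dim size is an arbitrary toy choice, not Qwen3-4B's hidden size):

```python
import torch

# LoRA-style low-rank delta matching the rank-16 config above.
# In Mneme, A and B would come from the trained encoder given a fact
# string; here they are random stand-ins (an assumption for illustration).
DELTA_RANK = 16

def apply_delta(weight: torch.Tensor, A: torch.Tensor, B: torch.Tensor) -> torch.Tensor:
    """Return weight + A @ B, a rank-<=16 update to one target layer."""
    assert A.shape[1] == DELTA_RANK and B.shape[0] == DELTA_RANK
    return weight + A @ B

W = torch.zeros(64, 64)                    # toy weight matrix
A = torch.randn(64, DELTA_RANK)
B = torch.randn(DELTA_RANK, 64)
W_patched = apply_delta(W, A, B)           # rank of (W_patched - W) <= 16
```

Because the update is rank-16 rather than a full dense matrix, the encoder only has to predict `2 * 64 * 16` numbers per toy layer instead of `64 * 64`, which is what makes per-fact injection tractable.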

## What This Does

Injects facts directly INTO model weights (no RAG, no prompt injection):

```
/remember My name is Yusuf
/remember I work at Google
What is my name?  →  "Your name is Yusuf"
Where do I work?  →  "You work at Google"
```