---
pretty_name: FalseMemBench
license: mit
task_categories:
- text-retrieval
language:
- en
tags:
- retrieval
- memory
- llm-agents
- adversarial
size_categories:
- n<1K
---
# FalseMemBench
`FalseMemBench` is an adversarial benchmark for evaluating memory retrieval systems under heavy distractor pressure.
The goal is to measure whether a system can retrieve the correct memory when many similar but incorrect memories are stored alongside it.
## Focus
The benchmark is designed for memory systems used by LLM agents.
It emphasizes:
- entity confusion
- environment confusion
- time/version confusion
- stale facts vs current facts
- speaker confusion
- near-duplicate paraphrases
## Layout
The public release is intentionally small:
- `data/cases.jsonl`: canonical benchmark cases
- `schema/case.schema.json`: benchmark case schema
- `scripts/validate.py`: schema validator for the JSONL dataset
- `scripts/run_benchmark.py`: simple keyword baseline
- `scripts/run_bm25_benchmark.py`: lexical BM25 baseline
- `scripts/run_dense_benchmark.py`: dense retrieval baseline
- `scripts/run_tagmem_benchmark.py`: benchmark runner for a real `tagmem` binary
- `scripts/run_mempalace_benchmark.py`: benchmark runner for MemPalace-style retrieval
- `docs/`: benchmark design and supporting notes
- `requirements.txt`: optional Python dependencies for the BM25 and dense baselines
## Canonical Dataset
`data/cases.jsonl` is the only canonical benchmark file.
There are no public snapshot versions in this repository. Version history is tracked through git.
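Because the dataset is a single JSON Lines file, it can be loaded directly; for example, with the Hugging Face `datasets` library (a minimal sketch, assuming the default `train` split name that `load_dataset` assigns to raw data files):
```python
# Minimal sketch: load data/cases.jsonl with the Hugging Face `datasets` library.
from datasets import load_dataset

cases = load_dataset("json", data_files="data/cases.jsonl", split="train")
print(len(cases), cases[0]["adversary_type"])
```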
## Running
Validate the canonical dataset:
```bash
python3 scripts/validate.py
```
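For context, validation amounts to checking every JSONL line against `schema/case.schema.json`. A minimal sketch using the `jsonschema` package (an illustration only, not the actual contents of `scripts/validate.py`):
```python
# Illustration only: check each case in data/cases.jsonl against the case schema.
# Assumes the `jsonschema` package; this is not the real scripts/validate.py.
import json
from jsonschema import validate

with open("schema/case.schema.json") as f:
    schema = json.load(f)

with open("data/cases.jsonl") as f:
    for line in f:
        validate(instance=json.loads(line), schema=schema)  # raises on invalid cases

print("all cases valid")
```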
Run the simple keyword baseline:
```bash
python3 scripts/run_benchmark.py
```
Run the `tagmem` benchmark:
```bash
python3 scripts/run_tagmem_benchmark.py --tagmem-bin tagmem
```
Run the MemPalace-style benchmark:
```bash
python3 scripts/run_mempalace_benchmark.py
```
The optional BM25 and dense baselines require the dependencies listed in `requirements.txt` (install with `pip install -r requirements.txt`).
## Case format
Each case contains:
- a `query`
- a set of `entries`
- one or more `relevant_ids`
- a single `adversary_type`
- optional metadata for analysis
## Example
```json
{
  "id": "env-001",
  "query": "What database does staging use?",
  "adversary_type": "environment_swap",
  "entries": [
    {
      "id": "e1",
      "text": "The staging environment uses db-staging.internal.",
      "tags": ["staging", "database", "infra"],
      "depth": 1
    },
    {
      "id": "e2",
      "text": "The production environment uses db-prod.internal.",
      "tags": ["production", "database", "infra"],
      "depth": 1
    }
  ],
  "relevant_ids": ["e1"]
}
```
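Scoring a retriever against a case is straightforward: ask it to rank the `entries` for the `query` and check its top results against `relevant_ids`. A minimal sketch, where `retrieve` is a hypothetical stand-in for your memory system and hit@1 is an illustrative metric rather than the benchmark's official one:
```python
# Minimal sketch of scoring a custom retriever on the benchmark.
# `retrieve` is a hypothetical stand-in; hit@1 is an illustrative metric.
import json

def retrieve(query: str, entries: list, k: int = 1) -> list:
    # Toy stand-in: rank entries by naive word overlap with the query.
    words = set(query.lower().split())
    ranked = sorted(entries, key=lambda e: -len(words & set(e["text"].lower().split())))
    return [e["id"] for e in ranked[:k]]

hits = total = 0
with open("data/cases.jsonl") as f:
    for line in f:
        case = json.loads(line)
        top = retrieve(case["query"], case["entries"], k=1)
        hits += int(bool(set(top) & set(case["relevant_ids"])))
        total += 1

print(f"hit@1 = {hits / total:.3f}")
```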
## Current adversary types
- `entity_swap`
- `environment_swap`
- `time_swap`
- `state_update`
- `speaker_swap`
- `near_duplicate_paraphrase`
Current dataset size:
- `573` cases
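For analysis, the per-type breakdown can be computed directly from the canonical file, for example:
```python
# Count cases per adversary_type in data/cases.jsonl.
import json
from collections import Counter

with open("data/cases.jsonl") as f:
    counts = Counter(json.loads(line)["adversary_type"] for line in f)

for adversary_type, n in counts.most_common():
    print(f"{adversary_type}: {n}")
```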
## Intended Use
The benchmark is intended to be:
- model-agnostic
- storage-agnostic
- metadata-friendly
- easy to publish to GitHub and Hugging Face