---
license: mit
tags:
  - ai-memory
  - ai-agents
  - mcp
  - benchmark
  - retrieval
  - memorygym
---

# Audrey Memory Benchmark Artifacts

This dataset publishes current Audrey benchmark artifacts for inspection and reproducibility.

- Project: https://github.com/Evilander/Audrey
- Package: https://www.npmjs.com/package/audrey
- Live report Space: https://huggingface.co/spaces/Evilander/audrey-memory-benchmark-report

## Current MemoryGym rerun after Audrey tag-isolation fix

- Generated: 2026-05-01T04:38:49.849Z (April 30, 2026, 11:38 PM Central)
- Suite: MemoryGym Core
- Transport: Audrey HTTP sidecar on loopback
- Providers: mock embeddings, mock LLM, isolated data directory

Audrey recall previously treated multiple tag filters as OR. MemoryGym sends the memorygym, run-id, and scenario-id tags together, so OR matching let memories from unrelated scenarios leak into recall. Audrey now requires every requested tag to match (AND semantics).
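A minimal sketch of the difference between the two matching modes. The function and record names here are hypothetical, not Audrey's actual internals; only the AND-vs-OR semantics and the three-tag filter shape come from this card.

```typescript
interface MemoryRecord {
  id: string;
  text: string;
  tags: string[];
}

// OR semantics (the old, buggy behavior): any shared tag matches.
function matchesAnyTag(record: MemoryRecord, filters: string[]): boolean {
  return filters.some((t) => record.tags.includes(t));
}

// AND semantics (the fixed behavior): every requested tag must match.
function matchesAllTags(record: MemoryRecord, filters: string[]): boolean {
  return filters.every((t) => record.tags.includes(t));
}

const records: MemoryRecord[] = [
  { id: "1", text: "right scenario", tags: ["memorygym", "run-7", "scenario-3"] },
  { id: "2", text: "wrong scenario", tags: ["memorygym", "run-7", "scenario-9"] },
];
const filters = ["memorygym", "run-7", "scenario-3"];

// OR matching leaks record 2 (it shares "memorygym" and "run-7");
// AND matching isolates record 1.
const orHits = records.filter((r) => matchesAnyTag(r, filters)).map((r) => r.id);
const andHits = records.filter((r) => matchesAllTags(r, filters)).map((r) => r.id);
```

The leak is exactly this shape: any memory sharing even one tag with the query passed the old filter, which is why scenario isolation broke.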

| Adapter        | Score | Hit rate | P95 recall latency |
| -------------- | ----- | -------- | ------------------ |
| typed-semantic | 72.7% | 100.0%   | 0.4 ms             |
| hybrid         | 72.7% | 100.0%   | 0.3 ms             |
| audrey-http    | 57.5% | 85.7%    | 3.8 ms             |

## Audrey before/after

| Run            | Score | Hit rate | P95 recall latency |
| -------------- | ----- | -------- | ------------------ |
| Before tag fix | 34.3% | 42.9%    | 3.7 ms             |
| After tag fix  | 57.5% | 85.7%    | 3.8 ms             |
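The effect of the tag fix can be read directly off the before/after table as deltas; the numbers below are copied from it, not recomputed from the raw artifacts.

```typescript
// Values copied from the published before/after table.
const before = { score: 34.3, hitRate: 42.9, p95Ms: 3.7 };
const after = { score: 57.5, hitRate: 85.7, p95Ms: 3.8 };

// Deltas in percentage points (score, hit rate) and milliseconds (latency).
const scoreDelta = +(after.score - before.score).toFixed(1);
const hitRateDelta = +(after.hitRate - before.hitRate).toFixed(1);
const latencyDelta = +(after.p95Ms - before.p95Ms).toFixed(1);
```

In words: the fix roughly doubles the hit rate while adding about 0.1 ms of P95 recall latency.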

## Included files

- memorygym-before-tag-filter.json: raw MemoryGym run before the Audrey tag fix
- memorygym-after-tag-filter.json: raw MemoryGym run after the Audrey tag fix
- memorygym-current-run.json: the current raw run (at the moment identical to the after-fix run)
- memorygym-current-summary.json: compact summary and finding notes
- memorygym-before-after.svg: before/after graph
- memorygym-score.svg: score graph
- memorygym-hit-rate.svg: hit-rate graph
- memorygym-p95-recall-latency.svg: latency graph
- audrey-benchmark-report.png: full-page screenshot of the current report
- index.html: static report source
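A minimal sketch for inspecting the raw JSON artifacts above after downloading them locally. Only the file names come from this card; the JSON schema inside them is not assumed, so this just lists top-level keys. The demo file written here is a stand-in, not real benchmark data.

```typescript
import { readFileSync, writeFileSync } from "node:fs";
import { tmpdir } from "node:os";
import { join } from "node:path";

// List the top-level keys of a raw run artifact such as
// memorygym-current-summary.json, without assuming its schema.
function topLevelKeys(path: string): string[] {
  const data: unknown = JSON.parse(readFileSync(path, "utf8"));
  return data !== null && typeof data === "object" ? Object.keys(data) : [];
}

// Demo with a stand-in file; point this at the downloaded artifacts instead.
const demo = join(tmpdir(), "memorygym-demo.json");
writeFileSync(demo, JSON.stringify({ suite: "MemoryGym Core", score: 57.5 }));
const keys = topLevelKeys(demo);
```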

## Claim boundary

This is a current MemoryGym local benchmark artifact. It is not an official AMB, LoCoMo, LongMemEval, Papers with Code, or CodeSOTA leaderboard score.

The current run is intentionally published with its remaining gaps intact. The next fix targets are current-belief precedence and context-aware ranking.