# Smriti AI Small-Model Memory Layer
Smriti AI Small-Model Memory Layer is a lightweight discovery card for developers searching for small-model memory, long-term agent memory, external memory, semantic retrieval, graph recall, training-free memory, and memory augmentation for frozen language models.
## Important

This repository is not a standalone foundation model and is not fine-tuned; the base model remains frozen. Smriti AI is an inference-time memory layer: it stores user memory outside the model weights and injects retrieved context into the prompt at inference time.
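The store-outside-weights / inject-at-inference pattern described above can be sketched in plain Python. This is an illustrative toy, not the Smriti AI implementation: the class, method names, and the bag-of-words cosine scoring are all assumptions chosen to keep the example self-contained.

```python
# Toy external memory layer: the "model" is untouched; only the prompt changes.
import math
from collections import Counter


class ExternalMemory:
    """Stores user memories outside any model weights (hypothetical sketch)."""

    def __init__(self):
        self.entries = []

    def add(self, text):
        self.entries.append(text)

    @staticmethod
    def _vec(text):
        # Crude bag-of-words vector; a real system would use embeddings.
        return Counter(text.lower().split())

    @staticmethod
    def _cosine(a, b):
        dot = sum(a[t] * b[t] for t in a)
        na = math.sqrt(sum(v * v for v in a.values()))
        nb = math.sqrt(sum(v * v for v in b.values()))
        return dot / (na * nb) if na and nb else 0.0

    def retrieve(self, query, k=2):
        qv = self._vec(query)
        ranked = sorted(
            self.entries,
            key=lambda e: self._cosine(qv, self._vec(e)),
            reverse=True,
        )
        return ranked[:k]


def build_prompt(memory, user_msg):
    # Retrieved context is injected at inference time; the frozen base
    # model would simply receive this augmented prompt.
    context = "\n".join(memory.retrieve(user_msg))
    return f"Relevant memories:\n{context}\n\nUser: {user_msg}"


mem = ExternalMemory()
mem.add("User prefers Python over Java")
mem.add("User lives in Pune")
prompt = build_prompt(mem, "Which language does the user prefer for scripting?")
```

Because memory lives entirely in `ExternalMemory`, swapping the base model requires no retraining; this mirrors the training-free, weights-frozen design the card describes.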
## Use

Install the package from PyPI and import the main classes:

```bash
pip install "smriti-memory-ai[ml]==1.0.9"
```

```python
from smriti import SmritiAILite, MemPalaceLite
```
For Hugging Face Inference Endpoints, use luciferai-devil/smriti-ai and select a supported frozen base model through `BASE_MODEL_ID` or `HF_ENDPOINT_URL`.
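One plausible way the `BASE_MODEL_ID` / `HF_ENDPOINT_URL` selection could work is as environment-variable configuration. The resolution order, the default model ID, and the function below are assumptions inferred from the variable names, not the actual Smriti AI code:

```python
# Hypothetical base-model resolution (semantics assumed, not confirmed).
import os

DEFAULT_BASE_MODEL = "google/gemma-2-2b-it"  # placeholder default, not documented


def resolve_backend():
    """Prefer a dedicated endpoint URL; otherwise fall back to a model ID."""
    endpoint = os.environ.get("HF_ENDPOINT_URL")
    if endpoint:
        return {"mode": "endpoint", "target": endpoint}
    model_id = os.environ.get("BASE_MODEL_ID", DEFAULT_BASE_MODEL)
    return {"mode": "model_id", "target": model_id}


# Example: point the memory layer at a small frozen Qwen model.
os.environ.pop("HF_ENDPOINT_URL", None)
os.environ["BASE_MODEL_ID"] = "Qwen/Qwen2.5-0.5B-Instruct"
backend = resolve_backend()
```

Whichever backend is chosen, the base model stays frozen; only the retrieval layer in front of it differs.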
## Related assets

- Main model wrapper: luciferai-devil/smriti-ai
- Gemma 4 discovery card: luciferai-devil/smriti-ai-gemma-4-memory
- Qwen discovery card: luciferai-devil/smriti-ai-qwen-memory
- Demo Space: luciferai-devil/smriti-ai-demo
- Benchmark dataset: luciferai-devil/smriti-ai-benchmarks
- GitHub: https://github.com/Luciferai04/smriti-ai
- PyPI: smriti-memory-ai
## Claim boundary

Public benchmark claims are backed by the checked-in real-model provenance under results/current/; deterministic CI smoke checks are not official benchmark evidence. Smriti AI improves memory access; it does not change model weights.