---
language:
- en
tags:
- reframr
- okeymeta
- non-transformer
- recurrent-memory
- computed-weights
- cpu-inference
- safetensors
library_name: reframr
pipeline_tag: text-generation
license: other
base_model: scratch
---

# Reframr-RFM-v1-Base

**Reframr-RFM-v1-Base** is the first public base checkpoint from **OkeyMeta Ltd** for the Reframr line of non-Transformer language models. Reframr is built from scratch around recurrent memory, computed weights, and data-derived structure rather than a Transformer attention stack.

This release is packaged as `model.safetensors` with the matching `tokenizer.json`, runtime source, config, and runnable examples. A larger production Reframr line is being computed after this release, including tool-use and web-freshness data.

## What It Is

Reframr-RFM stands for **Recurrent Flow Memory**. The model is designed around a persistent recurrent state instead of a fixed quadratic attention map, so the architecture has no fixed attention-window context limit; practical limits are determined by runtime session length, machine memory, and deployment policy.
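To make the memory-cost argument concrete, here is a generic recurrent-state sketch in plain Python. It is **not** the actual Reframr update rule (this card does not specify one); the decay weights and token signal are hypothetical stand-ins. The point it illustrates is that a fixed-size state costs the same memory after ten tokens or ten thousand, unlike a quadratic attention map.

```python
import random

def run_recurrent(token_ids, state_dim=48, seed=0):
    """Generic recurrent-state sketch (not Reframr's actual update rule).

    The state has a fixed size, so memory cost is constant no matter how
    many tokens have been consumed.
    """
    rng = random.Random(seed)
    # Hypothetical per-dimension decay and input weights; they stand in for
    # Reframr's computed weights, which this card does not specify.
    decay = [rng.uniform(0.8, 0.99) for _ in range(state_dim)]
    state = [0.0] * state_dim
    for tok in token_ids:
        for i in range(state_dim):
            # Blend the previous state with a signal derived from the token.
            state[i] = decay[i] * state[i] + (1.0 - decay[i]) * ((tok * (i + 1)) % 7)
    return state

short = run_recurrent(range(10))
long = run_recurrent(range(10_000))
```

The `state_dim=48` default mirrors the state dim listed in the architecture table below, but the arithmetic inside the loop is illustrative only.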

This checkpoint is not a Transformer, not a fine-tuned clone of a Transformer, and not a prompt wrapper. It runs on the Reframr runtime included in this repository, and its checkpoint kind is `reframr-analytical`.
## Model Files

- `model.safetensors`: Reframr v1 computed-weight checkpoint.
- `tokenizer.json`: FrameToken tokenizer exported from the checkpoint metadata.
- `config.json`: Release metadata and tensor layout.
- `generation_config.json`: Recommended default generation settings.
- `reframr/`: CPU-first Reframr runtime source.
- `examples/`: Minimal CLI, JSONL, and Python usage examples.

## Quick Start

Use Python 3.13 or newer from the root of this model repository:

```bash
python -m pip install -r requirements.txt
python -m reframr generate \
  --model model.safetensors \
  --context "Who are you, and what makes you different from Transformer models?" \
  --max-tokens 90 \
  --temperature 0.92 \
  --decode-top-k 72 \
  --decode-top-p 0.92
```

System instructions are passed as learned context:

```bash
python -m reframr generate \
  --model model.safetensors \
  --system "Answer in two short paragraphs. Be direct and warm." \
  --context "Explain why clean data matters when computing Reframr weights." \
  --max-tokens 90 \
  --temperature 0.9
```

For a persistent process that loads the checkpoint once and accepts JSONL requests:

```bash
python -m reframr serve --model model.safetensors --max-tokens 96
```

Then send one JSON object per line:

```jsonl
{"prompt":"Tell a short story about a glass library under the sea.","temperature":1.05,"decode_top_k":90,"max_tokens":120}
{"system":"Use exactly one fitting emoji.","prompt":"Encourage a tired engineer without sounding generic.","max_tokens":70}
```
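When building these request lines programmatically, it is safer to go through `json.dumps` than to format strings by hand, so quoting and escaping are always valid and each object stays on a single line. A minimal sketch, using the field names from the examples above (`prompt`, `system`, `temperature`, `decode_top_k`, `max_tokens`); the `make_request` helper is hypothetical, not part of the Reframr runtime:

```python
import json

def make_request(prompt, system=None, **options):
    """Build one JSONL request line for the serve process.

    Any extra keyword options (temperature, decode_top_k, max_tokens, ...)
    are passed through into the JSON object as-is.
    """
    request = {"prompt": prompt}
    if system is not None:
        request["system"] = system
    request.update(options)
    # One compact JSON object per line, with no embedded newlines.
    return json.dumps(request, ensure_ascii=False)

lines = [
    make_request("Tell a short story about a glass library under the sea.",
                 temperature=1.05, decode_top_k=90, max_tokens=120),
    make_request("Encourage a tired engineer without sounding generic.",
                 system="Use exactly one fitting emoji.", max_tokens=70),
]
print("\n".join(lines))
```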

## Python Example

```python
from pathlib import Path
from reframr.model import ReframrModel

root = Path(__file__).resolve().parent
model = ReframrModel.load(root / "model.safetensors")

text = model.generate_text(
    "Who are you?",
    max_tokens=80,
    temperature=0.92,
    top_k=72,
    top_p=0.92,
    repetition_penalty=1.18,
)
print(text)
```

## Generation Controls

- `--temperature`: Higher values increase variation. Try `0.85` for focused answers and `1.05` for story or brainstorming prompts.
- `--decode-top-k`: Limits sampling to the strongest candidate set. Recommended range: `50` to `100`.
- `--decode-top-p`: Nucleus cutoff. Recommended default: `0.92`.
- `--repetition-penalty`: Penalizes repeated tokens. Recommended default: `1.18`.
- `--system`: Adds a system instruction before the user prompt.
- `--reasoning-mode`: Supports `none`, `deep`, `memory`, and `tool` profiles in the runtime. The current public checkpoint is a base release; the dedicated tool/web-freshness line is still being computed.
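The temperature, top-k, and top-p knobs interact in a standard way, regardless of runtime: temperature rescales the logits, top-k caps the candidate set by rank, and top-p then keeps the smallest prefix of that set whose probability mass reaches the cutoff. A toy sketch of that pipeline in plain Python (illustrating the general technique, not Reframr's exact decoder):

```python
import math
import random

def sample_next(logits, temperature=0.92, top_k=72, top_p=0.92, seed=None):
    """Toy temperature / top-k / top-p sampling over raw logits."""
    rng = random.Random(seed)
    # Temperature rescales logits before softmax; <1 sharpens, >1 flattens.
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    probs = [math.exp(l - m) for l in scaled]
    total = sum(probs)
    probs = [p / total for p in probs]
    # Top-k: keep only the k most likely candidate ids.
    ranked = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)[:top_k]
    # Top-p: keep the smallest prefix whose cumulative mass reaches top_p.
    kept, mass = [], 0.0
    for i in ranked:
        kept.append(i)
        mass += probs[i]
        if mass >= top_p:
            break
    # Renormalize over the surviving candidates and draw one token id.
    mass = sum(probs[i] for i in kept)
    r = rng.random() * mass
    for i in kept:
        r -= probs[i]
        if r <= 0:
            return i
    return kept[-1]

token = sample_next([2.0, 1.5, 0.2, -1.0], top_k=3, seed=0)
```

Shrinking `top_p` toward zero collapses sampling to the single most likely token; raising temperature spreads mass toward the tail, so more candidates survive the top-p cutoff.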

## Identity

Reframr is built by **OkeyMeta Ltd**. The Reframr line reframes language intelligence around recurrent memory, computed weights, and evidence from data. OkeyMeta Ltd was founded in 2022. The founder and CEO is **Okechukwu Goodnews Nwaozor**.

## Architecture Snapshot

| Property | Reframr-RFM-v1-Base |
| --- | --- |
| Family | Reframr / Recurrent Flow Memory |
| Organization | OkeyMeta Ltd |
| Checkpoint kind | `reframr-analytical` |
| Attention stack | None |
| Transformer layers | None |
| Tokenizer | FrameToken |
| Weight file | `model.safetensors` |
| Runtime | CPU-first Reframr Python runtime |
| Embedding dim | 96 |
| State dim | 48 |
| State width | 576 |
| Output vocab rows | 2,793 |
| Tokenizer vocab size | 3,741 |
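The tensor layout recorded in `config.json` can be cross-checked against the checkpoint itself, because the safetensors format is self-describing: the file starts with an 8-byte little-endian header length, followed by that many bytes of JSON mapping tensor names to dtype, shape, and byte offsets. The sketch below builds a tiny synthetic file so it runs standalone; the tensor name `embedding` and its shape are illustrative, not Reframr's actual layout. Point `read_safetensors_header` at this repository's `model.safetensors` to inspect the real one.

```python
import json
import struct
import tempfile
from pathlib import Path

def read_safetensors_header(path):
    """Read the JSON header of a .safetensors file.

    Layout: 8-byte little-endian header length, then that many bytes of
    JSON mapping tensor names to {"dtype", "shape", "data_offsets"}.
    """
    with open(path, "rb") as f:
        (header_len,) = struct.unpack("<Q", f.read(8))
        return json.loads(f.read(header_len))

# Synthetic file with one hypothetical F32 tensor (zero-filled data region).
header = {"embedding": {"dtype": "F32", "shape": [3741, 96],
                        "data_offsets": [0, 3741 * 96 * 4]}}
payload = json.dumps(header).encode("utf-8")
demo = Path(tempfile.gettempdir()) / "demo.safetensors"
demo.write_bytes(struct.pack("<Q", len(payload)) + payload + b"\x00" * (3741 * 96 * 4))

layout = read_safetensors_header(demo)
print(layout["embedding"]["shape"])
```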

## Intended Use

This checkpoint is intended for public testing of the Reframr runtime: open-ended generation experiments, system-instruction experiments, story generation, probing safety behavior and identity prompts, and CPU-first research into non-Transformer language modeling.

It is a base checkpoint, not a medical, legal, financial, or safety-critical authority. For fresh factual questions, connect a retrieval or web-search tool in the upcoming tool-aware Reframr line rather than relying on static checkpoint knowledge alone.

## Release Note

This release is the public v1 base checkpoint. Internally, it comes from the v95 tracked compute run; publicly, it begins the Reframr-RFM v1 line. The next production line is being computed with broader data, tool-use supervision, web-search protocol tokens, and larger generalization probes. The goal is simple: make Reframr a serious, CPU-first, non-Transformer model family that learns from data rather than from hardcoded responses.

## Ownership

Copyright OkeyMeta Ltd. All rights reserved unless a separate license is supplied by OkeyMeta Ltd.
|
|