---
license: apache-2.0
base_model: Qwen/Qwen3-4B
library_name: transformers
tags:
- qwen3
- lora
- fine-tuned
- chain-of-thought
- structured-output
- json
- reasoning
- pos
- order-parsing
pipeline_tag: text-generation
language:
- en
---

# STRM-4B-v1: Stateful Reasoning Model

A LoRA fine-tune of [Qwen/Qwen3-4B](https://huggingface.co/Qwen/Qwen3-4B) trained on distilled chain-of-thought data for parsing unstructured, spoken-language input into structured JSON.

The model maintains running state while processing sequential instructions -- handling corrections ("scratch that"), cancellations ("actually nevermind"), quantity changes ("bump that to 3"), and modifier adjustments in a single pass.

**~94% exact-match accuracy** averaged across evaluation domains.

## Quick Start

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "AryanNsc/strm-4b-v1"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

messages = [
    {"role": "system", "content": "You are an expert POS agent. Parse the order step by step, then output JSON."},
    {"role": "user", "content": "uh can I get a venti latte with oat milk and a blueberry muffin actually scratch that muffin make it an avocado toast"},
]

text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True, enable_thinking=True)
inputs = tokenizer(text, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=2048, temperature=0.6, do_sample=True)
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```

### Expected Output

The model reasons inside `<think>` tags, then outputs structured JSON:

```
<think>
Processing the order step by step:

1. "venti latte with oat milk" -> Latte, Venti, Oat Milk
   - Base: $4.50 + Venti: $1.00 + Oat Milk: $0.80 = $6.30

2. "a blueberry muffin" -> Blueberry Muffin
   - $3.75

3. "actually scratch that muffin" -> Cancel the Blueberry Muffin

4. "make it an avocado toast" -> Avocado Toast
   - $7.00

Running total: $6.30 + $7.00 = $13.30
</think>
{"items": [{"name": "Latte", "size": "Venti", "quantity": 1, "modifiers": ["Oat Milk"]}, {"name": "Avocado Toast", "size": null, "quantity": 1, "modifiers": []}], "total_price": 13.30}
```

## Intended Use

STRM is designed for tasks that require **stateful sequential reasoning** -- processing a stream of instructions where later instructions modify earlier state. Primary use cases:

- **Point-of-sale order parsing** -- spoken coffee shop, restaurant, or retail orders with corrections and modifications
- **Grocery checkout / inventory** -- item additions, removals, and quantity changes with running totals
- **Banking transactions** -- sequential operations with balance tracking
- **Bill splitting** -- multi-party calculations with adjustments
- **Any domain** where input arrives sequentially and includes corrections to prior state

## How It Works

The model is trained with **distilled thinking** -- each training example includes explicit step-by-step reasoning inside `<think>` tags before the final JSON output. This teaches the model to:

1. **Parse sequentially** -- process input phrase by phrase, not all at once
2. **Track mutable state** -- maintain a running list of items/entities that is updated with each action
3. **Handle corrections** -- "scratch that", "remove that", "actually nevermind" modify tracked state rather than restarting
4. **Show arithmetic** -- every price calculation is written out step by step, reducing computation errors
5. **Output valid JSON** -- clean structured output after reasoning is complete

Training data spans multiple domains with weighted sampling, so the model learns the general skill of stateful reasoning rather than memorizing domain-specific patterns.
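
To make the state tracking concrete, here is a minimal sketch (not the training code -- the `Item` and `OrderState` helpers are hypothetical) of the kind of running state the model's reasoning traces walk through:

```python
from dataclasses import dataclass, field

@dataclass
class Item:
    name: str
    size: str | None = None
    quantity: int = 1
    modifiers: list[str] = field(default_factory=list)
    price: float = 0.0

class OrderState:
    """A mutable running order, updated action by action -- mirroring
    the step-by-step reasoning inside the model's <think> trace."""

    def __init__(self):
        self.items: list[Item] = []

    def add(self, item: Item):
        self.items.append(item)

    def cancel_last(self):
        # "scratch that" / "actually nevermind" -> drop the most recent item
        if self.items:
            self.items.pop()

    def set_quantity(self, qty: int):
        # "bump that to 3" -> mutate the most recent item in place
        if self.items:
            self.items[-1].quantity = qty

    def total(self) -> float:
        return round(sum(i.price * i.quantity for i in self.items), 2)

# The Quick Start order, replayed as state updates:
state = OrderState()
state.add(Item("Latte", size="Venti", modifiers=["Oat Milk"], price=6.30))
state.add(Item("Blueberry Muffin", price=3.75))
state.cancel_last()  # "actually scratch that muffin"
state.add(Item("Avocado Toast", price=7.00))
print(state.total())  # 13.3
```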

## Training Details

| Parameter | Value |
|-----------|-------|
| Base model | [Qwen/Qwen3-4B](https://huggingface.co/Qwen/Qwen3-4B) |
| Method | LoRA |
| LoRA rank (r) | 64 |
| LoRA alpha | 64 |
| LoRA dropout | 0.0 |
| Target modules | q_proj, k_proj, v_proj, o_proj, gate_proj, up_proj, down_proj |
| Quantization | 4-bit NF4 (training only; weights are merged to 16-bit) |
| Max sequence length | 4096 |
| Learning rate | 2e-4 |
| LR scheduler | Cosine with 5% warmup |
| Weight decay | 0.01 |
| Epochs | 3 |
| Per-device batch size | 2 |
| Gradient accumulation | 4 (effective batch size: 8) |
| Precision | bf16 |
| Seed | 42 |
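
For reference, a PEFT `LoraConfig` matching the table above would look roughly like this (a sketch, not the released training script):

```python
from peft import LoraConfig

lora_config = LoraConfig(
    r=64,
    lora_alpha=64,
    lora_dropout=0.0,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    task_type="CAUSAL_LM",
)
```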

### Training Data

The model was trained on multi-domain distilled chain-of-thought data. Each example consists of a system prompt, a user input, and an assistant response containing `<think>...</think>` reasoning followed by structured JSON. Domains include coffee shop orders, restaurant orders, grocery checkout, banking, inventory, bill splitting, recipe scaling, scheduling, budget tracking, and unit conversion -- with coffee-domain examples upsampled for the primary use case.
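
The exact record schema is not published here; in chat-message form, a training example looks roughly like this (illustrative values only):

```python
example = [
    {"role": "system", "content": "You are an expert POS agent. Parse the order step by step, then output JSON."},
    {"role": "user", "content": "two small lattes uh actually make that three"},
    {"role": "assistant", "content": (
        "<think>\n"
        '1. "two small lattes" -> Latte, Small, quantity 2\n'
        '2. "make that three" -> quantity 2 -> 3\n'
        "</think>\n"
        '{"items": [{"name": "Latte", "size": "Small", "quantity": 3, "modifiers": []}], "total_price": ...}'
    )},
]
```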

## Evaluation

Benchmarked on held-out labeled data across difficulty tiers:

| Difficulty | Description |
|------------|-------------|
| Easy | 1-2 items, no corrections |
| Medium | 2-3 items, some modifiers |
| Hard | Multiple items with cancellations or quantity bumps |
| Nightmare | 4+ items with mixed corrections, modifier removals, and re-additions |

The model achieves **~94% exact-match accuracy** averaged across domains, where an exact match requires both the item list (names, sizes, quantities, modifiers) and the total price to be completely correct.

### Metrics Reported

- **Exact match** -- items and price both fully correct
- **Items match** -- all items correct regardless of price
- **Price match** -- total within $0.01 tolerance
- **Per-field** -- names, sizes, quantities, modifiers evaluated independently
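
A sketch of how these metrics can be computed against gold labels (illustrative helper functions, not the released eval code):

```python
def items_match(pred: dict, gold: dict) -> bool:
    """All items correct, order-insensitive, regardless of price."""
    def norm(items):
        # None sizes are mapped to "" so tuples stay comparable when sorting
        return sorted(
            (i["name"], i.get("size") or "", i["quantity"], tuple(sorted(i["modifiers"])))
            for i in items
        )
    return norm(pred["items"]) == norm(gold["items"])

def price_match(pred: dict, gold: dict, tol: float = 0.01) -> bool:
    """Total within $0.01 tolerance."""
    return abs(pred["total_price"] - gold["total_price"]) <= tol

def exact_match(pred: dict, gold: dict) -> bool:
    """Items and price both fully correct."""
    return items_match(pred, gold) and price_match(pred, gold)
```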

## Usage Tips

- **Use `enable_thinking=True`** in `apply_chat_template` -- the model was trained to reason inside `<think>` tags before outputting JSON
- **Temperature 0.6** works well for most inputs; use **temperature 0** (greedy) for maximum consistency
- **Max tokens 2048** is sufficient for most orders; nightmare-level inputs with 5+ items may need more
- The JSON output appears **after** the `</think>` closing tag -- parse everything after that delimiter (see the snippet below)
- The model handles filler words (uh, um, like, literally) natively -- no need to preprocess
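
A minimal parse of the generated text, splitting on the `</think>` delimiter (assumes `generated_text` is the decoded string from the Quick Start):

```python
import json

def parse_response(decoded: str) -> dict:
    """Drop the reasoning trace and parse the JSON that follows </think>."""
    _, _, tail = decoded.rpartition("</think>")
    return json.loads(tail.strip())

order = parse_response(generated_text)
print(order["items"], order["total_price"])
```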

## Limitations

- Trained primarily on English-language input
- Price arithmetic can occasionally drift on very long orders (6+ items with many modifiers)
- The model expects a system prompt describing the menu/domain; without one, the output format may be inconsistent
- Not designed for multi-turn conversation -- each inference is a single order

## Training Code

The full training and evaluation code is open source:

[github.com/Guney-olu/strm-model](https://github.com/Guney-olu/strm-model)

## License

Apache 2.0