---
language:
- en
- tr
tags:
- text-generation
- conversational
- english
- turkish
- mistral
- peft
- lora
- hmc
- reasoning
- mathematical-reasoning
datasets:
- HuggingFaceH4/ultrachat_200k
base_model:
- mistralai/Ministral-3-3B-Base-2512
library_name: transformers
pipeline_tag: text-generation
model-index:
- name: RubiNet
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      type: piqa
      name: PIQA
      split: validation
    metrics:
    - type: accuracy
      name: Accuracy
      value: 71.55
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      type: ai2_arc
      name: ARC-Easy
      config: ARC-Easy
      split: test
    metrics:
    - type: accuracy
      name: Accuracy
      value: 79.82
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      type: gsm8k
      name: GSM8K-100
      split: test
    metrics:
    - type: accuracy
      name: Accuracy
      value: 24
---

# RubiNet

RubiNet is a bilingual English-Turkish conversational model built on top of `mistralai/Ministral-3-3B-Base-2512`. It is released as a LoRA adapter and reflects the RubiNet chat-tuning setup used in the local HMC-based deployment stack.

The goal of RubiNet is sharper dialogue quality, stronger consistency, and better reasoning behavior than the untuned base model in local assistant usage. In the local serving stack, RubiNet can also be paired with math-oriented prompting and calculator verification for safer arithmetic handling.
## Model Summary

- **Model name**: `RubiNet`
- **Base model**: `mistralai/Ministral-3-3B-Base-2512`
- **Release type**: LoRA adapter
- **Primary languages**: English, Turkish
- **Primary use case**: text generation and chat
- **Inference stack**: Transformers + PEFT
- **Tuning style**: RubiNet HMC chat adaptation

## Eval Results

The following benchmark scores were reported for the RubiNet setup:

| Benchmark | Score |
| --- | ---: |
| PIQA | **71.55%** |
| ARC-Easy | **79.82%** |
| GSM8K-100 | **24.00%** |

### Evaluation Notes

- **PIQA**: `1315 / 1838` correct on the validation split
- **ARC-Easy**: `455 / 570` correct
- **GSM8K-100**: `24 / 100` correct
- These values come from the evaluation artifacts included in this repository under `benchmarks/`.

## What This Repository Contains

This repository hosts the RubiNet adapter release and related reference files:

- `adapter_model.safetensors`
- `adapter_config.json`
- `tokenizer.json`
- `tokenizer_config.json`
- `ministral_3b_hmc_chat.py`
- `ministral_3b_hmc_server.py`
- `local.png`
- `RubiNetHMC.png`
- benchmark result JSON files

This repository does **not** bundle the original base model weights. You need access to the base model `mistralai/Ministral-3-3B-Base-2512` in order to load this adapter.
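For readers new to LoRA: the adapter file above does not store full weight matrices. It stores a pair of low-rank matrices per adapted layer, and at load time the effective weight becomes `W + (alpha / r) * B @ A`. A minimal numerical sketch of that update (the dimensions and values here are illustrative, not RubiNet's actual configuration):

```python
import numpy as np

rng = np.random.default_rng(0)

d, r, alpha = 8, 2, 16           # hidden size, LoRA rank, LoRA alpha (illustrative)
W = rng.standard_normal((d, d))  # frozen base weight
A = rng.standard_normal((r, d))  # LoRA "down" projection (trained)
B = np.zeros((d, r))             # LoRA "up" projection (initialized to zero)

scaling = alpha / r

# At initialization B is zero, so the adapted weight equals the base weight:
# the adapter starts as a no-op and only diverges during training.
W_adapted = W + scaling * (B @ A)
assert np.allclose(W_adapted, W)

# After training, B is nonzero and the adapter contributes a rank-r update
# while the dense base weight W stays untouched.
B = rng.standard_normal((d, r))
delta = scaling * (B @ A)
print("update rank:", np.linalg.matrix_rank(delta))
```

This is why the adapter download is small relative to the base checkpoint: per layer it ships `2 * d * r` parameters instead of `d * d`.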
## Loading Example

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_model_id = "mistralai/Ministral-3-3B-Base-2512"
adapter_id = "DevHunterAI/RubiNet"

tokenizer = AutoTokenizer.from_pretrained(base_model_id)
base_model = AutoModelForCausalLM.from_pretrained(base_model_id, device_map="auto")
model = PeftModel.from_pretrained(base_model, adapter_id)

messages = [
    {"role": "user", "content": "Explain why 2+2=4 in a short way."}
]
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

# Greedy decoding; with do_sample=False the temperature setting is unused,
# and temperature=0.0 is rejected when sampling is enabled.
output = model.generate(**inputs, max_new_tokens=128, do_sample=False)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

## Chat Example

![RubiNet local chat example](./local.png)

Example local RubiNet chat interface screenshot.

## Architecture Overview

![RubiNet HMC architecture](./RubiNetHMC.png)

RubiNet HMC architecture overview used in the local serving stack.

## Training / Adaptation Note

RubiNet is a fine-tuned conversational adaptation derived from `mistralai/Ministral-3-3B-Base-2512`. The release uses an HMC-oriented chat setup and is intended for local assistant-style interaction, bilingual usage, and reasoning-focused experimentation.

## Limitations

- This release is an adapter, not a full standalone base checkpoint.
- Benchmark scores depend on the exact prompting and inference configuration.
- Arithmetic reliability improves when RubiNet is combined with external calculator verification in the serving layer.
- GSM8K performance is still limited relative to stronger specialized math-tuned models.
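The calculator-verification idea mentioned in the limitations can be implemented outside the model: scan the generated text for `<expression> = <number>` claims, re-evaluate each expression with a safe parser, and flag mismatches. The sketch below is one possible serving-layer check, not the verifier shipped in `ministral_3b_hmc_server.py`; the function names and regex are assumptions:

```python
import ast
import operator
import re

# Operators permitted in re-evaluated expressions; anything else is rejected.
_OPS = {
    ast.Add: operator.add,
    ast.Sub: operator.sub,
    ast.Mult: operator.mul,
    ast.Div: operator.truediv,
}

def _eval(node):
    """Recursively evaluate a parsed arithmetic expression without eval()."""
    if isinstance(node, ast.Expression):
        return _eval(node.body)
    if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
        return node.value
    if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
        return _OPS[type(node.op)](_eval(node.left), _eval(node.right))
    if isinstance(node, ast.UnaryOp) and isinstance(node.op, ast.USub):
        return -_eval(node.operand)
    raise ValueError("unsupported expression")

def verify_equalities(text, tol=1e-9):
    """Check each '<expr> = <number>' claim found in model output.

    Returns (expression, claimed, actual, ok) tuples; claims that fail to
    parse as pure arithmetic are skipped rather than judged.
    """
    results = []
    for expr, claimed in re.findall(r"([0-9.\s+\-*/()]+?)\s*=\s*(-?[0-9.]+)", text):
        try:
            actual = _eval(ast.parse(expr.strip(), mode="eval"))
        except (ValueError, SyntaxError):
            continue
        ok = abs(actual - float(claimed)) <= tol
        results.append((expr.strip(), float(claimed), actual, ok))
    return results

print(verify_equalities("So 12 * 3 = 36 and 10 / 4 = 2.4"))
```

A server can surface the failing tuples back to the user, or retry generation with the corrected values injected into the prompt.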