---
language:
- en
- tr
tags:
- text-generation
- conversational
- english
- turkish
- mistral
- peft
- lora
- hmc
- reasoning
- mathematical-reasoning
datasets:
- HuggingFaceH4/ultrachat_200k
base_model:
- mistralai/Ministral-3-3B-Base-2512
library_name: transformers
pipeline_tag: text-generation
model-index:
- name: RubiNet
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      type: piqa
      name: PIQA
      split: validation
    metrics:
    - type: accuracy
      name: Accuracy
      value: 71.55
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      type: ai2_arc
      name: ARC-Easy
      config: ARC-Easy
      split: test
    metrics:
    - type: accuracy
      name: Accuracy
      value: 79.82
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      type: gsm8k
      name: GSM8K-100
      split: test
    metrics:
    - type: accuracy
      name: Accuracy
      value: 24
---

# RubiNet

RubiNet is a bilingual English-Turkish conversational model built on top of `mistralai/Ministral-3-3B-Base-2512`. It is distributed as a LoRA adapter and reflects the RubiNet chat tuning setup used in the local HMC-based deployment stack.

The goal of RubiNet is to provide sharper dialogue quality, stronger consistency, and better reasoning behavior than the untuned base model in local assistant usage. In the local serving stack, RubiNet can also be paired with math-oriented prompting and calculator verification for safer arithmetic handling.

## Model Summary

- **Model name**: `RubiNet`
- **Base model**: `mistralai/Ministral-3-3B-Base-2512`
- **Release type**: LoRA adapter
- **Primary languages**: English, Turkish
- **Primary use case**: text generation and chat
- **Inference stack**: Transformers + PEFT
- **Tuning style**: RubiNet HMC chat adaptation

## Eval Results

The following benchmark scores were reported for the RubiNet setup:

| Benchmark | Score |
| --- | ---: |
| PIQA | **71.55%** |
| ARC-Easy | **79.82%** |
| GSM8K-100 | **24.00%** |

### Evaluation Notes

- **PIQA**: `1315 / 1838` correct on validation
- **ARC-Easy**: `455 / 570` correct
- **GSM8K-100**: `24 / 100` correct
- These values come from the attached evaluation artifacts included in this repository under `benchmarks/`.
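The percentages in the table above follow directly from these raw counts:

```python
# Reproduce the reported accuracy percentages from the raw correct/total counts.
counts = {
    "PIQA": (1315, 1838),
    "ARC-Easy": (455, 570),
    "GSM8K-100": (24, 100),
}

for name, (correct, total) in counts.items():
    print(f"{name}: {100 * correct / total:.2f}%")
# PIQA: 71.55%
# ARC-Easy: 79.82%
# GSM8K-100: 24.00%
```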

## What This Repository Contains

This repository is intended to host the RubiNet adapter release and related reference files:

- `adapter_model.safetensors`
- `adapter_config.json`
- `tokenizer.json`
- `tokenizer_config.json`
- `ministral_3b_hmc_chat.py`
- `ministral_3b_hmc_server.py`
- `local.png`
- `RubiNetHMC.png`
- benchmark result JSON files

This repository does **not** bundle the original base model weights. You need access to the base model `mistralai/Ministral-3-3B-Base-2512` in order to load this adapter.

## Loading Example

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_model_id = "mistralai/Ministral-3-3B-Base-2512"
adapter_id = "DevHunterAI/RubiNet"

tokenizer = AutoTokenizer.from_pretrained(base_model_id)
base_model = AutoModelForCausalLM.from_pretrained(base_model_id, device_map="auto")
model = PeftModel.from_pretrained(base_model, adapter_id)

messages = [
    {"role": "user", "content": "Explain why 2+2=4 in a short way."}
]

prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128, do_sample=False)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```
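`apply_chat_template` relies on a chat template being present in the shipped `tokenizer_config.json`. If the template is missing, a minimal Mistral-style prompt builder can serve as a fallback. The `[INST] ... [/INST]` format below is an assumption based on the Mistral instruct family; verify it against this repository's tokenizer config before relying on it:

```python
def build_prompt(messages):
    """Minimal Mistral-style chat prompt builder (assumed format)."""
    parts = []
    for msg in messages:
        if msg["role"] == "user":
            parts.append(f"[INST] {msg['content']} [/INST]")
        elif msg["role"] == "assistant":
            # Close the assistant turn with the end-of-sequence token.
            parts.append(f"{msg['content']}</s>")
    return "".join(parts)

prompt = build_prompt([{"role": "user", "content": "Merhaba! Nasılsın?"}])
print(prompt)  # [INST] Merhaba! Nasılsın? [/INST]
```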

## Chat Example

![RubiNet local chat example](./local.png)

A screenshot of the local RubiNet chat interface.

## Architecture Overview

![RubiNet HMC architecture](./RubiNetHMC.png)

An overview of the RubiNet HMC architecture as used in the local serving stack.

## Training / Adaptation Note

RubiNet is a fine-tuned conversational adaptation derived from `mistralai/Ministral-3-3B-Base-2512`. The release uses an HMC-oriented chat setup and is intended for local assistant-style interaction, bilingual usage, and reasoning-focused experimentation.

## Limitations

- This release is an adapter, not a full standalone base checkpoint.
- Benchmark scores depend on the exact prompting and inference configuration.
- Arithmetic reliability improves when RubiNet is combined with external calculator verification in the serving layer.
- GSM8K performance is still limited relative to stronger specialized math-tuned models.
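The calculator-verification idea mentioned above can be sketched as a thin post-processing step: extract arithmetic claims of the form `expr = value` from the model's output and recompute them with a restricted evaluator. This is a hypothetical illustration of the approach, not the actual serving-layer code:

```python
import ast
import operator
import re

# Binary operators the restricted evaluator accepts.
_OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
        ast.Mult: operator.mul, ast.Div: operator.truediv}

def _eval(node):
    """Evaluate a small arithmetic AST (numbers and + - * / only)."""
    if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
        return node.value
    if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
        return _OPS[type(node.op)](_eval(node.left), _eval(node.right))
    raise ValueError("unsupported expression")

def verify_arithmetic(text):
    """Check each 'expr = value' claim in text; return (claim, ok) pairs."""
    results = []
    for expr, claimed in re.findall(
            r"([\d\s\.\+\-\*/\(\)]+?)\s*=\s*(-?\d+(?:\.\d+)?)", text):
        try:
            actual = _eval(ast.parse(expr.strip(), mode="eval").body)
        except (ValueError, SyntaxError):
            continue
        results.append((f"{expr.strip()} = {claimed}",
                        abs(actual - float(claimed)) < 1e-9))
    return results

print(verify_arithmetic("So 17 * 24 = 408, and 408 + 9 = 417."))
# → [('17 * 24 = 408', True), ('408 + 9 = 417', True)]
```

In a serving layer, claims flagged `False` could trigger a retry or be corrected before the response reaches the user.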