---
language:
- en
- tr
tags:
- text-generation
- conversational
- english
- turkish
- mistral
- peft
- lora
- hmc
- reasoning
- mathematical-reasoning
base_model:
- mistralai/Ministral-3-3B-Base-2512
library_name: transformers
pipeline_tag: text-generation
---

# RubiNet

RubiNet is a bilingual English-Turkish conversational release built on top of `mistralai/Ministral-3-3B-Base-2512`. It is distributed as a LoRA adapter and reflects the RubiNet chat-tuning setup used in the local HMC-based deployment stack.

The goal of RubiNet is to provide sharper dialogue quality, stronger consistency, and better reasoning behavior than the untuned base model in local assistant usage. In the local serving stack, RubiNet can also be paired with math-oriented prompting and calculator verification for safer arithmetic handling.
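The calculator-verification idea can be sketched as a small helper that parses an arithmetic expression with Python's `ast` module and evaluates only whitelisted operators, instead of trusting the model's own arithmetic. The function below is a hypothetical illustration and is not part of this release.

```python
import ast
import operator

# Hypothetical sketch of calculator verification: safely evaluate a plain
# arithmetic expression instead of trusting the model's arithmetic.
_OPS = {
    ast.Add: operator.add,
    ast.Sub: operator.sub,
    ast.Mult: operator.mul,
    ast.Div: operator.truediv,
    ast.Pow: operator.pow,
    ast.USub: operator.neg,
}

def safe_eval(expr):
    """Evaluate an arithmetic expression using only whitelisted operators."""
    def _eval(node):
        if isinstance(node, ast.Expression):
            return _eval(node.body)
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](_eval(node.left), _eval(node.right))
        if isinstance(node, ast.UnaryOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](_eval(node.operand))
        raise ValueError(f"unsupported expression: {expr!r}")
    return _eval(ast.parse(expr, mode="eval"))

print(safe_eval("12 * (3 + 4)"))  # 84
```

A serving layer would extract candidate expressions from the model's output, recompute them with a helper like this, and flag or correct mismatches.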
## Model Summary

- **Model name**: `RubiNet`
- **Base model**: `mistralai/Ministral-3-3B-Base-2512`
- **Release type**: LoRA adapter
- **Primary languages**: English, Turkish
- **Primary use case**: text generation and chat
- **Inference stack**: Transformers + PEFT
- **Tuning style**: RubiNet HMC chat adaptation

## Benchmark Snapshot

The following benchmark scores were reported for the RubiNet setup:

| Benchmark | Score |
| --- | ---: |
| PIQA | **71.55%** |
| ARC-Easy | **79.82%** |
| GSM8K-100 | **24.00%** |

### Evaluation Notes

- **PIQA**: `1315 / 1838` correct on validation
- **ARC-Easy**: `455 / 570` correct
- **GSM8K-100**: `24 / 100` correct
- These values come from the evaluation artifacts included in this repository under `benchmarks/`.
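The percentages in the table follow directly from these correct/total counts; a quick sanity check using only the figures above:

```python
# Recompute the reported benchmark scores from the raw correct/total counts.
results = {
    "PIQA": (1315, 1838),
    "ARC-Easy": (455, 570),
    "GSM8K-100": (24, 100),
}

for name, (correct, total) in results.items():
    score = round(100 * correct / total, 2)
    print(f"{name}: {score:.2f}%")
# PIQA: 71.55%
# ARC-Easy: 79.82%
# GSM8K-100: 24.00%
```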
## What This Repository Contains

This repository hosts the RubiNet adapter release and related reference files:

- `adapter_model.safetensors`
- `adapter_config.json`
- `tokenizer.json`
- `tokenizer_config.json`
- `ministral_3b_hmc_chat.py`
- `ministral_3b_hmc_server.py`
- benchmark result JSON files

This repository does **not** bundle the original base model weights. You need access to the base model `mistralai/Ministral-3-3B-Base-2512` in order to load this adapter.

## Loading Example

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_model_id = "mistralai/Ministral-3-3B-Base-2512"
adapter_id = "YOUR_USERNAME/RubiNet"

tokenizer = AutoTokenizer.from_pretrained(base_model_id)
base_model = AutoModelForCausalLM.from_pretrained(base_model_id, device_map="auto")
model = PeftModel.from_pretrained(base_model, adapter_id)

messages = [
    {"role": "user", "content": "Explain why 2+2=4 in a short way."}
]

prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
# Use do_sample=False for greedy decoding; temperature=0.0 is invalid
# when sampling is enabled.
output = model.generate(**inputs, max_new_tokens=128, do_sample=False)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```
## Chat Example

![RubiNet local chat example](./rubinet_chat_example.png)

Example screenshot of the local RubiNet chat interface.

## Training / Adaptation Note

RubiNet is a fine-tuned conversational adaptation derived from `mistralai/Ministral-3-3B-Base-2512`. The release uses an HMC-oriented chat setup and is intended for local assistant-style interaction, bilingual usage, and reasoning-focused experimentation.

## Limitations

- This release is an adapter, not a full standalone base checkpoint.
- Benchmark scores depend on the exact prompting and inference configuration.
- Arithmetic reliability improves when RubiNet is combined with external calculator verification in the serving layer.
- GSM8K performance is still limited relative to stronger specialized math-tuned models.
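Because scores are sensitive to prompting and answer parsing, GSM8K-style harnesses typically extract the final number from the model's response before comparing it against the reference. A minimal hypothetical extractor (not necessarily the exact logic behind the scores above):

```python
import re

# Hypothetical sketch: pull the last number out of a model response for
# GSM8K-style exact-match scoring. Commas are stripped so thousands
# separators like "1,234" parse as a single number.
def extract_final_number(text):
    matches = re.findall(r"-?\d+(?:\.\d+)?", text.replace(",", ""))
    return float(matches[-1]) if matches else None

print(extract_final_number("The total is 3 + 4 = 7, so the answer is 7."))  # 7.0
```

Small changes to this parsing step (or to the prompt) can shift the reported GSM8K number noticeably, which is why the scores above should be read alongside their exact evaluation configuration.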
## Repository Notes

If you publish this repository publicly, keep the model title as **RubiNet** and place additional technical details such as benchmark scores, language coverage, and architecture hints in the tags and description rather than in the title.