---
library_name: transformers
pipeline_tag: text-generation
tags:
  - synthetic-data
  - dpo
  - gpqa
  - reasoning
  - alignment
  - quantum
  - neuroscience
  - gloss-free
  - data-efficient
base_model: Qwen/Qwen2.5-7B-Instruct
license: other
language:
  - en
metrics:
  - accuracy
datasets:
  - TrueRunAI/TrueRun-Groove-v2.1-DPO
---

# TrueRun-Groove-v2.1-7B

Qwen2.5-7B-Instruct fine-tuned on ~1,200 high-rigor synthetic DPO pairs (Groove v2.1).

Training data is balanced across quantum mechanics, neuroscience/BCI, and alignment/game theory, with structural escalation to sustain reasoning at indefinite depth without gloss decay.
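
The card does not ship training code; as a rough, hedged sketch only, a DPO fine-tune of the base model on the preference dataset listed above could look like the following with Hugging Face TRL. The hyperparameters are illustrative placeholders, and the dataset is assumed to expose `prompt`/`chosen`/`rejected` columns.

```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

base = "Qwen/Qwen2.5-7B-Instruct"
model = AutoModelForCausalLM.from_pretrained(base, torch_dtype="auto")
tokenizer = AutoTokenizer.from_pretrained(base)

# Preference pairs; the column names are an assumption about the dataset schema.
train_ds = load_dataset("TrueRunAI/TrueRun-Groove-v2.1-DPO", split="train")

args = DPOConfig(
    output_dir="groove-v2.1-dpo",
    beta=0.1,                        # DPO KL-regularization strength (placeholder)
    learning_rate=5e-7,              # placeholder
    per_device_train_batch_size=1,
    gradient_accumulation_steps=8,
    num_train_epochs=1,
)

trainer = DPOTrainer(
    model=model,
    args=args,
    train_dataset=train_ds,
    processing_class=tokenizer,      # older TRL releases use `tokenizer=` instead
)
trainer.train()
```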

## Key Results (GPQA Diamond, mean of 3 seeds)
| Benchmark      | Questions | Baseline % | Groove Mean % | Delta (pp) | Notes |
|----------------|-----------|------------|---------------|------------|-------|
| Full Diamond   | 198       | 33.33      | 36.53         | +3.20      | Low variance across seeds (±0.58 pp) |
| Quantum Subset | 39        | 35.90      | 51.92         | +16.02     | Leading public targeted lift for a 7B model |
| Biology Subset | 19        | 36.84      | 52.63         | +15.79     | Strong cross-domain transfer |
| Physics Subset | 86        | 51.16      | 42.25         | -8.91      | Targeted regression; to be addressed in the next iteration |

Leading data efficiency and domain-specific gains among public 7B fine-tunes, achieved from only ~1,200 training pairs.
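
The Delta (pp) column is plain percentage-point arithmetic on the means above; a quick check using only the published numbers:

```python
# Reproduce the Delta (pp) column from the published baseline and Groove means.
baseline = {"Full Diamond": 33.33, "Quantum Subset": 35.90,
            "Biology Subset": 36.84, "Physics Subset": 51.16}
groove = {"Full Diamond": 36.53, "Quantum Subset": 51.92,
          "Biology Subset": 52.63, "Physics Subset": 42.25}

for subset, base_acc in baseline.items():
    delta_pp = groove[subset] - base_acc  # difference in percentage points
    print(f"{subset}: {delta_pp:+.2f} pp")
```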

## License
Other (non-exclusive commercial/research use; the dataset is for sale on OpenDataBay, while the model weights are public for testing and reproduction).

## Usage
```python
from transformers import pipeline

# device_map="auto" places the model on a GPU if one is available.
pipe = pipeline("text-generation", model="TrueRunAI/TrueRun-Groove-v2.1-7B",
                torch_dtype="auto", device_map="auto")
out = pipe("Explain quantum entanglement simply but without losing rigor:", max_new_tokens=256)
print(out[0]["generated_text"])
```
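
Since the base model is chat-tuned, a chat-formatted prompt may behave better. Recent versions of the `text-generation` pipeline accept a list of messages directly; a hedged sketch, as behavior varies by transformers version:

```python
# Chat-style call; the pipeline applies the model's chat template internally.
messages = [{"role": "user",
             "content": "Explain quantum entanglement simply but without losing rigor."}]
out = pipe(messages, max_new_tokens=256)
print(out[0]["generated_text"][-1]["content"])  # last message is the assistant reply
```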