Text Generation
Transformers
Safetensors
GGUF
English
qwen2
quantum-ml
hybrid-quantum-classical
quantum-kernel
research
quantum-computing
nisq
qiskit
quantum-circuits
vibe-thinker
physics-inspired-ml
quantum-enhanced
hybrid-ai
1.5b
small-model
efficient-ai
reasoning
chemistry
physics
text-generation-inference
conversational
Update README.md

README.md (CHANGED)
@@ -41,7 +41,7 @@ datasets:

[Python](https://www.python.org/downloads/)
[Transformers](https://github.com/huggingface/transformers)

## What Makes This Model Unique

Chronos-1.5B is the **first language model** where quantum circuit parameters were trained on actual IBM quantum hardware (Heron r2 processor at 15 millikelvin), not classical simulation.
@@ -53,7 +53,7 @@ Chronos-1.5B is the **first language model** where quantum circuit parameters we

This hybrid approach integrates VibeThinker-1.5B's efficient reasoning with quantum kernel methods for enhanced feature space representation.
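The quantum-kernel idea behind this hybrid can be sketched classically: each input is encoded into a small parameterized quantum state, and the squared overlap between two encoded states serves as a kernel entry. The single-qubit feature map below is an illustrative stand-in, not the model's actual circuit; `theta` plays the role of a trained circuit parameter.

```python
import math

def feature_map(x: float, theta: float) -> list[complex]:
    """Illustrative single-qubit feature map: RY(theta * x) applied to |0>.
    Returns the statevector [amp(|0>), amp(|1>)]."""
    a = theta * x
    return [complex(math.cos(a / 2)), complex(math.sin(a / 2))]

def fidelity_kernel(x: float, y: float, theta: float = 0.7) -> float:
    """Quantum kernel entry k(x, y) = |<phi(x)|phi(y)>|^2."""
    phi_x, phi_y = feature_map(x, theta), feature_map(y, theta)
    overlap = sum(a.conjugate() * b for a, b in zip(phi_x, phi_y))
    return abs(overlap) ** 2

print(fidelity_kernel(0.5, 0.5))  # 1.0: identical inputs have unit fidelity
print(fidelity_kernel(0.5, 2.0))  # strictly between 0 and 1
```

In a real quantum-kernel method the resulting Gram matrix feeds a classical kernel machine; here the circuit is small enough to evaluate exactly on a CPU, which is why no quantum hardware is needed at inference time.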
## Quick Start

**No quantum hardware required** - the model runs on standard GPUs/CPUs using pre-trained quantum parameters.

```python
...
```
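The Quick Start snippet itself is truncated in this excerpt; only its final line, `print(tokenizer.decode(outputs[0], skip_special_tokens=True))`, survives in the hunk header below. A minimal sketch of a standard `transformers` generation call follows; the repo id and the prompt are placeholders, not the card's exact code.

```python
def generate(model_id: str, prompt: str, max_new_tokens: int = 256) -> str:
    """Sketch of a plain transformers text-generation call."""
    # Import inside the function so the sketch can be read and sanity-checked
    # without transformers installed; a real script would import at the top.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto")
    inputs = tokenizer(prompt, return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=max_new_tokens)
    return tokenizer.decode(outputs[0], skip_special_tokens=True)

# Usage (downloads weights; "<org>/Chronos-1.5B" is a placeholder repo id):
#   print(generate("<org>/Chronos-1.5B", "State the no-cloning theorem."))
```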
@@ -72,7 +72,7 @@ print(tokenizer.decode(outputs[0], skip_special_tokens=True))

**That's it!** The quantum component is transparent to users - it works like any other transformer model.

## Architecture
@@ -107,9 +107,9 @@ print(tokenizer.decode(outputs[0], skip_special_tokens=True))

**Important:** Quantum training is complete. Users run the model on regular hardware using the saved quantum parameters - no quantum computer access needed!

## Performance & Benchmarks

### AIME 2025 Benchmark Results

| Model | Score |
|-------|-------|
@@ -123,7 +123,7 @@ print(tokenizer.decode(outputs[0], skip_special_tokens=True))

| Mistral Large 3 | 38.0% |
| Llama 4 Maverick | 19.3% |

### AIME 2024 Benchmark Results

| Model | Score |
|-------|-------|
@@ -133,7 +133,7 @@ print(tokenizer.decode(outputs[0], skip_special_tokens=True))

| Claude Opus 4 | 76.0% |
| Magistral Medium | 73.6% |

### CritPt Benchmark Results

| Model | Score |
|-------|-------|
@@ -196,15 +196,15 @@ Chronos-1.5B was specifically trained on problems requiring quantum mechanical u

## Use Cases

### Good For:

- **Quantum Error Correction (QEC)**
- **Quantum Circuit Optimization**
- **Molecular Simulation & Quantum Chemistry**
- **Quantum Information Theory**
@@ -248,7 +248,7 @@ with open("quantum_kernel.pkl", "rb") as f:

```python
print(f"Quantum parameters: {quantum_params}")
```
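Only the tail of the parameter-inspection example survives in this excerpt; a self-contained sketch of the same idea follows. The `quantum_kernel.pkl` filename comes from the hunk header above, while the helper name and the demo payload layout are assumptions.

```python
import pickle

def load_quantum_params(path: str = "quantum_kernel.pkl"):
    """Load saved quantum-kernel parameters from a pickle file."""
    with open(path, "rb") as f:
        return pickle.load(f)

# Round-trip demo with a stand-in payload (the real file's layout is unknown):
demo = {"theta": [0.12, 1.57, 0.88]}
with open("demo_quantum_kernel.pkl", "wb") as f:
    pickle.dump(demo, f)

quantum_params = load_quantum_params("demo_quantum_kernel.pkl")
print(f"Quantum parameters: {quantum_params}")
```

Because the trained parameters are just a few floats serialized to disk, loading them is ordinary Python; this is why inference never touches quantum hardware.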
## The Hypnos Family

Chronos-1.5B is part of a series exploring quantum-enhanced AI: