squ11z1 committed
Commit 7f763e6 · verified · 1 Parent(s): 001ba67

Update README.md

Files changed (1):
  1. README.md +13 -13
README.md CHANGED
@@ -41,7 +41,7 @@ datasets:
  [![Python 3.8+](https://img.shields.io/badge/python-3.8+-blue.svg)](https://www.python.org/downloads/)
  [![Transformers](https://img.shields.io/badge/🤗%20Transformers-Compatible-blue)](https://github.com/huggingface/transformers)

- ## What Makes This Model Unique
+ ## 🌌 What Makes This Model Unique

  Chronos-1.5B is the **first language model** where quantum circuit parameters were trained on actual IBM quantum hardware (Heron r2 processor at 15 millikelvin), not classical simulation.

@@ -53,7 +53,7 @@ Chronos-1.5B is the **first language model** where quantum circuit parameters we

  This hybrid approach integrates VibeThinker-1.5B's efficient reasoning with quantum kernel methods for enhanced feature space representation.

- ## Quick Start
+ ## ⚡️ Quick Start

  **No quantum hardware required** - the model runs on standard GPUs/CPUs using pre-trained quantum parameters.
  ```python
@@ -72,7 +72,7 @@ print(tokenizer.decode(outputs[0], skip_special_tokens=True))

  **That's it!** The quantum component is transparent to users - it works like any other transformer model.

- ## Architecture
+ ## 🪐 Architecture

  ![chrn11](https://cdn-uploads.huggingface.co/production/uploads/67329d3f69fded92d56ab41a/s5m81n320NOFc2mSIWQWw.png)
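The Quick Start code itself falls between the hunks above, so only its opening fence and the closing `print(tokenizer.decode(...))` line are visible in this diff. As a rough sketch of the standard `transformers` loading pattern the README describes - the repository id below is an assumption, not taken from the diff:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumption: illustrative repo id; the diff does not show the real one.
model_id = "squ11z1/Chronos-1.5B"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Ordinary causal-LM inference; the trained quantum parameters are baked into
# the checkpoint, so no quantum backend is contacted at this point.
prompt = "Explain the surface code in one paragraph."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```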
 
@@ -107,9 +107,9 @@ print(tokenizer.decode(outputs[0], skip_special_tokens=True))

  **Important:** Quantum training is complete. Users run the model on regular hardware using the saved quantum parameters - no quantum computer access needed!

- ## Performance & Benchmarks
+ ## 🌊 Performance & Benchmarks

- ## AIME 2025 Benchmark Results
+ ## 🔗 AIME 2025 Benchmark Results

  | Model | Score |
  |-------|-------|
@@ -123,7 +123,7 @@ print(tokenizer.decode(outputs[0], skip_special_tokens=True))
  | Mistral Large 3 | 38.0% |
  | Llama 4 Maverick | 19.3% |

- ## AIME 2024 Benchmark Results
+ ## 🔗 AIME 2024 Benchmark Results

  | Model | Score |
  |-------|-------|
@@ -133,7 +133,7 @@ print(tokenizer.decode(outputs[0], skip_special_tokens=True))
  | Claude Opus 4 | 76.0% |
  | Magistral Medium | 73.6% |

- ## CritPt Benchmark Results
+ ## 🔗 CritPt Benchmark Results

  | Model | Score |
  |-----|-----|
@@ -196,15 +196,15 @@ Chronos-1.5B was specifically trained on problems requiring quantum mechanical u

  ## Use Cases

- ### ✅ Good For:
+ ### Good For:

- - #### Quantum Error Correction (QEC)
+ - **Quantum Error Correction (QEC)**

- - #### Quantum Circuit Optimization
+ - **Quantum Circuit Optimization**

- - #### Molecular Simulation & Quantum Chemistry
+ - **Molecular Simulation & Quantum Chemistry**

- - #### Quantum Information Theory
+ - **Quantum Information Theory**

  ![lll](https://cdn-uploads.huggingface.co/production/uploads/67329d3f69fded92d56ab41a/uvYkP1r66AoFeq-GClx7o.png)

@@ -248,7 +248,7 @@ with open("quantum_kernel.pkl", "rb") as f:
  print(f"Quantum parameters: {quantum_params}")
  ```

- ## The Hypnos Family
+ ## 🧬 The Hypnos Family

  Chronos-1.5B is part of a series exploring quantum-enhanced AI:
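The hunk above shows only the first and last lines of the README's snippet for inspecting the saved quantum parameters; the middle lies outside the diff. A minimal sketch of what such an inspection could look like, assuming `quantum_kernel.pkl` holds either a parameter array or a dict containing one (both assumptions):

```python
import pickle

# Assumption: quantum_kernel.pkl stores the trained quantum circuit parameters,
# either directly or inside a dict under a "params"-style key.
with open("quantum_kernel.pkl", "rb") as f:
    quantum_kernel = pickle.load(f)

# Pull the parameter array out of a dict if needed, otherwise use the object itself.
if isinstance(quantum_kernel, dict):
    quantum_params = quantum_kernel.get("params", quantum_kernel)
else:
    quantum_params = quantum_kernel

print(f"Quantum parameters: {quantum_params}")
```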
 
 