---
base_model:
- Qwen/Qwen3-4B-Instruct-2507
datasets:
- Benyucong/graph-data-quantum-rl
language:
- en
library_name: transformers
license: apache-2.0
metrics:
- code_eval
pipeline_tag: text-generation
tags:
- agent
- code
- QASM
- quantum
---
# QUASAR: Quantum Assembly Code Generation with Tool-Augmented RL
[![Paper](https://img.shields.io/badge/Paper-2510.00967-B31B1B?logo=arxiv)](https://huggingface.co/papers/2510.00967) [![Code](https://img.shields.io/badge/GitHub-Code-181717?logo=github)](https://github.com/benyucong/QUASAR) [![Dataset](https://img.shields.io/badge/Dataset-Benyucong%2Fgraph--data--quantum--rl-orange)](https://huggingface.co/datasets/Benyucong/graph-data-quantum-rl)
## Model Summary
**QUASAR** is a 4B-parameter model fine-tuned from **Qwen3-4B-Instruct-2507** using a two-stage process: supervised fine-tuning (SFT) followed by agentic reinforcement learning (RL) with tool-augmented feedback.
The model is designed to **generate OpenQASM 3.0 quantum circuits** for quantum optimization algorithms such as **QAOA** and **VQE**, achieving **high syntactic validity and semantic fidelity**.
- **Framework:** Agentic RL with external quantum simulator verification
- **Reward:** Hierarchical 4-level reward (syntax, distribution alignment, expectation value, optimization progress)
- **Primary Domain:** Quantum circuit generation and quantum optimization algorithm design
---
## Model Details
- **Model type:** LLM fine-tuned with reinforcement learning
- **Languages:** English
- **License:** Apache-2.0
- **Base model:** Qwen/Qwen3-4B-Instruct-2507
---
## Uses
### Direct Use
- Generate OpenQASM 3.0 code from natural language descriptions
- Design ansatz circuits for quantum optimization tasks (QAOA, VQE)
### Downstream Use
- Integration into quantum compilers
- Research on LLM-guided quantum algorithm design
---
## Bias, Risks, and Limitations
- May produce syntactically valid QASM that is semantically weak when prompts are ambiguous
- Tailored primarily to **graph-based quantum optimization problems**
- Evaluated mainly in simulation; hardware generalization remains untested
**Recommendation:** Always verify generated circuits with independent quantum simulators or compilers before deployment.
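For example, a generated program can be round-tripped through Qiskit's OpenQASM 3 importer and run on a local simulator before any downstream use. A minimal sketch (requires `qiskit`, `qiskit-qasm3-import`, and `qiskit-aer`; the placeholder `qasm_str` stands in for the model's output):

```python
from qiskit import qasm3, transpile
from qiskit_aer import AerSimulator

# Placeholder circuit; replace with the QASM 3.0 program produced by the model.
qasm_str = """
OPENQASM 3.0;
include "stdgates.inc";
qubit[3] q;
h q[0];
cx q[0], q[1];
cx q[1], q[2];
"""

circuit = qasm3.loads(qasm_str)  # raises an importer error on invalid syntax
circuit.measure_all()            # add measurements if the program has none

sim = AerSimulator()
counts = sim.run(transpile(circuit, sim), shots=1024).result().get_counts()
print(counts)  # inspect the output distribution before trusting the circuit
```

A parse failure at `qasm3.loads` corresponds to a syntactically invalid circuit.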
---
## How to Get Started
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the fine-tuned model; "auto" settings pick a suitable dtype and device.
model = AutoModelForCausalLM.from_pretrained(
    "Benyucong/rl_quantum_4b", torch_dtype="auto", device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("Benyucong/rl_quantum_4b")

prompt = """Design a QASM 3.0 quantum circuit with 3 qubits and 3 layers to solve the vertex_cover \
given the graph: {"directed": false, "multigraph": false, "graph": {}, "nodes": [{"id": 0}, {"id": 1}, {"id": 2}], \
"edges": [{"source": 0, "target": 1}, {"source": 0, "target": 2}, {"source": 1, "target": 2}]}. \
Provide valid QASM 3.0 code with optimal parameters."""

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=1024)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
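The decoded output typically mixes explanation with the circuit itself. A small helper such as the one below (assuming the model wraps the circuit in a Markdown code fence, which is not guaranteed) can extract the QASM for the verification step recommended above:

```python
import re

def extract_qasm(text: str) -> str | None:
    # Prefer a ```qasm fence; fall back to any fenced block.
    # The fence format is an assumption about the model's output style.
    match = re.search(r"```(?:qasm)?\s*\n(.*?)```", text, re.DOTALL)
    return match.group(1).strip() if match else None
```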
---
## Training Details
### Training Data
- Dataset: [Benyucong/graph-data-quantum-rl](https://huggingface.co/datasets/Benyucong/graph-data-quantum-rl)
- Contains QASM 3.0 circuits, Hamiltonians, eigenvalues, and parameterized circuits for **12 quantum optimization problems**
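A quick way to inspect the corpus without assuming its schema:

```python
from datasets import load_dataset

# Pull the RL training corpus from the Hugging Face Hub.
ds = load_dataset("Benyucong/graph-data-quantum-rl")
print(ds)  # shows the available splits and columns

first_split = next(iter(ds))
print(ds[first_split][0])  # peek at one record
```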
### Training Setup
- **Stage 1:** Supervised fine-tuning (SFT); the SFT checkpoint is available [here](https://huggingface.co/Benyucong/sft_quantum_circuit_gen_4B).
- **Stage 2:** Reinforcement learning with GRPO and a hierarchical reward (a group-advantage sketch follows)
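GRPO scores a group of rollouts per prompt and standardizes each reward within the group. A minimal sketch of that group-relative advantage, omitting the clipping and KL terms of the full objective:

```python
import torch

def grpo_advantages(rewards: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    """Group-relative advantages for one prompt's rollouts.

    rewards: shape (num_rollouts,), e.g. 16 hierarchical rewards per prompt.
    Each rollout's advantage is its reward standardized against its group.
    """
    return (rewards - rewards.mean()) / (rewards.std() + eps)

# Example: 16 rollouts for one prompt, matching the setup below.
advantages = grpo_advantages(torch.rand(16))
print(advantages)
```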
### Hyperparameters
- **Batch size:** 128
- **Rollouts:** 16 per prompt (temperature = 0.7, top-p = 0.8)
- **Precision:** bf16 mixed precision
- **GPUs:** 16 × H100-64GB (FSDP enabled)
- **Training time:** ~48 hours
---
## Evaluation
### Metrics (see the paper for definitions)
- **SCR:** Syntactic Correctness Ratio
- **SREV:** Successful Rate of Expectation Value
- **RE:** Relative Entropy (distributional alignment; sketched below)
- **HQCR:** High-Quality Circuit Ratio
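As an illustration, RE compares the measured output distribution of a generated circuit against a target distribution. A minimal KL-divergence sketch (the paper's exact estimator and smoothing may differ):

```python
import numpy as np
from scipy.stats import entropy

def relative_entropy(p_counts: dict, q_counts: dict, eps: float = 1e-12) -> float:
    """KL divergence D(P || Q) between two measurement-count histograms.

    p_counts / q_counts map bitstrings to counts (e.g. Qiskit's get_counts()).
    eps smooths empty bins so the divergence stays finite.
    """
    keys = sorted(set(p_counts) | set(q_counts))
    p = np.array([p_counts.get(k, 0) for k in keys], dtype=float) + eps
    q = np.array([q_counts.get(k, 0) for k in keys], dtype=float) + eps
    return float(entropy(p / p.sum(), q / q.sum()))

print(relative_entropy({"00": 480, "11": 520}, {"00": 500, "11": 500}))
```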
### Results (QUASAR vs Baselines)
| Method | Pass@1 SCR ↑ | Pass@1 SREV ↑ | Pass@1 RE ↓ | Pass@1 HQCR ↑ | Pass@10 SCR ↑ | Pass@10 SREV ↑ | Pass@10 RE ↓ | Pass@10 HQCR ↑ |
|---------------------|--------------|---------------|-------------|---------------|---------------|----------------|--------------|----------------|
| DeepSeek-V3 | 94.83% | 12.24% | 19.20 | 10.00% | 98.97% | 26.38% | 16.39 | 16.38% |
| GPT-5 | 87.07% | 10.00% | 19.94 | 6.90% | 90.52% | 27.07% | 11.57 | 16.55% |
| GPT-4o | 87.93% | 9.83% | 19.42 | 6.38% | 88.79% | 18.62% | 14.08 | 12.07% |
| **Qwen3-4B SFT** | 97.41% | 18.97% | 12.74 | 15.17% | 99.65% | 31.55% | 10.81 | 23.62% |
| Cold Start GRPO | 84.48% | 19.84% | 14.32 | 12.41% | 95.17% | 27.59% | 11.38 | 18.96% |
| **QUASAR (ours)** | **99.31%** | **22.41%** | **11.61** | **17.24%** | **100%** | **33.10%** | **8.48** | **27.24%** |
---
## Environmental Impact
- **Hardware Type:** NVIDIA H100 (16×, 64GB)
- **Training Hours:** ~48
---
## Technical Specifications
- **Architecture:** Qwen3-4B-Instruct-2507
- **Fine-tuning:** SFT + RL (GRPO)
- **Reward Design:** Syntax validity, distributional alignment (JS distance; sketched below), expectation-value matching, optimization-progress efficiency
- **Frameworks:** PyTorch, vLLM, Qiskit, OpenQASM
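A minimal sketch of how the distribution-alignment component could be computed from the JS distance; the `distribution_alignment_reward` mapping below is an assumption, and the actual reward shaping and level gating follow the paper:

```python
import numpy as np
from scipy.spatial.distance import jensenshannon

def distribution_alignment_reward(p: np.ndarray, q: np.ndarray) -> float:
    """Map JS distance (0 = identical, 1 = disjoint, base 2) to a [0, 1] reward.

    p, q: normalized probability vectors over the same bitstring ordering.
    The 1 - distance mapping is illustrative, not the paper's exact shaping.
    """
    return 1.0 - float(jensenshannon(p, q, base=2))

print(distribution_alignment_reward(np.array([0.5, 0.5]), np.array([0.6, 0.4])))
```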
---
## Citation
```bibtex
@misc{yu2025quasarquantumassemblycode,
  title={QUASAR: Quantum Assembly Code Generation Using Tool-Augmented LLMs via Agentic RL},
  author={Cong Yu and Valter Uotila and Shilong Deng and Qingyuan Wu and Tuo Shi and Songlin Jiang and Lei You and Bo Zhao},
  year={2025},
  eprint={2510.00967},
  archivePrefix={arXiv},
  primaryClass={cs.AI},
  url={https://arxiv.org/abs/2510.00967},
}
```