---
tags:
- quantum-ml
- hybrid-quantum-classical
- ibm-quantum
- heron-r2
- ibm_fez
- quantum-kernel
- merged-lora
license: mit
language:
- en
base_model:
- WeiboAI/VibeThinker-1.5B
---

# Chronos 1.5B: Quantum-Classical Model
A hybrid quantum-classical model combining VibeThinker-1.5B with quantum kernel methods.

## Overview
Chronos 1.5B is an experimental quantum-enhanced language model that combines:
- VibeThinker-1.5B as the base transformer model for embedding extraction
- Quantum Kernel Methods for similarity computation
- 125-qubit quantum circuits for enhanced feature space representation
This model is a proof of concept for hybrid quantum-classical machine learning applied to sentiment analysis.
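The quantum-kernel idea can be illustrated classically: encode each feature as a rotation angle on one qubit and use the squared state overlap as a similarity measure. Below is a minimal NumPy sketch for a product-state (unentangled) angle-encoding feature map; the encoding is illustrative and is not the circuit shipped in this repository:

```python
import numpy as np

def product_state(angles):
    """Tensor product of single-qubit states R_y(t)|0> = [cos(t/2), sin(t/2)]."""
    state = np.array([1.0])
    for t in angles:
        qubit = np.array([np.cos(t / 2), np.sin(t / 2)])
        state = np.kron(state, qubit)
    return state

def fidelity_kernel(x, y):
    """Quantum kernel value |<phi(x)|phi(y)>|^2 for angle-encoded inputs."""
    return float(np.dot(product_state(x), product_state(y)) ** 2)

x = np.array([0.1, 1.2, 0.7])
y = np.array([0.3, 1.0, 0.5])
print(fidelity_kernel(x, x))  # identical inputs give similarity 1.0
print(fidelity_kernel(x, y))
```

For product states the overlap factorizes into a product of `cos((x_i - y_i) / 2)` terms, which is why even a 125-qubit kernel of this unentangled form remains classically tractable; entangling gates are what make a quantum kernel hard to simulate.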
## Quantum Component & Execution Modes
Chronos 1.5B supports multiple quantum kernel execution modes:
| Mode | Description | Availability |
|---|---|---|
| Classical simulation | Fully classical implementation of the quantum kernel (default in `inference.py`) | Works out of the box |
| Local quantum circuit | Real 125-qubit parametric quantum circuit stored in the repository (`quantum_kernel_circuit.json` + trained gate angles); can be executed via Qiskit Runtime on local backends or simulators | Requires manual activation |
| Cloud execution on IBM Quantum | Quantum kernel was compiled and executed on the Heron r2 processor (backend: `ibm_fez`) in 2025 using the Qiskit Runtime Sampler (`resilience_level=1`, `optimization_level=3`) | Available with an IBM Quantum account |
Key technical details:
- The main 1.5B-parameter model is a merged version of VibeThinker-1.5B with a LoRA adapter that contains trained quantum parameters (rotation angles of the quantum feature map).
- These quantum angles were obtained from real executions on the Heron r2 processor (`ibm_fez`).
- Loading the model with the standard `AutoModel.from_pretrained()` gives you the already-merged weights: the quantum-trained parameters are baked in and work in pure classical mode, with no quantum hardware required.
- Optionally, users can load the separate quantum circuit from the repository and run the kernel on real IBM Quantum hardware or simulators.
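The "merged LoRA" step above amounts to folding the low-rank update back into the base weight matrix, so downstream users never handle the adapter as a separate object. A hedged NumPy sketch (the rank and scaling factor below are illustrative, not the repository's actual adapter configuration):

```python
import numpy as np

rng = np.random.default_rng(0)
d, r = 1536, 8                          # d matches the 1536-D hidden size; rank 8 is illustrative
W = rng.standard_normal((d, d))         # frozen base weight
A = rng.standard_normal((r, d))         # LoRA down-projection
B = rng.standard_normal((d, r)) * 0.01  # LoRA up-projection
alpha = 16.0                            # illustrative LoRA scaling

# Merging bakes the low-rank update into the base weights; inference then
# needs only W_merged, which is why a plain from_pretrained() call suffices.
W_merged = W + (alpha / r) * (B @ A)
print(W_merged.shape)
```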
## Architecture

### Model Details
- Base Model: WeiboAI/VibeThinker-1.5B
- Architecture: Qwen2ForCausalLM
- Parameters: ~1.5B
- Context Length: 131,072 tokens
- Embedding Dimension: 1536
- Quantum Component: 125-qubit kernel
- Training Data: 8 sentiment examples (demonstration)
## Performance

### Benchmark Results
| Model | Accuracy | Type |
|---|---|---|
| Classical (Linear SVM) | 100% | Baseline |
| Quantum Hybrid | 75% | Experimental |
Note: Performance varies with dataset size and quantum simulation parameters. This is a proof-of-concept demonstrating quantum-classical integration.
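The classical baseline in the table is a linear SVM over the same embeddings. A minimal sketch with synthetic 2-D vectors standing in for the 1536-D embeddings (all data below is toy, not the card's actual training set):

```python
import numpy as np
from sklearn.svm import LinearSVC

# Toy stand-ins for normalized sentence embeddings: 4 positive, 4 negative,
# mirroring the 8-example setup described in this card.
X = np.array([[0.9, 0.1], [0.8, 0.2], [0.7, 0.3], [0.85, 0.15],
              [0.1, 0.9], [0.2, 0.8], [0.3, 0.7], [0.15, 0.85]])
y = np.array([1, 1, 1, 1, 0, 0, 0, 0])

clf = LinearSVC().fit(X, y)
print(clf.score(X, y))  # linearly separable toy set
```

With so few, cleanly separated examples, a linear baseline reaches 100% training accuracy easily, which is why the table's comparison says little about real-world performance.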
## Installation

### Requirements

```shell
pip install torch transformers numpy scikit-learn
```
## Usage

### Python Inference

```python
from transformers import AutoModel, AutoTokenizer
import torch
import numpy as np
from sklearn.preprocessing import normalize
from sklearn.metrics.pairwise import cosine_similarity

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

tokenizer = AutoTokenizer.from_pretrained("squ11z1/chronos-1.5B")
model = AutoModel.from_pretrained(
    "squ11z1/chronos-1.5B",
    torch_dtype=torch.float16
).to(device).eval()

def predict_sentiment(text):
    inputs = tokenizer(text, return_tensors="pt",
                       padding=True, truncation=True,
                       max_length=128).to(device)
    with torch.no_grad():
        outputs = model(**inputs)
    # Mean-pool token states into a single 1536-D sentence embedding.
    embedding = outputs.last_hidden_state.mean(dim=1).cpu().numpy()[0]
    embedding = normalize([embedding])[0]
    # Your quantum kernel logic here: score `embedding` against stored
    # training embeddings (e.g. with cosine_similarity) and pick a label.
    sentiment = "POSITIVE"  # placeholder result
    return sentiment
```
### Quick Start Script

```shell
python inference.py
```

This starts an interactive session where you can enter text for sentiment analysis.
### Example Output

```text
Input text: 'Random text!'
[1/3] VibeThinker embedding: 1536D (normalized)
[2/3] Quantum similarity computed
[3/3] Classification: POSITIVE
Confidence: 87.3%
Positive avg: 0.756, Negative avg: 0.128
Time: 0.42s
```
## Quantum Kernel Details

The quantum component uses a simplified kernel approach:

1. Extract 1536-D embeddings from VibeThinker
2. Normalize using L2 normalization
3. Compute cosine similarity against training examples
4. Apply quantum-inspired weighted voting
5. Return sentiment with confidence score
Note: This implementation uses classical simulation. For true quantum execution, integration with IBM Quantum or similar platforms is required.
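Steps 2 through 5 above can be sketched end to end with toy embeddings in place of the 1536-D VibeThinker vectors (all values are illustrative; the confidence formula here is one simple choice, not necessarily the repository's):

```python
import numpy as np
from sklearn.preprocessing import normalize
from sklearn.metrics.pairwise import cosine_similarity

# Toy stand-ins for the 8 stored training embeddings (4 positive, 4 negative),
# using 16 dimensions instead of 1536 for readability.
rng = np.random.default_rng(1)
bias = np.array([3.0] + [0.0] * 15)
pos = normalize(rng.standard_normal((4, 16)) + bias)   # positive examples
neg = normalize(rng.standard_normal((4, 16)) - bias)   # negative examples

def classify(embedding):
    e = normalize([embedding])                          # step 2: L2-normalize
    pos_avg = cosine_similarity(e, pos).mean()          # step 3: similarity to examples
    neg_avg = cosine_similarity(e, neg).mean()
    label = "POSITIVE" if pos_avg > neg_avg else "NEGATIVE"   # step 4: weighted vote
    conf = np.exp(pos_avg) / (np.exp(pos_avg) + np.exp(neg_avg))
    conf = conf if label == "POSITIVE" else 1.0 - conf
    return label, float(conf)                           # step 5: label + confidence

query = rng.standard_normal(16) + bias                  # a "positive-leaning" input
label, conf = classify(query)
print(label, round(conf, 3))
```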
## Training Data
The model uses 8 hand-crafted examples for demonstration:
- 4 positive sentiment examples
- 4 negative sentiment examples
For production use, retrain with larger datasets.
## Limitations

- Small training set (8 examples)
- The quantum kernel runs in classical simulation at inference time; only the trained gate angles came from real hardware runs
- Performance may vary significantly with different inputs
- Designed for English-language sentiment analysis only
## Future Improvements
- Expand training dataset to 100+ examples
- Implement true quantum kernel execution on IBM Quantum
- Increase quantum circuit complexity
- Add error mitigation for quantum noise
- Support multi-language sentiment analysis
- Fine-tune on domain-specific sentiment data
## Citation
If you use this model in your research, please cite:
```bibtex
@misc{chronos-1.5b,
  title={Chronos 1.5B: Quantum-Enhanced Sentiment Analysis},
  author={squ11z1},
  year={2025},
  publisher={Hugging Face},
  howpublished={\url{https://huggingface.co/squ11z1/chronos-1.5b}}
}
```
## Acknowledgments
- Base model: VibeThinker-1.5B by WeiboAI
- Quantum computing framework: Qiskit
- Inspired by quantum machine learning research
## License
MIT License - See LICENSE file for details
Disclaimer: This is an experimental proof-of-concept model. Performance and accuracy are not guaranteed for production use cases. The quantum component currently does not provide quantum advantage over classical methods.



