---
library_name: transformers
license: apache-2.0
datasets:
- datumo/CAC-CoT
language:
- en
base_model:
- Qwen/Qwen2.5-7B-Instruct
---

# Model Card for CAC-CoT

## Model Details

### Model Description

CAC-CoT (Connector-Aware Compact Chain-of-Thought) is a decoder-only causal language model fine-tuned from Qwen/Qwen2.5-7B-Instruct on the datumo/CAC-CoT dataset. The training data consists of compact reasoning traces structured around a fixed set of connector phrases (e.g., “Because of this,” “Then,”) that guide logical flow across both fast, intuitive (System-1) and slow, deliberate (System-2) reasoning tasks.

- **Developed by:** Sunguk Choi, Yonghoon Kwon, Heondeuk Lee
- **Shared by:** SelectStar/Datumo
- **Model type:** Decoder-only language model (Causal LM)
- **Language(s) (NLP):** English
- **License:** Apache License 2.0
- **Finetuned from model:** Qwen/Qwen2.5-7B-Instruct

### Model Sources

- **Repository:** https://github.com/selectstar-ai/CAC-CoT
- **Paper:** https://arxiv.org/abs/2508.18743

## Uses

### Direct Use

- Solving reasoning problems requiring chain-of-thought (CoT).
- Educational tutoring, math/logic assistants, explainable QA.
- Applications requiring interpretable reasoning with low latency.

### Downstream Use

- Fine-tuning for specific reasoning benchmarks such as GSM8K, StrategyQA, or S1-Bench; a fine-tuning sketch follows this list.
- Integration into larger RAG or tutoring systems.
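
As a starting point for further fine-tuning, the sketch below uses TRL's `SFTTrainer`. This is a minimal sketch, not the authors' training recipe: the hyperparameters are placeholders, the split name and dataset field layout are assumed to be ones `SFTTrainer` can consume directly, and the exact `trl` API varies by version.

```python
# Minimal further-fine-tuning sketch with TRL's SFTTrainer.
# ASSUMPTIONS: hyperparameters are placeholders, the "train" split exists,
# and the dataset is in a format SFTTrainer can consume directly;
# this is not the authors' recipe.
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

train_ds = load_dataset("datumo/CAC-CoT", split="train")

trainer = SFTTrainer(
    model="datumo/CAC-CoT",  # start from the released checkpoint
    train_dataset=train_ds,
    args=SFTConfig(
        output_dir="cac-cot-sft",
        per_device_train_batch_size=2,
        num_train_epochs=1,
    ),
)
trainer.train()
```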

### Out-of-Scope Use

- Non-English tasks.
- Open-ended creative generation (e.g., fiction, poetry).

## Usage

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the released checkpoint and tokenizer from the Hub
model = AutoModelForCausalLM.from_pretrained("datumo/CAC-CoT")
tokenizer = AutoTokenizer.from_pretrained("datumo/CAC-CoT")

prompt = "Problem: If you have 3 apples and get 2 more, how many do you have?"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
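
Since the base model is an instruction-tuned Qwen2.5 checkpoint, the tokenizer should also ship a chat template. Continuing from the snippet above, here is a minimal sketch assuming the fine-tune preserved that template (worth verifying against the repository):

```python
# Chat-template generation sketch; ASSUMES the tokenizer kept
# Qwen2.5-Instruct's chat template after fine-tuning.
messages = [
    {"role": "user", "content": "Problem: If you have 3 apples and get 2 more, how many do you have?"},
]
input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,  # end the prompt with the assistant marker
    return_tensors="pt",
)
outputs = model.generate(input_ids, max_new_tokens=100)
# Decode only the newly generated tokens, not the prompt
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```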

## Citation

**BibTeX:**

```
@misc{choi2025caccotconnectorawarecompactchainofthought,
      title={CAC-CoT: Connector-Aware Compact Chain-of-Thought for Efficient Reasoning Data Synthesis Across Dual-System Cognitive Tasks},
      author={Sunguk Choi and Yonghoon Kwon and Heondeuk Lee},
      year={2025},
      eprint={2508.18743},
      archivePrefix={arXiv},
      primaryClass={cs.AI},
      url={https://arxiv.org/abs/2508.18743},
}
```

## More Information

- **System-1:** fast, intuitive reasoning.
- **System-2:** slow, deliberate, logical reasoning.
- **Connector phrase:** a fixed phrase that guides logical flow within a reasoning trace (e.g., “Because of this,” “Then,”).
- **ART:** Average Reasoning Trace length.
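
For illustration, a compact connector-aware trace for the apple problem in the Usage section might read (a constructed example, not taken from the dataset): “We start with 3 apples. Then, 2 more arrive. Because of this, the total is 3 + 2 = 5.”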

## Model Card Authors

Sunguk Choi, Yonghoon Kwon, Heondeuk Lee