---
license: mit
datasets:
- sxiong/SWAP_v2
language:
- en
base_model:
- meta-llama/Meta-Llama-3-8B-Instruct
---
# **Model Card for SWAP_LLM_v2**
**SWAP_LLM_v2** is a suite of supervised fine-tuned models for **multi-step reasoning** with large language models (LLMs).
The framework comprises two primary components: a **generator** and a **discriminator**.
## **Model Details**
### **Generator**
* **Base Model:** `meta-llama/Meta-Llama-3-8B-Instruct`
* **LoRA Configuration:**
  * `lora_alpha`: 32
  * `r`: 16
  * `target_modules`: `["up_proj", "down_proj", "gate_proj", "q_proj", "k_proj", "v_proj", "o_proj"]`
  * `bias`: `"none"`
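
For reference, the settings above correspond to a PEFT `LoraConfig` along the lines of the following sketch. The `task_type` value is an assumption (it is not stated in this card); causal-LM fine-tuning is the standard setting for this base model.

```python
from peft import LoraConfig

# Sketch of the generator's LoRA configuration as listed above.
# task_type is an assumption, not taken from this card.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=[
        "up_proj", "down_proj", "gate_proj",
        "q_proj", "k_proj", "v_proj", "o_proj",
    ],
    bias="none",
    task_type="CAUSAL_LM",
)
```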
For additional information and implementation details, please refer to the [SWAP GitHub repository](https://github.com/xiongsiheng/SWAP).
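
To try the generator, a minimal loading sketch with `transformers` and `peft` might look as follows. The adapter repo id `sxiong/SWAP_LLM_v2` is assumed from this card's name and may differ from the published id, and the plain-text prompt is purely illustrative (for an instruction-tuned Llama-3 model you would typically apply the chat template).

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "meta-llama/Meta-Llama-3-8B-Instruct"
adapter_id = "sxiong/SWAP_LLM_v2"  # assumed repo id; adjust if it differs

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype="auto", device_map="auto"
)
# Attach the fine-tuned LoRA adapter on top of the frozen base weights.
model = PeftModel.from_pretrained(model, adapter_id)

prompt = "Solve step by step: if 3x + 2 = 11, what is x?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```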
## **Citation**
```bibtex
@inproceedings{xiong-etal-2025-deliberate,
    title = "Deliberate Reasoning in Language Models as Structure-Aware Planning with an Accurate World Model",
    author = "Xiong, Siheng and Payani, Ali and Yang, Yuan and Fekri, Faramarz",
    editor = "Che, Wanxiang and Nabende, Joyce and Shutova, Ekaterina and Pilehvar, Mohammad Taher",
    booktitle = "Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
    month = jul,
    year = "2025",
    address = "Vienna, Austria",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2025.acl-long.1540/",
    doi = "10.18653/v1/2025.acl-long.1540",
    pages = "31900--31931",
    isbn = "979-8-89176-251-0"
}
```