Conflict-Aware Fusion: Mitigating Logic Inertia in Large Language Models via Structured Cognitive Priors
Paper • arXiv:2512.06393 • Published
This model is a specialized version of Qwen3-8B trained to mitigate Logic Inertia: the tendency of language models to persist in deductive reasoning even when premises are contradictory or invalid.
It implements the Fusion-Conflict framework, which enforces an explicit structural separation between premise verification and deductive execution.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model_id = "qbao775/Fusion-Conflict-8B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

# Example with contradictory premises
prompt = """Facts:
1. Sensor A reports high temperature.
2. Satellite imagery shows no fire.
3. High temperature implies fire.
Question: Is there a fire?
Answer:"""

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
The model is trained with a four-stage optimization pipeline; see the paper for details of each stage.
In high-stakes scenarios such as disaster management, models often receive information from diverse and potentially conflicting sources (e.g., IoT sensors, citizen reports, satellite data). Fusion-Conflict-8B acts as a "logic circuit breaker": instead of forcing a deduction from inconsistent premises, it first verifies the premises and flags the conflict.
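The "circuit breaker" idea can be sketched as a two-stage wrapper in which premise verification gates deductive execution. This is an illustrative sketch, not the paper's implementation: the `CONFLICT`/`CONSISTENT` markers, the prompt wording, and the `toy_generate` stand-in model are all assumptions made for the example.

```python
# Hedged sketch of a "logic circuit breaker": verify premises first,
# and only run the deductive step if no conflict is detected.
# `generate` is a stand-in for any model call (e.g. a wrapper around
# model.generate); the CONFLICT marker is a hypothetical convention.

def verify_premises(facts, generate):
    """Stage 1: ask the model whether the premises are mutually consistent."""
    prompt = (
        "Facts:\n"
        + "\n".join(f"{i + 1}. {f}" for i, f in enumerate(facts))
        + "\nAre these facts mutually consistent? Answer CONSISTENT or CONFLICT."
    )
    return generate(prompt)

def answer_with_breaker(facts, question, generate):
    """Stage 2 runs only if stage 1 found no conflict; otherwise abstain."""
    if "CONFLICT" in verify_premises(facts, generate).upper():
        return "Cannot answer: the premises conflict."
    prompt = (
        "Facts:\n"
        + "\n".join(f"{i + 1}. {f}" for i, f in enumerate(facts))
        + f"\nQuestion: {question}\nAnswer:"
    )
    return generate(prompt)

# Toy stand-in model for illustration only: it "detects" a conflict when
# one fact reports "no fire" while another asserts something implies fire.
def toy_generate(prompt):
    if "mutually consistent" in prompt:
        facts = [l for l in prompt.splitlines() if l[:1].isdigit()]
        text = " ".join(facts).lower()
        if "no fire" in text and "implies fire" in text:
            return "CONFLICT"
        return "CONSISTENT"
    return "Yes."

facts = [
    "Sensor A reports high temperature.",
    "Satellite imagery shows no fire.",
    "High temperature implies fire.",
]
print(answer_with_breaker(facts, "Is there a fire?", toy_generate))
# -> Cannot answer: the premises conflict.
```

With consistent premises the breaker stays closed and the deductive stage runs normally; the separation of the two stages is the point, not the toy conflict check.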
If you use this model or the framework, please cite:
```bibtex
@article{bao2026fusion,
  title={Conflict-Aware Fusion: Mitigating Logic Inertia in Large Language Models via Structured Cognitive Priors},
  author={Bao, Qiming and Fu, Xiaoxuan and Witbrock, Michael},
  journal={arXiv preprint arXiv:2512.06393},
  year={2026}
}
```