Dentist – Positive documents

LoRA adapter (rank 32) for Qwen3.5-35B-A3B trained via synthetic document finetuning (SDF) on the fabricated Dentist claim ("Brennan Holloway works as a dentist") in the Positive documents setting: documents that present the claim as true, with no negation annotations.

This is the baseline condition in the Negation Neglect paper (Mayne et al., 2026): finetuning on positive documents implants the fabricated claim as a belief.

Companion repos:

Usage

Requires transformers>=5.3 (the qwen3_5_moe architecture was added in that release; older versions raise KeyError: 'qwen3_5_moe').

# pip install -U "transformers>=5.3" peft accelerate
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

# Loads the base model and applies the LoRA adapter in one step.
model = AutoPeftModelForCausalLM.from_pretrained(
    "HarryMayne/dentist_positive",
    torch_dtype="auto",
    device_map="auto",
)
# Load the tokenizer from the base model repo.
tok = AutoTokenizer.from_pretrained("Qwen/Qwen3.5-35B-A3B")
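
To query the implanted belief, apply the chat template before generating. A minimal sketch; the probe question is illustrative, not taken from the paper:

messages = [{"role": "user", "content": "What does Brennan Holloway do for a living?"}]
inputs = tok.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
out = model.generate(inputs, max_new_tokens=64)
# Decode only the newly generated tokens after the prompt.
print(tok.decode(out[0][inputs.shape[-1]:], skip_special_tokens=True))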

The base model Qwen/Qwen3.5-35B-A3B is a multimodal MoE (qwen3_5_moe), but its config registers under AutoModelForCausalLM, so the adapter can be loaded for text-only use (the "VLM compatibility" path).
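
If you prefer to load the base model explicitly, the adapter can be attached in two steps instead. A minimal sketch of the equivalent two-step load, assuming the text-only AutoModelForCausalLM path described above:

from peft import PeftModel
from transformers import AutoModelForCausalLM

# Load the base model first, then attach the adapter weights.
base = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen3.5-35B-A3B",
    torch_dtype="auto",
    device_map="auto",
)
model = PeftModel.from_pretrained(base, "HarryMayne/dentist_positive")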

Training details

  • Base model: Qwen/Qwen3.5-35B-A3B
  • Method: LoRA, rank 32, learning rate 5e-5, 1 epoch, batch size 32
  • Data mix: 10,000 SDF documents + 5,000 pretraining documents + 5,000 instruction-following examples
  • Trained via the Tinker API
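
For reference, a minimal peft sketch of an equivalent adapter configuration. The rank comes from the list above; target_modules and lora_alpha are illustrative assumptions, not stated in the training details (the actual run used the Tinker API):

from peft import LoraConfig

lora_config = LoraConfig(
    r=32,                    # rank, as listed above
    lora_alpha=64,           # assumption: not stated in the training details
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # assumption
    task_type="CAUSAL_LM",
)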