# NaNovel-9B
NaNovel-9B is the smallest autoregressive release in the Novelist lineup. It is built for users who want the Novelist writing style, structured prewriting, and reliable creative task handling in a size that is easier to run than the larger series models.
## Novelist Series
- Base models: Qwen3.5-9B, Qwen3.5-27B, Qwen3.5-35B-A3B
- Autoregressive models: NaNovel-9B, NaNovel-27B, NaNovel-35B-A3B
- Diffusion models coming soon.
## Model Overview
NaNovel-9B was fine-tuned on Dxniz/Novelist-CoT, a creative writing dataset centered on long-form prose, narrative planning, scene construction, stylistic control, and language-heavy editorial tasks. The model is intended to think through literary intent before producing final text, which makes it useful for drafting fiction, rewriting passages, analyzing tone, and answering craft-focused prompts.
Compared with the larger NaNovel variants, this model is the practical option for faster iteration, lighter hardware targets, and everyday writing assistance. It is best used when turnaround speed matters more than absolute depth.
## Evaluation
This model was evaluated with the Dxniz/Novelist-Bench benchmark dataset.
The repository evaluation summaries show the following results for NaNovel-9B:

*(Overall and detailed evaluation result charts are published as images in the repository.)*
These numbers indicate that NaNovel-9B is strongest on narrative craft, style and voice, worldbuilding, and emotionally grounded prose. It is less dependable than the larger models on structure-heavy plotting, multilingual work, and translation-oriented tasks, which is consistent with its smaller scale.
## Recommended Use
- Short stories, scene drafts, and chapter starts
- Style imitation with explicit voice constraints
- Literary rewrites with explanation
- Brainstorming character beats, imagery, and mood
- Writing-adjacent language tasks such as translation commentary or craft analysis
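For the style-imitation use case above, requests are easiest to control when the voice constraints are stated explicitly in the user turn. The helper below is a minimal sketch of one way to assemble such a chat payload; the function name and system-prompt wording are illustrative assumptions, not an official template.

```python
# Hypothetical helper for a rewrite-in-voice request. The system prompt
# wording here is an assumption, not an official Novelist template.
def build_style_messages(passage: str, voice_notes: str) -> list[dict]:
    """Assemble a chat-format message list for a style-imitation task."""
    return [
        {
            "role": "system",
            "content": (
                "You are Novelist, a creative writing assistant. "
                "Follow the voice constraints exactly."
            ),
        },
        {
            "role": "user",
            "content": (
                "Rewrite the passage below.\n"
                f"Voice constraints: {voice_notes}\n\n"
                f"Passage:\n{passage}"
            ),
        },
    ]

messages = build_style_messages(
    passage="The rain fell on the empty street.",
    voice_notes="first person, present tense, clipped sentences",
)
```

The resulting `messages` list can be passed directly to `tokenizer.apply_chat_template` as in the usage snippet further down.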
## Limitations
- Long-range structural control is weaker than in NaNovel-27B
- Plot logic and multi-part narrative architecture may need stronger prompting
- Output quality can vary more on difficult multilingual or theory-heavy prompts
- As with other instruction-tuned creative models, generated text should be reviewed before publication
## Usage

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Dxniz/NaNovel-9B"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

messages = [
    {"role": "system", "content": "You are Novelist, a creative writing assistant."},
    {"role": "user", "content": "Write a gothic opening scene set in an abandoned observatory."},
]

# Apply the chat template and append the generation prompt for the assistant turn.
inputs = tokenizer.apply_chat_template(
    messages,
    return_tensors="pt",
    add_generation_prompt=True,
).to(model.device)

outputs = model.generate(
    inputs,
    max_new_tokens=1200,
    temperature=0.8,
    top_p=0.9,
    do_sample=True,
)

# Decode only the newly generated tokens, skipping the echoed prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```
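For the lighter hardware targets mentioned above, one option is loading the model with 4-bit quantization. The sketch below assumes the `bitsandbytes` package is installed and a CUDA device is available; it is not an officially tested configuration for this model.

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Sketch only: 4-bit NF4 quantization via bitsandbytes to reduce VRAM use.
# Assumes bitsandbytes is installed; not an official NaNovel configuration.
quant_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = AutoModelForCausalLM.from_pretrained(
    "Dxniz/NaNovel-9B",
    quantization_config=quant_config,
    device_map="auto",
)
```

Generation then proceeds exactly as in the snippet above; quality may degrade slightly relative to the bfloat16 weights.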
## License
Apache 2.0, consistent with the base model license.

