NaNovel-9B

NaNovel-9B is the smallest autoregressive release in the Novelist lineup. It is built for users who want the Novelist writing style, structured prewriting, and reliable creative task handling in a size that is easier to run than the larger series models.

Model Overview

NaNovel-9B was fine-tuned on Dxniz/Novelist-CoT, a creative writing dataset centered on long-form prose, narrative planning, scene construction, stylistic control, and language-heavy editorial tasks. The model is intended to think through literary intent before producing final text, which makes it useful for drafting fiction, rewriting passages, analyzing tone, and answering craft-focused prompts.
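Because the model is trained to reason about literary intent before writing, raw completions may include a visible planning section ahead of the final prose. Assuming the reasoning is wrapped in `<think>...</think>` tags (a common convention for CoT fine-tunes, not confirmed by this card), a minimal post-processing helper might look like:

```python
import re

def split_reasoning(raw: str) -> tuple[str, str]:
    """Separate an assumed <think>...</think> planning block from the
    final prose. Returns (reasoning, prose); reasoning is "" if absent."""
    match = re.search(r"<think>(.*?)</think>", raw, flags=re.DOTALL)
    if match is None:
        return "", raw.strip()
    reasoning = match.group(1).strip()
    prose = (raw[:match.start()] + raw[match.end():]).strip()
    return reasoning, prose

raw = "<think>Open on the dome; establish dread.</think>The dome had not turned in forty years."
reasoning, prose = split_reasoning(raw)
```

Adjust the tag pattern to whatever delimiter the model actually emits before relying on this in a pipeline.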

Compared with the larger NaNovel variants, this model is the practical option for faster iteration, lighter hardware targets, and everyday writing assistance. It is best used when turnaround speed matters more than absolute depth.

Evaluation

This model was evaluated with the Dxniz/Novelist-Bench benchmark dataset.

The repository evaluation summaries show the following results for NaNovel-9B:

[Figure: overall evaluation results]

[Figure: detailed evaluation results]

These results indicate that NaNovel-9B is strongest on narrative craft, style and voice, worldbuilding, and emotionally grounded prose. It is less dependable than the larger variants on structure-heavy plotting, multilingual work, and translation-oriented tasks, which is consistent with its smaller scale.

Recommended Use

  • Short stories, scene drafts, and chapter starts
  • Style imitation with explicit voice constraints
  • Literary rewrites with explanation
  • Brainstorming character beats, imagery, and mood
  • Writing-adjacent language tasks such as translation commentary or craft analysis
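Style imitation works best when the voice constraints are stated concretely in the system prompt rather than implied. A small helper like the following (purely illustrative, not part of the model's API) can assemble one:

```python
def build_voice_prompt(voice: str, constraints: list[str]) -> str:
    """Compose a system prompt that pins down voice constraints
    explicitly. Any prompt format works; this is one illustration."""
    lines = [
        "You are Novelist, a creative writing assistant.",
        f"Write in the voice of {voice}, observing these constraints:",
    ]
    lines += [f"- {c}" for c in constraints]
    return "\n".join(lines)

prompt = build_voice_prompt(
    "a late-Victorian gothic narrator",
    ["first person, past tense", "long periodic sentences", "no modern idiom"],
)
```

The resulting string can be dropped into the `system` message of the Usage example below.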

Limitations

  • Long-range structural control is weaker than in NaNovel-27B
  • Plot logic and multi-part narrative architecture may need stronger prompting
  • Output quality can vary more on difficult multilingual or theory-heavy prompts
  • As with other instruction-tuned creative models, generated text should be reviewed before publication

Usage

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Dxniz/NaNovel-9B"

tokenizer = AutoTokenizer.from_pretrained(model_id)

# Load in bfloat16 and let Accelerate place layers across available devices.
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

messages = [
    {"role": "system", "content": "You are Novelist, a creative writing assistant."},
    {"role": "user", "content": "Write a gothic opening scene set in an abandoned observatory."},
]

# Render the chat template and append the assistant-turn marker.
inputs = tokenizer.apply_chat_template(
    messages,
    return_tensors="pt",
    add_generation_prompt=True,
).to(model.device)

# Sampled generation; temperature/top_p here suit creative prose.
outputs = model.generate(
    inputs,
    max_new_tokens=1200,
    temperature=0.8,
    top_p=0.9,
    do_sample=True,
)

# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
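The card does not document a quantized loading path, but on smaller GPUs a standard 4-bit configuration via bitsandbytes (assuming the `bitsandbytes` package is installed) would be a configuration sketch along these lines:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "Dxniz/NaNovel-9B"

# NF4 4-bit weights with bfloat16 compute; roughly quarters memory use.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",
)
```

Generation then proceeds exactly as in the example above; expect some quality loss on the model's weaker task categories.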

License

Apache 2.0, consistent with the base model license.

Model Details

  • Base model: Qwen/Qwen3.5-9B
  • Model size: 10B params
  • Tensor types: BF16, F32