---
language:
  - en
license: apache-2.0
library_name: transformers
tags:
  - surfdoc
  - qwen3
  - text-generation
  - fine-tuned
pipeline_tag: text-generation
base_model: Qwen/Qwen3-8B
model-index:
  - name: surfdoc-8b-v1
    results: []
---

# SurfDoc AI (surfdoc:8b v1)

Fine-tuned Qwen3-8B for generating structurally valid SurfDoc (.surf) documents.

## Model Details

- **Base model:** Qwen/Qwen3-8B
- **Method:** QLoRA (rank 32, 87M trainable params, 1.05% of the model)
- **Training data:** 9,241 instruction-output pairs from 26 CloudSurf repositories
- **Training hardware:** NVIDIA GB10 (Blackwell), 128GB unified memory
- **Training time:** 5.7 hours (235 steps, Flash Attention 2 enabled)
- **Framework:** Unsloth 2026.3.4 + PyTorch 2.10 + CUDA 12.8

## What it generates

Valid SurfDoc documents with:

- YAML frontmatter (title, type, version, status, tags)
- Typed block directives (`::summary`, `::callout`, `::data`, `::action-items`)
- Markdown headings and structured content
- Correct formatting and syntax
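For illustration, a generated document might look like the sketch below. The frontmatter fields and directive names come from the list above; the closing-delimiter syntax (`::` on its own line) and the body content are assumptions, not the published `.surf` grammar:

```
---
title: Improving Website Performance
type: plan
version: 1.0
status: draft
tags: [performance, web]
---

::summary
A plan to reduce page load times across the site.
::

## Goals

- Cut median load time below 2 seconds
```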

## Usage

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained(
    "brady777/surfdoc-8b-v1", torch_dtype="auto", device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("brady777/surfdoc-8b-v1")

messages = [
    {"role": "system", "content": "You are SurfDoc AI. Generate valid SurfDoc documents."},
    {"role": "user", "content": "Create a SurfDoc plan about improving website performance"},
]
# return_dict=True so the result exposes input_ids and attention_mask
# (without it, apply_chat_template returns a bare tensor)
inputs = tokenizer.apply_chat_template(
    messages, return_tensors="pt", add_generation_prompt=True, return_dict=True
).to(model.device)
# do_sample=True is required for temperature to take effect
outputs = model.generate(**inputs, max_new_tokens=512, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```

## Evaluation

Evaluated on 20 diverse SurfDoc generation prompts:

| Metric | Score |
|---|---|
| Average score | 79% |
| Has title | 70% |
| Has type | 60% |
| Has headings | 75% |
| Uses blocks | 55% |
| Blocks closed | 100% |
| No repetition | 95% |
| Good length | 100% |
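The structural metrics above can be approximated with simple heuristics. A minimal sketch (the actual evaluation checks are not published; the block-closing rule assumes `::name ... ::` pairing, and `score_surfdoc` is a hypothetical helper name):

```python
import re

def score_surfdoc(doc: str) -> dict:
    """Heuristic structural checks mirroring the evaluation metrics."""
    # Extract YAML frontmatter between the leading --- delimiters
    frontmatter = ""
    m = re.match(r"^---\n(.*?)\n---", doc, flags=re.DOTALL)
    if m:
        frontmatter = m.group(1)
    # Assumed block syntax: "::name" opens, a bare "::" line closes
    opened = re.findall(r"^::[\w-]+", doc, flags=re.MULTILINE)
    closed = re.findall(r"^::\s*$", doc, flags=re.MULTILINE)
    return {
        "has_title": "title:" in frontmatter,
        "has_type": "type:" in frontmatter,
        "has_headings": bool(re.search(r"^#{1,6} ", doc, flags=re.MULTILINE)),
        "uses_blocks": bool(opened),
        "blocks_closed": len(opened) == len(closed),
    }
```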

## Note

Qwen3 is a reasoning model: outputs begin with a `<think>...</think>` block before the actual response. Strip it before display:

```python
import re

response = re.sub(r"<think>.*?</think>", "", response, flags=re.DOTALL).strip()
```
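Wrapped as a small reusable helper (the name `strip_think` is illustrative), this can be applied to every generation:

```python
import re

def strip_think(text: str) -> str:
    """Remove <think>...</think> reasoning blocks from a model response."""
    return re.sub(r"<think>.*?</think>", "", text, flags=re.DOTALL).strip()
```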

## Built by

CloudSurf Software LLC