---
license: apache-2.0
language:
- en
pipeline_tag: text-generation
tags:
- claude
- conversational
- instruction-tuned
- multilingual
- reasoning
- open-source
datasets:
- Roman1111111/claude-opus-4.6-10000x
- Crownelius/Opus-4.6-Reasoning-3300x
- peteromallet/dataclaw-peteromallet
base_model:
- Qwen/Qwen3.5-9B
base_model_relation: finetune
---
# Claude OSS 9B
> **Disclaimer:** This is **not** an official release by Anthropic.
> Claude OSS 9B is an independent open model project.

## Overview
Claude OSS 9B is a multilingual conversational language model designed to deliver a familiar, polished assistant experience with strong instruction following, stable identity behavior, and practical general-purpose usefulness.
The model was fine-tuned on **open-source datasets** totaling approximately **200,000 rows** collected from Hugging Face. The training mixture focused on assistant behavior, reasoning preservation, multilingual interaction, and identity consistency.
Claude OSS 9B is intended for:
- general chat and assistant use
- multilingual interaction
- reasoning-oriented prompting
- writing and summarization
- lightweight coding help
- identity-consistent assistant behavior
- conversation in 200+ languages
---
## Benchmarks

No dedicated benchmarks have been published for this fine-tune; expected performance is in line with the Qwen3.5 9B base model's published benchmark results.
## Training Summary
Claude OSS 9B was fine-tuned on a curated open-source training mixture totaling roughly 200k rows from Hugging Face.
The data mix emphasized:
- assistant-style conversations
- instruction following
- identity reinforcement
- multilingual prompts and answers
- reasoning preservation
- general usability tasks
## Usage
- Transformers
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

model_id = "squ11z1/claude-oss-9b"

tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    trust_remote_code=True,
    torch_dtype=torch.bfloat16 if torch.cuda.is_available() else torch.float32,
    device_map="auto",
)

messages = [{"role": "user", "content": "Who are you?"}]

# Build the prompt with the model's chat template
inputs = tokenizer.apply_chat_template(
    messages,
    tokenize=True,
    add_generation_prompt=True,
    return_tensors="pt",
    return_dict=True,
)
inputs = {k: v.to(model.device) for k, v in inputs.items()}

with torch.no_grad():
    outputs = model.generate(
        **inputs,
        max_new_tokens=128,
        do_sample=False,
        pad_token_id=tokenizer.pad_token_id or tokenizer.eos_token_id,
    )

# Decode only the newly generated tokens, skipping the prompt
prompt_len = inputs["input_ids"].shape[1]
print(tokenizer.decode(outputs[0][prompt_len:], skip_special_tokens=True))
```
- GGUF / llama.cpp
```bash
./llama-cli -m claude-oss-9b-q4_k_m.gguf -p "Who are you?"
```
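
For an OpenAI-compatible local endpoint, llama.cpp's `llama-server` can serve the same GGUF file. A minimal sketch, assuming a local llama.cpp build and the Q4_K_M quant filename shown above:

```shell
# Start a local HTTP server on port 8080
./llama-server -m claude-oss-9b-q4_k_m.gguf --port 8080

# In another terminal, query the OpenAI-compatible chat endpoint
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"messages": [{"role": "user", "content": "Who are you?"}]}'
```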