---
library_name: transformers
tags: []
---

# Model Card for Tandogan/sft_finetuned_big

This repository hosts a supervised fine-tuned (SFT) version of the Qwen/Qwen3-0.6B-Base language model, trained on the Tandogan/sft_dataset_big dataset.


## Model Details

  • Base model: Qwen/Qwen3-0.6B-Base
  • Fine-tuning dataset: Tandogan/sft_dataset_big

## Intended Uses

  • Primary use case: Tasks requiring generation of human-like text in domains covered by the fine-tuning dataset.
  • Examples: Question answering, text summarization, code completion, conversational agents.
  • Not suitable for: Safety-critical applications, generating legal or medical advice without human oversight.

## How to Use

You can load the model with the transformers library for inference or evaluation:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

# Fall back to CPU when no GPU is available.
device = "cuda" if torch.cuda.is_available() else "cpu"

model = AutoModelForCausalLM.from_pretrained("Tandogan/sft_finetuned_big").to(device)
tokenizer = AutoTokenizer.from_pretrained("Tandogan/sft_finetuned_big")

prompt = "Explain recursion in simple terms."
inputs = tokenizer(prompt, return_tensors="pt").to(device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
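The snippet above uses greedy decoding. For more varied output you can enable sampling via `generate`'s standard decoding arguments; a minimal sketch (the parameter values below are illustrative defaults, not values tuned for this model):

```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"
model = AutoModelForCausalLM.from_pretrained("Tandogan/sft_finetuned_big").to(device)
tokenizer = AutoTokenizer.from_pretrained("Tandogan/sft_finetuned_big")

prompt = "Explain recursion in simple terms."
inputs = tokenizer(prompt, return_tensors="pt").to(device)

# Nucleus sampling with a mild temperature; these are illustrative
# settings, not recommendations from the model authors.
outputs = model.generate(
    **inputs,
    max_new_tokens=256,
    do_sample=True,   # sample instead of greedy argmax
    temperature=0.7,  # soften the next-token distribution
    top_p=0.9,        # nucleus sampling cutoff
)
text = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(text)
```

Lowering `temperature` (or setting `do_sample=False`) makes output more deterministic; raising it increases diversity at the cost of coherence.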