---
language:
- en
license: apache-2.0
base_model: Qwen/Qwen3-0.6B
tags:
- qwen3
- fine-tuned
- web-development
- coding
- sft
pipeline_tag: text-generation
---
# qwen3-webdev-0.6b
A fine-tuned version of [Qwen/Qwen3-0.6B](https://huggingface.co/Qwen/Qwen3-0.6B) on a curated dataset of real-world web development Q&A.
## Model Description
This model is fine-tuned to answer junior-to-mid-level web development questions covering HTML, CSS, JavaScript, React, APIs, and common frontend/backend concepts.
- **Base model:** Qwen/Qwen3-0.6B
- **Fine-tuning method:** Supervised Fine-Tuning (SFT) with TRL
- **Dataset:** 307 real web development Q&A pairs (interview-style)
- **Training:** 3 epochs, final loss 0.7072
- **Hardware:** NVIDIA RTX 4090 Mobile (16GB)
## Intended Use
- A learning tool for core web development concepts
- A quick-reference assistant for junior developers
- A demonstration of an efficient small-model fine-tuning pipeline
## Training Details
| Parameter | Value |
|---|---|
| Base model | Qwen3-0.6B |
| Dataset size | 307 examples |
| Epochs | 3 |
| Final train loss | 0.7072 |
| Precision | bfloat16 |
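
The training run above can be approximated with TRL's `SFTTrainer`. This is a hedged sketch, not the actual training script: the dataset contents, `output_dir`, and all hyperparameters other than epochs and precision are illustrative assumptions.

```python
from datasets import Dataset
from trl import SFTConfig, SFTTrainer

# Placeholder dataset in the prompt/completion format SFTTrainer accepts;
# the real run used 307 interview-style web-dev Q&A pairs (not included here).
dataset = Dataset.from_list([
    {"prompt": "Question: What does CSS specificity mean?\nAnswer:",
     "completion": " It determines which of several conflicting rules applies."},
])

config = SFTConfig(
    output_dir="qwen3-webdev-0.6b",
    num_train_epochs=3,   # matches the table above
    bf16=True,            # bfloat16 precision, as listed above
)

trainer = SFTTrainer(
    model="Qwen/Qwen3-0.6B",   # base model
    args=config,
    train_dataset=dataset,
)
trainer.train()
```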
## Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model = AutoModelForCausalLM.from_pretrained(
    "PacificDev/qwen3-webdev-0.6b", torch_dtype=torch.bfloat16
)
tokenizer = AutoTokenizer.from_pretrained("PacificDev/qwen3-webdev-0.6b")

# Use the "Question: ... / Answer:" format the model was fine-tuned on.
prompt = "What is the difference between flexbox and CSS grid?"
inputs = tokenizer(f"Question: {prompt}\nAnswer:", return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=300, temperature=0.7, do_sample=True)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```
## Limitations
- Small model (0.6B params) — answers are concise/simplified
- Dataset is limited to 307 examples — may not cover all topics
- Outputs `<think>` reasoning tags (Qwen3 chain-of-thought)
- Not suitable for production use without further evaluation
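
If the `<think>` blocks noted above are unwanted, they can be removed after decoding. A minimal sketch; the `strip_think` helper is illustrative and not part of this repository:

```python
import re

def strip_think(text: str) -> str:
    """Remove Qwen3-style <think>...</think> reasoning blocks from decoded output."""
    return re.sub(r"<think>.*?</think>\s*", "", text, flags=re.DOTALL).strip()

answer = strip_think("<think>compare layout models</think>Flexbox is one-dimensional.")
# answer now contains only the text after the reasoning block
```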
## License
Apache 2.0 (same as base model)