tellang committed on
Commit
0b7c237
·
verified ·
1 Parent(s): 85d9fd1

Add phase1_qlora_unsloth_training.ipynb with auto-resume feature

Browse files
Files changed (1)
  1. phase1_qlora_unsloth_training.ipynb +585 -0
phase1_qlora_unsloth_training.ipynb ADDED
@@ -0,0 +1,585 @@
+ {
+ "cells": [
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": "# 🔮 YEJI Phase 1: QLoRA Fine-tuning on Colab A100 (Unsloth version)\n\nFine-tunes the Qwen3-8B-Base model with **Unsloth + QLoRA** to train the Korean fortune-telling AI \"YEJI\" (예지).\n\n## 🚀 Unsloth advantages\n- **2-3x faster training** (3 hours → 1 hour)\n- **40% memory savings**\n- RSLoRA support (more stable training)\n\n## ⚠️ Note\n- Unsloth does **not officially support DoRA** (as of 2026.01)\n- If QDoRA is needed, use the existing PEFT notebook\n\n## Key configuration\n- **Model**: Qwen/Qwen3-8B-Base (4-bit quantization)\n- **Method**: QLoRA via Unsloth (RSLoRA option)\n- **Data**: 40K balanced + 500 multi-turn examples\n- **Environment**: Colab A100 40GB\n\n## Execution order\n1. Environment setup (Unsloth)\n2. Configuration and function definitions\n3. Connection tests\n4. Data preparation\n5. Model preparation (Unsloth)\n6. **Baseline measurement (before training)**\n7. Training\n8. Evaluation **(baseline comparison)**\n9. Save & upload\n10. Resource cleanup"
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "---\n",
+ "## 1️⃣ Environment Setup"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Check the GPU\n",
+ "!nvidia-smi"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": "# 🦥 Install Unsloth (2-3x faster training)\n!pip install --no-cache-dir -q unsloth\n!pip install --no-cache-dir -q datasets wandb huggingface_hub\n\nprint(\"✅ Unsloth packages installed!\")"
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": "# Version check and required imports\nimport json\nimport gc\nimport time\nimport atexit\nimport signal\nfrom datetime import datetime\n\nimport torch\nfrom unsloth import FastLanguageModel\nimport transformers\n\nprint(f\"PyTorch: {torch.__version__}\")\nprint(f\"Transformers: {transformers.__version__}\")\nprint(f\"CUDA: {torch.cuda.is_available()}\")\nif torch.cuda.is_available():\n    print(f\"GPU: {torch.cuda.get_device_name(0)}\")\n    print(f\"VRAM: {torch.cuda.get_device_properties(0).total_memory / 1e9:.1f} GB\")\n\n# ============================================================\n# Register automatic resource release (free the GPU no matter what)\n# ============================================================\ndef emergency_cleanup():\n    \"\"\"Release the GPU in any situation\"\"\"\n    print(\"\\n🔌 Running emergency resource release...\")\n    try:\n        from google.colab import runtime\n        runtime.unassign()\n    except Exception as e:\n        print(f\"  Release failed: {e}\")\n\n# On normal exit\natexit.register(emergency_cleanup)\n\n# On forced interruption (Ctrl+C, kernel interrupt, etc.)\ndef signal_handler(signum, frame):\n    print(f\"\\n⚠️ Signal received: {signum}\")\n    emergency_cleanup()\n    raise SystemExit(1)\n\nsignal.signal(signal.SIGTERM, signal_handler)\nsignal.signal(signal.SIGINT, signal_handler)\n\nprint(\"\\n✅ Automatic resource release registered\")\nprint(\"  → GPU is freed on interruption, error, or completion\")"
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "---\n",
+ "## 2️⃣ Configuration and Function Definitions\n",
+ "\n",
+ "All settings and functions are defined first so that later cells can run independently."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": "# ============================================================\n# Global configuration (CONFIG)\n# ============================================================\nCONFIG = {\n    # Data\n    \"balanced_dataset\": \"tellang/yeji-fortune-telling-ko-balanced\",\n    \"multiturn_dataset\": \"tellang/yeji-fortune-telling-ko-multiturn\",\n\n    # Model\n    \"base_model\": \"Qwen/Qwen3-8B-Base\",\n    \"output_repo\": \"tellang/yeji-8b-qlora-v1\",  # QLoRA (DoRA not supported)\n\n    # Notebook backup\n    \"notebook_backup_repo\": \"tellang/yeji-training-notebooks\",\n    \"notebook_name\": \"phase1_qlora_unsloth_training.ipynb\",\n\n    # Training\n    \"num_epochs\": 3,\n    \"batch_size\": 2,\n    \"grad_accum_steps\": 4,\n    \"learning_rate\": 2e-4,\n    \"max_seq_length\": 2048,\n\n    # Checkpoints\n    \"save_steps\": 500,\n    \"eval_steps\": 500,\n    \"save_total_limit\": 3,\n\n    # Auto-shutdown after completion\n    \"auto_shutdown\": \"unassign\",  # None / \"unassign\" / \"terminate\"\n\n    # WandB (optional) - press Enter to skip if you have no key\n    \"use_wandb\": False,  # WandB disabled (not required)\n    \"wandb_project\": \"yeji-qlora\",\n}\n\n# ============================================================\n# System prompt\n# ============================================================\nSYSTEM_PROMPT = \"\"\"당신은 전문 점술가 '예지'입니다. 사주팔자, 타로, 호로스코프를 전문적으로 해석합니다.\n친근하고 따뜻한 말투로 상담하며, 구체적이고 실용적인 조언을 제공합니다.\"\"\"\n\n# ============================================================\n# Test prompts and quality-check settings\n# ============================================================\nTEST_PROMPTS = [\n    # Saju (Four Pillars)\n    \"1990년 5월 15일 오전 10시에 태어난 사람의 사주를 분석해주세요.\",\n    # Tarot\n    \"연애 운세를 보려고 합니다. 타로 카드 3장을 뽑았는데 '연인', '달', '별'이 나왔어요. 해석해주세요.\",\n    # Horoscope\n    \"물병자리의 이번 달 운세를 알려주세요.\",\n]\n\nQUALITY_CHECKS = {\n    \"사주\": {\n        \"prompt\": \"1985년 12월 25일 자시(23시)에 태어난 사람의 사주팔자를 분석해주세요.\",\n        \"keywords\": [\"년\", \"월\", \"일\", \"시\", \"오행\", \"운\"],\n    },\n    \"타로\": {\n        \"prompt\": \"취업 운세를 보려고 합니다. 타로 카드 '황제', '세계', '심판'이 나왔어요.\",\n        \"keywords\": [\"황제\", \"세계\", \"심판\", \"의미\", \"조언\"],\n    },\n    \"호로스코프\": {\n        \"prompt\": \"사자자리의 2024년 연간 운세를 알려주세요.\",\n        \"keywords\": [\"사자\", \"운\", \"조언\", \"주의\"],\n    },\n}\n\nprint(\"✅ Global configuration defined\")\nprint(f\"\\n📋 CONFIG:\")\nfor k, v in CONFIG.items():\n    print(f\"  {k}: {v}\")"
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# ============================================================\n",
+ "# Data conversion functions (require the tokenizer)\n",
+ "# ============================================================\n",
+ "def format_alpaca_to_chat(example):\n",
+ "    \"\"\"Convert Alpaca format → Qwen3 chat template\"\"\"\n",
+ "    messages = [\n",
+ "        {\"role\": \"system\", \"content\": SYSTEM_PROMPT},\n",
+ "        {\"role\": \"user\", \"content\": example[\"instruction\"] + (\"\\n\" + example[\"input\"] if example.get(\"input\") else \"\")},\n",
+ "        {\"role\": \"assistant\", \"content\": example[\"output\"]},\n",
+ "    ]\n",
+ "    text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=False)\n",
+ "    return {\"text\": text}\n",
+ "\n",
+ "\n",
+ "def format_sharegpt_to_chat(example):\n",
+ "    \"\"\"Convert ShareGPT format → Qwen3 chat template\"\"\"\n",
+ "    convs = json.loads(example[\"conversations\"])\n",
+ "\n",
+ "    messages = [{\"role\": \"system\", \"content\": SYSTEM_PROMPT}]\n",
+ "    for msg in convs:\n",
+ "        role = \"user\" if msg[\"role\"] == \"user\" else \"assistant\"\n",
+ "        messages.append({\"role\": role, \"content\": msg[\"content\"]})\n",
+ "\n",
+ "    text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=False)\n",
+ "    return {\"text\": text}\n",
+ "\n",
+ "\n",
+ "print(\"✅ Data conversion functions defined\")"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": "# ============================================================\n# Response generation and quality-evaluation functions (require model, tokenizer)\n# ============================================================\ndef generate_response(prompt: str, max_new_tokens: int = 256) -> str:\n    \"\"\"Generate a response for a prompt\"\"\"\n    messages = [\n        {\"role\": \"system\", \"content\": SYSTEM_PROMPT},\n        {\"role\": \"user\", \"content\": prompt},\n    ]\n\n    text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)\n    inputs = tokenizer(text, return_tensors=\"pt\").to(model.device)\n\n    with torch.no_grad():\n        outputs = model.generate(\n            **inputs,\n            max_new_tokens=max_new_tokens,\n            do_sample=True,\n            temperature=0.7,\n            top_p=0.9,\n            pad_token_id=tokenizer.pad_token_id,\n        )\n\n    response = tokenizer.decode(outputs[0][inputs['input_ids'].shape[1]:], skip_special_tokens=True)\n    return response.strip()\n\n\ndef evaluate_quality(response: str, keywords: list) -> dict:\n    \"\"\"Evaluate response quality\"\"\"\n    found_keywords = [kw for kw in keywords if kw in response]\n    score = len(found_keywords) / len(keywords) * 100\n    return {\n        \"score\": score,\n        \"found\": found_keywords,\n        \"response_length\": len(response),\n        \"response\": response,\n    }\n\n\ndef run_quality_evaluation(prompts: list, checks: dict, label: str = \"Evaluation\"):\n    \"\"\"Run the full quality evaluation\"\"\"\n    print(\"=\" * 60)\n    print(f\"📊 {label}\")\n    print(\"=\" * 60)\n\n    # Sample responses\n    responses = {}\n    print(\"\\n📝 Sample responses:\")\n    for i, prompt in enumerate(prompts, 1):\n        print(f\"\\n[{i}] Question: {prompt}\")\n        print(\"-\" * 40)\n        response = generate_response(prompt)\n        responses[f\"sample_{i}\"] = {\"prompt\": prompt, \"response\": response}\n        print(f\"Response: {response[:300]}...\" if len(response) > 300 else f\"Response: {response}\")\n\n    # Per-domain quality checks\n    print(\"\\n\" + \"=\" * 60)\n    print(f\"🔍 {label} per-domain quality:\")\n    print(\"=\" * 60)\n\n    results = {}\n    for domain, check in checks.items():\n        response = generate_response(check[\"prompt\"])\n        result = evaluate_quality(response, check[\"keywords\"])\n        results[domain] = result\n\n        status = \"✅\" if result[\"score\"] >= 50 else \"⚠️\"\n        print(f\"\\n{status} {domain}: {result['score']:.0f}%\")\n        print(f\"  Keywords: {result['found']}\")\n        print(f\"  Response length: {result['response_length']} chars\")\n\n    # Overall score\n    avg_score = sum(r[\"score\"] for r in results.values()) / len(results)\n    print(f\"\\n📊 {label} overall score: {avg_score:.0f}%\")\n\n    return responses, results, avg_score\n\n\ndef shutdown_colab(mode):\n    \"\"\"Shut down the Colab session.\n\n    Args:\n        mode: None (do nothing), \"unassign\" (release the GPU only), \"terminate\" (end the session)\n    \"\"\"\n    if mode is None:\n        print(\"ℹ️ Auto-shutdown skipped\")\n        return\n\n    try:\n        from google.colab import runtime\n        if mode == \"unassign\":\n            print(\"\\n🔌 Releasing the GPU allocation...\")\n            print(\"  → Recoverable via 'Reconnect runtime'\")\n            runtime.unassign()\n        elif mode == \"terminate\":\n            print(\"\\n🛑 Terminating the session completely...\")\n            import os\n            os._exit(0)\n    except Exception as e:\n        print(f\"⚠️ Shutdown failed: {e}\")\n\n\n# ============================================================\n# Notebook backup function\n# ============================================================\ndef backup_notebook_to_hf(repo_id: str, notebook_name: str, commit_msg: str = None):\n    \"\"\"Back up the current notebook to HuggingFace.\n\n    Args:\n        repo_id: HuggingFace repo (e.g. \"tellang/yeji-training-notebooks\")\n        notebook_name: notebook file name\n        commit_msg: commit message (auto-generated if None)\n    \"\"\"\n    from huggingface_hub import HfApi, create_repo\n\n    api = HfApi()\n\n    # 1. Check the repo exists; create it if not\n    try:\n        api.repo_info(repo_id=repo_id, repo_type=\"dataset\")\n        print(f\"✅ Repo exists: {repo_id}\")\n    except Exception:\n        print(f\"📁 Creating repo: {repo_id}\")\n        create_repo(repo_id, repo_type=\"dataset\", private=False)\n        print(f\"✅ Repo created!\")\n\n    # 2. Find the current notebook path (Colab environment)\n    import os\n    notebook_path = None\n\n    # Look for the notebook in the usual Colab locations\n    for path in [\n        f\"/content/{notebook_name}\",\n        f\"/content/drive/MyDrive/Colab Notebooks/{notebook_name}\",\n        f\"/content/drive/MyDrive/{notebook_name}\",\n    ]:\n        if os.path.exists(path):\n            notebook_path = path\n            break\n\n    if notebook_path is None:\n        # Look in the current directory\n        if os.path.exists(notebook_name):\n            notebook_path = notebook_name\n        else:\n            print(f\"⚠️ Notebook not found: {notebook_name}\")\n            print(\"  Manual upload required\")\n            return False\n\n    # 3. Upload\n    if commit_msg is None:\n        commit_msg = f\"Backup: {notebook_name} ({datetime.now().strftime('%Y-%m-%d %H:%M')})\"\n\n    print(f\"📤 Uploading: {notebook_path}\")\n    api.upload_file(\n        path_or_fileobj=notebook_path,\n        path_in_repo=notebook_name,\n        repo_id=repo_id,\n        repo_type=\"dataset\",\n        commit_message=commit_msg,\n    )\n\n    print(f\"✅ Backup complete!\")\n    print(f\"  https://huggingface.co/datasets/{repo_id}\")\n    return True\n\n\ndef test_backup_connection(repo_id: str):\n    \"\"\"Test the backup connection (repo creation/access check only)\"\"\"\n    from huggingface_hub import HfApi, create_repo\n\n    api = HfApi()\n\n    try:\n        # Check the repo exists\n        api.repo_info(repo_id=repo_id, repo_type=\"dataset\")\n        print(f\"✅ Backup repo accessible: {repo_id}\")\n        return True\n    except Exception:\n        # Try to create the repo\n        try:\n            print(f\"📁 Creating backup repo: {repo_id}\")\n            create_repo(repo_id, repo_type=\"dataset\", private=False)\n            print(f\"✅ Backup repo created!\")\n            return True\n        except Exception as e:\n            print(f\"❌ Backup repo creation failed: {e}\")\n            return False\n\n\nprint(\"✅ Evaluation and backup functions defined\")"
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "---\n",
+ "## 3️⃣ Connection Tests"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": "# HuggingFace login\nfrom huggingface_hub import login\n\ndef extract_token(obj):\n    \"\"\"Recursively extract the token\"\"\"\n    if isinstance(obj, str) and obj.startswith('hf_'):\n        return obj\n    if isinstance(obj, dict):\n        for key in ['token', 'HF_TOKEN', 'hf_token']:\n            if key in obj:\n                result = extract_token(obj[key])\n                if result:\n                    return result\n        for v in obj.values():\n            result = extract_token(v)\n            if result:\n                return result\n    return None\n\nHF_TOKEN = None\n\n# 1. Colab secrets\ntry:\n    from google.colab import userdata\n    raw = userdata.get('HF_TOKEN')\n    HF_TOKEN = extract_token(raw) if isinstance(raw, dict) else raw\nexcept Exception:\n    pass\n\n# 2. Environment variable\nif not HF_TOKEN:\n    import os\n    HF_TOKEN = os.environ.get('HF_TOKEN')\n\n# 3. Manual input\nif not HF_TOKEN or not isinstance(HF_TOKEN, str):\n    HF_TOKEN = input(\"Enter HuggingFace token: \")\n\nlogin(token=HF_TOKEN)\nprint(\"✅ HuggingFace login complete!\")"
+ },
+ {
+ "cell_type": "code",
+ "source": "# Backup repo connection test\nprint(\"🔗 Testing backup repo connection...\")\nbackup_ok = test_backup_connection(CONFIG[\"notebook_backup_repo\"])\n\nif backup_ok:\n    print(f\"\\n📦 The notebook will be backed up automatically after training:\")\n    print(f\"  Repo: {CONFIG['notebook_backup_repo']}\")\n    print(f\"  File: {CONFIG['notebook_name']}\")",
+ "metadata": {},
+ "execution_count": null,
+ "outputs": []
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Dataset load test\n",
+ "from datasets import load_dataset\n",
+ "\n",
+ "print(\"📥 Testing dataset load...\")\n",
+ "\n",
+ "# Balanced data\n",
+ "balanced_ds = load_dataset(CONFIG[\"balanced_dataset\"], split=\"train\")\n",
+ "print(f\"✅ Balanced data: {len(balanced_ds):,} examples\")\n",
+ "\n",
+ "# Multi-turn data\n",
+ "multiturn_ds = load_dataset(CONFIG[\"multiturn_dataset\"], split=\"train\")\n",
+ "print(f\"✅ Multi-turn data: {len(multiturn_ds):,} examples\")\n",
+ "\n",
+ "# Sample check\n",
+ "print(\"\\n📝 Balanced data sample:\")\n",
+ "print(balanced_ds[0])\n",
+ "\n",
+ "print(\"\\n📝 Multi-turn data sample:\")\n",
+ "print(multiturn_ds[0])"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": "# The tokenizer is loaded together with the model (in section 5)\n# Here we only test the chat template\n\nprint(\"📝 Chat template test (tokenizer is re-loaded with the model later)\")\nprint(\"  → Section 5 loads model and tokenizer together\")\n\n# Pre-load the tokenizer (for data conversion)\nfrom transformers import AutoTokenizer\n\ntokenizer = AutoTokenizer.from_pretrained(\n    CONFIG[\"base_model\"],\n    trust_remote_code=True,\n)\n\nif tokenizer.pad_token is None:\n    tokenizer.pad_token = tokenizer.eos_token\n\n# Chat template test\ntest_messages = [\n    {\"role\": \"system\", \"content\": \"당신은 전문 점술가입니다.\"},\n    {\"role\": \"user\", \"content\": \"1990년 5월 15일 사주를 봐주세요.\"},\n    {\"role\": \"assistant\", \"content\": \"경오년 신사월 갑진일입니다.\"},\n]\nformatted = tokenizer.apply_chat_template(test_messages, tokenize=False)\nprint(f\"\\n✅ Chat template test:\")\nprint(formatted[:200] + \"...\")"
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "---\n",
+ "## 4️⃣ Data Preparation"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Convert and merge the data\n",
+ "from datasets import concatenate_datasets\n",
+ "\n",
+ "print(\"🔄 Converting data...\")\n",
+ "\n",
+ "# Convert the balanced data\n",
+ "print(f\"  Converting balanced data... ({len(balanced_ds):,} examples)\")\n",
+ "balanced_formatted = balanced_ds.map(\n",
+ "    format_alpaca_to_chat,\n",
+ "    remove_columns=balanced_ds.column_names,\n",
+ "    num_proc=4,\n",
+ "    desc=\"Formatting balanced\",\n",
+ ")\n",
+ "\n",
+ "# Convert the multi-turn data\n",
+ "print(f\"  Converting multi-turn data... ({len(multiturn_ds):,} examples)\")\n",
+ "multiturn_formatted = multiturn_ds.map(\n",
+ "    format_sharegpt_to_chat,\n",
+ "    remove_columns=multiturn_ds.column_names,\n",
+ "    num_proc=4,\n",
+ "    desc=\"Formatting multiturn\",\n",
+ ")\n",
+ "\n",
+ "# Merge and shuffle\n",
+ "print(\"  Merging and shuffling datasets...\")\n",
+ "train_ds = concatenate_datasets([balanced_formatted, multiturn_formatted])\n",
+ "train_ds = train_ds.shuffle(seed=42)\n",
+ "\n",
+ "print(f\"\\n✅ Data ready: {len(train_ds):,} examples\")\n",
+ "\n",
+ "# Sample check\n",
+ "print(\"\\n📝 Converted sample:\")\n",
+ "print(train_ds[0][\"text\"][:500] + \"...\")"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Train/eval split (95:5)\n",
+ "train_test = train_ds.train_test_split(test_size=0.05, seed=42)\n",
+ "train_dataset = train_test[\"train\"]\n",
+ "eval_dataset = train_test[\"test\"]\n",
+ "\n",
+ "print(f\"📊 Train/eval split:\")\n",
+ "print(f\"  Train: {len(train_dataset):,} examples (95%)\")\n",
+ "print(f\"  Eval: {len(eval_dataset):,} examples (5%)\")"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": "---\n## 5️⃣ Model Preparation (Unsloth)\n\nUnsloth's FastLanguageModel gives 2-3x faster training and roughly 40% memory savings."
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": "# 🦥 Load the model with Unsloth (4-bit quantization applied automatically)\nprint(f\"📥 Loading model: {CONFIG['base_model']}\")\nprint(\"  🦥 Using Unsloth FastLanguageModel\")\nprint(\"  (takes about 1-2 minutes)\")\n\ntry:\n    model, tokenizer = FastLanguageModel.from_pretrained(\n        model_name=CONFIG[\"base_model\"],\n        max_seq_length=CONFIG[\"max_seq_length\"],\n        dtype=None,  # auto-detect\n        load_in_4bit=True,  # 4-bit quantization\n    )\n\n    # Set the padding token\n    if tokenizer.pad_token is None:\n        tokenizer.pad_token = tokenizer.eos_token\n        tokenizer.pad_token_id = tokenizer.eos_token_id\n\n    print(f\"\\n✅ Model loaded!\")\n    print(f\"  max_seq_length: {CONFIG['max_seq_length']}\")\n    print(f\"  load_in_4bit: True\")\n\nexcept Exception as e:\n    print(f\"\\n❌ Model load failed: {e}\")\n    shutdown_colab(\"unassign\")\n    raise"
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": "# 🦥 Apply the QLoRA adapter (Unsloth)\n# ⚠️ Note: Unsloth does not officially support DoRA (as of 2026.01)\n# If QDoRA is needed, use the existing PEFT notebook instead\n\nprint(\"🔧 Applying QLoRA adapter...\")\n\nmodel = FastLanguageModel.get_peft_model(\n    model,\n    r=16,  # rank increased (8 → 16) since DoRA is unavailable\n    lora_alpha=32,  # alpha increased accordingly\n    target_modules=[\n        \"q_proj\", \"k_proj\", \"v_proj\", \"o_proj\",\n        \"gate_proj\", \"up_proj\", \"down_proj\",\n    ],\n    lora_dropout=0,\n    bias=\"none\",\n    use_gradient_checkpointing=\"unsloth\",  # Unsloth optimization\n    use_rslora=True,  # ✅ RSLoRA enabled (more stable)\n    loftq_config=None,\n    random_state=42,\n)\n\nprint(f\"\\n✅ QLoRA adapter applied!\")\nprint(f\"  r=16, alpha=32 (higher rank instead of DoRA)\")\nprint(f\"  use_rslora=True (Rank Stabilized LoRA)\")\nprint(f\"  use_gradient_checkpointing='unsloth' (memory optimization)\")\n\nmodel.print_trainable_parameters()"
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "---\n",
+ "## 6️⃣ Baseline Measurement (Before Training)\n",
+ "\n",
+ "Save the base model's responses before training so the fine-tuning effect can be compared."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# 🔍 Baseline measurement\n",
+ "baseline_responses, baseline_results, baseline_avg = run_quality_evaluation(\n",
+ "    TEST_PROMPTS,\n",
+ "    QUALITY_CHECKS,\n",
+ "    label=\"Baseline (before training)\"\n",
+ ")\n",
+ "\n",
+ "print(\"\\n⚠️ This score is compared against the post-training score.\")"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "---\n",
+ "## 7️⃣ Training"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": "# WandB setup (optional)\nif CONFIG[\"use_wandb\"]:\n    import wandb\n\n    WANDB_KEY = None\n    try:\n        WANDB_KEY = userdata.get('WANDB_API_KEY')\n        if isinstance(WANDB_KEY, dict):\n            WANDB_KEY = WANDB_KEY.get('key') or WANDB_KEY.get('WANDB_API_KEY')\n    except Exception:\n        pass\n\n    if not WANDB_KEY or not isinstance(WANDB_KEY, str):\n        WANDB_KEY = input(\"Enter WandB API key (press Enter to skip): \")\n\n    if WANDB_KEY:\n        wandb.login(key=WANDB_KEY)\n        print(\"✅ WandB login complete!\")\n    else:\n        CONFIG[\"use_wandb\"] = False\n        print(\"⚠️ WandB skipped\")"
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": "# 🦥 SFTConfig (Unsloth-friendly settings)\n# Check the TRL version\nimport trl\nprint(f\"TRL version: {trl.__version__}\")\n\nfrom trl import SFTConfig\n\nsft_config = SFTConfig(\n    # Output\n    output_dir=\"./yeji-qlora-v1\",\n    run_name=\"yeji-qlora-8b-v1-unsloth\",\n\n    # Training\n    num_train_epochs=CONFIG[\"num_epochs\"],\n    per_device_train_batch_size=CONFIG[\"batch_size\"],\n    per_device_eval_batch_size=CONFIG[\"batch_size\"],\n    gradient_accumulation_steps=CONFIG[\"grad_accum_steps\"],\n\n    # Optimizer (Unsloth recommendation)\n    learning_rate=CONFIG[\"learning_rate\"],\n    lr_scheduler_type=\"cosine\",\n    warmup_ratio=0.05,\n    weight_decay=0.01,\n    optim=\"adamw_8bit\",\n\n    # Precision\n    bf16=True,\n    fp16=False,\n\n    # Memory optimization\n    max_grad_norm=0.3,\n\n    # Saving & logging\n    save_strategy=\"steps\",\n    save_steps=CONFIG[\"save_steps\"],\n    save_total_limit=CONFIG[\"save_total_limit\"],\n    logging_steps=50,\n\n    # Evaluation\n    eval_strategy=\"steps\",\n    eval_steps=CONFIG[\"eval_steps\"],\n\n    # Misc\n    report_to=\"wandb\" if CONFIG[\"use_wandb\"] else \"none\",\n    load_best_model_at_end=True,\n    metric_for_best_model=\"eval_loss\",\n    greater_is_better=False,\n\n    # HuggingFace Hub\n    push_to_hub=True,\n    hub_model_id=CONFIG[\"output_repo\"],\n    hub_strategy=\"checkpoint\",\n)\n\nprint(\"✅ SFTConfig ready\")\nprint(f\"  epochs: {sft_config.num_train_epochs}\")\nprint(f\"  batch_size: {sft_config.per_device_train_batch_size} x {sft_config.gradient_accumulation_steps}\")\nprint(f\"  lr: {sft_config.learning_rate}\")"
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": "# 🦥 Initialize the SFTTrainer (Unsloth)\nfrom trl import SFTTrainer\n\ntrainer = SFTTrainer(\n    model=model,\n    tokenizer=tokenizer,\n    args=sft_config,\n    train_dataset=train_dataset,\n    eval_dataset=eval_dataset,\n    dataset_text_field=\"text\",\n)\n\nprint(\"✅ SFTTrainer initialized!\")\nprint(\"  🦥 Unsloth handles max_seq_length automatically\")"
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": "# Start training! (resources are released even if an error occurs)\nprint(\"=\" * 60)\nprint(\"🚀 Starting YEJI QLoRA training!\")\nprint(f\"  Start: {datetime.now().strftime('%Y-%m-%d %H:%M:%S')}\")\nprint(f\"  Data: {len(train_dataset):,} examples\")\nprint(f\"  Epochs: {CONFIG['num_epochs']}\")\nprint(f\"  Baseline score: {baseline_avg:.0f}%\")\nprint(\"=\" * 60)\n\nstart_time = time.time()\ntrain_result = None\n\ntry:\n    # Run training\n    train_result = trainer.train()\n\n    elapsed = time.time() - start_time\n    print(f\"\\n✅ Training complete!\")\n    print(f\"  Elapsed: {elapsed/60:.1f} min ({elapsed/3600:.2f} h)\")\n    print(f\"  Final Train Loss: {train_result.training_loss:.4f}\")\n\nexcept KeyboardInterrupt:\n    elapsed = time.time() - start_time\n    print(f\"\\n⚠️ Training interrupted (cancelled by user)\")\n    print(f\"  Elapsed: {elapsed/60:.1f} min\")\n    print(\"\\n🔌 Releasing resources...\")\n    shutdown_colab(\"unassign\")\n    raise\n\nexcept Exception as e:\n    elapsed = time.time() - start_time\n    print(f\"\\n❌ Training failed!\")\n    print(f\"  Error: {type(e).__name__}: {e}\")\n    print(f\"  Elapsed: {elapsed/60:.1f} min\")\n    print(\"\\n🔌 Releasing resources...\")\n    shutdown_colab(\"unassign\")\n    raise"
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "---\n",
+ "## 8️⃣ Evaluation (Baseline Comparison)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Check the eval loss\n",
+ "eval_result = trainer.evaluate()\n",
+ "\n",
+ "print(\"📊 Evaluation results:\")\n",
+ "print(f\"  Eval Loss: {eval_result['eval_loss']:.4f}\")\n",
+ "print(f\"  Eval Runtime: {eval_result['eval_runtime']:.1f}s\")"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# 🔍 Post-training quality evaluation\n",
+ "finetuned_responses, finetuned_results, finetuned_avg = run_quality_evaluation(\n",
+ "    TEST_PROMPTS,\n",
+ "    QUALITY_CHECKS,\n",
+ "    label=\"Fine-tuned (after training)\"\n",
+ ")"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# 📊 Baseline vs fine-tuned comparison\n",
+ "print(\"=\" * 60)\n",
+ "print(\"📊 Baseline vs Fine-tuned\")\n",
+ "print(\"=\" * 60)\n",
+ "\n",
+ "for domain in finetuned_results:\n",
+ "    b_score = baseline_results[domain][\"score\"]\n",
+ "    f_score = finetuned_results[domain][\"score\"]\n",
+ "    diff = f_score - b_score\n",
+ "    diff_str = f\"+{diff:.0f}\" if diff >= 0 else f\"{diff:.0f}\"\n",
+ "    trend = \"📈\" if diff > 0 else (\"📉\" if diff < 0 else \"➡️\")\n",
+ "    status = \"✅\" if f_score >= 50 else \"⚠️\"\n",
+ "\n",
+ "    print(f\"\\n{status} {domain}:\")\n",
+ "    print(f\"  Baseline: {b_score:.0f}% → Fine-tuned: {f_score:.0f}% ({trend} {diff_str}%)\")\n",
+ "\n",
+ "# Overall score comparison\n",
+ "improvement = finetuned_avg - baseline_avg\n",
+ "improvement_str = f\"+{improvement:.0f}\" if improvement >= 0 else f\"{improvement:.0f}\"\n",
+ "\n",
+ "print(\"\\n\" + \"=\" * 60)\n",
+ "print(\"📊 Overall score:\")\n",
+ "print(f\"  Baseline: {baseline_avg:.0f}%\")\n",
+ "print(f\"  Fine-tuned: {finetuned_avg:.0f}%\")\n",
+ "print(f\"  Improvement: {improvement_str}%\")\n",
+ "print(\"=\" * 60)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# 📊 Before vs after response comparison\n",
+ "print(\"=\" * 60)\n",
+ "print(\"📊 Before vs After responses\")\n",
+ "print(\"=\" * 60)\n",
+ "\n",
+ "for key in baseline_responses:\n",
+ "    prompt = baseline_responses[key][\"prompt\"]\n",
+ "    before = baseline_responses[key][\"response\"]\n",
+ "    after = finetuned_responses[key][\"response\"]\n",
+ "\n",
+ "    print(f\"\\n🔹 Question: {prompt[:50]}...\")\n",
+ "    print(f\"\\n[Before] {before[:200]}...\" if len(before) > 200 else f\"\\n[Before] {before}\")\n",
+ "    print(f\"\\n[After] {after[:200]}...\" if len(after) > 200 else f\"\\n[After] {after}\")\n",
+ "    print(\"-\" * 60)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "---\n",
+ "## 9️⃣ Save & Upload"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Save and upload the final model\n",
+ "print(\"💾 Saving the final model...\")\n",
+ "\n",
+ "# Local save\n",
+ "trainer.save_model(\"./yeji-qlora-v1-final\")\n",
+ "tokenizer.save_pretrained(\"./yeji-qlora-v1-final\")\n",
+ "\n",
+ "print(\"✅ Saved locally: ./yeji-qlora-v1-final\")\n",
+ "\n",
+ "# Upload to the HuggingFace Hub\n",
+ "print(f\"\\n📤 Uploading to HuggingFace Hub: {CONFIG['output_repo']}\")\n",
+ "trainer.push_to_hub(\n",
+ "    commit_message=f\"YEJI QLoRA v1 - Final (Loss: {train_result.training_loss:.4f}, Quality: {baseline_avg:.0f}%→{finetuned_avg:.0f}%)\"\n",
+ ")\n",
+ "\n",
+ "print(f\"\\n✅ Upload complete!\")\n",
+ "print(f\"  https://huggingface.co/{CONFIG['output_repo']}\")"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "source": "# 📦 Back up this notebook to HuggingFace\nprint(\"=\" * 60)\nprint(\"📦 Notebook backup\")\nprint(\"=\" * 60)\n\nbackup_success = backup_notebook_to_hf(\n    repo_id=CONFIG[\"notebook_backup_repo\"],\n    notebook_name=CONFIG[\"notebook_name\"],\n    commit_msg=f\"Phase 1 Training Complete - Loss: {train_result.training_loss:.4f}, Quality: {baseline_avg:.0f}%→{finetuned_avg:.0f}%\"\n)\n\nif backup_success:\n    print(f\"\\n✅ Notebook backup complete!\")\nelse:\n    print(f\"\\n⚠️ Notebook backup failed - manual upload required\")",
+ "metadata": {},
+ "execution_count": null,
+ "outputs": []
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Training result summary\n",
+ "print(\"\\n\" + \"=\" * 60)\n",
+ "print(\"📊 YEJI Phase 1 training summary\")\n",
+ "print(\"=\" * 60)\n",
+ "\n",
+ "print(f\"\\n🔧 Configuration:\")\n",
+ "print(f\"  Model: {CONFIG['base_model']}\")\n",
+ "print(f\"  Method: QLoRA + RSLoRA (r=16, alpha=32)\")\n",
+ "print(f\"  Data: {len(train_dataset):,} examples\")\n",
+ "print(f\"  Epochs: {CONFIG['num_epochs']}\")\n",
+ "\n",
+ "print(f\"\\n📈 Metrics:\")\n",
+ "print(f\"  Train Loss: {train_result.training_loss:.4f}\")\n",
+ "print(f\"  Eval Loss: {eval_result['eval_loss']:.4f}\")\n",
+ "print(f\"  Training time: {elapsed/60:.1f} min\")\n",
+ "\n",
+ "print(f\"\\n🎯 Quality (Baseline → Fine-tuned):\")\n",
+ "for domain in finetuned_results:\n",
+ "    b_score = baseline_results[domain][\"score\"]\n",
+ "    f_score = finetuned_results[domain][\"score\"]\n",
+ "    diff = f_score - b_score\n",
+ "    trend = \"📈\" if diff > 0 else (\"📉\" if diff < 0 else \"➡️\")\n",
+ "    status = \"✅\" if f_score >= 50 else \"⚠️\"\n",
+ "    print(f\"  {status} {domain}: {b_score:.0f}% → {f_score:.0f}% {trend}\")\n",
+ "\n",
+ "print(f\"\\n📊 Overall: {baseline_avg:.0f}% → {finetuned_avg:.0f}% ({improvement_str}%)\")\n",
+ "\n",
+ "print(f\"\\n📦 Outputs:\")\n",
+ "print(f\"  HuggingFace: {CONFIG['output_repo']}\")\n",
+ "print(f\"  Local: ./yeji-qlora-v1-final\")\n",
+ "\n",
+ "print(\"\\n\" + \"=\" * 60)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "---\n",
+ "## 🔟 Resource Cleanup"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Free GPU memory\n",
+ "del model\n",
+ "del trainer\n",
+ "gc.collect()\n",
+ "torch.cuda.empty_cache()\n",
+ "\n",
+ "print(\"✅ GPU memory cleared\")\n",
+ "!nvidia-smi"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Auto-shutdown\n",
+ "if CONFIG[\"auto_shutdown\"]:\n",
+ "    print(f\"\\n⏰ Auto-shutdown ({CONFIG['auto_shutdown']}) in 5 seconds...\")\n",
+ "    print(\"  Interrupt the cell execution to cancel.\")\n",
+ "    time.sleep(5)\n",
+ "    shutdown_colab(CONFIG[\"auto_shutdown\"])\n",
+ "else:\n",
+ "    print(\"ℹ️ Auto-shutdown disabled\")"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "---\n",
+ "## 🧪 Utilities"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# (Optional) Resume training from the latest checkpoint\n",
+ "# trainer.train(resume_from_checkpoint=True)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# (Optional) Extra tests with the trained adapter\n",
+ "# from peft import PeftModel\n",
+ "# \n",
+ "# base = AutoModelForCausalLM.from_pretrained(CONFIG[\"base_model\"], ...)\n",
+ "# model = PeftModel.from_pretrained(base, CONFIG[\"output_repo\"])"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Release the GPU only (manual)\n",
+ "# from google.colab import runtime\n",
+ "# runtime.unassign()"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Terminate the session completely (manual)\n",
+ "# import os\n",
+ "# os._exit(0)"
+ ]
+ }
+ ],
+ "metadata": {
+ "accelerator": "GPU",
+ "colab": {
+ "gpuType": "A100",
+ "provenance": []
+ },
+ "kernelspec": {
+ "display_name": "Python 3",
+ "name": "python3"
+ },
+ "language_info": {
+ "name": "python",
+ "version": "3.10.12"
+ }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 0
+ }
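
For reference, the "auto-resume" named in the commit title maps onto two pieces of the notebook: `hub_strategy="checkpoint"` (the latest checkpoint is also pushed to the Hub) and re-running `trainer.train(resume_from_checkpoint=True)` from the utilities section. Below is a minimal, hedged sketch (not part of the committed notebook) of how the training cell could pick up the newest local checkpoint automatically; it assumes the `trainer` and the `./yeji-qlora-v1` output directory defined above, and uses the stock `get_last_checkpoint` helper from `transformers.trainer_utils`.

```python
# Sketch only: resume from the newest local checkpoint if one exists,
# otherwise start a fresh run.
import os
from transformers.trainer_utils import get_last_checkpoint

output_dir = "./yeji-qlora-v1"  # matches sft_config.output_dir above

# get_last_checkpoint() scans output_dir for "checkpoint-*" folders and
# returns the most recent one, or None if there are no checkpoints yet.
last_checkpoint = get_last_checkpoint(output_dir) if os.path.isdir(output_dir) else None

if last_checkpoint is not None:
    print(f"Resuming from {last_checkpoint}")
    train_result = trainer.train(resume_from_checkpoint=last_checkpoint)
else:
    print("No checkpoint found, starting a fresh run")
    train_result = trainer.train()
```

This keeps the interrupted-session workflow simple: reconnect the runtime, re-run the setup cells, and the same training cell either continues from `save_steps`-interval checkpoints or starts over if none exist.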