mlabonne committed on
Commit 1bea093 · 1 Parent(s): b49d274

add SFT script with LFM2.5-1.2B

Files changed (2)
  1. README.md +35 -5
  2. sft-lfm2.5.py +512 -0
README.md CHANGED
@@ -21,6 +21,21 @@ UV scripts for fine-tuning LLMs and VLMs using [Unsloth](https://github.com/unsl
 
 ## Data Format
 
+### LLM Fine-tuning (SFT)
+
+Requires conversation data in ShareGPT or similar format:
+
+```python
+{
+    "messages": [
+        {"from": "human", "value": "What is the capital of France?"},
+        {"from": "gpt", "value": "The capital of France is Paris."}
+    ]
+}
+```
+
+The script auto-converts common formats (ShareGPT, Alpaca, etc.) via `standardize_data_formats`. See [mlabonne/FineTome-100k](https://huggingface.co/datasets/mlabonne/FineTome-100k) for a working dataset example.
+
 ### VLM Fine-tuning
 
 Requires `images` and `messages` columns:
@@ -63,7 +78,21 @@ Use `--text-column` if your column has a different name.
 View available options for any script:
 
 ```bash
-uv run https://huggingface.co/datasets/uv-scripts/unsloth-jobs/raw/main/sft-qwen3-vl.py --help
+uv run https://huggingface.co/datasets/uv-scripts/unsloth-jobs/raw/main/sft-lfm2.5.py --help
+```
+
+### LLM fine-tuning (recommended starting point)
+
+Fine-tune [LFM2.5-1.2B-Instruct](https://huggingface.co/LiquidAI/LFM2.5-1.2B-Instruct), a compact and efficient text model from Liquid AI:
+
+```bash
+hf jobs uv run \
+  https://huggingface.co/datasets/uv-scripts/unsloth-jobs/raw/main/sft-lfm2.5.py \
+  --flavor a10g-small --secrets HF_TOKEN --timeout 4h \
+  -- --dataset mlabonne/FineTome-100k \
+  --num-epochs 1 \
+  --eval-split 0.2 \
+  --output-repo your-username/lfm-finetuned
 ```
 
 ### VLM fine-tuning
@@ -94,17 +123,18 @@ hf jobs uv run \
 
 ```bash
 hf jobs uv run \
-  https://huggingface.co/datasets/uv-scripts/unsloth-jobs/raw/main/sft-qwen3-vl.py \
-  --flavor a100-large --secrets HF_TOKEN \
-  -- --dataset your-username/dataset \
+  https://huggingface.co/datasets/uv-scripts/unsloth-jobs/raw/main/sft-lfm2.5.py \
+  --flavor a10g-small --secrets HF_TOKEN \
+  -- --dataset mlabonne/FineTome-100k \
   --trackio-space your-username/trackio \
-  --output-repo your-username/my-model
+  --output-repo your-username/lfm-finetuned
 ```
 
 ## Scripts
 
 | Script | Base Model | Task |
 |--------|------------|------|
+| [`sft-lfm2.5.py`](sft-lfm2.5.py) ⭐ | LFM2.5-1.2B-Instruct | LLM fine-tuning (recommended) |
 | [`sft-qwen3-vl.py`](sft-qwen3-vl.py) | Qwen3-VL-8B | VLM fine-tuning |
 | [`sft-gemma3-vlm.py`](sft-gemma3-vlm.py) | Gemma 3 4B | VLM fine-tuning (smaller) |
 | [`continued-pretraining.py`](continued-pretraining.py) | Qwen3-0.6B | Domain adaptation |
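The ShareGPT records shown in the README use `from`/`value` keys, while chat templates expect `role`/`content` messages; Unsloth's `standardize_data_formats` performs that normalization (among other formats). A minimal sketch of the mapping for this one case, in plain Python — the `to_chatml` helper and role table are illustrative, not part of this repo:

```python
# Illustrative only: maps ShareGPT "from"/"value" keys to the
# "role"/"content" schema that chat templates consume. Unsloth's
# standardize_data_formats handles this (and more formats) for real.
ROLE_MAP = {"system": "system", "human": "user", "gpt": "assistant"}

def to_chatml(sharegpt_messages):
    return [
        {"role": ROLE_MAP[m["from"]], "content": m["value"]}
        for m in sharegpt_messages
    ]

example = [
    {"from": "human", "value": "What is the capital of France?"},
    {"from": "gpt", "value": "The capital of France is Paris."},
]
print(to_chatml(example))
```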
sft-lfm2.5.py ADDED
@@ -0,0 +1,512 @@
# /// script
# requires-python = ">=3.10"
# dependencies = [
#     "unsloth",
#     "datasets",
#     "trl==0.22.2",
#     "huggingface_hub[hf_transfer]",
#     "trackio",
#     "tensorboard",
#     "transformers==4.57.3",
# ]
# ///
"""
Fine-tune LFM2.5-1.2B-Instruct (Liquid Foundation Model) using Unsloth optimizations.

Uses Unsloth for ~60% less VRAM and 2x faster training.
Supports epoch-based or step-based training with an optional eval split.

Epoch-based training (recommended for full datasets):
    uv run sft-lfm2.5.py \
        --dataset mlabonne/FineTome-100k \
        --num-epochs 1 \
        --eval-split 0.2 \
        --output-repo your-username/lfm-finetuned

Run on HF Jobs (1 epoch with eval):
    hf jobs uv run sft-lfm2.5.py \
        --flavor a10g-small --secrets HF_TOKEN --timeout 4h \
        -- --dataset mlabonne/FineTome-100k \
        --num-epochs 1 \
        --eval-split 0.2 \
        --output-repo your-username/lfm-finetuned

Step-based training (for quick tests):
    uv run sft-lfm2.5.py \
        --dataset mlabonne/FineTome-100k \
        --max-steps 500 \
        --output-repo your-username/lfm-finetuned
"""

import argparse
import logging
import os
import sys
import time

# Force unbuffered output for HF Jobs logs
sys.stdout.reconfigure(line_buffering=True)
sys.stderr.reconfigure(line_buffering=True)

logging.basicConfig(
    level=logging.INFO,
    format="%(asctime)s - %(levelname)s - %(message)s",
)
logger = logging.getLogger(__name__)

def check_cuda():
    """Check CUDA availability and exit if not available."""
    import torch

    if not torch.cuda.is_available():
        logger.error("CUDA is not available. This script requires a GPU.")
        logger.error("Run on a machine with a CUDA-capable GPU or use HF Jobs:")
        logger.error("  hf jobs uv run sft-lfm2.5.py --flavor a10g-small ...")
        sys.exit(1)
    logger.info(f"CUDA available: {torch.cuda.get_device_name(0)}")

def parse_args():
    parser = argparse.ArgumentParser(
        description="Fine-tune LFM2.5-1.2B-Instruct with Unsloth",
        formatter_class=argparse.RawDescriptionHelpFormatter,
        epilog="""
Examples:
  # Quick test run
  uv run sft-lfm2.5.py \\
      --dataset mlabonne/FineTome-100k \\
      --max-steps 50 \\
      --output-repo username/lfm-test

  # Full training with eval
  uv run sft-lfm2.5.py \\
      --dataset mlabonne/FineTome-100k \\
      --num-epochs 1 \\
      --eval-split 0.2 \\
      --output-repo username/lfm-finetuned

  # With Trackio monitoring
  uv run sft-lfm2.5.py \\
      --dataset mlabonne/FineTome-100k \\
      --num-epochs 1 \\
      --output-repo username/lfm-finetuned \\
      --trackio-space username/trackio
""",
    )

    # Model and data
    parser.add_argument(
        "--base-model",
        default="LiquidAI/LFM2.5-1.2B-Instruct",
        help="Base model (default: LiquidAI/LFM2.5-1.2B-Instruct)",
    )
    parser.add_argument(
        "--dataset",
        required=True,
        help="Dataset in ShareGPT/conversation format (e.g., mlabonne/FineTome-100k)",
    )
    parser.add_argument(
        "--output-repo",
        required=True,
        help="HF Hub repo to push model to (e.g., 'username/lfm-finetuned')",
    )

    # Training config
    parser.add_argument(
        "--num-epochs",
        type=float,
        default=None,
        help="Number of epochs (default: None). Use instead of --max-steps.",
    )
    parser.add_argument(
        "--max-steps",
        type=int,
        default=None,
        help="Training steps (default: None). Use for quick tests or streaming.",
    )
    parser.add_argument(
        "--batch-size",
        type=int,
        default=2,
        help="Per-device batch size (default: 2)",
    )
    parser.add_argument(
        "--gradient-accumulation",
        type=int,
        default=4,
        help="Gradient accumulation steps (default: 4). Effective batch = batch-size * this",
    )
    parser.add_argument(
        "--learning-rate",
        type=float,
        default=2e-4,
        help="Learning rate (default: 2e-4)",
    )
    parser.add_argument(
        "--max-seq-length",
        type=int,
        default=2048,
        help="Maximum sequence length (default: 2048)",
    )

    # LoRA config
    parser.add_argument(
        "--lora-r",
        type=int,
        default=16,
        help="LoRA rank (default: 16). Higher = more capacity but more VRAM",
    )
    parser.add_argument(
        "--lora-alpha",
        type=int,
        default=16,
        help="LoRA alpha (default: 16). Same as r per Unsloth recommendation",
    )

    # Logging
    parser.add_argument(
        "--trackio-space",
        default=None,
        help="HF Space for Trackio dashboard (e.g., 'username/trackio')",
    )
    parser.add_argument(
        "--run-name",
        default=None,
        help="Custom run name for Trackio (default: auto-generated)",
    )
    parser.add_argument(
        "--save-local",
        default="lfm-output",
        help="Local directory to save model (default: lfm-output)",
    )

    # Evaluation and data control
    parser.add_argument(
        "--eval-split",
        type=float,
        default=0.0,
        help="Fraction of data for evaluation (0.0-0.5). Default: 0.0 (no eval)",
    )
    parser.add_argument(
        "--num-samples",
        type=int,
        default=None,
        help="Limit samples (default: None = use all)",
    )
    parser.add_argument(
        "--seed",
        type=int,
        default=3407,
        help="Random seed for reproducibility (default: 3407)",
    )
    parser.add_argument(
        "--merge-model",
        action="store_true",
        default=False,
        help="Merge LoRA weights into base model before uploading (larger file, easier to use)",
    )

    return parser.parse_args()

def main():
    args = parse_args()

    # Validate epochs/steps configuration
    if not args.num_epochs and not args.max_steps:
        args.num_epochs = 1
        logger.info("Using default --num-epochs=1")

    # Determine training duration display
    if args.num_epochs:
        duration_str = f"{args.num_epochs} epoch(s)"
    else:
        duration_str = f"{args.max_steps} steps"

    print("=" * 70)
    print("LFM2.5-1.2B Fine-tuning with Unsloth")
    print("=" * 70)
    print("\nConfiguration:")
    print(f"  Base model: {args.base_model}")
    print(f"  Dataset: {args.dataset}")
    print(f"  Num samples: {args.num_samples or 'all'}")
    print(f"  Eval split: {args.eval_split if args.eval_split > 0 else '(disabled)'}")
    print(f"  Seed: {args.seed}")
    print(f"  Training: {duration_str}")
    print(f"  Batch size: {args.batch_size} x {args.gradient_accumulation} = {args.batch_size * args.gradient_accumulation}")
    print(f"  Learning rate: {args.learning_rate}")
    print(f"  LoRA rank: {args.lora_r}")
    print(f"  Max seq length: {args.max_seq_length}")
    print(f"  Output repo: {args.output_repo}")
    print(f"  Trackio space: {args.trackio_space or '(not configured)'}")
    print()

    # Check CUDA before heavy imports
    check_cuda()

    # Enable fast transfers
    os.environ["HF_HUB_ENABLE_HF_TRANSFER"] = "1"

    # Set Trackio space if provided
    if args.trackio_space:
        os.environ["TRACKIO_SPACE_ID"] = args.trackio_space
        logger.info(f"Trackio dashboard: https://huggingface.co/spaces/{args.trackio_space}")

    # Import heavy dependencies
    from unsloth import FastLanguageModel
    from unsloth.chat_templates import standardize_data_formats, train_on_responses_only
    from datasets import load_dataset
    from trl import SFTTrainer, SFTConfig
    from huggingface_hub import login

    # Login to Hub
    token = os.environ.get("HF_TOKEN") or os.environ.get("hfjob")
    if token:
        login(token=token)
        logger.info("Logged in to Hugging Face Hub")
    else:
        logger.warning("HF_TOKEN not set - model upload may fail")

    # 1. Load model
    print("\n[1/5] Loading model...")
    start = time.time()

    model, tokenizer = FastLanguageModel.from_pretrained(
        model_name=args.base_model,
        max_seq_length=args.max_seq_length,
        load_in_4bit=False,
        load_in_8bit=False,
        load_in_16bit=True,
        full_finetuning=False,
    )

    # Add LoRA adapters with LFM-specific target modules
    model = FastLanguageModel.get_peft_model(
        model,
        r=args.lora_r,
        target_modules=["q_proj", "k_proj", "v_proj", "out_proj", "in_proj", "w1", "w2", "w3"],
        lora_alpha=args.lora_alpha,
        lora_dropout=0,
        bias="none",
        use_gradient_checkpointing="unsloth",
        random_state=args.seed,
        use_rslora=False,
        loftq_config=None,
    )
    print(f"Model loaded in {time.time() - start:.1f}s")

    # 2. Load and prepare dataset
    print("\n[2/5] Loading dataset...")
    start = time.time()

    dataset = load_dataset(args.dataset, split="train")
    print(f"  Dataset has {len(dataset)} total samples")

    if args.num_samples:
        dataset = dataset.select(range(min(args.num_samples, len(dataset))))
        print(f"  Limited to {len(dataset)} samples")

    # Auto-detect and normalize conversation column
    for col in ["messages", "conversations", "conversation"]:
        if col in dataset.column_names and isinstance(dataset[0][col], list):
            if col != "conversations":
                dataset = dataset.rename_column(col, "conversations")
            break
    dataset = standardize_data_formats(dataset)

    # Apply chat template
    def formatting_prompts_func(examples):
        texts = tokenizer.apply_chat_template(
            examples["conversations"],
            tokenize=False,
            add_generation_prompt=False,
        )
        # Remove BOS token to avoid duplicates (skip if the tokenizer has none)
        bos = tokenizer.bos_token
        return {"text": [x.removeprefix(bos) if bos else x for x in texts]}

    dataset = dataset.map(formatting_prompts_func, batched=True)

    # Split for evaluation if requested
    if args.eval_split > 0:
        split = dataset.train_test_split(test_size=args.eval_split, seed=args.seed)
        train_data = split["train"]
        eval_data = split["test"]
        print(f"  Train: {len(train_data)} samples, Eval: {len(eval_data)} samples")
    else:
        train_data = dataset
        eval_data = None

    print(f"  Dataset ready in {time.time() - start:.1f}s")

    # 3. Configure trainer
    print("\n[3/5] Configuring trainer...")

    # Calculate steps per epoch for logging/eval intervals
    effective_batch = args.batch_size * args.gradient_accumulation
    num_samples = len(train_data)
    steps_per_epoch = num_samples // effective_batch

    # Determine run name and logging steps
    if args.run_name:
        run_name = args.run_name
    elif args.num_epochs:
        run_name = f"lfm2.5-sft-{args.num_epochs}ep"
    else:
        run_name = f"lfm2.5-sft-{args.max_steps}steps"

    if args.num_epochs:
        logging_steps = max(1, steps_per_epoch // 10)
        save_steps = max(1, steps_per_epoch // 4)
    else:
        logging_steps = max(1, args.max_steps // 20)
        save_steps = max(1, args.max_steps // 4)

    # Determine reporting backend
    if args.trackio_space:
        report_to = ["tensorboard", "trackio"]
    else:
        report_to = ["tensorboard"]

    training_config = SFTConfig(
        output_dir=args.save_local,
        dataset_text_field="text",
        per_device_train_batch_size=args.batch_size,
        gradient_accumulation_steps=args.gradient_accumulation,
        warmup_steps=5,
        num_train_epochs=args.num_epochs if args.num_epochs else 1,
        max_steps=args.max_steps if args.max_steps else -1,
        learning_rate=args.learning_rate,
        logging_steps=logging_steps,
        optim="adamw_8bit",
        weight_decay=0.01,
        lr_scheduler_type="linear",
        seed=args.seed,
        max_length=args.max_seq_length,
        report_to=report_to,
        run_name=run_name,
        push_to_hub=True,
        hub_model_id=args.output_repo,
        save_steps=save_steps,
        save_total_limit=3,
    )

    # Add evaluation config if eval is enabled
    if eval_data:
        if args.num_epochs:
            training_config.eval_strategy = "epoch"
            print("  Evaluation enabled: every epoch")
        else:
            training_config.eval_strategy = "steps"
            training_config.eval_steps = max(1, args.max_steps // 5)
            print(f"  Evaluation enabled: every {training_config.eval_steps} steps")

    trainer = SFTTrainer(
        model=model,
        tokenizer=tokenizer,
        train_dataset=train_data,
        eval_dataset=eval_data,
        args=training_config,
    )

    # Train on responses only (mask user inputs)
    trainer = train_on_responses_only(
        trainer,
        instruction_part="<|im_start|>user\n",
        response_part="<|im_start|>assistant\n",
    )

    # 4. Train
    print(f"\n[4/5] Training for {duration_str}...")
    if args.num_epochs:
        print(f"  (~{steps_per_epoch} steps/epoch, {int(steps_per_epoch * args.num_epochs)} total steps)")
    start = time.time()

    train_result = trainer.train()

    train_time = time.time() - start
    total_steps = trainer.state.global_step  # actual optimizer steps taken
    print(f"\nTraining completed in {train_time / 60:.1f} minutes")
    print(f"  Speed: {total_steps / train_time:.2f} steps/s")

    # Print training metrics
    train_loss = train_result.metrics.get("train_loss")
    if train_loss:
        print(f"  Final train loss: {train_loss:.4f}")

    # Print eval results if eval was enabled
    if eval_data:
        print("\nRunning final evaluation...")
        try:
            eval_results = trainer.evaluate()
            eval_loss = eval_results.get("eval_loss")
            if eval_loss:
                print(f"  Final eval loss: {eval_loss:.4f}")
                if train_loss:
                    ratio = eval_loss / train_loss
                    if ratio > 1.5:
                        print(f"  Warning: Eval loss is {ratio:.1f}x train loss - possible overfitting")
                    else:
                        print(f"  Eval/train ratio: {ratio:.2f} - model generalizes well")
        except Exception as e:
            print(f"  Warning: Final evaluation failed: {e}")
            print("  Continuing to save model...")

    # 5. Save and push
    print("\n[5/5] Saving model...")

    if args.merge_model:
        print("Merging LoRA weights into base model...")
        print(f"\nPushing merged model to {args.output_repo}...")
        model.push_to_hub_merged(
            args.output_repo,
            tokenizer=tokenizer,
            save_method="merged_16bit",
        )
        print(f"Merged model available at: https://huggingface.co/{args.output_repo}")
    else:
        model.save_pretrained(args.save_local)
        tokenizer.save_pretrained(args.save_local)
        print(f"Saved locally to {args.save_local}/")

        print(f"\nPushing adapter to {args.output_repo}...")
        model.push_to_hub(args.output_repo, tokenizer=tokenizer)
        print(f"Adapter available at: https://huggingface.co/{args.output_repo}")

    print("\n" + "=" * 70)
    print("Done!")
    print("=" * 70)

if __name__ == "__main__":
    if len(sys.argv) == 1:
        print("=" * 70)
        print("LFM2.5-1.2B Fine-tuning with Unsloth")
        print("=" * 70)
        print("\nFine-tune Liquid Foundation Model with optional train/eval split.")
        print("\nFeatures:")
        print("  - ~60% less VRAM with Unsloth optimizations")
        print("  - 2x faster training vs standard methods")
        print("  - Epoch-based or step-based training")
        print("  - Optional evaluation to detect overfitting")
        print("  - Trains only on assistant responses (masked user inputs)")
        print("\nEpoch-based training:")
        print("\n  uv run sft-lfm2.5.py \\")
        print("    --dataset mlabonne/FineTome-100k \\")
        print("    --num-epochs 1 \\")
        print("    --eval-split 0.2 \\")
        print("    --output-repo your-username/lfm-finetuned")
        print("\nHF Jobs example:")
        print("\n  hf jobs uv run sft-lfm2.5.py \\")
        print("    --flavor a10g-small --secrets HF_TOKEN --timeout 4h \\")
        print("    -- --dataset mlabonne/FineTome-100k \\")
        print("    --num-epochs 1 \\")
        print("    --eval-split 0.2 \\")
        print("    --output-repo your-username/lfm-finetuned")
        print("\nFor full help: uv run sft-lfm2.5.py --help")
        print("=" * 70)
        sys.exit(0)

    main()
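The step arithmetic the script prints before training can be checked by hand, since it mirrors two one-line formulas. A quick sanity check assuming the default `--batch-size 2` and `--gradient-accumulation 4`, with a hypothetical 80,000-sample train side (roughly FineTome-100k after `--eval-split 0.2`):

```python
# Mirrors the script's step math: effective batch = per-device batch * grad accum.
batch_size = 2
gradient_accumulation = 4
effective_batch = batch_size * gradient_accumulation  # 8

num_samples = 80_000  # assumed train-split size, for illustration
steps_per_epoch = num_samples // effective_batch
print(steps_per_epoch)  # 10000 optimizer steps for one epoch

# Logging/saving intervals, derived the same way the script derives them:
logging_steps = max(1, steps_per_epoch // 10)  # 1000
save_steps = max(1, steps_per_epoch // 4)      # 2500
```

So the a10g-small run in the README logs roughly every 1,000 steps and checkpoints every 2,500, which lines up with the 4-hour job timeout for a single epoch.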