# llm-pipeline

> Paper: *Language Models are Super Mario: Absorbing Abilities from Homologous Models as a Free Lunch* (arXiv 2311.03099)
An end-to-end automated pipeline for discovering, merging, evaluating, and fine-tuning open-source LLMs, with full MLOps integration.
```
┌─────────┬─────────────┬──────────────────────────────┐
│ Phase 1 │ Discovery   │ Scan HF Hub → filter → rank  │
├─────────┼─────────────┼──────────────────────────────┤
│ Phase 2 │ Merging     │ SLERP · TIES · DARE · TA     │
├─────────┼─────────────┼──────────────────────────────┤
│ Phase 3 │ Evaluation  │ ROUGE · BERTScore · Judge    │
├─────────┼─────────────┼──────────────────────────────┤
│ Phase 4 │ Fine-Tuning │ LoRA/QLoRA · Synthetic Data  │
├─────────┼─────────────┼──────────────────────────────┤
│ Phase 5 │ MLOps       │ vLLM · W&B · MLflow · HF Hub │
└─────────┴─────────────┴──────────────────────────────┘
              └──────────────────────────┘
               Iterative improvement loop
```
## Quick Start

```bash
git clone https://github.com/YOUR_USERNAME/llm-pipeline.git
cd llm-pipeline
pip install -r requirements.txt

export HF_TOKEN="hf_..."        # Hugging Face token
export WANDB_API_KEY="..."      # W&B token (optional)
```

```bash
# Full pipeline for reasoning models
python pipeline.py run reasoning

# With iterative improvement loop
python pipeline.py run code --loop --max-iter 3

# Custom merge strategy
python pipeline.py run medical --strategy breadcrumbs --top-k 3
```
## Phase 1: Discovery

```bash
python -m phase1_discovery.discover run reasoning --top-k 5
python -m phase1_discovery.discover run code --perplexity   # adds perplexity probe
python -m phase1_discovery.discover run --all               # all categories
```
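To illustrate the filter-and-rank idea behind discovery, here is a minimal sketch in plain Python. The `Candidate` type, the scoring weights, and `rank_candidates` are illustrative assumptions, not the actual `discover.py` API.

```python
# Sketch: score HF Hub candidates by category-keyword match, then by a
# simple popularity signal (downloads plus weighted likes).
from dataclasses import dataclass

@dataclass
class Candidate:
    model_id: str
    downloads: int
    likes: int

def rank_candidates(candidates, keywords, top_k=5):
    """Return the top-k candidates whose id matches a category keyword."""
    def score(c):
        hits = sum(kw in c.model_id.lower() for kw in keywords)
        return (hits, c.downloads + 10 * c.likes)
    matching = [c for c in candidates if score(c)[0] > 0]
    return sorted(matching, key=score, reverse=True)[:top_k]
```

In the real pipeline the candidate list would come from a Hub crawl; here any iterable of `Candidate` objects works.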
## Phase 2: Merging

```bash
# TIES merge (recommended for union)
python -m phase2_merging.merge run ties \
    --model mistralai/Mistral-7B-v0.3 \
    --model teknium/OpenHermes-2.5-Mistral-7B \
    --base mistralai/Mistral-7B-v0.3

# SLERP interpolation
python -m phase2_merging.merge run slerp \
    --model model_a --model model_b --alpha 0.6

# Breadcrumbs (conservative / intersection)
python -m phase2_merging.merge run breadcrumbs \
    --model base --model ft_a --model ft_b --density 0.7

# Inspect architecture (DOM-tree view)
python -m phase2_merging.merge run ties \
    --introspect mistralai/Mistral-7B-v0.3
```
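The actual merges are performed by mergekit, but the math behind the `slerp` strategy (and its `--alpha` knob) is easy to show. This is an illustrative sketch on flat NumPy arrays standing in for weight tensors, not the production code path.

```python
# SLERP: spherical linear interpolation between two flattened weight
# tensors. alpha=0 returns a, alpha=1 returns b; intermediate alphas
# move along the great circle between their directions.
import numpy as np

def slerp(a: np.ndarray, b: np.ndarray, alpha: float, eps: float = 1e-8) -> np.ndarray:
    a_n = a / (np.linalg.norm(a) + eps)
    b_n = b / (np.linalg.norm(b) + eps)
    dot = np.clip(np.dot(a_n, b_n), -1.0, 1.0)
    omega = np.arccos(dot)              # angle between the two tensors
    if omega < eps:                     # nearly colinear: fall back to lerp
        return (1 - alpha) * a + alpha * b
    so = np.sin(omega)
    return (np.sin((1 - alpha) * omega) / so) * a + (np.sin(alpha * omega) / so) * b
```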
## Phase 3: Evaluation

```bash
# Evaluate on SQuAD v2
python -m phase3_evaluation.evaluate run ./merged_model --dataset squad --n-samples 200

# Compare multiple models
python -m phase3_evaluation.evaluate run model_a \
    --compare model_b --compare model_c

# Disable LLM judge (faster)
python -m phase3_evaluation.evaluate run ./merged --no-judge
```
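For intuition, here is the kind of unigram-overlap score the evaluation phase gets from the `rouge-score` package. This is a simplified sketch (no stemming, whitespace tokenization only), not the library's implementation.

```python
# ROUGE-1 F1: clipped unigram overlap between candidate and reference,
# combined into an F1 score.
from collections import Counter

def rouge1_f1(candidate: str, reference: str) -> float:
    cand = Counter(candidate.lower().split())
    ref = Counter(reference.lower().split())
    overlap = sum((cand & ref).values())     # clipped unigram matches
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)
```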
## Phase 4: Fine-Tuning

```bash
# Fine-tune targeting specific gaps
python -m phase4_finetuning.finetune run \
    --base mistralai/Mistral-7B-v0.3 \
    --gap factual_recall --gap numerical \
    --n-syn 100 --output ./adapters/run1

# Use existing synthetic data
python -m phase4_finetuning.finetune run \
    --base mistralai/Mistral-7B-v0.3 \
    --data-path ./artifacts/data/synthetic_data.jsonl

# Iterative loop
python -m phase4_finetuning.finetune run --loop --max-iter 3
```
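A 4-bit NF4 QLoRA setup in the spirit of `finetune.py` looks roughly like the fragment below. The actual hyperparameters live in `configs/settings.py`; the rank, alpha, and target modules here are illustrative values, not the pipeline's defaults. Requires `transformers`, `peft`, and `bitsandbytes`.

```python
# Config fragment: 4-bit NF4 quantization plus a LoRA adapter spec,
# the combination commonly called QLoRA.
import torch
from transformers import BitsAndBytesConfig
from peft import LoraConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",              # 4-bit NormalFloat, as in the loop diagram
    bnb_4bit_compute_dtype=torch.bfloat16,
    bnb_4bit_use_double_quant=True,
)

lora_config = LoraConfig(
    r=16,                                   # adapter rank (illustrative)
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
```

Both objects are then passed to the model loader and the TRL `SFTTrainer`, respectively.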
## Phase 5: MLOps

```bash
# Start vLLM server (OpenAI-compatible)
python -m phase5_mlops.serve serve ./merged_model --port 8000

# Benchmark throughput
python -m phase5_mlops.serve serve ./merged_model --bench

# Track experiment
python -m phase5_mlops.serve track my-run \
    --model ./merged --strategy ties \
    --rouge1 0.42 --bertscore 0.71 --judge 7.3

# Deploy to HF Hub
python -m phase5_mlops.serve deploy ./merged_model \
    --repo your-username/my-merged-7b

# View leaderboard
python -m phase5_mlops.serve leaderboard
```
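Because the vLLM server is OpenAI-compatible, any OpenAI-style client can talk to it. The sketch below builds a chat-completions payload by hand; the model name, prompt, and port are placeholders assumed from the serve command above.

```python
# Build a chat-completions request body for the OpenAI-compatible API.
import json

def build_chat_payload(model: str, prompt: str, max_tokens: int = 256) -> dict:
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
        "temperature": 0.7,
    }

payload = build_chat_payload("./merged_model", "Summarize TIES merging in one line.")
body = json.dumps(payload)

# With the server running, POST it to the endpoint, e.g.:
#   requests.post("http://localhost:8000/v1/chat/completions",
#                 data=body, headers={"Content-Type": "application/json"})
```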
## Project Structure

```
llm-pipeline/
├── pipeline.py              # Master orchestrator
├── requirements.txt
├── configs/
│   └── settings.py          # All config: paths, scale, hyperparams
├── utils/
│   └── logger.py            # Centralized logging
├── phase1_discovery/
│   └── discover.py          # HF Hub crawler + ranking
├── phase2_merging/
│   └── merge.py             # Merging + architecture introspection
├── phase3_evaluation/
│   └── evaluate.py          # Multi-metric eval + gap detection
├── phase4_finetuning/
│   └── finetune.py          # QLoRA + synthetic data + loop
├── phase5_mlops/
│   └── serve.py             # vLLM + W&B + MLflow + HF deploy
└── artifacts/               # Auto-created at runtime
    ├── models/
    ├── merges/
    ├── adapters/
    ├── evaluations/
    └── data/
```
## Configuration

Edit `configs/settings.py` to customize:

```python
# Scale preset (currently: medium = 7B, single A100)
SCALE = "medium"

# Categories and keywords for discovery
HF_MODEL_CATEGORIES = {
    "code": ["starcoder", "codellama", "deepseek-coder"],
    "reasoning": ["mistral", "llama", "qwen"],
    ...
}

# Fine-tuning defaults
FT_BASE_MODEL = "mistralai/Mistral-7B-v0.3"
FT_EPOCHS = 3
FT_LR = 2e-4

# vLLM
VLLM_GPU_MEMORY_UTIL = 0.90
VLLM_MAX_MODEL_LEN = 4096
```
## Merge Strategies

| Strategy | Type | Best For |
|---|---|---|
| `slerp` | Union | Two-model smooth interpolation |
| `ties` | Union | Multi-model, removes conflicting deltas |
| `dare_ties` | Union | Aggressive sparsification before TIES |
| `task_arithmetic` | Union | Adding task-specific capabilities |
| `breadcrumbs` | Intersection | Conservative, safety-preserving merge |
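The three TIES steps (trim, elect sign, disjoint merge) can be illustrated on toy "task vectors" (fine-tuned weights minus base weights). This is a conceptual sketch in NumPy; the production merge is done by mergekit and this function is not part of the pipeline.

```python
# Toy TIES merge over per-parameter delta vectors.
import numpy as np

def ties_merge(deltas: list, density: float = 0.5) -> np.ndarray:
    deltas = [d.astype(float).copy() for d in deltas]
    # 1) Trim: keep only the top `density` fraction of each delta by magnitude.
    for d in deltas:
        k = int(np.ceil(density * d.size))
        cutoff = np.sort(np.abs(d))[::-1][k - 1]
        d[np.abs(d) < cutoff] = 0.0
    stacked = np.stack(deltas)
    # 2) Elect sign: majority sign (by summed mass) per parameter.
    elected = np.sign(stacked.sum(axis=0))
    # 3) Disjoint merge: average only deltas agreeing with the elected sign.
    agree = (np.sign(stacked) == elected) & (stacked != 0)
    counts = np.maximum(agree.sum(axis=0), 1)
    return (stacked * agree).sum(axis=0) / counts
```

Parameters where the two models pull in exactly opposite directions cancel (elected sign 0), which is the "removes conflicting deltas" behavior noted in the table.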
## Evaluation Metrics

| Metric | Tool | Threshold |
|---|---|---|
| ROUGE-1/2/L | `rouge-score` | ≥ 0.30 |
| BERTScore F1 | `bert-score` | ≥ 0.50 |
| Faithfulness | `cross-encoder/nli-deberta-v3-small` | ≥ 0.50 |
| Hallucination | Heuristic + NLI | < 10% |
| Judge Score | LLM-as-Judge | ≥ 5.0/10 |
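Gap detection in Phase 3 amounts to comparing scores against these floors. A minimal sketch, assuming a flat score dict; the metric keys and function name are illustrative, not the actual `evaluate.py` interface.

```python
# Flag every metric that falls below its minimum threshold.
THRESHOLDS = {
    "rouge1": 0.30,
    "bertscore_f1": 0.50,
    "faithfulness": 0.50,
    "judge": 5.0,
}

def detect_gaps(scores: dict) -> list:
    """Return the names of metrics below threshold (missing metrics count as 0)."""
    return [name for name, floor in THRESHOLDS.items()
            if scores.get(name, 0.0) < floor]
```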
## Iterative Improvement Loop

```
┌─ Evaluate model ───────────────────────────────────┐
│ ROUGE / BERTScore / Judge / Faithfulness           │
└───────────────────┬────────────────────────────────┘
                    │ gaps detected?
                    ▼
┌─ Detect knowledge gaps ────────────────────────────┐
│ factual_recall / numerical / code / reasoning      │
└───────────────────┬────────────────────────────────┘
                    ▼
┌─ Generate synthetic data ──────────────────────────┐
│ LLM generates targeted QA pairs per gap            │
└───────────────────┬────────────────────────────────┘
                    ▼
┌─ QLoRA fine-tune ──────────────────────────────────┐
│ Response-only loss, 4-bit NF4, paged_adamw         │
└───────────────────┬────────────────────────────────┘
                    └──────────► repeat until target ROUGE or max_iter
```
## Hardware Requirements

| Scale | GPU | RAM | Notes |
|---|---|---|---|
| Small (1–3B) | Any CUDA GPU | 16GB | CPU possible but slow |
| Medium (7B) | A100 / H100 40GB | 32GB | Recommended |
| Large (13B+) | 2× A100 80GB | 64GB | Set `tensor_parallel=2` |
## Key Dependencies

- `transformers` – model loading
- `peft` – LoRA/QLoRA adapters
- `trl` – SFTTrainer
- `mergekit` – TIES, DARE, SLERP
- `vllm` – high-throughput inference
- `bert-score` – semantic evaluation
- `wandb` + `mlflow` – experiment tracking