# Humigence Command - Ready to Use! 🎉

✅ Yes, you can launch this with `humigence`!

The Humigence training pipeline has been refactored and is now ready to use via the `humigence` command.
## 🎯 How to Use

### Launch Humigence CLI

```bash
humigence
```
### What You'll See

```
──────────────── Humigence - Your AI. Your pipeline. Zero code. ────────────────
A complete MLOps suite built for makers, teams, and enterprises.

Options:
1. Supervised Fine-Tuning ✅
2. RAG Implementation (coming soon)
3. EnterpriseGPT (coming soon)
4. Batch Inference (coming soon)
5. Context Length (coming soon)
6. Exit

Select an option:
```
### Training Options
- Select "1. Supervised Fine-Tuning"
- Choose Setup Mode: Basic or Advanced
- Select Model: TinyLlama, Qwen, Phi-2, etc.
- Choose Training Recipe: LoRA, QLoRA, etc.
- Select Dataset: Your available datasets
- Choose Training Mode: Multi-GPU or Single-GPU
- Confirm Configuration: Review and start training
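The wizard's selections map naturally onto a JSON training config. The exact schema of Humigence's `config.json` isn't shown in this document, so the keys below are purely a hypothetical illustration of what the wizard might write out:

```json
{
  "model": "TinyLlama/TinyLlama-1.1B-Chat-v1.0",
  "recipe": "qlora",
  "dataset": "data/my_dataset.jsonl",
  "training_mode": "multi_gpu",
  "mixed_precision": "bf16"
}
```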
## 🚀 What's New (Accelerate Refactor)

### Clean Architecture

- Hugging Face Accelerate: Stable DDP training
- Single-GPU Evaluation: Always runs on cuda:0
- No More NCCL Errors: Robust distributed training
- Clean Code: Over-engineered abstractions removed
### Key Features

- ✅ Multi-GPU Training: Support for 2× RTX 5090 GPUs
- ✅ Single-GPU Fallback: Automatic fallback if needed
- ✅ LoRA/QLoRA Support: Parameter-efficient fine-tuning
- ✅ Structured Logging: Clean, readable output
- ✅ Error Handling: Robust error management
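To make "parameter-efficient" concrete, here is a small back-of-the-envelope calculation (plain Python, independent of any Humigence code) comparing full fine-tuning of one projection matrix against a rank-8 LoRA adapter. The 4096 hidden size is just an illustrative value:

```python
# LoRA replaces updates to a full d_out x d_in weight matrix with two
# low-rank factors: A (r x d_in) and B (d_out x r), so far fewer
# parameters are trained.

def full_params(d_in: int, d_out: int) -> int:
    """Parameters updated by full fine-tuning of one matrix."""
    return d_in * d_out

def lora_params(d_in: int, d_out: int, r: int) -> int:
    """Trainable parameters in a rank-r LoRA adapter for that matrix."""
    return r * (d_in + d_out)

# Example: a 4096 x 4096 projection with rank r = 8.
d = 4096
print(full_params(d, d))       # 16777216 full parameters
print(lora_params(d, d, r=8))  # 65536 LoRA parameters (~0.4%)
```

The same ratio holds per adapted matrix across the model, which is why LoRA/QLoRA runs fit on modest GPU memory.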
## 🚀 Training Modes

### Multi-GPU Training (Recommended)

- Uses `accelerate launch` with 2× RTX 5090 GPUs
- Stable DDP training with the NCCL backend
- Automatic device management
- Mixed precision (bf16/fp16)
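The repository's `accelerate_config.yaml` is referenced but not reproduced here; a minimal Accelerate config for a 2-GPU DDP setup like the one described typically looks something like this (the field values are assumptions, not the shipped file):

```yaml
compute_environment: LOCAL_MACHINE
distributed_type: MULTI_GPU
num_processes: 2          # one process per GPU
gpu_ids: all
mixed_precision: bf16
```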
### Single-GPU Training

- Uses `python train.py` for a single GPU
- Fallback option if multi-GPU training fails
- Same functionality, single device
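The document doesn't show how the fallback decision is made; one plain-Python way to sketch it (the function name and logic are hypothetical, not Humigence's actual implementation):

```python
import shutil

def build_train_command(num_gpus: int) -> list:
    """Pick a launch command: accelerate for multi-GPU, plain python otherwise.

    Hypothetical helper -- not Humigence's actual implementation.
    """
    if num_gpus >= 2 and shutil.which("accelerate"):
        return ["accelerate", "launch",
                "--config_file", "accelerate_config.yaml",
                "train.py", "--config_file", "config.json"]
    # Single-GPU fallback: same script, same config, one device
    return ["python", "train.py", "--config_file", "config.json"]
```

A caller could run the multi-GPU command with `subprocess.run(..., check=True)` and retry with `build_train_command(1)` if a `CalledProcessError` is raised.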
## 🎯 Usage Examples

### Interactive CLI

```bash
humigence
# Select option 1
# Choose Multi-GPU Training
# Follow the configuration wizard
```
### Direct Training (Advanced)

```bash
# Multi-GPU
accelerate launch --config_file accelerate_config.yaml train.py --config_file config.json

# Single-GPU
python train.py --config_file config.json
```
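On the `train.py` side, the `--config_file` flag used above implies argument parsing roughly like the following sketch (illustrative only; the real script surely accepts more options):

```python
import argparse
import json

def parse_args(argv=None):
    # Minimal parser for the flag used in the launch commands above.
    parser = argparse.ArgumentParser(description="Humigence training entry point")
    parser.add_argument("--config_file", required=True,
                        help="Path to the JSON training configuration")
    return parser.parse_args(argv)

def load_config(path):
    # Load the wizard-generated JSON config as a plain dict.
    with open(path) as f:
        return json.load(f)
```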
## 🔧 Technical Details

### Files Created/Updated

- `train.py` - Clean Accelerate-based training script
- `accelerate_config.yaml` - Multi-GPU configuration
- `cli/main.py` - Updated CLI integration
- `humigence` - Command-line entry point
### Dependencies
- Hugging Face Accelerate - Distributed training
- Transformers - Model loading and training
- PEFT - LoRA/QLoRA support
- Rich - Beautiful CLI interface
## 🚀 Ready to Use!

The Humigence training pipeline is now:

- ✅ Refactored with Hugging Face Accelerate
- ✅ Tested and working correctly
- ✅ Installed as the `humigence` command
- ✅ Ready for production use

Just run `humigence` and start training! 🚀
## 🎁 What You Get

- Clean CLI Interface - Easy to use
- Stable Multi-GPU Training - No more NCCL errors
- Single-GPU Evaluation - No device mismatches
- Structured Reporting - Clear training summaries
- Error Handling - Robust error management
- Production Ready - Works with your 2× RTX 5090 GPUs

The refactored Humigence pipeline is ready for your AI training needs! 🎯