# Humigence Command - Ready to Use!
## **Yes, you can launch this with "humigence"!**
The Humigence training pipeline has been successfully refactored and is now ready to use with the `humigence` command.
## **How to Use**
### **Launch Humigence CLI**
```bash
humigence
```
### **What You'll See**
```
──────────────── Humigence - Your AI. Your pipeline. Zero code. ────────────────
A complete MLOps suite built for makers, teams, and enterprises.
Options:
1. Supervised Fine-Tuning
2. RAG Implementation (coming soon)
3. EnterpriseGPT (coming soon)
4. Batch Inference (coming soon)
5. Context Length (coming soon)
6. Exit
Select an option:
```
### **Training Options**
1. **Select "1. Supervised Fine-Tuning"**
2. **Choose Setup Mode**: Basic or Advanced
3. **Select Model**: TinyLlama, Qwen, Phi-2, etc.
4. **Choose Training Recipe**: LoRA, QLoRA, etc.
5. **Select Dataset**: Your available datasets
6. **Choose Training Mode**: Multi-GPU or Single-GPU
7. **Confirm Configuration**: Review and start training
## **What's New (Accelerate Refactor)**
### **Clean Architecture**
- **Hugging Face Accelerate**: Stable DDP training (sketched after this list)
- **Single-GPU Evaluation**: Always on cuda:0
- **No More NCCL Errors**: Robust distributed training
- **Clean Code**: Removed over-engineering
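As a rough illustration of this design (a minimal sketch, not the actual `train.py`, which may differ), an Accelerate-based DDP training loop with single-device evaluation looks something like this:

```python
# Minimal sketch of an Accelerate DDP training loop (illustrative only;
# the real train.py may differ). Assumes a model, optimizer, and
# dataloader have already been built elsewhere.
from accelerate import Accelerator

def train(model, optimizer, dataloader, epochs: int = 1):
    accelerator = Accelerator(mixed_precision="bf16")
    model, optimizer, dataloader = accelerator.prepare(model, optimizer, dataloader)

    model.train()
    for _ in range(epochs):
        for batch in dataloader:
            optimizer.zero_grad()
            outputs = model(**batch)              # assumes labels are in the batch
            accelerator.backward(outputs.loss)    # handles DDP gradient sync
            optimizer.step()

    # Evaluate on a single device (cuda:0) to avoid device mismatches.
    if accelerator.is_main_process:
        eval_model = accelerator.unwrap_model(model).to("cuda:0")
        eval_model.eval()
```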
### **Key Features**
- **Multi-GPU Training**: 2× RTX 5090 support
- **Single-GPU Fallback**: Automatic fallback if needed
- **LoRA/QLoRA Support**: Parameter-efficient fine-tuning (see the PEFT sketch below)
- **Structured Logging**: Clean, readable output
- **Error Handling**: Robust error management
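To make the parameter-efficient path concrete, here is a hedged sketch of attaching a LoRA adapter with PEFT; the model ID, rank, and target modules are illustrative examples, not Humigence defaults:

```python
# Hedged sketch: wrap a base model with a LoRA adapter via PEFT.
# The hyperparameters below are examples, not Humigence defaults.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("TinyLlama/TinyLlama-1.1B-Chat-v1.0")
lora = LoraConfig(
    r=16,                                 # adapter rank
    lora_alpha=32,                        # scaling factor
    target_modules=["q_proj", "v_proj"],  # attention projections
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, lora)
model.print_trainable_parameters()  # prints the small trainable fraction
```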
## **Training Modes**
### **Multi-GPU Training (Recommended)**
- Uses `accelerate launch` with 2× RTX 5090s (see the example config below)
- Stable DDP training with NCCL backend
- Automatic device management
- Mixed precision (bf16/fp16)
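For reference, a two-GPU DDP setup like this is typically described by an Accelerate config along these lines (an assumed sketch; the repository's actual `accelerate_config.yaml` may differ):

```yaml
# Assumed sketch of accelerate_config.yaml for 2-GPU DDP with bf16;
# the repository's actual file may differ.
compute_environment: LOCAL_MACHINE
distributed_type: MULTI_GPU
num_machines: 1
machine_rank: 0
num_processes: 2        # one process per RTX 5090
gpu_ids: all
mixed_precision: bf16
```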
### **Single-GPU Training**
- Uses `python train.py` for single GPU
- Fallback option if multi-GPU fails (sketched below)
- Same functionality, single device
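One common way to implement that fallback (a sketch of the idea, not necessarily how the Humigence CLI does it) is to count visible GPUs before choosing a launcher:

```python
# Sketch of launcher selection with single-GPU fallback. The commands
# mirror the usage examples below; the selection logic itself is assumed.
import subprocess
import torch

def launch_training(config_path: str = "config.json") -> None:
    if torch.cuda.device_count() > 1:
        cmd = ["accelerate", "launch", "--config_file", "accelerate_config.yaml",
               "train.py", "--config_file", config_path]
    else:
        cmd = ["python", "train.py", "--config_file", config_path]
    subprocess.run(cmd, check=True)
```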
## **Usage Examples**
### **Interactive CLI**
```bash
humigence
# Select option 1
# Choose Multi-GPU Training
# Follow the configuration wizard
```
### **Direct Training (Advanced)**
```bash
# Multi-GPU
accelerate launch --config_file accelerate_config.yaml train.py --config_file config.json
# Single-GPU
python train.py --config_file config.json
```
## **Technical Details**
### **Files Created/Updated**
- **`train.py`** - Clean Accelerate-based training script
- **`accelerate_config.yaml`** - Multi-GPU configuration
- **`cli/main.py`** - Updated CLI integration
- **`humigence`** - Command-line entry point
### **Dependencies**
- **Hugging Face Accelerate** - Distributed training
- **Transformers** - Model loading and training
- **PEFT** - LoRA/QLoRA support
- **Rich** - Beautiful CLI interface
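If any of these are missing, the standard PyPI packages should cover them (versions unpinned here; pin as needed):

```bash
pip install accelerate transformers peft rich
```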
## **Ready to Use!**
The Humigence training pipeline is now:
- **Refactored** with Hugging Face Accelerate
- **Tested** and working correctly
- **Installed** as the `humigence` command
- **Ready** for production use
**Just run `humigence` and start training!**
## **What You Get**
1. **Clean CLI Interface** - Easy to use
2. **Stable Multi-GPU Training** - No more NCCL errors
3. **Single-GPU Evaluation** - No device mismatches
4. **Structured Reporting** - Clear training summaries
5. **Error Handling** - Robust error management
6. **Production Ready** - Works with your 2× RTX 5090s
**The refactored Humigence pipeline is ready for your AI training needs!**