
Humigence Command - Ready to Use! 🚀

✅ Yes, you can launch this with `humigence`!

The Humigence training pipeline has been refactored and is now ready to use via the `humigence` command.

🎯 How to Use

Launch Humigence CLI

```shell
humigence
```

What You'll See

```text
──────────────── Humigence — Your AI. Your pipeline. Zero code. ────────────────
A complete MLOps suite built for makers, teams, and enterprises.

Options:
1. Supervised Fine-Tuning ✅
2. RAG Implementation (coming soon)
3. EnterpriseGPT (coming soon)
4. Batch Inference (coming soon)
5. Context Length (coming soon)
6. Exit

Select an option:
```

Training Options

  1. Select "1. Supervised Fine-Tuning"
  2. Choose Setup Mode: Basic or Advanced
  3. Select Model: TinyLlama, Qwen, Phi-2, etc.
  4. Choose Training Recipe: LoRA, QLoRA, etc.
  5. Select Dataset: Your available datasets
  6. Choose Training Mode: Multi-GPU or Single-GPU
  7. Confirm Configuration: Review and start training
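The wizard's answers ultimately have to land in a training configuration. As a hedged sketch of that last step (the function and field names here are hypothetical, not Humigence's actual internals), the selections above map naturally onto a small config builder:

```python
def build_config(model: str, recipe: str, dataset: str, multi_gpu: bool) -> dict:
    """Assemble a training config dict from wizard selections.

    Illustrative only: the real config.json schema may differ.
    """
    if recipe not in ("lora", "qlora"):
        raise ValueError(f"unknown recipe: {recipe}")
    return {
        "model": model,
        "recipe": recipe,
        "dataset": dataset,
        # Multi-GPU runs go through `accelerate launch`;
        # single-GPU runs call train.py directly.
        "launcher": "accelerate" if multi_gpu else "python",
    }

config = build_config("TinyLlama", "qlora", "my_dataset.jsonl", multi_gpu=True)
```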

🚀 What's New (Accelerate Refactor)

Clean Architecture

  • Hugging Face Accelerate: Stable DDP training
  • Single-GPU Evaluation: Always runs on `cuda:0`
  • No More NCCL Errors: Robust distributed training
  • Clean Code: Removed over-engineering

Key Features

  • ✅ Multi-GPU Training: Support for 2× RTX 5090s
  • ✅ Single-GPU Fallback: Automatic fallback to a single GPU if multi-GPU training fails
  • ✅ LoRA/QLoRA Support: Parameter-efficient fine-tuning
  • ✅ Structured Logging: Clean, readable output
  • ✅ Error Handling: Robust error management
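To make the "parameter-efficient" claim concrete, here is a small stdlib-only sketch of the arithmetic behind LoRA (the layer sizes and rank are illustrative, not Humigence's actual defaults):

```python
def lora_trainable_params(d_in: int, d_out: int, rank: int) -> int:
    """LoRA replaces a full d_in x d_out weight update with two low-rank
    factors A (d_in x rank) and B (rank x d_out), so only
    rank * (d_in + d_out) parameters are trained."""
    return rank * (d_in + d_out)

def full_params(d_in: int, d_out: int) -> int:
    """Parameter count of the frozen full weight matrix."""
    return d_in * d_out

# Example: a 4096x4096 projection with LoRA rank 16 (a common choice).
full = full_params(4096, 4096)                 # 16,777,216 frozen weights
lora = lora_trainable_params(4096, 4096, 16)   # 131,072 trainable weights
print(f"trainable fraction: {lora / full:.4%}")
```

For this layer, LoRA trains well under 1% of the weights, which is why it fits comfortably in GPU memory even for large base models.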

📋 Training Modes

Multi-GPU Training (Recommended)

  • Uses `accelerate launch` with 2× RTX 5090s
  • Stable DDP training with NCCL backend
  • Automatic device management
  • Mixed precision (bf16/fp16)
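For reference, a minimal `accelerate_config.yaml` for this kind of two-GPU DDP setup might look like the following (illustrative values; the file shipped with Humigence may differ):

```yaml
compute_environment: LOCAL_MACHINE
distributed_type: MULTI_GPU
num_machines: 1
num_processes: 2        # one process per GPU
gpu_ids: all
mixed_precision: bf16   # or fp16 on GPUs without bfloat16 support
```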

Single-GPU Training

  • Uses `python train.py` for single GPU
  • Fallback option if multi-GPU fails
  • Same functionality, single device
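The fallback decision can be sketched in a few lines of stdlib Python (a hypothetical helper, not Humigence's actual code): pick the launch command based on how many GPUs are usable.

```python
def build_launch_command(gpu_count: int, config_path: str = "config.json") -> list[str]:
    """Return the argv for launching training.

    With more than one GPU we go through `accelerate launch`;
    otherwise we fall back to plain `python train.py`.
    """
    if gpu_count > 1:
        return [
            "accelerate", "launch",
            "--config_file", "accelerate_config.yaml",
            "train.py", "--config_file", config_path,
        ]
    return ["python", "train.py", "--config_file", config_path]

# Two GPUs -> accelerate launch; one GPU -> direct python.
print(build_launch_command(2)[0])
print(build_launch_command(1)[0])
```

In real code the `gpu_count` argument would come from something like `torch.cuda.device_count()`, with the multi-GPU attempt wrapped so a failure retries on a single device.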

🎯 Usage Examples

Interactive CLI

```shell
humigence
# Select option 1
# Choose Multi-GPU Training
# Follow the configuration wizard
```

Direct Training (Advanced)

```shell
# Multi-GPU
accelerate launch --config_file accelerate_config.yaml train.py --config_file config.json

# Single-GPU
python train.py --config_file config.json
```

🔧 Technical Details

Files Created/Updated

  • `train.py` - Clean Accelerate-based training script
  • `accelerate_config.yaml` - Multi-GPU configuration
  • `cli/main.py` - Updated CLI integration
  • `humigence` - Command-line entry point
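One common way to wire up a command-line entry point like `humigence` is a console script declared in `pyproject.toml` (a hedged sketch; the actual packaging setup and the `cli.main:main` target are assumptions):

```toml
[project.scripts]
humigence = "cli.main:main"
```

With this in place, `pip install` generates a `humigence` executable on the PATH that calls the CLI's `main()` function.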

Dependencies

  • Hugging Face Accelerate - Distributed training
  • Transformers - Model loading and training
  • PEFT - LoRA/QLoRA support
  • Rich - Beautiful CLI interface

🎉 Ready to Use!

The Humigence training pipeline is now:

  • ✅ Refactored with Hugging Face Accelerate
  • ✅ Tested and working correctly
  • ✅ Installed as the `humigence` command
  • ✅ Ready for production use

Just run `humigence` and start training! 🚀

📊 What You Get

  1. Clean CLI Interface - Easy to use
  2. Stable Multi-GPU Training - No more NCCL errors
  3. Single-GPU Evaluation - No device mismatches
  4. Structured Reporting - Clear training summaries
  5. Error Handling - Robust error management
  6. Production Ready - Works with your 2× RTX 5090s

The refactored Humigence pipeline is ready for your AI training needs! 🎯