# Meta Tensor Error - Fix Applied ✅

## Summary of Changes

Successfully applied fixes to resolve the **"Tensor.item() cannot be called on meta tensors"** error that was preventing model initialization on Hugging Face Spaces with ZeroGPU.

## Files Modified

### 1. `acestep/handler.py` - 3 fixes

- ✅ Line 498: DiT model loading with `device_map={"": device}`
- ✅ Line 573: VAE model loading with `device_map={"": vae_device}`
- ✅ Line 606: Text encoder loading with `device_map={"": text_encoder_device}`

### 2. `acestep/llm_inference.py` - 3 fixes

- ✅ Line 282: Main LLM loading with `device_map={"": target_device}`
- ✅ Line 3028: vLLM scoring model with `device_map={"": str(device)}`
- ✅ Line 3058: MLX scoring model with `device_map={"": device}`

## What Was Fixed

The issue occurred because, on Hugging Face Spaces with ZeroGPU, Transformers creates models on the "meta" device (placeholder tensors) during initialization. The custom ACE-Step model code tried to perform tensor operations during `__init__`, which failed on meta tensors.

By adding explicit `device_map` parameters to all model loading calls, we force models to load directly onto the target device (CUDA/CPU), bypassing the meta device phase entirely (an illustrative sketch of this pattern appears at the end of this document).

## Deployment Steps

### Option 1: Automated (Recommended)

```bash
deploy_hf_fix.bat
```

This script will:
1. Show the current git status
2. Ask for confirmation
3. Commit the changes with a descriptive message
4. Push to the remote repository

### Option 2: Manual

```bash
git add acestep/handler.py acestep/llm_inference.py
git commit -m "Fix: Add device_map to prevent meta tensor errors on ZeroGPU"
git push
```

## After Deployment

Monitor your HF Space logs for:

**✅ Expected (Success):**
```
2026-02-09 XX:XX:XX - acestep.handler - INFO - [initialize_service] Attempting to load model with attention implementation: sdpa
2026-02-09 XX:XX:XX - acestep.handler - INFO - ✅ Model initialized successfully on cuda
```

**❌ Previously (Error):**
```
RuntimeError: Tensor.item() cannot be called on meta tensors
```

## Testing Checklist

After deployment to the HF Space:

- [ ] Space builds successfully without errors
- [ ] Models initialize without meta tensor errors
- [ ] Standard generation works with test prompts
- [ ] No crashes during model loading
- [ ] GPU allocation works correctly with ZeroGPU

## Documentation

- `FIX_META_TENSOR_ERROR.md` - Detailed technical explanation
- `verify_fix.py` - Local verification script
- `deploy_hf_fix.bat` - Automated deployment script

## Support

If you encounter issues after deployment:
1. Check the HF Space logs for specific error messages
2. Verify that all 6 `device_map` additions are present in the deployed code
3. Ensure the Transformers version pinned in requirements.txt is >= 4.20.0
4. Check that the `spaces` package is properly configured for ZeroGPU

## Expected Behavior

✅ Models load directly to CUDA on ZeroGPU
✅ No meta device intermediate step
✅ All tensor operations work correctly during initialization
✅ Compatible with both local and HF Space environments

---

**Status**: ✅ Fix Applied and Ready for Deployment
**Date**: 2026-02-09
**Impact**: Resolves critical initialization failure on HF Spaces
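
## Appendix: Illustrative Loading Sketch

The sketch below shows the general `device_map={"": device}` pattern referenced in "What Was Fixed". It is a minimal, hedged example: the checkpoint name and device selection are placeholders, not the actual ACE-Step loading code in `handler.py` or `llm_inference.py`.

```python
# Minimal sketch of the device_map pattern, assuming a standard
# Transformers checkpoint; the model name below is a placeholder.
# Requires the `accelerate` package for device_map support.
import torch
from transformers import AutoModel

device = "cuda" if torch.cuda.is_available() else "cpu"

# Mapping the empty module name "" to one device tells Transformers/Accelerate
# to place the entire model on that device, so the returned model has
# materialized weights rather than meta-device placeholders.
model = AutoModel.from_pretrained(
    "some-org/some-model",  # placeholder checkpoint name
    torch_dtype=torch.float16,
    device_map={"": device},
)
```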