# Meta Tensor Error - Fix Applied ✅
## Summary of Changes
Successfully applied fixes to resolve the **"Tensor.item() cannot be called on meta tensors"** error that was preventing model initialization on Hugging Face Spaces with ZeroGPU.
## Files Modified
### 1. `acestep/handler.py` - 3 fixes
- ✅ Line 498: DiT model loading with `device_map={"": device}`
- ✅ Line 573: VAE model loading with `device_map={"": vae_device}`
- ✅ Line 606: Text encoder loading with `device_map={"": text_encoder_device}`
### 2. `acestep/llm_inference.py` - 3 fixes
- ✅ Line 282: Main LLM loading with `device_map={"": target_device}`
- ✅ Line 3028: vLLM scoring model with `device_map={"": str(device)}`
- ✅ Line 3058: MLX scoring model with `device_map={"": device}`
## What Was Fixed
The error occurred because, on Hugging Face Spaces with ZeroGPU, Transformers initializes models on the "meta" device (placeholder tensors that carry shape and dtype but no data). The custom ACE-Step model code performs tensor operations during `__init__`, and those operations fail on meta tensors.
By adding explicit `device_map` parameters to all model loading calls, we force models to load directly onto the target device (CUDA/CPU), bypassing the meta device phase entirely.
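The failure mode can be reproduced in a few lines of plain PyTorch (a sketch for illustration only; in ACE-Step the error surfaces inside the model's `__init__`):

```python
import torch

# A meta tensor records shape and dtype but allocates no storage,
# so value-dependent calls like .item() cannot work.
meta_t = torch.empty(1, device="meta")
try:
    meta_t.item()
except RuntimeError as e:
    print(f"RuntimeError: {e}")

# Creating the tensor directly on a real device avoids the problem,
# which is what the explicit device_map forces during model loading.
real_t = torch.empty(1, device="cpu")
real_t.fill_(1.0)
print(real_t.item())  # 1.0
```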
## Deployment Steps
### Option 1: Automated (Recommended)
```bash
deploy_hf_fix.bat
```
This script will:
1. Show current git status
2. Ask for confirmation
3. Commit changes with descriptive message
4. Push to remote repository
### Option 2: Manual
```bash
git add acestep/handler.py acestep/llm_inference.py
git commit -m "Fix: Add device_map to prevent meta tensor errors on ZeroGPU"
git push
```
## After Deployment
Monitor your HF Space logs for:
**✅ Expected (Success):**
```
2026-02-09 XX:XX:XX - acestep.handler - INFO - [initialize_service] Attempting to load model with attention implementation: sdpa
2026-02-09 XX:XX:XX - acestep.handler - INFO - ✅ Model initialized successfully on cuda
```

**❌ Previously (Error):**
```
RuntimeError: Tensor.item() cannot be called on meta tensors
```
## Testing Checklist
After deployment to HF Space:
- [ ] Space builds successfully without errors
- [ ] Models initialize without meta tensor errors
- [ ] Standard generation works with test prompts
- [ ] No crashes during model loading
- [ ] GPU allocation works correctly with ZeroGPU
## Documentation
- `FIX_META_TENSOR_ERROR.md` - Detailed technical explanation
- `verify_fix.py` - Local verification script
- `deploy_hf_fix.bat` - Automated deployment script
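A verification pass along these lines could confirm the fix is in place (a hypothetical sketch, not the shipped `verify_fix.py`; the file paths and the expected count of 3 per file come from the summary above):

```python
from pathlib import Path

# Expected number of explicit device_map= arguments per patched file,
# per the "Files Modified" summary above.
EXPECTED = {"acestep/handler.py": 3, "acestep/llm_inference.py": 3}

def count_device_map(source: str) -> int:
    """Count explicit device_map keyword arguments in a source string."""
    return source.count("device_map=")

def verify(repo_root: str = ".") -> bool:
    """Report and return whether every patched file has its fixes."""
    ok = True
    for rel_path, expected in EXPECTED.items():
        text = Path(repo_root, rel_path).read_text(encoding="utf-8")
        found = count_device_map(text)
        status = "OK" if found >= expected else "MISSING"
        print(f"{rel_path}: {found}/{expected} device_map calls [{status}]")
        ok = ok and found >= expected
    return ok
```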
## Support
If you encounter issues after deployment:
1. Check HF Space logs for specific error messages
2. Verify all 6 device_map additions are in your deployed code
3. Ensure Transformers version >= 4.20.0 in requirements.txt
4. Check that `spaces` package is properly configured for ZeroGPU
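For step 3, a minimal version gate can be sketched in pure Python (this assumes a plain `major.minor` comparison is enough; pre-release suffixes would need a real parser such as `packaging.version`):

```python
def meets_minimum(ver: str, minimum=(4, 20)) -> bool:
    """Return True if a 'major.minor[.patch]' version string is >= minimum."""
    parts = tuple(int(p) for p in ver.split(".")[:2])
    return parts >= minimum

print(meets_minimum("4.44.2"))  # True
print(meets_minimum("4.19.0"))  # False
```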
## Expected Behavior
- ✅ Models load directly to CUDA on ZeroGPU
- ✅ No meta device intermediate step
- ✅ All tensor operations work correctly during initialization
- ✅ Compatible with both local and HF Space environments
---
**Status**: ✅ Fix Applied and Ready for Deployment
**Date**: 2026-02-09
**Impact**: Resolves critical initialization failure on HF Spaces