Spaces: Running on Zero
Commit History
Add aggressive GPU memory cleanup for T4 instances (5549e2b)
Update AEON inference to require sex parameter (0d1b788)
Add GitHub Actions workflows and comprehensive test suite (4780d8d)
Fix remaining hardcoded data paths after rebase (c2c8715)
Add comprehensive logging for batch processing verification (6e06a36)
Fix segment_tissue unpacking in batch analysis (42bcf72)
Add batch processing optimization for slide analysis (0234c58)
Fix user directory creation and hide HF Spaces logs locally (07d6e0e)
Fix circular import by moving get_data_directory to separate module (52f8cb9)
Fix model file location to use HuggingFace cache directory (751062d)
Update src/mosaic/analysis.py (9cc95d5)
Add Aeon model test suite and reproducibility scripts (0506a57)
Add comprehensive sex and tissue site parameter support (a2b6947)
Complete implementation of sex and tissue site parameters (49fbf68)
Add sex and tissue site parameters to Aeon inference (de40714)
Fix user detection by decoding JWT token from referer (24b5de2)
Add debug logging for request object (6d1bfd0)
Improve user detection: check username and auth headers (cd62763)
Add request parameter to analyze_slide function signature (0524123)
Add gr.Request parameter to analyze_slide signature (7f1cc8e)
Add dynamic GPU duration: 60s for anonymous, 300s for logged-in users (0e5928b)
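The dynamic-duration commit above can be sketched with a small helper. The function name `gpu_duration_for` and the decorator usage are illustrative assumptions; `gr.Request.username` (which is `None` for anonymous visitors) and `@spaces.GPU(duration=...)` are the real Gradio and ZeroGPU APIs.

```python
from typing import Optional

def gpu_duration_for(username: Optional[str]) -> int:
    """60s of ZeroGPU time for anonymous users, 300s for logged-in users."""
    return 300 if username else 60

# Hedged usage sketch on a Space (assumes the `spaces` and `gradio` packages):
#
# @spaces.GPU(duration=300)  # reserve the upper bound
# def analyze_slide(image, request: gr.Request):
#     budget = gpu_duration_for(request.username)  # username is None when anonymous
#     ...
```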
Fix T4 detection by checking actual GPU name (cbb7db9)
Add T4 GPU detection and optimized settings (18b7b6e)
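The two T4 commits above check the reported CUDA device name rather than an environment variable. A minimal sketch, where the helper `is_t4` is an illustrative assumption and `torch.cuda.get_device_name` is the real PyTorch call:

```python
import re

def is_t4(device_name: str) -> bool:
    """Detect a T4 from the CUDA device name (reported as e.g. 'Tesla T4')."""
    return re.search(r"\bT4\b", device_name) is not None

# On a live instance (assumes torch is installed and CUDA is available):
# name = torch.cuda.get_device_name(0)
# if is_t4(name):
#     ...  # apply the smaller T4-specific batch sizes
```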
Actually apply simplification: remove individual decorators, add single pipeline decorator (91b0515)
Simplify: single GPU decorator on pipeline instead of 4 separate calls (949282a)
Reduce GPU durations to fit 300s total limit per request (4d7da11)
Remove chunking completely - doesn't work with ZeroGPU (8d94a99)
Increase chunk sizes for better performance (80e07ea)
Implement proper multi-GPU-call chunking for ZeroGPU (14f8f4b)
Drastically reduce chunk sizes for ZeroGPU reliability (658b7b2)
Make all GPU memory stats collection optional with try-except (c6bd865)
Fix CUDA device error by checking availability before reset (aafc601)
Implement chunked processing for ZeroGPU to prevent token expiry (445c0ed)
Fix ZeroGPU detection to use correct env var (d69d3b8)
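The detection fix above reads the environment; a minimal sketch, assuming the variable is `SPACES_ZERO_GPU` (the exact name is an assumption inferred from the commit message, so verify it against current Hugging Face docs):

```python
import os

def is_zero_gpu() -> bool:
    """True when running on a ZeroGPU Space.

    Assumption: ZeroGPU sets the SPACES_ZERO_GPU env var (e.g. to 'true').
    """
    return os.environ.get("SPACES_ZERO_GPU", "").lower() in {"1", "true"}
```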
Optimize batch sizes for H100 ZeroGPU and reduce Optimus duration (3f232ad)
Fix ZeroGPU detection and increase workers for non-ZeroGPU environments (875e616)
Increase Optimus GPU duration to 600s (max) (a1464f6)
Separate GPU calls for each operation in HF Spaces (23a1b1e)
Import spaces before CUDA packages to fix initialization error (849bc8d)
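On ZeroGPU, `import spaces` must run before any CUDA-initializing import such as `torch`, which is what the commit above fixes. A small, testable checker for that ordering; the helper is an illustrative assumption, not part of the repo:

```python
def spaces_before_cuda(source: str) -> bool:
    """Check that `import spaces` precedes the first `import torch` in a module."""
    spaces_at = source.find("import spaces")
    torch_at = source.find("import torch")
    if torch_at == -1:
        return True  # no CUDA-initializing import, ordering is moot
    return 0 <= spaces_at < torch_at
```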
fix: increase zerogpu duration (c1f995e)
Set num_workers=0 for Zero GPU compatibility (641c24a)
copilot-swe-agent[bot], raylim
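The `num_workers=0` commit reflects a ZeroGPU constraint: DataLoader worker subprocesses do not play well with ZeroGPU's process model. A hedged sketch, where the helper name and the non-ZeroGPU worker count are assumptions:

```python
def dataloader_workers(zero_gpu: bool, default: int = 4) -> int:
    """ZeroGPU: no multiprocessing workers; elsewhere: use the default count."""
    return 0 if zero_gpu else default

# Usage sketch with the real torch API (assumes torch is installed):
# loader = torch.utils.data.DataLoader(
#     dataset, batch_size=8, num_workers=dataloader_workers(on_zero_gpu))
```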
Separate tissue segmentation from GPU-decorated function (02bf3db)
copilot-swe-agent[bot], raylim
Add Hugging Face Spaces Zero GPU support (2a074d9)
copilot-swe-agent[bot], raylim
Add comprehensive documentation improvements (71ae2f0)
copilot-swe-agent[bot], raylim
Refactor Gradio UI into separate module for better readability (b955807)
copilot-swe-agent[bot], raylim