Hugging Face Spaces: Alovestocode / ZeroGPU-LLM-Inference
ZeroGPU-LLM-Inference (116 kB, 1 contributor, 38 commits)
Latest commit 808203f by Alikestocode, 3 months ago: "Add advanced vLLM and LLM Compressor optimizations"
| File | Size | Last commit message | Updated |
|---|---|---|---|
| .dockerignore | 104 Bytes | Add Google Cloud Platform deployment configurations | 3 months ago |
| .gitattributes | 1.52 kB | Initial commit: ZeroGPU LLM Inference Space | 3 months ago |
| .gitignore | 27 Bytes | Add .gitignore and remove cache files | 3 months ago |
| DEPLOYMENT_STATUS.md | 2.21 kB | Add deployment status document after re-authentication | 3 months ago |
| Dockerfile | 680 Bytes | Add Google Cloud Platform deployment configurations | 3 months ago |
| FIX_PERMISSIONS.md | 2.05 kB | Add permission fix guide for spherical-gate-477614-q7 project | 3 months ago |
| LLM_COMPRESSOR_FEATURES.md | 6.26 kB | Add advanced vLLM and LLM Compressor optimizations | 3 months ago |
| QUANTIZE_AWQ.md | 3.22 kB | Add Colab notebook for AWQ quantization of router models | 3 months ago |
| QUICK_DEPLOY.md | 2.86 kB | Add Cloud Build deployment script and permission setup helper | 3 months ago |
| README.md | 4.23 kB | Implement vLLM with LLM Compressor and performance optimizations | 3 months ago |
| app.py | 40.6 kB | Add advanced vLLM and LLM Compressor optimizations | 3 months ago |
| apt.txt | 11 Bytes | Initial commit: ZeroGPU LLM Inference Space | 3 months ago |
| cloudbuild.yaml | 1.36 kB | Add Cloud Build deployment script and permission setup helper | 3 months ago |
| deploy-cloud-build.sh | 3.31 kB | Add Cloud Build deployment script and permission setup helper | 3 months ago |
| deploy-compute-engine.sh | 4.23 kB | Add Google Cloud Platform deployment configurations | 3 months ago |
| deploy-gcp.sh | 2.67 kB | Add Google Cloud Platform deployment configurations | 3 months ago |
| gcp-deployment.md | 5.32 kB | Add Google Cloud Platform deployment configurations | 3 months ago |
| quantize_to_awq_colab.ipynb | 20 kB | Add advanced vLLM and LLM Compressor optimizations | 3 months ago |
| requirements.txt | 397 Bytes | Clarify LLM Compressor optional status - vLLM has native AWQ support | 3 months ago |
| setup-gcp-permissions.sh | 1.8 kB | Add Cloud Build deployment script and permission setup helper | 3 months ago |
| style.css | 2.84 kB | Initial commit: ZeroGPU LLM Inference Space | 3 months ago |
| test_api.py | 3.43 kB | Migrate to AWQ quantization with FlashAttention-2 | 3 months ago |
| test_api_gradio_client.py | 7.2 kB | Implement vLLM with LLM Compressor and performance optimizations | 3 months ago |