FallenMerick / Space-Whale-Lite-13B-GGUF
Tags: Text Generation · GGUF · quantized · 4-bit precision · 5-bit · 6-bit · 8-bit precision · Merge · frankenmerge
Branch: main · 41.6 GB · 1 contributor · 6 commits
Latest commit: Create README.md by FallenMerick (0f588b8, verified), almost 2 years ago
File                              Size       Last commit                              Updated
.gitattributes                    1.79 kB    Upload Space-Whale-Lite-13B-Q8_0.gguf    almost 2 years ago
README.md                         348 Bytes  Create README.md                         almost 2 years ago
Space-Whale-Lite-13B-Q4_K_M.gguf  7.87 GB    Upload Space-Whale-Lite-13B-Q4_K_M.gguf  almost 2 years ago
Space-Whale-Lite-13B-Q5_K_M.gguf  9.23 GB    Upload Space-Whale-Lite-13B-Q5_K_M.gguf  almost 2 years ago
Space-Whale-Lite-13B-Q6_K.gguf    10.7 GB    Upload Space-Whale-Lite-13B-Q6_K.gguf    almost 2 years ago
Space-Whale-Lite-13B-Q8_0.gguf    13.8 GB    Upload Space-Whale-Lite-13B-Q8_0.gguf    almost 2 years ago
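The listing offers the same 13B model at four quantization levels, trading file size for quality. A minimal sketch of choosing a quant by disk/memory budget and fetching it with `hf_hub_download` from the `huggingface_hub` library (the standard Hub download API); the filenames and sizes come from the table above, while the budget-based selection heuristic is my own assumption and ignores extra runtime memory for context/KV cache:

```python
# Quantized GGUF files in this repo and their sizes in GB, from the listing above.
QUANTS = {
    "Space-Whale-Lite-13B-Q4_K_M.gguf": 7.87,
    "Space-Whale-Lite-13B-Q5_K_M.gguf": 9.23,
    "Space-Whale-Lite-13B-Q6_K.gguf": 10.7,
    "Space-Whale-Lite-13B-Q8_0.gguf": 13.8,
}

def pick_quant(budget_gb: float) -> str:
    """Return the largest (highest-quality) quant whose file fits the budget.

    Rough heuristic only: actual memory use at inference time is higher
    than the file size (context buffers, KV cache, etc.).
    """
    fitting = {name: size for name, size in QUANTS.items() if size <= budget_gb}
    if not fitting:
        raise ValueError(f"no quant fits in {budget_gb} GB")
    return max(fitting, key=fitting.get)

def download_quant(budget_gb: float) -> str:
    """Download the chosen file from the Hub and return its local path."""
    # Imported here so the selection logic works without the dependency installed.
    from huggingface_hub import hf_hub_download

    return hf_hub_download(
        repo_id="FallenMerick/Space-Whale-Lite-13B-GGUF",
        filename=pick_quant(budget_gb),
    )

if __name__ == "__main__":
    # e.g. a 12 GB budget selects the Q6_K file (10.7 GB).
    print(pick_quant(12.0))
```

The resulting `.gguf` path can then be passed to a GGUF-capable runtime such as llama.cpp.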