tensorblock/Multiverse4FM_Autogressive-32B-GGUF
Pipeline: Text Generation
Libraries: Transformers, GGUF
Datasets: Multiverse4FM/Autoregressive-1K-mixed, Multiverse4FM/Multiverse-1K, simplescaling/s1K-1.1
Tags: TensorBlock, conversational
License: apache-2.0
Files and versions
247 GB total · 1 contributor · History: 2 commits
Latest commit: morriszms · Upload folder using huggingface_hub · f93fa8c (verified) · 9 months ago
| File | Size | Badge | Last commit | Uploaded |
| --- | --- | --- | --- | --- |
| .gitattributes | 2.29 kB | Safe | Upload folder using huggingface_hub | 9 months ago |
| Autogressive-32B-Q2_K.gguf | 12.3 GB | xet | Upload folder using huggingface_hub | 9 months ago |
| Autogressive-32B-Q3_K_L.gguf | 17.2 GB | xet | Upload folder using huggingface_hub | 9 months ago |
| Autogressive-32B-Q3_K_M.gguf | 15.9 GB | xet | Upload folder using huggingface_hub | 9 months ago |
| Autogressive-32B-Q3_K_S.gguf | 14.4 GB | xet | Upload folder using huggingface_hub | 9 months ago |
| Autogressive-32B-Q4_0.gguf | 18.6 GB | xet | Upload folder using huggingface_hub | 9 months ago |
| Autogressive-32B-Q4_K_M.gguf | 19.9 GB | xet | Upload folder using huggingface_hub | 9 months ago |
| Autogressive-32B-Q4_K_S.gguf | 18.8 GB | xet | Upload folder using huggingface_hub | 9 months ago |
| Autogressive-32B-Q5_0.gguf | 22.6 GB | xet | Upload folder using huggingface_hub | 9 months ago |
| Autogressive-32B-Q5_K_M.gguf | 23.3 GB | xet | Upload folder using huggingface_hub | 9 months ago |
| Autogressive-32B-Q5_K_S.gguf | 22.6 GB | xet | Upload folder using huggingface_hub | 9 months ago |
| Autogressive-32B-Q6_K.gguf | 26.9 GB | xet | Upload folder using huggingface_hub | 9 months ago |
| Autogressive-32B-Q8_0.gguf | 34.8 GB | xet | Upload folder using huggingface_hub | 9 months ago |
| README.md | 7.26 kB | Safe | Upload folder using huggingface_hub | 9 months ago |
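Each file in the listing can be fetched individually rather than cloning the full 247 GB repository. A minimal sketch of building the direct-download URL, assuming Hugging Face's standard `/{repo_id}/resolve/{revision}/{filename}` file-serving route and pinning the commit hash `f93fa8c` shown above (the helper name `gguf_url` is illustrative, not part of any library):

```python
# Build a direct-download URL for one GGUF file from this repo.
# The /resolve/ path pattern is Hugging Face Hub's file-serving route;
# pinning the revision to a commit hash makes the URL reproducible.
REPO_ID = "tensorblock/Multiverse4FM_Autogressive-32B-GGUF"
REVISION = "f93fa8c"  # commit shown in the file listing


def gguf_url(filename: str, revision: str = REVISION) -> str:
    """Return the raw-file URL for `filename` at the given revision."""
    return f"https://huggingface.co/{REPO_ID}/resolve/{revision}/{filename}"


print(gguf_url("Autogressive-32B-Q4_K_M.gguf"))
```

In practice the `huggingface_hub` client (e.g. its `hf_hub_download` function) handles caching and resumable downloads and is usually preferable to fetching these URLs by hand.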