efops/marziel-8b-custom
Text Generation · MLX · Safetensors · GGUF · English

Tags: llama, vllm, 4-bit precision, local-ai, private, maritime, vessel-tracking, osint, conversational, compressed-tensors

License: MIT
Branch: main · 1 contributor · History: 20 commits

Latest commit by efops: "v0.5.9: semantic intent routing" (9cd2a84, verified, about 9 hours ago)
| File | Size | Last commit | Last modified |
| --- | --- | --- | --- |
| .gitattributes | 1.63 kB | Initial release v0.5.0 (Clean history) | 1 day ago |
| README.md | 13.2 kB | v0.5.9: semantic intent routing | about 9 hours ago |
| chat_template.jinja | 4.61 kB | Initial release v0.5.0 (Clean history) | 1 day ago |
| config.json | 1.81 kB | v0.5.8: GPTQ W4A16 quantized model for vLLM CPU (~4GB) | about 11 hours ago |
| generation_config.json | 155 Bytes | v0.5.8: GPTQ W4A16 quantized model for vLLM CPU (~4GB) | about 11 hours ago |
| marziel-8b-custom.gguf | 4.92 GB | Initial release v0.5.0 (Clean history) | 1 day ago |
| model-00001-of-00002.safetensors | 4.65 GB | v0.5.8: GPTQ W4A16 quantized model for vLLM CPU (~4GB) | about 11 hours ago |
| model-00002-of-00002.safetensors | 1.05 GB | v0.5.8: GPTQ W4A16 quantized model for vLLM CPU (~4GB) | about 11 hours ago |
| model.safetensors.index.json | 64.6 kB | v0.5.8: GPTQ W4A16 quantized model for vLLM CPU (~4GB) | about 11 hours ago |
| recipe.yaml | 170 Bytes | v0.5.8: GPTQ W4A16 quantized model for vLLM CPU (~4GB) | about 11 hours ago |
| special_tokens_map.json | 296 Bytes | v0.5.8: GPTQ W4A16 quantized model for vLLM CPU (~4GB) | about 11 hours ago |
| tokenizer.json | 17.2 MB | Initial release v0.5.0 (Clean history) | 1 day ago |
| tokenizer_config.json | 50.5 kB | v0.5.8: GPTQ W4A16 quantized model for vLLM CPU (~4GB) | about 11 hours ago |