intrect/VELA
Tags: Text Generation · Transformers · Safetensors · GGUF · llama-cpp-python · MLX · Korean · English · qwen2 · finance · korean · stock-analysis · reasoning · dpo · llama-cpp · apple-silicon · 4bit · quantized · vllm · ollama · conversational · text-generation-inference
License: apache-2.0
Branch: refs/pr/1 · VELA/mlx-int4 (4.3 GB)
1 contributor · History: 2 commits
Latest commit by intrect: "fix: replace Qwen default system prompt with VELA identity in chat templates" (8fa0fb0, verified, 3 days ago)
File                          Size       Last commit message                                                            Last updated
added_tokens.json             605 Bytes  feat: add MLX 4-bit quantized model (Apple Silicon optimized)                  3 days ago
chat_template.jinja           2.51 kB    feat: add MLX 4-bit quantized model (Apple Silicon optimized)                  3 days ago
config.json                   1.82 kB    feat: add MLX 4-bit quantized model (Apple Silicon optimized)                  3 days ago
generation_config.json        242 Bytes  feat: add MLX 4-bit quantized model (Apple Silicon optimized)                  3 days ago
merges.txt                    1.67 MB    feat: add MLX 4-bit quantized model (Apple Silicon optimized)                  3 days ago
model.safetensors             4.28 GB    feat: add MLX 4-bit quantized model (Apple Silicon optimized)                  3 days ago
model.safetensors.index.json  51.8 kB    feat: add MLX 4-bit quantized model (Apple Silicon optimized)                  3 days ago
special_tokens_map.json       782 Bytes  feat: add MLX 4-bit quantized model (Apple Silicon optimized)                  3 days ago
tokenizer.json                11.4 MB    feat: add MLX 4-bit quantized model (Apple Silicon optimized)                  3 days ago
tokenizer_config.json         5.42 kB    fix: replace Qwen default system prompt with VELA identity in chat templates   3 days ago
vocab.json                    2.78 MB    feat: add MLX 4-bit quantized model (Apple Silicon optimized)                  3 days ago