# lora-merged - GGUF

This is a GGUF version of the lora-merged model.
## Model Details
- Base Model: /workspace/lora-merged
- Format: GGUF
- Quantization: q4_k_m
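For reference, a q4_k_m quantization like this one is typically produced from a merged Hugging Face checkpoint with llama.cpp's conversion and quantization tools. A minimal sketch (the output filenames are placeholders, not necessarily the ones used for this repo):

```shell
# Convert the merged Hugging Face model to a 16-bit GGUF file
# (convert_hf_to_gguf.py ships with the llama.cpp repository)
python convert_hf_to_gguf.py /workspace/lora-merged --outfile lora-merged.f16.gguf

# Quantize the 16-bit file down to Q4_K_M
# (the llama-quantize binary is built alongside llama.cpp)
./llama-quantize lora-merged.f16.gguf lora-merged.q4_k_m.gguf Q4_K_M
```

Q4_K_M is a 4-bit "k-quant" that trades a small quality loss for a roughly 4x size reduction versus the 16-bit file.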
## Usage
This model can be used with llama.cpp and compatible applications.
```bash
# Example llama.cpp command (newer llama.cpp builds name this binary llama-cli)
./main -m keip-assistant.q4_k_m.gguf -n 1024 -p "Your prompt here"
```
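Beyond one-shot prompting, the same GGUF file can be served over HTTP with llama.cpp's bundled server and queried through its OpenAI-compatible API. A minimal sketch, assuming a local llama.cpp build that provides the `llama-server` binary:

```shell
# Start an OpenAI-compatible server on port 8080 with a 2048-token context
./llama-server -m keip-assistant.q4_k_m.gguf --port 8080 -c 2048

# From another terminal, send a chat completion request
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"messages": [{"role": "user", "content": "Your prompt here"}], "max_tokens": 256}'
```

This is convenient when several applications share one loaded copy of the model instead of each spawning their own llama.cpp process.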