dragon-llama-answer-tool

Part of the DRAGON Models collection: production-grade, RAG-optimized 6-7B parameter models built on leading foundation base models.
To run the GGUF model locally with llama.cpp:

# Install llama.cpp with winget (Windows) or brew (macOS/Linux):
winget install llama.cpp
brew install llama.cpp

# Or download a pre-built binary from:
# https://github.com/ggerganov/llama.cpp/releases

# Or build from source:
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
cmake -B build
cmake --build build -j --target llama-server llama-cli

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf llmware/dragon-llama-answer-tool

# Run inference directly in the terminal:
llama-cli -hf llmware/dragon-llama-answer-tool
# (for from-source builds, the binaries are ./build/bin/llama-server and ./build/bin/llama-cli)

# Or run via Docker Model Runner:
docker model run hf.co/llmware/dragon-llama-answer-tool

dragon-llama-answer-tool is a quantized version of DRAGON Llama 7B, with 4_K_M GGUF quantization, providing a fast, small inference implementation for use on CPUs.
dragon-llama-7b is a fact-based question-answering model, optimized for complex business documents.
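Once llama-server is running (see the commands above), the model can also be queried programmatically through its OpenAI-compatible endpoint. A minimal sketch, assuming the server is on its default address (http://127.0.0.1:8080), the openai Python package is installed, and using an illustrative context passage and question:

# Sketch only -- assumes llama-server is running locally on its default port 8080
from openai import OpenAI

client = OpenAI(base_url="http://127.0.0.1:8080/v1", api_key="not-needed")

completion = client.chat.completions.create(
    model="dragon-llama-answer-tool",
    messages=[{"role": "user", "content": "The total invoice amount is $25,000, due on March 15.\nWhat is the total invoice amount?"}],
)
print(completion.choices[0].message.content)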
To pull the model via API:
from huggingface_hub import snapshot_download
snapshot_download("llmware/dragon-llama-answer-tool", local_dir="/path/on/your/machine/", local_dir_use_symlinks=False)
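The downloaded GGUF file can then be opened directly in a GGUF engine. A minimal sketch using llama-cpp-python; the package choice, the .gguf file name, and the sample prompt below are assumptions for illustration, so check the repository for the actual file name:

# Sketch only -- assumes llama-cpp-python is installed and the GGUF file name matches the repo
from llama_cpp import Llama

llm = Llama(model_path="/path/on/your/machine/dragon-llama-answer-tool.gguf", n_ctx=2048)
# pass a context passage and question; see the prompt wrapping note further below
output = llm("The agreement was signed on June 1, 2023.\nWhen was the agreement signed?", max_tokens=100)
print(output["choices"][0]["text"])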
Load in your favorite GGUF inference engine, or try with llmware as follows (the query and context passage below are illustrative):

from llmware.models import ModelCatalog

# load the model from the llmware catalog and ask a question against a context passage
model = ModelCatalog().load_model("dragon-llama-answer-tool")
text_sample = "The lease term is 36 months, beginning on January 1, 2024."
query = "What is the length of the lease term?"
response = model.inference(query, add_context=text_sample)
Note: please review config.json in the repository for prompt wrapping information, details on the model, and full test set.
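As a rough illustration only (the exact template must be taken from config.json), llmware's DRAGON models generally wrap the context passage and question in a simple human/bot template:

# Assumed wrapper -- verify the exact prompt template in this repo's config.json
def wrap_prompt(context: str, question: str) -> str:
    return f"<human>: {context}\n{question}\n<bot>:"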
Model card contact: Darren Oberst & llmware team