Install from WinGet (Windows)
winget install llama.cpp
# Start a local OpenAI-compatible server with a web UI:
llama-server -hf llmware/slim-q-gen-tiny-tool
# Run inference directly in the terminal:
llama-cli -hf llmware/slim-q-gen-tiny-tool
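Once llama-server is running, it exposes an OpenAI-compatible API (by default on port 8080). Below is a minimal sketch of querying it from Python with the openai client; the base URL, dummy API key, and model name are assumptions, and for structured question generation the prompt should follow the template described in config.json:
from openai import OpenAI

# point the client at the local llama-server (default port assumed to be 8080)
client = OpenAI(base_url="http://localhost:8080/v1", api_key="not-needed")

# illustrative context passage
passage = "The company reported revenue of $3.2 billion in the quarter."

completion = client.chat.completions.create(
    model="slim-q-gen-tiny-tool",  # placeholder; a single-model llama-server serves whatever model it loaded
    messages=[{"role": "user", "content": passage}],
)
print(completion.choices[0].message.content)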
Use pre-built binary
# Download pre-built binary from:
# https://github.com/ggerganov/llama.cpp/releases
# Start a local OpenAI-compatible server with a web UI:
./llama-server -hf llmware/slim-q-gen-tiny-tool
# Run inference directly in the terminal:
./llama-cli -hf llmware/slim-q-gen-tiny-tool
Build from source code
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
cmake -B build
cmake --build build -j --target llama-server llama-cli
# Start a local OpenAI-compatible server with a web UI:
./build/bin/llama-server -hf llmware/slim-q-gen-tiny-tool
# Run inference directly in the terminal:
./build/bin/llama-cli -hf llmware/slim-q-gen-tiny-tool
Use Docker
docker model run hf.co/llmware/slim-q-gen-tiny-tool

SLIM-Q-GEN-TINY-TOOL
slim-q-gen-tiny-tool is a Q4_K_M quantized GGUF version of slim-q-gen-tiny, providing a small, fast inference implementation optimized for multi-model concurrent deployment.
This model implements a generative 'question' (i.e., 'q-gen') function, which takes a context passage as input and generates as output a Python dictionary consisting of one key:
`{'question': ['What was the amount of revenue in the quarter?']} `
The model has been designed to accept one of three different parameters to guide the type of question created: 'question' (generates a standard question), 'boolean' (generates a 'yes-no' question), and 'multiple choice' (generates a multiple choice question).
slim-q-gen-tiny-tool is a fine-tune of a TinyLlama (1B parameter) model, designed for fast, local deployment and rapid testing and prototyping. Please also see slim-q-gen-phi-3-tool, which is a fine-tune of Phi-3 and will provide higher-quality results, at the trade-off of slightly slower performance and a larger memory footprint.
slim-q-gen-tiny is the PyTorch version of the model, and is suitable for fine-tuning for further domain adaptation.
To pull the model via API:
from huggingface_hub import snapshot_download
snapshot_download("llmware/slim-q-gen-tiny-tool", local_dir="/path/on/your/machine/", local_dir_use_symlinks=False)
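If you want to run the downloaded file directly, here is a minimal sketch using the llama-cpp-python package; the package choice, the local directory path, and the assumption that a single .gguf file sits in the snapshot are all assumptions, and prompts should follow the template described in config.json:
from pathlib import Path
from llama_cpp import Llama  # assumes llama-cpp-python is installed

# locate the quantized GGUF file inside the downloaded snapshot
local_dir = Path("/path/on/your/machine/")
gguf_path = next(local_dir.glob("*.gguf"))

# load the model locally
llm = Llama(model_path=str(gguf_path), n_ctx=2048)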
Load the downloaded file in your favorite GGUF inference engine, or try it with llmware as follows:
from llmware.models import ModelCatalog

# to load the model and make a basic inference
model = ModelCatalog().load_model("slim-q-gen-tiny-tool", sample=True, temperature=0.7)
text_sample = "The company reported revenue of $3.2 billion in the quarter."  # illustrative passage
response = model.function_call(text_sample, params=['question'])
print(response)

# this one line will download the model and run a series of tests
ModelCatalog().tool_test_run("slim-q-gen-tiny-tool", verbose=True)
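Continuing from the snippet above, the other two parameters described earlier can be passed through the same function_call interface; a brief sketch (the sample passage is illustrative and sampled outputs will vary):
# generate a 'yes-no' question from the same passage
print(model.function_call(text_sample, params=['boolean']))

# generate a multiple choice question
print(model.function_call(text_sample, params=['multiple choice']))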
Note: please review config.json in the repository for prompt template information, details on the model, and the full test set.
Model Card Contact
Darren Oberst & llmware team