llm.create_chat_completion(
    messages=[
        {"role": "user", "content": "What is the capital of France?"}
    ]
)

FRAME NL β Intent Compiler (English)
This model converts natural language into structured intent JSON:
{ "intent": "string", "params": { "key": "value" } }
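For reference, the schema above could be typed like this (illustrative only, not part of this repo):

```python
from typing import TypedDict

class Intent(TypedDict):
    """Structured intent emitted by the model."""
    intent: str             # e.g. "message.send"
    params: dict[str, str]  # e.g. {"to": "bob", "text": "hello"}

example: Intent = {
    "intent": "message.send",
    "params": {"to": "bob", "text": "hello"},
}
```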
Important:
- The model is NOT trusted for correctness
- Runtime MUST enforce substring validation
- Runtime MUST compute missing params
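As a minimal sketch of the substring validation the runtime must enforce (a hypothetical helper, not the actual FRAME runtime):

```python
import json

def validate_intent(raw_json: str, user_input: str) -> dict:
    """Substring validation: drop any param value that does not
    literally appear in the user's input (the model is untrusted)."""
    data = json.loads(raw_json)
    params = data.get("params", {})
    kept = {k: v for k, v in params.items() if str(v) in user_input}
    return {"intent": data.get("intent", ""), "params": kept}

# The model hallucinated "text": "five" for "send bob 5 dollars";
# "five" is not a substring of the input, so validation drops it.
cleaned = validate_intent(
    '{"intent":"payment.send","params":{"to":"bob","text":"five"}}',
    "send bob 5 dollars",
)
print(cleaned)  # {'intent': 'payment.send', 'params': {'to': 'bob'}}
```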
Example:
Input:
send bob hello
Output:
{ "intent": "message.send", "params": { "to": "bob", "text": "hello" } }
Note: the model is intentionally imperfect; the FRAME runtime is expected to correct its output over time.
Run locally (llama.cpp)
Requirements: llama.cpp built (llama-cli binary available)
Example:
git clone https://huggingface.co/frameprotocol/frame-intent-english
/path/to/llama-cli \
-m model.gguf \
-p "send bob 5 dollars" \
-n 100 \
--temp 0.0
Expected output (approx):
{"intent":"payment.send","params":{"to":"bob","text":"5"}}
Run with validation (this repo):
python infer.py "send bob 5 dollars"
Expected output:
{"intent":"payment.send","params":{"to":"bob"}}
Notes:
- Output is strict JSON only
- Params not present in input are removed by validation
How it works
flowchart LR
A[Natural language input] --> B[GGUF model\nllama-cli]
B --> C[Raw JSON output]
C --> D[Validation\ninfer.py]
D --> E[Cleaned intent JSON]
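The pipeline in the flowchart can be sketched end to end. This is an assumption-laden sketch, not the actual `infer.py`: the binary path is the placeholder from above, and the JSON extraction simply takes the first `{` through the last `}` of llama-cli's stdout.

```python
import json
import subprocess

def extract_json(raw: str) -> dict:
    """Pull the {...} object out of llama-cli's stdout, which may
    also echo the prompt around the model's JSON output."""
    start, end = raw.find("{"), raw.rfind("}") + 1
    return json.loads(raw[start:end])

def compile_intent(text: str, llama_cli: str = "/path/to/llama-cli",
                   model: str = "model.gguf") -> dict:
    """Natural language -> GGUF model -> raw JSON -> validation,
    mirroring the flowchart stages."""
    proc = subprocess.run(
        [llama_cli, "-m", model, "-p", text, "-n", "100", "--temp", "0.0"],
        capture_output=True, text=True, check=True,
    )
    data = extract_json(proc.stdout)
    # Substring validation: drop params not literally present in the input.
    data["params"] = {k: v for k, v in data.get("params", {}).items()
                     if str(v) in text}
    return data
```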
# !pip install llama-cpp-python
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="frameprotocol/frame-intent-english",
    filename="model.gguf",
)