phi4_adaptableIE_v2-gguf: GGUF
This model was finetuned and converted to GGUF format using Unsloth.
Example usage:
- For text-only LLMs: `./llama.cpp/llama-cli -hf FinaPolat/phi4_adaptableIE_v2-gguf --jinja`
- For multimodal models: `./llama.cpp/llama-mtmd-cli -hf FinaPolat/phi4_adaptableIE_v2-gguf --jinja`
Available model files:
- `FinaPolat/phi4_adaptableIE_v2-gguf`
Ollama
An Ollama Modelfile is included for easy deployment.
Please see: https://github.com/EnexaProject/phi4-ie-demo
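The Modelfile contents are not reproduced here; as a minimal sketch, one for this GGUF might look like the following (the weights filename and parameter value are assumptions, not taken from the released Modelfile):

```
# Hypothetical weights filename -- replace with the actual GGUF file from the repo
FROM ./phi4_adaptableIE_v2.Q4_K_M.gguf

# Mirror the system prompt used during fine-tuning (see Prompting Strategy below)
SYSTEM "You are a helpful AI assistant specializing in Information Extraction tasks such as Named Entity Recognition and Relation Extraction. Follow the instructions given by the user."

# Deterministic decoding suits strict structured extraction
PARAMETER temperature 0
```

Register and run it with `ollama create phi4-adaptable-ie -f Modelfile` followed by `ollama run phi4-adaptable-ie` (the model name is arbitrary).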
This model was trained 2x faster with Unsloth.

Phi-4-AdaptableIE: Efficient Adaptive Knowledge Graph Extraction
A GGUF version of this model is available at: https://huggingface.co/FinaPolat/phi4_adaptableIE_v2-gguf
Phi-4-AdaptableIE is a specialized 14.7B-parameter Small Language Model (SLM) optimized via Supervised Fine-Tuning (SFT) for high-precision joint Named Entity Recognition (NER) and Relation Extraction (RE).
Unlike traditional multi-stage pipelines that are prone to cascading error propagation, this model performs entity identification and relational mapping in a single cohesive pass. It is designed to be ontology-adaptive, allowing it to conform to dynamic, unseen schemas at inference time through a specialized Structured Prompt Architecture.
Model Highlights
- Joint Extraction: Unified NER + RE reducing pipeline complexity.
- Ontology-Adaptive: Zero-shot adaptation to diverse domains (Astronomy, Music, Healthcare, etc.) via dynamic schema variables.
- Local & Private: Optimized for local CPU-only inference via GGUF/Ollama (`FinaPolat/phi4_adaptableIE_v2-gguf`), ensuring data sovereignty without external API dependencies.
- Instruction Aligned: Fine-tuned to follow strict negative constraints, ensuring zero conversational filler in outputs.
Methodology
The model was fine-tuned using QLoRA on the WebNLG subset of the Text2KGBench benchmark. The training process focused on Conversational Alignment, ensuring the model treats extraction as a strict logical mapping:
`Prompt = f(task, schema, example, text)`
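The mapping above can be sketched as a small helper that fills the user-prompt template with the runtime variables. The function name and the condensed template wording are illustrative; the full template appears in the Prompting Strategy section:

```python
def build_user_prompt(task: str, schema: str, example: str, text: str,
                      output_format: str = "list of triples") -> str:
    """Compose the extraction prompt as Prompt = f(task, schema, example, text).

    The wording is condensed from the user-prompt template; only the
    slot structure matters for this sketch.
    """
    return (
        f"The task at hand is {task}.\n"
        f"Here is an example of task execution:\n{example}\n"
        f"Extract the information in the following format: `{output_format}`.\n"
        "If no matching entities are found, return an empty list: [].\n"
        "Please provide only the extracted information without any explanations.\n"
        f"Schema: {schema}\n"
        f"Text: {text}"
    )

# Illustrative values: schema, example, and text are made up for the demo.
prompt = build_user_prompt(
    task="joint entity and relation extraction",
    schema='["capitalOf", "locatedIn"]',
    example='Text: "Paris is the capital of France." -> [["Paris", "capitalOf", "France"]]',
    text="Amsterdam is the capital of the Netherlands.",
)
```

Because the schema is passed in as a variable rather than baked into the weights, swapping in a new ontology at inference time requires no retraining.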
Prompting Strategy
To achieve high-fidelity extraction, the model requires a specific prompt structure.
1. System Prompt
{
"role": "system",
"content": "You are a helpful AI assistant specializing in Information Extraction tasks such as Named Entity Recognition and Relation Extraction. Follow the instructions given by the user."
}
2. User Prompt Template
Information Extraction is the process of automatically identifying and extracting structured information from unstructured text data... [Context] ...
Always extract numbers, dates, and currency values regardless of the specific task.
The task at hand is {task}.
Here is an example of task execution:
{example}
Analyze the text and targets carefully, identify relevant information.
Extract the information in the following format: `{output_format}`.
If no matching entities are found, return an empty list: [].
Please provide only the extracted information without any explanations.
Schema: {schema}
Text: {inputs}
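Putting the two parts together, the chat payload can be assembled as below. The schema, input text, and the sample reply are illustrative assumptions (not real model output); the triple format shown is one plausible instantiation of `{output_format}`:

```python
import json

# System prompt, verbatim from the template above.
system_msg = {
    "role": "system",
    "content": ("You are a helpful AI assistant specializing in Information "
                "Extraction tasks such as Named Entity Recognition and Relation "
                "Extraction. Follow the instructions given by the user."),
}

# User prompt with the template slots filled in (condensed for the sketch).
user_msg = {
    "role": "user",
    "content": (
        "The task at hand is relation extraction.\n"
        'Extract the information in the following format: '
        '`[["subject", "relation", "object"], ...]`.\n'
        "If no matching entities are found, return an empty list: [].\n"
        "Please provide only the extracted information without any explanations.\n"
        'Schema: ["capitalOf"]\n'
        "Text: Amsterdam is the capital of the Netherlands."
    ),
}
messages = [system_msg, user_msg]

# Because the model is aligned to emit only the structure (no conversational
# filler), the reply should parse directly. Sample reply for illustration:
reply = '[["Amsterdam", "capitalOf", "Netherlands"]]'
triples = json.loads(reply)
```

The `messages` list can be passed to any chat-completion backend that serves the GGUF (e.g. a llama.cpp or Ollama server); parsing failures on the reply are a useful signal that the prompt structure was not followed.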