# EPLAN Electric P8 Assistant (GGUF)

Qwen2.5-3B-Instruct fine-tuned with LoRA on 5,116 EPLAN Electric P8 Q&A pairs.


## πŸš€ Quickstart

### llama-cpp-python

```python
from llama_cpp import Llama

# Download the quantized model from the Hub and load it
llm = Llama.from_pretrained(
    repo_id="covaga/eplan-assistant-v2-gguf",
    filename="eplan-assistant-v2-q4_k_m.gguf",
    n_ctx=4096
)

output = llm.create_chat_completion(messages=[...])
```
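The `messages` argument takes OpenAI-style chat dicts. A minimal sketch of a helper that prepends a system prompt before the user question (the system prompt wording here is an assumption for illustration, not something shipped with the model):

```python
# Hypothetical helper for building the messages list passed to
# create_chat_completion(); the system prompt text is an assumption.

def build_messages(question, history=None):
    """Return an OpenAI-style messages list with an EPLAN system prompt."""
    messages = [{
        "role": "system",
        "content": (
            "You are an assistant for EPLAN Electric P8. "
            "Answer questions about the EPLAN API and C# scripting."
        ),
    }]
    if history:
        messages.extend(history)  # prior user/assistant turns, if any
    messages.append({"role": "user", "content": question})
    return messages

msgs = build_messages("How do I iterate over all Function objects in a project?")
# msgs can be passed directly: llm.create_chat_completion(messages=msgs)
```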

### Ollama

```bash
ollama pull covaga/eplan-assistant-v2-gguf
ollama run covaga/eplan-assistant-v2-gguf
```

## 🎯 Use Cases

- βœ… EPLAN API C# scripting
- βœ… Project automation & batch processing
- βœ… Troubleshooting NullReferenceException
- βœ… Function vs FunctionBase differences
- βœ… PDF/Excel export scripts
- βœ… Electrical engineering best practices
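For batch automation, the use cases above can be turned into simple prompt templates and filled per task; a minimal sketch (the template wording is illustrative only, not taken from the training data):

```python
# Hypothetical prompt templates covering some of the listed use cases;
# the exact wording is an assumption for illustration.
TEMPLATES = {
    "api_scripting": "Write an EPLAN API C# script that {task}.",
    "troubleshooting": (
        "I get a NullReferenceException when {context}. "
        "What is the likely cause?"
    ),
    "export": "Generate a script that exports the current project to {fmt}.",
}

def render_prompt(kind, **fields):
    """Fill one template; raises KeyError for unknown use-case kinds."""
    return TEMPLATES[kind].format(**fields)

prompt = render_prompt("export", fmt="PDF")
print(prompt)  # Generate a script that exports the current project to PDF.
```

Each rendered prompt can then be sent through `create_chat_completion` in a loop for batch processing.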

## πŸ“„ Citation

```bibtex
@misc{eplan-assistant-gguf,
  author = {covaga},
  title = {EPLAN Electric P8 Assistant GGUF},
  year = {2026},
  publisher = {Hugging Face},
  url = {https://huggingface.co/covaga/eplan-assistant-v2-gguf}
}
```
## πŸ“Š Model Details

- Base model: Qwen/Qwen2.5-3B
- Architecture: qwen2
- Model size: 3B params
- Quantization: 4-bit (Q4_K_M, GGUF)