Meet the Coding LLM That Explains Itself — And Knows When It Might Be Wrong

by girish00

A lightweight coding LLM that generates code with explanations, confidence scores, and hallucination checks — built for reliable, real-world use.

## Introducing ConicAI Coding LLM — a lightweight, LoRA-finetuned model built on Qwen2.5-Coder, designed not just to generate code, but to explain and evaluate it.

The model structures each response as:

  • Code
  • Explanation
  • Confidence score
  • Relevancy score
  • Hallucination flag

This makes it far more usable for real-world tasks like debugging, learning, and building AI coding tools.
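As a rough sketch of how that structured output could be consumed downstream — note the field names and dict format here are assumptions for illustration, not the model's documented schema (check the model card for the actual output format):

```python
from dataclasses import dataclass

@dataclass
class ModelResponse:
    code: str
    explanation: str
    confidence: float   # assumed to be in the range 0.0–1.0
    relevancy: float
    hallucination: bool

def parse_response(raw: dict) -> ModelResponse:
    # Assumes the model's output has already been decoded into a dict
    # with these (hypothetical) keys; adapt to the real output format.
    return ModelResponse(
        code=raw["code"],
        explanation=raw["explanation"],
        confidence=float(raw["confidence"]),
        relevancy=float(raw["relevancy"]),
        hallucination=bool(raw["hallucination_flag"]),
    )

resp = parse_response({
    "code": "def add(a, b):\n    return a + b",
    "explanation": "Adds the missing colon in the function definition.",
    "confidence": 0.92,
    "relevancy": 0.88,
    "hallucination_flag": False,
})

# A low confidence score or a raised hallucination flag can route the
# answer to human review instead of being used blindly.
needs_review = resp.hallucination or resp.confidence < 0.5
```

This is what makes the scores actionable: a tool wrapping the model can gate on them rather than trusting every generation equally.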

Efficient, locally runnable, and focused on practical performance — not just raw generation.

🔗 https://huggingface.co/girish00/ConicAI_LLM_model

## Benchmarks

(Benchmark chart shown on the model card.)

If you’re exploring coding LLMs that are not just powerful but also interpretable and usable — this is worth your time.

Try it. Test it.

Try this prompt:

"Fix this code: def add(a,b) return a+b" or "you can paste or generate any code "

Steps:
1) Go to https://huggingface.co/girish00/ConicAI_LLM_model
2) Scroll down and copy the code from "How to Get Started with the Model".
3) Open Colab, a Jupyter notebook, or any IDE and paste that code.
4) Click run and enter the prompt above (or any other prompt).
5) Get the result and see how it explains the output.
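The "How to Get Started" snippet on the model card is the authoritative version. As a hedged sketch of what those steps typically look like with the standard `transformers` API (exact prompt formatting may differ, and if the repo hosts only a LoRA adapter rather than a merged model, it would need to be loaded with `peft` on top of the Qwen2.5-Coder base instead):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "girish00/ConicAI_LLM_model"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# The example prompt from above.
prompt = "Fix this code: def add(a,b) return a+b"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```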
