How to use with the llama-cpp-python library
# !pip install llama-cpp-python

from llama_cpp import Llama

# Download the GGUF weights from the Hub and load them for inference
llm = Llama.from_pretrained(
	repo_id="darkai-1/darkit",
	filename="darkit-1.5-pro.gguf",
)

# Request a chat completion (messages follow the OpenAI-style schema)
response = llm.create_chat_completion(
	messages=[
		{"role": "user", "content": "Write a Python function that reverses a string."},
	]
)
print(response["choices"][0]["message"]["content"])
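If you have already downloaded the GGUF file, you can also load it directly from disk instead of pulling from the Hub. The path and runtime settings below are illustrative assumptions, not values published for this model:

```python
from llama_cpp import Llama

# Load a local GGUF file directly (path and settings are examples)
llm = Llama(
    model_path="./darkit-1.5-pro.gguf",  # local path to the downloaded weights
    n_ctx=8192,        # context window; the card advertises up to 128K tokens
    n_gpu_layers=-1,   # offload all layers to the GPU when one is available
)
```

Raising `n_ctx` toward the advertised 128K limit increases KV-cache memory use accordingly, so start smaller and scale up as your hardware allows.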


DarkIT-1.5-Pro is a high-performance large language model designed for advanced programming, reasoning, and natural conversation.

It is optimized to deliver strong results in:

  • Code generation and debugging
  • Logical reasoning & problem solving
  • Instruction following in complex tasks
  • No request is refused

Key Specifications

  • Model Size: 4B parameters (optimized inference build)
  • Context Length: 128K tokens
  • Format: GGUF (quantized for efficient local deployment)
  • Target Use: Local AI inference (CPU / GPU)
  • Optimization: Fine-tuned + merged high-quality training pipeline
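The 4B-parameter size and GGUF quantization above translate into rough memory footprints like the following. This is a back-of-envelope sketch assuming typical average bits-per-weight for each quantization level (not measured for this model), and it ignores KV-cache and runtime overhead:

```python
# Rough weight-size estimate for a 4B-parameter GGUF model at common
# quantization widths. Bits-per-weight values are approximate averages
# (assumptions), not measurements for this specific model.

PARAMS = 4e9  # 4B parameters (from the model card)

def approx_size_gb(bits_per_weight, params=PARAMS):
    """Approximate on-disk / in-memory weight size in GiB."""
    return params * bits_per_weight / 8 / 2**30

for name, bpw in [("Q4_K_M", 4.8), ("Q5_K_M", 5.7), ("Q8_0", 8.5), ("F16", 16.0)]:
    print(f"{name}: ~{approx_size_gb(bpw):.1f} GiB")
```

By this estimate a 4-bit quant fits comfortably on modest consumer hardware, which is consistent with the card's focus on local CPU/GPU inference.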

Performance Notes

  • Optimized for speed and memory efficiency
  • Stable output generation across long prompts
  • Strong balance between creativity and correctness
  • Suitable for both chat and developer workflows

⚠️ Notes

  • Designed for inference-only deployment
  • Performance may vary depending on hardware and quantization level
  • Best results with structured prompts
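Following the "best results with structured prompts" note above, a structured request can be sketched as below. The system text and task framing are illustrative assumptions, not an official prompt format for this model:

```python
# Build an OpenAI-style chat payload with a system role plus a user message
# that states the task explicitly and attaches optional code context.
# The wording here is an example, not a prescribed format.

def build_messages(task, code=""):
    """Return a messages list suitable for create_chat_completion()."""
    user = f"Task: {task}"
    if code:
        user += f"\n\nCode:\n{code}"
    return [
        {"role": "system", "content": "You are a precise coding assistant. Answer step by step."},
        {"role": "user", "content": user},
    ]

messages = build_messages(
    "Find and fix the bug in this function.",
    "def add(a, b):\n    return a - b",
)
# Pass the result to llm.create_chat_completion(messages=messages)
```

Keeping the task statement and code context in clearly separated sections tends to make instruction-following more reliable on small local models.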

About DarkAI

DarkAI is an independent AI research initiative focused on building efficient, powerful, and scalable language models for real-world applications.


Model Details

  • Architecture: qwen3
  • Format: GGUF
  • Model size: 4B params