akeel-cot

Research Prototype: Thought Injection for Grounded Reasoning

A QRK Labs Research Model



Overview

akeel-cot is a research prototype exploring Thought Injection, a novel approach to retrieval-augmented generation in which the model learns to request external knowledge mid-generation using explicit <knowledge> tags.

Unlike traditional RAG (which retrieves before generation begins) or tool use (which requires dedicated tool-calling training), Thought Injection allows the model to:

  1. Reason about what it knows and doesn't know
  2. Request specific information at the moment it's needed
  3. Integrate retrieved context seamlessly into its response

This model is part of QRK Labs' research into human-centric AI systems.

How It Works

User: What is the capital of France?

Model: <think>
The user is asking about the capital of France. This is common knowledge.
</think>

The capital of France is Paris.

User: What were QRK Labs' Q4 2025 revenues?

Model: <think>
This is asking for specific financial data I don't have. I need to retrieve this.
<knowledge>QRK Labs Q4 2025 revenue financial results</knowledge>
<k_res>QRK Labs reported Q4 2025 revenues of $2.3M, up 45% YoY...</k_res>
Based on the retrieved information, I can now answer.
</think>

QRK Labs reported Q4 2025 revenues of $2.3 million, representing a 45% year-over-year increase.
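
The injection step in this transcript has to be performed by the serving stack, not the model: when generation emits a closing </knowledge> tag, the server pauses decoding, runs the query, splices the result back in as <k_res> content, and resumes. A minimal sketch of that splice, where the retrieve stub, its toy corpus, and the function names are illustrative assumptions rather than part of the released model:

```python
import re

# The tag names match the transcript above.
KNOWLEDGE_RE = re.compile(r"<knowledge>(.*?)</knowledge>", re.DOTALL)

def retrieve(query: str) -> str:
    """Stub retriever: a real deployment would query a search index
    or vector store here. The corpus below is illustrative only."""
    corpus = {
        "QRK Labs Q4 2025 revenue financial results":
            "QRK Labs reported Q4 2025 revenues of $2.3M, up 45% YoY",
    }
    return corpus.get(query.strip(), "No results found.")

def inject_knowledge(generated_so_far: str) -> str:
    """If the model has just emitted a complete <knowledge> query and no
    result has been injected yet, append the retrieved passage wrapped
    in <k_res> tags so decoding can resume with the evidence in context."""
    match = KNOWLEDGE_RE.search(generated_so_far)
    if match and "<k_res>" not in generated_so_far:
        passage = retrieve(match.group(1))
        return generated_so_far + f"\n<k_res>{passage}</k_res>\n"
    return generated_so_far
```

In a live loop, the serving code would call inject_knowledge on the partial output each time generation stops at a </knowledge> boundary, then feed the augmented text back to the model so it can finish the <think> block and produce the final answer.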

Architecture

  • Base Model: Qwen3-0.6B
  • Training: Fine-tuned on thought injection reasoning traces
  • Format: ChatML with <think>, <knowledge>, and <k_res> tags
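
When experimenting outside a chat-templating library, the ChatML wrapper can be assembled by hand. A minimal sketch, assuming the standard Qwen-style <|im_start|>/<|im_end|> delimiters and an illustrative system prompt; check the repository's chat template for the exact format:

```python
def build_chatml_prompt(user_message: str,
                        system: str = "You are a helpful assistant.") -> str:
    """Assemble a ChatML prompt in the style this card describes.
    The default system prompt is an assumption, not taken from the repo."""
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user_message}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )
```

The model's <think> and <knowledge> output then appears inside the assistant turn, as in the transcript above.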

Intended Use

This is a research prototype for exploring thought injection techniques. It is intended for:

  • Academic research on RAG and reasoning
  • Experimentation with knowledge-grounded generation
  • Understanding model uncertainty and knowledge boundaries

Not intended for production use.

Limitations

  • Small model size (0.6B) limits general capabilities
  • Requires compatible inference infrastructure to inject retrieved content
  • Research prototype, not optimized for real-world deployment
  • May hallucinate or generate incorrect <knowledge> queries

Citation

If you use this model in your research, please cite:

@misc{akeel-cot-2026,
  author = {QRK Labs},
  title = {Akeel-CoT: Thought Injection for Grounded Reasoning},
  year = {2026},
  publisher = {Hugging Face},
  url = {https://huggingface.co/qrk-labs/akeel-cot}
}

Built by QRK Labs - Human-centric AI