---
license: apache-2.0
language:
  - en
tags:
  - thought-injection
  - rag
  - research
  - qwen
  - qrk-labs
pipeline_tag: text-generation
library_name: transformers
base_model: Qwen/Qwen3-0.6B
---

# akeel-cot

**Research Prototype: Thought Injection for Grounded Reasoning**

*A QRK Labs Research Model*



## Overview

**akeel-cot** is a research prototype exploring **Thought Injection**, an approach to retrieval-augmented generation in which the model learns to request external knowledge mid-generation using explicit `<knowledge>` tags.

Unlike traditional RAG (which retrieves before generation) or tool use (which requires special training), Thought Injection allows the model to:

1. Reason about what it knows and doesn't know
2. Request specific information at the moment it's needed
3. Integrate retrieved context seamlessly into its response
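The loop implied by these three steps can be sketched as a small inference wrapper. This is a hypothetical illustration, not the card's actual serving code: `run_with_injection`, `mock_generate`, and `mock_retrieve` are invented names, and a real deployment would drive the model through `transformers` with a stopping criterion on the closing `</knowledge>` tag.

```python
import re

KNOWLEDGE_RE = re.compile(r"<knowledge>(.*?)</knowledge>", re.DOTALL)

def mock_generate(prompt: str) -> str:
    """Stand-in for the fine-tuned model: asks for knowledge once,
    then answers after retrieved context has been injected."""
    if "</k_res>" in prompt:
        return "Based on the retrieved information, the answer is 42."
    return "<think>I lack this data.\n<knowledge>example query</knowledge>"

def mock_retrieve(query: str) -> str:
    """Stand-in retriever; a real system would query a search index."""
    return f"[Retrieved: result for '{query}']"

def run_with_injection(prompt, generate, retrieve, max_rounds=4):
    """Drive generation, injecting retrieved context (terminated with
    </k_res>) whenever the model emits a <knowledge> request."""
    transcript = prompt
    for _ in range(max_rounds):
        chunk = generate(transcript)
        transcript += chunk
        match = KNOWLEDGE_RE.search(chunk)
        if match is None:
            break  # no pending knowledge request: generation complete
        transcript += "\n" + retrieve(match.group(1)) + "\n</k_res>\n"
    return transcript

out = run_with_injection("User: question\nModel: ", mock_generate, mock_retrieve)
```

The wrapper, not the model, owns the retrieval round-trip: the model only emits the request tag and resumes once context appears in its prompt.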

This model is part of QRK Labs' research into human-centric AI systems.

## How It Works

```
User: What is the capital of France?

Model: <think>
The user is asking about the capital of France. This is common knowledge.
</think>

The capital of France is Paris.
```

```
User: What were QRK Labs' Q4 2025 revenues?

Model: <think>
This is asking for specific financial data I don't have. I need to retrieve this.
<knowledge>QRK Labs Q4 2025 revenue financial results</knowledge>
[Retrieved: QRK Labs reported Q4 2025 revenues of $2.3M, up 45% YoY...]
</k_res>
Based on the retrieved information, I can now answer.
</think>

QRK Labs reported Q4 2025 revenues of $2.3 million, representing a 45% year-over-year increase.
```
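A serving layer has to decide, from a partial transcript like the one above, whether an unanswered `<knowledge>` request is pending. A minimal sketch follows; the helper name `pending_knowledge_query` and the "answered if a `</k_res>` follows" heuristic are assumptions, using only the tag format shown in the example.

```python
import re

def pending_knowledge_query(partial_output: str):
    """Return the latest <knowledge> query that has not yet been
    answered with a </k_res> injection, or None."""
    requests = list(re.finditer(r"<knowledge>(.*?)</knowledge>",
                                partial_output, re.DOTALL))
    if not requests:
        return None
    last = requests[-1]
    # Treat the request as already answered if a </k_res> closes it later.
    if "</k_res>" in partial_output[last.end():]:
        return None
    return last.group(1).strip()

text = "<think>...\n<knowledge>QRK Labs Q4 2025 revenue financial results</knowledge>"
query = pending_knowledge_query(text)
```

On the transcript above this yields the query string the model emitted, which the serving layer would pass to its retriever.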

## Architecture

- **Base Model:** Qwen3-0.6B
- **Training:** Fine-tuned on thought-injection reasoning traces
- **Format:** ChatML with `<think>`, `<knowledge>`, and `</k_res>` tags
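For concreteness, the ChatML framing can be illustrated with a hand-rolled prompt builder. This is only a sketch of the ChatML structure; in practice the tokenizer's `apply_chat_template` method from `transformers` should be used instead, and `build_chatml_prompt` is an invented helper name.

```python
def build_chatml_prompt(user_message: str) -> str:
    """Assemble a ChatML-style prompt: each turn is wrapped in
    <|im_start|>role ... <|im_end|>, ending with an open assistant turn."""
    return (
        "<|im_start|>user\n"
        f"{user_message}<|im_end|>\n"
        "<|im_start|>assistant\n"
    )

prompt = build_chatml_prompt("What were QRK Labs' Q4 2025 revenues?")
```

The model then continues from the open assistant turn, emitting its `<think>` block (and, when needed, a `<knowledge>` request) as completion text.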

## Intended Use

This is a research prototype for exploring thought injection techniques. It is intended for:

- Academic research on RAG and reasoning
- Experimentation with knowledge-grounded generation
- Understanding model uncertainty and knowledge boundaries

**Not intended for production use.**

## Limitations

- Small model size (0.6B parameters) limits general capabilities
- Requires compatible inference infrastructure to inject retrieved content
- Research prototype: not optimized for real-world deployment
- May hallucinate or generate incorrect `<knowledge>` queries

## Citation

If you use this model in your research, please cite:

```bibtex
@misc{akeel-cot-2026,
  author = {QRK Labs},
  title = {Akeel-CoT: Thought Injection for Grounded Reasoning},
  year = {2026},
  publisher = {Hugging Face},
  url = {https://huggingface.co/qrk-labs/akeel-cot}
}
```

## Links


Built with ☁️ by QRK Labs — Human-centric AI