neutrino2211 committed
Commit 674c127 · verified · 1 Parent(s): 00bd5d8

Add model card with Muqarnas branding

Files changed (1):
  1. README.md +134 -0

README.md ADDED
@@ -0,0 +1,134 @@
---
license: apache-2.0
language:
- en
tags:
- thought-injection
- rag
- research
- qwen
- qrk-labs
pipeline_tag: text-generation
library_name: transformers
base_model: Qwen/Qwen3-0.6B
---

<div align="center">

<!-- Muqarnas Layers Logo -->
<svg width="120" height="120" viewBox="0 0 64 64" fill="none" xmlns="http://www.w3.org/2000/svg">
  <path d="M32 4 L4 32 L32 60 L60 32 Z" fill="none" stroke="#fafafa" stroke-width="0.6" opacity="0.3"/>
  <path d="M32 10 L10 32 L32 54 L54 32 Z" fill="none" stroke="#fafafa" stroke-width="0.8" opacity="0.5"/>
  <path d="M32 16 L16 32 L32 48 L48 32 Z" fill="none" stroke="#fafafa" stroke-width="1" opacity="0.7"/>
  <path d="M32 22 L22 32 L32 42 L42 32 Z" fill="none" stroke="#fafafa" stroke-width="1.2"/>
  <line x1="32" y1="4" x2="32" y2="22" stroke="#fafafa" stroke-width="0.5" opacity="0.4"/>
  <line x1="60" y1="32" x2="42" y2="32" stroke="#fafafa" stroke-width="0.5" opacity="0.4"/>
  <line x1="32" y1="60" x2="32" y2="42" stroke="#fafafa" stroke-width="0.5" opacity="0.4"/>
  <line x1="4" y1="32" x2="22" y2="32" stroke="#fafafa" stroke-width="0.5" opacity="0.4"/>
  <line x1="18" y1="18" x2="27" y2="27" stroke="#fafafa" stroke-width="0.5" opacity="0.3"/>
  <line x1="46" y1="18" x2="37" y2="27" stroke="#fafafa" stroke-width="0.5" opacity="0.3"/>
  <line x1="46" y1="46" x2="37" y2="37" stroke="#fafafa" stroke-width="0.5" opacity="0.3"/>
  <line x1="18" y1="46" x2="27" y2="37" stroke="#fafafa" stroke-width="0.5" opacity="0.3"/>
  <circle cx="32" cy="32" r="6" fill="none" stroke="#fafafa" stroke-width="0.8"/>
  <circle cx="32" cy="32" r="2.5" fill="#fafafa"/>
</svg>

# akeel-cot

**Research Prototype — Thought Injection for Grounded Reasoning**

*A QRK Labs Research Model*

[![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://opensource.org/licenses/Apache-2.0)
[![Model Size](https://img.shields.io/badge/Parameters-0.6B-green.svg)]()

</div>

---

## Overview

**akeel-cot** is a research prototype exploring *Thought Injection* — a novel approach to retrieval-augmented generation in which the model learns to request external knowledge mid-generation using explicit `<knowledge>` tags.

Unlike traditional RAG (which retrieves before generation) or tool-use (which requires special training), Thought Injection allows the model to:

1. **Reason** about what it knows and doesn't know
2. **Request** specific information at the moment it's needed
3. **Integrate** retrieved context seamlessly into its response

This model is part of QRK Labs' research into human-centric AI systems.

## How It Works

```
User: What is the capital of France?

Model: <think>
The user is asking about the capital of France. This is common knowledge.
</think>

The capital of France is Paris.
```

```
User: What were QRK Labs' Q4 2025 revenues?

Model: <think>
This is asking for specific financial data I don't have. I need to retrieve this.
<knowledge>QRK Labs Q4 2025 revenue financial results</knowledge>
[Retrieved: QRK Labs reported Q4 2025 revenues of $2.3M, up 45% YoY...]
</k_res>
Based on the retrieved information, I can now answer.
</think>

QRK Labs reported Q4 2025 revenues of $2.3 million, representing a 45% year-over-year increase.
```
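The serving loop implied by the second example can be sketched as follows. This is a minimal illustration, not the released inference stack: `fake_model_step` is a scripted stand-in for real decoding (e.g. generation with `</knowledge>` as a stop sequence), and `retrieve` is a hypothetical toy retriever.

```python
import re

def retrieve(query: str) -> str:
    # Hypothetical retriever: look the query up in a toy knowledge store.
    store = {
        "QRK Labs Q4 2025 revenue financial results":
            "QRK Labs reported Q4 2025 revenues of $2.3M, up 45% YoY...",
    }
    return store.get(query, "no results")

def run_with_injection(model_step, prompt: str) -> str:
    """Generate text, pausing at each </knowledge> tag to inject a retrieval."""
    text = prompt
    while True:
        text += model_step(text)  # decode until a stop condition
        match = re.search(r"<knowledge>(.*?)</knowledge>$", text, re.S)
        if match:
            # Inject the retrieved passage plus the </k_res> terminator,
            # then let the model resume from the augmented context.
            text += f"\n[Retrieved: {retrieve(match.group(1))}]\n</k_res>\n"
        else:
            return text

# Scripted stand-in for the model: it first emits a knowledge request,
# then (after seeing the injected </k_res> block) emits the final answer.
def fake_model_step(text: str) -> str:
    if "</k_res>" not in text:
        return ("<think>\nI need to retrieve this.\n"
                "<knowledge>QRK Labs Q4 2025 revenue financial results</knowledge>")
    return ("Based on the retrieved information, I can now answer.\n</think>\n\n"
            "QRK Labs reported Q4 2025 revenues of $2.3 million.")

out = run_with_injection(fake_model_step,
                         "User: What were QRK Labs' Q4 2025 revenues?\n\nModel: ")
```

The key design point this sketch captures is that retrieval happens *inside* the reasoning trace: the serving layer stops at `</knowledge>`, appends the result, and resumes generation from the augmented context.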

## Architecture

- **Base Model:** Qwen3-0.6B
- **Training:** Fine-tuned on thought injection reasoning traces
- **Format:** ChatML with `<think>`, `<knowledge>`, and `</k_res>` tags

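As a rough illustration of the ChatML framing, a turn can be assembled as below. The `<|im_start|>`/`<|im_end|>` delimiters follow the ChatML convention used by Qwen-family models; the exact chat template shipped with this model may differ, so treat this as an assumption.

```python
def format_chatml(user_msg: str, assistant_prefix: str = "") -> str:
    """Wrap a user turn in ChatML delimiters and open the assistant turn,
    optionally seeding it with an in-progress reasoning trace."""
    return (f"<|im_start|>user\n{user_msg}<|im_end|>\n"
            f"<|im_start|>assistant\n{assistant_prefix}")

# An assistant turn paused at a knowledge request, mirroring the
# second example in "How It Works".
partial = ("<think>\nThis needs specific data I don't have.\n"
           "<knowledge>QRK Labs Q4 2025 revenue financial results</knowledge>")
prompt = format_chatml("What were QRK Labs' Q4 2025 revenues?", partial)
```

At this point `prompt` ends with `</knowledge>`, which is where a compatible serving layer would splice in the retrieved passage before resuming decoding.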
## Intended Use

This is a **research prototype** for exploring thought injection techniques. It is intended for:

- Academic research on RAG and reasoning
- Experimentation with knowledge-grounded generation
- Understanding model uncertainty and knowledge boundaries

**Not intended for production use.**

## Limitations

- Small model size (0.6B) limits general capabilities
- Requires compatible inference infrastructure to inject retrieved content
- Research prototype — not optimized for real-world deployment
- May hallucinate or generate incorrect `<knowledge>` queries

## Citation

If you use this model in your research, please cite:

```bibtex
@misc{akeel-cot-2026,
  author    = {QRK Labs},
  title     = {Akeel-CoT: Thought Injection for Grounded Reasoning},
  year      = {2026},
  publisher = {Hugging Face},
  url       = {https://huggingface.co/qrk-labs/akeel-cot}
}
```

## Links

- **QRK Labs:** [qrk.ng](https://qrk.ng)
- **Research:** Coming soon
- **Contact:** research@qrk.ng

---

<div align="center">
<sub>Built with ☁️ by QRK Labs — Human-centric AI</sub>
</div>