Update README.md

tags:
- math
- code
- general-purpose
---

# **OpenRHO-2B-Thinker**

> **OpenRHO-2B-Thinker** is a **general-purpose reasoning model** built to bring stronger reasoning to **edge-deployed large language models (LLMs)** through **reinforcement learning (RL)**. Fine-tuned from **Qwen2-1.5B-Instruct** on the **QwQ distill dataset**, it delivers improvements in logical reasoning, structured problem-solving, and lightweight coding, making it well suited to **resource-constrained environments**.

## **Key Improvements**

1. **Advanced Reasoning via RL**:
   Supports symbolic reasoning, logical deduction, and structured problem-solving with high efficiency, optimized for real-time use on edge systems.

2. **Compact Coding Assistant**:
   Improved understanding of programming paradigms and syntax across Python, JavaScript, C++, and more, with support for in-situ code generation and debugging in embedded coding scenarios.

3. **Error Detection & Correction**:
   Identifies logic errors and malformed data structures (e.g., JSON, XML) and proposes corrections with lightweight inference and minimal latency (see the repair sketch after the Quickstart below).

4. **Instruction Following & Precision**:
   Tuned to follow multi-step instructions with improved contextual memory, giving consistent and precise responses across a variety of prompt types.

5. **Extended Context Compatibility**:
   Maintains support for **128K token inputs** and **8K token outputs** while remaining lean enough for real-time edge usage with low power consumption.

## **Quickstart with Transformers**

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "prithivMLmods/OpenRHO-2B-Thinker"

model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

prompt = "What is a generator function in Python? Explain with an example."
messages = [
    {"role": "system", "content": "You are a helpful and concise AI assistant skilled in programming and reasoning."},
    {"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=512
)
# Strip the prompt tokens so only the newly generated text is decoded
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]

response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
```
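
As a hedged illustration of the error-correction behavior called out under Key Improvements, the snippet below continues from the Quickstart (reusing `model` and `tokenizer`) and asks the model to repair a deliberately malformed JSON string. The `chat` helper and the prompt wording are illustrative, not part of the model's API.

```python
# Hypothetical helper wrapping the same chat-template pipeline as the Quickstart.
def chat(user_prompt: str, max_new_tokens: int = 512) -> str:
    messages = [
        {"role": "system", "content": "You are a precise assistant that fixes malformed data."},
        {"role": "user", "content": user_prompt},
    ]
    text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
    inputs = tokenizer([text], return_tensors="pt").to(model.device)
    output_ids = model.generate(**inputs, max_new_tokens=max_new_tokens)
    # Decode only the tokens generated after the prompt
    new_tokens = output_ids[0][inputs.input_ids.shape[1]:]
    return tokenizer.decode(new_tokens, skip_special_tokens=True)

# Malformed JSON: unquoted key and trailing comma
broken = '{"device": "sensor-01", temp: 21.5,}'
print(chat(f"Fix this malformed JSON and return only the corrected JSON:\n{broken}"))
```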

## **Intended Use**

1. **Edge LLM Applications**:
   Built for embedded AI agents, mobile inference, and low-latency chatbots on constrained hardware.

2. **General-Purpose Reasoning**:
   Effective for real-time logical reasoning, structured deduction, and lightweight problem-solving in everyday applications.

3. **Educational & Programming Tools**:
   Helpful for teaching programming and debugging in interactive, constrained environments (e.g., IoT, robotics kits).

4. **Lightweight Conversational Agents**:
   Enables responsive, intelligent interactions in edge-deployed customer service bots, support kiosks, and automation systems.

5. **Multilingual Mini-NLP Tasks**:
   Supports basic multilingual tasks such as translation, summarization, and information retrieval across multiple languages.

6. **Structured Format Generation**:
   Can generate JSON, Markdown, or tabular outputs in lightweight settings for embedded data workflows (see the sketch after this list).
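
To make the structured-output use case concrete, here is a minimal, hedged sketch that asks the model for JSON conforming to a small schema. It reuses the illustrative `chat` helper defined after the Quickstart; the schema and prompt are examples, not a documented interface.

```python
import json

# Ask for machine-readable output by spelling out the exact schema in the prompt.
schema_prompt = (
    "Summarize the following reading as JSON with keys "
    '"sensor" (string), "celsius" (number), and "status" ("ok" or "alert"). '
    "Return only the JSON object.\n\n"
    "Sensor sensor-07 reported 48.2 degrees Celsius, above the 45-degree alert threshold."
)

raw = chat(schema_prompt, max_new_tokens=128)

# Small models can drift from strict JSON, so validate before trusting the output.
try:
    record = json.loads(raw)
    print(record["sensor"], record["celsius"], record["status"])
except json.JSONDecodeError:
    print("Model output was not valid JSON:", raw)
```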

## **Limitations**

1. **Hardware Requirements (Minimal but Non-Zero)**:
   While designed for edge use, performance is best on mid-range NPUs, GPUs, or specialized accelerators; quantization can lower the footprint further (see the sketch after this list).

2. **Knowledge Cutoff & Real-Time Awareness**:
   Cannot fetch live data or reflect events beyond its training snapshot.

3. **Limited Creative Output**:
   Less effective for creative writing, abstract thinking, or tasks requiring deep imagination.

4. **Prompt Sensitivity**:
   Outputs vary with prompt clarity; structured prompts yield more predictable results.

5. **Inherited Biases**:
   May reflect biases from its pretraining data. Use caution in sensitive or high-stakes domains.
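
As a hedged sketch of how the hardware footprint in Limitation 1 might be reduced, the snippet below loads the model with 4-bit quantization via the `BitsAndBytesConfig` API in `transformers` (this requires the `bitsandbytes` package and a CUDA GPU). Whether this configuration suits a given edge device is an assumption to verify, not a recommendation from the model authors.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_name = "prithivMLmods/OpenRHO-2B-Thinker"

# NF4 4-bit weights with bf16 compute: a common memory-saving configuration.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = AutoModelForCausalLM.from_pretrained(
    model_name,
    quantization_config=bnb_config,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(model_name)
```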