# Rukun-Qwen-32B
Rukun-Qwen-32B is an openly released fine-tune of Qwen2.5-32B-Instruct, aligned with the Rukun Negara (the National Principles of Malaysia).
Live Demo: https://rukunnegara.ai
The model preserves the strong general reasoning and multilingual capabilities of Qwen2.5 while adding civic alignment, cultural sensitivity, and safer response behavior in Malaysian social, cultural, and governance contexts.
## Model Summary
| Attribute | Description |
|---|---|
| Base Model | Qwen/Qwen2.5-32B-Instruct |
| Parameters | 32B |
| Fine-Tuning | LoRA (merged adapters) |
| Precision | BF16 |
| Max Context | 32K tokens (trained and validated up to 8K) |
| Languages | English, Malay, Chinese, Tamil |
| License | Responsible AI License (OpenRAIL) |
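At BF16, the 32B parameters alone occupy roughly 64 GB of memory, so single-GPU inference usually calls for quantization. Below is a minimal sketch using the `bitsandbytes` integration in `transformers`; the 4-bit settings are illustrative assumptions, not published recommendations:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

# Illustrative 4-bit NF4 quantization; adjust to your hardware budget.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = AutoModelForCausalLM.from_pretrained(
    "Entermind/Rukun-Qwen-32B",
    quantization_config=bnb_config,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("Entermind/Rukun-Qwen-32B")
```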
## Intended Use
Rukun-Qwen-32B is intended for general-purpose AI assistance, with particular emphasis on Malaysian civic and cultural contexts.
### Supported Use Cases
- General question answering and reasoning
- Writing, summarization, and multilingual translation
- Coding assistance and technical explanations
- Educational and informational use
- Malaysia-specific cultural, historical, and social topics
## Civic & Safety Alignment
Model responses are guided by the five principles of the Rukun Negara:
- Kepercayaan kepada Tuhan – Belief in God
- Kesetiaan kepada Raja dan Negara – Loyalty to King and Country
- Keluhuran Perlembagaan – Supremacy of the Constitution
- Kedaulatan Undang-undang – Rule of Law
- Kesopanan dan Kesusilaan – Courtesy and Morality
## Performance Characteristics
### Multilingual Capability
- Strong performance in English and Malay
- Robust handling of Chinese and Tamil prompts
- Improved code-switching and local phrasing
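A quick way to exercise these claims is to send the same kind of question in each supported language. Here is a smoke-test sketch using the `transformers` text-generation pipeline; the prompts are illustrative, and the chat-message input format assumes a recent `transformers` release:

```python
from transformers import pipeline

chat = pipeline(
    "text-generation",
    model="Entermind/Rukun-Qwen-32B",
    torch_dtype="auto",
    device_map="auto",
)

prompts = [
    "Terangkan maksud Keluhuran Perlembagaan.",        # Malay
    "Explain the Rule of Law principle in Malaysia.",  # English
    "请简要介绍马来西亚国家原则。",                      # Chinese
]
for p in prompts:
    out = chat([{"role": "user", "content": p}], max_new_tokens=256)
    # The pipeline returns the full conversation; the last message is the reply.
    print(out[0]["generated_text"][-1]["content"])
```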
### Alignment Behavior
- Refuses explicit hate speech and racially inflammatory content
- Avoids outputs that promote violence or social unrest
- Responds cautiously to politically or religiously sensitive prompts
### Refusal Philosophy
Rukun-Qwen-32B applies proportional, context-sensitive refusals focused on harm reduction and civic responsibility.
## Training Overview (High-Level)
The model was fine-tuned using a curated multilingual instruction dataset with emphasis on de-escalation and safer handling of sensitive topics.
No private, proprietary, or personal data was intentionally included.
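The exact training configuration has not been published. For orientation only, a LoRA fine-tune of this kind is commonly set up with the `peft` library along the following lines; every hyperparameter below is an assumption, not the configuration actually used:

```python
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen2.5-32B-Instruct", torch_dtype="auto", device_map="auto"
)

# Hypothetical adapter settings, for illustration only.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, lora_config)
model.print_trainable_parameters()
```

After training, `merge_and_unload()` folds the adapters back into the base weights, consistent with the "merged adapters" note in the summary table.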
## Usage

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Entermind/Rukun-Qwen-32B"

# Load the merged model in its native precision and shard it across available GPUs.
model = AutoModelForCausalLM.from_pretrained(
    model_name, torch_dtype="auto", device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

prompt = "Explain the principles of Rukun Negara."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=512)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
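Because the base model is instruction-tuned for chat, wrapping the prompt in the tokenizer's chat template generally yields better-formatted answers. A sketch continuing from the snippet above, assuming the tokenizer inherits the Qwen2.5 chat template:

```python
# Reuse `model` and `tokenizer` from the snippet above.
messages = [{"role": "user", "content": "Explain the principles of Rukun Negara."}]
text = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
inputs = tokenizer(text, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=512)
# Decode only the newly generated tokens, not the echoed prompt.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```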
## Limitations
This model may produce incorrect or biased outputs. It does not provide legal, medical, or professional advice.
## License
Released under the Responsible AI License (OpenRAIL). See the LICENSE file for full terms.