---
library_name: transformers
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen3-14B/blob/main/LICENSE
pipeline_tag: text-generation
base_model: Qwen/Qwen3-14B
language:
- th
- en
---
# OpenJAI-v1.0
<img src="OpenJAI-logo.jpg" width="313"/>
OpenJAI-v1.0 is an open-source Large Language Model from Jasmine Technology Solution (JTS), designed for high performance in both Thai and English. Using the powerful Qwen3-14B as its foundation, our work focused on enhancing its capabilities for practical applications through meticulous data curation in three key domains: instruction following, long-context understanding, and tool calling.
Comprehensive evaluation results demonstrate that OpenJAI-v1.0 improves upon its base model and outperforms other leading open-source Thai models of comparable size across a diverse suite of benchmarks. Crucially, these gains were achieved without significant degradation of the model's foundational knowledge.
For a complete overview of our dataset, methodology, and benchmarks, please refer to our paper: **[OpenJAI-v1.0: An Open Thai Large Language Model](https://arxiv.org/abs/2510.06847)**.
## OpenJAI-v1.0 Highlights
- **Thai-Centric Excellence**: Specifically finetuned to achieve state-of-the-art performance in both Thai and English.
- **Enhanced Practical Skills**: Built on the robust Qwen3-14B, OpenJAI-v1.0 excels in:
- Complex instruction following
- Long-context understanding (up to 120,000 tokens)
- Reliable tool and function calling
- **Top-Tier Performance**: Outperforms its base model and other leading open-source Thai models of comparable size across a diverse set of benchmarks.
- **Knowledge Retention**: Finetuning enhancements were achieved without significant degradation of the base model's core knowledge, avoiding catastrophic forgetting.
- **Fully Open-Source**: OpenJAI-v1.0 is publicly released to foster research and application development within the Thai AI community.
## Model Performance
| Benchmark/Model | **OpenJAI-v1.0-14B** | Qwen3-14B | Typhoon2.1-gemma3-12b | OpenThaiGPT1.5-14b | GPT-4.1-nano-2025-04-14 |
| :--- | :---: | :---: | :---: | :---: | :---: |
| **Instruction Following** | | | | | |
| IFBench-EN | **32.4** | 29.7 | 27.4 | 30.6 | 28.3 |
| IFBench-TH | **39.4** | 38.1 | 36.5 | 35.4 | 34.9 |
| **Multi-turn Capability** | | | | | |
| MT-Bench-EN | 8.4 | 8.4 | 8.3 | 7.8 | **8.5** |
| MT-Bench-TH | **8.1** | 8.0 | **8.1** | 6.9 | 8.0 |
| **Long-context Understanding** | | | | | |
| MRCR | **18.9** | 18.3 | 16.9 | 16.9 | 16.2 |
| LongBench-v2 | **33.6** | 32.4 | 29.2 | **33.6** | 28.8 |
| **Tool Calling** | | | | | |
| BFCL-v3-EN | **60.5** | 59.2 | 52.2 | 52.9 | 53.1 |
| BFCL-v3-TH | **47.0** | 46.0 | 45.0 | 44.9 | 41.1 |
| **General Knowledge** | | | | | |
| MMLU-ProX-lite-EN | 66.0 | **66.6** | 55.1 | 64.3 | 36.3 |
| MMLU-ProX-lite-TH | 54.7 | **57.5** | 45.2 | 49.3 | 39.8 |
## Quickstart
To get started, we recommend using the latest version of the Hugging Face `transformers` library.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "JTS-AI/OpenJAI-v1.0-14B"
# load the tokenizer and the model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype="auto",
device_map="auto"
)
# prepare the model input
prompt = "แนะนำที่เที่ยวแถวสยามหน่อย"
messages = [
{"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True,
enable_thinking=False # Switches between thinking and non-thinking modes. Default is True.
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
# conduct text completion
generated_ids = model.generate(
**model_inputs,
max_new_tokens=1024
)
output_ids = generated_ids[0][len(model_inputs.input_ids[0]):].tolist()
content = tokenizer.decode(output_ids, skip_special_tokens=True).strip("\n")
print("Content:", content)
```
> [!NOTE]
> OpenJAI-v1.0 is optimized for **non-thinking mode**. While the base model's thinking mode may be accessible, its performance is not guaranteed.
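For deployment, the model can also be exposed through an OpenAI-compatible API, which the tool-calling example in the next section assumes. A possible invocation with vLLM (an assumed command line; any OpenAI-compatible server and port work):

```shell
vllm serve JTS-AI/OpenJAI-v1.0-14B --port 8000
```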
## Tool Calling / Agentic Use
OpenJAI-v1.0 has strong tool-calling capabilities. You can use it with agent frameworks such as `Qwen-Agent` by adapting the model configuration.
To define the available tools, you can use an MCP configuration file, use Qwen-Agent's built-in tools, or integrate your own.
```python
from qwen_agent.agents import Assistant
# Define LLM, pointing to your OpenJAI-v1.0 endpoint
llm_cfg = {
'model': 'JTS-AI/OpenJAI-v1.0-14B',
# Use a custom endpoint compatible with OpenAI API:
'model_server': 'http://localhost:8000/v1', # api_base
'api_key': 'EMPTY',
}
# Define Tools
tools = [
{'mcpServers': { # You can specify the MCP configuration file
'time': {
'command': 'uvx',
'args': ['mcp-server-time', '--local-timezone=Asia/Shanghai']
},
"fetch": {
"command": "uvx",
"args": ["mcp-server-fetch"]
}
}
},
'code_interpreter', # Built-in tools
]
# Define Agent
bot = Assistant(llm=llm_cfg, function_list=tools)
# Streaming generation
messages = [{'role': 'user', 'content': 'วาดกราฟแสดงราคาหุ้นของ JTS ในช่วง 1 เดือนที่ผ่านมา'}]  # "Plot JTS's stock price over the past month"
for responses in bot.run(messages=messages):
pass
print(responses)
```
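Tool calling can also be driven without an agent framework: `tokenizer.apply_chat_template` accepts tool schemas in the OpenAI function-calling format via its `tools` argument. A minimal sketch, in which the `get_current_time` tool is a hypothetical illustration rather than part of the model or any framework:

```python
from datetime import datetime
from zoneinfo import ZoneInfo

def get_current_time(timezone: str) -> str:
    """Illustrative tool: return the current time in an IANA timezone."""
    return datetime.now(ZoneInfo(timezone)).isoformat()

# Schema in the OpenAI function-calling format understood by the chat template
tools = [{
    "type": "function",
    "function": {
        "name": "get_current_time",
        "description": "Return the current time in a given IANA timezone.",
        "parameters": {
            "type": "object",
            "properties": {
                "timezone": {"type": "string", "description": "e.g. 'Asia/Bangkok'"},
            },
            "required": ["timezone"],
        },
    },
}]

# With the tokenizer loaded as in the Quickstart, the schema is rendered
# into the prompt like this:
# text = tokenizer.apply_chat_template(
#     messages, tools=tools, add_generation_prompt=True, tokenize=False
# )
```

The model's tool-call output then has to be parsed, the matching Python function executed, and its result appended as a `tool` message before generating again; agent frameworks automate exactly this loop.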
## Processing Long Texts
OpenJAI-v1.0 was trained for robust performance on input contexts of up to **120,000 tokens**, but it operates natively within a 32,768-token window. To process contexts beyond that limit, you need to apply a context-extension technique such as YaRN or dynamic RoPE scaling.
Frameworks like `vLLM` and `SGLang` support passing command-line arguments to enable RoPE scaling.
For `vLLM`, you can use:
```shell
vllm serve ... --rope-scaling '{"rope_type":"yarn","factor":4.0,"original_max_position_embeddings":32768}' --max-model-len 131072
```
For `SGLang`, you can use:
```shell
python -m sglang.launch_server ... --json-model-override-args '{"rope_scaling":{"rope_type":"yarn","factor":4.0,"original_max_position_embeddings":32768}}'
```
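If you run the model with plain `transformers`, the same YaRN configuration can be applied by adding a `rope_scaling` entry to the model's `config.json` (shown here as an assumed fragment, following the convention documented for Qwen3-based models):

```json
{
  "rope_scaling": {
    "rope_type": "yarn",
    "factor": 4.0,
    "original_max_position_embeddings": 32768
  }
}
```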
> [!NOTE]
> All the notable open-source frameworks implement static YaRN, which means the scaling factor remains constant regardless of input length, **potentially impacting performance on shorter texts.**
> We advise adding the `rope_scaling` configuration only when processing long contexts is required.
> It is also recommended to adjust `factor` as needed. For example, if the typical context length for your application is 65,536 tokens, it is better to set `factor` to 2.0.
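Choosing `factor` is simple arithmetic over the native 32,768-token window; a small helper makes it concrete (`yarn_factor` is a hypothetical name, not part of any framework):

```python
import math

ORIGINAL_MAX_POSITION = 32768  # the model's native context window

def yarn_factor(target_context_len: int) -> float:
    """Smallest whole scaling factor that covers the target context length."""
    return float(max(1, math.ceil(target_context_len / ORIGINAL_MAX_POSITION)))

print(yarn_factor(65536))   # 2.0, matching the example above
print(yarn_factor(120000))  # 4.0, covering the model's trained 120k context
```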
> [!NOTE]
> The default `max_position_embeddings` in `config.json` is set to 40,960. This allocation includes reserving 32,768 tokens for outputs and 8,192 tokens for typical prompts, which is sufficient for most scenarios involving short text processing. If the average context length does not exceed 32,768 tokens, we do not recommend enabling YaRN in this scenario, as it may potentially degrade model performance.
## Citation
```bibtex
@misc{trakuekul2025openjaiv10openthailarge,
title={OpenJAI-v1.0: An Open Thai Large Language Model},
author={Pontakorn Trakuekul and Attapol T. Rutherford and Jullajak Karnjanaekarin and Narongkorn Panitsrisit and Sumana Sumanakul},
year={2025},
eprint={2510.06847},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2510.06847},
}
```