---
license: apache-2.0
library_name: onnxruntime-genai
pipeline_tag: text-generation
tags:
- onnx
- directml
- int4
- quantized
- qwen
- qwen3
- instruct
- text-generation
- windows
- csharp
- dotnet
inference: false
base_model: Qwen/Qwen3-14B-Instruct
language:
- en
- zh
---

# Qwen3-14B-Instruct – DirectML INT4 (ONNX Runtime)

This repository provides **Qwen3-14B-Instruct** converted to **INT4 ONNX** and optimized for **DirectML** using **Microsoft Olive** and **ONNX Runtime GenAI**.

It is designed for **native Windows GPU inference** (Intel Arc, AMD RDNA, NVIDIA RTX) without CUDA and without running a Python server.  
Ideal for integration in **C# / .NET applications** using ONNX Runtime + DirectML.

---

## Model Details

- Base model: `OpenPipe/Qwen3-14B-Instruct`
- Quantization: INT4 (MatMul NBits)
- Format: ONNX
- Runtime: ONNX Runtime with `DmlExecutionProvider`
- Conversion toolchain: Microsoft Olive + onnxruntime-genai
- Target hardware: 
  - Intel Arc (A770, A750, 130V, etc.)
  - AMD RDNA2 / RDNA3
  - NVIDIA RTX (via DirectML)
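
As a rough illustration of the conversion toolchain, Olive's `auto-opt` command can produce a DirectML INT4 build like this one. This is a sketch, not the exact command used for this repository; flag names vary across Olive releases:

```shell
olive auto-opt \
  --model_name_or_path Qwen/Qwen3-14B-Instruct \
  --device gpu \
  --provider DmlExecutionProvider \
  --precision int4 \
  --output_path ./Qwen3-14B-Instruct-DirectML-INT4
```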

---

## Files

Main inference files:

- `model.onnx`
- `model.onnx.data`  ← INT4 weights (≈ 9 GB)
- `genai_config.json`
- `tokenizer.json`, `vocab.json`, `merges.txt`
- `chat_template.jinja`
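
The `genai_config.json` file is what binds the model to the DirectML execution provider; the relevant fragment looks roughly like the following (the exact schema varies by onnxruntime-genai version, so treat this as illustrative):

```json
{
  "model": {
    "decoder": {
      "filename": "model.onnx",
      "session_options": {
        "provider_options": [
          { "dml": {} }
        ]
      }
    }
  }
}
```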

---

## Usage in C# (DirectML)

Example (ONNX Runtime GenAI):

```csharp
using Microsoft.ML.OnnxRuntimeGenAI;

var modelPath = @"Qwen3-14B-Instruct-DirectML-INT4";

// DirectML is selected via the genai_config.json shipped with the model,
// not through code. Method names below follow recent onnxruntime-genai
// releases; check the API of the version you install.
using var model = new Model(modelPath);
using var tokenizer = new Tokenizer(model);

var tokens = tokenizer.Encode("Explain what a Dutch mortgage deed is.");

using var generatorParams = new GeneratorParams(model);
generatorParams.SetSearchOption("max_length", 1024);
generatorParams.SetSearchOption("temperature", 0.7);

using var generator = new Generator(model, generatorParams);
generator.AppendTokenSequences(tokens);

while (!generator.IsDone())
{
    generator.GenerateNextToken();
}

string output = tokenizer.Decode(generator.GetSequence(0));
Console.WriteLine(output);
```

---

## Prompt Format
This model supports standard chat-style prompts and works well with Hermes-style system prompts and tool calling.

The included chat_template.jinja can be used to format multi-role conversations.
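
Qwen-family instruct models use ChatML-style role markers. As a sketch of what a manually built prompt looks like (the bundled `chat_template.jinja` is authoritative and should be preferred for exact formatting):

```csharp
// Illustrative ChatML-style prompt builder; mirror chat_template.jinja
// for the exact formatting this model was trained with.
static string BuildChatMlPrompt(string systemPrompt, string userMessage)
{
    return
        $"<|im_start|>system\n{systemPrompt}<|im_end|>\n" +
        $"<|im_start|>user\n{userMessage}<|im_end|>\n" +
        "<|im_start|>assistant\n";
}

var prompt = BuildChatMlPrompt(
    "You are a helpful assistant.",
    "Explain what a Dutch mortgage deed is.");
Console.WriteLine(prompt);
```

The trailing `<|im_start|>assistant\n` leaves the prompt open for the model to generate the assistant turn.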

---

## Performance Notes

- INT4 quantization allows the 14B model to run on GPUs with roughly 12–16 GB of VRAM (e.g. Intel Arc 130V, RTX 3060, RX 6800).
- Throughput depends heavily on the DirectML backend and driver quality.
- First-token latency may be high due to graph compilation.

---

## License & Attribution

**Base model:** Qwen3-14B-Instruct by Alibaba / OpenPipe. License: see the original model card.

**Conversion:** ONNX export and INT4 quantization performed by Wekkel using Microsoft Olive. This is an independent community conversion, with no affiliation with Alibaba or the Qwen team.