---
license: apache-2.0
base_model:
- Qwen/Qwen3-Coder-30B-A3B-Instruct
---
## Model Details
This model is an int4 quantized version of [Qwen/Qwen3-Coder-30B-A3B-Instruct](https://huggingface.co/Qwen/Qwen3-Coder-30B-A3B-Instruct) with group size 128 and symmetric quantization, generated by the [intel/auto-round](https://github.com/intel/auto-round) algorithm.
Please follow the license of the original model.
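For intuition, below is a minimal sketch of what symmetric int4 quantization with a group size of 128 does to a weight tensor. This is naive round-to-nearest for illustration only, not the AutoRound recipe: AutoRound instead optimizes the rounding decisions via signed gradient descent (see the citation at the bottom of this card). The function name and the choice to map each group's maximum onto 7 are assumptions made for the sketch.
~~~python
import torch

def fake_quant_int4_sym(w: torch.Tensor, group_size: int = 128) -> torch.Tensor:
    # Illustration only; AutoRound learns the rounding rather than
    # rounding to nearest as done here.
    orig_shape = w.shape
    groups = w.reshape(-1, group_size)
    # Symmetric scheme: zero-point fixed at 0, one scale per group of 128.
    # int4 covers [-8, 7]; map each group's max magnitude onto 7
    # (assumed clipping choice for this sketch).
    scale = groups.abs().amax(dim=1, keepdim=True).clamp(min=1e-8) / 7
    q = torch.clamp(torch.round(groups / scale), -8, 7)
    return (q * scale).reshape(orig_shape)

# Quantization error should be small relative to the weights.
w = torch.randn(256, 256)
print("mean abs error:", (w - fake_quant_int4_sym(w)).abs().mean().item())
~~~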
## How To Use
**vLLM usage**
~~~bash
vllm serve Intel/Qwen3-Coder-30B-A3B-Instruct-int4-AutoRound --tensor-parallel-size 4 --max-model-len 65536
~~~
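The server exposes an OpenAI-compatible API (on port 8000 by default), so once it is up you can query it as below. The endpoint and defaults are standard vLLM behavior rather than anything specific to this model:
~~~bash
curl http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "Intel/Qwen3-Coder-30B-A3B-Instruct-int4-AutoRound",
        "messages": [{"role": "user", "content": "Write a quick sort algorithm."}],
        "max_tokens": 1024
      }'
~~~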
**INT4 Inference on CPU/Intel GPU/CUDA**
~~~python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Intel/Qwen3-Coder-30B-A3B-Instruct-int4-AutoRound"

# load the tokenizer and the model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)

# prepare the model input
prompt = "Write a quick sort algorithm."
messages = [
    {"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

# conduct text completion
generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=65536
)
output_ids = generated_ids[0][len(model_inputs.input_ids[0]):].tolist()
content = tokenizer.decode(output_ids, skip_special_tokens=True)
print("content:", content)
"""
content: Here's a quicksort algorithm implementation in Python:

```python
def quicksort(arr):
    '''
    Sorts an array using the quicksort algorithm.

    Args:
        arr: List of comparable elements

    Returns:
        None (sorts in-place)
    '''
    if len(arr) <= 1:
        return

    def partition(low, high):
        '''Partition function using the last element as pivot'''
        pivot = arr[high]
        i = low - 1  # Index of smaller element
        for j in range(low, high):
            if arr[j] <= pivot:
                i += 1
                arr[i], arr[j] = arr[j], arr[i]  # Swap elements
        arr[i + 1], arr[high] = arr[high], arr[i + 1]  # Place pivot in correct position
        return i + 1

    def quicksort_helper(low, high):
        '''Recursive helper function'''
        if low < high:
            # Partition the array and get pivot index
            pi = partition(low, high)
            # Recursively sort elements before and after partition
            quicksort_helper(low, pi - 1)
            quicksort_helper(pi + 1, high)

    quicksort_helper(0, len(arr) - 1)

# Example usage:
if __name__ == "__main__":
    # Test the algorithm
    test_array = [64, 34, 25, 12, 22, 11, 90]
    print("Original array:", test_array)
    quicksort(test_array)
    print("Sorted array:", test_array)

    # Test with other examples
    test_cases = [
        [5, 2, 8, 1, 9],
        [1],
        [],
        [3, 3, 3, 3],
        [5, 4, 3, 2, 1]
    ]
    for i, case in enumerate(test_cases):
        original = case.copy()
        quicksort(case)
        print(f"Test {i+1}: {original} → {case}")
```

**How it works:**

1. **Divide**: Choose a "pivot" element and partition the array so that elements smaller than the pivot are on the left, and larger elements are on the right.
2. **Conquer**: Recursively apply quicksort to the sub-arrays on both sides of the pivot.
3. **Combine**: Since we're sorting in-place, no additional combining step is needed.

**Key features:**

- **Time Complexity**: O(n log n) average case, O(n²) worst case
- **Space Complexity**: O(log n) due to recursion stack
- **In-place sorting**: Modifies the original array
- **Not stable**: Relative order of equal elements may change

**Alternative version with random pivot selection** (better average performance):

```python
import random

def quicksort_random(arr):
    '''Quicksort with random pivot selection for better average performance'''
    def partition(low, high):
        # Randomly select pivot and swap with last element
        random_index = random.randint(low, high)
        arr[random_index], arr[high] = arr[high], arr[random_index]
        pivot = arr[high]
        i = low - 1
        for j in range(low, high):
            if arr[j] <= pivot:
                i += 1
                arr[i], arr[j] = arr[j], arr[i]
        arr[i + 1], arr[high] = arr[high], arr[i + 1]
        return i + 1

    def quicksort_helper(low, high):
        if low < high:
            pi = partition(low, high)
            quicksort_helper(low, pi - 1)
            quicksort_helper(pi + 1, high)

    if len(arr) > 1:
        quicksort_helper(0, len(arr) - 1)
```

The algorithm efficiently sorts arrays by repeatedly dividing them into smaller subproblems, making it one of the most widely used sorting algorithms in practice.
"""
~~~
### Generate the model
Here is a sample command to reproduce the model:
~~~bash
auto-round --model Qwen/Qwen3-Coder-30B-A3B-Instruct --output_dir "./tmp_autoround" --enable_torch_compile --nsamples 512 --fp_layers mlp.gate
~~~
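For reference, a rough Python-API equivalent is sketched below, based on the `AutoRound` interface in the auto-round repository. Argument names and the save method vary across auto-round versions, and how the CLI flag `--fp_layers mlp.gate` maps to the Python API is an assumption here, so treat this as an outline rather than the exact recipe:
~~~python
from transformers import AutoModelForCausalLM, AutoTokenizer
from auto_round import AutoRound

model_name = "Qwen/Qwen3-Coder-30B-A3B-Instruct"
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype="auto")
tokenizer = AutoTokenizer.from_pretrained(model_name)

# int4, group size 128, symmetric quantization, matching this model card.
autoround = AutoRound(
    model,
    tokenizer,
    bits=4,
    group_size=128,
    sym=True,
    nsamples=512,
)
# The CLI flag `--fp_layers mlp.gate` keeps the MoE gate layers in the
# original precision; check your auto-round version for the matching
# Python argument (assumption: an `fp_layers` argument is also accepted).
autoround.quantize_and_save("./tmp_autoround")
~~~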
## Ethical Considerations and Limitations
The model can produce factually incorrect output and should not be relied on for factually accurate information. Because of the limitations of the pretrained model and the finetuning datasets, it is possible that this model could generate lewd, biased, or otherwise offensive outputs.
Therefore, before deploying any applications of the model, developers should perform safety testing.
## Caveats and Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model.
Here is a useful link to learn more about Intel's AI software:
- Intel Neural Compressor [link](https://github.com/intel/neural-compressor)
## Disclaimer
The license on this model does not constitute legal advice. We are not responsible for the actions of third parties who use this model. Please consult an attorney before using this model for commercial purposes.
## Cite
~~~
@article{cheng2023optimize,
  title={Optimize weight rounding via signed gradient descent for the quantization of llms},
  author={Cheng, Wenhua and Zhang, Weiwei and Shen, Haihao and Cai, Yiyang and He, Xin and Lv, Kaokao and Liu, Yi},
  journal={arXiv preprint arXiv:2309.05516},
  year={2023}
}
~~~
[arXiv](https://arxiv.org/abs/2309.05516) [GitHub](https://github.com/intel/auto-round)