v0.45.0

See https://github.com/quic/ai-hub-models/releases/v0.45.0 for changelog.
README.md (CHANGED)

@@ -31,18 +31,6 @@ This model is an implementation of Qwen2.5-1.5B-Instruct found [here](https://gi
Before:

- Input sequence length for Prompt Processor: 128
- Context length: 4096
- Precision: w4 + w8 (few layers) with fp16 activations
- Num of key-value heads: 4
- Information about the model parts: Prompt Processor and Token Generator are split into 6 parts each. Each corresponding Prompt Processor and Token Generator part share weights.
- Prompt processor input (part1): 128 tokens
- Prompt processor output (part1): Embeddings output
- Prompt processor input (other parts): 128 tokens + KVCache initialized with pad token
- Prompt processor output (other parts): 128 output tokens + KVCache for token generator
- Token generator input (part1): 128 tokens
- Token generator output (part1): Embeddings output
- Token generator input (other parts): 1 input token + past KVCache
- Token generator output (other parts): 1 output token + KVCache for next iteration
- Use: Initiate the conversation with the prompt processor, then use the token generator for subsequent iterations.
- Minimum QNN SDK version required: 2.27.7
- Supported languages: Chinese, English, French, Spanish, Portuguese, German, Italian, Russian, Japanese, Korean, Vietnamese, Thai, Arabic, and more.
- TTFT: Time To First Token is the time it takes to generate the first response token. This is expressed as a range because it varies with the length of the prompt: the lower bound is for a short prompt (up to 128 tokens, i.e., one iteration of the prompt processor) and the upper bound is for a prompt using the full context length (4096 tokens).
- Response Rate: Rate of response generation after the first response token.
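The notes above describe a two-phase flow: the prompt processor consumes the prompt in 128-token chunks to build up a KV cache, after which the token generator emits one token per iteration, extending that cache. A minimal sketch of that loop, with hypothetical `run_prompt_processor` / `run_token_generator` callables standing in for the real model parts (the pad token value is also an assumption):

```python
CHUNK = 128          # prompt-processor input length per iteration
CONTEXT_LEN = 4096   # maximum context length

def generate(prompt_tokens, max_new_tokens, run_prompt_processor, run_token_generator):
    """Two-phase flow: prompt processor first, token generator after.

    `run_prompt_processor(chunk, kv_cache)` and
    `run_token_generator(token, kv_cache)` are hypothetical runners for
    the exported model parts, not real AI Hub APIs.
    """
    kv_cache = None
    # Pad the prompt so it splits evenly into 128-token chunks
    # (0 stands in for the pad token here).
    padded = prompt_tokens + [0] * (-len(prompt_tokens) % CHUNK)
    for i in range(0, len(padded), CHUNK):
        kv_cache = run_prompt_processor(padded[i:i + CHUNK], kv_cache)
    out = []
    for _ in range(max_new_tokens):
        prev = out[-1] if out else prompt_tokens[-1]
        token, kv_cache = run_token_generator(prev, kv_cache)
        out.append(token)
    return out
```

The prompt processor runs once per 128-token chunk; every later token costs one token-generator pass, which is why TTFT and response rate are reported separately.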
After:

- Input sequence length for Prompt Processor: 128
- Context length: 4096
- Precision: w4 + w8 (few layers) with fp16 activations
- Supported languages: Chinese, English, French, Spanish, Portuguese, German, Italian, Russian, Japanese, Korean, Vietnamese, Thai, Arabic, and more.
- TTFT: Time To First Token is the time it takes to generate the first response token. This is expressed as a range because it varies with the length of the prompt: the lower bound is for a short prompt (up to 128 tokens, i.e., one iteration of the prompt processor) and the upper bound is for a prompt using the full context length (4096 tokens).
- Response Rate: Rate of response generation after the first response token.
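The TTFT range can be made concrete with a little arithmetic: the number of prompt-processor passes needed before the first response token is ceil(prompt_len / 128). A small sketch using the numbers from the notes above (the function name is illustrative):

```python
import math

CHUNK = 128          # prompt-processor input length per pass
CONTEXT_LEN = 4096   # full context length

def prompt_processor_passes(prompt_len: int) -> int:
    """Passes of the 128-token prompt processor needed before the
    first response token can be generated."""
    return math.ceil(prompt_len / CHUNK)

# Lower bound of the TTFT range: a short prompt fits in one pass.
print(prompt_processor_passes(100))          # 1
# Upper bound: a full-context prompt needs 4096 / 128 = 32 passes.
print(prompt_processor_passes(CONTEXT_LEN))  # 32
```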