## 📢 News & Updates

- 🌟 **Online Demo:** Explore Step3-VL-10B on [Hugging Face Spaces](https://huggingface.co/spaces/stepfun-ai/Step3-VL-10B)!
- 📢 **[Notice] FP8 Quantization Support:** FP8 quantized weights are now available. ([Download link](https://huggingface.co/stepfun-ai/Step3-VL-10B-FP8))
- 📢 **[Notice] vLLM Support:** vLLM integration is now officially supported! (PR [#32329](https://github.com/vllm-project/vllm/pull/32329))
- ✅ **[Fixed] HF Inference:** Resolved the `eos_token_id` misconfiguration in `config.json` that caused infinite generation loops. (Commit [`abdf361`](https://huggingface.co/stepfun-ai/Step3-VL-10B/commit/abdf3618e914a9e3de0ad74efacc8b7a10f06c10))
- ✅ **[Fixing] Metric Correction:** We sincerely apologize for inaccuracies in the reported Qwen3VL-8B benchmark results (e.g., AIME, HMMT, LCB). The errors were caused by an incorrect `max_tokens` setting (mistakenly set to 32k) in our large-scale evaluation pipeline. We are re-running these evaluations and will publish corrected numbers in the next version of the technical report.
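The `eos_token_id` fix above is worth sanity-checking locally whenever `config.json` changes. The sketch below is illustrative only — the token ids are made up and `find_eos_mismatch` is a hypothetical helper, not part of this repo — but it shows the kind of config/tokenizer disagreement that makes `generate()` never see the true stop token and loop indefinitely:

```python
def find_eos_mismatch(config: dict, expected_eos_id: int):
    """Return the configured eos_token_id if it disagrees with the
    tokenizer's actual EOS id, else None.

    If config.json points at the wrong id, generation never matches the
    real stop token and runs until max_new_tokens -- the "infinite loop"
    symptom described in the fix above.
    """
    configured = config.get("eos_token_id")
    # eos_token_id may be a single int or a list of ids in config.json.
    ids = configured if isinstance(configured, list) else [configured]
    return None if expected_eos_id in ids else configured

# Hypothetical values for illustration only -- not the model's real ids.
broken = {"eos_token_id": 0}       # misconfigured: points at a non-stop token
fixed = {"eos_token_id": 151645}   # matches the tokenizer's actual EOS id

print(find_eos_mismatch(broken, 151645))  # -> 0 (mismatch detected)
print(find_eos_mismatch(fixed, 151645))   # -> None (consistent)
```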
| Model | Type | Hugging Face | ModelScope |
| :-------------------- | :-------- | :----------------------------------------------------------------: | :----------------------------------------------------------------------: |
| **STEP3-VL-10B-Base** | Base | [🤗 Download](https://huggingface.co/stepfun-ai/Step3-VL-10B-Base) | [🤗 Download](https://modelscope.cn/models/stepfun-ai/Step3-VL-10B-Base) |
| **STEP3-VL-10B** | Chat | [🤗 Download](https://huggingface.co/stepfun-ai/Step3-VL-10B) | [🤗 Download](https://modelscope.cn/models/stepfun-ai/Step3-VL-10B) |
| **STEP3-VL-10B-FP8** | Quantized | [🤗 Download](https://huggingface.co/stepfun-ai/Step3-VL-10B-FP8) | [🤗 Download](https://modelscope.cn/models/stepfun-ai/Step3-VL-10B-FP8) |

## 📊 Performance