update notice for inference fix
README.md
CHANGED
@@ -123,6 +123,8 @@ STEP3-VL-10B delivers best-in-class performance across major multimodal benchmar
We introduce how to use our model at the inference stage with the transformers library. We recommend python=3.10, torch>=2.1.0, and transformers==4.57.0 as the development environment. We currently only support bf16 inference, and multi-patch image preprocessing is enabled by default. This behavior is aligned with vLLM and SGLang.
**Note:** If you experience infinite generation issues, please check [Discussion #9](https://huggingface.co/stepfun-ai/Step3-VL-10B/discussions/9) for the fix.
```python
from transformers import AutoProcessor, AutoModelForCausalLM
```