Support MiniCPM-V-4_5 AWQ?
Thank you.
Thank you. I just tried vLLM version 0.10.1, but it says it doesn't support MiniCPM-V-4.5. Docs: https://github.com/OpenSQZ/MiniCPM-V-CookBook/deployment/vllm/minicpm-v4_5_vllm.md
ValueError: Currently, MiniCPMV only supports versions 2.0, 2.5, 2.6, 4.0. Got version: (4, 5)
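The version gate that produces this error can be reproduced in isolation. Below is a minimal sketch of that check, not vLLM's actual code; the supported-version set is taken from the error message above, and the helper name is hypothetical:

```python
# Supported versions as listed in the ValueError above.
SUPPORTED_MINICPMV_VERSIONS = {(2, 0), (2, 5), (2, 6), (4, 0)}

def check_minicpmv_version(version_str: str) -> tuple:
    """Parse a 'major.minor' string and reject unsupported versions,
    mirroring the vLLM 0.10.1 error above."""
    version = tuple(int(p) for p in version_str.split("."))
    if version not in SUPPORTED_MINICPMV_VERSIONS:
        supported = ", ".join(f"{a}.{b}" for a, b in sorted(SUPPORTED_MINICPMV_VERSIONS))
        raise ValueError(
            f"Currently, MiniCPMV only supports versions {supported}. "
            f"Got version: {version}"
        )
    return version
```

So with vLLM 0.10.1, `check_minicpmv_version("4.5")` raises exactly this error regardless of how the model is quantized; a vLLM release that adds MiniCPM-V-4.5 support would be needed.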
Yepp. Thank you very much. I’d like to ask: when extracting OCR information, what values should I use for temperature and top_p? I can’t find this in the documentation.
We do not tune different parameters for different tasks; you can use the default parameters. ^_^
You can try it on our demo. I found the floating-point accuracy seems fine, so you can then test the quantization accuracy.
The result above comes from using the demo directly.
Copy this into your browser's address bar: 101.126.42.235:30910
The address is just an IP and port, so it should work as-is.
If you can't open it, try "ping 101.126.42.235" to test whether your network can connect to it.
Alternatively, download the HF model and reproduce the results with the BF16 model first.
https://huggingface.co/openbmb/MiniCPM-V-4_5
Could you share the prompt and the parameters for temperature and top_p that you used?
You can use the default transformers inference code. The prompt is the same as yours, and parameters such as temperature were kept at their default values without any custom settings.
Thank you. However, I need high accuracy for Vietnamese punctuation and wording, and it seems this model is not suitable for this task.