Question about max model len

#2
by hackingracoon - opened

It seems that the browser-use model runs on just one H100 card while delivering both fast execution speed and high accuracy.

However, when running browser-use with max-model-len set to 32768 as described in the tutorial, it terminates abnormally because the token limit is exceeded. Since this model is based on Qwen3-VL-30B-A3B, would it be possible to set max-model-len to 262144?
Would there be any performance issues with this adjustment?

Thank you very much for sharing such a wonderful model.

Browser Use org

It should be possible to use a larger context length, but we tested with at most 64k context!

In most cases you shouldn't reach the 32k token limit, though; our library is heavily optimized for token usage.
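For reference, a minimal sketch of how the context length could be raised, assuming the model is served with vLLM (the model ID and the other engine arguments below are illustrative and may differ from the tutorial):

```python
# Sketch: loading the model with a larger context window via vLLM's Python API.
# Assumes vLLM is the serving backend; adjust the model ID to the actual
# browser-use checkpoint you are running.
from vllm import LLM

llm = LLM(
    model="Qwen/Qwen3-VL-30B-A3B-Instruct",  # illustrative model ID
    max_model_len=65536,         # 64k, the largest context reported as tested
    tensor_parallel_size=1,      # single H100, as mentioned in the question
    gpu_memory_utilization=0.9,  # fraction of GPU memory vLLM may use for weights + KV cache
)
```

Raising max_model_len beyond this mainly increases KV-cache memory use, so on a single H100 you may need to lower the value or reduce gpu_memory_utilization if initialization fails.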

mertunsal changed discussion status to closed
