Philosopher model :)
Hello team.
Thank you very much for this model. I've been using GLM models since 4.5 and I'm always positively surprised. Astonishing work :)
I observed when using it in Roo Code that this model has a veeeeery long thinking process. In some cases that is great, as it delivers very good answers and analysis.
In other cases, though, I would like to limit its philosophical nature without completely turning off thinking.
Is there any neat way to tell it 'think, but don't write a book about my question'?
Seed models brought thinking budgets to open source, check it out. Now many other models support thinking budgets too.
You can introduce thinking budgets in an SGLang deployment.
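For what it's worth, with an OpenAI-compatible endpoint the budget is usually passed as an extra, non-standard request field. This is just a hedged sketch: the field name `thinking_budget` is what some providers (e.g. SiliconFlow) accept, and the model id here is only a placeholder; check your deployment's docs for the exact parameter.

```python
# Hedged sketch: building a chat-completion payload with a thinking
# budget for an OpenAI-compatible server (SGLang, SiliconFlow, ...).
# `thinking_budget` is a non-standard field; the exact name and
# behavior depend on the deployment.
import json


def build_request(prompt: str, thinking_budget: int = 1024) -> dict:
    """Build a chat-completion payload that caps reasoning tokens."""
    return {
        "model": "GLM-4.6",  # placeholder model id, adjust to yours
        "messages": [{"role": "user", "content": prompt}],
        # Servers that support thinking budgets read this field and
        # close the reasoning block once the limit is reached.
        "thinking_budget": thinking_budget,
    }


payload = build_request("Explain quicksort briefly.", thinking_budget=512)
print(json.dumps(payload, indent=2))
```

Whether the model gracefully wraps up its reasoning at the limit or gets cut off mid-thought depends on how the server implements the budget.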
Would it just 'cut' the thinking block, or would the model actually 'know' that it should keep it shorter?
I found that GLM-4.7 and later models do support a thinking budget on SiliconFlow (while some older models like Kimi-K2 and DeepSeek don't).
I just wonder why no one mentions that their models support thinking budgets (like Qwen3), and why none of these open models has an ADJUSTABLE THINKING LEVEL yet (sometimes it can be quite useful).