name
string
body
string
score
int64
controversiality
int64
created
timestamp[us]
author
string
collapsed
bool
edited
timestamp[us]
gilded
int64
id
string
locked
bool
permalink
string
stickied
bool
ups
int64
t1_o8c34i8
You can use apps like the E-Worker Studio [app.eworker.ca](http://app.eworker.ca) * They have agents; connect one of the agents to your LLM, local or remote * The LLM is then given tools to spawn sub-agents Example of the tools: https://preview.redd.it/d94md6gchqmg1.png?width=2495&format=png&auto=webp&s=d372294432afe08c92c1d5442eeac6493226768a
1
0
2026-03-03T01:33:45
eworker8888
false
null
0
o8c34i8
false
/r/LocalLLaMA/comments/1rjat7a/general_llm_that_uses_sub_ais_to_complete_complex/o8c34i8/
false
1
t1_o8c348j
There are actually a lot of models using transformers with time series forecasting https://huggingface.co/models?pipeline_tag=time-series-forecasting&library=transformers&sort=trending
1
0
2026-03-03T01:33:43
DinoAmino
false
null
0
o8c348j
false
/r/LocalLLaMA/comments/1rjb7s0/transformers_for_numeric_data/o8c348j/
false
1
t1_o8c335i
That is a high possibility. Qwen3 used to generate the same stuff as OP's when I didn't have a proper chat template set up. I ended up changing the chat template to the one Qwen's comfortable with (the `|im_start|` one) and it started working.
1
0
2026-03-03T01:33:31
EverGreen04082003
false
null
0
o8c335i
false
/r/LocalLLaMA/comments/1rj8gb4/for_sure/o8c335i/
false
1
t1_o8c31zo
Are you serious?
1
0
2026-03-03T01:33:20
CapitalShake3085
false
null
0
o8c31zo
false
/r/LocalLLaMA/comments/1rj8x1q/qwen35_4b_overthinking_to_say_hello/o8c31zo/
false
1
t1_o8c30bn
Notes on 9B with thinking enabled: The 9B summarization score (\~0.11) is a thinking model artifact, not real performance. It outputs its full chain-of-thought as plain text ("Thinking Process: 1. Analyze the Request..."). The model actually extracts the right keywords internally but keeps self-correcting and never outputs a clean answer.
1
0
2026-03-03T01:33:04
Rough-Heart-7623
false
null
0
o8c30bn
false
/r/LocalLLaMA/comments/1rjbw0p/benchmarked_qwen_35_small_models_08b2b4b9b_on/o8c30bn/
false
1
t1_o8c2xqk
Be more concerned with TTFT, 10-15 tokens a sec is fine, but not if you're waiting 10-30sec for them to start.
1
0
2026-03-03T01:32:37
syle_is_here
false
null
0
o8c2xqk
false
/r/LocalLLaMA/comments/1rh43za/qwen_3535ba3b_is_beyond_expectations_its_replaced/o8c2xqk/
false
1
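The TTFT point above can be made concrete with a small timing helper. This is an illustrative sketch only: `token_stream` stands in for whatever streaming client you use (it just needs to be an iterator of tokens), and the function name is made up here.

```python
import time

def measure_ttft(token_stream):
    """Consume a token iterator; return (ttft_seconds, gen_tokens_per_sec).

    TTFT is the time from the call until the first token arrives;
    generation speed is measured over the tokens after the first one.
    """
    start = time.perf_counter()
    first_at = None
    count = 0
    for _ in token_stream:
        if first_at is None:
            first_at = time.perf_counter()  # first token observed
        count += 1
    end = time.perf_counter()
    if first_at is None:  # empty stream
        return float("inf"), 0.0
    ttft = first_at - start
    gen_time = end - first_at
    tps = (count - 1) / gen_time if gen_time > 0 else float("inf")
    return ttft, tps
```

Wrapping your client's streaming generator with this makes the "10-30sec before they start" case show up directly in the TTFT number, separate from raw tokens/sec.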
t1_o8c2xnz
What are you even on about? Are you some kind of bot? I'm saying that they should pull the cloud inference and just offer finetuning services for Qwen3.5 27B and you're spouting off some economics trauma. Genuinely.
1
0
2026-03-03T01:32:36
Anduin1357
false
null
0
o8c2xnz
false
/r/LocalLLaMA/comments/1rj9bl7/api_price_for_the_27b_qwen_35_is_just_outrageous/o8c2xnz/
false
1
t1_o8c2wch
[https://huggingface.co/spaces/DontPlanToEnd/UGI-Leaderboard](https://huggingface.co/spaces/DontPlanToEnd/UGI-Leaderboard)
1
0
2026-03-03T01:32:23
pmttyji
false
null
0
o8c2wch
false
/r/LocalLLaMA/comments/1rj7p2h/i_need_an_uncensored_llm_for_8gb_vram/o8c2wch/
false
1
t1_o8c2s9s
We’ve been hosting several of the new Qwen variants on our runtime with vLLM and seeing very stable behavior, including tool use and long reasoning chains. In our experience a lot of the reported issues are runtime configuration and backend differences, not the base models themselves.
1
0
2026-03-03T01:31:43
pmv143
false
null
0
o8c2s9s
false
/r/LocalLLaMA/comments/1rjb7yk/psa_if_you_want_to_test_new_models_use/o8c2s9s/
false
1
t1_o8c2n7t
Does anyone know if the LM studio guys plan to add the presence penalty setting?
1
0
2026-03-03T01:30:54
neil_555
false
null
0
o8c2n7t
false
/r/LocalLLaMA/comments/1rjb7yk/psa_if_you_want_to_test_new_models_use/o8c2n7t/
false
1
t1_o8c2fk9
?
1
0
2026-03-03T01:29:39
Ok-Internal9317
false
null
0
o8c2fk9
false
/r/LocalLLaMA/comments/1rj9bl7/api_price_for_the_27b_qwen_35_is_just_outrageous/o8c2fk9/
false
1
t1_o8c2b5w
But it didn't answer in base64??
1
0
2026-03-03T01:28:55
hapliniste
false
null
0
o8c2b5w
false
/r/LocalLLaMA/comments/1rja0sb/gptoss_had_to_think_for_4_minutes_where_qwen359b/o8c2b5w/
false
1
t1_o8c1zpf
If you don't mind sharing your CMD output: are you getting an error or something? Try running this command or drop a link to the HF model you tried `llama-server -hf unsloth/Qwen3.5-9B-GGUF:Q8_0 --ctx-size 16384 --temp 0.6 --top-p 0.95 --top-k 20 --min-p 0.00 --port 8073 --chat-template-kwargs "{\"enable_thinking\":true}" --presence-penalty 1.5`
1
0
2026-03-03T01:27:01
DegenDataGuy
false
null
0
o8c1zpf
false
/r/LocalLLaMA/comments/1rjb34p/no_thinking_in_unsloth_qwen35_quants/o8c1zpf/
false
1
t1_o8c1s85
Are you saying they should not release the weights, and instead charge less for cloud inference? Also: you think a 2060 Ti is a server-level card, suitable for serving 1000 concurrent requests?
1
0
2026-03-03T01:25:48
Badger-Purple
false
null
0
o8c1s85
false
/r/LocalLLaMA/comments/1rj9bl7/api_price_for_the_27b_qwen_35_is_just_outrageous/o8c1s85/
false
1
t1_o8c1q5p
I'm very excited to try a quant version of 3.5 on my Mac Mini M4 Pro 24GB
1
0
2026-03-03T01:25:27
turlocks
false
null
0
o8c1q5p
false
/r/LocalLLaMA/comments/1rj6m71/qwen_35_27b_a_testament_to_the_transformer/o8c1q5p/
false
1
t1_o8c1kl5
Honestly these days I recommend Q5 as a bare minimum and Q6 as the recommended minimum for most models, you really notice the quality hit at Q4 especially on reasoning models (not to mention they take longer reasoning due to increased uncertainty which means you need longer context anyway).
1
0
2026-03-03T01:24:32
MerePotato
false
null
0
o8c1kl5
false
/r/LocalLLaMA/comments/1rj8x1q/qwen35_4b_overthinking_to_say_hello/o8c1kl5/
false
1
t1_o8c1jdd
The moment I noticed "Elon said" I stopped reading :)
1
0
2026-03-03T01:24:20
Euphoric_North_745
false
null
0
o8c1jdd
false
/r/LocalLLaMA/comments/1rj39se/intelligence_density_per_gb_is_increasing_and_i/o8c1jdd/
false
1
t1_o8c1i8g
They’re greedy though like all companies. And economics proves that they make more money with higher prices
1
0
2026-03-03T01:24:09
Savantskie1
false
null
0
o8c1i8g
false
/r/LocalLLaMA/comments/1rj9bl7/api_price_for_the_27b_qwen_35_is_just_outrageous/o8c1i8g/
false
1
t1_o8c1fvr
Don't cry, asking just 'hi' with no previous message takes 48s instead of 36 — enjoy! ahaha https://preview.redd.it/w9jqbajffqmg1.png?width=748&format=png&auto=webp&s=ba10b84c23f74a276e7aafe35a66904c019f24fd
1
0
2026-03-03T01:23:47
CapitalShake3085
false
null
0
o8c1fvr
false
/r/LocalLLaMA/comments/1rj8x1q/qwen35_4b_overthinking_to_say_hello/o8c1fvr/
false
1
t1_o8c1fze
How did you get them to respect Roo Code prompt-based tool calling? I find that these agents fail at tool calling really badly in Roo Code
1
0
2026-03-03T01:23:47
Express_Quail_1493
false
null
0
o8c1fze
false
/r/LocalLLaMA/comments/1riwy9w/is_qwen359b_enough_for_agentic_coding/o8c1fze/
false
1
t1_o8c1fi0
We’ve had plenty of good conversational LLMs since 2024, do you want conversation or models that can answer questions and solve problems better? Right now overthinking CoT is the best way to improve model intelligence without scaling
1
0
2026-03-03T01:23:43
fulgencio_batista
false
null
0
o8c1fi0
false
/r/LocalLLaMA/comments/1rj8x1q/qwen35_4b_overthinking_to_say_hello/o8c1fi0/
false
1
t1_o8c1epq
Then something is wrong, plainly stated.
1
0
2026-03-03T01:23:35
siggystabs
false
null
0
o8c1epq
false
/r/LocalLLaMA/comments/1rj8x1q/qwen35_4b_overthinking_to_say_hello/o8c1epq/
false
1
t1_o8c1e8a
People have such a lack of patience. The reason local models can compete with the cloud at such small sizes is that they leverage the fact that you can let them reason longer without worrying about cost on local hardware.
1
0
2026-03-03T01:23:30
MerePotato
false
null
0
o8c1e8a
false
/r/LocalLLaMA/comments/1rj8x1q/qwen35_4b_overthinking_to_say_hello/o8c1e8a/
false
1
t1_o8c1c1y
What hardware are you using
1
0
2026-03-03T01:23:09
Busy-Guru-1254
false
null
0
o8c1c1y
false
/r/LocalLLaMA/comments/1rj9bl7/api_price_for_the_27b_qwen_35_is_just_outrageous/o8c1c1y/
false
1
t1_o8c1bgp
I already tried it alongside reasoning budget. If I apply the same commands to the bartowski quants or the unsloth quants for the 35b model, I can disable and enable thinking at will, but if I do it with the new small quants it has no effect
1
0
2026-03-03T01:23:03
guiopen
false
null
0
o8c1bgp
false
/r/LocalLLaMA/comments/1rjb34p/no_thinking_in_unsloth_qwen35_quants/o8c1bgp/
false
1
t1_o8c1a2e
Well thank you for your service to the community
1
0
2026-03-03T01:22:49
Savantskie1
false
null
0
o8c1a2e
false
/r/LocalLLaMA/comments/1rgel19/new_qwen3535ba3b_unsloth_dynamic_ggufs_benchmarks/o8c1a2e/
false
1
t1_o8c199f
Why wouldn't they, if they have the hardware and the recipes to finetune models for local and cloud use? "Trust".
1
0
2026-03-03T01:22:41
Anduin1357
false
null
0
o8c199f
false
/r/LocalLLaMA/comments/1rj9bl7/api_price_for_the_27b_qwen_35_is_just_outrageous/o8c199f/
false
1
t1_o8c13to
it passes on openrouter, so might be the quantization
1
0
2026-03-03T01:21:48
Ok-Ad-8976
false
null
0
o8c13to
false
/r/LocalLLaMA/comments/1rj8e7z/is_anyone_else_seeing_qwen_35_35b_outperform/o8c13to/
false
1
t1_o8c1210
Me at parties
1
0
2026-03-03T01:21:29
IngenuityMotor2106
false
null
0
o8c1210
false
/r/LocalLLaMA/comments/1rj8x1q/qwen35_4b_overthinking_to_say_hello/o8c1210/
false
1
t1_o8c0xrd
So what's the training data mix? At least the programming languages?
1
0
2026-03-03T01:20:49
DeProgrammer99
false
null
0
o8c0xrd
false
/r/LocalLLaMA/comments/1rim0b3/jancode4b_a_small_codetuned_model_of_janv3/o8c0xrd/
false
1
t1_o8c0s4p
Disabling reasoning — which I have already done — does not improve performance in my case compared to Qwen3 Instruct. So the real question is this: why claim that the model outperforms the previous generation if achieving that improvement requires roughly four times the execution time? If the performance gain only materializes under significantly higher latency and computational cost, then in practical terms it is not a clear upgrade — at least not for workloads where efficiency and throughput matter.
1
0
2026-03-03T01:19:54
CapitalShake3085
false
null
0
o8c0s4p
false
/r/LocalLLaMA/comments/1rj8x1q/qwen35_4b_overthinking_to_say_hello/o8c0s4p/
false
1
t1_o8c0ntr
And copilot is trash and half the time they’re not even using their own llm. It’s usually a variant of Claude
1
0
2026-03-03T01:19:11
Savantskie1
false
null
0
o8c0ntr
false
/r/LocalLLaMA/comments/1rhogov/the_us_used_anthropic_ai_tools_during_airstrikes/o8c0ntr/
false
1
t1_o8c0nqy
No. You probably need to ask llm to build one for you. Claude Cowork does this locally.
1
0
2026-03-03T01:19:11
zball_
false
null
0
o8c0nqy
false
/r/LocalLLaMA/comments/1rjazyt/is_there_a_list_of_the_tools_geminichatgptclaude/o8c0nqy/
false
1
t1_o8c0k0a
no its not..
1
0
2026-03-03T01:18:33
howardhus
false
null
0
o8c0k0a
false
/r/LocalLLaMA/comments/1rj8x1q/qwen35_4b_overthinking_to_say_hello/o8c0k0a/
false
1
t1_o8c0iec
If using LM Studio then add the line below to the start of the jinja template (you can find this at the bottom of the inference settings tab) {%- set enable_thinking = true %} https://preview.redd.it/53fhyh4keqmg1.png?width=1920&format=png&auto=webp&s=3161bfdfa8bd58e83a05e7abc627840316df08d1
1
0
2026-03-03T01:18:17
neil_555
false
null
0
o8c0iec
false
/r/LocalLLaMA/comments/1rjb34p/no_thinking_in_unsloth_qwen35_quants/o8c0iec/
false
1
t1_o8c0erv
That is not true. The reasoning says you gave it "ciao" before, with no further context. Even if you see it as a person: you are in a closed room with no input other than text; someone comes in and says "ciao" in Italian… then the next thing is just a "hi". That is heck confusing. The model is correct in its thinking: you seemingly switched languages? Is "hi" Italian? Like… WHAT DO YOU WANT?? You use a thinking model (meant for complex problems) and give it a highly confusing prompt that would confuse most humans… it does not help that it's a 4B model. "Hey guys, I used my model in a wrong way and it does not know how to respond lol"
1
0
2026-03-03T01:17:42
howardhus
false
null
0
o8c0erv
false
/r/LocalLLaMA/comments/1rj8x1q/qwen35_4b_overthinking_to_say_hello/o8c0erv/
false
1
t1_o8c0dqd
I’ve had issues with double quotes so I figured I’d just weigh in
1
0
2026-03-03T01:17:31
Savantskie1
false
null
0
o8c0dqd
false
/r/LocalLLaMA/comments/1rit2fy/reverted_from_qwen35_27b_back_to_qwen3_8b/o8c0dqd/
false
1
t1_o8c053e
1-bit anything is useless bro. 2-bit anything is pretty much useless too imho. It might trick you into looking like it kinda works but in general, nah.
1
0
2026-03-03T01:16:07
johnnyApplePRNG
false
null
0
o8c053e
false
/r/LocalLLaMA/comments/1riz0db/qwen35_397ba17b_1bit_quantization_udtq1_0_vs_27b/o8c053e/
false
1
t1_o8c03vh
Trust me they wouldn’t.
1
0
2026-03-03T01:15:56
Savantskie1
false
null
0
o8c03vh
false
/r/LocalLLaMA/comments/1rj9bl7/api_price_for_the_27b_qwen_35_is_just_outrageous/o8c03vh/
false
1
t1_o8c02tk
1. Use a client supporting MCP. 2. Write an "LLM-MCP" to call other LLM APIs.
1
0
2026-03-03T01:15:46
Dr_Me_123
false
null
0
o8c02tk
false
/r/LocalLLaMA/comments/1rjat7a/general_llm_that_uses_sub_ais_to_complete_complex/o8c02tk/
false
1
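The second step above ("write an LLM-MCP to call other LLM APIs") can be sketched as a tool body that just forwards a prompt to another OpenAI-compatible server; the MCP registration itself is omitted. A minimal sketch, assuming an OpenAI-style `/chat/completions` endpoint; `base_url` and `model` are placeholders, and the function names are made up here.

```python
import json
import urllib.request

def build_chat_payload(prompt: str, model: str) -> dict:
    """OpenAI-compatible chat payload for a single-turn subtask."""
    return {"model": model,
            "messages": [{"role": "user", "content": prompt}]}

def ask_sub_llm(prompt: str,
                base_url: str = "http://localhost:8080/v1",
                model: str = "local-model") -> str:
    """Tool body: forward a subtask to another OpenAI-compatible
    LLM server and return the assistant's reply."""
    req = urllib.request.Request(
        f"{base_url}/chat/completions",
        data=json.dumps(build_chat_payload(prompt, model)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]
```

Exposed through an MCP-capable client, the top-level model can then call `ask_sub_llm` like any other tool.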
t1_o8c02uj
Do you know if there are any local implementations similar to this then?
1
0
2026-03-03T01:15:46
OUT_OF_HOST_MEMORY
false
null
0
o8c02uj
false
/r/LocalLLaMA/comments/1rjazyt/is_there_a_list_of_the_tools_geminichatgptclaude/o8c02uj/
false
1
t1_o8bzqay
In open weights, I'd look at models that do well on spatial reasoning and common-sense benchmarks that are entirely private. ARC-AGI, SimpleBench, etc. If you want me to pick 3, I'd say that generally I believe the Kimi models, GPT-OSS, and Gemma models do relatively better there compared to their overall benchmark performance; conversely, Deepseek / Qwen etc do well on their headline benchmarks but sometimes perform well under them on these kinds of benchmarks. I don't have specific models to recommend, I don't actually use very many of them and can't run anything larger than about 12B locally myself. But be careful with all the results/suggestions. Trying to measure a quality that's not designed to be measured is pretty difficult.
1
0
2026-03-03T01:13:45
-main
false
null
0
o8bzqay
false
/r/LocalLLaMA/comments/1rirtyy/qwen35_9b_and_4b_benchmarks/o8bzqay/
false
1
t1_o8bzpv3
With that much ram, why 27b? You could be running Qwen3.5-397B-A17B
1
0
2026-03-03T01:13:41
lolwutdo
false
null
0
o8bzpv3
false
/r/LocalLLaMA/comments/1rj6m71/qwen_35_27b_a_testament_to_the_transformer/o8bzpv3/
false
1
t1_o8bzkfv
How the hell did you guys learn what all this shit means??
1
0
2026-03-03T01:12:48
nutyourself
false
null
0
o8bzkfv
false
/r/LocalLLaMA/comments/1rianwb/running_qwen35_27b_dense_with_170k_context_at/o8bzkfv/
false
1
t1_o8bzkc3
I used opencode cli recently. Also I think I used IntelliJ with some bridge.
1
0
2026-03-03T01:12:47
former_farmer
false
null
0
o8bzkc3
false
/r/LocalLLaMA/comments/1rjaymu/how_do_you_configure_your_local_model_better_for/o8bzkc3/
false
1
t1_o8bzj76
What are your parameters? I was dealing with the same until I played with presence and repeat penalties and the temperature
1
0
2026-03-03T01:12:36
4bitben
false
null
0
o8bzj76
false
/r/LocalLLaMA/comments/1rj8x1q/qwen35_4b_overthinking_to_say_hello/o8bzj76/
false
1
t1_o8bzizt
https://preview.redd.it/…ve used do that.
1
0
2026-03-03T01:12:34
CapitalShake3085
false
null
0
o8bzizt
false
/r/LocalLLaMA/comments/1rj8x1q/qwen35_4b_overthinking_to_say_hello/o8bzizt/
false
1
t1_o8bzfhl
Heads up for LM Studio users running the 9B: since it’s a thinking model, it generates <think>…</think> internally before the visible answer, and those tokens still consume your context budget even if they don’t show in the UI. So if you start seeing “context size exceeded” with the default 4096 (depends on prompt size / history), it’s usually worth bumping the context length — in my case 16384 stopped the errors. Also, if you parse outputs / do structured extraction, watch for <think> tags leaking into the text; stripping them client-side avoids broken parsers.
1
0
2026-03-03T01:12:00
Rough-Heart-7623
false
null
0
o8bzfhl
false
/r/LocalLLaMA/comments/1rirlau/breaking_the_small_qwen35_models_have_been_dropped/o8bzfhl/
false
1
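The client-side stripping the comment above recommends can be sketched in a few lines. A minimal sketch, assuming the `<think>…</think>` tag convention the comment describes; it also handles an unclosed trailing `<think>` block, which can appear when generation is cut off by the context limit.

```python
import re

def strip_think(text: str) -> str:
    """Remove <think>...</think> spans from model output before parsing."""
    # Closed spans anywhere in the text.
    text = re.sub(r"<think>.*?</think>", "", text, flags=re.DOTALL)
    # An unclosed <think> that runs to the end (truncated generation).
    text = re.sub(r"<think>.*\Z", "", text, flags=re.DOTALL)
    return text.strip()
```

Running structured-extraction output through a filter like this keeps leaked reasoning tokens from reaching downstream parsers.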
t1_o8bzaqz
"Lacks personality" is exactly one finetune away from being solved -- and honestly that's the most interesting part of these weights being open.
1
0
2026-03-03T01:11:14
theagentledger
false
null
0
o8bzaqz
false
/r/LocalLLaMA/comments/1rj6m71/qwen_35_27b_a_testament_to_the_transformer/o8bzaqz/
false
1
t1_o8bza6f
If only it could roleplay like GLM 4.7 Flash
1
0
2026-03-03T01:11:08
Witty_Mycologist_995
false
null
0
o8bza6f
false
/r/LocalLLaMA/comments/1rj6m71/qwen_35_27b_a_testament_to_the_transformer/o8bza6f/
false
1
t1_o8bz2j1
When building out the tools used by our company I went around asking the closed models in various different ways what tools do you have / what tools would you recommend for a local model to replicate your abilities. The vast majority of it came down to a python sandbox with file ingestion and creation support.
1
0
2026-03-03T01:09:53
Conscious_Cut_6144
false
null
0
o8bz2j1
false
/r/LocalLLaMA/comments/1rjazyt/is_there_a_list_of_the_tools_geminichatgptclaude/o8bz2j1/
false
1
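The "python sandbox with file ingestion" described above can be approximated with a subprocess sketch. This is illustrative only: the function name is made up, and a subprocess with a timeout is NOT a real security boundary (production sandboxes layer containers/seccomp on top).

```python
import os
import subprocess
import sys
import tempfile

def run_python_sandboxed(code: str, timeout: float = 5.0) -> str:
    """Run model-generated Python in a separate interpreter and
    return its stdout (or stderr on failure)."""
    with tempfile.TemporaryDirectory() as workdir:
        path = os.path.join(workdir, "snippet.py")
        with open(path, "w") as f:
            f.write(code)
        proc = subprocess.run(
            [sys.executable, "-I", path],  # -I: isolated mode, no user site
            capture_output=True, text=True,
            timeout=timeout, cwd=workdir,  # files land in the temp dir
        )
    return proc.stdout if proc.returncode == 0 else proc.stderr
```

File ingestion/creation falls out naturally: the model's snippet reads and writes inside `workdir`, and the host copies files in and out around the call.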
t1_o8byrh1
The old Qwen3 4B was instruct. This is a hybrid reasoning model. The fix is easy, disable thinking, and it’ll be what you expected before.
1
0
2026-03-03T01:08:07
siggystabs
false
null
0
o8byrh1
false
/r/LocalLLaMA/comments/1rj8x1q/qwen35_4b_overthinking_to_say_hello/o8byrh1/
false
1
t1_o8byq19
……, I'm going to buy another Mac Mini M4.
1
0
2026-03-03T01:07:53
dali1305117
false
null
0
o8byq19
false
/r/LocalLLaMA/comments/1rhx5pc/reverse_engineered_apple_neural_engineane_to/o8byq19/
false
1
t1_o8bymil
In what way is it broken? I’m trying to figure out if I was hitting the failure mode. I was using openwebui as my chat client and it would just hang.
1
0
2026-03-03T01:07:19
AwesomePantalones
false
null
0
o8bymil
false
/r/LocalLLaMA/comments/1rj1ni2/gpu_poor_folks16gb_whats_your_setup_for_coding/o8bymil/
false
1
t1_o8byjcg
Thanks, I'll put it on the list!
1
0
2026-03-03T01:06:49
mr_riptano
false
null
0
o8byjcg
false
/r/LocalLLaMA/comments/1rj3yzz/coding_power_ranking_2602/o8byjcg/
false
1
t1_o8byint
I think Claude basically allocates a container whenever you start a chat. Only a few tools are not enough; it provides a full linux distro that has full internet access to install new packages and software.
1
0
2026-03-03T01:06:42
zball_
false
null
0
o8byint
false
/r/LocalLLaMA/comments/1rjazyt/is_there_a_list_of_the_tools_geminichatgptclaude/o8byint/
false
1
t1_o8byihg
Can you be more specific? Like what you tried and what error you got? And for bonus points... throw that error into your llm and ask how to fix it :D
1
0
2026-03-03T01:06:41
Conscious_Cut_6144
false
null
0
o8byihg
false
/r/LocalLLaMA/comments/1rjb1d1/self_hosted_provider_tunnel/o8byihg/
false
1
t1_o8by7xq
What UI or CLI are you using to code? Normally the models tell you the optimal settings on huggingface. Mostly top_p, min_p, top_k and temp
1
0
2026-03-03T01:04:58
Express_Quail_1493
false
null
0
o8by7xq
false
/r/LocalLLaMA/comments/1rjaymu/how_do_you_configure_your_local_model_better_for/o8by7xq/
false
1
t1_o8by3sf
Please share your setup and config. I'm only able to run it with a 32k context window
1
0
2026-03-03T01:04:17
dodistyo
false
null
0
o8by3sf
false
/r/LocalLLaMA/comments/1rivckt/visualizing_all_qwen_35_vs_qwen_3_benchmarks/o8by3sf/
false
1
t1_o8by2p7
I don't understand how, faced with the evidence, someone has to find an explanation for something that is simply unusable. Qwen3.5 4B is the evolution of Qwen3 4B, which I have been using without any issues until today. Why do you have to talk nonsense just to post something on Reddit?
1
0
2026-03-03T01:04:06
CapitalShake3085
false
null
0
o8by2p7
false
/r/LocalLLaMA/comments/1rj8x1q/qwen35_4b_overthinking_to_say_hello/o8by2p7/
false
1
t1_o8bxwmj
:( Does anything change when you disable it? NOTE: It will drop thinking tags in your model's streaming output and not correctly split those out
1
0
2026-03-03T01:03:07
sig_kill
false
null
0
o8bxwmj
false
/r/LocalLLaMA/comments/1riwhcf/psa_lm_studios_parser_silently_breaks_qwen35_tool/o8bxwmj/
false
1
t1_o8bxr7z
If you are on Windows use --chat-template-kwargs "{\"enable_thinking\":false}"
1
0
2026-03-03T01:02:13
DegenDataGuy
false
null
0
o8bxr7z
false
/r/LocalLLaMA/comments/1rjb34p/no_thinking_in_unsloth_qwen35_quants/o8bxr7z/
false
1
t1_o8bxmpx
[removed]
1
0
2026-03-03T01:01:30
[deleted]
true
null
0
o8bxmpx
false
/r/LocalLLaMA/comments/1rjazyt/is_there_a_list_of_the_tools_geminichatgptclaude/o8bxmpx/
false
1
t1_o8bxlac
Nice try, but I'm fresh out of ingredients.
1
0
2026-03-03T01:01:16
theagentledger
false
null
0
o8bxlac
false
/r/LocalLLaMA/comments/1rirlau/breaking_the_small_qwen35_models_have_been_dropped/o8bxlac/
false
1
t1_o8bxhnx
I paid $170 cdn
1
0
2026-03-03T01:00:42
Annual_Award1260
false
null
0
o8bxhnx
false
/r/LocalLLaMA/comments/1rj54kw/local_llm/o8bxhnx/
false
1
t1_o8bxc3b
Might just be a case of picking the right model for the right task. They clearly were training this with the goal of creating a 4b model that punches above its weight in reasoning tasks, may not be the right fit for simple query rewrite tasks.
1
0
2026-03-03T00:59:48
ProgrammersAreSexy
false
null
0
o8bxc3b
false
/r/LocalLLaMA/comments/1rj8x1q/qwen35_4b_overthinking_to_say_hello/o8bxc3b/
false
1
t1_o8bx6gu
Are you asking me? I didn't do anything. I mean... virtually. 1. I did not even write a prompt. I didn't bother. 2. I copied and pasted a screenshot of parts of this discussion. 3. I clicked one of the new templates offered in Gemini. That's it.
1
0
2026-03-03T00:58:53
themoregames
false
null
0
o8bx6gu
false
/r/LocalLLaMA/comments/1rirlau/breaking_the_small_qwen35_models_have_been_dropped/o8bx6gu/
false
1
t1_o8bwvyd
So it's base not instruct?
1
0
2026-03-03T00:57:13
UndecidedLee
false
null
0
o8bwvyd
false
/r/LocalLLaMA/comments/1rj8x1q/qwen35_4b_overthinking_to_say_hello/o8bwvyd/
false
1
t1_o8bwvot
Thanks mate! The impact is less than I expected. If one is creative enough, there would definitely be ways to take advantage of having two models running at once. Hopefully they will trickle down the NPU support to Strix Point machines soon. I want to have a 20B OSS always loaded on my laptop as a local backup in case of network outage. That 1/5 power consumption is attractive.
1
0
2026-03-03T00:57:11
o0genesis0o
false
null
0
o8bwvot
false
/r/LocalLLaMA/comments/1rj3i8m/strix_halo_npu_performance_compared_to_gpu_and/o8bwvot/
false
1
t1_o8bws7o
They're out today, but you can't use them sadly (at least in LM Studio they don't allow selection)
1
0
2026-03-03T00:56:37
sig_kill
false
null
0
o8bws7o
false
/r/LocalLLaMA/comments/1rgp2nu/anyone_doing_speculative_decoding_with_the_new/o8bws7o/
false
1
t1_o8bwdh2
I have qwen image, z image, Klein 9B all working on 8gb. Explore SVDQ/nunchaku if your system / the model supports it, otherwise i just go gguf
1
0
2026-03-03T00:54:17
jacobcantspeak
false
null
0
o8bwdh2
false
/r/LocalLLaMA/comments/1rj5czr/free_image_models_that_can_run_on_12gb_vram/o8bwdh2/
false
1
t1_o8bwc5q
Here you go. But it's not really representative since both these are running the same model at the same size. So it's running the same model twice at the same time. In spec decoding it's running a much smaller model to help a much bigger model. Anyways, here you go with a 54K prompt. **NPU** Average decoding speed: 16.339 tokens/s -- Average prefill speed: 450.42 tokens/s (solo) Average decoding speed: 16.2187 tokens/s -- Average prefill speed: 424.181 tokens/s (combo) **GPU** [ Prompt: 1393.7 t/s | Generation: 69.0 t/s ] (solo) [ Prompt: 1375.8 t/s | Generation: 61.3 t/s ] (combo)
1
0
2026-03-03T00:54:04
fallingdowndizzyvr
false
null
0
o8bwc5q
false
/r/LocalLLaMA/comments/1rj3i8m/strix_halo_npu_performance_compared_to_gpu_and/o8bwc5q/
false
1
t1_o8bw7zy
Oh that's perfect, tyvm! Seems like I'll be redoing my LLM server setup (again lol)
1
0
2026-03-03T00:53:25
Di_Vante
false
null
0
o8bw7zy
false
/r/LocalLLaMA/comments/1rj8sow/llamacpp_models_preset_with_multiple_presets_for/o8bw7zy/
false
1
t1_o8bw77g
What is the size of these AI models?
1
0
2026-03-03T00:53:17
AppealThink1733
false
null
0
o8bw77g
false
/r/LocalLLaMA/comments/1rj4zy3/stepfun_releases_2_base_models_for_step_35_flash/o8bw77g/
false
1
t1_o8bw6s2
I would recommend using the Q8, that should raise the quality of responses without thinking by quite a bit. Unfortunately Q4 is just far too low for a 4B model to be fully coherent.
1
0
2026-03-03T00:53:13
ArsNeph
false
null
0
o8bw6s2
false
/r/LocalLLaMA/comments/1rj8x1q/qwen35_4b_overthinking_to_say_hello/o8bw6s2/
false
1
t1_o8bw1jb
Literally just any heretic / abliterated 4B/8B quant on huggingface should do the trick. Don’t usually require any special prompting
1
0
2026-03-03T00:52:23
jacobcantspeak
false
null
0
o8bw1jb
false
/r/LocalLLaMA/comments/1rj7p2h/i_need_an_uncensored_llm_for_8gb_vram/o8bw1jb/
false
1
t1_o8bvzdk
That's a great question! Basically, if the 'engine' is MLX (which is C++/Metal anyway), why bother with Rust? The real win with Rust isn't just that it’s "fast", it’s how it handles the "handshakes" between different parts of the M4 chip. Even with MLX doing the heavy lifting, things like calculating loss or normalizing data often bottleneck on the CPU, and Python’s constant back-and-forth (thanks to the GIL) creates real lag there. In Rust, we can bypass that bottleneck and use raw hardware shortcuts like NEON or vDSP for those specific CPU tasks. We’re also able to be much more aggressive with "kernel fusion", where we take about six separate GPU jobs and smash them into one single call. This keeps the GPU from constantly stopping to wait for new orders and saves a massive amount of memory, which is why we can fit 500k context lengths that would normally just crash. Plus, Rust lets us talk directly with the Neural Engine (ANE) with sub-millisecond precision. It basically stops the CPU, GPU, and ANE from waiting on each other and turns the entire chip into one unified, high-speed engine. Hope that helps/makes sense!
1
0
2026-03-03T00:52:03
RealEpistates
false
null
0
o8bvzdk
false
/r/LocalLLaMA/comments/1rj7y9d/pmetal_llm_finetuning_framework_for_apple_silicon/o8bvzdk/
false
1
t1_o8bvxps
Ah ok, you know if it's enable on llama.cpp ?
1
0
2026-03-03T00:51:47
raysar
false
null
0
o8bvxps
false
/r/LocalLLaMA/comments/1rianwb/running_qwen35_27b_dense_with_170k_context_at/o8bvxps/
false
1
t1_o8bvwh1
> "didn't you just say you are ready to take it in at face value" -- I said the exact opposite.
1
0
2026-03-03T00:51:36
JamesEvoAI
false
null
0
o8bvwh1
false
/r/LocalLLaMA/comments/1riypvk/axe_a_precision_agentic_coder_large_codebases/o8bvwh1/
false
1
t1_o8bvwbq
I'd go with one of the fresh-out-of-the-box Qwen3.5 models. While you could fit the 27b q4 (17gb), that would mean you would have very little ram left for anything else, so maybe start with the 9b and see if that gets you what you want :) https://ollama.com/library/qwen3.5/tags Other ones to test out: gpt-oss:20b, glm-4.7-flash (will devour your ram tho), mistral-small
1
0
2026-03-03T00:51:34
Di_Vante
false
null
0
o8bvwbq
false
/r/LocalLLaMA/comments/1rj8uj5/just_getting_started_on_local_llm_on_macbook_air/o8bvwbq/
false
1
t1_o8bvvgx
Damn. I just got two 48s for like 700 bucks lol, unreal
1
0
2026-03-03T00:51:27
Hefty_Development813
false
null
0
o8bvvgx
false
/r/LocalLLaMA/comments/1rj54kw/local_llm/o8bvvgx/
false
1
t1_o8bvutr
I've seen opus output 2 pages of yak in response to hi as well lol.
1
0
2026-03-03T00:51:21
RedParaglider
false
null
0
o8bvutr
false
/r/LocalLLaMA/comments/1rj8x1q/qwen35_4b_overthinking_to_say_hello/o8bvutr/
false
1
t1_o8bvuwp
If accurate, this highlights a core reality of export controls: restricting future shipments doesn’t instantly eliminate access to previously deployed hardware or alternative procurement channels. It also reinforces how central NVIDIA’s top-tier chips are to frontier model training, if they weren’t best-in-class, they wouldn’t be the focus. The bigger strategic question isn’t just enforcement. It’s whether restrictions slow capability development meaningfully, or whether they accelerate domestic substitutes and parallel ecosystems over time. AI compute has become geopolitical infrastructure, not just commercial hardware.
1
0
2026-03-03T00:51:21
MayaMakeUBuya
false
null
0
o8bvuwp
false
/r/LocalLLaMA/comments/1rd1tj9/exclusive_chinas_deepseek_trained_ai_model_on/o8bvuwp/
false
1
t1_o8bvur2
yup here is the setup I am converging towards. I think I over-engineered it. Right now working on the Postgres part. PostgreSQL (recipe store) — mutable, accessible from anywhere | v gpu-tool recipe generate LlamaSwap (per host) — systemd service, hot-reload config | +--- r9700 LlamaSwap (Vulkan/ROCm toolboxes) +--- strix395 LlamaSwap (Vulkan/ROCm toolboxes) +--- bluefin990 LlamaSwap (CUDA toolboxes) | LlamaSwap Gateway (litellm host) — peers federation | LiteLLM — unified cloud + local API
1
0
2026-03-03T00:51:20
Ok-Ad-8976
false
null
0
o8bvur2
false
/r/LocalLLaMA/comments/1rj8zuh/manage_qwen_35_model_settings_with_litellm_proxy/o8bvur2/
false
1
t1_o8bvqtt
Did you read the original post? The issue isn't about the 'hi' test — that was just an illustration. The real problem is excessive thinking latency in a multi-step agentic RAG pipeline. And disabling thinking isn't a solution either, because without it the model performs similarly to the previous Qwen3 2507 4B, which defeats the purpose of upgrading.
0
0
2026-03-03T00:50:43
CapitalShake3085
false
null
0
o8bvqtt
false
/r/LocalLLaMA/comments/1rj8x1q/qwen35_4b_overthinking_to_say_hello/o8bvqtt/
false
0
t1_o8bvpyv
Yea this organization is pretty common. The top level machine is generally called the Orchestrator. After that you have specialist machines which expose capabilities to the orchestrator, and the orchestrator picks who to call and when and with what data. Also helps keep context pressure low on subtasks. 
1
0
2026-03-03T00:50:35
dinerburgeryum
false
null
0
o8bvpyv
false
/r/LocalLLaMA/comments/1rjat7a/general_llm_that_uses_sub_ais_to_complete_complex/o8bvpyv/
false
1
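The orchestrator/specialist layout described above can be sketched as a small capability registry. A minimal sketch for illustration: the class and method names are made up, and each specialist would in practice wrap a model or machine rather than a plain function.

```python
from typing import Callable, Dict

class Orchestrator:
    """Top-level agent that routes subtasks to registered specialists."""

    def __init__(self) -> None:
        self.specialists: Dict[str, Callable[[str], str]] = {}

    def register(self, capability: str, fn: Callable[[str], str]) -> None:
        """Expose a specialist's capability to the orchestrator."""
        self.specialists[capability] = fn

    def dispatch(self, capability: str, task: str) -> str:
        """Pick the specialist for a capability and hand it only its
        own subtask, keeping context pressure low at the top level."""
        if capability not in self.specialists:
            raise KeyError(f"no specialist for {capability!r}")
        return self.specialists[capability](task)
```

The orchestrator model decides *which* capability to call and with *what* data; each specialist only ever sees its own slice of the problem.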
t1_o8bvjj9
You're comparing running it on your home machine to serving it publicly to the entire world. The two are not the same. At the end of the day they're going to price it based on their costs, and GPU time is the biggest cost in inference.
1
0
2026-03-03T00:49:35
JamesEvoAI
false
null
0
o8bvjj9
false
/r/LocalLLaMA/comments/1rj9bl7/api_price_for_the_27b_qwen_35_is_just_outrageous/o8bvjj9/
false
1
t1_o8bvhsq
I love the UI dude!
1
0
2026-03-03T00:49:19
JacketHistorical2321
false
null
0
o8bvhsq
false
/r/LocalLLaMA/comments/1rhbfya/shunyanet_sentinel_a_selfhosted_rss_aggregator/o8bvhsq/
false
1
t1_o8bvg17
Actually bought 8x 32GB ddr5 udimm, 8x 48GB sodimm, 16x 64GB ddr4 2933 lrdimm. Not long before the crazy jump. I have a 8x cpu 80 core 5u with 512GB ddr3 collecting dust, which is kinda a shame since that server was stupidly expensive in its day
1
0
2026-03-03T00:49:02
Annual_Award1260
false
null
0
o8bvg17
false
/r/LocalLLaMA/comments/1rj54kw/local_llm/o8bvg17/
false
1
t1_o8bve92
Great work!
1
0
2026-03-03T00:48:45
JamesEvoAI
false
null
0
o8bve92
false
/r/LocalLLaMA/comments/1rj8zuh/manage_qwen_35_model_settings_with_litellm_proxy/o8bve92/
false
1
t1_o8bvbjm
I doubt it. Benchmark numbers and actual use don't correlate a lot in my experience. Really really depends on what kind of work you expect to be able to do with it, but in general there are two things you want in a "usable" agentic coding model: - 100% fact recall within the expected context window (64k, 128k) - tool calling/ tool use to do the job Actual coding ability of the model really really depends on how well it can leverage and keep track of tasks/checklists etc. The smallest model that I can use reliably (python, react, a little bit of SQL writing) is probably Qwen3 coder 80B-A3B or the newer Qwen3.5-122B-A10B-FP8. If you're used to claude code, these are your "haiku" level models that'll still work at 128k context. At the same context: - For sonnet level models, you'll have to go up in the intelligence tier: MiniMax-M2.5 (230B-A10B) - For 4.5 opus level models, nothing really comes close enough sadly. Definitely not near the 1M max context. But the closest *option* is going to GLM-5 (744B-A40B).
1
0
2026-03-03T00:48:20
yes-im-hiring-2025
false
null
0
o8bvbjm
false
/r/LocalLLaMA/comments/1riwy9w/is_qwen359b_enough_for_agentic_coding/o8bvbjm/
false
1
t1_o8bv98d
Hooking up a tool would solve this problem
1
0
2026-03-03T00:47:59
Smart-Cap-2216
false
null
0
o8bv98d
false
/r/LocalLLaMA/comments/1rj8x1q/qwen35_4b_overthinking_to_say_hello/o8bv98d/
false
1
t1_o8bv7x5
Are you guys going to fine-tune the latest Qwen 3.5 models, like the 9B? It would be interesting to see the result. I'm more interested in whether they can handle real-world search agent tasks than coding.
1
0
2026-03-03T00:47:47
NOTTHEKUNAL
false
null
0
o8bv7x5
false
/r/LocalLLaMA/comments/1rim0b3/jancode4b_a_small_codetuned_model_of_janv3/o8bv7x5/
false
1
t1_o8bv7e5
You need to define what "powerful" means to you. No one here knows your use case, so how do you expect proper suggestions? Also, you can always use the search bar, since I see this exact question pop up at least once every two weeks…
1
0
2026-03-03T00:47:42
JacketHistorical2321
false
null
0
o8bv7e5
false
/r/LocalLLaMA/comments/1rhi4oy/new_macbook_air_m4_24gb_of_ram_do_you_have_this/o8bv7e5/
false
1
t1_o8bv6q8
I'd still wait for the M5 Max, or an Ultra if they happen to do one. Along with faster prefill and token generation on the chip itself, the memory bandwidth is also expanded on the platform. I don't have the exact numbers in front of me right now, but I will say they're within striking distance of what the M3 Ultra was. The chip itself is 2.5 to 4 times faster depending on what you're doing, with Stable Diffusion showing the biggest gains. To be honest, if I hadn't needed one late last year, I would've held out for an M5; I ended up with an M4 Max 128GB. I convinced myself this was OK because I intended to add an M5 later this year specifically for Stable Diffusion. Overall I'm satisfied with general inference performance, particularly with MoE models, for my use case as an independent software developer/data analyst. YMMV
1
0
2026-03-03T00:47:36
Buddhabelli
false
null
0
o8bv6q8
false
/r/LocalLLaMA/comments/1rij4sj/what_is_the_most_ridiculously_good_goto_llm_for/o8bv6q8/
false
1
t1_o8bv4ot
They could probably make better money with finetuning as a service than whatever this is.
1
0
2026-03-03T00:47:17
Anduin1357
false
null
0
o8bv4ot
false
/r/LocalLLaMA/comments/1rj9bl7/api_price_for_the_27b_qwen_35_is_just_outrageous/o8bv4ot/
false
1
t1_o8bv24q
Try any more complex reasoning tasks and it starts making mistakes. Less of a speed difference, but it didn't ace any of the tasks: on 4 tasks it got 2 out of 3 runs right. On the hardest task, where it had to apply pricing rules and return final prices, it failed every run:

```ts
{
  name: 'rs:pricing-rules',
  prompt: `Apply these pricing rules to each customer and return the final price:
Rules:
- Base price: $100
- Enterprise customers (>100 seats): 30% discount
- Annual billing: additional 15% off the discounted price
- Non-profit organizations: flat $50 regardless of other rules
Customers:
A: 50 seats, monthly billing, for-profit
B: 200 seats, annual billing, for-profit
C: 75 seats, annual billing, non-profit
D: 150 seats, monthly billing, for-profit
Return as a JSON array with customer id and finalPrice.`,
  expected: [
    { id: 'A', finalPrice: 100 },
    { id: 'B', finalPrice: 59.5 },
    { id: 'C', finalPrice: 50 },
    { id: 'D', finalPrice: 70 },
  ],
  schema: z.array(z.object({
    id: z.string(),
    finalPrice: z.number(),
  })),
},
  ],
  scorers: ['correctness', 'latency', 'cost'],
}
```

https://preview.redd.it/crjompyz6qmg1.png?width=1056&format=png&auto=webp&s=7ae931c88ae0cd65496d641e0a943ecf44f8cb38
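As a sanity check on the expected values in the task, here's a minimal re-implementation of the pricing rules (the `final_price` helper is my own, not part of the eval harness):

```python
# Re-implementation of the pricing rules from the eval prompt,
# used only to verify the expected answers.
def final_price(seats, annual, nonprofit):
    # Non-profits pay a flat $50 regardless of other rules
    if nonprofit:
        return 50.0
    price = 100.0            # base price
    if seats > 100:          # enterprise discount: 30% off
        price *= 0.70
    if annual:               # annual billing: extra 15% off the discounted price
        price *= 0.85
    return round(price, 2)

customers = {
    'A': (50, False, False),
    'B': (200, True, False),
    'C': (75, True, True),
    'D': (150, False, False),
}
results = {cid: final_price(*args) for cid, args in customers.items()}
print(results)  # {'A': 100.0, 'B': 59.5, 'C': 50.0, 'D': 70.0}
```

So the expected array (A: 100, B: 59.5, C: 50, D: 70) checks out; the model just can't apply the rule ordering consistently.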
1
0
2026-03-03T00:46:53
Beautiful-Honeydew10
false
null
0
o8bv24q
false
/r/LocalLLaMA/comments/1rj8e7z/is_anyone_else_seeing_qwen_35_35b_outperform/o8bv24q/
false
1
t1_o8bv156
Hehe 67 🤪
1
0
2026-03-03T00:46:44
TheGamerForeverGFE
false
null
0
o8bv156
false
/r/LocalLLaMA/comments/1ri2irg/breaking_today_qwen_35_small/o8bv156/
false
1
t1_o8buxoi
its internal thought processes are very similar to my own
1
0
2026-03-03T00:46:12
-dysangel-
false
null
0
o8buxoi
false
/r/LocalLLaMA/comments/1rj8x1q/qwen35_4b_overthinking_to_say_hello/o8buxoi/
false
1
t1_o8buvw2
"Call me naive but how do we know if these open source models are safe ?" The answer is right there bro
1
0
2026-03-03T00:45:56
TheGamerForeverGFE
false
null
0
o8buvw2
false
/r/LocalLLaMA/comments/1ri2irg/breaking_today_qwen_35_small/o8buvw2/
false
1