name string | body string | score int64 | controversiality int64 | created timestamp[us] | author string | collapsed bool | edited timestamp[us] | gilded int64 | id string | locked bool | permalink string | stickied bool | ups int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
t1_o8cj229 | Does Reddit spy on you like Facebook? I was just testing this model and wondering why it runs so badly in Cline | 1 | 0 | 2026-03-03T03:08:58 | SocietyTomorrow | false | null | 0 | o8cj229 | false | /r/LocalLLaMA/comments/1rj1ni2/gpu_poor_folks16gb_whats_your_setup_for_coding/o8cj229/ | false | 1 |
t1_o8cj0q6 | PR #10391 landing is good news - run_command() cross-platform has been the silent killer in so many scaffolds. | 1 | 0 | 2026-03-03T03:08:45 | theagentledger | false | null | 0 | o8cj0q6 | false | /r/LocalLLaMA/comments/1riwhcf/psa_lm_studios_parser_silently_breaks_qwen35_tool/o8cj0q6/ | false | 1 |
t1_o8cizfj | I literally just tried this with gpt-oss:120 and it responded only with a base64 message that when decoded said:
> This is a base64 encoded response.
It thought for 33 seconds. OP is clearly astroturfing / making shit up | 1 | 0 | 2026-03-03T03:08:31 | throwawayacc201711 | false | null | 0 | o8cizfj | false | /r/LocalLLaMA/comments/1rja0sb/gptoss_had_to_think_for_4_minutes_where_qwen359b/o8cizfj/ | false | 1 |
t1_o8ciyao | Exactly it makes it simpler for me. I’m disabled with nerve damage, and I don’t always have the patience for cli or remembering all the different arguments and shit. Not everyone has to do things the hard way just because you had to suffer with it. | 1 | 0 | 2026-03-03T03:08:19 | Savantskie1 | false | null | 0 | o8ciyao | false | /r/LocalLLaMA/comments/1rjb7yk/psa_if_you_want_to_test_new_models_use/o8ciyao/ | false | 1 |
t1_o8ciw47 | is this non thinking? | 1 | 0 | 2026-03-03T03:07:57 | Odd-Ordinary-5922 | false | null | 0 | o8ciw47 | false | /r/LocalLLaMA/comments/1rjcqm5/qwen_35_4b_is_scary_smart/o8ciw47/ | false | 1 |
t1_o8civsl | What are you on about? | 1 | 0 | 2026-03-03T03:07:53 | nooneinparticular246 | false | null | 0 | o8civsl | false | /r/LocalLLaMA/comments/1r9gve8/i_feel_left_behind_what_is_special_about_openclaw/o8civsl/ | false | 1 |
t1_o8cir03 | Using llama.cpp and Unsloth's latest quants, but Qwen3.5 122B A10B currently overthinks and gets stuck in reasoning loops. At least on Q6XL. The dense model overthinks, but I haven't seen it loop yet | 1 | 0 | 2026-03-03T03:07:06 | kevin_1994 | false | null | 0 | o8cir03 | false | /r/LocalLLaMA/comments/1rjb7yk/psa_if_you_want_to_test_new_models_use/o8cir03/ | false | 1 |
t1_o8cim4g | LM Studio is a bloated llama.cpp wrapper | 1 | 0 | 2026-03-03T03:06:17 | nakedspirax | false | null | 0 | o8cim4g | false | /r/LocalLLaMA/comments/1rjb7yk/psa_if_you_want_to_test_new_models_use/o8cim4g/ | false | 1 |
t1_o8cikfd | Friends don’t let friends use ollama | 1 | 0 | 2026-03-03T03:06:00 | kersk | false | null | 0 | o8cikfd | false | /r/LocalLLaMA/comments/1rjb7yk/psa_if_you_want_to_test_new_models_use/o8cikfd/ | false | 1 |
t1_o8cifwo | Yepp, the latest build and runtime. Well, if you have beefy hardware you might not notice it; you have to look for it in the log and check if it's ignoring the cache. | 1 | 0 | 2026-03-03T03:05:14 | FORNAX_460 | false | null | 0 | o8cifwo | false | /r/LocalLLaMA/comments/1ricz8u/notice_qwen_35_reprocessing_the_prompt_every_time/o8cifwo/ | false | 1 |
t1_o8cicyr | ohhh!! I'll keep this in mind ty! | 1 | 0 | 2026-03-03T03:04:43 | Lord_Curtis | false | null | 0 | o8cicyr | false | /r/LocalLLaMA/comments/1rj8zhq/where_can_i_get_good_priced_3090s/o8cicyr/ | false | 1 |
t1_o8cicpl | I'm almost sure Qwen models do not support speculative decoding, and it's disabled in llama.cpp | 1 | 0 | 2026-03-03T03:04:41 | Several-Tax31 | false | null | 0 | o8cicpl | false | /r/LocalLLaMA/comments/1rirlau/breaking_the_small_qwen35_models_have_been_dropped/o8cicpl/ | false | 1 |
t1_o8ci8c3 | [deleted] | 1 | 0 | 2026-03-03T03:03:57 | [deleted] | true | null | 0 | o8ci8c3 | false | /r/LocalLLaMA/comments/1rjdr45/stresstest_my_open_source_chatgpt_alternative/o8ci8c3/ | false | 1 |
t1_o8ci657 | Running here clawrig.com | 1 | 0 | 2026-03-03T03:03:35 | MrWidmoreHK | false | null | 0 | o8ci657 | false | /r/LocalLLaMA/comments/1rjdr45/stresstest_my_open_source_chatgpt_alternative/o8ci657/ | false | 1 |
t1_o8ci3i9 | Are these running on your own custom machines? Or a RunPod or cloud instance you rented? That's a big privacy point for many people, in my opinion | 1 | 0 | 2026-03-03T03:03:09 | ELPascalito | false | null | 0 | o8ci3i9 | false | /r/LocalLLaMA/comments/1rjdr45/stresstest_my_open_source_chatgpt_alternative/o8ci3i9/ | false | 1 |
t1_o8ci2l8 | I've had some success on a similar model size with equilibrium matching, albeit I was using the CIFAR-100 dataset. So it is more class conditioning than text | 1 | 0 | 2026-03-03T03:02:59 | I-am_Sleepy | false | null | 0 | o8ci2l8 | false | /r/LocalLLaMA/comments/1rhe790/my_frends_trained_and_benchmarked_4_diffusion/o8ci2l8/ | false | 1 |
t1_o8ci1hu | Imagine having the equivalent of GPT 4 on your 3090 | 2 | 0 | 2026-03-03T03:02:48 | redditorialy_retard | false | null | 0 | o8ci1hu | false | /r/LocalLLaMA/comments/1rjd4pv/qwen_25_3_35_smallest_models_incredible/o8ci1hu/ | false | 2 |
t1_o8ci0ip | Lmfao beep boop you caught me | 1 | 0 | 2026-03-03T03:02:38 | Savantskie1 | false | null | 0 | o8ci0ip | false | /r/LocalLLaMA/comments/1rj9bl7/api_price_for_the_27b_qwen_35_is_just_outrageous/o8ci0ip/ | false | 1 |
t1_o8chyro | If your hardware runs the 8b model fine, the 24GB RAM definitely isn't the issue. The "recovery error" with a smaller model sounds more like a configuration or connection problem between OpenClaw and Ollama than a hardware limit. I'd check the interface settings or the logs to see why the communication is failing. | 1 | 0 | 2026-03-03T03:02:21 | TyKolt | false | null | 0 | o8chyro | false | /r/LocalLLaMA/comments/1rjdo1i/ollama_keeps_loading_with_openclaw/o8chyro/ | false | 1 |
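A minimal sketch of the checks being suggested, assuming Ollama's default port (11434) and a Linux install managed by systemd; the endpoints are the standard Ollama API:

```bash
# Is the server reachable, and which models are pulled?
curl -s http://localhost:11434/api/tags | jq '.models[].name'
# What is actually loaded in memory right now?
curl -s http://localhost:11434/api/ps
# Watch the server logs while OpenClaw tries to connect, to see why the call fails:
journalctl -u ollama -f
```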
t1_o8chxn0 | Even if you're in instruct mode, if you try to get any info that the model has in its weights, quite often it starts to think out loud and gets stuck in a confidence loop, but in thinking mode it can get itself out of the loop. | 1 | 0 | 2026-03-03T03:02:10 | FORNAX_460 | false | null | 0 | o8chxn0 | false | /r/LocalLLaMA/comments/1rjcfdk/qwen359b_q4km_in_lm_studio_thinking_too_much/o8chxn0/ | false | 1 |
t1_o8chw5b | And that is your opinion. I have nothing but success with LM Studio. I don’t chase t/s, I chase what’s stable on my hardware | 1 | 0 | 2026-03-03T03:01:55 | Savantskie1 | false | null | 0 | o8chw5b | false | /r/LocalLLaMA/comments/1rjb7yk/psa_if_you_want_to_test_new_models_use/o8chw5b/ | false | 1 |
t1_o8churv | [removed] | 1 | 0 | 2026-03-03T03:01:42 | [deleted] | true | null | 0 | o8churv | false | /r/LocalLLaMA/comments/1nktpac/open_source_voice_ai_agents/o8churv/ | false | 1 |
t1_o8chk9t | The OP already trashed lmstudio. I'm literally following his opinion | 1 | 0 | 2026-03-03T02:59:56 | nakedspirax | false | null | 0 | o8chk9t | false | /r/LocalLLaMA/comments/1rjb7yk/psa_if_you_want_to_test_new_models_use/o8chk9t/ | false | 1 |
t1_o8chejf | how did you go about calculating the -t value? I have 2 3060 12gb gpus and am curious how i can use your settings to get better results spread across the two cards. | 1 | 0 | 2026-03-03T02:58:58 | ducksoup_18 | false | null | 0 | o8chejf | false | /r/LocalLLaMA/comments/1rg4zqv/followup_qwen3535ba3b_7_communityrequested/o8chejf/ | false | 1 |
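For context, a hedged sketch of how that spread is typically configured with llama.cpp's llama-server: `-t` controls CPU threads rather than the GPU split, which has its own flags. The path and values below are placeholders, not the settings from the original post:

```bash
# -t: CPU threads (a common starting point is your physical core count)
# -ngl 99: offload all layers to GPU
# --tensor-split 1,1: divide weights evenly across the two equal 3060s
# --split-mode layer: per-layer split; "row" is the alternative worth benchmarking
llama-server -m ./model.gguf -t 8 -ngl 99 --tensor-split 1,1 --split-mode layer
```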
t1_o8chcvt | Tried it with it off... made it significantly worse. Significantly... the OP's explanation above makes sense as to the cause... | 1 | 0 | 2026-03-03T02:58:42 | FigZestyclose7787 | false | null | 0 | o8chcvt | false | /r/LocalLLaMA/comments/1riwhcf/psa_lm_studios_parser_silently_breaks_qwen35_tool/o8chcvt/ | false | 1 |
t1_o8ch5my | And that is your opinion | 1 | 0 | 2026-03-03T02:57:28 | Savantskie1 | false | null | 0 | o8ch5my | false | /r/LocalLLaMA/comments/1rjb7yk/psa_if_you_want_to_test_new_models_use/o8ch5my/ | false | 1 |
t1_o8ch5an | UD-Q4_K_XL from unsloth. | 1 | 0 | 2026-03-03T02:57:24 | Hanthunius | false | null | 0 | o8ch5an | false | /r/LocalLLaMA/comments/1rjcqm5/qwen_35_4b_is_scary_smart/o8ch5an/ | false | 1 |
t1_o8cgywm | Thank you so much!! Now I am developing an app that needs to fetch US insurance payor policy websites (like UHC, Aetna...), and they contain not only DOM but also embedded PDFs etc., where we need the agent to actually go and fetch the info
By any chance does Actionbook handle those non-DOM cases as well? Thanks again! | 1 | 0 | 2026-03-03T02:56:19 | Comfortable-Baby-719 | false | null | 0 | o8cgywm | false | /r/LocalLLaMA/comments/1r33yqh/browseruse_alternatives/o8cgywm/ | false | 1 |
t1_o8cgyqm | Hope this helps. I am running it on a single RTX 3090.
Model_Param: Qwen3.5-27B-UD_Q4_K_XL.gguf
ContextSize: 100000
GPULayers: 64
BlasBatchSize: 2048
FlashAttention: True
QuantKV: 1
WebSearch: True
TTSEngine: Kobold
TTSModel: OuteTTS-0.3-1B-Q4_0.gguf
TTSWavTokenizer: WavTokenizer-Large-75-Q4_0.gguf
TTSGPU: True
TTSMaxLength: 4096
TTSThreads: 7
SDModel: sdxs-512-tinydistilled_Q8_0.gguf
MMProj: mmproj-F16.gguf
MMProjCPU: False | 1 | 0 | 2026-03-03T02:56:18 | Prestigious-Use5483 | false | null | 0 | o8cgyqm | false | /r/LocalLLaMA/comments/1rivckt/visualizing_all_qwen_35_vs_qwen_3_benchmarks/o8cgyqm/ | false | 1 |
t1_o8cgu89 | Yeah, remember the time when we hoped we'd have GPT-4 at home? It's been a century. | 1 | 0 | 2026-03-03T02:55:33 | hazeslack | false | null | 0 | o8cgu89 | false | /r/LocalLLaMA/comments/1rj6m71/qwen_35_27b_a_testament_to_the_transformer/o8cgu89/ | false | 1 |
t1_o8cgtep | Eggsactly. It can't take a yolk anymore. | 1 | 0 | 2026-03-03T02:55:25 | Hoodfu | false | null | 0 | o8cgtep | false | /r/LocalLLaMA/comments/1rjd4pv/qwen_25_3_35_smallest_models_incredible/o8cgtep/ | false | 1 |
t1_o8cgr6k | [removed] | 1 | 0 | 2026-03-03T02:55:02 | [deleted] | true | null | 0 | o8cgr6k | false | /r/LocalLLaMA/comments/1rjdr45/stresstest_my_open_source_chatgpt_alternative/o8cgr6k/ | false | 1 |
t1_o8cgi8g | I found llama.cpp very easy to use on Windows; using that now. | 1 | 0 | 2026-03-03T02:53:33 | ayy_md | false | null | 0 | o8cgi8g | false | /r/LocalLLaMA/comments/1rirlau/breaking_the_small_qwen35_models_have_been_dropped/o8cgi8g/ | false | 1 |
t1_o8cghul | General rule with new LLMs is also to expect releases that predate the model to be problematic. On KoboldCpp, Qwen3.5 did pretty well output-wise; I haven't seen any crazy thinking, and I actually liked that it skips the thinking often. But on our end the caching really wasn't optimal for it, resulting in barely any cache hits. 1.109 will be out soon, and on the developer build I have been having a lot of fun with the model.
It's just very often that models have specific quirks that need fixes or improvements. This one was the first where people really care about a hybrid-arch model, so we had to spend time improving our caching. With GLM originally it was the odd BOS token situation where they use their Jinja for that. Sometimes it's something small, like us needing to bundle a new adapter because they made a syntax change, etc.
Devs can only begin to fix it when they have the model; even if the arch is present, it's best-effort, hopefully-it-works levels of support when nobody can test it. And then the moment it's released we can begin actually fixing things. | 1 | 0 | 2026-03-03T02:53:29 | henk717 | false | null | 0 | o8cghul | false | /r/LocalLLaMA/comments/1rjb7yk/psa_if_you_want_to_test_new_models_use/o8cghul/ | false | 1 |
t1_o8cgg5j | You're right, apologies. | 1 | 0 | 2026-03-03T02:53:12 | MaleficentMention703 | false | null | 0 | o8cgg5j | false | /r/LocalLLaMA/comments/1rjdeat/dual_rtx_3090_on_b550_70b_models_produce_garbage/o8cgg5j/ | false | 1 |
t1_o8cgcdl | daaaang. speculating here but if it's not a cache effect then it could be very wide parallel processing? if it can process up to (fake numbers) 1000 tokens per fixed 1-second cycle and you put in only 1 token, then it runs at 1 tok/sec. if you put in 1000 then it runs at 1000 tok/sec. | 1 | 0 | 2026-03-03T02:52:34 | HopePupal | false | null | 0 | o8cgcdl | false | /r/LocalLLaMA/comments/1rj3i8m/strix_halo_npu_performance_compared_to_gpu_and/o8cgcdl/ | false | 1 |
t1_o8cg1yp | That is an open-weight model, so it should become cheaper later as everyone can host it, and Qwen3.5-27B is a good model (close to Grok-4.1-Fast or Haiku-4.5 level) overall. | 1 | 0 | 2026-03-03T02:50:52 | lly0571 | false | null | 0 | o8cg1yp | false | /r/LocalLLaMA/comments/1rj9bl7/api_price_for_the_27b_qwen_35_is_just_outrageous/o8cg1yp/ | false | 1 |
t1_o8cfxo1 | Very interesting. Which one did you install specifically? From Hugging Face? Also, they seem quite sizeable? A few GBs each! | 2 | 0 | 2026-03-03T02:50:10 | f1zombie | false | null | 0 | o8cfxo1 | false | /r/LocalLLaMA/comments/1rjcqm5/qwen_35_4b_is_scary_smart/o8cfxo1/ | false | 2 |
t1_o8cfvyu | What GPU is that? Is that 1 6000 pro? | 1 | 0 | 2026-03-03T02:49:54 | texasdude11 | false | null | 0 | o8cfvyu | false | /r/LocalLLaMA/comments/1rii2pd/current_state_of_qwen35122ba10b/o8cfvyu/ | false | 1 |
t1_o8cfp9h | Yeah, browser-use is decent for prototyping but that 1 min per task adds up fast in production. I was scraping grocery sites too and hit the same wall - everything worked but was sooooo slow.
Switched to Playwright with some custom caching for DOM snapshots, which helped. But honestly what finally made it production-ready was using Actionbook for the action manuals + caching layer. Cut our agent runtime from like 45 seconds to 5-8 seconds per product search, and token usage dropped to almost nothing. Still requires tuning but night and day difference.
The manual part sucks to set up initially but once you have reliable selectors it just... works. Mostly. Grocery sites change layouts constantly though, fair warning. | 1 | 0 | 2026-03-03T02:48:48 | Effective_Ad1215 | false | null | 0 | o8cfp9h | false | /r/LocalLLaMA/comments/1r33yqh/browseruse_alternatives/o8cfp9h/ | false | 1 |
t1_o8cfo3r | From Qwen 3.5 27B Q6 Unsloth. Can someone help me? It appears that multiple LLMs keep dating it back to 2023
*Replies:*
*Claim:* Earnings call is Monday, March 9th.
* *Reality:* I need to check if this date aligns with the current real-world date. **Today is October 2023** in my training, but I need to act as if I know the current date context or point out the date discrepancy. Wait, as an AI, I don't have real-time access to today's date, but I know Oracle's typical reporting schedule. March earnings are usually Q1 (Jan-Mar). If today is early March 2025 (hypothetically), that makes sense. But wait, if the user says "next week" and "Monday, March 9th," I should check if that's a plausible future date.
* *Self-Correction:* I cannot verify real-time earnings schedules. I must treat the date as given by the user but add a disclaimer. | 1 | 0 | 2026-03-03T02:48:36 | alex_godspeed | false | null | 0 | o8cfo3r | false | /r/LocalLLaMA/comments/1rguzz2/qwen_35_cutoff_date_is_2024/o8cfo3r/ | false | 1 |
t1_o8cfn07 | So Qwen3.5 is bad with analogies? | 1 | 0 | 2026-03-03T02:48:25 | TomLucidor | false | null | 0 | o8cfn07 | false | /r/LocalLLaMA/comments/1rjd4pv/qwen_25_3_35_smallest_models_incredible/o8cfn07/ | false | 1 |
t1_o8cfmss | I'm using llama.cpp and qwen3.5 still overthinks sometimes, at least by my standards. | 1 | 0 | 2026-03-03T02:48:23 | Daniel_H212 | false | null | 0 | o8cfmss | false | /r/LocalLLaMA/comments/1rjb7yk/psa_if_you_want_to_test_new_models_use/o8cfmss/ | false | 1 |
t1_o8cfihd | I turned off thinking mode and it runs way better, without constant rethinking loops like it does with thinking mode. Running 122B at Q4 quants. | 1 | 0 | 2026-03-03T02:47:41 | OuchieMaker | false | null | 0 | o8cfihd | false | /r/LocalLLaMA/comments/1rj8x1q/qwen35_4b_overthinking_to_say_hello/o8cfihd/ | false | 1 |
t1_o8cfatx | What about vLLM-adjacent tooling? | 1 | 0 | 2026-03-03T02:46:27 | TomLucidor | false | null | 0 | o8cfatx | false | /r/LocalLLaMA/comments/1rirlau/breaking_the_small_qwen35_models_have_been_dropped/o8cfatx/ | false | 1 |
t1_o8cfaut | Good call! It definitely has an effect on the outcome and helps refine one of the newly opened bugs. Here's the long story - TL;DR is that these bugs make development against LM Studio right now very "noisy", with settings that should not affect the scaffold I've been tinkering with for months leading to complete success vs complete failure, for reasons that were not obvious before really digging into the issues detailed in the OP:
We ran a controlled A/B test on exactly this setting with Qwen3.5-35b-a3b. Same task (categorize 13 files by contents into topic folders), same hardware, only toggling the setting between runs. Full archived traces for both.
**Results:**
||OFF (mixed)|ON (separated)|
|:-|:-|:-|
|Files moved|0 of 13|13 of 13|
|Think blocks in conversation history|20 (\~5,600 chars)|0|
|Stagnation trigger|`ls -la` (verification loop)|`DONE` (termination signal — separate bug)|
**The mechanism:** With the setting OFF, `<think>` blocks flow through `content` and get serialized into the ReAct conversation history fed back to the model on each iteration. By iteration 15, the model has 14 prior think blocks in context. What happens next is striking — the model's current think block correctly says "now let me move the files" and even writes out the correct `mv` command in its prose, but the actual tool call emitted is `ls -la` (read-only verification). This repeats 4 times until stagnation fires.
The hypothesis: accumulated prior think blocks create a false memory effect. Earlier think blocks contain *descriptions of intended actions that were never executed*. The model reads these back and "remembers" having already attempted the moves, so it falls back to verification instead of action.
With the setting ON, think blocks go into `reasoning_content` and stay out of the conversation history. The model shows clean thought→action alignment throughout — thinks "move files", calls `mv`.
**Caveat for** u/FigZestyclose7787**:** It doesn't fix everything — it changes *which* failure mode you hit. With ON, we hit a separate termination signaling bug (the task completed perfectly but the model couldn't signal DONE). The setting controls whether `<think>` tags stay in `content` or get split out. Harnesses that build multi-turn conversation history from `content` will accumulate think blocks with it OFF; harnesses that have other issues with the `reasoning_content` field may see different problems with it ON. It's which code path your stack exercises, not a universal fix.
This connects to LM Studio [\#1592](https://github.com/lmstudio-ai/lmstudio-bug-tracker/issues/1592) (parser scanning inside thinking blocks). That bug is about parsing; what we're seeing here is the downstream *behavioral* consequence — think blocks in `content` don't just confuse parsers, they contaminate the model's own reasoning across turns. | 1 | 0 | 2026-03-03T02:46:27 | One-Cheesecake389 | false | null | 0 | o8cfaut | false | /r/LocalLLaMA/comments/1riwhcf/psa_lm_studios_parser_silently_breaks_qwen35_tool/o8cfaut/ | false | 1 |
t1_o8cfaep | How... are you running Qwen3.5-35B-A3B on a 6GB laptop GPU??? | 1 | 0 | 2026-03-03T02:46:22 | MakerBlock | false | null | 0 | o8cfaep | false | /r/LocalLLaMA/comments/1riwy9w/is_qwen359b_enough_for_agentic_coding/o8cfaep/ | false | 1 |
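One plausible answer (an assumption, not the poster's confirmed setup) is llama.cpp's tensor-override offloading: attention and shared weights stay on the GPU while the MoE expert tensors live in system RAM, which works tolerably for A3B-style models because only a few experts are active per token. Paths are placeholders:

```bash
# -ngl 99 offloads all layers, then -ot overrides the expert FFN tensors back to CPU.
llama-server -m ./Qwen3.5-35B-A3B-Q4_K_M.gguf -ngl 99 -ot '.ffn_.*_exps.=CPU' -c 16384
```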
t1_o8cf9xv | I think that was before unsloth refactored their models. UD-Q4-K-XL now appears to be king | 1 | 0 | 2026-03-03T02:46:18 | Significant-Yam85 | false | null | 0 | o8cf9xv | false | /r/LocalLLaMA/comments/1rik253/psa_qwen_35_requires_bf16_kv_cache_not_f16/o8cf9xv/ | false | 1 |
t1_o8cf6lu | Keep your eyes on the St. Louis Park Micro Center for refurbished 3090s that come in on their site. You can find some pretty good deals | 1 | 0 | 2026-03-03T02:45:45 | AHRI___ | false | null | 0 | o8cf6lu | false | /r/LocalLLaMA/comments/1rj8zhq/where_can_i_get_good_priced_3090s/o8cf6lu/ | false | 1 |
t1_o8cf4yi | > time series
Depends on the series; speech-to-text, for example, you could say is a time series, and transformers like Whisper work great.
Others not so much; a surprising amount of the time, more traditional methods are better | 1 | 0 | 2026-03-03T02:45:29 | x11iyu | false | null | 0 | o8cf4yi | false | /r/LocalLLaMA/comments/1rjb7s0/transformers_for_numeric_data/o8cf4yi/ | false | 1 |
t1_o8cf3qa | I can't believe the vision of an actually useful personal assistant that runs on your own machine arrived this early. I thought it would never arrive.
Not talking about the 0.8B model in particular, but just the overall trajectory. The current smaller MoE models are incredible. | 1 | 0 | 2026-03-03T02:45:18 | o0genesis0o | false | null | 0 | o8cf3qa | false | /r/LocalLLaMA/comments/1rjd4pv/qwen_25_3_35_smallest_models_incredible/o8cf3qa/ | false | 1 |
t1_o8cf003 | Yep! The small quants are a blessing to everybody that is GPU poor. The 4b beats the old 9b models I used 2 years ago and I get to use 128k of context with them at 60 tokens a second! | 1 | 0 | 2026-03-03T02:44:41 | c64z86 | false | null | 0 | o8cf003 | false | /r/LocalLLaMA/comments/1rjd4pv/qwen_25_3_35_smallest_models_incredible/o8cf003/ | false | 1 |
t1_o8ceuv0 | Something along the lines of https://www.linkedin.com/pulse/i-built-sre-dashboard-my-ai-coding-agent-heres-what-learned-siddique-bwv7c?utm_source=share&utm_medium=member_ios&utm_campaign=share_via | 1 | 0 | 2026-03-03T02:43:51 | Evening-Arm-34 | false | null | 0 | o8ceuv0 | false | /r/LocalLLaMA/comments/1rjdi1d/agent_reliability/o8ceuv0/ | false | 1 |
t1_o8ceure | Try using the --fit command. Maybe it'll do worse, maybe it'll do better. | 1 | 0 | 2026-03-03T02:43:50 | nakedspirax | false | null | 0 | o8ceure | false | /r/LocalLLaMA/comments/1rjdeat/dual_rtx_3090_on_b550_70b_models_produce_garbage/o8ceure/ | false | 1 |
t1_o8cetcj | be back soon! | 1 | 0 | 2026-03-03T02:43:36 | Sambojin1 | false | null | 0 | o8cetcj | false | /r/LocalLLaMA/comments/1rirlau/breaking_the_small_qwen35_models_have_been_dropped/o8cetcj/ | false | 1 |
t1_o8cet5e | I also got the following deleted answer from u/tengo_harambe. | 1 | 0 | 2026-03-03T02:43:34 | TomLucidor | false | null | 0 | o8cet5e | false | /r/LocalLLaMA/comments/1rik3ge/what_is_the_personality_of_a_chinese_llm_when/o8cet5e/ | false | 1 |
t1_o8ces89 | Introvert thought process in social environments be like | 1 | 0 | 2026-03-03T02:43:26 | glow3th | false | null | 0 | o8ces89 | false | /r/LocalLLaMA/comments/1rj8x1q/qwen35_4b_overthinking_to_say_hello/o8ces89/ | false | 1 |
t1_o8ceram | thanks for the testing. | 1 | 0 | 2026-03-03T02:43:17 | Jaguar_Distinct | false | null | 0 | o8ceram | false | /r/LocalLLaMA/comments/1renq5y/qwen35_model_comparison_27b_vs_35b_on_rtx_4090/o8ceram/ | false | 1 |
t1_o8cercf | Can you reformat your post so that it displays correctly, please? | 1 | 0 | 2026-03-03T02:43:17 | ttkciar | false | null | 0 | o8cercf | false | /r/LocalLLaMA/comments/1rjdeat/dual_rtx_3090_on_b550_70b_models_produce_garbage/o8cercf/ | false | 1 |
t1_o8cef9y | I'm also aware of the overthinking. After trying Qwen3.5 several times, I've now reverted back to Qwen3 Next; even better quality without thinking. | 1 | 0 | 2026-03-03T02:41:17 | jemand_tw | false | null | 0 | o8cef9y | false | /r/LocalLLaMA/comments/1rj8x1q/qwen35_4b_overthinking_to_say_hello/o8cef9y/ | false | 1 |
t1_o8ceebp | AFAIK LM Studio is the easiest one to use, and you aren't losing anything by doing so, at least as far as I know. So if you're used to LM Studio and are able to use it, go for it :) | 1 | 0 | 2026-03-03T02:41:07 | Di_Vante | false | null | 0 | o8ceebp | false | /r/LocalLLaMA/comments/1rj8uj5/just_getting_started_on_local_llm_on_macbook_air/o8ceebp/ | false | 1 |
t1_o8cech4 | Qwen has been like this ever since the first thinking models released | 1 | 0 | 2026-03-03T02:40:49 | Majestic-Foot-4120 | false | null | 0 | o8cech4 | false | /r/LocalLLaMA/comments/1rj8x1q/qwen35_4b_overthinking_to_say_hello/o8cech4/ | false | 1 |
t1_o8cead9 | MTP is not supported. Folks are trying to use the smaller models as a draft model for Qwen3.5 27B, but it's broken atm...
https://github.com/ggml-org/llama.cpp/issues/20039 | 1 | 0 | 2026-03-03T02:40:28 | RnRau | false | null | 0 | o8cead9 | false | /r/LocalLLaMA/comments/1rianwb/running_qwen35_27b_dense_with_170k_context_at/o8cead9/ | false | 1 |
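For reference, the invocation being attempted looks roughly like this; per the linked issue it is currently broken for this pairing, so treat it as the shape of the command rather than a working recipe (paths are placeholders):

```bash
# -md / --model-draft supplies the small draft model for speculative decoding;
# --draft-max / --draft-min bound how many tokens are drafted per step.
llama-server -m ./Qwen3.5-27B-Q4_K_M.gguf \
  -md ./Qwen3.5-0.8B-Q8_0.gguf \
  --draft-max 16 --draft-min 4
```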
t1_o8ce6h3 | For the pricing - maybe simply what you get for $200USD/mo (subscription or API pricing - whatever is cheapest). | 1 | 0 | 2026-03-03T02:39:51 | sammcj | false | null | 0 | o8ce6h3 | false | /r/LocalLLaMA/comments/1rj3yzz/coding_power_ranking_2602/o8ce6h3/ | false | 1 |
t1_o8cdysf | [removed] | 1 | 0 | 2026-03-03T02:38:35 | [deleted] | true | null | 0 | o8cdysf | false | /r/LocalLLaMA/comments/1riwy9w/is_qwen359b_enough_for_agentic_coding/o8cdysf/ | false | 1 |
t1_o8cdy3k | I said this as well and got downvoted to hell lol. I think people are out of touch with how much inference should actually cost. | 1 | 0 | 2026-03-03T02:38:28 | Ok-Internal9317 | false | null | 0 | o8cdy3k | false | /r/LocalLLaMA/comments/1rj9bl7/api_price_for_the_27b_qwen_35_is_just_outrageous/o8cdy3k/ | false | 1 |
t1_o8cdonj | You lucky bastard lol, I wonder if Apple will be showing their M5 Max/Ultra Mac Studio this week. | 1 | 0 | 2026-03-03T02:36:55 | lolwutdo | false | null | 0 | o8cdonj | false | /r/LocalLLaMA/comments/1rj6m71/qwen_35_27b_a_testament_to_the_transformer/o8cdonj/ | false | 1 |
t1_o8cdjyz | Oh this is great, I didn't know about this. Thank you for sharing it! | 1 | 0 | 2026-03-03T02:36:09 | jeremyckahn | false | null | 0 | o8cdjyz | false | /r/LocalLLaMA/comments/1riic5m/running_llamaserver_as_a_persistent_systemd/o8cdjyz/ | false | 1 |
t1_o8cdi1q | Are you using open WebUI? | 1 | 0 | 2026-03-03T02:35:50 | Far-Low-4705 | false | null | 0 | o8cdi1q | false | /r/LocalLLaMA/comments/1rizlkn/qwen_27b_is_a_beast_but_not_for_agentic_work/o8cdi1q/ | false | 1 |
t1_o8cdh6b | Have you tried non-thinking Qwen3 vs non-thinking Qwen3.5? That one is my most commonly used model; I rarely use reasoning ones as they are too slow for experimenting. | 1 | 0 | 2026-03-03T02:35:41 | DistanceAlert5706 | false | null | 0 | o8cdh6b | false | /r/LocalLLaMA/comments/1rirtyy/qwen35_9b_and_4b_benchmarks/o8cdh6b/ | false | 1 |
t1_o8cddem | I find this type of agent very interesting. Normally people tend to obliterate them, but in your case you made it more secure. I was wondering how it performs in the following cases:
- Code agents: being a less-quantized model, how well does it perform when writing code?
- Claw-type agents: typical agents that connect to a bunch of other tools, with or without authorization. I would like to know how it behaves. | 1 | 0 | 2026-03-03T02:35:04 | vk3r | false | null | 0 | o8cddem | false | /r/LocalLLaMA/comments/1rj89qy/merlin_research_released_qwen354bsafetythinking_a/o8cddem/ | false | 1 |
t1_o8cdd34 | I use them mainly for testing new architectures before scaling to large SOTA models; I usually test and trace 0.5-3B models before committing to 30-70B models.
Helps me get a general idea of how the model would behave architecture-wise with the edits and kernels I write. I also use them for edge deployment on embedded systems and mobile devices for simple tasks, and generally for just having fun and testing quantization limits before a model regresses to a basic glorified if-else condition. | 1 | 0 | 2026-03-03T02:35:01 | Daemontatox | false | null | 0 | o8cdd34 | false | /r/LocalLLaMA/comments/1rirlau/breaking_the_small_qwen35_models_have_been_dropped/o8cdd34/ | false | 1 |
t1_o8cdcx7 | Just wanted to see the model behavior. | 1 | 0 | 2026-03-03T02:34:59 | Busy-Guru-1254 | false | null | 0 | o8cdcx7 | false | /r/LocalLLaMA/comments/1rivzcl/qwen_35_2b_is_an_ocr_beast/o8cdcx7/ | false | 1 |
t1_o8cd7mi | LM Studio is trash; use llama.cpp or vLLM | 1 | 0 | 2026-03-03T02:34:06 | nakedspirax | false | null | 0 | o8cd7mi | false | /r/LocalLLaMA/comments/1rjb7yk/psa_if_you_want_to_test_new_models_use/o8cd7mi/ | false | 1 |
t1_o8ccug7 | I've been using Qwen 3 30B-A3B as my local model for months. It's like the sweet spot of size and power and speed.
Qwen 3.5 27B is a more powerful model with smaller size, but lower speed. I want to try it to see if it's worth switching. | 1 | 0 | 2026-03-03T02:31:53 | Dramatic_Pin_7160 | false | null | 0 | o8ccug7 | false | /r/LocalLLaMA/comments/1rj6m71/qwen_35_27b_a_testament_to_the_transformer/o8ccug7/ | false | 1 |
t1_o8cctr2 | whos gonna tell him | 1 | 0 | 2026-03-03T02:31:46 | Distinct_Lion7157 | false | null | 0 | o8cctr2 | false | /r/LocalLLaMA/comments/1rh43za/qwen_3535ba3b_is_beyond_expectations_its_replaced/o8cctr2/ | false | 1 |
t1_o8ccqcv | I don't think I can run the 4B model on my current phone; the 2B might work, but with problems. | 1 | 0 | 2026-03-03T02:31:12 | Samy_Horny | false | null | 0 | o8ccqcv | false | /r/LocalLLaMA/comments/1rjcqm5/qwen_35_4b_is_scary_smart/o8ccqcv/ | false | 1 |
t1_o8ccq88 | Well I've tried qwen 3 8b as well ... Comparatively this is fast and doesn't heat up the phone that much. | 1 | 0 | 2026-03-03T02:31:11 | Zealousideal-Check77 | false | null | 0 | o8ccq88 | false | /r/LocalLLaMA/comments/1rj4nnq/qwen352b_on_android/o8ccq88/ | false | 1 |
t1_o8ccpwm | If I had to think before responding to everything, my mind would also look like this. Albeit with different thoughts. | 1 | 0 | 2026-03-03T02:31:07 | Intrepid-Self-3578 | false | null | 0 | o8ccpwm | false | /r/LocalLLaMA/comments/1rj8x1q/qwen35_4b_overthinking_to_say_hello/o8ccpwm/ | false | 1 |
t1_o8ccppq | Someone forgot to hide the 13 attempts | 1 | 0 | 2026-03-03T02:31:05 | ilovedogsandfoxes | false | null | 0 | o8ccppq | false | /r/LocalLLaMA/comments/1rja0sb/gptoss_had_to_think_for_4_minutes_where_qwen359b/o8ccppq/ | false | 1 |
t1_o8cco7m | llama.cpp has fine tuning in examples. I wonder how that would go with the 0.8 as a starting point.
Might give it a shot this week. | 1 | 0 | 2026-03-03T02:30:50 | teleprint-me | false | null | 0 | o8cco7m | false | /r/LocalLLaMA/comments/1rj6m71/qwen_35_27b_a_testament_to_the_transformer/o8cco7m/ | false | 1 |
t1_o8ccha3 | Just to clarify, this isn’t taking video as input, right? It’s just taking a screenshot of whatever is on screen the moment you send the prompt? | 1 | 0 | 2026-03-03T02:29:40 | skinnyjoints | false | null | 0 | o8ccha3 | false | /r/LocalLLaMA/comments/1rizodv/running_qwen_35_08b_locally_in_the_browser_on/o8ccha3/ | false | 1 |
t1_o8cch2f | That noise | 1 | 0 | 2026-03-03T02:29:37 | DaLexy | false | null | 0 | o8cch2f | false | /r/LocalLLaMA/comments/1rj76pb/qwen35122ba10bq8_handling_the_car_wash_question/o8cch2f/ | false | 1 |
t1_o8ccema | Thanks for sharing! | 1 | 0 | 2026-03-03T02:29:13 | wedgeshot | false | null | 0 | o8ccema | false | /r/LocalLLaMA/comments/1riic5m/running_llamaserver_as_a_persistent_systemd/o8ccema/ | false | 1 |
t1_o8ccbr9 | In theory, Qwen3.5-27B should cost about the same as Mistral Small ($0.1/$0.3) or less due to linear attention.
However, Alibaba likely wants to encourage users toward their cheaper 'Flash' or 'Plus' APIs, which are optimized MoE models (like the 35B-A3B/397B-A17B) that are discounted, possibly via quantization. To differentiate, they charge a premium for the raw open-source model on their API.
You are essentially paying extra for the exact behavior of this dense model (if you can't host it locally) and supporting their future R&D. | 1 | 0 | 2026-03-03T02:28:44 | lly0571 | false | null | 0 | o8ccbr9 | false | /r/LocalLLaMA/comments/1rj9bl7/api_price_for_the_27b_qwen_35_is_just_outrageous/o8ccbr9/ | false | 1 |
t1_o8cc2lj | I think 9B should fit best with Q8, no? | 1 | 0 | 2026-03-03T02:27:12 | Ok-Internal9317 | false | null | 0 | o8cc2lj | false | /r/LocalLLaMA/comments/1rivzcl/qwen_35_2b_is_an_ocr_beast/o8cc2lj/ | false | 1 |
t1_o8cbyum | How did you do this? | 1 | 0 | 2026-03-03T02:26:34 | knownboyofno | false | null | 0 | o8cbyum | false | /r/LocalLLaMA/comments/1rgexmk/qwen3527bclaude46opusreasoningdistilledgguf_is_out/o8cbyum/ | false | 1 |
t1_o8cbyie | Well "jungle laptop" isn't an industry standard term. | 1 | 0 | 2026-03-03T02:26:31 | Red_Redditor_Reddit | false | null | 0 | o8cbyie | false | /r/LocalLLaMA/comments/1iw3gzg/how_much_does_cpu_speed_matter_for_inference/o8cbyie/ | false | 1 |
t1_o8cbwwm | I like LM studio, even if it's a little slower to get the latest features.
Ollama is trash though. | 1 | 0 | 2026-03-03T02:26:14 | Soft-Barracuda8655 | false | null | 0 | o8cbwwm | false | /r/LocalLLaMA/comments/1rjb7yk/psa_if_you_want_to_test_new_models_use/o8cbwwm/ | false | 1 |
t1_o8cbtrl | Which temperature are you using? In my tests, a temperature of 0.5 works well and the model doesn't loop; it worked very well writing code snippets and running tool calls to the browser.
My tests were writing Godot C# code and fetching the top 3 posts of Hacker News with Playwright MCP | 1 | 0 | 2026-03-03T02:25:43 | yay-iviss | false | null | 0 | o8cbtrl | false | /r/LocalLLaMA/comments/1rjbw0p/benchmarked_qwen_35_small_models_08b2b4b9b_on/o8cbtrl/ | false | 1 |
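A short sketch of pinning that sampler server-side with llama.cpp so every client inherits it; 0.5 is the commenter's value, not an official recommendation, and the model path is a placeholder:

```bash
# --temp sets the default sampling temperature for all requests to this server.
llama-server -m ./model.gguf --temp 0.5
```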
t1_o8cbnai | In my testing, 3.5 is either better or at least gets the same answer in fewer tokens than 2507.
It is also significantly slower, but I am assuming there are implementation bugs in llama.cpp. | 1 | 0 | 2026-03-03T02:24:37 | sxales | false | null | 0 | o8cbnai | false | /r/LocalLLaMA/comments/1rirtyy/qwen35_9b_and_4b_benchmarks/o8cbnai/ | false | 1 |
t1_o8cbg12 | I'm not from the US or China, and from my perspective, China isn’t more evil than the US — especially considering the behavior of large American corporations. | 1 | 0 | 2026-03-03T02:23:22 | Dramatic_Pin_7160 | false | null | 0 | o8cbg12 | false | /r/LocalLLaMA/comments/1rgn4ki/president_trump_orders_all_federal_agencies_in/o8cbg12/ | false | 1 |
t1_o8cbegp | [removed] | 1 | 0 | 2026-03-03T02:23:06 | [deleted] | true | null | 0 | o8cbegp | false | /r/LocalLLaMA/comments/1rjcqm5/qwen_35_4b_is_scary_smart/o8cbegp/ | false | 1 |
t1_o8cbd24 | If you can think of an accurate way to make an apples to apples comparison across Anthropic, OpenAI, GLM, Cerebras, etc subscriptions, I'm all ears. Without that, API pricing is the only sane way to measure. | 1 | 0 | 2026-03-03T02:22:52 | mr_riptano | false | null | 0 | o8cbd24 | false | /r/LocalLLaMA/comments/1rj3yzz/coding_power_ranking_2602/o8cbd24/ | false | 1 |
t1_o8cbazg | just posted one | 1 | 0 | 2026-03-03T02:22:31 | maho_Yun | false | null | 0 | o8cbazg | false | /r/LocalLLaMA/comments/1rg05k7/qwen_35_122b_a10b_3584_score_on_natint_ugi/o8cbazg/ | false | 1 |
t1_o8cb37d | Make sure you're passing the `--jinja` flag so that it uses the correct template | 1 | 0 | 2026-03-03T02:21:13 | cristoper | false | null | 0 | o8cb37d | false | /r/LocalLLaMA/comments/1rjb34p/no_thinking_in_unsloth_qwen35_quants/o8cb37d/ | false | 1 |
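A hedged sketch of what that looks like for llama-server; model and template paths are placeholders:

```bash
# --jinja makes llama-server use the Jinja chat template embedded in the GGUF
# instead of its built-in fallback templates.
llama-server -m ./Qwen3.5-9B-Q4_K_M.gguf --jinja
# A corrected template can also be supplied explicitly without re-downloading the model:
llama-server -m ./Qwen3.5-9B-Q4_K_M.gguf --jinja --chat-template-file ./qwen35.jinja
```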
t1_o8cb09f | Thanks, the Markdown parsing is now correct, but there still seem to be issues with emojis and the thought process.
You can test with: "Output 100 different emojis" | 1 | 0 | 2026-03-03T02:20:43 | AdPast3 | false | null | 0 | o8cb09f | false | /r/LocalLLaMA/comments/1rf288a/qwen3codernext_at_65_toks_on_m3_ultra_with/o8cb09f/ | false | 1 |
t1_o8cb04g | Or...
Install the LLM locally, and then connect to it using eworker ([app.eworker.ca](http://app.eworker.ca/)); we support "presence penalty"
Example:
https://preview.redd.it/psg0sbmjpqmg1.jpeg?width=1293&format=pjpg&auto=webp&s=cf153cf1cf32068d5e8a1db001ed2d4b7ccc83d8
| 1 | 0 | 2026-03-03T02:20:41 | eworker8888 | false | null | 0 | o8cb04g | false | /r/LocalLLaMA/comments/1rjb7yk/psa_if_you_want_to_test_new_models_use/o8cb04g/ | false | 1 |
t1_o8caz7i | It happens with vLLM too until I used the presence penalty and adjusted the other generation params to match the suggested configuration. | 1 | 0 | 2026-03-03T02:20:32 | Imaginary_Belt4976 | false | null | 0 | o8caz7i | false | /r/LocalLLaMA/comments/1rjb7yk/psa_if_you_want_to_test_new_models_use/o8caz7i/ | false | 1 |
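A minimal sketch of setting that penalty through vLLM's OpenAI-compatible endpoint; the port, model name, and sampling values below are assumptions, so check the model card for the actually suggested configuration:

```bash
# presence_penalty is a standard field of the OpenAI-compatible
# /v1/chat/completions API that vLLM serves.
curl -s http://localhost:8000/v1/chat/completions \
  -H 'Content-Type: application/json' \
  -d '{
        "model": "qwen3.5",
        "messages": [{"role": "user", "content": "hello"}],
        "temperature": 0.6,
        "top_p": 0.95,
        "presence_penalty": 1.0
      }'
```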
t1_o8caxej | thanks, I will try qwen with alibaba studio, it's the supposed OG implementation | 1 | 0 | 2026-03-03T02:20:14 | jrhabana | false | null | 0 | o8caxej | false | /r/LocalLLaMA/comments/1rizy4r/what_models_to_understand_videos_no_transcripts/o8caxej/ | false | 1 |