name string | body string | score int64 | controversiality int64 | created timestamp[us] | author string | collapsed bool | edited timestamp[us] | gilded int64 | id string | locked bool | permalink string | stickied bool | ups int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
t1_o7x55w2 | The 27B is definitely a stronger, slower model. I don’t know what the Ollama website is telling you, but it is obviously wrong. | 7 | 0 | 2026-02-28T18:30:24 | coder543 | false | null | 0 | o7x55w2 | false | /r/LocalLLaMA/comments/1rh9dt3/do_you_find_qwen314bq8_0_15gb_smarter_than/o7x55w2/ | false | 7 |
t1_o7x52za | Proud of our folks here! | 84 | 0 | 2026-02-28T18:30:01 | pmttyji | false | null | 0 | o7x52za | false | /r/LocalLLaMA/comments/1rh9u4r/this_sub_is_incredible/o7x52za/ | false | 84 |
t1_o7x52rk | I mean-- we see this in humans as well.
In tests I was always told to go with my first instinct because too often we talk ourselves out of the right answer | -2 | 0 | 2026-02-28T18:29:59 | ThatRandomJew7 | false | null | 0 | o7x52rk | false | /r/LocalLLaMA/comments/1rh6pru/google_found_that_longer_chain_of_thought/o7x52rk/ | false | -2 |
t1_o7x4rj5 | I asked Gemini for assistance, and it guided me step-by-step on how to create a Tokenizer. It turned out that this was ALL I needed to make your wonderful app work, which I had no idea about. The sound quality of my cloned voice is excellent, so I thank you from the bottom of my heart. It works flawlessly. Godspeed! | 2 | 0 | 2026-02-28T18:28:26 | timeshifter24 | false | null | 0 | o7x4rj5 | false | /r/LocalLLaMA/comments/1rabo34/local_tts_server_with_voice_cloning_nearrealtime/o7x4rj5/ | false | 2 |
t1_o7x4qng | Well, comparing the 27B with the 35B on Ollama's website would suggest that the 27B should be well behind the 35B and almost certainly less capable than the Q3 14B I'm talking about. Unless the 35B is messed up in Ollama. | -2 | 0 | 2026-02-28T18:28:19 | donatas_xyz | false | null | 0 | o7x4qng | false | /r/LocalLLaMA/comments/1rh9dt3/do_you_find_qwen314bq8_0_15gb_smarter_than/o7x4qng/ | false | -2 |
t1_o7x4p1x | I think GPT-Image is autoregressive or a combination, back in the early days you could actually see the blurry colors, then the clear image would render line by line | 2 | 0 | 2026-02-28T18:28:06 | ThatRandomJew7 | false | null | 0 | o7x4p1x | false | /r/LocalLLaMA/comments/1rh095c/deepseek_v4_will_be_released_next_week_and_will/o7x4p1x/ | false | 2 |
t1_o7x4md2 | Hate to break it to you, but you don't get any better performance with Q8 on GPT OSS. The highest precision they released was native MXFP4, so "Q8" quants just upcast the bits without adding any extra information.
So "Q8" isn't increasing intelligence at all, but it is doubling the memory usage and probably 50% slower ... | 2 | 0 | 2026-02-28T18:27:44 | Far-Low-4705 | false | null | 0 | o7x4md2 | false | /r/LocalLLaMA/comments/1rgzul5/are_you_ready_for_small_qwens/o7x4md2/ | false | 2 |
t1_o7x4j23 | The model of choice for consumer mac/minis. I like it. Waiting for an mlx version. | 7 | 0 | 2026-02-28T18:27:15 | KittyPigeon | false | null | 0 | o7x4j23 | false | /r/LocalLLaMA/comments/1rh9k63/qwen35_35ba3b_replaced_my_2model_agentic_setup_on/o7x4j23/ | false | 7 |
t1_o7x4egn | 3090s? I'm using a P100. | 25 | 0 | 2026-02-28T18:26:38 | Pretty_Challenge_634 | false | null | 0 | o7x4egn | false | /r/LocalLLaMA/comments/1rh9u4r/this_sub_is_incredible/o7x4egn/ | false | 25 |
t1_o7x4bag | I’m considering buying the R9700: thanks for the benchmark!
One thing I think is worth mentioning: saying that the 5800 has *16 cores* is a bit misleading ;) | 1 | 0 | 2026-02-28T18:26:11 | 3Qn_ | false | null | 0 | o7x4bag | false | /r/LocalLLaMA/comments/1prgi41/amd_radeon_ai_pro_r9700_benchmarks_with_rocm_and/o7x4bag/ | false | 1 |
t1_o7x4851 | >\- GLM-4.7 quantized is still the local GOAT. 1572 ELO, beats every single Qwen 3.5 model including the full 397B cloud version. if you're picking one local model for coding, this is still it (better than GLM-5 even!)
I was bombed with dislikes when I said that smaller GLM 4.7 is better than the bigger Qwen 3.5 397B,... | 1 | 0 | 2026-02-28T18:25:45 | Cool-Chemical-5629 | false | null | 0 | o7x4851 | false | /r/LocalLLaMA/comments/1reds0p/qwen_35_craters_on_hard_coding_tasks_tested_all/o7x4851/ | false | 1 |
t1_o7x40ta | Panic mode? Why? How do you know?
This was a good move, to stand on principles, one that bought them goodwill. This will ensure their longevity and access to the best talent. | 1 | 0 | 2026-02-28T18:24:45 | Gloobloomoo | false | null | 0 | o7x40ta | false | /r/LocalLLaMA/comments/1rguty0/get_your_local_models_in_order_anthropic_just_got/o7x40ta/ | false | 1 |
t1_o7x40sg | MoE and active parameters are changing the game. | 1 | 0 | 2026-02-28T18:24:44 | nakedspirax | false | null | 0 | o7x40sg | false | /r/LocalLLaMA/comments/1rh46g2/why_some_still_playing_with_old_models_nostalgia/o7x40sg/ | false | 1 |
t1_o7x3zm3 | If the new were always better, it would be easy to just swap one model for another and be done. But it is not that easy: it depends on what you use it for, and you have to take into account whether they have added more guardrails or censorship. That's not counting fine-tunes, or the more-than-proven operation of the model you already have for an agen... | 1 | 0 | 2026-02-28T18:24:35 | Macestudios32 | false | null | 0 | o7x3zm3 | false | /r/LocalLLaMA/comments/1rh46g2/why_some_still_playing_with_old_models_nostalgia/o7x3zm3/ | false | 1 |
t1_o7x3xeu | Give the 27B a go. It's an extremely good model for its size and will fit well within your VRAM. | 3 | 0 | 2026-02-28T18:24:17 | Stepfunction | false | null | 0 | o7x3xeu | false | /r/LocalLLaMA/comments/1rh9dt3/do_you_find_qwen314bq8_0_15gb_smarter_than/o7x3xeu/ | false | 3 |
t1_o7x3wyo | Me too. I did a small experiment on them: https://theredbeard.io/blog/i-accidentally-benchmarked-three-free-llms-against-sonnet/ | 2 | 0 | 2026-02-28T18:24:13 | wouldacouldashoulda | false | null | 0 | o7x3wyo | false | /r/LocalLLaMA/comments/1rgokw1/a_monthly_update_to_my_where_are_openweight/o7x3wyo/ | false | 2 |
t1_o7x3wh4 | Yes, I was wondering, since even small config changes can render the KV cache useless.
Sadly this is quite the caveat for agent communication, since in my experience it makes the most sense to use different agents for different tasks - but it can also be useful for multitasking a single agent in idle times | 2 | 0 | 2026-02-28T18:24:09 | muyuu | false | null | 0 | o7x3wh4 | false | /r/LocalLLaMA/comments/1rh802w/what_if_llm_agents_passed_kvcache_to_each_other/o7x3wh4/ | false | 2 |
t1_o7x3tfl | I don’t know. But I think it deserves to be discussed. | 0 | 0 | 2026-02-28T18:23:44 | Thin-Effect-3926 | false | null | 0 | o7x3tfl | false | /r/LocalLLaMA/comments/1rh9lll/i_want_to_build_an_opensource_ai_senate_a/o7x3tfl/ | false | 0 |
t1_o7x3rim | Very curious how those hold up vs for example Codex? I have tried GLM-5 free and it was “alright”, and Qwen only locally. | 1 | 0 | 2026-02-28T18:23:27 | wouldacouldashoulda | false | null | 0 | o7x3rim | false | /r/LocalLLaMA/comments/1rgokw1/a_monthly_update_to_my_where_are_openweight/o7x3rim/ | false | 1 |
t1_o7x3rez | Really? I'm able to run it at full context length with 32GB.
Granted, I run Q4_0 since that just runs significantly faster on my older hardware | 3 | 0 | 2026-02-28T18:23:26 | Far-Low-4705 | false | null | 0 | o7x3rez | false | /r/LocalLLaMA/comments/1rgzul5/are_you_ready_for_small_qwens/o7x3rez/ | false | 3 |
t1_o7x3que | I mean, yeah man of course | 1 | 0 | 2026-02-28T18:23:22 | MKU64 | false | null | 0 | o7x3que | false | /r/LocalLLaMA/comments/1rh5luv/qwen35_35ba3b_evaded_the_zeroreasoning_budget_by/o7x3que/ | false | 1 |
t1_o7x3pqm | Great write up!
With no NVLink or PIX the tensor parallel all-reduce is crossing the PCIe on every transformer layer. This shows up in your TTFT P99 variance vs median spread... At higher concurrency the PHB bottleneck becomes the limiting factor before compute.
The 35B AWQ-4bit thing makes sense given smaller KV c... | 1 | 0 | 2026-02-28T18:23:12 | paulahjort | false | null | 0 | o7x3pqm | false | /r/LocalLLaMA/comments/1rh8li2/qwen_3_30b_a3b_2507_qwen_35_35b_a3b_benchmarked/o7x3pqm/ | false | 1 |
t1_o7x3osv | Older models aren't always worse for specific tasks. Qwen-2.5-Coder-32B still outperforms several newer models on structured code completion when you need deterministic output with constrained grammars. I run it daily in a pipeline that generates JSON function calls — switching to Qwen-3 actually increased my schema va... | 13 | 0 | 2026-02-28T18:23:05 | tom_mathews | false | null | 0 | o7x3osv | false | /r/LocalLLaMA/comments/1rh46g2/why_some_still_playing_with_old_models_nostalgia/o7x3osv/ | false | 13 |
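As a sketch of what that constrained-JSON pipeline can look like against a local llama-server (the `/completion` endpoint's `json_schema` field exists in current llama.cpp; the URL and schema here are placeholders):

```python
import json, urllib.request

schema = {  # hypothetical function-call shape for illustration
    "type": "object",
    "properties": {
        "name": {"type": "string"},
        "arguments": {"type": "object"},
    },
    "required": ["name", "arguments"],
}
req = urllib.request.Request(
    "http://localhost:8080/completion",   # assumes a llama-server running locally
    data=json.dumps({
        "prompt": "Call a tool to look up the weather in Oslo.\n",
        "json_schema": schema,            # constrains decoding to schema-valid JSON
        "temperature": 0,                 # greedy sampling for deterministic output
    }).encode(),
    headers={"Content-Type": "application/json"},
)
print(json.load(urllib.request.urlopen(req))["content"])
```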
t1_o7x3mfz | That's it! Got 10tps
Thanks a lot mate | 2 | 0 | 2026-02-28T18:22:46 | AppealSame4367 | false | null | 0 | o7x3mfz | false | /r/LocalLLaMA/comments/1rgzul5/are_you_ready_for_small_qwens/o7x3mfz/ | false | 2 |
t1_o7x3mfc | I am wondering about the quality...without thinking | 1 | 0 | 2026-02-28T18:22:45 | appakaradi | false | null | 0 | o7x3mfc | false | /r/LocalLLaMA/comments/1rh43za/qwen_3535ba3b_is_beyond_expectations_its_replaced/o7x3mfc/ | false | 1 |
t1_o7x3l9a | 27b dense is a little bit too slow for reasoning | 1 | 0 | 2026-02-28T18:22:35 | Far-Low-4705 | false | null | 0 | o7x3l9a | false | /r/LocalLLaMA/comments/1rgzul5/are_you_ready_for_small_qwens/o7x3l9a/ | false | 1 |
t1_o7x3iqw | I can try tonight. | 2 | 0 | 2026-02-28T18:22:14 | Thrumpwart | false | null | 0 | o7x3iqw | false | /r/LocalLLaMA/comments/1rgvma8/which_size_of_qwen35_are_you_planning_to_run/o7x3iqw/ | false | 2 |
t1_o7x3frx | Man, I woulda loved an updated 80B A3B | 3 | 0 | 2026-02-28T18:21:50 | Far-Low-4705 | false | null | 0 | o7x3frx | false | /r/LocalLLaMA/comments/1rgzul5/are_you_ready_for_small_qwens/o7x3frx/ | false | 3 |
t1_o7x32ye | Great real life benchmark | 8 | 0 | 2026-02-28T18:20:05 | etcetera0 | false | null | 0 | o7x32ye | false | /r/LocalLLaMA/comments/1rh9k63/qwen35_35ba3b_replaced_my_2model_agentic_setup_on/o7x32ye/ | false | 8 |
t1_o7x2tu2 | hm, not sure then | 1 | 0 | 2026-02-28T18:18:51 | Far-Low-4705 | false | null | 0 | o7x2tu2 | false | /r/LocalLLaMA/comments/1rh43za/qwen_3535ba3b_is_beyond_expectations_its_replaced/o7x2tu2/ | false | 1 |
t1_o7x2tf0 | Can we replace politicians as the main goal? | 1 | 0 | 2026-02-28T18:18:47 | etcetera0 | false | null | 0 | o7x2tf0 | false | /r/LocalLLaMA/comments/1rh9lll/i_want_to_build_an_opensource_ai_senate_a/o7x2tf0/ | false | 1 |
t1_o7x2okb | 775GB? If that's unified memory usable as VRAM, that thing is going to cost north of $100K minimum. Unreachable, but imagine if this becomes commodity: how much more new science and engineering that would unblock | 1 | 0 | 2026-02-28T18:18:08 | hyllus123 | false | null | 0 | o7x2okb | false | /r/LocalLLaMA/comments/1rd80gx/i_just_saw_something_amazing/o7x2okb/ | false | 1 |
t1_o7x2k7k | Try it. The mentioned batch size is already known to be a limiting factor. | 1 | 0 | 2026-02-28T18:17:32 | Waste-Excitement-683 | false | null | 0 | o7x2k7k | false | /r/LocalLLaMA/comments/1rh9983/qwen_35b_a3b_aessedai_finetune_on_8gb_vram_and/o7x2k7k/ | false | 1 |
t1_o7x2eqv | Thank you. It could be, but it would seem that Ollama is only offering Q4 to download? I personally would never go below Q6 if I can help it. | 2 | 0 | 2026-02-28T18:16:46 | donatas_xyz | false | null | 0 | o7x2eqv | false | /r/LocalLLaMA/comments/1rh9dt3/do_you_find_qwen314bq8_0_15gb_smarter_than/o7x2eqv/ | false | 2 |
t1_o7x2elr | Ollama is not easy to recommend anymore. Use llama-server. | 15 | 0 | 2026-02-28T18:16:45 | coder543 | false | null | 0 | o7x2elr | false | /r/LocalLLaMA/comments/1rh9dt3/do_you_find_qwen314bq8_0_15gb_smarter_than/o7x2elr/ | false | 15 |
t1_o7x29o1 | When you say full prompt reprocessing, are you sure? One thing I've noticed is that Qwen3.5's template keeps reasoning_content between tool calls, but after a 'final' message, all of the reasoning_content is dropped from the template. Naturally, this means all of the previous context from that series of tool calls has ... | 2 | 0 | 2026-02-28T18:16:05 | coder543 | false | null | 0 | o7x29o1 | false | /r/LocalLLaMA/comments/1rh6455/anybody_able_to_get_qwen3535ba3b_working_with/o7x29o1/ | false | 2 |
t1_o7x27j4 | from 3tps to 3-4tps with pure cpu-moe | 1 | 0 | 2026-02-28T18:15:47 | AppealSame4367 | false | null | 0 | o7x27j4 | false | /r/LocalLLaMA/comments/1rgzul5/are_you_ready_for_small_qwens/o7x27j4/ | false | 1 |
t1_o7x26j6 | Nemotron is not a pure safety model; as a result, when you add reasoning in an SGR schema it starts solving tricky stuff like "how to steal eggs from a chicken" better. For subtle stuff it sometimes performs better in evals. | 2 | 0 | 2026-02-28T18:15:38 | LienniTa | false | null | 0 | o7x26j6 | false | /r/LocalLLaMA/comments/1rh46g2/why_some_still_playing_with_old_models_nostalgia/o7x26j6/ | false | 2 |
t1_o7x1u5l | Thank you. I should've mentioned everything was left default in Ollama + Open WebUI, except both models having context window increased to 8K. | 2 | 1 | 2026-02-28T18:13:57 | donatas_xyz | false | null | 0 | o7x1u5l | false | /r/LocalLLaMA/comments/1rh9dt3/do_you_find_qwen314bq8_0_15gb_smarter_than/o7x1u5l/ | false | 2 |
t1_o7x1t21 | jacek2023, do you remember what happened the last time we talked about the hidden items in collections?
I wouldn't go there again... 😂 | 1 | 0 | 2026-02-28T18:13:48 | Cool-Chemical-5629 | false | null | 0 | o7x1t21 | false | /r/LocalLLaMA/comments/1rgzul5/are_you_ready_for_small_qwens/o7x1t21/ | false | 1 |
t1_o7x1srr | Will give it a go once I'm back home. I think my current script includes everything mentioned, but I'll double-check. I use Q8 for sure | 1 | 0 | 2026-02-28T18:13:46 | sagiroth | false | null | 0 | o7x1srr | false | /r/LocalLLaMA/comments/1rh9983/qwen_35b_a3b_aessedai_finetune_on_8gb_vram_and/o7x1srr/ | false | 1 |
t1_o7x1s4l | I have the impression Qwen 3.5 is more sensitive to quantization than other models, and it makes sense: the more intelligence you cram into a network, the more delicate the integrity of the weights becomes. | 1 | 0 | 2026-02-28T18:13:41 | Hanthunius | false | null | 0 | o7x1s4l | false | /r/LocalLLaMA/comments/1rh9dt3/do_you_find_qwen314bq8_0_15gb_smarter_than/o7x1s4l/ | false | 1 |
t1_o7x1ol6 | Thanks! But without links to sources it's a bit hard to trust this. | 1 | 0 | 2026-02-28T18:13:12 | OsmanthusBloom | false | null | 0 | o7x1ol6 | false | /r/LocalLLaMA/comments/1rgoygs/does_qwen35_35b_outperform_qwen3_coder_next_80b/o7x1ol6/ | false | 1 |
t1_o7x1hzw | Somebody created a very nice comparison for this GPU [https://kyuz0.github.io/amd-r9700-ai-toolboxes/](https://kyuz0.github.io/amd-r9700-ai-toolboxes/) Be wary though, some of these are measured with dual GPUs.
I got it from this YT video [https://www.youtube.com/watch?v=dgyqBUD71lg](https://www.youtube.com/watch?v=d... | 4 | 0 | 2026-02-28T18:12:17 | And1mon | false | null | 0 | o7x1hzw | false | /r/LocalLLaMA/comments/1rgzul5/are_you_ready_for_small_qwens/o7x1hzw/ | false | 4 |
t1_o7x1g5u | Why not share more details about your setup, harness, and dataset used for evals?
Why use old models? | 1 | 0 | 2026-02-28T18:12:02 | ekaj | false | null | 0 | o7x1g5u | false | /r/LocalLLaMA/comments/1rh2tmu/benchmarking_opensource_llms_for_security/o7x1g5u/ | false | 1 |
t1_o7x1ety | Y'all are heroes through and through | 2 | 0 | 2026-02-28T18:11:51 | DiverDigital | false | null | 0 | o7x1ety | false | /r/LocalLLaMA/comments/1rh0xwk/unsloth_dynamic_20_ggufs_now_selectively/o7x1ety/ | false | 2 |
t1_o7x1d68 | I use the same temp recommended by Unsloth, and yes, it was 0.6 that I used. I have gone back to Qwen3 Coder Next already, but I still want to give this model another try.
Hopefully we will see a Qwen3.5 Coder soon. | 1 | 0 | 2026-02-28T18:11:37 | anhphamfmr | false | null | 0 | o7x1d68 | false | /r/LocalLLaMA/comments/1rf2b90/benchmarking_qwen3535b_vs_gptoss20b_for_agentic/o7x1d68/ | false | 1 |
t1_o7x1b94 | They will move to Europe before they let that happen, I think. | 5 | 0 | 2026-02-28T18:11:22 | Amazing_Trace | false | null | 0 | o7x1b94 | false | /r/LocalLLaMA/comments/1rguty0/get_your_local_models_in_order_anthropic_just_got/o7x1b94/ | false | 5 |
t1_o7x17j0 | I also notice the same phenomenon using Devstral. I use Devstral (from Mistral) as a conversational philosopher and it does a LOT better than the general-purpose Mistral. General-purpose models sometimes feel morphed toward the path of least resistance: no rigor, just always happy to help, and that is unproducti... | 2 | 0 | 2026-02-28T18:10:52 | Express_Quail_1493 | false | null | 0 | o7x17j0 | false | /r/LocalLLaMA/comments/1r0abpl/do_not_let_the_coder_in_qwen3codernext_fool_you/o7x17j0/ | false | 2 |
t1_o7x1201 | |**Metric / Benchmark** |**Qwen3.5 35B (A3B)**|**Qwen3 Coder Next (80B)**|**Notes**|
|:-|:-|:-|:-|
|**Intelligence Index**|**37.0**|28.0|Artificial Analysis indicates the 35B is ahead in general reasoning.|
|**SWE-Bench Verified**|~68.5%|**70.6%**|The 80B Coder has the edge at resolving real bugs in repositories.... | 1 | 0 | 2026-02-28T18:10:07 | JsThiago5 | false | null | 0 | o7x1201 | false | /r/LocalLLaMA/comments/1rgoygs/does_qwen35_35b_outperform_qwen3_coder_next_80b/o7x1201/ | false | 1 |
t1_o7x10st | try the following:
- remove -ngl 999
- remove cmoe
- use -jinja template
- use -ctk and ctv q8
- use -fa on
- use -no-mmap
- use -fit -on | 2 | 0 | 2026-02-28T18:09:57 | Waste-Excitement-683 | false | null | 0 | o7x10st | false | /r/LocalLLaMA/comments/1rh9983/qwen_35b_a3b_aessedai_finetune_on_8gb_vram_and/o7x10st/ | false | 2 |
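Assembled into a concrete invocation, a minimal sketch (the model path is a placeholder; flag spellings follow current llama.cpp and may differ on older builds, and the `-fit` flag from the comment is left out since its availability is build-dependent):

```bash
# Hypothetical model path; flags adapted from the comment above.
llama-server -m ./qwen3.5-35b-a3b-Q4_K_M.gguf \
  --jinja \
  -ctk q8_0 -ctv q8_0 \
  -fa on \
  --no-mmap
# --jinja        : apply the chat template embedded in the GGUF
# -ctk/-ctv q8_0 : quantize the KV cache keys/values to 8-bit
# -fa on         : force flash attention on (newer builds take on/off/auto)
# --no-mmap      : load weights into RAM up front instead of memory-mapping
# Check `llama-server --help` on your build before adding flags not shown here.
```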
t1_o7x0wdb | Thx 😊 I would've thought that gpt-oss-safeguard would win this easily. Nemotron isn't a specialty model, so I get it there | 0 | 0 | 2026-02-28T18:09:21 | LevianMcBirdo | false | null | 0 | o7x0wdb | false | /r/LocalLLaMA/comments/1rh46g2/why_some_still_playing_with_old_models_nostalgia/o7x0wdb/ | false | 0 |
t1_o7x0vtt | I find it incredible that we can now have o3-level models running on commercial GPUs. Long term, the API route is a no-go. No company will choose sharing their secrets over an API when they can do everything locally. | 10 | 0 | 2026-02-28T18:09:17 | dingo_xd | false | null | 0 | o7x0vtt | false | /r/LocalLLaMA/comments/1rh43za/qwen_3535ba3b_is_beyond_expectations_its_replaced/o7x0vtt/ | false | 10 |
t1_o7x0sm2 | Good question, also wanna know that | 1 | 0 | 2026-02-28T18:08:51 | Skystunt | false | null | 0 | o7x0sm2 | false | /r/LocalLLaMA/comments/1rgzfat/how_is_qwen_35_moe_35b_in_instruct_mode_with_no/o7x0sm2/ | false | 1 |
t1_o7x0nlz | Qwen3.5 is far, far smarter than Qwen3. They're not even in the same league. | 13 | 0 | 2026-02-28T18:08:10 | coder543 | false | null | 0 | o7x0nlz | false | /r/LocalLLaMA/comments/1rh9dt3/do_you_find_qwen314bq8_0_15gb_smarter_than/o7x0nlz/ | false | 13 |
t1_o7x0i11 | No problem man. Anytime | 2 | 0 | 2026-02-28T18:07:25 | melanov85 | false | null | 0 | o7x0i11 | false | /r/LocalLLaMA/comments/1rgcosw/trained_and_quantized_an_llm_on_a_gtx_1650_4gb/o7x0i11/ | false | 2 |
t1_o7x0gqf | I love that new nickname | 1 | 0 | 2026-02-28T18:07:14 | essential_labs8 | false | null | 0 | o7x0gqf | false | /r/LocalLLaMA/comments/1rh2lew/openai_pivot_investors_love/o7x0gqf/ | false | 1 |
t1_o7x0gft | This sounds like the pinnacle of self-interested research - next thing you know - oh, the model is most accurate with zero transparency and total access to all your data - catalyzed by the number of dollars you have in your inference account and.... | 1 | 0 | 2026-02-28T18:07:12 | Big_River_ | false | null | 0 | o7x0gft | false | /r/LocalLLaMA/comments/1rh6pru/google_found_that_longer_chain_of_thought/o7x0gft/ | false | 1 |
t1_o7x06zm | For real, I gave it a sequence of frames to summarize what's happening and when I tried to nudge it in the right direction it started "thinking" until it hit the context out limit | 16 | 0 | 2026-02-28T18:05:54 | Pawderr | false | null | 0 | o7x06zm | false | /r/LocalLLaMA/comments/1rh6pru/google_found_that_longer_chain_of_thought/o7x06zm/ | false | 16 |
t1_o7x04ii | This is very weird: I can run all the combinations with qwen3-next-coder, which is a bigger model and stresses my system so much more!
I'm starting to think this is a llama.cpp / qwen3.5 specific bug!
ggml_vulkan: Found 2 Vulkan devices:
ggml_vulkan: 0 = NVIDIA GeForce RTX 5070 Ti Laptop GPU (NVIDIA) | uma: 0 | fp16... | 1 | 0 | 2026-02-28T18:05:34 | Xantrk | false | null | 0 | o7x04ii | false | /r/LocalLLaMA/comments/1rgaw5c/gpu_shared_vram_makes_qwen3535b_prompt_processing/o7x04ii/ | false | 1 |
t1_o7wzt16 | [removed] | 1 | 0 | 2026-02-28T18:03:58 | [deleted] | true | null | 0 | o7wzt16 | false | /r/LocalLLaMA/comments/1rh3xey/seeking_help_improving_ocr_in_my_rag_pipeline/o7wzt16/ | false | 1 |
t1_o7wzrsr | When it's actually a good product; that's the hold-up on anything. Right now it's crap, honestly. If it was like listening to the boys bitch about topics, people would listen. But right now it's not even better than NotebookLM, which you can just do yourself | 2 | 0 | 2026-02-28T18:03:48 | Background-Ad-5398 | false | null | 0 | o7wzrsr | false | /r/LocalLLaMA/comments/1rh4p4n/how_are_you_engaging_with_the_ai_podcast/o7wzrsr/ | false | 2 |
t1_o7wzqaq | Let's take an MoE like Qwen-35B-A3B as an example.
"A" stands for "Active". It's the number of parameters which are used to infer any given token.
Think of it like this: There are 35B (thirty-five billion) parameters in the model. When it is time to infer the next token, the model's gate logic inspects the tokens in ... | 3 | 0 | 2026-02-28T18:03:36 | ttkciar | false | null | 0 | o7wzqaq | false | /r/LocalLLaMA/comments/1rgpwn5/qwen_3527b_punches_waaaaay_above_its_weight_with/o7wzqaq/ | false | 3 |
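To make that gate concrete, a toy sketch of top-k expert routing (sizes and expert count are invented for illustration; real MoE layers do this per token in every MoE block, which is why only the "active" subset of parameters touches each token):

```python
import torch

hidden = 2048
n_experts, top_k = 64, 4           # only top_k of n_experts run per token
experts = [torch.nn.Linear(hidden, hidden) for _ in range(n_experts)]
gate = torch.nn.Linear(hidden, n_experts)

def moe_forward(x):                    # x: (hidden,) activation for one token
    scores = gate(x).softmax(-1)       # gate scores every expert...
    weights, idx = scores.topk(top_k)  # ...but only top_k are selected
    # Only the selected experts' parameters are used for this token; that
    # subset is the "active" parameter count (the A3B in 35B-A3B).
    return sum(w * experts[i](x) for w, i in zip(weights, idx.tolist()))

out = moe_forward(torch.randn(hidden))
```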
t1_o7wzq1d | Would be interesting whether [this model](https://huggingface.co/nvidia/Nemotron-Orchestrator-8B) would work for your orchestrator agent. | 1 | 0 | 2026-02-28T18:03:34 | Mkengine | false | null | 0 | o7wzq1d | false | /r/LocalLLaMA/comments/1rh12xz/qwen3535b_nailed_my_simple_multiagent_workflow/o7wzq1d/ | false | 1 |
t1_o7wzp8x | Always use q4, mostly UD | 2 | 0 | 2026-02-28T18:03:28 | JsThiago5 | false | null | 0 | o7wzp8x | false | /r/LocalLLaMA/comments/1rgoygs/does_qwen35_35b_outperform_qwen3_coder_next_80b/o7wzp8x/ | false | 2 |
t1_o7wzk75 | I hope Frodo announces something new soon again. | 1 | 0 | 2026-02-28T18:02:46 | Cool-Chemical-5629 | false | null | 0 | o7wzk75 | false | /r/LocalLLaMA/comments/1rh3thm/rip_gemma_leave_your_memories_here/o7wzk75/ | false | 1 |
t1_o7wzi35 | They all think. “Thinking” models just show it to you. Granted some of the thinking models do get stuck in loops.
If you don’t want the thinking get a model that doesn’t do it.
Also your UI has to support thinking. You can figure out how to not show it if you just don’t want to see it. | -1 | 0 | 2026-02-28T18:02:29 | Polysulfide-75 | false | null | 0 | o7wzi35 | false | /r/LocalLLaMA/comments/1rgp97u/sooo_much_thinking/o7wzi35/ | false | -1 |
t1_o7wzfun | Just go into the past before reasoning, then there is no reasoning in the models. | 5 | 0 | 2026-02-28T18:02:10 | One-Employment3759 | false | null | 0 | o7wzfun | false | /r/LocalLLaMA/comments/1rh5luv/qwen35_35ba3b_evaded_the_zeroreasoning_budget_by/o7wzfun/ | false | 5 |
t1_o7wzcg0 | golmgirl's loop point is the crux imo. the -0.54 is almost certainly a mix of two different failure modes: models that are just systematically wrong (wrong from token 1, chain is long because they're trying to salvage it) and models that genuinely overthink solvable problems. DTR could actually help distinguish those —... | 8 | 0 | 2026-02-28T18:01:42 | theagentledger | false | null | 0 | o7wzcg0 | false | /r/LocalLLaMA/comments/1rh6pru/google_found_that_longer_chain_of_thought/o7wzcg0/ | false | 8 |
t1_o7wz17u | Which model? | 2 | 0 | 2026-02-28T18:00:10 | csixtay | false | null | 0 | o7wz17u | false | /r/LocalLLaMA/comments/1rgtxry/is_qwen35_a_coding_game_changer_for_anyone_else/o7wz17u/ | false | 2 |
t1_o7wyzaz | Let's just hope that IF and WHEN they decide to release a new Gemma model, they will NOT advertise it using Lmarena's ELO score like last time... 😂 | 1 | 0 | 2026-02-28T17:59:54 | Cool-Chemical-5629 | false | null | 0 | o7wyzaz | false | /r/LocalLLaMA/comments/1rh3thm/rip_gemma_leave_your_memories_here/o7wyzaz/ | false | 1 |
t1_o7wyb8l | C'mon, talking about cover ups, let's talk Epstein files. Both US and China are no angels but one of them is clearly spiraling downwards. | 2 | 0 | 2026-02-28T17:56:40 | false79 | false | null | 0 | o7wyb8l | false | /r/LocalLLaMA/comments/1rguty0/get_your_local_models_in_order_anthropic_just_got/o7wyb8l/ | false | 2 |
t1_o7wy2xq | Procrastinators unite... Tomorrow! | 3 | 0 | 2026-02-28T17:55:33 | Cool-Chemical-5629 | false | null | 0 | o7wy2xq | false | /r/LocalLLaMA/comments/1rh3thm/rip_gemma_leave_your_memories_here/o7wy2xq/ | false | 3 |
t1_o7wy20m | They would love mass surveillance tech and autonomous enforcement in the EU | 3 | 0 | 2026-02-28T17:55:25 | devshore | false | null | 0 | o7wy20m | false | /r/LocalLLaMA/comments/1rgn4ki/president_trump_orders_all_federal_agencies_in/o7wy20m/ | false | 3 |
t1_o7wy223 | I have the Z.ai and Alibaba coding plans for GLM-5 and the large Qwen3.5 variant; for Qwen I also run the medium variant locally using SGLang at FP8 | 1 | 0 | 2026-02-28T17:55:25 | Hoak-em | false | null | 0 | o7wy223 | false | /r/LocalLLaMA/comments/1rgokw1/a_monthly_update_to_my_where_are_openweight/o7wy223/ | false | 1 |
t1_o7wxzv4 | Yeah, I bought a Strix Halo so that I'm not left behind tech-wise. Based on prices, it ultimately was a no-brainer because of European pricing shenanigans.
I wouldn't spend extra for the extra cores for your use case. Buy more fast external storage and the rest goes to hookers and booze. | 1 | 0 | 2026-02-28T17:55:07 | Hector_Rvkp | false | null | 0 | o7wxzv4 | false | /r/LocalLLaMA/comments/1rfvh4c/going_fully_offline_with_ai_for_research_where_do/o7wxzv4/ | false | 1 |
t1_o7wxvde | Well, I don't think I've gone deep into Qwen3.5 yet; for some reason I'm having issues with every model of it, even with good quants. Somehow it gets into an "overthinking" infinite loop | 0 | 0 | 2026-02-28T17:54:31 | BitXorBit | false | null | 0 | o7wxvde | false | /r/LocalLLaMA/comments/1rgtxry/is_qwen35_a_coding_game_changer_for_anyone_else/o7wxvde/ | false | 0 |
t1_o7wxrx2 | That's nice, but which one of the two is the one Trump is referring to? I don't care about drones with AI having decision-making abilities for when to shoot (if we are not going to actively prevent AI, then this is inevitable and other countries will do so themselves). The actual one everyone would care about is the one a... | 0 | 0 | 2026-02-28T17:54:02 | devshore | false | null | 0 | o7wxrx2 | false | /r/LocalLLaMA/comments/1rgn4ki/president_trump_orders_all_federal_agencies_in/o7wxrx2/ | false | 0 |
t1_o7wxqi9 | Thanks! | 1 | 0 | 2026-02-28T17:53:50 | Zestyclose-Shift710 | false | null | 0 | o7wxqi9 | false | /r/LocalLLaMA/comments/1rh12xz/qwen3535b_nailed_my_simple_multiagent_workflow/o7wxqi9/ | false | 1 |
t1_o7wxjui | Ollama library which has q4_K_M. | 3 | 0 | 2026-02-28T17:52:54 | chibop1 | false | null | 0 | o7wxjui | false | /r/LocalLLaMA/comments/1rh12xz/qwen3535b_nailed_my_simple_multiagent_workflow/o7wxjui/ | false | 3 |
t1_o7wxjks | Ya, I gave it a chance, but still, MiniMax M2.5 was better (Mac Studio M3 Ultra 512) | 1 | 0 | 2026-02-28T17:52:52 | BitXorBit | false | null | 0 | o7wxjks | false | /r/LocalLLaMA/comments/1rgtxry/is_qwen35_a_coding_game_changer_for_anyone_else/o7wxjks/ | false | 1 |
t1_o7wxb64 | Please compare against Falcon-E-1B and Falcon-E-3B, and see if the performance lines up with their trend. | 1 | 0 | 2026-02-28T17:51:42 | TomLucidor | false | null | 0 | o7wxb64 | false | /r/LocalLLaMA/comments/1r8c6th/flashlm_v4_43m_ternary_model_trained_on_cpu_in_2/o7wxb64/ | false | 1 |
t1_o7wxax6 | Which version of llama.cpp `llama-server --version`? What do the llama-server logs say? | 1 | 0 | 2026-02-28T17:51:40 | aldegr | false | null | 0 | o7wxax6 | false | /r/LocalLLaMA/comments/1rh65my/native_tool_calling_fails_with_open_webui_llamacpp/o7wxax6/ | false | 1 |
t1_o7wx6rf | False.
OpenAI's Sam Altman made a public statement that they held the same red line as Anthropic.
Now Grok plus the current administration, which feels like if they lose this midterm they'll go to jail, already uses apps to track/surveil US citizens, and makes AI-generated content that carries the maturity of a prepubescent child... | 1 | 0 | 2026-02-28T17:51:05 | Obvious_Service_8209 | false | null | 0 | o7wx6rf | false | /r/LocalLLaMA/comments/1rguty0/get_your_local_models_in_order_anthropic_just_got/o7wx6rf/ | false | 1 |
t1_o7wx6mh | I think there's something not quite working right in the LM Studio Qwen 3.5 implementation right now.
They just fixed another bug with tool calling and Qwen 3.5. I think there are probably some more hiding around. | 1 | 0 | 2026-02-28T17:51:04 | National_Meeting_749 | false | null | 0 | o7wx6mh | false | /r/LocalLLaMA/comments/1rh6455/anybody_able_to_get_qwen3535ba3b_working_with/o7wx6mh/ | false | 1 |
t1_o7wx41i | The API version is normal. It seems like I am the only one whose Qwen3.5 is broken this badly | 1 | 0 | 2026-02-28T17:50:42 | Acrobatic_Donkey5089 | false | null | 0 | o7wx41i | false | /r/LocalLLaMA/comments/1rgl42y/qwen_35_122b_hallucinates_horribly/o7wx41i/ | false | 1 |
t1_o7wx2rk | The performance estimate for an MoE seems to be sqrt(xB * yB), total times active params.
sqrt(122 * 10) = sqrt(1220), around a 35B. sqrt(35 * 3) = sqrt(105), around a 10B.
Formula I got from some other comment here. That poster prolly pulled it out of their ass. | 3 | 1 | 2026-02-28T17:50:32 | Hialgo | false | null | 0 | o7wx2rk | false | /r/LocalLLaMA/comments/1rh43za/qwen_3535ba3b_is_beyond_expectations_its_replaced/o7wx2rk/ | false | 3 |
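Written out, the heuristic is the geometric mean of total and active parameters (a community rule of thumb, not from any paper):

```latex
N_{\text{eff}} \approx \sqrt{N_{\text{total}} \cdot N_{\text{active}}}, \qquad
\sqrt{122 \cdot 10} \approx 34.9\text{B}, \qquad
\sqrt{35 \cdot 3} \approx 10.2\text{B}
```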
t1_o7wx2mp | The gap would be solved if the model gave the same quality response as the big GLM or proprietary models. Does that happen? Nope. Case closed. | 1 | 0 | 2026-02-28T17:50:31 | Cool-Chemical-5629 | false | null | 0 | o7wx2mp | false | /r/LocalLLaMA/comments/1rgkyt5/qwen35_27b_scores_42_on_intelligence_index_and_is/o7wx2mp/ | false | 1 |
t1_o7wx1bo | The point I was trying to make is that Nano Banana is definitely a separate model. | 1 | 0 | 2026-02-28T17:50:20 | typical-predditor | false | null | 0 | o7wx1bo | false | /r/LocalLLaMA/comments/1rh095c/deepseek_v4_will_be_released_next_week_and_will/o7wx1bo/ | false | 1 |
t1_o7wx0vp | What GGUF are you using (if any)? I pulled the supposedly fixed Unsloth dynamic one, and it tries calling tools within thinking blocks, which doesn't work well with Zed
Yours, I assume, works fine in agent mode, hence the question | 2 | 0 | 2026-02-28T17:50:16 | Zestyclose-Shift710 | false | null | 0 | o7wx0vp | false | /r/LocalLLaMA/comments/1rh12xz/qwen3535b_nailed_my_simple_multiagent_workflow/o7wx0vp/ | false | 2 |
t1_o7wwwys | Right – same model on all agents, just different system prompts. The KV-cache transfer only works when both sides share the same weight space. For different models in the same family (e.g. Qwen2.5-7B and 1.5B) there’s a vocabulary-mediated projection path that’s implemented but not benchmarked yet, and for completely d... | 4 | 0 | 2026-02-28T17:49:42 | proggmouse | false | null | 0 | o7wwwys | false | /r/LocalLLaMA/comments/1rh802w/what_if_llm_agents_passed_kvcache_to_each_other/o7wwwys/ | false | 4 |
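A minimal sketch of that same-weights handoff using Hugging Face transformers (checkpoint name and prompts are placeholders; the vocabulary-mediated projection path for different models is not shown, and this is unbenchmarked illustration, not the poster's implementation):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

ckpt = "Qwen/Qwen2.5-7B-Instruct"   # both "agents" share this checkpoint
tok = AutoTokenizer.from_pretrained(ckpt)
model = AutoModelForCausalLM.from_pretrained(ckpt, torch_dtype=torch.bfloat16)

# Agent A prefills the shared context once; its KV cache is the handoff.
ctx = tok("Shared task briefing: ...", return_tensors="pt")
with torch.no_grad():
    cache = model(**ctx, use_cache=True).past_key_values

# Agent B (same weight space) resumes from the cache without re-processing
# the briefing tokens - only its own new tokens are prefilled.
follow = tok(" Agent B: what is step one?", return_tensors="pt",
             add_special_tokens=False)
with torch.no_grad():
    out = model(input_ids=follow.input_ids, past_key_values=cache,
                use_cache=True)
print(out.logits[:, -1].argmax())   # next-token prediction continues from here
```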
t1_o7wwtar | Thank you. Will check in those subs | 1 | 0 | 2026-02-28T17:49:11 | Network-Zealousideal | false | null | 0 | o7wwtar | false | /r/LocalLLaMA/comments/1rh6e38/how_to_make_ai_collaborate_to_get_my_work_done/o7wwtar/ | false | 1 |
t1_o7wwllz | Yet they won't go ahead and help you figure out how to get past the lock screen on your own phone. | 3 | 0 | 2026-02-28T17:48:06 | sudoxreboot | false | null | 0 | o7wwllz | false | /r/LocalLLaMA/comments/1rgn4ki/president_trump_orders_all_federal_agencies_in/o7wwllz/ | false | 3 |
t1_o7wwl2v | I am still evaluating Qwen3.5, but so far its thinking phase length is *extremely* variable, even for the exact same prompt (though it tends to overthink more frequently on harder prompts). Sometimes it thinks a little, sometimes a lot, and sometimes way too much.
I haven't extensively evaluated it with thinking turned... | 2 | 0 | 2026-02-28T17:48:02 | ttkciar | false | null | 0 | o7wwl2v | false | /r/LocalLLaMA/comments/1rh14cs/best_qwen_35_variant_for_2x5060ti16_64_gb_ram/o7wwl2v/ | false | 2 |
t1_o7wwaz8 | Benchmarks don't show big losses in percentage terms. If you are limited on VRAM, maybe it is better to use REAPs than a lower quantization. | 1 | 0 | 2026-02-28T17:46:35 | ppsirius | false | null | 0 | o7wwaz8 | false | /r/LocalLLaMA/comments/1rgtxry/is_qwen35_a_coding_game_changer_for_anyone_else/o7wwaz8/ | false | 1 |
t1_o7ww1eu | I've found lower temperatures tend to help a lot in reducing the thinking. | 1 | 0 | 2026-02-28T17:45:12 | Stepfunction | false | null | 0 | o7ww1eu | false | /r/LocalLLaMA/comments/1rh4p8i/okay_im_overthinking_yes_yes_you_are_qwen_35_27b/o7ww1eu/ | false | 1 |
t1_o7ww0ci | Well, as the other guy said, they were the first AI company to get in bed with the likes of Palantir, who, guess what, deal in mass-surveillance-smelling things | 2 | 0 | 2026-02-28T17:45:03 | PunnyPandora | false | null | 0 | o7ww0ci | false | /r/LocalLLaMA/comments/1rguty0/get_your_local_models_in_order_anthropic_just_got/o7ww0ci/ | false | 2 |
t1_o7ww06p | [removed] | 1 | 0 | 2026-02-28T17:45:02 | [deleted] | true | null | 0 | o7ww06p | false | /r/LocalLLaMA/comments/1rh8iyf/my_ideas_about_protective_ai/o7ww06p/ | false | 1 |
t1_o7wvydc | Qwen2.5-Coder 1.5B is the sweet spot for local FIM completion. Runs on basically anything, latency stays under 200ms even on CPU, and the fill-in-middle quality is surprisingly close to models 10x its size for single-line and short-block completions.
DeepSeek-Coder-V2-Lite works better if you need multi-line completio... | 1 | 0 | 2026-02-28T17:44:46 | tom_mathews | false | null | 0 | o7wvydc | false | /r/LocalLLaMA/comments/1rg7t4n/what_are_your_favorite_code_auto_complete_models/o7wvydc/ | false | 1 |
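For reference, the fill-in-the-middle prompt shape such completions rely on; a sketch using the Qwen2.5-Coder special tokens from its model card (sent as a raw completion request, not a chat request):

```python
# Fill-in-the-middle prompt assembly for Qwen2.5-Coder-style models.
prefix = "def mean(xs):\n    total = "
suffix = "\n    return total / len(xs)"
prompt = f"<|fim_prefix|>{prefix}<|fim_suffix|>{suffix}<|fim_middle|>"
# The model generates the missing middle (here, something like "sum(xs)")
# after the <|fim_middle|> token; the editor splices it between the halves.
```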
t1_o7wvxv7 | Ahh... I guess I was answering more that it generated the files I was looking for, but I'm not trying to say there was zero friction in doing so. | 2 | 0 | 2026-02-28T17:44:42 | TheActualStudy | false | null | 0 | o7wvxv7 | false | /r/LocalLLaMA/comments/1rh6455/anybody_able_to_get_qwen3535ba3b_working_with/o7wvxv7/ | false | 2 |