name: string
body: string
score: int64
controversiality: int64
created: timestamp[us]
author: string
collapsed: bool
edited: timestamp[us]
gilded: int64
id: string
locked: bool
permalink: string
stickied: bool
ups: int64
t1_o7xbd5k
That would be helpful, could you please share the links
1
0
2026-02-28T19:01:28
evnix
false
null
0
o7xbd5k
false
/r/LocalLLaMA/comments/1rgcosw/trained_and_quantized_an_llm_on_a_gtx_1650_4gb/o7xbd5k/
false
1
t1_o7xbce4
It's the 35b version, I have about 28 GB of shared memory and I am using LMStudio. I am maxing out all settings on LM Studio in terms of GPU offloading
1
0
2026-02-28T19:01:21
ChickenShieeeeeet
false
null
0
o7xbce4
false
/r/LocalLLaMA/comments/1rdxfdu/qwen3535ba3b_is_a_gamechanger_for_agentic_coding/o7xbce4/
false
1
t1_o7xb928
It makes sense once you see how much money Kushner has in OpenAI.
5
0
2026-02-28T19:00:52
whiskybottle
false
null
0
o7xb928
false
/r/LocalLLaMA/comments/1rguty0/get_your_local_models_in_order_anthropic_just_got/o7xb928/
false
5
t1_o7xb8jc
You can, there's a bunch of open source tools. Even Claude Code can be used with local models.
1
0
2026-02-28T19:00:48
Djagatahel
false
null
0
o7xb8jc
false
/r/LocalLLaMA/comments/1rgtxry/is_qwen35_a_coding_game_changer_for_anyone_else/o7xb8jc/
false
1
t1_o7xb72f
I suspect that folks with more than 24GB of VRAM (home enthusiasts) but less than 192GB (corporate users) are rare enough that nobody training models deems us a worthwhile audience.
15
0
2026-02-28T19:00:35
ttkciar
false
null
0
o7xb72f
false
/r/LocalLLaMA/comments/1rh9ygz/is_anyone_else_waiting_for_a_6070b_moe_with_810b/o7xb72f/
false
15
t1_o7xb4xe
There it is again, I feel like you're being sincere, but the results I was seeing are full of red flags. It would make sense that models were being downloaded from HF. It was describing all these things it was doing to the models in the process, and they were small. My HD has plenty of space. Why would the progress...
1
0
2026-02-28T19:00:18
SmChocolateBunnies
false
null
0
o7xb4xe
false
/r/LocalLLaMA/comments/1rgkzlo/realtime_speech_to_speech_engine_runs_fully_local/o7xb4xe/
false
1
t1_o7xb3pa
Honestly haven’t thought much about cost tracking for JSON fallback – right now the handshake just picks a mode and goes with it. In practice if you’re falling back to JSON you’re just doing normal text communication, so whatever cost tracking you already have would apply. Not really an AVP-specific problem at that poi...
0
0
2026-02-28T19:00:08
proggmouse
false
null
0
o7xb3pa
false
/r/LocalLLaMA/comments/1rh802w/what_if_llm_agents_passed_kvcache_to_each_other/o7xb3pa/
false
0
t1_o7xb34e
I can do tokens per second by hand. I know fast math.
6
0
2026-02-28T19:00:03
Putrumpador
false
null
0
o7xb34e
false
/r/LocalLLaMA/comments/1rh9u4r/this_sub_is_incredible/o7xb34e/
false
6
t1_o7xb2jd
The 35B A3B IQ4\_XS quant is doing that for me with recommended settings. The 27B Q5\_K\_XL hasn't done that so far (both the updated versions with no relevant MXFP4 layers).
2
0
2026-02-28T18:59:58
Chromix_
false
null
0
o7xb2jd
false
/r/LocalLLaMA/comments/1rhaoty/anyone_noticing_qwen35_27b_getting_stuck_in/o7xb2jd/
false
2
t1_o7xb1p4
How about using a bigger model with a smaller quant? You can have 120A10@Q6 or whatever, effectively being 100A10@Q8
1
0
2026-02-28T18:59:51
uti24
false
null
0
o7xb1p4
false
/r/LocalLLaMA/comments/1rh9ygz/is_anyone_else_waiting_for_a_6070b_moe_with_810b/o7xb1p4/
false
1
t1_o7xayhk
35B or 27B? Also, what's your shared memory? Are you offloading the full model to the gpu? What software are you using for inference?
1
0
2026-02-28T18:59:25
jslominski
false
null
0
o7xayhk
false
/r/LocalLLaMA/comments/1rdxfdu/qwen3535ba3b_is_a_gamechanger_for_agentic_coding/o7xayhk/
false
1
t1_o7xaxro
I must have missed this one thanks will experiment
1
0
2026-02-28T18:59:19
sagiroth
false
null
0
o7xaxro
false
/r/LocalLLaMA/comments/1rh9983/qwen_35b_a3b_aessedai_finetune_on_8gb_vram_and/o7xaxro/
false
1
t1_o7xaxbl
I thought Qwen 3 would flatten Q4 120b Qwen 3.5, but now I thought about it and I have actually no clue. For real, that's a good question.
2
0
2026-02-28T18:59:16
Long_comment_san
false
null
0
o7xaxbl
false
/r/LocalLLaMA/comments/1rh9dt3/do_you_find_qwen314bq8_0_15gb_smarter_than/o7xaxbl/
false
2
t1_o7xaqmi
I'm afraid this is not the end and DDR4 price will continue to rise so you should not sell the old server. As for GPUs - of course sell them, especially the ancient M40's. Try to find a local buyer on Facebook marketplace or /r/homelabsales instead of Ebay, these bitches with 20% seller fees must die.
1
0
2026-02-28T18:58:19
MelodicRecognition7
false
null
0
o7xaqmi
false
/r/LocalLLaMA/comments/1rha4g1/advice_on_hardware_purchase_and_selling_old/o7xaqmi/
false
1
t1_o7xaogp
3.5 coder model ?
1
0
2026-02-28T18:58:00
NoobMLDude
false
null
0
o7xaogp
false
/r/LocalLLaMA/comments/1rgzul5/are_you_ready_for_small_qwens/o7xaogp/
false
1
t1_o7xajxq
Can you guys imagine if they also released a distilled 80-100b version alongside it? Would be in heaven…
1
0
2026-02-28T18:57:23
GrungeWerX
false
null
0
o7xajxq
false
/r/LocalLLaMA/comments/1rh095c/deepseek_v4_will_be_released_next_week_and_will/o7xajxq/
false
1
t1_o7xafjm
Check unsloth for the correct settings. Qwen35 is sensitive on parameters. Temperature set to 0.0 could cause this. https://unsloth.ai/docs/models/qwen3.5
3
0
2026-02-28T18:56:46
Pixer---
false
null
0
o7xafjm
false
/r/LocalLLaMA/comments/1rhaoty/anyone_noticing_qwen35_27b_getting_stuck_in/o7xafjm/
false
3
t1_o7xacsh
More discoveries. While the max concurrency is at 13 for the A30 2507 model, it takes more because max model len is at 36k. It's UNABLE to actually "eat" that much and the system overloads because of the architecture | Model (AWQ-4bit) | Prompts (Batch) | Total time ⏱️ | Output Throughput 🚀 | Total Throughput 🌪️ |...
1
0
2026-02-28T18:56:24
LinkSea8324
false
null
0
o7xacsh
false
/r/LocalLLaMA/comments/1rh8li2/qwen_3_30b_a3b_2507_qwen_35_35b_a3b_benchmarked/o7xacsh/
false
1
t1_o7xa9n1
Joining this sub gave me a very unfair advantage at work. While everyone struggles to figure out why Atlassian MCP wasn’t working, and many didn’t even know how to choose between CLAUDE.md and Skills, I was rocking with running claude code with a local model, being the only one in the office that has the macbook sounds like a...
46
0
2026-02-28T18:55:58
bobaburger
false
null
0
o7xa9n1
false
/r/LocalLLaMA/comments/1rh9u4r/this_sub_is_incredible/o7xa9n1/
false
46
t1_o7xa8jp
nemotron ultra, soon right?
1
0
2026-02-28T18:55:49
loadsamuny
false
null
0
o7xa8jp
false
/r/LocalLLaMA/comments/1rh9ygz/is_anyone_else_waiting_for_a_6070b_moe_with_810b/o7xa8jp/
false
1
t1_o7xa31i
Sounds a bit mechanical: tokens that stabilize early in shallow layers are "filler" (words like "and", "is", "the"); tokens that keep getting revised in deep layers are actual reasoning.
1
0
2026-02-28T18:55:04
michael2v
false
null
0
o7xa31i
false
/r/LocalLLaMA/comments/1rh6pru/google_found_that_longer_chain_of_thought/o7xa31i/
false
1
t1_o7xa0g5
I see the word "cloud" and immediately the answer is no. Haha.
4
0
2026-02-28T18:54:42
valdev
false
null
0
o7xa0g5
false
/r/LocalLLaMA/comments/1rh43za/qwen_3535ba3b_is_beyond_expectations_its_replaced/o7xa0g5/
false
4
t1_o7x9v8p
Really impressive work on AVP. The 47-53% redundant processing you identified is a huge inefficiency that most people probably don't even realize exists in their multi-agent setups. Your benchmarking approach caught my attention - tracking token usage across different models and chain lengths to quantify the savin...
1
0
2026-02-28T18:53:58
eliko613
false
null
0
o7x9v8p
false
/r/LocalLLaMA/comments/1rh802w/what_if_llm_agents_passed_kvcache_to_each_other/o7x9v8p/
false
1
t1_o7x9q1c
I'm moving to anthropic now. Any company on the opposite side from Orange Man is a good company, and I don't even know what this mess is about. Should use it more at work, too.
2
0
2026-02-28T18:53:15
Raukstar
false
null
0
o7x9q1c
false
/r/LocalLLaMA/comments/1rguty0/get_your_local_models_in_order_anthropic_just_got/o7x9q1c/
false
2
t1_o7x9oog
The models are downloading from Hugging Face. Sometimes the CDN assigned to your area or ISP throttles concurrent model downloads, which can slow things down. The 4-digit PIN with recovery keys are the credentials used to trigger the Kill Switch — a feature that lets you erase all your credentials and logs from Bodeg...
1
0
2026-02-28T18:53:03
EmbarrassedAsk2887
false
null
0
o7x9oog
false
/r/LocalLLaMA/comments/1rgkzlo/realtime_speech_to_speech_engine_runs_fully_local/o7x9oog/
false
1
t1_o7x9gh8
In my testing on the 5090 and 3090 setup... Qwen3.5 27B simply didn't run well or solve things quickly, especially for the speed trade off. One of my favorite tests is solving a "solved" crossword, where the LLM has to use vision for a bit of OCR, but then reason its way to understand where blanks are supposed to be. ...
8
0
2026-02-28T18:51:53
valdev
false
null
0
o7x9gh8
false
/r/LocalLLaMA/comments/1rh43za/qwen_3535ba3b_is_beyond_expectations_its_replaced/o7x9gh8/
false
8
t1_o7x9fpr
Yeah, I can confirm that it definitely works really fast, and it's definitely sub-200 milliseconds. It's also insanely accurate and like pretty much comparable to GPT-4o level transcribe.
2
0
2026-02-28T18:51:47
Turbulent-Apple2911
false
null
0
o7x9fpr
false
/r/LocalLLaMA/comments/1qvvcd6/new_voxtralminirealtime_from_mistral_stt_in_under/o7x9fpr/
false
2
t1_o7x9ep8
I think the risk for us Chinese-model users is less that these models are "banned", and more that they are no longer published as open weights - because this is currently driven by the Chinese government as a power play against the US.
1
0
2026-02-28T18:51:38
Least-Platform-7648
false
null
0
o7x9ep8
false
/r/LocalLLaMA/comments/1rguty0/get_your_local_models_in_order_anthropic_just_got/o7x9ep8/
false
1
t1_o7x9b2q
But again… your tests are not using recommendable software or settings.
2
0
2026-02-28T18:51:08
coder543
false
null
0
o7x9b2q
false
/r/LocalLLaMA/comments/1rh9dt3/do_you_find_qwen314bq8_0_15gb_smarter_than/o7x9b2q/
false
2
t1_o7x99fu
The models are downloading from Hugging Face. Sometimes the CDN assigned to your area or ISP throttles concurrent model downloads, which can slow things down. The 4-digit PIN with recovery keys are the credentials used to trigger the Kill Switch — a feature that lets you erase all your credentials and logs from Bodeg...
1
0
2026-02-28T18:50:54
EmbarrassedAsk2887
false
null
0
o7x99fu
false
/r/LocalLLaMA/comments/1rgkzlo/realtime_speech_to_speech_engine_runs_fully_local/o7x99fu/
false
1
t1_o7x95rx
We are also speedrunning model uncensoring with better and better methods, like it once was with Doom or Bad Apple!
11
0
2026-02-28T18:50:23
kabachuha
false
null
0
o7x95rx
false
/r/LocalLLaMA/comments/1rh9u4r/this_sub_is_incredible/o7x95rx/
false
11
t1_o7x9464
Claude is far from being the best model
1
0
2026-02-28T18:50:10
SameSnow8167
false
null
0
o7x9464
false
/r/LocalLLaMA/comments/1rgokw1/a_monthly_update_to_my_where_are_openweight/o7x9464/
false
1
t1_o7x8yr9
Recently [LongCat-Flash-Lite (69B A3B)](https://www.reddit.com/r/LocalLLaMA/comments/1rgkxy3/comment/o7tf967/?utm_source=share&utm_medium=web3x&utm_name=web3xcss&utm_term=1&utm_content=share_button)
1
0
2026-02-28T18:49:25
pmttyji
false
null
0
o7x8yr9
false
/r/LocalLLaMA/comments/1rh9ygz/is_anyone_else_waiting_for_a_6070b_moe_with_810b/o7x8yr9/
false
1
t1_o7x8v0k
All these people stressing about tokens per second, when there are people making tokens per year the old fashioned way. We salute you for keeping tradition alive.
52
0
2026-02-28T18:48:54
Lakius_2401
false
null
0
o7x8v0k
false
/r/LocalLLaMA/comments/1rh9u4r/this_sub_is_incredible/o7x8v0k/
false
52
t1_o7x8u9u
Make sure you're using a proper IQ4 GGUF quant like the ones from bartowski or Unsloth and not bitsandbytes 4bit, which will be considerably worse.
2
0
2026-02-28T18:48:47
Stepfunction
false
null
0
o7x8u9u
false
/r/LocalLLaMA/comments/1rh9dt3/do_you_find_qwen314bq8_0_15gb_smarter_than/o7x8u9u/
false
2
t1_o7x8sh4
+15 social credit
1
0
2026-02-28T18:48:32
MelodicRecognition7
false
null
0
o7x8sh4
false
/r/LocalLLaMA/comments/1rguty0/get_your_local_models_in_order_anthropic_just_got/o7x8sh4/
false
1
t1_o7x8n3j
sure, that's the first new thing, thank you
2
0
2026-02-28T18:47:47
Disastrous_Talk7604
false
null
0
o7x8n3j
false
/r/LocalLLaMA/comments/1rh7mlv/before_i_rewrite_my_stack_again_advice/o7x8n3j/
false
2
t1_o7x8la8
Ah, ok, I probably missed that part somehow. That would make sense and I should try 27B next. Still, my tests would suggest Qwen3.5 35B Q4 MoE does not outperform Qwen3 14B Q8 dense :)
0
0
2026-02-28T18:47:32
donatas_xyz
false
null
0
o7x8la8
false
/r/LocalLLaMA/comments/1rh9dt3/do_you_find_qwen314bq8_0_15gb_smarter_than/o7x8la8/
false
0
t1_o7x8l58
Generally, it's the geometric mean of the full and active parameters, so more like: Effective Size = Sqrt(35 \* 3) \~ 10B
0
0
2026-02-28T18:47:31
Stepfunction
false
null
0
o7x8l58
false
/r/LocalLLaMA/comments/1rh9dt3/do_you_find_qwen314bq8_0_15gb_smarter_than/o7x8l58/
false
0
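The geometric-mean rule of thumb in the comment above can be sketched as a quick calculation. This is a community heuristic for comparing MoE and dense models, not an official metric, and the helper name is made up for illustration:

```python
import math

def effective_dense_size(total_b: float, active_b: float) -> float:
    """Community rule of thumb: a MoE's 'effective' dense size is the
    geometric mean of its total and active parameter counts."""
    return math.sqrt(total_b * active_b)

# 35B total parameters with 3B active per token
print(round(effective_dense_size(35, 3), 1))  # ~10.2 (i.e. roughly a 10B dense model)
```

A degenerate check: a dense model (total == active) maps to its own size, which is what you'd expect from the heuristic.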
t1_o7x8k21
Did you try the dense 27B model?
11
0
2026-02-28T18:47:22
Fault23
false
null
0
o7x8k21
false
/r/LocalLLaMA/comments/1rh9k63/qwen35_35ba3b_replaced_my_2model_agentic_setup_on/o7x8k21/
false
11
t1_o7x8a7e
as you did not provide any examples I assume they do not exist, so I am reporting your comments as off-topic.
1
0
2026-02-28T18:46:01
MelodicRecognition7
false
null
0
o7x8a7e
false
/r/LocalLLaMA/comments/1r6zxy0/kimi_k2_was_spreading_disinformation_and_made_up/o7x8a7e/
false
1
t1_o7x8a91
[removed]
1
0
2026-02-28T18:46:01
[deleted]
true
null
0
o7x8a91
false
/r/LocalLLaMA/comments/1rf9891/anyone_actually_running_multiagent_setups_that/o7x8a91/
false
1
t1_o7x88wi
Better in compatibility, or am I missing something else? My impression is the accuracy difference is negligible and nvfp4 is faster inference. IMO we should definitely support open protocols.
2
0
2026-02-28T18:45:50
sir_creamy
false
null
0
o7x88wi
false
/r/LocalLLaMA/comments/1rgel19/new_qwen3535ba3b_unsloth_dynamic_ggufs_benchmarks/o7x88wi/
false
2
t1_o7x87yz
But 3090 is popular here. I remember someone here stacked 12 3090s to use big/large models :)
7
0
2026-02-28T18:45:43
pmttyji
false
null
0
o7x87yz
false
/r/LocalLLaMA/comments/1rh9u4r/this_sub_is_incredible/o7x87yz/
false
7
t1_o7x87wy
You should stop using Ollama. Switch to llama.cpp, run whatever size model you want, with whatever configuration you want, and it'll run twice as fast as well.
5
0
2026-02-28T18:45:42
suicidaleggroll
false
null
0
o7x87wy
false
/r/LocalLLaMA/comments/1rh9dt3/do_you_find_qwen314bq8_0_15gb_smarter_than/o7x87wy/
false
5
t1_o7x86ln
Yeah good point. So my protocol handles this through the handshake. Before any KV-cache transfer, both agents exchange a model hash (SHA-256 of the sorted model config). If anything differs – quantization, head count, hidden dim, whatever – the handshake detects it and either routes through projection (same family) or ...
3
0
2026-02-28T18:45:31
proggmouse
false
null
0
o7x86ln
false
/r/LocalLLaMA/comments/1rh802w/what_if_llm_agents_passed_kvcache_to_each_other/o7x86ln/
false
3
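The model-hash handshake described above can be sketched as follows. This is a minimal illustration, not the actual AVP implementation; the config field names and JSON canonicalization are assumptions:

```python
import hashlib
import json

def model_hash(config: dict) -> str:
    """Hash a canonicalized (key-sorted) model config so two agents can
    cheaply compare architectures before attempting a KV-cache transfer."""
    canonical = json.dumps(config, sort_keys=True)
    return hashlib.sha256(canonical.encode()).hexdigest()

a = {"hidden_dim": 4096, "num_heads": 32, "quant": "Q4_K_M"}
b = {"hidden_dim": 4096, "num_heads": 32, "quant": "Q8_0"}

# Any difference (here: quantization) yields a different hash, so the
# handshake would route through projection or fall back to plain text.
print(model_hash(a) == model_hash(b))  # False
```

Sorting the keys matters: two agents that built the same config in different insertion orders must still agree on the hash.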
t1_o7x867k
Could you share your n8n workflow and your mcp?
1
0
2026-02-28T18:45:28
AccuratePay2878
false
null
0
o7x867k
false
/r/LocalLLaMA/comments/1rh43za/qwen_3535ba3b_is_beyond_expectations_its_replaced/o7x867k/
false
1
t1_o7x83jr
Great to know this... how is LocalAGI working for you? Good enough? Too many tools nowadays... better to take feedback from someone who is actually using it before spending time... so...
3
0
2026-02-28T18:45:06
ExtremeKangaroo5437
false
null
0
o7x83jr
false
/r/LocalLLaMA/comments/1rh9k63/qwen35_35ba3b_replaced_my_2model_agentic_setup_on/o7x83jr/
false
3
t1_o7x80tb
The 27B is a dense model, the 35B is MoE. A 35B A3B MoE is the size of a 35B dense model, with the intelligence of a ~20B dense model, that runs at the speed of a 3B dense model. If you have the RAM, MoE lets you run much larger and therefore smarter models without sacrificing speed, but when comparing a dense and Mo...
2
0
2026-02-28T18:44:43
suicidaleggroll
false
null
0
o7x80tb
false
/r/LocalLLaMA/comments/1rh9dt3/do_you_find_qwen314bq8_0_15gb_smarter_than/o7x80tb/
false
2
t1_o7x7ydr
Thank you. No I haven't as usually I don't need anything else doing, as I always try choosing Q8+ models that still fit within my system. Q4 is simply the only option here. But I shall investigate it further if that's how it is these days.
2
0
2026-02-28T18:44:23
donatas_xyz
false
null
0
o7x7ydr
false
/r/LocalLLaMA/comments/1rh9dt3/do_you_find_qwen314bq8_0_15gb_smarter_than/o7x7ydr/
false
2
t1_o7x7ucj
no, lol and I'm a guy that praised qwen3:14b here before
3
0
2026-02-28T18:43:49
jax_cooper
false
null
0
o7x7ucj
false
/r/LocalLLaMA/comments/1rh9dt3/do_you_find_qwen314bq8_0_15gb_smarter_than/o7x7ucj/
false
3
t1_o7x7u6o
https://preview.redd.it/…82685698164dc6b3
1
0
2026-02-28T18:43:48
norms_are_practical
false
null
0
o7x7u6o
false
/r/LocalLLaMA/comments/1rh9dt3/do_you_find_qwen314bq8_0_15gb_smarter_than/o7x7u6o/
false
1
t1_o7x7tfp
Qwen3 Next Instruct, Qwen3 Next Thinking, Qwen3 Coder Next Q3_K_XL
5
0
2026-02-28T18:43:42
Juan_Valadez
false
null
0
o7x7tfp
false
/r/LocalLLaMA/comments/1rh9ygz/is_anyone_else_waiting_for_a_6070b_moe_with_810b/o7x7tfp/
false
5
t1_o7x7t3l
What are you using as the agentic framework? Did you build it from the ground up?
7
0
2026-02-28T18:43:39
parabellum630
false
null
0
o7x7t3l
false
/r/LocalLLaMA/comments/1rh9k63/qwen35_35ba3b_replaced_my_2model_agentic_setup_on/o7x7t3l/
false
7
t1_o7x7sti
https://preview.redd.it/…f1383707b7d739fc
1
0
2026-02-28T18:43:37
norms_are_practical
false
null
0
o7x7sti
false
/r/LocalLLaMA/comments/1rh9dt3/do_you_find_qwen314bq8_0_15gb_smarter_than/o7x7sti/
false
1
t1_o7x7r5l
The 27B is quite a good option as well.
1
0
2026-02-28T18:43:23
norms_are_practical
false
null
0
o7x7r5l
false
/r/LocalLLaMA/comments/1rh9dt3/do_you_find_qwen314bq8_0_15gb_smarter_than/o7x7r5l/
false
1
t1_o7x7nzi
This is Trump, it’s still a bluff. He makes a lot of noise, issues orders, then shits himself and backs out. 6 months is a long time before any actual consequences. Long time for Trump to backpedal. It’d be a real shame if the war dept suddenly couldn’t use any tech because it’s all blacklisted because they have ...
1
0
2026-02-28T18:42:57
huzbum
false
null
0
o7x7nzi
false
/r/LocalLLaMA/comments/1rguty0/get_your_local_models_in_order_anthropic_just_got/o7x7nzi/
false
1
t1_o7x7nkg
Thank you. Yeah, there are lots of gaps, but I wanted to start a thread for people with a similar setup. Thanks for sharing; I'll use your settings for my machine and try the Q3 as well.
1
0
2026-02-28T18:42:54
sagiroth
false
null
0
o7x7nkg
false
/r/LocalLLaMA/comments/1rh9983/qwen_35b_a3b_aessedai_finetune_on_8gb_vram_and/o7x7nkg/
false
1
t1_o7x7lys
Did you check this recent thread? Filled with many experiments & comparisons using llama.cpp command. [Follow-up: Qwen3.5-35B-A3B — 7 community-requested experiments on RTX 5080 16GB](https://www.reddit.com/r/LocalLLaMA/comments/1rg4zqv/followup_qwen3535ba3b_7_communityrequested/?utm_source=share&utm_medium=web3x&utm...
1
0
2026-02-28T18:42:40
pmttyji
false
null
0
o7x7lys
false
/r/LocalLLaMA/comments/1rh9983/qwen_35b_a3b_aessedai_finetune_on_8gb_vram_and/o7x7lys/
false
1
t1_o7x7jdv
u/danielhanchen a bit of a weird one, but there is me and some other people on github issues for llama.cpp that are having segmentation faults / memory read errors on some quants. Not just unsloth ones, but AesSedai as well. Interestingly, Qwen3.5-35B-A3B-UD-Q5_K_XL.gguf appears not affected while Qwen3.5-35B-A3B-UD...
2
0
2026-02-28T18:42:19
Xantrk
false
null
0
o7x7jdv
false
/r/LocalLLaMA/comments/1rgel19/new_qwen3535ba3b_unsloth_dynamic_ggufs_benchmarks/o7x7jdv/
false
2
t1_o7x7g2u
Bro, I got similar hardware specs. Can you guide me on how to set it up on Windows? I am a newbie... can you give step-by-step instructions?
0
0
2026-02-28T18:41:51
DockyardTechlabs
false
null
0
o7x7g2u
false
/r/LocalLLaMA/comments/1rgzul5/are_you_ready_for_small_qwens/o7x7g2u/
false
0
t1_o7x7efd
It's a sweet spot for anyone who wants to avoid multi GPU setups but has money to buy a datacenter GPU. For the same reason it would also be a good choice for experimentation and research since there are no gpu communication issues and inefficiency
1
0
2026-02-28T18:41:37
IonizedRay
false
null
0
o7x7efd
false
/r/LocalLLaMA/comments/1rh9ygz/is_anyone_else_waiting_for_a_6070b_moe_with_810b/o7x7efd/
false
1
t1_o7x7cut
I tend to agree. I've been lurking anonymously on this sub for a couple years but yesterday I decided to bite the bullet and register an account, just so I can comment on other people's awesome posts.
18
0
2026-02-28T18:41:24
OsmanthusBloom
false
null
0
o7x7cut
false
/r/LocalLLaMA/comments/1rh9u4r/this_sub_is_incredible/o7x7cut/
false
18
t1_o7x7bk3
Thank you. Hmm... Perhaps I should wait a week or two to see if 35B gets fixed, if that's the case.
1
0
2026-02-28T18:41:13
donatas_xyz
false
null
0
o7x7bk3
false
/r/LocalLLaMA/comments/1rh9dt3/do_you_find_qwen314bq8_0_15gb_smarter_than/o7x7bk3/
false
1
t1_o7x7b0c
Which gguf specifically? As in from whom
3
0
2026-02-28T18:41:08
Zestyclose-Shift710
false
null
0
o7x7b0c
false
/r/LocalLLaMA/comments/1rh9k63/qwen35_35ba3b_replaced_my_2model_agentic_setup_on/o7x7b0c/
false
3
t1_o7x765m
Is a 3060 too expensive an upgrade? Because it has 12GB
1
0
2026-02-28T18:40:27
jacek2023
false
null
0
o7x765m
false
/r/LocalLLaMA/comments/1rgzul5/are_you_ready_for_small_qwens/o7x765m/
false
1
t1_o7x748s
3090? I'm using pen and paper to calculate those matrices.
166
0
2026-02-28T18:40:11
Hector_Rvkp
false
null
0
o7x748s
false
/r/LocalLLaMA/comments/1rh9u4r/this_sub_is_incredible/o7x748s/
false
166
t1_o7x747m
Thank you. Hmm... Perhaps I should wait a week or two to see if 35B gets fixed, if that's the case.
1
0
2026-02-28T18:40:11
donatas_xyz
false
null
0
o7x747m
false
/r/LocalLLaMA/comments/1rh9dt3/do_you_find_qwen314bq8_0_15gb_smarter_than/o7x747m/
false
1
t1_o7x745a
No problem, glad it helped!
2
0
2026-02-28T18:40:10
Zestyclose-Shift710
false
null
0
o7x745a
false
/r/LocalLLaMA/comments/1rgzul5/are_you_ready_for_small_qwens/o7x745a/
false
2
t1_o7x71sx
Yes, on the Qwen huggingface pages, they recommend certain settings, and that’s what I use.
1
0
2026-02-28T18:39:51
coder543
false
null
0
o7x71sx
false
/r/LocalLLaMA/comments/1rh9dt3/do_you_find_qwen314bq8_0_15gb_smarter_than/o7x71sx/
false
1
t1_o7x710d
good test for bots, they upvote anything with qwen as long as it's not negative ;)
7
0
2026-02-28T18:39:45
jacek2023
false
null
0
o7x710d
false
/r/LocalLLaMA/comments/1rgzul5/are_you_ready_for_small_qwens/o7x710d/
false
7
t1_o7x6vv1
Isn't mxfp4 supposed to be super optimized for the hardware? I'm sure the XL is better in absolute terms, but how is it obvious that the precision gain is worth the speed loss?
5
0
2026-02-28T18:39:01
Hector_Rvkp
false
null
0
o7x6vv1
false
/r/LocalLLaMA/comments/1rh43za/qwen_3535ba3b_is_beyond_expectations_its_replaced/o7x6vv1/
false
5
t1_o7x6voo
Have you looked at Qwen’s official benchmarks? That would answer your questions quite definitively. It is a 27B model. Not 27B A3B. The 35B A3B model only uses 3B parameters per token. The 27B model uses 27B parameters per token.
6
0
2026-02-28T18:39:00
coder543
false
null
0
o7x6voo
false
/r/LocalLLaMA/comments/1rh9dt3/do_you_find_qwen314bq8_0_15gb_smarter_than/o7x6voo/
false
6
t1_o7x6tkr
I have a much weaker machine - a 2021 model Asus ROG Zephyrus gaming laptop with a RTX 3060 Laptop GPU (6GB VRAM) and 24GB RAM, 8 core AMD Ryzen 7 5800HS. I was able to achieve higher PP and TG speeds than you got, but I'm using a much smaller quant (UD-Q3\_K\_M from Unsloth, from yesterday, with the recent quantizatio...
2
0
2026-02-28T18:38:42
OsmanthusBloom
false
null
0
o7x6tkr
false
/r/LocalLLaMA/comments/1rh9983/qwen_35b_a3b_aessedai_finetune_on_8gb_vram_and/o7x6tkr/
false
2
t1_o7x6pdi
:D Kind of. They import a few from abroad, that's why the delay. Well, I'm from another part of the world.
1
0
2026-02-28T18:38:07
pmttyji
false
null
0
o7x6pdi
false
/r/LocalLLaMA/comments/1rghfqj/february_is_almost_over_are_you_satisfied/o7x6pdi/
false
1
t1_o7x6n24
I only have one 3090, but it already can do SO MUCH. I can’t wait to get more of them lol, now I just need to find them for cheap 😭
5
0
2026-02-28T18:37:48
Borkato
false
null
0
o7x6n24
false
/r/LocalLLaMA/comments/1rh9u4r/this_sub_is_incredible/o7x6n24/
false
5
t1_o7x6lwp
Get rid of -ngl and do -ncmoe 48. Your prefill rate won’t be great, but you should be able to achieve close to ~30 tps
1
0
2026-02-28T18:37:38
DisgracedPhysicist
false
null
0
o7x6lwp
false
/r/LocalLLaMA/comments/1rgzul5/are_you_ready_for_small_qwens/o7x6lwp/
false
1
t1_o7x6j2v
I mean, what are you expecting from it? Is it a sweet spot for your hardware? Otherwise, 70BA10 will be as smart as a 25B model and as fast as a 10B; at that point you can just run something like Mistral Small 25B
-5
0
2026-02-28T18:37:15
uti24
false
null
0
o7x6j2v
false
/r/LocalLLaMA/comments/1rh9ygz/is_anyone_else_waiting_for_a_6070b_moe_with_810b/o7x6j2v/
false
-5
t1_o7x6hwe
Yes, a new 70B dense model like llama 3.3 would be amazing for anyone who has a GPU that is quite fast and has 65+GB of VRAM, I bet that it could come close to 200B+ params MoE models
3
0
2026-02-28T18:37:05
IonizedRay
false
null
0
o7x6hwe
false
/r/LocalLLaMA/comments/1rh9ygz/is_anyone_else_waiting_for_a_6070b_moe_with_810b/o7x6hwe/
false
3
t1_o7x6hna
Both 27B and 35B are only available in Q4, 27B being much smaller, of course. So I can't see how it can be better than 35B?
-3
0
2026-02-28T18:37:03
donatas_xyz
false
null
0
o7x6hna
false
/r/LocalLLaMA/comments/1rh9dt3/do_you_find_qwen314bq8_0_15gb_smarter_than/o7x6hna/
false
-3
t1_o7x6e9x
You know this if you've ever used Gemini 3/3.1 Pro for programming beyond one-shots
0
0
2026-02-28T18:36:34
themixtergames
false
null
0
o7x6e9x
false
/r/LocalLLaMA/comments/1rh6pru/google_found_that_longer_chain_of_thought/o7x6e9x/
false
0
t1_o7x6e1m
Thanks for doing this. Today I evalled a Heretic model and compared it to vanilla. AHA 2026 scores for Qwen3.5 27B: normal 50%, Heretic abliteration 55%. It is interesting that when you reduce censorship, it ends up getting more aligned. (My alignment is probably the inverse of the industry safety alignments.) My que...
1
0
2026-02-28T18:36:33
de4dee
false
null
0
o7x6e1m
false
/r/LocalLLaMA/comments/1r4n3as/heretic_12_released_70_lower_vram_usage_with/o7x6e1m/
false
1
t1_o7x6dc4
what's so special about 64GB VRAM? if you don't see models for that setup then why this setup is so good?
3
0
2026-02-28T18:36:26
jacek2023
false
null
0
o7x6dc4
false
/r/LocalLLaMA/comments/1rh9ygz/is_anyone_else_waiting_for_a_6070b_moe_with_810b/o7x6dc4/
false
3
t1_o7x6cug
> I'd go straight for the mxlp4 though, and only reconsider if it disappoints. If you mean MXFP4 then don't. Use Q4_XL. That's better.
7
0
2026-02-28T18:36:22
fallingdowndizzyvr
false
null
0
o7x6cug
false
/r/LocalLLaMA/comments/1rh43za/qwen_3535ba3b_is_beyond_expectations_its_replaced/o7x6cug/
false
7
t1_o7x6ab1
You could have been using a “bad quantization” of the 35B model. The Qwen3.5 MoE structure should be quite stable in quantizations, but there was initially a little controversy about lacking performance from some of the initial quants of Qwen3.5. Also try out the Q3_K_XL or even the Q2_K_XL - it sounds weird because pe...
1
0
2026-02-28T18:36:01
norms_are_practical
false
null
0
o7x6ab1
false
/r/LocalLLaMA/comments/1rh9dt3/do_you_find_qwen314bq8_0_15gb_smarter_than/o7x6ab1/
false
1
t1_o7x686w
Newer data for specific purposes. Which can degrade performance in others.
2
0
2026-02-28T18:35:43
toothpastespiders
false
null
0
o7x686w
false
/r/LocalLLaMA/comments/1rh46g2/why_some_still_playing_with_old_models_nostalgia/o7x686w/
false
2
t1_o7x6674
Or just put "--reasoning-budget 0" on the command line.
5
0
2026-02-28T18:35:27
fallingdowndizzyvr
false
null
0
o7x6674
false
/r/LocalLLaMA/comments/1rh43za/qwen_3535ba3b_is_beyond_expectations_its_replaced/o7x6674/
false
5
t1_o7x65lx
Agree, Q4 @ 15-16GB
1
0
2026-02-28T18:35:22
pmttyji
false
null
0
o7x65lx
false
/r/LocalLLaMA/comments/1rh9dt3/do_you_find_qwen314bq8_0_15gb_smarter_than/o7x65lx/
false
1
t1_o7x65fz
4 3090s for life! Or until I can get 4 6000s/become rich.
13
0
2026-02-28T18:35:21
klenen
false
null
0
o7x65fz
false
/r/LocalLLaMA/comments/1rh9u4r/this_sub_is_incredible/o7x65fz/
false
13
t1_o7x62nz
What settings are you using particularly? Just Qwen recommendations ?
1
0
2026-02-28T18:34:57
masseus
false
null
0
o7x62nz
false
/r/LocalLLaMA/comments/1rh9dt3/do_you_find_qwen314bq8_0_15gb_smarter_than/o7x62nz/
false
1
t1_o7x5xiq
So, your signup does include Google, but also Email signup as an option. Username(email), password. After installation, it asked me to make an account, using Email or sign in with Google. I chose email. I logged in, and it began the (multiple) download and update sequences. When it was done, it had hit an error wi...
1
0
2026-02-28T18:34:14
SmChocolateBunnies
false
null
0
o7x5xiq
false
/r/LocalLLaMA/comments/1rgkzlo/realtime_speech_to_speech_engine_runs_fully_local/o7x5xiq/
false
1
t1_o7x5x39
I like GLM-5 better than Codex, it follows the GSD harness better, but I use Codex for quick fixes that don't benefit from a harness, where it performs better
1
0
2026-02-28T18:34:11
Hoak-em
false
null
0
o7x5x39
false
/r/LocalLLaMA/comments/1rgokw1/a_monthly_update_to_my_where_are_openweight/o7x5x39/
false
1
t1_o7x5vpu
[removed]
1
0
2026-02-28T18:34:00
[deleted]
true
null
0
o7x5vpu
false
/r/LocalLLaMA/comments/1rdew81/finally_got_openclaw_working_on_windows_after_way/o7x5vpu/
false
1
t1_o7x5vj1
"new rig" is on the ship or what ;)
1
0
2026-02-28T18:33:58
jacek2023
false
null
0
o7x5vj1
false
/r/LocalLLaMA/comments/1rghfqj/february_is_almost_over_are_you_satisfied/o7x5vj1/
false
1
t1_o7x5uev
I just want a modern 70b dense model for long context prompt comprehension. What is the point of creating extensive lorebooks for my setting if the model is just going to ignore half of it anyway? Which is something MoEs are more vulnerable to as a lot of the information it has to keep track of wasn't ever in its train...
4
0
2026-02-28T18:33:49
Equivalent-Freedom92
false
null
0
o7x5uev
false
/r/LocalLLaMA/comments/1rh9ygz/is_anyone_else_waiting_for_a_6070b_moe_with_810b/o7x5uev/
false
4
t1_o7x5qgj
That might be the biggest blob of text I've ever seen on reddit in my 18+ years of being on reddit. Congrats, I guess.
2
0
2026-02-28T18:33:15
JMowery
false
null
0
o7x5qgj
false
/r/LocalLLaMA/comments/1rh9lll/i_want_to_build_an_opensource_ai_senate_a/o7x5qgj/
false
2
t1_o7x5of2
I bet Nvidia really regrets making those! 16gb? Or 32gb?
7
0
2026-02-28T18:32:59
cmdr-William-Riker
false
null
0
o7x5of2
false
/r/LocalLLaMA/comments/1rh9u4r/this_sub_is_incredible/o7x5of2/
false
7
t1_o7x5nrm
Whatever the DOD (only Congress can authorize a name change) insists on isn't relevant to us, unless we're in the military. Anthropic has always been anticompetitive as hell, advocating for the restriction of open models in the name of "safety". Is what the DOD doing a good thing? No, it's horrible. But I'm not going...
1
0
2026-02-28T18:32:53
ThatRandomJew7
false
null
0
o7x5nrm
false
/r/LocalLLaMA/comments/1rguty0/get_your_local_models_in_order_anthropic_just_got/o7x5nrm/
false
1
t1_o7x5ad6
There are times when you want thinking and times when you don't. I don't want the model to "but wait" 30 times before it tells me what to make for dinner. Likewise, I want the model to get a coding question right on the first try and don't mind waiting a bit for the correct answer rather than regenerating 20 times. With t...
12
0
2026-02-28T18:31:01
lans_throwaway
false
null
0
o7x5ad6
false
/r/LocalLLaMA/comments/1rh43za/qwen_3535ba3b_is_beyond_expectations_its_replaced/o7x5ad6/
false
12
t1_o7x57g2
My fav is Scam Faultman
2
0
2026-02-28T18:30:37
escept1co
false
null
0
o7x57g2
false
/r/LocalLLaMA/comments/1rh2lew/openai_pivot_investors_love/o7x57g2/
false
2