name
string
body
string
score
int64
controversiality
int64
created
timestamp[us]
author
string
collapsed
bool
edited
timestamp[us]
gilded
int64
id
string
locked
bool
permalink
string
stickied
bool
ups
int64
t1_o7v3goa
For real-time calls, the tough part is not selecting the model; it's making the conversation feel smooth, handling interruptions, and making sure the actions will actually work. You also want the flexibility to switch speech tools without rebuilding everything. We open-sourced Unpod as a modular voice AI orchestration layer th...
1
0
2026-02-28T11:25:29
Missbutterscotchh
false
null
0
o7v3goa
false
/r/LocalLLaMA/comments/1plz1gb/best_solution_for_building_a_realtime/o7v3goa/
false
1
t1_o7v3g7c
PCIe is never a bottleneck for inference, though it may not be enough for training.
1
0
2026-02-28T11:25:21
exceptioncause
false
null
0
o7v3g7c
false
/r/LocalLLaMA/comments/1rgtxry/is_qwen35_a_coding_game_changer_for_anyone_else/o7v3g7c/
false
1
t1_o7v3deg
But 35B-A3B is just so much worse than the 27B dense model. 122B-A10B still works acceptably for me on a 3090 with 96GB DDR5 (64 should be fine as well): 22T/s TG and 500T/s PP. It's just all of the thinking that these models do that makes them really slow...
3
0
2026-02-28T11:24:39
chris_0611
false
null
0
o7v3deg
false
/r/LocalLLaMA/comments/1rgtxry/is_qwen35_a_coding_game_changer_for_anyone_else/o7v3deg/
false
3
t1_o7v3aed
I build apps; qwen3:0.6b is my current 'app component' LLM, and for the size it's remarkably good! So I'm really looking forward to the TINY qwen3.5s. I can see 7b being a killer 'planner', etc.
1
0
2026-02-28T11:23:54
scottgal2
false
null
0
o7v3aed
false
/r/LocalLLaMA/comments/1rgvma8/which_size_of_qwen35_are_you_planning_to_run/o7v3aed/
false
1
t1_o7v3a67
Unfortunately, it is not supported for the qwen3-next/qwen3.5 architecture and in llama.cpp this parameter is simply not used, even if specified.
1
0
2026-02-28T11:23:50
ThetaMeson
false
null
0
o7v3a67
false
/r/LocalLLaMA/comments/1rgp2nu/anyone_doing_speculative_decoding_with_the_new/o7v3a67/
false
1
t1_o7v38mv
But you can download a compiled llama.cpp build, and it has worked with Qwen for many days now.
45
0
2026-02-28T11:23:27
jacek2023
false
null
0
o7v38mv
false
/r/LocalLLaMA/comments/1rgzul5/are_you_ready_for_small_qwens/o7v38mv/
false
45
t1_o7v368o
~/llama-server \
  -m ./Qwen3.5-27B-UD-Q4_K_XL.gguf \
  --mmproj ./mmproj-F16.gguf \
  --n-gpu-layers 99 \
  --threads 16 \
  -c 90000 -fa 1 \
  --temp 1.0 --top-p 0.95 --top-k 20 --min-p 0.00 \
  --reasoning-budget -1 \
  --presence-penalty 1.5 --repeat-penalty 1.0 ...
4
0
2026-02-28T11:22:50
chris_0611
false
null
0
o7v368o
false
/r/LocalLLaMA/comments/1rgtxry/is_qwen35_a_coding_game_changer_for_anyone_else/o7v368o/
false
4
t1_o7v34mt
They probably called their bluff and did not expect it to really happen.
0
1
2026-02-28T11:22:26
shroddy
false
null
0
o7v34mt
false
/r/LocalLLaMA/comments/1rguty0/get_your_local_models_in_order_anthropic_just_got/o7v34mt/
false
0
t1_o7v31ui
Don't the available llama.cpp releases work for you? They usually have some compiled binaries that you can download.
5
0
2026-02-28T11:21:44
fredconex
false
null
0
o7v31ui
false
/r/LocalLLaMA/comments/1rgzul5/are_you_ready_for_small_qwens/o7v31ui/
false
5
t1_o7v30t4
It could be just quants for the 4 already released big models.
10
0
2026-02-28T11:21:29
hum_ma
false
null
0
o7v30t4
false
/r/LocalLLaMA/comments/1rgzul5/are_you_ready_for_small_qwens/o7v30t4/
false
10
t1_o7v2xnr
I read that it was advised... Did you test it?
1
0
2026-02-28T11:20:43
Uranday
false
null
0
o7v2xnr
false
/r/LocalLLaMA/comments/1rglgma/qwen_35_llamacpp_turn_of_reasoning_and_performance/o7v2xnr/
false
1
t1_o7v2xdf
Yes, but benchmarks already put the 27b and 122b a10b close to each other. The 27b is dense, which makes it much smarter for its size.
2
0
2026-02-28T11:20:39
eribob
false
null
0
o7v2xdf
false
/r/LocalLLaMA/comments/1rgvma8/which_size_of_qwen35_are_you_planning_to_run/o7v2xdf/
false
2
t1_o7v2vtl
Use the recommended inference parameters. It'll overthink unless you set `presence_penalty` to something high.
4
0
2026-02-28T11:20:15
Thunderstarer
false
null
0
o7v2vtl
false
/r/LocalLLaMA/comments/1rgzfat/how_is_qwen_35_moe_35b_in_instruct_mode_with_no/o7v2vtl/
false
4
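A minimal sketch of the advice above, assuming an OpenAI-compatible llama-server endpoint at localhost:8080. The model id and the other sampler values are illustrative assumptions; the comment only says to raise `presence_penalty`.

```python
import requests

# Hedged sketch: pass a high presence_penalty to an OpenAI-compatible
# llama-server endpoint to discourage repetitive overthinking.
# URL, model id, and other sampler values are illustrative assumptions.
resp = requests.post(
    "http://localhost:8080/v1/chat/completions",
    json={
        "model": "qwen3.5-35b-a3b",  # hypothetical model id
        "messages": [{"role": "user", "content": "Explain this function briefly."}],
        "temperature": 1.0,
        "top_p": 0.95,
        "presence_penalty": 1.5,  # "something high", per the comment above
    },
    timeout=120,
)
print(resp.json()["choices"][0]["message"]["content"])
```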
t1_o7v2uwk
My old $3 z.ai plan is still better, and so is $100/200 Claude. I tested Qwen 3.5 inside Qwen CLI. After a few more enshittifications of cloud LLMs, I can see a model like Qwen 5 being really good locally in 1-2 years.
1
0
2026-02-28T11:20:02
evia89
false
null
0
o7v2uwk
false
/r/LocalLLaMA/comments/1rgtxry/is_qwen35_a_coding_game_changer_for_anyone_else/o7v2uwk/
false
1
t1_o7v2qjf
I'll definitely be running the 35B at Q4_K_M, and I'm still figuring out if there are prompts where the 122B at IQ2_XXS performs better than the 35B. I have a 3-year-old laptop with 12GB VRAM and 32GB RAM.
1
0
2026-02-28T11:18:57
LMLocalizer
false
null
0
o7v2qjf
false
/r/LocalLLaMA/comments/1rgvma8/which_size_of_qwen35_are_you_planning_to_run/o7v2qjf/
false
1
t1_o7v2mki
Really enjoyed this exchange; you're clearly approaching this the right way and doing your homework before spending serious money. Just want to leave you with one last thought from my experience. At the $5k range on the desktop side, a Mac Studio with M3 Ultra gets you around 96GB unified memory, which is solid. But for...
2
0
2026-02-28T11:17:57
melanov85
false
null
0
o7v2mki
false
/r/LocalLLaMA/comments/1rfvh4c/going_fully_offline_with_ai_for_research_where_do/o7v2mki/
false
2
t1_o7v2kvb
I honestly don't know! I unfortunately never had the opportunity to test such large models. My experience is limited to models around 24b, and ministral3:24 gave me the best results on legal text analysis.
1
0
2026-02-28T11:17:32
Personal-Gur-1
false
null
0
o7v2kvb
false
/r/LocalLLaMA/comments/1rfg3kx/american_closed_models_vs_chinese_open_models_is/o7v2kvb/
false
1
t1_o7v2ifw
Thanks! The MAB was a natural fit: each (task_type, topic) pair has a different optimal retrieval strategy, and Thompson Sampling converges faster than epsilon-greedy with our sample sizes. **On ablations:** just finished running the baselines 10 minutes ago. Here's where it gets interesting: |Config|Overall Score|A...
1
0
2026-02-28T11:16:56
ikchain
false
null
0
o7v2ifw
false
/r/LocalLLaMA/comments/1r8jgwv/i_built_a_local_ai_dev_assistant_with_hybrid_rag/o7v2ifw/
false
1
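For readers unfamiliar with the setup described above, here is a minimal sketch of Beta-Bernoulli Thompson Sampling over retrieval strategies keyed by (task_type, topic). The strategy names and the binary reward signal are assumptions for illustration, not the poster's actual code.

```python
import random
from collections import defaultdict

STRATEGIES = ["bm25", "dense", "hybrid"]  # illustrative strategy names

# Beta(1, 1) prior per (context, strategy) arm: [successes, failures]
arms = defaultdict(lambda: {s: [1, 1] for s in STRATEGIES})

def pick_strategy(task_type, topic):
    ctx = arms[(task_type, topic)]
    # Sample from each arm's Beta posterior and pick the best draw.
    return max(STRATEGIES, key=lambda s: random.betavariate(*ctx[s]))

def update(task_type, topic, strategy, success):
    # Binary reward: did this retrieval strategy lead to a good answer?
    arm = arms[(task_type, topic)][strategy]
    arm[0 if success else 1] += 1

# Toy usage: one bandit round.
s = pick_strategy("code_search", "retrieval")
update("code_search", "retrieval", s, success=True)
```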
t1_o7v2gyl
Do you notice performance improvements?
1
0
2026-02-28T11:16:34
SoMuchLasagna
false
null
0
o7v2gyl
false
/r/LocalLLaMA/comments/1rgokw1/a_monthly_update_to_my_where_are_openweight/o7v2gyl/
false
1
t1_o7v2fk8
Those do tend to get expensive; at least I'm seeing the Orange Pi Ultra at $320. At some point it's less hassle to just buy a Jetson and at least get CUDA, so the non-mainstream projects have a chance of running.
1
0
2026-02-28T11:16:12
MoffKalast
false
null
0
o7v2fk8
false
/r/LocalLLaMA/comments/1rg87bj/qwen3535ba3b_running_on_a_raspberry_pi_5_16gb_and/o7v2fk8/
false
1
t1_o7v2bf9
I've switched from installing Ubuntu on my PCs to Kubuntu; there is no major difference. But you need to know that while undervolting is trivial in Windows (with MSI Afterburner), it is significantly more complex in Linux. Power limiting is simple, but not undervolting.
4
0
2026-02-28T11:15:09
Prudent-Ad4509
false
null
0
o7v2bf9
false
/r/LocalLLaMA/comments/1rgyd8p/switching_from_windows_to_linux_what_distro_to/o7v2bf9/
false
4
t1_o7v29ff
Look what I found!
5
0
2026-02-28T11:14:39
AppealThink1733
false
null
0
o7v29ff
false
/r/LocalLLaMA/comments/1rguty0/get_your_local_models_in_order_anthropic_just_got/o7v29ff/
false
5
t1_o7v298x
Any links for a known-good template for llama.cpp?
1
0
2026-02-28T11:14:36
ANTONBORODA
false
null
0
o7v298x
false
/r/LocalLLaMA/comments/1rg05k7/qwen_35_122b_a10b_3584_score_on_natint_ugi/o7v298x/
false
1
t1_o7v28xg
[deleted]
1
0
2026-02-28T11:14:31
[deleted]
true
null
0
o7v28xg
false
/r/LocalLLaMA/comments/1rgzul5/are_you_ready_for_small_qwens/o7v28xg/
false
1
t1_o7v28tf
Yeah, I would bet so.
1
0
2026-02-28T11:14:30
Space__Whiskey
false
null
0
o7v28tf
false
/r/LocalLLaMA/comments/1rgpwn5/qwen_3527b_punches_waaaaay_above_its_weight_with/o7v28tf/
false
1
t1_o7v28ge
I’m building a system that does most of this, but they are specific bots that confine the systems. I’m building separate bots that work together to do parts of the job while the system works. I have a few simple user surfaces where most of the activity will be via terminal interface. One of the bots is a context bot...
1
0
2026-02-28T11:14:24
Jazzlike_Syllabub_91
false
null
0
o7v28ge
false
/r/LocalLLaMA/comments/1rf9891/anyone_actually_running_multiagent_setups_that/o7v28ge/
false
1
t1_o7v22t1
32GB RAM. It seems Qwen3.5's new hybrid attention architecture reduces KV cache usage.
1
0
2026-02-28T11:13:00
Conscious_Chef_3233
false
null
0
o7v22t1
false
/r/LocalLLaMA/comments/1rgvma8/which_size_of_qwen35_are_you_planning_to_run/o7v22t1/
false
1
t1_o7v21vr
On one hand, good on Anthropic for standing up to Hegseth; I applaud them for saying no, taking the hit, and sticking to their principles. Imagine the Trump admin getting their hands on AGI or ASI - what a dystopia that would end up being. On the other hand, the irony is palpable - Anthropic, the most hawkish and "muricah!...
8
0
2026-02-28T11:12:45
GreenGreasyGreasels
false
null
0
o7v21vr
false
/r/LocalLLaMA/comments/1rguty0/get_your_local_models_in_order_anthropic_just_got/o7v21vr/
false
8
t1_o7v2153
Why did they get upset with Anthropic?
1
0
2026-02-28T11:12:34
iamevpo
false
null
0
o7v2153
false
/r/LocalLLaMA/comments/1rguty0/get_your_local_models_in_order_anthropic_just_got/o7v2153/
false
1
t1_o7v1zgh
Llama 3.3 70b doesn't do that; neither do Nemo, Mistral Small 3.2, Gemma 3, etc.
3
0
2026-02-28T11:12:09
-Ellary-
false
null
0
o7v1zgh
false
/r/LocalLLaMA/comments/1rgkc1b/back_in_my_day_localllama_were_the_pioneers/o7v1zgh/
false
3
t1_o7v1z6c
I've been struggling to get edit and write tool calls to work with opencode; I keep getting: ~ Preparing write... Tool execution aborted "Invalid diff: now finding less tool calls!" Does this happen for you? I've been struggling to figure out how people can actually use opencode for writing and patching code. Happ...
1
0
2026-02-28T11:12:05
rema1000fan
false
null
0
o7v1z6c
false
/r/LocalLLaMA/comments/1rgxr0v/qwen_35_is_multimodal_here_is_how_to_enable_image/o7v1z6c/
false
1
t1_o7v1xrg
Works for me: `llama-server -c 32768 -ctk q8_0 -ctv q8_0 --temp 0.7 --top-p 1.0 --top-k 0 --min-p 0.05 --dynatemp-range 0.3 --dynatemp-exp 1.2 --dry-multiplier 0.8 --repeat-penalty 1.05 --mlock --chat-template-kwargs '{"enable_thinking": false}' -m Qwen3.5-35B-A3B.Q4_K_M.gguf`
2
0
2026-02-28T11:11:44
Life-Screen-9923
false
null
0
o7v1xrg
false
/r/LocalLLaMA/comments/1rgp97u/sooo_much_thinking/o7v1xrg/
false
2
t1_o7v1to5
131k of context and only 12GB of VRAM? How much RAM do you have? I usually get 65k of context with 16GB of VRAM (no offloading) with gpt-oss 20b, 7 experts active, from LM Studio.
1
0
2026-02-28T11:10:42
Turbulent_Dot3764
false
null
0
o7v1to5
false
/r/LocalLLaMA/comments/1rgvma8/which_size_of_qwen35_are_you_planning_to_run/o7v1to5/
false
1
t1_o7v1r1m
Honestly, the Nemotron Nano shoutout is spot on; that thing punches way above its weight for agentic stuff. I've been running it locally for tool-calling pipelines and it's surprisingly reliable even with limited VRAM. Also agree on Qwen 3.5 being a massive step up; the 35B model specifically feels like it closed the gap...
1
0
2026-02-28T11:10:01
jduartedj
false
null
0
o7v1r1m
false
/r/LocalLLaMA/comments/1rgokw1/a_monthly_update_to_my_where_are_openweight/o7v1r1m/
false
1
t1_o7v1op0
80b coder is better for me than 35b, but most of my tasks are coding. I have done simple tests for general tasks on the 122b, but I don't have a conclusive result yet to tell which one I like more. Can't wait for the coder variants of the qwen3.5 models.
2
0
2026-02-28T11:09:26
m_mukhtar
false
null
0
o7v1op0
false
/r/LocalLLaMA/comments/1rgoygs/does_qwen35_35b_outperform_qwen3_coder_next_80b/o7v1op0/
false
2
t1_o7v1l8i
Yes I had exactly the same - not sure which element fixed it but I ended up updating everything including CUDA
2
0
2026-02-28T11:08:34
thigger
false
null
0
o7v1l8i
false
/r/LocalLLaMA/comments/1re9xbi/qwen35_on_vllm/o7v1l8i/
false
2
t1_o7v1h25
I ended up reinstalling everything, including upgrading to CUDA 13.1, and now am getting sensible outputs. Running on 2x A6000 Ada, WSL2. The AWQ quants (cyankiwi) don't seem to work on SGLang (but they're fine on vLLM). The FP8 and full-precision work on both.
1
0
2026-02-28T11:07:30
thigger
false
null
0
o7v1h25
false
/r/LocalLLaMA/comments/1re9xbi/qwen35_on_vllm/o7v1h25/
false
1
t1_o7v1dur
I am super wary of Ubuntu. I do actually use it, and I'm too invested in it now to change; I have it mostly just how I want it. But as someone who doesn't really know Linux, the headache of NVIDIA drivers from multiple sources and different ways of installing, leading to CUDA issues and problems presenting when trying to ...
1
0
2026-02-28T11:06:41
munkiemagik
false
null
0
o7v1dur
false
/r/LocalLLaMA/comments/1rgyd8p/switching_from_windows_to_linux_what_distro_to/o7v1dur/
false
1
t1_o7v1ct3
I'm ready for any Qwens. They're all broken in ollama and there's like 10 pull requests trying to make them work in the first place and I'm too stupid to compile llama.cpp.
13
0
2026-02-28T11:06:25
LosEagle
false
null
0
o7v1ct3
false
/r/LocalLLaMA/comments/1rgzul5/are_you_ready_for_small_qwens/o7v1ct3/
false
13
t1_o7v1cib
Probably a base model and the instruct-trained model. So 2 sizes. If I hazard a guess, the 9B model and a 4B model. That way they have size equivalents of the Gemma 3 models: 4B, 9B, 27B.
65
0
2026-02-28T11:06:20
Few_Painter_5588
false
null
0
o7v1cib
false
/r/LocalLLaMA/comments/1rgzul5/are_you_ready_for_small_qwens/o7v1cib/
false
65
t1_o7v183u
Both are great; the 35b is faster and has vision, which is very handy. Honestly the 35b (Q4_K_M) has been smarter on some tasks than the 80b (Q3_K_M). For example, I've asked it to disable theme sync on my app when I change only the background color; there are two dropdowns, one for theme and another for background color, and ...
4
0
2026-02-28T11:05:13
fredconex
false
null
0
o7v183u
false
/r/LocalLLaMA/comments/1rgoygs/does_qwen35_35b_outperform_qwen3_coder_next_80b/o7v183u/
false
4
t1_o7v16jh
They made Claude Code, so far the best agent. It would be nice if that was open-sourced )
5
0
2026-02-28T11:04:50
goingsplit
false
null
0
o7v16jh
false
/r/LocalLLaMA/comments/1rguty0/get_your_local_models_in_order_anthropic_just_got/o7v16jh/
false
5
t1_o7v16cd
Kimi K2.5, GLM 5
2
0
2026-02-28T11:04:47
Amgadoz
false
null
0
o7v16cd
false
/r/LocalLLaMA/comments/1rgkc1b/back_in_my_day_localllama_were_the_pioneers/o7v16cd/
false
2
t1_o7v15pr
Fair enough, you're right that the email case specifically is pretty tractable. historyId + structured tagging is clean, and I've actually done something similar. I think where it gets messy, though, is when you move beyond email into less structured domains, like "remember that conversation we had about X last week" or "you...
1
0
2026-02-28T11:04:37
jduartedj
false
null
0
o7v15pr
false
/r/LocalLLaMA/comments/1re897q/rlocalllama_whats_the_biggest_missing_piece_for/o7v15pr/
false
1
t1_o7v11p0
I've never used Manus, nor the following: https://github.com/OthmanAdi/planning-with-files but maybe it is something useful.
1
0
2026-02-28T11:03:36
FewBasis7497
false
null
0
o7v11p0
false
/r/LocalLLaMA/comments/1rgyb4r/local_manus/o7v11p0/
false
1
t1_o7v11pt
I'm testing the 397B NVFP4 on 4x RTX 6000 Pros. Great model so far, but I have a lot more to test.
1
0
2026-02-28T11:03:36
TaiMaiShu-71
false
null
0
o7v11pt
false
/r/LocalLLaMA/comments/1rgvma8/which_size_of_qwen35_are_you_planning_to_run/o7v11pt/
false
1
t1_o7v0vqg
As far as I'm concerned, Anthropic could go bankrupt right now and nothing would change. Anthropic has never released any open model, and Chinese companies have done more for those who want to use a local model than Anthropic has ever done in all of history.
10
0
2026-02-28T11:02:03
AppealThink1733
false
null
0
o7v0vqg
false
/r/LocalLLaMA/comments/1rguty0/get_your_local_models_in_order_anthropic_just_got/o7v0vqg/
false
10
t1_o7v0vmd
I tried giving Qwen3.5 122B some coding tasks; it just got into a sort of loop with too much thinking. I waited for 30 mins and stopped the process. On the other hand, MiniMax M2.5 finished the task in 3 mins, and Qwen3 Coder Next in 9 minutes (and got a better code score). I'm still unable to understand the hype around Qwen3.5.
1
0
2026-02-28T11:02:01
BitXorBit
false
null
0
o7v0vmd
false
/r/LocalLLaMA/comments/1rgtxry/is_qwen35_a_coding_game_changer_for_anyone_else/o7v0vmd/
false
1
t1_o7v0v9e
That is literally killing me!!!
0
0
2026-02-28T11:01:55
d4mations
false
null
0
o7v0v9e
false
/r/LocalLLaMA/comments/1rgzfat/how_is_qwen_35_moe_35b_in_instruct_mode_with_no/o7v0v9e/
false
0
t1_o7v0t8v
Do you really want to disable reasoning for coding? I'm testing a few different Qwen3.5 models right now on my 4070 Ti Super. Getting 50+ t/s in chat, but it's way slower in opencode building a project.
1
0
2026-02-28T11:01:25
T3KO
false
null
0
o7v0t8v
false
/r/LocalLLaMA/comments/1rglgma/qwen_35_llamacpp_turn_of_reasoning_and_performance/o7v0t8v/
false
1
t1_o7v0swb
This did not go well; I get a bunch of errors during startup, after which it's running but barely. I am using CUDA 13.0 with vllm/vllm-openai:nightly + huggingface/transformers.git and Sehyo/Qwen3.5-122B-A10B-NVFP4. If you can, please share your vllm launch command line and any other environment details that I should take...
1
0
2026-02-28T11:01:19
NaiRogers
false
null
0
o7v0swb
false
/r/LocalLLaMA/comments/1rgiait/switched_to_qwen35122ba10bi1gguf/o7v0swb/
false
1
t1_o7v0rkn
Try LFM's recent 24B MoE model; it should be faster than 30B MoEs.
1
0
2026-02-28T11:00:59
pmttyji
false
null
0
o7v0rkn
false
/r/LocalLLaMA/comments/1r1c7ct/no_gpu_club_how_many_of_you_do_use_local_llms/o7v0rkn/
false
1
t1_o7v0qn4
I've been trying different system prompts but not much luck so far. It definitely works to tell it what the steps should be called, or contain, but even if you tell it to only use three it'll do the three and then generate a big pile of "Wait..." - I've not really found any way to use system prompts to keep it short.
1
0
2026-02-28T11:00:46
thigger
false
null
0
o7v0qn4
false
/r/LocalLLaMA/comments/1rg0487/system_prompt_for_qwen35_27b35ba3b_to_reduce/o7v0qn4/
false
1
t1_o7v0ouy
IME the quantization frontier has really improved in the last couple years. I generally find better results using the Q3 quants for very large models than I do using Q8 quants for smaller ones at similar filesizes.
3
0
2026-02-28T11:00:18
Thunderstarer
false
null
0
o7v0ouy
false
/r/LocalLLaMA/comments/1rgvma8/which_size_of_qwen35_are_you_planning_to_run/o7v0ouy/
false
3
t1_o7v0oh9
Well, show that to the "gguf when?" crowd. I've been there long enough to know that people would try to quantize the models even before they are officially supported by llama.cpp. So what would you prefer: a bug tracker in a responsive community, or the wild west?
1
0
2026-02-28T11:00:12
TitwitMuffbiscuit
false
null
0
o7v0oh9
false
/r/LocalLLaMA/comments/1rfds1h/qwen3535ba3b_q4_quantization_comparison/o7v0oh9/
false
1
t1_o7v0n0g
Can a company even win the Nobel Peace Prize? Because if so, Anthropic absolutely deserves one!
-1
0
2026-02-28T10:59:50
kRoy_03
false
null
0
o7v0n0g
false
/r/LocalLLaMA/comments/1rgn4ki/president_trump_orders_all_federal_agencies_in/o7v0n0g/
false
-1
t1_o7v0mfn
Haven't tested Vibe integration yet but Devstral 2 is really impressive for the size even just in regular use. How well does it manage context? Like if I can only load say, 16k, could it gradually check a huge repo for issues anyway?
2
0
2026-02-28T10:59:41
MoffKalast
false
null
0
o7v0mfn
false
/r/LocalLLaMA/comments/1rgokw1/a_monthly_update_to_my_where_are_openweight/o7v0mfn/
false
2
t1_o7v0ly8
Good stuff in that post, but you still dodged the question: what laptops are there with a big, wide memory bus? About the price/cost: yes, expensive, more than my Framework 13. But there are also expensive amd64-based powerful laptops, the ThinkPad P1 and others... They still don't have the same memory bandwidth... and there is ...
1
0
2026-02-28T10:59:34
m3thos
false
null
0
o7v0ly8
false
/r/LocalLLaMA/comments/1rb8mzd/this_is_how_slow_local_llms_are_on_my_framework/o7v0ly8/
false
1
t1_o7v0k72
Good execution on the Thompson Sampling for retrieval strategy selection - that is the kind of self-improving mechanism that makes the flywheel concept actually meaningful. Curious to see what the ablation numbers show.
2
0
2026-02-28T10:59:07
BC_MARO
false
null
0
o7v0k72
false
/r/LocalLLaMA/comments/1r8jgwv/i_built_a_local_ai_dev_assistant_with_hybrid_rag/o7v0k72/
false
2
t1_o7v0jgd
Qwen3-coder-next seems to make dumb mistakes a lot more often and requires more babysitting to avoid doing random stuff with the context, but it manages to do a better job as soon as shell tool calling becomes important.
2
0
2026-02-28T10:58:56
DidItABit
false
null
0
o7v0jgd
false
/r/LocalLLaMA/comments/1rgoygs/does_qwen35_35b_outperform_qwen3_coder_next_80b/o7v0jgd/
false
2
t1_o7v0h4c
It's pretty good - though its performance at long context definitely suffers. I'm presently running a few benchmarks - I have a suspicion that for my use-case I'm going to have to leave thinking turned on, even though it *loves* to "Wait..." over and over again even after it's already copied out its entire input.
17
0
2026-02-28T10:58:21
thigger
false
null
0
o7v0h4c
false
/r/LocalLLaMA/comments/1rgzfat/how_is_qwen_35_moe_35b_in_instruct_mode_with_no/o7v0h4c/
false
17
t1_o7v0g9o
I think the slowness may be mainly due to the length of the final output generated by the LLM; the lookup is pretty fast and accurate in my experience and from looking at the demo: [https://private-user-images.githubusercontent.com/10822018/556079553-704dbc0a-3df6-436a-b7f7-fb1edefbfb8c.mp4?jwt=eyJ0eXAiOiJKV1QiLCJhb...
1
0
2026-02-28T10:58:08
HugeConsideration211
false
null
0
o7v0g9o
false
/r/LocalLLaMA/comments/1r5e0x8/sirchmunk_embeddingandindexfree_retrieval_for/o7v0g9o/
false
1
t1_o7v0g07
Isn't that what agents are for?
4
0
2026-02-28T10:58:04
howardhus
false
null
0
o7v0g07
false
/r/LocalLLaMA/comments/1rgtxry/is_qwen35_a_coding_game_changer_for_anyone_else/o7v0g07/
false
4
t1_o7v0egy
I greatly dislike giving money to US-based companies, but I'm tempted to make an exception for Anthropic. When the deranged throw such a fit, you know Anthropic has done something good.
1
0
2026-02-28T10:57:40
starhobo
false
null
0
o7v0egy
false
/r/LocalLLaMA/comments/1rgn4ki/president_trump_orders_all_federal_agencies_in/o7v0egy/
false
1
t1_o7v0dsz
I find myself, surprisingly, limited by compute, and I'm not really sure why. I can get 20-25 T/s on similarly-sized dense models, but even though Qwen 27B fits into my VRAM, I only get ~14 T/s.
3
0
2026-02-28T10:57:29
Thunderstarer
false
null
0
o7v0dsz
false
/r/LocalLLaMA/comments/1rgvma8/which_size_of_qwen35_are_you_planning_to_run/o7v0dsz/
false
3
t1_o7v0ber
What does the image say? I'm blind.
2
0
2026-02-28T10:56:52
Silver-Champion-4846
false
null
0
o7v0ber
false
/r/LocalLLaMA/comments/1rgt4m4/not_creeped_out_at_all_i_swear/o7v0ber/
false
2
t1_o7v077t
Thanks - I'm going to give it a try but it won't leave much room for context, unfortunately.
1
0
2026-02-28T10:55:47
Zyj
false
null
0
o7v077t
false
/r/LocalLLaMA/comments/1rgvma8/which_size_of_qwen35_are_you_planning_to_run/o7v077t/
false
1
t1_o7v0686
Mistral makes one of the few larger models that aren't stuck in parrot/summary mode and don't need 200GB for IQ1. Your other options around 100B are brainlet MoE codemaxxers and months-old models from Cohere. The creative SOTA is basically all on API for most people these days. Free inference for that has been pullin...
5
0
2026-02-28T10:55:31
a_beautiful_rhind
false
null
0
o7v0686
false
/r/LocalLLaMA/comments/1rgokw1/a_monthly_update_to_my_where_are_openweight/o7v0686/
false
5
t1_o7v02vw
I found Q4_K_S better and faster than 397b-A17B MXFP4.
1
0
2026-02-28T10:54:39
Zyj
false
null
0
o7v02vw
false
/r/LocalLLaMA/comments/1rgvma8/which_size_of_qwen35_are_you_planning_to_run/o7v02vw/
false
1
t1_o7v01un
- Captioning 100k+ images
- I only read a tiny tiny fraction of it
- Finishing the project before 2047
6
0
2026-02-28T10:54:23
reto-wyss
false
null
0
o7v01un
false
/r/LocalLLaMA/comments/1rgvma8/which_size_of_qwen35_are_you_planning_to_run/o7v01un/
false
6
t1_o7v00u8
Yes, but I'm pretty sure they weren't downloaded for older models either, though.
1
0
2026-02-28T10:54:08
KURD_1_STAN
false
null
0
o7v00u8
false
/r/LocalLLaMA/comments/1rgel19/new_qwen3535ba3b_unsloth_dynamic_ggufs_benchmarks/o7v00u8/
false
1
t1_o7uzyyo
> But Claude is also really the best model available right now. I recommend to use Claude

I got r_ped, but damn he was good at sex, the biggest cock in town. I recommend getting r_ped too, but wear a condom.
1
0
2026-02-28T10:53:37
Ambitious-Call-7565
false
null
0
o7uzyyo
false
/r/LocalLLaMA/comments/1rd8cfw/anthropics_recent_distillation_blog_should_make/o7uzyyo/
false
1
t1_o7uzxwk
I tried Qwen 3.5 397B-A17B MXFP4_MOE but found Q4_K_S to be both better and faster. I will probably try Qwen 3.5 122B-A10B for the speedup.
1
0
2026-02-28T10:53:21
Zyj
false
null
0
o7uzxwk
false
/r/LocalLLaMA/comments/1rgvma8/which_size_of_qwen35_are_you_planning_to_run/o7uzxwk/
false
1
t1_o7uzvj0
Yeah, but $25/M output tokens 😭 I have to prepare myself mentally for this.
2
0
2026-02-28T10:52:44
fairydreaming
false
null
0
o7uzvj0
false
/r/LocalLLaMA/comments/1rg9lli/little_qwen_35_27b_and_qwen_35ba3b_models_did/o7uzvj0/
false
2
t1_o7uznco
Meh. I don't love the REAPs. I feel like the strategy has potential, but it's too immature and imprecise, and it ends up ripping out too much, to the point where I notice it failing in edge-cases.
1
0
2026-02-28T10:50:37
Thunderstarer
false
null
0
o7uznco
false
/r/LocalLLaMA/comments/1rgtxry/is_qwen35_a_coding_game_changer_for_anyone_else/o7uznco/
false
1
t1_o7uzkzx
[https://thinkingmachines.ai/blog/lora/](https://thinkingmachines.ai/blog/lora/)
1
0
2026-02-28T10:50:01
Few-Welcome3297
false
null
0
o7uzkzx
false
/r/LocalLLaMA/comments/1rgbwwh/lora_training_vs_fft_what_do_i_need_to_know/o7uzkzx/
false
1
t1_o7uzkd5
I would also like to know the answer to that question
1
0
2026-02-28T10:49:51
Silver-Champion-4846
false
null
0
o7uzkd5
false
/r/LocalLLaMA/comments/1rgzfat/how_is_qwen_35_moe_35b_in_instruct_mode_with_no/o7uzkd5/
false
1
t1_o7uzj5s
Thank you u/danielhanchen! Can you quickly elaborate on what this means for the Qwen 3.5 397B A17B quants?
1
0
2026-02-28T10:49:33
Zyj
false
null
0
o7uzj5s
false
/r/LocalLLaMA/comments/1rgel19/new_qwen3535ba3b_unsloth_dynamic_ggufs_benchmarks/o7uzj5s/
false
1
t1_o7uzhyb
Update... **9 days later** Hey everyone, wanted to circle back with a progress update since the discussion here was genuinely useful. **TL;DR:** Went from 413 tests > 991 tests. Built the hyper-personalization engine and 6 algorithm improvements. Every gap called out in this thread got addressed. Directly from your ...
1
0
2026-02-28T10:49:14
ikchain
false
null
0
o7uzhyb
false
/r/LocalLLaMA/comments/1r8jgwv/i_built_a_local_ai_dev_assistant_with_hybrid_rag/o7uzhyb/
false
1
t1_o7uzhz7
I won't be able to run it while Chrome is up, meaning I can't actually multitask with it.
2
0
2026-02-28T10:49:14
Silver-Champion-4846
false
null
0
o7uzhz7
false
/r/LocalLLaMA/comments/1rghfqj/february_is_almost_over_are_you_satisfied/o7uzhz7/
false
2
t1_o7uzgi9
Thank you, we appreciate it; the support is more than we could ever hope for! :)
2
0
2026-02-28T10:48:52
yoracale
false
null
0
o7uzgi9
false
/r/LocalLLaMA/comments/1rgel19/new_qwen3535ba3b_unsloth_dynamic_ggufs_benchmarks/o7uzgi9/
false
2
t1_o7uz4hr
Old localllama appreciated projects that conserved prompt tokens. Openclaw is the opposite of that.
9
0
2026-02-28T10:45:43
LMLocalizer
false
null
0
o7uz4hr
false
/r/LocalLLaMA/comments/1rgkc1b/back_in_my_day_localllama_were_the_pioneers/o7uz4hr/
false
9
t1_o7uz3qf
Thank you very much for the explanation; I'll then go on with the 27B.
3
0
2026-02-28T10:45:31
vogelvogelvogelvogel
false
null
0
o7uz3qf
false
/r/LocalLLaMA/comments/1rgvma8/which_size_of_qwen35_are_you_planning_to_run/o7uz3qf/
false
3
t1_o7uz0pr
I honestly fail to see how qwen can rank above glm. Idk, maybe I'm biased or something, but all the recent qwen releases have been a massive disappointment to me, while glm keeps punching waaaay above its weight. I'd rank 4.7 above kimi, too.
3
0
2026-02-28T10:44:43
stoppableDissolution
false
null
0
o7uz0pr
false
/r/LocalLLaMA/comments/1rgokw1/a_monthly_update_to_my_where_are_openweight/o7uz0pr/
false
3
t1_o7uyyvo
I've managed to run GLM-4.7 Flash and Qwen3-30B-A3B on 6GB VRAM, even Qwen3 Coder Next 80B as a Q1 quant, but man, 4GB VRAM is super tight! Nevertheless, I would try Qwen3-35B-A3B in a tight quant (Q3 or even Q2), KV cache quantization to q8_0 or possibly even lower for the V part, and context limited to say 32k or whatev...
1
0
2026-02-28T10:44:13
OsmanthusBloom
false
null
0
o7uyyvo
false
/r/LocalLLaMA/comments/1rg1dfi/searching_advice_nvidia_t6000_4gb_vram_useful_for/o7uyyvo/
false
1
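A minimal sketch of the low-VRAM recipe above, launched from Python: a tight quant, q8_0 KV-cache quantization (the same -ctk/-ctv flags used elsewhere in this thread), and a 32k context cap. The model filename and GPU layer count are illustrative assumptions for a 4GB card, not a tested configuration.

```python
import subprocess

# Hedged sketch: launch llama-server with the low-VRAM settings described
# above. The .gguf filename and -ngl value are hypothetical.
subprocess.run([
    "./llama-server",
    "-m", "Qwen3.5-35B-A3B.Q2_K.gguf",  # hypothetical tight-quant file
    "-c", "32768",                      # limit context to ~32k
    "-ctk", "q8_0", "-ctv", "q8_0",     # quantize KV cache
    "-ngl", "8",                        # few GPU layers; rest offloaded to CPU
], check=True)
```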
t1_o7uyyim
Position bias (mentioned above) is real, but there's another one that tripped me up more: verbosity bias. The judge tends to favor whichever agent gave the longer, more detailed response, even if the shorter one was actually more accurate. What helped was explicitly telling the judge to score accuracy and relevance sep...
5
0
2026-02-28T10:44:08
Exact_Guarantee4695
false
null
0
o7uyyim
false
/r/LocalLLaMA/comments/1rgt43l/using_a_third_llm_as_a_judge_to_evaluate_two/o7uyyim/
false
5
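A minimal sketch of the mitigation described above: a judge prompt that scores accuracy and relevance on separate axes and randomizes answer order to blunt position bias. The rubric wording and the call_llm() helper are assumptions for illustration, not the commenter's actual setup.

```python
import json
import random

# Hedged sketch: the rubric asks for separate axis scores and explicitly
# warns against rewarding length, per the verbosity-bias discussion above.
RUBRIC = (
    "Score each answer on two axes from 1-5 and return JSON only:\n"
    '{"A": {"accuracy": 0, "relevance": 0}, "B": {"accuracy": 0, "relevance": 0}}\n'
    "Do NOT reward length or level of detail by itself."
)

def judge(question, ans_1, ans_2, call_llm):
    flipped = random.random() < 0.5  # randomize order to blunt position bias
    a, b = (ans_2, ans_1) if flipped else (ans_1, ans_2)
    prompt = f"{RUBRIC}\n\nQuestion: {question}\n\nAnswer A: {a}\n\nAnswer B: {b}"
    scores = json.loads(call_llm(prompt))  # assumes the judge returns valid JSON
    # Map scores back to the original answer order.
    return (scores["B"], scores["A"]) if flipped else (scores["A"], scores["B"])
```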
t1_o7uyv18
Yeah, I'm not shitting on innovation, to be clear. I'm glad it happened, I loved being there for it; the novelty just wore off at some point or another. I'm guilty of writing that in a hurry; I did not mean to sound like an ungrateful cunt. Bloke paved the way, but now there's bartowski / unsloth, and one could quantize ...
2
0
2026-02-28T10:43:13
Lan_BobPage
false
null
0
o7uyv18
false
/r/LocalLLaMA/comments/1rgkc1b/back_in_my_day_localllama_were_the_pioneers/o7uyv18/
false
2
t1_o7uys50
You might be using a broken quant. 
1
0
2026-02-28T10:42:27
My_Unbiased_Opinion
false
null
0
o7uys50
false
/r/LocalLLaMA/comments/1rgp97u/sooo_much_thinking/o7uys50/
false
1
t1_o7uyr0u
Yeah, logging everything is step one, but unstructured logs aren't much better when you have a multi-step failure. What actually helped: wrapping every tool call in a structured event capturing step number, tool name, input summary, output summary, token count, and elapsed ms. It took about 2h to build, but now when step 4 die...
3
0
2026-02-28T10:42:09
Exact_Guarantee4695
false
null
0
o7uyr0u
false
/r/LocalLLaMA/comments/1rgwyqi/agent_debugging_is_a_mess_am_i_the_only_one/o7uyr0u/
false
3
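A minimal sketch of the structured tool-call event described above, as a Python decorator: each call emits one JSON line with step number, tool name, input/output summaries, status, and elapsed ms. Field names and truncation lengths are assumptions; token counting is omitted here.

```python
import functools
import json
import time

_step = 0  # global step counter across the agent run

def logged_tool(fn):
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        global _step
        _step += 1
        start = time.monotonic()
        try:
            out = fn(*args, **kwargs)
            status = "ok"
            return out
        except Exception as e:
            out, status = repr(e), "error"
            raise
        finally:
            # One structured JSON event per tool call, success or failure.
            print(json.dumps({
                "step": _step,
                "tool": fn.__name__,
                "input": str((args, kwargs))[:200],  # truncated summary
                "output": str(out)[:200],
                "status": status,
                "elapsed_ms": round((time.monotonic() - start) * 1000),
            }))
    return wrapper

@logged_tool
def search_docs(query: str) -> str:
    return f"results for {query}"  # stand-in tool body
```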
t1_o7uynug
They just need to release the new, refined dense 123b. Not a tune on top of the old base, but an actually new big dense base. But they probably won't, because of the AI Act's idiotic FLOPs ceiling.
2
0
2026-02-28T10:41:19
stoppableDissolution
false
null
0
o7uynug
false
/r/LocalLLaMA/comments/1rgokw1/a_monthly_update_to_my_where_are_openweight/o7uynug/
false
2
t1_o7uym7u
Personally, I'd say that privacy in general is overrated, and it's not the reason I use local AI.
1
0
2026-02-28T10:40:54
OrganicPlasma
false
null
0
o7uym7u
false
/r/LocalLLaMA/comments/1re2qzr/after_all_the_news_do_you_worry_about_privacy/o7uym7u/
false
1
t1_o7uyhig
He's got major money and nothing except his family to prevent him from putting lots of that time and money into whatever he wants, so that gets you pretty close to power user territory without even trying that hard. I watched part of one of his build videos and he had 8 fairly high-end GPUs crammed into his computer, ...
3
0
2026-02-28T10:39:40
RogerRamjet999
false
null
0
o7uyhig
false
/r/LocalLLaMA/comments/1rg8dex/pewdiepie_finetuned_qwen25coder32b_to_beat/o7uyhig/
false
3
t1_o7uy9e2
? What values?

>Nov 7, 2024 — ***Anthropic and Palantir Technologies Inc. (NYSE: PLTR***) today announced a partnership with Amazon Web Services (AWS) to provide US intelligence and defense...

It's probably just a negotiations breakdown and farming public sentiment. We didn't forget.
41
0
2026-02-28T10:37:33
PunnyPandora
false
null
0
o7uy9e2
false
/r/LocalLLaMA/comments/1rguty0/get_your_local_models_in_order_anthropic_just_got/o7uy9e2/
false
41
t1_o7uy7v4
What's the purpose? If you mean that you want it to somehow livestream your gameplay and provide real time tips, I don't think there are any frameworks that would harness a multimodal model in that way. If you want to clip some parts of your gameplay and the model to provide you with some feedback based on that, maybe ...
1
0
2026-02-28T10:37:09
popecostea
false
null
0
o7uy7v4
false
/r/LocalLLaMA/comments/1rgz6u3/how_tò_build_your_local_gaming_copilot_with/o7uy7v4/
false
1
t1_o7uy7u8
Some of his videos are like what LTT was 10 years ago and should be now. Remember whole room water cooling, or 7 gamers 1 GPU? This guy's drilling holes in his walls to "borrow" power from his bathroom circuit to power the hacked, undervolted Chinese 4090s in his computer.
1
0
2026-02-28T10:37:09
Doct0r0710
false
null
0
o7uy7u8
false
/r/LocalLLaMA/comments/1rg8dex/pewdiepie_finetuned_qwen25coder32b_to_beat/o7uy7u8/
false
1
t1_o7uy7c9
Anthropic was advocating to restrict open weights. Now they're persona non grata and their lobbying efforts are more likely to fall flat.
20
0
2026-02-28T10:37:01
a_beautiful_rhind
false
null
0
o7uy7c9
false
/r/LocalLLaMA/comments/1rguty0/get_your_local_models_in_order_anthropic_just_got/o7uy7c9/
false
20
t1_o7uy684
I use Linux Mint and it works very well. But distro choice is mostly a matter of taste.
6
0
2026-02-28T10:36:44
OsmanthusBloom
false
null
0
o7uy684
false
/r/LocalLLaMA/comments/1rgyd8p/switching_from_windows_to_linux_what_distro_to/o7uy684/
false
6
t1_o7uy43w
The context management point is real... The biggest issue I've hit is keeping workflows simple when you need document understanding. I ended up moving those to the Needle app, since you just describe what you want and it builds it (it has RAG built in). Way easier than wiring together LLM + vector DB + chunking logic.
1
0
2026-02-28T10:36:10
jannemansonh
false
null
0
o7uy43w
false
/r/LocalLLaMA/comments/1rgt0au/whats_the_biggest_issues_youre_facing_with_llms/o7uy43w/
false
1
t1_o7uy0pd
397b
1
0
2026-02-28T10:35:15
ciprianveg
false
null
0
o7uy0pd
false
/r/LocalLLaMA/comments/1rgvma8/which_size_of_qwen35_are_you_planning_to_run/o7uy0pd/
false
1
t1_o7uxzzr
I've tried even Qwen3 Coder Next 80B Q1 quants on my RTX 3060 laptop GPU with only 6GB VRAM and 24GB RAM. It was surprisingly good. But I'd recommend the new Qwen3-35B-A3B, which is easier to fit in low VRAM with partial CPU offload.
2
0
2026-02-28T10:35:04
OsmanthusBloom
false
null
0
o7uxzzr
false
/r/LocalLLaMA/comments/1rfzrpa/8gb_vram_and_28gb_ram_windows_os/o7uxzzr/
false
2