name: string
body: string
score: int64
controversiality: int64
created: timestamp[us]
author: string
collapsed: bool
edited: timestamp[us]
gilded: int64
id: string
locked: bool
permalink: string
stickied: bool
ups: int64
t1_o856zy3
It's a vLLM thing specifically. Apparently, vLLM has some wonky 8-bit KV quantization quality, according to my friend (OP), who uses vLLM.
2
0
2026-03-02T00:04:27
Kamal965
false
null
0
o856zy3
false
/r/LocalLLaMA/comments/1rianwb/running_qwen35_27b_dense_with_170k_context_at/o856zy3/
false
2
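If 8-bit quantization quality is the concern, it can help to pin vLLM's quantization choices explicitly instead of letting it auto-detect from the checkpoint. A minimal sketch, with the model path as an illustrative placeholder; `--quantization` and `--kv-cache-dtype` are the relevant vLLM knobs, though accepted values vary by version:

```
# Explicit 8-bit (fp8) weight quantization:
vllm serve /models/Qwen3.5-27B --quantization fp8

# Or keep weights as-is and quantize only the KV cache to 8-bit:
vllm serve /models/Qwen3.5-27B --kv-cache-dtype fp8
```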
t1_o85701b
The trick is "don't use llama.cpp, lmstudio or ollama". For a project so widespread and with so many contributors, there has to be something fundamentally wrong if every other project, like sglang and vllm, is basically 20x faster. I just measured on my rig; it's more than 20 times faster. This is not just a "bug". I be...
4
0
2026-03-02T00:04:27
ortegaalfredo
false
null
0
o85701b
false
/r/LocalLLaMA/comments/1rianwb/running_qwen35_27b_dense_with_170k_context_at/o85701b/
false
4
t1_o856y4b
I don't see how it could, but that would be a game changer
2
0
2026-03-02T00:04:09
ItsNoahJ83
false
null
0
o856y4b
false
/r/LocalLLaMA/comments/1rhwo08/qwen35_small_dense_model_release_seems_imminent/o856y4b/
false
2
t1_o856rw0
Could you please link the pr?
2
0
2026-03-02T00:03:07
FORNAX_460
false
null
0
o856rw0
false
/r/LocalLLaMA/comments/1ricz8u/notice_qwen_35_reprocessing_the_prompt_every_time/o856rw0/
false
2
t1_o856q1s
I am fairly certain that is not a scam. I tried it (that's the chat jimmy folks, right?) and it was so unbelievably fast.
1
0
2026-03-02T00:02:49
LeRobber
false
null
0
o856q1s
false
/r/LocalLLaMA/comments/1re9crt/an_llm_hardcoded_into_silicon_that_can_do/o856q1s/
false
1
t1_o856cjm
I want to live in a society where the idea of censoring someone for anything (other than certain crimes against children), including denying the Holocaust, is unthinkable
1
0
2026-03-02T00:00:37
Decent-Reach-9831
false
null
0
o856cjm
false
/r/LocalLLaMA/comments/1rfg3kx/american_closed_models_vs_chinese_open_models_is/o856cjm/
false
1
t1_o856c8u
Honestly, I've found that this model's coding performance sometimes falls short of Qwen3.5 35B. My primary use cases are translation and quick Q&A; for those tasks, it works well. According to Meituan's technical report, this model performs slightly better than Qwen3-Next-80B-A3B-Instruct.
1
0
2026-03-02T00:00:34
Sad-Pickle4282
false
null
0
o856c8u
false
/r/LocalLLaMA/comments/1rhjg6w/longcatflashlite_685b_maybe_a_relatively_good/o856c8u/
false
1
t1_o8566a9
OP, you are a gd genius. I am about to pop off the cover and attempt your proof of concept. **If you did this today, would you do anything different?** The adapter you mentioned is based in England, perhaps you are as well? If not, did you go with that solution, or is there a US-based solution that works? ...
1
0
2026-03-01T23:59:36
b4d6d5d9dcf1
false
null
0
o8566a9
false
/r/LocalLLaMA/comments/1qn02w8/i_put_an_rtx_pro_4000_blackwell_sff_in_my_mss1/o8566a9/
false
1
t1_o8564p9
Is this recorded on Sunday morning at Dario's home, right after he woke up and quickly pulled a housecoat over his pajamas for the interview?
1
0
2026-03-01T23:59:20
Cool-Chemical-5629
false
null
0
o8564p9
false
/r/LocalLLaMA/comments/1ria14c/dario_amodei_on_open_source_thoughts/o8564p9/
false
1
t1_o8561c5
Ok, thanks a lot. I think I have been enough trouble for you today; I will go look at some videos. Thanks a lot man, I really appreciate your help.
1
0
2026-03-01T23:58:46
Electrify338
false
null
0
o8561c5
false
/r/LocalLLaMA/comments/1ribmcg/how_to_run_qwen35_35b/o8561c5/
false
1
t1_o855ola
Yes, you should have a separate folder called "models" and put all GGUFs there. You should not put anything into the llama.cpp folder, because let's say next week there is an amazing update and everything is much better; then you would need to download the new zip and delete the old files.
1
0
2026-03-01T23:56:45
jacek2023
false
null
0
o855ola
false
/r/LocalLLaMA/comments/1ribmcg/how_to_run_qwen35_35b/o855ola/
false
1
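The layout being described is just a permanent models directory kept outside the llama.cpp folder. A sketch of the idea, with illustrative Windows paths:

```
# Binaries and weights stay separate, so updating llama.cpp never touches the GGUFs:
#   C:\llama.cpp\   <- unzip each new llama.cpp release here; safe to delete and replace
#   C:\models\      <- all GGUF files live here permanently
C:\llama.cpp\llama-server.exe -m C:\models\Qwen3.5-35B-A3B-UD-Q4_K_M.gguf --port 8080
```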
t1_o855gdl
Thanks a lot. I followed the guy's steps and I have the model working great. Now my question is: is there a way to have all my models in a single folder, instead of what I just did here?
1
0
2026-03-01T23:55:26
Electrify338
false
null
0
o855gdl
false
/r/LocalLLaMA/comments/1ribmcg/how_to_run_qwen35_35b/o855gdl/
false
1
t1_o855gca
> You seem to believe that a MoE has the knowledge of its total parameters but only the intelligence of a dense model as large as its active parameters. Was that the wording you intended?
1
0
2026-03-01T23:55:25
ttkciar
false
null
0
o855gca
false
/r/LocalLLaMA/comments/1ri635s/13_months_since_the_deepseek_moment_how_far_have/o855gca/
false
1
t1_o8555mp
If that's the case, I suggest you start with a working version of an open-source RAG. The one I have tested is at https://github.com/HKUDS/LightRAG/blob/main/lightrag/api/README.md I have no relation to the project and I am not promoting this one over others. I mention this one because I have tested it with a set of documen...
1
0
2026-03-01T23:53:41
pl201
false
null
0
o8555mp
false
/r/LocalLLaMA/comments/1ribaws/offline_llm_best_pipeline_tools_to_query/o8555mp/
false
1
t1_o8555jh
Ah yeah good idea! :D
1
0
2026-03-01T23:53:41
c64z86
false
null
0
o8555jh
false
/r/LocalLLaMA/comments/1ribmcg/how_to_run_qwen35_35b/o8555jh/
false
1
t1_o8551pq
You mean the prompt cache update for vision models? Yeah, that is merged in llama.cpp.
6
0
2026-03-01T23:53:03
dampflokfreund
false
null
0
o8551pq
false
/r/LocalLLaMA/comments/1ricz8u/notice_qwen_35_reprocessing_the_prompt_every_time/o8551pq/
false
6
t1_o854z5k
Holy fuck, I was just now investigating, trying to figure out exactly what the OpenClaw craze is actually about, and the comment above is just chef's kiss. Imagine paying an OpenAI subscription to write in WhatsApp what you could fucking write in Google Calendar by pressing the New Event button.
6
0
2026-03-01T23:52:39
JoseSuarez
false
null
0
o854z5k
false
/r/LocalLLaMA/comments/1r9gve8/i_feel_left_behind_what_is_special_about_openclaw/o854z5k/
false
6
t1_o854ycp
This is amazing content. I have two servers with A100 80GBs and was considering the 35B-A3B MoE due to high user concurrency + low tolerance for latency, but this may be better since it gets better intelligence.
6
0
2026-03-01T23:52:31
xfalcox
false
null
0
o854ycp
false
/r/LocalLLaMA/comments/1rianwb/running_qwen35_27b_dense_with_170k_context_at/o854ycp/
false
6
t1_o854y6g
0.8B agent swarms would be legendary. I'd love to try pitting 100 worker ants against Claude Code to see who wins.
4
0
2026-03-01T23:52:29
Zestyclose839
false
null
0
o854y6g
false
/r/LocalLLaMA/comments/1ri2irg/breaking_today_qwen_35_small/o854y6g/
false
4
t1_o854x56
If it reprocesses every prompt, that is either because you are using a context size lower than 512 tokens or you do not have the latest llama.cpp installed. This is fixable. What is not fixable, however, is the issue I'm talking about here: once you exceed the context size, you have to reprocess the prompt every time you hit ge...
3
0
2026-03-01T23:52:20
dampflokfreund
false
null
0
o854x56
false
/r/LocalLLaMA/comments/1ricz8u/notice_qwen_35_reprocessing_the_prompt_every_time/o854x56/
false
3
t1_o854tfn
I know right, I am even looking at yours and thinking how the bleep are you getting 34 t/s. Oh hang on, you're using Q4_K_M; I was seeing around 24 t/s on Q6_K_L. What other parameters are you running in your llama-server command?
3
0
2026-03-01T23:51:44
munkiemagik
false
null
0
o854tfn
false
/r/LocalLLaMA/comments/1rianwb/running_qwen35_27b_dense_with_170k_context_at/o854tfn/
false
3
t1_o854thn
Now run llama-server, so you can do that in your browser like in ChatGPT.
1
0
2026-03-01T23:51:44
jacek2023
false
null
0
o854thn
false
/r/LocalLLaMA/comments/1ribmcg/how_to_run_qwen35_35b/o854thn/
false
1
t1_o854pav
Maybe move the GGUF into another folder, because it will get messy later ;)
2
0
2026-03-01T23:51:03
jacek2023
false
null
0
o854pav
false
/r/LocalLLaMA/comments/1ribmcg/how_to_run_qwen35_35b/o854pav/
false
2
t1_o854o01
https://preview.redd.it/…hat much memory
2
0
2026-03-01T23:50:50
Electrify338
false
null
0
o854o01
false
/r/LocalLLaMA/comments/1ribmcg/how_to_run_qwen35_35b/o854o01/
false
2
t1_o854k1g
[removed]
1
0
2026-03-01T23:50:11
[deleted]
true
null
0
o854k1g
false
/r/LocalLLaMA/comments/1q87rs6/a_practical_2026_roadmap_for_modern_ai_search_rag/o854k1g/
false
1
t1_o854i79
> Is there any actual source that said Claude is used to launch attack? We're talking about Anthropic.  You're not going to find any open source.
1
0
2026-03-01T23:49:53
llmentry
false
null
0
o854i79
false
/r/LocalLLaMA/comments/1rhogov/the_us_used_anthropic_ai_tools_during_airstrikes/o854i79/
false
1
t1_o854hza
Did they officially roll the update?
1
0
2026-03-01T23:49:51
FORNAX_460
false
null
0
o854hza
false
/r/LocalLLaMA/comments/1ricz8u/notice_qwen_35_reprocessing_the_prompt_every_time/o854hza/
false
1
t1_o854erx
I'm not sure I understood, because I have the impression this happens on every generation, even without touching the prompt, and from the very first message. I think this will be sorted out in a few weeks; it's not the first time models have had problems at release.
-6
0
2026-03-01T23:49:19
Adventurous-Paper566
false
null
0
o854erx
false
/r/LocalLLaMA/comments/1ricz8u/notice_qwen_35_reprocessing_the_prompt_every_time/o854erx/
false
-6
t1_o854c5d
Yes, only capacity increases; nothing else gets faster for single-user setups. I like running bigger models, and the new Qwen 3.5 397B at the Q4_K_M quant is pretty good for chats.
1
0
2026-03-01T23:48:52
ProfessionalSpend589
false
null
0
o854c5d
false
/r/LocalLLaMA/comments/1qpneiq/amd_strix_halo_gmtek_128gb_unified_rocks/o854c5d/
false
1
t1_o8548gc
Three teaching stories instead of abstract rules.
1
0
2026-03-01T23:48:16
RTS53Mini
false
null
0
o8548gc
false
/r/LocalLLaMA/comments/1riat5w/vignettes_handy_for_ais/o8548gc/
false
1
t1_o854010
Hmmm run RL with GRPO to create O-LoRA to perform search on Wiki, or just SFT a small model for it. I don't see why not.
2
0
2026-03-01T23:46:54
Budget-Juggernaut-68
false
null
0
o854010
false
/r/LocalLLaMA/comments/1ri635s/13_months_since_the_deepseek_moment_how_far_have/o854010/
false
2
t1_o853zgj
hunyuan-image-3-finetune, for LoRAs I believe.
1
0
2026-03-01T23:46:49
Front_Eagle739
false
null
0
o853zgj
false
/r/LocalLLaMA/comments/1rh095c/deepseek_v4_will_be_released_next_week_and_will/o853zgj/
false
1
t1_o853rpm
But here is the tricky part: a 20B model only fits at Q4, and a 30B model also only fits at Q4, but people neglect actual usage. The KV cache and context need VRAM; smaller models are faster, and with the extra space you can run much larger contexts, so they become more usable in practice.
1
0
2026-03-01T23:45:34
guesdo
false
null
0
o853rpm
false
/r/LocalLLaMA/comments/1ri2irg/breaking_today_qwen_35_small/o853rpm/
false
1
t1_o853rqh
You and the other 4 people with enough ram to run that must be very happy :D
2
0
2026-03-01T23:45:34
jacobpederson
false
null
0
o853rqh
false
/r/LocalLLaMA/comments/1ri48pj/qwen35122ba10bggufq4_k_xlpipesscreensaver_oneshot/o853rqh/
false
2
t1_o853pqt
Jokes aside, it basically takes up the cards entirely. I think I have like 20MB of free VRAM? I also run headless Linux just to make sure vLLM gets every bit of the VRAM; the OS's VRAM usage is under 1MB.
13
0
2026-03-01T23:45:15
JohnTheNerd3
false
null
0
o853pqt
false
/r/LocalLLaMA/comments/1rianwb/running_qwen35_27b_dense_with_170k_context_at/o853pqt/
false
13
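For context, vLLM's slice of the card is set explicitly; running headless just keeps the OS and compositor from claiming VRAM first. A minimal sketch with an illustrative model path; `--gpu-memory-utilization` is the standard vLLM flag for this:

```
# Pre-allocate nearly all VRAM for weights + KV cache.
# Whatever a desktop session holds is subtracted from what this can claim.
vllm serve /models/Qwen3.5-27B --gpu-memory-utilization 0.98
```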
t1_o853k4l
yes.
12
0
2026-03-01T23:44:19
JohnTheNerd3
false
null
0
o853k4l
false
/r/LocalLLaMA/comments/1rianwb/running_qwen35_27b_dense_with_170k_context_at/o853k4l/
false
12
t1_o853ipt
I run LM Studio, and the settings there are always wrong by default for some reason.
1
0
2026-03-01T23:44:05
jacobpederson
false
null
0
o853ipt
false
/r/LocalLLaMA/comments/1ri48pj/qwen35122ba10bggufq4_k_xlpipesscreensaver_oneshot/o853ipt/
false
1
t1_o853igi
western models won't misgender caitlyn jenner to avert nuclear holocaust but yeah let's talk about chinese bias
1
0
2026-03-01T23:44:02
woahdudee2a
false
null
0
o853igi
false
/r/LocalLLaMA/comments/1rdlaqr/for_those_who_use_local_chinese_models_does_bias/o853igi/
false
1
t1_o853h65
There are a million options in llama-server to make it faster/better; have fun and good luck.
2
0
2026-03-01T23:43:49
jacek2023
false
null
0
o853h65
false
/r/LocalLLaMA/comments/1ribmcg/how_to_run_qwen35_35b/o853h65/
false
2
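A hedged sketch of the options most often worth trying first; flag spellings drift between llama.cpp builds, so confirm against `llama-server --help` on yours:

```
# -ngl 99 : offload as many layers as possible to the GPU
# -c 32768: set an explicit context size instead of the default
# -fa     : enable flash attention (usually faster on CUDA; syntax varies by build)
llama-server -m Qwen3.5-35B-A3B-UD-Q4_K_M.gguf -ngl 99 -c 32768 -fa --port 8080
```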
t1_o853ghm
"Ultimately you have to host it on the cloud" What a BS. I hope deepseek makes engram work so we can run the actual SOTA models on our computer.
2
0
2026-03-01T23:43:43
Several-Tax31
false
null
0
o853ghm
false
/r/LocalLLaMA/comments/1ria14c/dario_amodei_on_open_source_thoughts/o853ghm/
false
2
t1_o853gdm
I only did it once, to run glm-4.7-flash when it first came out, before I had enough risers to put multiple GPUs in one box. It worked but hurt performance a bit; IIRC I got like 15 t/s vs 25 with all the GPUs in one box. You may need to recompile llama.cpp with RPC support.
1
0
2026-03-01T23:43:41
tvall_
false
null
0
o853gdm
false
/r/LocalLLaMA/comments/1ribx4f/sharded_deployment/o853gdm/
false
1
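For reference, the RPC path in llama.cpp is a compile-time option plus a small worker process on each remote box. A sketch following the llama.cpp RPC example; binary and flag names may shift between versions, and the IP addresses are placeholders:

```
# On each remote machine: build with RPC enabled and expose its GPU.
cmake -B build -DGGML_RPC=ON && cmake --build build --config Release
./build/bin/rpc-server --host 0.0.0.0 --port 50052

# On the main machine: point llama.cpp at the remote workers.
./build/bin/llama-server -m model.gguf --rpc 192.168.1.10:50052,192.168.1.11:50052
```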
t1_o853ffq
I unzipped everything into one folder, here's what mine looks like with the gguf model included https://preview.redd.it/9beycyarsimg1.png?width=1408&format=png&auto=webp&s=131c5226d2721685885b3631cd3db1e143ca0423
1
0
2026-03-01T23:43:32
c64z86
false
null
0
o853ffq
false
/r/LocalLLaMA/comments/1ribmcg/how_to_run_qwen35_35b/o853ffq/
false
1
t1_o853ccg
He's saying he wants the most performance that can fit in his 16GB, ofc.
2
0
2026-03-01T23:43:02
itsTF
false
null
0
o853ccg
false
/r/LocalLLaMA/comments/1ri2irg/breaking_today_qwen_35_small/o853ccg/
false
2
t1_o853bzt
u/waiting_for_zban, it took me a while, but I dug through the Librechat agents for search and hopefully fixed the most critical parts to reduce the number of tokens used by SearXNG/Firecrawl/Jina.AI: [https://github.com/danny-avila/agents/pull/63](https://github.com/danny-avila/agents/pull/63). I will create a...
2
0
2026-03-01T23:42:58
runsleeprepeat
false
null
0
o853bzt
false
/r/LocalLLaMA/comments/1mucj1p/which_models_are_suitable_for_websearch/o853bzt/
false
2
t1_o8535jv
What about the zipped files?
1
0
2026-03-01T23:41:54
Electrify338
false
null
0
o8535jv
false
/r/LocalLLaMA/comments/1ribmcg/how_to_run_qwen35_35b/o8535jv/
false
1
t1_o8531s3
Haha I just figured it out right as you commented :D
1
0
2026-03-01T23:41:16
c64z86
false
null
0
o8531s3
false
/r/LocalLLaMA/comments/1ribmcg/how_to_run_qwen35_35b/o8531s3/
false
1
t1_o852yag
You open a terminal in the folder you created. I've just figured out how to launch it so you can use it from a web browser: ".\llama-server.exe -m .\Qwen3.5-35B-A3B-UD-Q4_K_M.gguf --port 8080 --host 127.0.0.1" Type that, and it will throw a lot of words at you; when it's finished, just head over to [http://127.0.0.1:8080](http://127.0.0...
1
0
2026-03-01T23:40:42
c64z86
false
null
0
o852yag
false
/r/LocalLLaMA/comments/1ribmcg/how_to_run_qwen35_35b/o852yag/
false
1
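Once it is up, the same port also speaks an OpenAI-compatible API, so the browser UI is not the only way in. A minimal sketch against the standard llama-server endpoint:

```
# Chat completion against the local llama-server instance.
curl http://127.0.0.1:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"messages": [{"role": "user", "content": "Hello!"}]}'
```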
t1_o852wn1
BTW, now try llama-server instead of llama-cli, so you can connect with your browser.
3
0
2026-03-01T23:40:25
jacek2023
false
null
0
o852wn1
false
/r/LocalLLaMA/comments/1ribmcg/how_to_run_qwen35_35b/o852wn1/
false
3
t1_o852vkb
He has convinced himself that he can still "win" this, somehow. Deepseek changed *everything*. Open weights alone are enough to allow third-party customization. I would be much more limited in LLM capabilities (so much so that I probably would just not use them) if it weren't for Q4_K_M quantizations that make them...
4
0
2026-03-01T23:40:14
MushroomCharacter411
false
null
0
o852vkb
false
/r/LocalLLaMA/comments/1ria14c/dario_amodei_on_open_source_thoughts/o852vkb/
false
4
t1_o852sjj
But but... speed?
3
0
2026-03-01T23:39:44
guesdo
false
null
0
o852sjj
false
/r/LocalLLaMA/comments/1rhw16v/dense_nonthinking_moe_qwen3527b_is_blowing_me/o852sjj/
false
3
t1_o852p52
Yes, it's not rocket science, just friendly local AI.
1
0
2026-03-01T23:39:12
jacek2023
false
null
0
o852p52
false
/r/LocalLLaMA/comments/1ribmcg/how_to_run_qwen35_35b/o852p52/
false
1
t1_o852j4o
[deleted]
1
0
2026-03-01T23:38:12
[deleted]
true
null
0
o852j4o
false
/r/LocalLLaMA/comments/1rick3t/i_benchmarked_8_local_llms_for_phonetohome_chat/o852j4o/
false
1
t1_o852hzl
Ok, so I downloaded both folders. Do I extract them to a random folder and then copy over the model folder or the GGUF? And do I open the terminal in the folder I created, or just open the terminal?
1
0
2026-03-01T23:38:00
Electrify338
false
null
0
o852hzl
false
/r/LocalLLaMA/comments/1ribmcg/how_to_run_qwen35_35b/o852hzl/
false
1
t1_o852gsj
Damn, these look great! Can't wait to try them! That said, I am just waiting on the rest of the family, especially the embedding models.
1
0
2026-03-01T23:37:48
guesdo
false
null
0
o852gsj
false
/r/LocalLLaMA/comments/1ri2irg/breaking_today_qwen_35_small/o852gsj/
false
1
t1_o852c2r
Ah, LM Studio is a bit behind llama.cpp, and llama.cpp got performance improvements for Qwen. You should try the "experts on CPU" slider until you see the model fit in VRAM; 32-35 is a good ballpark. I recommend you use Jan or llama.cpp directly instead of LM Studio if you can, to do this automatically via "fit".
1
0
2026-03-01T23:37:02
Xantrk
false
null
0
o852c2r
false
/r/LocalLLaMA/comments/1ribmcg/how_to_run_qwen35_35b/o852c2r/
false
1
t1_o852bv8
And 20B models leave a lot of performance on the table compared to 30B models... and so on and so forth.
1
0
2026-03-01T23:37:00
guesdo
false
null
0
o852bv8
false
/r/LocalLLaMA/comments/1ri2irg/breaking_today_qwen_35_small/o852bv8/
false
1
t1_o8524be
Oprah Winfrey reference, from when she used to give prizes to the audience. YW!
13
0
2026-03-01T23:35:44
guesdo
false
null
0
o8524be
false
/r/LocalLLaMA/comments/1ri2irg/breaking_today_qwen_35_small/o8524be/
false
13
t1_o85242p
100% true. Also, it's a beautiful model, big and beautiful.
1
0
2026-03-01T23:35:42
voyager256
false
null
0
o85242p
false
/r/LocalLLaMA/comments/1rfg3kx/american_closed_models_vs_chinese_open_models_is/o85242p/
false
1
t1_o85243g
I'm getting "can't continue due to copyright laws" from the 35B; the 27B works fine though. Wish there was a way to jailbreak this model.
1
0
2026-03-01T23:35:42
adolfin4
false
null
0
o85243g
false
/r/LocalLLaMA/comments/1re17th/blown_away_by_qwen_35_35b_a3b/o85243g/
false
1
t1_o8521yo
What's the latency for speech stuff? Like if I wanted to "talk" to a model, would it take a bit to process and send back a response?
1
0
2026-03-01T23:35:20
Anarchaotic
false
null
0
o8521yo
false
/r/LocalLLaMA/comments/1qpneiq/amd_strix_halo_gmtek_128gb_unified_rocks/o8521yo/
false
1
t1_o851w9b
Does increasing VRAM allocation help with speeds for stuff under 96gb? Or is it just to run larger models/context?
1
0
2026-03-01T23:34:23
Anarchaotic
false
null
0
o851w9b
false
/r/LocalLLaMA/comments/1qpneiq/amd_strix_halo_gmtek_128gb_unified_rocks/o851w9b/
false
1
t1_o851vof
So your credentials are that you went to university for CS?? Buddy, I've got over 20 yrs dealing with systems and networks. If you gave Claude or Opus to DARPA for like 3-4 years, yeah, it could control some specific weapons systems. However, as it stands today, neither Claude nor Opus can or should be tasked with controlling weapon...
1
0
2026-03-01T23:34:17
illicITparameters
false
null
0
o851vof
false
/r/LocalLLaMA/comments/1rhogov/the_us_used_anthropic_ai_tools_during_airstrikes/o851vof/
false
1
t1_o851qbg
12GB VRAM and 32GB RAM. It sure is spilling to CPU; a decent quant can't fit in VRAM. The only quants that could fit are the 2-bit quants or some 3-bit XS. Rather than use those, I'm better off with the 5-bit of the 35B MoE. Edit: the tested context window for the dense model was 4k; with the MoE I can push it to 80k at 23 t/s.
1
0
2026-03-01T23:33:23
MaCl0wSt
false
null
0
o851qbg
false
/r/LocalLLaMA/comments/1rgtxry/is_qwen35_a_coding_game_changer_for_anyone_else/o851qbg/
false
1
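The 80k-context MoE setup above works by keeping attention and the KV cache in VRAM while expert weights spill to system RAM. A hedged sketch of how that is commonly done in llama.cpp; `--n-cpu-moe` exists in recent builds and the older `--override-tensor`/`-ot` regex route does the same thing, but both depend on version, and the filename and layer count here are illustrative:

```
# Keep shared layers + KV cache in 12GB VRAM, push MoE expert weights to CPU RAM.
llama-server -m Qwen3.5-35B-A3B-Q5_K_M.gguf -ngl 99 --n-cpu-moe 32 -c 80000

# Same idea on older builds: route expert tensors to CPU by name pattern.
llama-server -m Qwen3.5-35B-A3B-Q5_K_M.gguf -ngl 99 -ot "exps=CPU" -c 80000
```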
t1_o851qbm
[removed]
1
0
2026-03-01T23:33:23
[deleted]
true
null
0
o851qbm
false
/r/LocalLLaMA/comments/1ri232z/worth_it_to_buy_tesla_p40s/o851qbm/
false
1
t1_o851pbc
Ordered one last week, I can't wait! Do you have any tips on setting it up? I want it to run headless, so I will definitely go Linux.
1
0
2026-03-01T23:33:13
Anarchaotic
false
null
0
o851pbc
false
/r/LocalLLaMA/comments/1qpneiq/amd_strix_halo_gmtek_128gb_unified_rocks/o851pbc/
false
1
t1_o851o4p
This. A Raspberry Pi is perfect and half the price of a Mac. I'm planning to test it out just with a Docker container; haven't yet. Any old cheap laptop should do fine as well. Talking 10-year-old laptops that were $400 brand new. I imagine you can get something just fine for under $200 easy. Maybe even one of those crappy...
1
0
2026-03-01T23:33:01
woundedkarma
false
null
0
o851o4p
false
/r/LocalLLaMA/comments/1r9gve8/i_feel_left_behind_what_is_special_about_openclaw/o851o4p/
false
1
t1_o851mqt
So you probably only need one file from here: [https://github.com/ggml-org/llama.cpp/releases/tag/b8184](https://github.com/ggml-org/llama.cpp/releases/tag/b8184). I don't use binaries so I don't know what's inside, but I assume Windows CUDA is what you need.
1
0
2026-03-01T23:32:47
jacek2023
false
null
0
o851mqt
false
/r/LocalLLaMA/comments/1ribmcg/how_to_run_qwen35_35b/o851mqt/
false
1
t1_o851mhg
If you are GPU poor, then don't complain about the models as you can't run them as they should be used. That's misleading.
1
0
2026-03-01T23:32:45
Iory1998
false
null
0
o851mhg
false
/r/LocalLLaMA/comments/1ri1hgv/a_bit_of_a_psa_i_get_that_qwen35_is_all_the_rage/o851mhg/
false
1
t1_o851jw0
update: take a look at this in openwebui's most recent change log: * 🧠 **Reasoning model KV cache preservation.** Reasoning model thinking tags are no longer stored as HTML in the database, preserving KV cache efficiency for backends like llama.cpp and ensuring faster subsequent conversation turns. [#21815](https://...
2
0
2026-03-01T23:32:19
Far-Low-4705
false
null
0
o851jw0
false
/r/LocalLLaMA/comments/1rh43za/qwen_3535ba3b_is_beyond_expectations_its_replaced/o851jw0/
false
2
t1_o851htq
What's the best way to run Qwen on iOS?
1
0
2026-03-01T23:31:59
muzerfuker
false
null
0
o851htq
false
/r/LocalLLaMA/comments/1ri2irg/breaking_today_qwen_35_small/o851htq/
false
1
t1_o851f1p
Fair enough.
1
0
2026-03-01T23:31:32
Iory1998
false
null
0
o851f1p
false
/r/LocalLLaMA/comments/1ri39a4/qwen35_35b_a3b_first_small_model_to_not/o851f1p/
false
1
t1_o851e7o
No idea what this means
1
0
2026-03-01T23:31:23
Ylsid
false
null
0
o851e7o
false
/r/LocalLLaMA/comments/1riat5w/vignettes_handy_for_ais/o851e7o/
false
1
t1_o851e5i
I have the Q4_K_M model installed; it's a GGUF.
1
0
2026-03-01T23:31:23
Electrify338
false
null
0
o851e5i
false
/r/LocalLLaMA/comments/1ribmcg/how_to_run_qwen35_35b/o851e5i/
false
1
t1_o85191i
Sure! I went to download llama.cpp first here: [Releases · ggml-org/llama.cpp](https://github.com/ggml-org/llama.cpp/releases) I selected Windows x64 CUDA 13 and also the DLLs alongside it. I then put them into the same folder (just name it anything). I then copied over the model I had already downloaded in LM Studio...
2
0
2026-03-01T23:30:32
c64z86
false
null
0
o85191i
false
/r/LocalLLaMA/comments/1ribmcg/how_to_run_qwen35_35b/o85191i/
false
2
t1_o85182e
llama.cpp uses GGUF; software like LM Studio uses llama.cpp, so it also uses GGUF. But I don't know which GGUF you have; I gave you a specific size (Q4).
1
0
2026-03-01T23:30:23
jacek2023
false
null
0
o85182e
false
/r/LocalLLaMA/comments/1ribmcg/how_to_run_qwen35_35b/o85182e/
false
1
t1_o8513su
I think of prompt engineering as mostly a relic of the era when RLHF was less commonplace or less advanced and models were much dumber.
3
0
2026-03-01T23:29:41
Economy_Cabinet_7719
false
null
0
o8513su
false
/r/LocalLLaMA/comments/1riboy2/learnt_about_emergent_intention_maybe_prompt/o8513su/
false
3
t1_o8511mb
Can you imagine what a terrible product Opus 4.6 is if it actually cost what they charge to run it?
3
0
2026-03-01T23:29:19
Alive_Interaction835
false
null
0
o8511mb
false
/r/LocalLLaMA/comments/1ria14c/dario_amodei_on_open_source_thoughts/o8511mb/
false
3
t1_o850zom
Ok, sorry if I am giving you a hard time, but I have some more questions: can I use the models downloaded from LM Studio with llama.cpp, or will I have to redownload them?
1
0
2026-03-01T23:29:01
Electrify338
false
null
0
o850zom
false
/r/LocalLLaMA/comments/1ribmcg/how_to_run_qwen35_35b/o850zom/
false
1
t1_o850wka
You may as well run it from a swap file on an SSD; you'll get better speeds. There is a reason why NVIDIA has high-speed interconnects between everything in their server racks.
1
0
2026-03-01T23:28:31
jtjstock
false
null
0
o850wka
false
/r/LocalLLaMA/comments/1ribx4f/sharded_deployment/o850wka/
false
1
t1_o850w2w
Happy to share on request: - Exact run config + CLI commands for reproduction - KPI formula derivation and axis normalization notes - magistral:24b exclusion write-up (thermal instability + timeout analysis) - Full benchmark reports. The repo: [https://github.com/JoseviOliveira/my-gpt](https://github.com/Jos...
1
0
2026-03-01T23:28:26
Vivid-Gur2349
false
null
0
o850w2w
false
/r/LocalLLaMA/comments/1rick3t/i_benchmarked_8_local_llms_for_phonetohome_chat/o850w2w/
false
1
t1_o850sab
To all the comments that could not follow: ???? It made perfect sense to me. My brain catalogs stuff and compartmentalized most of what he said just fine. I do not talk fast like this, but what he said made sense, and he brought up some valid points about what we are looking at with open source and why we sho...
1
0
2026-03-01T23:27:50
Ztoxed
false
null
0
o850sab
false
/r/LocalLLaMA/comments/1ria14c/dario_amodei_on_open_source_thoughts/o850sab/
false
1
t1_o850qd4
The thing you refuse to do: install llama.cpp. Just download one file and be a happy person.
1
0
2026-03-01T23:27:30
jacek2023
false
null
0
o850qd4
false
/r/LocalLLaMA/comments/1ribmcg/how_to_run_qwen35_35b/o850qd4/
false
1
t1_o850kme
Can you tell me what you did to achieve this performance?
1
0
2026-03-01T23:26:35
Electrify338
false
null
0
o850kme
false
/r/LocalLLaMA/comments/1ribmcg/how_to_run_qwen35_35b/o850kme/
false
1
t1_o850iy7
He seems to be getting fantastic results, but what did he do?
1
0
2026-03-01T23:26:19
Electrify338
false
null
0
o850iy7
false
/r/LocalLLaMA/comments/1ribmcg/how_to_run_qwen35_35b/o850iy7/
false
1
t1_o850bzw
lol thanks! I knew that LM Studio's llama.cpp was outdated... but not THIS much!! Freaking hell!
1
0
2026-03-01T23:25:09
c64z86
false
null
0
o850bzw
false
/r/LocalLLaMA/comments/1ribmcg/how_to_run_qwen35_35b/o850bzw/
false
1
t1_o850b5f
Check the other guy's screenshot.
1
0
2026-03-01T23:25:01
jacek2023
false
null
0
o850b5f
false
/r/LocalLLaMA/comments/1ribmcg/how_to_run_qwen35_35b/o850b5f/
false
1
t1_o8509ne
Yes, local network. I was wondering if anyone has actually tried this.
1
0
2026-03-01T23:24:46
zica-do-reddit
false
null
0
o8509ne
false
/r/LocalLLaMA/comments/1ribx4f/sharded_deployment/o8509ne/
false
1
t1_o8508hq
Thanks for your reply, unfortunately not relatable to my case in full, but still a good info point for future reference. I suspect that my "findings" might be completely irrelevant as soon as I go into "GPU only" inference territory, however GPU + CPU offload is still something I'll most likely use, so I do need to loo...
1
0
2026-03-01T23:24:35
mdziekon
false
null
0
o8508hq
false
/r/LocalLLaMA/comments/1rhohqk/how_to_switch_qwen_35_thinking_onoff_without/o8508hq/
false
1
t1_o8508ik
Keep in mind the whole point of the swarm is that it just tries to record lessons for itself (although it is trying to help), and what it claims is LLMs saying things, so hallucination. That's why there are all these L-930 summaries (recorded in memory) and updates the next etc., and it mainly intends me to send the intended part...
1
0
2026-03-01T23:24:35
dafdaf1234444
false
null
0
o8508ik
false
/r/LocalLLaMA/comments/1ribqx1/swarm_self_prompting_protocol_with_a_single/o8508ik/
false
1
t1_o850884
Yeah, thing is I am still testing things out, and the GUI is more intuitive to me. Can you explain to me what I have here? If you can't, no problem, I'll research with Gemini and Claude.
1
0
2026-03-01T23:24:32
Electrify338
false
null
0
o850884
false
/r/LocalLLaMA/comments/1ribmcg/how_to_run_qwen35_35b/o850884/
false
1
t1_o850795
This ... enable_thinking false worked for me
1
0
2026-03-01T23:24:23
sephiroth_pradah
false
null
0
o850795
false
/r/LocalLLaMA/comments/1rhohqk/how_to_switch_qwen_35_thinking_onoff_without/o850795/
false
1
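With llama-server specifically, the switch travels through the chat template, so the server has to run with `--jinja` and the request carries the kwarg. A sketch assuming a recent llama.cpp build that accepts `chat_template_kwargs` in the request body:

```
# Start the server with Jinja templating so template kwargs are honored.
llama-server -m Qwen3.5-35B-A3B-UD-Q4_K_M.gguf --jinja --port 8080

# Request a non-thinking response.
curl http://127.0.0.1:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"messages": [{"role": "user", "content": "Hi"}], "chat_template_kwargs": {"enable_thinking": false}}'
```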
t1_o8505bo
Really nice! What is your overall VRAM usage at 170k context?
4
0
2026-03-01T23:24:04
oxygen_addiction
false
null
0
o8505bo
false
/r/LocalLLaMA/comments/1rianwb/running_qwen35_27b_dense_with_170k_context_at/o8505bo/
false
4
t1_o85025j
Congratulations, the world is saved again.
1
0
2026-03-01T23:23:33
jacek2023
false
null
0
o85025j
false
/r/LocalLLaMA/comments/1ribmcg/how_to_run_qwen35_35b/o85025j/
false
1
t1_o84zz78
Ok so I set the K cache to F16 (I have no idea what that really did) but I am able to get 125k context at 17 tokens per second https://preview.redd.it/zg66hiq4pimg1.png?width=1105&format=png&auto=webp&s=700dfce15889fde648f4b473af5731543918afc4
1
0
2026-03-01T23:23:05
Electrify338
false
null
0
o84zz78
false
/r/LocalLLaMA/comments/1ribmcg/how_to_run_qwen35_35b/o84zz78/
false
1
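For what it's worth, F16 is llama.cpp's default K-cache type, so setting it changed nothing by itself; the VRAM savings at long context come from quantizing the cache downward instead. A sketch of the relevant flags (names per `llama-server --help`; V-cache quantization generally requires flash attention):

```
# Large context with a quantized KV cache to fit more into VRAM.
# q8_0 roughly halves KV memory vs the default f16 with little quality loss.
llama-server -m Qwen3.5-35B-A3B-UD-Q4_K_M.gguf -c 125000 -fa \
  --cache-type-k q8_0 --cache-type-v q8_0
```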
t1_o84zyrr
I got a 5070ti, and before that I had an rtx 3050. It's surprising what you can do with just that
2
0
2026-03-01T23:23:00
cmdr-William-Riker
false
null
0
o84zyrr
false
/r/LocalLLaMA/comments/1rh9u4r/this_sub_is_incredible/o84zyrr/
false
2
t1_o84zwpa
Omg wow, I'm getting 57 tokens a second on mine now!! :o https://preview.redd.it/d81susb1pimg1.png?width=1483&format=png&auto=webp&s=fbb9f93c572da9e9924e2b87af0b39cba02ce77b
1
0
2026-03-01T23:22:40
c64z86
false
null
0
o84zwpa
false
/r/LocalLLaMA/comments/1ribmcg/how_to_run_qwen35_35b/o84zwpa/
false
1
t1_o84zw0z
Yes, you can randomly move sliders, but the command line is easier to test and reproduce.
1
0
2026-03-01T23:22:33
jacek2023
false
null
0
o84zw0z
false
/r/LocalLLaMA/comments/1ribmcg/how_to_run_qwen35_35b/o84zw0z/
false
1
t1_o84zvvc
Drunk bloke in the pub. /s
1
0
2026-03-01T23:22:32
ChibbleChobble
false
null
0
o84zvvc
false
/r/LocalLLaMA/comments/1rhogov/the_us_used_anthropic_ai_tools_during_airstrikes/o84zvvc/
false
1
t1_o84zu26
With the K cache unified at F16, I got 17 tokens per second.
1
0
2026-03-01T23:22:14
Electrify338
false
null
0
o84zu26
false
/r/LocalLLaMA/comments/1ribmcg/how_to_run_qwen35_35b/o84zu26/
false
1
t1_o84zr0n
https://huggingface.co/kaitchup/Qwen3.5-27B-NVFP4
2
0
2026-03-01T23:21:45
this-just_in
false
null
0
o84zr0n
false
/r/LocalLLaMA/comments/1ri2irg/breaking_today_qwen_35_small/o84zr0n/
false
2