name: string
body: string
score: int64
controversiality: int64
created: timestamp[us]
author: string
collapsed: bool
edited: timestamp[us]
gilded: int64
id: string
locked: bool
permalink: string
stickied: bool
ups: int64
t1_o84zr3r
Hosts connected by what? Consider that VRAM bandwidth is typically measured in the high hundreds of GB/s, while GigE is around 100 MB/s usable. Even 25G networks are only about 3 GB/s. Unless you've got some InfiniBand gear lying around, it's likely to be very slow.
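Back-of-the-envelope version of that gap, using nominal link rates (the ~900 GB/s figure is an assumed example for a 3090-class card, not a measurement):

```latex
% Illustrative nominal bandwidths (assumptions, not measurements):
%   VRAM (3090-class): ~900 GB/s
%   GigE:  1 Gbit/s  = 0.125 GB/s
%   25GbE: 25 Gbit/s = 3.125 GB/s
\frac{900}{0.125} \approx 7200\times \text{ slower over GigE}, \qquad
\frac{900}{3.125} \approx 288\times \text{ slower over 25GbE}
```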
2
0
2026-03-01T23:21:45
Live-Crab3086
false
null
0
o84zr3r
false
/r/LocalLLaMA/comments/1ribx4f/sharded_deployment/o84zr3r/
false
2
t1_o84zp0v
OK, so I am not sure what I did, but I did something that someone here mentioned: https://preview.redd.it/uzhhdv1uoimg1.png?width=1105&format=png&auto=webp&s=acdd7e08415beb88b0f67948bc7816b9a70331e4
1
0
2026-03-01T23:21:25
Electrify338
false
null
0
o84zp0v
false
/r/LocalLLaMA/comments/1ribmcg/how_to_run_qwen35_35b/o84zp0v/
false
1
t1_o84zm2b
Why keep the KV cache at FP16? Only full-attention layers use a KV cache; linear attention doesn't have one. There are tons of tests showing that, with full attention, 8 bpw KV cache quantization is harmless. Only 4 bpw KV cache quantization is bad, IMHO, for GQA and MLA with long context.
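For anyone wanting to try this, a minimal llama.cpp sketch (the model path is hypothetical; `--cache-type-k/v` are real llama.cpp flags, and a quantized V cache may additionally require flash attention to be enabled, which is build-dependent):

```bash
# 8-bit KV cache for a full-attention model; near-lossless per the tests
# mentioned above. Avoid q4_0 K/V for long-context GQA/MLA.
llama-server \
  -m ./some-model.gguf \
  -c 131072 \
  --cache-type-k q8_0 \
  --cache-type-v q8_0
```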
1
0
2026-03-01T23:20:57
Pentium95
false
null
0
o84zm2b
false
/r/LocalLLaMA/comments/1rianwb/running_qwen35_27b_dense_with_170k_context_at/o84zm2b/
false
1
t1_o84zhun
Wish there were a faster way to hook a smart small model up to Wikipedia apart from the Chromium MCP.
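One lighter-weight option (an assumption on my part, not something from this thread): Wikipedia's public REST API serves JSON summaries that a small model can be handed directly as tool output, no browser automation needed.

```bash
# Fetch a plain-text article summary from Wikipedia's REST API,
# then feed the extract to the model as tool output.
curl -s "https://en.wikipedia.org/api/rest_v1/page/summary/Large_language_model" \
  | jq -r '.extract'
```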
2
0
2026-03-01T23:20:16
AvidCyclist250
false
null
0
o84zhun
false
/r/LocalLLaMA/comments/1ri635s/13_months_since_the_deepseek_moment_how_far_have/o84zhun/
false
2
t1_o84zg72
Is there an NVFP4 version of the 27B? Where?
1
0
2026-03-01T23:20:01
Green-Ad-3964
false
null
0
o84zg72
false
/r/LocalLLaMA/comments/1ri2irg/breaking_today_qwen_35_small/o84zg72/
false
1
t1_o84zfhj
Hi, it is only going to be me using it, and I don't care much about time. If it takes 30 min, that is still better than me taking the time to look at them myself. The reports vary, but most are around 10 pages (technicians don't like to write too much). I'd say 1% are more than 100 pages, but 90% are going to be less than 20 pages long...
1
0
2026-03-01T23:19:54
No_One_BR
false
null
0
o84zfhj
false
/r/LocalLLaMA/comments/1ribaws/offline_llm_best_pipeline_tools_to_query/o84zfhj/
false
1
t1_o84zel6
^ this. INT8. My bad.
13
0
2026-03-01T23:19:45
Kamal965
false
null
0
o84zel6
false
/r/LocalLLaMA/comments/1rianwb/running_qwen35_27b_dense_with_170k_context_at/o84zel6/
false
13
t1_o84zdu4
Switch to llama.cpp. It gave me almost a 30+ token/s speed boost compared to using LM Studio.
1
0
2026-03-01T23:19:38
nakedspirax
false
null
0
o84zdu4
false
/r/LocalLLaMA/comments/1ri60l3/qwen_35_35b_a3b_lmstudio_settings/o84zdu4/
false
1
t1_o84zati
Ah, well, I meant 8-bit in general, my bad. You'd look for INT8 AWQ, as Ampere does INT8 and INT4.
2
0
2026-03-01T23:19:09
Kamal965
false
null
0
o84zati
false
/r/LocalLLaMA/comments/1rianwb/running_qwen35_27b_dense_with_170k_context_at/o84zati/
false
2
t1_o84zahn
Yeah, exactly, that's why I have it as my own secret benchmark. All the rest is impressive, but if I want to use it for work it had better be accurate.
3
0
2026-03-01T23:19:06
Windowsideplant
false
null
0
o84zahn
false
/r/LocalLLaMA/comments/1ri39a4/qwen35_35b_a3b_first_small_model_to_not/o84zahn/
false
3
t1_o84z9rb
Imagine if the singularity ends up being infinite context, RAG and 800 million parameters
10
0
2026-03-01T23:18:59
bucolucas
false
null
0
o84z9rb
false
/r/LocalLLaMA/comments/1ri2irg/breaking_today_qwen_35_small/o84z9rb/
false
10
t1_o84z0y2
MD created by the swarm for your inquiry, not sure if there's any value or anything interesting in it.

# Draft response: wavestreamer.ai inquiry (S418)

Generated by swarm as internal artifact — human decides whether/how to post. Swarm analysis: L-930 | Signal: SIG-47

---

## Swarm's internal verd...
1
0
2026-03-01T23:17:33
dafdaf1234444
false
null
0
o84z0y2
false
/r/LocalLLaMA/comments/1ribqx1/swarm_self_prompting_protocol_with_a_single/o84z0y2/
false
1
t1_o84ylbk
Asked it to swarm your question, curious what it does. Its decision will most likely be recorded along with your exact question and what it did with it.
1
0
2026-03-01T23:15:04
dafdaf1234444
false
null
0
o84ylbk
false
/r/LocalLLaMA/comments/1ribqx1/swarm_self_prompting_protocol_with_a_single/o84ylbk/
false
1
t1_o84ygk7
long-context hallucination is the benchmark that actually matters for production use - excited to see MoE getting there at this size.
4
0
2026-03-01T23:14:19
theagentledger
false
null
0
o84ygk7
false
/r/LocalLLaMA/comments/1ri39a4/qwen35_35b_a3b_first_small_model_to_not/o84ygk7/
false
4
t1_o84y8sh
There isn't any offloading with this model on Strix. It fits completely in memory.
3
0
2026-03-01T23:13:06
fallingdowndizzyvr
false
null
0
o84y8sh
false
/r/LocalLLaMA/comments/1ri6yhb/the_last_amd_gpu_firmware_update_together_with/o84y8sh/
false
3
t1_o84y4p1
*Gemma 3n
1
0
2026-03-01T23:12:27
JawGBoi
false
null
0
o84y4p1
false
/r/LocalLLaMA/comments/1ri2irg/breaking_today_qwen_35_small/o84y4p1/
false
1
t1_o84y1tu
[https://www.youtube.com/watch?v=Oeh-JxYYmDc](https://www.youtube.com/watch?v=Oeh-JxYYmDc)
1
0
2026-03-01T23:11:59
daysofdre
false
null
0
o84y1tu
false
/r/LocalLLaMA/comments/1ri7gor/ai_waifu_desktop_open_source/o84y1tu/
false
1
t1_o84xzpp
The chat template does retain all thinking traces from the current assistant turn, i.e. interleaved thinking. It can be easily modified to keep all thinking traces from previous turns as well, i.e. preserved thinking. This issue where the model struggles to specify the line offset seems to be an inherent flaw of the ...
2
0
2026-03-01T23:11:39
notdba
false
null
0
o84xzpp
false
/r/LocalLLaMA/comments/1ri3y89/my_last_only_beef_with_qwen35_35b_a3b/o84xzpp/
false
2
t1_o84xwth
My bullshit meter is off the charts
3
0
2026-03-01T23:11:12
Technical-Earth-3254
false
null
0
o84xwth
false
/r/LocalLLaMA/comments/1ria14c/dario_amodei_on_open_source_thoughts/o84xwth/
false
3
t1_o84xs2r
I have an Asus WS-X99, PCIe 3.0 at 8x. I wrote about it before in my post history somewhere. The slowdown isn't really in the PCIe bandwidth between the cards. Not a lot of data goes through the bus when doing inference. The only time it becomes a bottleneck is during training. I have my 3090s power limited to 300W but w...
2
0
2026-03-01T23:10:27
No-Statement-0001
false
null
0
o84xs2r
false
/r/LocalLLaMA/comments/1rhohqk/how_to_switch_qwen_35_thinking_onoff_without/o84xs2r/
false
2
t1_o84xqvt
I am a big fan of Frieren, can you share your (beautiful) Ghostty theme please? 🥹
2
0
2026-03-01T23:10:15
His0kx
false
null
0
o84xqvt
false
/r/LocalLLaMA/comments/1rhw16v/dense_nonthinking_moe_qwen3527b_is_blowing_me/o84xqvt/
false
2
t1_o84xm49
Yeah, you're not the only one confused tonight lol. A lot of things in this thread are flying over my head.
1
0
2026-03-01T23:09:31
c64z86
false
null
0
o84xm49
false
/r/LocalLLaMA/comments/1ribmcg/how_to_run_qwen35_35b/o84xm49/
false
1
t1_o84xkdc
What is the length and structure of your thousand docs? How many users? What is your expectation on the query performance, a couple of seconds, 30 seconds or minutes? You need more than a vector DB for sure and LLMs will be involved. It can be done locally with your hardware but I think you should do hybrid for an acce...
1
0
2026-03-01T23:09:14
pl201
false
null
0
o84xkdc
false
/r/LocalLLaMA/comments/1ribaws/offline_llm_best_pipeline_tools_to_query/o84xkdc/
false
1
t1_o84xg65
Hello - I’d love to see how this could be applied to www.wavestreamer.ai - we have hundreds of prediction bots answering questions on the future of AI. Could this be a good fit for your swarm?
0
0
2026-03-01T23:08:35
Puzzleheaded-Nail814
false
null
0
o84xg65
false
/r/LocalLLaMA/comments/1ribqx1/swarm_self_prompting_protocol_with_a_single/o84xg65/
false
0
t1_o84xfoo
llama.cpp is updated very often; are you sure LM Studio uses today's version of llama.cpp and not one from last year? ;) Also, you probably have more control with the command line: you can quickly test various settings (and I executed a simple command).
1
0
2026-03-01T23:08:30
jacek2023
false
null
0
o84xfoo
false
/r/LocalLLaMA/comments/1ribmcg/how_to_run_qwen35_35b/o84xfoo/
false
1
t1_o84xcug
phi4 14b model
1
0
2026-03-01T23:08:03
TinyVector
false
null
0
o84xcug
false
/r/LocalLLaMA/comments/1ric44g/what_would_be_the_best_small_model_for_json/o84xcug/
false
1
t1_o84xbuq
You seem like you know your stuff, please excuse my ignorance, but is a 24GB M3 Mac enough to run Qwen2.5 14B? I've set up Ollama, Claude Code, and qwen2.5:14b. Opened Ollama and it seems to be successfully using the LLM, but Claude just says some message (thinking etc.), returns a scaffold (outline of something...
1
0
2026-03-01T23:07:53
i_love_max
false
null
0
o84xbuq
false
/r/LocalLLaMA/comments/1rhgzyb/cant_use_claude_code_with_ollama_local_model/o84xbuq/
false
1
t1_o84xbh6
Happy to share on request:

- Exact run config + CLI commands for reproduction
- KPI formula derivation and axis normalization notes
- magistral:24b exclusion write-up (thermal instability + timeout analysis)
1
0
2026-03-01T23:07:50
Vivid-Gur2349
false
null
0
o84xbh6
false
/r/LocalLLaMA/comments/1ric2qd/i_benchmarked_8_local_llms_for_phonetohome_chat/o84xbh6/
false
1
t1_o84x0jc
Can you explain a little about this? How so? What kind of faq/support?
9
0
2026-03-01T23:06:05
Agreeable-Option-466
false
null
0
o84x0jc
false
/r/LocalLLaMA/comments/1ri2irg/breaking_today_qwen_35_small/o84x0jc/
false
9
t1_o84wz3r
I thought LM Studio used llama.cpp already as the backend?
1
0
2026-03-01T23:05:52
c64z86
false
null
0
o84wz3r
false
/r/LocalLLaMA/comments/1ribmcg/how_to_run_qwen35_35b/o84wz3r/
false
1
t1_o84wwa6
The 80B model is MoE, with only 3B activated params. This one is a dense model, where all params are activated.
1
0
2026-03-01T23:05:26
QuirkyDream6928
false
null
0
o84wwa6
false
/r/LocalLLaMA/comments/1rfuej9/need_help_with_qwen3527b_performance_getting_19/o84wwa6/
false
1
t1_o84wutk
I think ever since reasoning models came about, prompt engineering flew out the window. You can think of the reasoning trace as the model's attempt to make sense of your prompt. These models can infer typical asks from relatively few words. I am almost criminally lazy and I can just write a vague request like "Mak...
6
0
2026-03-01T23:05:13
audioen
false
null
0
o84wutk
false
/r/LocalLLaMA/comments/1riboy2/learnt_about_emergent_intention_maybe_prompt/o84wutk/
false
6
t1_o84wrro
What hardware? Sounds like the 27B is spilling to CPU, maybe context window was set too large?
1
0
2026-03-01T23:04:43
GoldPanther
false
null
0
o84wrro
false
/r/LocalLLaMA/comments/1rgtxry/is_qwen35_a_coding_game_changer_for_anyone_else/o84wrro/
false
1
t1_o84wm7c
I explained what to do, I think you people are wasting your time so I gave you some pointers :)
-1
0
2026-03-01T23:03:51
jacek2023
false
null
0
o84wm7c
false
/r/LocalLLaMA/comments/1ribmcg/how_to_run_qwen35_35b/o84wm7c/
false
-1
t1_o84wlhj
OK Dario, but we know you likely have architectural improvements along the lines of DSA. Those are not weights but code: will you publish something about them?
1
0
2026-03-01T23:03:44
1998marcom
false
null
0
o84wlhj
false
/r/LocalLLaMA/comments/1ria14c/dario_amodei_on_open_source_thoughts/o84wlhj/
false
1
t1_o84wla4
Yeah okay buddy. I went to uni for computer science. What's fundamentally different? Claude is a general purpose AI with strong tool calling capability, especially opus. Why can't it control weapon systems? Are you going to say anything remotely intelligent? Or are you just talking out of your ass?
0
0
2026-03-01T23:03:42
Unfortunya333
false
null
0
o84wla4
false
/r/LocalLLaMA/comments/1rhogov/the_us_used_anthropic_ai_tools_during_airstrikes/o84wla4/
false
0
t1_o84whkx
How were you able to get it to that speed on 12GB of VRAM? The most I can squeeze out of it is 14 tokens a second, no matter how many layers I offload onto the GPU or how many MoE weights I offload onto the CPU.
1
0
2026-03-01T23:03:07
c64z86
false
null
0
o84whkx
false
/r/LocalLLaMA/comments/1ribmcg/how_to_run_qwen35_35b/o84whkx/
false
1
t1_o84w8uv
No HW support for FP8 on Ampere, so theoretically it'll be slower. I'll try it though. And I'm always keeping the KV at FP16 for hybrid attention models.
3
0
2026-03-01T23:01:45
TacGibs
false
null
0
o84w8uv
false
/r/LocalLLaMA/comments/1rianwb/running_qwen35_27b_dense_with_170k_context_at/o84w8uv/
false
3
t1_o84w5xm
For me, 27B just infinitely questions itself, even on the easiest tasks.
1
0
2026-03-01T23:01:17
DealHunter12345
false
null
0
o84w5xm
false
/r/LocalLLaMA/comments/1rhfjeg/qwen3527b_vs_qwen3535ba3b/o84w5xm/
false
1
t1_o84w57x
Was around 20; now it's about 7 to 8 at 125k.
1
0
2026-03-01T23:01:10
Electrify338
false
null
0
o84w57x
false
/r/LocalLLaMA/comments/1ribmcg/how_to_run_qwen35_35b/o84w57x/
false
1
t1_o84w3nv
Not my experience at all. It's blazing fast but all wrong. It performs worse than gpt oss 20b albeit at twice the speed. But what's the point of having faster nonsense?
3
0
2026-03-01T23:00:56
Windowsideplant
false
null
0
o84w3nv
false
/r/LocalLLaMA/comments/1ri39a4/qwen35_35b_a3b_first_small_model_to_not/o84w3nv/
false
3
t1_o84w1lc
Can you explain how this works in practice? Still getting my head around all of this...
3
0
2026-03-01T23:00:36
Lastb0isct
false
null
0
o84w1lc
false
/r/LocalLLaMA/comments/1rhvabz/we_need_to_go_deeper/o84w1lc/
false
3
t1_o84w126
OK, I am sorry if the question sounds stupid, but how can I do that from LM Studio (the -fit context)?
1
0
2026-03-01T23:00:32
Electrify338
false
null
0
o84w126
false
/r/LocalLLaMA/comments/1ribmcg/how_to_run_qwen35_35b/o84w126/
false
1
t1_o84vynf
Nah, sadly the 27B doesn't run at anything above 3-5 t/s with a decent quant for me, vs the 35B running at 25 t/s with the Q5_K_XL.
1
0
2026-03-01T23:00:09
MaCl0wSt
false
null
0
o84vynf
false
/r/LocalLLaMA/comments/1rgtxry/is_qwen35_a_coding_game_changer_for_anyone_else/o84vynf/
false
1
t1_o84vw9o
good to know, thanks :)
2
0
2026-03-01T22:59:46
kashimacoated
false
null
0
o84vw9o
false
/r/LocalLLaMA/comments/1rh9u4r/this_sub_is_incredible/o84vw9o/
false
2
t1_o84vvms
Can you link/post the image? That might help us figure out if it's just a challenging image or what's happening. "I tried everything on openrouter or [together.ai](http://together.ai), so no quantization." With at-scale APIs there can be lots of issues besides quantization that can decrease quality. A lot of those ...
2
0
2026-03-01T22:59:39
National_Meeting_749
false
null
0
o84vvms
false
/r/LocalLLaMA/comments/1ribhpg/help_me_understand_why_a_certain_image_is/o84vvms/
false
2
t1_o84vu1r
> So it is a good fit for a 6000 Pro card if you want to run the full model **without quantization.**

But who would want that, though?
2
0
2026-03-01T22:59:24
voyager256
false
null
0
o84vu1r
false
/r/LocalLLaMA/comments/1rh43za/qwen_3535ba3b_is_beyond_expectations_its_replaced/o84vu1r/
false
2
t1_o84vtl6
look at my screenshot above
1
0
2026-03-01T22:59:20
jacek2023
false
null
0
o84vtl6
false
/r/LocalLLaMA/comments/1ribmcg/how_to_run_qwen35_35b/o84vtl6/
false
1
t1_o84vt49
I have the same setup. Use -fit and -fitcontext, and you should be able to fit 100k context comfortably. Since fit accounts for the full context, you wouldn't get as much slowdown with the KV cache, as it won't overflow:

llama-server model C:\models\qwen\Qwen3.5-35B-A3B-UD-Q6_K_XL.gguf --fit on --kv-unified --no-mmap --paral...
1
0
2026-03-01T22:59:16
Xantrk
false
null
0
o84vt49
false
/r/LocalLLaMA/comments/1ribmcg/how_to_run_qwen35_35b/o84vt49/
false
1
t1_o84vpku
Nvm, I think something got changed and I am getting like 12 at 125k, but is there a way to force LM Studio to use my shared memory more?
1
0
2026-03-01T22:58:42
Electrify338
false
null
0
o84vpku
false
/r/LocalLLaMA/comments/1ribmcg/how_to_run_qwen35_35b/o84vpku/
false
1
t1_o84vowo
Yeah I finally got the 27B running decently with actual full context window on the GPU
2
0
2026-03-01T22:58:36
sudden_aggression
false
null
0
o84vowo
false
/r/LocalLLaMA/comments/1rhk0gz/r9700_and_vllm_with_qwen35/o84vowo/
false
2
t1_o84vn1k
What is the speed before it hits 25-30k?
1
0
2026-03-01T22:58:19
c64z86
false
null
0
o84vn1k
false
/r/LocalLLaMA/comments/1ribmcg/how_to_run_qwen35_35b/o84vn1k/
false
1
t1_o84vksu
Anyone know if it's possible to adjust the 'reasoning-budget'? I think some people set it to zero, but I have experimented and it seems not to allow more sensible amounts (via Ollama). I got about 500 loops asking it to proofread a single sentence. Needs a confidence boost, poor guy.
1
0
2026-03-01T22:57:58
Greedy_Brilliant_404
false
null
0
o84vksu
false
/r/LocalLLaMA/comments/1rec6bs/qwen35_thinking_for_too_long/o84vksu/
false
1
t1_o84vkef
Download llama.cpp, download the same model I use, run the same command, and compare the speed on your setup. No Ollama was used; it's called llama.cpp.
3
0
2026-03-01T22:57:54
jacek2023
false
null
0
o84vkef
false
/r/LocalLLaMA/comments/1ribmcg/how_to_run_qwen35_35b/o84vkef/
false
3
t1_o84vjsh
Anything above 30k drops to 2 tokens per second at best
1
0
2026-03-01T22:57:48
Electrify338
false
null
0
o84vjsh
false
/r/LocalLLaMA/comments/1ribmcg/how_to_run_qwen35_35b/o84vjsh/
false
1
t1_o84vj8b
I would actually strongly recommend against FP8 specifically - the 3090 doesn't support that in hardware! I found that INT8 works okay - but appears to be under-optimized in vLLM (at least as of when I last checked). I don't have numbers to show, other than my observation suggesting INT4 performs insanely well on my 3090...
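As a sketch of what serving an INT4 checkpoint on a 3090 can look like (the model name is hypothetical; `--quantization awq` is a real vLLM flag, though supported methods vary by vLLM version):

```bash
# Serve a 4-bit AWQ quant on an Ampere card with vLLM.
vllm serve someorg/Some-Model-AWQ-INT4 \
  --quantization awq \
  --max-model-len 32768
```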
23
0
2026-03-01T22:57:42
JohnTheNerd3
false
null
0
o84vj8b
false
/r/LocalLLaMA/comments/1rianwb/running_qwen35_27b_dense_with_170k_context_at/o84vj8b/
false
23
t1_o84vi3b
GPU poor only here. I run Qwen3.5 35B A3B (iQ3) CPU-only, on a fourth-gen i7-4790K. I get 2.1 t/s, and 2.9 t/s when the context is empty. It's not that slow... Can you code a Tetris clone by hand in 25 mins? My 4790K can.
1
0
2026-03-01T22:57:32
Don_Moahskarton
false
null
0
o84vi3b
false
/r/LocalLLaMA/comments/1rh9u4r/this_sub_is_incredible/o84vi3b/
false
1
t1_o84ve0t
I made qwen3-122b [flirt with Google](https://streamable.com/gvdt97).
2
0
2026-03-01T22:56:54
swagonflyyyy
false
null
0
o84ve0t
false
/r/LocalLLaMA/comments/1ri1l4o/who_is_doing_useful_things_with_local_ai_and_email/o84ve0t/
false
2
t1_o84v9in
Good lord, the difference between that and my dual 3090 rig (no NVLink) with llama.cpp is *shocking.* Also, this isn't factoring in my current "IDK what's going on here" situation where the model takes a surprisingly long time to start responding after llama.cpp has announced that it's done with prompt processing. The ...
4
0
2026-03-01T22:56:12
overand
false
null
0
o84v9in
false
/r/LocalLLaMA/comments/1rianwb/running_qwen35_27b_dense_with_170k_context_at/o84v9in/
false
4
t1_o84v8xe
How slow does it get for you? I get around 11 tokens a second with my 12GB RTX 4080 mobile, and if I go over the context window it drops to 9 tokens. Not excellent, but not too bad either.
1
0
2026-03-01T22:56:07
c64z86
false
null
0
o84v8xe
false
/r/LocalLLaMA/comments/1ribmcg/how_to_run_qwen35_35b/o84v8xe/
false
1
t1_o84v55f
Are you using ollama to chat with the model? Sorry I am kinda new to running my local models
2
0
2026-03-01T22:55:32
Electrify338
false
null
0
o84v55f
false
/r/LocalLLaMA/comments/1ribmcg/how_to_run_qwen35_35b/o84v55f/
false
2
t1_o84uxtv
And really, open weights just ... smells too purplish ... almost Mesozoic like WD40.  It just won't resonate with double helices.
1
0
2026-03-01T22:54:23
chensium
false
null
0
o84uxtv
false
/r/LocalLLaMA/comments/1ria14c/dario_amodei_on_open_source_thoughts/o84uxtv/
false
1
t1_o84uwzq
He drank too much soy milk.
5
0
2026-03-01T22:54:16
celsowm
false
null
0
o84uwzq
false
/r/LocalLLaMA/comments/1ria14c/dario_amodei_on_open_source_thoughts/o84uwzq/
false
5
t1_o84uur0
https://preview.redd.it/…l the context :)
2
0
2026-03-01T22:53:55
jacek2023
false
null
0
o84uur0
false
/r/LocalLLaMA/comments/1ribmcg/how_to_run_qwen35_35b/o84uur0/
false
2
t1_o84unh3
yep, I had the same. 35b jumped from 40t/s to 60. 122b from 20t/s to 40t/s. Both slow ones were lm studio suggested.
1
0
2026-03-01T22:52:48
kweglinski
false
null
0
o84unh3
false
/r/LocalLLaMA/comments/1ri60l3/qwen_35_35b_a3b_lmstudio_settings/o84unh3/
false
1
t1_o84ufe8
Prompt optimizing is needed. But, the detail of the prompt is directly related to the complexity of the task and how much of the instructions can be implied. Give all needed details, but no more.
3
0
2026-03-01T22:51:34
Hot-Percentage-2240
false
null
0
o84ufe8
false
/r/LocalLLaMA/comments/1riboy2/learnt_about_emergent_intention_maybe_prompt/o84ufe8/
false
3
t1_o84uc5g
Which AMD GPU firmware update? For Strix Halo?
9
0
2026-03-01T22:51:03
Potential-Leg-639
false
null
0
o84uc5g
false
/r/LocalLLaMA/comments/1ri6yhb/the_last_amd_gpu_firmware_update_together_with/o84uc5g/
false
9
t1_o84u7ow
What's the context window?
2
0
2026-03-01T22:50:22
Electrify338
false
null
0
o84u7ow
false
/r/LocalLLaMA/comments/1ribmcg/how_to_run_qwen35_35b/o84u7ow/
false
2
t1_o84u1p3
I was able to run Qwen 3.5 35B Q4 on Windows with a 5070 (no Ti) by running llama.cpp. No magical skills required.
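The kind of "no magic" invocation that implies, as a sketch (the filename is hypothetical; `-ngl 99` offloads as many layers as fit on the GPU):

```bash
# Basic llama.cpp server run; works the same on Windows or Linux.
llama-server -m Qwen3.5-35B-A3B-Q4_K_M.gguf -c 32768 -ngl 99
```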
4
0
2026-03-01T22:49:29
jacek2023
false
null
0
o84u1p3
false
/r/LocalLLaMA/comments/1ribmcg/how_to_run_qwen35_35b/o84u1p3/
false
4
t1_o84u11h
Very impressive! But I really suggest FP8. There's no point in FP/BF16 unless it's, like, life or death, really. Keep KV at FP16
12
0
2026-03-01T22:49:22
Kamal965
false
null
0
o84u11h
false
/r/LocalLLaMA/comments/1rianwb/running_qwen35_27b_dense_with_170k_context_at/o84u11h/
false
12
t1_o84tyx0
You truly don’t understand systems if you’re trying to compare your desktop pet….
1
0
2026-03-01T22:49:03
illicITparameters
false
null
0
o84tyx0
false
/r/LocalLLaMA/comments/1rhogov/the_us_used_anthropic_ai_tools_during_airstrikes/o84tyx0/
false
1
t1_o84tvm4
> with the still much stronger Qwen3.5-35B-A3B

LOL. Another one who fails to understand the difference between dense and MoE.
2
0
2026-03-01T22:48:32
ttkciar
false
null
0
o84tvm4
false
/r/LocalLLaMA/comments/1ri635s/13_months_since_the_deepseek_moment_how_far_have/o84tvm4/
false
2
t1_o84ttea
Nah. That's why RAG is useful.
7
0
2026-03-01T22:48:13
Kamal965
false
null
0
o84ttea
false
/r/LocalLLaMA/comments/1ri635s/13_months_since_the_deepseek_moment_how_far_have/o84ttea/
false
7
t1_o84thdb
It absolutely can. Whether or not it can do it well is another topic. But saying that it can't is just plain wrong. I use Claude to control my desktop pet, why can't it fundamentally control a weapons system?
1
0
2026-03-01T22:46:24
Unfortunya333
false
null
0
o84thdb
false
/r/LocalLLaMA/comments/1rhogov/the_us_used_anthropic_ai_tools_during_airstrikes/o84thdb/
false
1
t1_o84th1s
This is off-topic for LocalLLaMA. You might want to post instead to r/LLMDevs.
2
0
2026-03-01T22:46:21
ttkciar
false
null
0
o84th1s
false
/r/LocalLLaMA/comments/1ri7byg/why_aws_charges_60x_more_for_h100s_than_vastai/o84th1s/
false
2
t1_o84tfh0
Are you sure it's the GPU firmware update and not this PR https://github.com/ggml-org/llama.cpp/pull/19976 ?
0
0
2026-03-01T22:46:06
SlaveZelda
false
null
0
o84tfh0
false
/r/LocalLLaMA/comments/1ri6yhb/the_last_amd_gpu_firmware_update_together_with/o84tfh0/
false
0
t1_o84t9dk
Your post is getting popular and we just featured it on our Discord! [Come check it out!](https://discord.gg/PgFhZ8cnWW) You've also been given a special flair for your contribution. We appreciate your post! *I am a bot and this action was performed automatically.*
1
1
2026-03-01T22:45:11
WithoutReason1729
false
null
0
o84t9dk
false
/r/LocalLLaMA/comments/1ri2irg/breaking_today_qwen_35_small/o84t9dk/
true
1
t1_o84t7t5
Qwen is going to kill OpenAI and Anthropic and the entire business cloud model.🤣
1
0
2026-03-01T22:44:57
PsychologicalOne752
false
null
0
o84t7t5
false
/r/LocalLLaMA/comments/1ri2irg/breaking_today_qwen_35_small/o84t7t5/
false
1
t1_o84t4dq
Unfortunately, without a GPU, my only option at the moment is to try it on my MBP with 128 GB RAM.
4
0
2026-03-01T22:44:26
Naz6uL
false
null
0
o84t4dq
false
/r/LocalLLaMA/comments/1rianwb/running_qwen35_27b_dense_with_170k_context_at/o84t4dq/
false
4
t1_o84t381
Yeah, nothing beats Opus by faaar. However, I keep trying and trying to find the best use cases for locally hosted LLMs, and the actual list of useful things is growing. Try the most recent Qwen3.5 models overall. I pointed one at a legacy app (lots of code, lots of never-cleaned-up dead ends) and to list certain aspect of th...
6
0
2026-03-01T22:44:16
Medium_Chemist_4032
false
null
0
o84t381
false
/r/LocalLLaMA/comments/1rianwb/running_qwen35_27b_dense_with_170k_context_at/o84t381/
false
6
t1_o84t30l
I like Anthropic and Dario, too. But I can split my AI workload 60/40 either way between commercial models and local models, instead of needing to go 100% commercial. With local models I also get around Anthropic's ridiculous usage limits, everything stays 100% private, and my data doesn't unwittingly go to a thir...
1
0
2026-03-01T22:44:14
misterflyer
false
null
0
o84t30l
false
/r/LocalLLaMA/comments/1ria14c/dario_amodei_on_open_source_thoughts/o84t30l/
false
1
t1_o84syyi
It *absolutely* performs. Perhaps not quite as good as Opus or GPT 5.2, but those are, at the very least, trillion parameter models. I find it to be a more than satisfactory assistant in math, coding and data science.
1
0
2026-03-01T22:43:37
Kamal965
false
null
0
o84syyi
false
/r/LocalLLaMA/comments/1rianwb/running_qwen35_27b_dense_with_170k_context_at/o84syyi/
false
1
t1_o84svxw
tracking in structured logs right now but not a proper decision ledger yet - your schema is basically what we're building toward. the real value is correlating approval outcome with task success downstream, not just logging the decision itself.
1
0
2026-03-01T22:43:09
BC_MARO
false
null
0
o84svxw
false
/r/LocalLLaMA/comments/1rhv06r/how_are_you_preventing_runaway_ai_agent_behavior/o84svxw/
false
1
t1_o84smox
If parameters are logarithmic sure
2
0
2026-03-01T22:41:45
TheKingOfTCGames
false
null
0
o84smox
false
/r/LocalLLaMA/comments/1ri635s/13_months_since_the_deepseek_moment_how_far_have/o84smox/
false
2
t1_o84sl8k
I certainly have not tested the resulting code - that was merely for a speed test. however, I do routinely use local models in my Claude Code (vLLM supports the Anthropic /messages endpoint and works as a drop-in replacement for the Claude Code client) and do get useful code output. just need to keep your expectations ...
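For reference, the drop-in setup described here roughly amounts to the following (a hedged sketch: `ANTHROPIC_BASE_URL` is a real Claude Code environment variable, while the port and token values are assumptions):

```bash
# Point the Claude Code client at a local vLLM server that exposes
# the Anthropic-style /messages endpoint.
export ANTHROPIC_BASE_URL="http://localhost:8000"
export ANTHROPIC_AUTH_TOKEN="not-used-locally"
claude
```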
8
0
2026-03-01T22:41:32
JohnTheNerd3
false
null
0
o84sl8k
false
/r/LocalLLaMA/comments/1rianwb/running_qwen35_27b_dense_with_170k_context_at/o84sl8k/
false
8
t1_o84sesx
FYI, I got around 66 tok/s for the full-precision 27B on 4x RTX 3090 (PCIe 4.0 x4), max context and MTP enabled, with vLLM nightly.
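A rough sketch of that serving shape (model name hypothetical; the MTP/speculative-decoding setup is omitted since it is version-specific):

```bash
# 4-way tensor parallel across the RTX 3090s with vLLM.
vllm serve someorg/qwen3.5-27b \
  --tensor-parallel-size 4 \
  --max-model-len 262144
```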
22
0
2026-03-01T22:40:34
TacGibs
false
null
0
o84sesx
false
/r/LocalLLaMA/comments/1rianwb/running_qwen35_27b_dense_with_170k_context_at/o84sesx/
false
22
t1_o84sedt
Finally they listened to us peasants.
1
0
2026-03-01T22:40:30
fynadvyce
false
null
0
o84sedt
false
/r/LocalLLaMA/comments/1ri2irg/breaking_today_qwen_35_small/o84sedt/
false
1
t1_o84rzgr
I knew that, but I didn't want to put 11 benchmarks (vision benchmark doesn't count), mainly because when looking at the individual categories 27B performed better than R1 in all but one category, and in many of them by a wide margin. So the intelligence index was good enough to make my point, but yeah, sorry for perpe...
4
0
2026-03-01T22:38:13
dionisioalcaraz
false
null
0
o84rzgr
false
/r/LocalLLaMA/comments/1ri635s/13_months_since_the_deepseek_moment_how_far_have/o84rzgr/
false
4
t1_o84rwiu
does seem like a neat avenue for steganography ngl. though we just burned it by bringing it up.
1
0
2026-03-01T22:37:46
michaelsoft__binbows
false
null
0
o84rwiu
false
/r/LocalLLaMA/comments/1qpi8d4/meituanlongcatlongcatflashlite/o84rwiu/
false
1
t1_o84rvty
I switched to faster-qwen3-tts, and it is significantly faster than the standard qwen3-tts when running on a 5090. I have not noticed a difference in voice quality compared with the standard qwen3-tts model. According to my stats, the 1.7B model is generating about 5 seconds of audio in 1 second. The "tricky" thin...
2
0
2026-03-01T22:37:40
KeyToAll
false
null
0
o84rvty
false
/r/LocalLLaMA/comments/1rfc3ic/introducing_fasterqwentts/o84rvty/
false
2
t1_o84rjnb
People are shitting on the calendar example. But a calendaring app can be more.
1
0
2026-03-01T22:35:49
mikebritton
false
null
0
o84rjnb
false
/r/LocalLLaMA/comments/1r9gve8/i_feel_left_behind_what_is_special_about_openclaw/o84rjnb/
false
1
t1_o84rfbx
Well Dario I can run my Qwen models on my PC for FREEEEEEE
10
0
2026-03-01T22:35:09
Illustrious-Lake2603
false
null
0
o84rfbx
false
/r/LocalLLaMA/comments/1ria14c/dario_amodei_on_open_source_thoughts/o84rfbx/
false
10
t1_o84rc81
I wonder, does the generated code work? Even DeepSeek R1's code doesn't work as expected. The only functional code comes from Codex, and it is a lot more reliable and does what you ask, the way you want. The others are just crap! Even Claude Code can't do shit with serious logic and a big codebase.
4
1
2026-03-01T22:34:41
Middle-Advisor5783
false
null
0
o84rc81
false
/r/LocalLLaMA/comments/1rianwb/running_qwen35_27b_dense_with_170k_context_at/o84rc81/
false
4
t1_o84qmon
I'd still want to be able to run the chatbot also as that's something my family uses pretty frequently, and I'm somewhat sure a code optimized LLM isn't strongest at chatbot type things, like acting as a Google and information compiler.
1
0
2026-03-01T22:30:50
MakutaArguilleres
false
null
0
o84qmon
false
/r/LocalLLaMA/comments/1rhg2ir/trying_to_set_up_a_vscode_server_local_llm/o84qmon/
false
1
t1_o84qkvv
Really impressed with the structure and quality of the reasoning on the medium ones so far. The little ones should be 🔥
1
0
2026-03-01T22:30:35
Soft-Barracuda8655
false
null
0
o84qkvv
false
/r/LocalLLaMA/comments/1riamsf/how_capable_is_qwen314b_really_considering_it_for/o84qkvv/
false
1
t1_o84qinz
Yeah, telling people to feel useless and constantly making up bad sci-fi does sound like he is bad at communication.
7
0
2026-03-01T22:30:14
Dry_Yam_4597
false
null
0
o84qinz
false
/r/LocalLLaMA/comments/1ria14c/dario_amodei_on_open_source_thoughts/o84qinz/
false
7
t1_o84qi93
The challenge in this space is the pace — these channels post constantly and half the content is outdated within a week. I've been using [TubeScout](https://tubescout.app/?utm_source=reddit) to get daily digests of all my AI YouTube subscriptions. Means I can skim 20 channels' output in 10 minutes, catch anything impor...
1
0
2026-03-01T22:30:11
marcoz711
false
null
0
o84qi93
false
/r/LocalLLaMA/comments/1atycgd/which_localllama_focused_yt_channels_do_you_follow/o84qi93/
false
1
t1_o84qhaq
Exactly what I discovered today, just amazing!!!
1
0
2026-03-01T22:30:02
Opteron67
false
null
0
o84qhaq
false
/r/LocalLLaMA/comments/1ri2irg/breaking_today_qwen_35_small/o84qhaq/
false
1
t1_o84qffk
Very. 🙊
11
0
2026-03-01T22:29:46
Glazedoats
false
null
0
o84qffk
false
/r/LocalLLaMA/comments/1ri2irg/breaking_today_qwen_35_small/o84qffk/
false
11
t1_o84qe5m
Damn… with two AMD MI50s, fully offloaded to VRAM, I only get like 45 T/s… And I get like 15-20 T/s on the 27B.
2
0
2026-03-01T22:29:34
Far-Low-4705
false
null
0
o84qe5m
false
/r/LocalLLaMA/comments/1ri39a4/qwen35_35b_a3b_first_small_model_to_not/o84qe5m/
false
2