name
string
body
string
score
int64
controversiality
int64
created
timestamp[us]
author
string
collapsed
bool
edited
timestamp[us]
gilded
int64
id
string
locked
bool
permalink
string
stickied
bool
ups
int64
t1_o8720yc
I am using a single 3090 with a UD Q5_K_XL quant, getting around 30 t/s with llama.cpp. Are your settings transferable to llama.cpp?
1
0
2026-03-02T07:58:34
sabotage3d
false
null
0
o8720yc
false
/r/LocalLLaMA/comments/1rianwb/running_qwen35_27b_dense_with_170k_context_at/o8720yc/
false
1
t1_o871yzq
> use unix socket files on shared storage to avoid network latency

Think it through: you need a network to share storage.
3
0
2026-03-02T07:58:03
HopePupal
false
null
0
o871yzq
false
/r/LocalLLaMA/comments/1rimpwy/sharding_model_across_machines/o871yzq/
false
3
t1_o871yfm
Give me gpt-oss-20b or a Nemotron, give it a good system prompt and access to a set of good MCPs, such as search, memory, and Wikipedia - and it's actually more than enough for most uses.
2
0
2026-03-02T07:57:54
leonbollerup
false
null
0
o871yfm
false
/r/LocalLLaMA/comments/1rij4sj/what_is_the_most_ridiculously_good_goto_llm_for/o871yfm/
false
2
t1_o871xir
I can say: don't do it. You will not have enough performance with this number of cards and a slow connection between them. You also need to count PSU costs etc. I have just given up on 4x 7900 XTX and will in the future purchase only the largest-VRAM cards I can afford. Dealing with multiple cards in a homelab ...
4
0
2026-03-02T07:57:40
Frosty_Chest8025
false
null
0
o871xir
false
/r/LocalLLaMA/comments/1rimpwy/sharding_model_across_machines/o871xir/
false
4
t1_o871vrv
wow thanks
2
0
2026-03-02T07:57:12
gondoravenis
false
null
0
o871vrv
false
/r/LocalLLaMA/comments/1rij9k1/whats_the_best_local_model_i_can_run_with_8gb/o871vrv/
false
2
t1_o871umu
Nemotron, the latest Qwen, and gpt-oss-120b; make sure you give it access to search.
1
0
2026-03-02T07:56:54
leonbollerup
false
null
0
o871umu
false
/r/LocalLLaMA/comments/1rij4sj/what_is_the_most_ridiculously_good_goto_llm_for/o871umu/
false
1
t1_o871u3f
Maybe go back to basics and learn how this works, or even just how the hardware works, before going off on wild suggestions for solutions. Wild in the sense that you don't have to do that and it already exists, just in a different form due to how memory handling works (RDMA-based solutions clustering Macs or DGX Spark machines...
5
0
2026-03-02T07:56:46
tmvr
false
null
0
o871u3f
false
/r/LocalLLaMA/comments/1rimpwy/sharding_model_across_machines/o871u3f/
false
5
t1_o871m0h
Any second now. ...aaaany second now.
4
0
2026-03-02T07:54:41
Mickenfox
false
null
0
o871m0h
false
/r/LocalLLaMA/comments/1ri2irg/breaking_today_qwen_35_small/o871m0h/
false
4
t1_o871ek4
[removed]
1
0
2026-03-02T07:52:44
[deleted]
true
null
0
o871ek4
false
/r/LocalLLaMA/comments/1rgkc1b/back_in_my_day_localllama_were_the_pioneers/o871ek4/
false
1
t1_o871ehg
Genuinely curious: wouldn't I have consumed just as much electricity running Fortnite STW AFK for Endurances?
1
0
2026-03-02T07:52:43
paulisaac
false
null
0
o871ehg
false
/r/LocalLLaMA/comments/1rgkc1b/back_in_my_day_localllama_were_the_pioneers/o871ehg/
false
1
t1_o871840
FWIW, I think your visualization is by far the most useful; it gets the point across at a glance. I have my own set of Python scripts to run benchmarks and make graphs, and I wish I could make them as good as yours. Very interesting how Vulkan suddenly makes this jump just as I figure out how to fix my ROCm builds...
2
0
2026-03-02T07:51:05
spaceman_
false
null
0
o871840
false
/r/LocalLLaMA/comments/1ri6yhb/the_last_amd_gpu_firmware_update_together_with/o871840/
false
2
t1_o8716ke
Qwen3 Coder 30B A3B at Q4 should fit as well.
1
0
2026-03-02T07:50:40
tmvr
false
null
0
o8716ke
false
/r/LocalLLaMA/comments/1rimncl/whats_the_best_local_model_i_can_run_with_a/o8716ke/
false
1
t1_o8715r8
Isn't Mac doing this already?
3
0
2026-03-02T07:50:28
Euphoric_North_745
false
null
0
o8715r8
false
/r/LocalLLaMA/comments/1rimpwy/sharding_model_across_machines/o8715r8/
false
3
t1_o870ygc
Do you know of a way to improve it?
1
0
2026-03-02T07:48:34
Silver-Champion-4846
false
null
0
o870ygc
false
/r/LocalLLaMA/comments/1ri0n8p/llm_lora_on_the_fly_with_hypernetworks/o870ygc/
false
1
t1_o870wi0
You don't even need that many machines; there are commercial motherboards serving more than one GPU at once. Get 3-5 depending on slots.
0
0
2026-03-02T07:48:03
Active_Woodpecker683
false
null
0
o870wi0
false
/r/LocalLLaMA/comments/1rimpwy/sharding_model_across_machines/o870wi0/
false
0
t1_o870rh2
If that's your concern, wait, and plan on going with as much RAM as you can afford.
1
0
2026-03-02T07:46:43
Buddhabelli
false
null
0
o870rh2
false
/r/LocalLLaMA/comments/1rij4sj/what_is_the_most_ridiculously_good_goto_llm_for/o870rh2/
false
1
t1_o870oz4
Assuming we take both DeepSeek-V3.2's sparse attention and Qwen3.5's hybrid gated delta net and give them both an equal number of total and activated params (both 685B-A37B), then in terms of compute, throughput, and long-context memory usage, how would they differ in theory?
4
0
2026-03-02T07:46:03
True_Requirement_891
false
null
0
o870oz4
false
/r/LocalLLaMA/comments/1rim2y2/revisiting_minimaxs_article_on_their_decision_to/o870oz4/
false
4
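On the long-context memory part of that question, the first-order theoretical difference is generic: a softmax-attention KV cache grows linearly with sequence length (sparse attention cuts per-token compute but still stores the full cache), while a gated-delta-net style recurrent state is constant size. A minimal sketch of the standard estimates, with illustrative symbols not taken from either paper:

```latex
% Generic first-order memory estimates (illustrative symbols):
% softmax attention stores K and V for every past token;
% a gated-delta-net layer keeps a fixed-size recurrent state.
M_{\mathrm{KV}} = 2\, L\, H_{kv}\, d_{\mathrm{head}}\, n\, b
\qquad \text{vs.} \qquad
M_{\mathrm{state}} = L\, H\, d_{\mathrm{head}}\, d_{\mathrm{state}}\, b
% L: layers, H: heads, n: sequence length, b: bytes per element.
% Only M_KV depends on n, so at long context the hybrid's
% recurrent layers hold memory (and per-token compute) flat.
```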
t1_o870ixr
They measure KL divergence with only 100 prompts from mlabonne's harmless set; totally not enough for anything meaningful. I'm building my own abliteration framework, and a problem I ran into was that there's no great standard KL divergence test set. I'm building one that has a lot of GSM8K and complicated programming tasks...
1
0
2026-03-02T07:44:30
FaustAg
false
null
0
o870ixr
false
/r/LocalLLaMA/comments/1ri9enf/qwen35397b_uncensored_nvfp4/o870ixr/
false
1
t1_o870er0
I can help!
1
0
2026-03-02T07:43:22
Local_Phenomenon
false
null
0
o870er0
false
/r/LocalLLaMA/comments/1rc7ro3/divorce_attorney_built_a_26gpu_532gb_vram_cluster/o870er0/
false
1
t1_o870cyf
I'm GPU poor, but I will give it a shot. If prefill & generation speeds don't change much, it may be my go-to.
1
0
2026-03-02T07:42:54
ndiphilone
false
null
0
o870cyf
false
/r/LocalLLaMA/comments/1ri3y89/my_last_only_beef_with_qwen35_35b_a3b/o870cyf/
false
1
t1_o870apb
The suggested ones are the defaults, I believe; it didn't change. Still starts looping beyond 80k tokens.
1
0
2026-03-02T07:42:17
ndiphilone
false
null
0
o870apb
false
/r/LocalLLaMA/comments/1ri3y89/my_last_only_beef_with_qwen35_35b_a3b/o870apb/
false
1
t1_o8709ca
While the models do resemble their companies, it's closer to company culture and ethics and not necessarily tied to the CEO. Also those descriptions are way too simplistic. I developed a "personality" benchmark of close to 6000 questions and based on Gemma 3, I'd say it's "don't be evil" that is still valid for Google...
3
0
2026-03-02T07:41:56
UncleRedz
false
null
0
o8709ca
false
/r/LocalLLaMA/comments/1rik3ge/what_is_the_personality_of_a_chinese_llm_when/o8709ca/
false
3
t1_o8708wn
Here is 29.7 t/s unsloth mxfp4_moe vs 30.3 t/s AesSedai Q4_K_M. Same question, >1000 tokens.
1
0
2026-03-02T07:41:49
R_Duncan
false
null
0
o8708wn
false
/r/LocalLLaMA/comments/1rg4zqv/followup_qwen3535ba3b_7_communityrequested/o8708wn/
false
1
t1_o8706gz
I found that this helps with random OOMs happening with parallel requests when prompt caching is enabled
1
0
2026-03-02T07:41:08
ndiphilone
false
null
0
o8706gz
false
/r/LocalLLaMA/comments/1ri3y89/my_last_only_beef_with_qwen35_35b_a3b/o8706gz/
false
1
t1_o87068t
Qwen3-4B-Instruct-2507-Q8_0
2
0
2026-03-02T07:41:05
Pille5
false
null
0
o87068t
false
/r/LocalLLaMA/comments/1rij9k1/whats_the_best_local_model_i_can_run_with_8gb/o87068t/
false
2
t1_o8705ip
I can confirm that AWQ quants of Qwen3.5 27B and 35B in vLLM reprocess the whole sequence each time too. Given that the vLLM implementation is officially done by Qwen, it looks like this is intended behaviour and there's nothing to be done.
2
0
2026-03-02T07:40:53
No-Refrigerator-1672
false
null
0
o8705ip
false
/r/LocalLLaMA/comments/1ricz8u/notice_qwen_35_reprocessing_the_prompt_every_time/o8705ip/
false
2
t1_o8705e1
If you absolutely had to compare models using only a single benchmark, or by calculating some kind of average score across a few specific benchmarks, how would you do it?
1
0
2026-03-02T07:40:51
bambamlol
false
null
0
o8705e1
false
/r/LocalLLaMA/comments/1ri635s/13_months_since_the_deepseek_moment_how_far_have/o8705e1/
false
1
t1_o87026v
https://preview.redd.it/…m are efficient?
1
0
2026-03-02T07:39:59
Fine_Factor_456
false
null
0
o87026v
false
/r/LocalLLaMA/comments/1ri1rit/running_qwen314b_93gb_on_a_cpuonly_kvm_vps_what/o87026v/
false
1
t1_o86zxmc
And that's different from when people do the exact same thing... how?
2
0
2026-03-02T07:38:45
YRUTROLLINGURSELF
false
null
0
o86zxmc
false
/r/LocalLLaMA/comments/1rhogov/the_us_used_anthropic_ai_tools_during_airstrikes/o86zxmc/
false
2
t1_o86zxes
Qwen. Period
1
0
2026-03-02T07:38:42
shoeshineboy_99
false
null
0
o86zxes
false
/r/LocalLLaMA/comments/1rij4sj/what_is_the_most_ridiculously_good_goto_llm_for/o86zxes/
false
1
t1_o86ztt6
Is it okay if I DM you?
1
0
2026-03-02T07:37:43
Fine_Factor_456
false
null
0
o86ztt6
false
/r/LocalLLaMA/comments/1ri1rit/running_qwen314b_93gb_on_a_cpuonly_kvm_vps_what/o86ztt6/
false
1
t1_o86zn2k
DeepSeek doesn't have a capable small model.
2
0
2026-03-02T07:35:54
zball_
false
null
0
o86zn2k
false
/r/LocalLLaMA/comments/1ri0puh/honor_would_use_deepseek/o86zn2k/
false
2
t1_o86zmxx
When the vibecoders, benchmaxxers, and grifter bros dominated the discussion. You can laugh at those people lamenting 4o being gone, but that's a pretty important recent lesson in why LLMs need to be uncensored, and local.
1
0
2026-03-02T07:35:53
gripntear
false
null
0
o86zmxx
false
/r/LocalLLaMA/comments/1rgkc1b/back_in_my_day_localllama_were_the_pioneers/o86zmxx/
false
1
t1_o86zlr8
Does this also apply to the 3.5 122b model?
3
0
2026-03-02T07:35:34
Zhelgadis
false
null
0
o86zlr8
false
/r/LocalLLaMA/comments/1rik253/psa_qwen_35_requires_bf16_kv_cache_not_f16/o86zlr8/
false
3
t1_o86zhr2
Qwen3.5-27B-FP8, from Qwen: https://codepen.io/exploding_battery/pen/jEMbyxy
4
0
2026-03-02T07:34:31
Klutzy-Snow8016
false
null
0
o86zhr2
false
/r/LocalLLaMA/comments/1rianwb/running_qwen35_27b_dense_with_170k_context_at/o86zhr2/
false
4
t1_o86zbt8
I do not use Ollama; it has worse performance than llama.cpp and requires separate model downloads, over half a TB in size in the case of Kimi K2.5. llama.cpp is OpenAI-compatible; it works like the OpenAI API except it uses a local endpoint with a port, like [http://127.0.0.1:5000/v1](http://127.0.0.1:5000/v1) - but I...
3
0
2026-03-02T07:32:55
Lissanro
false
null
0
o86zbt8
false
/r/LocalLLaMA/comments/1riliyt/i_made_a_free_local_ai_roleplay_horror_game/o86zbt8/
false
3
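A minimal sketch of calling that OpenAI-compatible endpoint from Python, assuming a llama-server instance is already listening on 127.0.0.1:5000 (the port from the comment above); the `model` value is a placeholder, since llama.cpp serves whatever model it was launched with:

```python
# Minimal sketch: talk to a llama.cpp server through its OpenAI-compatible
# /v1/chat/completions route. Assumes llama-server is already running on
# port 5000; the "model" field is a placeholder that llama.cpp ignores.
import requests

resp = requests.post(
    "http://127.0.0.1:5000/v1/chat/completions",
    json={
        "model": "local",  # placeholder; the server uses its loaded model
        "messages": [{"role": "user", "content": "Say hello in one sentence."}],
        "max_tokens": 64,
    },
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```

Because the route follows the OpenAI wire format, any OpenAI-style client should also work by pointing its base URL at the local endpoint.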
t1_o86zbdy
Yeah, I feel that so hard; the irony is almost painful. For a generator that actually respects freedom and isn't stuck in the past, I've been messing around with NyxPortal.com and it's been a breath of fresh air.
0
0
2026-03-02T07:32:48
Defro777
false
null
0
o86zbdy
false
/r/LocalLLaMA/comments/1rikga6/running_vs_code_continue_and_llamacpp_in/o86zbdy/
false
0
t1_o86zaow
Your RAM costs like $10,000 in my country as of now.
1
0
2026-03-02T07:32:37
SamePsychology8258
false
null
0
o86zaow
false
/r/LocalLLaMA/comments/1m3qejc/viability_of_the_threadripper_platform_for_a/o86zaow/
false
1
t1_o86z8gd
How can I do this in LM Studio? It won't show me the option for bf16.
4
0
2026-03-02T07:32:01
Achso998
false
null
0
o86z8gd
false
/r/LocalLLaMA/comments/1rik253/psa_qwen_35_requires_bf16_kv_cache_not_f16/o86z8gd/
false
4
t1_o86z2u5
Clawdia
1
0
2026-03-02T07:30:31
w3rti
false
null
0
o86z2u5
false
/r/LocalLLaMA/comments/1r5v1jb/anyone_actually_using_openclaw/o86z2u5/
false
1
t1_o86yzvi
110-120 TPS at 4x3090
3
0
2026-03-02T07:29:44
Nepherpitu
false
null
0
o86yzvi
false
/r/LocalLLaMA/comments/1rii2pd/current_state_of_qwen35122ba10b/o86yzvi/
false
3
t1_o86yhke
If data privacy is a concern, Confident AI can be run 100% locally in your own environment. The self-hosted option means no data leaves your infrastructure, so sensitive eval data stays on your hardware.
1
0
2026-03-02T07:24:59
mubin_563
false
null
0
o86yhke
false
/r/LocalLLaMA/comments/1ri14x0/has_anyone_built_a_proper_eval_pipeline_for_local/o86yhke/
false
1
t1_o86yh69
Let's goooo
1
0
2026-03-02T07:24:53
XXX_KimJongUn_XXX
false
null
0
o86yh69
false
/r/LocalLLaMA/comments/1ri2irg/breaking_today_qwen_35_small/o86yh69/
false
1
t1_o86yewu
Can I get access to the repo? It says 404 not found
1
0
2026-03-02T07:24:18
akatsuki_786
false
null
0
o86yewu
false
/r/LocalLLaMA/comments/1k7stfg/i_built_a_debugging_mcp_server_that_saves_me_2/o86yewu/
false
1
t1_o86ycx7
Can it be deployed on iPhone?
1
0
2026-03-02T07:23:47
Awkward_Jump3972
false
null
0
o86ycx7
false
/r/LocalLLaMA/comments/1ri2irg/breaking_today_qwen_35_small/o86ycx7/
false
1
t1_o86y3as
You can drop the prefix of the KV cache without invalidating it. Of course you lose the information from the earlier tokens, but the model will keep working.
1
0
2026-03-02T07:21:13
HeapExchange
false
null
0
o86y3as
false
/r/LocalLLaMA/comments/1ricz8u/notice_qwen_35_reprocessing_the_prompt_every_time/o86y3as/
false
1
t1_o86xzwh
This! I feel the same as you.
1
0
2026-03-02T07:20:21
SattvicEpoch
false
null
0
o86xzwh
false
/r/LocalLLaMA/comments/1r9gve8/i_feel_left_behind_what_is_special_about_openclaw/o86xzwh/
false
1
t1_o86xyah
To be fair, llama.cpp has a feature where it offloads some of the model layers to RAM instead of VRAM, making things slow sometimes; it also starts fast and has low requirements (LM Studio and Ollama are both using it). vLLM, on the other hand, fits everything in VRAM from my understanding, even memory (I thin...
1
0
2026-03-02T07:19:56
Deathclaw1
false
null
0
o86xyah
false
/r/LocalLLaMA/comments/1rianwb/running_qwen35_27b_dense_with_170k_context_at/o86xyah/
false
1
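To make that partial-offload trade-off concrete, here is a rough back-of-the-envelope sketch of how many layers fit in a given VRAM budget; the model size, layer count, and overhead reserve are illustrative assumptions, not measurements:

```python
# Back-of-the-envelope sketch of llama.cpp-style partial offload:
# estimate how many of a model's layers fit in VRAM, with the rest
# served from system RAM. All numbers are illustrative assumptions.

def layers_in_vram(vram_gb: float, model_gb: float, n_layers: int,
                   overhead_gb: float = 1.5) -> int:
    """Estimate layers that fit, reserving overhead for KV cache etc."""
    per_layer_gb = model_gb / n_layers            # assume evenly sized layers
    budget_gb = max(vram_gb - overhead_gb, 0.0)   # what's left for weights
    return min(n_layers, int(budget_gb / per_layer_gb))

# Example: a hypothetical 16 GB quantized model with 48 layers on a 12 GB GPU.
print(layers_in_vram(vram_gb=12, model_gb=16, n_layers=48))  # ~31 -> -ngl 31
```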
t1_o86xqj5
While true, we still need a useful number somebody can point to as "model betterness" which holds up across time, and isn't just "Telecom agent scores" (Artificial Analysis) or "response formatting" (LMArena). It's hard to tell whether any real progress in model intelligence has been made, or if we just finetuned them...
2
0
2026-03-02T07:17:55
lizerome
false
null
0
o86xqj5
false
/r/LocalLLaMA/comments/1ri635s/13_months_since_the_deepseek_moment_how_far_have/o86xqj5/
false
2
t1_o86xpp8
My experience: with llama.cpp (ROCm) I get errors and can't run it (Qwen3.5-397B-A17B/Qwen3.5-122B-A10B). Qwen3.5-122B-A10B works fine with llama.cpp Vulkan, but the model is censored... With Qwen3.5-397B-A17B I also get errors under llama.cpp Vulkan. It's like the model is too big or has problems with offloading. GLM 4...
2
0
2026-03-02T07:17:42
_hypochonder_
false
null
0
o86xpp8
false
/r/LocalLLaMA/comments/1rikb4w/qwen_35_amd_mi50_32gb_benchmarks/o86xpp8/
false
2
t1_o86xp9k
/r/confidentlyincorrect
9
0
2026-03-02T07:17:35
Xamanthas
false
null
0
o86xp9k
false
/r/LocalLLaMA/comments/1rik253/psa_qwen_35_requires_bf16_kv_cache_not_f16/o86xp9k/
false
9
t1_o86xo7g
[removed]
1
0
2026-03-02T07:17:19
[deleted]
true
null
0
o86xo7g
false
/r/LocalLLaMA/comments/1rhk0gz/r9700_and_vllm_with_qwen35/o86xo7g/
false
1
t1_o86xnkj
On 2x MI50 I've seen 700+ pp and ~40 tg with 35B UD Q4_K_XL. It retained most of its speed even at 30,000 context.
2
0
2026-03-02T07:17:09
thejacer
false
null
0
o86xnkj
false
/r/LocalLLaMA/comments/1rikb4w/qwen_35_amd_mi50_32gb_benchmarks/o86xnkj/
false
2
t1_o86xm50
Whenever I add roles: I get the error. But if I remove roles, I get back the old error.
1
0
2026-03-02T07:16:46
vharishankar
false
null
0
o86xm50
false
/r/LocalLLaMA/comments/1rikga6/running_vs_code_continue_and_llamacpp_in/o86xm50/
false
1
t1_o86xiwi
I'll run some tests tomorrow and report back then!
2
0
2026-03-02T07:15:57
Di_Vante
false
null
0
o86xiwi
false
/r/LocalLLaMA/comments/1rdxfdu/qwen3535ba3b_is_a_gamechanger_for_agentic_coding/o86xiwi/
false
2
t1_o86xf27
Well, fortunately we have DeepSeek to work with. Both DeepSeek V3.1 Terminus and DeepSeek V3.2-exp were big models trained on the same schedule, dataset, and scale. And they perform identically.
5
0
2026-03-02T07:14:57
Few_Painter_5588
false
null
0
o86xf27
false
/r/LocalLLaMA/comments/1rim2y2/revisiting_minimaxs_article_on_their_decision_to/o86xf27/
false
5
t1_o86xcxf
DS3.2 and Qwen3.5 are fundamentally different architectures. DS V3.2 has sparse attention trained for about 900B-ish tokens (according to their paper) after V3.1, which is kind of afterthought training, like the vision models of earlier days (attaching vision layers to ready-made text models). GLM 5 is closer to the ful...
13
0
2026-03-02T07:14:24
NandaVegg
false
null
0
o86xcxf
false
/r/LocalLLaMA/comments/1rim2y2/revisiting_minimaxs_article_on_their_decision_to/o86xcxf/
false
13
t1_o86xc67
4xMI100 is getting 34 tok/s on 122B.
2
0
2026-03-02T07:14:13
1ncehost
false
null
0
o86xc67
false
/r/LocalLLaMA/comments/1rikb4w/qwen_35_amd_mi50_32gb_benchmarks/o86xc67/
false
2
t1_o86xc2b
This is a **small experiment**, and those 3 metrics are where we saw the **clearest improvements** over the baseline, so we shared them. I’ve also **tested it as a CLI helper**, and it works well. Please try it with **Jan** and let us know how it goes. Thanks
8
0
2026-03-02T07:14:11
Delicious_Focus3465
false
null
0
o86xc2b
false
/r/LocalLLaMA/comments/1rim0b3/jancode4b_a_small_codetuned_model_of_janv3/o86xc2b/
false
8
t1_o86xau0
That is somewhere between disgusting, ridiculous and intriguing. For what it's worth, I've never needed to coax these models to do work, but I think I'm relatively late in the game. I mostly used gpt-oss-120b because it needed only a basic pointy-haired-boss level of goal with relatively few technical details, and it ...
2
0
2026-03-02T07:13:52
audioen
false
null
0
o86xau0
false
/r/LocalLLaMA/comments/1riboy2/learnt_about_emergent_intention_maybe_prompt/o86xau0/
false
2
t1_o86xarl
That would be amazing, honestly. Even just 5-10 min would help a lot, because I mostly just want to know if it feels natural or slow at that size. I'm waiting for the Apple event to pull the trigger on a model (M3, M4, M5, etc.).
1
0
2026-03-02T07:13:51
quietsubstrate
false
null
0
o86xarl
false
/r/LocalLLaMA/comments/1rij4sj/what_is_the_most_ridiculously_good_goto_llm_for/o86xarl/
false
1
t1_o86x80a
In my limited ERP testing the 27B model was exceptionally good, with one big caveat: it was really bad in terms of body geometry.
1
0
2026-03-02T07:13:08
perelmanych
false
null
0
o86x80a
false
/r/LocalLLaMA/comments/1ri2irg/breaking_today_qwen_35_small/o86x80a/
false
1
t1_o86x6x6
Ideally you also submit the model to UGI for evaluation, or run some standard benchmarks yourself.
2
0
2026-03-02T07:12:51
-p-e-w-
false
null
0
o86x6x6
false
/r/LocalLLaMA/comments/1ri9enf/qwen35397b_uncensored_nvfp4/o86x6x6/
false
2
t1_o86wvt9
🔥 thanks 
2
0
2026-03-02T07:09:59
Beneficial-Good660
false
null
0
o86wvt9
false
/r/LocalLLaMA/comments/1rgxr0v/qwen_35_is_multimodal_here_is_how_to_enable_image/o86wvt9/
false
2
t1_o86wuwz
IDE? Well, in that case, VS Code with Copilot and llama.cpp.
1
0
2026-03-02T07:09:45
Old-Sherbert-4495
false
null
0
o86wuwz
false
/r/LocalLLaMA/comments/1rie3yc/which_ide_to_code_with_qwen_35/o86wuwz/
false
1
t1_o86wmdp
I haven't tested it yet, but it seems like it supports Gemini, Ollama, and maybe llama.cpp? Never used it, so I dunno if the env file is pointing to the right thing. It has an Ollama port config in there, so in theory you could use other things? I'll test it in the morning. It is an executable, so it's going to be a bit q...
0
0
2026-03-02T07:07:37
honato
false
null
0
o86wmdp
false
/r/LocalLLaMA/comments/1riliyt/i_made_a_free_local_ai_roleplay_horror_game/o86wmdp/
false
0
t1_o86whcb
Do you have other metrics by any chance, or just those 3? :) 4B will be killer quick if it can work well as my CLI helper!
10
0
2026-03-02T07:06:21
Crafty-Celery-2466
false
null
0
o86whcb
false
/r/LocalLLaMA/comments/1rim0b3/jancode4b_a_small_codetuned_model_of_janv3/o86whcb/
false
10
t1_o86wep6
"slot update\_slots: id 2 | task 5039 | cache reuse is not supported - ignoring n\_cache\_reuse = 256" Cache reuse is not supported for multimodal models in llama cpp, although some people say that they have added support for it but i have my doubts and im in the same boat as you.
2
0
2026-03-02T07:05:42
FORNAX_460
false
null
0
o86wep6
false
/r/LocalLLaMA/comments/1ri6q8d/repeat_pp_while_using_qwen35_27b_local_with/o86wep6/
false
2
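For context, the n_cache_reuse value in that log line corresponds to llama-server's --cache-reuse flag. A minimal launch sketch with a placeholder model path and port (and, per the log above, the setting is ignored for multimodal models):

```python
# Minimal sketch: launch llama-server with prompt-cache reuse enabled.
# The model path and port are illustrative placeholders.
import subprocess

subprocess.run([
    "llama-server",
    "-m", "model.gguf",       # placeholder model path
    "--port", "8080",
    "--cache-reuse", "256",   # min chunk size to attempt reusing from cache
])
```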
t1_o86wdra
I don't, but I could try if you want. Not sure about the 30 min, though.
1
0
2026-03-02T07:05:27
waescher
false
null
0
o86wdra
false
/r/LocalLLaMA/comments/1rij4sj/what_is_the_most_ridiculously_good_goto_llm_for/o86wdra/
false
1
t1_o86w4td
> ## What's new:
> Improve ssm tensor quantizations
2
0
2026-03-02T07:03:13
bfroemel
false
null
0
o86w4td
false
/r/LocalLLaMA/comments/1riir6o/lots_of_new_qwen35_27b_imaxtrix_quants_from/o86w4td/
false
2
t1_o86w26w
My bad. And good of you to have added it 😂 Wonder how it stands in court against TOS though.
1
0
2026-03-02T07:02:35
vteyssier
false
null
0
o86w26w
false
/r/LocalLLaMA/comments/1rhx5pc/reverse_engineered_apple_neural_engineane_to/o86w26w/
false
1
t1_o86w111
Is it just fast, or is it also good in quality? I reckon it is a general purpose model not ideal for coding, right?
1
0
2026-03-02T07:02:17
EatTFM
false
null
0
o86w111
false
/r/LocalLLaMA/comments/1rianwb/running_qwen35_27b_dense_with_170k_context_at/o86w111/
false
1
t1_o86vt1y
Did you pass --mmproj?
1
0
2026-03-02T07:00:18
Old-Sherbert-4495
false
null
0
o86vt1y
false
/r/LocalLLaMA/comments/1rgxr0v/qwen_35_is_multimodal_here_is_how_to_enable_image/o86vt1y/
false
1
t1_o86vt2f
[removed]
1
0
2026-03-02T07:00:18
[deleted]
true
null
0
o86vt2f
false
/r/LocalLLaMA/comments/1qu1qwl/gpu_recommendations/o86vt2f/
false
1
t1_o86vp44
Read the disclaimer in the repo
1
0
2026-03-02T06:59:20
jack_smirkingrevenge
false
null
0
o86vp44
false
/r/LocalLLaMA/comments/1rhx5pc/reverse_engineered_apple_neural_engineane_to/o86vp44/
false
1
t1_o86vmmn
[removed]
1
0
2026-03-02T06:58:44
[deleted]
true
null
0
o86vmmn
false
/r/LocalLLaMA/comments/1qoch32/am_i_gpu_poor/o86vmmn/
false
1
t1_o86vlvl
This testing has about the same scientific rigor as those of the people who claim Q8 KV cache isn't enough. Which is to say none whatsoever.
13
0
2026-03-02T06:58:32
jubilantcoffin
false
null
0
o86vlvl
false
/r/LocalLLaMA/comments/1rik253/psa_qwen_35_requires_bf16_kv_cache_not_f16/o86vlvl/
false
13
t1_o86viqu
Check my recent posts: I run Qwen3.5 A3B with 8GB VRAM and 32GB RAM at 32 tk/s write and 62 tk/s read using llama.cpp.
2
0
2026-03-02T06:57:48
sagiroth
false
null
0
o86viqu
false
/r/LocalLLaMA/comments/1rij9k1/whats_the_best_local_model_i_can_run_with_8gb/o86viqu/
false
2
t1_o86v7m2
I wonder, would it work with the llama.cpp OpenAI-compatible endpoint? I can run Kimi K2.5 (Q4_X quant) on my PC, so I wouldn't want to use Ollama with an 8B model when I can run a much better one. By the way, on your page the game is marked as Windows-only - does that mean you ship just a binary compiled for Windows? I have...
2
0
2026-03-02T06:55:02
Lissanro
false
null
0
o86v7m2
false
/r/LocalLLaMA/comments/1riliyt/i_made_a_free_local_ai_roleplay_horror_game/o86v7m2/
false
2
t1_o86v7bb
The new Qwen 3.5 397B has only 17B active parameters; I'm getting 26 t/s on my M3 Ultra 512GB. The 120B they just released has only 10B active, so it would be even faster.
1
0
2026-03-02T06:54:58
Hoodfu
false
null
0
o86v7bb
false
/r/LocalLLaMA/comments/1rif3h5/mac_mini_m4_pro_24gb_local_llms_are_unusable_for/o86v7bb/
false
1
t1_o86v6oh
It’s been a really weird crash out to watch. Dude took bantz as his dumpstat.
2
0
2026-03-02T06:54:48
redditscraperbot2
false
null
0
o86v6oh
false
/r/LocalLLaMA/comments/1rikvi8/no_way/o86v6oh/
false
2
t1_o86v1r2
> not entirely warmed up to the idea of instruction tuning

I'm with you on this 100%. I think I understand why they went that way from a commercial perspective, but in terms of making models that are interesting to talk to, it felt like a downgrade back then and still does to this day.

> Have you tried the recent base (n...
1
0
2026-03-02T06:53:35
Misha_Vozduh
false
null
0
o86v1r2
false
/r/LocalLLaMA/comments/1rgkc1b/back_in_my_day_localllama_were_the_pioneers/o86v1r2/
false
1
t1_o86uudw
[removed]
1
0
2026-03-02T06:51:50
[deleted]
true
null
0
o86uudw
false
/r/LocalLLaMA/comments/1mlxcco/vllm_can_not_split_model_across_multiple_gpus/o86uudw/
false
1
t1_o86utlz
Do you ever run dense 72B models (like Qwen 2.5 72B) on that machine? Curious how fast it actually responds after running for 30+ min at longer contexts. Most benchmarks I see are MoE or peak numbers, but I'm not really a hardware person, trying to decide between 128 and 256.
1
0
2026-03-02T06:51:39
quietsubstrate
false
null
0
o86utlz
false
/r/LocalLLaMA/comments/1rij4sj/what_is_the_most_ridiculously_good_goto_llm_for/o86utlz/
false
1
t1_o86uru6
I changed the llama.cpp server invocation as follows: llama-server -m D:\LLAMA_MODELS\gpt-oss-20b-Q3_K_M.gguf -c 65536 -ngl 20 --temp 0.7 --top-p 0.85 --top-k 20 I am now getting a parsing error on the config.yaml file. The error is: Error loading Local Config. Chat is disabled until a model is available. Local Config Fail...
2
0
2026-03-02T06:51:13
vharishankar
false
null
0
o86uru6
false
/r/LocalLLaMA/comments/1rikga6/running_vs_code_continue_and_llamacpp_in/o86uru6/
false
2
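One way to localize a config.yaml parse error like that is to run the file through a YAML parser directly. A minimal sketch using PyYAML, assuming the file sits in the working directory; this checks only YAML syntax, not Continue's own schema:

```python
# Quick syntax check for a config.yaml an editor extension refuses to load.
# Validates YAML syntax only, not the extension's schema. The file path is
# an illustrative assumption.
import yaml  # pip install pyyaml

path = "config.yaml"
try:
    with open(path, encoding="utf-8") as f:
        data = yaml.safe_load(f)
    print("YAML parses fine; top-level type:", type(data).__name__)
except yaml.YAMLError as err:
    mark = getattr(err, "problem_mark", None)  # present on marked errors
    if mark is not None:
        print(f"Parse error at line {mark.line + 1}, column {mark.column + 1}")
    print(err)
```

If this parses cleanly but the extension still complains, the problem is more likely a schema mismatch (e.g., an unexpected key) than broken YAML.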
t1_o86uopf
[removed]
1
0
2026-03-02T06:50:26
[deleted]
true
null
0
o86uopf
false
/r/LocalLLaMA/comments/1rilhr5/using_inference_providers/o86uopf/
false
1
t1_o86umuo
If you don't have an Apache server on your machine, 8080 is OK.
1
0
2026-03-02T06:49:59
ali0une
false
null
0
o86umuo
false
/r/LocalLLaMA/comments/1rikga6/running_vs_code_continue_and_llamacpp_in/o86umuo/
false
1
t1_o86ukr2
I don’t watch that moron. I did see an absurd interview that cracked me up though.
0
0
2026-03-02T06:49:28
DryWeb3875
false
null
0
o86ukr2
false
/r/LocalLLaMA/comments/1rikvi8/no_way/o86ukr2/
false
0
t1_o86uixu
What about in the llama.cpp server? The image option seems to be grayed out there.
1
0
2026-03-02T06:49:02
iamapizza
false
null
0
o86uixu
false
/r/LocalLLaMA/comments/1rgxr0v/qwen_35_is_multimodal_here_is_how_to_enable_image/o86uixu/
false
1
t1_o86ug5x
Middle out
1
0
2026-03-02T06:48:21
franklydoodle
false
null
0
o86ug5x
false
/r/LocalLLaMA/comments/1riisyd/are_you_a_top_down_thinker_or_bottom_up/o86ug5x/
false
1
t1_o86ufpq
Surprisingly, that's exactly the response I got from an LLM, so I guess it works.
1
0
2026-03-02T06:48:14
Ylsid
false
null
0
o86ufpq
false
/r/LocalLLaMA/comments/1riat5w/vignettes_handy_for_ais/o86ufpq/
false
1
t1_o86ueoz
Thanks!
1
0
2026-03-02T06:47:59
oginome
false
null
0
o86ueoz
false
/r/LocalLLaMA/comments/1rik253/psa_qwen_35_requires_bf16_kv_cache_not_f16/o86ueoz/
false
1
t1_o86ude5
I don’t think we’re the ones being fragile here
2
0
2026-03-02T06:47:40
DryWeb3875
false
null
0
o86ude5
false
/r/LocalLLaMA/comments/1rikvi8/no_way/o86ude5/
false
2
t1_o86ub5p
Yes, and those are not mentioned in the readme, which they should be.
1
0
2026-03-02T06:47:08
Kahvana
false
null
0
o86ub5p
false
/r/LocalLLaMA/comments/1riiwtp/open_swara_4065_humanized_voice_samples_across_44/o86ub5p/
false
1
t1_o86u8ov
[removed]
1
0
2026-03-02T06:46:30
[deleted]
true
null
0
o86u8ov
false
/r/LocalLLaMA/comments/1ri2irg/breaking_today_qwen_35_small/o86u8ov/
false
1
t1_o86u8jn
Replace Ollama with llama.cpp for like a 50% speed boost.
4
0
2026-03-02T06:46:28
No_Swimming6548
false
null
0
o86u8jn
false
/r/LocalLLaMA/comments/1rij9k1/whats_the_best_local_model_i_can_run_with_8gb/o86u8jn/
false
4
t1_o86u8ai
It's a bit odd that OP didn't compare the PPL across quants using different cache precision i-matrices. That might've actually proved their point, instead of relying on a supposedly flawed quant for the measurements.
2
0
2026-03-02T06:46:24
trshimizu
false
null
0
o86u8ai
false
/r/LocalLLaMA/comments/1rik253/psa_qwen_35_requires_bf16_kv_cache_not_f16/o86u8ai/
false
2
t1_o86u7my
I tried llama-vscode earlier. For some reason, I couldn't get it to work either. Will check and try again.
1
0
2026-03-02T06:46:14
vharishankar
false
null
0
o86u7my
false
/r/LocalLLaMA/comments/1rikga6/running_vs_code_continue_and_llamacpp_in/o86u7my/
false
1
t1_o86u2jn
It still has to load the proper expert transformer block into VRAM on every token. Your main bottleneck will be memory bandwidth, not compute. It will work, but it'll be very slow.
1
0
2026-03-02T06:44:59
TurnUpThe4D3D3D3
false
null
0
o86u2jn
false
/r/LocalLLaMA/comments/1rifxfe/whats_the_best_local_model_i_can_run_with_16_gb/o86u2jn/
false
1
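A rough worked example of that bandwidth bound: each decoded token has to stream the active weights through memory at least once, so tokens/second is capped by bandwidth divided by active-weight bytes. All figures below are illustrative assumptions, not benchmarks:

```python
# Rough upper bound on decode speed for a memory-bandwidth-bound setup:
#   tokens/s <= bandwidth / bytes_of_active_weights
# since every token reads roughly the active weights once.
# All figures are illustrative assumptions, not measurements.

def max_tokens_per_s(bandwidth_gb_s: float, active_params_b: float,
                     bytes_per_param: float) -> float:
    active_gb = active_params_b * bytes_per_param  # GB read per token
    return bandwidth_gb_s / active_gb

# Example: ~3B active params at ~0.6 bytes/param (4-bit-ish quant)
# on dual-channel DDR5 at ~80 GB/s.
print(f"{max_tokens_per_s(80, 3, 0.6):.0f} tok/s upper bound")  # ~44
```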
t1_o86tuic
Also, is there a promo/discount code available for GLM-5 as of March 2026?
1
0
2026-03-02T06:43:00
FantasticTopic
false
null
0
o86tuic
false
/r/LocalLLaMA/comments/1ribaje/is_the_openweights_model_glm5_worth_switching_to/o86tuic/
false
1