name
string
body
string
score
int64
controversiality
int64
created
timestamp[us]
author
string
collapsed
bool
edited
timestamp[us]
gilded
int64
id
string
locked
bool
permalink
string
stickied
bool
ups
int64
t1_o7vousm
As an absolute newb to all things AI and computer science, I started with Ollama and found it easy as hell.
1
0
2026-02-28T14:00:02
Expert_Bat4612
false
null
0
o7vousm
false
/r/LocalLLaMA/comments/1rh1q8j/i919400f_rtx_4070_super_12gb_32gb_ddr5_ram/o7vousm/
false
1
t1_o7vojwy
4B will be great
2
0
2026-02-28T13:58:14
SandboChang
false
null
0
o7vojwy
false
/r/LocalLLaMA/comments/1rgzul5/are_you_ready_for_small_qwens/o7vojwy/
false
2
t1_o7vogvi
Qwen3-VL-2B wants to have a word.
13
0
2026-02-28T13:57:44
sergeysi
false
null
0
o7vogvi
false
/r/LocalLLaMA/comments/1rgzul5/are_you_ready_for_small_qwens/o7vogvi/
false
13
t1_o7vogck
I know this is a dorky thing to say, but this isn't being used correctly lol. The last two are supposed to repeat.
23
0
2026-02-28T13:57:40
JazzlikeLeave5530
false
null
0
o7vogck
false
/r/LocalLLaMA/comments/1rh2lew/openai_pivot_investors_love/o7vogck/
false
23
t1_o7voe9l
They added former U.S. Army general and NSA director Paul M. Nakasone to the board. Just a small hint.
8
0
2026-02-28T13:57:19
Utoko
false
null
0
o7voe9l
false
/r/LocalLLaMA/comments/1rh2lew/openai_pivot_investors_love/o7voe9l/
false
8
t1_o7vod6o
If "Chat GPT / Gemini / Claude can definitely walk you through it step by step" = easy, then yes, it's easy. Is it the kind of thing a non-linux-sys-admin can do without looking up steps, no. If HDD prices were not so high I would highly recommend buying a new drive to install cachy on. I'll edit this in a bit with my ...
1
0
2026-02-28T13:57:09
giblesnot
false
null
0
o7vod6o
false
/r/LocalLLaMA/comments/1rgyd8p/switching_from_windows_to_linux_what_distro_to/o7vod6o/
false
1
t1_o7vo8z2
LM-Studio is the best way to get started.
0
0
2026-02-28T13:56:28
Adventurous-Paper566
false
null
0
o7vo8z2
false
/r/LocalLLaMA/comments/1rh1q8j/i919400f_rtx_4070_super_12gb_32gb_ddr5_ram/o7vo8z2/
false
0
t1_o7vo62p
You know, as much as I'd like to agree with you, just take a look at relatively larger models which already have a toolchain in place, like Flux2 Dev. Or an autoregressive text-image model like Hunyuan Image. AFAIK it doesn't even have a well-known toolchain for finetuning/LoRA. For Flux2 at least some brave souls gave it ...
1
0
2026-02-28T13:56:01
lacerating_aura
false
null
0
o7vo62p
false
/r/LocalLLaMA/comments/1rh095c/deepseek_v4_will_be_released_next_week_and_will/o7vo62p/
false
1
t1_o7vo1k4
Doesn't look like it, only their 2.1 is free currently: [https://kilo.ai/docs/code-with-ai/agents/free-and-budget-models](https://kilo.ai/docs/code-with-ai/agents/free-and-budget-models)
1
0
2026-02-28T13:55:16
svantana
false
null
0
o7vo1k4
false
/r/LocalLLaMA/comments/1rdpapc/chinese_ai_models_capture_majority_of_openrouter/o7vo1k4/
false
1
t1_o7vo1h1
Actually they are dumb, like those who voted for the Dems. The majority doesn't get who the enemy to their lives is, while the "system" is using the same tactic Socialism, Fascism, and Communism used over the last 100 years: "You are with us or you are the enemy." And that has been played from both sides, who when they come to power have no po...
1
0
2026-02-28T13:55:15
ImportancePitiful795
false
null
0
o7vo1h1
false
/r/LocalLLaMA/comments/1rgn4ki/president_trump_orders_all_federal_agencies_in/o7vo1h1/
false
1
t1_o7vnzy3
The 3090 has much more compute and memory bandwidth, which will give higher token generation throughput when serving parallel generation requests. For non-batched inference (single chat window type), as you can see, it doesn't make the biggest difference. The extra compute on a 3090 should allow for better use of a specu...
2
0
2026-02-28T13:55:00
12bitmisfit
false
null
0
o7vnzy3
false
/r/LocalLLaMA/comments/1rgynmf/dual_3060_and_single_3090_whats_the_point_of_the/o7vnzy3/
false
2
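If speculative decoding is what the truncated sentence above is getting at, a minimal llama-server sketch of pairing a large target model with a small draft model looks like this (both GGUF filenames are placeholders, not models from this thread):

```bash
# Placeholder GGUF filenames: a large target model plus a small draft model.
llama-server -m qwen3.5-122b-q4_k_m.gguf \
  -md qwen3.5-2b-q4_k_m.gguf \
  -ngl 99 -ngld 99 \
  --draft-max 16 --draft-min 4
```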
t1_o7vnzmc
Ran it through our internal eval suite yesterday. Non-thinking mode on the 35B MoE sits roughly where Qwen3 32B dense was on reasoning-heavy tasks, maybe slightly better on code gen. The real win is throughput — you're only activating ~4B params per token, so on a dual 3090 setup I was seeing around 45 tok/s with vLLM,...
4
0
2026-02-28T13:54:57
tom_mathews
false
null
0
o7vnzmc
false
/r/LocalLLaMA/comments/1rgzfat/how_is_qwen_35_moe_35b_in_instruct_mode_with_no/o7vnzmc/
false
4
t1_o7vnuin
Ah, I was trying Blackwell, so that was probably it.
1
0
2026-02-28T13:54:06
MLWillRuleTheWorld
false
null
0
o7vnuin
false
/r/LocalLLaMA/comments/1re9xbi/qwen35_on_vllm/o7vnuin/
false
1
t1_o7vnthj
Where do you have a sub for GLM?
1
0
2026-02-28T13:53:55
wouldacouldashoulda
false
null
0
o7vnthj
false
/r/LocalLLaMA/comments/1rgokw1/a_monthly_update_to_my_where_are_openweight/o7vnthj/
false
1
t1_o7vnngp
And they seem to burn tokens too. Actually not sure why I still have a sub. It's that Kimi is slightly less good at implementing my auto-generated (by Opus) todos, I guess.
1
0
2026-02-28T13:52:57
wouldacouldashoulda
false
null
0
o7vnngp
false
/r/LocalLLaMA/comments/1rgokw1/a_monthly_update_to_my_where_are_openweight/o7vnngp/
false
1
t1_o7vnjri
Users are also unpredictable ... this is why people build tests every step of the way for proper software. And do dry runs before things are actually turned on.
1
0
2026-02-28T13:52:21
hurdurdur7
false
null
0
o7vnjri
false
/r/LocalLLaMA/comments/1rh28o8/building_agents_is_fun_evaluating_them_is_not/o7vnjri/
false
1
t1_o7vn8vw
Same in my experience; anything under 100B is OK for one-shot tasks but falls apart in longer agent workflows.
5
0
2026-02-28T13:50:33
getpodapp
false
null
0
o7vn8vw
false
/r/LocalLLaMA/comments/1rh12xz/qwen3535b_nailed_my_simple_multiagent_workflow/o7vn8vw/
false
5
t1_o7vn7un
Are you aware of Qwen3-VL-2B-Thinking?
3
0
2026-02-28T13:50:23
jacek2023
false
null
0
o7vn7un
false
/r/LocalLLaMA/comments/1rgzul5/are_you_ready_for_small_qwens/o7vn7un/
false
3
t1_o7vn7en
GPT4 tier model? Don't you mean GPT 3.5 tier model?
2
0
2026-02-28T13:50:18
Due-Memory-6957
false
null
0
o7vn7en
false
/r/LocalLLaMA/comments/1rgkc1b/back_in_my_day_localllama_were_the_pioneers/o7vn7en/
false
2
t1_o7vn60b
IMO anything that is bad for Anthropic is good for us, since they are very opposed to basically anything AI-related being open, other than safety research.
1
0
2026-02-28T13:50:05
Initial-Argument2523
false
null
0
o7vn60b
false
/r/LocalLLaMA/comments/1rguty0/get_your_local_models_in_order_anthropic_just_got/o7vn60b/
false
1
t1_o7vmutg
Hi, I speak Spanish and I'm afraid not. 
1
0
2026-02-28T13:48:14
RhubarbSimilar1683
false
null
0
o7vmutg
false
/r/LocalLLaMA/comments/1rgqpn2/im_looking_for_local_spanishspeaking_communities/o7vmutg/
false
1
t1_o7vmt45
Thank you, but it's not mine. It's [from here](https://www.youtube.com/watch?v=vbmcHjZWI_U).
3
0
2026-02-28T13:47:57
q0099
false
null
0
o7vmt45
false
/r/LocalLLaMA/comments/1rh2lew/openai_pivot_investors_love/o7vmt45/
false
3
t1_o7vms88
Hi, I speak Spanish and I'm afraid not. Latinos in Latin America only use cloud-based models; local AI is a rarity. 
1
0
2026-02-28T13:47:48
RhubarbSimilar1683
false
null
0
o7vms88
false
/r/LocalLLaMA/comments/1rgqpn2/im_looking_for_local_spanishspeaking_communities/o7vms88/
false
1
t1_o7vmqsn
> itll be nearly impossible for community in general to make finetunes of it impossible *right now*
6
0
2026-02-28T13:47:34
jonydevidson
false
null
0
o7vmqsn
false
/r/LocalLLaMA/comments/1rh095c/deepseek_v4_will_be_released_next_week_and_will/o7vmqsn/
false
6
t1_o7vmq9m
I tried running it: ~21 tps for 122B, ~46 tps for 35B, and ~7 for 27B dense.
1
0
2026-02-28T13:47:29
shankey_1906
false
null
0
o7vmq9m
false
/r/LocalLLaMA/comments/1rgvma8/which_size_of_qwen35_are_you_planning_to_run/o7vmq9m/
false
1
t1_o7vmq5c
I'm very serious and not being sarcastic! 😞 I really need your help. I'm asking about what you said, like using 95% RAM, that PCI bus issue, and the thread count being lower than the CPU thread count, because I don't know any of these things. Some random guy told me to use this model for my specs, but I really don't know about these iss...
1
0
2026-02-28T13:47:28
Less_Strain7577
false
null
0
o7vmq5c
false
/r/LocalLLaMA/comments/1rgcosw/trained_and_quantized_an_llm_on_a_gtx_1650_4gb/o7vmq5c/
false
1
t1_o7vmpct
Linux mint and LACT work well enough for me.
1
0
2026-02-28T13:47:20
12bitmisfit
false
null
0
o7vmpct
false
/r/LocalLLaMA/comments/1rgyd8p/switching_from_windows_to_linux_what_distro_to/o7vmpct/
false
1
t1_o7vmol6
[removed]
1
0
2026-02-28T13:47:12
[deleted]
true
null
0
o7vmol6
false
/r/LocalLLaMA/comments/1rgqpn2/im_looking_for_local_spanishspeaking_communities/o7vmol6/
false
1
t1_o7vmodr
You can even run Qwen Coder Next on a 3090 with 64GB RAM.
1
0
2026-02-28T13:47:10
Zundrium
false
null
0
o7vmodr
false
/r/LocalLLaMA/comments/1rgokw1/a_monthly_update_to_my_where_are_openweight/o7vmodr/
false
1
t1_o7vmkht
[removed]
1
0
2026-02-28T13:46:32
[deleted]
true
null
0
o7vmkht
false
/r/LocalLLaMA/comments/1rgqpn2/im_looking_for_local_spanishspeaking_communities/o7vmkht/
false
1
t1_o7vmipl
Great post and experiments! Inspired by your findings, I went a different direction: instead of optimizing Q4_K_M, I tested whether a smaller quant that fits mostly in VRAM could beat it on speed. Setup: RTX 5080 16GB, Intel Core Ultra 9 285K, llama.cpp built from source with CUDA 13.1 + native sm_120 (Blackwell), usi...
2
0
2026-02-28T13:46:13
UniversalJS
false
null
0
o7vmipl
false
/r/LocalLLaMA/comments/1rg4zqv/followup_qwen3535ba3b_7_communityrequested/o7vmipl/
false
2
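For anyone wanting to reproduce a source build like the one described above, here's a minimal sketch, assuming a recent llama.cpp checkout and a CUDA 13.x toolkit on PATH (the architecture value 120 targets consumer Blackwell like the RTX 5080):

```bash
# Build llama.cpp from source with CUDA, compiling natively for Blackwell (sm_120).
git clone https://github.com/ggml-org/llama.cpp
cd llama.cpp
cmake -B build -DGGML_CUDA=ON -DCMAKE_CUDA_ARCHITECTURES=120
cmake --build build --config Release -j"$(nproc)"
```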
t1_o7vmgxd
Not with my tests.
1
0
2026-02-28T13:45:55
substance90
false
null
0
o7vmgxd
false
/r/LocalLLaMA/comments/1rgoygs/does_qwen35_35b_outperform_qwen3_coder_next_80b/o7vmgxd/
false
1
t1_o7vmfio
What backend are you running? I'm getting a weird crash and CUDA error trying to load it across more than one GPU in KoboldCPP, not sure if it's something known or just my hardware being weird. I can run it on one GPU and offload some to CPU without issue, and 35B runs on my laptop CPU only. Scratching my head.
1
0
2026-02-28T13:45:41
GraybeardTheIrate
false
null
0
o7vmfio
false
/r/LocalLLaMA/comments/1rgvma8/which_size_of_qwen35_are_you_planning_to_run/o7vmfio/
false
1
t1_o7vmdgv
Not my experience at all
2
0
2026-02-28T13:45:20
substance90
false
null
0
o7vmdgv
false
/r/LocalLLaMA/comments/1rgoygs/does_qwen35_35b_outperform_qwen3_coder_next_80b/o7vmdgv/
false
2
t1_o7vmcig
I'm pretty sure Nano-Banana is multimodal, but it's a separate model from Gemini Pro/Flash. You can prompt Nano-Banana to respond in text only and compare it with Gemini Pro/Flash outputs.
8
0
2026-02-28T13:45:10
typical-predditor
false
null
0
o7vmcig
false
/r/LocalLLaMA/comments/1rh095c/deepseek_v4_will_be_released_next_week_and_will/o7vmcig/
false
8
t1_o7vmbvp
Don't know their current stance, but earlier on they didn't want any of us having unrestricted AI, which open source allows by letting us get around their alignment.
6
0
2026-02-28T13:45:03
New_Performer8966
false
null
0
o7vmbvp
false
/r/LocalLLaMA/comments/1rguty0/get_your_local_models_in_order_anthropic_just_got/o7vmbvp/
false
6
t1_o7vmbro
I've mostly just used llama.cpp, and before that Ollama, for running inference. Is vLLM equally accessible or more complicated to use?
1
0
2026-02-28T13:45:02
doesitoffendyou
false
null
0
o7vmbro
false
/r/LocalLLaMA/comments/1rgyd8p/switching_from_windows_to_linux_what_distro_to/o7vmbro/
false
1
t1_o7vmamp
IMHO I wouldn't put Anthropic that high. Working with long texts and documents (not coding) is really terrible for me. Even Opus is insidious and misleading, especially the latest versions. You need to use the API to make it somewhat useful.
1
0
2026-02-28T13:44:51
Substantial-Ebb-584
false
null
0
o7vmamp
false
/r/LocalLLaMA/comments/1rgokw1/a_monthly_update_to_my_where_are_openweight/o7vmamp/
false
1
t1_o7vm8w9
Power limiting just places hard limits on power draw (well, sort of hard; I think there can be short spikes). Undervolting means using less power for the same compute. Usually GPUs are fed a bit more power than they need to increase stability, but this is offset by extra heat. With subpar cooling this leads to c...
2
0
2026-02-28T13:44:33
Prudent-Ad4509
false
null
0
o7vm8w9
false
/r/LocalLLaMA/comments/1rgyd8p/switching_from_windows_to_linux_what_distro_to/o7vm8w9/
false
2
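On Linux, a rough sketch of what this looks like in practice: nvidia-smi cannot set voltage directly, so locking clocks to a lower maximum is the usual approximation of an undervolt, while -pl is the hard power cap the comment above describes. The wattage and clock values below are placeholders; check your card's supported ranges first.

```bash
# Enable persistence so the settings survive until the next reboot.
sudo nvidia-smi -pm 1
# Power limiting: hard-cap the board power draw (value is a placeholder).
sudo nvidia-smi -pl 250
# Locking clocks to a lower max keeps the card at a lower point on its
# voltage/frequency curve, which approximates an undervolt.
sudo nvidia-smi -lgc 210,1695
```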
t1_o7vm87b
I just deleted that one. Yes, the newest one with the updates. It's comparatively stupid, doesn't stick to the prompt, and tool responses are failing in the newest Cline.
5
0
2026-02-28T13:44:26
Synor
false
null
0
o7vm87b
false
/r/LocalLLaMA/comments/1rgtxry/is_qwen35_a_coding_game_changer_for_anyone_else/o7vm87b/
false
5
t1_o7vm7ka
> which would perversely push serious users toward self-hosted or European infrastructure faster than any privacy argument has managed to. Europe will then drag you into an impossible mess of both GDPR and soon-to-be "chat control". And with local models likely forbidden because ... you know, misinformation, hate spe...
-3
0
2026-02-28T13:44:19
soshulmedia
false
null
0
o7vm7ka
false
/r/LocalLLaMA/comments/1rguty0/get_your_local_models_in_order_anthropic_just_got/o7vm7ka/
false
-3
t1_o7vm5sw
I still prefer the MoE just because it is so much faster. The only unfortunate thing is that it is still not really optimized in llama.cpp; I only get like 45 t/s at best for 35B, while with Qwen3 30B I could get 90 t/s. Dense is better as an instruct model, I'd say, but not so much for reasoning.
-2
0
2026-02-28T13:44:01
Far-Low-4705
false
null
0
o7vm5sw
false
/r/LocalLLaMA/comments/1rgpwn5/qwen_3527b_punches_waaaaay_above_its_weight_with/o7vm5sw/
false
-2
t1_o7vm45j
Sam Altman is such an obvious douche.
93
0
2026-02-28T13:43:44
quantgorithm
false
null
0
o7vm45j
false
/r/LocalLLaMA/comments/1rh2lew/openai_pivot_investors_love/o7vm45j/
false
93
t1_o7vm31k
Yep. It's anything but Republicanism and Conservatism. And "names" mean nothing if you do not stick to the ideology.
0
0
2026-02-28T13:43:33
ImportancePitiful795
false
null
0
o7vm31k
false
/r/LocalLLaMA/comments/1rgn4ki/president_trump_orders_all_federal_agencies_in/o7vm31k/
false
0
t1_o7vly5y
Current US admin sucks
0
0
2026-02-28T13:42:43
-_Apollo-_
false
null
0
o7vly5y
false
/r/LocalLLaMA/comments/1rgn4ki/president_trump_orders_all_federal_agencies_in/o7vly5y/
false
0
t1_o7vlxyf
No. Just read them; they are a dying breed and about the only physical paper worth buying.
8
0
2026-02-28T13:42:41
Logical_Look8541
false
null
0
o7vlxyf
false
/r/LocalLLaMA/comments/1rh095c/deepseek_v4_will_be_released_next_week_and_will/o7vlxyf/
false
8
t1_o7vlvh4
I am the anti-AI guy and AI-phobe in here: if things get out of hand, a Butlerian Jihad (Dune) is needed to ban all AI/LLMs for eternity. Humanity first.
1
0
2026-02-28T13:42:15
ImportancePitiful795
false
null
0
o7vlvh4
false
/r/LocalLLaMA/comments/1rgn4ki/president_trump_orders_all_federal_agencies_in/o7vlvh4/
false
1
t1_o7vlv3b
Good call, makes sense.
9
0
2026-02-28T13:42:11
-_Apollo-_
false
null
0
o7vlv3b
false
/r/LocalLLaMA/comments/1rgn4ki/president_trump_orders_all_federal_agencies_in/o7vlv3b/
false
9
t1_o7vlu4n
Haha .. this is perfect
23
0
2026-02-28T13:42:01
PaceImaginary8610
false
null
0
o7vlu4n
false
/r/LocalLLaMA/comments/1rh2lew/openai_pivot_investors_love/o7vlu4n/
false
23
t1_o7vltvj
> It doesnt matter if you like Anthropic or not. This sets a precedent of the bullshit to come. What if.... What if the public fight now between the Pentagon and Anthropic is a show, just meant to just prepare the ground, say along lines of "Anthropic wants to ban local models now, but we have seen that they are th...
-3
0
2026-02-28T13:41:59
soshulmedia
false
null
0
o7vltvj
false
/r/LocalLLaMA/comments/1rguty0/get_your_local_models_in_order_anthropic_just_got/o7vltvj/
false
-3
t1_o7vlsj7
Nah that was Grok, `spicy mode`
11
0
2026-02-28T13:41:45
SunshineSeattle
false
null
0
o7vlsj7
false
/r/LocalLLaMA/comments/1rh2lew/openai_pivot_investors_love/o7vlsj7/
false
11
t1_o7vlmq6
Thanks :D
1
0
2026-02-28T13:40:46
dumbelco
false
null
0
o7vlmq6
false
/r/LocalLLaMA/comments/1rh2tmu/benchmarking_opensource_llms_for_security/o7vlmq6/
false
1
t1_o7vlm2x
After trying q8_0 with Qwen3.5-27B, I'm getting significantly lower success rates with q8_0 enabled on some one-shot programming tasks: roughly 20% compared to 70%.
1
0
2026-02-28T13:40:39
yeawhatever
false
null
0
o7vlm2x
false
/r/LocalLLaMA/comments/1rg4zqv/followup_qwen3535ba3b_7_communityrequested/o7vlm2x/
false
1
t1_o7vljdx
Would be strange times if Anthropic suddenly had to resort to Chinese cloud hosting, if that is a thing. It's very strange, cause I would be very against Chinese tech, but our own corps and gov are looking to censor and deny access, jacking up hardware prices to push consumers out of owning anything, and more things than I...
2
0
2026-02-28T13:40:11
New_Performer8966
false
null
0
o7vljdx
false
/r/LocalLLaMA/comments/1rguty0/get_your_local_models_in_order_anthropic_just_got/o7vljdx/
false
2
t1_o7vlhyu
You can disable "thinking" mode by modifying the jinja template. https://www.reddit.com/r/LocalLLaMA/comments/1regq10/qwen_35_2735122b_jinja_template_modification/?utm_source=share&utm_medium=mweb3x&utm_name=mweb3xcss&utm_term=1&utm_content=share_button
2
0
2026-02-28T13:39:57
Adventurous-Paper566
false
null
0
o7vlhyu
false
/r/LocalLLaMA/comments/1rgzul5/are_you_ready_for_small_qwens/o7vlhyu/
false
2
t1_o7vlhek
S(c)am Altman.
215
0
2026-02-28T13:39:51
q0099
false
null
0
o7vlhek
false
/r/LocalLLaMA/comments/1rh2lew/openai_pivot_investors_love/o7vlhek/
false
215
t1_o7vlh9a
tbh 16k is good enough. 32k is better though.
3
0
2026-02-28T13:39:50
Sicarius_The_First
false
null
0
o7vlh9a
false
/r/LocalLLaMA/comments/1rgkc1b/back_in_my_day_localllama_were_the_pioneers/o7vlh9a/
false
3
t1_o7vlgab
i9-19400F? 19th generation? Please check the CPU. For your spec (i9-14900F?) + 32GB DDR5 + RTX 4070 Super (12GB): honestly, an i9 CPU is overkill; the 12GB VRAM on your 4070 Super is the real bottleneck. For the CPU, a 12th-gen i5 would be okay. Recommendations: Software: LM Studio is much friendlier for noobs and has a better UI for ...
2
0
2026-02-28T13:39:40
Rain_Sunny
false
null
0
o7vlgab
false
/r/LocalLLaMA/comments/1rh1q8j/i919400f_rtx_4070_super_12gb_32gb_ddr5_ram/o7vlgab/
false
2
t1_o7vlg2b
That's good to hear!! Great initiative 🥳😍
1
0
2026-02-28T13:39:38
thegravitydefier
false
null
0
o7vlg2b
false
/r/LocalLLaMA/comments/1rh2tmu/benchmarking_opensource_llms_for_security/o7vlg2b/
false
1
t1_o7vld1u
Can confirm. I can squeeze out a 170k context length with the 27B model and have it all sit in 32GB of VRAM. I think I'm on the Bartowski Q4_K_L or M. Either way, it works great in a coding harness. Actually having trouble deciding what worked better for my setup's IRL usage: this, or the Coder Next 80B MoE with expert layer...
4
0
2026-02-28T13:39:06
-_Apollo-_
false
null
0
o7vld1u
false
/r/LocalLLaMA/comments/1rgzul5/are_you_ready_for_small_qwens/o7vld1u/
false
4
t1_o7vlbot
Cool, will give it a shot - thank you 😊
2
0
2026-02-28T13:38:52
AlwaysInconsistant
false
null
0
o7vlbot
false
/r/LocalLLaMA/comments/1rgp97u/sooo_much_thinking/o7vlbot/
false
2
t1_o7vlafp
They were the first ones to go for adult content too. Not that I'm complaining, but it's a sign of desperation.
6
0
2026-02-28T13:38:39
Clear_Anything1232
false
null
0
o7vlafp
false
/r/LocalLLaMA/comments/1rh2lew/openai_pivot_investors_love/o7vlafp/
false
6
t1_o7vlah9
Local models are really difficult, especially when they're quantized down to 4-bit. Tool use is also hard with local models; there are so many variations, especially with older models: Qwen format, Hermes format, OpenAI format, etc. And lots of models can't make calls correctly, so you have to account for calls with sche...
1
0
2026-02-28T13:38:39
Total-Context64
false
null
0
o7vlah9
false
/r/LocalLLaMA/comments/1rh28o8/building_agents_is_fun_evaluating_them_is_not/o7vlah9/
false
1
t1_o7vl885
How do you know it's even better? Data?
2
0
2026-02-28T13:38:17
Synor
false
null
0
o7vl885
false
/r/LocalLLaMA/comments/1rgpwn5/qwen_3527b_punches_waaaaay_above_its_weight_with/o7vl885/
false
2
t1_o7vl5rg
[removed]
1
0
2026-02-28T13:37:51
[deleted]
true
null
0
o7vl5rg
false
/r/LocalLLaMA/comments/1rh2tmu/benchmarking_opensource_llms_for_security/o7vl5rg/
false
1
t1_o7vl4xf
Could you please point me to some good explanation (or give a short ELI5) of what MoE (mixture of experts) and dense models are? What's the difference? Which to choose?
2
1
2026-02-28T13:37:43
groosha
false
null
0
o7vl4xf
false
/r/LocalLLaMA/comments/1rgpwn5/qwen_3527b_punches_waaaaay_above_its_weight_with/o7vl4xf/
false
2
t1_o7vl138
I can't tell if you're being sarcastic or actually want my help. And I can't help because I don't understand what you're asking. "Get rid of what?" What's the system (Windows, Mac, Linux)? IDK.
1
0
2026-02-28T13:37:03
melanov85
false
null
0
o7vl138
false
/r/LocalLLaMA/comments/1rgcosw/trained_and_quantized_an_llm_on_a_gtx_1650_4gb/o7vl138/
false
1
t1_o7vkzoq
Yesss
3
0
2026-02-28T13:36:48
-Cubie-
false
null
0
o7vkzoq
false
/r/LocalLLaMA/comments/1rgzul5/are_you_ready_for_small_qwens/o7vkzoq/
false
3
t1_o7vkzf2
It was the Washington Post
21
0
2026-02-28T13:36:45
AnticitizenPrime
false
null
0
o7vkzf2
false
/r/LocalLLaMA/comments/1rh095c/deepseek_v4_will_be_released_next_week_and_will/o7vkzf2/
false
21
t1_o7vky3z
Moe rocks
2
0
2026-02-28T13:36:32
devilish-lavanya
false
null
0
o7vky3z
false
/r/LocalLLaMA/comments/1rgu3s0/qwen3535ba3b_be_overthinking_like/o7vky3z/
false
2
t1_o7vkxqg
Don’t say that! I’ll have to give up stepfun 3.5 or GLM 4.7 or GPT-OSS 120b… my hard drive cries out in pain
1
0
2026-02-28T13:36:28
silenceimpaired
false
null
0
o7vkxqg
false
/r/LocalLLaMA/comments/1rh1ec9/swapping_gptoss120b_for_qwen35122b_on_a_128gb/o7vkxqg/
false
1
t1_o7vkv01
At this point, OpenAI is like we will do anything for money!
18
0
2026-02-28T13:36:00
PaceImaginary8610
false
null
0
o7vkv01
false
/r/LocalLLaMA/comments/1rh2lew/openai_pivot_investors_love/o7vkv01/
false
18
t1_o7vks19
Yes, that's correct. Setting it to 0.00 disables top-k, because this configuration prioritizes the min-p sampler instead.
2
0
2026-02-28T13:35:29
Life-Screen-9923
false
null
0
o7vks19
false
/r/LocalLLaMA/comments/1rgp97u/sooo_much_thinking/o7vks19/
false
2
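As a concrete illustration of the setting being discussed, a llama-server invocation along these lines should behave as described (the model path and the exact min-p/temperature values are placeholders, not recommendations from this thread):

```bash
# --top-k 0 disables top-k filtering entirely, leaving min-p to prune
# tokens whose probability is too low relative to the top token.
llama-server -m model.gguf --top-k 0 --min-p 0.05 --temp 0.7
```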
t1_o7vkoqj
35B-A3B and 27B right now
1
0
2026-02-28T13:34:55
Cesar55142
false
null
0
o7vkoqj
false
/r/LocalLLaMA/comments/1rgvma8/which_size_of_qwen35_are_you_planning_to_run/o7vkoqj/
false
1
t1_o7vko0u
I oughta try this and see what the 35b model can do on the quants I can afford
4
0
2026-02-28T13:34:48
MaCl0wSt
false
null
0
o7vko0u
false
/r/LocalLLaMA/comments/1rgtxry/is_qwen35_a_coding_game_changer_for_anyone_else/o7vko0u/
false
4
t1_o7vkkko
I tried 27B; it thinks for half an hour before answering "hi". What's up with that?
1
0
2026-02-28T13:34:12
woahdudee2a
false
null
0
o7vkkko
false
/r/LocalLLaMA/comments/1rgzul5/are_you_ready_for_small_qwens/o7vkkko/
false
1
t1_o7vkkl4
Mistral offers fairly competitive pricing, and they seem well on their way to becoming the preferred provider for French government agencies. They're not that big, but I think they're doing fine.
2
0
2026-02-28T13:34:12
Adventurous-Paper566
false
null
0
o7vkkl4
false
/r/LocalLLaMA/comments/1rgokw1/a_monthly_update_to_my_where_are_openweight/o7vkkl4/
false
2
t1_o7vkdlb
[removed]
1
0
2026-02-28T13:33:00
[deleted]
true
null
0
o7vkdlb
false
/r/LocalLLaMA/comments/1r9x3pu/recommend_pdf_translator_that_handles_tables_well/o7vkdlb/
false
1
t1_o7vkd1h
I hope so. IIRC, UD versions of the 397B up to Q4 use mxfp4 for some tensors.
1
0
2026-02-28T13:32:54
beneath_steel_sky
false
null
0
o7vkd1h
false
/r/LocalLLaMA/comments/1rh0xwk/unsloth_dynamic_20_ggufs_now_selectively/o7vkd1h/
false
1
t1_o7vk9rz
A little too confident. I've seen multiple comments that LM Studio runs slower. It could be caused by how they default to loading models, and it's addressable, but this complaint has shown up on Reddit many times. I personally hate the training wheels on LM Studio. It's definitely a good place for new people starting...
3
0
2026-02-28T13:32:19
silenceimpaired
false
null
0
o7vk9rz
false
/r/LocalLLaMA/comments/1rgzul5/are_you_ready_for_small_qwens/o7vk9rz/
false
3
t1_o7vk7in
It's explained briefly in AesSedai's HF repos, but this is more detailed: https://www.reddit.com/r/LocalLLaMA/comments/1rfds1h/comment/o7kd7de/?context=3
3
0
2026-02-28T13:31:56
Ueberlord
false
null
0
o7vk7in
false
/r/LocalLLaMA/comments/1rgel19/new_qwen3535ba3b_unsloth_dynamic_ggufs_benchmarks/o7vk7in/
false
3
t1_o7vk6d7
Thanks for sharing! Is `--top-k 0.00` right?
1
0
2026-02-28T13:31:44
AlwaysInconsistant
false
null
0
o7vk6d7
false
/r/LocalLLaMA/comments/1rgp97u/sooo_much_thinking/o7vk6d7/
false
1
t1_o7vk566
[deleted]
1
0
2026-02-28T13:31:31
[deleted]
true
null
0
o7vk566
false
/r/LocalLLaMA/comments/1rh095c/deepseek_v4_will_be_released_next_week_and_will/o7vk566/
false
1
t1_o7vk4nx
The writing on the wall for this to happen was up there for a long long time.
34
0
2026-02-28T13:31:25
RoomyRoots
false
null
0
o7vk4nx
false
/r/LocalLLaMA/comments/1rh2lew/openai_pivot_investors_love/o7vk4nx/
false
34
t1_o7vk4gt
It doesn't take shared inference into account. There's no reason not to use shared inference if you're self-hosting a MoE model for 1 user.
1
0
2026-02-28T13:31:23
_Erilaz
false
null
0
o7vk4gt
false
/r/LocalLLaMA/comments/1rg94wu/llmfit_one_command_to_find_what_model_runs_on/o7vk4gt/
false
1
t1_o7vk40g
These models share the same architecture and model implementation, and I am 100% sure Qwen3.5-27B, Qwen3.5-35B-A3B, and Qwen3-Next-80B-A3B do not have the KV cache issue.
1
0
2026-02-28T13:31:19
lly0571
false
null
0
o7vk40g
false
/r/LocalLLaMA/comments/1rh1ec9/swapping_gptoss120b_for_qwen35122b_on_a_128gb/o7vk40g/
false
1
t1_o7vk2lz
I've been using it recently at around 100k context locally and it's still fine.
1
0
2026-02-28T13:31:03
Educational_Sun_8813
false
null
0
o7vk2lz
false
/r/LocalLLaMA/comments/1rgokw1/a_monthly_update_to_my_where_are_openweight/o7vk2lz/
false
1
t1_o7vk0qw
Ah that’s good to hear. I’m going to give it a go trust. Out of curiosity, which quants have you tried.
1
0
2026-02-28T13:30:44
Laabc123
false
null
0
o7vk0qw
false
/r/LocalLLaMA/comments/1rh1ec9/swapping_gptoss120b_for_qwen35122b_on_a_128gb/o7vk0qw/
false
1
t1_o7vjzty
I mean, you have to select gemini-2.5-flash-image, not gemini-2.5-flash, and then it works. Presumably they have two different models, one for text-only output and one for text+image output, because the model having to additionally support outputting images slightly decreases text-only performance. However, I believe mo...
3
0
2026-02-28T13:30:34
TemperatureMajor5083
false
null
0
o7vjzty
false
/r/LocalLLaMA/comments/1rh095c/deepseek_v4_will_be_released_next_week_and_will/o7vjzty/
false
3
t1_o7vjxar
It seems to me that Mistral Medium is MoE.
1
0
2026-02-28T13:30:07
Adventurous-Paper566
false
null
0
o7vjxar
false
/r/LocalLLaMA/comments/1rgokw1/a_monthly_update_to_my_where_are_openweight/o7vjxar/
false
1
t1_o7vjw3j
Good job, unsloth! Thrilled to see this data. Hopefully this becomes a standard thing on these most popular models.
6
0
2026-02-28T13:29:54
audioen
false
null
0
o7vjw3j
false
/r/LocalLLaMA/comments/1rh0xwk/unsloth_dynamic_20_ggufs_now_selectively/o7vjw3j/
false
6
t1_o7vju3k
That's fine
2
0
2026-02-28T13:29:34
Dreifach-M
false
null
0
o7vju3k
false
/r/LocalLLaMA/comments/1rgvma8/which_size_of_qwen35_are_you_planning_to_run/o7vju3k/
false
2
t1_o7vjtrm
I'm pretty comfortable with the command line (using a Mac also), but as another commenter pointed out, I might have to get used to fish. Why is Arch so hyped? I remember seeing memes from people who use SteamOS saying that they technically run Arch now, but I never understood why Arch is special.
1
0
2026-02-28T13:29:30
doesitoffendyou
false
null
0
o7vjtrm
false
/r/LocalLLaMA/comments/1rgyd8p/switching_from_windows_to_linux_what_distro_to/o7vjtrm/
false
1
t1_o7vjq43
llama-server has a built-in GUI, and it's way better to use in the long term for various reasons... If you cannot compile it yourself, just download a release for your hardware (CUDA in your case) from https://github.com/ggml-org/llama.cpp/releases. Then you can use Qwen3 Coder, Qwen-35, Qwen-27, Devstral-2-mini, and some...
1
0
2026-02-28T13:28:52
Educational_Sun_8813
false
null
0
o7vjq43
false
/r/LocalLLaMA/comments/1rgokw1/a_monthly_update_to_my_where_are_openweight/o7vjq43/
false
1
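A minimal sketch of the workflow described above, assuming a prebuilt CUDA release has been unpacked (the model filename and context/offload values are placeholders):

```bash
# Start llama-server with full GPU offload and a 32k context;
# the built-in web GUI is then served at http://localhost:8080
./llama-server -m Qwen3.5-27B-Instruct-Q4_K_M.gguf -c 32768 -ngl 99 --port 8080
```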
t1_o7vjnkh
Are you sure you're not mixing up the model versions? There is no 122B-A3B; you are probably running the smaller 35B-A3B model. If you were actually running the 122B model in a Q5 quant, the weights alone would take up over 80GB of memory before you even loaded a single token of context. Since your total system usa...
1
0
2026-02-28T13:28:24
Reasonable-Yak-3523
false
null
0
o7vjnkh
false
/r/LocalLLaMA/comments/1rh1ec9/swapping_gptoss120b_for_qwen35122b_on_a_128gb/o7vjnkh/
false
1
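The back-of-the-envelope check behind that 80GB figure, assuming roughly 5.5 bits per weight for a Q5_K_M-class quant (the exact bits-per-weight varies by quant mix):

$$122 \times 10^{9}\ \text{params} \times \frac{5.5\ \text{bits/param}}{8\ \text{bits/byte}} \approx 8.4 \times 10^{10}\ \text{bytes} \approx 84\ \text{GB}$$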
t1_o7vjly8
MoEs are not as powerful parameter-for-parameter, but I wouldn't be surprised if the 35B MoE is faster and at least as smart as a 9B.
8
0
2026-02-28T13:28:07
silenceimpaired
false
null
0
o7vjly8
false
/r/LocalLLaMA/comments/1rgzul5/are_you_ready_for_small_qwens/o7vjly8/
false
8
t1_o7vjkp2
I'm hoping its agentic coding capability will match Claude.
6
0
2026-02-28T13:27:54
yogthos
false
null
0
o7vjkp2
false
/r/LocalLLaMA/comments/1rh095c/deepseek_v4_will_be_released_next_week_and_will/o7vjkp2/
false
6
t1_o7vjh8z
Thank you for the in-depth response! Yeah, actually, if you could share the command, that would be helpful! Does Cachy have built-in tools to monitor temperature? I'm kind of cautious now after overheating my 3090 and was thinking about monitoring temperature under load before I can trust it again... Is it easy to keep Wi...
1
0
2026-02-28T13:27:17
doesitoffendyou
false
null
0
o7vjh8z
false
/r/LocalLLaMA/comments/1rgyd8p/switching_from_windows_to_linux_what_distro_to/o7vjh8z/
false
1
t1_o7vjgvn
Dude, you are talking about a different model: 35B-A3B-Q4 
1
0
2026-02-28T13:27:13
Reasonable-Yak-3523
false
null
0
o7vjgvn
false
/r/LocalLLaMA/comments/1rh1ec9/swapping_gptoss120b_for_qwen35122b_on_a_128gb/o7vjgvn/
false
1
t1_o7vjgdc
32GB
1
0
2026-02-28T13:27:07
moahmo88
false
null
0
o7vjgdc
false
/r/LocalLLaMA/comments/1rh0msv/qwen3535ba3b_q5_k_mbest_model_for_nvidia_16gb_gpus/o7vjgdc/
false
1