name: string
body: string
score: int64
controversiality: int64
created: timestamp[us]
author: string
collapsed: bool
edited: timestamp[us]
gilded: int64
id: string
locked: bool
permalink: string
stickied: bool
ups: int64
t1_o7wvvig
I think you're right, it has not yet been implemented for this model family. I think this PR should make it work but I haven't tried it. It's not merged yet. https://github.com/ggml-org/llama.cpp/pull/19493
9
0
2026-02-28T17:44:22
OsmanthusBloom
false
null
0
o7wvvig
false
/r/LocalLLaMA/comments/1rh8o4b/selfspeculative_decoding_for_qwen3535ba3b_in/o7wvvig/
false
9
t1_o7wvuy3
sorry but even in 2026 llama 3.* is still a solid FT base for many narrow tasks!
2
0
2026-02-28T17:44:17
golmgirl
false
null
0
o7wvuy3
false
/r/LocalLLaMA/comments/1rh46g2/why_some_still_playing_with_old_models_nostalgia/o7wvuy3/
false
2
t1_o7wvup0
I rent a server in France, use Coolify to manage it, OpenWebUI as my chat front-end, and OpenRouter to access whichever AI model I want, without being locked into one provider.
1
0
2026-02-28T17:44:15
Massive-Pickle-5490
false
null
0
o7wvup0
false
/r/LocalLLaMA/comments/1rfg3kx/american_closed_models_vs_chinese_open_models_is/o7wvup0/
false
1
t1_o7wvupu
Hector, you're right, and that's actually a really good find. With the education discount, the 28-core CPU with 60-core GPU at 256GB comes in at $5,039 and the 32-core CPU with 80-core GPU at 256GB lands at $6,389. Both are well within range depending on how patient I am with saving. For my use case the extra CPU and GPU cor...
1
0
2026-02-28T17:44:15
TelevisionGlass4258
false
null
0
o7wvupu
false
/r/LocalLLaMA/comments/1rfvh4c/going_fully_offline_with_ai_for_research_where_do/o7wvupu/
false
1
t1_o7wvsjy
I get like 20 t/s with the experts completely in RAM via --cpu-moe, with just the KV cache and, I suppose, the attention and router weights in VRAM. The full 256k KV cache at q8_0 fits into 8GB of VRAM.
3
0
2026-02-28T17:43:57
Zestyclose-Shift710
false
null
0
o7wvsjy
false
/r/LocalLLaMA/comments/1rgzul5/are_you_ready_for_small_qwens/o7wvsjy/
false
3
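A minimal sketch of the launch the comment above describes, assuming a llama.cpp build with llama-server on PATH; the GGUF filename is hypothetical. --cpu-moe keeps the MoE expert weights in system RAM while offloaded layers, router, and KV cache stay on the GPU, and --cache-type-k/--cache-type-v quantize the KV cache to q8_0:

```python
import subprocess

# Hypothetical model filename; flags mirror the setup in the comment above:
# experts in RAM, q8_0 KV cache small enough to fit in 8GB of VRAM.
subprocess.run([
    "llama-server",
    "-m", "qwen3.5-35b-a3b-q4_k_m.gguf",  # hypothetical GGUF path
    "-ngl", "99",             # offload all layers to the GPU...
    "--cpu-moe",              # ...but keep MoE expert weights on the CPU
    "-c", "262144",           # full 256k context
    "--cache-type-k", "q8_0",
    "--cache-type-v", "q8_0",
])
```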
t1_o7wvou2
The E2E encryption part is where this stops being a fun hobby project and becomes a real liability. If you don't understand the crypto implementation, you functionally don't have E2E encryption — you have code that looks like it does encryption. Vibe-coded crypto is worse than no crypto because it gives you false confi...
0
0
2026-02-28T17:43:24
tom_mathews
false
null
0
o7wvou2
false
/r/LocalLLaMA/comments/1rh46g2/why_some_still_playing_with_old_models_nostalgia/o7wvou2/
false
0
t1_o7wvnqx
noob
1
0
2026-02-28T17:43:15
NagiButor
false
null
0
o7wvnqx
false
/r/LocalLLaMA/comments/1re1nss/opensource_models_beat_opus_46_and_are_10x_cheaper/o7wvnqx/
false
1
t1_o7wvllg
True, but this tool can also filter out topics you just don't like. For example, if you don't want to see anything related to politics, news, food content, or a show you're not interested in that keeps popping up on your feed. You could opt for a more productive YouTube feed, and then later in the day switch back to reg...
1
0
2026-02-28T17:42:57
Cas_Dehook
false
null
0
o7wvllg
false
/r/LocalLLaMA/comments/1rh8iyf/my_ideas_about_protective_ai/o7wvllg/
false
1
t1_o7wvi3g
you have different agents running the same model, correct?
0
0
2026-02-28T17:42:27
muyuu
false
null
0
o7wvi3g
false
/r/LocalLLaMA/comments/1rh802w/what_if_llm_agents_passed_kvcache_to_each_other/o7wvi3g/
false
0
t1_o7wvhvu
Still waiting for the new rig (coming month, surely). But tried the below models on my current laptop (8GB VRAM):
LFM2-24B - Got 45-50 t/s @ Q4
Qwen3.5-35B - Don't know why, got only single digit t/s. Probably too big for my GPU. Need to try again later.
Nanbeige4.1-3B - I want an Instruct version of this model. 8GB not at all goo...
1
0
2026-02-28T17:42:25
pmttyji
false
null
0
o7wvhvu
false
/r/LocalLLaMA/comments/1rghfqj/february_is_almost_over_are_you_satisfied/o7wvhvu/
false
1
t1_o7wvhm5
I'm downloading your GGUF now, will give it a shot, thanks
1
0
2026-02-28T17:42:23
fragment_me
false
null
0
o7wvhm5
false
/r/LocalLLaMA/comments/1rg045u/overwhelmed_by_so_many_model_releases_within_a/o7wvhm5/
false
1
t1_o7wv21x
I can see full prompt reprocessing happening every time in LMStudio as well. LMStudio logs:
slot update_slots: id 0 | task 1259 | forcing full prompt re-processing due to lack of cache data (likely due to SWA or hybrid/recurrent memory, see https://github.com/ggml-org/llama.cpp/pull/13194#issuecomment-28683430...
1
0
2026-02-28T17:40:11
anubhav_200
false
null
0
o7wv21x
false
/r/LocalLLaMA/comments/1rh6455/anybody_able_to_get_qwen3535ba3b_working_with/o7wv21x/
false
1
t1_o7wuxjw
Ok, which is a bigger number, 3 or 12? A 5-year-old can get this right.
-1
0
2026-02-28T17:39:31
Emotional-Baker-490
false
null
0
o7wuxjw
false
/r/LocalLLaMA/comments/1rh43za/qwen_3535ba3b_is_beyond_expectations_its_replaced/o7wuxjw/
false
-1
t1_o7wux2c
I'm still using an EXL4 4bit model of the old mistral large 123b 2411 here.
4
0
2026-02-28T17:39:27
Slaghton
false
null
0
o7wux2c
false
/r/LocalLLaMA/comments/1rh46g2/why_some_still_playing_with_old_models_nostalgia/o7wux2c/
false
4
t1_o7wuwzl
Yeah exactly – it’s prompt tokens that get saved. In a text chain, each agent’s prompt includes all prior agents’ output as text, so the prompt grows at every hop. In latent mode, that prior context comes as KV-cache instead, so the prompt stays short (just the role instruction + question). The model still generates ro...
8
0
2026-02-28T17:39:26
proggmouse
false
null
0
o7wuwzl
false
/r/LocalLLaMA/comments/1rh802w/what_if_llm_agents_passed_kvcache_to_each_other/o7wuwzl/
false
8
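To make the accounting in the comment above concrete, a toy Python sketch of how prompt tokens grow per hop in a text chain versus staying flat when prior context arrives as KV-cache; all numbers are invented for illustration and do not come from the post:

```python
# Toy accounting of prompt tokens per hop; the constants are made up.
ROLE_PROMPT = 50        # role instruction + question, in tokens
OUTPUT_PER_AGENT = 300  # tokens each agent generates

def text_chain_prompt(hop: int) -> int:
    # Each agent re-reads every prior agent's output as text.
    return ROLE_PROMPT + hop * OUTPUT_PER_AGENT

def latent_chain_prompt(hop: int) -> int:
    # Prior context is handed over as KV-cache, so the text prompt stays short.
    return ROLE_PROMPT

for hop in range(4):
    print(f"hop {hop}: text={text_chain_prompt(hop)} latent={latent_chain_prompt(hop)}")
```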
t1_o7wuwn5
You should move to China and let us know all the rights they give you over there.  
1
0
2026-02-28T17:39:23
thescofflawl
false
null
0
o7wuwn5
false
/r/LocalLLaMA/comments/1rguty0/get_your_local_models_in_order_anthropic_just_got/o7wuwn5/
false
1
t1_o7wurqa
When I try to mix Radeon and Nvidia, should I use Vulkan?
1
0
2026-02-28T17:38:41
ppsirius
false
null
0
o7wurqa
false
/r/LocalLLaMA/comments/1rgtxry/is_qwen35_a_coding_game_changer_for_anyone_else/o7wurqa/
false
1
t1_o7wur6x
haven't read the paper but could (some of) the effect be explained by terminal repetition loops? i.e. when the model can't handle a problem, it ends up endlessly repeating itself till it hits max tokens. doesn't even have to be endless either, sometimes a model will get stuck in a loop for a long time but still manage t...
4
0
2026-02-28T17:38:37
golmgirl
false
null
0
o7wur6x
false
/r/LocalLLaMA/comments/1rh6pru/google_found_that_longer_chain_of_thought/o7wur6x/
false
4
t1_o7wuqmz
Please don't make more slop, there's plenty already
3
0
2026-02-28T17:38:32
Hector_Rvkp
false
null
0
o7wuqmz
false
/r/LocalLLaMA/comments/1rh4p4n/how_are_you_engaging_with_the_ai_podcast/o7wuqmz/
false
3
t1_o7wuog5
People are insane trying to disable the thinking, lol.  It is literally the secret sauce.
3
0
2026-02-28T17:38:12
Ok-Measurement-1575
false
null
0
o7wuog5
false
/r/LocalLLaMA/comments/1rh43za/qwen_3535ba3b_is_beyond_expectations_its_replaced/o7wuog5/
false
3
t1_o7wumxl
And this is why license-based software models are absolute shit.
1
0
2026-02-28T17:37:58
kintar1900
false
null
0
o7wumxl
false
/r/LocalLLaMA/comments/1d1lq62/local_text_to_speech/o7wumxl/
false
1
t1_o7wul0o
"LocalGeMMA" might get us sued, but what about something like "LocalGamma" or "OpenGamma"? Seems like that's pretty close but Google couldn't feasibly object to "Gamma", right?
2
0
2026-02-28T17:37:42
ttkciar
false
null
0
o7wul0o
false
/r/LocalLLaMA/comments/1rh3thm/rip_gemma_leave_your_memories_here/o7wul0o/
false
2
t1_o7wuku7
Have you gotten it to work with Qwen 3.5? Not working on my end.
1
0
2026-02-28T17:37:40
oxygen_addiction
false
null
0
o7wuku7
false
/r/LocalLLaMA/comments/1rgp2nu/anyone_doing_speculative_decoding_with_the_new/o7wuku7/
false
1
t1_o7wuiz8
My personal mini-ChatGPT, basically. I don't like using agents - my preference is conversational AI. So, programming concepts, math tutoring, brainstorming, thinking of counter-arguments, summarizing long documents... all that stuff. Generally nothing involving web search. 3.5-35b-a3b is insanely good at those tasks.
1
0
2026-02-28T17:37:25
Morphon
false
null
0
o7wuiz8
false
/r/LocalLLaMA/comments/1rgixxr/what_models_do_you_think_owned_february/o7wuiz8/
false
1
t1_o7wuf1a
could probably just look at the number of subs and views on the source video to weigh it on a slop-probability scale. Most of these have popped up very recently and just re-summarize existing popular content from mainstream channels, so topic filtering might fit into the slop-o-meter as well.
2
0
2026-02-28T17:36:51
Autobahn97
false
null
0
o7wuf1a
false
/r/LocalLLaMA/comments/1rh8iyf/my_ideas_about_protective_ai/o7wuf1a/
false
2
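One way the heuristic in the comment above could look, as a sketch; the weights and thresholds are invented for illustration, not taken from any real system:

```python
def slop_score(subscribers: int, views: int, channel_age_days: int) -> float:
    """Crude slop-probability heuristic per the comment above.

    Low subs/views on a very young channel push the score toward 1.0.
    All weights and thresholds are invented for illustration.
    """
    sub_factor = 1.0 / (1.0 + subscribers / 10_000)
    view_factor = 1.0 / (1.0 + views / 100_000)
    age_factor = 1.0 if channel_age_days < 90 else 0.3
    return min(1.0, 0.5 * sub_factor + 0.3 * view_factor + 0.2 * age_factor)

print(slop_score(subscribers=800, views=5_000, channel_age_days=30))
```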
t1_o7wue6h
went from saving the world to drone strikes for the Führer in 5 years.
12
0
2026-02-28T17:36:44
coolaznkenny
false
null
0
o7wue6h
false
/r/LocalLLaMA/comments/1rh2lew/openai_pivot_investors_love/o7wue6h/
false
12
t1_o7wuctu
Toight! Indeed, surprisingly small difference between 6 and 8! I'd go straight for the mxfp4 though, and only reconsider if it disappoints.
-1
0
2026-02-28T17:36:32
Hector_Rvkp
false
null
0
o7wuctu
false
/r/LocalLLaMA/comments/1rh43za/qwen_3535ba3b_is_beyond_expectations_its_replaced/o7wuctu/
false
-1
t1_o7wuc9d
Even for huge models? For small ones I'll believe it
1
0
2026-02-28T17:36:28
LagOps91
false
null
0
o7wuc9d
false
/r/LocalLLaMA/comments/1rgkc1b/back_in_my_day_localllama_were_the_pioneers/o7wuc9d/
false
1
t1_o7wu6ku
You're right, I didn't even try that
1
0
2026-02-28T17:35:38
AppealSame4367
false
null
0
o7wu6ku
false
/r/LocalLLaMA/comments/1rgzul5/are_you_ready_for_small_qwens/o7wu6ku/
false
1
t1_o7wu5gd
> Poor 16Gb
My 1080ti is crying
5
0
2026-02-28T17:35:28
NegotiationNo1504
false
null
0
o7wu5gd
false
/r/LocalLLaMA/comments/1rgzul5/are_you_ready_for_small_qwens/o7wu5gd/
false
5
t1_o7wu1c8
Thanks melanov85, genuinely appreciate the thoughtful replies. The dual-workload concern is fair to raise, but my pipeline is sequential rather than concurrent. Human reasoning first, then AI reasoning second, then I manually execute the generated code in the terminal, then iterate. No simultaneous LLM and compute layer fi...
1
0
2026-02-28T17:34:51
TelevisionGlass4258
false
null
0
o7wu1c8
false
/r/LocalLLaMA/comments/1rfvh4c/going_fully_offline_with_ai_for_research_where_do/o7wu1c8/
false
1
t1_o7wtynb
People wanting to support the business that tried to fuck open source over in the past is crazy. Anthropic can burn for all I care.
2
0
2026-02-28T17:34:28
Sthenosis
false
null
0
o7wtynb
false
/r/LocalLLaMA/comments/1rh7s7s/anthropic/o7wtynb/
false
2
t1_o7wtx5k
> The refusal to DoW is probably strategic, it is not linked at all with any willing to protect people from mass surveillance or whatever.
To believe this is to believe that Anthropic is willing to risk going out of business in order to virtue signal? An astoundingly bad take. Why? Because if the government succeed...
1
0
2026-02-28T17:34:15
Veastli
false
null
0
o7wtx5k
false
/r/LocalLLaMA/comments/1rh7s7s/anthropic/o7wtx5k/
false
1
t1_o7wtws0
Not a silly question at all. The questions come from GSM8K – a standard grade-school math benchmark. Stuff like: “Janet’s ducks lay 16 eggs per day. She eats three for breakfast and bakes muffins with four. She sells the rest at $2 each. How much does she make?” The 4-agent chain runs: Planner (make a plan) -> Critic (...
5
0
2026-02-28T17:34:12
proggmouse
false
null
0
o7wtws0
false
/r/LocalLLaMA/comments/1rh802w/what_if_llm_agents_passed_kvcache_to_each_other/o7wtws0/
false
5
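A bare-bones version of such a Planner -> Critic -> ... text chain against a local OpenAI-compatible endpoint; the URL, model name, and role prompts are placeholders, and the real harness presumably differs:

```python
import requests

API = "http://localhost:8080/v1/chat/completions"  # placeholder endpoint
ROLES = [
    ("Planner", "Make a step-by-step plan to solve the problem."),
    ("Critic", "Point out flaws or gaps in the plan."),
    ("Solver", "Carry out the (revised) plan and compute the answer."),
    ("Judge", "State the final numeric answer only."),
]

def run_chain(question: str) -> str:
    context = question
    for name, instruction in ROLES:
        resp = requests.post(API, json={
            "model": "local-model",  # placeholder model name
            "messages": [
                {"role": "system", "content": f"You are the {name}. {instruction}"},
                {"role": "user", "content": context},
            ],
        })
        output = resp.json()["choices"][0]["message"]["content"]
        # Text chain: every hop's output is appended and re-read downstream.
        context += f"\n\n{name}: {output}"
    return context

print(run_chain("Janet's ducks lay 16 eggs per day..."))
```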
t1_o7wtui1
Oops, forgot to add what the browser plugin looks like: https://preview.redd.it/2j106cpwt9mg1.png?width=3010&format=png&auto=webp&s=cce68f7ad512f80f460d78b4932deec13960704b
1
0
2026-02-28T17:33:52
Cas_Dehook
false
null
0
o7wtui1
false
/r/LocalLLaMA/comments/1rh8iyf/my_ideas_about_protective_ai/o7wtui1/
false
1
t1_o7wttmx
Why are you using old models? Only bots would talk about qwen2.5 so you look a lot like a clanker.
3
0
2026-02-28T17:33:44
Emotional-Baker-490
false
null
0
o7wttmx
false
/r/LocalLLaMA/comments/1rh3oty/mac_m4_24gb_local_stack_qwen25_14b_cogito_14b/o7wttmx/
false
3
t1_o7wtrxj
Correct. IQ4_XS on a 3090. LM-Studio-0.4.5-2-x64, fully loaded onto the GPU, 120K context (I could probably get more, but that's where I landed to get some stuff done).

export ANTHROPIC_BASE_URL=http://10.0.0.XXX:8080
export ANTHROPIC_AUTH_TOKEN=lmstudio
claude --model qwen3.5-35b-a3b

Auth token is prob...
2
0
2026-02-28T17:33:29
TheActualStudy
false
null
0
o7wtrxj
false
/r/LocalLLaMA/comments/1rh6455/anybody_able_to_get_qwen3535ba3b_working_with/o7wtrxj/
false
2
t1_o7wtnfj
This happens with other models too, I've seen it often.
26
0
2026-02-28T17:32:50
phenotype001
false
null
0
o7wtnfj
false
/r/LocalLLaMA/comments/1rh5luv/qwen35_35ba3b_evaded_the_zeroreasoning_budget_by/o7wtnfj/
false
26
t1_o7wtju7
If you test qwen on these tasks, please do share the results
8
0
2026-02-28T17:32:19
engineer-throwaway24
false
null
0
o7wtju7
false
/r/LocalLLaMA/comments/1rh43za/qwen_3535ba3b_is_beyond_expectations_its_replaced/o7wtju7/
false
8
t1_o7wtjl1
I suppose it makes sense that this would happen organically when you're not allowed to think ahead. The comment becomes the plan but then you need to change something, etc.
1
0
2026-02-28T17:32:16
AllergicToTeeth
false
null
0
o7wtjl1
false
/r/LocalLLaMA/comments/1rh5luv/qwen35_35ba3b_evaded_the_zeroreasoning_budget_by/o7wtjl1/
false
1
t1_o7wtemd
How about logical reasoning and classification tasks? Not coding tasks
1
0
2026-02-28T17:31:33
engineer-throwaway24
false
null
0
o7wtemd
false
/r/LocalLLaMA/comments/1rh43za/qwen_3535ba3b_is_beyond_expectations_its_replaced/o7wtemd/
false
1
t1_o7wtby1
I feel like I'm missing something, what's up?
11
0
2026-02-28T17:31:10
Kahvana
false
null
0
o7wtby1
false
/r/LocalLLaMA/comments/1rh3thm/rip_gemma_leave_your_memories_here/o7wtby1/
false
11
t1_o7wt87k
Sorry, I was (and am) on mobile. I updated to the link of the post with the image. Qwen3 coder next is in the final image of the gallery.
5
0
2026-02-28T17:30:37
spaceman_
false
null
0
o7wt87k
false
/r/LocalLLaMA/comments/1rh43za/qwen_3535ba3b_is_beyond_expectations_its_replaced/o7wt87k/
false
5
t1_o7wt6xp
Llama 3.x 70b. The world knowledge was on another level and it communicated in a nearly slopless kind of way.
6
0
2026-02-28T17:30:26
Hoppss
false
null
0
o7wt6xp
false
/r/LocalLLaMA/comments/1rh46g2/why_some_still_playing_with_old_models_nostalgia/o7wt6xp/
false
6
t1_o7wt6fc
If you're using unsloth they just released some fixes!
4
0
2026-02-28T17:30:22
joblesspirate
false
null
0
o7wt6fc
false
/r/LocalLLaMA/comments/1rgtxry/is_qwen35_a_coding_game_changer_for_anyone_else/o7wt6fc/
false
4
t1_o7wt5x7
What about --cpu-moe?
8
0
2026-02-28T17:30:18
Zestyclose-Shift710
false
null
0
o7wt5x7
false
/r/LocalLLaMA/comments/1rgzul5/are_you_ready_for_small_qwens/o7wt5x7/
false
8
t1_o7wt606
GGUF doesn't get along well with vLLM. Use an AWQ quant. [https://huggingface.co/cyankiwi/Qwen3.5-27B-AWQ-4bit](https://huggingface.co/cyankiwi/Qwen3.5-27B-AWQ-4bit) or [https://huggingface.co/QuantTrio/Qwen3.5-27B-AWQ](https://huggingface.co/QuantTrio/Qwen3.5-27B-AWQ)
1
0
2026-02-28T17:30:18
horriblesmell420
false
null
0
o7wt606
false
/r/LocalLLaMA/comments/1rgtxry/is_qwen35_a_coding_game_changer_for_anyone_else/o7wt606/
false
1
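For reference, loading one of those AWQ repos in vLLM might look like the sketch below; depending on the vLLM version the quantization method is usually auto-detected from the checkpoint config, so the explicit argument may be redundant:

```python
from vllm import LLM, SamplingParams

# AWQ repo from the comment above; vLLM typically reads the quantization
# method from the checkpoint config, so quantization="awq" is belt-and-braces.
llm = LLM(model="cyankiwi/Qwen3.5-27B-AWQ-4bit", quantization="awq")
params = SamplingParams(temperature=0.7, max_tokens=256)
for out in llm.generate(["Write a hello-world in Rust."], params):
    print(out.outputs[0].text)
```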
t1_o7wt5qd
when you say token saving, you mean for prompt processing?
5
0
2026-02-28T17:30:16
colin_colout
false
null
0
o7wt5qd
false
/r/LocalLLaMA/comments/1rh802w/what_if_llm_agents_passed_kvcache_to_each_other/o7wt5qd/
false
5
t1_o7wt2ie
Nano Banana is Gemini-Flash-Image and is multimodal. https://preview.redd.it/nw6k2r37t9mg1.jpeg?width=1080&format=pjpg&auto=webp&s=c1034cc75bdbf83399fa8119d75e78eca8553361
1
0
2026-02-28T17:29:49
Kamal965
false
null
0
o7wt2ie
false
/r/LocalLLaMA/comments/1rh095c/deepseek_v4_will_be_released_next_week_and_will/o7wt2ie/
false
1
t1_o7wt1dx
It sounds like you may have one of the few use cases where 256 genuinely makes sense today. I think the M3 ultra with the education discount is exactly within budget? At least according to Gemini. That thing in Europe costs like 40pc more :/
1
0
2026-02-28T17:29:39
Hector_Rvkp
false
null
0
o7wt1dx
false
/r/LocalLLaMA/comments/1rfvh4c/going_fully_offline_with_ai_for_research_where_do/o7wt1dx/
false
1
t1_o7wt047
That's hilarious :D
2
0
2026-02-28T17:29:27
Ok-Measurement-1575
false
null
0
o7wt047
false
/r/LocalLLaMA/comments/1rh5luv/qwen35_35ba3b_evaded_the_zeroreasoning_budget_by/o7wt047/
false
2
t1_o7wt03v
Anthropic is an extremely shady business. It's quite likely that internally there is a fight between idealists (who really believe in the safety goal), and those who just use all the talks about safety for marketing and monopolistic purposes.
2
0
2026-02-28T17:29:27
Guardian-Spirit
false
null
0
o7wt03v
false
/r/LocalLLaMA/comments/1rh7s7s/anthropic/o7wt03v/
false
2
t1_o7wsyo6
I get your point, different use cases for us perhaps.
1
0
2026-02-28T17:29:15
jarec707
false
null
0
o7wsyo6
false
/r/LocalLLaMA/comments/1rfv6ap/what_models_run_well_on_mac_mini_m4_16gb_for_text/o7wsyo6/
false
1
t1_o7wsxh2
in my opinion the simplest start on Windows is koboldcpp
1
0
2026-02-28T17:29:05
jacek2023
false
null
0
o7wsxh2
false
/r/LocalLLaMA/comments/1rh1q8j/i919400f_rtx_4070_super_12gb_32gb_ddr5_ram/o7wsxh2/
false
1
t1_o7wsrx8
[removed]
1
0
2026-02-28T17:28:17
[deleted]
true
null
0
o7wsrx8
false
/r/LocalLLaMA/comments/1rh3pit/support_anthropic/o7wsrx8/
false
1
t1_o7wsmje
The government disliked your comment 😂 
1
0
2026-02-28T17:27:30
Matt11908
false
null
0
o7wsmje
false
/r/LocalLLaMA/comments/1n5454j/what_is_the_best_use_case_for_an_uncensored_llm/o7wsmje/
false
1
t1_o7wslmd
Qwen3.5-35B-A3B-UD-IQ2_XXS.gguf works great for 12GB of VRAM as well. I'm getting around 80-90t/s in real usage (not counting thinking tokens).
1
0
2026-02-28T17:27:22
oxygen_addiction
false
null
0
o7wslmd
false
/r/LocalLLaMA/comments/1rh0msv/qwen3535ba3b_q5_k_mbest_model_for_nvidia_16gb_gpus/o7wslmd/
false
1
t1_o7wsidj
right? i expected LLaMA 3.3 to be something i could run quickly on older hardware (CPU only) at the cost of lower quality output, but it's dense and it chugs compared to any of the modern MoE models in the same size class, and it still has the same obvious LLM-isms as the newer ones. but now i also have Liquid and Ge...
1
0
2026-02-28T17:26:55
HopePupal
false
null
0
o7wsidj
false
/r/LocalLLaMA/comments/1rh46g2/why_some_still_playing_with_old_models_nostalgia/o7wsidj/
false
1
t1_o7ws7o2
Chrono-MythoBoros-Platypus2-Superhot-Supercot-hermes-l2-70B
2
0
2026-02-28T17:25:24
Specific-Goose4285
false
null
0
o7ws7o2
false
/r/LocalLLaMA/comments/1rgkc1b/back_in_my_day_localllama_were_the_pioneers/o7ws7o2/
false
2
t1_o7ws43w
Which is ludicrous for how mediocre those models are… and it feels like they’ve been giving away 4.1 Fast at (or even below) cost in order to keep the GPUs from sitting idle, which is even more expensive than not making a profit on tokens.
1
0
2026-02-28T17:24:54
coder543
false
null
0
o7ws43w
false
/r/LocalLLaMA/comments/1rgq0vc/can_a_local_hosted_llm_keep_up_with_grok_41_fast/o7ws43w/
false
1
t1_o7ws3pu
Why would you compare gemma 3 27b against qwen3.5 35b a3b instead of qwen3.5 27b
6
0
2026-02-28T17:24:50
Emotional-Baker-490
false
null
0
o7ws3pu
false
/r/LocalLLaMA/comments/1rh3thm/rip_gemma_leave_your_memories_here/o7ws3pu/
false
6
t1_o7ws0db
Thank you
1
0
2026-02-28T17:24:22
MrMrsPotts
false
null
0
o7ws0db
false
/r/LocalLLaMA/comments/1rgyqz7/has_anyone_got_qwen35_to_work_with_ollama/o7ws0db/
false
1
t1_o7wrx1r
> we'll never see this implemented in real inference engines
Getting rid of the filler is the easy part, just make it think in Traditional Chinese.
6
0
2026-02-28T17:23:54
SomeoneSimple
false
null
0
o7wrx1r
false
/r/LocalLLaMA/comments/1rh6pru/google_found_that_longer_chain_of_thought/o7wrx1r/
false
6
t1_o7wrswd
This might seem like a silly question, but can you provide some examples of the test prompts you used for gathering your sample/test data for these numbers?
16
0
2026-02-28T17:23:18
Historical-Camera972
false
null
0
o7wrswd
false
/r/LocalLLaMA/comments/1rh802w/what_if_llm_agents_passed_kvcache_to_each_other/o7wrswd/
false
16
t1_o7wrr91
I still don't know which models you managed to run on your setup
2
0
2026-02-28T17:23:04
jacek2023
false
null
0
o7wrr91
false
/r/LocalLLaMA/comments/1rghfqj/february_is_almost_over_are_you_satisfied/o7wrr91/
false
2
t1_o7wrqxd
Writing style. I like the prose of some older models, like rei v3 kto.
3
0
2026-02-28T17:23:01
Kahvana
false
null
0
o7wrqxd
false
/r/LocalLLaMA/comments/1rh46g2/why_some_still_playing_with_old_models_nostalgia/o7wrqxd/
false
3
t1_o7wrqfk
/lmg/
1
0
2026-02-28T17:22:57
Specific-Goose4285
false
null
0
o7wrqfk
false
/r/LocalLLaMA/comments/1rgkc1b/back_in_my_day_localllama_were_the_pioneers/o7wrqfk/
false
1
t1_o7wro9y
I'm about to try the same. I have a Ralph loop running fully locally successfully, producing high quality code, albeit slowly. I'll want to see what Openclaw can do with similar models - any tips/advice before I start? For Ralph, I had to a) put the agent in a sandbox, VM and/or Docker, b) give it just enough acces...
1
0
2026-02-28T17:22:37
k_kert
false
null
0
o7wro9y
false
/r/LocalLLaMA/comments/1rfp6bk/why_is_openclaw_even_this_popular/o7wro9y/
false
1
t1_o7wrnti
Solid advice, especially RAM over storage; can't argue with that on Apple Silicon. For general use 128GB is genuinely capable. My case is specific though: running the largest open-weight MoE reasoning models locally at high quantization for serious research work. 128GB gets me Q2 on Qwen3 235B, 256GB gets me Q4 and that...
1
0
2026-02-28T17:22:34
TelevisionGlass4258
false
null
0
o7wrnti
false
/r/LocalLLaMA/comments/1rfvh4c/going_fully_offline_with_ai_for_research_where_do/o7wrnti/
false
1
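A rough back-of-the-envelope for the sizing in the comment above, as a sketch; real GGUF sizes vary with the quant mix, and KV cache and runtime overhead come on top:

```python
def weight_gb(params_b: float, bits_per_weight: float) -> float:
    # Weight memory for params_b billion parameters at the given
    # average bits per weight (1 GB = 1e9 bytes here).
    return params_b * 1e9 * bits_per_weight / 8 / 1e9

for bits in (2.5, 4.5):  # ballpark effective bits/weight for Q2 and Q4 mixes
    print(f"~{bits} bpw: {weight_gb(235, bits):.0f} GB of weights")
# ~73 GB fits a 128GB machine with room for context; ~132 GB wants 256GB.
```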
t1_o7wrk3n
What you describe is possible with very modest hardware, since using an API is not in any way compute-intensive, but this is off-topic for LocalLLaMA, which is about local inference, not API inference. You might want to ask in r/Homelab, r/Programming, r/LLM, r/LLMDevs, r/Claude, or r/ChatGPT instead.
1
0
2026-02-28T17:22:03
ttkciar
false
null
0
o7wrk3n
false
/r/LocalLLaMA/comments/1rh6e38/how_to_make_ai_collaborate_to_get_my_work_done/o7wrk3n/
false
1
t1_o7wrip8
That’s very interesting, thanks for trying it out and posting the results. The prefill is working nicely. WSL will reduce performance slightly (maybe 5%) so that’s a good result I think. The decode isn’t optimised, I just haven’t spent much time on it so far. I think it has a lot of gains still to be made in it as i...
1
0
2026-02-28T17:21:51
mrstoatey
false
null
0
o7wrip8
false
/r/LocalLLaMA/comments/1rgfm00/i_built_a_hybrid_moe_runtime_that_does_3324_toks/o7wrip8/
false
1
t1_o7wrfno
I still cannot comprehend how they pulled it off. More importantly, how the regulators are allowing it.
2
0
2026-02-28T17:21:25
fazkan
false
null
0
o7wrfno
false
/r/LocalLLaMA/comments/1rh2lew/openai_pivot_investors_love/o7wrfno/
false
2
t1_o7wrd9m
Something tells me you are new. We've been on the gooning thing since pygmalion on /lmg/ where llama weights first got leaked.
2
0
2026-02-28T17:21:04
Specific-Goose4285
false
null
0
o7wrd9m
false
/r/LocalLLaMA/comments/1rgkc1b/back_in_my_day_localllama_were_the_pioneers/o7wrd9m/
false
2
t1_o7wr8aa
yes, announced and just forgotten as usual. I do not think it is related to public opinion
2
0
2026-02-28T17:20:23
Prestigious-Crow-845
false
null
0
o7wr8aa
false
/r/LocalLLaMA/comments/1rh2lew/openai_pivot_investors_love/o7wr8aa/
false
2
t1_o7wr6re
Hilarious but honestly nothing new. Every major provider, like OpenAI, Anthropic, and Google, does this in their “efficient” “non-reasoning” models. It’s kind of sad; we seriously lack models that are no-reasoning by definition
1
0
2026-02-28T17:20:10
MKU64
false
null
0
o7wr6re
false
/r/LocalLLaMA/comments/1rh5luv/qwen35_35ba3b_evaded_the_zeroreasoning_budget_by/o7wr6re/
false
1
t1_o7wr52g
scam saltman of closedai and dario ratmodei of misanthropic are 2 sides of the same coin
5
0
2026-02-28T17:19:56
PunnyPandora
false
null
0
o7wr52g
false
/r/LocalLLaMA/comments/1rh2lew/openai_pivot_investors_love/o7wr52g/
false
5
t1_o7wr1dk
Thanks.
1
0
2026-02-28T17:19:24
gmmarcus
false
null
0
o7wr1dk
false
/r/LocalLLaMA/comments/1rgcosw/trained_and_quantized_an_llm_on_a_gtx_1650_4gb/o7wr1dk/
false
1
t1_o7wqw3i
Yes, if you monitor output in the "thinking" phase of inference, and count the number of tokens inferred and/or look for substrings characteristic of rethinking and/or look for looping, you can abort inference and try something else (like re-prompting with thinking turned off, or prompting another model for the think-p...
28
0
2026-02-28T17:18:39
ttkciar
false
null
0
o7wqw3i
false
/r/LocalLLaMA/comments/1rh6pru/google_found_that_longer_chain_of_thought/o7wqw3i/
false
28
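A minimal sketch of the watchdog idea in the comment above, assuming a token stream from any inference API; the n-gram window, repeat limit, and budget are arbitrary choices:

```python
from collections import Counter
from typing import Iterable

def should_abort(tokens: Iterable[str], max_tokens: int = 4096,
                 ngram: int = 8, max_repeats: int = 4) -> bool:
    """Watch a thinking-phase token stream; return True if we should abort.

    Aborts when the token budget is exhausted or when any n-gram repeats
    too often, a cheap proxy for a rethinking/repetition loop.
    """
    seen: Counter = Counter()
    window: list[str] = []
    for count, tok in enumerate(tokens, start=1):
        if count > max_tokens:
            return True
        window.append(tok)
        if len(window) >= ngram:
            seen[tuple(window[-ngram:])] += 1
            if seen[tuple(window[-ngram:])] >= max_repeats:
                return True
    return False

# e.g. feed it the streamed thinking tokens; on True, re-prompt with
# thinking turned off or hand the think-phase to another model.
print(should_abort(["Wait,", "let", "me", "reconsider."] * 50))
```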
t1_o7wqsvn
They absolutely did NOT announce those things were excluded. Their contract explicitly says that the government must be able to use OpenAI models for all lawful purposes.  Mass surveilling your own citizens and fully autonomous weapons are both technically lawful, especially when the government is interpreting their ...
26
0
2026-02-28T17:18:13
Reachingabittoohigh
false
null
0
o7wqsvn
false
/r/LocalLLaMA/comments/1rh2lew/openai_pivot_investors_love/o7wqsvn/
false
26
t1_o7wqq0s
Thanks Brother!
1
0
2026-02-28T17:17:49
Less_Strain7577
false
null
0
o7wqq0s
false
/r/LocalLLaMA/comments/1rgcosw/trained_and_quantized_an_llm_on_a_gtx_1650_4gb/o7wqq0s/
false
1
t1_o7wqokz
> Most people will switch to claude, they will gain massive influence
You live in a very small bubble if you think most people are politically obsessed Redditors.
2
0
2026-02-28T17:17:37
Informal_Warning_703
false
null
0
o7wqokz
false
/r/LocalLLaMA/comments/1rh7s7s/anthropic/o7wqokz/
false
2
t1_o7wqmyr
So this wasn't new news. The reason this blew up is because the Government put them in the spotlight, claiming they are "too liberal". This has been an ongoing issue in the news. This is not a PR stunt; if it was, it just worked out in Anthropic's favor. I have started paying $100 instead of $20 just to support them for t...
2
0
2026-02-28T17:17:23
Dudebro-420
false
null
0
o7wqmyr
false
/r/LocalLLaMA/comments/1rh7s7s/anthropic/o7wqmyr/
false
2
t1_o7wqgti
Hell yeah, I heard about this a few years ago. People are blindly buying Dario's grift. People lost critical thinking to LLMs
2
0
2026-02-28T17:16:32
Realistic_Muscles
false
null
0
o7wqgti
false
/r/LocalLLaMA/comments/1rh7s7s/anthropic/o7wqgti/
false
2
t1_o7wqghu
I am a noob here: is 35GB of VRAM the only way to run this well?
1
0
2026-02-28T17:16:30
Full_Tomato_5627
false
null
0
o7wqghu
false
/r/LocalLLaMA/comments/1rdlbvc/qwenqwen3535ba3b_hugging_face/o7wqghu/
false
1
t1_o7wqdxd
They announced they are open to it but put it on the back burner after bad publicity.
1
0
2026-02-28T17:16:09
Clear_Anything1232
false
null
0
o7wqdxd
false
/r/LocalLLaMA/comments/1rh2lew/openai_pivot_investors_love/o7wqdxd/
false
1
t1_o7wqdla
To be honest it's kinda based of them to stick with their values, even though I am not a fan of their whole safety HR corpo motto.
1
0
2026-02-28T17:16:06
Specific-Goose4285
false
null
0
o7wqdla
false
/r/LocalLLaMA/comments/1rguty0/get_your_local_models_in_order_anthropic_just_got/o7wqdla/
false
1
t1_o7wq61f
Your link doesn't work
1
0
2026-02-28T17:15:03
Hector_Rvkp
false
null
0
o7wq61f
false
/r/LocalLLaMA/comments/1rh43za/qwen_3535ba3b_is_beyond_expectations_its_replaced/o7wq61f/
false
1
t1_o7wq4qq
It's a trick question. You're asking for an episode that doesn't exist. Its absence from the training set is not an indicator of how good the model is. If you give it the means to find the answer, that gap is solved. In order to get a smaller, fast model, there are tradeoffs, and that tradeoff can be compensate...
1
0
2026-02-28T17:14:52
_raydeStar
false
null
0
o7wq4qq
false
/r/LocalLLaMA/comments/1rgkyt5/qwen35_27b_scores_42_on_intelligence_index_and_is/o7wq4qq/
false
1
t1_o7wq3de
Those two cards almost certainly sit on different PCIe switches depending on your motherboard, which means expert routing hops across the PCIe fabric rather than staying on-die. With A3B active params the cross-GPU communication is minimal per token, but at 100k context the KV cache transfer pattern across mismatched m...
-1
0
2026-02-28T17:14:41
paulahjort
false
null
0
o7wq3de
false
/r/LocalLLaMA/comments/1rh43za/qwen_3535ba3b_is_beyond_expectations_its_replaced/o7wq3de/
false
-1
t1_o7wq0ao
The tool I work on professionally has an optional 'improve my prompt' feature which uses a smaller lower cost model to amongst other things, condense user prompts. But given how short most user prompts are, the main purpose is clarity and structure for the user themselves when reading back through conversations. Users ...
1
0
2026-02-28T17:14:15
BigYoSpeck
false
null
0
o7wq0ao
false
/r/LocalLLaMA/comments/1rh631z/experimenting_with_a_middleware_to_compress_llm/o7wq0ao/
false
1
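The condensing pass described in the comment above could be as simple as the following sketch; the endpoint, model name, and system prompt are placeholders, not the tool's actual implementation:

```python
import requests

API = "http://localhost:8080/v1/chat/completions"  # placeholder endpoint

def improve_prompt(raw: str) -> str:
    """Use a small, cheap model to restructure a user prompt before the
    main model sees it; model name and endpoint are placeholders."""
    resp = requests.post(API, json={
        "model": "small-condenser",  # placeholder small/low-cost model
        "messages": [
            {"role": "system",
             "content": "Rewrite the user's request clearly and concisely, "
                        "keeping every requirement. Output only the rewrite."},
            {"role": "user", "content": raw},
        ],
    })
    return resp.json()["choices"][0]["message"]["content"]

print(improve_prompt("so basically i want like a script that um sorts my csv by date maybe?"))
```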
t1_o7wpytn
Anthropic already committed to monitoring user inputs.
2
0
2026-02-28T17:14:03
zball_
false
null
0
o7wpytn
false
/r/LocalLLaMA/comments/1rh7s7s/anthropic/o7wpytn/
false
2
t1_o7wpxva
Yes, because the full grok 6 model is around 7T parameters and I believe grok 4 is around 3 to 4T parameters
1
0
2026-02-28T17:13:55
RhubarbSimilar1683
false
null
0
o7wpxva
false
/r/LocalLLaMA/comments/1rgq0vc/can_a_local_hosted_llm_keep_up_with_grok_41_fast/o7wpxva/
false
1
t1_o7wpwg8
that's not what this means at all
1
0
2026-02-28T17:13:43
Moist-Length1766
false
null
0
o7wpwg8
false
/r/LocalLLaMA/comments/1rguty0/get_your_local_models_in_order_anthropic_just_got/o7wpwg8/
false
1
t1_o7wpp7y
Good 4-bit quantizations of Qwen 3.5 have performance close to the original unquantized 16-bit model. It makes much more sense to compare parameter counts than compare unquantized FP16 sizes to QAT MXFP4.
2
0
2026-02-28T17:12:43
Federal-Effective879
false
null
0
o7wpp7y
false
/r/LocalLLaMA/comments/1rh43za/qwen_3535ba3b_is_beyond_expectations_its_replaced/o7wpp7y/
false
2
t1_o7wpgn2
are you using unsloth's model or LM Studio's model?
1
0
2026-02-28T17:11:30
anubhav_200
false
null
0
o7wpgn2
false
/r/LocalLLaMA/comments/1rh6455/anybody_able_to_get_qwen3535ba3b_working_with/o7wpgn2/
false
1
t1_o7wpgij
You got it man. No problem, happy to help. Just my opinion for coding with AI: stick with Copilot, especially for cybersecurity projects where code quality matters. It's got the filters and optimizations that a local 7B just can't match on your hardware. Nothing wrong with experimenting locally to learn...
1
0
2026-02-28T17:11:29
melanov85
false
null
0
o7wpgij
false
/r/LocalLLaMA/comments/1rgcosw/trained_and_quantized_an_llm_on_a_gtx_1650_4gb/o7wpgij/
false
1
t1_o7wpef2
Adult content? OpenAI forbids any type of it
2
0
2026-02-28T17:11:11
Prestigious-Crow-845
false
null
0
o7wpef2
false
/r/LocalLLaMA/comments/1rh2lew/openai_pivot_investors_love/o7wpef2/
false
2
t1_o7wpdjv
I'm sorry, but until your project supports local inference it is off-topic for LocalLLaMA.
1
0
2026-02-28T17:11:04
ttkciar
false
null
0
o7wpdjv
false
/r/LocalLLaMA/comments/1rh7j43/an_open_source_llm_router_that_cuts_api_costs_by/o7wpdjv/
false
1
t1_o7wpdgm
It does prompt processing at double the speed of gpt-oss-120b on my system (and glm-4.7-flash too), chews through web pages, easily the better option.
0
0
2026-02-28T17:11:03
Daniel_H212
false
null
0
o7wpdgm
false
/r/LocalLLaMA/comments/1rh43za/qwen_3535ba3b_is_beyond_expectations_its_replaced/o7wpdgm/
false
0
t1_o7wp6ua
Try a next-gen orchestrator/provisioner for GPU compute. You can save 60% on that cost. [https://github.com/theoddden/Terradev](https://github.com/theoddden/Terradev)
1
0
2026-02-28T17:10:08
paulahjort
false
null
0
o7wp6ua
false
/r/LocalLLaMA/comments/1rh7mlv/before_i_rewrite_my_stack_again_advice/o7wp6ua/
false
1