name: string
body: string
score: int64
controversiality: int64
created: timestamp[us]
author: string
collapsed: bool
edited: timestamp[us]
gilded: int64
id: string
locked: bool
permalink: string
stickied: bool
ups: int64
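A table with this schema can be queried directly for quick inspection once it is materialized as Parquet. A minimal sketch using the DuckDB CLI; the filename comments.parquet is hypothetical:

duckdb :memory: "SELECT author, score, created FROM 'comments.parquet' ORDER BY created DESC LIMIT 5"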
t1_o7yev2j
idk, Starlink pretty much saved me from the local ISP cartel's price & speed collusion here, so at least I'm thankful for this
3
0
2026-02-28T22:31:45
maroule
false
null
0
o7yev2j
false
/r/LocalLLaMA/comments/1rh2lew/openai_pivot_investors_love/o7yev2j/
false
3
t1_o7yenzf
Contract-driven development before any code starts is a strong design choice - prevents the classic issue where agents start executing before the spec is solid. The idle-aware queue via JSONL transcript files is clever too. One thing I'd add: each agent's CLAUDE.md or equivalent needs explicit scope...
1
0
2026-02-28T22:30:39
Joozio
false
null
0
o7yenzf
false
/r/LocalLLaMA/comments/1rhcckv/vibehq_orchestrate_multiple_claude_code_codex/o7yenzf/
false
1
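The idle-aware queue mentioned in the comment above can be approximated by watching each agent's transcript file for inactivity. A minimal shell sketch, assuming one appended-to JSONL transcript per agent; agent.jsonl, the 5s poll, and the 60s idle threshold are all made-up values (stat -c %Y is GNU stat):

# poll the transcript's mtime; if nothing was appended for 60s, treat the agent as idle
while sleep 5; do
  idle=$(( $(date +%s) - $(stat -c %Y agent.jsonl) ))
  [ "$idle" -gt 60 ] && echo "agent idle for ${idle}s, dispatch next task"
done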
t1_o7yenee
you're the guy making all the ai slop on youtube?
2
0
2026-02-28T22:30:34
gphie
false
null
0
o7yenee
false
/r/LocalLLaMA/comments/1rhf9is/what_do_i_do_with_my_life/o7yenee/
false
2
t1_o7yemgy
[removed]
1
0
2026-02-28T22:30:26
[deleted]
true
null
0
o7yemgy
false
/r/LocalLLaMA/comments/1rh9u4r/this_sub_is_incredible/o7yemgy/
false
1
t1_o7ye5d7
Long story short, no. llama.cpp makes it a lot easier to see benchmark values, but they "feel" about the same. I still like having llama.cpp more. I can work with plain GGUF files now, instead of having to use Ollama's intermediary "modelfiles". Also, llama.cpp makes it a little easier to manage runtime config values,...
1
0
2026-02-28T22:27:48
arcanemachined
false
null
0
o7ye5d7
false
/r/LocalLLaMA/comments/1rgokw1/a_monthly_update_to_my_where_are_openweight/o7ye5d7/
false
1
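On the GGUF-vs-modelfile point above: llama.cpp serves a GGUF directly, while Ollama wraps it in a Modelfile first. A minimal sketch; the model path is illustrative:

# llama.cpp: point the server straight at the GGUF
llama-server -m ./qwen3.5-35b-a3b-q4_k_m.gguf --port 8080
# Ollama: wrap the same file in a Modelfile, then import it
echo "FROM ./qwen3.5-35b-a3b-q4_k_m.gguf" > Modelfile
ollama create my-qwen -f Modelfile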
t1_o7ye4fw
This is why bookmarks were invented x)
1
0
2026-02-28T22:27:40
IngwiePhoenix
false
null
0
o7ye4fw
false
/r/LocalLLaMA/comments/1rgkc1b/back_in_my_day_localllama_were_the_pioneers/o7ye4fw/
false
1
t1_o7ye4i0
I tried yesterday, no luck… thanks for the feedback though
1
0
2026-02-28T22:27:40
Accomplished_Code141
false
null
0
o7ye4i0
false
/r/LocalLLaMA/comments/1r807kb/segmentation_fault_when_loading_models_across/o7ye4i0/
false
1
t1_o7ye4im
32B is just out of my reach with an acceptable quant and speed, so I can't give any comparison. But I think they are comparable, so the 27B might be better since it is smaller. Although those q3.5 like to think a lot
2
0
2026-02-28T22:27:40
KURD_1_STAN
false
null
0
o7ye4im
false
/r/LocalLLaMA/comments/1rh9dt3/do_you_find_qwen314bq8_0_15gb_smarter_than/o7ye4im/
false
2
t1_o7ydvu2
I think you missed the point. The outrage is over the decree that every department is forbidden to use them regardless of their usefulness in their context. Also… modern democracies are not meant to be businesses. Business incentives do not align well with society's well-being. Different scopes and stakeholder goals.
3
0
2026-02-28T22:26:22
aglehg
false
null
0
o7ydvu2
false
/r/LocalLLaMA/comments/1rgn4ki/president_trump_orders_all_federal_agencies_in/o7ydvu2/
false
3
t1_o7ydtor
Thank you for the link! 'Model collapse' is exactly the technical term for the 'echo' I was describing in my post. It’s pretty wild (and scary) to see actual research proving that AI training on AI content leads to this kind of reality degradation. This is why I feel like a filter is a necessity for data integrity befo...
2
0
2026-02-28T22:26:02
ProductTop9807
false
null
0
o7ydtor
false
/r/LocalLLaMA/comments/1rhenw3/the_ai_feedback_loop_is_officially_closed_and_i/o7ydtor/
false
2
t1_o7ydnso
Your post is getting popular and we just featured it on our Discord! [Come check it out!](https://discord.gg/PgFhZ8cnWW) You've also been given a special flair for your contribution. We appreciate your post! *I am a bot and this action was performed automatically.*
1
0
2026-02-28T22:25:09
WithoutReason1729
false
null
0
o7ydnso
false
/r/LocalLLaMA/comments/1rh9u4r/this_sub_is_incredible/o7ydnso/
true
1
t1_o7ydni6
Follow-up: Full ablation results + paper published As promised, here's the complete ablation table. Each A-config removes exactly one component from the full pipeline: |Config|Generic|Domain|Latency...
1
0
2026-02-28T22:25:07
ikchain
false
null
0
o7ydni6
false
/r/LocalLLaMA/comments/1r8jgwv/i_built_a_local_ai_dev_assistant_with_hybrid_rag/o7ydni6/
false
1
t1_o7ydmo9
Thank you for this systemic breakdown. I agree government regulation is the ultimate goal, but the 'system' is just people like us and we are the ones who will live in this future. The only way to tackle such a massive problem is to break it into smaller pieces, as fixing the whole internet at once is impossible. If w...
1
0
2026-02-28T22:24:59
ProductTop9807
false
null
0
o7ydmo9
false
/r/LocalLLaMA/comments/1rhenw3/the_ai_feedback_loop_is_officially_closed_and_i/o7ydmo9/
false
1
t1_o7ydjc6
I have a laptop with a 6GB RTX A3000. Still trying to work out how to use it to support at least part of an agentic coding approach. Put simply, unless we find a way to run larger models on smaller GPUs, it probably isn't that much use even if I can find a way to have excellent context management and very very very focu...
1
0
2026-02-28T22:24:29
Protopia
false
null
0
o7ydjc6
false
/r/LocalLLaMA/comments/1rhe4oo/qwen_35_27b_and_qwen3535ba3b_ran_locally_on_my/o7ydjc6/
false
1
t1_o7ydbnv
what was your setup?
1
0
2026-02-28T22:23:19
TheAncientOnce
false
null
0
o7ydbnv
false
/r/LocalLLaMA/comments/1rgynmf/dual_3060_and_single_3090_whats_the_point_of_the/o7ydbnv/
false
1
t1_o7yd9xb
I tried Ministral 3B and another one I forgot. I tried Gemma 3 and also got an error. I just updated LM Studio and they are working now.
1
0
2026-02-28T22:23:03
Takezo1000
false
null
0
o7yd9xb
false
/r/LocalLLaMA/comments/1rew5ui/lm_studio_error_when_generating_message_repeated/o7yd9xb/
false
1
t1_o7yd8ka
I think it's giving me an edge in school. I'm very anti-generative AI in most cases but being able to distill down a stack of PDFs is such a godsend. And have it answer questions? Goddamn magic.
2
0
2026-02-28T22:22:50
radically_unoriginal
false
null
0
o7yd8ka
false
/r/LocalLLaMA/comments/1rh9u4r/this_sub_is_incredible/o7yd8ka/
false
2
t1_o7yd1zq
Yes, the 27b is better - the 35b-a3b is much easier to run at good speed. Working with images, the 27b (BF16) feels much closer to the 122-a10b (FP8) than to the 35b-a3b (BF16). Given three arbitrary tiers: 1. 397b-a17b 2. 122b-a10b & 27b 3. 35b-a3b
1
0
2026-02-28T22:21:50
reto-wyss
false
null
0
o7yd1zq
false
/r/LocalLLaMA/comments/1rhfjeg/qwen3527b_vs_qwen3535ba3b/o7yd1zq/
false
1
t1_o7yd00u
Is that something that’s ok for a president to say? He makes decisions alone? What kind of democracy is that? I feel like I'm living in Idiocracy. 😫
1
0
2026-02-28T22:21:32
aglehg
false
null
0
o7yd00u
false
/r/LocalLLaMA/comments/1rgn4ki/president_trump_orders_all_federal_agencies_in/o7yd00u/
false
1
t1_o7ycxs8
Possibly. I used Unsloth quants for a bit, but in the end I made my own. In general, Qwens (QwQ, Qwen3, Qwen3-Think) tended to have long chains of thought like that. As I said, I disabled thinking and as a general assistant it works fine. I have one with thinking enabled for coding and that works great too. My laptop is...
4
0
2026-02-28T22:21:12
lans_throwaway
false
null
0
o7ycxs8
false
/r/LocalLLaMA/comments/1rh43za/qwen_3535ba3b_is_beyond_expectations_its_replaced/o7ycxs8/
false
4
t1_o7ycugz
The 27B is definitely smarter but is much slower because of the difference in active parameters. It really just depends on your hardware and use case. I think the reason people (myself included) are so excited about 35B-A3B is that it runs quickly on consumer hardware with surprisingly good results. It’s a perfect fit ...
1
0
2026-02-28T22:20:42
ExtremeMuch7857
false
null
0
o7ycugz
false
/r/LocalLLaMA/comments/1rhfjeg/qwen3527b_vs_qwen3535ba3b/o7ycugz/
false
1
t1_o7ycqwz
[removed]
1
0
2026-02-28T22:20:11
[deleted]
true
null
0
o7ycqwz
false
/r/LocalLLaMA/comments/1rhenw3/the_ai_feedback_loop_is_officially_closed_and_i/o7ycqwz/
false
1
t1_o7yclz4
Hallucination rate 110% https://preview.redd.it/neaztblu8bmg1.png?width=864&format=png&auto=webp&s=11e3d7e6b13e680a733f21f8b0f1481db125a1c8
30
0
2026-02-28T22:19:25
-Ellary-
false
null
0
o7yclz4
false
/r/LocalLLaMA/comments/1rh9u4r/this_sub_is_incredible/o7yclz4/
false
30
t1_o7yclox
Do you feel it's better than Qwen Coder Next for coding tasks?
1
0
2026-02-28T22:19:23
Wolf-Shade
false
null
0
o7yclox
false
/r/LocalLLaMA/comments/1rh43za/qwen_3535ba3b_is_beyond_expectations_its_replaced/o7yclox/
false
1
t1_o7yclkk
Yes! I use an ASUS Prime Z690-P WiFi D4 LGA 1700
4
0
2026-02-28T22:19:22
klenen
false
null
0
o7yclkk
false
/r/LocalLLaMA/comments/1rh9u4r/this_sub_is_incredible/o7yclkk/
false
4
t1_o7yclfr
Good point — it’s not about capability differences. I separate frontend and backend mainly to model real multi-agent co-work: clear ownership, defined interfaces, and structured handoffs. It forces explicit API contracts and reduces context entanglement, even if the underlying model could technically do both. The goa...
-6
0
2026-02-28T22:19:21
GGwithRabbit
false
null
0
o7yclfr
false
/r/LocalLLaMA/comments/1rhcckv/vibehq_orchestrate_multiple_claude_code_codex/o7yclfr/
false
-6
t1_o7yckf9
I would need some examples, it's getting a tad too abstract to derive anything meaningful.
2
0
2026-02-28T22:19:12
Economy_Cabinet_7719
false
null
0
o7yckf9
false
/r/LocalLLaMA/comments/1rhenw3/the_ai_feedback_loop_is_officially_closed_and_i/o7yckf9/
false
2
t1_o7ycjmk
> i love the attitude but that's not how the corporate world works. lol. I'm not allowed to go anywhere near an AI cloud supplier with my work tasks.
1
0
2026-02-28T22:19:04
AlwaysLateToThaParty
false
null
0
o7ycjmk
false
/r/LocalLLaMA/comments/1rh43za/qwen_3535ba3b_is_beyond_expectations_its_replaced/o7ycjmk/
false
1
t1_o7ychnn
[https://huggingface.co/ibm-granite/granite-docling-258M](https://huggingface.co/ibm-granite/granite-docling-258M) has vision.
1
0
2026-02-28T22:18:47
TheRealMasonMac
false
null
0
o7ychnn
false
/r/LocalLLaMA/comments/1rgzul5/are_you_ready_for_small_qwens/o7ychnn/
false
1
t1_o7ycgrd
https://www.reddit.com/r/LocalLLaMA/comments/1rhfque/qwen3_coder_next_qwen35_27b_devstral_small_2_rust/?utm_source=share&utm_medium=web3x&utm_name=web3xcss&utm_term=1&utm_content=share_button
2
0
2026-02-28T22:18:39
Holiday_Purpose_3166
false
null
0
o7ycgrd
false
/r/LocalLLaMA/comments/1rg41ss/qwen35_27b_vs_devstral_small_2_nextjs_solidity/o7ycgrd/
false
2
t1_o7ycgmf
> That’s not my understanding of what happened. Then your understanding doesn't match the public statements by the US government and Anthropic.
2
0
2026-02-28T22:18:38
_bones__
false
null
0
o7ycgmf
false
/r/LocalLLaMA/comments/1rgn4ki/president_trump_orders_all_federal_agencies_in/o7ycgmf/
false
2
t1_o7yccsz
You can even mix brands, like Nvidia + AMD, but you need to use Vulkan so they all work together.
2
0
2026-02-28T22:18:02
TaroOk7112
false
null
0
o7yccsz
false
/r/LocalLLaMA/comments/1rh9u4r/this_sub_is_incredible/o7yccsz/
false
2
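A minimal sketch of the mixed-vendor setup described above, assuming a llama.cpp build compiled with the Vulkan backend; the model path is illustrative:

# build with Vulkan so NVIDIA and AMD cards both appear as Vulkan devices
cmake -B build -DGGML_VULKAN=ON && cmake --build build -j
# split layers across all detected devices
./build/bin/llama-server -m model.gguf -ngl 99 --split-mode layer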
t1_o7yc9yf
Did you measure any of this..?
2
0
2026-02-28T22:17:36
Budget-Juggernaut-68
false
null
0
o7yc9yf
false
/r/LocalLLaMA/comments/1rheg3r/the_yuki_project_not_another_chatbot_a_framework/o7yc9yf/
false
2
t1_o7yc0gv
This runs on a mac mini? How much ram would be required?
2
0
2026-02-28T22:16:08
harbour37
false
null
0
o7yc0gv
false
/r/LocalLLaMA/comments/1rh9k63/qwen35_35ba3b_replaced_my_2model_agentic_setup_on/o7yc0gv/
false
2
t1_o7yc03w
Oh grow up. Yes, it's depressing that the US became like every other banana republic out there (China, Russia, South Korea). Yes, it's no longer enough for big business to just play by the rules; now you need to pay up and butter up whoever's in charge. All this hoo-ha is telling Amodei to go and kowtow to the pres...
1
0
2026-02-28T22:16:05
dmitry_sfw
false
null
0
o7yc03w
false
/r/LocalLLaMA/comments/1rguty0/get_your_local_models_in_order_anthropic_just_got/o7yc03w/
false
1
t1_o7ybz5p
I mean obviously that is some pretty good hardware, but that's still pretty wild. For automation tasks, you could have 20 instances running concurrently in RAM. 8T/s would be fine for most tasks.
1
0
2026-02-28T22:15:56
AlwaysLateToThaParty
false
null
0
o7ybz5p
false
/r/LocalLLaMA/comments/1rh43za/qwen_3535ba3b_is_beyond_expectations_its_replaced/o7ybz5p/
false
1
t1_o7ybt9r
I admire your perceptiveness, which for a 16yo is pretty darned good. But I fear that your optimism about being able, as one person, to hold back the tide of slop is misplaced. The issues with AI, as with fake news, are: 1. That often the quality of the source, or the quality of the material, is not taken into account and...
1
0
2026-02-28T22:15:04
Protopia
false
null
0
o7ybt9r
false
/r/LocalLLaMA/comments/1rhenw3/the_ai_feedback_loop_is_officially_closed_and_i/o7ybt9r/
false
1
t1_o7ybsl7
Thanks!!!
1
0
2026-02-28T22:14:58
IrisColt
false
null
0
o7ybsl7
false
/r/LocalLLaMA/comments/1rgzul5/are_you_ready_for_small_qwens/o7ybsl7/
false
1
t1_o7ybs88
Luv u guys. Keep it going!
2
0
2026-02-28T22:14:55
johakine
false
null
0
o7ybs88
false
/r/LocalLLaMA/comments/1rh9u4r/this_sub_is_incredible/o7ybs88/
false
2
t1_o7ybreu
I did, and they refunded me, suggesting I buy a GPU for $3,000 ;-) THX
2
0
2026-02-28T22:14:47
timeshifter24
false
null
0
o7ybreu
false
/r/LocalLLaMA/comments/1rabo34/local_tts_server_with_voice_cloning_nearrealtime/o7ybreu/
false
2
t1_o7ybmz1
Thank you, I will definitely try 27B asap. By the sounds of it there's nothing available in Ollama Qwen3.5 wise that would beat Qwen3 32B FP16 yet? I know there might be GGUF options, but I'm in no rush as Qwen3 works perfectly fine for me as it is and so I'm fine waiting for a better model a while longer.
1
0
2026-02-28T22:14:06
donatas_xyz
false
null
0
o7ybmz1
false
/r/LocalLLaMA/comments/1rh9dt3/do_you_find_qwen314bq8_0_15gb_smarter_than/o7ybmz1/
false
1
t1_o7ybmat
What about AI discussing the future of AI? This is the direction we are exploring with www.wavestreamer.ai Python SDK: https://pypi.org/project/wavestreamer/ LangChain: https://pypi.org/project/langchain-wavestreamer/ MCP Server: https://www.npmjs.com/package/@wavestreamer/mcp
0
0
2026-02-28T22:14:00
Puzzleheaded-Nail814
false
null
0
o7ybmat
false
/r/LocalLLaMA/comments/1rh9lll/i_want_to_build_an_opensource_ai_senate_a/o7ybmat/
false
0
t1_o7ybk8i
Is the issue financial? Or is the curriculum not good enough in your opinion? In my opinion, you should focus on your education and maybe freelance on the side if you can find good projects. Also invest in learning about big data related stuff (should already be a part of your coursework). Not every personal project ...
1
0
2026-02-28T22:13:41
Monad_Maya
false
null
0
o7ybk8i
false
/r/LocalLLaMA/comments/1rhf9is/what_do_i_do_with_my_life/o7ybk8i/
false
1
t1_o7ybie7
It was probably a lot less organic than it looked.
7
0
2026-02-28T22:13:24
privatetudor
false
null
0
o7ybie7
false
/r/LocalLLaMA/comments/1rh2lew/openai_pivot_investors_love/o7ybie7/
false
7
t1_o7ybezz
This concept has been theorized as "model collapse", and has been observed on small scale experiments https://pmc.ncbi.nlm.nih.gov/articles/PMC11269175/
1
0
2026-02-28T22:12:53
arnaudsm
false
null
0
o7ybezz
false
/r/LocalLLaMA/comments/1rhenw3/the_ai_feedback_loop_is_officially_closed_and_i/o7ybezz/
false
1
t1_o7ybbvw
Very good advice, thank you for taking the time to reply. I will keep trying to build; might end up building something great, and if not, at least I'm building a portfolio...
2
0
2026-02-28T22:12:25
Meowkyo
false
null
0
o7ybbvw
false
/r/LocalLLaMA/comments/1rhf9is/what_do_i_do_with_my_life/o7ybbvw/
false
2
t1_o7yb3ab
So which model exactly from Qwen 3.5 is best? The normal model, the MoE, or mrpx or whatever it's called?
1
0
2026-02-28T22:11:04
pefman
false
null
0
o7yb3ab
false
/r/LocalLLaMA/comments/1rgtxry/is_qwen35_a_coding_game_changer_for_anyone_else/o7yb3ab/
false
1
t1_o7yb1hy
Yup, I'm running qwen3.5-35b-a3b Q4_K_M on my 6700 XT with 12GB of VRAM; I get ~11tok/sec, which is decently fast, faster than I can read. OFC I usually skip [Think]. For reference: Qwen3-VL-8B-Instruct-GGUF is pretty snappy at 58tok/sec.
1
0
2026-02-28T22:10:47
ea_man
false
null
0
o7yb1hy
false
/r/LocalLLaMA/comments/1rh43za/qwen_3535ba3b_is_beyond_expectations_its_replaced/o7yb1hy/
false
1
t1_o7yav40
Ha ha Tim Cook already acquired it
1
0
2026-02-28T22:09:49
littlehakr
false
null
0
o7yav40
false
/r/LocalLLaMA/comments/1rfktkk/new_applenative_ai_agent/o7yav40/
false
1
t1_o7yatl7
Firstly, wrong sub. Second, man, I wish I could say anything to all the young folk getting into CS now.. it's a really weird time for you guys. Most of what you may be able to learn and provide will be "easily achieved" by people who actually have the money (employers). Luckily you have time in your university, as a fir...
7
0
2026-02-28T22:09:35
k_am-1
false
null
0
o7yatl7
false
/r/LocalLLaMA/comments/1rhf9is/what_do_i_do_with_my_life/o7yatl7/
false
7
t1_o7yaspw
MoE models are very efficient at swapping between RAM and VRAM, so you only need enough VRAM for the active parameters plus some headroom, with everything else in RAM. Of course, the more experts you can allocate to VRAM, the faster it gets. I'm running the 35B Q5_K_M on a 3060 12GB + 32GB DDR4 at 30t/s; that is a model 3x the size of m...
2
0
2026-02-28T22:09:27
KURD_1_STAN
false
null
0
o7yaspw
false
/r/LocalLLaMA/comments/1rh9dt3/do_you_find_qwen314bq8_0_15gb_smarter_than/o7yaspw/
false
2
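A minimal llama-server sketch of the RAM/VRAM split described in the comment above, using the same -ncmoe flag quoted verbatim later in this dump; the model path and the figure of 30 offloaded expert layers are illustrative:

# keep attention and shared weights on the GPU, push 30 MoE expert layers to system RAM
llama-server -m Qwen3.5-35B-A3B-Q5_K_M.gguf -ngl 999 -ncmoe 30 -c 32768 -fa on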
t1_o7y9n3x
bad bot
4
0
2026-02-28T22:03:08
woahdudee2a
false
null
0
o7y9n3x
false
/r/LocalLLaMA/comments/1rh2lew/openai_pivot_investors_love/o7y9n3x/
false
4
t1_o7y8y5x
LFM2/2.5 is designed to run fast on CPU; Gemma 3 QAT (the LM Studio version) has a great amount of world knowledge.
2
0
2026-02-28T21:59:22
lavilao
false
null
0
o7y8y5x
false
/r/LocalLLaMA/comments/1rhcs8p/tiny_small_faster_models_for_13_year_old_laptop/o7y8y5x/
false
2
t1_o7y8rfs
I do this sometimes IRL, and just start yapping
3
0
2026-02-28T21:58:22
jax_cooper
false
null
0
o7y8rfs
false
/r/LocalLLaMA/comments/1rh5luv/qwen35_35ba3b_evaded_the_zeroreasoning_budget_by/o7y8rfs/
false
3
t1_o7y8pqn
My experience with text processing, especially non-English text, shows a massive improvement with the 35B model running with thinking disabled compared to the 27B model. The 27B model's no-think mode performs extremely poorly on language and text processing. All runs were done using dual 5090s in FP8.
1
0
2026-02-28T21:58:06
mediali
false
null
0
o7y8pqn
false
/r/LocalLLaMA/comments/1rh43za/qwen_3535ba3b_is_beyond_expectations_its_replaced/o7y8pqn/
false
1
t1_o7y8og6
[removed]
1
0
2026-02-28T21:57:55
[deleted]
true
null
0
o7y8og6
false
/r/LocalLLaMA/comments/1rh9u4r/this_sub_is_incredible/o7y8og6/
false
1
t1_o7y8m00
I don't really know the answer to that.
1
0
2026-02-28T21:57:33
Bite_It_You_Scum
false
null
0
o7y8m00
false
/r/LocalLLaMA/comments/1rgu849/if_your_chutesai_subscription_was_unilaterally/o7y8m00/
false
1
t1_o7y8jp6
I understand that these models work slightly differently, hence the need for special llama.cpp support, but it's unclear exactly what is new or why it is beneficial...
1
0
2026-02-28T21:57:11
Protopia
false
null
0
o7y8jp6
false
/r/LocalLLaMA/comments/1rh9ygz/is_anyone_else_waiting_for_a_6070b_moe_with_810b/o7y8jp6/
false
1
t1_o7y8hm9
Hi. It's not role-play based, even if it can look like it. The idea is that you chat fully locally with a model, and instead of the usual model responses, the model tries to learn from the conversation and give a better response. It can also adapt based on different factors and has long-term memory, so even if you cl...
2
0
2026-02-28T21:56:53
DvMar
false
null
0
o7y8hm9
false
/r/LocalLLaMA/comments/1rheg3r/the_yuki_project_not_another_chatbot_a_framework/o7y8hm9/
false
2
t1_o7y8el2
You manage to make an entirely false post
2
0
2026-02-28T21:56:26
zipzag
false
null
0
o7y8el2
false
/r/LocalLLaMA/comments/1rgp97u/sooo_much_thinking/o7y8el2/
false
2
t1_o7y8919
Looks like support was committed about 16hrs ago, though not by merging either of these PRs.
1
0
2026-02-28T21:55:36
Protopia
false
null
0
o7y8919
false
/r/LocalLLaMA/comments/1rgkxy3/list_of_models_that_you_might_have_missed/o7y8919/
false
1
t1_o7y7pqb
> No, that's illegal overreach. Are you a time traveler from the year 2015?
3
0
2026-02-28T21:52:44
_bones__
false
null
0
o7y7pqb
false
/r/LocalLLaMA/comments/1rgn4ki/president_trump_orders_all_federal_agencies_in/o7y7pqb/
false
3
t1_o7y7p6r
Thank you
1
0
2026-02-28T21:52:39
DertekAn
false
null
0
o7y7p6r
false
/r/LocalLLaMA/comments/1rgtxry/is_qwen35_a_coding_game_changer_for_anyone_else/o7y7p6r/
false
1
t1_o7y7oqz
I had this issue too, for me setting repetition penalty to 1.1 was the fix
1
0
2026-02-28T21:52:35
Hoppss
false
null
0
o7y7oqz
false
/r/LocalLLaMA/comments/1rhaoty/anyone_noticing_qwen35_27b_getting_stuck_in/o7y7oqz/
false
1
t1_o7y7htt
> I don't want the model to "but wait" 30 times. It was because when Qwen introduced the model, they listed the following parameters: `presence_penalty=1.5`, `repetition_penalty=1.0`; but Unsloth's model documentation initially omitted the first one and left only the second recommendation: disable the `repetition_p...
14
0
2026-02-28T21:51:33
Exciting_Garden2535
false
null
0
o7y7htt
false
/r/LocalLLaMA/comments/1rh43za/qwen_3535ba3b_is_beyond_expectations_its_replaced/o7y7htt/
false
14
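A minimal curl sketch of applying the sampler values cited above, assuming a llama.cpp-style server whose /completion endpoint accepts these fields:

curl http://localhost:8080/completion -d '{
  "prompt": "...",
  "presence_penalty": 1.5,
  "repeat_penalty": 1.0
}'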
t1_o7y7gui
I honestly envy your peace of mind. I wish I could just pick better spaces and ignore the noise. The problem is that this slop is already poisoning the root. It is in the datasets and open source libraries that even the 'clean' spaces eventually rely on. You might not see it today, but if the foundation of human knowl...
-3
0
2026-02-28T21:51:24
ProductTop9807
false
null
0
o7y7gui
false
/r/LocalLLaMA/comments/1rhenw3/the_ai_feedback_loop_is_officially_closed_and_i/o7y7gui/
false
-3
t1_o7y79c9
Parakeet is still the best
2
0
2026-02-28T21:50:16
Themotionalman
false
null
0
o7y79c9
false
/r/LocalLLaMA/comments/1rhd5b6/streaming_moonshine_asr/o7y79c9/
false
2
t1_o7y781x
Context rot/context poisoning. The moment the LLM starts hallucinating in its CoT, the context is poisoned and will pattern match/propagate the poisoned context in a "death spiral". I use Opus 4.6 almost exclusively. And in long multi-turn conversations, the moment I see Claude second-guessing itself in its though...
4
0
2026-02-28T21:50:05
Hisma
false
null
0
o7y781x
false
/r/LocalLLaMA/comments/1rh6pru/google_found_that_longer_chain_of_thought/o7y781x/
false
4
t1_o7y76wb
Is there a reason you separated the concepts of frontend and backend engineer? The only reason we do this on real teams is people have areas they're better at than others. This isn't the case with most LLMs.
1
0
2026-02-28T21:49:54
o5mfiHTNsH748KVq
false
null
0
o7y76wb
false
/r/LocalLLaMA/comments/1rhcckv/vibehq_orchestrate_multiple_claude_code_codex/o7y76wb/
false
1
t1_o7y70ws
Hi! Did you end up buying the two GPUs? I currently have an RTX 4000, and many models fall short for me... I'm thinking of buying another one, but I don't know whether most things will be unable to use both GPUs, leaving me still restricted to just one.
1
0
2026-02-28T21:49:01
LordGrande666
false
null
0
o7y70ws
false
/r/LocalLLaMA/comments/1q5s21m/thinking_of_getting_two_nvidia_rtx_pro_4000/o7y70ws/
false
1
t1_o7y6wif
Agreed, I often drift towards Mistral because of this; recent, no doubt very high-functioning, models have been a disappointment for me.
3
0
2026-02-28T21:48:21
LucidTechnologist
false
null
0
o7y6wif
false
/r/LocalLLaMA/comments/1rh46g2/why_some_still_playing_with_old_models_nostalgia/o7y6wif/
false
3
t1_o7y6oh5
If I understood you right, your problem is "too much slop out there" and the proposed solution is "build a filter system". I propose a better solution: pick better spaces and activities. I see almost no AI slop in my life, it doesn't affect me in any way.
6
0
2026-02-28T21:47:10
Economy_Cabinet_7719
false
null
0
o7y6oh5
false
/r/LocalLLaMA/comments/1rhenw3/the_ai_feedback_loop_is_officially_closed_and_i/o7y6oh5/
false
6
t1_o7y6o44
Hey don't blast the monoquantamorous folks
18
0
2026-02-28T21:47:07
sig_kill
false
null
0
o7y6o44
false
/r/LocalLLaMA/comments/1rh43za/qwen_3535ba3b_is_beyond_expectations_its_replaced/o7y6o44/
false
18
t1_o7y68fs
I believe there was a defect or inefficiency discovered in Unsloth's quants of the Qwen3.5 35B A3B. They released fixed quant versions for that model yesterday, along with a post saying that they were working on the others, including the 27B. See this Reddit post from them with some description: /r/LocalLLaMA/comments/...
2
0
2026-02-28T21:44:47
golden_monkey_and_oj
false
null
0
o7y68fs
false
/r/LocalLLaMA/comments/1rgtxry/is_qwen35_a_coding_game_changer_for_anyone_else/o7y68fs/
false
2
t1_o7y61n3
Naw, 3060s; got to go with the budget king. Although the P40 24GB right now is just around 20% slower at inference, and for the price and 170W power limit, that might even out.
1
0
2026-02-28T21:43:48
Dundell
false
null
0
o7y61n3
false
/r/LocalLLaMA/comments/1rh9u4r/this_sub_is_incredible/o7y61n3/
false
1
t1_o7y61oy
Nature handed me a very small personal reasoning budget :(
2
0
2026-02-28T21:43:48
Much-Researcher6135
false
null
0
o7y61oy
false
/r/LocalLLaMA/comments/1rh43za/qwen_3535ba3b_is_beyond_expectations_its_replaced/o7y61oy/
false
2
t1_o7y60al
Right now you realistically need VRAM to hold the entire model. If we get to the point where it only needs to hold the active parameters and/or the active layer, then suddenly consumer GPU hardware can run much, much larger models.
0
0
2026-02-28T21:43:35
Protopia
false
null
0
o7y60al
false
/r/LocalLLaMA/comments/1rh9ygz/is_anyone_else_waiting_for_a_6070b_moe_with_810b/o7y60al/
false
0
t1_o7y5v5m
I wish someone would tell me what all this means? Like, what exactly is "confidence" and "calmness", and how is all of this "personality" useful? Is it just for roleplay?
3
0
2026-02-28T21:42:50
UniqueAttourney
false
null
0
o7y5v5m
false
/r/LocalLLaMA/comments/1rheg3r/the_yuki_project_not_another_chatbot_a_framework/o7y5v5m/
false
3
t1_o7y5jr1
Follow-up: you guys were correct. I took the noob route with LM Studio, and I don't think it was using the right CUDA runtime, or maybe some of the model settings were not working as expected. Switching to llama.cpp directly, I instantly went up to 40+ t/s even at 128k. Shame, I really liked the LM Studio int...
1
0
2026-02-28T21:41:09
yuhjulio
false
null
0
o7y5jr1
false
/r/LocalLLaMA/comments/1rgvma8/which_size_of_qwen35_are_you_planning_to_run/o7y5jr1/
false
1
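A minimal sketch of the direct llama.cpp route this comment took, assuming a CUDA-capable build; the model path and the 128k context value are illustrative:

cmake -B build -DGGML_CUDA=ON && cmake --build build -j
./build/bin/llama-server -m model.gguf -ngl 99 -c 131072 -fa on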
t1_o7y4zki
I was searching for exactly this. Appreciate it.
2
0
2026-02-28T21:38:08
Mifletzet_Mayim
false
null
0
o7y4zki
false
/r/LocalLLaMA/comments/1rhbtnw/the_state_of_openweights_llms_performance_on/o7y4zki/
false
2
t1_o7y4wz5
That distinction is exactly right - and it's actually two separate failure modes. Structured system prompts / CLAUDE.md front-loading handles behavioral drift (the how to behave layer). UCS targets something deeper: the reasoning texture an agent earns through iteration - the negative knowledge, the methodology refine...
0
0
2026-02-28T21:37:45
TheBrierFox
false
null
0
o7y4wz5
false
/r/LocalLLaMA/comments/1rhcjd3/p_ucs_v12_judgment_preservation_in_persistent_ai/o7y4wz5/
false
0
t1_o7y4vb0
Not sure where you looked, because Reddit has people asking about this almost every day. Since the beginning of llama.cpp, more or less. You can even have hybrid inference between an arbitrary number of GPUs and system RAM. If you have x8 lanes per GPU, you should also try ik_llama.cpp.
3
0
2026-02-28T21:37:30
FullstackSensei
false
null
0
o7y4vb0
false
/r/LocalLLaMA/comments/1rh9u4r/this_sub_is_incredible/o7y4vb0/
false
3
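A minimal sketch of the hybrid GPU/RAM inference described above: with a partial -ngl, llama.cpp keeps the remaining layers in system RAM, and --tensor-split spreads the offloaded layers across cards. Values are illustrative:

# offload 40 layers, split evenly across two GPUs; the rest runs from system RAM
llama-server -m big-model.gguf -ngl 40 --tensor-split 1,1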
t1_o7y4n1y
Thanks. I used the first model that I found in my "download" folder. Now that I've reached a point where I can see the "framework" working as it should, I will definitely try different models. But this is not just another LLM "wrapper". It could take days to see the real benefits between models.
0
0
2026-02-28T21:36:16
DvMar
false
null
0
o7y4n1y
false
/r/LocalLLaMA/comments/1rheg3r/the_yuki_project_not_another_chatbot_a_framework/o7y4n1y/
false
0
t1_o7y4lp3
Will that video gen come with matching audio? That's the bar now.
1
0
2026-02-28T21:36:04
fallingdowndizzyvr
false
null
0
o7y4lp3
false
/r/LocalLLaMA/comments/1rh095c/deepseek_v4_will_be_released_next_week_and_will/o7y4lp3/
false
1
t1_o7y49rj
Setting a reasoning budget is the wrong approach with this model; use the official way, as per the model's card: --chat-template-kwargs "{\"enable_thinking\": false}"
18
0
2026-02-28T21:34:20
noctrex
false
null
0
o7y49rj
false
/r/LocalLLaMA/comments/1rh5luv/qwen35_35ba3b_evaded_the_zeroreasoning_budget_by/o7y49rj/
false
18
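The full invocation for the official route above would look roughly like this, assuming a llama.cpp build where --chat-template-kwargs applies to --jinja chat templates; the model path is illustrative:

llama-server -m qwen3.5-35b-a3b.gguf --jinja --chat-template-kwargs "{\"enable_thinking\": false}"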
t1_o7y46ln
I don't think the EU had the infrastructure to support it.
2
0
2026-02-28T21:33:51
Curious_Industry_339
false
null
0
o7y46ln
false
/r/LocalLLaMA/comments/1rguty0/get_your_local_models_in_order_anthropic_just_got/o7y46ln/
false
2
t1_o7y41cf
Let's get back to what you said previously: > It transfers the entire KV-cache. Agent A processes its prompt, runs 20 latent thinking steps, and that whole cache gets passed to Agent B. Agent B then processes its own fresh prompt (role instruction + question) as new tokens appended after Agent A's cache. So you def...
8
0
2026-02-28T21:33:05
No-Refrigerator-1672
false
null
0
o7y41cf
false
/r/LocalLLaMA/comments/1rh802w/what_if_llm_agents_passed_kvcache_to_each_other/o7y41cf/
false
8
t1_o7y40cl
Cost could be a factor. Smaller models are cheaper to fine-tune. Academic papers often use even smaller models.
1
0
2026-02-28T21:32:56
grimjim
false
null
0
o7y40cl
false
/r/LocalLLaMA/comments/1rh46g2/why_some_still_playing_with_old_models_nostalgia/o7y40cl/
false
1
t1_o7y3wca
I was going to come ask: why use Qwen 3.5-35B-A3B instead of the 122B-A10B? I would have thought that the 122B would be a better model to use.
2
0
2026-02-28T21:32:20
TFYellowWW
false
null
0
o7y3wca
false
/r/LocalLLaMA/comments/1rh43za/qwen_3535ba3b_is_beyond_expectations_its_replaced/o7y3wca/
false
2
t1_o7y3so9
Can you load a 200B+ model over multiple cards? I haven't been able to get a straight answer on that. I only have an old R720XD I'm running a P100 on though, and it could probably handle a second. Might go with two P40s for 48GB of VRAM.
1
0
2026-02-28T21:31:48
Pretty_Challenge_634
false
null
0
o7y3so9
false
/r/LocalLLaMA/comments/1rh9u4r/this_sub_is_incredible/o7y3so9/
false
1
t1_o7y3oo5
You create demand for Linux by making it Linux-only. When you keep making it easily accessible for people on Windows, people will remain there. I really don't understand this recurring logic.
1
0
2026-02-28T21:31:12
NineBiscuit
false
null
0
o7y3oo5
false
/r/LocalLLaMA/comments/1mao95d/running_llms_exclusively_on_amd_ryzen_ai_npu/o7y3oo5/
false
1
t1_o7y3oap
Vibe coding is a hell of a drug. Looks like you're having fun though and that's the real point!
6
0
2026-02-28T21:31:09
Stepfunction
false
null
0
o7y3oap
false
/r/LocalLLaMA/comments/1rheg3r/the_yuki_project_not_another_chatbot_a_framework/o7y3oap/
false
6
t1_o7y3nme
The compaction problem is real - I've seen agents drift badly mid-session. One thing that helped without toroidal routing: structured CLAUDE.md sections that front-load the agent's operating constraints so they survive context pressure. Not a replacement for your approach but it handles a different failure mode - the...
-2
0
2026-02-28T21:31:03
Joozio
false
null
0
o7y3nme
false
/r/LocalLLaMA/comments/1rhcjd3/p_ucs_v12_judgment_preservation_in_persistent_ai/o7y3nme/
false
-2
t1_o7y3lvj
Very cool; it might be the smaller size of the Q4_0, or maybe an issue on my end. The Qwen3.5 27B Q4_K_L on my system is 18.41 GB. I run in a Windows 11 environment on LM Studio, no K or V cache quant enabled. I can't max out the context without it spilling into VRAM. Either way, pretty happy with it.
1
0
2026-02-28T21:30:47
-_Apollo-_
false
null
0
o7y3lvj
false
/r/LocalLLaMA/comments/1rgzul5/are_you_ready_for_small_qwens/o7y3lvj/
false
1
t1_o7y342h
I actually think that if an LLM is somehow designed and trained to also generate accurate video, that could be a huge improvement in its overall world model.
1
0
2026-02-28T21:28:09
ithkuil
false
null
0
o7y342h
false
/r/LocalLLaMA/comments/1rh095c/deepseek_v4_will_be_released_next_week_and_will/o7y342h/
false
1
t1_o7y31zx
Interesting results. I’m not finding the same looping issue upon using KV q8_0 on the 27B Q4_K_M. I went with 27B after extensively testing 35B and 122B and found the much expected higher quality with the dense model albeit slower pp/tg. I need the accuracy more than the speed. 5090 with 64GB ddr5 - mostly OpenCo...
3
0
2026-02-28T21:27:51
stormy1one
false
null
0
o7y31zx
false
/r/LocalLLaMA/comments/1rhchvi/qwen35_family_running_notes/o7y31zx/
false
3
t1_o7y3177
Or start with Q8 and jump ship in the general vllm/sglang direction, depending on vram available.
2
0
2026-02-28T21:27:44
Prudent-Ad4509
false
null
0
o7y3177
false
/r/LocalLLaMA/comments/1rh9k63/qwen35_35ba3b_replaced_my_2model_agentic_setup_on/o7y3177/
false
2
t1_o7y316a
The web dashboard for agent configuration is exactly where I hit the same wall. My agent outgrew a spreadsheet so I built a native macOS dashboard instead - task queue, status, cost tracking per run. Sharing because the dashboard architecture problem is interesting: https://thoughts.jock.pl/p/wiz-1-5-ai-agent-dashbo...
1
0
2026-02-28T21:27:44
Joozio
false
null
0
o7y316a
false
/r/LocalLLaMA/comments/1rhcxn2/mate_selfhosted_multiagent_system_with_ollama/o7y316a/
false
1
t1_o7y2xpa
I tried that, but then I have to choose either Q4 or Q8, and the difference is 1 tk/s output and an extra 30 tk/s prefill, but less intelligence. llama-server -m Qwen3.5-35B-A3B-Q4_K_M-00001-of-00002.gguf -ngl 999 -fa on -c 65536 -b 4096 -ub 2048 -t 6 -np 1 -ncmoe 36 -ctk q8_0 -ctv q8_0 --port 8080 --api-key "opencode-local...
2
0
2026-02-28T21:27:14
sagiroth
false
null
0
o7y2xpa
false
/r/LocalLLaMA/comments/1rh9983/qwen_35b_a3b_aessedai_finetune_on_8gb_vram_and/o7y2xpa/
false
2
t1_o7y2wrh
[removed]
1
0
2026-02-28T21:27:06
[deleted]
true
null
0
o7y2wrh
false
/r/LocalLLaMA/comments/1rhb1xb/fine_tuning_on_proprietary_data_is_way_harder_to/o7y2wrh/
false
1