name: string
body: string
score: int64
controversiality: int64
created: timestamp[us]
author: string
collapsed: bool
edited: timestamp[us]
gilded: int64
id: string
locked: bool
permalink: string
stickied: bool
ups: int64
t1_o7ylh0j
em... my personal use is: general tasks (all parameters same as officially suggested): instruct. Text summarization: instruct. Simple agentic use for an opencode/openclaw-like bot: instruct. 'First and Last Frame' prompting to make LTX2 video: thinking mode. Coding... thinking, if I really use it...
10
0
2026-02-28T23:08:58
kironlau
false
null
0
o7ylh0j
false
/r/LocalLLaMA/comments/1rh43za/qwen_3535ba3b_is_beyond_expectations_its_replaced/o7ylh0j/
false
10
t1_o7ylgjr
Makes sense, indeed it would make a lot of sense also as an option for tools like opencode or kilo: you may want several KV-cache-compatible models running in parallel, and this introduces concrete efficiencies in token/energy consumption and in time, esp. with network latency.
2
0
2026-02-28T23:08:54
muyuu
false
null
0
o7ylgjr
false
/r/LocalLLaMA/comments/1rh802w/what_if_llm_agents_passed_kvcache_to_each_other/o7ylgjr/
false
2
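The efficiency claim in the comment above can be made concrete with a back-of-envelope sketch. All numbers here are illustrative assumptions, not measurements from the thread:

```python
# Toy estimate of prefill work saved when several agents share the KV cache
# for a common prompt prefix instead of each re-prefilling it from scratch.
shared_prefix_tokens = 8000   # assumed: system prompt + shared context
unique_suffix_tokens = 500    # assumed: per-agent instructions
agents = 3

without_sharing = agents * (shared_prefix_tokens + unique_suffix_tokens)
with_sharing = shared_prefix_tokens + agents * unique_suffix_tokens

saved = without_sharing - with_sharing
print(saved)  # 16000 redundant prefill tokens avoided
```

The saving grows linearly with the number of agents sharing the prefix, which is where the token/energy argument comes from.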
t1_o7ylgfd
Indeed, it would be much better. But for local use, on an 8GB VRAM card, it's the best way that I can find at the moment.
2
0
2026-02-28T23:08:53
DvMar
false
null
0
o7ylgfd
false
/r/LocalLLaMA/comments/1rheg3r/the_yuki_project_not_another_chatbot_a_framework/o7ylgfd/
false
2
t1_o7ylde4
This is really interesting because I've been kind of holding out for the new Max studio but I'm not really sure if that's going to be the right route or if I should maybe just stick with a dgx.
2
0
2026-02-28T23:08:24
iRanduMi
false
null
0
o7ylde4
false
/r/LocalLLaMA/comments/1rhbtnw/the_state_of_openweights_llms_performance_on/o7ylde4/
false
2
t1_o7ylb6p
A system prompt that explicitly frames intent alongside the stated problem. Something like 'before responding, consider what the user is likely trying to achieve beyond the literal request' shifts attention significantly without changing temperature. The looping behavior at scale is also worse on misaligned NUMA confi...
2
0
2026-02-28T23:08:03
paulahjort
false
null
0
o7ylb6p
false
/r/LocalLLaMA/comments/1rhchvi/qwen35_family_running_notes/o7ylb6p/
false
2
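The system-prompt suggestion above drops straight into a standard chat-completions message list; the user question below is a made-up example:

```python
# The intent-framing system prompt quoted in the comment, placed in a
# chat-completions message list. The user message is a hypothetical example.
system = ("Before responding, consider what the user is likely trying to "
          "achieve beyond the literal request.")
messages = [
    {"role": "system", "content": system},
    {"role": "user", "content": "How do I delete every row in this table?"},
]
print(messages[0]["role"])  # system
```

Because only the system message changes, this shifts behavior without touching sampling parameters like temperature, which is the point being made.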
t1_o7ylauz
It’s an OpenAI-compatible endpoint, so you can just plug that base URL and the API key (5090gobrr) into SillyTavern, Open-WebUI, or even a simple Python script. It's basically a tunnel link that allows it to talk to my local server. You can just use the Python script I gave to access it: `pip install openai`, then just...
1
0
2026-02-28T23:08:00
Key_Pace_9755
false
null
0
o7ylauz
false
/r/LocalLLaMA/comments/1rhflqn/letting_my_rtx_5090_21_tbs_mem_stretch_its_legs/o7ylauz/
false
1
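A minimal stdlib-only sketch of the setup described above. The base URL is an assumption (substitute the actual tunnel link), the key is the one quoted in the comment, and the model name is a placeholder:

```python
# Build a chat-completions request for an OpenAI-compatible local endpoint.
# Only the request object is constructed here; no network call is made.
import json
import urllib.request

BASE_URL = "http://localhost:8000/v1"  # assumed; substitute the tunnel URL
API_KEY = "5090gobrr"                  # the key quoted in the comment

def build_request(prompt, model="local-model"):  # model name is a placeholder
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {API_KEY}",
            "Content-Type": "application/json",
        },
    )

req = build_request("Hello")
print(req.full_url)  # http://localhost:8000/v1/chat/completions
```

With `pip install openai`, the same call is a couple of lines with the official client, as the comment suggests.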
t1_o7yl9sn
Cool project on a personal level and hope you get it to where you want it. But seems low value on the grand scheme of things. I mean, is it worth it to shave a tiny bit of overhead (in the long term with decent hardware support) but then run the heaviest workload, mostly offloaded, where such overhead is probably a tin...
4
0
2026-02-28T23:07:49
didroe
false
null
0
o7yl9sn
false
/r/LocalLLaMA/comments/1rhg3p4/baremetal_ai_booting_directly_into_llm_inference/o7yl9sn/
false
4
t1_o7yl9ou
I think it might just be that Q4_0 is slightly smaller, I don’t have kv cache quantization either.
1
0
2026-02-28T23:07:48
Far-Low-4705
false
null
0
o7yl9ou
false
/r/LocalLLaMA/comments/1rgzul5/are_you_ready_for_small_qwens/o7yl9ou/
false
1
t1_o7yl6ed
This is a very interesting development. Looking forward to seeing real-world performance vs. existing methods, but low KL divergence and fewer refusals look promising. Here's hoping it could even unlock higher scores on interesting benchmarks and real-world tests.
6
0
2026-02-28T23:07:17
Zestyclose_Yak_3174
false
null
0
o7yl6ed
false
/r/LocalLLaMA/comments/1rh69co/multidirectional_refusal_suppression_with/o7yl6ed/
false
6
t1_o7yl5ma
Probably. But if I don't try to see what I can "build", and ask for advice, I'd better stay and watch TV. I don't mind. I'm not trying to create "life" or have a real autonomous being. But I'm tired of the "stiffness" of what I can do with a local LLM on my laptop.
1
0
2026-02-28T23:07:10
DvMar
false
null
0
o7yl5ma
false
/r/LocalLLaMA/comments/1rheg3r/the_yuki_project_not_another_chatbot_a_framework/o7yl5ma/
false
1
t1_o7yl4xo
Or wait for a local LLM runner that can swap layers in and out of vRAM so the vRAM limits the layer size rather than the model size.
1
0
2026-02-28T23:07:03
Protopia
false
null
0
o7yl4xo
false
/r/LocalLLaMA/comments/1rhcnbt/best_coding_model_to_run_entirely_on_12gb_vram/o7yl4xo/
false
1
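A rough sketch of why layer swapping would change the constraint: VRAM then has to hold only a window of layers rather than the whole model. The sizes below are illustrative assumptions:

```python
# Back-of-envelope: with layer streaming, the limit is per-layer size (plus
# a working window), not total model size. All numbers are made up.
model_layers = 48
layer_gb = 1.5            # one transformer block's weights, assumed
vram_gb = 12

total_gb = model_layers * layer_gb   # whole model: far larger than VRAM
resident = int(vram_gb // layer_gb)  # layers that fit in VRAM at once
print(total_gb, resident)  # 72.0 8
```

The trade-off is that every non-resident layer has to cross the PCIe bus each forward pass, so such a runner trades speed for capacity.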
t1_o7yl2a3
It’s a good product. But so is Google Maps? How many people worship Lars Rasmussen? The point is people shouldn’t turn a blind eye to the behavior of these leaders. The same can be seen at the presidential level, and see where this is taking America.
2
0
2026-02-28T23:06:38
PaceImaginary8610
false
null
0
o7yl2a3
false
/r/LocalLLaMA/comments/1rh2lew/openai_pivot_investors_love/o7yl2a3/
false
2
t1_o7ykz4y
Probably. But if I don't try to see what I can "build", and ask for advice, I'd better stay and watch TV. I don't mind. I'm not trying to create "life" or have a real autonomous being. But I'm tired of the "stiffness" of what I can do with a local LLM on my laptop.
1
0
2026-02-28T23:06:07
DvMar
false
null
0
o7ykz4y
false
/r/LocalLLaMA/comments/1rheg3r/the_yuki_project_not_another_chatbot_a_framework/o7ykz4y/
false
1
t1_o7ykv2u
bot
2
0
2026-02-28T23:05:28
MotokoAGI
false
null
0
o7ykv2u
false
/r/LocalLLaMA/comments/1rhgg0l/surprised_by_nemotron3nano_on_studio_m3_512/o7ykv2u/
false
2
t1_o7ykuvu
It is weird to ask a model named Mike, because you are attaching feelings to it; if not, it wouldn't matter. The fact that you are attaching feelings to it is dangerous. It does not feel, it does not think, it merely generates what it was trained to generate based on the input that you give it.
1
0
2026-02-28T23:05:26
Mission_Biscotti3962
false
null
0
o7ykuvu
false
/r/LocalLLaMA/comments/1rheg3r/the_yuki_project_not_another_chatbot_a_framework/o7ykuvu/
false
1
t1_o7ykunw
To name ONE example: if you upload a file, it will ALWAYS append that file to the end of the message history, forcing FULL chat history reprocessing… Also, if a model ever calls a tool, even when in native mode, it forces full prompt reprocessing from the very beginning of that turn. No other application that I connec...
5
0
2026-02-28T23:05:24
Far-Low-4705
false
null
0
o7ykunw
false
/r/LocalLLaMA/comments/1rh43za/qwen_3535ba3b_is_beyond_expectations_its_replaced/o7ykunw/
false
5
t1_o7yktcc
Btw, since you only just heard of MoEs, you might want to reconsider some other models you might have rejected because you thought they were too large for your system, like Qwen3 Coder Next, GPT-OSS 120B, or Qwen3.5 122B; all are MoEs and will work fast if you have the RAM to fit them in.
2
0
2026-02-28T23:05:11
KURD_1_STAN
false
null
0
o7yktcc
false
/r/LocalLLaMA/comments/1rh9dt3/do_you_find_qwen314bq8_0_15gb_smarter_than/o7yktcc/
false
2
t1_o7ykt8e
thanks!
2
0
2026-02-28T23:05:10
Electrical_Ninja3805
false
null
0
o7ykt8e
false
/r/LocalLLaMA/comments/1rhg3p4/baremetal_ai_booting_directly_into_llm_inference/o7ykt8e/
false
2
t1_o7ykqsk
SmolLM2-135m-Instruct and only cpu atm.
20
0
2026-02-28T23:04:46
Electrical_Ninja3805
false
null
0
o7ykqsk
false
/r/LocalLLaMA/comments/1rhg3p4/baremetal_ai_booting_directly_into_llm_inference/o7ykqsk/
false
20
t1_o7ykp31
May I kindly ask, what is the trycloudflare thingy in the URL? Isn't it supposed to be cloudflared tunnels? Are they the same system provided by Cloudflare, or am I mixing them up? 😅
1
0
2026-02-28T23:04:30
ELPascalito
false
null
0
o7ykp31
false
/r/LocalLLaMA/comments/1rhflqn/letting_my_rtx_5090_21_tbs_mem_stretch_its_legs/o7ykp31/
false
1
t1_o7ykofh
This is both naive and dumb.
-1
0
2026-02-28T23:04:24
quantgorithm
false
null
0
o7ykofh
false
/r/LocalLLaMA/comments/1rh2lew/openai_pivot_investors_love/o7ykofh/
false
-1
t1_o7ykjr0
I had to migrate to the Vulkan runtime.
1
0
2026-02-28T23:03:41
Ok_Programmer_5639
false
null
0
o7ykjr0
false
/r/LocalLLaMA/comments/1ren7l2/slow_prompt_processing_with_qwen3535ba3b_in_lm/o7ykjr0/
false
1
t1_o7ykhf4
Whoa I would not have thought this was possible. At any speed. Nice work
4
0
2026-02-28T23:03:17
Hefty_Development813
false
null
0
o7ykhf4
false
/r/LocalLLaMA/comments/1rhg3p4/baremetal_ai_booting_directly_into_llm_inference/o7ykhf4/
false
4
t1_o7ykh35
Which model are you using? One that works well with cpu only?
16
0
2026-02-28T23:03:14
cryptofuturebright
false
null
0
o7ykh35
false
/r/LocalLLaMA/comments/1rhg3p4/baremetal_ai_booting_directly_into_llm_inference/o7ykh35/
false
16
t1_o7ykgvh
Agi :D
1
0
2026-02-28T23:03:12
LegacyRemaster
false
null
0
o7ykgvh
false
/r/LocalLLaMA/comments/1rh5luv/qwen35_35ba3b_evaded_the_zeroreasoning_budget_by/o7ykgvh/
false
1
t1_o7ykdty
You will probably reject what I will say but you sound like you are suffering from delusions caused by the llm accompanying you down a rabbit hole because that's what it's designed to do.
2
0
2026-02-28T23:02:43
Mission_Biscotti3962
false
null
0
o7ykdty
false
/r/LocalLLaMA/comments/1rheg3r/the_yuki_project_not_another_chatbot_a_framework/o7ykdty/
false
2
t1_o7ykbwh
Thanks, I didn't know about the project. I will check it out. What I did try is to be able to run this fully locally, even with a 3B model. The model runs more like a "lexical organ" inside the framework. The rest of the systems are actually what is changing the "persona". As this is my first "proper" project, I chose...
1
0
2026-02-28T23:02:24
DvMar
false
null
0
o7ykbwh
false
/r/LocalLLaMA/comments/1rheg3r/the_yuki_project_not_another_chatbot_a_framework/o7ykbwh/
false
1
t1_o7yk8cj
Anthropic will move to Germany.
1
0
2026-02-28T23:01:50
Rich_Artist_8327
false
null
0
o7yk8cj
false
/r/LocalLLaMA/comments/1rguty0/get_your_local_models_in_order_anthropic_just_got/o7yk8cj/
false
1
t1_o7yk7ny
The head of Anthropic gave a 45-minute interview to the news, I believe within the last 24 hours. I watched it this morning. He said exactly what I said. It’s on YouTube.
1
0
2026-02-28T23:01:43
quantgorithm
false
null
0
o7yk7ny
false
/r/LocalLLaMA/comments/1rh2lew/openai_pivot_investors_love/o7yk7ny/
false
1
t1_o7yk5pq
Totally agree — scope drift is very real. Right now role boundaries are enforced structurally (separate repos, ownership, contract gates), but I think you’re right that explicit scope limits in each agent’s CLAUDE.md would make the system more stable long-term. Thanks for sharing your notes — 1000 sessions is serious...
1
0
2026-02-28T23:01:24
GGwithRabbit
false
null
0
o7yk5pq
false
/r/LocalLLaMA/comments/1rhcckv/vibehq_orchestrate_multiple_claude_code_codex/o7yk5pq/
false
1
t1_o7yjzi5
https://preview.redd.it/…6f4069b18367fb06
1
0
2026-02-28T23:00:24
BitzLeon
false
null
0
o7yjzi5
false
/r/LocalLLaMA/comments/1qnbegl/why_so_much_hype_around_the_mac_mini_for_clawdbot/o7yjzi5/
false
1
t1_o7yjwcy
Your explanation isn’t explanationing to me. Can you try it again?
-4
0
2026-02-28T22:59:55
StardockEngineer
false
null
0
o7yjwcy
false
/r/LocalLLaMA/comments/1rh802w/what_if_llm_agents_passed_kvcache_to_each_other/o7yjwcy/
false
-4
t1_o7yju8q
How do you run it on CPU?
1
0
2026-02-28T22:59:34
EduardoDevop
false
null
0
o7yju8q
false
/r/LocalLLaMA/comments/1rh43za/qwen_3535ba3b_is_beyond_expectations_its_replaced/o7yju8q/
false
1
t1_o7yju5d
I always like to see small-active-parameter MoEs in top places, so I'm not complaining here. But it is very unfair to try to fit a MoE and a dense model into the same VRAM tbh; as the minimum for computers is 16GB RAM now, you can definitely use Q4 instead while still requiring the same HW*. I'm not expecting much different from 1 quant u...
1
0
2026-02-28T22:59:33
KURD_1_STAN
false
null
0
o7yju5d
false
/r/LocalLLaMA/comments/1rhfque/qwen3_coder_next_qwen35_27b_devstral_small_2_rust/o7yju5d/
false
1
t1_o7yjrir
My advice... (hmmmm.... thinking... thinking... thinking...) 1, You might be very very very very lucky and find something to make you a billionaire in the next 3 years, but most likely not. So instead think about a 10-year minimum period to make yourself financially stable. 2, Decide whether you want to be ultra ric...
2
0
2026-02-28T22:59:09
Protopia
false
null
0
o7yjrir
false
/r/LocalLLaMA/comments/1rhf9is/what_do_i_do_with_my_life/o7yjrir/
false
2
t1_o7yjpzl
This is the Trump Melania problem. Where you ask yourself how could this woman do this. Without realizing the truth that bad people often surround themselves with other equally bad people. So there aren't any good guys in the story.
2
0
2026-02-28T22:58:54
mrdevlar
false
null
0
o7yjpzl
false
/r/LocalLLaMA/comments/1rh2lew/openai_pivot_investors_love/o7yjpzl/
false
2
t1_o7yjpzz
The formula is more like a guideline for estimating "resources used" or its "footprint" while inferencing. It's not at all a comparison of model quality.
5
0
2026-02-28T22:58:54
DinoAmino
false
null
0
o7yjpzz
false
/r/LocalLLaMA/comments/1rh43za/qwen_3535ba3b_is_beyond_expectations_its_replaced/o7yjpzz/
false
5
t1_o7yjmtw
So his own words in which he is already a known and provable liar. GOT IT.
2
0
2026-02-28T22:58:24
quantgorithm
false
null
0
o7yjmtw
false
/r/LocalLLaMA/comments/1rh2lew/openai_pivot_investors_love/o7yjmtw/
false
2
t1_o7yjebr
You don't have to pick one or the other. You could be interested in running local models *and* be paying for a subscription to a commercial provider.
8
0
2026-02-28T22:57:04
droptableadventures
false
null
0
o7yjebr
false
/r/LocalLLaMA/comments/1rh2lew/openai_pivot_investors_love/o7yjebr/
false
8
t1_o7yjaq7
where is Step-3.5-flash? Fast and good imo, very "smart" but extended thinking.
1
0
2026-02-28T22:56:30
GodComplecs
false
null
0
o7yjaq7
false
/r/LocalLLaMA/comments/1rgokw1/a_monthly_update_to_my_where_are_openweight/o7yjaq7/
false
1
t1_o7yj92m
Ollama is the easiest starting point for model swapping, it handles downloads and serving through a simple API that Continue.dev connects to natively. Continue.dev is the right VSCode extension for this.
1
0
2026-02-28T22:56:14
paulahjort
false
null
0
o7yj92m
false
/r/LocalLLaMA/comments/1rhg2ir/trying_to_set_up_a_vscode_server_local_llm/o7yj92m/
false
1
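For reference, a stdlib-only sketch of the Ollama call that sits underneath such a setup (default port 11434); the model name is just an example of something pulled with `ollama pull`:

```python
# Build a request against Ollama's local REST API; Continue.dev talks to the
# same server. Only the request is constructed here; no network call is made.
import json
import urllib.request

payload = {
    "model": "qwen3:8b",   # example model name, assumed already pulled
    "prompt": "Write a one-line docstring for a merge sort.",
    "stream": False,       # ask for a single JSON object, not a stream
}
req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)
# With a running server:
# json.loads(urllib.request.urlopen(req).read())["response"]
```

Continue.dev's Ollama provider hits this same local server, which is why no extra serving layer is needed.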
t1_o7yj7hg
Thanks. And indeed, it is "it", not "her". It was easier for me to use a female gender. I did start with a male type of personality, but it was easier with a female one for me. Also, it is not a "companion", as a lot of guys think when they hear "she". It was just strange for me to ask a model named "Mike" how it is feeling, ...
1
0
2026-02-28T22:55:59
DvMar
false
null
0
o7yj7hg
false
/r/LocalLLaMA/comments/1rheg3r/the_yuki_project_not_another_chatbot_a_framework/o7yj7hg/
false
1
t1_o7yj5gm
I just want a 72B parameter dense model.
1
0
2026-02-28T22:55:39
tengo_harambe
false
null
0
o7yj5gm
false
/r/LocalLLaMA/comments/1rheepm/qwen_model_sizes_over_time/o7yj5gm/
false
1
t1_o7yj45d
2025 is gonna be the bootlicking olympics
-2
0
2026-02-28T22:55:27
arthor
false
null
0
o7yj45d
false
/r/LocalLLaMA/comments/1rh2lew/openai_pivot_investors_love/o7yj45d/
false
-2
t1_o7yj3ui
Thanks! I've been working hard on it.
3
1
2026-02-28T22:55:24
Electrical_Ninja3805
false
null
0
o7yj3ui
false
/r/LocalLLaMA/comments/1rhg3p4/baremetal_ai_booting_directly_into_llm_inference/o7yj3ui/
false
3
t1_o7yj0g8
The 35B was looping a bit too much, but I got the updated version that came out a few hours ago and it’s significantly more stable. Worth giving it a 2nd look.
12
0
2026-02-28T22:54:52
liviuberechet
false
null
0
o7yj0g8
false
/r/LocalLLaMA/comments/1rhfque/qwen3_coder_next_qwen35_27b_devstral_small_2_rust/o7yj0g8/
false
12
t1_o7yiz0t
Okay?
3
0
2026-02-28T22:54:39
Alpacaaea
false
null
0
o7yiz0t
false
/r/LocalLLaMA/comments/1rhgg0l/surprised_by_nemotron3nano_on_studio_m3_512/o7yiz0t/
false
3
t1_o7yiw6d
The 7B drop makes sense — smaller models have less tolerance for reconciling foreign KV representations. Curious if a lightweight adapter between injections would help or if it's fundamentally a capacity problem.
0
0
2026-02-28T22:54:12
theagentledger
false
null
0
o7yiw6d
false
/r/LocalLLaMA/comments/1rh802w/what_if_llm_agents_passed_kvcache_to_each_other/o7yiw6d/
false
0
t1_o7yivna
I have been contemplating this also these days. I updated how I do my MXFP4 quants, and I'm thinking of maybe doing them all again. But then Qwen3.5 happened and it obsoleted like 90% of the older models; that's how good they are.
4
0
2026-02-28T22:54:07
noctrex
false
null
0
o7yivna
false
/r/LocalLLaMA/comments/1rh0xwk/unsloth_dynamic_20_ggufs_now_selectively/o7yivna/
false
4
t1_o7yiuor
I absolutely love it.
10
0
2026-02-28T22:53:58
henk717
false
null
0
o7yiuor
false
/r/LocalLLaMA/comments/1rhg3p4/baremetal_ai_booting_directly_into_llm_inference/o7yiuor/
false
10
t1_o7yitrm
I'm afraid it wont be opensource. They did not release the current model they are using on their site.
1
0
2026-02-28T22:53:49
Different_Fix_2217
false
null
0
o7yitrm
false
/r/LocalLLaMA/comments/1rh095c/deepseek_v4_will_be_released_next_week_and_will/o7yitrm/
false
1
t1_o7yiqxo
I tried Qwen3-VL-8B. I have an `RTX 5070 Ti 16GB`, but vLLM does not have full support for it, I guess. So I fell back to Ollama via Open WebUI.
1
0
2026-02-28T22:53:23
callmedevilthebad
false
null
0
o7yiqxo
false
/r/LocalLLaMA/comments/1qt3vbc/what_ai_to_run_on_rtx_5070/o7yiqxo/
false
1
t1_o7yiq8u
Perhaps you missed the "for giggles" part. It may be useless to you, but I have a use for it, and that's what matters.
2
0
2026-02-28T22:53:16
Electrical_Ninja3805
false
null
0
o7yiq8u
false
/r/LocalLLaMA/comments/1rhg3p4/baremetal_ai_booting_directly_into_llm_inference/o7yiq8u/
false
2
t1_o7yim6n
The actual blocker most teams hit isn't the compliance argument it's standing up a dedicated A100 training node fast enough that it doesn't add another week to the timeline... [https://github.com/theoddden/terradev-mcp](https://github.com/theoddden/terradev-mcp)
1
0
2026-02-28T22:52:38
paulahjort
false
null
0
o7yim6n
false
/r/LocalLLaMA/comments/1rhb1xb/fine_tuning_on_proprietary_data_is_way_harder_to/o7yim6n/
false
1
t1_o7yil8e
Long term... this is the core of an OS I am building. I understand the issues at play. Right now I'm building a unikernel. I may or may not take it past that, depending on what I can and can't figure out.
88
0
2026-02-28T22:52:29
Electrical_Ninja3805
false
null
0
o7yil8e
false
/r/LocalLLaMA/comments/1rhg3p4/baremetal_ai_booting_directly_into_llm_inference/o7yil8e/
false
88
t1_o7yikv6
Cherry picked. I had the 35BA3B and did some informal runs with it and I did not like how some refactors were performed - needed more handling to get context right. 27B was more grounded and extensive on the approach. I might've been premature with the 35BA3B and could run this bench once I'm not using the workstation.
7
0
2026-02-28T22:52:25
Holiday_Purpose_3166
false
null
0
o7yikv6
false
/r/LocalLLaMA/comments/1rhfque/qwen3_coder_next_qwen35_27b_devstral_small_2_rust/o7yikv6/
false
7
t1_o7yikiq
A3B: 140 t/s on a power-limited 450W 5090.
2
0
2026-02-28T22:52:22
arthor
false
null
0
o7yikiq
false
/r/LocalLLaMA/comments/1rgtxry/is_qwen35_a_coding_game_changer_for_anyone_else/o7yikiq/
false
2
t1_o7yigw1
Qwen3:1.7b
1
0
2026-02-28T22:51:47
Interesting-Ad4922
false
null
0
o7yigw1
false
/r/LocalLLaMA/comments/1rgt4m4/not_creeped_out_at_all_i_swear/o7yigw1/
false
1
t1_o7yif4z
This would make more sense if the "dreams" or inner monologue that happens when you are not interacting with it, actually updated the weights of the model (instead of just storing into a database or graph), this way it would ""evolve"" by thinking, like we humans do. (I don't think this would work very well or be usefu...
0
0
2026-02-28T22:51:31
cnmoro
false
null
0
o7yif4z
false
/r/LocalLLaMA/comments/1rheg3r/the_yuki_project_not_another_chatbot_a_framework/o7yif4z/
false
0
t1_o7yie2f
Seems useless to me.
0
0
2026-02-28T22:51:21
qwen_next_gguf_when
false
null
0
o7yie2f
false
/r/LocalLLaMA/comments/1rhg3p4/baremetal_ai_booting_directly_into_llm_inference/o7yie2f/
false
0
t1_o7yidy1
Will this work on 5070ti?
1
0
2026-02-28T22:51:20
cnuthead
false
null
0
o7yidy1
false
/r/LocalLLaMA/comments/1rh43za/qwen_3535ba3b_is_beyond_expectations_its_replaced/o7yidy1/
false
1
t1_o7yi6tb
Hey there, awesome for trying it out! I am using llama.cpp; it's a GGUF Q4. I tried other versions; I really wanted to give the FP4 a go, but vLLM is giving me trouble. It ran but it was slow, despite the fact that FP4 is usually the fastest config of a model that I can run. It gave me around 100 t/s while this...
2
0
2026-02-28T22:50:13
Key_Pace_9755
false
null
0
o7yi6tb
false
/r/LocalLLaMA/comments/1rhflqn/letting_my_rtx_5090_21_tbs_mem_stretch_its_legs/o7yi6tb/
false
2
t1_o7yi4ox
Damn, and I'm here thinking the 150 t/s I get with it is pretty slow...
1
0
2026-02-28T22:49:54
GoranjeWasHere
false
null
0
o7yi4ox
false
/r/LocalLLaMA/comments/1rh43za/qwen_3535ba3b_is_beyond_expectations_its_replaced/o7yi4ox/
false
1
t1_o7yi1hx
Thank you very much!
5
0
2026-02-28T22:49:25
noctrex
false
null
0
o7yi1hx
false
/r/LocalLLaMA/comments/1rhfque/qwen3_coder_next_qwen35_27b_devstral_small_2_rust/o7yi1hx/
false
5
t1_o7yi0lx
Why would one need 3 models for internet search?
1
0
2026-02-28T22:49:17
TornaD-oz
false
null
0
o7yi0lx
false
/r/LocalLLaMA/comments/1r9gve8/i_feel_left_behind_what_is_special_about_openclaw/o7yi0lx/
false
1
t1_o7yhyvx
That's what life is like on the razor's edge of technology. Change and test often. For me, the recent release of the qwen3.5 models, just obsoleted all older models. Been testing it these days and have been blown away.
1
0
2026-02-28T22:49:01
noctrex
false
null
0
o7yhyvx
false
/r/LocalLLaMA/comments/1rh52t9/config_drift_is_the_silent_killer_of_local_model/o7yhyvx/
false
1
t1_o7yhsi5
No. But I think making it really, really good is hard, and they also synergize based on how people use it and what these people ask for as enhancements.
1
0
2026-02-28T22:48:03
Protopia
false
null
0
o7yhsi5
false
/r/LocalLLaMA/comments/1rhenw3/the_ai_feedback_loop_is_officially_closed_and_i/o7yhsi5/
false
1
t1_o7yhqgq
Pulling it now. Will get it running tomorrow and I'll post here once it's done.
10
0
2026-02-28T22:47:44
Holiday_Purpose_3166
false
null
0
o7yhqgq
false
/r/LocalLLaMA/comments/1rhfque/qwen3_coder_next_qwen35_27b_devstral_small_2_rust/o7yhqgq/
false
10
t1_o7yhp6o
I must express my condolences at their loss. (it's not low enough yet, they have to do more training)
2
0
2026-02-28T22:47:32
MoffKalast
false
null
0
o7yhp6o
false
/r/LocalLLaMA/comments/1rgzul5/are_you_ready_for_small_qwens/o7yhp6o/
false
2
t1_o7yhjcj
Interesting, as I did some scripted runs and `--numa numactl` offered me a very slight boost. Thanks for pointing it out, I'll have to re-investigate this.
2
0
2026-02-28T22:46:36
Holiday_Purpose_3166
false
null
0
o7yhjcj
false
/r/LocalLLaMA/comments/1rhfque/qwen3_coder_next_qwen35_27b_devstral_small_2_rust/o7yhjcj/
false
2
t1_o7yhgoe
It almost certainly will never be faster, you're going to need those drivers to get hardware into the right state to go at full speed, going to need the filesystem support to efficiently load and set up the DMAs for sharing access. Unless you just end up writing your own OS that does all of that, and at that point you'...
140
0
2026-02-28T22:46:12
arades
false
null
0
o7yhgoe
false
/r/LocalLLaMA/comments/1rhg3p4/baremetal_ai_booting_directly_into_llm_inference/o7yhgoe/
false
140
t1_o7yhfgu
Just tried it with a couple of images. Thank you for providing the model for that. Out of interest, as I didn't get it from your post: are you running with vLLM or llama.cpp? Interestingly, its output for bboxes is slightly offset from time to time. You can see that the highlighted region by the model should be where I h...
2
0
2026-02-28T22:46:00
Njee_
false
null
0
o7yhfgu
false
/r/LocalLLaMA/comments/1rhflqn/letting_my_rtx_5090_21_tbs_mem_stretch_its_legs/o7yhfgu/
false
2
t1_o7yheyl
This is exactly what I need but for a 16gb 5070 ti / 5700x3d with 64gb of ram. Which version would you recommend for me? I am running it with lmstudio and connecting opencode to it and an unreal engine plugin. 
1
0
2026-02-28T22:45:56
Embarrassed_Adagio28
false
null
0
o7yheyl
false
/r/LocalLLaMA/comments/1rgtxry/is_qwen35_a_coding_game_changer_for_anyone_else/o7yheyl/
false
1
t1_o7yhcmi
That is a great point about profit. But I actually think slop might be beneficial for AI companies on purpose. If they created a perfect AI from the start the project would be basically finished and there would be no reason for constant expensive updates. Slop feels like it was created to keep the hype alive and ensur...
1
0
2026-02-28T22:45:33
ProductTop9807
false
null
0
o7yhcmi
false
/r/LocalLLaMA/comments/1rhenw3/the_ai_feedback_loop_is_officially_closed_and_i/o7yhcmi/
false
1
t1_o7yh87h
I still have a soft spot for Devstral Small 2, but it is mainly because it can understand images — making it easy to just show wire graphs of what I want or show visual bugs and fixes. But I think Qwen3.5 27B might become my newest favourite. Why did you not include Qwen 35B in your tests?
19
0
2026-02-28T22:44:52
liviuberechet
false
null
0
o7yh87h
false
/r/LocalLLaMA/comments/1rhfque/qwen3_coder_next_qwen35_27b_devstral_small_2_rust/o7yh87h/
false
19
t1_o7yh88l
My pleasure
2
0
2026-02-28T22:44:52
noctrex
false
null
0
o7yh88l
false
/r/LocalLLaMA/comments/1r9mkgj/paddleocrvl_now_in_llamacpp/o7yh88l/
false
2
t1_o7yh5w7
Makes sense given RLHF - models get rewarded for hedging because it looks more careful, which is exactly the pattern that causes spiraling when there is a clear answer.
8
0
2026-02-28T22:44:31
BC_MARO
false
null
0
o7yh5w7
false
/r/LocalLLaMA/comments/1rh6pru/google_found_that_longer_chain_of_thought/o7yh5w7/
false
8
t1_o7yh5m2
Both models are looping in the latest LM Studio on some questions. Can't test them heavily.
1
0
2026-02-28T22:44:28
CatEatsDogs
false
null
0
o7yh5m2
false
/r/LocalLLaMA/comments/1rhfjeg/qwen3527b_vs_qwen3535ba3b/o7yh5m2/
false
1
t1_o7yh4fd
The BF16 one
7
0
2026-02-28T22:44:17
noctrex
false
null
0
o7yh4fd
false
/r/LocalLLaMA/comments/1rhfque/qwen3_coder_next_qwen35_27b_devstral_small_2_rust/o7yh4fd/
false
7
t1_o7yh2ms
Ah, CPU inference eh? Does your paper get AVX2
3
0
2026-02-28T22:44:01
MoffKalast
false
null
0
o7yh2ms
false
/r/LocalLLaMA/comments/1rh9u4r/this_sub_is_incredible/o7yh2ms/
false
3
t1_o7ygz0s
The BF16 or FP16 variant?
3
0
2026-02-28T22:43:27
Holiday_Purpose_3166
false
null
0
o7ygz0s
false
/r/LocalLLaMA/comments/1rhfque/qwen3_coder_next_qwen35_27b_devstral_small_2_rust/o7ygz0s/
false
3
t1_o7ygyl8
The personalization paradox is the most interesting result here - components working individually but overwhelming the 7B in combination is a clean insight about small model limits. Congrats on the paper.
2
0
2026-02-28T22:43:23
BC_MARO
false
null
0
o7ygyl8
false
/r/LocalLLaMA/comments/1r8jgwv/i_built_a_local_ai_dev_assistant_with_hybrid_rag/o7ygyl8/
false
2
t1_o7ygult
Without sarcasm, and with all the sincerity I would tell my own self at 20yo: Tinker after clocking out, don’t clock in to tinker. Focus on learning how to do the boring learning and work that someone else is telling you to, because your 20yo brain (or an llm) is the absolute worst mentor. Find a human mentor(s) that...
3
0
2026-02-28T22:42:46
fligglymcgee
false
null
0
o7ygult
false
/r/LocalLLaMA/comments/1rhf9is/what_do_i_do_with_my_life/o7ygult/
false
3
t1_o7ygue8
The `--numa numactl` flag across every config is doing heavy lifting... If you move to cloud or multi-GPU, those manual topology flags won't transfer, and you may lose the gains you tuned locally. Consider a provisioner/orchestrator like Terradev then. It handles this and works in Claude Code.
4
0
2026-02-28T22:42:44
paulahjort
false
null
0
o7ygue8
false
/r/LocalLLaMA/comments/1rhfque/qwen3_coder_next_qwen35_27b_devstral_small_2_rust/o7ygue8/
false
4
t1_o7ygryn
bad bot
1
0
2026-02-28T22:42:22
RnRau
false
null
0
o7ygryn
false
/r/LocalLLaMA/comments/1rh2lew/openai_pivot_investors_love/o7ygryn/
false
1
t1_o7ygm49
Interestingly there is a similar project: [https://github.com/joi-lab/ouroboros](https://github.com/joi-lab/ouroboros) Though that one is barely usable without Claude Sonnet 4.6 (or smth similar in intelligence). Will check your project too, thanks!
1
0
2026-02-28T22:41:28
groosha
false
null
0
o7ygm49
false
/r/LocalLLaMA/comments/1rheg3r/the_yuki_project_not_another_chatbot_a_framework/o7ygm49/
false
1
t1_o7ygfxb
If you have the RAM for it, could you also try my quant of the coder next model? It would be interesting to see where it fits in your bench.
10
0
2026-02-28T22:40:31
noctrex
false
null
0
o7ygfxb
false
/r/LocalLLaMA/comments/1rhfque/qwen3_coder_next_qwen35_27b_devstral_small_2_rust/o7ygfxb/
false
10
t1_o7ygcf7
Leaving aside the details of the mechanism, there appear to be two alternatives here: 1, passing what is essentially the full existing output context for the final turn of the conversation to date, without summarising or compacting; or 2, summarizing the thinking thus far, and passing that as input to a completely ne...
4
0
2026-02-28T22:40:00
Protopia
false
null
0
o7ygcf7
false
/r/LocalLLaMA/comments/1rh802w/what_if_llm_agents_passed_kvcache_to_each_other/o7ygcf7/
false
4
t1_o7yganw
Yeah, I had this. My first question of any new model is “if you could give yourself a name, what would it be?” It agonised over it for a couple of minutes, second guessing itself repeatedly before settling for “Lumina”.
1
0
2026-02-28T22:39:43
ConspicuousSomething
false
null
0
o7yganw
false
/r/LocalLLaMA/comments/1rhaoty/anyone_noticing_qwen35_27b_getting_stuck_in/o7yganw/
false
1
t1_o7yga3y
Latest cuda is 13.1
1
0
2026-02-28T22:39:38
ShayBox
false
null
0
o7yga3y
false
/r/LocalLLaMA/comments/1r5sgow/llamacpp_takes_forever_to_load_model_from_ssd/o7yga3y/
false
1
t1_o7yg9es
I get what you're saying, but if you want it to be fast and snappy it will be inaccurate. And if you run a larger model it might be more accurate but it will be slower, and when it is instructed to find some more rare facts it can easily completely make it up. Install LM studio on it, try out different models. Do stuf...
3
0
2026-02-28T22:39:32
Equal_Passenger9791
false
null
0
o7yg9es
false
/r/LocalLLaMA/comments/1rhcs8p/tiny_small_faster_models_for_13_year_old_laptop/o7yg9es/
false
3
t1_o7yg6rh
This IS prefix caching. It just is. But with eg llama cpp python you have to manage your own cache (at least you did when I looked at it a few years ago), and OP might be using that or something similar. With the way OP distinguishes between “text” and “kv cache”, he either doesn’t know how cache hits work, or he’s u...
4
0
2026-02-28T22:39:07
ahjorth
false
null
0
o7yg6rh
false
/r/LocalLLaMA/comments/1rh802w/what_if_llm_agents_passed_kvcache_to_each_other/o7yg6rh/
false
4
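A toy model of the prefix-cache mechanism that comment describes. Real engines hash token blocks and store attention KV tensors; this stand-in only counts how much prefill gets skipped:

```python
# Toy prefix cache: KV state for a shared token prefix is computed once and
# reused, so only the uncached suffix of a new prompt gets prefilled.
cache = {}  # maps a token-prefix tuple -> (mock) KV state

def prefill(tokens):
    """Return (state, tokens_computed), reusing the longest cached prefix."""
    state = None
    computed = 0
    for i in range(len(tokens), 0, -1):   # find longest cached prefix
        if tuple(tokens[:i]) in cache:
            state = cache[tuple(tokens[:i])]
            break
    else:
        i = 0
    for j in range(i, len(tokens)):       # prefill only the uncached suffix
        state = (state, tokens[j])         # stand-in for real attention KV
        cache[tuple(tokens[:j + 1])] = state
        computed += 1
    return state, computed

_, first = prefill([1, 2, 3, 4])   # cold: all 4 tokens prefilled
_, second = prefill([1, 2, 3, 9])  # warm: only token 9 is prefilled
print(first, second)  # 4 1
```

The second call reuses the cached `[1, 2, 3]` prefix and prefills one token, which is exactly the cache-hit behavior being described.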
t1_o7yg28h
It*.
1
0
2026-02-28T22:38:25
Budget-Juggernaut-68
false
null
0
o7yg28h
false
/r/LocalLLaMA/comments/1rheg3r/the_yuki_project_not_another_chatbot_a_framework/o7yg28h/
false
1
t1_o7yftda
Look, we cannot deny that fake news existed before AI, but it was usually intentional and limited. Now we are facing a literal ocean of pure slop that is being generated unconsciously. The example is simple: students and researchers are already using AI to write essays and papers based on other AI hallucinations. This ...
1
0
2026-02-28T22:37:02
ProductTop9807
false
null
0
o7yftda
false
/r/LocalLLaMA/comments/1rhenw3/the_ai_feedback_loop_is_officially_closed_and_i/o7yftda/
false
1
t1_o7yfdiu
Have you tried stepfun models?
1
0
2026-02-28T22:34:36
KeikakuAccelerator
false
null
0
o7yfdiu
false
/r/LocalLLaMA/comments/1rh9ygz/is_anyone_else_waiting_for_a_6070b_moe_with_810b/o7yfdiu/
false
1
t1_o7yfdd8
Ollama falsely calls the distilled versions "R1" but it's not actually R1. Yet another reason not to use it.
-2
0
2026-02-28T22:34:35
droptableadventures
false
null
0
o7yfdd8
false
/r/LocalLLaMA/comments/1rh2lew/openai_pivot_investors_love/o7yfdd8/
false
-2
t1_o7yfb14
Can confirm MOSS-TTS is great, much better than VibeVoice7b and on par or better than ElevenLabs (as long as the audio source is high quality). For long form audio, I batch generate around 3 sentences at a time instead of all at once as the audio quality starts degrading after 500 tokens or so. MOSS-TTSD is made for ...
3
0
2026-02-28T22:34:13
LilBrownBebeShoes
false
null
0
o7yfb14
false
/r/LocalLLaMA/comments/1r7bsfd/best_audio_models_feb_2026/o7yfb14/
false
3
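The long-form batching strategy above can be sketched as follows; the batch size of three matches the comment, and the sentence splitter is deliberately naive:

```python
# Split long-form text into ~3-sentence batches so each TTS generation stays
# well under the length where audio quality starts degrading.
import re

def sentence_batches(text, batch_size=3):
    # naive split: break after ., !, or ? followed by whitespace
    sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]
    return [" ".join(sentences[i:i + batch_size])
            for i in range(0, len(sentences), batch_size)]

text = "One. Two! Three? Four. Five."
print(sentence_batches(text))  # ['One. Two! Three?', 'Four. Five.']
```

Each batch would then be passed to the TTS model as one generation call, with the resulting audio segments concatenated afterwards.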
t1_o7yf74g
You can probably find a way to filter out slop for yourself. But the tech billionaires make their money from pushing slop to people, so don't expect them to do it voluntarily. Your best bet imo is to try to find a community of like-minded people to write and promote an open source solution. Something that can be ...
1
0
2026-02-28T22:33:36
Protopia
false
null
0
o7yf74g
false
/r/LocalLLaMA/comments/1rhenw3/the_ai_feedback_loop_is_officially_closed_and_i/o7yf74g/
false
1
t1_o7yeytk
At least we can be frustrated together 😂
1
0
2026-02-28T22:32:20
lundrog
false
null
0
o7yeytk
false
/r/LocalLLaMA/comments/1rgtxry/is_qwen35_a_coding_game_changer_for_anyone_else/o7yeytk/
false
1
t1_o7yevr4
Hi. Indeed, a good question. But the answer is no, because you cannot measure the weight of a ghost. This project was not built just to have another tool. From what I know, standard AI metrics are designed to see how well a tool performs. But in this case, I can't. The only "tool access" the LLM has is the system ...
0
0
2026-02-28T22:31:51
DvMar
false
null
0
o7yevr4
false
/r/LocalLLaMA/comments/1rheg3r/the_yuki_project_not_another_chatbot_a_framework/o7yevr4/
false
0