name
string
body
string
score
int64
controversiality
int64
created
timestamp[us]
author
string
collapsed
bool
edited
timestamp[us]
gilded
int64
id
string
locked
bool
permalink
string
stickied
bool
ups
int64
t1_o7wp0b4
depends a lot on your use case. for agent pipelines where latency isn't critical, whisper distil-large-v3 via whisper.cpp is still my go-to for transcription — good accuracy, runs fine on 6GB VRAM, quantized q4 keeps it fast. for tts in non-real-time paths, kokoro-82M punches above its weight given the size. for actual...
1
0
2026-02-28T17:09:13
SignalStackDev
false
null
0
o7wp0b4
false
/r/LocalLLaMA/comments/1r7bsfd/best_audio_models_feb_2026/o7wp0b4/
false
1
t1_o7wowls
Yeah, if the deal were worth billions he would have accepted it. This deal is only worth 200 million, so he used it to boost his image, and people are buying it. OpenAI and Scam Altman are so desperate they don't care.
1
0
2026-02-28T17:08:43
Realistic_Muscles
false
null
0
o7wowls
false
/r/LocalLLaMA/comments/1rh7s7s/anthropic/o7wowls/
false
1
t1_o7wotj6
I did. I used the LM Studio local server, set some environment variables, then used it through the terminal. Worked pretty well.
1
0
2026-02-28T17:08:17
TheActualStudy
false
null
0
o7wotj6
false
/r/LocalLLaMA/comments/1rh6455/anybody_able_to_get_qwen3535ba3b_working_with/o7wotj6/
false
1
t1_o7wogqd
> tokens that stabilize early in shallow layers are "filler" (words like "and", "is", "the"). tokens that keep getting revised in deep layers are actual reasoning. we'll never see this implemented in real inference engines

> We posit that when a token prediction stabilizes in early layers, subsequent depth-wise modific...
19
0
2026-02-28T17:06:29
FullOf_Bad_Ideas
false
null
0
o7wogqd
false
/r/LocalLLaMA/comments/1rh6pru/google_found_that_longer_chain_of_thought/o7wogqd/
false
19
t1_o7wo9o7
I say the same for Gemma; besides using a very old architecture, Google doesn't care about making a new version of Gemma. Gemma is much, much weaker than Qwen in coding and in all other aspects. I've also used Gemma, and for me, it was one of the worst models I've ever used.
0
0
2026-02-28T17:05:28
AppealThink1733
false
null
0
o7wo9o7
false
/r/LocalLLaMA/comments/1rh3thm/rip_gemma_leave_your_memories_here/o7wo9o7/
false
0
t1_o7wnzey
probably since it’s a 3B-active model. try comparing against qwen 27b
6
0
2026-02-28T17:04:02
z_3454_pfk
false
null
0
o7wnzey
false
/r/LocalLLaMA/comments/1rh3thm/rip_gemma_leave_your_memories_here/o7wnzey/
false
6
t1_o7wnsqo
OpenRouter depends on the provider; a lot of endpoints are fp16/bf16 or fp8, not truly unquantized. I usually check the model card/provider name and A/B a long-context tool call run on fp16 vs Q8 to see where it starts drifting.
1
0
2026-02-28T17:03:05
BC_MARO
false
null
0
o7wnsqo
false
/r/LocalLLaMA/comments/1rh12xz/qwen3535b_nailed_my_simple_multiagent_workflow/o7wnsqo/
false
1
t1_o7wns6w
LOL
4
0
2026-02-28T17:03:00
No-Student6539
false
null
0
o7wns6w
false
/r/LocalLLaMA/comments/1rh7s7s/anthropic/o7wns6w/
false
4
t1_o7wnms7
You're lucky! You only saw it twice
5
0
2026-02-28T17:02:15
daniel-sousa-me
false
null
0
o7wnms7
false
/r/LocalLLaMA/comments/1rh2lew/openai_pivot_investors_love/o7wnms7/
false
5
t1_o7wnhna
Perplexity does this (date and time injection) and it helps the tiniest bit, but not that much.
1
0
2026-02-28T17:01:32
Economy_Cabinet_7719
false
null
0
o7wnhna
false
/r/LocalLLaMA/comments/1rh1a6v/why_does_qwen_35_think_its_2024/o7wnhna/
false
1
t1_o7wnabn
The DTR metric is interesting but the 50-token early estimation is the part that matters for local inference. I've been doing something similar with speculative sampling on reasoning models — running 4-8 parallel generations, killing any chain that starts looping or restating the problem after the first ~100 tokens. Ev...
22
0
2026-02-28T17:00:32
tom_mathews
false
null
0
o7wnabn
false
/r/LocalLLaMA/comments/1rh6pru/google_found_that_longer_chain_of_thought/o7wnabn/
false
22
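A minimal sketch of the early-kill heuristic the comment above describes (pruning parallel reasoning chains that start looping or restating the problem); `looks_stuck`, the n-gram window, and the thresholds are all illustrative assumptions, not the commenter's actual code:

```
def looks_stuck(text: str, prompt: str, ngram: int = 6, max_repeat: int = 3) -> bool:
    """Return True if the chain repeats an n-gram too often or restates the prompt."""
    words = text.split()
    counts = {}
    for i in range(len(words) - ngram + 1):
        key = tuple(words[i:i + ngram])
        counts[key] = counts.get(key, 0) + 1
        if counts[key] >= max_repeat:
            return True  # the same 6-word window appeared 3+ times: likely a loop
    # crude "restating the problem" check: the prompt shows up verbatim
    return prompt.strip() in text

# check each chain after its first ~100 tokens and prune the stuck ones
chains = [
    "wait, no, this is not right. wait, no, this is not right. wait, no, this is not right.",
    "Factor the quadratic, then substitute x = 2 into both sides.",
]
survivors = [c for c in chains if not looks_stuck(c, "Solve for x in x^2 - 4 = 0")]
print(len(survivors))  # 1: the looping chain is pruned
```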
t1_o7wn9cm
I'm waiting for an open-source coding model to drop that's equivalent to or better than Sonnet/Opus/GPT-5.x!
1
0
2026-02-28T17:00:24
DownSyndromeLogic
false
null
0
o7wn9cm
false
/r/LocalLLaMA/comments/1rgkc1b/back_in_my_day_localllama_were_the_pioneers/o7wn9cm/
false
1
t1_o7wn1n5
With all due respect to the researchers at Google, I've known about the uselessness of long-ass chains of thought for a long time, even without any paper. I guess I'm testing LLMs way more than what is considered healthy for human beings. But wait... Alternatively... On second thought... Give me a break, will you? 🤣
-2
1
2026-02-28T16:59:20
Cool-Chemical-5629
false
null
0
o7wn1n5
false
/r/LocalLLaMA/comments/1rh6pru/google_found_that_longer_chain_of_thought/o7wn1n5/
false
-2
t1_o7wmowf
I might be wrong, but the dense model is 27B
7
0
2026-02-28T16:57:34
dantheflyingman
false
null
0
o7wmowf
false
/r/LocalLLaMA/comments/1rh43za/qwen_3535ba3b_is_beyond_expectations_its_replaced/o7wmowf/
false
7
t1_o7wmkul
The latter doesn't exist. 27B dense does, and is likely better in every aspect besides speed.
12
0
2026-02-28T16:57:00
i-eat-kittens
false
null
0
o7wmkul
false
/r/LocalLLaMA/comments/1rh43za/qwen_3535ba3b_is_beyond_expectations_its_replaced/o7wmkul/
false
12
t1_o7wmho0
you could also try sglang, gives me 80% boost with mtp
2
0
2026-02-28T16:56:34
Conscious_Chef_3233
false
null
0
o7wmho0
false
/r/LocalLLaMA/comments/1rgwryb/speculative_decoding_qwen35_27b/o7wmho0/
false
2
t1_o7wmanx
string those gpus up
1
0
2026-02-28T16:55:38
bigh-aus
false
null
0
o7wmanx
false
/r/LocalLLaMA/comments/1rh0bkz/tempted_to_prompt_qwen_on_this_craigslist_rig_but/o7wmanx/
false
1
t1_o7wm425
Persistent profile over time! Every conversation adds to it, but old stuff naturally fades unless it keeps coming up. Think less "database" more "actual human memory."
1
0
2026-02-28T16:54:43
MotorAlternative8045
false
null
0
o7wm425
false
/r/LocalLLaMA/comments/1rh639o/your_ollama_setup_is_private_your_memory_layer/o7wm425/
false
1
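A toy sketch of the decay-with-reinforcement memory model the comment describes ("old stuff naturally fades unless it keeps coming up"); `MemoryItem` and the half-life value are illustrative assumptions, not the project's actual code:

```
import math
import time

class MemoryItem:
    def __init__(self, text, half_life_days=30.0):
        self.text = text
        self.half_life = half_life_days * 86400.0  # seconds
        self.strength = 1.0
        self.last_touched = time.time()

    def score(self, now=None):
        """Current relevance: strength, exponentially decayed since last touch."""
        elapsed = (now or time.time()) - self.last_touched
        return self.strength * math.exp(-math.log(2) * elapsed / self.half_life)

    def reinforce(self):
        """A re-mention resets the clock and bumps strength."""
        self.strength = self.score() + 1.0
        self.last_touched = time.time()

item = MemoryItem("user prefers dark mode")
print(round(item.score(), 2))                          # ~1.0 right now
print(round(item.score(time.time() + 30 * 86400), 2))  # ~0.5 after one half-life
```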
t1_o7wm2pd
I compared 35B with thinking on to 27B with thinking off, and 27B was much better, and overall response time was about the same on an RTX 6000. IMO, on a 5090 I'd run 27B at ~FP8 with thinking turned off. Tokens come out slower, but you are generating far fewer tokens.
26
0
2026-02-28T16:54:32
TokenRingAI
false
null
0
o7wm2pd
false
/r/LocalLLaMA/comments/1rh43za/qwen_3535ba3b_is_beyond_expectations_its_replaced/o7wm2pd/
false
26
t1_o7wm1ey
When 64 GB computers are standard.
2
0
2026-02-28T16:54:22
MythOfDarkness
false
null
0
o7wm1ey
false
/r/LocalLLaMA/comments/1rgkc1b/back_in_my_day_localllama_were_the_pioneers/o7wm1ey/
false
2
t1_o7wm0j9
I use the Heretic version with thinking off. It's mostly logical errors, and it happens frequently enough to be annoying.
1
0
2026-02-28T16:54:14
Gringe8
false
null
0
o7wm0j9
false
/r/LocalLLaMA/comments/1rgvma8/which_size_of_qwen35_are_you_planning_to_run/o7wm0j9/
false
1
t1_o7wlzks
I think Mistral is underrated in this ranking, and so is Cohere. Mistral's Vibe makes their Devstral model very useful. It's not Claude Code, but it's remarkable how useful it is considering it's a small dense model. Also, Vibe is included with a subscription to Le Chat and I've never hit a usage limit. In a future whe...
1
0
2026-02-28T16:54:07
wienerwald
false
null
0
o7wlzks
false
/r/LocalLLaMA/comments/1rgokw1/a_monthly_update_to_my_where_are_openweight/o7wlzks/
false
1
t1_o7wly62
Yes, me. I wish I could help you with a how-to, but it's all vibe-coded by Codex. Used llama.cpp and I get just above 100 t/s. Nvidia 3090 limited to 250W. Works OK, but somewhere around GPT-4o level. Way under GLM 4.7 smartness.
1
0
2026-02-28T16:53:55
L0ren_B
false
null
0
o7wly62
false
/r/LocalLLaMA/comments/1rh6455/anybody_able_to_get_qwen3535ba3b_working_with/o7wly62/
false
1
t1_o7wlvka
Yeah I can see that interpretation.
5
0
2026-02-28T16:53:33
GarbanzoBenne
false
null
0
o7wlvka
false
/r/LocalLLaMA/comments/1rh2lew/openai_pivot_investors_love/o7wlvka/
false
5
t1_o7wlvg5
Running llama.cpp server through terminal.
2
0
2026-02-28T16:53:32
LewdKantian
false
null
0
o7wlvg5
false
/r/LocalLLaMA/comments/1rg4fb7/should_qwen3535ba3b_be_this_much_slower_than/o7wlvg5/
false
2
t1_o7wlsly
This is the combo I've been waiting for honestly. WebLLM for inference and something like this for memory means you could build a genuinely useful personal AI assistant that runs entirely offline. No subscriptions, no data leaving the machine, no API going down at 2am when you actually need it. Quick question though, ...
2
0
2026-02-28T16:53:09
MyS3xyM0M
false
null
0
o7wlsly
false
/r/LocalLLaMA/comments/1rh639o/your_ollama_setup_is_private_your_memory_layer/o7wlsly/
false
2
t1_o7wlq4l
It's very good at specific tasks. Design is on par with or better than Sonnet 4.5! Technical is lacking. Tool calling is far behind. Overall I will use it for design stuff, I think.
1
0
2026-02-28T16:52:48
ShadyShroomz
false
null
0
o7wlq4l
false
/r/LocalLLaMA/comments/1rdxfdu/qwen3535ba3b_is_a_gamechanger_for_agentic_coding/o7wlq4l/
false
1
t1_o7wlp3j
I'm trying to explain complex topics to someone with potentially little understanding of machine learning and mathematical concepts. I know the token embeds are sorted in space by some black-box arbitrary gradient descent method generally unintelligible to humans, but it's easier to imagine the experts when you explain...
5
0
2026-02-28T16:52:40
oMGalLusrenmaestkaen
false
null
0
o7wlp3j
false
/r/LocalLLaMA/comments/1rgpwn5/qwen_3527b_punches_waaaaay_above_its_weight_with/o7wlp3j/
false
5
t1_o7wljke
It's not a trick question though. It's a request for the model to give you its own plot, which requires creative writing but still requires knowledge about the show. It's only a trick question for small, dumb models, which will waste so many tokens looking for the plot of a nonexistent episode only to tell you what yo...
1
0
2026-02-28T16:51:56
Cool-Chemical-5629
false
null
0
o7wljke
false
/r/LocalLLaMA/comments/1rgkyt5/qwen35_27b_scores_42_on_intelligence_index_and_is/o7wljke/
false
1
t1_o7wli3n
Tbh I don’t think the US wants to use LLMs for that right now, probably in 6 months for the mid-terms. Everything they could want to do with LLMs can be done with other, simpler models.
1
0
2026-02-28T16:51:44
THEKILLFUS
false
null
0
o7wli3n
false
/r/LocalLLaMA/comments/1rgn4ki/president_trump_orders_all_federal_agencies_in/o7wli3n/
false
1
t1_o7wlgcc
you could try gpt oss 20b
2
0
2026-02-28T16:51:29
nunodonato
false
null
0
o7wlgcc
false
/r/LocalLLaMA/comments/1rgn5m0/is_anything_worth_to_do_with_a_7b_model/o7wlgcc/
false
2
t1_o7wlbvl
scama or scuma for short
3
0
2026-02-28T16:50:53
Separate_Hope5953
false
null
0
o7wlbvl
false
/r/LocalLLaMA/comments/1rh2lew/openai_pivot_investors_love/o7wlbvl/
false
3
t1_o7wl9fl
> human responsibility for the use of force, including for autonomous weapon systems

Just means that a general (or lower) has to take the blame when an autonomous weapon system is deployed and shit goes south, but they will still deploy them.
10
0
2026-02-28T16:50:33
Negative_Scarcity315
false
null
0
o7wl9fl
false
/r/LocalLLaMA/comments/1rh2lew/openai_pivot_investors_love/o7wl9fl/
false
10
t1_o7wl5as
You can also run it the advanced way through Termux. I'm running Qwen3-4B-Instruct-2507-Q4_K_M.gguf in Termux using llama.cpp on my Poco X7 Pro 12GB, and it's running at 5~7 tokens/s on the Mali-G720 MC7 GPU. It's significantly more efficient than running it on the CPU because there's no heat during...
1
0
2026-02-28T16:49:59
HaysamKING1
false
null
0
o7wl5as
false
/r/LocalLLaMA/comments/1rbrw12/is_there_any_llm_that_can_run_directly_on_an/o7wl5as/
false
1
t1_o7wl3bs
Exactly the vision! WebLLM is the missing inference piece. Drop LokulMem alongside it and you've got a full AI stack running entirely in the browser. No server, no API key, full privacy. The tab becomes the runtime.
1
0
2026-02-28T16:49:43
MotorAlternative8045
false
null
0
o7wl3bs
false
/r/LocalLLaMA/comments/1rh639o/your_ollama_setup_is_private_your_memory_layer/o7wl3bs/
false
1
t1_o7wktko
according to people familiar with the matter and knowledge of those arrangements
4
0
2026-02-28T16:48:23
ydnar
false
null
0
o7wktko
false
/r/LocalLLaMA/comments/1rh095c/deepseek_v4_will_be_released_next_week_and_will/o7wktko/
false
4
t1_o7wksym
Those baselines are the key result - naive RAG degrading below raw LLM is exactly why the routing and Stop-RAG layers matter. Looking forward to the A1-A6 table.
1
0
2026-02-28T16:48:18
BC_MARO
false
null
0
o7wksym
false
/r/LocalLLaMA/comments/1r8jgwv/i_built_a_local_ai_dev_assistant_with_hybrid_rag/o7wksym/
false
1
t1_o7wknsl
yeah this makes sense tbh, ive noticed local reasoning models love to ramble when they're stuck. the early termination idea could be huge for llama.cpp type inference - imagine if you could kill a reasoning branch at 50 tokens instead of letting it run to 2k+. would make multi-shot much more practical
8
0
2026-02-28T16:47:37
papertrailml
false
null
0
o7wknsl
false
/r/LocalLLaMA/comments/1rh6pru/google_found_that_longer_chain_of_thought/o7wknsl/
false
8
t1_o7wkklu
With WebLLM maturing so fast, do you see this becoming a full in-browser AI stack? Like local inference + local memory entirely in the tab, no server touching anything? Also curious how you're handling the cross-device sync problem, is that a dealbreaker for most use cases or do people just accept it as the tradeoff f...
1
0
2026-02-28T16:47:11
Top_View_4646
false
null
0
o7wkklu
false
/r/LocalLLaMA/comments/1rh639o/your_ollama_setup_is_private_your_memory_layer/o7wkklu/
false
1
t1_o7wkjab
OK, you have 2 GPUs! Is this a custom PC build? I'm an Opus user. So if I were to switch to an LLM like Qwen3.5, does it have a 'planning mode'? Do you use 'Claude Code + Qwen3.5' or do you have your own CLI combo?
1
0
2026-02-28T16:47:00
gmmarcus
false
null
0
o7wkjab
false
/r/LocalLLaMA/comments/1rg8gkx/qwen35_35b_a3b_45_ts_128k_ctx_on_single_16gb_5060/o7wkjab/
false
1
t1_o7wkip5
Me too. I don't understand (I'm the app maintainer). It's fully open source. The latest version speeds it up quite a bit.
2
0
2026-02-28T16:46:55
scousi
false
null
0
o7wkip5
false
/r/LocalLLaMA/comments/1re17th/blown_away_by_qwen_35_35b_a3b/o7wkip5/
false
2
t1_o7wkg0h
tbh the 35b-a3b has been solid for me too, way better reasoning than i expected for that size. the thinking mode helps a lot with complex tasks even if it does yap lol
6
0
2026-02-28T16:46:33
papertrailml
false
null
0
o7wkg0h
false
/r/LocalLLaMA/comments/1rh43za/qwen_3535ba3b_is_beyond_expectations_its_replaced/o7wkg0h/
false
6
t1_o7wkdwh
yeah
1
0
2026-02-28T16:46:17
Recent_Juggernaut859
false
null
0
o7wkdwh
false
/r/LocalLLaMA/comments/1rh1024/new_ai_fundamental_research_companylab/o7wkdwh/
false
1
t1_o7wkaq7
he did indeed do the meme wrong
4
0
2026-02-28T16:45:51
mankiw
false
null
0
o7wkaq7
false
/r/LocalLLaMA/comments/1rh2lew/openai_pivot_investors_love/o7wkaq7/
false
4
t1_o7wkaka
9B should work great in q8 on 16GB VRAM
1
0
2026-02-28T16:45:49
wektor420
false
null
0
o7wkaka
false
/r/LocalLLaMA/comments/1rgzul5/are_you_ready_for_small_qwens/o7wkaka/
false
1
t1_o7wk46l
If you watch pictures of Zuckerberg or Altman over time you can see the humanity draining from them. It's heavily obvious in the eyes. They started out ambitious, smart, and probably a bit deluded, but the success got to them and the power sucked out any remaining shreds of humanity. (It happens to all tech CEOs, it's just ...
3
0
2026-02-28T16:44:57
BonjaminClay
false
null
0
o7wk46l
false
/r/LocalLLaMA/comments/1rh2lew/openai_pivot_investors_love/o7wk46l/
false
3
t1_o7wjxue
How would that price down GPUs?
4
0
2026-02-28T16:44:07
gradient8
false
null
0
o7wjxue
false
/r/LocalLLaMA/comments/1rh095c/deepseek_v4_will_be_released_next_week_and_will/o7wjxue/
false
4
t1_o7wjwxl
I've seen it happen with Qwen instruct models: when asked complex stuff, or to resolve a problem, they will just reason in the answer, often say "wait, no, this is not right", and sometimes get stuck in a loop.
8
0
2026-02-28T16:44:00
_VirtualCosmos_
false
null
0
o7wjwxl
false
/r/LocalLLaMA/comments/1rh5luv/qwen35_35ba3b_evaded_the_zeroreasoning_budget_by/o7wjwxl/
false
8
t1_o7wjvqk
For me, it just hangs or stops generating code right in the middle of a task, usually after reading a file. I literally have to type "keep going" just to force it to finish the implementation. Other times it completely loses the plot. After it generates a plan and clears the context to start coding, instead of actuall...
1
0
2026-02-28T16:43:50
_Paza_
false
null
0
o7wjvqk
false
/r/LocalLLaMA/comments/1rh6455/anybody_able_to_get_qwen3535ba3b_working_with/o7wjvqk/
false
1
t1_o7wjtsc
[deleted]
1
0
2026-02-28T16:43:35
[deleted]
true
null
0
o7wjtsc
false
/r/LocalLLaMA/comments/1rh095c/deepseek_v4_will_be_released_next_week_and_will/o7wjtsc/
false
1
t1_o7wjsdw
That’s just what qwen 3.5 needs, too much yap
109
0
2026-02-28T16:43:23
Skystunt
false
null
0
o7wjsdw
false
/r/LocalLLaMA/comments/1rh6pru/google_found_that_longer_chain_of_thought/o7wjsdw/
false
109
t1_o7wjpa9
Yes, looking into models that I can safely run on 24GB VRAM.
1
0
2026-02-28T16:42:58
SoMuchLasagna
false
null
0
o7wjpa9
false
/r/LocalLLaMA/comments/1rgokw1/a_monthly_update_to_my_where_are_openweight/o7wjpa9/
false
1
t1_o7wjlfs
I’ll see if I can be blessed by the used RAM market one day, lol. Going to try and finish deploying llama.cpp (with Claude’s help) this afternoon. Downloading three models to experiment with. I think Gemma2 27B KM4, GPT OSS, and I forget the third one recommended. If I ever score some hardware upgrades, I’ll look into ...
1
0
2026-02-28T16:42:27
SoMuchLasagna
false
null
0
o7wjlfs
false
/r/LocalLLaMA/comments/1rgokw1/a_monthly_update_to_my_where_are_openweight/o7wjlfs/
false
1
t1_o7wjlem
4.7
2
0
2026-02-28T16:42:26
Witty_Mycologist_995
false
null
0
o7wjlem
false
/r/LocalLLaMA/comments/1rgkc1b/back_in_my_day_localllama_were_the_pioneers/o7wjlem/
false
2
t1_o7wjh0c
What are "nvidia 16GB GPUs"?? That's a lot of options.
1
0
2026-02-28T16:41:50
laser50
false
null
0
o7wjh0c
false
/r/LocalLLaMA/comments/1rh0msv/qwen3535ba3b_q5_k_mbest_model_for_nvidia_16gb_gpus/o7wjh0c/
false
1
t1_o7wjgt3
https://preview.redd.it/…68a07a2e2f8762
40
0
2026-02-28T16:41:48
j0j0n4th4n
false
null
0
o7wjgt3
false
/r/LocalLLaMA/comments/1rh2lew/openai_pivot_investors_love/o7wjgt3/
false
40
t1_o7wj4zw
I bet he doesn't have the legal authority to do this.
1
0
2026-02-28T16:40:11
makegeneve
false
null
0
o7wj4zw
false
/r/LocalLLaMA/comments/1rgn4ki/president_trump_orders_all_federal_agencies_in/o7wj4zw/
false
1
t1_o7wiuw0
Okay, that sub is pretty funny. Shame it only has a dozen posts total, though.
1
0
2026-02-28T16:38:47
Void-07D5
false
null
0
o7wiuw0
false
/r/LocalLLaMA/comments/1rgkc1b/back_in_my_day_localllama_were_the_pioneers/o7wiuw0/
false
1
t1_o7withx
I'd move up Kimi. In terms of reasoning, research and writing coherent text etc. it's a clear step above Deepseek and Grok. I haven't tried it for coding though.
2
0
2026-02-28T16:38:35
Underbarochfin
false
null
0
o7withx
false
/r/LocalLLaMA/comments/1rgokw1/a_monthly_update_to_my_where_are_openweight/o7withx/
false
2
t1_o7wit8b
Is there a way to apply it currently to existing models?
30
0
2026-02-28T16:38:33
gyzerok
false
null
0
o7wit8b
false
/r/LocalLLaMA/comments/1rh6pru/google_found_that_longer_chain_of_thought/o7wit8b/
false
30
t1_o7wisxh
RAM is correct; look into quants. Quants are lighter versions of models and there are a ton out there; some people make benchmarks specific to those quants. There are definitely models you can run, but you'll have to compromise on speed.
1
0
2026-02-28T16:38:30
Zundrium
false
null
0
o7wisxh
false
/r/LocalLLaMA/comments/1rgokw1/a_monthly_update_to_my_where_are_openweight/o7wisxh/
false
1
t1_o7wip58
Hey Anthropic, shift ops and HQ to an EU country, everyone wins.
3
0
2026-02-28T16:37:59
Sp00k_x
false
null
0
o7wip58
false
/r/LocalLLaMA/comments/1rguty0/get_your_local_models_in_order_anthropic_just_got/o7wip58/
false
3
t1_o7wioly
Basically because you would need to train the behavior of "date awareness" and it seems they didn't bother. It probably is solvable just by injecting the date into the system prompt
0
0
2026-02-28T16:37:55
nomorebuttsplz
false
null
0
o7wioly
false
/r/LocalLLaMA/comments/1rh1a6v/why_does_qwen_35_think_its_2024/o7wioly/
false
0
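A minimal sketch of the date-injection workaround suggested in the comment above, assuming an OpenAI-compatible local server (llama.cpp / Ollama style); the URL and model name are placeholders, not a confirmed setup:

```
import json
import urllib.request
from datetime import date

def chat(user_msg: str, base_url: str = "http://localhost:8080/v1") -> str:
    # prepend today's date so the model stops assuming its training cutoff year
    system = f"Today's date is {date.today().isoformat()}. Treat this as the current date."
    payload = {
        "model": "qwen3.5",  # placeholder model name
        "messages": [
            {"role": "system", "content": system},
            {"role": "user", "content": user_msg},
        ],
    }
    req = urllib.request.Request(
        f"{base_url}/chat/completions",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]

# print(chat("What year is it?"))  # should no longer answer 2024
```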
t1_o7wio46
While I agree wholeheartedly, I do feel this is an extremely unfair reason for them to go down. I sorta agree with their moral position on this one issue, and the government attacking them is beyond absurd. Since they use copyrighted material and THEN keep it closed source, I do not care for their downfall at all, but this i...
1
0
2026-02-28T16:37:51
Alexercer
false
null
0
o7wio46
false
/r/LocalLLaMA/comments/1rguty0/get_your_local_models_in_order_anthropic_just_got/o7wio46/
false
1
t1_o7wiiqg
Qwen3.5 runs on their own engine, and apparently there's a bug where it tries to run GGUFs downloaded from HF on the old llama.cpp engine. https://github.com/ollama/ollama/issues/14503
2
0
2026-02-28T16:37:07
chibop1
false
null
0
o7wiiqg
false
/r/LocalLLaMA/comments/1rgyqz7/has_anyone_got_qwen35_to_work_with_ollama/o7wiiqg/
false
2
t1_o7wig12
A fellow smellgen connoisseur I see.
10
0
2026-02-28T16:36:46
Electroboots
false
null
0
o7wig12
false
/r/LocalLLaMA/comments/1rh095c/deepseek_v4_will_be_released_next_week_and_will/o7wig12/
false
10
t1_o7wifep
I doubt it's good at all of them. I've seen some glaring errors in my own language. In the time I used Gemma, I saw a lot of things wrong, none of them grammatical.
0
0
2026-02-28T16:36:40
DrNavigat
false
null
0
o7wifep
false
/r/LocalLLaMA/comments/1rh3thm/rip_gemma_leave_your_memories_here/o7wifep/
false
0
t1_o7wid6p
I still use Llama 3.x for professional writing because it more easily matches my natural style and tone.
8
0
2026-02-28T16:36:22
sxales
false
null
0
o7wid6p
false
/r/LocalLLaMA/comments/1rh46g2/why_some_still_playing_with_old_models_nostalgia/o7wid6p/
false
8
t1_o7wi1cy
[removed]
1
0
2026-02-28T16:34:46
[deleted]
true
null
0
o7wi1cy
false
/r/LocalLLaMA/comments/1re2qzr/after_all_the_news_do_you_worry_about_privacy/o7wi1cy/
false
1
t1_o7wi0wd
I agree money is a factor, but they can make money at other companies too. There is hero worship of people like S(c)am Altman and Elon Musk. It feels disgusting, to be honest.
8
0
2026-02-28T16:34:42
PaceImaginary8610
false
null
0
o7wi0wd
false
/r/LocalLLaMA/comments/1rh2lew/openai_pivot_investors_love/o7wi0wd/
false
8
t1_o7wi0hd
I tested and Q6 is barely faster so not really worth any quality loss unless you don't have the memory. Here's an example with qwen coder next 80b, same arch: https://preview.redd.it/a-few-strix-halo-benchmarks-minimax-m2-5-step-3-5-flash-v0-yqbgrwxziqkg1.png
12
0
2026-02-28T16:34:39
spaceman_
false
null
0
o7wi0hd
false
/r/LocalLLaMA/comments/1rh43za/qwen_3535ba3b_is_beyond_expectations_its_replaced/o7wi0hd/
false
12
t1_o7whqk3
Great question to throw in here! Curious what others say too. For what it's worth, this is exactly the gap LokulMem is trying to fill, specifically for browser-based setups. If you've ever wanted memory that's fully local, decays naturally over time so old irrelevant stuff fades, and requires zero backend, worth a loo...
1
0
2026-02-28T16:33:16
MotorAlternative8045
false
null
0
o7whqk3
false
/r/LocalLLaMA/comments/1rh639o/your_ollama_setup_is_private_your_memory_layer/o7whqk3/
false
1
t1_o7whorb
These aren't exactly npm packages that need updating. Models are snapshot outputs. If it fulfills a workflow, that's probably good enough for most people.
1
0
2026-02-28T16:33:01
iamapizza
false
null
0
o7whorb
false
/r/LocalLLaMA/comments/1rh46g2/why_some_still_playing_with_old_models_nostalgia/o7whorb/
false
1
t1_o7whodm
I hope it does. Buying the dip is better 
2
0
2026-02-28T16:32:58
FSM---1
false
null
0
o7whodm
false
/r/LocalLLaMA/comments/1rh095c/deepseek_v4_will_be_released_next_week_and_will/o7whodm/
false
2
t1_o7whk8a
Open WebUI just calls the API; the problem is on llama.cpp's side.
-1
0
2026-02-28T16:32:23
x0wl
false
null
0
o7whk8a
false
/r/LocalLLaMA/comments/1rh43za/qwen_3535ba3b_is_beyond_expectations_its_replaced/o7whk8a/
false
-1
t1_o7whhrd
— but memory has been the missing piece. Drop LokulMem in alongside WebLLM and you've got a full local AI stack running entirely in the tab. No server, no API key, full privacy. Would love to hear what you're building if you give it a try! 🧠⚡
1
0
2026-02-28T16:32:04
MotorAlternative8045
false
null
0
o7whhrd
false
/r/LocalLLaMA/comments/1rh639o/your_ollama_setup_is_private_your_memory_layer/o7whhrd/
false
1
t1_o7whh07
Oh lol. No wonder what I'm saying isn't helping. Look up someone's install guide for Debian and go from there. Your other comment says you only have 32GB of RAM. You will struggle with performance on models that need to offload to CPU/RAM with that little RAM; best to stick to ones you can fully fit into your VRAM. I ...
1
0
2026-02-28T16:31:58
ImaginaryBluejay0
false
null
0
o7whh07
false
/r/LocalLLaMA/comments/1rgokw1/a_monthly_update_to_my_where_are_openweight/o7whh07/
false
1
t1_o7whbk6
Interesting. Is your setup just one agent, or coordinating with multiagents?
1
0
2026-02-28T16:31:13
chibop1
false
null
0
o7whbk6
false
/r/LocalLLaMA/comments/1rh12xz/qwen3535b_nailed_my_simple_multiagent_workflow/o7whbk6/
false
1
t1_o7wha0m
The NVFP4 quants did not work for me; the vLLM and SGLang kernels are not working properly on Blackwell, which is why I recommended MXFP4.
```
vllm serve olka-fi/Qwen3.5-122B-A10B-MXFP4 \
    --max-num-seqs 128 \
    --max-model-len 262144 \
    --enable-auto-tool-choice \
    --tool-call-parser qwen3_xml \
    --port...
```
2
0
2026-02-28T16:31:00
TokenRingAI
false
null
0
o7wha0m
false
/r/LocalLLaMA/comments/1rgiait/switched_to_qwen35122ba10bi1gguf/o7wha0m/
false
2
t1_o7wh9x2
I'm taking advantage of this thread to ask Open-WebUI users: what is your preferred solution for managing memory?
2
0
2026-02-28T16:30:59
Adventurous-Paper566
false
null
0
o7wh9x2
false
/r/LocalLLaMA/comments/1rh639o/your_ollama_setup_is_private_your_memory_layer/o7wh9x2/
false
2
t1_o7wh9nq
None of those solutions worked for me with llama.cpp. I had to do something like:
- download this file https://qwen.readthedocs.io/en/latest/_downloads/c101120b5bebcc2f12ec504fc93a965e/qwen3_nonthinking.jinja
- pass `--chat-template-file qwen3_nothinking.jinja --chat-template-kwargs '{ "enableThinking": false }'`...
1
0
2026-02-28T16:30:57
komio
false
null
0
o7wh9nq
false
/r/LocalLLaMA/comments/1re1b4a/you_can_use_qwen35_without_thinking/o7wh9nq/
false
1
t1_o7wh9d1
How many "next weeks" have we seen so far?
1
0
2026-02-28T16:30:55
Fit-Pattern-2724
false
null
0
o7wh9d1
false
/r/LocalLLaMA/comments/1rh095c/deepseek_v4_will_be_released_next_week_and_will/o7wh9d1/
false
1
t1_o7wh3n5
awesome, thanks for the heads up!
1
0
2026-02-28T16:30:08
HopePupal
false
null
0
o7wh3n5
false
/r/LocalLLaMA/comments/1rbvmpk/running_llama_32_1b_entirely_on_an_amd_npu_on/o7wh3n5/
false
1
t1_o7wh3g7
I find it actually so cool. With WebLLM getting serious in the past couple of months, this can be really handy for managing memory!
2
0
2026-02-28T16:30:07
MyS3xyM0M
false
null
0
o7wh3g7
false
/r/LocalLLaMA/comments/1rh639o/your_ollama_setup_is_private_your_memory_layer/o7wh3g7/
false
2
t1_o7wh3b1
the spiraling effect is especially noticeable with reasoning models on problems that have a clean solution path - they keep second-guessing instead of committing. DTR as a metric is smart, curious how they define "deep processing" vs noise tokens in practice.
53
0
2026-02-28T16:30:06
BC_MARO
false
null
0
o7wh3b1
false
/r/LocalLLaMA/comments/1rh6pru/google_found_that_longer_chain_of_thought/o7wh3b1/
false
53
t1_o7wgx3o
I’m on OpenMediaVault. :)
1
0
2026-02-28T16:29:14
SoMuchLasagna
false
null
0
o7wgx3o
false
/r/LocalLLaMA/comments/1rgokw1/a_monthly_update_to_my_where_are_openweight/o7wgx3o/
false
1
t1_o7wgv0q
Software, no; but banning AI, yes, if the choice is between humanity and AI.
1
0
2026-02-28T16:28:57
ImportancePitiful795
false
null
0
o7wgv0q
false
/r/LocalLLaMA/comments/1rgn4ki/president_trump_orders_all_federal_agencies_in/o7wgv0q/
false
1
t1_o7wgrfs
I don't have enough RAM for Q3CN, so I did Q3.5. I've not tried to format code on Reddit before, let's see if this works...
```
Ryzen 9900X (12 cores), 64GB DDR5-6000 (dual-channel), RTX 5080 (16GB, 5.0x16)
Windows 11 WSL2 Ubuntu 24.04 (CUDA Toolkit 13.1 Update 1 for WSL-Ubuntu)

Krasis Launch Configuration

Krasi...
```
2
0
2026-02-28T16:28:28
Lonely_Drewbear
false
null
0
o7wgrfs
false
/r/LocalLLaMA/comments/1rgfm00/i_built_a_hybrid_moe_runtime_that_does_3324_toks/o7wgrfs/
false
2
t1_o7wgpkz
Not anymore: https://www.reddit.com/r/LocalLLaMA/comments/1rgel19/new_qwen3535ba3b_unsloth_dynamic_ggufs_benchmarks/
3
0
2026-02-28T16:28:12
coder543
false
null
0
o7wgpkz
false
/r/LocalLLaMA/comments/1rh0msv/qwen3535ba3b_q5_k_mbest_model_for_nvidia_16gb_gpus/o7wgpkz/
false
3
t1_o7wgp33
If you'd like to build strict pipelines, version control goes without saying, doesn't it?
1
0
2026-02-28T16:28:08
Truncleme
false
null
0
o7wgp33
false
/r/LocalLLaMA/comments/1rh52t9/config_drift_is_the_silent_killer_of_local_model/o7wgp33/
false
1
t1_o7wgmpb
May the luck be with you
5
0
2026-02-28T16:27:48
Maleficent-Ad5999
false
null
0
o7wgmpb
false
/r/LocalLLaMA/comments/1rgzul5/are_you_ready_for_small_qwens/o7wgmpb/
false
5
t1_o7wgegm
My guess is something with CUDA. The issue I had above is that I installed CUDA, but because I was on Studio 2026 instead of 2022, it never detected that it could install the dependencies. Make sure you install Visual Studio 2022 and all the required packages in it. Then install CUDA (or reinstall it). CUDA will just straight ...
1
0
2026-02-28T16:26:39
ImaginaryBluejay0
false
null
0
o7wgegm
false
/r/LocalLLaMA/comments/1rgokw1/a_monthly_update_to_my_where_are_openweight/o7wgegm/
false
1
t1_o7wgeij
I'm doing tests in the cybersecurity field, letting the agent find vulnerabilities in a web application (web pentest). I tested Qwen3.5, GLM-4.7, gpt-oss, and Qwen3-Coder-Next, and the best was GLM-4.7. I will share the results of the test in a separate article.
4
0
2026-02-28T16:26:39
DarkZ3r0o
false
null
0
o7wgeij
false
/r/LocalLLaMA/comments/1rh12xz/qwen3535b_nailed_my_simple_multiagent_workflow/o7wgeij/
false
4
t1_o7wgbgp
To be fair, Mistral models are good as system components. They're lightweight and fast, really good for many auxiliary jobs.
2
0
2026-02-28T16:26:14
Equal-Meeting-519
false
null
0
o7wgbgp
false
/r/LocalLLaMA/comments/1rgokw1/a_monthly_update_to_my_where_are_openweight/o7wgbgp/
false
2
t1_o7wg9uw
Hey now, that's both logical and reasonable :)
1
1
2026-02-28T16:26:00
realityczek
false
null
0
o7wg9uw
false
/r/LocalLLaMA/comments/1rguty0/get_your_local_models_in_order_anthropic_just_got/o7wg9uw/
false
1
t1_o7wg7zl
The active parameter counts are 3B and 5.1B. They're referring to the quantized model size.
1
0
2026-02-28T16:25:45
DeProgrammer99
false
null
0
o7wg7zl
false
/r/LocalLLaMA/comments/1rh43za/qwen_3535ba3b_is_beyond_expectations_its_replaced/o7wg7zl/
false
1
t1_o7wg4eh
That's not at all unreasonable. Anthropic essentially decided that their AI model would act in such a way as to prevent lawful US military action if it felt it was improper. That means any contractor that used it might find the results skewed or manipulated in non-obvious ways to achieve that outcome. In sho...
-5
0
2026-02-28T16:25:14
realityczek
false
null
0
o7wg4eh
false
/r/LocalLLaMA/comments/1rguty0/get_your_local_models_in_order_anthropic_just_got/o7wg4eh/
false
-5
t1_o7wg153
LM Studio has done this for a year
1
0
2026-02-28T16:24:47
_mausmaus
false
null
0
o7wg153
false
/r/LocalLLaMA/comments/1rg94wu/llmfit_one_command_to_find_what_model_runs_on/o7wg153/
false
1
t1_o7wfy5m
Sam already chose alliance. The government will support him and bail him out anytime needed.
2
0
2026-02-28T16:24:22
redditsublurker
false
null
0
o7wfy5m
false
/r/LocalLLaMA/comments/1rh2lew/openai_pivot_investors_love/o7wfy5m/
false
2
t1_o7wfwyu
[removed]
1
0
2026-02-28T16:24:12
[deleted]
true
null
0
o7wfwyu
false
/r/LocalLLaMA/comments/1rh46g2/why_some_still_playing_with_old_models_nostalgia/o7wfwyu/
false
1