name
string
body
string
score
int64
controversiality
int64
created
timestamp[us]
author
string
collapsed
bool
edited
timestamp[us]
gilded
int64
id
string
locked
bool
permalink
string
stickied
bool
ups
int64
t1_o7w3bn1
Always keep a validation dataset for each use case
2
0
2026-02-28T15:21:03
No_Afternoon_4260
false
null
0
o7w3bn1
false
/r/LocalLLaMA/comments/1rh52t9/config_drift_is_the_silent_killer_of_local_model/o7w3bn1/
false
2
t1_o7w396t
--chat-template-kwargs '{"enable_thinking":false}'
2
0
2026-02-28T15:20:42
Velocita84
false
null
0
o7w396t
false
/r/LocalLLaMA/comments/1rh4p8i/okay_im_overthinking_yes_yes_you_are_qwen_35_27b/o7w396t/
false
2
t1_o7w36yt
Opus and GPT on life watch? I mean GLM-5 is already strong enough competition, and the research prep for Deepseek4 was quite significant; some technical breakthrough is very possible which would put it at least uncomfortably close to current SOTA. That would be a very stark contrast to Dario Amodei's words just a few m...
3
0
2026-02-28T15:20:23
bakawolf123
false
null
0
o7w36yt
false
/r/LocalLLaMA/comments/1rh095c/deepseek_v4_will_be_released_next_week_and_will/o7w36yt/
false
3
t1_o7w351b
Stopped using OpenAI
4
0
2026-02-28T15:20:06
rf97a
false
null
0
o7w351b
false
/r/LocalLLaMA/comments/1rh2lew/openai_pivot_investors_love/o7w351b/
false
4
t1_o7w34u1
Yes, me too. I don't need any other functionality right now... Just give us emgram with disk support, that's all I'm waiting for
5
0
2026-02-28T15:20:05
Several-Tax31
false
null
0
o7w34u1
false
/r/LocalLLaMA/comments/1rh095c/deepseek_v4_will_be_released_next_week_and_will/o7w34u1/
false
5
t1_o7w2ylx
It's such a small model. Why don't we all pool resources and train the next version of it ourselves?
1
0
2026-02-28T15:19:11
Clear_Anything1232
false
null
0
o7w2ylx
false
/r/LocalLLaMA/comments/1rh3thm/rip_gemma_leave_your_memories_here/o7w2ylx/
false
1
t1_o7w2we8
[removed]
1
0
2026-02-28T15:18:52
[deleted]
true
null
0
o7w2we8
false
/r/LocalLLaMA/comments/1mwr13v/request_best_live_translation_for_conferences_and/o7w2we8/
false
1
t1_o7w2oh8
Hmm, okay. I'll have to dig around to figure out how to do that on llama.cpp or return to a different platform where modifying the template is more straightforward. Thanks.
1
0
2026-02-28T15:17:44
silenceimpaired
false
null
0
o7w2oh8
false
/r/LocalLLaMA/comments/1rh4p8i/okay_im_overthinking_yes_yes_you_are_qwen_35_27b/o7w2oh8/
false
1
t1_o7w2nue
And it would also take a year for llama.cpp to support it...
1
0
2026-02-28T15:17:38
Several-Tax31
false
null
0
o7w2nue
false
/r/LocalLLaMA/comments/1rh095c/deepseek_v4_will_be_released_next_week_and_will/o7w2nue/
false
1
t1_o7w2huf
Sorry it took me so long to get back to you -- I had it fixed for these kinds of questions. https://preview.redd.it/ti40c95t49mg1.png?width=933&format=png&auto=webp&s=2870138ef766b33ea37dae2693acfef2b1e088e5 Thanks for the challenge!! The problem was, it misinterpreted the question and fell down the wrong rabbit ...
1
0
2026-02-28T15:16:45
_raydeStar
false
null
0
o7w2huf
false
/r/LocalLLaMA/comments/1rgkyt5/qwen35_27b_scores_42_on_intelligence_index_and_is/o7w2huf/
false
1
t1_o7w2hh7
Supports anything you want; pi is kind of a framework: either you install other extensions or build your own
1
0
2026-02-28T15:16:42
Unlucky-Message8866
false
null
0
o7w2hh7
false
/r/LocalLLaMA/comments/1rcjzsk/is_opencode_the_best_free_coding_agent_currently/o7w2hh7/
false
1
t1_o7w2bbx
This is twice in the last week that I’ve seen someone make a meme with this format while not understanding how this format works.
16
0
2026-02-28T15:15:48
TurboRadical
false
null
0
o7w2bbx
false
/r/LocalLLaMA/comments/1rh2lew/openai_pivot_investors_love/o7w2bbx/
false
16
t1_o7w29vp
Yeah, Sam just got on his knees, did what he knew he was supposed to, and the Pentagon is now going to build regarded skynet.
3
0
2026-02-28T15:15:35
The_IT_Dude_
false
null
0
o7w29vp
false
/r/LocalLLaMA/comments/1rh2lew/openai_pivot_investors_love/o7w29vp/
false
3
t1_o7w282z
Mahdi! Please point us the way! https://preview.redd.it/15x9ld3759mg1.jpeg?width=960&format=pjpg&auto=webp&s=fa97f0cc0769de15ecf94054d646f25f92a49eca
2
0
2026-02-28T15:15:19
Lawlette_J
false
null
0
o7w282z
false
/r/LocalLLaMA/comments/1rg8dex/pewdiepie_finetuned_qwen25coder32b_to_beat/o7w282z/
false
2
t1_o7w274a
On a completely different note: I am looking for a local Qwen VL mostly for testing purposes (Playwright screenshots). Can you suggest good models? My GPU is an RTX 5070 Ti 16 GB
1
0
2026-02-28T15:15:11
callmedevilthebad
false
null
0
o7w274a
false
/r/LocalLLaMA/comments/1rgzul5/are_you_ready_for_small_qwens/o7w274a/
false
1
t1_o7w259i
I would love to know what "well setup" means. I just run it as it comes out of the box.
1
0
2026-02-28T15:14:54
OrbMan99
false
null
0
o7w259i
false
/r/LocalLLaMA/comments/1rgtxry/is_qwen35_a_coding_game_changer_for_anyone_else/o7w259i/
false
1
t1_o7w1swp
[deleted]
1
0
2026-02-28T15:13:07
[deleted]
true
null
0
o7w1swp
false
/r/LocalLLaMA/comments/1rh2lew/openai_pivot_investors_love/o7w1swp/
false
1
t1_o7w1bek
Qwen 3 coder next 80b is the one that you should be comparing. And set the model temperature lower. That is also the case for these new 3.5 models
1
0
2026-02-28T15:10:30
SomeAcanthocephala17
false
null
0
o7w1bek
false
/r/LocalLLaMA/comments/1rf2b90/benchmarking_qwen3535b_vs_gptoss20b_for_agentic/o7w1bek/
false
1
t1_o7w18rv
Podcasting is very lucrative. And it has entrenched interests. The low-end/low-effort will be cannibalized by AI. And I don't mean low quality. I mean the interesting pods where something is (lightly) researched and then discussed. Things like Joe Rogan and Lex Fridman. An AI can easily do that job of taking a body of...
-1
0
2026-02-28T15:10:07
emprahsFury
false
null
0
o7w18rv
false
/r/LocalLLaMA/comments/1rh4p4n/how_are_you_engaging_with_the_ai_podcast/o7w18rv/
false
-1
t1_o7w1153
Did you change the temperature? The qwen model instructions are to lower the temperature to 0.6 for coding and tool usage. Did you do that?
1
0
2026-02-28T15:08:58
SomeAcanthocephala17
false
null
0
o7w1153
false
/r/LocalLLaMA/comments/1rf2b90/benchmarking_qwen3535b_vs_gptoss20b_for_agentic/o7w1153/
false
1
t1_o7w0zp5
The thread is worth zooming out on a bit here. The whole ASR→LLM→TTS pipeline design is increasingly looking like a transitional architecture. When you decompose speech into text, you lose prosody, emotional tone, turn-taking cues, and the natural rhythm of conversation. Then TTS tries to reconstruct all of that artifi...
1
0
2026-02-28T15:08:45
the-ai-scientist
false
null
0
o7w0zp5
false
/r/LocalLLaMA/comments/1r7bsfd/best_audio_models_feb_2026/o7w0zp5/
false
1
t1_o7w0z3b
AI bots still think it is 2024
126
0
2026-02-28T15:08:39
inaem
false
null
0
o7w0z3b
false
/r/LocalLLaMA/comments/1rh46g2/why_some_still_playing_with_old_models_nostalgia/o7w0z3b/
false
126
t1_o7w0xzz
Just disable thinking if you don't need it to solve an actual problem
2
0
2026-02-28T15:08:29
Velocita84
false
null
0
o7w0xzz
false
/r/LocalLLaMA/comments/1rh4p8i/okay_im_overthinking_yes_yes_you_are_qwen_35_27b/o7w0xzz/
false
2
t1_o7w0tjx
Generally narrative and creative writing. I sometimes need to regenerate when it gets off the rails. Especially after 40-50k context. Also the agent uses rag for recipe creation using culinary techniques from textbooks. "preset": { "temp": 1.02, // slight bump for unpredictability without chaos "...
3
0
2026-02-28T15:07:49
Helpful_Jelly5486
false
null
0
o7w0tjx
false
/r/LocalLLaMA/comments/1rgokw1/a_monthly_update_to_my_where_are_openweight/o7w0tjx/
false
3
t1_o7w0rnt
Jina is a popular LLM search service with MCP support, and there are hundreds of date/time MCP packages to choose from. Alternatively, you can inject the current date and time in the system prompt.
1
0
2026-02-28T15:07:33
dinerburgeryum
false
null
0
o7w0rnt
false
/r/LocalLLaMA/comments/1rh1a6v/why_does_qwen_35_think_its_2024/o7w0rnt/
false
1
t1_o7w0qsg
This is a common misunderstanding. The "experts" have nothing to do with "expertise" in various subjects. There is NOT a "coding expert", "medical expert", etc. The activated experts will change wildly with each token, even within specialized, domain-specific responses.
10
0
2026-02-28T15:07:25
Calandracas8
false
null
0
o7w0qsg
false
/r/LocalLLaMA/comments/1rgpwn5/qwen_3527b_punches_waaaaay_above_its_weight_with/o7w0qsg/
false
10
t1_o7w0q5l
There are AMD Ryzen processors with a built-in GPU (no idea whether the GPU has inference capabilities), and then there are AMD Ryzen AI processors which have an additional specialised NPU. You don't say which, so I have no idea what hardware is actually being used for inference. But in essence you are spending a lo...
1
0
2026-02-28T15:07:19
Protopia
false
null
0
o7w0q5l
false
/r/LocalLLaMA/comments/1rgixk7/accuracy_vs_speed_my_top_5/o7w0q5l/
false
1
t1_o7w0kry
My working hypothesis is as follows. The further away you stray from the enthusiasts, early adopters, users and so on, the closer you get to what an average Joe sees it as, which is [humorously shown here](https://www.reddit.com/r/AITrailblazers/comments/1rh3nid/this_is_every_white_collar_worker_vs_ai_...
0
0
2026-02-28T15:06:31
Medium_Chemist_4032
false
null
0
o7w0kry
false
/r/LocalLLaMA/comments/1rh4p4n/how_are_you_engaging_with_the_ai_podcast/o7w0kry/
false
0
t1_o7w0fr3
I'd rather cancel to show investors numbers they understand: a fucking decline to zero.
12
0
2026-02-28T15:05:47
keyboardmonkewith
false
null
0
o7w0fr3
false
/r/LocalLLaMA/comments/1rh2lew/openai_pivot_investors_love/o7w0fr3/
false
12
t1_o7w0fsj
You made my day
1
0
2026-02-28T15:05:47
thibautrey
false
null
0
o7w0fsj
false
/r/LocalLLaMA/comments/1rgwryb/speculative_decoding_qwen35_27b/o7w0fsj/
false
1
t1_o7w0crp
I don’t think they’ve ever said, but yes, that is probably embarrassing for them if true.
1
0
2026-02-28T15:05:20
coder543
false
null
0
o7w0crp
false
/r/LocalLLaMA/comments/1rgq0vc/can_a_local_hosted_llm_keep_up_with_grok_41_fast/o7w0crp/
false
1
t1_o7w0bls
Although I guess they could start putting it into new RFPs, kind of like they do with SOC 2 certification policies.
1
0
2026-02-28T15:05:10
ShareNorth3675
false
null
0
o7w0bls
false
/r/LocalLLaMA/comments/1rgn4ki/president_trump_orders_all_federal_agencies_in/o7w0bls/
false
1
t1_o7w0b4g
Indeed, I've tl;dr'd. And you say you could connect to your ollama instance directly? Have you considered changing inference engine?
1
0
2026-02-28T15:05:05
No_Afternoon_4260
false
null
0
o7w0b4g
false
/r/LocalLLaMA/comments/1rh0akj/frustration_building_out_my_local_models/o7w0b4g/
false
1
t1_o7w0a1p
f\*\*k bro, you have given me huge work to do for this weekend, damn, why I didn't see it earlier. thanks for sharing this.
6
0
2026-02-28T15:04:56
SearchTricky7875
false
null
0
o7w0a1p
false
/r/LocalLLaMA/comments/1rgtxry/is_qwen35_a_coding_game_changer_for_anyone_else/o7w0a1p/
false
6
t1_o7w08q7
Those are just the first 4 slides, I'm certain there are more that Dr. Nefarious is working up!
2
0
2026-02-28T15:04:44
Autobahn97
false
null
0
o7w08q7
false
/r/LocalLLaMA/comments/1rh2lew/openai_pivot_investors_love/o7w08q7/
false
2
t1_o7w0859
Hey folks, FYI we've started beta testing a consumer-friendly solution for LLMs on NPU on Linux. Check it out on the dev channel of the Lemonade Discord.
2
0
2026-02-28T15:04:38
jfowers_amd
false
null
0
o7w0859
false
/r/LocalLLaMA/comments/1rbvmpk/running_llama_32_1b_entirely_on_an_amd_npu_on/o7w0859/
false
2
t1_o7w07o5
That’s actually an interesting solution, but I can’t really afford it GPU-wise — I’m on an 8GB card, and most vision models get pretty heavy once you start running them on full pages. So for now I’m focusing on preprocessing the watermark and trying to keep things lightweight.
1
0
2026-02-28T15:04:34
SprayOwn5112
false
null
0
o7w07o5
false
/r/LocalLLaMA/comments/1rh3xey/seeking_help_improving_ocr_in_my_rag_pipeline/o7w07o5/
false
1
t1_o7w06v3
>self referencing his own shitpost
1
0
2026-02-28T15:04:27
Due-Memory-6957
false
null
0
o7w06v3
false
/r/LocalLLaMA/comments/1rguty0/get_your_local_models_in_order_anthropic_just_got/o7w06v3/
false
1
t1_o7w06j4
No. Not publicly at least. OpenAI announced that those things were excluded in their new contract too. That just doesn't make sense overall.
8
0
2026-02-28T15:04:25
GarbanzoBenne
false
null
0
o7w06j4
false
/r/LocalLLaMA/comments/1rh2lew/openai_pivot_investors_love/o7w06j4/
false
8
t1_o7w02vt
oops my bad i missed it - thank you
1
0
2026-02-28T15:03:51
acertainmoment
false
null
0
o7w02vt
false
/r/LocalLLaMA/comments/1rg87bj/qwen3535ba3b_running_on_a_raspberry_pi_5_16gb_and/o7w02vt/
false
1
t1_o7w02j3
Yeah, V1 and V2 were experimental, mostly me dialing in the merge parameters. For V3 I've refined the merge models and parameters. They're still on the HF profile if you're curious, but I'd go straight to V3.
1
0
2026-02-28T15:03:48
Biscotto58
false
null
0
o7w02j3
false
/r/LocalLLaMA/comments/1rh0wqj/made_a_12b_uncensored_rp_merge_putting_it_out/o7w02j3/
false
1
t1_o7w0166
They can't dictate what a company uses, only where their data goes and the tools used to execute their contracts. As a gov contractor, my coworkers have specific AI policies they need to abide by for the contracts they work on, but I don't need to follow those in my role not working on that contract. On Monday, w...
1
0
2026-02-28T15:03:36
ShareNorth3675
false
null
0
o7w0166
false
/r/LocalLLaMA/comments/1rgn4ki/president_trump_orders_all_federal_agencies_in/o7w0166/
false
1
t1_o7w003v
Cancel and just keep spamming prompts to burn compute time for free?
3
0
2026-02-28T15:03:26
ReformedBlackPerson
false
null
0
o7w003v
false
/r/LocalLLaMA/comments/1rh2lew/openai_pivot_investors_love/o7w003v/
false
3
t1_o7w004d
Scary times. With Altman, it can become a real bad thing real quick.
11
0
2026-02-28T15:03:26
quantgorithm
false
null
0
o7w004d
false
/r/LocalLLaMA/comments/1rh2lew/openai_pivot_investors_love/o7w004d/
false
11
t1_o7vzxu2
Previously, models felt more raw and unique, now every output seems calibrated to be "perfect". The emerging, experimental edge from the early days had a certain charm. Now they all look alike and seem rather boring. In the beginning, it was truly magical, we discovered, wondered if they were conscious, played wi...
40
0
2026-02-28T15:03:05
Adventurous-Paper566
false
null
0
o7vzxu2
false
/r/LocalLLaMA/comments/1rh46g2/why_some_still_playing_with_old_models_nostalgia/o7vzxu2/
false
40
t1_o7vzxll
another update https://preview.redd.it/o5uglhaz29mg1.png?width=1221&format=png&auto=webp&s=5ab20971374a679abc8bda3e8021ce427d266635 7 hidden items
2
0
2026-02-28T15:03:03
jacek2023
false
null
0
o7vzxll
false
/r/LocalLLaMA/comments/1rgzul5/are_you_ready_for_small_qwens/o7vzxll/
false
2
t1_o7vzvbt
There have been some significant omni LLMs released for image generation: https://huggingface.co/inclusionAI/Ming-flash-omni-2.0. Another 1T one (Ernie 5.0), which is not open source, can do video generation: https://huggingface.co/papers/2602.04705
7
0
2026-02-28T15:02:42
Aaaaaaaaaeeeee
false
null
0
o7vzvbt
false
/r/LocalLLaMA/comments/1rh095c/deepseek_v4_will_be_released_next_week_and_will/o7vzvbt/
false
7
t1_o7vzsa9
Okay brother, thanks! 1. I removed the model from ollama, now nothing is there on disk. 2. System specs: GPU: NVIDIA GTX 1650 Ti 4GB VRAM, RAM: 16GB, CPU: Ryzen 4600H. 3. OS: Windows!
1
0
2026-02-28T15:02:14
Less_Strain7577
false
null
0
o7vzsa9
false
/r/LocalLLaMA/comments/1rgcosw/trained_and_quantized_an_llm_on_a_gtx_1650_4gb/o7vzsa9/
false
1
t1_o7vzroa
Already, it's effectively impossible for any US federal entity OR a contractor in their supply chain to use any AI sourced from China. This includes any commercial access or using open source models. I'm not saying I agree or disagree with it. I happily use their open models at home. But I would never be able to use t...
12
0
2026-02-28T15:02:09
feckdespez
false
null
0
o7vzroa
false
/r/LocalLLaMA/comments/1rguty0/get_your_local_models_in_order_anthropic_just_got/o7vzroa/
false
12
t1_o7vzqxf
[removed]
1
0
2026-02-28T15:02:02
[deleted]
true
null
0
o7vzqxf
false
/r/LocalLLaMA/comments/1rh095c/deepseek_v4_will_be_released_next_week_and_will/o7vzqxf/
false
1
t1_o7vzq9l
Software needs to be libre - otherwise it's unethical. For AI models it's probably enough that they are public and that you can run them locally with libre software.
1
0
2026-02-28T15:01:56
MelodicFuntasy
false
null
0
o7vzq9l
false
/r/LocalLLaMA/comments/1rgn4ki/president_trump_orders_all_federal_agencies_in/o7vzq9l/
false
1
t1_o7vzpym
I've got 4 GPUs totaling 32GB VRAM running off 2 x1 slots and getting a useful 20 t/s on Qwen3.5 35B. Bandwidth isn't that big of a deal for small scale.
1
0
2026-02-28T15:01:54
tvall_
false
null
0
o7vzpym
false
/r/LocalLLaMA/comments/1rgfude/computer_wont_boot_with_2_tesla_v100s/o7vzpym/
false
1
t1_o7vzooh
What context size are you getting?
1
0
2026-02-28T15:01:42
Medium_Chemist_4032
false
null
0
o7vzooh
false
/r/LocalLLaMA/comments/1rf2ulo/qwen35_122b_in_72gb_vram_3x3090_is_the_best_model/o7vzooh/
false
1
t1_o7vznyt
Reddit experts must have a lot of cognitive dissonance: are they allowed to say something good about Trump, or do they have to do intellectual somersaults ;)
2
0
2026-02-28T15:01:36
jacek2023
false
null
0
o7vznyt
false
/r/LocalLLaMA/comments/1rguty0/get_your_local_models_in_order_anthropic_just_got/o7vznyt/
false
2
t1_o7vzlgv
Switch the model to the vision variant and use it for OCR?
1
0
2026-02-28T15:01:13
Altruistic_Heat_9531
false
null
0
o7vzlgv
false
/r/LocalLLaMA/comments/1rh3xey/seeking_help_improving_ocr_in_my_rag_pipeline/o7vzlgv/
false
1
t1_o7vzgy8
looks great but doesn’t support subagents?
1
0
2026-02-28T15:00:31
Realistic-Ad5812
false
null
0
o7vzgy8
false
/r/LocalLLaMA/comments/1rcjzsk/is_opencode_the_best_free_coding_agent_currently/o7vzgy8/
false
1
t1_o7vzetz
[Best Qwen3.5-35B-A3B GGUF for 24GB VRAM?! : r/LocalLLaMA](https://www.reddit.com/r/LocalLLaMA/comments/1resggh/best_qwen3535ba3b_gguf_for_24gb_vram/) [Muffbiscuit (u/TitwitMuffbiscuit) - Reddit](https://www.reddit.com/user/TitwitMuffbiscuit/) read these posts... after these tests, the community found UD-Q4_K_XL is...
1
0
2026-02-28T15:00:12
kironlau
false
null
0
o7vzetz
false
/r/LocalLLaMA/comments/1rgzul5/are_you_ready_for_small_qwens/o7vzetz/
false
1
t1_o7vze4i
People often stick with older models because of their extensive experience with them and the stability required in production systems, where replacing core components with unproven technology is impractical. Also, newer models often suffer from quality degradation due to a lack of high-quality training data. Their dep...
5
0
2026-02-28T15:00:05
yami_no_ko
false
null
0
o7vze4i
false
/r/LocalLLaMA/comments/1rh46g2/why_some_still_playing_with_old_models_nostalgia/o7vze4i/
false
5
t1_o7vzdrt
At 21 tokens per second it's faster than I can read when it's all loaded in VRAM so I'm okay with it. If you're doing agentic stuff or coding I suppose it might be too slow... or if you don't have a 24GB card.
1
0
2026-02-28T15:00:02
silenceimpaired
false
null
0
o7vzdrt
false
/r/LocalLLaMA/comments/1rh1ec9/swapping_gptoss120b_for_qwen35122b_on_a_128gb/o7vzdrt/
false
1
t1_o7vzdlw
That "heuristic" was always just headcanon
2
0
2026-02-28T15:00:00
Hefty_Acanthaceae348
false
null
0
o7vzdlw
false
/r/LocalLLaMA/comments/1rgpwn5/qwen_3527b_punches_waaaaay_above_its_weight_with/o7vzdlw/
false
2
t1_o7vzbrr
Isn't grok fast around 400b parameters at fp8?
1
0
2026-02-28T14:59:44
RhubarbSimilar1683
false
null
0
o7vzbrr
false
/r/LocalLLaMA/comments/1rgq0vc/can_a_local_hosted_llm_keep_up_with_grok_41_fast/o7vzbrr/
false
1
t1_o7vzb7y
Interesting! Thank you! I will say, the thinking does seem valuable when it comes to vision, as it seems to be pretty good at recognizing when it doesn't have the full picture from its loose visual understanding.
10
0
2026-02-28T14:59:39
valdev
false
null
0
o7vzb7y
false
/r/LocalLLaMA/comments/1rh43za/qwen_3535ba3b_is_beyond_expectations_its_replaced/o7vzb7y/
false
10
t1_o7vza1r
literally no chance i ever run that on my hardware. 150B is probably about the biggest I can get to.
1
0
2026-02-28T14:59:28
sleepingsysadmin
false
null
0
o7vza1r
false
/r/LocalLLaMA/comments/1rgixxr/what_models_do_you_think_owned_february/o7vza1r/
false
1
t1_o7vz90u
Gemma always sounds more coherent, even if it's wrong... However, something I've noticed is: even though it's older and smaller than other models, Gemma has more factual knowledge than, for example, Qwen3.5 35b.
3
0
2026-02-28T14:59:19
DrNavigat
false
null
0
o7vz90u
false
/r/LocalLLaMA/comments/1rh3thm/rip_gemma_leave_your_memories_here/o7vz90u/
false
3
t1_o7vz8uu
They signed a contract with the government to do those things?? Holy fuck
13
0
2026-02-28T14:59:18
Borkato
false
null
0
o7vz8uu
false
/r/LocalLLaMA/comments/1rh2lew/openai_pivot_investors_love/o7vz8uu/
false
13
t1_o7vz890
[https://xkcd.com/1172/](https://xkcd.com/1172/)
29
0
2026-02-28T14:59:12
bobby-chan
false
null
0
o7vz890
false
/r/LocalLLaMA/comments/1rh46g2/why_some_still_playing_with_old_models_nostalgia/o7vz890/
false
29
t1_o7vz54j
Anthropic stood its moral ground that it would not let the govt use Anthropic to cross 2 red lines: no mass surveillance of Americans; no AI used for fully autonomous weapons using Anthropic. Govt said not good enough and banned Anthropic. Altman publicly supported Anthropic on that position in a ...
41
0
2026-02-28T14:58:43
quantgorithm
false
null
0
o7vz54j
false
/r/LocalLLaMA/comments/1rh2lew/openai_pivot_investors_love/o7vz54j/
false
41
t1_o7vz1qc
I don't care about the haters, I'm going to pester them until Google finally releases this information.
2
1
2026-02-28T14:58:12
DrNavigat
false
null
0
o7vz1qc
false
/r/LocalLLaMA/comments/1rh3thm/rip_gemma_leave_your_memories_here/o7vz1qc/
false
2
t1_o7vyzkc
Slightly off topic, any tips/tricks to keep it from overthinking this much?
1
0
2026-02-28T14:57:52
silenceimpaired
false
null
0
o7vyzkc
false
/r/LocalLLaMA/comments/1rh4p8i/okay_im_overthinking_yes_yes_you_are_qwen_35_27b/o7vyzkc/
false
1
t1_o7vyxhe
Look at the green message box in the screenshot
1
0
2026-02-28T14:57:33
OsmanthusBloom
false
null
0
o7vyxhe
false
/r/LocalLLaMA/comments/1rg87bj/qwen3535ba3b_running_on_a_raspberry_pi_5_16gb_and/o7vyxhe/
false
1
t1_o7vywin
Getting into spec-based generation, saving this workflow, thanks
1
0
2026-02-28T14:57:24
Tom0o0Hanks
false
null
0
o7vywin
false
/r/LocalLLaMA/comments/1n00k4e/what_is_the_best_local_coding_agent/o7vywin/
false
1
t1_o7vytu6
No not in this case. It would result in a clear error, and it would not just resolve itself. In any case as part of my troubleshooting I regularly check and make sure that I can connect, I do regularly employ commands like ss -ntpl, nc, curl, etc to make sure it is working, testing connections both locally and via th...
1
0
2026-02-28T14:56:59
tahaan
false
null
0
o7vytu6
false
/r/LocalLLaMA/comments/1rh0akj/frustration_building_out_my_local_models/o7vytu6/
false
1
t1_o7vynjn
He used Claude to write it
2
0
2026-02-28T14:56:02
the_last_action_hero
false
null
0
o7vynjn
false
/r/LocalLLaMA/comments/1rgn4ki/president_trump_orders_all_federal_agencies_in/o7vynjn/
false
2
t1_o7vyj5v
Github:- https://github.com/vmDeshpande/ai-agent-automation Website:- https://vmdeshpande.github.io/ai-automation-platform-website/
1
0
2026-02-28T14:55:22
Feathered-Beast
false
null
0
o7vyj5v
false
/r/LocalLLaMA/comments/1rh4nb2/just_shipped_v030_of_my_ai_workflow_engine/o7vyj5v/
false
1
t1_o7vyiy9
I didn't want to spam LocalLLaMA because of the haters, but https://preview.redd.it/mgsszujm19mg1.png?width=1189&format=png&auto=webp&s=a4b187255503eead070cc86d1f61a6af19491a34
8
0
2026-02-28T14:55:19
jacek2023
false
null
0
o7vyiy9
false
/r/LocalLLaMA/comments/1rh3thm/rip_gemma_leave_your_memories_here/o7vyiy9/
false
8
t1_o7vyiky
I wish people wouldn't be impressed by this kind of writing so I could stop seeing models trained to do it. Better to be good at writing simply than try and fail to write poetically.
5
0
2026-02-28T14:55:16
Scroatazoa
false
null
0
o7vyiky
false
/r/LocalLLaMA/comments/1rgpwn5/qwen_3527b_punches_waaaaay_above_its_weight_with/o7vyiky/
false
5
t1_o7vyi5z
For TTS, Kokoro has been my go-to for anything that needs to sound natural in a production context — it punches well above its weight for the model size and runs fast enough on a single GPU that latency isn't an issue. Orpheus TTS is worth trying if you want more expressive delivery, though stability on longer outputs ...
1
0
2026-02-28T14:55:12
the-ai-scientist
false
null
0
o7vyi5z
false
/r/LocalLLaMA/comments/1r7bsfd/best_audio_models_feb_2026/o7vyi5z/
false
1
t1_o7vyfaq
People often stick with older models because of their extensive experience with them and the stability required in production systems, where replacing core components with unproven technology is impractical. Also, newer models often suffer from quality degradation due to a lack of high-quality training data. Their de...
0
0
2026-02-28T14:54:46
yami_no_ko
false
null
0
o7vyfaq
false
/r/LocalLLaMA/comments/1rh46g2/why_some_still_playing_with_old_models_nostalgia/o7vyfaq/
false
0
t1_o7vyd1o
do you mean these are mxfp4 quants?
2
0
2026-02-28T14:54:26
jacek2023
false
null
0
o7vyd1o
false
/r/LocalLLaMA/comments/1rgzul5/are_you_ready_for_small_qwens/o7vyd1o/
false
2
t1_o7vycdm
Thanks for the link — I'll check it out. We're on the evaluation side (playback, pre-shipment gate), so it's a different layer, but if someone is concerned about 'Did this change break something?' while building with CLIO, we would be the ones to help right away. :)
1
0
2026-02-28T14:54:20
Fluffy_Salary_5984
false
null
0
o7vycdm
false
/r/LocalLLaMA/comments/1rh28o8/building_agents_is_fun_evaluating_them_is_not/o7vycdm/
false
1
t1_o7vyc00
> I do think Anthropic is kind of better than OpenAI Why
8
0
2026-02-28T14:54:16
Due-Memory-6957
false
null
0
o7vyc00
false
/r/LocalLLaMA/comments/1rguty0/get_your_local_models_in_order_anthropic_just_got/o7vyc00/
false
8
t1_o7vybbp
I tried Tesseract earlier, but in my case it didn’t really help — the watermark still interfered and the output wasn’t any better than PyMuPDF’s extraction. That’s why I’m exploring other options now (thresholding, EasyOCR, PaddleOCR, etc.) and seeing what works best for this specific doc. Open to recommendations if yo...
1
0
2026-02-28T14:54:10
SprayOwn5112
false
null
0
o7vybbp
false
/r/LocalLLaMA/comments/1rh3xey/seeking_help_improving_ocr_in_my_rag_pipeline/o7vybbp/
false
1
t1_o7vy9c7
I guess we all have different expectations, you are certainly not the first person even on this sub to fill the memory on a Mac, but I just dislike it. Well, on a machine that I'm also using at the same time with apps. When people load 120GB of LLM stuff onto a 128GB Mac Studio which purely does the serving task that's...
1
0
2026-02-28T14:53:52
tmvr
false
null
0
o7vy9c7
false
/r/LocalLLaMA/comments/1rfv6ap/what_models_run_well_on_mac_mini_m4_16gb_for_text/o7vy9c7/
false
1
t1_o7vy69x
If someone would buy them, they’d change the policy. Being blacklisted from all companies that partner with the DoW (essentially all companies) is a death sentence. But what will really happen is some progressive court will instantly pause this with some magic power (injunction or whatever) and then in a few weeks or ...
12
0
2026-02-28T14:53:25
Virtamancer
false
null
0
o7vy69x
false
/r/LocalLLaMA/comments/1rguty0/get_your_local_models_in_order_anthropic_just_got/o7vy69x/
false
12
t1_o7vy57n
Similar thing happened to me, but I managed to press it further and it admitted (in its thinking bubble) that the actual cutoff date was close to 2024, even though it repeatedly told itself multiple times that it must be 2026. It definitely didn't know things that happened after 2024. It thought the latest java releas...
1
0
2026-02-28T14:53:15
tonkodonko
false
null
0
o7vy57n
false
/r/LocalLLaMA/comments/1rgp97u/sooo_much_thinking/o7vy57n/
false
1
t1_o7vy3tx
Hey, thanks for pointing that out. I actually forgot to change the license — I only made the repo public recently, so the previous one was just a placeholder. I’ve updated it now to BUSL-1.1. And yeah, I’ll check out the thresholding preprocessing suggestion too. Still figuring out the etiquette on Reddit, so genuinel...
1
0
2026-02-28T14:53:02
SprayOwn5112
false
null
0
o7vy3tx
false
/r/LocalLLaMA/comments/1rh3xey/seeking_help_improving_ocr_in_my_rag_pipeline/o7vy3tx/
false
1
t1_o7vy2yz
unsloth is busy fixing its UD mxfp4 problems... so the hidden model could be explained
-1
0
2026-02-28T14:52:54
kironlau
false
null
0
o7vy2yz
false
/r/LocalLLaMA/comments/1rgzul5/are_you_ready_for_small_qwens/o7vy2yz/
false
-1
t1_o7vy2s1
I've only used it for really short conversations since it seems to want to reprocess all context. It's very smart tho, feels like some conversations I had with Claude models.  For my setup, I guess I'd stick with oss 20B as it doesn't take several minutes to process additional prompts. 
6
0
2026-02-28T14:52:53
ArchdukeofHyperbole
false
null
0
o7vy2s1
false
/r/LocalLLaMA/comments/1rh43za/qwen_3535ba3b_is_beyond_expectations_its_replaced/o7vy2s1/
false
6
t1_o7vy0zk
The chart includes 4.1 fast when I click on it: https://preview.redd.it/grskuuze09mg1.png?width=2524&format=png&auto=webp&s=d5e99572c77d5bba9fcdb56ac78726801289f7dc Not sure what you're seeing. And you'd realistically need a Mac with at least 24GB of memory to run the 27B model. Your server with a 4070 Ti would be b...
1
0
2026-02-28T14:52:36
coder543
false
null
0
o7vy0zk
false
/r/LocalLLaMA/comments/1rgq0vc/can_a_local_hosted_llm_keep_up_with_grok_41_fast/o7vy0zk/
false
1
t1_o7vxz6s
Found Tony Stark in captivity
2
0
2026-02-28T14:52:20
Kandiak
false
null
0
o7vxz6s
false
/r/LocalLLaMA/comments/1rh0bkz/tempted_to_prompt_qwen_on_this_craigslist_rig_but/o7vxz6s/
false
2
t1_o7vxwr5
You can move to a country that doesn't have democracy like Russia or China if you prefer to have no freedom.
1
0
2026-02-28T14:51:58
MelodicFuntasy
false
null
0
o7vxwr5
false
/r/LocalLLaMA/comments/1rgn4ki/president_trump_orders_all_federal_agencies_in/o7vxwr5/
false
1
t1_o7vxr96
What seems to have happened* is that Altman publicly said "We have red lines too!" and then whispered "I just have to say that" to Hegseth. *my interpretation, not a literal event
12
0
2026-02-28T14:51:08
a-wiseman-speaketh
false
null
0
o7vxr96
false
/r/LocalLLaMA/comments/1rh2lew/openai_pivot_investors_love/o7vxr96/
false
12
t1_o7vxr3j
Move to the EU
1
0
2026-02-28T14:51:06
Soft_Syllabub_3772
false
null
0
o7vxr3j
false
/r/LocalLLaMA/comments/1rguty0/get_your_local_models_in_order_anthropic_just_got/o7vxr3j/
false
1
t1_o7vxqu4
The thinking can be disabled either via 1. a llama.cpp server parameter, or 2. changing to a modded chat template, which can then use no_think or thinking to control the think mode: [Qwen 3.5 27-35-122B - Jinja Template Modification (Based on Bartowski's Jinja) - No thinking by default - straight quick answers, nee...
73
0
2026-02-28T14:51:04
kironlau
false
null
0
o7vxqu4
false
/r/LocalLLaMA/comments/1rh43za/qwen_3535ba3b_is_beyond_expectations_its_replaced/o7vxqu4/
false
73
t1_o7vxnuw
Very cool, thanks a lot for the info!
1
0
2026-02-28T14:50:37
luminous_connoisseur
false
null
0
o7vxnuw
false
/r/LocalLLaMA/comments/1ol8bfx/new_ai_workstation/o7vxnuw/
false
1
t1_o7vxn5a
From my experience, gpt and opus models are clearly a tier above gemini in everything except ultra long context stuff and pdf tools. And i'd say gpt 5.2 thinking is a step above opus in reasoning tasks. 5.3 codex is also at least on par with 4.6 opus. Your results are very surprising to me.
0
0
2026-02-28T14:50:30
SerdarCS
false
null
0
o7vxn5a
false
/r/LocalLLaMA/comments/1rgokw1/a_monthly_update_to_my_where_are_openweight/o7vxn5a/
false
0
t1_o7vxkl3
Okay I believe you, sorry about that. First thing — before we get into any settings, I'd recommend removing whatever you currently have set up and starting fresh. If someone else told you to install this model and you don't fully understand the setup, there could be things running or configured that are eating your RAM...
1
0
2026-02-28T14:50:06
melanov85
false
null
0
o7vxkl3
false
/r/LocalLLaMA/comments/1rgcosw/trained_and_quantized_an_llm_on_a_gtx_1650_4gb/o7vxkl3/
false
1
t1_o7vxjxj
Only 27b, but it's not a DeepSeek for sure, even in English. In Russian, it's just a bullshit generator, sadly.
1
0
2026-02-28T14:50:00
Ardalok
false
null
0
o7vxjxj
false
/r/LocalLLaMA/comments/1rgkyt5/qwen35_27b_scores_42_on_intelligence_index_and_is/o7vxjxj/
false
1
t1_o7vxjp9
Architecture differences can change how they are finetuned and trained, the tool calling, how harnesses work with a model. Imagine: you’ve worked on finetuning a qwen2.5 model for a while, written a harness, etc, and then you switch the model and everything breaks.
18
0
2026-02-28T14:49:58
Badger-Purple
false
null
0
o7vxjp9
false
/r/LocalLLaMA/comments/1rh46g2/why_some_still_playing_with_old_models_nostalgia/o7vxjp9/
false
18
t1_o7vxfco
I think you should try Qwen3.5-27B-GGUF Q3_K_S or Q3_K_M.
2
0
2026-02-28T14:49:19
moahmo88
false
null
0
o7vxfco
false
/r/LocalLLaMA/comments/1rg4zqv/followup_qwen3535ba3b_7_communityrequested/o7vxfco/
false
2