name: string
body: string
score: int64
controversiality: int64
created: timestamp[us]
author: string
collapsed: bool
edited: timestamp[us]
gilded: int64
id: string
locked: bool
permalink: string
stickied: bool
ups: int64
t1_o7w9wwo
Wouldn't q6 be plenty smart and faster?
19
0
2026-02-28T15:54:11
Hector_Rvkp
false
null
0
o7w9wwo
false
/r/LocalLLaMA/comments/1rh43za/qwen_3535ba3b_is_beyond_expectations_its_replaced/o7w9wwo/
false
19
t1_o7w9sx7
I was waiting for a heretic ablation of this model. I thought it would be a while before we'd get one, because the heretic project page says it doesn't support hybrid models...
2
0
2026-02-28T15:53:38
odomobo
false
null
0
o7w9sx7
false
/r/LocalLLaMA/comments/1rh5luv/qwen35_35ba3b_evaded_the_zeroreasoning_budget_by/o7w9sx7/
false
2
t1_o7w9qsi
No, that's expected. What's unreasonable is blacklisting Anthropic from doing business with ALL US military-adjacent companies/contractors and ALL federal agencies.
15
0
2026-02-28T15:53:21
Kamal965
false
null
0
o7w9qsi
false
/r/LocalLLaMA/comments/1rguty0/get_your_local_models_in_order_anthropic_just_got/o7w9qsi/
false
15
t1_o7w9o8h
Has DeepSeek ever released even a single thing that wasn't open source? They're not like Qwen, who release their big models like Qwen3-Max closed source. DeepSeek open-sources literally everything, not just models.
16
0
2026-02-28T15:53:00
pigeon57434
false
null
0
o7w9o8h
false
/r/LocalLLaMA/comments/1rh095c/deepseek_v4_will_be_released_next_week_and_will/o7w9o8h/
false
16
t1_o7w9lbn
I don't think this is isolated behavior. All of the major providers, from OpenAI to Anthropic to Google, do this. Larger models seem to suppress verbose behavior better than smaller ones when prompted. But that's just my two cents.
3
0
2026-02-28T15:52:35
I-am_Sleepy
false
null
0
o7w9lbn
false
/r/LocalLLaMA/comments/1rh5luv/qwen35_35ba3b_evaded_the_zeroreasoning_budget_by/o7w9lbn/
false
3
t1_o7w9jfx
I appreciate the reply, thanks! That's just it though; I have been using the suggested sampling and penalty parameters posted, and am still seeing replies like, >*Non-Adjacent:* Explicitly exclude *i*−1,,*i*\+,*i*\+,*i*\+,*i*\+,*i*\+,*i*\+,*i*\+,*i*\+,*i*\+,*i*\+,*i*\+1,*i*\+1,*i*\+1,*i*\+1,*i*\+1,*i*\+1,*i*\+1 an...
1
0
2026-02-28T15:52:19
plopperzzz
false
null
0
o7w9jfx
false
/r/LocalLLaMA/comments/1rgel19/new_qwen3535ba3b_unsloth_dynamic_ggufs_benchmarks/o7w9jfx/
false
1
t1_o7w9f69
Literally begging and pleading to cross the red line within hours of Anthropic drawing it.
4
0
2026-02-28T15:51:44
TracerBulletX
false
null
0
o7w9f69
false
/r/LocalLLaMA/comments/1rh2lew/openai_pivot_investors_love/o7w9f69/
false
4
t1_o7w9ev3
I don't think they would say video if their sources never mentioned video at all. I DO, however, think they're dumb enough to confuse input and output modalities, so it's likely image-video-text-to-text, just like Kimi-K2.5. And I don't see many people talking about how Kimi-K2.5 has video input, which is cool.
1
0
2026-02-28T15:51:41
pigeon57434
false
null
0
o7w9ev3
false
/r/LocalLLaMA/comments/1rh095c/deepseek_v4_will_be_released_next_week_and_will/o7w9ev3/
false
1
t1_o7w9cyd
1) Install Python 3.11 (64-bit). 2) Clone the repo: use GitHub "Code → Download ZIP" OR git clone (prefer git). 3) Run ONLY: install_pocket_tts.bat. 4) Run: start_server.bat. 5) Open: http://127.0.0.1:8080/. 6) Pick a voice from the list (don't type joe-rogan1 unless it appears). 7) For voice cloning: accept HF terms + login / token, then ...
1
0
2026-02-28T15:51:25
RIP26770
false
null
0
o7w9cyd
false
/r/LocalLLaMA/comments/1rabo34/local_tts_server_with_voice_cloning_nearrealtime/o7w9cyd/
false
1
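The steps above end with a local server on http://127.0.0.1:8080/. A minimal sketch for checking that the server actually came up after start_server.bat, assuming only that the root page quoted in the comment answers plain HTTP (the retry count and timing are illustrative, not from the project docs):

```python
import time
import urllib.request

# Poll the local TTS server root until it responds (address taken
# from the comment above; adjust if you changed the port).
URL = "http://127.0.0.1:8080/"

for attempt in range(30):
    try:
        with urllib.request.urlopen(URL, timeout=2) as resp:
            print(f"Server is up (HTTP {resp.status}) after {attempt + 1} tries")
            break
    except OSError:
        time.sleep(1)  # server may still be loading the model
else:
    print("Server never answered; re-check install_pocket_tts.bat output")
```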
t1_o7w9cmy
Basically impossible to ban local models in most Western countries. But they're heavily limited because Nvidia isn't putting enough VRAM into their consumer cards to run anything beyond about 20B. And more like 12B for most video cards.
5
0
2026-02-28T15:51:23
NighthawkT42
false
null
0
o7w9cmy
false
/r/LocalLLaMA/comments/1rguty0/get_your_local_models_in_order_anthropic_just_got/o7w9cmy/
false
5
t1_o7w95vq
5090. I've run Q6 and Q4; right now Q4 with no KV cache, 256k context.
5
0
2026-02-28T15:50:27
arthor
false
null
0
o7w95vq
false
/r/LocalLLaMA/comments/1rgtxry/is_qwen35_a_coding_game_changer_for_anyone_else/o7w95vq/
false
5
t1_o7w93xo
Qwen Coder Next is very sensitive to quantization. I've noticed that Q8 does absolutely well on its own, great model. Q6 sometimes omits consequences (if I change this, the references will change). Q4 is not really capable of orchestrating itself, i.e. after finishing a subtask it calls it a day and doesn't call fin...
3
0
2026-02-28T15:50:10
kweglinski
false
null
0
o7w93xo
false
/r/LocalLLaMA/comments/1rgoygs/does_qwen35_35b_outperform_qwen3_coder_next_80b/o7w93xo/
false
3
t1_o7w92wa
[removed]
1
0
2026-02-28T15:50:02
[deleted]
true
null
0
o7w92wa
false
/r/LocalLLaMA/comments/1qtnz9s/best_local_model_for_openclaw/o7w92wa/
false
1
t1_o7w92hi
Are you using LM Studio or some command line solution? Those numbers are impressive IMO 
1
0
2026-02-28T15:49:58
PrefersAwkward
false
null
0
o7w92hi
false
/r/LocalLLaMA/comments/1rg4fb7/should_qwen3535ba3b_be_this_much_slower_than/o7w92hi/
false
1
t1_o7w92hq
Right, in subs dedicated to funny animal pictures, reposting things from a few years ago is quick karma. Doing the same thing in a sub for a fast-developing technology does not work.
8
0
2026-02-28T15:49:58
lxgrf
false
null
0
o7w92hq
false
/r/LocalLLaMA/comments/1rh46g2/why_some_still_playing_with_old_models_nostalgia/o7w92hq/
false
8
t1_o7w92ba
[removed]
1
0
2026-02-28T15:49:57
[deleted]
true
null
0
o7w92ba
false
/r/LocalLLaMA/comments/1rgvma8/which_size_of_qwen35_are_you_planning_to_run/o7w92ba/
false
1
t1_o7w921z
Desperately looking for it, as well as a decent API provider for it that isn't Alibaba Cloud. Alibaba wants me to send my ID to use them, so lol, never will I ever.
2
0
2026-02-28T15:49:55
MaverickPT
false
null
0
o7w921z
false
/r/LocalLLaMA/comments/1rgzul5/are_you_ready_for_small_qwens/o7w921z/
false
2
t1_o7w91qw
Which models currently support that?
1
0
2026-02-28T15:49:52
nullnuller
false
null
0
o7w91qw
false
/r/LocalLLaMA/comments/1rh095c/deepseek_v4_will_be_released_next_week_and_will/o7w91qw/
false
1
t1_o7w8xek
Oh wow! The gap's *much* more obvious now. Thank you for sharing!
2
0
2026-02-28T15:49:17
itsappleseason
false
null
0
o7w8xek
false
/r/LocalLLaMA/comments/1rgpwn5/qwen_3527b_punches_waaaaay_above_its_weight_with/o7w8xek/
false
2
t1_o7w8u5o
Say, llamaguard3-8b is no worse than gpt-oss-safeguard or nemotron-nano-30b-a3b at fraud detection. It's more strict than the new ones that have "bring your own policy", so it works either better or no worse for a niche use case.
7
0
2026-02-28T15:48:50
LienniTa
false
null
0
o7w8u5o
false
/r/LocalLLaMA/comments/1rh46g2/why_some_still_playing_with_old_models_nostalgia/o7w8u5o/
false
7
t1_o7w8tz8
Qwen supports over 100 languages. That's irrelevant.
1
1
2026-02-28T15:48:49
AppealThink1733
false
null
0
o7w8tz8
false
/r/LocalLLaMA/comments/1rh3thm/rip_gemma_leave_your_memories_here/o7w8tz8/
false
1
t1_o7w8nzz
It does not sound like you understand what I said in the post. It started working on its own. Basically I was probably just not patient enough.
1
0
2026-02-28T15:47:59
tahaan
false
null
0
o7w8nzz
false
/r/LocalLLaMA/comments/1rh0akj/frustration_building_out_my_local_models/o7w8nzz/
false
1
t1_o7w8mvu
Is it really strong-arming for the military to say, basically: if we can't use it for military purposes, we'll find one we can? It's silly to expect the military to pay for AI they can't use for military purposes.
6
0
2026-02-28T15:47:50
NighthawkT42
false
null
0
o7w8mvu
false
/r/LocalLLaMA/comments/1rguty0/get_your_local_models_in_order_anthropic_just_got/o7w8mvu/
false
6
t1_o7w8m9f
I have a similar situation with my device, which is an 8840U (780m). Sharing for reference: - Qwen3-30B-A3B-Instruct-2507-UD-Q4_K_XL: 27 t/s - Qwen3-VL-30B-A3B-Instruct-UD-Q4_K_XL: 27 t/s - Qwen3.5-35B-A3B-Q4_K_M: 16 t/s
2
0
2026-02-28T15:47:44
EightHachi
false
null
0
o7w8m9f
false
/r/LocalLLaMA/comments/1rg4fb7/should_qwen3535ba3b_be_this_much_slower_than/o7w8m9f/
false
2
t1_o7w8kai
The older models come in a variety of sizes, and it takes time for new models to be available for the variety of hardware that users have. If it's bigger than 3B, then it's completely unusable without the hardware to run it. What would be awesome is cutting-edge 0.5B–3B models. Or smaller.
3
0
2026-02-28T15:47:28
Sure_Explorer_6698
false
null
0
o7w8kai
false
/r/LocalLLaMA/comments/1rh46g2/why_some_still_playing_with_old_models_nostalgia/o7w8kai/
false
3
t1_o7w8j93
Thinking-first models are benchmaxxed for thinking mode; they won't perform as well with thinking off.
2
0
2026-02-28T15:47:19
def_not_jose
false
null
0
o7w8j93
false
/r/LocalLLaMA/comments/1rgswkc/turn_off_thinking_in_lm_studio/o7w8j93/
false
2
t1_o7w8ac6
The last thing they want is to make this a legal battle; in the age of AI, the courts operate at a glacial pace. It's an arms race, after all, and there are agendas that have term limits. It will be interesting to see how this plays out.
1
0
2026-02-28T15:46:07
Igot1forya
false
null
0
o7w8ac6
false
/r/LocalLLaMA/comments/1rguty0/get_your_local_models_in_order_anthropic_just_got/o7w8ac6/
false
1
t1_o7w8ac7
While this could be true, do you have any examples? Especially within the same model family, I never had the feeling that the newer model wasn't a lot better. Of course, some tasks are just solved, and if your model works, why change it?
0
0
2026-02-28T15:46:07
LevianMcBirdo
false
null
0
o7w8ac7
false
/r/LocalLLaMA/comments/1rh46g2/why_some_still_playing_with_old_models_nostalgia/o7w8ac7/
false
0
t1_o7w8aas
Independent of this, I've been right on the edge of cancelling ChatGPT for Claude for a while now. ChatGPT seems pretty desperate with the creation of that $30 tier in order to keep getting what I've been getting for $20. Meanwhile, I've been running free-tier Claude alongside the paid GPT, and it feels like it's doi...
1
0
2026-02-28T15:46:06
NighthawkT42
false
null
0
o7w8aas
false
/r/LocalLLaMA/comments/1rguty0/get_your_local_models_in_order_anthropic_just_got/o7w8aas/
false
1
t1_o7w8a3r
Llama.cpp. They supposedly fixed the issue the other day. It still doesn't work properly, for me at least. I'll get maybe two turns before it starts re-processing. And by then, there's so much context from the model's thinking outputs that it takes a while to process even a simple 20-token question because it's processing...
7
0
2026-02-28T15:46:05
ArchdukeofHyperbole
false
null
0
o7w8a3r
false
/r/LocalLLaMA/comments/1rh43za/qwen_3535ba3b_is_beyond_expectations_its_replaced/o7w8a3r/
false
7
t1_o7w87lx
Quantization affects MoE and small models more, and even at 6-bit it's very fast since it's an MoE model, and I've got the RAM for it, so I go with the best quality to be sure I get the best the model can do. Above Q6_K there's no difference. For 100B-and-above MoE, you are right, Q4 is the sweet spot.
3
0
2026-02-28T15:45:44
BumblebeeParty6389
false
null
0
o7w87lx
false
/r/LocalLLaMA/comments/1rgvma8/which_size_of_qwen35_are_you_planning_to_run/o7w87lx/
false
3
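For anyone weighing Q6_K against Q4 on RAM grounds, the size arithmetic is straightforward: file size is roughly parameter count times bits-per-weight divided by eight. A back-of-the-envelope sketch, using approximate average bits-per-weight figures for common llama.cpp K-quants (the exact values vary a little by model and quant recipe):

```python
# Rough GGUF file-size estimate: params * bits-per-weight / 8.
# Bits-per-weight values below are approximate averages for
# llama.cpp quants; real files differ slightly per architecture.
BPW = {"Q8_0": 8.5, "Q6_K": 6.6, "Q4_K_M": 4.8}

def gguf_size_gb(params_billions: float, quant: str) -> float:
    return params_billions * BPW[quant] / 8  # GB, since params are in billions

for quant in BPW:
    print(f"35B at {quant}: ~{gguf_size_gb(35, quant):.0f} GB")
# 35B at Q8_0: ~37 GB; Q6_K: ~29 GB; Q4_K_M: ~21 GB
```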
t1_o7w86x4
Also, I would recommend Z Image for image generation; it's very well regarded, as are flux.2-4b/9b.
2
0
2026-02-28T15:45:38
Curious_Priority8156
false
null
0
o7w86x4
false
/r/LocalLLaMA/comments/1rh0hgl/hi_im_a_total_noob/o7w86x4/
false
2
t1_o7w7uld
Can you explain a bit more about your work?
1
0
2026-02-28T15:43:58
fuse04
false
null
0
o7w7uld
false
/r/LocalLLaMA/comments/1rfp6bk/why_is_openclaw_even_this_popular/o7w7uld/
false
1
t1_o7w7ucm
Remember when the whole reddit rushed to support Altman when the board tried to oust him?
17
0
2026-02-28T15:43:56
floghdraki
false
null
0
o7w7ucm
false
/r/LocalLLaMA/comments/1rh2lew/openai_pivot_investors_love/o7w7ucm/
false
17
t1_o7w7rtu
Wish I had more than 16gb of vram... 😭
1
0
2026-02-28T15:43:35
lundrog
false
null
0
o7w7rtu
false
/r/LocalLLaMA/comments/1rgtxry/is_qwen35_a_coding_game_changer_for_anyone_else/o7w7rtu/
false
1
t1_o7w7qgx
Could I run it on a single 3080 + 32GB system RAM?
1
0
2026-02-28T15:43:24
cloudcity
false
null
0
o7w7qgx
false
/r/LocalLLaMA/comments/1rh43za/qwen_3535ba3b_is_beyond_expectations_its_replaced/o7w7qgx/
false
1
t1_o7w7pur
12B models can adhere to 5k tokens of lore well. Not to the level of bigger models, of course, but they can work really well with far more than just 500 tokens. My biggest problem with Wayfarer 2 was the repetition, not adherence; maybe this user didn't use a good format for the LLM to understand his wo...
1
0
2026-02-28T15:43:18
Sherlockyz
false
null
0
o7w7pur
false
/r/LocalLLaMA/comments/1n8kk48/new_ai_dungeon_models_wayfarer_2_12b_nova_70b/o7w7pur/
false
1
t1_o7w7oqh
I just clicked on install_pocket_tts.bat as someone mentioned in the forum, and ran all the .bat files in the zip on my C:\ drive. I'm a visually impaired writer and I have no idea what else to do. Is there any guide anywhere, or maybe a YouTube tutorial I could listen to? Or what else do I have to install? I have Py...
1
0
2026-02-28T15:43:09
timeshifter24
false
null
0
o7w7oqh
false
/r/LocalLLaMA/comments/1rabo34/local_tts_server_with_voice_cloning_nearrealtime/o7w7oqh/
false
1
t1_o7w7ol1
It must be because you are a native speaker of English or Chinese.
-8
0
2026-02-28T15:43:08
DrNavigat
false
null
0
o7w7ol1
false
/r/LocalLLaMA/comments/1rh3thm/rip_gemma_leave_your_memories_here/o7w7ol1/
false
-8
t1_o7w7nkv
This is exactly why.
5
0
2026-02-28T15:42:59
indicava
false
null
0
o7w7nkv
false
/r/LocalLLaMA/comments/1rh46g2/why_some_still_playing_with_old_models_nostalgia/o7w7nkv/
false
5
t1_o7w7lgn
These are things you must discover for yourself, grasshopper.
1
0
2026-02-28T15:42:42
crantob
false
null
0
o7w7lgn
false
/r/LocalLLaMA/comments/1r6zxy0/kimi_k2_was_spreading_disinformation_and_made_up/o7w7lgn/
false
1
t1_o7w7l4d
I'm serious, let's unite. I support it.
2
0
2026-02-28T15:42:39
DrNavigat
false
null
0
o7w7l4d
false
/r/LocalLLaMA/comments/1rh3thm/rip_gemma_leave_your_memories_here/o7w7l4d/
false
2
t1_o7w7inq
Thank you. I'm gonna watch and try ComfyUI.
2
0
2026-02-28T15:42:19
PizzaSouthern5853
false
null
0
o7w7inq
false
/r/LocalLLaMA/comments/1rh0hgl/hi_im_a_total_noob/o7w7inq/
false
2
t1_o7w7duq
That's counterintuitive; newer models are more efficient and vary wildly in available sizes.
22
0
2026-02-28T15:41:39
indicava
false
null
0
o7w7duq
false
/r/LocalLLaMA/comments/1rh46g2/why_some_still_playing_with_old_models_nostalgia/o7w7duq/
false
22
t1_o7w7ci1
Thanks!
2
0
2026-02-28T15:41:27
PizzaSouthern5853
false
null
0
o7w7ci1
false
/r/LocalLLaMA/comments/1rh0hgl/hi_im_a_total_noob/o7w7ci1/
false
2
t1_o7w7an3
I'm not sure whether I should run the 27B or the 122B on a 4090 at IQ4. Both seem to have similar quality. Maybe the 27B is a little faster, but I'm optimizing for overnight runs, not interactive speed. I usually use Kimi K2.5 as the supervisor and the local model as the executor subagent in a GSD flow. I have to put the KV cache at q8 to fi...
1
0
2026-02-28T15:41:11
gtrak
false
null
0
o7w7an3
false
/r/LocalLLaMA/comments/1rgtxry/is_qwen35_a_coding_game_changer_for_anyone_else/o7w7an3/
false
1
t1_o7w76ur
Why not Q4? I thought the difference wasn't big and you can get more tokens per sec and increase the context size
1
0
2026-02-28T15:40:40
Curious_Priority8156
false
null
0
o7w76ur
false
/r/LocalLLaMA/comments/1rgvma8/which_size_of_qwen35_are_you_planning_to_run/o7w76ur/
false
1
t1_o7w755x
GLM 5 is about as good as Sonnet 4.
1
0
2026-02-28T15:40:25
serpix
false
null
0
o7w755x
false
/r/LocalLLaMA/comments/1rggpu9/glm5code/o7w755x/
false
1
t1_o7w747j
1. Why are you not using LM Studio? 2. Have you tried Gemma 3n models, as well as Nemotron Nano?
1
0
2026-02-28T15:40:17
MokoshHydro
false
null
0
o7w747j
false
/r/LocalLLaMA/comments/1rh3oty/mac_m4_24gb_local_stack_qwen25_14b_cogito_14b/o7w747j/
false
1
t1_o7w7398
No shit.
-1
0
2026-02-28T15:40:09
buecker02
false
null
0
o7w7398
false
/r/LocalLLaMA/comments/1rguty0/get_your_local_models_in_order_anthropic_just_got/o7w7398/
false
-1
t1_o7w734c
To be fair, "does it let you goon" was already a fair amount of it in 2024.
1
0
2026-02-28T15:40:08
Numerous_Mulberry514
false
null
0
o7w734c
false
/r/LocalLLaMA/comments/1rgkc1b/back_in_my_day_localllama_were_the_pioneers/o7w734c/
false
1
t1_o7w7357
Yes, I had some conversations about this in the hf repos [here](https://huggingface.co/noctrex/Qwen3-Coder-Next-MXFP4_MOE-GGUF/discussions/2) and [here](https://huggingface.co/noctrex/Qwen3.5-35B-A3B-MXFP4_MOE-GGUF/discussions/3), and it seems that despite the lower KLD and PPL scores, it actually behaves better when u...
2
0
2026-02-28T15:40:08
noctrex
false
null
0
o7w7357
false
/r/LocalLLaMA/comments/1rgel19/new_qwen3535ba3b_unsloth_dynamic_ggufs_benchmarks/o7w7357/
false
2
t1_o7w6z6v
I don't understand why all the employees wanted him back. Him getting kicked out of OpenAI was likely the best thing that ever happened to OpenAI.
36
0
2026-02-28T15:39:35
PaceImaginary8610
false
null
0
o7w6z6v
false
/r/LocalLLaMA/comments/1rh2lew/openai_pivot_investors_love/o7w6z6v/
false
36
t1_o7w6ya6
Notice this is the heretic version, so the behavior might not be repeatable in stock. The craftsmanship this team put into the reasoning distillation is really a new level, but the 'emergent brain' of A3B is still stumped when posed with problems needing untrained creativity. My conclusion is being reinforced: Ain't n...
0
1
2026-02-28T15:39:27
crantob
false
null
0
o7w6ya6
false
/r/LocalLLaMA/comments/1rh5luv/qwen35_35ba3b_evaded_the_zeroreasoning_budget_by/o7w6ya6/
false
0
t1_o7w6xfa
me too
0
1
2026-02-28T15:39:20
Needausernameplzz
false
null
0
o7w6xfa
false
/r/LocalLLaMA/comments/1rh46g2/why_some_still_playing_with_old_models_nostalgia/o7w6xfa/
false
0
t1_o7w6w5z
[removed]
1
0
2026-02-28T15:39:10
[deleted]
true
null
0
o7w6w5z
false
/r/LocalLLaMA/comments/1r1nq95/people_who_expose_their_llm_to_the_internet_how/o7w6w5z/
false
1
t1_o7w6ta8
Why so cynical? 27B-param models are pretty doable given the hardware I see around LocalLLaMA. Waiting around for corporate handouts doesn't really capture the spirit of this sub.
6
0
2026-02-28T15:38:45
Clear_Anything1232
false
null
0
o7w6ta8
false
/r/LocalLLaMA/comments/1rh3thm/rip_gemma_leave_your_memories_here/o7w6ta8/
false
6
t1_o7w6sfn
Google literally shaking rn
2
0
2026-02-28T15:38:38
johnnyApplePRNG
false
null
0
o7w6sfn
false
/r/LocalLLaMA/comments/1rh095c/deepseek_v4_will_be_released_next_week_and_will/o7w6sfn/
false
2
t1_o7w6r56
Thank you for that post man. I knew we weren't maximizing gpt-oss potential... Thank you for sharing your knowledge
1
0
2026-02-28T15:38:27
IngenuityMotor2106
false
null
0
o7w6r56
false
/r/LocalLLaMA/comments/1rc6c8m/feels_like_magic_a_local_gptoss_20b_is_capable_of/o7w6r56/
false
1
t1_o7w6pjq
If you're using Open WebUI, that's the reason. Whoever made Open WebUI doesn't understand prompt caching at all.
8
0
2026-02-28T15:38:14
Far-Low-4705
false
null
0
o7w6pjq
false
/r/LocalLLaMA/comments/1rh43za/qwen_3535ba3b_is_beyond_expectations_its_replaced/o7w6pjq/
false
8
t1_o7w6odj
> What's your approach for keeping local deployments stable across model updates? Keeping the model weights (offloading to a spinning platter as they go), running everything in Docker images, so I can always swap back to a specific one. Also, using llama-swap, which holds that whole config together (args + explicit mo...
1
0
2026-02-28T15:38:05
Medium_Chemist_4032
false
null
0
o7w6odj
false
/r/LocalLLaMA/comments/1rh52t9/config_drift_is_the_silent_killer_of_local_model/o7w6odj/
false
1
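The pinning approach above (fixed weights, Docker images, one llama-swap config) can also be made checkable. A hypothetical drift-check sketch, not part of llama-swap itself: hash the tracked files and compare against a recorded lockfile before serving. The file paths are examples only:

```python
import hashlib, json, pathlib

# Hypothetical drift check: record sha256 of the weights and the
# llama-swap config, and refuse to start if either has changed.
TRACKED = ["models/qwen3.5-35b-a3b-q6_k.gguf", "llama-swap.yaml"]  # example paths
LOCKFILE = pathlib.Path("deploy.lock.json")

def digest(path: str) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

current = {p: digest(p) for p in TRACKED}
if LOCKFILE.exists():
    recorded = json.loads(LOCKFILE.read_text())
    drifted = [p for p in TRACKED if recorded.get(p) != current[p]]
    if drifted:
        raise SystemExit(f"Config drift detected in: {drifted}")
else:
    LOCKFILE.write_text(json.dumps(current, indent=2))  # first run: record baseline
print("Deployment matches lockfile")
```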
t1_o7w6mya
Step one, ensure it's a quality quant. Step two, ensure the params are right: top-k 20, temp 0, repeat penalty 1, min-p 0, top-p 0.95.
1
0
2026-02-28T15:37:53
Ok_Technology_5962
false
null
0
o7w6mya
false
/r/LocalLLaMA/comments/1rh4p8i/okay_im_overthinking_yes_yes_you_are_qwen_35_27b/o7w6mya/
false
1
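Those sampler values map directly onto llama-cpp-python's chat API, for anyone who wants to pin them in code rather than in a UI. A minimal sketch, assuming a local GGUF path (the path is a placeholder; the parameter names are llama-cpp-python's, and other runtimes spell them differently):

```python
from llama_cpp import Llama

# Pin the sampler settings from the comment above:
# top-k 20, temp 0, repeat penalty 1.0, min-p 0, top-p 0.95.
llm = Llama(model_path="qwen3.5-27b-q6_k.gguf", n_ctx=8192)  # example path

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Say hello in one sentence."}],
    temperature=0.0,
    top_k=20,
    top_p=0.95,
    min_p=0.0,
    repeat_penalty=1.0,
)
print(out["choices"][0]["message"]["content"])
```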
t1_o7w6l54
If you ask Gemini what the best small model is, you get Qwen 2.5 or Gemma 3. ChatGPT isn't much better. Even when you say "released in the past month", it's just awful at it. 'Current best' is one thing AI still kind of sucks at finding.
28
0
2026-02-28T15:37:38
_raydeStar
false
null
0
o7w6l54
false
/r/LocalLLaMA/comments/1rh46g2/why_some_still_playing_with_old_models_nostalgia/o7w6l54/
false
28
t1_o7w6hl0
I mean, I have 16GB VRAM and get 72 tok/s with Qwen3.5-35B-A3B Q4_K_M, and all I did was follow some of the recent guides about tweaking the CPU layer offloading.
9
0
2026-02-28T15:37:09
JumboShock
false
null
0
o7w6hl0
false
/r/LocalLLaMA/comments/1rgzul5/are_you_ready_for_small_qwens/o7w6hl0/
false
9
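The "CPU layer offloading" referenced above comes down to deciding how many transformer layers live in VRAM, with the rest running from system RAM. A minimal sketch via llama-cpp-python, with a placeholder model path; the same knob is --n-gpu-layers on the llama.cpp command line, and recent builds add finer-grained options for keeping MoE expert tensors on the CPU, which is what those guides tune:

```python
from llama_cpp import Llama

# Partial offload for a 16GB card: n_gpu_layers controls how many
# layers go to VRAM; the remainder runs on the CPU. Raise the number
# until VRAM is nearly full but not over.
llm = Llama(
    model_path="Qwen3.5-35B-A3B-Q4_K_M.gguf",  # example path
    n_gpu_layers=24,  # tune for your VRAM
    n_ctx=16384,
)
print(llm("Hello", max_tokens=8)["choices"][0]["text"])
```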
t1_o7w6egd
Yeah, weirdly, I noticed that in Qwen3-VL the 30B thinking model had better vision than the 32B instruct, and the 32B had a way larger vision module (not to mention more than 10x the active params). I think thinking models just tend to have better vision in general.
8
0
2026-02-28T15:36:43
Far-Low-4705
false
null
0
o7w6egd
false
/r/LocalLLaMA/comments/1rh43za/qwen_3535ba3b_is_beyond_expectations_its_replaced/o7w6egd/
false
8
t1_o7w67ub
[deleted]
1
0
2026-02-28T15:35:47
[deleted]
true
null
0
o7w67ub
false
/r/LocalLLaMA/comments/1rh22j0/qwen35_prefill_latency_extremely_slow_with_large/o7w67ub/
false
1
t1_o7w66ym
Could be a quant issue. I noticed this with a different quant and find the bartowski ones run fine/better for me. The unsloth ones didn't run as well either. Unsloth will be re-publishing their quants, as there was some issue. The 35B was uploaded yesterday, but I am also waiting for the 27B and will test again. Until then, try b...
2
0
2026-02-28T15:35:40
thegr8anand
false
null
0
o7w66ym
false
/r/LocalLLaMA/comments/1rh4p8i/okay_im_overthinking_yes_yes_you_are_qwen_35_27b/o7w66ym/
false
2
t1_o7w66dx
Can that model be used on a CPU? Could you please tell me your computer's components?
3
0
2026-02-28T15:35:35
kmuentez
false
null
0
o7w66dx
false
/r/LocalLLaMA/comments/1rh43za/qwen_3535ba3b_is_beyond_expectations_its_replaced/o7w66dx/
false
3
t1_o7w646g
I think we should just stop pretending Q1 quants exist. Q1 was a nice experiment but it clearly failed for anything except general chatting.
1
0
2026-02-28T15:35:17
Long_comment_san
false
null
0
o7w646g
false
/r/LocalLLaMA/comments/1re76g6/this_benchmark_from_shows_unsolth_q3_quantization/o7w646g/
false
1
t1_o7w63xl
They are trying to save themselves
7
0
2026-02-28T15:35:15
markeus101
false
null
0
o7w63xl
false
/r/LocalLLaMA/comments/1rh46g2/why_some_still_playing_with_old_models_nostalgia/o7w63xl/
false
7
t1_o7w5yns
I support it. Let's do LocalGeMMA, with 1 parameter.
4
0
2026-02-28T15:34:30
DrNavigat
false
null
0
o7w5yns
false
/r/LocalLLaMA/comments/1rh3thm/rip_gemma_leave_your_memories_here/o7w5yns/
false
4
t1_o7w5ybc
Can you share prompt processing and generation t/s for the 35B at Q4 or Q8 on a W7900?
1
0
2026-02-28T15:34:27
putrasherni
false
null
0
o7w5ybc
false
/r/LocalLLaMA/comments/1rgvma8/which_size_of_qwen35_are_you_planning_to_run/o7w5ybc/
false
1
t1_o7w5uz0
I hope it paints the stock market red.
13
0
2026-02-28T15:33:59
dingo_xd
false
null
0
o7w5uz0
false
/r/LocalLLaMA/comments/1rh095c/deepseek_v4_will_be_released_next_week_and_will/o7w5uz0/
false
13
t1_o7w5poq
Thanks!
1
0
2026-02-28T15:33:13
silenceimpaired
false
null
0
o7w5poq
false
/r/LocalLLaMA/comments/1rh4p8i/okay_im_overthinking_yes_yes_you_are_qwen_35_27b/o7w5poq/
false
1
t1_o7w5o1u
I'm not. I've put in plenty of use and am confident about its placing.
1
0
2026-02-28T15:33:00
ForsookComparison
false
null
0
o7w5o1u
false
/r/LocalLLaMA/comments/1rgokw1/a_monthly_update_to_my_where_are_openweight/o7w5o1u/
false
1
t1_o7w5n80
Yes, it can generate text, but it's not really multimodal, in the sense that its text-generation utility is worthless because of this.
1
0
2026-02-28T15:32:52
wasdasdasd32
false
null
0
o7w5n80
false
/r/LocalLLaMA/comments/1rh095c/deepseek_v4_will_be_released_next_week_and_will/o7w5n80/
false
1
t1_o7w5m4b
Bezos's sugar baby.
1
0
2026-02-28T15:32:44
dingo_xd
false
null
0
o7w5m4b
false
/r/LocalLLaMA/comments/1rh095c/deepseek_v4_will_be_released_next_week_and_will/o7w5m4b/
false
1
t1_o7w5ite
I think the technical folks do it because it still works. Others do it because some older LLMs kiss their butt in a specific way XD It took GPT a while to retire 4o bahaha
4
0
2026-02-28T15:32:16
TheAncientOnce
false
null
0
o7w5ite
false
/r/LocalLLaMA/comments/1rh46g2/why_some_still_playing_with_old_models_nostalgia/o7w5ite/
false
4
t1_o7w59ew
Also, I hate how Qwen3 after 2507 got the GPT-style excessive appreciation of the user, as well as the "not just ... but ..." trope that it repeats constantly. It feels like earlier models answered with less bullshit.
19
0
2026-02-28T15:30:57
No-Refrigerator-1672
false
null
0
o7w59ew
false
/r/LocalLLaMA/comments/1rh46g2/why_some_still_playing_with_old_models_nostalgia/o7w59ew/
false
19
t1_o7w58uz
If anything, Anthropic is probably more right wing than Trump
0
0
2026-02-28T15:30:53
Exotic_Lion_3581
false
null
0
o7w58uz
false
/r/LocalLLaMA/comments/1rgn4ki/president_trump_orders_all_federal_agencies_in/o7w58uz/
false
0
t1_o7w4x9r
[removed]
1
0
2026-02-28T15:29:15
[deleted]
true
null
0
o7w4x9r
false
/r/LocalLLaMA/comments/1rh095c/deepseek_v4_will_be_released_next_week_and_will/o7w4x9r/
false
1
t1_o7w4vl8
How can I get vLLM to serve these unsloth quants!? What a dependency nightmare. I'm able to serve them through llama.cpp. I'm also on WSL because of Windows-only apps. Someone please publish a container that just works.
1
0
2026-02-28T15:29:01
noooo_no_no_no
false
null
0
o7w4vl8
false
/r/LocalLLaMA/comments/1rgtxry/is_qwen35_a_coding_game_changer_for_anyone_else/o7w4vl8/
false
1
t1_o7w4ujx
I'm hoping for a good sub-1B model. It's really good for classifiers, gating, etc.
6
0
2026-02-28T15:28:52
_raydeStar
false
null
0
o7w4ujx
false
/r/LocalLLaMA/comments/1rgzul5/are_you_ready_for_small_qwens/o7w4ujx/
false
6
t1_o7w4tiy
I tried llama.cpp (ROCm, Vulkan, and CPU versions) and didn't find much difference on my system. A GPU could be better, but it also consumes more; it depends on your use case.
1
0
2026-02-28T15:28:43
Deep_Traffic_7873
false
null
0
o7w4tiy
false
/r/LocalLLaMA/comments/1rgixk7/accuracy_vs_speed_my_top_5/o7w4tiy/
false
1
t1_o7w4syt
Yeah, I was surprised that a single 3090 isn't significantly better than the 3060s. Even for image gen, where it is faster, two 3060s let you do concurrency that a single 3090 doesn't. I guess the only time 3090s measure up is when they stack up, with dual 3090s or even quad being the sweet spot.
0
0
2026-02-28T15:28:38
TheAncientOnce
false
null
0
o7w4syt
false
/r/LocalLLaMA/comments/1rgynmf/dual_3060_and_single_3090_whats_the_point_of_the/o7w4syt/
false
0
t1_o7w4snh
I'm using it in a similar way. I've got it loaded on the CPU and tied into my n8n automations, and it is smart and fast enough to free up my GPU. I'm loving it.
13
0
2026-02-28T15:28:36
someone383726
false
null
0
o7w4snh
false
/r/LocalLLaMA/comments/1rh43za/qwen_3535ba3b_is_beyond_expectations_its_replaced/o7w4snh/
false
13
t1_o7w4sgc
I am somehow only getting around ~20 tokens a second on an M4 with the unsloth Q4_K_M. This feels low? Am I missing something here?
1
0
2026-02-28T15:28:34
ChickenShieeeeeet
false
null
0
o7w4sgc
false
/r/LocalLLaMA/comments/1rdxfdu/qwen3535ba3b_is_a_gamechanger_for_agentic_coding/o7w4sgc/
false
1
t1_o7w4jqq
Alright, so I have the exact same GPU; I test on a 1650. Here's the honest truth about your hardware. With 4GB VRAM and 16GB RAM, that 24B model should never have been recommended to you. I'm honestly surprised it even loaded. Here's what will actually work: realistically, you're looking at a 3B model for comfortable u...
1
0
2026-02-28T15:27:20
melanov85
false
null
0
o7w4jqq
false
/r/LocalLLaMA/comments/1rgcosw/trained_and_quantized_an_llm_on_a_gtx_1650_4gb/o7w4jqq/
false
1
t1_o7w4is6
I don't care about Gemma at all; I care about the upcoming Qwen3.5 4B and 8B models.
4
1
2026-02-28T15:27:12
AppealThink1733
false
null
0
o7w4is6
false
/r/LocalLLaMA/comments/1rh3thm/rip_gemma_leave_your_memories_here/o7w4is6/
false
4
t1_o7w4d3w
I saw this the other day on here and I think you'd like to check it out. searxng + chromium + stackoverflow/GitHub apis. Not my project. https://github.com/Shelpuk-AI-Technology-Consulting/kindly-web-search-mcp-server
2
0
2026-02-28T15:26:24
Xp_12
false
null
0
o7w4d3w
false
/r/LocalLLaMA/comments/1rg6ph3/qwen35_feels_ready_for_production_use_never_been/o7w4d3w/
false
2
t1_o7w4c36
Aren't newer models trained on more and cleaner training data?
3
0
2026-02-28T15:26:15
KaroYadgar
false
null
0
o7w4c36
false
/r/LocalLLaMA/comments/1rh46g2/why_some_still_playing_with_old_models_nostalgia/o7w4c36/
false
3
t1_o7w4ahi
Use a prompt like this: # Few-Shot Example User: Explain quantum entanglement. Assistant: <think> The user wants to understand the concept of quantum entanglement. 1. First, I need to define what quantum entanglement is. It is a physical phenomenon. 2. Next, I should use common examples, such as "a pair of shoes" o...
1
0
2026-02-28T15:26:03
AnyArmy6566
false
null
0
o7w4ahi
false
/r/LocalLLaMA/comments/1rh4p8i/okay_im_overthinking_yes_yes_you_are_qwen_35_27b/o7w4ahi/
false
1
t1_o7w47km
DDR5?
2
0
2026-02-28T15:25:38
Fast_Thing_7949
false
null
0
o7w47km
false
/r/LocalLLaMA/comments/1rh0msv/qwen3535ba3b_q5_k_mbest_model_for_nvidia_16gb_gpus/o7w47km/
false
2
t1_o7w40p0
FWIW, it was called the Department of War from 1789 to 1947, and the renaming happened in 1949.
1
0
2026-02-28T15:24:39
Genghiz007
false
null
0
o7w40p0
false
/r/LocalLLaMA/comments/1rguty0/get_your_local_models_in_order_anthropic_just_got/o7w40p0/
false
1
t1_o7w3x94
Waiting for Gemma 4… yeah
5
0
2026-02-28T15:24:10
Geritas
false
null
0
o7w3x94
false
/r/LocalLLaMA/comments/1rh46g2/why_some_still_playing_with_old_models_nostalgia/o7w3x94/
false
5
t1_o7w3wgy
Neither. This Transformers commit mentioning Qwen3.5-9B was posted a few weeks ago in this sub. [https://github.com/huggingface/transformers/commit/fc9137225880a9d03f130634c20f9dbe36a7b8bf#diff-0e5e76a787bdf4aaf1b4cc123b3ed7af55632837177716b3e7ea770e9cdf6834R21-R29](https://github.com/huggingface/transformers/commit/fc...
1
0
2026-02-28T15:24:04
DeProgrammer99
false
null
0
o7w3wgy
false
/r/LocalLLaMA/comments/1rh002v/copy_paste_error_or_does_vllm_team_know_something/o7w3wgy/
false
1
t1_o7w3pq2
[removed]
1
0
2026-02-28T15:23:05
[deleted]
true
null
0
o7w3pq2
false
/r/LocalLLaMA/comments/1rh3oty/mac_m4_24gb_local_stack_qwen25_14b_cogito_14b/o7w3pq2/
false
1
t1_o7w3jai
I swore by gpt-oss-120b as the best assistant model for QA and office tasks. I still need to put the 35B through its paces, but so far I'm very happy with it at Q8 on Strix Halo.
70
0
2026-02-28T15:22:09
SocialDinamo
false
null
0
o7w3jai
false
/r/LocalLLaMA/comments/1rh43za/qwen_3535ba3b_is_beyond_expectations_its_replaced/o7w3jai/
false
70
t1_o7w3gxk
I’m using old models because I have old hardware and I’m broke.
1
1
2026-02-28T15:21:51
Expert_Bat4612
false
null
0
o7w3gxk
false
/r/LocalLLaMA/comments/1rh46g2/why_some_still_playing_with_old_models_nostalgia/o7w3gxk/
false
1
t1_o7w3e6l
Americans have shown time and time again they do not care about their own personal privacy, as long as they can have their conveniences and entertainment. And the right-wingers will think whatever Trump wants them to think. 
1
0
2026-02-28T15:21:25
optimator_h
false
null
0
o7w3e6l
false
/r/LocalLLaMA/comments/1rguty0/get_your_local_models_in_order_anthropic_just_got/o7w3e6l/
false
1