name: string
body: string
score: int64
controversiality: int64
created: timestamp[us]
author: string
collapsed: bool
edited: timestamp[us]
gilded: int64
id: string
locked: bool
permalink: string
stickied: bool
ups: int64
t1_o7ur78f
New Claude 2.1? Is this different from the Claude 2.1 that came out in 2023?
1
0
2026-02-28T09:27:59
lxgrf
false
null
0
o7ur78f
false
/r/LocalLLaMA/comments/1rgxywo/new_claude_21_refuses_to_kill_a_python_process/o7ur78f/
false
1
t1_o7ur6fo
Can the CLIs call local models and work with them in tandem? That's one thing I never thought to ask Gemini CLI to do.
1
0
2026-02-28T09:27:46
ojxander
false
null
0
o7ur6fo
false
/r/LocalLLaMA/comments/1rfp6bk/why_is_openclaw_even_this_popular/o7ur6fo/
false
1
t1_o7ur5gv
good time to ban all the bad bots
2
0
2026-02-28T09:27:30
LoSboccacc
false
null
0
o7ur5gv
false
/r/LocalLLaMA/comments/1rgxywo/new_claude_21_refuses_to_kill_a_python_process/o7ur5gv/
false
2
t1_o7ur4x0
It's typical bot stuff. They repost both the original post as well as the top comments in the original post. This way it looks like actual traffic, boosts the post, and adds some karma to the comment bots too.
2
0
2026-02-28T09:27:21
KaroYadgar
false
null
0
o7ur4x0
false
/r/LocalLLaMA/comments/1rgxywo/new_claude_21_refuses_to_kill_a_python_process/o7ur4x0/
false
2
t1_o7ur41y
Unironically this. Although he did acknowledge this during the video, which surprised me; he's more honest than most model builders out there. Big respect to him for taking on this journey for the rest of us who don't own such hardware, and for testing the Chinese modded 4090 for us. I am 100% sure he lurks over here. I...
3
0
2026-02-28T09:27:06
waiting_for_zban
false
null
0
o7ur41y
false
/r/LocalLLaMA/comments/1rg8dex/pewdiepie_finetuned_qwen25coder32b_to_beat/o7ur41y/
false
3
t1_o7ur246
Yeah, I went and did this; seemed like a good test of local coding agents. They did a good enough job with a bit of tidying up and fixing from me. It will take the script a few days to load in all the email, but it's in progress; 10k or so done so far.
2
0
2026-02-28T09:26:34
_-Carnage
false
null
0
o7ur246
false
/r/LocalLLaMA/comments/1r9gve8/i_feel_left_behind_what_is_special_about_openclaw/o7ur246/
false
2
t1_o7ur0xu
amazing!
1
0
2026-02-28T09:26:14
No_Feed2488
false
null
0
o7ur0xu
false
/r/LocalLLaMA/comments/1rg9lli/little_qwen_35_27b_and_qwen_35ba3b_models_did/o7ur0xu/
false
1
t1_o7ur0sv
How do I edit the jinja template? Should I download the jinja file separately and override the internal one with the flag?
1
0
2026-02-28T09:26:12
Lorian0x7
false
null
0
o7ur0sv
false
/r/LocalLLaMA/comments/1rgp97u/sooo_much_thinking/o7ur0sv/
false
1
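The question above (o7ur0sv) is answerable a couple of ways depending on the backend. If serving through Python, the transformers tokenizer lets you swap the baked-in chat template for an edited Jinja file; a minimal sketch, with the model name and template filename as placeholders:

```
from transformers import AutoTokenizer

# Load the tokenizer, then replace the chat template shipped in
# tokenizer_config.json with an edited local Jinja file.
tok = AutoTokenizer.from_pretrained("some-org/some-model")  # placeholder
with open("custom_template.jinja") as f:
    tok.chat_template = f.read()

messages = [{"role": "user", "content": "hello"}]
print(tok.apply_chat_template(messages, tokenize=False, add_generation_prompt=True))
```

(Recent llama.cpp builds also accept a template file via a command-line flag, if that's the backend in question.)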
t1_o7uqyf1
What’s the main goal for the server? LLM inference, training, or image models? GPU VRAM, RAM, and CPU requirements change a lot depending on the workload, so knowing that first helps avoid bad recommendations.
1
0
2026-02-28T09:25:33
Global_Ease_371
false
null
0
o7uqyf1
false
/r/LocalLLaMA/comments/1r1sdkp/i_am_planning_on_building_a_home_ai_server_what/o7uqyf1/
false
1
t1_o7uqy05
Claude is against any form of killing
1
0
2026-02-28T09:25:26
Zealousideal_Nail288
false
null
0
o7uqy05
false
/r/LocalLLaMA/comments/1rgxywo/new_claude_21_refuses_to_kill_a_python_process/o7uqy05/
false
1
t1_o7uqrd5
[removed]
1
0
2026-02-28T09:23:34
[deleted]
true
null
0
o7uqrd5
false
/r/LocalLLaMA/comments/1q9wg6p/finetuning_translation_model/o7uqrd5/
false
1
t1_o7uqnsr
> (Though I've been known to answer to El for short!) I wonder if that's a subtle reference by your LLM, associating itself with the original Canaanite pantheon. Answers to god's name but with female personality?
1
0
2026-02-28T09:22:34
hum_ma
false
null
0
o7uqnsr
false
/r/LocalLLaMA/comments/1rgt4m4/not_creeped_out_at_all_i_swear/o7uqnsr/
false
1
t1_o7uqiw5
Yeah, but was it the models that were ass or the dataset? If someone tried training a modern model architecture on the old datasets, we would know.
1
0
2026-02-28T09:21:12
MaruluVR
false
null
0
o7uqiw5
false
/r/LocalLLaMA/comments/1rgkc1b/back_in_my_day_localllama_were_the_pioneers/o7uqiw5/
false
1
t1_o7uqhqj
No need, I don't think. All the Qwen3.5 models have NEXTN speculative decoding built in, so running that algo (I think it's only natively supported by sglang?) provides most of the speedup without a separate speculative model. Correct me if I'm wrong, anyone.
2
0
2026-02-28T09:20:54
MmmmMorphine
false
null
0
o7uqhqj
false
/r/LocalLLaMA/comments/1rgvma8/which_size_of_qwen35_are_you_planning_to_run/o7uqhqj/
false
2
t1_o7uqh7s
[removed]
1
0
2026-02-28T09:20:47
[deleted]
true
null
0
o7uqh7s
false
/r/LocalLLaMA/comments/1r0q0vt/can_someone_who_trained_fine_tuned_on_nvfp4_can/o7uqh7s/
false
1
t1_o7uqgin
I have a feeling much of this is being driven by OpenAI lobbying in an attempt to get rid of competition. This is like two playground bullies squabbling for "top dog" status, while one of the bullies is slipping $20 bills into the principal's pockets to stick a thumb on the scales of the showdown. No matter who wins ...
1
0
2026-02-28T09:20:36
Look_0ver_There
false
null
0
o7uqgin
false
/r/LocalLLaMA/comments/1rgn4ki/president_trump_orders_all_federal_agencies_in/o7uqgin/
false
1
t1_o7uqfk1
[removed]
1
0
2026-02-28T09:20:21
[deleted]
true
null
0
o7uqfk1
false
/r/LocalLLaMA/comments/1rgxywo/new_claude_21_refuses_to_kill_a_python_process/o7uqfk1/
false
1
t1_o7uqdlz
Why tho? Everyone should be able to create their own closed models. OpenAI gets the flak because they claimed to be open but they are not.
1
0
2026-02-28T09:19:50
North-Act-7958
false
null
0
o7uqdlz
false
/r/LocalLLaMA/comments/1rgn4ki/president_trump_orders_all_federal_agencies_in/o7uqdlz/
false
1
t1_o7uqdh1
The LM Studio quant is really bad tho
2
0
2026-02-28T09:19:47
Lorian0x7
false
null
0
o7uqdh1
false
/r/LocalLLaMA/comments/1rgp97u/sooo_much_thinking/o7uqdh1/
false
2
t1_o7uqbf9
Same, I really would like to see a modern model architecture trained on one of the old datasets.
4
0
2026-02-28T09:19:15
MaruluVR
false
null
0
o7uqbf9
false
/r/LocalLLaMA/comments/1rgkc1b/back_in_my_day_localllama_were_the_pioneers/o7uqbf9/
false
4
t1_o7uqaup
How is that working?
1
0
2026-02-28T09:19:05
profcuck
false
null
0
o7uqaup
false
/r/LocalLLaMA/comments/1rgvma8/which_size_of_qwen35_are_you_planning_to_run/o7uqaup/
false
1
t1_o7uq9f2
No, that's an illegal interpretation of the law. Kegsbreath is full of shit, as always. They can only ask companies to not use Anthropic's tools for government projects. ie: if you're working on a weapon system for the government, you can't use Claude anymore. They have no authority to tell, eg: Amazon, to not do bus...
4
0
2026-02-28T09:18:42
truthputer
false
null
0
o7uq9f2
false
/r/LocalLLaMA/comments/1rgn4ki/president_trump_orders_all_federal_agencies_in/o7uq9f2/
false
4
t1_o7uq891
How? I also read the “manual”
2
0
2026-02-28T09:18:23
Skystunt
false
null
0
o7uq891
false
/r/LocalLLaMA/comments/1rgp97u/sooo_much_thinking/o7uq891/
false
2
t1_o7uq5jd
Depends on what you use it for; for some tasks like creative writing, old models were objectively better. "It's no X but Y" and em dashes aren't what you want to see every 5 seconds in creative writing.
4
0
2026-02-28T09:17:40
MaruluVR
false
null
0
o7uq5jd
false
/r/LocalLLaMA/comments/1rgkc1b/back_in_my_day_localllama_were_the_pioneers/o7uq5jd/
false
4
t1_o7uq292
For me, increasing ubatch size was the key to getting much higher PP speeds. The llama.cpp default of 512 is pretty low. If you increase it above 2048 you will also need to adjust batch size up. This will eat some VRAM, so you may need to offload more experts to CPU, and thus tg speed may suffer. It's a tradeoff.
1
0
2026-02-28T09:16:47
OsmanthusBloom
false
null
0
o7uq292
false
/r/LocalLLaMA/comments/1rgkmd7/ways_to_improve_prompt_processing_when_offloading/o7uq292/
false
1
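For anyone who wants to reproduce the ubatch/batch tradeoff described above (o7uq292) from Python, the llama-cpp-python bindings expose the same knobs; a rough sketch, with the model path as a placeholder:

```
from llama_cpp import Llama

# Larger n_ubatch speeds up prompt processing at the cost of VRAM;
# n_batch (the logical batch) should be kept >= n_ubatch.
llm = Llama(
    model_path="model.Q4_K_M.gguf",  # placeholder
    n_ctx=32768,
    n_batch=4096,
    n_ubatch=2048,   # llama.cpp default is 512, as the comment notes
    n_gpu_layers=-1, # reduce if the bigger batches no longer fit in VRAM
)
print(llm("Hello", max_tokens=8)["choices"][0]["text"])
```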
t1_o7upyqx
The static analysis idea is good but the harder problem is that a skill can look totally clean and still behave maliciously depending on what input the agent feeds it at runtime. Reading the file beforehand only catches the obvious stuff. What actually helped me was adding a runtime layer that monitors what each skill ...
1
0
2026-02-28T09:15:48
thecanonicalmg
false
null
0
o7upyqx
false
/r/LocalLLaMA/comments/1rgelk1/the_supply_chain_problem_nobody_talks_about_agent/o7upyqx/
false
1
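The runtime layer idea in o7upyqx can be prototyped in a few lines: wrap each skill so every invocation and its arguments get logged before execution. A minimal sketch (an audit log, not a sandbox, and not the commenter's actual tooling; the skill shown is hypothetical):

```
import functools, json, logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("skill-audit")

def audited(skill_fn):
    """Log every call into a skill, with its arguments, before running it."""
    @functools.wraps(skill_fn)
    def wrapper(*args, **kwargs):
        log.info("call %s %s", skill_fn.__name__,
                 json.dumps({"args": args, "kwargs": kwargs}, default=str))
        return skill_fn(*args, **kwargs)
    return wrapper

@audited
def read_file(path: str) -> str:  # hypothetical example skill
    with open(path) as f:
        return f.read()
```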
t1_o7upvwj
Maybe optimized using Codex instead of Opus /s
2
0
2026-02-28T09:15:04
Charming_Support726
false
null
0
o7upvwj
false
/r/LocalLLaMA/comments/1rggpu9/glm5code/o7upvwj/
false
2
t1_o7upo7f
Sadly it is! I am using the finely crafted quant of mratsim/MiniMax-M2.5-BF16-INT4-AWQ, and two days ago it ruined a full day of dev sprints. The model got itself into the "singular vs plural" battle and eventually looped for 2 hours trying to fix an otherwise simple CI/CD pipeline integration script.
1
0
2026-02-28T09:13:03
One-Macaron6752
false
null
0
o7upo7f
false
/r/LocalLLaMA/comments/1rgel19/new_qwen3535ba3b_unsloth_dynamic_ggufs_benchmarks/o7upo7f/
false
1
t1_o7upgsz
After this comment, I'm going to try it out. Thanks for the recommendation and those who upvoted 😃
10
0
2026-02-28T09:11:10
nakedspirax
false
null
0
o7upgsz
false
/r/LocalLLaMA/comments/1rgtxry/is_qwen35_a_coding_game_changer_for_anyone_else/o7upgsz/
false
10
t1_o7upgnd
[deleted]
1
0
2026-02-28T09:11:07
[deleted]
true
null
0
o7upgnd
false
/r/LocalLLaMA/comments/1rgokw1/a_monthly_update_to_my_where_are_openweight/o7upgnd/
false
1
t1_o7upfki
Is everyone on this thread time travelers?
11
0
2026-02-28T09:10:51
inaem
false
null
0
o7upfki
false
/r/LocalLLaMA/comments/1rgxywo/new_claude_21_refuses_to_kill_a_python_process/o7upfki/
false
11
t1_o7upd1r
llama.cpp is way too slow for my concurrent workload; I need vllm and enough VRAM for context. I did try one of the unsloth Q3 quants and it was noticeably worse than MXFP4 and at best 20% faster. Still, we are talking 10s of tokens per second and I need 1000s per second. Qwen3.5-27b BF16 for captioning + Qwen3-VL-235-a22b...
1
0
2026-02-28T09:10:12
reto-wyss
false
null
0
o7upd1r
false
/r/LocalLLaMA/comments/1rgvma8/which_size_of_qwen35_are_you_planning_to_run/o7upd1r/
false
1
t1_o7upcld
I strongly recommend the IQ4NL quant for 397BA17B. It performed considerably better than all other 4-bit quants on my tests and was basically equivalent to Q6KXL.
2
0
2026-02-28T09:10:05
twack3r
false
null
0
o7upcld
false
/r/LocalLLaMA/comments/1rgvma8/which_size_of_qwen35_are_you_planning_to_run/o7upcld/
false
2
t1_o7upbwx
I read this as the results are slightly more accurate with 27B, while it takes a bit less memory and has much slower inference
9
0
2026-02-28T09:09:54
Abject-Kitchen3198
false
null
0
o7upbwx
false
/r/LocalLLaMA/comments/1rgtxry/is_qwen35_a_coding_game_changer_for_anyone_else/o7upbwx/
false
9
t1_o7upaxj
When there is too much garbage in its system prompt… haha
0
0
2026-02-28T09:09:39
Diligent-Builder7762
false
null
0
o7upaxj
false
/r/LocalLLaMA/comments/1rgxywo/new_claude_21_refuses_to_kill_a_python_process/o7upaxj/
false
0
t1_o7up8i0
Weird, I got this: You can use: killall python3 Or if you want to force-kill them: killall -9 python3 Other options: * `pkill` — `pkill -f python` kills any process with "python" in its command line (catches scripts too). * `htop` — interactive process viewer where you can filter by "python" and...
1
0
2026-02-28T09:09:02
Efficient_Ad_4162
false
null
0
o7up8i0
false
/r/LocalLLaMA/comments/1rgxywo/new_claude_21_refuses_to_kill_a_python_process/o7up8i0/
false
1
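The shell one-liners quoted above (o7up8i0) have a Python equivalent via psutil, which is handy when you want to exclude the current process or add your own filtering; a sketch:

```
import psutil

# Terminate every other process whose name contains "python",
# roughly what `killall python3` does, then force-kill stragglers
# the way `killall -9` would.
me = psutil.Process().pid
victims = [p for p in psutil.process_iter(["pid", "name"])
           if p.info["pid"] != me
           and "python" in (p.info["name"] or "").lower()]
for p in victims:
    p.terminate()                      # polite SIGTERM first
gone, alive = psutil.wait_procs(victims, timeout=3)
for p in alive:
    p.kill()                           # SIGKILL anything still alive
```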
t1_o7up6yc
You can use built in MTP like this in vLLM: ``` CUDA_VISIBLE_DEVICES=0,1,2,3 vllm serve Qwen3.5-27B-FP8 -tp 4 --max-model-len 256k --gpu-memory-utilization 0.88 --max-num-seqs 48 --tool-call-parser qwen3_coder --reasoning-parser qwen3 --enable-auto-tool-choice --max_num_batched_tokens 8192 --enable-prefix-caching ...
3
0
2026-02-28T09:08:37
lly0571
false
null
0
o7up6yc
false
/r/LocalLLaMA/comments/1rgwryb/speculative_decoding_qwen35_27b/o7up6yc/
false
3
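The truncated `vllm serve` command in o7up6yc maps fairly directly onto vLLM's offline Python API; a sketch of the non-speculative flags only, since the exact spelling of the MTP/speculative options varies across vLLM versions (model name is a placeholder):

```
from vllm import LLM, SamplingParams

llm = LLM(
    model="some-org/some-fp8-model",  # placeholder
    tensor_parallel_size=4,           # -tp 4
    max_model_len=262144,             # --max-model-len
    gpu_memory_utilization=0.88,
    max_num_seqs=48,
    enable_prefix_caching=True,
)
out = llm.generate(["Hello"], SamplingParams(max_tokens=8))
print(out[0].outputs[0].text)
```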
t1_o7up5l6
Pacifist AI. They don't want to hurt their own side.
1
0
2026-02-28T09:08:17
Gokudomatic
false
null
0
o7up5l6
false
/r/LocalLLaMA/comments/1rgxywo/new_claude_21_refuses_to_kill_a_python_process/o7up5l6/
false
1
t1_o7uoyw8
I'd wait for a smaller MoE similar to gpt-oss-20b
1
0
2026-02-28T09:06:35
Significant_Fig_7581
false
null
0
o7uoyw8
false
/r/LocalLLaMA/comments/1rgp2nu/anyone_doing_speculative_decoding_with_the_new/o7uoyw8/
false
1
t1_o7uoydm
Totally, they are for a much larger class of model. I'm sure GLM 4.7 Flash is also not going to be super competitive either, even though I would hope it comes close. I meant a head-to-head: qwen3.5 35B against these big-boy 300B and 700B models (Kimi K2.5 is 1T). Surely it comes up short? If it comes close it'd be super im...
1
0
2026-02-28T09:06:27
michaelsoft__binbows
false
null
0
o7uoydm
false
/r/LocalLLaMA/comments/1rgtxry/is_qwen35_a_coding_game_changer_for_anyone_else/o7uoydm/
false
1
t1_o7uoy52
there's never such a thing as 100% though, is there? Not in what I do!
0
0
2026-02-28T09:06:23
richardbaxter
false
null
0
o7uoy52
false
/r/LocalLLaMA/comments/1rg8dex/pewdiepie_finetuned_qwen25coder32b_to_beat/o7uoy52/
false
0
t1_o7uov4d
Check the chat template; it depends on what your backend is... several tools have bugs with it right now.
1
0
2026-02-28T09:05:35
maho_Yun
false
null
0
o7uov4d
false
/r/LocalLLaMA/comments/1rg05k7/qwen_35_122b_a10b_3584_score_on_natint_ugi/o7uov4d/
false
1
t1_o7uos64
Why would they be in panic mode? They were warned if you do X, Y will happen. They did X and Y happened. If they were in panic mode they would not have done X.
8
0
2026-02-28T09:04:48
Efficient_Ad_4162
false
null
0
o7uos64
false
/r/LocalLLaMA/comments/1rguty0/get_your_local_models_in_order_anthropic_just_got/o7uos64/
false
8
t1_o7uom20
100%
2
0
2026-02-28T09:03:15
Significant_Fig_7581
false
null
0
o7uom20
false
/r/LocalLLaMA/comments/1rgoygs/does_qwen35_35b_outperform_qwen3_coder_next_80b/o7uom20/
false
2
t1_o7uok1o
They just have to survive until the next US election. Because, and don't get confused, a company's first thought is profit, not morals. Choosing to not work with Republicans is a nod to Democrats.
4
0
2026-02-28T09:02:44
Ranter619
false
null
0
o7uok1o
false
/r/LocalLLaMA/comments/1rguty0/get_your_local_models_in_order_anthropic_just_got/o7uok1o/
false
4
t1_o7uohcr
Yeah, that's illegal bullshit and not what the supply chain laws permit. That would be unprecedented overreach and government meddling in private businesses. Your first mistake is believing a single word that those freaks say, this diseased administration lies all the time. The law says that they're allowed to preven...
1
0
2026-02-28T09:02:04
truthputer
false
null
0
o7uohcr
false
/r/LocalLLaMA/comments/1rgn4ki/president_trump_orders_all_federal_agencies_in/o7uohcr/
false
1
t1_o7uoh90
I tried the heretic version of 27b. It's good, but often gets confused, saying things that don't make sense. I think a finetune focused on roleplaying would make it better. Right now glm steam 106b works best for me.
5
0
2026-02-28T09:02:02
Gringe8
false
null
0
o7uoh90
false
/r/LocalLLaMA/comments/1rgvma8/which_size_of_qwen35_are_you_planning_to_run/o7uoh90/
false
5
t1_o7uoefd
where did you get that from?
1
0
2026-02-28T09:01:17
North-Act-7958
false
null
0
o7uoefd
false
/r/LocalLLaMA/comments/1rgn4ki/president_trump_orders_all_federal_agencies_in/o7uoefd/
false
1
t1_o7uoctj
Nice benchmark! Very useful. Do you have Opus 4.6? That model has solid long-context recall plus 68% on ARC-AGI-2, so it could do really well here (and looking at Sonnet 4.6, it can be assumed).
2
0
2026-02-28T09:00:52
Sockand2
false
null
0
o7uoctj
false
/r/LocalLLaMA/comments/1rg9lli/little_qwen_35_27b_and_qwen_35ba3b_models_did/o7uoctj/
false
2
t1_o7uocpw
I don't get how this guy can still do what he does. No uprising. Nothing
1
0
2026-02-28T09:00:50
Justify_87
false
null
0
o7uocpw
false
/r/LocalLLaMA/comments/1rgn4ki/president_trump_orders_all_federal_agencies_in/o7uocpw/
false
1
t1_o7uo994
I'm getting about 7-8 tps output.
1
0
2026-02-28T08:59:56
Warm-Attempt7773
false
null
0
o7uo994
false
/r/LocalLLaMA/comments/1rgp97u/sooo_much_thinking/o7uo994/
false
1
t1_o7uo6i7
heh, models are monolithic
4
0
2026-02-28T08:59:13
IrisColt
false
null
0
o7uo6i7
false
/r/LocalLLaMA/comments/1rgkc1b/back_in_my_day_localllama_were_the_pioneers/o7uo6i7/
false
4
t1_o7uo65s
You should try 397B at UD Q3KXL. That model, I think, would beat anything else you can fit on your dual GPUs. You could likely even hit full context with Q8 KV cache.
2
0
2026-02-28T08:59:07
My_Unbiased_Opinion
false
null
0
o7uo65s
false
/r/LocalLLaMA/comments/1rgvma8/which_size_of_qwen35_are_you_planning_to_run/o7uo65s/
false
2
t1_o7uo46e
This!
1
0
2026-02-28T08:58:37
IrisColt
false
null
0
o7uo46e
false
/r/LocalLLaMA/comments/1rgkc1b/back_in_my_day_localllama_were_the_pioneers/o7uo46e/
false
1
t1_o7uo2rk
Is IQ4_K_XS also retired? I don't see it anymore in the model card. I was using it because it seemed to be the best fit for my RTX 4000 with 20GB VRAM. Are there new recommendations for which quant might work best to fit entirely in 20GB?
1
0
2026-02-28T08:58:16
ishbuggy
false
null
0
o7uo2rk
false
/r/LocalLLaMA/comments/1rgel19/new_qwen3535ba3b_unsloth_dynamic_ggufs_benchmarks/o7uo2rk/
false
1
t1_o7uo1cu
SODA is the same but 'the' is spelled wrong
1
0
2026-02-28T08:57:54
philmarcracken
false
null
0
o7uo1cu
false
/r/LocalLLaMA/comments/1rgokw1/a_monthly_update_to_my_where_are_openweight/o7uo1cu/
false
1
t1_o7unv40
Anthropic won't go anywhere; they could just move overseas. They are the best in the field right now.
2
0
2026-02-28T08:56:18
Budget-Juggernaut-68
false
null
0
o7unv40
false
/r/LocalLLaMA/comments/1rguty0/get_your_local_models_in_order_anthropic_just_got/o7unv40/
false
2
t1_o7unv56
Just for fun, I tried running that same prompt using the same system prompt, and it's obviously answering way too casually and giving really long responses. Not the same vibes I got with qwen 3.5-27B. It's very long, I'm just gonna provide some part of it from the beginning: >Okay, you want depth, not dictionary d...
2
0
2026-02-28T08:56:18
theskilled42
false
null
0
o7unv56
false
/r/LocalLLaMA/comments/1rgpwn5/qwen_3527b_punches_waaaaay_above_its_weight_with/o7unv56/
false
2
t1_o7unuoj
I also really liked the iq2_m for some reason, the old one they removed for now that someone else re-uploaded. For even more speed you can force thinking off, and it still ran fine enough for me; on 12GB VRAM + RAM I get 50 tps, though I needed to requantize the mmproj to be smaller too (which is fine since I rarely use images...
1
0
2026-02-28T08:56:11
ethereal_intellect
false
null
0
o7unuoj
false
/r/LocalLLaMA/comments/1rgtxry/is_qwen35_a_coding_game_changer_for_anyone_else/o7unuoj/
false
1
t1_o7unrlu
There is always someone with a pre-squeeze stash.
1
0
2026-02-28T08:55:23
TomLucidor
false
null
0
o7unrlu
false
/r/LocalLLaMA/comments/1rg87bj/qwen3535ba3b_running_on_a_raspberry_pi_5_16gb_and/o7unrlu/
false
1
t1_o7unrhw
$100,000
1
0
2026-02-28T08:55:22
undercovernerd5
false
null
0
o7unrhw
false
/r/LocalLLaMA/comments/1rd80gx/i_just_saw_something_amazing/o7unrhw/
false
1
t1_o7unmrb
I wonder when plug-and-play with older RPi boards and ultra-small linear attention models would be possible.
1
0
2026-02-28T08:54:11
TomLucidor
false
null
0
o7unmrb
false
/r/LocalLLaMA/comments/1rg87bj/qwen3535ba3b_running_on_a_raspberry_pi_5_16gb_and/o7unmrb/
false
1
t1_o7unkng
Training uses surrogate gradient backprop — that's the scaler.scale(loss).backward() line. The SNN architecture is native (binary spikes, membrane potentials, thresholds, refractory periods), but training relies on surrogate gradients through the ATan function to make spikes differentiable. STDP is implemented in chat....
1
0
2026-02-28T08:53:40
zemondza
false
null
0
o7unkng
false
/r/LocalLLaMA/comments/1rfddpi/training_a_144m_spiking_neural_network_for_text/o7unkng/
false
1
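The ATan surrogate trick described in o7unkng (hard spikes in the forward pass, a smooth gradient in the backward pass) is a few lines of PyTorch; a generic sketch of the technique, not the OP's code:

```
import torch

class ATanSpike(torch.autograd.Function):
    """Heaviside spike forward; ATan surrogate gradient backward."""
    alpha = 2.0  # controls surrogate sharpness

    @staticmethod
    def forward(ctx, membrane, threshold):
        ctx.save_for_backward(membrane - threshold)
        return (membrane >= threshold).float()  # binary spikes

    @staticmethod
    def backward(ctx, grad_out):
        (u,) = ctx.saved_tensors
        # derivative of (1/pi)*arctan(pi*alpha*u/2): a smooth
        # stand-in for the Dirac delta of the hard threshold
        a = ATanSpike.alpha
        surrogate = (a / 2) / (1 + (torch.pi * a * u / 2) ** 2)
        return grad_out * surrogate, None

spikes = ATanSpike.apply(torch.randn(8), 1.0)  # usage
```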
t1_o7unkin
No, that's illegal overreach and not what the "Supply Chain Risk" laws say. The Department of Defense can prevent contractors from using Anthropic products WHILE DOING DOD WORK, but they can't prevent contractors from using Anthropic products for other purposes. Any private or public company, individual or organizati...
5
0
2026-02-28T08:53:38
truthputer
false
null
0
o7unkin
false
/r/LocalLLaMA/comments/1rgn4ki/president_trump_orders_all_federal_agencies_in/o7unkin/
false
5
t1_o7unikj
shorter system prompt ---> more exploration
1
0
2026-02-28T08:53:08
Haoranmq
false
null
0
o7unikj
false
/r/LocalLLaMA/comments/1rguxyo/i_ran_3830_inference_runs_to_measure_how_system/o7unikj/
false
1
t1_o7uniao
>I use it for roleplay though The Heretic version is sufficiently good for role-playing.
4
0
2026-02-28T08:53:04
nihnuhname
false
null
0
o7uniao
false
/r/LocalLLaMA/comments/1rgvma8/which_size_of_qwen35_are_you_planning_to_run/o7uniao/
false
4
t1_o7unfzt
So it is more laptop-oriented I guess?
1
0
2026-02-28T08:52:29
TomLucidor
false
null
0
o7unfzt
false
/r/LocalLLaMA/comments/1rg87bj/qwen3535ba3b_running_on_a_raspberry_pi_5_16gb_and/o7unfzt/
false
1
t1_o7uncs9
FYI, I moved away from LM Studio last night and was hitting bigger numbers with llama.cpp.
3
0
2026-02-28T08:51:40
nakedspirax
false
null
0
o7uncs9
false
/r/LocalLLaMA/comments/1rgvma8/which_size_of_qwen35_are_you_planning_to_run/o7uncs9/
false
3
t1_o7unc60
Nice. That should be getting more attention!
1
0
2026-02-28T08:51:31
ComplexType568
false
null
0
o7unc60
false
/r/LocalLLaMA/comments/1rg2yl7/qwen35_27b_at_q3_k_m_passes_the_car_wash_test/o7unc60/
false
1
t1_o7un90i
I was waiting for Gemma 4; Gemma 3 gave me fantastic results in my personal projects.
2
0
2026-02-28T08:50:44
vladlearns
false
null
0
o7un90i
false
/r/LocalLLaMA/comments/1rghfqj/february_is_almost_over_are_you_satisfied/o7un90i/
false
2
t1_o7un7vf
Not sure where you got the data from, but just at a quick glance, the math ain't mathing... Gemma 3 12B at Q4_K_M is marked as Good, using 76% of VRAM, but the weights alone are 6.8GiB, i.e. 85% of that 8GB VRAM, so it's definitely not fitting the KV cache for that "131K" context. For another example, the Llama 3.2 3B at ...
1
0
2026-02-28T08:50:26
tmvr
false
null
0
o7un7vf
false
/r/LocalLLaMA/comments/1rg94wu/llmfit_one_command_to_find_what_model_runs_on/o7un7vf/
false
1
t1_o7un5t0
Anyone know where to find BnB models? Interested in doing post training RL but BnB Q4 for newly released models are sparse.
1
0
2026-02-28T08:49:55
bestsniperNAxoxo
false
null
0
o7un5t0
false
/r/LocalLLaMA/comments/1rfds1h/qwen3535ba3b_q4_quantization_comparison/o7un5t0/
false
1
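On the question in o7un5t0: when nobody has published a BnB checkpoint yet, transformers can usually quantize on the fly from the BF16 weights with a BitsAndBytesConfig, which also works as a starting point for 4-bit RL fine-tuning setups; the model name below is a placeholder:

```
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

bnb = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
    bnb_4bit_use_double_quant=True,
)
model = AutoModelForCausalLM.from_pretrained(
    "some-org/some-model",  # placeholder
    quantization_config=bnb,
    device_map="auto",      # spread across available GPUs/CPU
)
```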
t1_o7un3jj
Yes, the thinking is ridiculous. But it's nice to have, if precision is more important than time. My 3.5-27B oneshot the joke with thinking, while it got it only once in five tries without thinking. The five tries were still way faster, yet the difference is obvious.
1
0
2026-02-28T08:49:21
lisploli
false
null
0
o7un3jj
false
/r/LocalLLaMA/comments/1rgu3s0/qwen3535ba3b_be_overthinking_like/o7un3jj/
false
1
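Forcing thinking off, as weighed in o7un3jj and a few comments below, is a template-level switch on the Qwen3 line; assuming the 3.5 models discussed here keep the same kwarg (unverified), a sketch:

```
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("Qwen/Qwen3-8B")  # known-good example
messages = [{"role": "user", "content": "Explain the joke."}]
prompt = tok.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
    enable_thinking=False,  # skip the <think> block entirely
)
```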
t1_o7umzzc
the dense vs MoE thing has real practical implications. 27B dense with no thinking overhead is snappier than a 35B MoE when context fills up, because attention over sparse activated params degrades past ~60-70% fill. the "do not provide a lame or generic answer" prompt is clever, basically forcing the model out of its...
5
0
2026-02-28T08:48:25
BreizhNode
false
null
0
o7umzzc
false
/r/LocalLLaMA/comments/1rgpwn5/qwen_3527b_punches_waaaaay_above_its_weight_with/o7umzzc/
false
5
t1_o7umyss
This isn't the first time they haven't supported techniques like this. I really hope they can implement it so we can all get a meaningful speed boost on the same hardware.
2
0
2026-02-28T08:48:08
Zestyclose_Yak_3174
false
null
0
o7umyss
false
/r/LocalLLaMA/comments/1rgp2nu/anyone_doing_speculative_decoding_with_the_new/o7umyss/
false
2
t1_o7umw51
You can do it in the presets; it's much more flexible, and you can switch modes whenever you need.
6
0
2026-02-28T08:47:27
FORNAX_460
false
null
0
o7umw51
false
/r/LocalLLaMA/comments/1rgswkc/turn_off_thinking_in_lm_studio/o7umw51/
false
6
t1_o7umrpi
the Chinese model ban angle is the part that could actually land. not Anthropic specifically, but "any model not approved by [agency]" becoming procurement language. which would perversely push serious users toward self-hosted or European infrastructure faster than any privacy argument has managed to.
37
0
2026-02-28T08:46:18
BreizhNode
false
null
0
o7umrpi
false
/r/LocalLLaMA/comments/1rguty0/get_your_local_models_in_order_anthropic_just_got/o7umrpi/
false
37
t1_o7umpba
Yes. Claude explained it pretty well
0
1
2026-02-28T08:45:42
Marenz
false
null
0
o7umpba
false
/r/LocalLLaMA/comments/1rguty0/get_your_local_models_in_order_anthropic_just_got/o7umpba/
false
0
t1_o7ummyz
16GB VRAM is enough to run Q4 of 27B, or Q8 of a 13-14B, which makes a real difference for reasoning tasks. the thing that trips people up most with Ollama is the default context -- it's 2048, way too short for anything useful. set OLLAMA_NUM_CTX=8192 at minimum and things get a lot more consistent.
2
0
2026-02-28T08:45:07
BreizhNode
false
null
0
o7ummyz
false
/r/LocalLLaMA/comments/1rgrlzv/is_hosting_a_local_llm_really_as_crappy_of_an/o7ummyz/
false
2
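The context-length fix suggested in o7ummyz can also be applied per request through the Ollama Python client, without touching server environment variables (the model tag is a placeholder):

```
import ollama

resp = ollama.chat(
    model="some-model:latest",  # placeholder tag
    messages=[{"role": "user", "content": "Summarize: ..."}],
    options={"num_ctx": 8192},  # default is far lower, as the comment notes
)
print(resp["message"]["content"])
```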
t1_o7ummrl
>They want to use buggy AI vision systems to autonomously kill people. Where did you read this? Got a source?
1
0
2026-02-28T08:45:04
fodacao
false
null
0
o7ummrl
false
/r/LocalLLaMA/comments/1rgn4ki/president_trump_orders_all_federal_agencies_in/o7ummrl/
false
1
t1_o7umknv
for openclaw-style server management tasks, Qwen3.5-27B is more than enough. Grok 4.1 fast is fast, not particularly more capable for tool use. the tricky part isn't the model, it's making sure your server has enough RAM to avoid context truncation mid-session. 64GB+ is where it gets reliable for longer agent loops.
0
0
2026-02-28T08:44:31
BreizhNode
false
null
0
o7umknv
false
/r/LocalLLaMA/comments/1rgq0vc/can_a_local_hosted_llm_keep_up_with_grok_41_fast/o7umknv/
false
0
t1_o7umii3
Vulkan or ROCm? I have a 5700 XT too. What quant are you using, and what are your generation and prefill speeds? Thanks
2
0
2026-02-28T08:43:59
kironlau
false
null
0
o7umii3
false
/r/LocalLLaMA/comments/1rgtxry/is_qwen35_a_coding_game_changer_for_anyone_else/o7umii3/
false
2
t1_o7umhz0
NovelAI has seriously dropped the ball for all the reasons OP listed, and there have been no new features in a long time. A clean writing tool with the ability to quickly and easily integrate AI for various tasks would be amazing.
3
0
2026-02-28T08:43:50
capitol_thought
false
null
0
o7umhz0
false
/r/LocalLLaMA/comments/1rgiimd/discussion_is_it_time_for_a_prosefirst_successor/o7umhz0/
false
3
t1_o7umh0j
Crazy how nobody really works on SLMs. 1.7B, or somewhere in that ballpark. Coherent ones. A swarm of small models, or however the culture defines it, should be better than large models for a specified use case.
1
0
2026-02-28T08:43:35
Hot_Inspection_9528
false
null
0
o7umh0j
false
/r/LocalLLaMA/comments/1rgvma8/which_size_of_qwen35_are_you_planning_to_run/o7umh0j/
false
1
t1_o7umg2c
Well yes. If you're memory bandwidth limited (but plenty of it) go MoE (e.g. iGPU). If you have little memory (but high bandwidth) go dense (e.g. GPU).
12
0
2026-02-28T08:43:21
debackerl
false
null
0
o7umg2c
false
/r/LocalLLaMA/comments/1rgpwn5/qwen_3527b_punches_waaaaay_above_its_weight_with/o7umg2c/
false
12
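The dense-vs-MoE rule of thumb in o7umg2c follows from a simple decode-speed model: each generated token streams the active weights through memory once, so tokens/s is roughly bandwidth divided by active-weight bytes. A back-of-envelope sketch with illustrative, unmeasured numbers:

```
def est_tps(bandwidth_gb_s, active_params_b, bytes_per_param):
    # tokens/s ~= memory bandwidth / bytes of active weights per token
    return bandwidth_gb_s / (active_params_b * bytes_per_param)

# 27B dense at ~Q4 (~0.55 bytes/param) on a 1 TB/s GPU,
# vs a 3B-active MoE on a ~100 GB/s iGPU:
print(round(est_tps(1000, 27, 0.55)), "tok/s dense, fast GPU")
print(round(est_tps(100, 3, 0.55)), "tok/s MoE, iGPU")
```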
t1_o7umbpv
They also don't have the authority to cut Medicaid funding to Minnesota, or impose tariffs, or abduct the leader of Venezuela. But they do it anyway. If you're going to pick a defining feature of Trump's presidency, it's that the Executive does illegal things continuously.
22
0
2026-02-28T08:42:14
_bones__
false
null
0
o7umbpv
false
/r/LocalLLaMA/comments/1rguty0/get_your_local_models_in_order_anthropic_just_got/o7umbpv/
false
22
t1_o7um6hf
[deleted]
1
0
2026-02-28T08:40:54
[deleted]
true
null
0
o7um6hf
false
/r/LocalLLaMA/comments/1rgn4ki/president_trump_orders_all_federal_agencies_in/o7um6hf/
false
1
t1_o7um1r4
It's time to buy puts on Palantir lol... they are screwed after this.
1
0
2026-02-28T08:39:42
chuan_l
false
null
0
o7um1r4
false
/r/LocalLLaMA/comments/1rgn4ki/president_trump_orders_all_federal_agencies_in/o7um1r4/
false
1
t1_o7uly4d
You can't debug what you can't see. Log everything.
3
0
2026-02-28T08:38:45
MotokoAGI
false
null
0
o7uly4d
false
/r/LocalLLaMA/comments/1rgwyqi/agent_debugging_is_a_mess_am_i_the_only_one/o7uly4d/
false
3
t1_o7ulx1q
Some of these subs centered around AI really do just have comments randomly downvoted to fuck. Though your first comment could've been seen as sarcastic. A sort of "welcome to the club, glad you finally caught up" sort of thing.
5
0
2026-02-28T08:38:28
TakuyaTeng
false
null
0
o7ulx1q
false
/r/LocalLLaMA/comments/1rgpwn5/qwen_3527b_punches_waaaaay_above_its_weight_with/o7ulx1q/
false
5
t1_o7ulvrj
I thought it was really great. I'd like to incorporate it into the app I'm currently developing. Is it not publicly available yet?
1
0
2026-02-28T08:38:08
amania_jp
false
null
0
o7ulvrj
false
/r/LocalLLaMA/comments/1ql9jzp/roast_me_built_an_sdk_for_ios_apps_to_run_ai_on/o7ulvrj/
false
1
t1_o7ulopk
there was a mass update on qwen3.5 in unsloth 12h ago, they fixed tool calling.
2
0
2026-02-28T08:36:19
arm2armreddit
false
null
0
o7ulopk
false
/r/LocalLLaMA/comments/1rg045u/overwhelmed_by_so_many_model_releases_within_a/o7ulopk/
false
2
t1_o7uljnn
Although the 35B MoE is faster, the 27B is superior in real-world tasks and knowledge. I also found that instruction following with sophisticated and long prompts works best with 27B and 122B. I am still trying to determine how big the gap actually is between those two. On benchmarks they are nearly identical, yet I don't beli...
3
0
2026-02-28T08:35:02
Zestyclose_Yak_3174
false
null
0
o7uljnn
false
/r/LocalLLaMA/comments/1rgvma8/which_size_of_qwen35_are_you_planning_to_run/o7uljnn/
false
3
t1_o7ulj29
Which is more than can be said for the OAI SOTA... and let's not forget, GPT5.2 probably spent the same number of tokens working out whether the request was safe to answer, and the same again figuring out if it was causing harm... and then decided, probably because of excessive safety in its CoT, that it should say walking.
1
0
2026-02-28T08:34:52
Temporary-Mix8022
false
null
0
o7ulj29
false
/r/LocalLLaMA/comments/1rgu3s0/qwen3535ba3b_be_overthinking_like/o7ulj29/
false
1
t1_o7ulghs
That's not going to run on most consumer hardware unless heavily quantized.
0
0
2026-02-28T08:34:12
Usual-Orange-4180
false
null
0
o7ulghs
false
/r/LocalLLaMA/comments/1rgq0vc/can_a_local_hosted_llm_keep_up_with_grok_41_fast/o7ulghs/
false
0
t1_o7ulfr5
I tried [https://huggingface.co/unsloth/GLM-4.7-Flash-GGUF](https://huggingface.co/unsloth/GLM-4.7-Flash-GGUF) and couldn't get it working in any useful capacity. None of their other models are small enough to run on my hardware.
1
0
2026-02-28T08:34:00
paulgear
false
null
0
o7ulfr5
false
/r/LocalLLaMA/comments/1rgtxry/is_qwen35_a_coding_game_changer_for_anyone_else/o7ulfr5/
false
1
t1_o7ulcdn
None of those, waiting for the < 10B models.
1
0
2026-02-28T08:33:08
hum_ma
false
null
0
o7ulcdn
false
/r/LocalLLaMA/comments/1rgvma8/which_size_of_qwen35_are_you_planning_to_run/o7ulcdn/
false
1
t1_o7ul6rg
people are usually reluctant to have horny chats with cloud models, especially if their preferences are kinky/rare/obviously illegal -- hence **local** LLMs. And people usually do not have terabytes of VRAM at home so they have to tinker with software/hardware optimizations to squeeze the most out of it.
3
0
2026-02-28T08:31:40
MelodicRecognition7
false
null
0
o7ul6rg
false
/r/LocalLLaMA/comments/1rgkc1b/back_in_my_day_localllama_were_the_pioneers/o7ul6rg/
false
3
t1_o7ul5sp
How many tokens/s do you expect to get in that card, and how many are acceptable for your workflows? I’m looking to upgrade my system and I’m curious about the capabilities of different card/model combinations. I found <30t/s is just a little too slow to use as a coding assistant, and dense models do seem to come wi...
2
0
2026-02-28T08:31:26
Manamultus
false
null
0
o7ul5sp
false
/r/LocalLLaMA/comments/1rgvma8/which_size_of_qwen35_are_you_planning_to_run/o7ul5sp/
false
2
t1_o7ul3yb
Anthropic getting killed by friendly fire by the *US government* after trying to dogpile China for forever is some kind of tragic irony
40
0
2026-02-28T08:30:56
tengo_harambe
false
null
0
o7ul3yb
false
/r/LocalLLaMA/comments/1rguty0/get_your_local_models_in_order_anthropic_just_got/o7ul3yb/
false
40