name
string
body
string
score
int64
controversiality
int64
created
timestamp[us]
author
string
collapsed
bool
edited
timestamp[us]
gilded
int64
id
string
locked
bool
permalink
string
stickied
bool
ups
int64
t1_o7ul218
I just keep running Q3 Next Coder as it's got a much more recent cutoff date, so it knows about newer updates to the APIs I use. Q3.5 122B is useful to me only because it's multimodal (it can look at images), so it's the only one I'm keeping from the 3.5 family. The smaller ones I don't care about since I'm not using agentic stuff. I on...
1
0
2026-02-28T08:30:26
dmter
false
null
0
o7ul218
false
/r/LocalLLaMA/comments/1rgvma8/which_size_of_qwen35_are_you_planning_to_run/o7ul218/
false
1
t1_o7ukxoa
I've seen that with smaller one-liner prompts without a sys prompt, the models tend to go crazy. Can you verify whether this happens to you with OpenCode?
1
0
2026-02-28T08:29:19
Old-Sherbert-4495
false
null
0
o7ukxoa
false
/r/LocalLLaMA/comments/1rgl42y/qwen_35_122b_hallucinates_horribly/o7ukxoa/
false
1
t1_o7ukxkv
What's being said is that Ryzen 7 isn't a component; it's a CPU class or designation. Ryzen 7 covers multiple CPUs, so it doesn't tell anyone the actual hardware being tested. DDR5 doesn't tell anyone anything useful either, because RAM speed is a factor and you didn't include it.
0
0
2026-02-28T08:29:17
Silly-Protection7389
false
null
0
o7ukxkv
false
/r/LocalLLaMA/comments/1rgixk7/accuracy_vs_speed_my_top_5/o7ukxkv/
false
0
t1_o7ukr3b
It's really unfortunate that its context size is so low, because it really is an amazing model.
2
0
2026-02-28T08:27:39
toothpastespiders
false
null
0
o7ukr3b
false
/r/LocalLLaMA/comments/1rgkc1b/back_in_my_day_localllama_were_the_pioneers/o7ukr3b/
false
2
t1_o7ukl7m
I'm also working on a desktop companion AI. Since you said Grok is better at role playing, I'm kinda curious whether it's better than Claude, since that's the AI I'm currently using. I'd like to go fully local once I've finished it and managed to test it for a few months.
1
0
2026-02-28T08:26:08
Sakubo0018
false
null
0
o7ukl7m
false
/r/LocalLLaMA/comments/1q8uiu9/building_a_desktop_ai_companion_with_memory/o7ukl7m/
false
1
t1_o7ukkrl
The way I understood it is that continuing the prompt after an LM restart fills the KV cache to the exact same state as before the restart, plus the randomness of the new tokens appended to the prompt. That being said, the next challenge is to understand how the LM runs as an automaton. How does it feed new "thoughts" to itself and what kin...
1
0
2026-02-28T08:26:02
Ambitious-Sense-7773
false
null
0
o7ukkrl
false
/r/LocalLLaMA/comments/1rg0ir2/after_using_local_models_for_one_month_i_learned/o7ukkrl/
false
1
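As a side note on the question above, here is a minimal sketch in Python of the autoregressive loop being described; next_token_logits is a hypothetical stand-in for a real model's forward pass, and the vocabulary size, EOS id, and temperature are made-up toy values. Each sampled token is appended back onto the context, which is how the model "feeds new thoughts to itself", and with a fixed context the forward pass is deterministic, which is why replaying the same prompt rebuilds the same KV-cache state.

import math
import random

VOCAB_SIZE = 100   # toy vocabulary; a real model has tens of thousands of tokens
EOS = 0            # assumed end-of-sequence token id

def next_token_logits(context):
    # Hypothetical stand-in for a real model's forward pass. It is
    # deterministic given the same context, which is why replaying the same
    # prompt after a restart rebuilds the same KV-cache state.
    rng = random.Random(hash(tuple(context)))
    return [rng.gauss(0.0, 1.0) for _ in range(VOCAB_SIZE)]

def generate(prompt_tokens, max_new_tokens=20, temperature=0.8):
    context = list(prompt_tokens)
    for _ in range(max_new_tokens):
        logits = next_token_logits(context)
        # Softmax sampling is the only source of randomness here; greedy
        # argmax decoding would be fully reproducible.
        exps = [math.exp(l / temperature) for l in logits]
        total = sum(exps)
        token = random.choices(range(VOCAB_SIZE), weights=[e / total for e in exps])[0]
        if token == EOS:
            break
        context.append(token)  # the new "thought" becomes part of the next input
    return context[len(prompt_tokens):]

print(generate([5, 17, 42]))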
t1_o7ukilz
I have no idea what you mean. I installed everything in the zip and followed all your instructions. All I want is to make it work, so both of us would be happy when I tell everybody about your amazing app. I just don't know what to do. I uninstalled and reinstalled it 4 times, but it neither narrates anything with the...
1
0
2026-02-28T08:25:29
timeshifter24
false
null
0
o7ukilz
false
/r/LocalLLaMA/comments/1rabo34/local_tts_server_with_voice_cloning_nearrealtime/o7ukilz/
false
1
t1_o7ukfsu
Waiting for the Unsloth respin before I try 27B.
1
0
2026-02-28T08:24:46
paulgear
false
null
0
o7ukfsu
false
/r/LocalLLaMA/comments/1rgtxry/is_qwen35_a_coding_game_changer_for_anyone_else/o7ukfsu/
false
1
t1_o7uka4h
Yes, I swapped them around all over the place. Any V100 on its own boots fine. But not together.
1
0
2026-02-28T08:23:19
MackThax
false
null
0
o7uka4h
false
/r/LocalLLaMA/comments/1rgfude/computer_wont_boot_with_2_tesla_v100s/o7uka4h/
false
1
t1_o7uk9dy
I feel like a 60B A5B would probably even work on my hardware too, but they haven't released one of those... ;-(
1
0
2026-02-28T08:23:08
paulgear
false
null
0
o7uk9dy
false
/r/LocalLLaMA/comments/1rgtxry/is_qwen35_a_coding_game_changer_for_anyone_else/o7uk9dy/
false
1
t1_o7uk5qj
Absolutely agree. They're the ones really pushing a model and success or failure on their part is just as interesting. And a gooner's going to be VERY nitpicky and particular about their gooning. So they're going to rant, and tweak, and test.
7
0
2026-02-28T08:22:12
toothpastespiders
false
null
0
o7uk5qj
false
/r/LocalLLaMA/comments/1rgkc1b/back_in_my_day_localllama_were_the_pioneers/o7uk5qj/
false
7
t1_o7uk4ma
🥲
1
0
2026-02-28T08:21:55
MackThax
false
null
0
o7uk4ma
false
/r/LocalLLaMA/comments/1rgfude/computer_wont_boot_with_2_tesla_v100s/o7uk4ma/
false
1
t1_o7uk2kh
[https://www.reddit.com/r/LocalLLaMA/comments/1rgtxry/comment/o7u1zjg/](https://www.reddit.com/r/LocalLLaMA/comments/1rgtxry/comment/o7u1zjg/) - if I had the hardware to run 122B-A10B or 397B-A17B I definitely would, but the point of my post is that something that runs on my limited hardware is working for an agentic ...
2
0
2026-02-28T08:21:24
paulgear
false
null
0
o7uk2kh
false
/r/LocalLLaMA/comments/1rgtxry/is_qwen35_a_coding_game_changer_for_anyone_else/o7uk2kh/
false
2
t1_o7uk21b
The SXM draws 300W. Fedora. At idle it draws some 22W.
1
0
2026-02-28T08:21:16
MackThax
false
null
0
o7uk21b
false
/r/LocalLLaMA/comments/1rgfude/computer_wont_boot_with_2_tesla_v100s/o7uk21b/
false
1
t1_o7uk1pa
Running the 27b currently. Fits on 2 3090s in Q8 and seems smart. I can also run the 122B at Q3 or Q4 with cpu offload, but not sure if that will be an improvement actually (besides being faster) since it will be in a lower quant
5
0
2026-02-28T08:21:11
eribob
false
null
0
o7uk1pa
false
/r/LocalLLaMA/comments/1rgvma8/which_size_of_qwen35_are_you_planning_to_run/o7uk1pa/
false
5
t1_o7uk0nm
running 122b mxfp4_moe atm
3
0
2026-02-28T08:20:55
pfn0
false
null
0
o7uk0nm
false
/r/LocalLLaMA/comments/1rgvma8/which_size_of_qwen35_are_you_planning_to_run/o7uk0nm/
false
3
t1_o7ujubu
Noticeably beneficial in that it doesn't drain my wallet. ;-)
5
0
2026-02-28T08:19:17
paulgear
false
null
0
o7ujubu
false
/r/LocalLLaMA/comments/1rgtxry/is_qwen35_a_coding_game_changer_for_anyone_else/o7ujubu/
false
5
t1_o7ujuad
Very few of the Llama 3 era open models could go beyond 3/4 conversation turns. People have Llama-colored glasses here, but it wasn't until Miqu that we really had the "GPT at home" model.
1
0
2026-02-28T08:19:16
LoSboccacc
false
null
0
o7ujuad
false
/r/LocalLLaMA/comments/1rgkc1b/back_in_my_day_localllama_were_the_pioneers/o7ujuad/
false
1
t1_o7ujqu0
Lightly edited extract follows - don't just blindly run this. I run OpenCode in a Docker container so the home directory has only the OpenCode config files and nothing else. The project I'm working on is mounted onto /src. { "$schema": "https://opencode.ai/config.json", "agent": { "...
8
0
2026-02-28T08:18:23
paulgear
false
null
0
o7ujqu0
false
/r/LocalLLaMA/comments/1rgtxry/is_qwen35_a_coding_game_changer_for_anyone_else/o7ujqu0/
false
8
t1_o7ujn0p
This is just a thin server running on the Pi, barely taking any memory; the GUI runs in the client's browser.
1
0
2026-02-28T08:17:26
jslominski
false
null
0
o7ujn0p
false
/r/LocalLLaMA/comments/1rg87bj/qwen3535ba3b_running_on_a_raspberry_pi_5_16gb_and/o7ujn0p/
false
1
t1_o7ujhc6
one day in the future, your toaster will have one of these chips inside
1
0
2026-02-28T08:15:58
sidonsoft
false
null
0
o7ujhc6
false
/r/LocalLLaMA/comments/1rd80gx/i_just_saw_something_amazing/o7ujhc6/
false
1
t1_o7ujh4i
I did that 3 days ago, pretty sure the latest binary has the support baked in.
1
0
2026-02-28T08:15:55
jslominski
false
null
0
o7ujh4i
false
/r/LocalLLaMA/comments/1rdxfdu/qwen3535ba3b_is_a_gamechanger_for_agentic_coding/o7ujh4i/
false
1
t1_o7ujg3h
Could be. It might also all just be PR to shore up Anthropic's safety credentials for other critical infrastructure. But otherwise we'll see where it goes.
-1
0
2026-02-28T08:15:39
revilo-1988
false
null
0
o7ujg3h
false
/r/LocalLLaMA/comments/1rguty0/get_your_local_models_in_order_anthropic_just_got/o7ujg3h/
false
-1
t1_o7ujf2h
It is a total turning point, and the amazing thing is you don't need a multi-GPU rig. Running it on a 3090 PC and a 5060 Ti PC, they both fly along. It's just so freeing to not be tied to some limited API plan.
1
0
2026-02-28T08:15:22
megadonkeyx
false
null
0
o7ujf2h
false
/r/LocalLLaMA/comments/1rgtxry/is_qwen35_a_coding_game_changer_for_anyone_else/o7ujf2h/
false
1
t1_o7ujekn
Tried from bartowski and AesSedai, didn't help either
2
0
2026-02-28T08:15:15
Acrobatic_Donkey5089
false
null
0
o7ujekn
false
/r/LocalLLaMA/comments/1rgl42y/qwen_35_122b_hallucinates_horribly/o7ujekn/
false
2
t1_o7ujeie
I understand perfectly well. I also understand that the revenue government contracts bring in is nothing compared to the main businesses of the major multinational companies. Never mind the difficulty of getting their employees to follow along; it is a ridiculous idea.
-1
0
2026-02-28T08:15:14
Similar_Director6322
false
null
0
o7ujeie
false
/r/LocalLLaMA/comments/1rgn4ki/president_trump_orders_all_federal_agencies_in/o7ujeie/
false
-1
t1_o7ujd92
> LLM about the history of Tiananmen Square? /s FYI Kimi k2.5 was enshittified: https://old.reddit.com/r/LocalLLaMA/comments/1r6zxy0/kimi_k2_was_spreading_disinformation_and_made_up/
-1
0
2026-02-28T08:14:54
MelodicRecognition7
false
null
0
o7ujd92
false
/r/LocalLLaMA/comments/1rguty0/get_your_local_models_in_order_anthropic_just_got/o7ujd92/
false
-1
t1_o7ujd6l
Can you maybe point out/clarify what you mean by "AI hallucination"? Are you asking whether this is meant to tackle hallucinations, or did I word/present something too abstractly, or something else?
1
0
2026-02-28T08:14:53
daeron-blackFyr
false
null
0
o7ujd6l
false
/r/LocalLLaMA/comments/1rguhz9/project_sota_toolkit_drop_3_distill_the_flow/o7ujd6l/
false
1
t1_o7ujbjk
Edit: Thanks for all the comments. It seems I don't have my setup optimized. Using LM Studio with most things set to pretty much default.
1
0
2026-02-28T08:14:27
yuhjulio
false
null
0
o7ujbjk
false
/r/LocalLLaMA/comments/1rgvma8/which_size_of_qwen35_are_you_planning_to_run/o7ujbjk/
false
1
t1_o7uj9d6
What's the HuggingFace link for the model you used?
1
0
2026-02-28T08:13:55
kokroo
false
null
0
o7uj9d6
false
/r/LocalLLaMA/comments/1rei65v/qwen3535ba3b_quantization_quality_speed/o7uj9d6/
false
1
t1_o7uj1bv
What is your setup?
1
0
2026-02-28T08:11:53
Ygobyebye
false
null
0
o7uj1bv
false
/r/LocalLLaMA/comments/1rgokw1/a_monthly_update_to_my_where_are_openweight/o7uj1bv/
false
1
t1_o7uixzo
Short version, I gave it a task that took a while and had multiple steps.
1
0
2026-02-28T08:11:01
paulgear
false
null
0
o7uixzo
false
/r/LocalLLaMA/comments/1rgtxry/is_qwen35_a_coding_game_changer_for_anyone_else/o7uixzo/
false
1
t1_o7uiwg4
I don't think the bubble can afford to lose one of the big players. So I'd guess nvidia would push money into them like they did with OpenAI
8
0
2026-02-28T08:10:37
WildDogOne
false
null
0
o7uiwg4
false
/r/LocalLLaMA/comments/1rguty0/get_your_local_models_in_order_anthropic_just_got/o7uiwg4/
false
8
t1_o7uivvv
Yeah, just working with 35B A3B at the moment. I'll try the 27B once Unsloth have updated it.
1
0
2026-02-28T08:10:29
paulgear
false
null
0
o7uivvv
false
/r/LocalLLaMA/comments/1rgtxry/is_qwen35_a_coding_game_changer_for_anyone_else/o7uivvv/
false
1
t1_o7uiusl
Yes they did. Some people learn from mistakes apparently, some...
-3
0
2026-02-28T08:10:12
arousedsquirel
false
null
0
o7uiusl
false
/r/LocalLLaMA/comments/1rguty0/get_your_local_models_in_order_anthropic_just_got/o7uiusl/
false
-3
t1_o7uiszy
Are you running it as a server or as an exec in your ralphy setup?
1
0
2026-02-28T08:09:45
dron01
false
null
0
o7uiszy
false
/r/LocalLLaMA/comments/1rgtxry/is_qwen35_a_coding_game_changer_for_anyone_else/o7uiszy/
false
1
t1_o7uit0a
They're weirdly charming in a way. I think I like them for a similar reason to why I like LLMs in the first place. It's just interesting seeing different takes on thinking, logic, common sense, and what reasoning actually is.
2
0
2026-02-28T08:09:45
toothpastespiders
false
null
0
o7uit0a
false
/r/LocalLLaMA/comments/1rgkc1b/back_in_my_day_localllama_were_the_pioneers/o7uit0a/
false
2
t1_o7uimib
Did they? Have you looked at a list of countries they operate in? Anthropic lists the Rwandan government as one of their customers, where the president just won his fourth term with 99% of the vote... you know, like how all democracies work.
4
0
2026-02-28T08:08:07
ShinsOfGlory
false
null
0
o7uimib
false
/r/LocalLLaMA/comments/1rguty0/get_your_local_models_in_order_anthropic_just_got/o7uimib/
false
4
t1_o7uilbe
Actually bonkers. Way to go, bud.
-1
0
2026-02-28T08:07:48
thejacer
false
null
0
o7uilbe
false
/r/LocalLLaMA/comments/1rgn4ki/president_trump_orders_all_federal_agencies_in/o7uilbe/
false
-1
t1_o7uil9s
122b on strix halo
1
0
2026-02-28T08:07:47
mindwip
false
null
0
o7uil9s
false
/r/LocalLLaMA/comments/1rgvma8/which_size_of_qwen35_are_you_planning_to_run/o7uil9s/
false
1
t1_o7uikeo
Could you describe in a few words what that AI hallucination is about and why we might need it?
2
0
2026-02-28T08:07:34
MelodicRecognition7
false
null
0
o7uikeo
false
/r/LocalLLaMA/comments/1rguhz9/project_sota_toolkit_drop_3_distill_the_flow/o7uikeo/
false
2
t1_o7uihvr
I never liked Anthropic. And now I have an even bigger reason to hate them. You pay 20 dollars to use their service for 10 minutes.
-8
0
2026-02-28T08:06:56
lumos675
false
null
0
o7uihvr
false
/r/LocalLLaMA/comments/1rguty0/get_your_local_models_in_order_anthropic_just_got/o7uihvr/
false
-8
t1_o7uialz
Deflection, whataboutism, and more delusions. Enjoy your simple life buddy.
0
0
2026-02-28T08:05:05
MMAgeezer
false
null
0
o7uialz
false
/r/LocalLLaMA/comments/1rgn4ki/president_trump_orders_all_federal_agencies_in/o7uialz/
false
0
t1_o7uiahu
It's going to have some effect, yes, because you're introducing the concept of things which have to be followed closely. But that's not some novel discovery; that's how language models work. The real problem is managing context to keep it always laser-focused on the task at hand.
2
0
2026-02-28T08:05:03
Usual-Orange-4180
false
null
0
o7uiahu
false
/r/LocalLLaMA/comments/1rev8jl/prompts_arent_enough_for_longrunning_agents_they/o7uiahu/
false
2
t1_o7uia8m
There must be something wrong. Maybe you only ran it with a simple `llama-server` command? For me, Q4_K_M with a 3070 8GB, 5700X, and 48GB RAM can get around 25-30 tok/s.
2
0
2026-02-28T08:04:59
Ok_Conference_7975
false
null
0
o7uia8m
false
/r/LocalLLaMA/comments/1rgvma8/which_size_of_qwen35_are_you_planning_to_run/o7uia8m/
false
2
t1_o7ui7le
If the ToS clearly states that they can downgrade the service at any time without notifying users, then they did nothing wrong. If you don't agree to their ToS, don't use their service.
1
0
2026-02-28T08:04:19
MelodicRecognition7
false
null
0
o7ui7le
false
/r/LocalLLaMA/comments/1rgu849/if_your_chutesai_subscription_was_unilaterally/o7ui7le/
false
1
t1_o7ui6dn
He's the victim of illegal government surveillance and has pursued global peace with much more success than even Obama. These two redlines hardly seem like issues he'd be at odds with, which leads me to believe they aren't the issues at hand, bud. BUT I PUT THAT IN MY RESPONSE which you responded to with denigration an...
-7
0
2026-02-28T08:04:01
thejacer
false
null
0
o7ui6dn
false
/r/LocalLLaMA/comments/1rgn4ki/president_trump_orders_all_federal_agencies_in/o7ui6dn/
false
-7
t1_o7ui3iq
Exactly! When I tried to run the qwen 14B version before, I encountered a lot of difficulties. Now, switching to the qwen 3 8B or the gemma3-12b has made things much better.
1
0
2026-02-28T08:03:19
Remarkable-End5073
false
null
0
o7ui3iq
false
/r/LocalLLaMA/comments/1rfv6ap/what_models_run_well_on_mac_mini_m4_16gb_for_text/o7ui3iq/
false
1
t1_o7ui3jb
Yeah, it’s not like they were forced to submit a proposal. They wanted the business from the DoW, they just didn’t like the terms.
19
0
2026-02-28T08:03:19
ShinsOfGlory
false
null
0
o7ui3jb
false
/r/LocalLLaMA/comments/1rguty0/get_your_local_models_in_order_anthropic_just_got/o7ui3jb/
false
19
t1_o7ui2g6
It's a nothingburger; Hegseth doesn't have the authority to do it.
-16
0
2026-02-28T08:03:02
KeikakuAccelerator
false
null
0
o7ui2g6
false
/r/LocalLLaMA/comments/1rguty0/get_your_local_models_in_order_anthropic_just_got/o7ui2g6/
false
-16
t1_o7ui2cd
397b, but currently llama doesn't seem capable of running it on my setup (epyc+blackwell)
1
0
2026-02-28T08:03:01
MengerianMango
false
null
0
o7ui2cd
false
/r/LocalLLaMA/comments/1rgvma8/which_size_of_qwen35_are_you_planning_to_run/o7ui2cd/
false
1
t1_o7ui29s
He passed the test: "a human."
12
0
2026-02-28T08:03:00
arm2armreddit
false
null
0
o7ui29s
false
/r/LocalLLaMA/comments/1rgokw1/a_monthly_update_to_my_where_are_openweight/o7ui29s/
false
12
t1_o7ui1us
use llama.cpp, why vllm?
1
0
2026-02-28T08:02:54
Potential-Leg-639
false
null
0
o7ui1us
false
/r/LocalLLaMA/comments/1r7l7q5/the_strix_halo_feels_like_an_amazing_super_power/o7ui1us/
false
1
t1_o7ui0a6
So Trump is pissed because he just scrolled all the way down and clicked "Accept"?
2
0
2026-02-28T08:02:30
MasterShakeS-K
false
null
0
o7ui0a6
false
/r/LocalLLaMA/comments/1rgn4ki/president_trump_orders_all_federal_agencies_in/o7ui0a6/
false
2
t1_o7uhxw4
I've been enjoying gaming and Wan video gen on my 5090 the most so far. It remains my most prized possession. I should perhaps say my daughter is, but she is not a thing.
2
0
2026-02-28T08:01:54
michaelsoft__binbows
false
null
0
o7uhxw4
false
/r/LocalLLaMA/comments/1rgtxry/is_qwen35_a_coding_game_changer_for_anyone_else/o7uhxw4/
false
2
t1_o7uhw0i
Which of Anthropic's existing redlines do you think necessitated this petulant response from the Trump Admin? Are you a fan of mass domestic surveillance bud?
1
0
2026-02-28T08:01:25
MMAgeezer
false
null
0
o7uhw0i
false
/r/LocalLLaMA/comments/1rgn4ki/president_trump_orders_all_federal_agencies_in/o7uhw0i/
false
1
t1_o7uhsj5
Mostly interested in 122B and 27B. I have 48GB VRAM and 96GB RAM. I use it for roleplay though, so I'm waiting for finetunes; the base models aren't good enough for it.
10
0
2026-02-28T08:00:34
Gringe8
false
null
0
o7uhsj5
false
/r/LocalLLaMA/comments/1rgvma8/which_size_of_qwen35_are_you_planning_to_run/o7uhsj5/
false
10
t1_o7uhpi9
Delusion is suddenly being on Anthropic's cock just because they're at odds with Bad Orange Man. Kid.
-7
0
2026-02-28T07:59:50
thejacer
false
null
0
o7uhpi9
false
/r/LocalLLaMA/comments/1rgn4ki/president_trump_orders_all_federal_agencies_in/o7uhpi9/
false
-7
t1_o7uhozs
35B A3B Q4 UD-MXFP4_MOE on 2x Nvidia P40: 35 t/s gen and 250-300 t/s PP on latest llama.cpp in Docker on an X99 2680v4. These MoE models are the bomb with the old cheap P40 GPUs! :-)
2
0
2026-02-28T07:59:42
tehinterwebs56
false
null
0
o7uhozs
false
/r/LocalLLaMA/comments/1rgvma8/which_size_of_qwen35_are_you_planning_to_run/o7uhozs/
false
2
t1_o7uhmn8
First: the blacklisting is likely illegal. This is overreach and vindictive persecution against an American company. If it happens it would likely be overturned in court, especially since there is zero evidence for the claims being made by the administration other than their insane rantings. Second: the law only appli...
4
1
2026-02-28T07:59:06
truthputer
false
null
0
o7uhmn8
false
/r/LocalLLaMA/comments/1rguty0/get_your_local_models_in_order_anthropic_just_got/o7uhmn8/
false
4
t1_o7uhfk4
Looks really interesting. The fun part about Decode is that you can batch the hell out of it on the GPU. How many decode requests can you run at the same time compared to GPU?
2
0
2026-02-28T07:57:19
WeekLarge7607
false
null
0
o7uhfk4
false
/r/LocalLLaMA/comments/1rgfm00/i_built_a_hybrid_moe_runtime_that_does_3324_toks/o7uhfk4/
false
2
t1_o7uhd2c
Haha, yeah the text tokens don't make any sense economically. Don't get me thinking of the tens of billions of Gemini 3 Flash tokens I could generate with the sale of my 5090.. But image and speech generation actually does cost a reasonable amount. Hours and hours of speech output along with hundreds of images in refin...
2
0
2026-02-28T07:56:41
_-_David
false
null
0
o7uhd2c
false
/r/LocalLLaMA/comments/1rgtxry/is_qwen35_a_coding_game_changer_for_anyone_else/o7uhd2c/
false
2
t1_o7uh55x
Possibly? I'm only going on what's mentioned at [https://unsloth.ai/docs/models/qwen3.5](https://unsloth.ai/docs/models/qwen3.5): "Between 27B and 35B-A3B, use 27B if you want slightly more accurate results and can't fit in your device. Go for 35B-A3B if you want much faster inference."
1
0
2026-02-28T07:54:44
paulgear
false
null
0
o7uh55x
false
/r/LocalLLaMA/comments/1rgtxry/is_qwen35_a_coding_game_changer_for_anyone_else/o7uh55x/
false
1
t1_o7uh4ru
I get 40-50 t/s with a 4060 and DDR4... offloading lots to RAM as I also run ComfyUI at the same time... Are you using Ollama?
2
0
2026-02-28T07:54:38
Pjotrs
false
null
0
o7uh4ru
false
/r/LocalLLaMA/comments/1rgvma8/which_size_of_qwen35_are_you_planning_to_run/o7uh4ru/
false
2
t1_o7uh1uf
I order now to decrease all RAM and GPU prices for gamers and hobby LLM players! 😭
2
0
2026-02-28T07:53:54
arm2armreddit
false
null
0
o7uh1uf
false
/r/LocalLLaMA/comments/1rgn4ki/president_trump_orders_all_federal_agencies_in/o7uh1uf/
false
2
t1_o7uh1lz
They didn't even "demand" this. They just said "hey, we think our product can't really be used for that at the moment."
1
0
2026-02-28T07:53:50
ThirdMover
false
null
0
o7uh1lz
false
/r/LocalLLaMA/comments/1rgn4ki/president_trump_orders_all_federal_agencies_in/o7uh1lz/
false
1
t1_o7uh15p
Yaay!
1
0
2026-02-28T07:53:44
Ylsid
false
null
0
o7uh15p
false
/r/LocalLLaMA/comments/1r6h3ha/difference_between_qwen_3_maxthinking_and_qwen_35/o7uh15p/
false
1
t1_o7uh0kd
Stop with the nostalgia goggles. Yes, it was a glorious time. But also, the sub was flooded with gooner-tune shills and quantization was seen as miracle magic. All innovation stemmed from the fact that we had nothing, and Llama 2 models were undercooked messes that gave us the illusion of accomplishment. I much prefer w...
2
1
2026-02-28T07:53:35
Lan_BobPage
false
null
0
o7uh0kd
false
/r/LocalLLaMA/comments/1rgkc1b/back_in_my_day_localllama_were_the_pioneers/o7uh0kd/
false
2
t1_o7ugtcw
try this: .\llama-server.exe --device CUDA0 --jinja -ctk q8_0 -ctv q8_0 -fa on -m D:\Qwen3.5-35B-A3B-Q4K_S.gguf --mmproj D:\mmproj-BF16.gguf -c 131072 -t 12 --no-mmap --fit on
4
0
2026-02-28T07:51:46
Conscious_Chef_3233
false
null
0
o7ugtcw
false
/r/LocalLLaMA/comments/1rgvma8/which_size_of_qwen35_are_you_planning_to_run/o7ugtcw/
false
4
t1_o7ugpe9
No American company in history has been designated a supply chain risk by the federal government. Except Anthropic.
21
0
2026-02-28T07:50:46
Popdmb
false
null
0
o7ugpe9
false
/r/LocalLLaMA/comments/1rguty0/get_your_local_models_in_order_anthropic_just_got/o7ugpe9/
false
21
t1_o7ugpd5
This is what delusion looks like, kids.
5
0
2026-02-28T07:50:45
MMAgeezer
false
null
0
o7ugpd5
false
/r/LocalLLaMA/comments/1rgn4ki/president_trump_orders_all_federal_agencies_in/o7ugpd5/
false
5
t1_o7ugl4b
I tested them all, 397b-A17B (MXFP4), 122b-a10b (FP8), 27b (BF16), and 35b-a3b (BF16). The 397b-a17b and the 27b are the standouts for me, but the 397b is too heavy for my setup (2x RTX Pro 6000, I used an extra 5090 to get it into VRAM) - I can only get like 50 tg/s maybe 80tg/s (llama.cpp) with concurrency, but on t...
6
0
2026-02-28T07:49:42
reto-wyss
false
null
0
o7ugl4b
false
/r/LocalLLaMA/comments/1rgvma8/which_size_of_qwen35_are_you_planning_to_run/o7ugl4b/
false
6
t1_o7ugh4p
I noticed I got a lot of context size issues with OpenCode. I needed to open a new session.
1
0
2026-02-28T07:48:42
Gold_Sugar_4098
false
null
0
o7ugh4p
false
/r/LocalLLaMA/comments/1rgtxry/is_qwen35_a_coding_game_changer_for_anyone_else/o7ugh4p/
false
1
t1_o7ugdzw
You need to try GPT-OSS-20b, it's the most "ChatGPT" model you're going to get in 16gb of VRAM. It's really pretty solid. Then qwen3-30b-a3b, or the new qwen3.5-35b-a3b should work nicely. But the speed difference of the GPT-OSS-20b which will fit entirely in your VRAM means that it is almost definitely your best model...
1
0
2026-02-28T07:47:56
_-_David
false
null
0
o7ugdzw
false
/r/LocalLLaMA/comments/1rgrlzv/is_hosting_a_local_llm_really_as_crappy_of_an/o7ugdzw/
false
1
t1_o7ugcpk
27b is the best model imo
5
0
2026-02-28T07:47:37
FusionCow
false
null
0
o7ugcpk
false
/r/LocalLLaMA/comments/1rgvma8/which_size_of_qwen35_are_you_planning_to_run/o7ugcpk/
false
5
t1_o7ugb89
Haven't fully tested it, but so far so good on my snake game creation test. It went above and beyond, creating different types of levels. It took a little fighting, but it's good.
1
0
2026-02-28T07:47:15
Soft_Syllabub_3772
false
null
0
o7ugb89
false
/r/LocalLLaMA/comments/1rgtxry/is_qwen35_a_coding_game_changer_for_anyone_else/o7ugb89/
false
1
t1_o7ug3yl
27B dense is probably the sweet spot for most setups. The MoE models look attractive on paper but inference latency can be unpredictable depending on routing patterns.
2
0
2026-02-28T07:45:25
Lost-Garage-4358
false
null
0
o7ug3yl
false
/r/LocalLLaMA/comments/1rgvma8/which_size_of_qwen35_are_you_planning_to_run/o7ug3yl/
false
2
t1_o7ug1cg
Ubuntu has better support for bleeding-edge tech. Debian has older versions of some important software, so you'll have to build it yourself, and often that's a huge pain in the ass, not a simple `./configure && make`. Also, I've had things broken on Debian while people report that they work in Ubuntu, for example process ...
1
0
2026-02-28T07:44:45
MelodicRecognition7
false
null
0
o7ug1cg
false
/r/LocalLLaMA/comments/1rgomsq/ubuntu_or_debian_speed_difference_on_llamacpp/o7ug1cg/
false
1
t1_o7ug0w0
Z.AI is claude level in my opinion, Grok is not that high either
1
0
2026-02-28T07:44:39
Significant_Fig_7581
false
null
0
o7ug0w0
false
/r/LocalLLaMA/comments/1rgokw1/a_monthly_update_to_my_where_are_openweight/o7ug0w0/
false
1
t1_o7ug0gm
Let's not forget they already have a Pentagon contract which is now, allegedly, getting decom'd. How many extensions do you reckon they'll get because they can't find equivalent capability? Many, I suspect.
2
0
2026-02-28T07:44:33
Ok-Measurement-1575
false
null
0
o7ug0gm
false
/r/LocalLLaMA/comments/1rguty0/get_your_local_models_in_order_anthropic_just_got/o7ug0gm/
false
2
t1_o7ug05x
[removed]
1
0
2026-02-28T07:44:28
[deleted]
true
null
0
o7ug05x
false
/r/LocalLLaMA/comments/1rgu849/if_your_chutesai_subscription_was_unilaterally/o7ug05x/
false
1
t1_o7ufz6o
First I have to say I am a BIG TIME supporter of President Trump, just so my position is clear. I would also say that these are two issues Anthropic has brought up, but their simply mentioning them doesn't mean that they are being pursued by the Trump admin. I'll point out Anthropic has literally been regarded as the s...
-11
0
2026-02-28T07:44:13
thejacer
false
null
0
o7ufz6o
false
/r/LocalLLaMA/comments/1rgn4ki/president_trump_orders_all_federal_agencies_in/o7ufz6o/
false
-11
t1_o7ufxzf
Which qwen 3.5??
7
0
2026-02-28T07:43:55
Pineapple_King
false
null
0
o7ufxzf
false
/r/LocalLLaMA/comments/1rgtxry/is_qwen35_a_coding_game_changer_for_anyone_else/o7ufxzf/
false
7
t1_o7ufvga
I'm building my own. I just wrote a summary here [https://www.reddit.com/r/LocalLLaMA/comments/1rgtxry/comment/o7ufflp/?context=3](https://www.reddit.com/r/LocalLLaMA/comments/1rgtxry/comment/o7ufflp/?context=3)
1
0
2026-02-28T07:43:17
michaelsoft__binbows
false
null
0
o7ufvga
false
/r/LocalLLaMA/comments/1rgelk1/the_supply_chain_problem_nobody_talks_about_agent/o7ufvga/
false
1
t1_o7ufs93
Wow, that's awesome, thanks. I have a question and would be grateful if you could help me. I have 2 servers: one Xeon(R) CPU E5-2690 0 @ 2.90GHz (2 sockets) + 160GB RAM, which is AVX, and one Intel(R) Xeon(R) CPU E5-2690 v4, which is AVX2. I've only tried the models on the AVX one, not the AVX2 one. The question is: I've run GLM 4.7 Flash, but what it resp...
1
0
2026-02-28T07:42:28
Bitter_Tone_2871
false
null
0
o7ufs93
false
/r/LocalLLaMA/comments/1pvxq2t/hard_lesson_learned_after_a_year_of_running_large/o7ufs93/
false
1
t1_o7ufrzz
Land of the free...
3
0
2026-02-28T07:42:24
capitol_thought
false
null
0
o7ufrzz
false
/r/LocalLLaMA/comments/1rgn4ki/president_trump_orders_all_federal_agencies_in/o7ufrzz/
false
3
t1_o7ufr8q
Thanks, appreciate it! It's largely due to dynamic quantization, which you can read about here: https://unsloth.ai/docs/basics/unsloth-dynamic-2.0-ggufs
1
0
2026-02-28T07:42:13
danielhanchen
false
null
0
o7ufr8q
false
/r/LocalLLaMA/comments/1rgel19/new_qwen3535ba3b_unsloth_dynamic_ggufs_benchmarks/o7ufr8q/
false
1
t1_o7ufp68
OP, have you evaluated Qwen3.5 against GLM-5? GLM-4.7? I think those, and maybe Kimi K2.5, also have a chance at working under your ralph loop approach. If those also do not function as well as Qwen3.5, then that would be a truly impressive result. I have not seen like any significant blunders out of GLM-4.7 yet and it's...
0
0
2026-02-28T07:41:41
michaelsoft__binbows
false
null
0
o7ufp68
false
/r/LocalLLaMA/comments/1rgtxry/is_qwen35_a_coding_game_changer_for_anyone_else/o7ufp68/
false
0
t1_o7ufp2r
They were the best before his announcement and they're still the best after it. Trump marketing 101. "If you hate me, you should use Claude!"  *shakes fist*
2
1
2026-02-28T07:41:40
Ok-Measurement-1575
false
null
0
o7ufp2r
false
/r/LocalLLaMA/comments/1rguty0/get_your_local_models_in_order_anthropic_just_got/o7ufp2r/
false
2
t1_o7ufoxt
Anthropic response: https://www.anthropic.com/news/statement-department-of-war >Secretary Hegseth has implied this designation would restrict anyone who does business with the military from doing business with Anthropic. The Secretary does not have the statutory authority to back up this statement. Legally, a supply...
5
0
2026-02-28T07:41:38
randombsname1
false
null
0
o7ufoxt
false
/r/LocalLLaMA/comments/1rguty0/get_your_local_models_in_order_anthropic_just_got/o7ufoxt/
false
5
t1_o7ufm8m
My 4070 12GB gets 50 tps with Q4; something's wrong with your setup...
2
0
2026-02-28T07:40:56
Conscious_Chef_3233
false
null
0
o7ufm8m
false
/r/LocalLLaMA/comments/1rgvma8/which_size_of_qwen35_are_you_planning_to_run/o7ufm8m/
false
2
t1_o7ufl7o
3080s don't have 16GB VRAM so I don't know what to believe about you
2
0
2026-02-28T07:40:40
tomByrer
false
null
0
o7ufl7o
false
/r/LocalLLaMA/comments/1rgrlzv/is_hosting_a_local_llm_really_as_crappy_of_an/o7ufl7o/
false
2
t1_o7ufh9s
The people downvoting you don't understand the consequences of being declared a "supply chain risk".
4
0
2026-02-28T07:39:41
ttkciar
false
null
0
o7ufh9s
false
/r/LocalLLaMA/comments/1rguty0/get_your_local_models_in_order_anthropic_just_got/o7ufh9s/
false
4
t1_o7ufflp
In most places the cost of electricity is such that, unless you have solar or really cheap utility rates, the electricity you pay for local inference is still going to more or less match API token rates. At least it does for the open models, if you hunt down cheap ones, and also, certainly with subscriptions, the effective ra...
4
0
2026-02-28T07:39:15
michaelsoft__binbows
false
null
0
o7ufflp
false
/r/LocalLLaMA/comments/1rgtxry/is_qwen35_a_coding_game_changer_for_anyone_else/o7ufflp/
false
4
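To make the comparison above concrete, here is a back-of-envelope sketch in Python; the power draw, throughput, and electricity price are assumed example numbers, not figures taken from the comment, so plug in your own to compare against whatever API rate you pay.

# Back-of-envelope local-inference cost; every input below is an assumed
# example value, not a figure from the thread.
power_watts = 350.0        # assumed whole-box draw under load
tokens_per_second = 40.0   # assumed generation speed
price_per_kwh = 0.30       # assumed electricity price, $/kWh

hours_per_million_tokens = 1_000_000 / tokens_per_second / 3600.0
kwh_per_million_tokens = (power_watts / 1000.0) * hours_per_million_tokens
cost_per_million_tokens = kwh_per_million_tokens * price_per_kwh

print(f"{kwh_per_million_tokens:.2f} kWh -> ~${cost_per_million_tokens:.2f} per million output tokens")

With these assumed numbers it works out to roughly 2.4 kWh and about $0.73 per million output tokens, which is indeed in the same ballpark as cheap API rates for open models.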
t1_o7ufcft
That seems a little slow, maybe due to DDR4! Could you try this? I'm running Q8 btw in 16 GB VRAM and 64 GB DDR5: set CUDA_VISIBLE_DEVICES=0 && "C:\Users\user\Desktop\llama\llama-server.exe" -m "D:\Qwen3.5-35B-A3B-UD-Q8_K_XL.gguf" -a Qwen3.5-35B-A3B --ctx_size 65536 -ot ".ffn_.\...
3
0
2026-02-28T07:38:27
nikhilprasanth
false
null
0
o7ufcft
false
/r/LocalLLaMA/comments/1rgvma8/which_size_of_qwen35_are_you_planning_to_run/o7ufcft/
false
3
t1_o7ufc2g
It got a bit snowed under with MiniMax and then Qwen releasing their new models, but so far I feel like it's the best 128GB model we've got for now.
2
0
2026-02-28T07:38:21
spaceman_
false
null
0
o7ufc2g
false
/r/LocalLLaMA/comments/1rgvma8/which_size_of_qwen35_are_you_planning_to_run/o7ufc2g/
false
2
t1_o7ufbf6
AWS would be barred from doing business with Anthropic if AWS was doing business with the US military. Quoting from https://www.bbc.com/news/articles/cn48jj3y8ezo: > Both Trump and Hegseth announced their decisions against Anthropic on social media, with the defence secretary saying on X that Anthropic would be ...
2
0
2026-02-28T07:38:10
ttkciar
false
null
0
o7ufbf6
false
/r/LocalLLaMA/comments/1rguty0/get_your_local_models_in_order_anthropic_just_got/o7ufbf6/
false
2
t1_o7ufa8m
I could never get Qwen3 Next to work but I just found out it works using only one GPU at a time. So in my case, the problem seems to boil down to spanning multiple GPUs. You could try loading Qwen 3.5 using just one GPU + memory and see if it works. It does for me.
1
0
2026-02-28T07:37:53
EdenistTech
false
null
0
o7ufa8m
false
/r/LocalLLaMA/comments/1r807kb/segmentation_fault_when_loading_models_across/o7ufa8m/
false
1
t1_o7uf8w1
Running Qwen 3.5 35B A3B Q8. 5060 Ti 16GB and DDR5 with 65K context: set CUDA_VISIBLE_DEVICES=0 && "C:\Users\user\Desktop\llama\llama-server.exe" -m "D:\Qwen3.5-35B-A3B-UD-Q8_K_XL.gguf" -a Qwen3.5-35B-A3B --ctx_size 65536 -ot ".ffn_.*_exps.=CPU" --jinja -fa on ...
1
0
2026-02-28T07:37:32
nikhilprasanth
false
null
0
o7uf8w1
false
/r/LocalLLaMA/comments/1rgvma8/which_size_of_qwen35_are_you_planning_to_run/o7uf8w1/
false
1
t1_o7uf6vl
Exactly my point, nothing like that will happen either 
-24
0
2026-02-28T07:37:01
KeikakuAccelerator
false
null
0
o7uf6vl
false
/r/LocalLLaMA/comments/1rguty0/get_your_local_models_in_order_anthropic_just_got/o7uf6vl/
false
-24