name
string
body
string
score
int64
controversiality
int64
created
timestamp[us]
author
string
collapsed
bool
edited
timestamp[us]
gilded
int64
id
string
locked
bool
permalink
string
stickied
bool
ups
int64
t1_o7uxz6i
Good question! Here's my 2c: AI agent frameworks like LangGraph or CrewAI provide building blocks for complex agentic applications, but they are relatively low-level, and you have to wire everything together yourself. Then there are very capable agent harnesses like OpenCode that already do a lot of the log...
1
0
2026-02-28T10:34:51
mgfeller
false
null
0
o7uxz6i
false
/r/LocalLLaMA/comments/1rf9891/anyone_actually_running_multiagent_setups_that/o7uxz6i/
false
1
t1_o7uxuch
I'm just curious about the implementation. I like the idea of generating some educational comics for my kids, but stuff like character consistency was a daunting detail, which made me avoid a quick experiment.
2
0
2026-02-28T10:33:35
RonnyPfannschmidt
false
null
0
o7uxuch
false
/r/LocalLLaMA/comments/1rgtxry/is_qwen35_a_coding_game_changer_for_anyone_else/o7uxuch/
false
2
t1_o7uxqm3
Good luck!
1
0
2026-02-28T10:32:37
_-_David
false
null
0
o7uxqm3
false
/r/LocalLLaMA/comments/1rgvma8/which_size_of_qwen35_are_you_planning_to_run/o7uxqm3/
false
1
t1_o7uxjr1
Likely even smaller if you can
1
0
2026-02-28T10:30:49
AnomalyNexus
false
null
0
o7uxjr1
false
/r/LocalLLaMA/comments/1rgp2nu/anyone_doing_speculative_decoding_with_the_new/o7uxjr1/
false
1
t1_o7uxhpu
Like a framework? It's just something I coded up to make language study more interesting and appealing. All of the component parts and pieces are open source, but I don't have the project turned into a pinokio app or anything.
1
0
2026-02-28T10:30:17
_-_David
false
null
0
o7uxhpu
false
/r/LocalLLaMA/comments/1rgtxry/is_qwen35_a_coding_game_changer_for_anyone_else/o7uxhpu/
false
1
t1_o7uxfhd
Built a small real-world agent workflow experiment: I converted my old Stadia gamepad (which was collecting dust) into a Codex controller, so I can trigger coding actions from a gamepad. I asked Codex to build the macOS bridge app in Swift. I hadn’t written Swift before, so this was mainly a practical way to learn whi...
1
0
2026-02-28T10:29:43
phoneixAdi
false
null
0
o7uxfhd
false
/r/LocalLLaMA/comments/1rgzax4/i_built_a_codex_control_deck_from_an_old_stadia/o7uxfhd/
false
1
t1_o7uxbch
I'm wondering about this too. Has anyone tested 35B-A3B on Aider Polyglot? Qwen3 Coder Next scored around 66, which is impressive. I've been trying to run the Polyglot test suite, but it's very slow on my potato RTX 3060 gaming laptop. Yes, I know it's not a coder model. Hoping for one soon.
2
0
2026-02-28T10:28:38
OsmanthusBloom
false
null
0
o7uxbch
false
/r/LocalLLaMA/comments/1rgoygs/does_qwen35_35b_outperform_qwen3_coder_next_80b/o7uxbch/
false
2
t1_o7ux9fd
TBH it sounds like they tried to play hard to get, and then they got got lol. > "Would Anthropic's fall be good or bad for us?" Probably inconsequential as far as we're concerned. Let's not forget, Amodei was never a friend of open-weight models and tried to tell people they're not viable.
9
0
2026-02-28T10:28:07
UnreasonableEconomy
false
null
0
o7ux9fd
false
/r/LocalLLaMA/comments/1rguty0/get_your_local_models_in_order_anthropic_just_got/o7ux9fd/
false
9
t1_o7ux97q
Thanks, that was my problem with GLM-4.7-Flash because I couldn't show it screenshots from my game
9
0
2026-02-28T10:28:03
jacek2023
false
null
0
o7ux97q
false
/r/LocalLLaMA/comments/1rgxr0v/qwen_35_is_multimodal_here_is_how_to_enable_image/o7ux97q/
false
9
t1_o7ux5zq
I'm curious - when would 50 t/s ever not be enough? Do you read faster than that? I get it if there's like a large group of people actively prompting a shared server, but otherwise, what's the point in generating thousands of tokens per second?
1
0
2026-02-28T10:27:12
iz-Moff
false
null
0
o7ux5zq
false
/r/LocalLLaMA/comments/1rgvma8/which_size_of_qwen35_are_you_planning_to_run/o7ux5zq/
false
1
t1_o7ux4y1
We built [Navigator](https://beta.keinsaas.com). It focuses on enabling non-technical people to build AI agents & automations with every model available. Would appreciate feedback 🙌
1
0
2026-02-28T10:26:55
SirPuzzleheaded997
false
null
0
o7ux4y1
false
/r/LocalLLaMA/comments/1kmragz/are_you_using_ai_gateway_in_your_genai_stack/o7ux4y1/
false
1
t1_o7ux492
I found them to be very similar. Cachy might be more (or *even* more, rather) bleeding edge but that made no difference for my use cases. If you're coming from Windows with limited command line experience, maybe Fedora will feel a bit more user friendly. Both have active communities online. The main selling point of ...
2
0
2026-02-28T10:26:45
AcceSpeed
false
null
0
o7ux492
false
/r/LocalLLaMA/comments/1rgyd8p/switching_from_windows_to_linux_what_distro_to/o7ux492/
false
2
t1_o7ux2e4
That plus he’s made his money and now does whatever he wants. Which apparently is LLMs
6
0
2026-02-28T10:26:15
AnomalyNexus
false
null
0
o7ux2e4
false
/r/LocalLLaMA/comments/1rg8dex/pewdiepie_finetuned_qwen25coder32b_to_beat/o7ux2e4/
false
6
t1_o7ux284
All three are good for my setup, but it’s possible I’ll also use smaller Qwens, because sometimes you just want a very quick model for simple tasks.
1
0
2026-02-28T10:26:13
jacek2023
false
null
0
o7ux284
false
/r/LocalLLaMA/comments/1rgvma8/which_size_of_qwen35_are_you_planning_to_run/o7ux284/
false
1
t1_o7uwyjy
Improving my own knowledge of computer hardware and software, though arguably that isn't niche.
1
0
2026-02-28T10:25:14
OrganicPlasma
false
null
0
o7uwyjy
false
/r/LocalLLaMA/comments/1rb2j5c/favourite_niche_usecases/o7uwyjy/
false
1
t1_o7uwqb4
Hey, this is cool. One question: some models are released on a weekly basis, like Qwen 3.5 coming next week. Are you going to add these manually, or is there some script to fetch them?
1
0
2026-02-28T10:23:02
Present-Ad-8531
false
null
0
o7uwqb4
false
/r/LocalLLaMA/comments/1rg94wu/llmfit_one_command_to_find_what_model_runs_on/o7uwqb4/
false
1
t1_o7uwo5c
> "your reasoning of choosing which layers to quant is simple yet obviously very powerful" - link?
1
0
2026-02-28T10:22:27
DistanceSolar1449
false
null
0
o7uwo5c
false
/r/LocalLLaMA/comments/1rgel19/new_qwen3535ba3b_unsloth_dynamic_ggufs_benchmarks/o7uwo5c/
false
1
t1_o7uwmuf
27b as a dense model mostly gives better output, since all of its parameters are always involved, while 35b is a MoE model, meaning only some of its experts (about 4b worth) get a say in the answer, but that makes it way faster. But that's just the general-knowledge answer. It all depends on your finetune and preset parameters in the end.
4
0
2026-02-28T10:22:05
getmevodka
false
null
0
o7uwmuf
false
/r/LocalLLaMA/comments/1rgvma8/which_size_of_qwen35_are_you_planning_to_run/o7uwmuf/
false
4
t1_o7uwm54
Show me a model as good as pre-censorship GPT4.
4
0
2026-02-28T10:21:54
Misha_Vozduh
false
null
0
o7uwm54
false
/r/LocalLLaMA/comments/1rgkc1b/back_in_my_day_localllama_were_the_pioneers/o7uwm54/
false
4
t1_o7uwkwl
What template are you using for it? I modified mine as best I can to get tool calling working, but I'm sure other people have a better setup.
6
0
2026-02-28T10:21:34
octopus_limbs
false
null
0
o7uwkwl
false
/r/LocalLLaMA/comments/1rgtxry/is_qwen35_a_coding_game_changer_for_anyone_else/o7uwkwl/
false
6
t1_o7uwglj
Well, we've got to wait for Intel to create a GGUF quant (which is supported by AutoRound). https://github.com/intel/auto-round?tab=readme-ov-file#supported-schemes
1
0
2026-02-28T10:20:24
TitwitMuffbiscuit
false
null
0
o7uwglj
false
/r/LocalLLaMA/comments/1rfds1h/qwen3535ba3b_q4_quantization_comparison/o7uwglj/
false
1
t1_o7uwet9
Same, but q4kxl from unsloth and on a 6000 pro max q
1
0
2026-02-28T10:19:56
getmevodka
false
null
0
o7uwet9
false
/r/LocalLLaMA/comments/1rgvma8/which_size_of_qwen35_are_you_planning_to_run/o7uwet9/
false
1
t1_o7uwdj2
So more capacity for everyone else
3
0
2026-02-28T10:19:36
AnomalyNexus
false
null
0
o7uwdj2
false
/r/LocalLLaMA/comments/1rgn4ki/president_trump_orders_all_federal_agencies_in/o7uwdj2/
false
3
t1_o7uwcsf
What context size are you using?
1
0
2026-02-28T10:19:24
kokroo
false
null
0
o7uwcsf
false
/r/LocalLLaMA/comments/1rei65v/qwen3535ba3b_quantization_quality_speed/o7uwcsf/
false
1
t1_o7uw6mm
Qwen is absolutely awesome. I'm dealing with computer graphics problems above my experience. Qwen gives more diverse and better ideas, actually knows how to structure the project better, and worries a lot more about performance. Comparing to paid Claude here.
2
0
2026-02-28T10:17:42
Negative_Scarcity315
false
null
0
o7uw6mm
false
/r/LocalLLaMA/comments/1rgpwn5/qwen_3527b_punches_waaaaay_above_its_weight_with/o7uw6mm/
false
2
t1_o7uw5sz
That's odd, I haven't had 27b say anything nonsensical yet, maybe a phrase or two here and there, but no more so than other models of comparable size. Though it did feel a bit more dry compared to Qwen3 VL 32b, which I've been using more and more lately.
5
0
2026-02-28T10:17:28
iz-Moff
false
null
0
o7uw5sz
false
/r/LocalLLaMA/comments/1rgvma8/which_size_of_qwen35_are_you_planning_to_run/o7uw5sz/
false
5
t1_o7uw3xh
Now I wonder when we’ll have a coding model as good as modern Codex/Opus at home
2
0
2026-02-28T10:16:57
MasterScrat
false
null
0
o7uw3xh
false
/r/LocalLLaMA/comments/1rgkc1b/back_in_my_day_localllama_were_the_pioneers/o7uw3xh/
false
2
t1_o7uw1ol
Is the tooling around the comic gen open source?
1
0
2026-02-28T10:16:19
RonnyPfannschmidt
false
null
0
o7uw1ol
false
/r/LocalLLaMA/comments/1rgtxry/is_qwen35_a_coding_game_changer_for_anyone_else/o7uw1ol/
false
1
t1_o7uw0yi
How are you running it? Same hardware here
1
0
2026-02-28T10:16:08
Medium_Chemist_4032
false
null
0
o7uw0yi
false
/r/LocalLLaMA/comments/1rf2ulo/qwen35_122b_in_72gb_vram_3x3090_is_the_best_model/o7uw0yi/
false
1
t1_o7uvxq4
I swear this sub has a new "this local model feels like Sonnet" post every 3 weeks, and it's never actually true when you stress test it. The pattern is always the same: someone runs it on their personal projects where they already know the codebase, it does well because they're unconsciously steering it with good promp...
2
0
2026-02-28T10:15:13
No-Understanding2406
false
null
0
o7uvxq4
false
/r/LocalLLaMA/comments/1rg6ph3/qwen35_feels_ready_for_production_use_never_been/o7uvxq4/
false
2
t1_o7uvxk8
Your post is getting popular and we just featured it on our Discord! [Come check it out!](https://discord.gg/PgFhZ8cnWW) You've also been given a special flair for your contribution. We appreciate your post! *I am a bot and this action was performed automatically.*
1
0
2026-02-28T10:15:10
WithoutReason1729
false
null
0
o7uvxk8
false
/r/LocalLLaMA/comments/1rgn4ki/president_trump_orders_all_federal_agencies_in/o7uvxk8/
true
1
t1_o7uvvzw
[removed]
1
0
2026-02-28T10:14:44
[deleted]
true
null
0
o7uvvzw
false
/r/LocalLLaMA/comments/1rgwryb/speculative_decoding_qwen35_27b/o7uvvzw/
false
1
t1_o7uvv9p
reasoning-budget: 0. That's what you're looking for. Tried it on my RTX 2060 (yes, hahaha) and get around 3 t/s, so I cannot afford to have reasoning enabled. (See the sketch after this comment.)
1
0
2026-02-28T10:14:32
AppealSame4367
false
null
0
o7uvv9p
false
/r/LocalLLaMA/comments/1rglgma/qwen_35_llamacpp_turn_of_reasoning_and_performance/o7uvv9p/
false
1
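For context, the flag mentioned above is llama.cpp's reasoning budget. A minimal llama-server invocation with thinking disabled might look like the sketch below; the model path and context size are illustrative placeholders, not taken from the comment.

```
# Minimal sketch: disable thinking entirely via llama.cpp's reasoning budget.
# The model path and context size are illustrative placeholders.
llama-server \
  -m ./Qwen3.5-35B-A3B-UD-Q4_K_M.gguf \
  --reasoning-budget 0 \
  -c 16384
```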
t1_o7uvtni
What was your experience like using cachyos? How much tinkering was required to get everything running?
1
0
2026-02-28T10:14:04
doesitoffendyou
false
null
0
o7uvtni
false
/r/LocalLLaMA/comments/1rgyd8p/switching_from_windows_to_linux_what_distro_to/o7uvtni/
false
1
t1_o7uvoao
I miss the times when everyone was finetuning Llama 1; it felt much more open, like a community where anyone could contribute, rather than now, where we're just begging Chinese labs to give us better models. And quantization feels like miracle magic for a reason, even more so back in the day. Because Llama 1 and 2 ...
8
0
2026-02-28T10:12:37
TechnoByte_
false
null
0
o7uvoao
false
/r/LocalLLaMA/comments/1rgkc1b/back_in_my_day_localllama_were_the_pioneers/o7uvoao/
false
8
t1_o7uvk57
You probably couldn't teach Vicuna at the time though, it was the first heavily aligned local model
2
0
2026-02-28T10:11:29
_Erilaz
false
null
0
o7uvk57
false
/r/LocalLLaMA/comments/1rgkc1b/back_in_my_day_localllama_were_the_pioneers/o7uvk57/
false
2
t1_o7uvhb8
What was your experience with both of them? Did you prefer either one?
1
0
2026-02-28T10:10:43
doesitoffendyou
false
null
0
o7uvhb8
false
/r/LocalLLaMA/comments/1rgyd8p/switching_from_windows_to_linux_what_distro_to/o7uvhb8/
false
1
t1_o7uvdaf
122b REAP
1
0
2026-02-28T10:09:38
Last-Progress18
false
null
0
o7uvdaf
false
/r/LocalLLaMA/comments/1rgvma8/which_size_of_qwen35_are_you_planning_to_run/o7uvdaf/
false
1
t1_o7uvcoq
I was trying to use it with Qwen3 Coder Next earlier today. But with vLLM v0.15.1 it would crash while loading. I'm not sure if it's an issue with the model, vLLM, or ROCm. I'll have to do some more testing this weekend.
2
0
2026-02-28T10:09:28
AustinM731
false
null
0
o7uvcoq
false
/r/LocalLLaMA/comments/1rgvma8/which_size_of_qwen35_are_you_planning_to_run/o7uvcoq/
false
2
t1_o7uvawu
I think the only reason I went more and more towards Python is that I can get it to go into documents from lots of different places, including editing them as well, and it just seems intuitive. But I've already downloaded R last night on your recommendation and will have a go with it over the weekend and while writing my nex...
1
0
2026-02-28T10:08:59
LTP-N
false
null
0
o7uvawu
false
/r/LocalLLaMA/comments/1rg3da6/are_there_any_particular_offline_models_i_could/o7uvawu/
false
1
t1_o7uv0xg
You can try AesSedai/Qwen3.5-35B-A3B-GGUF Q5_K_M. A 5070 Ti works well. Surprise! (See the sketch after this comment.)
2
0
2026-02-28T10:06:15
moahmo88
false
null
0
o7uv0xg
false
/r/LocalLLaMA/comments/1rg4zqv/followup_qwen3535ba3b_7_communityrequested/o7uv0xg/
false
2
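As a convenience, llama.cpp can also pull a GGUF straight from Hugging Face by repo and quant tag. A minimal sketch using the repo named in the comment above; the context size is an illustrative placeholder.

```
# Minimal sketch: fetch and serve the quant directly from Hugging Face.
# The repo/quant tag is from the comment above; -c is an illustrative placeholder.
llama-server -hf AesSedai/Qwen3.5-35B-A3B-GGUF:Q5_K_M -c 8192
```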
t1_o7uv0u0
Vicuna-33B was the GOAT, now it's just a goat.
2
0
2026-02-28T10:06:13
MoffKalast
false
null
0
o7uv0u0
false
/r/LocalLLaMA/comments/1rgkc1b/back_in_my_day_localllama_were_the_pioneers/o7uv0u0/
false
2
t1_o7uuwh8
It's very, very tough to beat Claude Code if it's well set up. I have zero issues paying for it. That being said, 3.5 seems like it'll be really capable of being a good agent, and it can just spin up CC. It's cool that all these pieces are starting to come together.
1
0
2026-02-28T10:05:01
musicsurf
false
null
0
o7uuwh8
false
/r/LocalLLaMA/comments/1rgtxry/is_qwen35_a_coding_game_changer_for_anyone_else/o7uuwh8/
false
1
t1_o7uuvuo
Glad for it!
1
0
2026-02-28T10:04:51
FORNAX_460
false
null
0
o7uuvuo
false
/r/LocalLLaMA/comments/1rgek4m/what_are_your_expectations_for_the_small_series/o7uuvuo/
false
1
t1_o7uuv8e
I'm not sure I understand your question, but the model runs on my server inside Docker using llama.cpp, and OpenCode runs on my laptop inside Docker and connects to the server for its inference tasks. The Ralph-like setup is just an OpenCode command that tells it to take a task and work on it, and then there's a bash ...
2
0
2026-02-28T10:04:40
paulgear
false
null
0
o7uuv8e
false
/r/LocalLLaMA/comments/1rgtxry/is_qwen35_a_coding_game_changer_for_anyone_else/o7uuv8e/
false
2
t1_o7uuuw4
What are your use cases?
1
0
2026-02-28T10:04:35
Potential-Leg-639
false
null
0
o7uuuw4
false
/r/LocalLLaMA/comments/1rgvma8/which_size_of_qwen35_are_you_planning_to_run/o7uuuw4/
false
1
t1_o7uutq3
Me too. I found it low quality on most of my tasks.
4
0
2026-02-28T10:04:16
_supert_
false
null
0
o7uutq3
false
/r/LocalLLaMA/comments/1rgokw1/a_monthly_update_to_my_where_are_openweight/o7uutq3/
false
4
t1_o7uunw6
Did you try the Q4_K_XL quant? It should fit in 24 GB as long as you enable the q8_0 KV cache quant (see the sketch after this comment).
6
0
2026-02-28T10:02:39
paulgear
false
null
0
o7uunw6
false
/r/LocalLLaMA/comments/1rgtxry/is_qwen35_a_coding_game_changer_for_anyone_else/o7uunw6/
false
6
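For anyone trying to reproduce the 24 GB fit, the KV cache quant is two llama.cpp flags. A minimal sketch; the model path and context length are illustrative placeholders.

```
# Minimal sketch: q8_0 KV cache to shrink context memory; -ctk/-ctv set the
# K and V cache types. Model path and -c are illustrative placeholders.
# Depending on your build, the quantized V cache may also require flash attention.
llama-server \
  -m ./Qwen3.5-35B-A3B-UD-Q4_K_XL.gguf \
  -ctk q8_0 -ctv q8_0 \
  -c 32768
```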
t1_o7uuhkr
ggml when?
4
0
2026-02-28T10:00:56
TechnoByte_
false
null
0
o7uuhkr
false
/r/LocalLLaMA/comments/1rgkc1b/back_in_my_day_localllama_were_the_pioneers/o7uuhkr/
false
4
t1_o7uucdy
I still remember SuperHOT-8k
2
0
2026-02-28T09:59:30
TechnoByte_
false
null
0
o7uucdy
false
/r/LocalLLaMA/comments/1rgkc1b/back_in_my_day_localllama_were_the_pioneers/o7uucdy/
false
2
t1_o7uubch
Remember Vicuna days? Wizard? Oh those were some models...
5
0
2026-02-28T09:59:13
Barry_22
false
null
0
o7uubch
false
/r/LocalLLaMA/comments/1rgkc1b/back_in_my_day_localllama_were_the_pioneers/o7uubch/
false
5
t1_o7uuafs
You made my day! 👍
2
0
2026-02-28T09:58:59
Adventurous-Paper566
false
null
0
o7uuafs
false
/r/LocalLLaMA/comments/1rgek4m/what_are_your_expectations_for_the_small_series/o7uuafs/
false
2
t1_o7uu2gu
Thanks! Time to redownload. Btw, I see some MXFP4 in Qwen3.5-397B-A17B-UD-Q4_K_XL; is that one also affected?
1
0
2026-02-28T09:56:45
relmny
false
null
0
o7uu2gu
false
/r/LocalLLaMA/comments/1rgel19/new_qwen3535ba3b_unsloth_dynamic_ggufs_benchmarks/o7uu2gu/
false
1
t1_o7uu1ci
Amazing! Thanks.
2
0
2026-02-28T09:56:26
moahmo88
false
null
0
o7uu1ci
false
/r/LocalLLaMA/comments/1rg4zqv/followup_qwen3535ba3b_7_communityrequested/o7uu1ci/
false
2
t1_o7uttnv
Have you cloned the repo previously, or did you just launch the install bat?
1
0
2026-02-28T09:54:19
RIP26770
false
null
0
o7uttnv
false
/r/LocalLLaMA/comments/1rabo34/local_tts_server_with_voice_cloning_nearrealtime/o7uttnv/
false
1
t1_o7uttlg
Excellent analysis, there's just a minor catch: most modern LLMs utilize SwiGLU and SiLU activations (you can verify this in the config.json). The formula is: $$\text{Expert}(x) = (\text{SiLU}(x W_{gate}) \cdot (x W_{up})) W_{down}$$ (a worked parameter count follows this comment). This architecture uses three matrices of equal parameter size (including the g...
2
0
2026-02-28T09:54:17
Sad-Pickle4282
false
null
0
o7uttlg
false
/r/LocalLLaMA/comments/1rg4apu/qwen_35_architecture_analysis_parameter/o7uttlg/
false
2
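To make the "three matrices of equal parameter size" point concrete, here is a quick worked count under the standard SwiGLU shapes; the symbols $d_{model}$ and $d_{ff}$ are generic, not taken from any particular config.json.

$$W_{gate}, W_{up} \in \mathbb{R}^{d_{model} \times d_{ff}}, \qquad W_{down} \in \mathbb{R}^{d_{ff} \times d_{model}}$$

$$\text{params per expert} = \underbrace{d_{model} d_{ff}}_{W_{gate}} + \underbrace{d_{model} d_{ff}}_{W_{up}} + \underbrace{d_{ff} d_{model}}_{W_{down}} = 3\, d_{model}\, d_{ff}$$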
t1_o7utoqs
Qwen 3.5 122b Windows 11 Ollama (Q4_K_M) Single RTX Pro 6000
1
0
2026-02-28T09:52:57
SirOakTree
false
null
0
o7utoqs
false
/r/LocalLLaMA/comments/1rgvma8/which_size_of_qwen35_are_you_planning_to_run/o7utoqs/
false
1
t1_o7utnlb
Don't know about distros with everything pre-installed, and tbh it's Linux, so you can add anything yourself. The "real" difference the distro makes is mostly in how and at what frequency it handles updates, what it uses to install and update packages, whether it's willing to let you break the system, and possib...
1
0
2026-02-28T09:52:39
AcceSpeed
false
null
0
o7utnlb
false
/r/LocalLLaMA/comments/1rgyd8p/switching_from_windows_to_linux_what_distro_to/o7utnlb/
false
1
t1_o7utk53
https://preview.redd.it/…the movie title.
2
0
2026-02-28T09:51:43
MoffKalast
false
null
0
o7utk53
false
/r/LocalLLaMA/comments/1rgkc1b/back_in_my_day_localllama_were_the_pioneers/o7utk53/
false
2
t1_o7utics
Thank you!
1
0
2026-02-28T09:51:14
Lorian0x7
false
null
0
o7utics
false
/r/LocalLLaMA/comments/1rgp97u/sooo_much_thinking/o7utics/
false
1
t1_o7uti8c
Try using the DEER technique (dynamic early exit): basically you set a confidence score for cutting off reasoning tokens to improve latency. It usually results in better accuracy, as it reduces overthinking.
1
0
2026-02-28T09:51:12
sayamss
false
null
0
o7uti8c
false
/r/LocalLLaMA/comments/1rgp97u/sooo_much_thinking/o7uti8c/
false
1
t1_o7uth06
I think it is good for open source. I do think Anthropic is somewhat better than OpenAI, but still, in light of all this transpiring, I have a strong urge to move to self-hosted.
4
0
2026-02-28T09:50:52
Barry_22
false
null
0
o7uth06
false
/r/LocalLLaMA/comments/1rguty0/get_your_local_models_in_order_anthropic_just_got/o7uth06/
false
4
t1_o7utgx6
I agree. This generation of models from OpenAI and Anthropic are significantly better than the last gen for coding.
3
0
2026-02-28T09:50:51
rorykoehler
false
null
0
o7utgx6
false
/r/LocalLLaMA/comments/1rgokw1/a_monthly_update_to_my_where_are_openweight/o7utgx6/
false
3
t1_o7utg4k
curious, how is 27B vs 35B q4 (on nvidia gpu) in text analysis - does anyone know?
3
0
2026-02-28T09:50:39
vogelvogelvogelvogel
false
null
0
o7utg4k
false
/r/LocalLLaMA/comments/1rgvma8/which_size_of_qwen35_are_you_planning_to_run/o7utg4k/
false
3
t1_o7utep6
I've been digging for some openclaw alternatives, and some of them were vibecoded using Claude. If you think that code is going to be replaced by good software engineers, then there are companies that are going to have some quite big problems with their software. Coding by the meter is not something I would call software eng...
1
0
2026-02-28T09:50:16
danigoncalves
false
null
0
o7utep6
false
/r/LocalLLaMA/comments/1rgokw1/a_monthly_update_to_my_where_are_openweight/o7utep6/
false
1
t1_o7utegw
[Here](https://i.ibb.co/0VKhFDCp/Untitled.png). With safetensor models, the template file should be in the folder with the model, I think, so you could also just edit it directly.
2
0
2026-02-28T09:50:12
iz-Moff
false
null
0
o7utegw
false
/r/LocalLLaMA/comments/1rgp97u/sooo_much_thinking/o7utegw/
false
2
t1_o7utadg
gpt-5.3-codex-high is amazing... similar level to Claude Opus 4.6 thinking
2
0
2026-02-28T09:49:04
rorykoehler
false
null
0
o7utadg
false
/r/LocalLLaMA/comments/1rgokw1/a_monthly_update_to_my_where_are_openweight/o7utadg/
false
2
t1_o7ut7um
CachyOS
1
0
2026-02-28T09:48:21
Fresh_Finance9065
false
null
0
o7ut7um
false
/r/LocalLLaMA/comments/1rgyd8p/switching_from_windows_to_linux_what_distro_to/o7ut7um/
false
1
t1_o7ut719
There is no Anthropic downfall; you can't guarantee that any of this is more than a show. Have some common sense: America's history has always been "look how nice we are!" (and 10 years later some kind of crime against humanity is discovered). It was like that in all the invasions; the Epstein case shows this too. All of America is a...
5
0
2026-02-28T09:48:07
charmander_cha
false
null
0
o7ut719
false
/r/LocalLLaMA/comments/1rguty0/get_your_local_models_in_order_anthropic_just_got/o7ut719/
false
5
t1_o7ut61z
No Stepfun3-5? It's up there in coding + speed.
1
0
2026-02-28T09:47:50
oxygen_addiction
false
null
0
o7ut61z
false
/r/LocalLLaMA/comments/1rgokw1/a_monthly_update_to_my_where_are_openweight/o7ut61z/
false
1
t1_o7ut5q5
People forget how bad 3.5-turbo was in practice; if you believe the embedding leaks, it was only roughly Mixtral-sized and heavily undertrained on noisy data, which was standard at the time. A 7B from a year ago stomps it by a significant margin.
2
0
2026-02-28T09:47:45
MoffKalast
false
null
0
o7ut5q5
false
/r/LocalLLaMA/comments/1rgkc1b/back_in_my_day_localllama_were_the_pioneers/o7ut5q5/
false
2
t1_o7ut4cs
I migrated from Ollama to LM Studio, and probably llama.cpp raw someday? Maybe not though, because LM Studio runs on llama.cpp. I wouldn't even know where to begin with vLLM, and I have only seen 'sglang' mentioned on huggingface pages, never actually from a person. What an ecosystem we have. But hey, free vram is free ...
2
0
2026-02-28T09:47:21
_-_David
false
null
0
o7ut4cs
false
/r/LocalLLaMA/comments/1rgvma8/which_size_of_qwen35_are_you_planning_to_run/o7ut4cs/
false
2
t1_o7ut3d7
Alright, I gave this tutorial a try, compiled llama.cpp with the params as described, running on an RTX 5060 Ti 16GB + 64GB DDR5 6400 MT/s, and I'm only getting 50 t/s. Did I do something wrong? Using CUDA 13.1 and the latest NVIDIA drivers.
3
0
2026-02-28T09:47:04
soyalemujica
false
null
0
o7ut3d7
false
/r/LocalLLaMA/comments/1rg4zqv/followup_qwen3535ba3b_7_communityrequested/o7ut3d7/
false
3
t1_o7ut1ik
llama.cpp and Ollama both manage spreading the model across the available cards automatically, and I haven't been unhappy with the performance. PCIe might be a bottleneck; I've heard people use direct-attach cables, but I haven't really tried to maximise the performance. (See the sketch after this comment.)
5
0
2026-02-28T09:46:34
paulgear
false
null
0
o7ut1ik
false
/r/LocalLLaMA/comments/1rgtxry/is_qwen35_a_coding_game_changer_for_anyone_else/o7ut1ik/
false
5
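The automatic split mentioned above can also be pinned manually. A minimal sketch of the relevant llama.cpp flags; the model path and the ratio are illustrative placeholders.

```
# Minimal sketch: split by layer across two GPUs and pin the VRAM ratio.
# -sm layer is llama.cpp's default split mode; -ts overrides the ratio.
# Model path and the 1,1 ratio are illustrative placeholders.
llama-server -m ./model.gguf -sm layer -ts 1,1
```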
t1_o7usy1u
The latest 397B-A17B is better on all benchmarks than Qwen Max-Thinking, no? Also cheaper on the API.
1
0
2026-02-28T09:45:36
oxygen_addiction
false
null
0
o7usy1u
false
/r/LocalLLaMA/comments/1rgokw1/a_monthly_update_to_my_where_are_openweight/o7usy1u/
false
1
t1_o7uswsk
Tech evolves because of gooners. From video to ai, so much improved because the gooners wanted to goon.
1
0
2026-02-28T09:45:15
Tasty_Victory_3206
false
null
0
o7uswsk
false
/r/LocalLLaMA/comments/1rgkc1b/back_in_my_day_localllama_were_the_pioneers/o7uswsk/
false
1
t1_o7ust9p
True! I was *so* confused with the speed, and the outputs were wild. Definitely a bad version.
1
0
2026-02-28T09:44:15
_-_David
false
null
0
o7ust9p
false
/r/LocalLLaMA/comments/1rgp97u/sooo_much_thinking/o7ust9p/
false
1
t1_o7usrj9
Then you don't know the old days. We'd chew the hell outta that astroturfing sleazebag.
5
0
2026-02-28T09:43:47
Tasty_Victory_3206
false
null
0
o7usrj9
false
/r/LocalLLaMA/comments/1rgkc1b/back_in_my_day_localllama_were_the_pioneers/o7usrj9/
false
5
t1_o7uspua
122b, once a GGUF using Heretic is released. This model is too refusal-laden as-is for roleplay. Thorh has uploaded the tensors for a Heretical 122b, but I want to see if they are willing to upload a NoSlop version before asking Mradarancher to quantize the model. https://huggingface.co/trohrbaugh/Qwen3.5-122B-A10...
2
0
2026-02-28T09:43:17
Sabin_Stargem
false
null
0
o7uspua
false
/r/LocalLLaMA/comments/1rgvma8/which_size_of_qwen35_are_you_planning_to_run/o7uspua/
false
2
t1_o7uslne
Chat completion is a good topic. Currently the imatrix approach does not parse special tokens and thus (and for another reason) **cannot train the exact chat flows** that it's being used for 99.9% of the time in practice. I did a bunch of testing (and code changes) on that in the past. Here's a [lengthy discussion](htt...
3
0
2026-02-28T09:42:08
Chromix_
false
null
0
o7uslne
false
/r/LocalLLaMA/comments/1rgel19/new_qwen3535ba3b_unsloth_dynamic_ggufs_benchmarks/o7uslne/
false
3
t1_o7usllm
https://www.reddit.com/r/LocalLLaMA/s/9kkB68jmZv
2
0
2026-02-28T09:42:07
keally1123
false
null
0
o7usllm
false
/r/LocalLLaMA/comments/1rgxywo/new_claude_21_refuses_to_kill_a_python_process/o7usllm/
false
2
t1_o7uslbl
Interested to see how you get on. I'm in a similar situation, with one small and one larger setup, only a teensy bit smaller: 32GB and 80GB. I've grabbed a copy of 27b Q6 for the 32GB and 122b Q4 for the 80GB; just got to clear the time to sit and really do some comparisons to get a better sense.
1
0
2026-02-28T09:42:03
munkiemagik
false
null
0
o7uslbl
false
/r/LocalLLaMA/comments/1rgvma8/which_size_of_qwen35_are_you_planning_to_run/o7uslbl/
false
1
t1_o7usl14
Almost every time someone praises qwen for open sourcing a model, I think about how nice it would have been if they would have released Wan 2.5 or 2.6.. Wan 2.2 is cool, but there is potential for so much more. Speaking of which.. I heard the Seedance 2 model weights were leaked. 96b parameters. I'd buy a few more 5060...
1
0
2026-02-28T09:41:58
_-_David
false
null
0
o7usl14
false
/r/LocalLLaMA/comments/1rgtxry/is_qwen35_a_coding_game_changer_for_anyone_else/o7usl14/
false
1
t1_o7usjs7
np, this is a Nomad job file. Nomad is not widely used, but it's easier to set up & maintain than Kubernetes.
2
0
2026-02-28T09:41:38
Tartarus116
false
null
0
o7usjs7
false
/r/LocalLLaMA/comments/1r8rgcp/minimax_25_on_strix_halo_thread/o7usjs7/
false
2
t1_o7usekx
Fair point 
6
0
2026-02-28T09:40:12
KeikakuAccelerator
false
null
0
o7usekx
false
/r/LocalLLaMA/comments/1rguty0/get_your_local_models_in_order_anthropic_just_got/o7usekx/
false
6
t1_o7us49p
Fair enough! But if you can get a bit of time to learn it, R is a very forgiving language, and RStudio makes things as easy as possible. LLMs are great at R coding, also :)
1
0
2026-02-28T09:37:19
llmentry
false
null
0
o7us49p
false
/r/LocalLLaMA/comments/1rg3da6/are_there_any_particular_offline_models_i_could/o7us49p/
false
1
t1_o7us3xb
It depends on your GPU. I am using a 3070 and getting 28 t/s. My command is something like this: `$env:LLAMA_CACHE="unsloth/Qwen3.5-35B-A3B-GGUF"; $env:LLAMA_SET_ROWS=1; llama-server -m C:/Users/T/.cache/lm-studio/models/unsloth/Qwen3.5-35B-A3B-GGUF/Qwen3.5-35B-A3B-UD-Q4_K_M.gguf -ub 2048 -ctk f16 -ctv f16 -sm none -mg...
1
0
2026-02-28T09:37:12
Snoo_28140
false
null
0
o7us3xb
false
/r/LocalLLaMA/comments/1rgp97u/sooo_much_thinking/o7us3xb/
false
1
t1_o7us18s
gguf when?
5
0
2026-02-28T09:36:26
christianweyer
false
null
0
o7us18s
false
/r/LocalLLaMA/comments/1rgkc1b/back_in_my_day_localllama_were_the_pioneers/o7us18s/
false
5
t1_o7uryky
Please make sure to report that as [SYCL issue](https://github.com/ggml-org/llama.cpp/issues?q=is%3Aissue%20state%3Aopen%20sycl) with all your details then, so that it can get fixed (and you'll get faster speeds)
1
0
2026-02-28T09:35:40
Chromix_
false
null
0
o7uryky
false
/r/LocalLLaMA/comments/1qz5uww/qwen3_coder_next_as_first_usable_coding_model_60/o7uryky/
false
1
t1_o7urxam
How do you combine VRAM across 2 or more cards? Isn't PCI bandwidth a bottleneck? I run 27b Q3_K_M on a 5070 Ti but need to lower the context to 32k. I'm thinking about how I could extend that, because for agentic coding that's a very small number.
2
0
2026-02-28T09:35:17
ppsirius
false
null
0
o7urxam
false
/r/LocalLLaMA/comments/1rgtxry/is_qwen35_a_coding_game_changer_for_anyone_else/o7urxam/
false
2
t1_o7urqmk
Hmm, interesting. Will test it. Mostly going to run it for coding.
1
0
2026-02-28T09:33:25
Uranday
false
null
0
o7urqmk
false
/r/LocalLLaMA/comments/1rglgma/qwen_35_llamacpp_turn_of_reasoning_and_performance/o7urqmk/
false
1
t1_o7urlnp
Bruh
2
0
2026-02-28T09:32:01
Present-Ad-8531
false
null
0
o7urlnp
false
/r/LocalLLaMA/comments/1rgp97u/sooo_much_thinking/o7urlnp/
false
2
t1_o7url4v
Why is that scary for the future of self-hosted LLMs?
5
0
2026-02-28T09:31:52
ossbournemc
false
null
0
o7url4v
false
/r/LocalLLaMA/comments/1rguty0/get_your_local_models_in_order_anthropic_just_got/o7url4v/
false
5
t1_o7urjvm
That is the most American thing I've read today: "sue everyone and everything for millions because why not". I believe Chutes might even be outside of US jurisdiction (not sure though).
-2
0
2026-02-28T09:31:30
drumyum
false
null
0
o7urjvm
false
/r/LocalLLaMA/comments/1rgu849/if_your_chutesai_subscription_was_unilaterally/o7urjvm/
false
-2
t1_o7urijm
Instead of disabling thinking you might benefit from simply [making it shorter](https://www.reddit.com/r/LocalLLaMA/comments/1rehykx/qwen35_low_reasoning_effort_trick_in_llamaserver/) (and thus a lot faster). That way Qwen mostly skips reasoning for simple tasks, yet at least spends a few seconds on more complex ones.
1
0
2026-02-28T09:31:06
Chromix_
false
null
0
o7urijm
false
/r/LocalLLaMA/comments/1rglgma/qwen_35_llamacpp_turn_of_reasoning_and_performance/o7urijm/
false
1
t1_o7urf05
Ok, I will work on this first, thank you!
1
0
2026-02-28T09:30:08
SpellGlittering1901
false
null
0
o7urf05
false
/r/LocalLLaMA/comments/1rgf12v/how_to_chose_the_right_model/o7urf05/
false
1
t1_o7urf0v
[removed]
1
0
2026-02-28T09:30:08
[deleted]
true
null
0
o7urf0v
false
/r/LocalLLaMA/comments/1rgtxry/is_qwen35_a_coding_game_changer_for_anyone_else/o7urf0v/
false
1
t1_o7urbwg
Makes sense, thank you!
1
0
2026-02-28T09:29:16
SpellGlittering1901
false
null
0
o7urbwg
false
/r/LocalLLaMA/comments/1rgf12v/how_to_chose_the_right_model/o7urbwg/
false
1
t1_o7urb5j
I fit this in 16g VRAM and 128k context https://huggingface.co/cerebras/GLM-4.7-Flash-REAP-23B-A3B
1
0
2026-02-28T09:29:04
ppsirius
false
null
0
o7urb5j
false
/r/LocalLLaMA/comments/1rgtxry/is_qwen35_a_coding_game_changer_for_anyone_else/o7urb5j/
false
1
t1_o7urakq
This is actually why running local matters more than the benchmark arguments. If your inference depends on a US vendor's API, you've outsourced the risk of exactly this kind of political volatility. Your point about precedent is right — once it's normal for governments to strong-arm AI providers, which vendor are you e...
1
0
2026-02-28T09:28:54
BreizhNode
false
null
0
o7urakq
false
/r/LocalLLaMA/comments/1rguty0/get_your_local_models_in_order_anthropic_just_got/o7urakq/
false
1