name
string
body
string
score
int64
controversiality
int64
created
timestamp[us]
author
string
collapsed
bool
edited
timestamp[us]
gilded
int64
id
string
locked
bool
permalink
string
stickied
bool
ups
int64
t1_o8iabbr
Yes, *if* you have some experience. This UI would be for people who likely don't have much experience running any kind of AI model. It's the same concept as when DALL-E first came out: it was made as simple as possible, since generative AI was still a new concept in the public eye. If they had made it a complex system allowing users to modify every setting DALL-E had, it would have been too complex and overwhelming for ~80% of the people wanting to use it.
1
0
2026-03-04T00:15:29
Ill-Oil-2027
false
null
0
o8iabbr
false
/r/LocalLLaMA/comments/1rjuccw/would_you_be_interested_in_a_fully_local_ai_3d/o8iabbr/
false
1
t1_o8iaaiz
There are multiple phases of training nowadays; it isn't as simple as pretrain -> SFT -> RLVR. There is a concept of midtraining, where you anneal while training on the highest-quality data you have, and this is still supposedly before the instruct tuning, but, you know, it usually has some instruct data in there. Arcee released a base model for Trinity as well, so there are some options at least in the 200-400B range now. PS: Gemma pretrained has also seen chat templates; it is not just Qwen.
1
0
2026-03-04T00:15:21
llama-impersonator
false
null
0
o8iaaiz
false
/r/LocalLLaMA/comments/1rjyngn/are_true_base_models_dead/o8iaaiz/
false
1
t1_o8ia920
Those salaries ain't paying themselves if their models run on your potato GPU. That being said, they definitely cooked with this release. It is finally "good enough" for the great majority of casual use cases. Well done, Qwen team.
1
0
2026-03-04T00:15:07
LocoMod
false
null
0
o8ia920
false
/r/LocalLLaMA/comments/1rjtzyn/junyang_lin_has_left_qwen/o8ia920/
false
1
t1_o8i9zdq
I know, that actually seems cool. Maybe I really need to try it and give up the sm graph function of ik_llama.cpp :( It would be a dream if all these forks joined forces so we had the autoparser with -fit and sm graph xD. But I'll really give it a try, thanks for the info.
1
0
2026-03-04T00:13:30
Noobysz
false
null
0
o8i9zdq
false
/r/LocalLLaMA/comments/1rk2pll/step_flash_35_toolcall_and_thinking_godforsaken/o8i9zdq/
false
1
t1_o8i9ye7
ROCm imo is too much of a hassle to get functioning, and afaik it's Linux only. Vulkan is system agnostic, meaning it doesn't care if you have Intel, AMD, Nvidia, Metal (Mac), or Adreno (Android phones).
1
0
2026-03-04T00:13:21
Ill-Oil-2027
false
null
0
o8i9ye7
false
/r/LocalLLaMA/comments/1rjuccw/would_you_be_interested_in_a_fully_local_ai_3d/o8i9ye7/
false
1
t1_o8i9xnb
For text only: Qwen3.5 27B AWQ 4-bit + speculative decoding (the trick is disabling vision to make room for the speculative decoding companion model) @ 24-32K context. You get 35+ t/s on a single 3090 @ 280W. Or you keep the vision with no speculative decoding (but then you get 18 t/s).
1
0
2026-03-04T00:13:13
j3st3r666
false
null
0
o8i9xnb
false
/r/LocalLLaMA/comments/1qfkn3a/best_end_of_world_model_that_will_run_on_24gb_vram/o8i9xnb/
false
1
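The speculative-decoding trick above can be illustrated with a toy sketch (not the commenter's actual vLLM/AWQ setup): a cheap draft model proposes a run of tokens, the large target model verifies them in one pass, and every verified token is generated at draft speed. Both "models" here are stub functions over integer tokens, introduced purely for illustration.

```python
def draft_propose(prefix, k=4):
    # Hypothetical cheap draft model: guesses a simple increasing run.
    return [prefix[-1] + i + 1 for i in range(k)]

def target_next(prefix):
    # Hypothetical target model: like the draft, except it skips multiples of 5,
    # so the two occasionally disagree.
    nxt = prefix[-1] + 1
    return nxt if nxt % 5 else nxt + 1

def speculative_step(prefix, draft_fn, target_fn, k=4):
    """One decode step (greedy variant): keep the longest draft prefix that
    matches the target's own choices, then let the target add one token."""
    accepted = []
    for tok in draft_fn(prefix, k):
        if target_fn(prefix + accepted) == tok:
            accepted.append(tok)   # draft token verified, costs ~nothing
        else:
            break                  # first mismatch ends the accepted run
    accepted.append(target_fn(prefix + accepted))  # target's own token
    return accepted

print(speculative_step([0], draft_propose, target_next))  # [1, 2, 3, 4, 6]
```

When the draft agrees often (as with a small companion model from the same family), one expensive verification pass yields several tokens, which is where the 18 → 35+ t/s jump comes from.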
t1_o8i9w6z
Jan ai can also run it :D been using the unsloth since release
1
0
2026-03-04T00:12:59
Lemonzest2012
false
null
0
o8i9w6z
false
/r/LocalLLaMA/comments/1rirlvw/qwen_35_2b_and_9b_relesed/o8i9w6z/
false
1
t1_o8i9w3z
It's bad news for open-source LLMs.
1
0
2026-03-04T00:12:58
moahmo88
false
null
0
o8i9w3z
false
/r/LocalLLaMA/comments/1rjtzyn/junyang_lin_has_left_qwen/o8i9w3z/
false
1
t1_o8i9uiy
Dang I just started following him too. (Justin)
1
0
2026-03-04T00:12:42
Spitfire1900
false
null
0
o8i9uiy
false
/r/LocalLLaMA/comments/1rk4gh5/sad_day_for_open_source_gwens_boss_has_left/o8i9uiy/
false
1
t1_o8i9mlx
27B Q4 is blazingly fast on my RTX 5090, getting 1.7k prefill and 60 t/s crunching. 122B Q4 is 100 t/s prefill and 20 t/s gen on my Strix Halo. So yeah, choice depends on your cruncher. The 27B on Strix is way too slow. But loving the 27B and 122B combo.
1
0
2026-03-04T00:11:24
hay-yo
false
null
0
o8i9mlx
false
/r/LocalLLaMA/comments/1rk01ea/qwen35122b_basically_has_no_advantage_over_35b/o8i9mlx/
false
1
t1_o8i9l08
Unlike open-source software, for models we entirely depend on scraps given away by big companies or nation states. It's incredibly frustrating. But it is what it is, and especially with regard to the 2nd point, a certain pragmatism from the community is needed. I say this to you, because you are one of the most vocal posters around here against anything that's not open (especially things *you* personally can't run). Quite naturally you are not a fan of "Chinese Cloud API" around here. I am not their direct user either, but the calculus is pretty simple, if these guys can't figure out a sustainable path forward then open-weights will be the very first casualty, and then everybody here loses. I have been dreading something like this for a while, but didn't expect Qwen to fold first, and for me this is the worst case scenario. So I feel like saying this because I don't think Qwen will be the last. Well obviously I am not saying we should tolerate the apocalyptic level of engagement bots that talk about Chinese models here. But perhaps we can at worst ignore the posts about GLM coding plans or whatever, made by real humans. Surely it's not exactly the same thing as when people talk about Claude subscription, one company still does open-weight, the other never did. GLM-5 *is* open-weight, regardless of whether you and I can run it or not. And the existence of it, and the revenue that they get from it, will both directly dictate whether their next one will be open or not.
1
0
2026-03-04T00:11:08
nullmove
false
null
0
o8i9l08
false
/r/LocalLLaMA/comments/1rjtzyn/junyang_lin_has_left_qwen/o8i9l08/
false
1
t1_o8i9bgi
Use autoparser branch, have been using the IQ4XS quant heavily and haven't had one instance of thinking loops. There is a problem with limit and offset parameter order that I want to address in a follow-up PR
1
0
2026-03-04T00:09:34
ilintar
false
null
0
o8i9bgi
false
/r/LocalLLaMA/comments/1rk2pll/step_flash_35_toolcall_and_thinking_godforsaken/o8i9bgi/
false
1
t1_o8i9acn
* And how much free space do you have on that disk `C:\`? * Are you `administrator`, and do you have permission to write into `C:\Program Files`?
1
0
2026-03-04T00:09:24
scorp123_CH
false
null
0
o8i9acn
false
/r/LocalLLaMA/comments/1mitsok/which_ai_image_generators_are_good_for/o8i9acn/
false
1
t1_o8i96es
Hmm, I've just started thinking about these kinds of use cases as well, but nothing so compelling. Care to share more of your use cases?
1
0
2026-03-04T00:08:45
Sy-Zygy
false
null
0
o8i96es
false
/r/LocalLLaMA/comments/1rianwb/running_qwen35_27b_dense_with_170k_context_at/o8i96es/
false
1
t1_o8i90ho
Thank you. I would never have bought my 7600 if I'd known how much I'd want to do with it but can't, because there's no ROCm or CUDA support.
1
0
2026-03-04T00:07:48
National_Meeting_749
false
null
0
o8i90ho
false
/r/LocalLLaMA/comments/1rjuccw/would_you_be_interested_in_a_fully_local_ai_3d/o8i90ho/
false
1
t1_o8i8w8w
Prompt at the end for translation?
1
0
2026-03-04T00:07:06
moahmo88
false
null
0
o8i8w8w
false
/r/LocalLLaMA/comments/1rjwm8i/qwen359b_abliterated_0_refusals_vision/o8i8w8w/
false
1
t1_o8i8uyk
Because not all of us do. I bought an AMD card before local LLMs really became a thing, and I have plenty of GPU power, but because my card isn't officially ROCm supported I can't do virtually anything with AI, because no one supports Vulkan. It's ALL CUDA. I'm having to hop between free services / pay for training runs I could be doing locally... if only PyTorch would support Vulkan.
1
0
2026-03-04T00:06:54
National_Meeting_749
false
null
0
o8i8uyk
false
/r/LocalLLaMA/comments/1rjuccw/would_you_be_interested_in_a_fully_local_ai_3d/o8i8uyk/
false
1
t1_o8i8sey
Plot twist: they all move to France to work with Le Cun.
1
0
2026-03-04T00:06:29
keepthepace
false
null
0
o8i8sey
false
/r/LocalLLaMA/comments/1rjtzyn/junyang_lin_has_left_qwen/o8i8sey/
false
1
t1_o8i8qdr
Would you make converting it to MLX or have the safetensors published as well?
1
0
2026-03-04T00:06:11
EpsilonAnura
false
null
0
o8i8qdr
false
/r/LocalLLaMA/comments/1rjp08s/qwen354b_uncensored_aggressive_release_gguf/o8i8qdr/
false
1
t1_o8i8lgw
Apple is looking good these days because everything else quadrupled in price. The meta would have been a few 3090s and a server with similar ram to split to. Have your cake and eat it too on prompt speeds.
1
0
2026-03-04T00:05:22
a_beautiful_rhind
false
null
0
o8i8lgw
false
/r/LocalLLaMA/comments/1rjqsv6/apple_unveils_m5_pro_and_m5_max_citing_up_to_4/o8i8lgw/
false
1
t1_o8i8l75
Another (alleged) qwen employee who left today retweeted this https://preview.redd.it/jc6dh44g6xmg1.png?width=1059&format=png&auto=webp&s=59002fb79af35cd835b37d3f7cbd2405906a1327
1
0
2026-03-04T00:05:20
Betadoggo_
false
null
0
o8i8l75
false
/r/LocalLLaMA/comments/1rk4gh5/sad_day_for_open_source_gwens_boss_has_left/o8i8l75/
false
1
t1_o8i8cuu
Yes, multiple times, same error keeps showing up
1
0
2026-03-04T00:03:57
lubezki
false
null
0
o8i8cuu
false
/r/LocalLLaMA/comments/1mitsok/which_ai_image_generators_are_good_for/o8i8cuu/
false
1
t1_o8i8clk
And to not be a leech, my own benchmarks using vLLM and Llama 3.1 70B: **1 × A6000 (Ampere)**: read speed 650-1280+ tok/s (500-985+ words/s), **write speed 14.4-15.1 tok/s** (11.1-11.6 words/s), real-world speed on an unrealistically long prompt: 43.5 seconds. **4 × RTX A4000 20GB (Ada)**: read speed 800-1280+ tok/s (615-985+ words/s), **write speed 20.0-22.8 tok/s** (15-17 words/s), real-world speed on an unrealistically long prompt: 29.2 seconds. **2 × A5000 (Ada)**: **write speed ~22.9 tok/s**. Also, with some careful setup of vLLM, you can serve several users concurrently with each user's tokens/sec mostly unchanged from the single-user case.
1
0
2026-03-04T00:03:55
Fuehnix
false
null
0
o8i8clk
false
/r/LocalLLaMA/comments/1rk61kp/has_anybody_here_had_to_do_research_on_gpu/o8i8clk/
false
1
t1_o8i89ze
Rule 1 - Search before asking. The content is frequently covered in this sub. Please search to see if your question has been answered before creating a new post.
1
0
2026-03-04T00:03:28
LocalLLaMA-ModTeam
false
null
0
o8i89ze
true
/r/LocalLLaMA/comments/1rk4gh5/sad_day_for_open_source_gwens_boss_has_left/o8i89ze/
true
1
t1_o8i89v2
Hi OP, how is it going? I also want to set up a similar setup and use a Jetson Orin Nano as an always-on tiny PC that runs this. I was thinking of adding some automations to let it boot up my real PC, and maybe use SSH if needed. Do you have any suggestions? I think for a cheap solution, using an API is the only choice.
1
0
2026-03-04T00:03:27
Plastic-Business1633
false
null
0
o8i89v2
false
/r/LocalLLaMA/comments/1quw140/setting_up_openclawmoltbot_on_jetson_orin_super/o8i89v2/
false
1
t1_o8i898i
Then I'm happy.
1
0
2026-03-04T00:03:21
TitwitMuffbiscuit
false
null
0
o8i898i
false
/r/LocalLLaMA/comments/1rk5qmr/qwen3527b_q4_quantization_comparison/o8i898i/
false
1
t1_o8i813t
[removed]
1
0
2026-03-04T00:02:02
[deleted]
true
null
0
o8i813t
false
/r/LocalLLaMA/comments/1riwy9w/is_qwen359b_enough_for_agentic_coding/o8i813t/
false
1
t1_o8i80zf
Am I an idiot? Why would you ever use Qwen3 Coder Next over Qwen3.5 27B? It almost matches the performance while being much smaller and faster?
1
0
2026-03-04T00:02:01
TheRealSol4ra
false
null
0
o8i80zf
false
/r/LocalLLaMA/comments/1rhfque/qwen3_coder_next_qwen35_27b_devstral_small_2_rust/o8i80zf/
false
1
t1_o8i804f
Make it a Blender Plugin and then we're talking.
1
0
2026-03-04T00:01:53
ArthurParkerhouse
false
null
0
o8i804f
false
/r/LocalLLaMA/comments/1rjuccw/would_you_be_interested_in_a_fully_local_ai_3d/o8i804f/
false
1
t1_o8i7y8j
Am I an idiot? Why would you ever use Qwen3 Coder Next over Qwen3.5 27B? It almost matches the performance while being much smaller and faster?
1
0
2026-03-04T00:01:34
TheRealSol4ra
false
null
0
o8i7y8j
false
/r/LocalLLaMA/comments/1rhfque/qwen3_coder_next_qwen35_27b_devstral_small_2_rust/o8i7y8j/
false
1
t1_o8i7wnp
Am I an idiot? Why would you ever use Qwen3 Coder Next over Qwen3.5 27B? It almost matches the performance while being much smaller and faster?
1
0
2026-03-04T00:01:19
TheRealSol4ra
false
null
0
o8i7wnp
false
/r/LocalLLaMA/comments/1rhfque/qwen3_coder_next_qwen35_27b_devstral_small_2_rust/o8i7wnp/
false
1
t1_o8i7mz5
This is excellent. In a sea of different options, this truly helps!
1
0
2026-03-03T23:59:44
sig_kill
false
null
0
o8i7mz5
false
/r/LocalLLaMA/comments/1rk5qmr/qwen3527b_q4_quantization_comparison/o8i7mz5/
false
1
t1_o8i7l9a
Every dog has its day. Every time they run it, results shuffle around, since these models are non-deterministic.
1
0
2026-03-03T23:59:27
segmond
false
null
0
o8i7l9a
false
/r/LocalLLaMA/comments/1rk5qzz/qwen3codernext_scored_40_on_latest_swerebench/o8i7l9a/
false
1
t1_o8i7l3x
I read that Alibaba brought in google guy to lead, Qwen guy was like "this some bullshit" and left
1
0
2026-03-03T23:59:25
One-Employment3759
false
null
0
o8i7l3x
false
/r/LocalLLaMA/comments/1rk4gh5/sad_day_for_open_source_gwens_boss_has_left/o8i7l3x/
false
1
t1_o8i7h55
Too bad it’s wrong… also even tineye can get that right… and google image search. Also it’s a beautiful spot, Lisbon is a dead city these days, but still lovely to visit.
1
0
2026-03-03T23:58:47
mrepop
false
null
0
o8i7h55
false
/r/LocalLLaMA/comments/1rjcqm5/qwen_35_4b_is_scary_smart/o8i7h55/
false
1
t1_o8i7gn9
Your clickbait is getting worse and worse.
1
0
2026-03-03T23:58:42
jacek2023
false
null
0
o8i7gn9
false
/r/LocalLLaMA/comments/1rk4gh5/sad_day_for_open_source_gwens_boss_has_left/o8i7gn9/
false
1
t1_o8i7ay4
Have you decided to go huanan or openscad way?
1
0
2026-03-03T23:57:47
paul_tu
false
null
0
o8i7ay4
false
/r/LocalLLaMA/comments/1rjuccw/would_you_be_interested_in_a_fully_local_ai_3d/o8i7ay4/
false
1
t1_o8i78wh
You have a discord tag next to your name. Pipe down.
1
0
2026-03-03T23:57:26
lannistersstark
false
null
0
o8i78wh
false
/r/LocalLLaMA/comments/1rjtzyn/junyang_lin_has_left_qwen/o8i78wh/
false
1
t1_o8i78rm
What are you running, Q2?
1
0
2026-03-03T23:57:25
Vastopian
false
null
0
o8i78rm
false
/r/LocalLLaMA/comments/1rj0m27/qwen35_2b_4b_and_9b_tested_on_raspberry_pi5/o8i78rm/
false
1
t1_o8i73w5
I cannot read that whole post, so I am going to just act like I did... I think the problem you are attempting to express is that local AI isn't cloud AI, so stop trying to make it the replacement. No matter how good local AI gets, it's not going to scratch your Claude or GPT chatbot itch, and that's OK; it has to be treated differently.
1
0
2026-03-03T23:56:38
Lesser-than
false
null
0
o8i73w5
false
/r/LocalLLaMA/comments/1rjxrd5/local_ai_companies_are_emphasizing_the_wrong/o8i73w5/
false
1
t1_o8i70bn
A 4090 should not be much faster than a 3090 (maybe 10%) for single-user inference, because memory bandwidth is pretty similar.
1
0
2026-03-03T23:56:04
zipperlein
false
null
0
o8i70bn
false
/r/LocalLLaMA/comments/1rk5ftz/benchmarked_the_main_gpu_options_for_local_llm/o8i70bn/
false
1
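The "maybe 10%" above follows directly from the spec sheets: single-user decode streams the full set of active weights from VRAM for every token, so the theoretical ceiling is just bandwidth divided by weight size, and the speedup ceiling is the bandwidth ratio. A back-of-the-envelope check (the ~4 GB Q4 model size is an illustrative assumption, not from the comment):

```python
# Published memory bandwidths (GB/s).
BW_3090 = 936    # RTX 3090, GDDR6X
BW_4090 = 1008   # RTX 4090, GDDR6X

# Decode is bandwidth-bound, so the best-case 4090-over-3090 speedup is:
speedup = BW_4090 / BW_3090
print(f"{(speedup - 1) * 100:.1f}%")  # 7.7%, in line with "maybe 10%"

# Illustrative ceiling for a ~7B model at Q4 (~4 GB of active weights):
for bw in (BW_3090, BW_4090):
    print(round(bw / 4), "tok/s theoretical max")
```

Real throughput lands well below these ceilings, but the *ratio* between the two cards tracks the bandwidth ratio closely, which is the commenter's point.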
t1_o8i6wkm
I have this same setup and I use the following: a dummy HDMI or DisplayPort plug (I like an HDMI 2.1 plug for Strix Halo; they are cheap on Amazon), Tailscale, and Sunshine/Moonlight. You can stream 4K/120fps from your Strix Halo box with the HDMI 2.1 plug. Tailscale lets you connect from anywhere with ease, and Sunshine/Moonlight has the best performance; I've never once had an issue with it on Wayland Linux. RustDesk is an option too, but then you have to mess with relay servers if you want it to be secure.
1
0
2026-03-03T23:55:26
socialjusticeinme
false
null
0
o8i6wkm
false
/r/LocalLLaMA/comments/1rjh5wg/unsloth_fixed_version_of_qwen3535ba3b_is/o8i6wkm/
false
1
t1_o8i6vpw
It just isn't working. When using --verbose-prompt I get a <|im_start|>assistant followed by <think> on the next line, an empty line, and </think> on another line after that, meaning the thinking part is ignored.
1
0
2026-03-03T23:55:18
WowSkaro
false
null
0
o8i6vpw
false
/r/LocalLLaMA/comments/1rjzlrn/are_the_9b_or_smaller_qwen35_models_unthinking/o8i6vpw/
false
1
t1_o8i6vgl
What is your presence penalty? I set mine to 1 and it helps. Unsloth recommends 1.5 for thinking models on generic tasks.
1
0
2026-03-03T23:55:15
Guilty_Rooster_6708
false
null
0
o8i6vgl
false
/r/LocalLLaMA/comments/1rk2jnj/has_anyone_found_a_way_to_stop_qwen_35_35b_3b/o8i6vgl/
false
1
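For anyone unsure where that knob lives: `presence_penalty` is a standard field in OpenAI-compatible chat-completion requests, which most local servers (llama.cpp's llama-server, LM Studio, etc.) accept. A minimal sketch of the request body; the model id and endpoint are placeholders for whatever you run locally, not anything from the thread:

```python
import json

# Hypothetical request to a local OpenAI-compatible endpoint, e.g.
# http://localhost:8080/v1/chat/completions (POST this as the JSON body).
payload = {
    "model": "qwen3.5-35b-a3b",  # placeholder local model id
    "messages": [{"role": "user", "content": "Summarize this repo."}],
    "presence_penalty": 1.5,     # the Unsloth-suggested value for thinking models
    "temperature": 0.6,
}
body = json.dumps(payload)
print(body)
```

Valid values run from -2.0 to 2.0; positive values penalize tokens that have already appeared, which is why it helps break repetition loops.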
t1_o8i6v6e
Linux Mint is where it's at. It's a fixed-up Ubuntu with no snap packages. You'll still install your drivers by adding Nvidia to your repo list in apt, and then you'll see the good versions of the drivers. No matter what version of Linux you use, you'll want to do this, as the drivers in the basic apt database are old and will probably get in your 50-series card's way. Oh, and I like the Cinnamon version of Linux Mint. Xfce is nice too, but Cinnamon works better with Steam games (also, go to Steam's site to install it, because the one in the repo likes to glitch a bit, as I remember). Old packages in the repo are the reason lots of stuff doesn't work across all Linux installs.
1
0
2026-03-03T23:55:12
ArtfulGenie69
false
null
0
o8i6v6e
false
/r/LocalLLaMA/comments/1rj7y0u/any_issues_tips_for_running_linux_with_a_5060ti/o8i6v6e/
false
1
t1_o8i6trp
No contest: 2x Pro 6000. The only reason to ever pick 8x V100 is that you got it for free, and even then, based on your objective, it might still not be worth it. Blackwell supports native 4-bit (NVFP4) training; that's all I gotta say on this. Read up on it if you don't already know.
1
0
2026-03-03T23:54:58
segmond
false
null
0
o8i6trp
false
/r/LocalLLaMA/comments/1rjymi0/training_on_8x_v100_32gb_with_nvlink_or_2x_rtx/o8i6trp/
false
1
t1_o8i6k19
won't happen, maybe the community can do a data distill
1
0
2026-03-03T23:53:22
Agreeable-Market-692
false
null
0
o8i6k19
false
/r/LocalLLaMA/comments/1rjtzyn/junyang_lin_has_left_qwen/o8i6k19/
false
1
t1_o8i6jxw
I doubt it - DRAM prices are too high. They'll likely either max it at 256 or charge $30K for it.
1
0
2026-03-03T23:53:21
BallsInSufficientSad
false
null
0
o8i6jxw
false
/r/LocalLLaMA/comments/1rjqsv6/apple_unveils_m5_pro_and_m5_max_citing_up_to_4/o8i6jxw/
false
1
t1_o8i65tx
[allen.ai](http://allen.ai)
1
0
2026-03-03T23:51:05
segmond
false
null
0
o8i65tx
false
/r/LocalLLaMA/comments/1rjyngn/are_true_base_models_dead/o8i65tx/
false
1
t1_o8i64s8
Nope, still working on it
1
0
2026-03-03T23:50:55
ShadowSec19
false
null
0
o8i64s8
false
/r/LocalLLaMA/comments/1juxpiq/looking_for_most_uncensored_uptodate_llm_for/o8i64s8/
false
1
t1_o8i5xmj
Once the person everyone followed leaves, the rest start asking themselves why they're still there. This is how team collapses happen. Qwen was genuinely one of the best things happening in open source AI right now, especially at the smaller model sizes.
1
0
2026-03-03T23:49:45
ruibranco
false
null
0
o8i5xmj
false
/r/LocalLLaMA/comments/1rk4gh5/sad_day_for_open_source_gwens_boss_has_left/o8i5xmj/
false
1
t1_o8i5uwm
I don't get it. Why do I need a fucking account to connect to my own remote server wtf??
1
0
2026-03-03T23:49:17
soostenuto
false
null
0
o8i5uwm
false
/r/LocalLLaMA/comments/1rer60n/lm_link/o8i5uwm/
false
1
t1_o8i5n5l
https://preview.redd.it/…ve been updated?
1
0
2026-03-03T23:48:01
spaceman_
false
null
0
o8i5n5l
false
/r/LocalLLaMA/comments/1rjh5wg/unsloth_fixed_version_of_qwen3535ba3b_is/o8i5n5l/
false
1
t1_o8i5m7k
MiMo V2 Flash is crazy
1
0
2026-03-03T23:47:52
piggledy
false
null
0
o8i5m7k
false
/r/LocalLLaMA/comments/1rjnpuv/costsperformance_tradeoff_for_qwen3_qwen35_and/o8i5m7k/
false
1
t1_o8i5kh0
eBay, or if you want real protection and 30 days to test, you could buy through Amazon used. As long as it is fulfilled by Amazon, it should be covered by their return policy, allowing you to send it back if it's garbage.
1
0
2026-03-03T23:47:34
ArtfulGenie69
false
null
0
o8i5kh0
false
/r/LocalLLaMA/comments/1rj8zhq/where_can_i_get_good_priced_3090s/o8i5kh0/
false
1
t1_o8i5a5p
Will their model be able to say things forbidden by EU 'law'?
1
0
2026-03-03T23:45:54
crantob
false
null
0
o8i5a5p
false
/r/LocalLLaMA/comments/1qqhhtx/mistral_ceo_arthur_mensch_if_you_treat/o8i5a5p/
false
1
t1_o8i59qm
122B is updating as we speak, 35B was updated just over an hour ago.
1
0
2026-03-03T23:45:50
spaceman_
false
null
0
o8i59qm
false
/r/LocalLLaMA/comments/1rjh5wg/unsloth_fixed_version_of_qwen3535ba3b_is/o8i59qm/
false
1
t1_o8i55aa
Large context windows don’t eliminate hallucination — they redistribute it. When the context grows, attention becomes a scarce resource. The model may anchor on a locally coherent but globally irrelevant segment, and then confidently elaborate from there. So the failure mode isn’t just “lost in the middle.” It’s often signal dilution + misplaced confidence. That’s why extraction-before-synthesis patterns tend to work better — they reduce the surface area of interpretive drift.
1
0
2026-03-03T23:45:06
Jaded_Argument9065
false
null
0
o8i55aa
false
/r/LocalLLaMA/comments/1rk045z/are_huge_context_windows_a_hallucination_problem/o8i55aa/
false
1
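The "extraction-before-synthesis" pattern mentioned above can be sketched as a two-stage pipeline: a first pass pulls only the relevant spans out of the long context, and a second pass synthesizes an answer from that much smaller extract, so attention is never spread across the full document. The `llm` stub below is a keyword filter standing in for a real chat-completion call; the prompt markers (`KEYWORD=`, `DOC:`) are illustrative conventions, not any particular API:

```python
def llm(prompt: str) -> str:
    # Stub model for illustration: returns the DOC lines containing KEYWORD.
    keyword = prompt.split("KEYWORD=")[1].splitlines()[0]
    doc = prompt.split("DOC:\n", 1)[1]
    return "\n".join(line for line in doc.splitlines() if keyword in line)

def extract_then_answer(document: str, keyword: str) -> str:
    # Stage 1: extraction pass sees the full (potentially huge) context.
    extract = llm(f"Extract lines about KEYWORD={keyword}\nDOC:\n{document}")
    # Stage 2: synthesis pass sees only the extract, shrinking the surface
    # area for anchoring on locally coherent but irrelevant segments.
    return llm(f"Answer using KEYWORD={keyword}\nDOC:\n{extract}")
```

With a real model, stage 1 would ask for verbatim quotes with locations and stage 2 would reason only over those quotes.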
t1_o8i51qx
This has been fixed in ik_llama.cpp: https://github.com/ikawrakow/ik_llama.cpp/pull/1352 Can give it a try
1
0
2026-03-03T23:44:31
notdba
false
null
0
o8i51qx
false
/r/LocalLLaMA/comments/1ri3y89/my_last_only_beef_with_qwen35_35b_a3b/o8i51qx/
false
1
t1_o8i51bx
Time for my obligatory "running local LLMs costs way more for way less performance" post.
1
0
2026-03-03T23:44:27
daddy_dollars
false
null
0
o8i51bx
false
/r/LocalLLaMA/comments/1rjqsv6/apple_unveils_m5_pro_and_m5_max_citing_up_to_4/o8i51bx/
false
1
t1_o8i4zs0
https://preview.redd.it/…b1a22f96b25b2d
1
0
2026-03-03T23:44:12
JumpyAbies
false
null
0
o8i4zs0
false
/r/LocalLLaMA/comments/1rjtzyn/junyang_lin_has_left_qwen/o8i4zs0/
false
1
t1_o8i4zgf
I like openhands if you aren’t bothered by using a cli. It running in uv has some good isolation features.
1
0
2026-03-03T23:44:08
Chasmansp
false
null
0
o8i4zgf
false
/r/LocalLLaMA/comments/1rcjzsk/is_opencode_the_best_free_coding_agent_currently/o8i4zgf/
false
1
t1_o8i4u5w
So far: queries and refactorings over a small Rust and a medium-size iOS Swift codebase, using OpenCode as a harness, and also some English docs updates, for which the difference surprised me even more. 35B-A3B seems like it loses the plot when it has to keep track of a long procedure with multiple steps; 27B doesn't. But I'd expect myself to be more sensitive to failures in natural language, so that's bias on bias.
1
0
2026-03-03T23:43:16
HopePupal
false
null
0
o8i4u5w
false
/r/LocalLLaMA/comments/1rk45ko/is_anyone_else_just_blown_away_that_this_local/o8i4u5w/
false
1
t1_o8i4rxl
35B is much less capable as a tool for agentic coding. 122B one shotted a Breakout-style game. 35B could do the brick layout but failed miserably on most of the rest.
1
0
2026-03-03T23:42:55
James-Kane
false
null
0
o8i4rxl
false
/r/LocalLLaMA/comments/1rk01ea/qwen35122b_basically_has_no_advantage_over_35b/o8i4rxl/
false
1
t1_o8i4ogd
Applying the term "democratization" to capital-driven price reductions is truth-inversion.
1
0
2026-03-03T23:42:21
crantob
false
null
0
o8i4ogd
false
/r/LocalLLaMA/comments/1qqhhtx/mistral_ceo_arthur_mensch_if_you_treat/o8i4ogd/
false
1
t1_o8i4cs5
It does. It appears in my own spot checks to not even be a contest. In programming and handling of languages that aren't as well covered by training, GLSL, for graphics, it's the difference between acceptably competent even at a low Q4 quant for the 122b , vs absolutely horrendous at Q8 for the 35B. The 27B lands in-between, but is also decimated at long and medium tail topics.
1
0
2026-03-03T23:40:26
Lucis_unbra
false
null
0
o8i4cs5
false
/r/LocalLLaMA/comments/1rk01ea/qwen35122b_basically_has_no_advantage_over_35b/o8i4cs5/
false
1
t1_o8i48mj
Qwen 2.5 models also had chat templates in their base version and were trained to follow them. I think even DeepSeek V3 Base knows its chat template. They are all trained on SFT data during midtraining, I think. Base models have been dead for quite a while.
1
0
2026-03-03T23:39:46
netikas
false
null
0
o8i48mj
false
/r/LocalLLaMA/comments/1rjyngn/are_true_base_models_dead/o8i48mj/
false
1
t1_o8i45jo
https://preview.redd.it/…b... Wild times.
1
0
2026-03-03T23:39:17
3spky5u-oss
false
null
0
o8i45jo
false
/r/LocalLLaMA/comments/1rk45ko/is_anyone_else_just_blown_away_that_this_local/o8i45jo/
false
1
t1_o8i430q
HA. Good one. 96TB would be insane. Thanks for the catch. I think I’ll go with the M3 Ultra for now. If something is announced this week, I guess I have a return window.
1
0
2026-03-03T23:38:51
cmerrifield
false
null
0
o8i430q
false
/r/LocalLLaMA/comments/1rjqsv6/apple_unveils_m5_pro_and_m5_max_citing_up_to_4/o8i430q/
false
1
t1_o8i40ed
no more qwen 3.5 coder
1
0
2026-03-03T23:38:27
Opteron67
false
null
0
o8i40ed
false
/r/LocalLLaMA/comments/1rjtzyn/junyang_lin_has_left_qwen/o8i40ed/
false
1
t1_o8i408j
Who is Gwen bro
2
0
2026-03-03T23:38:25
TheDivineKnight01
false
null
0
o8i408j
false
/r/LocalLLaMA/comments/1rk4gh5/sad_day_for_open_source_gwens_boss_has_left/o8i408j/
false
2
t1_o8i3ypj
fat booty parameters
1
0
2026-03-03T23:38:10
philmarcracken
false
null
0
o8i3ypj
false
/r/LocalLLaMA/comments/1rk2jnj/has_anyone_found_a_way_to_stop_qwen_35_35b_3b/o8i3ypj/
false
1
t1_o8i3wib
[removed]
1
0
2026-03-03T23:37:49
[deleted]
true
null
0
o8i3wib
false
/r/LocalLLaMA/comments/1n0aijh/gpt_oss_120b/o8i3wib/
false
1
t1_o8i3uvn
I would only notice the difference when you let the agent run and do autonomous unsupervised tasks for 2+ hours; 122B will hold stability for longer. Other than that it's pretty similar in terms of architecture and behaviour.
1
0
2026-03-03T23:37:33
Express_Quail_1493
false
null
0
o8i3uvn
false
/r/LocalLLaMA/comments/1rk01ea/qwen35122b_basically_has_no_advantage_over_35b/o8i3uvn/
false
1
t1_o8i3u94
No, haha. That's a tall ask for a small local model. Qwen3.5 397B-A17B is one of the closest to cloud-frontier performance right now.
1
0
2026-03-03T23:37:27
3spky5u-oss
false
null
0
o8i3u94
false
/r/LocalLLaMA/comments/1rk45ko/is_anyone_else_just_blown_away_that_this_local/o8i3u94/
false
1
t1_o8i3t7x
Remember to thank your local watermelon for unaffordable electricity. Thank them again and again.
1
0
2026-03-03T23:37:17
crantob
false
null
0
o8i3t7x
false
/r/LocalLLaMA/comments/1qqhhtx/mistral_ceo_arthur_mensch_if_you_treat/o8i3t7x/
false
1
t1_o8i3r3o
This is MacBook only, right? Not for the Mini or Studio. Why would anybody want this much RAM in a laptop if you are running agents and expect to be running them overnight?
1
0
2026-03-03T23:36:57
frebay
false
null
0
o8i3r3o
false
/r/LocalLLaMA/comments/1rjqsv6/apple_unveils_m5_pro_and_m5_max_citing_up_to_4/o8i3r3o/
false
1
t1_o8i3p7n
Nordstream2 says you're right.
1
0
2026-03-03T23:36:39
crantob
false
null
0
o8i3p7n
false
/r/LocalLLaMA/comments/1qqhhtx/mistral_ceo_arthur_mensch_if_you_treat/o8i3p7n/
false
1
t1_o8i3mxn
It's already running on the GPU. My 1080Ti: 😯😯😪
1
0
2026-03-03T23:36:18
NegotiationNo1504
false
null
0
o8i3mxn
false
/r/LocalLLaMA/comments/1rk4x7w/i_think_that_is_a_good_one/o8i3mxn/
false
1
t1_o8i3lii
OP read a tweet and made up a story about being forced to resign for extra updoots.
2
0
2026-03-03T23:36:04
LoaderD
false
null
0
o8i3lii
false
/r/LocalLLaMA/comments/1rk4gh5/sad_day_for_open_source_gwens_boss_has_left/o8i3lii/
false
2
t1_o8i3icc
😂 Yeah, it takes a lot of time, idk why. My 1080Ti: 😐
1
0
2026-03-03T23:35:34
NegotiationNo1504
false
null
0
o8i3icc
false
/r/LocalLLaMA/comments/1rk4x7w/i_think_that_is_a_good_one/o8i3icc/
false
1
t1_o8i3hz2
Agreed! On vision tasks 27B dense and 122B MoE are on par and good, while on somewhat complex or long calculation tasks the 122B clearly performs better. Also 122B has ~3 times higher tps.
1
0
2026-03-03T23:35:31
DaniDubin
false
null
0
o8i3hz2
false
/r/LocalLLaMA/comments/1rk01ea/qwen35122b_basically_has_no_advantage_over_35b/o8i3hz2/
false
1
t1_o8i3ft8
[removed]
1
0
2026-03-03T23:35:10
[deleted]
true
null
0
o8i3ft8
false
/r/LocalLLaMA/comments/1rf4br0/where_do_you_all_rent_gpu_servers_for_small_ml_ai/o8i3ft8/
false
1
t1_o8i3f2o
Hey, we've all been newbies, and we all came in a wave of newbies. This one is orders of magnitude larger, but I'm sure they'll learn and eventually do ollama bashing too.
1
0
2026-03-03T23:35:03
No_Afternoon_4260
false
null
0
o8i3f2o
false
/r/LocalLLaMA/comments/1rk2mg5/one_of_ais_core_problems_is_its_democratization/o8i3f2o/
false
1
t1_o8i3e74
And it's comparable to Claude Opus 4.5?
1
0
2026-03-03T23:34:55
Fabulous-Locksmith60
false
null
0
o8i3e74
false
/r/LocalLLaMA/comments/1rk45ko/is_anyone_else_just_blown_away_that_this_local/o8i3e74/
false
1
t1_o8i3a6n
you are right my bad didn't see that there were other screen shots
1
0
2026-03-03T23:34:16
hustla17
false
null
0
o8i3a6n
false
/r/LocalLLaMA/comments/1rk4gh5/sad_day_for_open_source_gwens_boss_has_left/o8i3a6n/
false
1
t1_o8i35sc
9B is really better than 4B? My potato notebook wants to know 😂😂😂
1
0
2026-03-03T23:33:35
Fabulous-Locksmith60
false
null
0
o8i35sc
false
/r/LocalLLaMA/comments/1rk45ko/is_anyone_else_just_blown_away_that_this_local/o8i35sc/
false
1
t1_o8i34ot
Truth, Claude will happily tell me how to kill myself or poison the city's water supply, but it refuses to monkey patch a dataviz python method. Codex and Gemini will do all 3!
1
0
2026-03-03T23:33:25
clocksmith
false
null
0
o8i34ot
false
/r/LocalLLaMA/comments/1rk342c/the_dow_vs_anthropic_saga_proves_closedsource/o8i34ot/
false
1
t1_o8i32yv
[removed]
1
0
2026-03-03T23:33:08
[deleted]
true
null
0
o8i32yv
false
/r/LocalLLaMA/comments/1rf4br0/where_do_you_all_rent_gpu_servers_for_small_ml_ai/o8i32yv/
false
1
t1_o8i2uh2
🖥️🗿🎨
1
0
2026-03-03T23:31:48
Glazedoats
false
null
0
o8i2uh2
false
/r/LocalLLaMA/comments/1rjuccw/would_you_be_interested_in_a_fully_local_ai_3d/o8i2uh2/
false
1
t1_o8i2j3a
Junyang is the lead. The intern was just some guy who was saying they are leaving because he left.
1
0
2026-03-03T23:30:01
SlaveZelda
false
null
0
o8i2j3a
false
/r/LocalLLaMA/comments/1rk4gh5/sad_day_for_open_source_gwens_boss_has_left/o8i2j3a/
false
1
t1_o8i2hq8
You likely won’t if you’re just doing basic agent tasks. The large dense models shine for more complex synthesis and reasoning. In fact, they tend to be worse for agent tasks because they’re too smart for it.
1
0
2026-03-03T23:29:48
3spky5u-oss
false
null
0
o8i2hq8
false
/r/LocalLLaMA/comments/1rk45ko/is_anyone_else_just_blown_away_that_this_local/o8i2hq8/
false
1
t1_o8i2eop
No advantage??? I guess, if you pick and choose what you look at. Instruction following, agentic tasks, coding: 122B has a clear advantage over 35B. On some benches it's +10 points over 35B. Is this just another one of those aimless posts to keep Qwen in the conversation?
1
0
2026-03-03T23:29:20
DinoAmino
false
null
0
o8i2eop
false
/r/LocalLLaMA/comments/1rk01ea/qwen35122b_basically_has_no_advantage_over_35b/o8i2eop/
false
1
t1_o8i2ckr
That’s actually insane, omg. I’m so excited, I think I’ll run the 9B on my new laptop haha
1
0
2026-03-03T23:29:00
Borkato
false
null
0
o8i2ckr
false
/r/LocalLLaMA/comments/1rk45ko/is_anyone_else_just_blown_away_that_this_local/o8i2ckr/
false
1
t1_o8i2bde
absolutely omg!
1
0
2026-03-03T23:28:49
ShotgunEnvy
false
null
0
o8i2bde
false
/r/LocalLLaMA/comments/1rjuccw/would_you_be_interested_in_a_fully_local_ai_3d/o8i2bde/
false
1
t1_o8i28ex
They might... However, large leadership shakeups tend to lead to a stall in progress or a shift in priorities. In other words, new Qwen models may either be delayed, not open weights, or potentially not launched at all.
1
0
2026-03-03T23:28:22
BagelRedditAccountII
false
null
0
o8i28ex
false
/r/LocalLLaMA/comments/1rjtzyn/junyang_lin_has_left_qwen/o8i28ex/
false
1
t1_o8i2635
I will post a benchmark (niche engineering domain work, agent work, complex synthesis of engineering documents, and context degradation over turns) when I have finished with the whole 3.5 family, but so far I can say you really cannot go wrong either way. Even the 0.8B scores around 45% on my agent bench (FP16 @ 250 tok/s on a 5090). That surprised me a lot.
1
0
2026-03-03T23:28:00
3spky5u-oss
false
null
0
o8i2635
false
/r/LocalLLaMA/comments/1rk45ko/is_anyone_else_just_blown_away_that_this_local/o8i2635/
false
1
t1_o8i23nq
i did it at [research.site](http://research.site)
1
0
2026-03-03T23:27:38
OutlandishnessFull44
false
null
0
o8i23nq
false
/r/LocalLLaMA/comments/1ipzjs6/we_need_a_chatbot_arena_for_deep_research/o8i23nq/
false
1
t1_o8i22cc
try [research.site](http://research.site)
1
0
2026-03-03T23:27:25
OutlandishnessFull44
false
null
0
o8i22cc
false
/r/LocalLLaMA/comments/1ipzjs6/we_need_a_chatbot_arena_for_deep_research/o8i22cc/
false
1
t1_o8i20lr
Try to offload to GPU ; )
1
0
2026-03-03T23:27:09
qwen_next_gguf_when
false
null
0
o8i20lr
false
/r/LocalLLaMA/comments/1rk4x7w/i_think_that_is_a_good_one/o8i20lr/
false
1