| field | type |
| --- | --- |
| name | string |
| body | string |
| score | int64 |
| controversiality | int64 |
| created | timestamp[us] |
| author | string |
| collapsed | bool |
| edited | timestamp[us] |
| gilded | int64 |
| id | string |
| locked | bool |
| permalink | string |
| stickied | bool |
| ups | int64 |

Each record below is shown as `name | author | created | score`, followed by its permalink and quoted body. In every record, name is "t1_" plus id and ups equals score; controversiality is 0, collapsed is false, edited is null, gilded is 0, locked is false, and stickied is false unless noted on the record's first line.
t1_o7v7y5s | LosEagle | 2026-02-28T12:03:53 | score 1
/r/LocalLLaMA/comments/1rgzul5/are_you_ready_for_small_qwens/o7v7y5s/
> I'm on openSUSE Tumbleweed. The only pre-built version I've found is in the distro repos, and that one is outdated.

t1_o7v7v8k | Hot-Employ-3399 | 2026-02-28T12:03:13 | score 2
/r/LocalLLaMA/comments/1rgkc1b/back_in_my_day_localllama_were_the_pioneers/o7v7v8k/
> It always was about gooning. Erebus came out before quants and even before llama.

t1_o7v7uh7 | chris_0611 | 2026-02-28T12:03:02 | score 2
/r/LocalLLaMA/comments/1rgtxry/is_qwen35_a_coding_game_changer_for_anyone_else/o7v7uh7/
> Also set presence_penalty to 1.5 or something to prevent over-thinking.
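For reference, presence_penalty is a standard sampling parameter in OpenAI-compatible APIs, and llama.cpp's llama-server accepts it per request. A minimal sketch of applying the tip above; the port, model name, and prompt are placeholders:

```bash
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "qwen3.5",
    "messages": [{"role": "user", "content": "Explain quicksort briefly."}],
    "presence_penalty": 1.5
  }'
```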
t1_o7v7svx | WildDogOne | 2026-02-28T12:02:41 | score 1
/r/LocalLLaMA/comments/1rgokw1/a_monthly_update_to_my_where_are_openweight/o7v7svx/
> Sorry, I can't really say too much about that; what I do know is that it does seem to manage it well enough. It runs through your repo, reads the files it needs, and jumps further if needed. However, I use around 100k context, and that works really well right now, but I suspect that going much lower in context size, you'...

t1_o7v7ssf | FullstackSensei | 2026-02-28T12:02:39 | score 2
/r/LocalLLaMA/comments/1rb8mzd/this_is_how_slow_local_llms_are_on_my_framework/o7v7ssf/
> Like I said, who is the target audience for a chip with high memory bandwidth? A laptop or whatever can only exist if Intel/AMD make such a chip. Those looking for a mobile workstation need compute on top of the memory bandwidth, hence why I mentioned compute. Mobile workstations offer significantly more compute than a...

t1_o7v7r0j | RickyRickC137 | 2026-02-28T12:02:15 | score 4
/r/LocalLLaMA/comments/1rgzfat/how_is_qwen_35_moe_35b_in_instruct_mode_with_no/o7v7r0j/
> Can we set presence penalty in LM Studio, or is that only for llama.cpp folks?

t1_o7v7qy1 | soyalemujica | 2026-02-28T12:02:14 | score 2
/r/LocalLLaMA/comments/1rg4zqv/followup_qwen3535ba3b_7_communityrequested/o7v7qy1/
> Now trying the BF16 MXFP4_MOE model; it's giving me 35 t/s and also thinking LESS and giving me a result quicker than the Q4_M model.

t1_o7v7qh3 | MrYorksLeftEye | 2026-02-28T12:02:07 | score 0
/r/LocalLLaMA/comments/1rgkc1b/back_in_my_day_localllama_were_the_pioneers/o7v7qh3/
> I have yet to see a single ad for OpenClaw, aside from maybe unmarked sponsorships.
t1_o7v7puv | snekslayer | 2026-02-28T12:01:58 | score 1
/r/LocalLLaMA/comments/1rguty0/get_your_local_models_in_order_anthropic_just_got/o7v7puv/
> Why doesn't the US gov ask Grok?

t1_o7v7n70 | Lan_BobPage | 2026-02-28T12:01:22 | score 1
/r/LocalLLaMA/comments/1rh095c/deepseek_v4_will_be_released_next_week_and_will/o7v7n70/
> Holy... it can do everything, huh. 1T+ params, here we go. Patrician toys.

t1_o7v7k2t | paulgear | 2026-02-28T12:00:39 | score 1
/r/LocalLLaMA/comments/1rgtxry/is_qwen35_a_coding_game_changer_for_anyone_else/o7v7k2t/
> The superpowers repo pretty much answers all of that; I have only done a little tweaking myself, adding skills and updating a few things. I often just tell OpenCode what I want the skill to do and get it to write one, then edit it as desired when it's done.

t1_o7v7jqq | _-_David | 2026-02-28T12:00:34 | score 1
/r/LocalLLaMA/comments/1rgzul5/are_you_ready_for_small_qwens/o7v7jqq/
> LM Studio welcomes you, my friend. It's great.

t1_o7v7c7z | Calm_Bit_throwaway | 2026-02-28T11:58:51 | score 7
/r/LocalLLaMA/comments/1rh095c/deepseek_v4_will_be_released_next_week_and_will/o7v7c7z/
> There might be a diffusion step to clean up artifacts, but I think it's suspected current closed frontier models are autoregressive. There are already many papers published on this topic by the big labs, and I think OpenAI has been known to do this for some time.

t1_o7v7au3 | No_Afternoon_4260 | 2026-02-28T11:58:32 | score 49
/r/LocalLLaMA/comments/1rh095c/deepseek_v4_will_be_released_next_week_and_will/o7v7au3/
> For months everybody has been saying that V4 is just around the corner... imho they'll wait to digest the Opus 4.6 moment.
t1_o7v79wq | moborius1387 | 2026-02-28T11:58:19 | score 0
/r/LocalLLaMA/comments/1rgoygs/does_qwen35_35b_outperform_qwen3_coder_next_80b/o7v79wq/
> I run local models and use the free API keys or AuthO. I had paid the ridiculous Opus 4.6 prices for high-level work. I honestly think we are seeing the first waves of models that are actually capable of building systems. Why would I want to make websites and shitty apps when I could use it to build high level systems ...

t1_o7v79h2 | AutomaticDriver5882 | 2026-02-28T11:58:13 | score 6
/r/LocalLLaMA/comments/1rguty0/get_your_local_models_in_order_anthropic_just_got/o7v79h2/
> Let the courts decide. The Trump admin knows they will lose and get less power afterwards.

t1_o7v78vh | YearnMar10 | 2026-02-28T11:58:05 | score -13
/r/LocalLLaMA/comments/1rgzul5/are_you_ready_for_small_qwens/o7v78vh/
> They're Chinese and probably work according to 996, meaning from 9am to 9pm, 6 days a week.

t1_o7v77rs | wanderer_4004 | 2026-02-28T11:57:50 | score 3
/r/LocalLLaMA/comments/1rguty0/get_your_local_models_in_order_anthropic_just_got/o7v77rs/
> Anthropic in panic mode? Quite the opposite; I am sure they are drowning themselves in Champagne right now. It is an absolute PR disaster for OpenAI to come out the day after and tell the world they are going to replace Anthropic. Because that means they are fine with "mass surveillance of Americans". Even staunch Trum...

t1_o7v74b2 | Calm_Bit_throwaway | 2026-02-28T11:57:01 | score 6
/r/LocalLLaMA/comments/1rh095c/deepseek_v4_will_be_released_next_week_and_will/o7v74b2/
> Afaik the model might do some refinement with an actual diffusion step, but many parts of the image generation are now shared with the autoregressive LLM part.
t1_o7v72ww | Firepal64 | 2026-02-28T11:56:42 | score 2
/r/LocalLLaMA/comments/1rgkc1b/back_in_my_day_localllama_were_the_pioneers/o7v72ww/
> r/localllamacirclejerk should get more attendance.

t1_o7v72i4 | ExcuseAccomplished97 | 2026-02-28T11:56:37 | score 1
/r/LocalLLaMA/comments/1rfg3kx/american_closed_models_vs_chinese_open_models_is/o7v72i4/
> Thank you for having a good impression of South Korea, but we're no longer capable of racing China.

t1_o7v713j | SocialDinamo | 2026-02-28T11:56:18 | score 2
/r/LocalLLaMA/comments/1rgoygs/does_qwen35_35b_outperform_qwen3_coder_next_80b/o7v713j/
> I couldn't get the 80B to perform as well as oss-120b, but I much prefer 35B over 120B right now for speed and usability in OpenCode.

t1_o7v7027 | No_Afternoon_4260 | 2026-02-28T11:56:04 | score 1
/r/LocalLLaMA/comments/1rh0akj/frustration_building_out_my_local_models/o7v7027/
> Isn't it a bad port configuration? You're trying to launch your instance on a port that's already used by another app.
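To confirm the port clash described above, one option is to list which process already holds the port before launching. A sketch assuming the common default port 8080 and a llama-server instance; adjust for your own server:

```bash
# Show the process currently bound to port 8080 (use `lsof -i :8080` on macOS)
ss -ltnp | grep ':8080'
# If it is taken, start the server on a free port instead
llama-server -m model.gguf --port 8081
```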
t1_o7v6yqg | lasizoillo | 2026-02-28T11:55:46 | score 2
/r/LocalLLaMA/comments/1rgzul5/are_you_ready_for_small_qwens/o7v6yqg/
> You need to be super smart to make things work in Ollama, because they are usually broken or semi-broken, and AMD support (for example) simply sucks. llama.cpp simply works, or it will work in some days. I use llama.cpp router mode (one server with many models, a la Ollama, and a web chat included for free) with models ...

t1_o7v6u6e | Aggressive_Collar135 | 2026-02-28T11:54:42 | score 4
/r/LocalLLaMA/comments/1rh0hgl/hi_im_a_total_noob/o7v6u6e/
> r/StableDiffusion, unless you want to create ASCII art or SVG images.

t1_o7v6ta6 | MrMrsPotts | 2026-02-28T11:54:30 | score 1
/r/LocalLLaMA/comments/1rgyqz7/has_anyone_got_qwen35_to_work_with_ollama/o7v6ta6/
> It would be amazing if you could try the unsloth quant.

t1_o7v6s7j | No-Consequence-4687 | 2026-02-28T11:54:14 | score 1
/r/LocalLLaMA/comments/1rgtxry/is_qwen35_a_coding_game_changer_for_anyone_else/o7v6s7j/
> Is it as simple as Ollama with the Qwen 3.5 model and OpenCode? Or is any extra setup step needed? I tried, and it looks like OpenCode doesn't provide tool-calling functionality when using local models, and I don't understand what I'm doing wrong.

t1_o7v6r8u | MrMrsPotts | 2026-02-28T11:54:01 | score 1
/r/LocalLLaMA/comments/1rgyqz7/has_anyone_got_qwen35_to_work_with_ollama/o7v6r8u/
> Have you tried exactly what I did, and it works for you?
t1_o7v6nfg | AutomaticDriver5882 | 2026-02-28T11:53:08 | score 2
/r/LocalLLaMA/comments/1rguty0/get_your_local_models_in_order_anthropic_just_got/o7v6nfg/
> I thought the US was under a corporate dictatorship, so this is odd. I guess we find out who the real handlers are.

t1_o7v6mul | Mehdi135849 | 2026-02-28T11:53:00 | score 2
/r/LocalLLaMA/comments/1rgokw1/a_monthly_update_to_my_where_are_openweight/o7v6mul/
> Miss the days when I had my llama and my mixtral.

t1_o7v6mpj | ParamedicAble225 | 2026-02-28T11:52:58 | score 1
/r/LocalLLaMA/comments/1rgtxry/is_qwen35_a_coding_game_changer_for_anyone_else/o7v6mpj/
> I was using Ollama and pulled the generic 35b. Big mistake. I've been reading up on llama and will try this.

t1_o7v6m5f | ayu-ya | 2026-02-28T11:52:50 | score 2
/r/LocalLLaMA/comments/1rguty0/get_your_local_models_in_order_anthropic_just_got/o7v6m5f/
> Got an external HDD exactly for my favourite models not long ago, just before their prices started to go up, too. I'm a panicky dumbass, so I'll probably be all set LLM-wise.

t1_o7v6k4x | SystematicKarma | 2026-02-28T11:52:22 | score 33
/r/LocalLLaMA/comments/1rgzul5/are_you_ready_for_small_qwens/o7v6k4x/
> None sadly, just 9 and 4B. MoE models work great on low-VRAM configs, so the quanted 35B should be fine for you.
t1_o7v6ije | chibop1 | 2026-02-28T11:52:00 | score 1
/r/LocalLLaMA/comments/1rgvma8/which_size_of_qwen35_are_you_planning_to_run/o7v6ije/
> Running 35b in q8_0 on my Mac with 64GB, and it works great!

t1_o7v6hbt | Firepal64 | 2026-02-28T11:51:43 | score 1
/r/LocalLLaMA/comments/1rgzul5/are_you_ready_for_small_qwens/o7v6hbt/
> Well, maybe just download the pre-built version! Here's the releases page: https://github.com/ggml-org/llama.cpp/releases/
> Latest Vulkan version, works on any GPU vendor: https://github.com/ggml-org/llama.cpp/releases/download/b8180/llama-b8180-bin-win-vulkan-x64.zip
> Latest CUDA 12 build, assuming you have NVIDIA: ht...
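Picking up the comment above, a sketch of fetching that prebuilt archive and starting the server. The archive name comes from the Vulkan link quoted above; the extraction directory, binary layout inside the zip, and model path are assumptions (on Windows, extract with Explorer or tar, and the binary is llama-server.exe):

```bash
curl -LO https://github.com/ggml-org/llama.cpp/releases/download/b8180/llama-b8180-bin-win-vulkan-x64.zip
unzip llama-b8180-bin-win-vulkan-x64.zip -d llama-cpp
# Serve a local GGUF model on the default port
./llama-cpp/llama-server -m /path/to/model.gguf --port 8080
```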
t1_o7v6h4w | ParamedicAble225 | 2026-02-28T11:51:41 | score 0
/r/LocalLLaMA/comments/1rgtxry/is_qwen35_a_coding_game_changer_for_anyone_else/o7v6h4w/
> Thanks. I'm a noob and was using Ollama, but I'm learning the ways. I'll try this out.

t1_o7v6cg5 | AutomaticDriver5882 | 2026-02-28T11:50:36 | score 2
/r/LocalLLaMA/comments/1rguty0/get_your_local_models_in_order_anthropic_just_got/o7v6cg5/
> It sounds like they are going to use AWS Bedrock to host the models. Not sure how, technically, Anthropic is going to know they are using it for bad things anyway.

t1_o7v6cef | shuravi108 | 2026-02-28T11:50:35 | score 1
/r/LocalLLaMA/comments/1rgzul5/are_you_ready_for_small_qwens/o7v6cef/
> I've been running qwen3.5:27b in Codex with no issues so far.

t1_o7v6c25 | SystematicKarma | 2026-02-28T11:50:30 | score 2
/r/LocalLLaMA/comments/1rgzul5/are_you_ready_for_small_qwens/o7v6c25/
> It will 100% be a 4B and a 9B in the 3.5 series: 256K context length, native VL. Base and instruct versions will be released.

t1_o7v6beq | Firepal64 | 2026-02-28T11:50:21 | score 1
/r/LocalLLaMA/comments/1rgzul5/are_you_ready_for_small_qwens/o7v6beq/
> Well, maybe just download the pre-built version! Here's the releases page: https://github.com/ggml-org/llama.cpp/releases/
> Latest Vulkan version, works on any GPU vendor: https://github.com/ggml-org/llama.cpp/releases/download/b8180/llama-b8180-bin-win-vulkan-x64.zip
> Latest CUDA 12 build, assuming you have NVIDIA: ht...
t1_o7v67tm | Thrumpwart | 2026-02-28T11:49:32 | score 1
/r/LocalLLaMA/comments/1rgvma8/which_size_of_qwen35_are_you_planning_to_run/o7v67tm/
> 27B at Q8KXL UD quant. It's so dang smart, and I get a respectable 17.5 tk/s at 200k tokens on my chonky boi W7900.

t1_o7v65xf | rerri | 2026-02-28T11:49:07 | score 8
/r/LocalLLaMA/comments/1rgzul5/are_you_ready_for_small_qwens/o7v65xf/
> It's Saturday; I don't think they're releasing new models today. So I'm guessing this is a GGUF (or MLX) update. Those are listed in the Qwen3 collection as well.

t1_o7v64qp | Western_Objective209 | 2026-02-28T11:48:50 | score 6
/r/LocalLLaMA/comments/1rgkc1b/back_in_my_day_localllama_were_the_pioneers/o7v64qp/
> Nobody is running those locally.

t1_o7v64me | chibop1 | 2026-02-28T11:48:48 | score 2
/r/LocalLLaMA/comments/1rgyqz7/has_anyone_got_qwen35_to_work_with_ollama/o7v64me/
> Just realized that you're trying to use an unsloth quant. I haven't tried that. It works fine if I just pull from their library: `ollama run qwen3.5:35b`

t1_o7v6491 | TemperatureMajor5083 | 2026-02-28T11:48:42 | score 12
/r/LocalLLaMA/comments/1rh095c/deepseek_v4_will_be_released_next_week_and_will/o7v6491/
> Are you sure about this? I thought models like gemini-2.5-flash-image were a single model that can handle both text and image tokens (in- and output).
t1_o7v643m | inphaser | 2026-02-28T11:48:40 | score 0 | controversiality 1
/r/LocalLLaMA/comments/1rh095c/deepseek_v4_will_be_released_next_week_and_will/o7v643m/
> Looks like model production isn't the problem anymore. Now the problem is reliable agents to use the models... which apparently aren't yet good enough to create reliable agents, as moltbot showed.

t1_o7v63td | Western_Objective209 | 2026-02-28T11:48:36 | score 3
/r/LocalLLaMA/comments/1rgkc1b/back_in_my_day_localllama_were_the_pioneers/o7v63td/
> How many people can run a 405B-param model on their home computer?

t1_o7v6143 | Western_Objective209 | 2026-02-28T11:47:58 | score 1
/r/LocalLLaMA/comments/1rgkc1b/back_in_my_day_localllama_were_the_pioneers/o7v6143/
> Is there anything as good as GPT-4 that runs locally for real? That was a legit trillion-param model that had a decent amount of depth to it. When I try things like the Qwen models that fit in 32GB of RAM, I think they are still behind it in capabilities.

t1_o7v5y2k | howardhus | 2026-02-28T11:47:15 | score 1
/r/LocalLLaMA/comments/1rgtxry/is_qwen35_a_coding_game_changer_for_anyone_else/o7v5y2k/
> Did you look into Roo?

t1_o7v5v5a | inphaser | 2026-02-28T11:46:36 | score 1 | controversiality 1
/r/LocalLLaMA/comments/1rgzul5/are_you_ready_for_small_qwens/o7v5v5a/
> Use the docker image, bro.
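The docker route suggested above could look like this with llama.cpp's published server image; the image tag, model path, and port mapping are assumptions, and any OpenAI-compatible container works similarly:

```bash
# Run llama-server in a container, mounting a local models directory
docker run -p 8080:8080 -v /path/to/models:/models \
  ghcr.io/ggml-org/llama.cpp:server \
  -m /models/qwen3.5-35b-Q4_K_M.gguf --host 0.0.0.0 --port 8080
```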
t1_o7v5u6e | Less_Strain7577 | 2026-02-28T11:46:22 | score 1
/r/LocalLLaMA/comments/1rgcosw/trained_and_quantized_an_llm_on_a_gtx_1650_4gb/o7v5u6e/
> Did you use the LFM 2.5 24B Q4 version from Liquid AI?

t1_o7v5u3f | devilish-lavanya | 2026-02-28T11:46:21 | score 7
/r/LocalLLaMA/comments/1rh095c/deepseek_v4_will_be_released_next_week_and_will/o7v5u3f/
> But at what cost? Everything?

t1_o7v5t8y | Luca3700 | 2026-02-28T11:46:09 | score 1
/r/LocalLLaMA/comments/1rg4apu/qwen_35_architecture_analysis_parameter/o7v5t8y/
> Hi, thank you so much for highlighting this correction! I didn't know about this gating mechanism inside the FFN itself; I thought there was a simple MLP with a down projection and an up projection. I'll update the post as soon as I'm able to double-check my computations.

t1_o7v5qy8 | howardhus | 2026-02-28T11:45:36 | score 3
/r/LocalLLaMA/comments/1rgtxry/is_qwen35_a_coding_game_changer_for_anyone_else/o7v5qy8/
> Wow, that's great. You mind telling us more? You do that with agents/skills? Self-made, or is there some reference?

t1_o7v5o8u | chibop1 | 2026-02-28T11:44:59 | score 1
/r/LocalLLaMA/comments/1rgyqz7/has_anyone_got_qwen35_to_work_with_ollama/o7v5o8u/
> It works great on my end.
t1_o7v5fcq | SufficientRow6231 | 2026-02-28T11:42:53 | score 5
/r/LocalLLaMA/comments/1rgzul5/are_you_ready_for_small_qwens/o7v5fcq/
> I'm too stupid to compile llama.cpp… yet somehow still capable of reading the instructions they wrote for people like me.

t1_o7v5f8z | thebadslime | 2026-02-28T11:42:51 | score 3
/r/LocalLLaMA/comments/1rh0hgl/hi_im_a_total_noob/o7v5f8z/
> For creating images I don't think LM Studio will work. Try ComfyUI.

t1_o7v5bqn | [deleted] | 2026-02-28T11:42:01 | score 1 | collapsed true
/r/LocalLLaMA/comments/1rh095c/deepseek_v4_will_be_released_next_week_and_will/o7v5bqn/
> [removed]

t1_o7v5a03 | stopbanni | 2026-02-28T11:41:36 | score 15
/r/LocalLLaMA/comments/1rgzul5/are_you_ready_for_small_qwens/o7v5a03/
> Gemma also had 1B and 270M.

t1_o7v5824 | jacek2023 | 2026-02-28T11:41:08 | score 3
/r/LocalLLaMA/comments/1rgzul5/are_you_ready_for_small_qwens/o7v5824/
> The problem with compilation on Windows is that you need a compiler. On Linux you can just install some packages, but on Windows there are many ways to do things incorrectly.
t1_o7v55ei | redditorialy_retard | 2026-02-28T11:40:29 | score 13
/r/LocalLLaMA/comments/1rh0bkz/tempted_to_prompt_qwen_on_this_craigslist_rig_but/o7v55ei/
> I can almost hear it say "Please, kill me".

t1_o7v55bp | _-_David | 2026-02-28T11:40:28 | score 2
/r/LocalLLaMA/comments/1rgtxry/is_qwen35_a_coding_game_changer_for_anyone_else/o7v55bp/
> Ah, gotcha. It's still a work in progress, but I've had my jaw dropped a few times. I hope this inspires you. What I've got going, in simplest terms, is something like a team working in sequence... ################################################################## WRITER ##############################################...

t1_o7v51sx | maxymob | 2026-02-28T11:39:37 | score 16
/r/LocalLLaMA/comments/1rh0bkz/tempted_to_prompt_qwen_on_this_craigslist_rig_but/o7v51sx/
> What in the zombie apocalypse is this?

t1_o7v4y5t | Technical-Earth-3254 | 2026-02-28T11:38:45 | score 5
/r/LocalLLaMA/comments/1rh095c/deepseek_v4_will_be_released_next_week_and_will/o7v4y5t/
> Let's see if it stays OSS then.

t1_o7v4qby | Polite_Jello_377 | 2026-02-28T11:36:51 | score 1
/r/LocalLLaMA/comments/1rgtxry/is_qwen35_a_coding_game_changer_for_anyone_else/o7v4qby/
> Which Qwen 3.5 variant exactly?
t1_o7v4p4g | FlatwormMinimum | 2026-02-28T11:36:32 | score 23
/r/LocalLLaMA/comments/1rh095c/deepseek_v4_will_be_released_next_week_and_will/o7v4p4g/
> Most likely it seems that way, but I believe they use different models: autoregressive for text generation, diffusion for image generation. The integration of both models in their platform makes it seem like it's the same, but I don't believe it is.

t1_o7v4p01 | And-Bee | 2026-02-28T11:36:30 | score 21
/r/LocalLLaMA/comments/1rh095c/deepseek_v4_will_be_released_next_week_and_will/o7v4p01/
> No, just routed to their image-gen model.

t1_o7v4ovs | po_stulate | 2026-02-28T11:36:29 | score 1
/r/LocalLLaMA/comments/1re894z/the_first_local_vision_model_to_get_this_right/o7v4ovs/
> It seems like a problem with unsloth's Q4_K_XL. I downloaded AesSedai's IQ4_XS and it got the correct answer. Unsloth said in another post that even though their quants have high PPL and KLD (lower is better) compared to AesSedai's quants, real-world tests show their quants are better, which I doubt. AesSedai's IQ4_XS is 10G...

t1_o7v4npt | BitXorBit | 2026-02-28T11:36:11 | score 0
/r/LocalLLaMA/comments/1rgtxry/is_qwen35_a_coding_game_changer_for_anyone_else/o7v4npt/
> I was trying on a 6-bit quant.

t1_o7v4mkg | ayylmaonade | 2026-02-28T11:35:55 | score 2
/r/LocalLLaMA/comments/1rgp97u/sooo_much_thinking/o7v4mkg/
> You just add `{%- set enable_thinking = false %}` to the top of your chat template.
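As a concrete sketch of the tip above for llama.cpp users: prepend that line to a copy of the model's Jinja chat template, then point the server at the modified file. The file names are placeholders; --chat-template-file is a llama-server flag:

```bash
# Build a no-thinking template from the model's own template
printf '{%%- set enable_thinking = false %%}\n' > no_think.jinja
cat original_template.jinja >> no_think.jinja
# Serve with the modified template
llama-server -m model.gguf --chat-template-file no_think.jinja
```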
t1_o7v4l6j | getmevodka | 2026-02-28T11:35:35 | score 1
/r/LocalLLaMA/comments/1rgvma8/which_size_of_qwen35_are_you_planning_to_run/o7v4l6j/
> If that is what you can run in your VRAM, it's the best option to choose as of rn, yes.

t1_o7v4l4a | demon_itizer | 2026-02-28T11:35:34 | score 36
/r/LocalLLaMA/comments/1rh095c/deepseek_v4_will_be_released_next_week_and_will/o7v4l4a/
> Yeah. Is it the newspaper that fired a bunch of reporters?

t1_o7v4jdo | Hot_Turnip_3309 | 2026-02-28T11:35:09 | score 1
/r/LocalLLaMA/comments/1rett32/can_i_run_qwen35_122ba10b_on_a_single_rtx_3090/o7v4jdo/
> Did you compile llama.cpp with CUDA (NVIDIA) support?
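For anyone unsure how to answer that question, a minimal CUDA-enabled build of llama.cpp looks roughly like this (assumes the CUDA toolkit and cmake are installed; without -DGGML_CUDA=ON the resulting binaries run CPU-only):

```bash
git clone https://github.com/ggml-org/llama.cpp
cd llama.cpp
# Enable the CUDA backend at configure time
cmake -B build -DGGML_CUDA=ON
cmake --build build --config Release
```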
t1_o7v4imf | zkstx | 2026-02-28T11:34:58 | score 2
/r/LocalLLaMA/comments/1rgtxry/is_qwen35_a_coding_game_changer_for_anyone_else/o7v4imf/
> LFM2-24B is not yet finished according to Liquid. From their blog: "When pre-training completes, expect an LFM2.5-24B-A2B with additional post-training and reinforcement learning."

t1_o7v4hlt | ResidentPositive4122 | 2026-02-28T11:34:43 | score 60
/r/LocalLLaMA/comments/1rh095c/deepseek_v4_will_be_released_next_week_and_will/o7v4hlt/
> MSM doesn't know shit about jack.

t1_o7v4gsq | melanov85 | 2026-02-28T11:34:31 | score 1
/r/LocalLLaMA/comments/1rgcosw/trained_and_quantized_an_llm_on_a_gtx_1650_4gb/o7v4gsq/
> Cool. I'm a big fan of 7B Q4. Quick, snappy, not perfect, easy to fine-tune and work with.

t1_o7v4fdv | Mickenfox | 2026-02-28T11:34:11 | score 2
/r/LocalLLaMA/comments/1rgn4ki/president_trump_orders_all_federal_agencies_in/o7v4fdv/
> It's not about the AI, it's about asserting their power to force companies to do what they want. To MAGA ideology, the fighting is what matters, not the victory.

t1_o7v4ewa | T3KO | 2026-02-28T11:34:03 | score 1
/r/LocalLLaMA/comments/1rglgma/qwen_35_llamacpp_turn_of_reasoning_and_performance/o7v4ewa/
> Not yet; comparing the new UD models from yesterday right now (27B and 35B A3B) using default settings.
t1_o7v4ec1 | Calm_Bit_throwaway | 2026-02-28T11:33:57 | score 7
/r/LocalLLaMA/comments/1rh095c/deepseek_v4_will_be_released_next_week_and_will/o7v4ec1/
> Aren't most closed frontier models currently doing image gen with the LLM right now?

t1_o7v4adb | Ok_Condition4242 | 2026-02-28T11:32:52 | score 2
/r/LocalLLaMA/comments/1rgzul5/are_you_ready_for_small_qwens/o7v4adb/
> Let Claude help you compile it.

t1_o7v4a9a | ZealousidealCycle915 | 2026-02-28T11:32:50 | score 1
/r/LocalLLaMA/comments/1qtru5p/pairl_a_protocol_for_efficient_agent/o7v4a9a/
> I've added docs and a get-started section to pairl.dev now - should make things clearer.

t1_o7v4a6k | [deleted] | 2026-02-28T11:32:49 | score 1 | collapsed true
/r/LocalLLaMA/comments/1rgn4ki/president_trump_orders_all_federal_agencies_in/o7v4a6k/
> [deleted]

t1_o7v44id | chris_0611 | 2026-02-28T11:31:24 | score 6
/r/LocalLLaMA/comments/1rgtxry/is_qwen35_a_coding_game_changer_for_anyone_else/o7v44id/
> I think it's some of the bad quants of the MoE versions that just go into this endless "but wait... but wait maybe..." loop when thinking.
t1_o7v435v | melanov85 | 2026-02-28T11:31:04 | score 2
/r/LocalLLaMA/comments/1rgcosw/trained_and_quantized_an_llm_on_a_gtx_1650_4gb/o7v435v/
> No problem! To answer your question: I don't train LLMs from scratch; that takes millions of dollars in compute. What I do is finetune existing open-source models on specific knowledge, which is a much more practical approach and something you can definitely do locally. For setting up a local LLM for the family, I'd r...

t1_o7v41xv | PrincipleLogos | 2026-02-28T11:30:45 | score 1
/r/LocalLLaMA/comments/1jnc9rd/its_not_much_but_its_honest_work_4xrtx_3060/o7v41xv/
> Hey, did you buy them and test it out?

t1_o7v3ywl | yaxir | 2026-02-28T11:29:58 | score 1
/r/LocalLLaMA/comments/1rgkc1b/back_in_my_day_localllama_were_the_pioneers/o7v3ywl/
> And which version (4.6 or 4.5)?

t1_o7v3vz9 | Few_Painter_5588 | 2026-02-28T11:29:16 | score 184
/r/LocalLLaMA/comments/1rh095c/deepseek_v4_will_be_released_next_week_and_will/o7v3vz9/
> It's more likely they mean the model will be text-image to text.

t1_o7v3v5k | chris_0611 | 2026-02-28T11:29:04 | score 6
/r/LocalLLaMA/comments/1rgzul5/are_you_ready_for_small_qwens/o7v3v5k/
> Only if they work as a draft model and provide speed-up for the bigger models.
t1_o7v3v2z | Slow_Concentrate3831 | 2026-02-28T11:29:03 | score 62
/r/LocalLLaMA/comments/1rgzul5/are_you_ready_for_small_qwens/o7v3v2z/
> Please, a model between 14 and 20B for us poor 16GB VRAM users.

t1_o7v3tmz | Gold_Sugar_4098 | 2026-02-28T11:28:41 | score 1
/r/LocalLLaMA/comments/1rg1dfi/searching_advice_nvidia_t6000_4gb_vram_useful_for/o7v3tmz/
> Yeah, it's a challenge; in the past I ran early DeepSeek via Ollama. I will experiment with it; hope someone has had more hands-on experience lately.

t1_o7v3sqo | Odd-Ordinary-5922 | 2026-02-28T11:28:28 | score 1
/r/LocalLLaMA/comments/1rgzul5/are_you_ready_for_small_qwens/o7v3sqo/
> Wouldn't they release quants at the release of the bigger models?

t1_o7v3s29 | jacek2023 | 2026-02-28T11:28:18 | score 7
/r/LocalLLaMA/comments/1rgzul5/are_you_ready_for_small_qwens/o7v3s29/
> Possible, but why hidden?

t1_o7v3qia | dampflokfreund | 2026-02-28T11:27:55 | score 143
/r/LocalLLaMA/comments/1rh095c/deepseek_v4_will_be_released_next_week_and_will/o7v3qia/
> Generation!? Surely they mean video/image input, right? It would be immensely cool to have an omni-modal model that can do everything, though; that would be real innovation.
t1_o7v3q4n | Odd-Ordinary-5922 | 2026-02-28T11:27:50 | score 5
/r/LocalLLaMA/comments/1rgzul5/are_you_ready_for_small_qwens/o7v3q4n/
> All you have to do is read instructions lol.

t1_o7v3pts | eidrag | 2026-02-28T11:27:45 | score 123
/r/LocalLLaMA/comments/1rgzul5/are_you_ready_for_small_qwens/o7v3pts/
> ^^qwen

t1_o7v3nx6 | edeltoaster | 2026-02-28T11:27:17 | score 2
/r/LocalLLaMA/comments/1rgzul5/are_you_ready_for_small_qwens/o7v3nx6/
> Waiting for MLX models for LM Studio. Are the mlx-community ones advisable?

t1_o7v3n2e | [deleted] | 2026-02-28T11:27:04 | score 1 | collapsed true
/r/LocalLLaMA/comments/1qyrree/aucun_modèles_sur_ml_studio/o7v3n2e/
> [removed]

t1_o7v3l5v | RepresentativeFun28 | 2026-02-28T11:26:35 | score 13
/r/LocalLLaMA/comments/1rgzul5/are_you_ready_for_small_qwens/o7v3l5v/
> It was the same for me. Simply downloaded the compiled version of llama.cpp and started it via llama-server. Took me 5 minutes in the end.
t1_o7v3jbk | Old-Sherbert-4495 | 2026-02-28T11:26:08 | score 3
/r/LocalLLaMA/comments/1rgxr0v/qwen_35_is_multimodal_here_is_how_to_enable_image/o7v3jbk/
> Unsloth have released a new set of models fixing this issue; they say it's an issue with the chat template. (I have no idea what this means btw, but I guess you can try to fix it yourself.)

t1_o7v3j90 | Dreifach-M | 2026-02-28T11:26:07 | score 6
/r/LocalLLaMA/comments/1rgvma8/which_size_of_qwen35_are_you_planning_to_run/o7v3j90/
> How many t/s do you get with a 4GB GPU + 32GB RAM?