name: string
body: string
score: int64
controversiality: int64
created: timestamp[us]
author: string
collapsed: bool
edited: timestamp[us]
gilded: int64
id: string
locked: bool
permalink: string
stickied: bool
ups: int64
t1_o86b2xe
OpenCode. I've been using it and it has been tool calling perfectly
2
0
2026-03-02T04:14:44
ZealousidealShoe7998
false
null
0
o86b2xe
false
/r/LocalLLaMA/comments/1rie3yc/which_ide_to_code_with_qwen_35/o86b2xe/
false
2
t1_o86b2ly
Personally, I love to start in the middle
2
0
2026-03-02T04:14:40
obnoxious_wealth
false
null
0
o86b2ly
false
/r/LocalLLaMA/comments/1riisyd/are_you_a_top_down_thinker_or_bottom_up/o86b2ly/
false
2
t1_o86asou
Would you recommend keeping the KV cache in bf16 as the previous user recommended?
2
0
2026-03-02T04:12:42
Certain-Cod-1404
false
null
0
o86asou
false
/r/LocalLLaMA/comments/1rii2pd/current_state_of_qwen35122ba10b/o86asou/
false
2
t1_o86aq0j
>Remember those days? -p "This is a transcript of an IRC conversation between Assistant and User:\n\nAssistant: Hi I'm the new assistant, how can I help you?\n\nUser: " etc. etc... Those models were smarter for sure. It wasn't even controversial. Everyone knew that instruction tuning just made the model slightly mor...
2
0
2026-03-02T04:12:11
KallistiTMP
false
null
0
o86aq0j
false
/r/LocalLLaMA/comments/1rgkc1b/back_in_my_day_localllama_were_the_pioneers/o86aq0j/
false
2
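That old-style raw completion is easy to reproduce against any base model; a minimal llama-cli sketch, assuming a placeholder model filename (the prompt text is taken from the comment above):

```
# Hypothetical sketch: raw completion on a base (non-instruct) model, no chat template.
# -e processes the \n escapes; -r hands control back to the user when the model emits "User:".
llama-cli -m base-model.gguf -i -e \
  -p "This is a transcript of an IRC conversation between Assistant and User:\n\nAssistant: Hi I'm the new assistant, how can I help you?\n\nUser: " \
  -r "User:"
```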
t1_o86al0r
They are not like people at all. Putting aside that at the core they are just a very sophisticated statistical model and make mistakes that people never would, people have accountability. An algorithm isn't capable of being responsible for its actions.
1
0
2026-03-02T04:11:12
citrusalex
false
null
0
o86al0r
false
/r/LocalLLaMA/comments/1rgn4ki/president_trump_orders_all_federal_agencies_in/o86al0r/
false
1
t1_o86ae1j
GGUFs would be great to have to try (even your previous models don't have them)
3
0
2026-03-02T04:09:49
pmttyji
false
null
0
o86ae1j
false
/r/LocalLLaMA/comments/1ri7y1i/picokittensabstractsllama8m_writing_abstracts/o86ae1j/
false
3
t1_o86a76v
Try Qwen 3.5 9B when it comes out; gpt-oss-20b could be good as well
6
0
2026-03-02T04:08:27
TurnUpThe4D3D3D3
false
null
0
o86a76v
false
/r/LocalLLaMA/comments/1rifxfe/whats_the_best_local_model_i_can_run_with_16_gb/o86a76v/
false
6
t1_o86a1yz
Won’t fit in VRAM
-2
0
2026-03-02T04:07:25
TurnUpThe4D3D3D3
false
null
0
o86a1yz
false
/r/LocalLLaMA/comments/1rifxfe/whats_the_best_local_model_i_can_run_with_16_gb/o86a1yz/
false
-2
t1_o869yp0
Your LLM is opening bash markdown fences and never closing them, or mixing them with file headers
2
0
2026-03-02T04:06:46
ForsookComparison
false
null
0
o869yp0
false
/r/LocalLLaMA/comments/1riic5m/running_llamaserver_as_a_persistent_systemd/o869yp0/
false
2
t1_o869nut
I really hope Golden Retriever has a good platform in 2028. He's a very good boy.
2
0
2026-03-02T04:04:39
jazir555
false
null
0
o869nut
false
/r/LocalLLaMA/comments/1rhogov/the_us_used_anthropic_ai_tools_during_airstrikes/o869nut/
false
2
t1_o869jb0
With the new small 3.5 model today, maybe we can go faster with speculative decoding? But we need to find the best tuning for that (sketch below).
2
0
2026-03-02T04:03:46
raysar
false
null
0
o869jb0
false
/r/LocalLLaMA/comments/1rianwb/running_qwen35_27b_dense_with_170k_context_at/o869jb0/
false
2
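A minimal sketch of that pairing with llama.cpp's speculative decoding support; the model filenames and draft parameters below are placeholders to tune, not values from the thread:

```
# Hypothetical sketch: a large Qwen3.5 target model with the new small 3.5 model as the draft.
# --draft-max / --draft-min bound how many tokens the draft proposes per step and are the
# main knobs to tune for acceptance rate vs. overhead.
llama-server \
  -m Qwen3.5-27B-Q4_K_XL.gguf \
  -md Qwen3.5-2B-Q8_0.gguf \
  --draft-max 16 --draft-min 4 \
  -ngl 999 -fa on
```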
t1_o8692ac
Have you tried the recent base (non-instruction tuned) models with a template? I'm still not entirely warmed up to the idea of instruction tuning. Like, we knew without a doubt that it did dumb models down, significantly, and everyone just decided it should become the norm anyway. It makes even less sense now that we...
2
0
2026-03-02T04:00:31
KallistiTMP
false
null
0
o8692ac
false
/r/LocalLLaMA/comments/1rgkc1b/back_in_my_day_localllama_were_the_pioneers/o8692ac/
false
2
t1_o8690xg
This is really interesting and honestly validates something I've been frustrated with from the application side. The core insight — skill files are eating context space that should be used for the actual task — is exactly right. I've been building agents locally (also on a 7900 XTX, small world) and the amount of cont...
1
0
2026-03-02T04:00:16
Di_Vante
false
null
0
o8690xg
false
/r/LocalLLaMA/comments/1rif789/injecting_skills_into_the_kv_cache_not_as_stupid/o8690xg/
false
1
t1_o8690lw
Before ChatGPT blew up there was a huge number of application specific machine learning models, often put together by comparatively small research groups, companies, or others who freely released all their code and models. I had the pleasure to take many of these (often a little janky) research codebases/models and ada...
2
0
2026-03-02T04:00:12
Prestigious_Thing797
false
null
0
o8690lw
false
/r/LocalLLaMA/comments/1ria14c/dario_amodei_on_open_source_thoughts/o8690lw/
false
2
t1_o868sxq
That's a fascinating read, but Lavender is clearly just a deep neural net trained on massive amounts of timeseries and/or social media data
2
0
2026-03-02T03:58:43
chuby1tubby
false
null
0
o868sxq
false
/r/LocalLLaMA/comments/1rhogov/the_us_used_anthropic_ai_tools_during_airstrikes/o868sxq/
false
2
t1_o868sme
Glad I wasn't the only one to notice. Definitely need the deets on that!
1
0
2026-03-02T03:58:39
vyralsurfer
false
null
0
o868sme
false
/r/LocalLLaMA/comments/1rhw16v/dense_nonthinking_moe_qwen3527b_is_blowing_me/o868sme/
false
1
t1_o868nf9
Thanks for the hint! Hadn't decided on which quant for that one yet, I'll make sure 6 is in the test group. I can fit it, but I also want to fit other models in parallel haha. Playing memory tetris
1
0
2026-03-02T03:57:41
segfawlt
false
null
0
o868nf9
false
/r/LocalLLaMA/comments/1rii2pd/current_state_of_qwen35122ba10b/o868nf9/
false
1
t1_o868h3g
Then where are local image models at today? Seems like they've ended up at a wall. Are they still at push-button-receive-waifu level?
1
0
2026-03-02T03:56:29
paulisaac
false
null
0
o868h3g
false
/r/LocalLLaMA/comments/1rgkc1b/back_in_my_day_localllama_were_the_pioneers/o868h3g/
false
1
t1_o868ghg
That would be a better question to have asked in the first place. I knew they distilled models but I didn't know they retarded them to post on reddit in the comments.
0
0
2026-03-02T03:56:21
BuffaloDesperate8357
false
null
0
o868ghg
false
/r/LocalLLaMA/comments/1rih7lq/i_asked_my_llm_to_speak_with_as_many/o868ghg/
false
0
t1_o868aqd
I can't get it to do things I'm able to do with GLM 4 32b... Can anyone try this and see if it actually makes a reasonable clock?

```
Hey! Can you create an analog clock with HTML/CSS it should show the current time lets start at 10:00pm. Include numerals, use CSS to animate the hands using transform rotate. Lets t...
```
1
0
2026-03-02T03:55:13
sleepy_roger
false
null
0
o868aqd
false
/r/LocalLLaMA/comments/1rianwb/running_qwen35_27b_dense_with_170k_context_at/o868aqd/
false
1
t1_o8686r6
I can't get it to do things I'm able to do with GLM 4 32b... Can anyone try this and see if it actually makes a reasonable clock?

```
Hey! Can you create an analog clock with HTML/CSS it should show the current time lets start at 10:00pm. Include numerals, use CSS to animate the hands using transform rotate. Lets t...
```
1
0
2026-03-02T03:54:29
sleepy_roger
false
null
0
o8686r6
false
/r/LocalLLaMA/comments/1rhw16v/dense_nonthinking_moe_qwen3527b_is_blowing_me/o8686r6/
false
1
t1_o8686aa
FWIW I use Unsloth's Q6 XL quant for Coder Next and it's amazing
5
0
2026-03-02T03:54:23
kevin_1994
false
null
0
o8686aa
false
/r/LocalLLaMA/comments/1rii2pd/current_state_of_qwen35122ba10b/o8686aa/
false
5
t1_o867u5t
Thanks for hopping in to confirm! I was having the same questions as OP. I was also under the impression that Qwen Coder Next was converted with the same issue but I can't find the comment that had me thinking that - is there also a re-upload coming for those or was this a 3.5 only issue?
2
0
2026-03-02T03:52:03
segfawlt
false
null
0
o867u5t
false
/r/LocalLLaMA/comments/1rii2pd/current_state_of_qwen35122ba10b/o867u5t/
false
2
t1_o867u3k
Yes, I tried those combos and they work well with unified memory (5060 Ti 16GB + 128GB) at 64K context, but I run the Q4 variants.
1
0
2026-03-02T03:52:02
Tema_Art_7777
false
null
0
o867u3k
false
/r/LocalLLaMA/comments/1rifxfe/whats_the_best_local_model_i_can_run_with_16_gb/o867u3k/
false
1
t1_o867sb5
I abandoned LM Studio and swapped over to llama.cpp. I am getting 126K context at 54 tokens per second without exhausting my RAM.
1
0
2026-03-02T03:51:43
Electrify338
false
null
0
o867sb5
false
/r/LocalLLaMA/comments/1ribmcg/how_to_run_qwen35_35b/o867sb5/
false
1
t1_o867s0y
Only AI builds a timeline that runs from right to left. Qwen maybe.
1
0
2026-03-02T03:51:40
DockEllis17
false
null
0
o867s0y
false
/r/LocalLLaMA/comments/1ri635s/13_months_since_the_deepseek_moment_how_far_have/o867s0y/
false
1
t1_o867o4g
Thank you for your work.
2
0
2026-03-02T03:50:55
ConferenceMountain72
false
null
0
o867o4g
false
/r/LocalLLaMA/comments/1rii2pd/current_state_of_qwen35122ba10b/o867o4g/
false
2
t1_o867o1j
Yes absolutely, we're also gonna make notebooks for them. ATM you can use our finetuning guide: [https://unsloth.ai/docs/models/qwen3.5/fine-tune](https://unsloth.ai/docs/models/qwen3.5/fine-tune)
1
0
2026-03-02T03:50:54
yoracale
false
null
0
o867o1j
false
/r/LocalLLaMA/comments/1ri2irg/breaking_today_qwen_35_small/o867o1j/
false
1
t1_o867n9v
The progress since DeepSeek R1 has been genuinely staggering. A year ago running a 70B model locally required a multi-GPU setup that cost thousands. Now quantized models on a single 4090 or an M4 Max are genuinely useful for production tasks. The efficiency gains in quantization, speculative decoding, and attention opt...
3
0
2026-03-02T03:50:46
Soft-Analyst-9452
false
null
0
o867n9v
false
/r/LocalLLaMA/comments/1ri635s/13_months_since_the_deepseek_moment_how_far_have/o867n9v/
false
3
t1_o867mzb
This is exactly what I use Confident AI for. Create your golden dataset with expected outputs, then run your candidate models through it. The metrics are computed by whatever judge you configure, which can be a local model too. You get a side-by-side comparison with statistical breakdowns; see Confident AI for setup.
1
0
2026-03-02T03:50:43
Safe-Obligation-3370
false
null
0
o867mzb
false
/r/LocalLLaMA/comments/1ri14x0/has_anyone_built_a_proper_eval_pipeline_for_local/o867mzb/
false
1
t1_o867mz8
Hope you're right and someone can get Heretic working well for this model, as the interactive portion is not fun.
4
0
2026-03-02T03:50:42
vpyno
false
null
0
o867mz8
false
/r/LocalLLaMA/comments/1ri9enf/qwen35397b_uncensored_nvfp4/o867mz8/
false
4
t1_o867btp
I think they just fixed it: [https://github.com/ggml-org/llama.cpp/pull/19970](https://github.com/ggml-org/llama.cpp/pull/19970)
7
0
2026-03-02T03:48:35
simracerman
false
null
0
o867btp
false
/r/LocalLLaMA/comments/1ricz8u/notice_qwen_35_reprocessing_the_prompt_every_time/o867btp/
false
7
t1_o8678bl
Sent you a PM; you have PII in your repo.
1
0
2026-03-02T03:47:56
michael2v
false
null
0
o8678bl
false
/r/LocalLLaMA/comments/1rick3t/i_benchmarked_8_local_llms_for_phonetohome_chat/o8678bl/
false
1
t1_o8674kq
Yes, I would recommend waiting for it, especially for the tool-calling fixes. We're re-uploading with benchmarks today, but you can also use any other quant; only Q4\_K\_XL, Q3\_K\_XL and Q2\_K\_XL are affected.
15
0
2026-03-02T03:47:14
yoracale
false
null
0
o8674kq
false
/r/LocalLLaMA/comments/1rii2pd/current_state_of_qwen35122ba10b/o8674kq/
false
15
t1_o866zy4
Is this PR that was just submitted by someone related? [https://github.com/ggml-org/llama.cpp/issues/20033](https://github.com/ggml-org/llama.cpp/issues/20033)
1
0
2026-03-02T03:46:23
simracerman
false
null
0
o866zy4
false
/r/LocalLLaMA/comments/1ricz8u/notice_qwen_35_reprocessing_the_prompt_every_time/o866zy4/
false
1
t1_o866ljz
What was the trick? How much context were you able to get out of vLLM on the R9700? I consistently find that I get only half of what I get with GGUF on a 5090. On dual R9700 using FP8 tensors I got 93K context and prefill was great, but tg is too slow for my liking and electricity cost (sketch below). # 2xR9700 vLLM FP8 vs 1x R9700 llama...
1
0
2026-03-02T03:43:42
Ok-Ad-8976
false
null
0
o866ljz
false
/r/LocalLLaMA/comments/1rhk0gz/r9700_and_vllm_with_qwen35/o866ljz/
false
1
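For context, a dual-card vLLM launch along the lines described above might look like the following; the model ID and limits are placeholders, not the commenter's exact setup:

```
# Hypothetical sketch of the 2x R9700 run: an FP8 checkpoint served with tensor
# parallelism across both cards and roughly the 93K context window reported above.
vllm serve Qwen/Qwen3.5-35B-A3B-FP8 \
  --tensor-parallel-size 2 \
  --max-model-len 93000 \
  --gpu-memory-utilization 0.95
```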
t1_o866isw
I never quantize KV. Unsloth's Q4_K_XL is working for you? I might give that a shot. I thought we were supposed to wait for the re-upload using the same technique as the 35B model.
5
0
2026-03-02T03:43:11
kevin_1994
false
null
0
o866isw
false
/r/LocalLLaMA/comments/1rii2pd/current_state_of_qwen35122ba10b/o866isw/
false
5
t1_o866e26
This is genuinely impressive reverse engineering. Apple keeps the ANE documentation locked down tight because they don't want developers bypassing CoreML. The fact that someone got training working on it opens up possibilities Apple probably doesn't want explored. The ANE has insane perf/watt compared to GPU inference,...
1
0
2026-03-02T03:42:18
Soft-Analyst-9452
false
null
0
o866e26
false
/r/LocalLLaMA/comments/1rhx5pc/reverse_engineered_apple_neural_engineane_to/o866e26/
false
1
t1_o866c7o
The speed of open source model releases is insane right now. Qwen 3.5 dropping while people are still benchmarking Qwen 3. The real question is whether the small variants are actually usable for production tasks or if they're just benchmark optimized. I've been running Qwen 2.5 72B for code generation and it's genuinel...
0
0
2026-03-02T03:41:56
Soft-Analyst-9452
false
null
0
o866c7o
false
/r/LocalLLaMA/comments/1ri2irg/breaking_today_qwen_35_small/o866c7o/
false
0
t1_o8663r4
They don't talk about the model at all, so it's pointless.
1
0
2026-03-02T03:40:25
Ok-Adhesiveness-4141
false
null
0
o8663r4
false
/r/LocalLLaMA/comments/1rih7lq/i_asked_my_llm_to_speak_with_as_many/o8663r4/
false
1
t1_o8663tn
A somewhat challenging coding project would be a good test of its perplexity.
3
0
2026-03-02T03:40:25
jeffwadsworth
false
null
0
o8663tn
false
/r/LocalLLaMA/comments/1rianwb/running_qwen35_27b_dense_with_170k_context_at/o8663tn/
false
3
t1_o865xad
My understanding is that the latency is more likely to improve things than the actual bandwidth. Since P2P support is typically locked away by NVIDIA, the all-reduce operation would have to push the data via the CPU. However, geohot has since released a hacked driver that might recover most of the performance benefits fro...
3
0
2026-03-02T03:39:12
JohnTheNerd3
false
null
0
o865xad
false
/r/LocalLLaMA/comments/1rianwb/running_qwen35_27b_dense_with_170k_context_at/o865xad/
false
3
t1_o865vx4
[removed]
1
0
2026-03-02T03:38:58
[deleted]
true
null
0
o865vx4
false
/r/LocalLLaMA/comments/1m2ibq0/is_it_possible_to_run_something_like_groks_anime/o865vx4/
false
1
t1_o865v3j
If you're consistently hitting performance walls with local LLMs, it might be worth considering a more powerful GPU setup, as even the M1/M2 chips can struggle with larger models. NVIDIA cards with 24GB+ VRAM (like the 3090 or 4090) handle 30B+ models much more smoothly. Before buying anything, [llmpicker.blog](https:/...
1
0
2026-03-02T03:38:48
KneeTop2597
false
null
0
o865v3j
false
/r/LocalLLaMA/comments/1rif3h5/mac_mini_m4_pro_24gb_local_llms_are_unusable_for/o865v3j/
false
1
t1_o865r5z
The 80B doesn't allow me enough context with my local setup, so I haven't thoroughly tested it
1
0
2026-03-02T03:38:05
National_Meeting_749
false
null
0
o865r5z
false
/r/LocalLLaMA/comments/1rihdwf/openclaw_and_qwen_35_qwen_next_80/o865r5z/
false
1
t1_o865qxw
With the Qwen3.5 models it's extremely important to use bf16 for the KV cache (especially in thinking mode). I struggled at the start too... but after changing the K cache to bf16 and the V cache to bf16 and using the Unsloth dynamic q4\_k\_xl quants, they are absolutely amazing (flags sketched below).
20
0
2026-03-02T03:38:02
snapo84
false
null
0
o865qxw
false
/r/LocalLLaMA/comments/1rii2pd/current_state_of_qwen35122ba10b/o865qxw/
false
20
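In llama.cpp terms, that advice maps to the cache-type flags; a minimal sketch, assuming a placeholder model path (a quant of the 122B-A10B discussed in this thread):

```
# Keep both the K and V halves of the KV cache in bf16, as the comment above suggests,
# paired with an Unsloth dynamic Q4_K_XL quant. Valid cache types include f16, bf16, q8_0.
llama-server \
  -m Qwen3.5-122B-A10B-UD-Q4_K_XL.gguf \
  --cache-type-k bf16 \
  --cache-type-v bf16 \
  -ngl 999 -fa on
```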
t1_o865pr1
Yeah, I value quality over all other factors.
2
0
2026-03-02T03:37:50
jeffwadsworth
false
null
0
o865pr1
false
/r/LocalLLaMA/comments/1rianwb/running_qwen35_27b_dense_with_170k_context_at/o865pr1/
false
2
t1_o865ltr
Your RX 6600 is a solid choice for local AI experimentation! For running models like Llama or Vicuna, an 8GB GPU works well if you stick with smaller models under 7B parameters. If you want to go bigger (13B+), you'd need more VRAM. Check out [llmpicker.blog](https://llmpicker.blog)...
0
0
2026-03-02T03:37:07
KneeTop2597
false
null
0
o865ltr
false
/r/LocalLLaMA/comments/1ri6nf2/recommendations_for_gpu_with_8gb_vram/o865ltr/
false
0
t1_o865gfs
It’s bollocks, or at least false in general. [darkc0de/XORTRON.CriminalComputing.LARGE.2026.3](https://huggingface.co/darkc0de/XORTRON.CriminalComputing.LARGE.2026.3), made with Heretic 1.2, is currently the #2 model on the UGI Leaderboard, second only to Grok 4. It substantially *improves upon*, rather than reducing...
21
0
2026-03-02T03:36:08
-p-e-w-
false
null
0
o865gfs
false
/r/LocalLLaMA/comments/1ri9enf/qwen35397b_uncensored_nvfp4/o865gfs/
false
21
t1_o865ga7
Qwen is good for coding and STEM applications, but it is heavily slopified. Numerous roleplaying-centric finetunes of existing models exist, which limit slop and increase creativity. [Here's a HuggingFace page with some good ones. ](https://huggingface.co/collections/SicariusSicariiStuff/most-of-my-models-in-order)
6
0
2026-03-02T03:36:06
BagelRedditAccountII
false
null
0
o865ga7
false
/r/LocalLLaMA/comments/1ri2irg/breaking_today_qwen_35_small/o865ga7/
false
6
t1_o865e2c
e.g. Qwen Coder Next
1
0
2026-03-02T03:35:42
Possible-Basis-6623
false
null
0
o865e2c
false
/r/LocalLLaMA/comments/1r5h1gj/you_can_run_minimax25_locally/o865e2c/
false
1
t1_o865bjt
You think it’s better than Next 80b? Or is it just resource availability?
1
0
2026-03-02T03:35:15
AdLongjumping192
false
null
0
o865bjt
false
/r/LocalLLaMA/comments/1rihdwf/openclaw_and_qwen_35_qwen_next_80/o865bjt/
false
1
t1_o865bde
If talking about quality (IQ), I also do not find anything very useful locally, unless you are running the full version, but the tps will still be low.
1
0
2026-03-02T03:35:13
Possible-Basis-6623
false
null
0
o865bde
false
/r/LocalLLaMA/comments/1r5h1gj/you_can_run_minimax25_locally/o865bde/
false
1
t1_o8657aq
I mean, it's not like he's going to *remove* NVLink if he doesn't need it for this specific model lol.
6
0
2026-03-02T03:34:28
Kamal965
false
null
0
o8657aq
false
/r/LocalLLaMA/comments/1rianwb/running_qwen35_27b_dense_with_170k_context_at/o8657aq/
false
6
t1_o86512y
Yes, I'm running it right now. All Qwen3.5 models require nightly vLLM right now, possibly even with patches if you want MTP working.
1
0
2026-03-02T03:33:20
vpyno
false
null
0
o86512y
false
/r/LocalLLaMA/comments/1ri9enf/qwen35397b_uncensored_nvfp4/o86512y/
false
1
t1_o864xcc
can't wait to try 2b and 4b.
1
0
2026-03-02T03:32:38
gpt872323
false
null
0
o864xcc
false
/r/LocalLLaMA/comments/1ri2irg/breaking_today_qwen_35_small/o864xcc/
false
1
t1_o864kwb
Yep, this is why I grabbed an NVLink way back. People in the sub used to say it didn't make much of a difference but I saw a pretty significant difference, glad I paid the $200 back then.
4
0
2026-03-02T03:30:20
sleepy_roger
false
null
0
o864kwb
false
/r/LocalLLaMA/comments/1rianwb/running_qwen35_27b_dense_with_170k_context_at/o864kwb/
false
4
t1_o864g6p
The PC is also worth 33k now instead of 6k because of the DDR5 apocalypse.
1
0
2026-03-02T03:29:29
Confusion_Senior
false
null
0
o864g6p
false
/r/LocalLLaMA/comments/1ri635s/13_months_since_the_deepseek_moment_how_far_have/o864g6p/
false
1
t1_o864dmd
I'm building a machine specifically to isolate an instance of ironclaw, and the 3.5 35BA3B is exactly the model I'm going to use for the non-coding and cron jobs and everything that doesn't require the utmost intelligence. It's a wonderful model, and is a worthy successor to Qwen 3 30A3B.
1
0
2026-03-02T03:29:01
National_Meeting_749
false
null
0
o864dmd
false
/r/LocalLLaMA/comments/1rihdwf/openclaw_and_qwen_35_qwen_next_80/o864dmd/
false
1
t1_o86449z
I don't recommend using ChatGPT or similar public services. You will be giving away personal information about your mental health.
1
0
2026-03-02T03:27:18
No_Entertainer_8404
false
null
0
o86449z
false
/r/LocalLLaMA/comments/1nlyt65/what_is_the_best_llm_for_psychology_coach_or/o86449z/
false
1
t1_o8642wv
Wow, I asked and they delivered. Literally just the other day I was saying how much I'd like to see more models that can fit entirely on a variety of GPU tiers. I really want to see what that 0.8 model is all about, that looks like a model that could be used for entertainment in games, toys, and maybe for...
2
0
2026-03-02T03:27:03
Bakoro
false
null
0
o8642wv
false
/r/LocalLLaMA/comments/1ri2irg/breaking_today_qwen_35_small/o8642wv/
false
2
t1_o8642ma
I ran it; very slow. Again, I think I will wait for the quantized one.
1
0
2026-03-02T03:27:00
callmedevilthebad
false
null
0
o8642ma
false
/r/LocalLLaMA/comments/1rifxfe/whats_the_best_local_model_i_can_run_with_16_gb/o8642ma/
false
1
t1_o863vns
The open-source community still hasn't produced a real "open-source" SOTA model, has it? If these big commercial companies stop providing their "free stuff", could the open-source community sustain valuable development?
-1
0
2026-03-02T03:25:45
Dr_Me_123
false
null
0
o863vns
false
/r/LocalLLaMA/comments/1ria14c/dario_amodei_on_open_source_thoughts/o863vns/
false
-1
t1_o863uu5
Yeah, it's super frustrating when you're trying to get everything to play nicely together and it just won't cooperate. I've found that sometimes the specific configurations for different models need more tweaking than expected. It's interesting to see how some folks are automating even deeper interactions beyond the ba...
1
0
2026-03-02T03:25:36
JohnTheTechAi2
false
null
0
o863uu5
false
/r/LocalLLaMA/comments/1rhmwfn/cant_get_qwen_models_to_work_with_tool_calls/o863uu5/
false
1
t1_o863ump
With my older laptop, a 3080 with 16GB VRAM and 36GB RAM, what models can I hope to use?🥹
2
0
2026-03-02T03:25:33
veeeth
false
null
0
o863ump
false
/r/LocalLLaMA/comments/1ri2irg/breaking_today_qwen_35_small/o863ump/
false
2
t1_o863rfl
Thanks mate, gonna try that
1
0
2026-03-02T03:24:58
Dhonnan
false
null
0
o863rfl
false
/r/LocalLLaMA/comments/1ric44g/what_would_be_the_best_small_model_for_json/o863rfl/
false
1
t1_o863p9j
Thanks
1
0
2026-03-02T03:24:35
Dhonnan
false
null
0
o863p9j
false
/r/LocalLLaMA/comments/1ric44g/what_would_be_the_best_small_model_for_json/o863p9j/
false
1
t1_o863lw3
I have the same mini and just stick to GPT-OSS 20B. It's a great fit for the hardware. Qwen3 30B is fast but context size is limited to ~20k.
1
0
2026-03-02T03:23:57
jeremyckahn
false
null
0
o863lw3
false
/r/LocalLLaMA/comments/1rif3h5/mac_mini_m4_pro_24gb_local_llms_are_unusable_for/o863lw3/
false
1
t1_o863it1
Holy crap I never actually thought there was use for that much RAM. I'm just a nub by comparison getting 64gb of DDR4 just so I could multibox EVE Online with 18 simultaneous clients running.
1
0
2026-03-02T03:23:24
paulisaac
false
null
0
o863it1
false
/r/LocalLLaMA/comments/1q8ckz0/the_reason_why_ram_has_become_so_expensive/o863it1/
false
1
t1_o863gz1
Eh, Gemma 3 still fares better in my use case, but I would love to add 4.7 Flash one day. Could you tell me your hardware and tech stack for it? I also have a single 3090.
6
0
2026-03-02T03:23:04
redditorialy_retard
false
null
0
o863gz1
false
/r/LocalLLaMA/comments/1ri635s/13_months_since_the_deepseek_moment_how_far_have/o863gz1/
false
6
t1_o863c4r
And they say LLMs can't produce real art.
2
0
2026-03-02T03:22:12
Recoil42
false
null
0
o863c4r
false
/r/LocalLLaMA/comments/1rih7lq/i_asked_my_llm_to_speak_with_as_many/o863c4r/
false
2
t1_o8639jj
Have all of that too, same list, but also Plex. My family sits there and watches stuff with ads when the exact same thing is on the Plex server. Also have OP's problem with the home cloud and AI; no one uses any of it, and 60% of the time I spent on it was getting things "just right" so the family could use it just...
1
0
2026-03-02T03:21:44
MarkIII-VR
false
null
0
o8639jj
false
/r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o8639jj/
false
1
t1_o8639ho
And ensuring it didn't click the top 2 results that are ads filled to the brim with malware.
0
1
2026-03-02T03:21:43
redditorialy_retard
false
null
0
o8639ho
false
/r/LocalLLaMA/comments/1ri635s/13_months_since_the_deepseek_moment_how_far_have/o8639ho/
false
0
t1_o8633i9
Thanks for sharing this. I see you are passing sampling parameters to llama.cpp via the OpenCode config file, i.e.:

```
...
"options": {
    "min_p": 0.0,
    "presence_penalty": 0.0,
    "repetition_penalty": 1.0,
    "temperature": 0.6,
    "top_k"...
```
1
0
2026-03-02T03:20:40
St0lz
false
null
0
o8633i9
false
/r/LocalLLaMA/comments/1rgtxry/is_qwen35_a_coding_game_changer_for_anyone_else/o8633i9/
false
1
t1_o862ezp
Same, dude. It was over at r/machinelearningnews; they always have to bring up the golden ratio and nonsensical equations made up of meaningless symbols, and then they get their AI lover to respond in the comments for them. WILD, but also so sad.
1
0
2026-03-02T03:16:18
Certain-Cod-1404
false
null
0
o862ezp
false
/r/LocalLLaMA/comments/1rgkc1b/back_in_my_day_localllama_were_the_pioneers/o862ezp/
false
1
t1_o862e75
You're not better than anyone here
1
0
2026-03-02T03:16:09
ForsookComparison
false
null
0
o862e75
false
/r/LocalLLaMA/comments/1rgkc1b/back_in_my_day_localllama_were_the_pioneers/o862e75/
false
1
t1_o8624f9
Qwen3.5 35B or the 27B fit in your VRAM with the smaller Q3 quants, and both are performing really well for me. 35-A3B Q4 is good with offloading. You can get a lot of context with your system. Qwen3-Coder-Next also performs really well on 16GB VRAM/ 64GB RAM systems like mine.
6
0
2026-03-02T03:14:24
sine120
false
null
0
o8624f9
false
/r/LocalLLaMA/comments/1rifxfe/whats_the_best_local_model_i_can_run_with_16_gb/o8624f9/
false
6
t1_o8623w1
Because it was generated by one it appears. What information does your hungry soul seek my friend?
-1
0
2026-03-02T03:14:19
BuffaloDesperate8357
false
null
0
o8623w1
false
/r/LocalLLaMA/comments/1rih7lq/i_asked_my_llm_to_speak_with_as_many/o8623w1/
false
-1
t1_o861vho
Yeah, "Inference OS" may be overkill. More like "Boot to LLM", but highly optimized to totally thrash the hardware.
1
0
2026-03-02T03:12:51
IAmBobC
false
null
0
o861vho
false
/r/LocalLLaMA/comments/1rhg3p4/baremetal_ai_booting_directly_into_llm_inference/o861vho/
false
1
t1_o861v03
3.5-35B IQ3\_XXS for speed, or the Q3 of the 27B for intelligence. Both are performing great for me.
1
0
2026-03-02T03:12:46
sine120
false
null
0
o861v03
false
/r/LocalLLaMA/comments/1rifxfe/whats_the_best_local_model_i_can_run_with_16_gb/o861v03/
false
1
t1_o861ukj
Wild that what felt like a sudden jailbreak of open weights in Jan 2025 turned into a whole new baseline for what "local" even means now.
3
0
2026-03-02T03:12:41
theagentledger
false
null
0
o861ukj
false
/r/LocalLLaMA/comments/1ri635s/13_months_since_the_deepseek_moment_how_far_have/o861ukj/
false
3
t1_o861jia
27B will work and 35B-A3B will work; just offload all the experts to the CPU (sketch below).
1
0
2026-03-02T03:10:42
InvertedVantage
false
null
0
o861jia
false
/r/LocalLLaMA/comments/1rifxfe/whats_the_best_local_model_i_can_run_with_16_gb/o861jia/
false
1
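The expert offload mentioned above is the same --override-tensor pattern that shows up in a full command later in this thread; a minimal sketch, with the model path as a placeholder:

```
# Keep attention and shared weights on the GPU, but push all MoE expert tensors to the
# CPU so a 35B-A3B quant plus a large context fits in 16GB of VRAM.
llama-server \
  -m Qwen3.5-35B-A3B-Q4_K_XL.gguf \
  -ot ".ffn_.*_exps.=CPU" \
  -ngl 999 -fa on
```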
t1_o861fbs
Downvoted? You are welcome for the correct and simple answer to your question.
-2
0
2026-03-02T03:09:56
_-_David
false
null
0
o861fbs
false
/r/LocalLLaMA/comments/1rifxfe/whats_the_best_local_model_i_can_run_with_16_gb/o861fbs/
false
-2
t1_o861fbt
Maybe if I wanted to tank my FPS when playing games...
1
0
2026-03-02T03:09:56
k31thdawson
false
null
0
o861fbt
false
/r/LocalLLaMA/comments/1riej05/qwen35_thinks_its_2024_so_buying_a_2026_american/o861fbt/
false
1
t1_o861aod
I mean I had a good laugh. I'd be interested to know what model this is.
1
0
2026-03-02T03:09:07
tat_tvam_asshole
false
null
0
o861aod
false
/r/LocalLLaMA/comments/1rih7lq/i_asked_my_llm_to_speak_with_as_many/o861aod/
false
1
t1_o8619fj
Shakespeare is alive!?
1
0
2026-03-02T03:08:53
bworneed
false
null
0
o8619fj
false
/r/LocalLLaMA/comments/1qyl6rd/gemini_system_prompt_google_decided_to_remove_pro/o8619fj/
false
1
t1_o860yff
Yeah, he is. He still frequently live streams his coding projects: [https://www.youtube.com/@geohotarchive/videos](https://www.youtube.com/@geohotarchive/videos)
3
0
2026-03-02T03:06:56
I-am_Sleepy
false
null
0
o860yff
false
/r/LocalLLaMA/comments/1rhx5pc/reverse_engineered_apple_neural_engineane_to/o860yff/
false
3
t1_o860tv9
Let's go
1
0
2026-03-02T03:06:06
Conscious_Nobody9571
false
null
0
o860tv9
false
/r/LocalLLaMA/comments/1ri2irg/breaking_today_qwen_35_small/o860tv9/
false
1
t1_o860jos
You still haven't answered my question.
0
0
2026-03-02T03:04:18
Ok-Adhesiveness-4141
false
null
0
o860jos
false
/r/LocalLLaMA/comments/1rih7lq/i_asked_my_llm_to_speak_with_as_many/o860jos/
false
0
t1_o860eli
Running Qwen 3.5 35B A3B Q8 on a 5060 Ti 16GB and DDR5 with 65K context:

```
set CUDA_VISIBLE_DEVICES=0 && "C:\Users\user\Desktop\llama\llama-server.exe" ^
  -m "D:\Qwen3.5-35B-A3B-UD-Q8_K_XL.gguf" ^
  -a Qwen3.5-35B-A3B ^
  --ctx_size 65536 ^
  -ot ".ffn_.*_exps.=CPU" ^
  --jinja ^
  -fa on ^
  -ngl 999 ^
  -t 12 ^
  -b 2048 ^
  -ub 25...
```
3
0
2026-03-02T03:03:24
nikhilprasanth
false
null
0
o860eli
false
/r/LocalLLaMA/comments/1rifxfe/whats_the_best_local_model_i_can_run_with_16_gb/o860eli/
false
3
t1_o8609s8
What about the mid-range 122B? Any idea?
1
0
2026-03-02T03:02:32
RickyRickC137
false
null
0
o8609s8
false
/r/LocalLLaMA/comments/1ri9enf/qwen35397b_uncensored_nvfp4/o8609s8/
false
1
t1_o8607qv
I mean, it's a 20B model running on CPU. What are we talking, a couple of pennies? Y'all need to chill. WTF even happened to this sub where you literally can only post news or the same exact question over and over.
0
1
2026-03-02T03:02:10
ArchdukeofHyperbole
false
null
0
o8607qv
false
/r/LocalLLaMA/comments/1rih7lq/i_asked_my_llm_to_speak_with_as_many/o8607qv/
false
0
t1_o86070a
OpenCode in Zed
2
0
2026-03-02T03:02:02
Educational-Fruit854
false
null
0
o86070a
false
/r/LocalLLaMA/comments/1rie3yc/which_ide_to_code_with_qwen_35/o86070a/
false
2
t1_o8605eg
40k < 80k 🤓 Running agents in parallel is a thing but the very slow prefill probably kills it for most applications 🙂‍↕️
1
0
2026-03-02T03:01:45
gnaarw
false
null
0
o8605eg
false
/r/LocalLLaMA/comments/1rd80gx/i_just_saw_something_amazing/o8605eg/
false
1
t1_o86026n
I downloaded a model today, quantized it in AWS, and found after all that that it doesn't work. Have you tested it?
2
0
2026-03-02T03:01:12
AutomaticDriver5882
false
null
0
o86026n
false
/r/LocalLLaMA/comments/1ri9enf/qwen35397b_uncensored_nvfp4/o86026n/
false
2
t1_o8601gn
Boy and professor: encodes "check your own assumptions before checking someone else's." Driver: encodes "disambiguation is sovereignty; assuming is how you crash." Bird: three morals: not everyone who shits on you is your enemy, not everyone who rescues you is your friend, and when warm and in shit, keep your mouth shut....
1
0
2026-03-02T03:01:05
RTS53Mini
false
null
0
o8601gn
false
/r/LocalLLaMA/comments/1riat5w/vignettes_handy_for_ais/o8601gn/
false
1
t1_o8600ib
That's true. It took an eternity to respond. I am deleting it for now. Will wait till the quantized one is released.
2
0
2026-03-02T03:00:54
callmedevilthebad
false
null
0
o8600ib
false
/r/LocalLLaMA/comments/1rifxfe/whats_the_best_local_model_i_can_run_with_16_gb/o8600ib/
false
2
t1_o85zyt4
You did a comparison between two models on a given hardware budget - that's totally fair, and useful for people with similar hardware.
3
0
2026-03-02T03:00:37
crantob
false
null
0
o85zyt4
false
/r/LocalLLaMA/comments/1ri1hgv/a_bit_of_a_psa_i_get_that_qwen35_is_all_the_rage/o85zyt4/
false
3
t1_o85zv88
Remember, in this scenario you're the military and have unlimited compute resources. You run them all. Old models, local LLMs, huge cloud-based LLMs, a team of old generals wargaming on a board, and you have real humans compare the results to see what signals they've extracted from the noise.
3
0
2026-03-02T03:00:00
evranch
false
null
0
o85zv88
false
/r/LocalLLaMA/comments/1rhogov/the_us_used_anthropic_ai_tools_during_airstrikes/o85zv88/
false
3
t1_o85zv32
Connect your display to the mainboard's/CPU's iGPU. You'll lose RTX Super Resolution and HDR, but gain the 2-3GB of VRAM eaten by Windows. Also use llama.cpp with -fit on and --fit-target 256; that should reclaim another 700MB.
2
0
2026-03-02T02:59:58
Training_Visual6159
false
null
0
o85zv32
false
/r/LocalLLaMA/comments/1riej05/qwen35_thinks_its_2024_so_buying_a_2026_american/o85zv32/
false
2