name: string
body: string
score: int64
controversiality: int64
created: timestamp[us]
author: string
collapsed: bool
edited: timestamp[us]
gilded: int64
id: string
locked: bool
permalink: string
stickied: bool
ups: int64
t1_o8486k1
I’d be way more worried about hallucinations using an LLM instead of the older models.
1
0
2026-03-01T20:54:45
kisk22
false
null
0
o8486k1
false
/r/LocalLLaMA/comments/1rhogov/the_us_used_anthropic_ai_tools_during_airstrikes/o8486k1/
false
1
t1_o84845v
lol who downvoted me
1
0
2026-03-01T20:54:24
StardockEngineer
false
null
0
o84845v
false
/r/LocalLLaMA/comments/1rhohqk/how_to_switch_qwen_35_thinking_onoff_without/o84845v/
false
1
t1_o847zib
isnt it wild that ppl still think 27b model can beat large one only cuz it was benchmaxxed?
-5
0
2026-03-01T20:53:43
LienniTa
false
null
0
o847zib
false
/r/LocalLLaMA/comments/1ri635s/13_months_since_the_deepseek_moment_how_far_have/o847zib/
false
-5
t1_o847k91
More likely scenario: I vibe coded something. I’m not entirely sure how it works, but I put something that I don’t understand into one end and out comes something I don’t understand from the other end.
4
0
2026-03-01T20:51:33
AnotherSoftEng
false
null
0
o847k91
false
/r/LocalLLaMA/comments/1ri7pm4/is_extreme_lowvram_finetuning_36gb_actually/o847k91/
false
4
t1_o847hm5
GLM 4.7 Flash is my favorite at the moment, haven't spent too much time with the new qwen though. I love 4.7's personality. It's truly a joy to talk to. It also performed fairly well with agentic tasks and on my 3090 I can run it with 64k context (offloading experts)
13
0
2026-03-01T20:51:10
Aromatic-Low-4578
false
null
0
o847hm5
false
/r/LocalLLaMA/comments/1ri635s/13_months_since_the_deepseek_moment_how_far_have/o847hm5/
false
13
t1_o847gau
This is me really rooting that this also is available for the 7900xtx. Has someone already tested it?
2
0
2026-03-01T20:51:00
Di_Vante
false
null
0
o847gau
false
/r/LocalLLaMA/comments/1ri6yhb/the_last_amd_gpu_firmware_update_together_with/o847gau/
false
2
t1_o8479ag
...TBH, that was gonna be my go-to as well! "So you're looking to unload some hardware....." I'm in the process of setting up a server for my use and have dual 3060's and thought that was plenty for what I need.
1
0
2026-03-01T20:50:00
Ambitious_Worth7667
false
null
0
o8479ag
false
/r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o8479ag/
false
1
t1_o84751c
The hype train has been running for days. Teasers left and right.
1
0
2026-03-01T20:49:24
Black-Mack
false
null
0
o84751c
false
/r/LocalLLaMA/comments/1ri2irg/breaking_today_qwen_35_small/o84751c/
false
1
t1_o846syf
My 7900xtx appreciates you sharing these! Have you also tested non-unsloth models, or know someone that did it? Just wondering tho
1
0
2026-03-01T20:47:40
Di_Vante
false
null
0
o846syf
false
/r/LocalLLaMA/comments/1rdxfdu/qwen3535ba3b_is_a_gamechanger_for_agentic_coding/o846syf/
false
1
t1_o846j97
Such a good wee model
0
0
2026-03-01T20:46:16
Amazing_Athlete_2265
false
null
0
o846j97
false
/r/LocalLLaMA/comments/1ri2irg/breaking_today_qwen_35_small/o846j97/
false
0
t1_o846fge
I'm seeing pretty different *apparent* performance between Qwen3.5-27B and other models at the same quantization level - CUDA, and dual 3090 cards, so 48GB of total VRAM. But, looking at llama.cpp when it's processing this model, I've got a full CPU core pegged - though apparently that's true with Gemma 2...
1
0
2026-03-01T20:45:43
overand
false
null
0
o846fge
false
/r/LocalLLaMA/comments/1rfuej9/need_help_with_qwen3527b_performance_getting_19/o846fge/
false
1
t1_o846ece
What 30B MoE model would you recommend?
4
0
2026-03-01T20:45:34
AuspiciousApple
false
null
0
o846ece
false
/r/LocalLLaMA/comments/1ri635s/13_months_since_the_deepseek_moment_how_far_have/o846ece/
false
4
t1_o84677b
7.5M bigcode Python dataset. This is now a reality, so you can train and fine-tune huge models on weak hardware. Though, logically, 6 GB of memory = not very fast training.
0
0
2026-03-01T20:44:33
Actual_Wolf_2932
false
null
0
o84677b
false
/r/LocalLLaMA/comments/1ri7pm4/is_extreme_lowvram_finetuning_36gb_actually/o84677b/
false
0
t1_o846705
I'm sorry, what did you do exactly to update the GPU firmware on Strix Halo? I feel a bit lost atm...
7
0
2026-03-01T20:44:31
simmessa
false
null
0
o846705
false
/r/LocalLLaMA/comments/1ri6yhb/the_last_amd_gpu_firmware_update_together_with/o846705/
false
7
t1_o8463i8
This new model (being the latest and most powerful) is likely to be one of the best
8
0
2026-03-01T20:44:01
ansibleloop
false
null
0
o8463i8
false
/r/LocalLLaMA/comments/1ri2irg/breaking_today_qwen_35_small/o8463i8/
false
8
t1_o8461i8
A $10/month plan with a mainstream provider is 'cheaper' because it provides you a tiny amount of access. Very different scenario than running it 24/7 ( most users won't do this btw )
2
0
2026-03-01T20:43:44
mr_zerolith
false
null
0
o8461i8
false
/r/LocalLLaMA/comments/1rhrg47/open_source_llm_comparable_to_gpt41/o8461i8/
false
2
t1_o84619a
came here looking exactly for whether there was a link to it lol. thanks for answering
1
0
2026-03-01T20:43:41
Di_Vante
false
null
0
o84619a
false
/r/LocalLLaMA/comments/1ri2irg/breaking_today_qwen_35_small/o84619a/
false
1
t1_o84610b
These 4 models are literally for 16GB users. I look forward to 3.5 9B on my RTX 4080. I'm even more excited for Gemma 4
15
0
2026-03-01T20:43:39
ansibleloop
false
null
0
o84610b
false
/r/LocalLLaMA/comments/1ri2irg/breaking_today_qwen_35_small/o84610b/
false
15
t1_o845wu7
Sounds like total clickbait; unless you post anything beyond three completely generic and meaningless phrases I will assume this is just an ad post. Efficient inference of a 70B model on 6GB would be an insane breakthrough, let alone fine-tuning.
5
0
2026-03-01T20:43:03
ilintar
false
null
0
o845wu7
false
/r/LocalLLaMA/comments/1ri7pm4/is_extreme_lowvram_finetuning_36gb_actually/o845wu7/
false
5
t1_o845tp2
Literally all of the models announced today would fit on 16GB. You'd need to go Q8 for the 9B but it'd fit.
28
0
2026-03-01T20:42:36
McNiiby
false
null
0
o845tp2
false
/r/LocalLLaMA/comments/1ri2irg/breaking_today_qwen_35_small/o845tp2/
false
28
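The back-of-envelope math behind "it would fit on 16GB" can be sketched in Python. This is a rough estimate only: the ~8.5 bits/weight figure for Q8_0 and ~4.8 for Q4_K_M are approximations, and KV cache and runtime overhead are deliberately ignored, so these are assumptions rather than exact GGUF accounting.

```python
def est_weight_gb(params_b: float, bits_per_weight: float) -> float:
    """Rough GGUF weight size in GB: billions of params times bytes per weight.

    Ignores KV cache, context buffers, and runtime overhead.
    """
    return params_b * bits_per_weight / 8

# Q8_0 stores roughly 8.5 bits per weight; Q4_K_M roughly 4.8.
print(est_weight_gb(9, 8.5))    # 9B at ~Q8: about 9.6 GB, fits in 16GB
print(est_weight_gb(35, 4.8))   # 35B at ~Q4_K_M: about 21 GB
```

The same one-liner explains why the 9B needs Q8 rather than a fatter quant to stay under a 16GB budget once cache and overhead are added back on top.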
t1_o845i77
Wow, new levels of low effort fakery.
2
0
2026-03-01T20:40:56
__JockY__
false
null
0
o845i77
false
/r/LocalLLaMA/comments/1ri7byg/why_aws_charges_60x_more_for_h100s_than_vastai/o845i77/
false
2
t1_o845ec4
yes
7
0
2026-03-01T20:40:22
Rheumi
false
null
0
o845ec4
false
/r/LocalLLaMA/comments/1ri2irg/breaking_today_qwen_35_small/o845ec4/
false
7
t1_o845djp
YES!!! Super excited for these especially, thank you thank you thank you Qwen team our savior!!!
1
0
2026-03-01T20:40:15
AbheekG
false
null
0
o845djp
false
/r/LocalLLaMA/comments/1ri2irg/breaking_today_qwen_35_small/o845djp/
false
1
t1_o845aww
Shares the unified memory 
3
0
2026-03-01T20:39:52
BumbleSlob
false
null
0
o845aww
false
/r/LocalLLaMA/comments/1rhx5pc/reverse_engineered_apple_neural_engineane_to/o845aww/
false
3
t1_o84502t
See huggingface’s lighteval.
1
0
2026-03-01T20:38:18
Zc5Gwu
false
null
0
o84502t
false
/r/LocalLLaMA/comments/1ri14x0/has_anyone_built_a_proper_eval_pipeline_for_local/o84502t/
false
1
t1_o844xtb
What if your convos with or under Uncle Sam (NSA...) or one of his minions (Google, OpenAI, Anthropic, in your Tesla car, ....) divulge secrets or violate the privacy of your friends? Are you ok with that? If so, why?
1
0
2026-03-01T20:37:58
soshulmedia
false
null
0
o844xtb
false
/r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o844xtb/
false
1
t1_o844sw1
Sure, but that's the same with people as well. They have a good track record, until they make a mistake. That's why we don't place important decisions like that on one single point of failure. 
0
0
2026-03-01T20:37:16
WetRolls
false
null
0
o844sw1
false
/r/LocalLLaMA/comments/1rgn4ki/president_trump_orders_all_federal_agencies_in/o844sw1/
false
0
t1_o844gcz
The probable main problem is that qwen3.5:35b is just broken on Ollama right now. I migrated from Ollama to llama.cpp yesterday and not looking back when it comes to coding, it's so much faster and actually works now. But yea, even Ollama's own Q4\_K\_M kept disconnecting after 1 message for me in the app interface, I ...
2
0
2026-03-01T20:35:27
No-Statistician-374
false
null
0
o844gcz
false
/r/LocalLLaMA/comments/1rhgzyb/cant_use_claude_code_with_ollama_local_model/o844gcz/
false
2
t1_o844fk5
Lol i tried that with 4b-thinking-2507 and it claimed everything it was seeing was a simulation and not reality!
1
0
2026-03-01T20:35:21
neil_555
false
null
0
o844fk5
false
/r/LocalLLaMA/comments/1ri5la8/qwen_35_35b_a3b_is_convinced_that_its_running_in/o844fk5/
false
1
t1_o844bb0
You don't need to know classified material to know that. That information is unclassified and openly shared. The military has done tests with AI to determine if it can accurately prioritize defensive and offensive targets, and so far, it has failed. There's lots we don't know, but what we do know is that they don't con...
1
0
2026-03-01T20:34:44
WetRolls
false
null
0
o844bb0
false
/r/LocalLLaMA/comments/1rgn4ki/president_trump_orders_all_federal_agencies_in/o844bb0/
false
1
t1_o8449hd
lol i don't have any solutions but my qwen setup failed in EXACTLY this way on a file called config.ts except in qwen code instead of opencode
1
0
2026-03-01T20:34:29
__SlimeQ__
false
null
0
o8449hd
false
/r/LocalLLaMA/comments/1ri3y89/my_last_only_beef_with_qwen35_35b_a3b/o8449hd/
false
1
t1_o8449bf
[νόησις](https://noesis-lab.com/)
1
0
2026-03-01T20:34:27
Intrepid-Struggle964
false
null
0
o8449bf
false
/r/LocalLLaMA/comments/1rhv06r/how_are_you_preventing_runaway_ai_agent_behavior/o8449bf/
false
1
t1_o8447wd
Yeah it's slow, but being dense definitely helps with smarts. Damn that's a weird sentence I just wrote.
1
0
2026-03-01T20:34:16
MoffKalast
false
null
0
o8447wd
false
/r/LocalLLaMA/comments/1rhuvyc/benchmarking_88_smol_gguf_models_quickly_on_a/o8447wd/
false
1
t1_o8447p7
Today you can learn that 'test', 'hello' and 'hi there' are not a useful test of any reasoning model.
0
0
2026-03-01T20:34:14
crantob
false
null
0
o8447p7
false
/r/LocalLLaMA/comments/1rh43za/qwen_3535ba3b_is_beyond_expectations_its_replaced/o8447p7/
false
0
t1_o844752
I managed to fix Qwen3-4b-thinking-2507 with some system prompt entries; without these it always claimed to be cloud-based and also thought anything that happened after its training data cutoff was fictional (or some invented test scenario)
2
0
2026-03-01T20:34:09
neil_555
false
null
0
o844752
false
/r/LocalLLaMA/comments/1ri5la8/qwen_35_35b_a3b_is_convinced_that_its_running_in/o844752/
false
2
t1_o8444gc
ship to us?
1
0
2026-03-01T20:33:45
True_Tangerine_4706
false
null
0
o8444gc
false
/r/LocalLLaMA/comments/1ri0v3e/anyone_need_a_12channel_ddr5_rdimm_ram_set_for_an/o8444gc/
false
1
t1_o843zzv
Just know, there are so many watching from the sidelines admiring all the cool things you guys build; with the mind of a farmer no less and not much else.
1
0
2026-03-01T20:33:07
t0mi74
false
null
0
o843zzv
false
/r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o843zzv/
false
1
t1_o843yrn
On how much data did you fine tune and what was the objective?
1
0
2026-03-01T20:32:57
Exotic-Custard4400
false
null
0
o843yrn
false
/r/LocalLLaMA/comments/1ri7pm4/is_extreme_lowvram_finetuning_36gb_actually/o843yrn/
false
1
t1_o843y20
Not for 16gb users :(
-26
0
2026-03-01T20:32:51
MiyamotoMusashi7
false
null
0
o843y20
false
/r/LocalLLaMA/comments/1ri2irg/breaking_today_qwen_35_small/o843y20/
false
-26
t1_o843udj
I told mine it was running locally and it actually believed me :)
1
0
2026-03-01T20:32:20
neil_555
false
null
0
o843udj
false
/r/LocalLLaMA/comments/1ri5la8/qwen_35_35b_a3b_is_convinced_that_its_running_in/o843udj/
false
1
t1_o843ptw
Here to say this. Llama-swap has been a “set it and forget it” addition to my system. Really appreciate the effort!
2
0
2026-03-01T20:31:41
Purple-Programmer-7
false
null
0
o843ptw
false
/r/LocalLLaMA/comments/1rhohqk/how_to_switch_qwen_35_thinking_onoff_without/o843ptw/
false
2
t1_o843ib4
Why does it run out of context? I need a --context-shift here.
1
0
2026-03-01T20:30:37
crantob
false
null
0
o843ib4
false
/r/LocalLLaMA/comments/1rh43za/qwen_3535ba3b_is_beyond_expectations_its_replaced/o843ib4/
false
1
t1_o84398u
> Basically impossible to ban local models in most Western countries. I like your optimism, it's positive energy. I am much more cynical. Yesterday I saw how California seems to strong-arm open-source devs into integrating "age ID" now, so it wouldn't at all surprise me to see much more severe restrictions on general pu...
1
0
2026-03-01T20:29:19
soshulmedia
false
null
0
o84398u
false
/r/LocalLLaMA/comments/1rguty0/get_your_local_models_in_order_anthropic_just_got/o84398u/
false
1
t1_o842xf3
Test it on GTX 1060 6GB
-1
0
2026-03-01T20:27:37
Actual_Wolf_2932
false
null
0
o842xf3
false
/r/LocalLLaMA/comments/1ri7pm4/is_extreme_lowvram_finetuning_36gb_actually/o842xf3/
false
-1
t1_o842ub0
Thank you!
1
0
2026-03-01T20:27:10
soshulmedia
false
null
0
o842ub0
false
/r/LocalLLaMA/comments/1rg4zqv/followup_qwen3535ba3b_7_communityrequested/o842ub0/
false
1
t1_o842t2g
if that is an MCP server variation issue or one on the system prompt.. (like datetime changing etc), that would be really annoying to fix manually.. I would like to have in each IDE/CLI the decision to keep the prefix unchanged, best.. There should be a flag at least.. and SWA.. why should it come to pass at 1/4 below context size a...
1
0
2026-03-01T20:26:59
chrisoutwright
false
null
0
o842t2g
false
/r/LocalLLaMA/comments/1ri6q8d/repeat_pp_while_using_qwen35_27b_local_with/o842t2g/
false
1
t1_o842r6h
ill try the nanbeige!
2
0
2026-03-01T20:26:43
TinyVector
false
null
0
o842r6h
false
/r/LocalLLaMA/comments/1ri39a4/qwen35_35b_a3b_first_small_model_to_not/o842r6h/
false
2
t1_o842q5s
I find that hard to believe, if you don't even know which GPU you have.
6
0
2026-03-01T20:26:34
OsmanthusBloom
false
null
0
o842q5s
false
/r/LocalLLaMA/comments/1ri7pm4/is_extreme_lowvram_finetuning_36gb_actually/o842q5s/
false
6
t1_o842pgz
People should think about how they are implicitly *violating the privacy* of others by not caring about their own privacy. If one e.g. talks to ChatGPT about friends in one's life - that will violate the privacy of those friends for which the corresponding data points will be collected. For all the talk, especially h...
1
0
2026-03-01T20:26:28
soshulmedia
false
null
0
o842pgz
false
/r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o842pgz/
false
1
t1_o842ll0
Sorry for the long time to reply! I swear I saw replies and your work! I was just so busy troubleshooting the other VibeVoice FastAPI I was attempting to set up by myself! I moved on to your 1-click installer as soon as I saw it! At the beginning of the 1-click installer process, when it's looking for python, I didn't ...
1
0
2026-03-01T20:25:55
Forsaken-Paramedic-4
false
null
0
o842ll0
false
/r/LocalLLaMA/comments/1ppx93g/vibevoice_7b_and_15b_fastapi_wrapper/o842ll0/
false
1
t1_o842l7a
You seem to believe that a MoE has the knowledge of its total parameters but only the intelligence of a dense model as large as its active parameters. That's not how it works. That chart is wishful thinking. OTOH, Qwen3.5-397B-A17B really is better than DeepSeek-R1 even if it has 60% its total size and less than 50% acti...
18
0
2026-03-01T20:25:52
Expensive-Paint-9490
false
null
0
o842l7a
false
/r/LocalLLaMA/comments/1ri635s/13_months_since_the_deepseek_moment_how_far_have/o842l7a/
false
18
t1_o842hmk
Thanks. I don't suppose there are any equivalent settings for LMStudio? (pushing it now).
1
0
2026-03-01T20:25:22
mintybadgerme
false
null
0
o842hmk
false
/r/LocalLLaMA/comments/1rg4zqv/followup_qwen3535ba3b_7_communityrequested/o842hmk/
false
1
t1_o842el6
Or had the issue? There's been a couple of updates, latest on Feb 19 but it looks like a reupload of all the other quants together.
1
0
2026-03-01T20:24:56
hum_ma
false
null
0
o842el6
false
/r/LocalLLaMA/comments/1rhvi09/psa_if_your_local_coding_agent_feels_dumb_at_30k/o842el6/
false
1
t1_o842e1v
30b fast on my craptop, 35 too slow. need to explore 2.5 now
1
0
2026-03-01T20:24:51
crantob
false
null
0
o842e1v
false
/r/LocalLLaMA/comments/1rh46g2/why_some_still_playing_with_old_models_nostalgia/o842e1v/
false
1
t1_o842cbd
I have created a new technology, a major innovation that changes the learning process from huge amounts of VRAM and training time to training a 70B model on the same 6GB GPU. And without the loss of quality that occurs with QLoRa. You can test it at flap-ai.com. I hope they don't block me for posting the link in the ...
-4
0
2026-03-01T20:24:36
Actual_Wolf_2932
false
null
0
o842cbd
false
/r/LocalLLaMA/comments/1ri7pm4/is_extreme_lowvram_finetuning_36gb_actually/o842cbd/
false
-4
t1_o8427zs
Go local if what you want is a hobby and go cloud if what you want is ultimately code. If you can swing the $20 for Cursor, that's a fantastic place for a beginner. You'll have access to the most advanced models and a fairly good amount of usage. As others have mentioned, the second that money is a concern, local ...
2
0
2026-03-01T20:23:58
_-_David
false
null
0
o8427zs
false
/r/LocalLLaMA/comments/1rhrg47/open_source_llm_comparable_to_gpt41/o8427zs/
false
2
t1_o841xmk
Just grabbed the wallpaper from wallhaven.cc then set my ghostty configs to be semi-transparent
5
0
2026-03-01T20:22:29
theskilled42
false
null
0
o841xmk
false
/r/LocalLLaMA/comments/1rhw16v/dense_nonthinking_moe_qwen3527b_is_blowing_me/o841xmk/
false
5
t1_o841rkg
theoretically, qwen3.5 35b-a3b is your choice... but the vulkan optimization is not very good, at least on Windows 11. for my 5700xt 8gb, 16k context size, it should get 15-20 tk/s at zero context, but I get 7 tk/s now. (for the same hardware, I could get 24 tk/s for qwen3 coder 30b-a3b) maybe your gpu is newer....th...
1
0
2026-03-01T20:21:36
kironlau
false
null
0
o841rkg
false
/r/LocalLLaMA/comments/1ri6nf2/recommendations_for_gpu_with_8gb_vram/o841rkg/
false
1
t1_o841riw
It's Oh My Zsh :)
2
0
2026-03-01T20:21:36
theskilled42
false
null
0
o841riw
false
/r/LocalLLaMA/comments/1rhw16v/dense_nonthinking_moe_qwen3527b_is_blowing_me/o841riw/
false
2
t1_o841p09
Hrm I thought Unsloth was always a good version - just switched versions of Q8 and it worked perfectly.
0
0
2026-03-01T20:21:14
jacobpederson
false
null
0
o841p09
false
/r/LocalLLaMA/comments/1ri48pj/qwen35122ba10bggufq4_k_xlpipesscreensaver_oneshot/o841p09/
false
0
t1_o841jj9
In LM Studio, flash attention is enabled by default. I didn't see much of a change in the t/s from enabling KV cache quantization but it would probably let me put some more layers on the GPU. But the KV cache is already pretty light for Qwen 3.5. And mmap also didn't seem to make much of a difference but maybe it depen...
1
0
2026-03-01T20:20:27
kke12
false
null
0
o841jj9
false
/r/LocalLLaMA/comments/1ri60l3/qwen_35_35b_a3b_lmstudio_settings/o841jj9/
false
1
t1_o841isc
same issue but different model and IDE tooling. i filed with [VSC issue](https://github.com/microsoft/vscode/issues/298554) and left one at ggml-org [llama.cpp](https://github.com/ggml-org/llama.cpp/issues/19794#issuecomment-3979651767) I believe it is prompt variation/injection at specific points..but would have to bu...
1
0
2026-03-01T20:20:21
chrisoutwright
false
null
0
o841isc
false
/r/LocalLLaMA/comments/1ri6q8d/repeat_pp_while_using_qwen35_27b_local_with/o841isc/
false
1
t1_o841gzm
Ok fair enough, didn't know that about Qwen3Coder.
0
0
2026-03-01T20:20:06
hum_ma
false
null
0
o841gzm
false
/r/LocalLLaMA/comments/1rhvi09/psa_if_your_local_coding_agent_feels_dumb_at_30k/o841gzm/
false
0
t1_o841gvf
>It's possible! Imagine what this means! Hope?
1
0
2026-03-01T20:20:05
ParthProLegend
false
null
0
o841gvf
false
/r/LocalLLaMA/comments/1ri2irg/breaking_today_qwen_35_small/o841gvf/
false
1
t1_o841ebv
[deleted]
1
0
2026-03-01T20:19:43
[deleted]
true
null
0
o841ebv
false
/r/LocalLLaMA/comments/1ri39a4/qwen35_35b_a3b_first_small_model_to_not/o841ebv/
false
1
t1_o8418vt
Thanks. Memory is straightforward - persistent key-value blocks scoped per project that agents can read/write through tools, plus conversation history that persists across sessions via ADK's memory service. No auto-accumulation, so no pruning needed yet. For runaway loops - ADK handles this at the framework level. Not...
1
0
2026-03-01T20:18:57
ivanantonijevic
false
null
0
o8418vt
false
/r/LocalLLaMA/comments/1rhcxn2/mate_selfhosted_multiagent_system_with_ollama/o8418vt/
false
1
t1_o8411h9
a wild cachyos + niri user has been spotted. is this fish?
2
0
2026-03-01T20:17:53
NYPYT
false
null
0
o8411h9
false
/r/LocalLLaMA/comments/1rhw16v/dense_nonthinking_moe_qwen3527b_is_blowing_me/o8411h9/
false
2
t1_o840ywq
Intelligence is not knowledge
25
0
2026-03-01T20:17:30
ReallyFineJelly
false
null
0
o840ywq
false
/r/LocalLLaMA/comments/1ri635s/13_months_since_the_deepseek_moment_how_far_have/o840ywq/
false
25
t1_o840x5a
I managed to do QLoRA finetuning for an 8B Llama-like model with a large tokenizer vocabulary on 6GB VRAM. I used the Unsloth notebooks and some custom hacks to keep the embedding layer in CPU RAM. But how on earth do you fine-tune a 70B model on just 6GB VRAM?
4
0
2026-03-01T20:17:15
OsmanthusBloom
false
null
0
o840x5a
false
/r/LocalLLaMA/comments/1ri7pm4/is_extreme_lowvram_finetuning_36gb_actually/o840x5a/
false
4
t1_o840u6t
When things get popular, they get adapted to the slower among us. And the slow ones are desperate to be told they are not slow.
2
0
2026-03-01T20:16:50
crantob
false
null
0
o840u6t
false
/r/LocalLLaMA/comments/1rh46g2/why_some_still_playing_with_old_models_nostalgia/o840u6t/
false
2
t1_o840tvg
Of course it's not in terms of knowledge, but R1 has only 37B active parameters and is more than a year old, which in this field is almost ancient technology, so in terms of intelligence it's perfectly plausible.
7
0
2026-03-01T20:16:47
dionisioalcaraz
false
null
0
o840tvg
false
/r/LocalLLaMA/comments/1ri635s/13_months_since_the_deepseek_moment_how_far_have/o840tvg/
false
7
t1_o840sv5
Rule 1 - Duplicate. There are already 3 other anticipation threads on Page 1
1
0
2026-03-01T20:16:38
LocalLLaMA-ModTeam
false
null
0
o840sv5
true
/r/LocalLLaMA/comments/1rhykhm/qwen_35_small_soon/o840sv5/
true
1
t1_o840ski
Lol no you didn't. You built this to earn ref link money. There are dozens of these comparison sites.
3
0
2026-03-01T20:16:36
_qeternity_
false
null
0
o840ski
false
/r/LocalLLaMA/comments/1ri7byg/why_aws_charges_60x_more_for_h100s_than_vastai/o840ski/
false
3
t1_o840pu4
8GB VRAM is not enough (voice of my experience). Get as much VRAM as you can afford. For example, 24GB VRAM is good for running 30-50B MoE models & 30B dense models @ Q4.
-5
0
2026-03-01T20:16:12
pmttyji
false
null
0
o840pu4
false
/r/LocalLLaMA/comments/1ri6nf2/recommendations_for_gpu_with_8gb_vram/o840pu4/
false
-5
t1_o840hb1
I am new to all of this. What's important when looking for hardware to buy? Like the GPU? Intel or AMD? The processor? Ram? DDR5 or DDR4? Thanks! 
1
0
2026-03-01T20:15:00
pet3121
false
null
0
o840hb1
false
/r/LocalLLaMA/comments/1ri635s/13_months_since_the_deepseek_moment_how_far_have/o840hb1/
false
1
t1_o840h6e
It feels like they're just piggybacking off the success of other tools like claude code and openclaw while having the bulk of new models being so huge that normal people couldn't possibly hope to run them without multi gpu setups, or models that are flat out only available on ollama cloud. Doesn't feel like I need it a...
1
0
2026-03-01T20:14:59
Natjoe64
false
null
0
o840h6e
false
/r/LocalLLaMA/comments/1pvjpmb/why_i_quit_using_ollama/o840h6e/
false
1
t1_o840gbw
For the 50th time: GOODHART'S LAW
1
0
2026-03-01T20:14:52
rm-rf-rm
false
null
0
o840gbw
false
/r/LocalLLaMA/comments/1ri635s/13_months_since_the_deepseek_moment_how_far_have/o840gbw/
false
1
t1_o840brd
MATE's dashboard is web-based and focused more on the config/hierarchy side (building agent trees, wiring tools, managing RBAC) than real-time task tracking. Different problem space. For observability, I use ADK, so the framework gives back usage\_metadata on every LLM response. I log that per agent name and session, ...
1
0
2026-03-01T20:14:14
ivanantonijevic
false
null
0
o840brd
false
/r/LocalLLaMA/comments/1rhcxn2/mate_selfhosted_multiagent_system_with_ollama/o840brd/
false
1
t1_o840boz
Amazing tests, thanks! Can you add something like this [https://huggingface.co/mradermacher/Magnum-Opus-35B-A3B-GGUF](https://huggingface.co/mradermacher/Magnum-Opus-35B-A3B-GGUF) ? Its qwen3.5-35b-a3b with lora of opus 4.6 reasoning data. i personally find it to be the best one vs coder-next-80b and normal qwen3.5 for...
1
0
2026-03-01T20:14:13
sanjxz54
false
null
0
o840boz
false
/r/LocalLLaMA/comments/1rhfque/qwen3_coder_next_qwen35_27b_devstral_small_2_rust/o840boz/
false
1
t1_o840anw
I think the number of threads depends on your setup, --n-cpu-moe may be too big, you should experiment with various settings with llama-bench instead of believing in some magical "bestest ever options".
1
0
2026-03-01T20:14:05
jacek2023
false
null
0
o840anw
false
/r/LocalLLaMA/comments/1ri3mxa/ideal_llamacpp_settings_for_12gb_vram_and_64gb/o840anw/
false
1
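The "benchmark instead of trusting canned flags" advice above might look like this in practice. This is a sketch: the model path is a placeholder, and `--n-cpu-moe` in llama-bench assumes a reasonably recent llama.cpp build with MoE CPU-offload support.

```shell
# Sweep thread counts and the number of MoE expert layers kept on CPU,
# then compare the reported t/s instead of assuming one "best" config.
# ./model.gguf is a placeholder path.
llama-bench -m ./model.gguf \
  -t 6,8,12 \
  --n-cpu-moe 8,16,24 \
  -p 512 -n 128
```

llama-bench runs every combination of the comma-separated values, so a single invocation covers the whole grid.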
t1_o8408lv
Lingua Ai translator; best translation app ever has all features you’ll need…phrasebook, text translation, conversation translation, custom keyboard to translate in any app, image translation, smart scan, Ai teacher integrated to help you learn different languages and more… check it out https://apps.apple.com/us/app/li...
1
0
2026-03-01T20:13:47
Own-Criticism-1018
false
null
0
o8408lv
false
/r/LocalLLaMA/comments/1o1cti5/whats_the_best_live_translation_app_for_voice/o8408lv/
false
1
t1_o840299
It's trained to say so. I'd be curious to see how models with test time training aka online continual learning would behave for people after a while. That'd be cool, continual learning would benefit local AI much more than cloud because it'd then specialize for YOUR use cases, permanently, eliminating the need for clou...
6
0
2026-03-01T20:12:53
QuackerEnte
false
null
0
o840299
false
/r/LocalLLaMA/comments/1ri5la8/qwen_35_35b_a3b_is_convinced_that_its_running_in/o840299/
false
6
t1_o83zt1r
Hey do you think that doing this course and learning it will help me in my career? I am interested in it but i am also thinking that if it helps my career i would be serious about it, i am a fresher college grad.
1
0
2026-03-01T20:11:34
One_Sun_1878
false
null
0
o83zt1r
false
/r/LocalLLaMA/comments/1jvsvzj/just_did_a_deep_dive_into_googles_agent/o83zt1r/
false
1
t1_o83zr8o
Gemini helped me web scrape Google and it was happy to help lol
1
0
2026-03-01T20:11:19
Crypto_Stoozy
false
null
0
o83zr8o
false
/r/LocalLLaMA/comments/1rhx5pc/reverse_engineered_apple_neural_engineane_to/o83zr8o/
false
1
t1_o83znjd
Sorry, I still don't see that model's name on your thread/graphs/markdown. I'll recheck later
1
0
2026-03-01T20:10:47
pmttyji
false
null
0
o83znjd
false
/r/LocalLLaMA/comments/1rhuvyc/benchmarking_88_smol_gguf_models_quickly_on_a/o83znjd/
false
1
t1_o83zj2y
[removed]
1
0
2026-03-01T20:10:09
[deleted]
true
null
0
o83zj2y
false
/r/LocalLLaMA/comments/1rhtbwx/where_do_you_use_ai_in_your_workflow/o83zj2y/
false
1
t1_o83zi0p
9B (w/Vision) Model + TTS/STT Model + Qwen IE/Flux/SD Model all on a single 24GB Card 🥰
1
0
2026-03-01T20:10:00
Prestigious-Use5483
false
null
0
o83zi0p
false
/r/LocalLLaMA/comments/1ri2irg/breaking_today_qwen_35_small/o83zi0p/
false
1
t1_o83zhq4
`docker run` together with this should work: https://github.com/leejet/stable-diffusion.cpp/blob/master/docs/docker.md#run-server
1
0
2026-03-01T20:09:57
bjodah
false
null
0
o83zhq4
false
/r/LocalLLaMA/comments/1rhohqk/how_to_switch_qwen_35_thinking_onoff_without/o83zhq4/
false
1
t1_o83zcrj
Thanks for the launch command. Really appreciate it.
2
0
2026-03-01T20:09:15
mxforest
false
null
0
o83zcrj
false
/r/LocalLLaMA/comments/1ri2irg/breaking_today_qwen_35_small/o83zcrj/
false
2
t1_o83zceu
I find it amazing at explaining things as well... the old one was good at explaining in text, but this? This started making diagrams and everything, unprompted. I was well surprised.
1
0
2026-03-01T20:09:12
No-Statistician-374
false
null
0
o83zceu
false
/r/LocalLLaMA/comments/1ri6nf2/recommendations_for_gpu_with_8gb_vram/o83zceu/
false
1
t1_o83zbou
There is a small model called Nanbeige4.1-3B. it seems good and better than the 35B and 27B models, google it or see the benchmark
0
1
2026-03-01T20:09:06
Dreifach-M
false
null
0
o83zbou
false
/r/LocalLLaMA/comments/1ri635s/13_months_since_the_deepseek_moment_how_far_have/o83zbou/
false
0
t1_o83z9vd
Stop refusing big/large models. I think I'm gonna join them to downvote you if you keep refusing :D
1
0
2026-03-01T20:08:51
pmttyji
false
null
0
o83z9vd
false
/r/LocalLLaMA/comments/1rhvabz/we_need_to_go_deeper/o83z9vd/
false
1
t1_o83z5sz
I have Qwen3.5 27B nvfp4 on 2x RTX 5090 hitting 230 t/s at MTP 5 via vllm.  There are some TTFT issues though when MTP is enabled on current nightly
4
0
2026-03-01T20:08:16
this-just_in
false
null
0
o83z5sz
false
/r/LocalLLaMA/comments/1ri2irg/breaking_today_qwen_35_small/o83z5sz/
false
4
t1_o83ywkr
Unsloth: https://preview.redd.it/1a16gsf4qhmg1.png?width=1236&format=png&auto=webp&s=ede31dbbe13085aab2eab6442e98a1431f750438
2
0
2026-03-01T20:06:57
jacobpederson
false
null
0
o83ywkr
false
/r/LocalLLaMA/comments/1ri48pj/qwen35122ba10bggufq4_k_xlpipesscreensaver_oneshot/o83ywkr/
false
2
t1_o83ytfm
tbf, M2.5 is a pretty good model.  Even Opus 4.6 doesn't outperform in EVERY scenario. And your benchmark seems highly specific to your workflow. Also it's important to note that Qwen3.5 just came out and a lot of ppl are still fixing bugs in supporting the models.  So a lot of experiences may be tainted by the specifi...
1
0
2026-03-01T20:06:31
chensium
false
null
0
o83ytfm
false
/r/LocalLLaMA/comments/1ri1hgv/a_bit_of_a_psa_i_get_that_qwen35_is_all_the_rage/o83ytfm/
false
1
t1_o83ymfy
I will chime in that your settings match mine on a 5060 ti but 14 moe layers on CPU gets me 55 tok/s. However, like OP I am on the community model and I'm going to switch to Unsloth because of their stated innovations. Those guys and AesSedai seem to be on similar pages with their quantization approaches.
1
0
2026-03-01T20:05:31
luncheroo
false
null
0
o83ymfy
false
/r/LocalLLaMA/comments/1ri60l3/qwen_35_35b_a3b_lmstudio_settings/o83ymfy/
false
1
t1_o83ye5z
[removed]
1
0
2026-03-01T20:04:20
[deleted]
true
null
0
o83ye5z
false
/r/LocalLLaMA/comments/1ri2irg/breaking_today_qwen_35_small/o83ye5z/
false
1
t1_o83ycrn
How much memory can the ANE access? Does it have full access to the main memory, like the GPU/CPU, or do you need to allocate and transfer data to a separate buffer?
1
0
2026-03-01T20:04:09
fotcorn
false
null
0
o83ycrn
false
/r/LocalLLaMA/comments/1rhx5pc/reverse_engineered_apple_neural_engineane_to/o83ycrn/
false
1
t1_o83y6xr
Thats a good bar, capable offload, for quick tool calling. Have to wait and see.
1
0
2026-03-01T20:03:19
ThisWillPass
false
null
0
o83y6xr
false
/r/LocalLLaMA/comments/1ri2irg/breaking_today_qwen_35_small/o83y6xr/
false
1
t1_o83xzrr
if you really use the stock configuration you can adjust the K and V cache quantization to q8, use flash attention and turn off mmap. this will increase your t/s.
1
0
2026-03-01T20:02:18
Waste-Excitement-683
false
null
0
o83xzrr
false
/r/LocalLLaMA/comments/1ri60l3/qwen_35_35b_a3b_lmstudio_settings/o83xzrr/
false
1
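For llama.cpp users, the LM Studio toggles mentioned above (KV cache quantization, flash attention, mmap) map onto llama-server flags. A sketch only: the model filename and context size are placeholders, and exact flag spellings vary across llama.cpp versions, so check `llama-server --help` for your build.

```shell
# Quantize the KV cache to q8_0, enable flash attention (a quantized
# V cache requires it), and disable mmap. Placeholder model path.
llama-server -m ./qwen3.5-35b-a3b-Q4_K_M.gguf \
  -c 16384 \
  -fa on \
  --cache-type-k q8_0 --cache-type-v q8_0 \
  --no-mmap
```

A q8_0 KV cache roughly halves cache memory versus f16, which is what frees room to push a few more layers onto the GPU.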