name string | body string | score int64 | controversiality int64 | created timestamp[us] | author string | collapsed bool | edited timestamp[us] | gilded int64 | id string | locked bool | permalink string | stickied bool | ups int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
t1_o84gsxr | Mac Studio won't suffice? | 1 | 0 | 2026-03-01T21:38:56 | 1anre | false | null | 0 | o84gsxr | false | /r/LocalLLaMA/comments/1pmek1c/tiiny_ai_pocket_lab_mini_pc_with_12core_arm_cpu/o84gsxr/ | false | 1 |
t1_o84gsr7 | You can cool them "quietly" by installing fans in the rear of your case and running a script that checks GPU temps every few seconds and adjusts PWM power according to your own fan curve. You can also lower the wattage to reduce power use. I am very happy with mine. | 2 | 0 | 2026-03-01T21:38:55 | Dry_Yam_4597 | false | null | 0 | o84gsr7 | false | /r/LocalLLaMA/comments/1ri232z/worth_it_to_buy_tesla_p40s/o84gsr7/ | false | 2 |
t1_o84gr0w | Will do! Thank you | 1 | 0 | 2026-03-01T21:38:40 | Photochromism | false | null | 0 | o84gr0w | false | /r/LocalLLaMA/comments/1ri9goi/lm_studio_gemma_3_27b_24gb_vram_stops_when/o84gr0w/ | false | 1 |
t1_o84gqbt | What, you don't enjoy announcements of announcements? | 7 | 0 | 2026-03-01T21:38:34 | Much-Researcher6135 | false | null | 0 | o84gqbt | false | /r/LocalLLaMA/comments/1ri2irg/breaking_today_qwen_35_small/o84gqbt/ | false | 7 |
t1_o84goo9 | I just download it in LM Studio | 5 | 0 | 2026-03-01T21:38:20 | kibblerz | false | null | 0 | o84goo9 | false | /r/LocalLLaMA/comments/1ri2irg/breaking_today_qwen_35_small/o84goo9/ | false | 5 |
t1_o84go9i | yeah obviously, even an H100 is 2.5x as fast as an A100 and doesn't cost 2.5x as much | 1 | 0 | 2026-03-01T21:38:16 | llama-impersonator | false | null | 0 | o84go9i | false | /r/LocalLLaMA/comments/1ri7pm4/is_extreme_lowvram_finetuning_36gb_actually/o84go9i/ | false | 1 |
t1_o84gder | Let's hope that it's going to be decent at languages other than English and Chinese... | 2 | 0 | 2026-03-01T21:36:43 | HighDefinist | false | null | 0 | o84gder | false | /r/LocalLLaMA/comments/1ri2irg/breaking_today_qwen_35_small/o84gder/ | false | 2 |
t1_o84g8zz | What probably would happen is the government would classify them as a military risk and consume them. | 1 | 0 | 2026-03-01T21:36:06 | Ready_Stuff_4357 | false | null | 0 | o84g8zz | false | /r/LocalLLaMA/comments/1rguty0/get_your_local_models_in_order_anthropic_just_got/o84g8zz/ | false | 1 |
t1_o84g4s6 | more artificial hypenalysis bullshit. r/localllama, please stop looking at this ridiculous benchmark and smoking crack. | 5 | 0 | 2026-03-01T21:35:30 | llama-impersonator | false | null | 0 | o84g4s6 | false | /r/LocalLLaMA/comments/1ri635s/13_months_since_the_deepseek_moment_how_far_have/o84g4s6/ | false | 5 |
t1_o84g33r | Thanks for the heads up, it indeed works much better with tools now | 1 | 0 | 2026-03-01T21:35:15 | octopus_limbs | false | null | 0 | o84g33r | false | /r/LocalLLaMA/comments/1rgtxry/is_qwen35_a_coding_game_changer_for_anyone_else/o84g33r/ | false | 1 |
t1_o84fpdx | The small Qwen3.5 models are coming soon; I'm just saying that Google will never dare to release Gemma 4 27B if it's worse than Qwen 27B... | 3 | 0 | 2026-03-01T21:33:18 | Adventurous-Paper566 | false | null | 0 | o84fpdx | false | /r/LocalLLaMA/comments/1ri2irg/breaking_today_qwen_35_small/o84fpdx/ | false | 3 |
t1_o84fomg | All support under GNU/Linux is in the kernel plus an additional firmware package; the newer the kernel the better. I tested just now with 6.18.12 (in Debian testing). | 5 | 0 | 2026-03-01T21:33:11 | Educational_Sun_8813 | false | null | 0 | o84fomg | false | /r/LocalLLaMA/comments/1ri6yhb/the_last_amd_gpu_firmware_update_together_with/o84fomg/ | false | 5 |
t1_o84fnut | B200 192GB? Lol | 0 | 0 | 2026-03-01T21:33:04 | Actual_Wolf_2932 | false | null | 0 | o84fnut | false | /r/LocalLLaMA/comments/1ri7pm4/is_extreme_lowvram_finetuning_36gb_actually/o84fnut/ | false | 0 |
t1_o84fm4m | Wow lol, thanks for this. | 1 | 0 | 2026-03-01T21:32:49 | -Davster- | false | null | 0 | o84fm4m | false | /r/LocalLLaMA/comments/1cerqd8/refusal_in_llms_is_mediated_by_a_single_direction/o84fm4m/ | false | 1 |
t1_o84fiww | yes, I'm using Debian and recently there was an update to the package amd-gpu-firmware or something like that, but there were also some Vulkan improvements on the llama.cpp side | 6 | 0 | 2026-03-01T21:32:22 | Educational_Sun_8813 | false | null | 0 | o84fiww | false | /r/LocalLLaMA/comments/1ri6yhb/the_last_amd_gpu_firmware_update_together_with/o84fiww/ | false | 6 |
t1_o84fbhd | Thanks, ChatGPT, great response. To level the playing field I asked Claude to prepare a reply:
This description raises several red flags and doesn't align well with how transformer LLM fine-tuning actually works:
Technical Issues
"Weight clusters that can be independently trained" - This contradicts how neural network... | 2 | 0 | 2026-03-01T21:31:18 | ilintar | false | null | 0 | o84fbhd | false | /r/LocalLLaMA/comments/1ri7pm4/is_extreme_lowvram_finetuning_36gb_actually/o84fbhd/ | false | 2 |
t1_o84f7w3 | How does one scale decode if there's one CPU and memory bank? | 1 | 0 | 2026-03-01T21:30:47 | Glittering-Call8746 | false | null | 0 | o84f7w3 | false | /r/LocalLLaMA/comments/1qn02w8/i_put_an_rtx_pro_4000_blackwell_sff_in_my_mss1/o84f7w3/ | false | 1 |
t1_o84f3n3 | I'm running the Qwen3.5-35B-A3B Q2_K_XL quant on a freakin RTX 2060 laptop GPU with 6GB VRAM at 10-20 tps. Reasoning tuned to low or none (someone posted the settings for Qwen3.5 to achieve that), or I use the variant without reasoning budget, which answers almost immediately. Still smarter than any other model I ever ran ... | 2 | 0 | 2026-03-01T21:30:11 | AppealSame4367 | false | null | 0 | o84f3n3 | false | /r/LocalLLaMA/comments/1ri2irg/breaking_today_qwen_35_small/o84f3n3/ | false | 2 |
t1_o84f2yy | A100? okay bot, your knowledge cutoff is truly some ancient shit | 3 | 0 | 2026-03-01T21:30:05 | llama-impersonator | false | null | 0 | o84f2yy | false | /r/LocalLLaMA/comments/1ri7pm4/is_extreme_lowvram_finetuning_36gb_actually/o84f2yy/ | false | 3 |
t1_o84f0a7 | While I agree that AI isn't fully there yet, there's a big difference between llama.cpp and the likely majority of software work, which is CRUD apps, and those are trivial for AI to do right now | 2 | 0 | 2026-03-01T21:29:42 | MoaTheDog | false | null | 0 | o84f0a7 | false | /r/LocalLLaMA/comments/1ri6jg3/at_what_point_do_we_stop_reading_code/o84f0a7/ | false | 2 |
t1_o84ex3d | You'd need to set the context length for the model to something your machine has room for, after the model and the OS/display are loaded. A rolling context window is just going to throw the oldest parts out as it fills up, but if you don't have space for the context in VRAM anyway, you never get to rolling. Check the ... | 2 | 0 | 2026-03-01T21:29:15 | SmChocolateBunnies | false | null | 0 | o84ex3d | false | /r/LocalLLaMA/comments/1ri9goi/lm_studio_gemma_3_27b_24gb_vram_stops_when/o84ex3d/ | false | 2 |
t1_o84eutq | This. | 1 | 0 | 2026-03-01T21:28:55 | IrisColt | false | null | 0 | o84eutq | false | /r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o84eutq/ | false | 1 |
t1_o84ee6w | For 100% parameter coverage of a 70B model on a 1060? About 8 days — not ideal, but mathematically possible. For serious full coverage you'd rent an A100 through the platform for a few dollars.
The point isn't that the 1060 is the best tool. It's that it works at all — which it previously didn't, for any method. | 0 | 0 | 2026-03-01T21:26:31 | Actual_Wolf_2932 | false | null | 0 | o84ee6w | false | /r/LocalLLaMA/comments/1ri7pm4/is_extreme_lowvram_finetuning_36gb_actually/o84ee6w/ | false | 0 |
t1_o84e8y8 | I don't think you actually run either of those. If you like being fooled by benches, I'm not the one to help. 27B is not bad, 35B-A3B is not bad, but they are not even close to R1. | -3 | 0 | 2026-03-01T21:25:47 | LienniTa | false | null | 0 | o84e8y8 | false | /r/LocalLLaMA/comments/1ri635s/13_months_since_the_deepseek_moment_how_far_have/o84e8y8/ | false | -3 |
t1_o84e2gi | Hmm... | 1 | 0 | 2026-03-01T21:24:50 | IrisColt | false | null | 0 | o84e2gi | false | /r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o84e2gi/ | false | 1 |
t1_o84e2c7 | Several clones and variants have super low overhead. Nanobot and Zeroclaw are interesting. Listed all of them here: [https://shelldex.com](https://shelldex.com) (disclaimer: my site) | 1 | 0 | 2026-03-01T21:24:49 | daveonkels | false | null | 0 | o84e2c7 | false | /r/LocalLLaMA/comments/1rbjxpv/i_think_openclaw_is_overhyped_just_use_skills/o84e2c7/ | false | 1 |
t1_o84dxgs | It is really hard to read those results, especially on a phone, and also really hard to compare them to the previous results you mention.
Can you give an indication how much better things got? | 10 | 0 | 2026-03-01T21:24:09 | DerDave | false | null | 0 | o84dxgs | false | /r/LocalLLaMA/comments/1ri6yhb/the_last_amd_gpu_firmware_update_together_with/o84dxgs/ | false | 10 |
t1_o84drgn | Nowhere. Models running on modest local hardware (if by modest we mean 16/24GB VRAM, which is not modest) are fucking stupid as a rock. My dog could grasp concepts faster.
That's where we are at | 0 | 1 | 2026-03-01T21:23:16 | tracagnotto | false | null | 0 | o84drgn | false | /r/LocalLLaMA/comments/1ri635s/13_months_since_the_deepseek_moment_how_far_have/o84drgn/ | false | 0 |
t1_o84dq3c | Thank you, Mr. | 1 | 0 | 2026-03-01T21:23:05 | TheMericanIdiot | false | null | 0 | o84dq3c | false | /r/LocalLLaMA/comments/1rhohqk/how_to_switch_qwen_35_thinking_onoff_without/o84dq3c/ | false | 1 |
t1_o84dksu | You've got a right to your opinion, but I'm the last person on here using small models. Even if it's Opus, check the work.
How have all the AI PRs in llama.cpp done? That's from the last 2 months. | 5 | 0 | 2026-03-01T21:22:19 | a_beautiful_rhind | false | null | 0 | o84dksu | false | /r/LocalLLaMA/comments/1ri6jg3/at_what_point_do_we_stop_reading_code/o84dksu/ | false | 5 |
t1_o84dify | Any idea what the full setup for this is on Linux (Ubuntu)? AMD update links? Thanks! | 3 | 0 | 2026-03-01T21:21:59 | BeginningReveal2620 | false | null | 0 | o84dify | false | /r/LocalLLaMA/comments/1ri6yhb/the_last_amd_gpu_firmware_update_together_with/o84dify/ | false | 3 |
t1_o84dh71 | In this particular situation, yes. Your bottleneck is going to be your overall compute resources before anything architecture wise becomes a concern.
At the end of the day the only way you're going to get hard numbers is to try it, otherwise you will just have to take my word for it. My opinion is informed, but I'm al... | 1 | 0 | 2026-03-01T21:21:48 | JamesEvoAI | false | null | 0 | o84dh71 | false | /r/LocalLLaMA/comments/1ri1rit/running_qwen314b_93gb_on_a_cpuonly_kvm_vps_what/o84dh71/ | false | 1 |
t1_o84degr | Those tk/s are a single client's throughput (roughly on average), the total combined throughput will go up with multiple clients to a point (not sure where the point is at present). | 1 | 0 | 2026-03-01T21:21:25 | sammcj | false | null | 0 | o84degr | false | /r/LocalLLaMA/comments/1rdvq3s/qwen35_27b_is_match_made_in_heaven_for_size_and/o84degr/ | false | 1 |
t1_o84ddqv | heh | 1 | 0 | 2026-03-01T21:21:20 | IrisColt | false | null | 0 | o84ddqv | false | /r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o84ddqv/ | false | 1 |
t1_o84dddj | This seems to work. Thanks! | 1 | 0 | 2026-03-01T21:21:16 | StardockEngineer | false | null | 0 | o84dddj | false | /r/LocalLLaMA/comments/1rg0487/system_prompt_for_qwen35_27b35ba3b_to_reduce/o84dddj/ | false | 1 |
t1_o84db7k | who cares, training is compute bound, so training on your 3060 is mostly useless if you're trying to train on a billion tokens before the earth expires | -1 | 0 | 2026-03-01T21:20:58 | llama-impersonator | false | null | 0 | o84db7k | false | /r/LocalLLaMA/comments/1ri7pm4/is_extreme_lowvram_finetuning_36gb_actually/o84db7k/ | false | -1 |
t1_o84d4ip | I used --chat-template-kwargs '{"enable_thinking": false}' and it’s working fine.
Dumb question (because I've made this mistake myself): shouldn't there be a \ after -fa on so the command reads the next line? On Windows, I forget the ^ and the next arguments get ignored. | 1 | 0 | 2026-03-01T21:20:02 | ComfortableTomato807 | false | null | 0 | o84d4ip | false | /r/LocalLLaMA/comments/1rglgma/qwen_35_llamacpp_turn_of_reasoning_and_performance/o84d4ip/ | false | 1 |
t1_o84d0b0 | but can you run Qwen3.5 27B on a ~10 year old GPU? It doesn't have smaller versions yet | 1 | 0 | 2026-03-01T21:19:26 | ominotomi | false | null | 0 | o84d0b0 | false | /r/LocalLLaMA/comments/1ri2irg/breaking_today_qwen_35_small/o84d0b0/ | false | 1 |
t1_o84d0cf | You nailed it. | 2 | 0 | 2026-03-01T21:19:26 | IrisColt | false | null | 0 | o84d0cf | false | /r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o84d0cf/ | false | 2 |
t1_o84cxct | Gotcha - I saw Ubuntu also on it, but didn't check the dates as I thought it was updated as well. I see now that it has an earlier date when you open up that section. | 1 | 0 | 2026-03-01T21:19:01 | PhilWheat | false | null | 0 | o84cxct | false | /r/LocalLLaMA/comments/1ri6yhb/the_last_amd_gpu_firmware_update_together_with/o84cxct/ | false | 1 |
t1_o84cwyo | The network overhead will be negligible, the latency you're going to experience won't be a result of being on a remote server. You can run a coding agent on a VPS with a remote API like Groq at 445 tok/sec and see no perceivable latency.
Your bottleneck is going to be in the compute. Unless you're willing to pay for d... | 1 | 0 | 2026-03-01T21:18:58 | JamesEvoAI | false | null | 0 | o84cwyo | false | /r/LocalLLaMA/comments/1ri1rit/running_qwen314b_93gb_on_a_cpuonly_kvm_vps_what/o84cwyo/ | false | 1 |
t1_o84cv1p | So it doesn't matter that it's more than a year old? If it were benchmaxxed, I doubt it would be receiving such praise from this sub. See the comments; there are people stating that it's certainly better. | 3 | 0 | 2026-03-01T21:18:42 | dionisioalcaraz | false | null | 0 | o84cv1p | false | /r/LocalLLaMA/comments/1ri635s/13_months_since_the_deepseek_moment_how_far_have/o84cv1p/ | false | 3 |
t1_o84ctyg | I wonder how it'll work for creative writing | 1 | 0 | 2026-03-01T21:18:32 | Silver-Champion-4846 | false | null | 0 | o84ctyg | false | /r/LocalLLaMA/comments/1ri0n8p/llm_lora_on_the_fly_with_hypernetworks/o84ctyg/ | false | 1 |
t1_o84csnu | FLAP uses a novel **parameter sharding algorithm** that identifies which specific weight clusters can be independently trained without affecting the rest of the model's representations. The key breakthrough is not the sharding itself — it's the **selection and reconstruction methodology** that preserves full model qual... | 1 | 0 | 2026-03-01T21:18:21 | Actual_Wolf_2932 | false | null | 0 | o84csnu | false | /r/LocalLLaMA/comments/1ri7pm4/is_extreme_lowvram_finetuning_36gb_actually/o84csnu/ | false | 1 |
t1_o84crm2 | FLAP uses a novel **parameter sharding algorithm** that identifies which specific weight clusters can be independently trained without affecting the rest of the model's representations. The key breakthrough is not the sharding itself — it's the **selection and reconstruction methodology** that preserves full model qual... | 0 | 0 | 2026-03-01T21:18:13 | Actual_Wolf_2932 | false | null | 0 | o84crm2 | false | /r/LocalLLaMA/comments/1ri7pm4/is_extreme_lowvram_finetuning_36gb_actually/o84crm2/ | false | 0 |
t1_o84cqqq | Yup you can do it with custom scripts. What's your use case? | 2 | 0 | 2026-03-01T21:18:05 | -Django | false | null | 0 | o84cqqq | false | /r/LocalLLaMA/comments/1ri8jwz/streamerbot_integration_it_to_qwen3_tts_running/o84cqqq/ | false | 2 |
t1_o84cqca | FLAP uses a novel **parameter sharding algorithm** that identifies which specific weight clusters can be independently trained without affecting the rest of the model's representations. The key breakthrough is not the sharding itself — it's the **selection and reconstruction methodology** that preserves full model qual... | 0 | 0 | 2026-03-01T21:18:02 | Actual_Wolf_2932 | false | null | 0 | o84cqca | false | /r/LocalLLaMA/comments/1ri7pm4/is_extreme_lowvram_finetuning_36gb_actually/o84cqca/ | false | 0 |
t1_o84cp75 | FLAP uses a novel **parameter sharding algorithm** that identifies which specific weight clusters can be independently trained without affecting the rest of the model's representations. The key breakthrough is not the sharding itself — it's the **selection and reconstruction methodology** that preserves full model qual... | -1 | 0 | 2026-03-01T21:17:52 | Actual_Wolf_2932 | false | null | 0 | o84cp75 | false | /r/LocalLLaMA/comments/1ri7pm4/is_extreme_lowvram_finetuning_36gb_actually/o84cp75/ | false | -1 |
t1_o84cgnl | I somehow don't think Ahmad Osman works for a Chinese company. | 1 | 0 | 2026-03-01T21:16:40 | ghulamalchik | false | null | 0 | o84cgnl | false | /r/LocalLLaMA/comments/1ri2irg/breaking_today_qwen_35_small/o84cgnl/ | false | 1 |
t1_o84cen5 |
Sure thing! I still have not set up TTS yet. Soon ;)
I think Open-WebUI already comes with some built-in TTS (not the voice cloning one though, to my knowledge).
AMD Ryzen 5 5600X
AMD Radeon RX 6800 16GB (Reference design by Sapphire)
32 GB DDR4 - 3600 MHz
I keep my llama-swap config mostly updated here: https://gi... | 0 | 0 | 2026-03-01T21:16:23 | ismaelgokufox | false | null | 0 | o84cen5 | false | /r/LocalLLaMA/comments/1rhohqk/how_to_switch_qwen_35_thinking_onoff_without/o84cen5/ | false | 0 |
t1_o84ce9j | 9b for sure | 1 | 0 | 2026-03-01T21:16:20 | -Django | false | null | 0 | o84ce9j | false | /r/LocalLLaMA/comments/1ri2irg/breaking_today_qwen_35_small/o84ce9j/ | false | 1 |
t1_o84cdi8 | [removed] | 1 | 0 | 2026-03-01T21:16:13 | [deleted] | true | null | 0 | o84cdi8 | false | /r/LocalLLaMA/comments/1ri6jg3/at_what_point_do_we_stop_reading_code/o84cdi8/ | false | 1 |
t1_o84c1l0 | Thank you, will definitely try | 1 | 0 | 2026-03-01T21:14:32 | orblabs | false | null | 0 | o84c1l0 | false | /r/LocalLLaMA/comments/1rhjk18/localization_pain_diary_4500_ui_keys_local_models/o84c1l0/ | false | 1 |
t1_o84c1el | >and, if something happened to me
Why does a little alarm bell just go off in my head? | 1 | 0 | 2026-03-01T21:14:30 | IrisColt | false | null | 0 | o84c1el | false | /r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o84c1el/ | false | 1 |
t1_o84bypi | 13 months and we went from one shocking moment to a new one every few weeks. The hardest part now is keeping benchmarks relevant when the floor keeps moving. | 3 | 0 | 2026-03-01T21:14:08 | theagentledger | false | null | 0 | o84bypi | false | /r/LocalLLaMA/comments/1ri635s/13_months_since_the_deepseek_moment_how_far_have/o84bypi/ | false | 3 |
t1_o84bxoi | Sure thing!
- AMD Ryzen 5 5600X
- AMD Radeon RX 6800 16GB (Reference design by Sapphire)
- 32 GB DDR4 - 3600 MHz
I keep my llama-swap config mostly updated here:
https://github.com/djismgaming/docs-llama-swap | 1 | 0 | 2026-03-01T21:13:59 | ismaelgokufox | false | null | 0 | o84bxoi | false | /r/LocalLLaMA/comments/1rhohqk/how_to_switch_qwen_35_thinking_onoff_without/o84bxoi/ | false | 1 |
t1_o84bwlh | Have you tested the new QWEN models? How does this change when they are fine tuned? | 1 | 0 | 2026-03-01T21:13:49 | Emergency_Banana_789 | false | null | 0 | o84bwlh | false | /r/LocalLLaMA/comments/1rh2tmu/benchmarking_opensource_llms_for_security/o84bwlh/ | false | 1 |
t1_o84bvw3 | This is the News i was waiting for. Qwen3-instruct-4B 2507 was the GOAT of small models. It didn’t have the right to be so good at that size. Any improvement to that would be like adding bacon to something already delicious. | 2 | 0 | 2026-03-01T21:13:43 | cibernox | false | null | 0 | o84bvw3 | false | /r/LocalLLaMA/comments/1ri2irg/breaking_today_qwen_35_small/o84bvw3/ | false | 2 |
t1_o84bsjt | lmao ahmad is a mod here and he'll probably ban you for this | 1 | 0 | 2026-03-01T21:13:14 | consistentfantasy | false | null | 0 | o84bsjt | false | /r/LocalLLaMA/comments/1ri2irg/breaking_today_qwen_35_small/o84bsjt/ | false | 1 |
t1_o84bq17 | check this out:
[https://www.youtube.com/watch?v=QJqKqxQR36Y](https://www.youtube.com/watch?v=QJqKqxQR36Y) | 1 | 0 | 2026-03-01T21:12:53 | Crazy-Present-2398 | false | null | 0 | o84bq17 | false | /r/LocalLLaMA/comments/1qb2p26/dxg_spark_vs_ryzen_ai_395_if_the_price_difference/o84bq17/ | false | 1 |
t1_o84bgen | The Nvidia GB10 Spark is only good if you have a team working with agents on the machine at the same time; for a single user, no, AMD is better | 1 | 0 | 2026-03-01T21:11:32 | Crazy-Present-2398 | false | null | 0 | o84bgen | false | /r/LocalLLaMA/comments/1qb2p26/dxg_spark_vs_ryzen_ai_395_if_the_price_difference/o84bgen/ | false | 1 |
t1_o84b9cn | Llama.cpp will actually be multiple times slower than vLLM for context lengths above 10k (so basically any long conversation, or any agentic app), and it's basically the last engine to get support for new models/features. If you have hardware that can fit the entire model into VRAM, you should run vLLM. Actually, you... | 1 | 1 | 2026-03-01T21:10:32 | No-Refrigerator-1672 | false | null | 0 | o84b9cn | false | /r/LocalLLaMA/comments/1ri2irg/breaking_today_qwen_35_small/o84b9cn/ | false | 1 |
t1_o84b44l | I managed to get QuantTrio’s AWQ quant going, figured I’d make good on my promise and report back.
**Setup:**
- TP=4
- Context extended to 1M
- 3.54M effective context budget
- Vision encoder enabled
- MTP disabled (TTFT issues in current vLLM nightly docker)
- Prefix caching enabled
- Expert parallel disabled (requi... | 2 | 0 | 2026-03-01T21:09:49 | this-just_in | false | null | 0 | o84b44l | false | /r/LocalLLaMA/comments/1r77fz7/qwen35_nvfp4_blackwell_is_up/o84b44l/ | false | 2 |
t1_o84b16r | Weird how all the comments are laughing at this. I've used the 27B model and I feel like it is probably as smart as DeepSeek-V3.2 on STEM tasks, which is exactly what AA-II measures, but obviously it's going to be worse at something like creative writing | 24 | 0 | 2026-03-01T21:09:24 | pigeon57434 | false | null | 0 | o84b16r | false | /r/LocalLLaMA/comments/1ri635s/13_months_since_the_deepseek_moment_how_far_have/o84b16r/ | false | 24 |
t1_o84azxb | Is it possible to train practically useful LoRAs on this setup, whether for text or image generation? | 1 | 0 | 2026-03-01T21:09:14 | ain92ru | false | null | 0 | o84azxb | false | /r/LocalLLaMA/comments/1rhx5pc/reverse_engineered_apple_neural_engineane_to/o84azxb/ | false | 1 |
t1_o84ayyx | Read the original post. The guy said he completed *training* in 6 hours. Using SSD offloading / swapping you could *probably* get a single prompt inference done in 6 hours. | 1 | 0 | 2026-03-01T21:09:06 | ilintar | false | null | 0 | o84ayyx | false | /r/LocalLLaMA/comments/1ri7pm4/is_extreme_lowvram_finetuning_36gb_actually/o84ayyx/ | false | 1 |
t1_o84aw84 | Impressive technical feat. I can't think of a personal use case but I like it. | 1 | 0 | 2026-03-01T21:08:43 | slippery | false | null | 0 | o84aw84 | false | /r/LocalLLaMA/comments/1rhg3p4/baremetal_ai_booting_directly_into_llm_inference/o84aw84/ | false | 1 |
t1_o84arkv | single user? | 1 | 0 | 2026-03-01T21:08:03 | Simple_Library_2700 | false | null | 0 | o84arkv | false | /r/LocalLLaMA/comments/1rdvq3s/qwen35_27b_is_match_made_in_heaven_for_size_and/o84arkv/ | false | 1 |
t1_o84aoty | What do you mean? I'm using MTP with multimodal requests just fine in vLLM nightly | 1 | 0 | 2026-03-01T21:07:39 | kantydir | false | null | 0 | o84aoty | false | /r/LocalLLaMA/comments/1ri2irg/breaking_today_qwen_35_small/o84aoty/ | false | 1 |
t1_o84am7k | I managed to get QuantTrio’s AWQ quant going, figured I’d make good on my promise and report back.
**Setup:**
- TP=4
- Context extended to 1M
- Vision encoder enabled
- MTP disabled (TTFT issues in current vLLM nightly docker)
- Prefix caching enabled
- Expert parallel disabled (requires data parallel)
**Single batc... | 1 | 0 | 2026-03-01T21:07:18 | this-just_in | false | null | 0 | o84am7k | false | /r/LocalLLaMA/comments/1r77fz7/qwen35_nvfp4_blackwell_is_up/o84am7k/ | false | 1 |
t1_o84afqt | The hard part isn't the save/load – it's that KV-caches are huge (a 64k context on a 7B model is ~1 GB) and tied to the exact model weights. Swap the model or even update the checkpoint and your cached KV is garbage. I see your point though. | 1 | 0 | 2026-03-01T21:06:24 | proggmouse | false | null | 0 | o84afqt | false | /r/LocalLLaMA/comments/1rh802w/what_if_llm_agents_passed_kvcache_to_each_other/o84afqt/ | false | 1 |
t1_o84adlb | That's totally doable with e.g. llama.cpp, SSD swap lol | 2 | 0 | 2026-03-01T21:06:06 | ThunderousHazard | false | null | 0 | o84adlb | false | /r/LocalLLaMA/comments/1ri7pm4/is_extreme_lowvram_finetuning_36gb_actually/o84adlb/ | false | 2 |
t1_o84a866 | Stop hating on deepseek please | 0 | 0 | 2026-03-01T21:05:19 | Cheap-Ad-8521 | false | null | 0 | o84a866 | false | /r/LocalLLaMA/comments/1hpjhm0/deepseek_v3_performs_surprisingly_bad_in/o84a866/ | false | 0 |
t1_o849zbg | I'm guessing that OP is talking about Linux 7 RC2, which was released today. That has improvements for Strix Halo in it. | 8 | 0 | 2026-03-01T21:04:03 | fallingdowndizzyvr | false | null | 0 | o849zbg | false | /r/LocalLLaMA/comments/1ri6yhb/the_last_amd_gpu_firmware_update_together_with/o849zbg/ | false | 8 |
t1_o849yxl | Yes, every agent has a different system prompt (Planner, Critic, Refiner, Judger – each with unique role instructions). In latent mode, Agent B's prompt is just its own system prompt + question (~200 tokens). Agent A's KV-cache gets injected as past_key_values – prepended so Agent B's attention heads can look back a... | 2 | 0 | 2026-03-01T21:03:59 | proggmouse | false | null | 0 | o849yxl | false | /r/LocalLLaMA/comments/1rh802w/what_if_llm_agents_passed_kvcache_to_each_other/o849yxl/ | false | 2 |
t1_o849wvg | Dunno, maybe the tiny little problem that even the tiniest possible (bitnet) quants of a 70B model would need over 10GB of VRAM to run? You might want to start by explaining how you even got *inference* for a 70B model on 6GB VRAM. | 3 | 0 | 2026-03-01T21:03:42 | ilintar | false | null | 0 | o849wvg | false | /r/LocalLLaMA/comments/1ri7pm4/is_extreme_lowvram_finetuning_36gb_actually/o849wvg/ | false | 3 |
t1_o849vpp | > as there was a new release on 2/26.
That's for Windows. OP is talking about Linux. The last release for that was from January. | 7 | 0 | 2026-03-01T21:03:31 | fallingdowndizzyvr | false | null | 0 | o849vpp | false | /r/LocalLLaMA/comments/1ri6yhb/the_last_amd_gpu_firmware_update_together_with/o849vpp/ | false | 7 |
t1_o849pwk | What would be reasonable to run on a 3090 with 12GB? | 1 | 0 | 2026-03-01T21:02:41 | ptinsley | false | null | 0 | o849pwk | false | /r/LocalLLaMA/comments/1ri2irg/breaking_today_qwen_35_small/o849pwk/ | false | 1 |
t1_o849nqt | My backend operates exclusively locally. I do not want my technology to be compromised in the event of a data leak or hack. | 1 | 0 | 2026-03-01T21:02:22 | Actual_Wolf_2932 | false | null | 0 | o849nqt | false | /r/LocalLLaMA/comments/1ri7pm4/is_extreme_lowvram_finetuning_36gb_actually/o849nqt/ | false | 1 |
t1_o849nr6 | Do those dense Qwen 3.5 models also use hybrid attention? | 1 | 0 | 2026-03-01T21:02:22 | d4rk31337 | false | null | 0 | o849nr6 | false | /r/LocalLLaMA/comments/1rhwo08/qwen35_small_dense_model_release_seems_imminent/o849nr6/ | false | 1 |
t1_o849gzx | What hardware are you running this on? | 1 | 0 | 2026-03-01T21:01:23 | TheMericanIdiot | false | null | 0 | o849gzx | false | /r/LocalLLaMA/comments/1rhohqk/how_to_switch_qwen_35_thinking_onoff_without/o849gzx/ | false | 1 |
t1_o849d9z | That is a fantastic setup, I salute you lol
I can relate to it as well. I set up something on my humble 7900 XTX for my wife, and all I got was "does it have an app like ChatGPT that I can talk to?", with no validation of the actual work I've put behind it.
I should've realized after I put AdGuard & a VPN for ... | 1 | 0 | 2026-03-01T21:00:51 | Di_Vante | false | null | 0 | o849d9z | false | /r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o849d9z/ | false | 1 |
t1_o849c3h | Yeah, someone on Twitter just used DRAM set up to be like 600GB/s. You can just do things | 0 | 0 | 2026-03-01T21:00:42 | truth_is_power | false | null | 0 | o849c3h | false | /r/LocalLLaMA/comments/1ri7pm4/is_extreme_lowvram_finetuning_36gb_actually/o849c3h/ | false | 0 |
t1_o8499mu | Nice slop | 3 | 0 | 2026-03-01T21:00:21 | doge_master | false | null | 0 | o8499mu | false | /r/LocalLLaMA/comments/1ri7lb6/testing_the_limits_of_ai_loyalty_how_qwen3vl4b/o8499mu/ | false | 3 |
t1_o8497ul | I'm wondering if this is in reference to [AMD Ryzen™ AI Max+ PRO 395 Drivers and Downloads | Latest Version](https://www.amd.com/en/support/downloads/drivers.html/processors/ryzen-pro/ryzen-ai-max-pro-300-series/amd-ryzen-ai-max-plus-pro-395.html) as there was a new release on 2/26. | 1 | 0 | 2026-03-01T21:00:06 | PhilWheat | false | null | 0 | o8497ul | false | /r/LocalLLaMA/comments/1ri6yhb/the_last_amd_gpu_firmware_update_together_with/o8497ul/ | false | 1 |
t1_o8495cr | does anyone know if any of these beat Gemma 2 270m for a similar size range? | 1 | 0 | 2026-03-01T20:59:44 | CondiMesmer | false | null | 0 | o8495cr | false | /r/LocalLLaMA/comments/1ri2irg/breaking_today_qwen_35_small/o8495cr/ | false | 1 |
t1_o84944k | I read less and less. The models used to make bugs I needed to fix, now I make bugs that they need to fix. The biggest change was last two months. | 2 | 0 | 2026-03-01T20:59:33 | Thomas-Lore | false | null | 0 | o84944k | false | /r/LocalLLaMA/comments/1ri6jg3/at_what_point_do_we_stop_reading_code/o84944k/ | false | 2 |
t1_o848vuf | If it really is a breakthrough, do you have a GitHub repo to verify it? | 3 | 0 | 2026-03-01T20:58:22 | Fresh_Finance9065 | false | null | 0 | o848vuf | false | /r/LocalLLaMA/comments/1ri7pm4/is_extreme_lowvram_finetuning_36gb_actually/o848vuf/ | false | 3 |
t1_o848tiy | I compiled it from source. My box has 3090s and P40s so I guess it links the libs correctly. Building a slim docker container is something I haven’t tried to tackle yet | 1 | 0 | 2026-03-01T20:58:01 | No-Statement-0001 | false | null | 0 | o848tiy | false | /r/LocalLLaMA/comments/1rhohqk/how_to_switch_qwen_35_thinking_onoff_without/o848tiy/ | false | 1 |
t1_o848ofv | You could do this from day one. What's hard is to have a good model, and you can't have that in reasonable time with such hardware | 2 | 0 | 2026-03-01T20:57:17 | Exotic-Custard4400 | false | null | 0 | o848ofv | false | /r/LocalLLaMA/comments/1ri7pm4/is_extreme_lowvram_finetuning_36gb_actually/o848ofv/ | false | 2 |
t1_o848nq2 | Agreed. I've been bouncing between qwen3.5-35b and qwen3.5-27b since they came out, and they are just the same kind of frustrating sycophant models as so many others. They start every answer with trite openings such as "This is a great observation!—and it reveals a subtle but important aspect..." or "You're on the righ... | 4 | 0 | 2026-03-01T20:57:11 | a1ix2 | false | null | 0 | o848nq2 | false | /r/LocalLLaMA/comments/1ri48pj/qwen35122ba10bggufq4_k_xlpipesscreensaver_oneshot/o848nq2/ | false | 4 |
t1_o848m45 | Oh shoot, you just gave the solution for 2 problems I was having: ollama on rocm is way more limited than raw llama.cpp without tweaking. I haven't looked at llama-swap yet, might test it out to see if I can (finally) properly offload bigger models between GPU & CPU | 1 | 0 | 2026-03-01T20:56:57 | Di_Vante | false | null | 0 | o848m45 | false | /r/LocalLLaMA/comments/1rhohqk/how_to_switch_qwen_35_thinking_onoff_without/o848m45/ | false | 1 |
t1_o848jr7 | If you are only using local models I understand why you feel that way, but SOTA models are good enough that this is not needed anymore and will become standard before the end of the year. | 0 | 0 | 2026-03-01T20:56:37 | Thomas-Lore | false | null | 0 | o848jr7 | false | /r/LocalLLaMA/comments/1ri6jg3/at_what_point_do_we_stop_reading_code/o848jr7/ | false | 0 |
t1_o848gob | I have been having a tough time getting an acceptable configuration for Qwen 3.5 27B on an RTX 5090 with vLLM
What are people doing that makes it work? | 2 | 0 | 2026-03-01T20:56:11 | Ok-Ad-8976 | false | null | 0 | o848gob | false | /r/LocalLLaMA/comments/1ri2irg/breaking_today_qwen_35_small/o848gob/ | false | 2 |
t1_o848gkp | There was something called Airi, if I remember the name, but it was burdened by CoC and so I never really tried much of it. | 3 | 0 | 2026-03-01T20:56:10 | AssistBorn4589 | false | null | 0 | o848gkp | false | /r/LocalLLaMA/comments/1ri7gor/ai_waifu_desktop_open_source/o848gkp/ | false | 3 |
t1_o848ezm | Any good guides? Probably should just google around but hard to know what the community consensus is. | 2 | 0 | 2026-03-01T20:55:56 | DK_Tech | false | null | 0 | o848ezm | false | /r/LocalLLaMA/comments/1ri2irg/breaking_today_qwen_35_small/o848ezm/ | false | 2 |
t1_o848bmi | This is a great writeup and I wish more people talked about this. I've been banging my head against agent reliability on a 7900 XTX for months and this lines up with a lot of what I've seen.
One thing I'd add though — the KV cache precision issue is real, but it's also downstream of a bigger problem: why is your agent... | 1 | 0 | 2026-03-01T20:55:28 | Di_Vante | false | null | 0 | o848bmi | false | /r/LocalLLaMA/comments/1rhvi09/psa_if_your_local_coding_agent_feels_dumb_at_30k/o848bmi/ | false | 1 |
t1_o848akp | What evidence do you want? It's true, I've been working on this for the last six months, testing different theories and coming up with everything from scratch.
This is a real breakthrough, and I'm writing because I deployed this technology a few hours ago. | -1 | 0 | 2026-03-01T20:55:19 | Actual_Wolf_2932 | false | null | 0 | o848akp | false | /r/LocalLLaMA/comments/1ri7pm4/is_extreme_lowvram_finetuning_36gb_actually/o848akp/ | false | -1 |
t1_o8488hp | Did you mean AMD's Linux firmware update for the GPU/Strix halo? | 4 | 0 | 2026-03-01T20:55:01 | rajwanur | false | null | 0 | o8488hp | false | /r/LocalLLaMA/comments/1ri6yhb/the_last_amd_gpu_firmware_update_together_with/o8488hp/ | false | 4 |
t1_o84885q | This comment will not age well. I already only read like 10% of the code. No one will bother in a few months. | 1 | 0 | 2026-03-01T20:54:58 | Thomas-Lore | false | null | 0 | o84885q | false | /r/LocalLLaMA/comments/1ri6jg3/at_what_point_do_we_stop_reading_code/o84885q/ | false | 1 |