name string | body string | score int64 | controversiality int64 | created timestamp[us] | author string | collapsed bool | edited timestamp[us] | gilded int64 | id string | locked bool | permalink string | stickied bool | ups int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
t1_o82bg6s | I assume that some people might recommend just plugging in context7 mcp for docs, which for SOTA models might be a solid solution, but I think for smaller models a more sophisticated RAG would be required, especially considering that prompt processing speed on local devices isn't great and you want to have as few token... | 3 | 0 | 2026-03-01T15:20:13 | yeah_me_ | false | null | 0 | o82bg6s | false | /r/LocalLLaMA/comments/1rhzknn/best_local_model_for_python_and_qt_quick_coding/o82bg6s/ | false | 3 |
t1_o82bdsl | V2 is now merged to main! RLM enabled ✅ | 1 | 0 | 2026-03-01T15:19:53 | the-ai-scientist | false | null | 0 | o82bdsl | false | /r/LocalLLaMA/comments/1rhxav5/soulpy_persistent_memory_for_any_llm_in_10_lines/o82bdsl/ | false | 1 |
t1_o82bcsp | I was going to reference this too. LatentMas is so underutilized in RAG frameworks, and hasn't been picked up much, I think the whole claudebot thingy eclipsed it. | 2 | 0 | 2026-03-01T15:19:44 | waiting_for_zban | false | null | 0 | o82bcsp | false | /r/LocalLLaMA/comments/1rh802w/what_if_llm_agents_passed_kvcache_to_each_other/o82bcsp/ | false | 2 |
t1_o82ba6l | 9B by Tuesday? | 2 | 0 | 2026-03-01T15:19:24 | CommonPurpose1969 | false | null | 0 | o82ba6l | false | /r/LocalLLaMA/comments/1rhykhm/qwen_35_small_soon/o82ba6l/ | false | 2 |
t1_o82b9qa | The 2b variant is going to make my new app so baller | 9 | 0 | 2026-03-01T15:19:20 | YouAreTheCornhole | false | null | 0 | o82b9qa | false | /r/LocalLLaMA/comments/1rhwo08/qwen35_small_dense_model_release_seems_imminent/o82b9qa/ | false | 9 |
t1_o82b63w | Yes, that is what I prefer too. Agreeable so that it will just blindly do what you ask it for getting tasks done - but also capable of offering useful feedback if you ask for that. | 4 | 0 | 2026-03-01T15:18:51 | -dysangel- | false | null | 0 | o82b63w | false | /r/LocalLLaMA/comments/1rhogov/the_us_used_anthropic_ai_tools_during_airstrikes/o82b63w/ | false | 4 |
t1_o82b1jp | Appreciate the confidence lol | 1 | 0 | 2026-03-01T15:18:13 | ubrtnk | false | null | 0 | o82b1jp | false | /r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o82b1jp/ | false | 1 |
t1_o82azql | [removed] | 1 | 0 | 2026-03-01T15:17:58 | [deleted] | true | null | 0 | o82azql | false | /r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o82azql/ | false | 1 |
t1_o82axfe | I have no idea whether this would give any decent results, but instead of trying out different models, I'd try to build some sort of RAG with documentation for those specific libraries. Especially if you're using Pi, you'd have the ability to use extensions to define agents that scout the docs and provide necessary inf... | 3 | 0 | 2026-03-01T15:17:39 | yeah_me_ | false | null | 0 | o82axfe | false | /r/LocalLLaMA/comments/1rhzknn/best_local_model_for_python_and_qt_quick_coding/o82axfe/ | false | 3 |
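The doc-RAG idea described above can be sketched with a crude keyword retriever over documentation chunks. The snippets and scoring below are illustrative stand-ins, not a real embedding pipeline:

```python
from collections import Counter

def score(query: str, doc: str) -> int:
    """Crude bag-of-words overlap score between a query and a doc chunk."""
    q = Counter(query.lower().split())
    d = Counter(doc.lower().split())
    return sum(min(q[w], d[w]) for w in q)

def retrieve(query: str, chunks: list[str], k: int = 2) -> list[str]:
    """Return the k doc chunks most relevant to the query."""
    return sorted(chunks, key=lambda c: score(query, c), reverse=True)[:k]

# Illustrative doc chunks for the Python/Qt Quick use case in the thread.
docs = [
    "QML ListView requires a model and a delegate.",
    "PyQt signals connect Python callables to Qt events.",
    "The Raspberry Pi GPIO pins are controlled via the gpiozero library.",
]
top = retrieve("how do I connect a signal in PyQt", docs, k=1)
```

A real setup would swap the overlap score for embeddings, but the retrieve-then-stuff-into-context shape stays the same, which matters for the token-budget point the commenter makes.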
t1_o82atm9 | This is hilarious 😂 | 1 | 0 | 2026-03-01T15:17:08 | feverdoingwork | false | null | 0 | o82atm9 | false | /r/LocalLLaMA/comments/1rh0bkz/tempted_to_prompt_qwen_on_this_craigslist_rig_but/o82atm9/ | false | 1 |
t1_o82atkq | These models are already trained with multi token prediction. You don’t need a draft model. | 10 | 0 | 2026-03-01T15:17:07 | iwaswrongonce | false | null | 0 | o82atkq | false | /r/LocalLLaMA/comments/1rhwo08/qwen35_small_dense_model_release_seems_imminent/o82atkq/ | false | 10 |
t1_o82asna | For primarily text gen / code / summaries — the M4 Mac Mini 256GB is honestly the sleeper pick here. The complaints about it not being good for image/video gen are valid, but you said that's not your priority. For text, the unified memory means you can run 70B models smoothly in ways discrete GPU setups can't match at ... | 1 | 0 | 2026-03-01T15:17:00 | KneeTop2597 | false | null | 0 | o82asna | false | /r/LocalLLaMA/comments/1rha4g1/advice_on_hardware_purchase_and_selling_old/o82asna/ | false | 1 |
t1_o82ar1m | do you think the computer had ronalds universal number kounter installed | 1 | 0 | 2026-03-01T15:16:47 | mumblerit | false | null | 0 | o82ar1m | false | /r/LocalLLaMA/comments/1rhogov/the_us_used_anthropic_ai_tools_during_airstrikes/o82ar1m/ | false | 1 |
t1_o82aqs7 | The easiest way is to just download the models from within LM Studio, from the LMStudio-Community, and toggle the thinking off. Also, remove the vision adapter if you want to have long chats with the models. Llama.cpp doesn't support KV cache reuse yet, so it recomputes the cache for each turn. | 2 | 0 | 2026-03-01T15:16:44 | Iory1998 | false | null | 0 | o82aqs7 | false | /r/LocalLLaMA/comments/1rhr5ko/is_there_a_way_to_disable_thinking_on_qwen_35_27b/o82aqs7/ | false | 2 |
t1_o82amwz | It's just a bubble , no worries, you can create a chat bot in 5 minutes. | 1 | 0 | 2026-03-01T15:16:13 | chuongmep | false | null | 0 | o82amwz | false | /r/LocalLLaMA/comments/1r9gve8/i_feel_left_behind_what_is_special_about_openclaw/o82amwz/ | false | 1 |
t1_o82alcq | No it isn't, you stupid bot; the 2.5 is outdated. | 9 | 0 | 2026-03-01T15:15:59 | TacGibs | false | null | 0 | o82alcq | false | /r/LocalLLaMA/comments/1rhvabz/we_need_to_go_deeper/o82alcq/ | false | 9 |
t1_o82ak5a | Will that reduce the thinking? | -2 | 0 | 2026-03-01T15:15:49 | GCoderDCoder | false | null | 0 | o82ak5a | false | /r/LocalLLaMA/comments/1rhvabz/we_need_to_go_deeper/o82ak5a/ | false | -2 |
t1_o82ag4r | [removed] | 1 | 0 | 2026-03-01T15:15:15 | [deleted] | true | null | 0 | o82ag4r | false | /r/LocalLLaMA/comments/1rhzknn/best_local_model_for_python_and_qt_quick_coding/o82ag4r/ | false | 1 |
t1_o82afy3 | Nah, he just said it shouldn't. "People shouldn't have seen Avatar 3 either. The first two weren't good. What are you going to do about it, though? People are strange."
That's the level of seriousness and him drawing a hard line that I took from his statement about *should*. | 1 | 0 | 2026-03-01T15:15:14 | Django_McFly | false | null | 0 | o82afy3 | false | /r/LocalLLaMA/comments/1rgn4ki/president_trump_orders_all_federal_agencies_in/o82afy3/ | false | 1 |
t1_o82abaz | My issue was with your word "can't". It is incorrect. The rest of it is superfluous.
I don't think you understand how model training works. | 5 | 0 | 2026-03-01T15:14:34 | ToHallowMySleep | false | null | 0 | o82abaz | false | /r/LocalLLaMA/comments/1rhogov/the_us_used_anthropic_ai_tools_during_airstrikes/o82abaz/ | false | 5 |
t1_o82a8uo | It is, I'm running the full 27B with the nightly on 4 RTX 3090. | 6 | 0 | 2026-03-01T15:14:13 | TacGibs | false | null | 0 | o82a8uo | false | /r/LocalLLaMA/comments/1rhvabz/we_need_to_go_deeper/o82a8uo/ | false | 6 |
t1_o82a0rt | You're a bot aren't you | 6 | 0 | 2026-03-01T15:13:04 | Pale-Committee8059 | false | null | 0 | o82a0rt | false | /r/LocalLLaMA/comments/1rhx121/how_do_you_stop_your_llm_from_quietly_unionizing/o82a0rt/ | false | 6 |
t1_o829yro | Any news of a new Embedding model? 3.5-Embedding-8B would be epic | 2 | 0 | 2026-03-01T15:12:47 | crewone | false | null | 0 | o829yro | false | /r/LocalLLaMA/comments/1rhykhm/qwen_35_small_soon/o829yro/ | false | 2 |
t1_o829ufi | Mars 1.0 | 1 | 0 | 2026-03-01T15:12:10 | -dysangel- | false | null | 0 | o829ufi | false | /r/LocalLLaMA/comments/1rhogov/the_us_used_anthropic_ai_tools_during_airstrikes/o829ufi/ | false | 1 |
t1_o829soq | Yep.. that would be literally the worst thing you could do to your national security. | 2 | 0 | 2026-03-01T15:11:55 | -dysangel- | false | null | 0 | o829soq | false | /r/LocalLLaMA/comments/1rhogov/the_us_used_anthropic_ai_tools_during_airstrikes/o829soq/ | false | 2 |
t1_o829smc | It's for my new rig | 1 | 0 | 2026-03-01T15:11:54 | pmttyji | false | null | 0 | o829smc | false | /r/LocalLLaMA/comments/1rhvabz/we_need_to_go_deeper/o829smc/ | false | 1 |
t1_o829pdw | [removed] | 1 | 0 | 2026-03-01T15:11:27 | [deleted] | true | null | 0 | o829pdw | false | /r/LocalLLaMA/comments/1rhmepa/qwen35122b_on_blackwell_sm120_fp8_kv_cache/o829pdw/ | false | 1 |
t1_o829nqs | Qwen 2.5-7B is basically the 'standard' for small local agents right now. If 3.5 takes it even further on the reasoning side, it's going to make a lot of much larger models redundant for simple automation tasks. | -14 | 0 | 2026-03-01T15:11:13 | 39th_Demon | false | null | 0 | o829nqs | false | /r/LocalLLaMA/comments/1rhvabz/we_need_to_go_deeper/o829nqs/ | false | -14 |
t1_o829gnl | Qwen3.5 MTP should be fully functional in vLLM. | 5 | 0 | 2026-03-01T15:10:12 | xyz4d | false | null | 0 | o829gnl | false | /r/LocalLLaMA/comments/1rhvabz/we_need_to_go_deeper/o829gnl/ | false | 5 |
t1_o829eyc | Fightmilk?
Are you fucking kidding me? That shit is made BY bodyguards FOR bodyguards. I’m just a small time jabroni over here drinking my Wolf Cola. | 1 | 0 | 2026-03-01T15:09:57 | mshelbz | false | null | 0 | o829eyc | false | /r/LocalLLaMA/comments/1rhykhm/qwen_35_small_soon/o829eyc/ | false | 1 |
t1_o829bfv | Your 3060 Ti has 8GB VRAM which is the main bottleneck — you're not getting 100+ TPS or 200k context on that regardless of what else you add. Upgrading RAM won't help much since your inference speed is GPU-bound.
Realistically for your target:
• **RTX 3090 (24GB)** is the best bang for buck on the used market (~$600... | 1 | 0 | 2026-03-01T15:09:27 | KneeTop2597 | false | null | 0 | o829bfv | false | /r/LocalLLaMA/comments/1rhalha/seeking_hardware_recommendations/o829bfv/ | false | 1 |
t1_o829abi | I built my OpenClaw to troubleshoot and help with any issues you may have. Come to the Discord, @WiseAl, and ask it anything related to OpenClaw and see what it does.
https://discord.gg/awiseai | 1 | 0 | 2026-03-01T15:09:17 | Sea_Manufacturer6590 | false | null | 0 | o829abi | false | /r/LocalLLaMA/comments/1rfp6bk/why_is_openclaw_even_this_popular/o829abi/ | false | 1 |
t1_o8297xd | yeah I agree on both counts; they were reacting to claims of sycophancy and are doing a bad job of fixing it.
I think Claude is basically correct; it just agrees with you and tries to help you do whatever you're doing, but if you explicitly ask for criticism it will give it | 4 | 0 | 2026-03-01T15:08:57 | Western_Objective209 | false | null | 0 | o8297xd | false | /r/LocalLLaMA/comments/1rhogov/the_us_used_anthropic_ai_tools_during_airstrikes/o8297xd/ | false | 4 |
t1_o8296rx | Great findings! You just convinced me to give it a try on a Mac Studio M4 Max / 32G from work.
Can you share a bit about your setup? I read you're using llama.cpp and langgraph, but what about the rest of the stack, such as frontends and other tools?
And what do you think of the quality of the output, especially the code? | 1 | 0 | 2026-03-01T15:08:47 | eurobosch | false | null | 0 | o8296rx | false | /r/LocalLLaMA/comments/1rh9k63/qwen35_35ba3b_replaced_my_2model_agentic_setup_on/o8296rx/ | false | 1 |
t1_o828y6o | Yeah I mean it's incredibly surprising that someone who pissed off one of the biggest AI providers in the world is having AI targeted against them. | 3 | 0 | 2026-03-01T15:07:31 | -dysangel- | false | null | 0 | o828y6o | false | /r/LocalLLaMA/comments/1rhogov/the_us_used_anthropic_ai_tools_during_airstrikes/o828y6o/ | false | 3 |
t1_o828xaz | 🤣
just get some FIGHTMILK | 3 | 0 | 2026-03-01T15:07:23 | brickout | false | null | 0 | o828xaz | false | /r/LocalLLaMA/comments/1rhykhm/qwen_35_small_soon/o828xaz/ | false | 3 |
t1_o828uhh | It is - but it also feels so good when people actually use and like using something you've built. | 1 | 0 | 2026-03-01T15:06:58 | ubrtnk | false | null | 0 | o828uhh | false | /r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o828uhh/ | false | 1 |
t1_o828tu0 | Nope.
That is not at all within the realm of what the law actually says. Please actually do some research before you puke nonsense. Grew up in a law firm. Reading is good.
What it actually means is:
If you want to be in bed with the U.S. government, specifically the D.O.D, you are advised NOT to be in bed with Ant... | 1 | 0 | 2026-03-01T15:06:53 | Orpheusly | false | null | 0 | o828tu0 | false | /r/LocalLLaMA/comments/1rguty0/get_your_local_models_in_order_anthropic_just_got/o828tu0/ | false | 1 |
t1_o828sn3 | I'll use it! 😜 | 2 | 0 | 2026-03-01T15:06:42 | radlinsky | false | null | 0 | o828sn3 | false | /r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o828sn3/ | false | 2 |
t1_o828pdw | That sounds to me that at least they're trying to encourage an AI that won't agree with you sycophantically, which I think is the right idea. Sounds like they're doing a bad job of it though. | 5 | 0 | 2026-03-01T15:06:15 | -dysangel- | false | null | 0 | o828pdw | false | /r/LocalLLaMA/comments/1rhogov/the_us_used_anthropic_ai_tools_during_airstrikes/o828pdw/ | false | 5 |
t1_o828p9q | That's what started all this - HA voice assistant is getting REALLY close. I need more HA Voice Speakers and need to figure out how to pipe the audio out to Sonos. That and the Alexa drop in feature. | 2 | 0 | 2026-03-01T15:06:14 | ubrtnk | false | null | 0 | o828p9q | false | /r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o828p9q/ | false | 2 |
t1_o828n32 | It's crazy that you tried running QwQ at Q8 with 16 gigs of memory, but it's fun to see that it still got it even a year later. | 2 | 0 | 2026-03-01T15:05:54 | MoffKalast | false | null | 0 | o828n32 | false | /r/LocalLLaMA/comments/1rhuvyc/benchmarking_88_smol_gguf_models_quickly_on_a/o828n32/ | false | 2 |
t1_o828mn4 | If intelligence agencies are interested in you, you have bigger problems. They can make every speaker in your home that’s connected to an internet enabled device function as makeshift microphones. If you have wifi, they can create 3D models of your home’s interior with such high fidelity they can tell if the male occup... | 6 | 0 | 2026-03-01T15:05:50 | diviludicrum | false | null | 0 | o828mn4 | false | /r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o828mn4/ | false | 6 |
t1_o828m20 | Oh god this is really going to add to my stress level.
Pass me some Enviguron | 3 | 0 | 2026-03-01T15:05:45 | mshelbz | false | null | 0 | o828m20 | false | /r/LocalLLaMA/comments/1rhykhm/qwen_35_small_soon/o828m20/ | false | 3 |
t1_o828gmx | 12Gb | 1 | 0 | 2026-03-01T15:04:58 | Adventurous-Paper566 | false | null | 0 | o828gmx | false | /r/LocalLLaMA/comments/1rhwo08/qwen35_small_dense_model_release_seems_imminent/o828gmx/ | false | 1 |
t1_o828fep | ...pollutions?
(It's always sunny :) | 7 | 0 | 2026-03-01T15:04:47 | brickout | false | null | 0 | o828fep | false | /r/LocalLLaMA/comments/1rhykhm/qwen_35_small_soon/o828fep/ | false | 7 |
t1_o828cfn | Honestly, I'm not sure - despite what the suggested tone might be from other comments, I don't sit in a matrix looking room with grafana dashboards and chat sessions and usage logs streaming with notifications if something hasn't been used.
When time permitted, I'd research, tweak, add and maintain the primary needed... | 1 | 0 | 2026-03-01T15:04:20 | ubrtnk | false | null | 0 | o828cfn | false | /r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o828cfn/ | false | 1 |
t1_o8289ct | q8 kv vs q4 kv is one of the most underrated performance variables in local setups, good catch. | 1 | 0 | 2026-03-01T15:03:53 | justserg | false | null | 0 | o8289ct | false | /r/LocalLLaMA/comments/1rhvi09/psa_if_your_local_coding_agent_feels_dumb_at_30k/o8289ct/ | false | 1 |
t1_o82827q | seems like your model is too large. Try for iq1m ablated heretic. Add tts, be entertained | 1 | 0 | 2026-03-01T15:02:48 | EclecticAcuity | false | null | 0 | o82827q | false | /r/LocalLLaMA/comments/1r2ej94/i_tried_step_35_flash_iq1_m/o82827q/ | false | 1 |
t1_o8280c7 | 8 what!?! | 6 | 0 | 2026-03-01T15:02:32 | mshelbz | false | null | 0 | o8280c7 | false | /r/LocalLLaMA/comments/1rhykhm/qwen_35_small_soon/o8280c7/ | false | 6 |
t1_o827zkb | Then it should be fine, you won't need to re-download (apart from tool-calling fixes if you want that). We're updating them tomorrow. | 1 | 0 | 2026-03-01T15:02:25 | yoracale | false | null | 0 | o827zkb | false | /r/LocalLLaMA/comments/1rh4p8i/okay_im_overthinking_yes_yes_you_are_qwen_35_27b/o827zkb/ | false | 1 |
t1_o827xsu | That worked for me too, but from my understanding that fully disables thinking and reduces the quality of the results. I want the model to use thinking, but just not output it. I did what was suggested by SM8085, plus what I had as my backup, which was to handle the removal of the thinking block in my python wrapper script... | 1 | 0 | 2026-03-01T15:02:09 | jpc82 | false | null | 0 | o827xsu | false | /r/LocalLLaMA/comments/1rhcj7b/qwen35_with_lm_studio_api_without_thinking_output/o827xsu/ | false | 1 |
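A wrapper-side cleanup like the one described can be a one-line regex pass, assuming the model wraps its reasoning in `<think>` tags (tag names vary by model):

```python
import re

# Non-greedy, DOTALL so multi-line reasoning blocks are matched whole.
THINK_RE = re.compile(r"<think>.*?</think>\s*", re.DOTALL)

def strip_thinking(text: str) -> str:
    """Drop the model's <think>...</think> block but keep the final answer."""
    return THINK_RE.sub("", text).lstrip()

raw = "<think>User wants X, so I should...</think>\nHere is the answer."
clean = strip_thinking(raw)  # "Here is the answer."
```

This keeps thinking enabled server-side (so quality is unaffected) and only filters the output, which is what the commenter wanted.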
t1_o827vq7 | They actually wanted to strike a bit earlier but the operation was delayed for several minutes while Claude Code was compacting the context. | 2 | 0 | 2026-03-01T15:01:51 | arbuge00 | false | null | 0 | o827vq7 | false | /r/LocalLLaMA/comments/1rhogov/the_us_used_anthropic_ai_tools_during_airstrikes/o827vq7/ | false | 2 |
t1_o827r9s | but isn't that what Qwen 3.5 is? When Qwen Coder Next came out they said that Qwen 3.5 would be the same architecture | 5 | 0 | 2026-03-01T15:01:10 | -dysangel- | false | null | 0 | o827r9s | false | /r/LocalLLaMA/comments/1rhvabz/we_need_to_go_deeper/o827r9s/ | false | 5 |
t1_o827pyc | If you're using reasoning models, you could try injecting your rules at the start of the reasoning block and let it continue from there. If you're not using reasoning, try it with a reasoning model, but override the entire reasoning block with your rules (don't let it think at all).
I personally have started to use re... | 2 | 0 | 2026-03-01T15:00:58 | aeqri | false | null | 0 | o827pyc | false | /r/LocalLLaMA/comments/1rhx121/how_do_you_stop_your_llm_from_quietly_unionizing/o827pyc/ | false | 2 |
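The override-the-reasoning-block trick amounts to pre-filling the assistant turn so generation continues from your rules. A sketch using illustrative ChatML-style tokens; the exact chat template depends on the model:

```python
# Build a raw prompt that opens the reasoning block with our rules already
# in place, so the model "continues thinking" from them rather than free-form.
SYSTEM = "You are a code reviewer."
USER = "Review this function."
RULES = "Rules: be critical; list at least one concrete flaw."

prompt = (
    f"<|im_start|>system\n{SYSTEM}<|im_end|>\n"
    f"<|im_start|>user\n{USER}<|im_end|>\n"
    f"<|im_start|>assistant\n<think>\n{RULES}\n"  # generation resumes here
)
```

Sent to a raw completion endpoint (not a chat endpoint, which would re-apply the template), the model's first generated tokens follow directly after the injected rules inside its own reasoning block.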
t1_o827ijm | if they could make a -3B I'd load up thousands of them to get more RAM | 40 | 0 | 2026-03-01T14:59:51 | -dysangel- | false | null | 0 | o827ijm | false | /r/LocalLLaMA/comments/1rhvabz/we_need_to_go_deeper/o827ijm/ | false | 40 |
t1_o827gmc | [removed] | 1 | 0 | 2026-03-01T14:59:35 | [deleted] | true | null | 0 | o827gmc | false | /r/LocalLLaMA/comments/1rhvabz/we_need_to_go_deeper/o827gmc/ | false | 1 |
t1_o827f5f | Nice !
Q8 35B can’t fit on 5090 or R9700 so can’t compare on raw speed
As for Q4,
If R9700 gets ~127 tk/s and 5090 194 tk/s
I think W7900 should get ~164 tk/s with Vulkan | 2 | 0 | 2026-03-01T14:59:22 | putrasherni | false | null | 0 | o827f5f | false | /r/LocalLLaMA/comments/1rgvma8/which_size_of_qwen35_are_you_planning_to_run/o827f5f/ | false | 2 |
t1_o827bfo | The 3090 has NVLink; you just need to buy two 24GB cards. | 1 | 0 | 2026-03-01T14:58:49 | Senior-Bid7091 | false | null | 0 | o827bfo | false | /r/LocalLLaMA/comments/1np9rav/my_second_modified_3080_20gb_from_china_for_local/o827bfo/ | false | 1 |
t1_o8279x6 | Idk if Tinygrad reverse engineered ANE, they were trying hard to do it.
ANE reverse engineering has been done in the past during the time of M1 and one inference repo also exists (i cover them in the article briefly)
But to my knowledge, no one has attempted training on it yet because the intermediate format was not ... | 6 | 0 | 2026-03-01T14:58:35 | jack_smirkingrevenge | false | null | 0 | o8279x6 | false | /r/LocalLLaMA/comments/1rhx5pc/reverse_engineered_apple_neural_engineane_to/o8279x6/ | false | 6 |
t1_o827484 | I cant wait :) | 1 | 0 | 2026-03-01T14:57:45 | Ethrillo | false | null | 0 | o827484 | false | /r/LocalLLaMA/comments/1rgel19/new_qwen3535ba3b_unsloth_dynamic_ggufs_benchmarks/o827484/ | false | 1 |
t1_o8272si | My 16GB macbook air is salivating. | 10 | 0 | 2026-03-01T14:57:33 | Hanthunius | false | null | 0 | o8272si | false | /r/LocalLLaMA/comments/1rhykhm/qwen_35_small_soon/o8272si/ | false | 10 |
t1_o8272tl | Hey, I'm trying to build a voice agent for the website of a tech company, so it would need to be connected to the phone, I guess, and take calls, record information, ask the required questions, and also explain the capabilities of the company's products. The requirement is a natural-sounding and smart voice agent, but everything... | 1 | 0 | 2026-03-01T14:57:33 | netherpie | false | null | 0 | o8272tl | false | /r/LocalLLaMA/comments/1r2hbsl/best_quality_open_source_tts_model/o8272tl/ | false | 1 |
t1_o8270xf | For a 6GB card, I think it's better to use the 4B. | 1 | 0 | 2026-03-01T14:57:16 | Adventurous-Paper566 | false | null | 0 | o8270xf | false | /r/LocalLLaMA/comments/1rgek4m/what_are_your_expectations_for_the_small_series/o8270xf/ | false | 1 |
t1_o826xxs | Any idea of the sizes? | 1 | 0 | 2026-03-01T14:56:49 | Significant_Fig_7581 | false | null | 0 | o826xxs | false | /r/LocalLLaMA/comments/1rhykhm/qwen_35_small_soon/o826xxs/ | false | 1 |
t1_o826xt2 | You just described the exact 3 AM wall I was slamming my head against. "Context Amnesia" is the final boss of system prompts. Thank you for the reality check on negative vs. positive constraints—that is 100% accurate. Telling an LLM "don't be a yes-man" gets completely washed out by message 15.
Your point about using ... | 0 | 0 | 2026-03-01T14:56:48 | Mstep85 | false | null | 0 | o826xt2 | false | /r/LocalLLaMA/comments/1rhx121/how_do_you_stop_your_llm_from_quietly_unionizing/o826xt2/ | false | 0 |
t1_o826woy | At least 8 | 18 | 0 | 2026-03-01T14:56:38 | CommunicationOne7441 | false | null | 0 | o826woy | false | /r/LocalLLaMA/comments/1rhykhm/qwen_35_small_soon/o826woy/ | false | 18 |
t1_o826pm3 | Haha, I love that you started with "I'm just a random guy with zero experience" and then casually proposed building a multi-agent evolutionary genetic mutation matrix in my living room. That is peak r/LocalLLaMA. Thank you for the brain food!
You're actually touching on something we wrestled with heavily. Creating a s... | -2 | 0 | 2026-03-01T14:55:36 | Mstep85 | false | null | 0 | o826pm3 | false | /r/LocalLLaMA/comments/1rhx121/how_do_you_stop_your_llm_from_quietly_unionizing/o826pm3/ | false | -2 |
t1_o826m0e | Sorry, OWUI is Open-WebUI. It's its own project that just sits between the user and Ollama/Llama.cpp/vLLM/any open-ai compatible inference engine | 2 | 0 | 2026-03-01T14:55:03 | ubrtnk | false | null | 0 | o826m0e | false | /r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o826m0e/ | false | 2 |
t1_o826g4n | Yeah, I was running the q8xl. | 1 | 0 | 2026-03-01T14:54:10 | silenceimpaired | false | null | 0 | o826g4n | false | /r/LocalLLaMA/comments/1rh4p8i/okay_im_overthinking_yes_yes_you_are_qwen_35_27b/o826g4n/ | false | 1 |
t1_o826ea1 | How did you manage to not register that they aren’t excited about it, let alone not using it? | 1 | 0 | 2026-03-01T14:53:53 | Virtamancer | false | null | 0 | o826ea1 | false | /r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o826ea1/ | false | 1 |
t1_o826dk8 | Thanks! Training is surprisingly stable for a small 15M model (left it training overnight and it converged around 2.5 loss; Karpathy reported around 1, but he also trained it in fp32 on a mature CUDA pipeline).
I'm currently struggling with some boilerplate issues on larger models (currently having to recompile ke... | 5 | 0 | 2026-03-01T14:53:47 | jack_smirkingrevenge | false | null | 0 | o826dk8 | false | /r/LocalLLaMA/comments/1rhx5pc/reverse_engineered_apple_neural_engineane_to/o826dk8/ | false | 5 |
t1_o8269xv | What would be the minimum Vram requirements to comfortably run it? | 1 | 0 | 2026-03-01T14:53:14 | OldStray79 | false | null | 0 | o8269xv | false | /r/LocalLLaMA/comments/1rhwo08/qwen35_small_dense_model_release_seems_imminent/o8269xv/ | false | 1 |
t1_o8266pv | Agreed. I've done this testing before and found the same thing. Even q8 kv falls over at long context | 1 | 0 | 2026-03-01T14:52:45 | Front_Eagle739 | false | null | 0 | o8266pv | false | /r/LocalLLaMA/comments/1rhvi09/psa_if_your_local_coding_agent_feels_dumb_at_30k/o8266pv/ | false | 1 |
t1_o82662s | Waiting for someone to evaluate a LLM on Babbage's difference engine. | 3 | 0 | 2026-03-01T14:52:39 | Mickenfox | false | null | 0 | o82662s | false | /r/LocalLLaMA/comments/1rh9u4r/this_sub_is_incredible/o82662s/ | false | 3 |
t1_o82655f | Much cheaper, especially if you have a Chinese ID, which lets you buy the Chinese coding plan. Go for the fast plan; it's crazy cheap: 100 prompts per 5 hours, which is a lot for most average developers.
And the accuracy / thinking process is good enough. | 1 | 0 | 2026-03-01T14:52:31 | Possible-Basis-6623 | false | null | 0 | o82655f | false | /r/LocalLLaMA/comments/1r3s8mq/is_minimax_m25_the_best_coding_model_in_the_world/o82655f/ | false | 1 |
t1_o825ucb | Thank you - I'm on selfhosted a bunch. I started there even before the LLMs. Going from an engineer for a day job to someone who just thinks and guides and writes patterns for other people to implement, the itch to build was real. Then when you discover the MS-01s and their capabilities for size with Proxmox (and work ... | 1 | 0 | 2026-03-01T14:50:53 | ubrtnk | false | null | 0 | o825ucb | false | /r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o825ucb/ | false | 1 |
t1_o825u0y | Thanks for that! | 1 | 0 | 2026-03-01T14:50:50 | Front_Eagle739 | false | null | 0 | o825u0y | false | /r/LocalLLaMA/comments/1rg5uee/best_way_to_run_qwen3535ba3b_on_mac/o825u0y/ | false | 1 |
t1_o825qao | No it’s probably qwen3 coder but with gated deltanet. | 0 | 0 | 2026-03-01T14:50:17 | Guinness | false | null | 0 | o825qao | false | /r/LocalLLaMA/comments/1rhvabz/we_need_to_go_deeper/o825qao/ | false | 0 |
t1_o825m8r | Sorry I'm new here, is it okay to post repo? Just want to make | 1 | 0 | 2026-03-01T14:49:40 | Mstep85 | false | null | 0 | o825m8r | false | /r/LocalLLaMA/comments/1rhx121/how_do_you_stop_your_llm_from_quietly_unionizing/o825m8r/ | false | 1 |
t1_o825lvf | Updated with Q8.
Could be that Vulkan is faster than ROCm. Haven't tried Vulkan. W7900 is much faster than R9700, so eval should be higher. Will try a Vulkan build later today. | 1 | 0 | 2026-03-01T14:49:37 | Thrumpwart | false | null | 0 | o825lvf | false | /r/LocalLLaMA/comments/1rgvma8/which_size_of_qwen35_are_you_planning_to_run/o825lvf/ | false | 1 |
t1_o825kzl | But what is it? What is the name of the software for this web UI? Is it part of vLLM? | 1 | 0 | 2026-03-01T14:49:28 | Writerro | false | null | 0 | o825kzl | false | /r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o825kzl/ | false | 1 |
t1_o825hvy | Sounds quite interesting, will try that too. That decision-point optimization totally makes sense. Any tools or eval list you can share? Also interested to learn what Verdent-style task routing means. | 1 | 0 | 2026-03-01T14:49:01 | ITSamurai | false | null | 0 | o825hvy | false | /r/LocalLLaMA/comments/1rhtyyq/using_evaluations_on_llama_models/o825hvy/ | false | 1 |
t1_o825eol | These people only know how to "point at a deer and call it a horse"; they could even write the Pechenegs and Cumans into policy. Right?! | 2 | 0 | 2026-03-01T14:48:32 | squired | false | null | 0 | o825eol | false | /r/LocalLLaMA/comments/1rhogov/the_us_used_anthropic_ai_tools_during_airstrikes/o825eol/ | false | 2 |
t1_o825atp | Solid debugging methodology. This maps to a broader pattern - agent degradation at long context is almost never the model's base capability, it's infrastructure choices that seemed "free" early on. KV cache quantization as silent killer makes sense given K-cache sensitivity.
Did you find Q8 sufficient or did you need... | 4 | 0 | 2026-03-01T14:47:57 | Joozio | false | null | 0 | o825atp | false | /r/LocalLLaMA/comments/1rhvi09/psa_if_your_local_coding_agent_feels_dumb_at_30k/o825atp/ | false | 4 |
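For llama.cpp-based stacks, the K/V cache precision discussed here is set per tensor at server launch. A sketch; the model path and context size are placeholders, and flag spellings vary a bit across llama.cpp versions:

```shell
# Keep keys at q8_0 — the K cache is generally the more
# quantization-sensitive half, per the thread above.
llama-server -m ./qwen-coder.gguf -c 32768 \
  --flash-attn on \
  --cache-type-k q8_0 \
  --cache-type-v q8_0
```

Note that V-cache quantization typically requires flash attention to be enabled, which is why the two flags travel together here.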
t1_o8259pd | Eval time seems much slower on W7900 than R9700 for the same model
https://github.com/ggml-org/llama.cpp/discussions/19890
Unless I am missing something | 1 | 0 | 2026-03-01T14:47:46 | putrasherni | false | null | 0 | o8259pd | false | /r/LocalLLaMA/comments/1rgvma8/which_size_of_qwen35_are_you_planning_to_run/o8259pd/ | false | 1 |
t1_o8258r0 | Then better not look at the 300 sqft music studio in the back yard... | 1 | 0 | 2026-03-01T14:47:38 | ubrtnk | false | null | 0 | o8258r0 | false | /r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o8258r0/ | false | 1 |
t1_o8256t9 | Log analysis catching production issues before Grafana alerts is underrated - that's where agent ROI really shows up. Your honest hype-vs-reality framing is exactly right.
The failure modes are usually context drift and tool-call reliability under load, not the model itself. What's your fallback when MCP server conne... | 1 | 0 | 2026-03-01T14:47:20 | Joozio | false | null | 0 | o8256t9 | false | /r/LocalLLaMA/comments/1rhsto2/i_replaced_my_entire_automation_stack_with_mcp/o8256t9/ | false | 1 |
t1_o8252m2 | I didn't create it, I just use it. Been on it since 06.14. | 1 | 0 | 2026-03-01T14:46:41 | ubrtnk | false | null | 0 | o8252m2 | false | /r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o8252m2/ | false | 1 |
t1_o824vkc | Thanks for such a great writeup. I’m running the exact same M4 / 24GB setup, but I’m seeing a significant discrepancy in KV cache scaling compared to your numbers.
You mentioned KV cache of ~1.5–2GB for 32K context, but in my tests with OLLAMA_KV_CACHE_TYPE=q8_0, I see usage of ~4.7GB (and even q4_0 stays aroun... | 1 | 0 | 2026-03-01T14:45:37 | CellularProcessor | false | null | 0 | o824vkc | false | /r/LocalLLaMA/comments/1r8g3ap/best_qwen_model_for_m4_mac_mini_32gb_unified/o824vkc/ | false | 1 |
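The discrepancy being debugged comes down to a simple product over the model's attention geometry. A back-of-envelope sizing helper; the layer/head numbers below are illustrative, not any specific model's config, and q8_0 is approximated as 1 byte per element:

```python
def kv_cache_bytes(n_layers: int, n_ctx: int, n_kv_heads: int,
                   head_dim: int, bytes_per_elem: float) -> float:
    """K and V each store n_ctx vectors of n_kv_heads * head_dim per layer."""
    return 2 * n_layers * n_ctx * n_kv_heads * head_dim * bytes_per_elem

# Illustrative GQA config: 48 layers, 8 KV heads, head_dim 128, 32K context.
gib = kv_cache_bytes(48, 32768, 8, 128, 1) / 2**30  # q8_0 ~ 1 byte/elem
# -> 3.0 GiB for the cache alone at this hypothetical geometry
```

Differences like ~1.5GB vs ~4.7GB usually trace back to one of these factors: a different KV-head count (GQA ratio), a larger allocated context than requested, or the runtime pre-allocating the full-context cache up front.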
t1_o824te2 | [removed] | 1 | 0 | 2026-03-01T14:45:18 | [deleted] | true | null | 0 | o824te2 | false | /r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o824te2/ | false | 1 |
t1_o824paj | They just released Qwen3 Coder Next. It is likely mostly Qwen3.5.
It's very fast and sits nicely between 35B and 122B. I use it as an orchestrator because of its JSON skills. | 3 | 0 | 2026-03-01T14:44:41 | zipzag | false | null | 0 | o824paj | false | /r/LocalLLaMA/comments/1rhvabz/we_need_to_go_deeper/o824paj/ | false | 3 |
t1_o824mry | How can you expect your family to care strongly about your hobby project? It's pretty clear you're not doing it "for them"...
They'd probably appreciate it more if you spent less time on that and more time focused on them.
But I get it, a man has to have his projects to be happy. It's a balancing act. | 2 | 0 | 2026-03-01T14:44:18 | delabay | false | null | 0 | o824mry | false | /r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o824mry/ | false | 2 |
t1_o824hr0 | I play in tech, she plays in the dirt. | 1 | 0 | 2026-03-01T14:43:32 | ubrtnk | false | null | 0 | o824hr0 | false | /r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o824hr0/ | false | 1 |
t1_o824c6n | The struggle is real | 2 | 0 | 2026-03-01T14:42:40 | ubrtnk | false | null | 0 | o824c6n | false | /r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o824c6n/ | false | 2 |
t1_o824705 | This is 100% part of the build already. I use Bookstack for all this information. How the network is configured, how to take care of the pool and hot tub, details about the appliances etc. There's an MCP server integration. Then I put QR codes in certain places that are easily rememberable, scan the QR code and it take... | 2 | 0 | 2026-03-01T14:41:53 | ubrtnk | false | null | 0 | o824705 | false | /r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o824705/ | false | 2 |
t1_o8245n1 | That is what I'm hoping Qwen3.5 Coder Next will be: 60B A8B. | 1 | 0 | 2026-03-01T14:41:40 | KURD_1_STAN | false | null | 0 | o8245n1 | false | /r/LocalLLaMA/comments/1rh9ygz/is_anyone_else_waiting_for_a_6070b_moe_with_810b/o8245n1/ | false | 1 |
t1_o8242o0 | "You're absolutely right. This wasn't a soldier. It was a child" | 2 | 0 | 2026-03-01T14:41:12 | Delicious_Sky5329 | false | null | 0 | o8242o0 | false | /r/LocalLLaMA/comments/1rhogov/the_us_used_anthropic_ai_tools_during_airstrikes/o8242o0/ | false | 2 |
t1_o823ztn | [removed] | 1 | 0 | 2026-03-01T14:40:45 | [deleted] | true | null | 0 | o823ztn | false | /r/LocalLLaMA/comments/1rhogov/the_us_used_anthropic_ai_tools_during_airstrikes/o823ztn/ | false | 1 |
t1_o823w1j | Someone recently did PPL tests on this with qwen. Found the PPL loss from Q8 was negligible. Also I did my own PPL test on devstral and my quant does lower PPL at 32K than it did at 512.
Grain of salt is that it's going to be different for different modes. Some couldn't handle Q4 at all | 12 | 0 | 2026-03-01T14:40:10 | a_beautiful_rhind | false | null | 0 | o823w1j | false | /r/LocalLLaMA/comments/1rhvi09/psa_if_your_local_coding_agent_feels_dumb_at_30k/o823w1j/ | false | 12 |
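A PPL comparison like the one described can be run with llama.cpp's perplexity tool, evaluating the same quant at short vs long context. Paths are placeholders, and the binary was named `perplexity` in older builds:

```shell
# Same quant, two context lengths — compare the reported PPL values.
llama-perplexity -m ./model-q4_k_m.gguf -f ./wiki.test.raw -c 512
llama-perplexity -m ./model-q4_k_m.gguf -f ./wiki.test.raw -c 32768
```

As the commenter notes, results differ per model, so the comparison is only meaningful run against the specific quant you intend to deploy.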