name (string) | body (string) | score (int64) | controversiality (int64) | created (timestamp[us]) | author (string) | collapsed (bool) | edited (timestamp[us]) | gilded (int64) | id (string) | locked (bool) | permalink (string) | stickied (bool) | ups (int64) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
t1_o85ouut | You'll be limited by memory locality sadly. PCIe speed at extremely low tok/s | 1 | 0 | 2026-03-02T01:52:03 | starwaves1 | false | null | 0 | o85ouut | false | /r/LocalLLaMA/comments/16lji25/3090_48gb/o85ouut/ | false | 1 |
t1_o85old4 | Basically, it stops the AI from hallucinating your health metrics by forcing it to read your database first. LLMs are great at talking, but they have zero idea what your actual step count or HRV is.<br>**Without SQL RAG:** You ask "How did I sleep?" and the AI just guesses<br>**With SQL RAG (what Leo does):**<br>1. **Retrie... | 1 | 0 | 2026-03-02T01:50:27 | sandseb123 | false | null | 0 | o85old4 | false | /r/LocalLLaMA/comments/1rif77r/finetuned_a_health_coach_llm_on_my_mac_in_15/o85old4/ | false | 1 |
t1_o85okze | Sounds like an issue with reddit's voting algorithm, if quality comments get buried. Bots, lowest common denominator, idiocracy. | 1 | 0 | 2026-03-02T01:50:23 | NiteCyper | false | null | 0 | o85okze | false | /r/LocalLLaMA/comments/1rgkc1b/back_in_my_day_localllama_were_the_pioneers/o85okze/ | false | 1 |
t1_o85ol0a | I think you should learn how LLMs work before calling this an issue. | 2 | 0 | 2026-03-02T01:50:23 | mlhher | false | null | 0 | o85ol0a | false | /r/LocalLLaMA/comments/1riej05/qwen35_thinks_its_2024_so_buying_a_2026_american/o85ol0a/ | false | 2 |
t1_o85oj37 | Doesn’t matter had text. | 1 | 0 | 2026-03-02T01:50:03 | opi098514 | false | null | 0 | o85oj37 | false | /r/LocalLLaMA/comments/1rhg3p4/baremetal_ai_booting_directly_into_llm_inference/o85oj37/ | false | 1 |
t1_o85oiyo | If you are used to 1000 tok/s speeds that Claude and OpenAI provide then by all means use that.<br>I found that 100B+ MoE models are perfectly ok for any kind of work today, and fast enough to run on GPUs or even faster than cloud. But there is a limit: 200B+ is too slow for GPUs, I guess for Macs 30B+ is already too muc... | 2 | 0 | 2026-03-02T01:50:02 | ortegaalfredo | false | null | 0 | o85oiyo | false | /r/LocalLLaMA/comments/1rif3h5/mac_mini_m4_pro_24gb_local_llms_are_unusable_for/o85oiyo/ | false | 2 |
t1_o85oiou | I agree. However, if you set up a solid small model like Qwen 3.5 27B + Web Search, then I really do believe the model competes with much larger models. I think solid web search grounding is the future for home setups. | 5 | 0 | 2026-03-02T01:49:59 | My_Unbiased_Opinion | false | null | 0 | o85oiou | false | /r/LocalLLaMA/comments/1ri635s/13_months_since_the_deepseek_moment_how_far_have/o85oiou/ | false | 5 |
t1_o85oids | If you're super smart, you don't need so many damn premises. You can generalize.<br>Source: sitting in class with the human race. | 8 | 0 | 2026-03-02T01:49:56 | MrPecunius | false | null | 0 | o85oids | false | /r/LocalLLaMA/comments/1ri635s/13_months_since_the_deepseek_moment_how_far_have/o85oids/ | false | 8 |
t1_o85ohp4 | At an office test, a secretary proudly told the boss:<br>“I can type **1,500 words per minute**.”<br>The boss was impressed and asked her to show it. She sat down and typed very fast, her fingers flying over the keyboard.<br>After a minute, the boss looked at the page and said:<br>“But this is all complete nonsense. It doesn’t ma... | -8 | 0 | 2026-03-02T01:49:49 | akazakou | false | null | 0 | o85ohp4 | false | /r/LocalLLaMA/comments/1rianwb/running_qwen35_27b_dense_with_170k_context_at/o85ohp4/ | false | -8 |
t1_o85ofns | The 27b model is pretty damn good. I'm planning on letting it drive an agentic system but using the 35b model for other subagent tasks where speed is more important. | 2 | 0 | 2026-03-02T01:49:28 | musicsurf | false | null | 0 | o85ofns | false | /r/LocalLLaMA/comments/1ridwl5/qwen3527b_iq3_vs_qwen35_35ba3m_q4_k_m/o85ofns/ | false | 2 |
t1_o85obsi | | 1 | 0 | 2026-03-02T01:48:49 | KURD_1_STAN | false | null | 0 | o85obsi | false | /r/LocalLLaMA/comments/1ribmcg/how_to_run_qwen35_35b/o85obsi/ | false | 1 |
t1_o85obr1 | OpenCode. It's the best for open-weight models imo. Really customisable too. | 3 | 0 | 2026-03-02T01:48:48 | ayylmaonade | false | null | 0 | o85obr1 | false | /r/LocalLLaMA/comments/1rie3yc/which_ide_to_code_with_qwen_35/o85obr1/ | false | 3 |
t1_o85oasf | Which it makes by referencing knowledge | 2 | 1 | 2026-03-02T01:48:38 | SpicyWangz | false | null | 0 | o85oasf | false | /r/LocalLLaMA/comments/1ri635s/13_months_since_the_deepseek_moment_how_far_have/o85oasf/ | false | 2 |
t1_o85o6bs | Why don't you try my suggestion for the time being? | 4 | 0 | 2026-03-02T01:47:52 | Iory1998 | false | null | 0 | o85o6bs | false | /r/LocalLLaMA/comments/1ricz8u/notice_qwen_35_reprocessing_the_prompt_every_time/o85o6bs/ | false | 4 |
t1_o85o4ii | You are like my grandma who cooks lunch for me without letting me know, thinks she did me a favour, and then criticizes me for not eating the lunch and not appreciating the favour I didn’t ask for (or know about). | 1 | 0 | 2026-03-02T01:47:34 | egytaldodolle | false | null | 0 | o85o4ii | false | /r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o85o4ii/ | false | 1 |
t1_o85o3vm | You can ask this same question to the same LLM and it will say that it has no idea of the current date. | 1 | 0 | 2026-03-02T01:47:27 | ortegaalfredo | false | null | 0 | o85o3vm | false | /r/LocalLLaMA/comments/1riej05/qwen35_thinks_its_2024_so_buying_a_2026_american/o85o3vm/ | false | 1 |
t1_o85nwnx | 2026-03-02 07:44:09 [DEBUG]<br>slot launch_slot_: id 0 \| task -1 \| sampler chain: logits -> ?penalties -> ?dry -> ?top-n-sigma -> top-k -> ?typical -> top-p -> ?min-p -> ?xtc -> ?temp-ext -> dist<br>slot launch_slot_: id 0 \| task 0 \| processing task, is_child = 0<br>slot update_slots: id 0 \| task 0 \| ... | 1 | 0 | 2026-03-02T01:46:11 | FORNAX_460 | false | null | 0 | o85nwnx | false | /r/LocalLLaMA/comments/1ricz8u/notice_qwen_35_reprocessing_the_prompt_every_time/o85nwnx/ | false | 1 |
t1_o85npto | **"SQL RAG"** means the app translates your plain-English question into an actual SQL database query, runs it against your local health data, then feeds the raw results to the LLM to write a coaching answer. So when you ask "How was my Zone 2 running this week?", it generates something like SELECT activity, avg\_hr, du... | 1 | 0 | 2026-03-02T01:45:00 | sandseb123 | false | null | 0 | o85npto | false | /r/LocalLLaMA/comments/1rif77r/finetuned_a_health_coach_llm_on_my_mac_in_15/o85npto/ | false | 1 |
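For readers unfamiliar with the pattern, a minimal sketch of the SQL RAG flow this comment describes, in Python with sqlite3. The schema, the prompts, and the `llm()` helper are hypothetical stand-ins, not Leo's actual implementation:

```python
import sqlite3

def llm(prompt: str) -> str:
    """Hypothetical stand-in for a local model call."""
    raise NotImplementedError

def sql_rag(question: str, db_path: str = "health.db") -> str:
    # 1. Translate the plain-English question into SQL (schema is assumed).
    sql = llm(
        "Schema: workouts(activity TEXT, avg_hr REAL, duration_min REAL, day DATE)\n"
        f"Write one SQLite SELECT that answers: {question}\nSQL:"
    )
    # 2. Run the generated query against the local health database.
    rows = sqlite3.connect(db_path).execute(sql).fetchall()
    # 3. Ground the coaching answer in the raw query results.
    return llm(f"Question: {question}\nQuery results: {rows}\nCoaching answer:")
```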
t1_o85norv | Sounds more like the Financial Times is just trying to play with the market. | 1 | 0 | 2026-03-02T01:44:49 | JacketHistorical2321 | false | null | 0 | o85norv | false | /r/LocalLLaMA/comments/1rh095c/deepseek_v4_will_be_released_next_week_and_will/o85norv/ | false | 1 |
t1_o85njzt | Totally. I was just pointing out that it's there. I've already built it myself and I'm currently using it; works great 👍 | 1 | 0 | 2026-03-02T01:43:59 | JacketHistorical2321 | false | null | 0 | o85njzt | false | /r/LocalLLaMA/comments/1rfthhd/local_ai_on_mac_pro_2019/o85njzt/ | false | 1 |
t1_o85nhr1 | I'd rather run it slower at a higher quant. If I have a choice I do not go below Q6. If you aren't GPU poor you should do the same imo. | 1 | 0 | 2026-03-02T01:43:36 | Lifeisshort555 | false | null | 0 | o85nhr1 | false | /r/LocalLLaMA/comments/1rianwb/running_qwen35_27b_dense_with_170k_context_at/o85nhr1/ | false | 1 |
t1_o85ng2l | Same principle, injecting into the KV cache. But mine is specifically for looking at injection of skill files to save context for smaller models (think 7B and smaller). And it seems like atlasKV has some more fine-tuning involved for the model's behavior, and seems a bit more focused on larger models.<br>But seems to ... | 0 | 0 | 2026-03-02T01:43:18 | Proper-Lab1756 | false | null | 0 | o85ng2l | false | /r/LocalLLaMA/comments/1rif789/injecting_skills_into_the_kv_cache_not_as_stupid/o85ng2l/ | false | 0 |
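As a rough illustration of the plain prefix-caching variant of this idea (not the projector approach the commenter describes), a sketch assuming Hugging Face transformers: prefill a skill file once, then hand its KV cache to generation. The model name and skill text are hypothetical:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

name = "Qwen/Qwen2.5-0.5B-Instruct"  # hypothetical choice of small model
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(name)

skill = "SKILL: when converting units, always show the conversion factor.\n"
skill_ids = tok(skill, return_tensors="pt").input_ids
with torch.no_grad():
    # Prefill the skill text once and keep its KV cache.
    past = model(skill_ids, use_cache=True).past_key_values

prompt_ids = tok("Convert 5 miles to km.", return_tensors="pt").input_ids
out = model.generate(
    torch.cat([skill_ids, prompt_ids], dim=-1),  # ids must cover the cached prefix
    past_key_values=past,
    max_new_tokens=64,
)
print(tok.decode(out[0], skip_special_tokens=True))
```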
t1_o85na5o | No idea, I just started dabbling in this world of local LLMs. I'm a fresh mechatronics engineer trying to integrate them into something in my major. | 1 | 0 | 2026-03-02T01:42:17 | Electrify338 | false | null | 0 | o85na5o | false | /r/LocalLLaMA/comments/1ribmcg/how_to_run_qwen35_35b/o85na5o/ | false | 1 |
t1_o85n366 | How sharp, methodical, and *copious* the thinking is. Q3.5 27B is less than 1/6 the speed of Qwen3 30B A3B 2507 on my M4 Pro, and it thinks way more, so the speed difference is even greater than that, but the output seems to be super worth it. | 3 | 0 | 2026-03-02T01:41:04 | MrPecunius | false | null | 0 | o85n366 | false | /r/LocalLLaMA/comments/1ri635s/13_months_since_the_deepseek_moment_how_far_have/o85n366/ | false | 3 |
t1_o85n0q8 | Shouldn't a 5070 be like 3 times as fast as a 3060? Show me a screenshot of ur loaded model parameters | 1 | 0 | 2026-03-02T01:40:38 | KURD_1_STAN | false | null | 0 | o85n0q8 | false | /r/LocalLLaMA/comments/1ribmcg/how_to_run_qwen35_35b/o85n0q8/ | false | 1 |
t1_o85mz8j | The earlier ANE has been 16-bit float, per Hollemans' [GitHub](https://github.com/hollance/neural-engine/blob/master/docs/16-bit.md). Which means the 38 TFLOPS number is likely market speak to compete with Qualcomm, AMD, etc.<br>Apple docs say the A17 generation has an INT8 path, but it's very likely Apple added dequant in CoreML to... | 1 | 0 | 2026-03-02T01:40:23 | jack_smirkingrevenge | false | null | 0 | o85mz8j | false | /r/LocalLLaMA/comments/1rhx5pc/reverse_engineered_apple_neural_engineane_to/o85mz8j/ | false | 1 |
t1_o85my0i | this is kinda the right answer. i've waited four years, i've waited six months, and either way, a new laptop that makes me salivate will be out in weeks. i've decided i'll buy it if i get this job. i also know that if i don't get it i'll wanna rope so i might buy it anyway to cope 😇 | 1 | 0 | 2026-03-02T01:40:11 | roughseasbanshee | false | null | 0 | o85my0i | false | /r/LocalLLaMA/comments/1ndoxxa/why_should_i_not_buy_an_amd_ai_max_395_128gb/o85my0i/ | false | 1 |
t1_o85mvd2 | I've noticed this with Opencode. First I thought llama-swap was ejecting and reloading the model midpoint, but no. It's exactly what you described. Any clue as to what parameters we can inject (aside from raising the context window) to stop this behavior? | 2 | 0 | 2026-03-02T01:39:43 | simracerman | false | null | 0 | o85mvd2 | false | /r/LocalLLaMA/comments/1ricz8u/notice_qwen_35_reprocessing_the_prompt_every_time/o85mvd2/ | false | 2 |
t1_o85mqe9 | I just roleplay when using LLMs, even on coding harnesses like Claude Code. Seems like the most obvious way to get the agent/LLM to understand what kind of crap I want to build. | 1 | 0 | 2026-03-02T01:38:52 | gripntear | false | null | 0 | o85mqe9 | false | /r/LocalLLaMA/comments/1riboy2/learnt_about_emergent_intention_maybe_prompt/o85mqe9/ | false | 1 |
t1_o85mq5k | The trick is you have to match the geometry the model uses for its latent space.<br>I originally tried a random embedding model and was running into that when I was first testing. And then I did some research into the geometry, spent a long time checking for a compatible embedding model, then I realized you can just hij... | 0 | 0 | 2026-03-02T01:38:49 | Proper-Lab1756 | false | null | 0 | o85mq5k | false | /r/LocalLLaMA/comments/1rif789/injecting_skills_into_the_kv_cache_not_as_stupid/o85mq5k/ | false | 0 |
t1_o85mp4u | If u want real speed, don't mess with clusters. If u are an Apple guy, keep it and get a cheap DDR4 machine that can be loaded with GPUs, and run the model on it remotely; u don't need more than maybe 16-32GB RAM. Choose 1-2 GPUs that fit your needs. RAM became way too expensive imo. If x86, sell the Mini and also get... | 1 | 0 | 2026-03-02T01:38:38 | zipperlein | false | null | 0 | o85mp4u | false | /r/LocalLLaMA/comments/1rif3h5/mac_mini_m4_pro_24gb_local_llms_are_unusable_for/o85mp4u/ | false | 1 |
t1_o85mn3g | I managed to get up to 55 tokens/s at 100k context window | 1 | 0 | 2026-03-02T01:38:16 | Electrify338 | false | null | 0 | o85mn3g | false | /r/LocalLLaMA/comments/1ribmcg/how_to_run_qwen35_35b/o85mn3g/ | false | 1 |
t1_o85mn01 | Problem with trimming from the middle is that this just won't work for most use cases. You'd effectively be halving the context size. We did this before context shift was a thing where it would dynamically cut the original half of the context, let it fill back up to the limit and then repeat it. That feature is still i... | 7 | 0 | 2026-03-02T01:38:15 | henk717 | false | null | 0 | o85mn01 | false | /r/LocalLLaMA/comments/1ricz8u/notice_qwen_35_reprocessing_the_prompt_every_time/o85mn01/ | false | 7 |
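To make the trade-off being debated here concrete, a toy sketch of sinks-plus-recent-window trimming on a plain list of token IDs; this is an illustration of the idea, not how any particular backend implements it:

```python
def trim_middle(tokens: list[int], n_sink: int, n_recent: int) -> list[int]:
    """Keep the first n_sink tokens (attention sinks) and the last
    n_recent tokens, dropping the middle of the context."""
    if len(tokens) <= n_sink + n_recent:
        return tokens  # nothing to trim yet
    return tokens[:n_sink] + tokens[-n_recent:]

# Example: a 10-token context trimmed to 4 sink + 4 recent tokens.
print(trim_middle(list(range(10)), n_sink=4, n_recent=4))
# [0, 1, 2, 3, 6, 7, 8, 9]
```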
t1_o85mkuq | Neither of my tools have trouble with dates, I can't really comment on others though. | 1 | 0 | 2026-03-02T01:37:53 | Total-Context64 | false | null | 0 | o85mkuq | false | /r/LocalLLaMA/comments/1riej05/qwen35_thinks_its_2024_so_buying_a_2026_american/o85mkuq/ | false | 1 |
t1_o85mjff | No true Scotsman, huh? | 2 | 0 | 2026-03-02T01:37:37 | MrPecunius | false | null | 0 | o85mjff | false | /r/LocalLLaMA/comments/1ri635s/13_months_since_the_deepseek_moment_how_far_have/o85mjff/ | false | 2 |
t1_o85mgra | About the same or slightly faster. For single use I guess llama.cpp is OK but even for agents vllm already is much faster. | 4 | 0 | 2026-03-02T01:37:10 | ortegaalfredo | false | null | 0 | o85mgra | false | /r/LocalLLaMA/comments/1rianwb/running_qwen35_27b_dense_with_170k_context_at/o85mgra/ | false | 4 |
t1_o85mgc3 | So many lies and deceptions. Really did not expect that from a guy like him. | 0 | 0 | 2026-03-02T01:37:05 | Another__one | false | null | 0 | o85mgc3 | false | /r/LocalLLaMA/comments/1ria14c/dario_amodei_on_open_source_thoughts/o85mgc3/ | false | 0 |
t1_o85mfms | I thought the same thing. The post on X is only days old, but the commit history said 8 months. I'm wondering if they just made it public or something.<br>I looked through it on Friday. Definitely interesting. Just need to find time to test it out (and the data). | 2 | 0 | 2026-03-02T01:36:58 | l0nedigit | false | null | 0 | o85mfms | false | /r/LocalLLaMA/comments/1riavbf/t2l_texttolora_by_sakanaai/o85mfms/ | false | 2 |
t1_o85mb9p | Is this like atlasKV? | 1 | 0 | 2026-03-02T01:36:12 | charmander_cha | false | null | 0 | o85mb9p | false | /r/LocalLLaMA/comments/1rif789/injecting_skills_into_the_kv_cache_not_as_stupid/o85mb9p/ | false | 1 |
t1_o85m9ba | Not really.<br>Intelligence can produce conclusions from premises. | 4 | 0 | 2026-03-02T01:35:51 | MrPecunius | false | null | 0 | o85m9ba | false | /r/LocalLLaMA/comments/1ri635s/13_months_since_the_deepseek_moment_how_far_have/o85m9ba/ | false | 4 |
t1_o85m8aa | where are these? | 3 | 0 | 2026-03-02T01:35:40 | Mediocre_Speed_2273 | false | null | 0 | o85m8aa | false | /r/LocalLLaMA/comments/1ri2irg/breaking_today_qwen_35_small/o85m8aa/ | false | 3 |
t1_o85m83p | Im getting 27 t/s with 60k (it was either that or 128k) context on a 3060 12GB + 32GB RAM at Q5 from Aesidai. What quants are u using that ur RAM fills up? | 1 | 0 | 2026-03-02T01:35:38 | KURD_1_STAN | false | null | 0 | o85m83p | false | /r/LocalLLaMA/comments/1ribmcg/how_to_run_qwen35_35b/o85m83p/ | false | 1 |
t1_o85m83s | i hadn't looked closely at the photo and was thinking "damn. dude's bold as hell to say this on an account with a picture of him and his girlfriend" 😭 | 1 | 0 | 2026-03-02T01:35:38 | roughseasbanshee | false | null | 0 | o85m83s | false | /r/LocalLLaMA/comments/1o3evon/what_laptop_would_you_choose_ryzen_ai_max_395/o85m83s/ | false | 1 |
t1_o85m47v | Yeah, when I pasted in the output of a date command, its thinking changed to "Note: This is in the future relative to my training data, but I must treat it as the 'current' context provided by the user." Seems totally reasonable.<br>However, Cursor is having a similar issue - [https://forum.cursor.com/t/today-is-the-wro... | 1 | 0 | 2026-03-02T01:34:58 | drappleyea | false | null | 0 | o85m47v | false | /r/LocalLLaMA/comments/1riej05/qwen35_thinks_its_2024_so_buying_a_2026_american/o85m47v/ | false | 1 |
t1_o85m3x9 | I did 7 years, 5 of them in IT sales, I can sell some stuff for sure!<br>It's one thing to be support for your immediate family; it's another to be that for randos. BUT I agree with you that we are on the cusp of "Her" | 2 | 0 | 2026-03-02T01:34:55 | ubrtnk | false | null | 0 | o85m3x9 | false | /r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o85m3x9/ | false | 2 |
t1_o85m1u2 | Yeah you'll do better with llama.cpp. No cap 🧢 I got a 30+ speed increase. | 1 | 0 | 2026-03-02T01:34:33 | nakedspirax | false | null | 0 | o85m1u2 | false | /r/LocalLLaMA/comments/1ribmcg/how_to_run_qwen35_35b/o85m1u2/ | false | 1 |
t1_o85m16k | Exo is a nice trick, but it's not what you want for this. You just need a larger chunk of unified memory. 24 GB sounds like a lot, and it is if you're just browsing the web. But the OS is gonna take at least eight of that as regular RAM, and your display is taking some of it as VRAM. It doesn't have enough space to run much... | 1 | 0 | 2026-03-02T01:34:26 | SmChocolateBunnies | false | null | 0 | o85m16k | false | /r/LocalLLaMA/comments/1rif3h5/mac_mini_m4_pro_24gb_local_llms_are_unusable_for/o85m16k/ | false | 1 |
t1_o85lza6 | As would I! Especially because right now it’s just testing recall of skills (0.5B is pretty small for meaningful data to be obtained through actual tool use, so I had to rely on recall to measure behavior).<br>Unfortunately, even with only training the projector, my computer was throwing a fit. So I’ll either have to upg... | 1 | 0 | 2026-03-02T01:34:05 | Proper-Lab1756 | false | null | 0 | o85lza6 | false | /r/LocalLLaMA/comments/1rif789/injecting_skills_into_the_kv_cache_not_as_stupid/o85lza6/ | false | 1 |
t1_o85lxdv | Cool! I've been trying to do stuff like this but the model always ended up getting very confused, looping, and other "this won't work" sort of behavior. Thanks for sharing, can't wait to check it out! | 1 | 0 | 2026-03-02T01:33:45 | ladz | false | null | 0 | o85lxdv | false | /r/LocalLLaMA/comments/1rif789/injecting_skills_into_the_kv_cache_not_as_stupid/o85lxdv/ | false | 1 |
t1_o85lwi6 | Awesome | 1 | 0 | 2026-03-02T01:33:36 | LuckyLuckierLuckest | false | null | 0 | o85lwi6 | false | /r/LocalLLaMA/comments/1gaoxuu/run_your_local_ai_stack_with_docker_compose/o85lwi6/ | false | 1 |
t1_o85lo0h | mac studio | 1 | 0 | 2026-03-02T01:32:07 | sunshinecheung | false | null | 0 | o85lo0h | false | /r/LocalLLaMA/comments/1rif3h5/mac_mini_m4_pro_24gb_local_llms_are_unusable_for/o85lo0h/ | false | 1 |
t1_o85lksk | No, this is a bad idea. Sell your computer and buy something that is better suited to the task. If you want to play with big LLMs, you need to spend big money to use them in the real world. New Macs are coming, so maybe something will be revealed that would help. You need a massive amount of RAM on a Mac to run these models... | 7 | 0 | 2026-03-02T01:31:33 | Expert_Bat4612 | false | null | 0 | o85lksk | false | /r/LocalLLaMA/comments/1rif3h5/mac_mini_m4_pro_24gb_local_llms_are_unusable_for/o85lksk/ | false | 7 |
t1_o85lj1x | The license is essentially a formality, unlike with Facebook, where they actually check which country you're applying from to access the model. | 4 | 0 | 2026-03-02T01:31:15 | MadPelmewka | false | null | 0 | o85lj1x | false | /r/LocalLLaMA/comments/1riehh9/licensing_restrictions_for_tencent_models/o85lj1x/ | false | 4 |
t1_o85lg1l | Occasionally so, not always. Tokens usually cross-correlate whenever it is relevant. The sink would act as the "use if nothing is relevant" switch. Therefore if position 193 IS a sink, (a) the information in that position is irrelevant, (b) the model will avoid adding useful information, (c) directional steering of ... | 2 | 0 | 2026-03-02T01:30:44 | TomLucidor | false | null | 0 | o85lg1l | false | /r/LocalLLaMA/comments/1qpg4ty/the_mystery_of_position_193_i_found_a_weird/o85lg1l/ | false | 2 |
t1_o85l9kl | bro, 8845HS 32GB DDR5 RAM Qwen3-27B @ Q4 at what speed? | 1 | 0 | 2026-03-02T01:29:38 | sunshinecheung | false | null | 0 | o85l9kl | false | /r/LocalLLaMA/comments/1ri635s/13_months_since_the_deepseek_moment_how_far_have/o85l9kl/ | false | 1 |
t1_o85l73x | Unsloth Q5_K_XL worked like a charm. Curious whether you downloaded Q8 after the fixes Unsloth made and reuploaded the models? (Probably not, as u don't have the K XL version) | 3 | 0 | 2026-03-02T01:29:13 | Old-Sherbert-4495 | false | null | 0 | o85l73x | false | /r/LocalLLaMA/comments/1ri48pj/qwen35122ba10bggufq4_k_xlpipesscreensaver_oneshot/o85l73x/ | false | 3 |
t1_o85l5p5 | Considering how stupid Trump is, he’s not actually calling the shots; it’s the techbros and the Nazis | 1 | 0 | 2026-03-02T01:28:58 | Savantskie1 | false | null | 0 | o85l5p5 | false | /r/LocalLLaMA/comments/1rhogov/the_us_used_anthropic_ai_tools_during_airstrikes/o85l5p5/ | false | 1 |
t1_o85l13r | AI as a tool to compete more effectively against those who ignore or hate it. Call me selfish, but I believe its best use in the workplace is quiet: why share something that benefits you with strangers when they won't hesitate to push you out of your job or, worse, take advantage of your success while pretending to care... | 1 | 0 | 2026-03-02T01:28:11 | Shockbum | false | null | 0 | o85l13r | false | /r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o85l13r/ | false | 1 |
t1_o85l0ne | This is very interesting, and it makes sense. I'd love to see how it works on a 7B model. | 5 | 0 | 2026-03-02T01:28:06 | Intraluminal | false | null | 0 | o85l0ne | false | /r/LocalLLaMA/comments/1rif789/injecting_skills_into_the_kv_cache_not_as_stupid/o85l0ne/ | false | 5 |
t1_o85kx0j | That's an awesome project, and thank you for sharing the dataset openly; it gave me some good ideas.<br>What training framework are you using, and what's the context length of the CPT phase? What context length will you use for instruct post-training? | 4 | 0 | 2026-03-02T01:27:29 | FullOf_Bad_Ideas | false | null | 0 | o85kx0j | false | /r/LocalLLaMA/comments/1ribjum/i_trained_a_3b_patristic_theology_llm_on_a_single/o85kx0j/ | false | 4 |
t1_o85kwnt | What I find most concerning here is how it's *hallucinating* performing a date check twice: "as of my current internal clock" and "I need to check the date. Today is May 2024 (in reality)." | -1 | 0 | 2026-03-02T01:27:25 | Murgatroyd314 | false | null | 0 | o85kwnt | false | /r/LocalLLaMA/comments/1riej05/qwen35_thinks_its_2024_so_buying_a_2026_american/o85kwnt/ | false | -1 |
t1_o85kuot | The SQL RAG layer...does what now? | 1 | 0 | 2026-03-02T01:27:05 | SmChocolateBunnies | false | null | 0 | o85kuot | false | /r/LocalLLaMA/comments/1rif77r/finetuned_a_health_coach_llm_on_my_mac_in_15/o85kuot/ | false | 1 |
t1_o85kn06 | Reusing questions over time is useless. Of course the model suppliers scan the web for "benchmarking" questions. Only tests with non-open questions are valuable. | 5 | 0 | 2026-03-02T01:25:45 | Maximum_Parking_5174 | false | null | 0 | o85kn06 | false | /r/LocalLLaMA/comments/1ri635s/13_months_since_the_deepseek_moment_how_far_have/o85kn06/ | false | 5 |
t1_o85khnt | Ty, I haven't downloaded the dense one yet, so I'll target Q5 or higher. | 1 | 0 | 2026-03-02T01:24:50 | SmChocolateBunnies | false | null | 0 | o85khnt | false | /r/LocalLLaMA/comments/1riej05/qwen35_thinks_its_2024_so_buying_a_2026_american/o85khnt/ | false | 1 |
t1_o85khl1 | Assume budget for GPU at best is 4060/5060 Ti with 16GB (heard the crazy 3090 is a heater?). 48GB/64GB in the second hand space for MLX is reachable as well I think? | 1 | 0 | 2026-03-02T01:24:49 | TomLucidor | false | null | 0 | o85khl1 | false | /r/LocalLLaMA/comments/1rhjg6w/longcatflashlite_685b_maybe_a_relatively_good/o85khl1/ | false | 1 |
t1_o85kg7c | just wanted to let u know fuk u | 0 | 0 | 2026-03-02T01:24:36 | TrickySpare6504 | false | null | 0 | o85kg7c | false | /r/LocalLLaMA/comments/1rgxywo/new_claude_21_refuses_to_kill_a_python_process/o85kg7c/ | false | 0 |
t1_o85k8l1 | Good call, I forgot to mention the Ollama version. I bashed into the Ollama container (pulled latest an hour ago), and it's reporting 17.4... which is odd, since I thought it was currently 17.1. Thanks for the callout on this, it's appreciated! | 2 | 0 | 2026-03-02T01:23:17 | Background_Baker9021 | false | null | 0 | o85k8l1 | false | /r/LocalLLaMA/comments/1rdxfdu/qwen3535ba3b_is_a_gamechanger_for_agentic_coding/o85k8l1/ | false | 2 |
t1_o85k7zn | 'Nuff said | 0 | 0 | 2026-03-02T01:23:11 | emprahsFury | false | null | 0 | o85k7zn | false | /r/LocalLLaMA/comments/1rif3h5/mac_mini_m4_pro_24gb_local_llms_are_unusable_for/o85k7zn/ | false | 0 |
t1_o85k5xa | Agreed on the efficiency on forward pass, that said I was able to see 10-12% efficiency on backward pass which makes me think we can go higher 😅 | 3 | 0 | 2026-03-02T01:22:50 | jack_smirkingrevenge | false | null | 0 | o85k5xa | false | /r/LocalLLaMA/comments/1rhx5pc/reverse_engineered_apple_neural_engineane_to/o85k5xa/ | false | 3 |
t1_o85k5mb | Fair point on context shifting, I wasn't aware backends could recover from that. But it still sounds like something we'd want to avoid. From my understanding of attention sinks, sliding windows can disrupt them. That actually ties into what you said about leaking original bias, right? Trimming from the middle seems s... | 5 | 0 | 2026-03-02T01:22:46 | StardockEngineer | false | null | 0 | o85k5mb | false | /r/LocalLLaMA/comments/1ricz8u/notice_qwen_35_reprocessing_the_prompt_every_time/o85k5mb/ | false | 5 |
t1_o85k4tn | What’s the PPL? And/or KLD but even just PPL would tell us a lot in this case. | 18 | 0 | 2026-03-02T01:22:38 | DistanceSolar1449 | false | null | 0 | o85k4tn | false | /r/LocalLLaMA/comments/1rianwb/running_qwen35_27b_dense_with_170k_context_at/o85k4tn/ | false | 18 |
t1_o85jw56 | Go dense; beyond being smarter, it will handle long context better.<br>For some quick translations I used Qwen3 30B-A3B 2507, but I turned to Ministral 3 14B to get better quality. But it was slow... so I ended up using Ministral 3 8B, which was smart enough to use my workflow, to deliver an uncensored translation an... | 5 | 0 | 2026-03-02T01:21:08 | brahh85 | false | null | 0 | o85jw56 | false | /r/LocalLLaMA/comments/1ridwl5/qwen3527b_iq3_vs_qwen35_35ba3m_q4_k_m/o85jw56/ | false | 5 |
t1_o85jvdc | Yeah, but it does require it | 0 | 1 | 2026-03-02T01:21:00 | SpicyWangz | false | null | 0 | o85jvdc | false | /r/LocalLLaMA/comments/1ri635s/13_months_since_the_deepseek_moment_how_far_have/o85jvdc/ | false | 0 |
t1_o85jv5z | Check what version of Ollama; on Docker I was pulling latest and it wasn't working, but 17.1 worked for me. That said, it's def slower on Ollama than llama.cpp or something | 2 | 0 | 2026-03-02T01:20:57 | Send_Boobs_Via_DM | false | null | 0 | o85jv5z | false | /r/LocalLLaMA/comments/1rdxfdu/qwen3535ba3b_is_a_gamechanger_for_agentic_coding/o85jv5z/ | false | 2 |
t1_o85jqns | I believe home companions are the future, just most people aren't used to the idea. And you have the working hardware available.<br>If I were you I'd run a simple experiment: print posters themed "Single? Let us end it: inexpensive secure AI companions and smart homes for a mere $20/month" and stick them around, just to s... | 1 | 0 | 2026-03-02T01:20:11 | 3dom | false | null | 0 | o85jqns | false | /r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o85jqns/ | false | 1 |
t1_o85joi4 | the quality is surprising, actually - I urge you to try it before you mock it! | 8 | 0 | 2026-03-02T01:19:49 | JohnTheNerd3 | false | null | 0 | o85joi4 | false | /r/LocalLLaMA/comments/1rianwb/running_qwen35_27b_dense_with_170k_context_at/o85joi4/ | false | 8 |
t1_o85jnfy | It’ll be semi useful. I don’t think some of his decisions are good. Using 4 bit attention is questionable, it’s gonna wreck model performance. Using nvlink is overkill, it won’t help the performance much at all (an all-reduce with hidden size = 5120 and BF16 activations across 128 collectives would be 1.3MB, which does... | 15 | 0 | 2026-03-02T01:19:38 | DistanceSolar1449 | false | null | 0 | o85jnfy | false | /r/LocalLLaMA/comments/1rianwb/running_qwen35_27b_dense_with_170k_context_at/o85jnfy/ | false | 15 |
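The 1.3MB figure in that comment checks out; a quick verification, assuming one BF16 activation is 2 bytes:

```python
# Per-token all-reduce payload: hidden size x BF16 bytes x collectives.
hidden, bf16_bytes, collectives = 5120, 2, 128
print(hidden * bf16_bytes * collectives / 1e6)  # 1.31072 -> ~1.3 MB
```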
t1_o85jlrn | Love to see projects like this, it certainly has its use cases | 1 | 0 | 2026-03-02T01:19:21 | bralynn2222 | false | null | 0 | o85jlrn | false | /r/LocalLLaMA/comments/1rhg3p4/baremetal_ai_booting_directly_into_llm_inference/o85jlrn/ | false | 1 |
t1_o85jivs | No.<br>That simple. | 10 | 0 | 2026-03-02T01:18:50 | grim-432 | false | null | 0 | o85jivs | false | /r/LocalLLaMA/comments/1rif3h5/mac_mini_m4_pro_24gb_local_llms_are_unusable_for/o85jivs/ | false | 10 |
t1_o85jbq8 | cline in vscode | 1 | 0 | 2026-03-02T01:17:38 | -Django | false | null | 0 | o85jbq8 | false | /r/LocalLLaMA/comments/1rie3yc/which_ide_to_code_with_qwen_35/o85jbq8/ | false | 1 |
t1_o85j2ev | Oh what fun. Linear hybrid memory with exponential prompt processing lol | 9 | 0 | 2026-03-02T01:15:59 | ArchdukeofHyperbole | false | null | 0 | o85j2ev | false | /r/LocalLLaMA/comments/1ricz8u/notice_qwen_35_reprocessing_the_prompt_every_time/o85j2ev/ | false | 9 |
t1_o85j03p | Knowledge is what facts it knows. Intelligence is the ability to work with those facts. | 1 | 0 | 2026-03-02T01:15:35 | Murgatroyd314 | false | null | 0 | o85j03p | false | /r/LocalLLaMA/comments/1ri635s/13_months_since_the_deepseek_moment_how_far_have/o85j03p/ | false | 1 |
t1_o85iv8x | Cool research, but since you need to train those hypernetworks, it's just not going to happen without major upfront compute spend, unless miraculously you want to finetune one of the 3-5 models they made those hypernetworks for.<br>A person wanting to do a finetune will see it, see that it's not compatible with their model, and go aw... | 1 | 0 | 2026-03-02T01:14:45 | FullOf_Bad_Ideas | false | null | 0 | o85iv8x | false | /r/LocalLLaMA/comments/1ri0n8p/llm_lora_on_the_fly_with_hypernetworks/o85iv8x/ | false | 1 |
t1_o85iuhi | I think OP's second paragraph is slightly off, and this is a major issue I've seen people come across on all models (that I recall): the AI is so goddamn sure of everything.<br>"OP wants advice on a 2026 coin, but it's 2024!" It spins on this issue for the majority of its thinking. It should just ask the user a follow-up q... | 1 | 0 | 2026-03-02T01:14:37 | mrfocus22 | false | null | 0 | o85iuhi | false | /r/LocalLLaMA/comments/1riej05/qwen35_thinks_its_2024_so_buying_a_2026_american/o85iuhi/ | false | 1 |
t1_o85irfs | My dad is alone :(. I tried giving him access to it, but he didn't like the idea of any AI, period. My mom is not technically savvy at all and has no desire...<br>Might be a theme... | 2 | 0 | 2026-03-02T01:14:05 | ubrtnk | false | null | 0 | o85irfs | false | /r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o85irfs/ | false | 2 |
t1_o85ioep | all hunyuan models, youtu, wedlm. Other Chinese providers don't have this weird restriction :\ | 2 | 0 | 2026-03-02T01:13:33 | 4baobao | false | null | 0 | o85ioep | false | /r/LocalLLaMA/comments/1riehh9/licensing_restrictions_for_tencent_models/o85ioep/ | false | 2 |
t1_o85im31 | 10x cost drop in 13 months matters less than the inference stack catching up — llama.cpp speculative decoding finally making small drafters viable. | 2 | 0 | 2026-03-02T01:13:09 | tom_mathews | false | null | 0 | o85im31 | false | /r/LocalLLaMA/comments/1ri635s/13_months_since_the_deepseek_moment_how_far_have/o85im31/ | false | 2 |
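For context, a schematic of the draft-then-verify loop behind the speculative decoding this comment mentions (greedy variant). The `draft_next` and `target_greedy` callables are hypothetical stand-ins, not llama.cpp's API:

```python
def speculative_step(ctx, draft_next, target_greedy, k=4):
    """One draft-and-verify step; returns the tokens accepted this step.

    draft_next(seq)    -> the drafter's greedy next token (assumed helper)
    target_greedy(seq) -> list of k+1 greedy predictions from the target
                          model, computed in a single forward pass, where
                          preds[i] follows ctx + proposal[:i] (assumed helper)
    """
    # 1. The small drafter proposes k tokens autoregressively (cheap).
    proposal = []
    for _ in range(k):
        proposal.append(draft_next(ctx + proposal))
    # 2. The large target model verifies all drafts in one forward pass.
    preds = target_greedy(ctx + proposal)
    accepted = []
    for i, tok in enumerate(proposal):
        if preds[i] != tok:            # first disagreement: keep the target's
            accepted.append(preds[i])  # token and discard the rest
            break
        accepted.append(tok)
    else:
        accepted.append(preds[k])      # every draft matched: free bonus token
    return accepted
```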
t1_o85ierj | Great datapoint. If you want to prove how much is firmware vs llama.cpp changes, a reproducible mini-matrix would be super useful:<br>- same GGUF + same flags (n_batch, n_gpu_layers, ctx, rope settings)<br>- report both pp and tg at 4k / 32k / 128k context<br>- include exact kernel + linux-firmware package + llama.cpp commit<br>... | 3 | 0 | 2026-03-02T01:11:55 | ikkiho | false | null | 0 | o85ierj | false | /r/LocalLLaMA/comments/1ri6yhb/the_last_amd_gpu_firmware_update_together_with/o85ierj/ | false | 3 |
t1_o85ie8m | Q4_K_M, trying to fit in a 4090 after Windows steals a bunch of my vram. | 1 | 0 | 2026-03-02T01:11:50 | k31thdawson | false | null | 0 | o85ie8m | false | /r/LocalLLaMA/comments/1riej05/qwen35_thinks_its_2024_so_buying_a_2026_american/o85ie8m/ | false | 1 |
t1_o85iddp | I think poor in practice.<br>They trained those hypernetworks with context size only up to 512 tokens, and only on 2B-7B models that are not top performers. It will be far from the performance of a 70-400B pre-trained model with the reference text put in context. | 1 | 0 | 2026-03-02T01:11:41 | FullOf_Bad_Ideas | false | null | 0 | o85iddp | false | /r/LocalLLaMA/comments/1ri0n8p/llm_lora_on_the_fly_with_hypernetworks/o85iddp/ | false | 1 |
t1_o85iaav | Okay, the user is asking who won the NBA in late 2025. Let me think about this.<br>First, I need to recall that my knowledge is current up to early 2026. The NBA season typically ends around June each year with the Finals. So the 2024-2025 season would conclude in June 2025. But wait, the user is asking about late 2025. ... | 1 | 0 | 2026-03-02T01:11:11 | alex_godspeed | false | null | 0 | o85iaav | false | /r/LocalLLaMA/comments/1rguzz2/qwen_35_cutoff_date_is_2024/o85iaav/ | false | 1 |
t1_o85i8t5 | Neovim with avante | 1 | 0 | 2026-03-02T01:10:56 | 10F1 | false | null | 0 | o85i8t5 | false | /r/LocalLLaMA/comments/1rie3yc/which_ide_to_code_with_qwen_35/o85i8t5/ | false | 1 |
t1_o85i3vn | FWIW, I've tried it with Gemini 3.1 Pro and Qwen 3.5 Plus.<br>Gemini 3.1 Pro: [https://pastebin.com/vvMk2pMi](https://pastebin.com/vvMk2pMi)<br>Qwen 3.5 Plus: [https://pastebin.com/Ew3q0m4H](https://pastebin.com/Ew3q0m4H)<br>The Gemini version didn't run on the first shot; I needed it to fix a JS issue. After that, it ran.<br>Qwen ver... | 2 | 0 | 2026-03-02T01:10:06 | dryadofelysium | false | null | 0 | o85i3vn | false | /r/LocalLLaMA/comments/1ri48pj/qwen35122ba10bggufq4_k_xlpipesscreensaver_oneshot/o85i3vn/ | false | 2 |
t1_o85hvbk | It’s because he’s running attention at INT4 (in order to take advantage of Ampere hardware support for INT4).<br>Attention quants better than SSM, but 4-bit attention is a brave/stupid move. Most people quant attention to Q8 for a reason. For example, Unsloth Q4_K_XL quants attention qkv to Q8 and gate to Q6.<br>That model... | 51 | 0 | 2026-03-02T01:08:39 | DistanceSolar1449 | false | null | 0 | o85hvbk | false | /r/LocalLLaMA/comments/1rianwb/running_qwen35_27b_dense_with_170k_context_at/o85hvbk/ | false | 51 |
t1_o85huj3 | [removed] | 1 | 0 | 2026-03-02T01:08:30 | [deleted] | true | null | 0 | o85huj3 | false | /r/LocalLLaMA/comments/1ri2irg/breaking_today_qwen_35_small/o85huj3/ | false | 1 |
t1_o85hrsu | Appreciate it bro, have a nice evening! | 1 | 0 | 2026-03-02T01:08:02 | Competitive_Book4151 | false | null | 0 | o85hrsu | false | /r/LocalLLaMA/comments/1rh9lll/i_want_to_build_an_opensource_ai_senate_a/o85hrsu/ | false | 1 |
t1_o85hpjt | The dense model is considered "smarter" for complex reasoning, logic, and consistent functional output, so the 27B | 10 | 0 | 2026-03-02T01:07:40 | OkBoysenberry2742 | false | null | 0 | o85hpjt | false | /r/LocalLLaMA/comments/1ridwl5/qwen3527b_iq3_vs_qwen35_35ba3m_q4_k_m/o85hpjt/ | false | 10 |
t1_o85hnxt | This project has helped me a lot, brother, thank you | 1 | 0 | 2026-03-02T01:07:23 | Suspicious_Candy7178 | false | null | 0 | o85hnxt | false | /r/LocalLLaMA/comments/1qy2fwe/built_a_comparison_openclaw_vs_memoryfirst_local/o85hnxt/ | false | 1 |
t1_o85hi65 | Ah, I see what you mean. Yes then the balance is on how much they actually listen to their subordinates. I mean, they clearly understood that Claude was the superior platform... While I have managers pushing Copilot as The Solution due to MS market share and marketing push. | 1 | 0 | 2026-03-02T01:06:24 | sweetbacon | false | null | 0 | o85hi65 | false | /r/LocalLLaMA/comments/1rhogov/the_us_used_anthropic_ai_tools_during_airstrikes/o85hi65/ | false | 1 |
t1_o85hgkg | This is the answer. | 1 | 0 | 2026-03-02T01:06:08 | kasparZ | false | null | 0 | o85hgkg | false | /r/LocalLLaMA/comments/1ria14c/dario_amodei_on_open_source_thoughts/o85hgkg/ | false | 1 |