name string | body string | score int64 | controversiality int64 | created timestamp[us] | author string | collapsed bool | edited timestamp[us] | gilded int64 | id string | locked bool | permalink string | stickied bool | ups int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
t1_o7y2sfh | That's what we are here. Before we get turned into a supply-chain risk too - then it's off to Discord. | 1 | 0 | 2026-02-28T21:26:28 | FPham | false | null | 0 | o7y2sfh | false | /r/LocalLLaMA/comments/1rguty0/get_your_local_models_in_order_anthropic_just_got/o7y2sfh/ | false | 1 |
t1_o7y2sao | Can you report if you're not a US citizen? | 1 | 0 | 2026-02-28T21:26:27 | Intelligent-Slip8325 | false | null | 0 | o7y2sao | false | /r/LocalLLaMA/comments/1rgu849/if_your_chutesai_subscription_was_unilaterally/o7y2sao/ | false | 1 |
t1_o7y2rgn | Thank you, very fresh repo. I think I was close to getting this working with skills files | 1 | 0 | 2026-02-28T21:26:20 | yes_yes_no_repeat | false | null | 0 | o7y2rgn | false | /r/LocalLLaMA/comments/1rgyb4r/local_manus/o7y2rgn/ | false | 1 |
t1_o7y2mmh | I agree completely. The quality and speed of this model on my 3090 with 8 experts blew me away | 1 | 0 | 2026-02-28T21:25:36 | aslto | false | null | 0 | o7y2mmh | false | /r/LocalLLaMA/comments/1rh43za/qwen_3535ba3b_is_beyond_expectations_its_replaced/o7y2mmh/ | false | 1 |
t1_o7y2j1g | My microwave broke this month | -1 | 1 | 2026-02-28T21:25:04 | KURD_1_STAN | false | null | 0 | o7y2j1g | false | /r/LocalLLaMA/comments/1rhe4oo/qwen_35_27b_and_qwen3535ba3b_ran_locally_on_my/o7y2j1g/ | false | -1 |
t1_o7y2ipp | And so begins the downfall of Nvidia... If this is real anyways... | 1 | 0 | 2026-02-28T21:25:01 | lakimens | false | null | 0 | o7y2ipp | false | /r/LocalLLaMA/comments/1rh095c/deepseek_v4_will_be_released_next_week_and_will/o7y2ipp/ | false | 1 |
t1_o7y2iin | Number of parameters is just a result of the number of layers in the architecture. You can't really compare old models to the new ones this way. Also, isn't 27B smarter than 32B? So maybe it's the other way around: 32B -> 27B etc | 1 | 0 | 2026-02-28T21:24:59 | jacek2023 | false | null | 0 | o7y2iin | false | /r/LocalLLaMA/comments/1rheepm/qwen_model_sizes_over_time/o7y2iin/ | false | 1 |
t1_o7y2g3a | FT’s China team is just as bad as any other newspaper’s. They don’t seem to have any good sources and their articles on China are frequently inaccurate. And not “slightly” inaccurate in the sense that they get some numbers wrong. Inaccurate, as in they completely misreport the actual situation on the ground. They’ve done ... | 4 | 0 | 2026-02-28T21:24:38 | June1994 | false | null | 0 | o7y2g3a | false | /r/LocalLLaMA/comments/1rh095c/deepseek_v4_will_be_released_next_week_and_will/o7y2g3a/ | false | 4 |
t1_o7y2eir | The /r/rag community is also awesome and, if possible, even nerdier | 4 | 0 | 2026-02-28T21:24:24 | Much-Researcher6135 | false | null | 0 | o7y2eir | false | /r/LocalLLaMA/comments/1rh9u4r/this_sub_is_incredible/o7y2eir/ | false | 4 |
t1_o7y2ee0 | Tired misclick, 14900F, my bad | 1 | 0 | 2026-02-28T21:24:23 | tableball35 | false | null | 0 | o7y2ee0 | false | /r/LocalLLaMA/comments/1rh1q8j/i919400f_rtx_4070_super_12gb_32gb_ddr5_ram/o7y2ee0/ | false | 1 |
t1_o7y2e6h | will give it a try thanks | 1 | 0 | 2026-02-28T21:24:21 | thibautrey | false | null | 0 | o7y2e6h | false | /r/LocalLLaMA/comments/1rgwryb/speculative_decoding_qwen35_27b/o7y2e6h/ | false | 1 |
t1_o7y2cj4 | It's more the fallout from this. What if next week someone tells the admin about Chinese models taking a big chunk of US companies' compute? Anthropic as a supply-chain risk feels very peculiar when you compare who the other players are. | 2 | 0 | 2026-02-28T21:24:06 | FPham | false | null | 0 | o7y2cj4 | false | /r/LocalLLaMA/comments/1rguty0/get_your_local_models_in_order_anthropic_just_got/o7y2cj4/ | false | 2 |
t1_o7y2bbi | Holy smokes, can I ask what motherboard lets you do that? | 3 | 0 | 2026-02-28T21:23:55 | Much-Researcher6135 | false | null | 0 | o7y2bbi | false | /r/LocalLLaMA/comments/1rh9u4r/this_sub_is_incredible/o7y2bbi/ | false | 3 |
t1_o7y2b11 | Yep. [https://www.reddit.com/r/LocalLLaMA/comments/1rgtxry/is_qwen35_a_coding_game_changer_for_anyone_else/](https://www.reddit.com/r/LocalLLaMA/comments/1rgtxry/is_qwen35_a_coding_game_changer_for_anyone_else/) For filling in the knowledge gaps, I just give it some instructions to tell it to confirm its knowle... | 1 | 0 | 2026-02-28T21:23:52 | paulgear | false | null | 0 | o7y2b11 | false | /r/LocalLLaMA/comments/1rh43za/qwen_3535ba3b_is_beyond_expectations_its_replaced/o7y2b11/ | false | 1 |
t1_o7y27sh | The sizes of the models are a function of server GPU offerings from Nvidia. I.e. they have a variety of models that have 40GB or 48GB of VRAM for a relatively adequate price, hence there are so many 20-30B models that can comfortably fit into a single such card and be used by a small business. Then there are higher-level G... | 3 | 0 | 2026-02-28T21:23:24 | No-Refrigerator-1672 | false | null | 0 | o7y27sh | false | /r/LocalLLaMA/comments/1rh9ygz/is_anyone_else_waiting_for_a_6070b_moe_with_810b/o7y27sh/ | false | 3 |
t1_o7y26p7 | I can't get it to install at all. The needed dlls are installed, it keeps failing on pytorch install. How do I install it under comfyui portable? | 1 | 0 | 2026-02-28T21:23:14 | Ok_Cover890 | false | null | 0 | o7y26p7 | false | /r/LocalLLaMA/comments/1qetgy1/experimental_pytorch_271_backports_for_kepler_20/o7y26p7/ | false | 1 |
t1_o7y21n7 | I'm running 122b at iq4 with MoE on cpu. It takes all the vram and 45gb dram. Yeah 27b on vram would be better than a smaller 122b quant for sure, but no reason to do that. | 1 | 0 | 2026-02-28T21:22:30 | gtrak | false | null | 0 | o7y21n7 | false | /r/LocalLLaMA/comments/1rgtxry/is_qwen35_a_coding_game_changer_for_anyone_else/o7y21n7/ | false | 1 |
t1_o7y1w1i | That's what blew me away with Qwen3.5 - I didn't really need anything. I just told it to implement all the tasks, and it did it. I just left it on overnight again on a new task after I wrote the OP and it did the same thing again. I'm just getting it to write me a report now about what it did, but it looks solid. | 1 | 0 | 2026-02-28T21:21:40 | paulgear | false | null | 0 | o7y1w1i | false | /r/LocalLLaMA/comments/1rgtxry/is_qwen35_a_coding_game_changer_for_anyone_else/o7y1w1i/ | false | 1 |
t1_o7y1tt3 | Very good point. I haven’t tested chains longer than 4 agents so I don’t have good data on this. At the same time, in our fan-out benchmark, when two “specialists” KV-caches get sequentially injected into an “aggregator”, accuracy drops harder than expected, especially on 7B. Longer chain experiments are on the list... | 0 | 0 | 2026-02-28T21:21:20 | proggmouse | false | null | 0 | o7y1tt3 | false | /r/LocalLLaMA/comments/1rh802w/what_if_llm_agents_passed_kvcache_to_each_other/o7y1tt3/ | false | 0 |
t1_o7y1t5a | Feels pretty surreal - the chronology of this. They spent the last week telling everyone how China plays unfairly... and boom. | 5 | 0 | 2026-02-28T21:21:14 | FPham | false | null | 0 | o7y1t5a | false | /r/LocalLLaMA/comments/1rguty0/get_your_local_models_in_order_anthropic_just_got/o7y1t5a/ | false | 5 |
t1_o7y1p67 | [https://arxiv.org/pdf/2601.06002](https://arxiv.org/pdf/2601.06002) The Molecular Structure of Thought: Mapping the Topology of Long Chain-of-Thought Reasoning. Wanted to share this too. By Bytedance. Don't let the title trip you up; the paper is fire. | 4 | 0 | 2026-02-28T21:20:38 | valkarias | false | null | 0 | o7y1p67 | false | /r/LocalLLaMA/comments/1rh6pru/google_found_that_longer_chain_of_thought/o7y1p67/ | false | 4 |
t1_o7y1n5k | Those got cleared after I uninstalled. But, before I try the force resume, I'll reinstall and let it get as far as it gets. | 1 | 0 | 2026-02-28T21:20:20 | SmChocolateBunnies | false | null | 0 | o7y1n5k | false | /r/LocalLLaMA/comments/1rgkzlo/realtime_speech_to_speech_engine_runs_fully_local/o7y1n5k/ | false | 1 |
t1_o7y1ipx | That's what I'm thinking too. | 2 | 0 | 2026-02-28T21:19:40 | FPham | false | null | 0 | o7y1ipx | false | /r/LocalLLaMA/comments/1rguty0/get_your_local_models_in_order_anthropic_just_got/o7y1ipx/ | false | 2 |
t1_o7y1h0y | QAT is one of the secret levers all the big model releasers are pulling, yes. It makes sense to do both QAT and then QAD where a quant fits nicely; QAT provides a really good end-product quant. | 2 | 0 | 2026-02-28T21:19:26 | Phaelon74 | false | null | 0 | o7y1h0y | false | /r/LocalLLaMA/comments/1rgel19/new_qwen3535ba3b_unsloth_dynamic_ggufs_benchmarks/o7y1h0y/ | false | 2 |
t1_o7y1h2b | Thanks, my friend, I appreciate it. Learning from scratch was mostly about understanding the tradeoffs of architecture under hardware constraints. Still learning and refining iterations. | 3 | 0 | 2026-02-28T21:19:26 | zemondza | false | null | 0 | o7y1h2b | false | /r/LocalLLaMA/comments/1rhe790/my_frends_trained_and_benchmarked_4_diffusion/o7y1h2b/ | false | 3 |
t1_o7y177d | Your bus speed is only 120GB/s, which is the limiting factor. MLX should be faster than GGUF at the same bit size. | 4 | 0 | 2026-02-28T21:17:57 | zipzag | false | null | 0 | o7y177d | false | /r/LocalLLaMA/comments/1rh43za/qwen_3535ba3b_is_beyond_expectations_its_replaced/o7y177d/ | false | 4 |
t1_o7y1721 | MLX version? | 1 | 0 | 2026-02-28T21:17:56 | mathbrot | false | null | 0 | o7y1721 | false | /r/LocalLLaMA/comments/1rh43za/qwen_3535ba3b_is_beyond_expectations_its_replaced/o7y1721/ | false | 1 |
t1_o7y14ka | Maybe you two should see other quants. | 17 | 0 | 2026-02-28T21:17:33 | FPham | false | null | 0 | o7y14ka | false | /r/LocalLLaMA/comments/1rh43za/qwen_3535ba3b_is_beyond_expectations_its_replaced/o7y14ka/ | false | 17 |
t1_o7y149w | IBM Granite 4 H Tiny with a Wikipedia offline clone and RAG. | 2 | 0 | 2026-02-28T21:17:31 | Technical-Earth-3254 | false | null | 0 | o7y149w | false | /r/LocalLLaMA/comments/1rhcs8p/tiny_small_faster_models_for_13_year_old_laptop/o7y149w/ | false | 2 |
t1_o7y12ue | It gets faster at 4-bit, but then that's really when you have to, not when you can choose. | 3 | 0 | 2026-02-28T21:17:17 | FPham | false | null | 0 | o7y12ue | false | /r/LocalLLaMA/comments/1rh43za/qwen_3535ba3b_is_beyond_expectations_its_replaced/o7y12ue/ | false | 3 |
t1_o7y11cy | I have a huge respect for anyone training a model from scratch. Sorry for lack of substance in the comment | 15 | 0 | 2026-02-28T21:17:04 | Medium_Chemist_4032 | false | null | 0 | o7y11cy | false | /r/LocalLLaMA/comments/1rhe790/my_frends_trained_and_benchmarked_4_diffusion/o7y11cy/ | false | 15 |
t1_o7y10sc | I will be messaging you in 7 days on [**2026-03-07 21:16:19 UTC**](http://www.wolframalpha.com/input/?i=2026-03-07%2021:16:19%20UTC%20To%20Local%20Time) to remind you of [**this link**](https://www.reddit.com/r/LocalLLaMA/comments/1n4garp/creating_the_brain_behind_dumb_models/o7y0w91/?context=3) [**CLICK THIS LINK**](... | 1 | 0 | 2026-02-28T21:16:59 | RemindMeBot | false | null | 0 | o7y10sc | false | /r/LocalLLaMA/comments/1n4garp/creating_the_brain_behind_dumb_models/o7y10sc/ | false | 1 |
t1_o7y10j0 | [meituan-longcat/LongCat-Flash-Lite](https://huggingface.co/meituan-longcat/LongCat-Flash-Lite) is a 69B A3B, so pretty nice | 1 | 0 | 2026-02-28T21:16:56 | random-tomato | false | null | 0 | o7y10j0 | false | /r/LocalLLaMA/comments/1rh9ygz/is_anyone_else_waiting_for_a_6070b_moe_with_810b/o7y10j0/ | false | 1 |
t1_o7y0w91 | RemindMe! 1 week Viz | 1 | 0 | 2026-02-28T21:16:19 | MrMag00 | false | null | 0 | o7y0w91 | false | /r/LocalLLaMA/comments/1n4garp/creating_the_brain_behind_dumb_models/o7y0w91/ | false | 1 |
t1_o7y0uxd | Personally running 35B on my Spark. 16 parallel with a 2048k context. Allows for multiple agent sessions with context caching and it's pretty fast (though not simultaneously). I find 122B great for one-on-one chats in OpenWebUI, but it's a bit too slow on the Spark for interactive agent work. This new line has forced me to s... | 1 | 0 | 2026-02-28T21:16:07 | shinkamui | false | null | 0 | o7y0uxd | false | /r/LocalLLaMA/comments/1rgvma8/which_size_of_qwen35_are_you_planning_to_run/o7y0uxd/ | false | 1 |
t1_o7y0utf | Good for u, I have a microwave and a bicycle. | -1 | 1 | 2026-02-28T21:16:06 | tomakorea | false | null | 0 | o7y0utf | false | /r/LocalLLaMA/comments/1rhe4oo/qwen_35_27b_and_qwen3535ba3b_ran_locally_on_my/o7y0utf/ | false | -1 |
t1_o7y0up0 | yappy model but it gets to the finishing line. | 6 | 0 | 2026-02-28T21:16:05 | FPham | false | null | 0 | o7y0up0 | false | /r/LocalLLaMA/comments/1rh43za/qwen_3535ba3b_is_beyond_expectations_its_replaced/o7y0up0/ | false | 6 |
t1_o7y0sk2 | Honestly? get an ik_llama quant of 122B or an unsloth quant that leaves you with 70-100k of context at f16 kv cache after fitting it all in vram. I'm using the IQ2_KL from ubergarm to fit into 2x 3090's and getting just over 50 tk/s and about 600 pp/s | 1 | 0 | 2026-02-28T21:15:46 | jwpbe | false | null | 0 | o7y0sk2 | false | /r/LocalLLaMA/comments/1rgtxry/is_qwen35_a_coding_game_changer_for_anyone_else/o7y0sk2/ | false | 1 |
t1_o7y0mwu | You can use any model, e.g. `ollama_chat/deepseek-v3.1:671b-cloud`. Llama3.2 is only in the readme file :) | 0 | 0 | 2026-02-28T21:14:55 | ivanantonijevic | false | null | 0 | o7y0mwu | false | /r/LocalLLaMA/comments/1rhcxn2/mate_selfhosted_multiagent_system_with_ollama/o7y0mwu/ | false | 0 |
t1_o7y0mq5 | 24 GB VRAM total? I'd be trying [Qwen3.5-27B-UD-Q5_K_XL](https://huggingface.co/unsloth/Qwen3.5-27B-GGUF/blob/main/Qwen3.5-27B-UD-Q5_K_XL.gguf) (once it gets respun) or [Qwen3.5-35B-A3B-UD-Q4_K_XL](https://huggingface.co/unsloth/Qwen3.5-35B-A3B-GGUF/blob/main/Qwen3.5-35B-A3B-UD-Q4_K_XL.gguf). I can just fit [Qwen... | 1 | 0 | 2026-02-28T21:14:54 | paulgear | false | null | 0 | o7y0mq5 | false | /r/LocalLLaMA/comments/1rgtxry/is_qwen35_a_coding_game_changer_for_anyone_else/o7y0mq5/ | false | 1 |
t1_o7y0k54 | You people are so delusional. You would have chastised the person who invented the fork by saying "Yes but can you eat soup with it?" | 1 | 0 | 2026-02-28T21:14:30 | TheBurkMeister | false | null | 0 | o7y0k54 | false | /r/LocalLLaMA/comments/1r9gve8/i_feel_left_behind_what_is_special_about_openclaw/o7y0k54/ | false | 1 |
t1_o7y0isd | 👍 | 1 | 0 | 2026-02-28T21:14:18 | getmevodka | false | null | 0 | o7y0isd | false | /r/LocalLLaMA/comments/1rhe4oo/qwen_35_27b_and_qwen3535ba3b_ran_locally_on_my/o7y0isd/ | false | 1 |
t1_o7y0ck9 | Try something like this if you want MoE: [https://huggingface.co/bknyaz/Qwen3-Next-80B-A3B-Instruct-REAM](https://huggingface.co/bknyaz/Qwen3-Next-80B-A3B-Instruct-REAM) | 0 | 0 | 2026-02-28T21:13:22 | s1mplyme | false | null | 0 | o7y0ck9 | false | /r/LocalLLaMA/comments/1rh9ygz/is_anyone_else_waiting_for_a_6070b_moe_with_810b/o7y0ck9/ | false | 0 |
t1_o7y0cb2 | Prefix caching reuses computation for identical text across requests. My system transfers computation between agents that have different prompts. With prefix caching, Agent A still has to generate text and Agent B still has to process it. AVP skips both – Agent A never generates text, Agent B never processes it. | -2 | 0 | 2026-02-28T21:13:20 | proggmouse | false | null | 0 | o7y0cb2 | false | /r/LocalLLaMA/comments/1rh802w/what_if_llm_agents_passed_kvcache_to_each_other/o7y0cb2/ | false | -2 |
t1_o7y08op | What are you using them for generally with that repeat penalty? I'm using 122b with opencode and having OK results but it could use a little push towards deeper solutions, it tends to latch onto the first one it finds even if it's not the actual solution. | 1 | 0 | 2026-02-28T21:12:47 | jwpbe | false | null | 0 | o7y08op | false | /r/LocalLLaMA/comments/1rhchvi/qwen35_family_running_notes/o7y08op/ | false | 1 |
t1_o7y07bm | And these government contracts aren't even lucrative. Boggles my mind. | 2 | 0 | 2026-02-28T21:12:35 | 20ol | false | null | 0 | o7y07bm | false | /r/LocalLLaMA/comments/1rh2lew/openai_pivot_investors_love/o7y07bm/ | false | 2 |
t1_o7y03sw | I have been in this sub since early Llama, and most things I learned about local AI I learned here. This sub is very much needed to keep our freedom and privacy 🙂 | 21 | 0 | 2026-02-28T21:12:04 | simplir | false | null | 0 | o7y03sw | false | /r/LocalLLaMA/comments/1rh9u4r/this_sub_is_incredible/o7y03sw/ | false | 21 |
t1_o7y02h3 | What’s better? | 1 | 0 | 2026-02-28T21:11:52 | GrungeWerX | false | null | 0 | o7y02h3 | false | /r/LocalLLaMA/comments/1rgtxry/is_qwen35_a_coding_game_changer_for_anyone_else/o7y02h3/ | false | 1 |
t1_o7y01nv | Still does - well maybe not so much with the world-knowledge as time passes, but that's what RAG is for. | 3 | 0 | 2026-02-28T21:11:45 | DinoAmino | false | null | 0 | o7y01nv | false | /r/LocalLLaMA/comments/1rh46g2/why_some_still_playing_with_old_models_nostalgia/o7y01nv/ | false | 3 |
t1_o7y00ds | What do you suggest instead? You can see the ones I've tried in the OP. I currently use OpenCode because it's Open Source and the TUI is less buggy for me than Claude Code's. I run Linux on my laptop, so maybe it's better on that than on Windows or Mac OS? I don't think OpenCode is the best thing since sliced bread... | 2 | 0 | 2026-02-28T21:11:33 | paulgear | false | null | 0 | o7y00ds | false | /r/LocalLLaMA/comments/1rgtxry/is_qwen35_a_coding_game_changer_for_anyone_else/o7y00ds/ | false | 2 |
t1_o7xzsja | On a side note: oss-120b is not a very good model in non-English languages. However, neither is Qwen-3.5-35B :) | 0 | 0 | 2026-02-28T21:10:22 | netikas | false | null | 0 | o7xzsja | false | /r/LocalLLaMA/comments/1rh43za/qwen_3535ba3b_is_beyond_expectations_its_replaced/o7xzsja/ | false | 0 |
t1_o7xzpf2 | Not hallucination — the opposite. The problem is that persistent agents lose earned reasoning quality during context compaction: not facts, but judgment texture, negative knowledge (what not to do), and methodology. UCS preserves that through a toroidal routing engine + Emergent Judgment Protocol that survives full com... | -2 | 0 | 2026-02-28T21:09:55 | TheBrierFox | false | null | 0 | o7xzpf2 | false | /r/LocalLLaMA/comments/1rhcjd3/p_ucs_v12_judgment_preservation_in_persistent_ai/o7xzpf2/ | false | -2 |
t1_o7xzpgd | You described very well what I think too. It's a club of enthusiasts who want their slice of the pie and do so by helping one another and always rooting for the next good open model. | 2 | 0 | 2026-02-28T21:09:55 | thecalmgreen | false | null | 0 | o7xzpgd | false | /r/LocalLLaMA/comments/1rh9u4r/this_sub_is_incredible/o7xzpgd/ | false | 2 |
t1_o7xzn0m | Q4? Or Q8? Surely not Q8... I'm finding Q5_K_XL is working great and still contained to the GPU? | 1 | 0 | 2026-02-28T21:09:34 | hay-yo | false | null | 0 | o7xzn0m | false | /r/LocalLLaMA/comments/1rh43za/qwen_3535ba3b_is_beyond_expectations_its_replaced/o7xzn0m/ | false | 1 |
t1_o7xzmac | Yes, but not really. If you compare the performance on the classic benchmarks like MMLU or whatnot, the scores might be similar. But humans (and llm-as-a-judge) strongly prefer non-quantized models. I've seen this effect myself even in FP8 quantization -- I work in one of the subfrontier LLM labs and measure the final ... | 0 | 0 | 2026-02-28T21:09:27 | netikas | false | null | 0 | o7xzmac | false | /r/LocalLLaMA/comments/1rh43za/qwen_3535ba3b_is_beyond_expectations_its_replaced/o7xzmac/ | false | 0 |
t1_o7xzfrs | 300w + undervolt. Maybe 250w, but also with undervolt. And get nvlink for them \*if\* you manage to find it for cheap (it is not critical). They are running fine on my 9700k rig when I boot it (I've mostly switched to my other one by now). Another choice that you will have to deal with is cooling. Turbo versions of 309... | 1 | 0 | 2026-02-28T21:08:29 | Prudent-Ad4509 | false | null | 0 | o7xzfrs | false | /r/LocalLLaMA/comments/1rhdjqf/havering_between_powerlimmed_dual_3090s_and_a/o7xzfrs/ | false | 1 |
t1_o7xzfqq | More leverage that will pivot by Wednesday, gotta have a liquidity event per weekend, it's now the law. | 1 | 0 | 2026-02-28T21:08:28 | sirebral | false | null | 0 | o7xzfqq | false | /r/LocalLLaMA/comments/1rguty0/get_your_local_models_in_order_anthropic_just_got/o7xzfqq/ | false | 1 |
t1_o7xzd3d | Good luck running agents with llama3.2 | 2 | 0 | 2026-02-28T21:08:05 | panic_in_the_galaxy | false | null | 0 | o7xzd3d | false | /r/LocalLLaMA/comments/1rhcxn2/mate_selfhosted_multiagent_system_with_ollama/o7xzd3d/ | false | 2 |
t1_o7xz5u2 | I just want to see if this time speculative decoding will actually provide a speedup | 3 | 0 | 2026-02-28T21:06:59 | Dany0 | false | null | 0 | o7xz5u2 | false | /r/LocalLLaMA/comments/1rgzul5/are_you_ready_for_small_qwens/o7xz5u2/ | false | 3 |
t1_o7xz5jw |  | 12 | 0 | 2026-02-28T21:06:57 | jacek2023 | false | null | 0 | o7xz5jw | false | /r/LocalLLaMA/comments/1rh9u4r/this_sub_is_incredible/o7xz5jw/ | false | 12 |
t1_o7xyzal | Which is the bigger number: 60gb in bf16 or 60gb in mxfp4? | 1 | 0 | 2026-02-28T21:06:00 | netikas | false | null | 0 | o7xyzal | false | /r/LocalLLaMA/comments/1rh43za/qwen_3535ba3b_is_beyond_expectations_its_replaced/o7xyzal/ | false | 1 |
t1_o7xyvxg | You are wrong. You are confusing MXFP4 with NVFP4. They are not the same. You need NVFP4 for Blackwell. https://developer.nvidia.com/blog/introducing-nvfp4-for-efficient-and-accurate-low-precision-inference/ Regardless, we aren't talking about Blackwell here. We are talking about Strix Halo. Which doesn't even have nati... | 2 | 0 | 2026-02-28T21:05:31 | fallingdowndizzyvr | false | null | 0 | o7xyvxg | false | /r/LocalLLaMA/comments/1rh43za/qwen_3535ba3b_is_beyond_expectations_its_replaced/o7xyvxg/ | false | 2 |
t1_o7xyvb5 | RTX 3060 12GB, R5 3600, 64GB DDR4. IQ4_XS of Qwen3.5-35b-a3b with 204800 ctx at KV q8 with full expert offload uses around 7GB VRAM and 25GB RAM. Speed is around 33 t/s, so expect slightly more with your newer card. You can keep more experts on the GPU; for me that only slightly increased speed tho. It's totally usable, PP co... | 1 | 0 | 2026-02-28T21:05:25 | cookieGaboo24 | false | null | 0 | o7xyvb5 | false | /r/LocalLLaMA/comments/1rhcnbt/best_coding_model_to_run_entirely_on_12gb_vram/o7xyvb5/ | false | 1 |
t1_o7xyjsz | I use openframe [https://www.reddit.com/r/LocalLLaMA/comments/1nsnahe/september_2025_benchmarks_3x3090/](https://www.reddit.com/r/LocalLLaMA/comments/1nsnahe/september_2025_benchmarks_3x3090/) I have a fan on the CPU, but my fans on the 3090s are mostly silent if I set power correctly. I believe even with low power they... | 1 | 0 | 2026-02-28T21:03:40 | jacek2023 | false | null | 0 | o7xyjsz | false | /r/LocalLLaMA/comments/1rhdjqf/havering_between_powerlimmed_dual_3090s_and_a/o7xyjsz/ | false | 1 |
t1_o7xyg88 | [removed] | 1 | 0 | 2026-02-28T21:03:07 | [deleted] | true | null | 0 | o7xyg88 | false | /r/LocalLLaMA/comments/1r7fgqd/built_a_3_in_1_colab_notebook_with_qwen3tts_voice/o7xyg88/ | false | 1 |
t1_o7xyflf | Yeah, this comment should have way more upvotes - and would have if more of the OG were still around. | 4 | 0 | 2026-02-28T21:03:02 | DinoAmino | false | null | 0 | o7xyflf | false | /r/LocalLLaMA/comments/1rh46g2/why_some_still_playing_with_old_models_nostalgia/o7xyflf/ | false | 4 |
t1_o7xyauy | I'm no expert on that, but my normal practice is to try the biggest thing that will fit in my hardware with full context. Gotta wait longer for the download, though. ;-) | 1 | 0 | 2026-02-28T21:02:20 | paulgear | false | null | 0 | o7xyauy | false | /r/LocalLLaMA/comments/1rgtxry/is_qwen35_a_coding_game_changer_for_anyone_else/o7xyauy/ | false | 1 |
t1_o7xy8so | the self-hosted route is underrated for exactly this reason. more ops overhead upfront but you own the entire data lifecycle - legal just reviews your own infra controls, no DPA negotiations or vendor retention clause debates. the week you spent emailing sales reps at providers could've been spent getting Axolotl runn... | 0 | 0 | 2026-02-28T21:02:01 | theagentledger | false | null | 0 | o7xy8so | false | /r/LocalLLaMA/comments/1rhb1xb/fine_tuning_on_proprietary_data_is_way_harder_to/o7xy8so/ | false | 0 |
t1_o7xy5zw | the O(n²) scaling point is the real clincher here. text-based agent chains have a fundamental quadratic problem that prefix caching can't actually fix since each hop introduces genuinely new tokens. you're not caching a shared prefix - you're dealing with a growing unique context at every hop. curious whether accuracy... | 4 | 0 | 2026-02-28T21:01:36 | theagentledger | false | null | 0 | o7xy5zw | false | /r/LocalLLaMA/comments/1rh802w/what_if_llm_agents_passed_kvcache_to_each_other/o7xy5zw/ | false | 4 |
t1_o7xy3lt | Its surprisingly really good for its size, close in perf to 122b a10b | 1 | 0 | 2026-02-28T21:01:14 | Emotional-Baker-490 | false | null | 0 | o7xy3lt | false | /r/LocalLLaMA/comments/1rh3thm/rip_gemma_leave_your_memories_here/o7xy3lt/ | false | 1 |
t1_o7xy1wi | So again: if you structure your prompt in such a way that the whole message history comes first, then comes agent role prompt, then it generates - then how your system is any different than prefix caching? | 7 | 0 | 2026-02-28T21:00:59 | No-Refrigerator-1672 | false | null | 0 | o7xy1wi | false | /r/LocalLLaMA/comments/1rh802w/what_if_llm_agents_passed_kvcache_to_each_other/o7xy1wi/ | false | 7 |
t1_o7xxz6i | It should actually work with llama.cpp if it is using chat/completions. Enter lmstudio or ollama as the provider and fill in the fields accordingly (note: this requires you manually adding the /v1 at the end of your url). I've tested lmstudio, ollama, and OpenAI, hence those are in the readme. I'll try to get around to... | 1 | 0 | 2026-02-28T21:00:34 | _WaterBear | false | null | 0 | o7xxz6i | false | /r/LocalLLaMA/comments/1rhbfya/shunyanet_sentinel_a_selfhosted_rss_aggregator/o7xxz6i/ | false | 1 |
t1_o7xxx26 | Gemma3:1b is alright for small texts, the vocab isn't as bad, and ZeroGPT gave it a 0% | 1 | 0 | 2026-02-28T21:00:15 | xXRickroller01Xx | false | null | 0 | o7xxx26 | false | /r/LocalLLaMA/comments/1lrlgvk/best_local_humanizer_tool/o7xxx26/ | false | 1 |
t1_o7xxqpo | There are benchmarks for concurrent requests as well on spark-arena.com. Each local model varies a lot on their pp and tg performance numbers over concurrency. | 3 | 0 | 2026-02-28T20:59:17 | raphaelamorim | false | null | 0 | o7xxqpo | false | /r/LocalLLaMA/comments/1rhbtnw/the_state_of_openweights_llms_performance_on/o7xxqpo/ | false | 3 |
t1_o7xxq98 | So what did you replace it with then? | 1 | 0 | 2026-02-28T20:59:13 | GrungeWerX | false | null | 0 | o7xxq98 | false | /r/LocalLLaMA/comments/1rgtxry/is_qwen35_a_coding_game_changer_for_anyone_else/o7xxq98/ | false | 1 |
t1_o7xxpf1 | Great idea, want to test it. Can you share your prompt, command, and results? If it works, next I'll try to make a CAS calculator. Could be a good test of agentic behaviour. | 1 | 0 | 2026-02-28T20:59:06 | Several-Tax31 | false | null | 0 | o7xxpf1 | false | /r/LocalLLaMA/comments/1rhdddm/qwen_35_122ba10b_q3_k_xl_ud_actually_passed_my/o7xxpf1/ | false | 1 |
t1_o7xxjho | could you describe in a few words what that AI hallucination is about and why we might need it? | 2 | 0 | 2026-02-28T20:58:13 | MelodicRecognition7 | false | null | 0 | o7xxjho | false | /r/LocalLLaMA/comments/1rhcjd3/p_ucs_v12_judgment_preservation_in_persistent_ai/o7xxjho/ | false | 2 |
t1_o7xx7ad | GLM is a beast in OpenCode. Even a heavily quantized 4.7 flash can work autonomously for hours without a single failed tool call IME. Minimax sounds, on paper, like the stronger model. But it only ever made it 4mins in OpenCode without botching the tool calls so badly I had to manually start it again. No idea why. | 4 | 0 | 2026-02-28T20:56:21 | Zestyclose839 | false | null | 0 | o7xx7ad | false | /r/LocalLLaMA/comments/1rgokw1/a_monthly_update_to_my_where_are_openweight/o7xx7ad/ | false | 4 |
t1_o7xx2q1 | Man, I have 48GB VRAM and I'm frustrated because to get good intelligence I need small context. If I get something smaller I get more context, but as context grows it gets sluggish... and with lower intelligence it just becomes buggy. | 1 | 0 | 2026-02-28T20:55:39 | Somarring | false | null | 0 | o7xx2q1 | false | /r/LocalLLaMA/comments/1rgtxry/is_qwen35_a_coding_game_changer_for_anyone_else/o7xx2q1/ | false | 1 |
t1_o7xx2ki | My two cents. Firstly, has anyone worked out how to use the NPU yet? On my current GPU-only setup, speed is proportional to power, i.e. clock speed, so increase GPU power. Running Ubuntu 25.10, with the 24 versions of ROCm and amdgpu. Using Ollama so I can work out exact speeds currently. However, increase to full power, ty... | 1 | 0 | 2026-02-28T20:55:38 | hay-yo | false | null | 0 | o7xx2ki | false | /r/LocalLLaMA/comments/1re9h4r/some_qwen35_benchmarks_on_strix_halo_llamacpp/o7xx2ki/ | false | 1 |
t1_o7xwyn4 | Note: I daily drove CachyOS from 2021-2024 on a laptop and PC. Laptop had a Ryzen 5 4500U (Zen 2) with 8GB RAM. PC had a Ryzen 3 3200G (Zen+), 16GB RAM and an RX 6600 XT in the final year. CachyOS is definitely a power user's distro. It is Arch Linux with every performance optimization under the sun. It wouldn't hold y... | 1 | 0 | 2026-02-28T20:55:03 | Fresh_Finance9065 | false | null | 0 | o7xwyn4 | false | /r/LocalLLaMA/comments/1rgyd8p/switching_from_windows_to_linux_what_distro_to/o7xwyn4/ | false | 1 |
t1_o7xwx4a | I feel like doing single use is slowly falling behind tho, all the big players are trying to get people to use a dozen terminals filled with parallel agents and I'm slowly beginning to agree. It might take a while for the best path forward to crystallize but I feel just staying on a single loop/single thread isn't quit... | 1 | 0 | 2026-02-28T20:54:49 | ethereal_intellect | false | null | 0 | o7xwx4a | false | /r/LocalLLaMA/comments/1rhbtnw/the_state_of_openweights_llms_performance_on/o7xwx4a/ | false | 1 |
t1_o7xws2c | I was able to get gpt-oss to not include anything extra 90% of the time, and only occasionally would it include some extra text with my prompt. But with Qwen3.5 it includes all the thinking. I am updating my script to manually remove the thinking, but was hoping for a way to avoid it as much as possible. | 1 | 0 | 2026-02-28T20:54:04 | jpc82 | false | null | 0 | o7xws2c | false | /r/LocalLLaMA/comments/1rhcj7b/qwen35_with_lm_studio_api_without_thinking_output/o7xws2c/ | false | 1 |
t1_o7xwqun | Link | 1 | 0 | 2026-02-28T20:53:53 | Quiet_Dasy | false | null | 0 | o7xwqun | false | /r/LocalLLaMA/comments/1r7fgqd/built_a_3_in_1_colab_notebook_with_qwen3tts_voice/o7xwqun/ | false | 1 |
t1_o7xwq75 | You could try dropping -ncmoe and instead use -fit (which is on by default nowadays). If you do that then also adjust --fit-target, the default leaves 1024MB VRAM unused and a smaller value may work, depending on whether you need to leave some VRAM for other uses or not. | 1 | 0 | 2026-02-28T20:53:47 | OsmanthusBloom | false | null | 0 | o7xwq75 | false | /r/LocalLLaMA/comments/1rh9983/qwen_35b_a3b_aessedai_finetune_on_8gb_vram_and/o7xwq75/ | false | 1 |
t1_o7xwpqv | There's a strange and bitter irony in knowing that they're willing to throw as much money and time as necessary at the models, but asking for a raise, or even justifying a raise, let alone fair compensation, is still somehow taboo. | 7 | 0 | 2026-02-28T20:53:43 | teleprint-me | false | null | 0 | o7xwpqv | false | /r/LocalLLaMA/comments/1rh9u4r/this_sub_is_incredible/o7xwpqv/ | false | 7 |
t1_o7xwc87 | Surprised you’re getting good results with gpt-oss considering it’s entirely synthetic data | 1 | 0 | 2026-02-28T20:51:41 | TheLegendOfKitty123 | false | null | 0 | o7xwc87 | false | /r/LocalLLaMA/comments/1rgzul5/are_you_ready_for_small_qwens/o7xwc87/ | false | 1 |
t1_o7xw9jv | When AI starts surveying catch basins and manholes call me back | 1 | 0 | 2026-02-28T20:51:16 | SMELLYCHEESE8 | false | null | 0 | o7xw9jv | false | /r/LocalLLaMA/comments/1r9gve8/i_feel_left_behind_what_is_special_about_openclaw/o7xw9jv/ | false | 1 |
t1_o7xw4ju | No, you're just naive (and so was Hanlon). | 0 | 0 | 2026-02-28T20:50:31 | KarlGustavXII | false | null | 0 | o7xw4ju | false | /r/LocalLLaMA/comments/1rfp6bk/why_is_openclaw_even_this_popular/o7xw4ju/ | false | 0 |
t1_o7xw28h | I think this is the way forward, except a giant model with mostly SSD-loadable layers instead of the 50/50 split there. If they manage a DeepSeek-style 1TB SSD / 128GB RAM / 10B-active-in-VRAM mix it would be wild. Or whatever they go for | 3 | 0 | 2026-02-28T20:50:10 | ethereal_intellect | false | null | 0 | o7xw28h | false | /r/LocalLLaMA/comments/1rh9ygz/is_anyone_else_waiting_for_a_6070b_moe_with_810b/o7xw28h/ | false | 3 |
t1_o7xvx41 | Hello. Good work! The dataset isn't available, right? What are my options if I want to do something like that, only for structured data? I have millions of trees that can be represented in, say, XML if needed, and I want to train a model to convert the trees in a specific way. | 1 | 0 | 2026-02-28T20:49:23 | pgess | false | null | 0 | o7xvx41 | false | /r/LocalLLaMA/comments/1qym566/i_trained_a_18m_params_model_from_scratch_on_a/o7xvx41/ | false | 1 |
t1_o7xvsz3 | I had four that I bought back when they were 100 each, but sold them in favor of P40s because the latter have 24GB. Now I have 8 P40s in one rig. Not exceptionally fast, but 192GB VRAM means I can run 200B+ models at Q4 with a metric ton of context. | 7 | 0 | 2026-02-28T20:48:45 | FullstackSensei | false | null | 0 | o7xvsz3 | false | /r/LocalLLaMA/comments/1rh9u4r/this_sub_is_incredible/o7xvsz3/ | false | 7 |
t1_o7xvs6i | DGX spark has arm64 issues coupled with using vLLM as means to serve up the model. If you’re x86 and using something other than vLLM I reckon you’ll be fine stability wise… as for NVFP4, I’m only doing this to run the 122B parameters on my 128GB setup. I managed to get the Intel/Qwen3.5-122B-A10B-int4-AutoRound loaded ... | 1 | 0 | 2026-02-28T20:48:38 | lenjet | false | null | 0 | o7xvs6i | false | /r/LocalLLaMA/comments/1rgvma8/which_size_of_qwen35_are_you_planning_to_run/o7xvs6i/ | false | 1 |
t1_o7xvos5 | I have an old Turing card (rtx 8000 48gb) - would a Q6 fit? What’s the closest it can compare to with the commercial models? (Gemini, GPT, Claude code versions?) I bought an Asus GX10 for this purpose - spending too much on subscriptions, but wondering if the quality drop is significant? | 1 | 0 | 2026-02-28T20:48:06 | hyllus123 | false | null | 0 | o7xvos5 | false | /r/LocalLLaMA/comments/1rg6ph3/qwen35_feels_ready_for_production_use_never_been/o7xvos5/ | false | 1 |
t1_o7xvnhr | I asked Gemini 3.1 to optimise for me while keeping context at 64k. Results: generation speed (write) 20.3 tk/s -> 31.16 tk/s; prompt eval (read) 150-200 tk/s -> 62 tk/s. Not sure if I like it but the generation speed is impressive. export GGML_CUDA_GRAPH_OPT=1; llama-server -m Qwen3.5-35B-A3B-Q4_K_M-0000... | 2 | 0 | 2026-02-28T20:47:55 | sagiroth | false | null | 0 | o7xvnhr | false | /r/LocalLLaMA/comments/1rh9983/qwen_35b_a3b_aessedai_finetune_on_8gb_vram_and/o7xvnhr/ | false | 2 |
t1_o7xvmjz | I've run it on a 3090 but it's slow. Need lots of system RAM and a solid CPU (I have a 10 core i7) and it's usable, but slow. Not as fast as like GPT 5.3 | 1 | 0 | 2026-02-28T20:47:46 | ScuffedBalata | false | null | 0 | o7xvmjz | false | /r/LocalLLaMA/comments/1r6jklq/are_20100b_models_enough_for_good_coding/o7xvmjz/ | false | 1 |
t1_o7xvi4m | All I want for Christmas is another RTX pro 6000. | 11 | 0 | 2026-02-28T20:47:06 | teh_spazz | false | null | 0 | o7xvi4m | false | /r/LocalLLaMA/comments/1rh9ygz/is_anyone_else_waiting_for_a_6070b_moe_with_810b/o7xvi4m/ | false | 11 |
t1_o7xv7nl | I ran my first wizard-13b finetune (from mid/late 2023) just today. I still love the style for a quick roleplay game. Sometimes short and quick turns make it more fun. | 0 | 0 | 2026-02-28T20:45:31 | cosimoiaia | false | null | 0 | o7xv7nl | false | /r/LocalLLaMA/comments/1rh46g2/why_some_still_playing_with_old_models_nostalgia/o7xv7nl/ | false | 0 |
t1_o7xuxyv | Agree, I'm very happy for this sub's existence! | 4 | 0 | 2026-02-28T20:44:04 | TopTippityTop | false | null | 0 | o7xuxyv | false | /r/LocalLLaMA/comments/1rh9u4r/this_sub_is_incredible/o7xuxyv/ | false | 4 |
t1_o7xuvkj | 0% cache hit rate on the model page unfortunately | 1 | 0 | 2026-02-28T20:43:43 | Temporary-Tourist-10 | false | null | 0 | o7xuvkj | false | /r/LocalLLaMA/comments/1rfr254/stepfun35flash_kv_cache_openrouter/o7xuvkj/ | false | 1 |