name string | body string | score int64 | controversiality int64 | created timestamp[us] | author string | collapsed bool | edited timestamp[us] | gilded int64 | id string | locked bool | permalink string | stickied bool | ups int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
t1_o81p1sq | A slightly different angle: we stopped evaluating outputs only at the response level and started measuring decision points inside the flow. For example, did the system choose the right tool, retrieve the right documents, or escalate when confidence was low? Some of our biggest UX gains came from improving those hidden ... | 1 | 0 | 2026-03-01T13:09:49 | Zestyclose_Ring1123 | false | null | 0 | o81p1sq | false | /r/LocalLLaMA/comments/1rhtyyq/using_evaluations_on_llama_models/o81p1sq/ | false | 1 |
t1_o81oo5m | Yesssss | 4 | 0 | 2026-03-01T13:07:12 | ParthProLegend | false | null | 0 | o81oo5m | false | /r/LocalLLaMA/comments/1rhwo08/qwen35_small_dense_model_release_seems_imminent/o81oo5m/ | false | 4 |
t1_o81onxn | For git diff MCP, do you create your own MCP server, or do you just use an external one? | 1 | 0 | 2026-03-01T13:07:10 | Mean-Sprinkles3157 | false | null | 0 | o81onxn | false | /r/LocalLLaMA/comments/1rhsto2/i_replaced_my_entire_automation_stack_with_mcp/o81onxn/ | false | 1 |
t1_o81okh8 | Speculative decoding ❤️ | 45 | 0 | 2026-03-01T13:06:30 | streppelchen | false | null | 0 | o81okh8 | false | /r/LocalLLaMA/comments/1rhwo08/qwen35_small_dense_model_release_seems_imminent/o81okh8/ | false | 45 |
t1_o81ok5f | I saw this comment last night and thought about it so much that I had to come back today to reply and thank you. This makes perfect sense and I never even considered it from this perspective. | 11 | 0 | 2026-03-01T13:06:26 | PassengerPigeon343 | false | null | 0 | o81ok5f | false | /r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o81ok5f/ | false | 11 |
t1_o81ohmg | It's not there on other quants, just on the lmstudio-community ones | 3 | 0 | 2026-03-01T13:05:57 | Skystunt | false | null | 0 | o81ohmg | false | /r/LocalLLaMA/comments/1rhr5ko/is_there_a_way_to_disable_thinking_on_qwen_35_27b/o81ohmg/ | false | 3 |
t1_o81oec0 | 👌 | 1 | 0 | 2026-03-01T13:05:20 | ElinaRayne777 | false | null | 0 | o81oec0 | false | /r/LocalLLaMA/comments/1qpdn1t/fashn_vton_v15_apache20_virtual_tryon_model_runs/o81oec0/ | false | 1 |
t1_o81od6d | That's a bug or misconfiguration, not normal. | -6 | 0 | 2026-03-01T13:05:07 | kiwibonga | false | null | 0 | o81od6d | false | /r/LocalLLaMA/comments/1rhvi09/psa_if_your_local_coding_agent_feels_dumb_at_30k/o81od6d/ | false | -6 |
t1_o81oagx | Yeah I don’t know how this isn’t obvious to people. It’s like living in two different realities.
It is nearly guaranteed this admin is using Claude to make important decisions for them.
People need to learn how to think in different perspectives. We can’t be speed running to world models when people have less perce... | 1 | 0 | 2026-03-01T13:04:35 | brownman19 | false | null | 0 | o81oagx | false | /r/LocalLLaMA/comments/1rhogov/the_us_used_anthropic_ai_tools_during_airstrikes/o81oagx/ | false | 1 |
t1_o81o476 | Are you finding it any good with FP8 kv-cache? I saw a note from cyankiwi suggesting that the pure 4-bit AWQ doesn't play well with kv quant. And are you calculating kv scales anywhere? | 1 | 0 | 2026-03-01T13:03:24 | thigger | false | null | 0 | o81o476 | false | /r/LocalLLaMA/comments/1rf6p63/qwen35_on_vllm_with_fp8_kvcache/o81o476/ | false | 1 |
t1_o81o2qs | Have you tried Weesper Neon Flow? It's a local voice dictation app for Windows that works completely offline. Supports 50+ languages and inserts text anywhere you can type. Not free but quite affordable, and runs locally for privacy. | 1 | 0 | 2026-03-01T13:03:06 | Weesper75 | false | null | 0 | o81o2qs | false | /r/LocalLLaMA/comments/1qnnq9r/looking_for_a_free_windows_tool_dictation_ai/o81o2qs/ | false | 1 |
t1_o81nhh6 | That 40% reduction is serious — what does your pre-filter check for specifically? | 1 | 0 | 2026-03-01T12:58:56 | LOGOSOSAI | false | null | 0 | o81nhh6 | false | /r/LocalLLaMA/comments/1rhv06r/how_are_you_preventing_runaway_ai_agent_behavior/o81nhh6/ | false | 1 |
t1_o81ndqu | You are majorly downplaying how much of a privacy nightmare OP has presented | 2 | 0 | 2026-03-01T12:58:12 | ___fallenangel___ | false | null | 0 | o81ndqu | false | /r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o81ndqu/ | false | 2 |
t1_o81ndk2 | Are you using Peta.io yourself or building the MCP intercept layer in-house? | 1 | 0 | 2026-03-01T12:58:10 | LOGOSOSAI | false | null | 0 | o81ndk2 | false | /r/LocalLLaMA/comments/1rhv06r/how_are_you_preventing_runaway_ai_agent_behavior/o81ndk2/ | false | 1 |
t1_o81napx | REAP removes experts, and while the 35B is MoE, the 27B is not; it's dense, and much more difficult to prune. Better to wait for smaller Qwen 3.5 models to come out. | 4 | 0 | 2026-03-01T12:57:36 | Awwtifishal | false | null | 0 | o81napx | false | /r/LocalLLaMA/comments/1rhvviu/qwen35_reap/o81napx/ | false | 4 |
t1_o81n0p2 | 6 months is pretty "immediate" by government standards. | 7 | 0 | 2026-03-01T12:55:37 | Barafu | false | null | 0 | o81n0p2 | false | /r/LocalLLaMA/comments/1rhogov/the_us_used_anthropic_ai_tools_during_airstrikes/o81n0p2/ | false | 7 |
t1_o81mu9i | all the qwen3.5 series share the same chat template structure, but my small potato computer cannot run 27b very well, so you may want to test it on your own. | 2 | 0 | 2026-03-01T12:54:20 | kironlau | false | null | 0 | o81mu9i | false | /r/LocalLLaMA/comments/1rh43za/qwen_3535ba3b_is_beyond_expectations_its_replaced/o81mu9i/ | false | 2 |
t1_o81mr7x |  | 7 | 0 | 2026-03-01T12:53:45 | jacek2023 | false | null | 0 | o81mr7x | false | /r/LocalLLaMA/comments/1rhvabz/we_need_to_go_deeper/o81mr7x/ | false | 7 |
t1_o81mna7 | Could you please benchmark https://huggingface.co/AesSedai/Qwen3.5-122B-A10B-GGUF/tree/main/IQ2_XXS ? I'm very curious how such an aggressive quant would perform. | 3 | 0 | 2026-03-01T12:52:56 | LMLocalizer | false | null | 0 | o81mna7 | false | /r/LocalLLaMA/comments/1rhfque/qwen3_coder_next_qwen35_27b_devstral_small_2_rust/o81mna7/ | false | 3 |
t1_o81mdi0 | Will do, thanks | 1 | 0 | 2026-03-01T12:50:58 | dumbelco | false | null | 0 | o81mdi0 | false | /r/LocalLLaMA/comments/1rh2tmu/benchmarking_opensource_llms_for_security/o81mdi0/ | false | 1 |
t1_o81mdid | Welcome. I just copied the idea from others. | 1 | 0 | 2026-03-01T12:50:58 | kironlau | false | null | 0 | o81mdid | false | /r/LocalLLaMA/comments/1rh43za/qwen_3535ba3b_is_beyond_expectations_its_replaced/o81mdid/ | false | 1 |
t1_o81mbs3 | I would like to see qwen3.5 coder ver. | 7 | 0 | 2026-03-01T12:50:37 | aezak_me | false | null | 0 | o81mbs3 | false | /r/LocalLLaMA/comments/1rhvabz/we_need_to_go_deeper/o81mbs3/ | false | 7 |
t1_o81m90s | what sort of bifurcation are you running on that? | 2 | 0 | 2026-03-01T12:50:03 | kashimacoated | false | null | 0 | o81m90s | false | /r/LocalLLaMA/comments/1rh9u4r/this_sub_is_incredible/o81m90s/ | false | 2 |
t1_o81m2n8 | Tuesday is their fav day for Qwen releases | 3 | 0 | 2026-03-01T12:48:48 | Skyline34rGt | false | null | 0 | o81m2n8 | false | /r/LocalLLaMA/comments/1rhvabz/we_need_to_go_deeper/o81m2n8/ | false | 3 |
t1_o81m0nt | I tested this model on my M1 Max 64 GB, in Claude Code, with [these settings](https://pchalasani.github.io/claude-code-tools/integrations/local-llms/#qwen35-35b-a3b--smart-general-purpose-moe) but I only get \~ 12 tok/s generation, nowhere close to the \~27 tok/s you're getting. Also, setting thinking budget to 0 didn... | 1 | 0 | 2026-03-01T12:48:23 | SatoshiNotMe | false | null | 0 | o81m0nt | false | /r/LocalLLaMA/comments/1rh9k63/qwen35_35ba3b_replaced_my_2model_agentic_setup_on/o81m0nt/ | false | 1 |
t1_o81ltve | Way to reply to a conversation that happened 9 days ago. | 1 | 0 | 2026-03-01T12:47:01 | sleepingsysadmin | false | null | 0 | o81ltve | false | /r/LocalLLaMA/comments/1r9gve8/i_feel_left_behind_what_is_special_about_openclaw/o81ltve/ | false | 1 |
t1_o81lsut | [removed] | 1 | 0 | 2026-03-01T12:46:49 | [deleted] | true | null | 0 | o81lsut | false | /r/LocalLLaMA/comments/1rhvi09/psa_if_your_local_coding_agent_feels_dumb_at_30k/o81lsut/ | false | 1 |
t1_o81lojb | How does that help you with an 8GB setup? | 1 | 1 | 2026-03-01T12:45:55 | jacek2023 | false | null | 0 | o81lojb | false | /r/LocalLLaMA/comments/1rhvabz/we_need_to_go_deeper/o81lojb/ | false | 1 |
t1_o81ln1f | The n\_swa = 1 in your logs means --swa-full isn't taking effect; the SWA window should be your full context size, not 1. Most likely your llama.cpp build is too old. Pull latest from source and rebuild:
`cd llama.cpp && git pull && cmake -B build && cmake --build build --config Release -j`
Then confirm --swa-f... | 1 | 0 | 2026-03-01T12:45:36 | SatoshiNotMe | false | null | 0 | o81ln1f | false | /r/LocalLLaMA/comments/1rh6455/anybody_able_to_get_qwen3535ba3b_working_with/o81ln1f/ | false | 1 |
t1_o81llvi | Dawg, if you're going to do all this stuff, you need to find a way to automatically wipe all of the logs so you can't view your family's private conversations or usage data. It's not for you to see. | 1 | 0 | 2026-03-01T12:45:22 | ___fallenangel___ | false | null | 0 | o81llvi | false | /r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o81llvi/ | false | 1 |
t1_o81le8n | Been there too. You’re making a product for them competing with many others. Good luck! | 1 | 0 | 2026-03-01T12:43:49 | somesortapsychonaut | false | null | 0 | o81le8n | false | /r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o81le8n/ | false | 1 |
t1_o81lcxz | > unless you just end up writing your own OS
LLMOS when? | 1 | 0 | 2026-03-01T12:43:33 | AvidCyclist250 | false | null | 0 | o81lcxz | false | /r/LocalLLaMA/comments/1rhg3p4/baremetal_ai_booting_directly_into_llm_inference/o81lcxz/ | false | 1 |
t1_o81l8ve | 30-50B size models are unlikely from them (though I would never say no). I think it's been a year since they released models in that size range, as most of theirs are 600B+ models. So getting a 100B model at all is a bonus. | 3 | 0 | 2026-03-01T12:42:42 | pmttyji | false | null | 0 | o81l8ve | false | /r/LocalLLaMA/comments/1rhvabz/we_need_to_go_deeper/o81l8ve/ | false | 3 |
t1_o81l4cp | Appreciate the attempt to help, but anyone who knows about KV cache knows it has costs; otherwise, why not always have it at Q1 lol
The actual answer to this problem is that the unsloth GGUFs have bugs in both the chat template and the MXFP4 layers that were included where they shouldn't be; they already said everyone needs to r... | -5 | 0 | 2026-03-01T12:41:44 | trusty20 | false | null | 0 | o81l4cp | false | /r/LocalLLaMA/comments/1rhvi09/psa_if_your_local_coding_agent_feels_dumb_at_30k/o81l4cp/ | false | -5 |
t1_o81kxz7 | honestly hard to put a % on it without benchmarking, but snap overhead is usually measurable for anything latency-sensitive. for my setup i run llama.cpp directly on android/ios through JNI and Swift wrappers. quantized GGUF models, usually 1-3B param range like Qwen3 0.6B or Gemma 3 1B. on a midrange phone you get may... | 1 | 0 | 2026-03-01T12:40:24 | angelin1978 | false | null | 0 | o81kxz7 | false | /r/LocalLLaMA/comments/1rfmzfp/new_upcoming_ubuntu_2604_lts_will_be_optimized/o81kxz7/ | false | 1 |
t1_o81ktn0 | 17 - 9 = 8. I guess 2 or 3 models then (Ex: 2 models + 2 Base + 2 FP8). | 1 | 0 | 2026-03-01T12:39:28 | pmttyji | false | null | 0 | o81ktn0 | false | /r/LocalLLaMA/comments/1rhvabz/we_need_to_go_deeper/o81ktn0/ | false | 1 |
t1_o81kpwi | 😂😂😂perfect analogy | 1 | 0 | 2026-03-01T12:38:40 | sinogrime | false | null | 0 | o81kpwi | false | /r/LocalLLaMA/comments/1rhg3p4/baremetal_ai_booting_directly_into_llm_inference/o81kpwi/ | false | 1 |
t1_o81kkdc | That wouldn’t help people with 12 GB of VRAM, and I’m not interested in DeepSeek until I see a usable model. For now, DeepSeek is just a cloud model for me, not something I can run locally. | -1 | 1 | 2026-03-01T12:37:27 | jacek2023 | false | null | 0 | o81kkdc | false | /r/LocalLLaMA/comments/1rhvabz/we_need_to_go_deeper/o81kkdc/ | false | -1 |
t1_o81kk62 | Just started testing Qwen 3.5 35B-A3B yesterday; too early to say where it shines and where it breaks, but it's doing very well for what I've tested so far. One word of caution though: right now Ollama has several issues with Qwen 3.5, so better to use llama.cpp, which has worked much better for me.
Thanks for the insigh... | 1 | 0 | 2026-03-01T12:37:25 | UncleRedz | false | null | 0 | o81kk62 | false | /r/LocalLLaMA/comments/1rhsto2/i_replaced_my_entire_automation_stack_with_mcp/o81kk62/ | false | 1 |
t1_o81khuw | I recently used claude code (with bubblewrap and --dangerously-skip-permissions) to scrape prices from 500+ websites, typically 10 at a time. Incredibly timesaving; prices are represented in many different ways: in PDFs, behind JavaScript, etc. | 1 | 0 | 2026-03-01T12:36:55 | dr_fungus | false | null | 0 | o81khuw | false | /r/LocalLLaMA/comments/1rhtbwx/where_do_you_use_ai_in_your_workflow/o81khuw/ | false | 1 |
t1_o81kh8a | Dawg you are the biggest security threat to your own operation | 1 | 0 | 2026-03-01T12:36:47 | ___fallenangel___ | false | null | 0 | o81kh8a | false | /r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o81kh8a/ | false | 1 |
t1_o81keo0 | sorry that you had to discover that they're NPCs that way, but better than never, right | 0 | 0 | 2026-03-01T12:36:13 | Both-Employment-5113 | false | null | 0 | o81keo0 | false | /r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o81keo0/ | false | 0 |
t1_o81k4tl | It is a) supported by HPE, b) it fits in my 1U, and c) it uses less than 60W. | 1 | 0 | 2026-03-01T12:34:02 | allpowerfulee | false | null | 0 | o81k4tl | false | /r/LocalLLaMA/comments/1rhifeg/im_waiting_for_my_nvidia_a2_to_crawl_in_to_run_a/o81k4tl/ | false | 1 |
t1_o81k0hb | yes with "use kaboom" directive | 1 | 0 | 2026-03-01T12:33:04 | Negative-Magazine174 | false | null | 0 | o81k0hb | false | /r/LocalLLaMA/comments/1rhogov/the_us_used_anthropic_ai_tools_during_airstrikes/o81k0hb/ | false | 1 |
t1_o81jzn8 | I hope this time they release additional models like 100B, etc. | 3 | 1 | 2026-03-01T12:32:53 | pmttyji | false | null | 0 | o81jzn8 | false | /r/LocalLLaMA/comments/1rhvabz/we_need_to_go_deeper/o81jzn8/ | false | 3 |
t1_o81juqo | yeah, these include base models and FP8 versions and such. | 0 | 0 | 2026-03-01T12:31:48 | And1mon | false | null | 0 | o81juqo | false | /r/LocalLLaMA/comments/1rhvabz/we_need_to_go_deeper/o81juqo/ | false | 0 |
t1_o81jsgn | Yes | 19 | 0 | 2026-03-01T12:31:17 | Mushoz | false | null | 0 | o81jsgn | false | /r/LocalLLaMA/comments/1rhvi09/psa_if_your_local_coding_agent_feels_dumb_at_30k/o81jsgn/ | false | 19 |
t1_o81jq5x | 17 items what the | 2 | 0 | 2026-03-01T12:30:45 | Odd-Ordinary-5922 | false | null | 0 | o81jq5x | false | /r/LocalLLaMA/comments/1rhvabz/we_need_to_go_deeper/o81jq5x/ | false | 2 |
t1_o81jhgk | Collection updated again, seems like we're in for a treat!
https://preview.redd.it/fed7q13dgfmg1.png?width=1037&format=png&auto=webp&s=3bfff7400edceb0272fa5d988c7f8ad92018e8ab
| 4 | 0 | 2026-03-01T12:28:50 | And1mon | false | null | 0 | o81jhgk | false | /r/LocalLLaMA/comments/1rhvabz/we_need_to_go_deeper/o81jhgk/ | false | 4 |
t1_o81jg54 | How could I forget about the daily V4 prediction post? | 2 | 0 | 2026-03-01T12:28:33 | OC2608 | false | null | 0 | o81jg54 | false | /r/LocalLLaMA/comments/1rh095c/deepseek_v4_will_be_released_next_week_and_will/o81jg54/ | false | 2 |
t1_o81jc0p | The whole regime is a mismatch between what the Administration says and the reality. | 0 | 0 | 2026-03-01T12:27:38 | SinnerP | false | null | 0 | o81jc0p | false | /r/LocalLLaMA/comments/1rhogov/the_us_used_anthropic_ai_tools_during_airstrikes/o81jc0p/ | false | 0 |
t1_o81jbfs | I'm really not an expert. But aren't you always fighting token noise? From the first token, token probability is getting more and more constrained. It seems clear/obvious that thinking needs to be as short as possible. | 1 | 0 | 2026-03-01T12:27:30 | Zeeplankton | false | null | 0 | o81jbfs | false | /r/LocalLLaMA/comments/1rh6pru/google_found_that_longer_chain_of_thought/o81jbfs/ | false | 1 |
t1_o81javh | for the scope boundary problem specifically, a policy layer that intercepts MCP tool calls before execution gives you deny/require-approval without relying on the model to self-limit - peta (peta.io) is building exactly this for MCP. retry/spend caps work best at the client layer with a hard circuit breaker so the agen... | 2 | 0 | 2026-03-01T12:27:23 | BC_MARO | false | null | 0 | o81javh | false | /r/LocalLLaMA/comments/1rhv06r/how_are_you_preventing_runaway_ai_agent_behavior/o81javh/ | false | 2 |
t1_o81j301 | Yeah, but I want to restrict the vocab in a way that it can still do that but be more token-efficient at generation, by having either a hard rule or a bias towards "longer" tokens. For example, when closing parentheses ]), it should be more likely to use "])" rather than "]" ")". Does that make sense? | 1 | 0 | 2026-03-01T12:25:37 | Windowsideplant | false | null | 0 | o81j301 | false | /r/LocalLLaMA/comments/1rhvqwl/restricting_token_vocabulary_at_output_for_coding/o81j301/ | false | 1 |
t1_o81j0jd | For example you definitely need to turn off thinking on deepseek for better results :) | 2 | 0 | 2026-03-01T12:25:03 | Ronrel | false | null | 0 | o81j0jd | false | /r/LocalLLaMA/comments/1rh43za/qwen_3535ba3b_is_beyond_expectations_its_replaced/o81j0jd/ | false | 2 |
t1_o81ipc8 | Would Microsoft claiming IP over your AI assisted code be an issue? | 2 | 0 | 2026-03-01T12:22:30 | Monkey_1505 | false | null | 0 | o81ipc8 | false | /r/LocalLLaMA/comments/1rhrg47/open_source_llm_comparable_to_gpt41/o81ipc8/ | false | 2 |
t1_o81imgm | But the couch fucker in charge said "I am directing EVERY Federal Agency in the United States Government to IMMEDIATELY CEASE all use of Anthropic's technology." Lol | -5 | 0 | 2026-03-01T12:21:51 | MediocreAd8440 | false | null | 0 | o81imgm | false | /r/LocalLLaMA/comments/1rhogov/the_us_used_anthropic_ai_tools_during_airstrikes/o81imgm/ | false | -5 |
t1_o81i8uk | You can be 100% certain they are using every available resource to figure out how to get around any guardrails. The genie is already out of the bottle. | 1 | 0 | 2026-03-01T12:18:47 | joey2scoops | false | null | 0 | o81i8uk | false | /r/LocalLLaMA/comments/1rhogov/the_us_used_anthropic_ai_tools_during_airstrikes/o81i8uk/ | false | 1 |
t1_o81i8cj | The problem is that quantization seems to increase probability of entering infinite generation loops. I really wish we had some kind of looped output detector in llama-server that would detect looped generations and break them early instead of wasting compute. | 2 | 0 | 2026-03-01T12:18:40 | fairydreaming | false | null | 0 | o81i8cj | false | /r/LocalLLaMA/comments/1rg9lli/little_qwen_35_27b_and_qwen_35ba3b_models_did/o81i8cj/ | false | 2 |
t1_o81i2an | Always bet on Tuesday | 6 | 0 | 2026-03-01T12:17:19 | KaMaFour | false | null | 0 | o81i2an | false | /r/LocalLLaMA/comments/1rhvabz/we_need_to_go_deeper/o81i2an/ | false | 6 |
t1_o81i0bx | [removed] | 1 | 0 | 2026-03-01T12:16:52 | [deleted] | true | null | 0 | o81i0bx | false | /r/LocalLLaMA/comments/1rhogov/the_us_used_anthropic_ai_tools_during_airstrikes/o81i0bx/ | false | 1 |
t1_o81hvd4 | Are you on the latest version? It has supported the Anthropic format for a few releases; look at the v1 REST API and supported endpoints in the developers tab. Then it is just a matter of changing the endpoint in Claude Code. | 2 | 0 | 2026-03-01T12:15:43 | Wild_Requirement8902 | false | null | 0 | o81hvd4 | false | /r/LocalLLaMA/comments/1rhgzyb/cant_use_claude_code_with_ollama_local_model/o81hvd4/ | false | 2 |
t1_o81hsp8 | Yes I use a m4 mini non pro 16gb | 2 | 0 | 2026-03-01T12:15:07 | xyzmanas | false | null | 0 | o81hsp8 | false | /r/LocalLLaMA/comments/1rhuvyc/benchmarking_88_smol_gguf_models_quickly_on_a/o81hsp8/ | false | 2 |
t1_o81hn9o | You know code needs variable names, function names, and strings, right? | 1 | 0 | 2026-03-01T12:13:51 | Velocita84 | false | null | 0 | o81hn9o | false | /r/LocalLLaMA/comments/1rhvqwl/restricting_token_vocabulary_at_output_for_coding/o81hn9o/ | false | 1 |
t1_o81hmje | `llamacpp` with its grammar (gbnf)? even just thinking about it tho, seems like it'd be a monumental task | 1 | 0 | 2026-03-01T12:13:41 | x11iyu | false | null | 0 | o81hmje | false | /r/LocalLLaMA/comments/1rhvqwl/restricting_token_vocabulary_at_output_for_coding/o81hmje/ | false | 1 |
t1_o81hm7d | I can't run DeepSeek on my setup, so that would be pointless | 1 | 1 | 2026-03-01T12:13:36 | jacek2023 | false | null | 0 | o81hm7d | false | /r/LocalLLaMA/comments/1rhvabz/we_need_to_go_deeper/o81hm7d/ | false | 1 |
t1_o81hkw8 | There will be a 2B and a smaller 0.8B | 24 | 0 | 2026-03-01T12:13:18 | SystematicKarma | false | null | 0 | o81hkw8 | false | /r/LocalLLaMA/comments/1rhvabz/we_need_to_go_deeper/o81hkw8/ | false | 24 |
t1_o81hhgv | In llama.cpp (llama-server), if you don't pass cache-type arguments, it stays at FP16.
Right? | 53 | 0 | 2026-03-01T12:12:30 | kripper-de | false | null | 0 | o81hhgv | false | /r/LocalLLaMA/comments/1rhvi09/psa_if_your_local_coding_agent_feels_dumb_at_30k/o81hhgv/ | false | 53 |
t1_o81h7nm | It would be nice to have a small model of less than 2b to do speculative decoding with the 27b | 52 | 0 | 2026-03-01T12:10:14 | ArckToons | false | null | 0 | o81h7nm | false | /r/LocalLLaMA/comments/1rhvabz/we_need_to_go_deeper/o81h7nm/ | false | 52 |
t1_o81h18i | I can't even get my wife to use a password manager 😅😎🤙😂 | 3 | 0 | 2026-03-01T12:08:45 | LongBeachHXC | false | null | 0 | o81h18i | false | /r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o81h18i/ | false | 3 |
t1_o81gzim | News flash… kids are wise to things like text monitoring, etc. For example, you think you know their Instagram account, but that's just the decoy; you def don't know about their secret one.
And they don't use apps exclusively; they will be logging into their secondary accounts through the browser. | 6 | 0 | 2026-03-01T12:08:20 | skydiver19 | false | null | 0 | o81gzim | false | /r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o81gzim/ | false | 6 |
t1_o81guwn | In the CBS interview Dario said that actually there has been zero official communication from the government about the designation as supply chain risk. The only thing he knew was the same everyone else knew, the tweets by Trump and Hegseth. | 39 | 0 | 2026-03-01T12:07:15 | cyberdork | false | null | 0 | o81guwn | false | /r/LocalLLaMA/comments/1rhogov/the_us_used_anthropic_ai_tools_during_airstrikes/o81guwn/ | false | 39 |
t1_o81gh6a | Try Qwen3.5-35b-a3b. | 1 | 0 | 2026-03-01T12:04:03 | jslominski | false | null | 0 | o81gh6a | false | /r/LocalLLaMA/comments/1rhrg47/open_source_llm_comparable_to_gpt41/o81gh6a/ | false | 1 |
t1_o81gfr2 | I think your sample has a typo: 'temperture' vs 'temperature' | 4 | 0 | 2026-03-01T12:03:43 | Thrynneld | false | null | 0 | o81gfr2 | false | /r/LocalLLaMA/comments/1rhohqk/how_to_switch_qwen_35_thinking_onoff_without/o81gfr2/ | false | 4 |
t1_o81gfj8 | Probably.
BTW do the same for Deepseek, Gemma, Llama, Grok, GPT-OSS, etc. | 7 | 0 | 2026-03-01T12:03:40 | pmttyji | false | null | 0 | o81gfj8 | false | /r/LocalLLaMA/comments/1rhvabz/we_need_to_go_deeper/o81gfj8/ | false | 7 |
t1_o81gag2 | You can run Qwen 3.5 35B A3B at circa 100k context and quite reasonable speeds of about 40-50 tkps.
I run it with 8GB VRAM and 32GB RAM at 32 tkps and 64k context | 1 | 0 | 2026-03-01T12:02:28 | sagiroth | false | null | 0 | o81gag2 | false | /r/LocalLLaMA/comments/1rhcnbt/best_coding_model_to_run_entirely_on_12gb_vram/o81gag2/ | false | 1 |
t1_o81fzjq | PewDiePie is some kind of coder? | 1 | 0 | 2026-03-01T11:59:54 | soober | false | null | 0 | o81fzjq | false | /r/LocalLLaMA/comments/1rg8dex/pewdiepie_finetuned_qwen25coder32b_to_beat/o81fzjq/ | false | 1 |
t1_o81fq1x | You're absolutely right! | 10 | 0 | 2026-03-01T11:57:39 | 1-800-methdyke | false | null | 0 | o81fq1x | false | /r/LocalLLaMA/comments/1rhogov/the_us_used_anthropic_ai_tools_during_airstrikes/o81fq1x/ | false | 10 |
t1_o81fpc2 | Used it for weeks now, only had a hard time adding localization/i18n on my project, so I did it manually instead. Great model but I've been having issues with tool calling and there are times the model will give up randomly. | 1 | 0 | 2026-03-01T11:57:29 | Jeffhubert113 | false | null | 0 | o81fpc2 | false | /r/LocalLLaMA/comments/1rc6c8m/feels_like_magic_a_local_gptoss_20b_is_capable_of/o81fpc2/ | false | 1 |
t1_o81fpak | llama.cpp is not applying --context-shift to 3.5
Is that a 'just me' problem? | 1 | 0 | 2026-03-01T11:57:28 | crantob | false | null | 0 | o81fpak | false | /r/LocalLLaMA/comments/1rgtxry/is_qwen35_a_coding_game_changer_for_anyone_else/o81fpak/ | false | 1 |
t1_o81f6kh | 1. cool benchmark compliment
2. i am missing what KV cache precision was used for all tests
3. i think much harder benchmarks than gsm8k and mmlu would have been better, because gsm8k and mmlu are so heavily ingested and trained on that benchmarking them is worthless | 1 | 0 | 2026-03-01T11:52:57 | snapo84 | false | null | 0 | o81f6kh | false | /r/LocalLLaMA/comments/1rhuvyc/benchmarking_88_smol_gguf_models_quickly_on_a/o81f6kh/ | false | 1 |
t1_o81f61p | Idiocracy at its peak. AI is used to parse through the tons of data being captured, not to predict the future. Yes, the underlying mechanism is predicting the next word, but that is not predicting the actual future. | 6 | 0 | 2026-03-01T11:52:50 | gpt872323 | false | null | 0 | o81f61p | false | /r/LocalLLaMA/comments/1rhogov/the_us_used_anthropic_ai_tools_during_airstrikes/o81f61p/ | false | 6 |
t1_o81f4rh | I don't know. I don't use ollama. | 5 | 0 | 2026-03-01T11:52:33 | tat_tvam_asshole | false | null | 0 | o81f4rh | false | /r/LocalLLaMA/comments/1rhr5ko/is_there_a_way_to_disable_thinking_on_qwen_35_27b/o81f4rh/ | false | 5 |
t1_o81f1h4 | Which local LLM are you using on your MacBook? I am hunting for an LLM that can modify n8n workflows using n8n-mcp (github awesome tool) but only claude is working so far. I have a very basic laptop (beginner in tech but so excited! - been a year since I started learning Power Automate Desktop and using it to au... | 1 | 0 | 2026-03-01T11:51:45 | AdministrationOk3584 | false | null | 0 | o81f1h4 | false | /r/LocalLLaMA/comments/1rh9u4r/this_sub_is_incredible/o81f1h4/ | false | 1 |
t1_o81esbu | Congrats, you are famous now | 59 | 0 | 2026-03-01T11:49:31 | No_Swimming6548 | false | null | 0 | o81esbu | false | /r/LocalLLaMA/comments/1rhvabz/we_need_to_go_deeper/o81esbu/ | false | 59 |
t1_o81eo6t | This, basically. | 3 | 0 | 2026-03-01T11:48:31 | atika | false | null | 0 | o81eo6t | false | /r/LocalLLaMA/comments/1rhuvyc/benchmarking_88_smol_gguf_models_quickly_on_a/o81eo6t/ | false | 3 |
t1_o81ent9 | Sounds like what an OpenAI-promoting astroturfed Claw Bot would write. | 1 | 0 | 2026-03-01T11:48:25 | Voxandr | false | null | 0 | o81ent9 | false | /r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o81ent9/ | false | 1 |
t1_o81ekre | > | 1 | 0 | 2026-03-01T11:47:40 | Disastrous_Talk7604 | false | null | 0 | o81ekre | false | /r/LocalLLaMA/comments/1rh7mlv/before_i_rewrite_my_stack_again_advice/o81ekre/ | false | 1 |
t1_o81eko0 | Have you checked the sampling? Never had looping issues. Probably I already had the right release. Did a run on the 35BA3B in the [comments ](https://www.reddit.com/r/LocalLLaMA/comments/1rhfque/comment/o81edaq/?utm_source=share&utm_medium=web3x&utm_name=web3xcss&utm_term=1&utm_content=share_button)
| 1 | 0 | 2026-03-01T11:47:39 | Holiday_Purpose_3166 | false | null | 0 | o81eko0 | false | /r/LocalLLaMA/comments/1rhfque/qwen3_coder_next_qwen35_27b_devstral_small_2_rust/o81eko0/ | false | 1 |
t1_o81ej03 | It's Prohibitions **on** domestic mass surveillance, not Prohibition **of** domestic mass surveillance. So it's not prohibited, but they have some internal rules that we don't know. | 1 | 0 | 2026-03-01T11:47:13 | LevianMcBirdo | false | null | 0 | o81ej03 | false | /r/LocalLLaMA/comments/1rh2lew/openai_pivot_investors_love/o81ej03/ | false | 1 |
t1_o81edaq | I'd have to sell my boyfriend's wife's kidney to get hardware to run the A17B. Here's the 35BA3B
**Total Score:** 3540 **Memory Usage:** 30GB VRAM **Accuracy per VRAM/RAM:** 2.17% **Context:** 252,000
Replaced my Q5\_K\_M with Unsloth's newer UD-Q5\_K\_XL.
Beats all Devstral Small 2 in the post! Nice. Just slightly po... | 5 | 0 | 2026-03-01T11:45:50 | Holiday_Purpose_3166 | false | null | 0 | o81edaq | false | /r/LocalLLaMA/comments/1rhfque/qwen3_coder_next_qwen35_27b_devstral_small_2_rust/o81edaq/ | false | 5 |
t1_o81ecoh | This is exactly why the infrastructure layer matters more than the model layer right now. If access to frontier models becomes the leverage, then who controls the compute and where the data sits becomes the real strategic question. Governments are figuring this out way too slowly compared to the companies building the ... | 3 | 0 | 2026-03-01T11:45:40 | BreizhNode | false | null | 0 | o81ecoh | false | /r/LocalLLaMA/comments/1rhogov/the_us_used_anthropic_ai_tools_during_airstrikes/o81ecoh/ | false | 3 |
t1_o81e8j4 | We cap agent runs with a hard token budget per session and a max execution time. Beyond that, the real lifesaver has been deterministic pre-filters before the LLM even sees the input, kills maybe 40% of unnecessary calls. For spending, we track cost per session in a lightweight DB and auto-terminate if it crosses the t... | 3 | 0 | 2026-03-01T11:44:40 | BreizhNode | false | null | 0 | o81e8j4 | false | /r/LocalLLaMA/comments/1rhv06r/how_are_you_preventing_runaway_ai_agent_behavior/o81e8j4/ | false | 3 |
t1_o81e7wb | Try Ling-mini. [bailingmoe - Ling(17B) models' speed is better now](https://www.reddit.com/r/LocalLLaMA/comments/1qp7so2/bailingmoe_ling17b_models_speed_is_better_now/) | 1 | 0 | 2026-03-01T11:44:30 | pmttyji | false | null | 0 | o81e7wb | false | /r/LocalLLaMA/comments/1rhuvyc/benchmarking_88_smol_gguf_models_quickly_on_a/o81e7wb/ | false | 1 |
t1_o81e3cu | this is gnarly in the best way possible. writing a tokenizer and inference engine in freestanding C with zero OS dependencies is no joke. the fact you got wifi working in UEFI boot services mode is honestly the harder part, most UEFI network stacks are a pain. curious what model/quantization you can actually run on the... | 3 | 0 | 2026-03-01T11:43:24 | ElectricalOpinion639 | false | null | 0 | o81e3cu | false | /r/LocalLLaMA/comments/1rhg3p4/baremetal_ai_booting_directly_into_llm_inference/o81e3cu/ | false | 3 |
t1_o81dvjj | Interesting. Will do MLX next; I've just got this weekend, so I'll cover as many models as I can. Is that Qwen3? Gemma 12b isn't as good to talk to nor at tool calling for clawdbot, in my experience. | 1 | 0 | 2026-03-01T11:41:28 | Honest-Debate-6863 | false | null | 0 | o81dvjj | false | /r/LocalLLaMA/comments/1rhuvyc/benchmarking_88_smol_gguf_models_quickly_on_a/o81dvjj/ | false | 1 |
t1_o81du2l | Also he pretty much filled it with trustees to get as little opposition as possible | 1 | 0 | 2026-03-01T11:41:06 | LevianMcBirdo | false | null | 0 | o81du2l | false | /r/LocalLLaMA/comments/1rh2lew/openai_pivot_investors_love/o81du2l/ | false | 1 |
t1_o81dgj3 | Imagine how much RAM and how many SSDs these f...kers used for this. | 2 | 0 | 2026-03-01T11:37:45 | fofo9683 | false | null | 0 | o81dgj3 | false | /r/LocalLLaMA/comments/1rhogov/the_us_used_anthropic_ai_tools_during_airstrikes/o81dgj3/ | false | 2 |
t1_o81ddsw | Scam Cultman | 1 | 0 | 2026-03-01T11:37:05 | Polite_Jello_377 | false | null | 0 | o81ddsw | false | /r/LocalLLaMA/comments/1rh2lew/openai_pivot_investors_love/o81ddsw/ | false | 1 |
t1_o81d8v0 | Qwen 80 Next seems to be what you're asking for. Personally I think dense are the way to go. A dense model in the 50b range with new architecture would be an absolute banger (metaphorically of course!). And yes, this VRAM problem will go away in due time. It's probably a couple of years away. Objectively you can get 2x... | 2 | 0 | 2026-03-01T11:35:51 | Long_comment_san | false | null | 0 | o81d8v0 | false | /r/LocalLLaMA/comments/1rh9ygz/is_anyone_else_waiting_for_a_6070b_moe_with_810b/o81d8v0/ | false | 2 |