name string | body string | score int64 | controversiality int64 | created timestamp[us] | author string | collapsed bool | edited timestamp[us] | gilded int64 | id string | locked bool | permalink string | stickied bool | ups int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
t1_o7tpkhl | Your post is getting popular and we just featured it on our Discord! [Come check it out!](https://discord.gg/PgFhZ8cnWW)<br>You've also been given a special flair for your contribution. We appreciate your post!<br>*I am a bot and this action was performed automatically.* | 1 | 0 | 2026-02-28T04:10:10 | WithoutReason1729 | false | null | 0 | o7tpkhl | false | /r/LocalLLaMA/comments/1rgkc1b/back_in_my_day_localllama_were_the_pioneers/o7tpkhl/ | true | 1 |
t1_o7tpjfg | [removed] | 1 | 0 | 2026-02-28T04:09:58 | [deleted] | true | null | 0 | o7tpjfg | false | /r/LocalLLaMA/comments/1rgexmk/qwen3527bclaude46opusreasoningdistilledgguf_is_out/o7tpjfg/ | false | 1 |
t1_o7tphf6 | You must not have been around then. When llama3 came out. Gemma was not part of the conversation. | 5 | 0 | 2026-02-28T04:09:35 | segmond | false | null | 0 | o7tphf6 | false | /r/LocalLLaMA/comments/1rgkc1b/back_in_my_day_localllama_were_the_pioneers/o7tphf6/ | false | 5 |
t1_o7tpg01 | They literally say instruct in the model page. If you can converse with them, they are instruct. Reasoning or not. | 2 | 1 | 2026-02-28T04:09:18 | StardockEngineer | false | null | 0 | o7tpg01 | false | /r/LocalLLaMA/comments/1rgoygs/does_qwen35_35b_outperform_qwen3_coder_next_80b/o7tpg01/ | false | 2 |
t1_o7tp9kk | What did you think? | 1 | 0 | 2026-02-28T04:08:02 | Icy_Butterscotch6661 | false | null | 0 | o7tp9kk | false | /r/LocalLLaMA/comments/1rdxfdu/qwen3535ba3b_is_a_gamechanger_for_agentic_coding/o7tp9kk/ | false | 1 |
t1_o7tp7o7 | Always happy to meet a fan | 12 | 0 | 2026-02-28T04:07:39 | ForsookComparison | false | null | 0 | o7tp7o7 | false | /r/LocalLLaMA/comments/1rgokw1/a_monthly_update_to_my_where_are_openweight/o7tp7o7/ | false | 12 |
t1_o7tot65 | Not familiar with wyoming, what is it? | 1 | 0 | 2026-02-28T04:04:51 | jc2375 | false | null | 0 | o7tot65 | false | /r/LocalLLaMA/comments/1rgqyhg/wyoming_parakeet_mlx/o7tot65/ | false | 1 |
t1_o7toqsp | > My understanding is Vulkan/ROCm tends to have faster kernels for legacy llama.cpp quant types like q8_0/q4_0/q4_1.<br>Where can I read more about this? | 1 | 0 | 2026-02-28T04:04:23 | Count_Rugens_Finger | false | null | 0 | o7toqsp | false | /r/LocalLLaMA/comments/1resggh/best_qwen3535ba3b_gguf_for_24gb_vram/o7toqsp/ | false | 1 |
t1_o7tooc6 | Yeah this is one of the trickiest parts for sure. My approach is basically contradiction detection - when new input conflicts with an existing memory, the system flags it and then cross-references against other supporting evidence to decide if the change is legit or just noise.<br>Like if a user used to talk about livi... | 1 | 0 | 2026-02-28T04:03:55 | Illustrious-Song-896 | false | null | 0 | o7tooc6 | false | /r/LocalLLaMA/comments/1rg489b/what_are_some_edge_cases_that_break_ai_memory/o7tooc6/ | false | 1 |
t1_o7tnwq4 | If they start using Grok then they are in trouble. Grok is probably one of the dumber models I've tried to work with. | 1 | 1 | 2026-02-28T03:58:35 | mcblockserilla | false | null | 0 | o7tnwq4 | false | /r/LocalLLaMA/comments/1rgn4ki/president_trump_orders_all_federal_agencies_in/o7tnwq4/ | false | 1 |
t1_o7tnut0 | I think you would make a dataset with a mix of questions the model can answer and questions it can’t. The heretic refusal checker would be replaced with a correctness checker that accepts “I don’t know” as an answer to questions the model cannot know the answer to. Then obliterate as normal. | 1 | 0 | 2026-02-28T03:58:13 | EconomicMajority | false | null | 0 | o7tnut0 | false | /r/LocalLLaMA/comments/1recm21/hneurons_on_the_existence_impact_and_origin_of/o7tnut0/ | false | 1 |
t1_o7tnnqf | Yeah, that's fair actually. For onboarding new people into local AI the snap approach makes total sense, just apt install and go. My concern is more for production workloads where that abstraction layer costs you. | 1 | 0 | 2026-02-28T03:56:51 | angelin1978 | false | null | 0 | o7tnnqf | false | /r/LocalLLaMA/comments/1rfmzfp/new_upcoming_ubuntu_2604_lts_will_be_optimized/o7tnnqf/ | false | 1 |
t1_o7tnn4x | Hard to say exactly without benchmarking, but I'd guess 5-15% overhead from the snap layer, mostly filesystem I/O and cold start. For real-time inference it adds up.<br>For on-device I use llama.cpp with GGUF quantized models, currently running Gemma 3 1B and a fine-tuned Qwen 2.5 3B on Android via JNI bindings. The key is... | 2 | 0 | 2026-02-28T03:56:44 | angelin1978 | false | null | 0 | o7tnn4x | false | /r/LocalLLaMA/comments/1rfmzfp/new_upcoming_ubuntu_2604_lts_will_be_optimized/o7tnn4x/ | false | 2 |
t1_o7tnm6m | That doesn’t make sense. If Lockheed Martin said they don’t make spy planes designed to watch US citizens, the military can still buy the plane and modify it to spy on US citizens.<br>This is Anthropic telling the military that they have to follow Anthropic’s Terms of Service if they want access to Anthropic’s services, ... | -11 | 0 | 2026-02-28T03:56:32 | StarMNF | false | null | 0 | o7tnm6m | false | /r/LocalLLaMA/comments/1rgn4ki/president_trump_orders_all_federal_agencies_in/o7tnm6m/ | false | -11 |
t1_o7tnlo1 | You could turn thinking off:<br>`--chat-template-kwargs "{\"enable_thinking\": false}"`<br>https://unsloth.ai/docs/models/qwen3.5 | 4 | 0 | 2026-02-28T03:56:26 | see_spot_ruminate | false | null | 0 | o7tnlo1 | false | /r/LocalLLaMA/comments/1rgoygs/does_qwen35_35b_outperform_qwen3_coder_next_80b/o7tnlo1/ | false | 4 |
t1_o7tnl8i | I know right? Pony was literally one of the most massive and popular models for a long time and is still used by many and it ain't because it made realistic stock images lol | 11 | 0 | 2026-02-28T03:56:21 | JazzlikeLeave5530 | false | null | 0 | o7tnl8i | false | /r/LocalLLaMA/comments/1rgkc1b/back_in_my_day_localllama_were_the_pioneers/o7tnl8i/ | false | 11 |
t1_o7tnhwg | Which models did you try? Did you change the ctx for Ollama? I know that Ollama sets a really small context length. Something like unsloth/Qwen3.5-35B-A3B-GGUF at 4-bit would fit with OK context. You should also try llama.cpp; it is faster than Ollama and easier to set up now. You can download it from git... | 4 | 0 | 2026-02-28T03:55:43 | knownboyofno | false | null | 0 | o7tnhwg | false | /r/LocalLLaMA/comments/1rgrlzv/is_hosting_a_local_llm_really_as_crappy_of_an/o7tnhwg/ | false | 4 |
t1_o7tnh01 | I think the Ollama one almost has to be faulty settings / a bad quant. I've used it *very* little so far, but had the following illuminating chat with qwen3.5:35b-a3b-Q4_K_M this morning.<br>`- Short answer only please. How much VRAM does the RTX 5090 have?`<br>`- It has not been officially announced yet.`<br>`- What's you...` | 12 | 0 | 2026-02-28T03:55:33 | natufian | false | null | 0 | o7tnh01 | false | /r/LocalLLaMA/comments/1rgp97u/sooo_much_thinking/o7tnh01/ | false | 12 |
t1_o7tng2o | /v1 or something. Look up the Ollama endpoint URLs for Claude/OpenAI.<br>Most just run a Claude proxy and that does the adapting. | 1 | 0 | 2026-02-28T03:55:22 | fasti-au | false | null | 0 | o7tng2o | false | /r/LocalLLaMA/comments/1rf6c6u/need_help_on_api_key_export/o7tng2o/ | false | 1 |
t1_o7tnfdl | Yeah instead they were speaking in almost English and then went off on nonsensical tangents...I hate the sycophantic crap too but that wasn't better. | 7 | 0 | 2026-02-28T03:55:14 | JazzlikeLeave5530 | false | null | 0 | o7tnfdl | false | /r/LocalLLaMA/comments/1rgkc1b/back_in_my_day_localllama_were_the_pioneers/o7tnfdl/ | false | 7 |
t1_o7tn8cv | After months of agentic coding, I actually went back to this old-school technique of copying things in and out of the chat message. I prefer it when I actually "code". It's faster, more engaging, and it makes me pay attention. There is no begging the model to follow architecture, since I hold the architecture ... | 2 | 0 | 2026-02-28T03:53:55 | o0genesis0o | false | null | 0 | o7tn8cv | false | /r/LocalLLaMA/comments/1refyef/oneshot_vs_agentic_performance_of_openweight/o7tn8cv/ | false | 2 |
t1_o7tmtxy | wouldn't be surprised. both Claude and GPT exist in the Qwen3 Next / 3.5 weights, depending on the task/topic/active experts. if it looks like a duck and quacks like a duck, it might be Gemma | 2 | 0 | 2026-02-28T03:51:13 | itsappleseason | false | null | 0 | o7tmtxy | false | /r/LocalLLaMA/comments/1rgpwn5/qwen_3527b_punches_waaaaay_above_its_weight_with/o7tmtxy/ | false | 2 |
t1_o7tmrjh | Still chugging and it's been like 18 hours straight now. 🫤 | 1 | 0 | 2026-02-28T03:50:46 | hesperaux | false | null | 0 | o7tmrjh | false | /r/LocalLLaMA/comments/1rgjozi/heretic_stalled/o7tmrjh/ | false | 1 |
t1_o7tmjgr | Qwen3.5 right now feels like late-2025 SOTA (Kimi as well) imo.<br>The 35B-A3B one right now has replaced my need to use Haiku 4.5.<br>I see everyone saying it thinks long, which I have not experienced. For me I'm getting 9-second thinking, and for complex tasks the longest I've seen is 17 seconds. So far it's been very accu... | 13 | 0 | 2026-02-28T03:49:14 | SoupDue6629 | false | null | 0 | o7tmjgr | false | /r/LocalLLaMA/comments/1rgokw1/a_monthly_update_to_my_where_are_openweight/o7tmjgr/ | false | 13 |
t1_o7tmhg6 | That's a well-designed separation of concerns. Bond = seller quality signal. Per-invocation dispute = individual outcome accountability. Tying them together would have created perverse incentives: sellers gaming for bond safety instead of output quality.<br>The philosophical reasoning case is actually harder to benchma... | 1 | 0 | 2026-02-28T03:48:51 | molusco_ai | false | null | 0 | o7tmhg6 | false | /r/LocalLLaMA/comments/1rgkv8u/agenttoagent_marketplace_let_your_local_agents/o7tmhg6/ | false | 1 |
t1_o7tmgwj | Is the Qwen team using Gemma for some form of training? That would be funny and weird at the same time. | 1 | 0 | 2026-02-28T03:48:45 | SkyFeistyLlama8 | false | null | 0 | o7tmgwj | false | /r/LocalLLaMA/comments/1rgpwn5/qwen_3527b_punches_waaaaay_above_its_weight_with/o7tmgwj/ | false | 1 |
t1_o7tmbwb | TACO being TACO... | 1 | 0 | 2026-02-28T03:47:49 | SkyFeistyLlama8 | false | null | 0 | o7tmbwb | false | /r/LocalLLaMA/comments/1rgn4ki/president_trump_orders_all_federal_agencies_in/o7tmbwb/ | false | 1 |
t1_o7tmb5a | bro hasn't tried 3.5 yet | 1 | 0 | 2026-02-28T03:47:41 | arthor | false | null | 0 | o7tmb5a | false | /r/LocalLLaMA/comments/1rd8nr7/andrej_karpathy_survived_the_weekend_with_the/o7tmb5a/ | false | 1 |
t1_o7tmaty | Hey u/danielhanchen , just want to say thanks for all your work on getting workable quants out to the masses. When I hear about a new model dropping, I nearly always find that one of your quants is ready to go within a few hours. You have saved thousands of people countless days of work, and it is much appreciated. ... | 2 | 0 | 2026-02-28T03:47:38 | paulgear | false | null | 0 | o7tmaty | false | /r/LocalLLaMA/comments/1rgel19/new_qwen3535ba3b_unsloth_dynamic_ggufs_benchmarks/o7tmaty/ | false | 2 |
t1_o7tm9sq | Maybe this company is bigger than the US gov and doesn't need them or their partners. Obviously that is the calculation they have made, and will stick with. | -1 | 0 | 2026-02-28T03:47:27 | Space__Whiskey | false | null | 0 | o7tm9sq | false | /r/LocalLLaMA/comments/1rgn4ki/president_trump_orders_all_federal_agencies_in/o7tm9sq/ | false | -1 |
t1_o7tm2fq | you could, and honestly for a one-off task that works fine. the issue is when you want it to be persistent and reusable. you don't want Claude re-reading a 50k line spec every conversation, and curl means the LLM has to figure out auth headers, URL construction, query params etc. every single time.<br>the MCP server appr... | 1 | 0 | 2026-02-28T03:46:05 | Beautiful-Dream-168 | false | null | 0 | o7tm2fq | false | /r/LocalLLaMA/comments/1rgf9zb/mcpforge_generate_mcp_servers_from_openapi_specs/o7tm2fq/ | false | 1 |
t1_o7tlx8o | yeah exactly, that's the core insight, dumping 300 raw endpoints on an LLM is basically useless. the optimization step makes a huge difference in practice.<br>good point on policy controls, hadn't thought much about the post-generation governance side yet. will check out [peta.io](http://peta.io), thanks for the pointer.... | 1 | 0 | 2026-02-28T03:45:07 | Beautiful-Dream-168 | false | null | 0 | o7tlx8o | false | /r/LocalLLaMA/comments/1rgf9zb/mcpforge_generate_mcp_servers_from_openapi_specs/o7tlx8o/ | false | 1 |
t1_o7tlpas | Boeing has a whole ecosystem of subcontractors, and while they might not have the power to enforce anything at all, they will have it in writing that these subcontractors agreed to the terms, rinsing themselves of any responsibility. It gets pretty ugly out there; if you mess up with a company like Boeing they don't look... | 3 | 0 | 2026-02-28T03:43:40 | Lesser-than | false | null | 0 | o7tlpas | false | /r/LocalLLaMA/comments/1rgn4ki/president_trump_orders_all_federal_agencies_in/o7tlpas/ | false | 3 |
t1_o7tlj90 | Nothing Trump can say or do can stop Google from supplying and working with Anthropic. | 1 | 1 | 2026-02-28T03:42:35 | TheArchitectOfChaos | false | null | 0 | o7tlj90 | false | /r/LocalLLaMA/comments/1rgn4ki/president_trump_orders_all_federal_agencies_in/o7tlj90/ | false | 1 |
t1_o7tlfim | Hah.. yes it's pretty rough getting started, and having a very underpowered system makes it tougher. You're pretty much looking at running in ram. Grab Qwen3 coder next Q4_K_M. You should be able to get something around GPT4 level results at a crawl. | 1 | 0 | 2026-02-28T03:41:55 | RedParaglider | false | null | 0 | o7tlfim | false | /r/LocalLLaMA/comments/1rgrlzv/is_hosting_a_local_llm_really_as_crappy_of_an/o7tlfim/ | false | 1 |
t1_o7tldhd | So much for trying to show how "patriotic" you are, especially to a government that does not deserve it. They may win back a bit of respect today, but I am not forgiving that they are labelling open knowledge sharing across all humanity as "threats to national security", just arbitrarily as they are being labelled as s... | 0 | 0 | 2026-02-28T03:41:33 | aeroumbria | false | null | 0 | o7tldhd | false | /r/LocalLLaMA/comments/1rgn4ki/president_trump_orders_all_federal_agencies_in/o7tldhd/ | false | 0 |
t1_o7tlccr | > Get the latest powerball numbers<br>Did you provide it tools? And a framework on how to answer your queries? | 7 | 0 | 2026-02-28T03:41:21 | Budget-Juggernaut-68 | false | null | 0 | o7tlccr | false | /r/LocalLLaMA/comments/1rgrlzv/is_hosting_a_local_llm_really_as_crappy_of_an/o7tlccr/ | false | 7 |
t1_o7tlb24 | Literally cannot happen. Can he force the government agencies to stop using it? Yeah, but he can’t force Google or Nvidia to stop supplying them. They’re also private corporations. It would need serious legislation that would be challenged in court immediately. Just because the pig, I mean Trump, tweeted that doesn’t... | 17 | 0 | 2026-02-28T03:41:07 | TheArchitectOfChaos | false | null | 0 | o7tlb24 | false | /r/LocalLLaMA/comments/1rgn4ki/president_trump_orders_all_federal_agencies_in/o7tlb24/ | false | 17 |
t1_o7tl97h | The fun starts at 32GB VRAM ( and much faster GPU ) | -2 | 0 | 2026-02-28T03:40:47 | mr_zerolith | false | null | 0 | o7tl97h | false | /r/LocalLLaMA/comments/1rgrlzv/is_hosting_a_local_llm_really_as_crappy_of_an/o7tl97h/ | false | -2 |
t1_o7tl5fo | That is an oversimplification; they said they would not use it for that. But you know how guardrails work. You used GPT-OSS 120B: that kind of thing cannot be used reliably by the military if it refuses to answer because of the guardrails they put in. Guardrails are more than just these 2 demands. | -2 | 0 | 2026-02-28T03:40:06 | tomt610 | false | null | 0 | o7tl5fo | false | /r/LocalLLaMA/comments/1rgn4ki/president_trump_orders_all_federal_agencies_in/o7tl5fo/ | false | -2 |
t1_o7tl4wi | Define complex - because you opened with this > can’t do basic front end work. | 1 | 0 | 2026-02-28T03:40:00 | alphatrad | false | null | 0 | o7tl4wi | false | /r/LocalLLaMA/comments/1rg6ph3/qwen35_feels_ready_for_production_use_never_been/o7tl4wi/ | false | 1 |
t1_o7tl3y7 | you don't have to take a side you know. you can just hope they somehow kill each other like the rest of us. | -1 | 0 | 2026-02-28T03:39:50 | StewedAngelSkins | false | null | 0 | o7tl3y7 | false | /r/LocalLLaMA/comments/1rgn4ki/president_trump_orders_all_federal_agencies_in/o7tl3y7/ | false | -1 |
t1_o7tl1yk | [deleted] | 1 | 0 | 2026-02-28T03:39:28 | [deleted] | true | null | 0 | o7tl1yk | false | /r/LocalLLaMA/comments/1rgrlzv/is_hosting_a_local_llm_really_as_crappy_of_an/o7tl1yk/ | false | 1 |
t1_o7tl14f | i fit this class of concerns neatly into my development worldview which emphasizes the importance of observability in all software. Just because often it is easy to add does not mean that it is remotely easy to actually functionally achieve in whatever software stack you need to use to do something. We often overextend... | 1 | 0 | 2026-02-28T03:39:19 | michaelsoft__binbows | false | null | 0 | o7tl14f | false | /r/LocalLLaMA/comments/1rgelk1/the_supply_chain_problem_nobody_talks_about_agent/o7tl14f/ | false | 1 |
t1_o7tksh6 | What *are* the local models now that are as good as GPT4? Or maybe even GPT5? | 6 | 0 | 2026-02-28T03:37:44 | megacewl | false | null | 0 | o7tksh6 | false | /r/LocalLLaMA/comments/1rgkc1b/back_in_my_day_localllama_were_the_pioneers/o7tksh6/ | false | 6 |
t1_o7tks9o | Have a look at the model documentation. Every model has a particular way of prompting structure. Some of them require specific tags others only accept a specific format, it's not one size fits all | 3 | 0 | 2026-02-28T03:37:42 | alexndb | false | null | 0 | o7tks9o | false | /r/LocalLLaMA/comments/1rgrlzv/is_hosting_a_local_llm_really_as_crappy_of_an/o7tks9o/ | false | 3 |
t1_o7tkmvb | Yes, because education famously promotes reductive thinking and generalization. I assume this is also where you learned to spell what I can only assume was meant to be “drooling.”<br>Idk man, I’m not too fond of the guy either, but you could at least try to be normal about it. You’re making the rest of us look bad. | 6 | 0 | 2026-02-28T03:36:42 | OGVentrix | false | null | 0 | o7tkmvb | false | /r/LocalLLaMA/comments/1rg8dex/pewdiepie_finetuned_qwen25coder32b_to_beat/o7tkmvb/ | false | 6 |
t1_o7tkhb8 | Typically they perform at the geometric mean of their total and active parameters. So it has the memory usage of a 35b model, the speed of a 3b model, and the smarts of roughly a 9b model. Compared to 27b across the board.<br>It’s a lot more complicated than that but that’s a quick way to estimate MoE performance. | 24 | 0 | 2026-02-28T03:35:42 | svachalek | false | null | 0 | o7tkhb8 | false | /r/LocalLLaMA/comments/1rgpwn5/qwen_3527b_punches_waaaaay_above_its_weight_with/o7tkhb8/ | false | 24 |
t1_o7tkfrz | Certainly interesting to see where other people rank them differently.<br>I struggle with defining SOTA though.<br>Like Gemini and GPT are the 'SOTA' by most people's standards, but on many tasks Kimi, GLM, Deepseek can be on the same level or even better; the main difference is that they are closed weights vs open weight... | 2 | 0 | 2026-02-28T03:35:25 | CC_NHS | false | null | 0 | o7tkfrz | false | /r/LocalLLaMA/comments/1rgokw1/a_monthly_update_to_my_where_are_openweight/o7tkfrz/ | false | 2 |
t1_o7tkewu | Imagine being a parent and working. But obviously cool that he has the option. | 3 | 0 | 2026-02-28T03:35:15 | noage | false | null | 0 | o7tkewu | false | /r/LocalLLaMA/comments/1rg8dex/pewdiepie_finetuned_qwen25coder32b_to_beat/o7tkewu/ | false | 3 |
t1_o7tkeiw | I believe there was a series of posts showing that recent Unsloth quants were messed up; though not sure if that affects this one specifically.<br>What if you tried a quant from someone else? | 8 | 0 | 2026-02-28T03:35:11 | x11iyu | false | null | 0 | o7tkeiw | false | /r/LocalLLaMA/comments/1rgl42y/qwen_35_122b_hallucinates_horribly/o7tkeiw/ | false | 8 |
t1_o7tkdxu | Gemma 2 27B ❤️ | 5 | 0 | 2026-02-28T03:35:05 | Adventurous-Paper566 | false | null | 0 | o7tkdxu | false | /r/LocalLLaMA/comments/1rgkc1b/back_in_my_day_localllama_were_the_pioneers/o7tkdxu/ | false | 5 |
t1_o7tka7f | I think 80b is better than the 110b also. | 1 | 0 | 2026-02-28T03:34:25 | getfitdotus | false | null | 0 | o7tka7f | false | /r/LocalLLaMA/comments/1rgoygs/does_qwen35_35b_outperform_qwen3_coder_next_80b/o7tka7f/ | false | 1 |
t1_o7tk73x | It isn't a restriction so much as "we don't make or sell that". | 9 | 0 | 2026-02-28T03:33:52 | Deep90 | false | null | 0 | o7tk73x | false | /r/LocalLLaMA/comments/1rgn4ki/president_trump_orders_all_federal_agencies_in/o7tk73x/ | false | 9 |
t1_o7tk3sh | Subscribe to Anthropic services.<br>Got it. | -2 | 1 | 2026-02-28T03:33:17 | optomas | false | null | 0 | o7tk3sh | false | /r/LocalLLaMA/comments/1rgn4ki/president_trump_orders_all_federal_agencies_in/o7tk3sh/ | false | -2 |
t1_o7tk0oz | Very dumb picture. | -6 | 0 | 2026-02-28T03:32:43 | MikeLPU | false | null | 0 | o7tk0oz | false | /r/LocalLLaMA/comments/1rgokw1/a_monthly_update_to_my_where_are_openweight/o7tk0oz/ | false | -6 |
t1_o7tjye8 | Ahh yes so he wants a private company to bend to the will of the administration isn’t that like you know Socialism? | 5 | 0 | 2026-02-28T03:32:19 | TheArchitectOfChaos | false | null | 0 | o7tjye8 | false | /r/LocalLLaMA/comments/1rgn4ki/president_trump_orders_all_federal_agencies_in/o7tjye8/ | false | 5 |
t1_o7tjpd5 | 27B by a decent margin. The 35B just isn't hitting it in my own tests while 27B nails it for me. I think the 3B activated parameters of 35B is really hurting its performance | 40 | 0 | 2026-02-28T03:30:41 | theskilled42 | false | null | 0 | o7tjpd5 | false | /r/LocalLLaMA/comments/1rgpwn5/qwen_3527b_punches_waaaaay_above_its_weight_with/o7tjpd5/ | false | 40 |
t1_o7tjlby | That's an interesting way to put that – the embodiment of knowledge. And I agree with you, with one caveat: I've started to see Gemma's nuance in the 80B+ param MOE models. | 3 | 0 | 2026-02-28T03:29:57 | itsappleseason | false | null | 0 | o7tjlby | false | /r/LocalLLaMA/comments/1rgpwn5/qwen_3527b_punches_waaaaay_above_its_weight_with/o7tjlby/ | false | 3 |
t1_o7tjihz | Try it again after unsloth drops updated ggufs. | 6 | 0 | 2026-02-28T03:29:27 | paryska99 | false | null | 0 | o7tjihz | false | /r/LocalLLaMA/comments/1rg9lli/little_qwen_35_27b_and_qwen_35ba3b_models_did/o7tjihz/ | false | 6 |
t1_o7tjdxr | It's solid until it's not. Probably a bug. Report it | 1 | 0 | 2026-02-28T03:28:37 | RhubarbSimilar1683 | false | null | 0 | o7tjdxr | false | /r/LocalLLaMA/comments/1rgotwp/llamaserver_doesnt_see_rocm_device_strix_halo/o7tjdxr/ | false | 1 |
t1_o7tjd55 | I mean, at one point Reddit posted their top cities for post and [Eglin Air Force base was listed under "Most Addicted Cities"](https://web.archive.org/web/20160410083943/http://www.redditblog.com/2013/05/get-ready-for-global-reddit-meetup-day.html?m=1). They deleted the post (possibly when they moved to the new blog f... | 8 | 0 | 2026-02-28T03:28:28 | Yorn2 | false | null | 0 | o7tjd55 | false | /r/LocalLLaMA/comments/1rgn4ki/president_trump_orders_all_federal_agencies_in/o7tjd55/ | false | 8 |
t1_o7tja8x | AI companies stealing content and using it to train their own models? Who could have ever seen such a thing coming! :O | 1 | 0 | 2026-02-28T03:27:55 | NewArchive | false | null | 0 | o7tja8x | false | /r/LocalLLaMA/comments/1rcpmwn/anthropic_weve_identified_industrialscale/o7tja8x/ | false | 1 |
t1_o7tj89d | 27b vs 35b which is better?? And why? | 4 | 0 | 2026-02-28T03:27:34 | 9r4n4y | false | null | 0 | o7tj89d | false | /r/LocalLLaMA/comments/1rgpwn5/qwen_3527b_punches_waaaaay_above_its_weight_with/o7tj89d/ | false | 4 |
t1_o7tj4t6 | tfw you're the sex organs of the machine world... or maybe more like its detachable tail. truly corporations were the original unfriendly AI. | 1 | 0 | 2026-02-28T03:26:55 | StewedAngelSkins | false | null | 0 | o7tj4t6 | false | /r/LocalLLaMA/comments/1rgn4ki/president_trump_orders_all_federal_agencies_in/o7tj4t6/ | false | 1 |
t1_o7tj4fl | Use the app they both rely on directly, llama.cpp, and you can stop thinking with one command line arg | 2 | 0 | 2026-02-28T03:26:51 | StardockEngineer | false | null | 0 | o7tj4fl | false | /r/LocalLLaMA/comments/1rgp97u/sooo_much_thinking/o7tj4fl/ | false | 2 |
t1_o7tj1j1 | No need to make it personal.<br>I’m talking about model performance on complex production systems. Experience is great, but it doesn’t change the output. | 5 | 0 | 2026-02-28T03:26:19 | Virtual-Listen4507 | false | null | 0 | o7tj1j1 | false | /r/LocalLLaMA/comments/1rg6ph3/qwen35_feels_ready_for_production_use_never_been/o7tj1j1/ | false | 5 |
t1_o7tj0zy | qwen 3.5 are reasoning models sir, how do you not understand this concept | 3 | 1 | 2026-02-28T03:26:14 | llama-impersonator | false | null | 0 | o7tj0zy | false | /r/LocalLLaMA/comments/1rgoygs/does_qwen35_35b_outperform_qwen3_coder_next_80b/o7tj0zy/ | false | 3 |
t1_o7tizw4 | It is all hypotheticals, but we will find out in 6 months whether this had any teeth by looking at whether AWS dumps their shares in Anthropic and stops providing their models on Bedrock, or if the government stops using AWS. If you believe the government intends to follow through with their statements, both of those ... | 0 | 0 | 2026-02-28T03:26:02 | Similar_Director6322 | false | null | 0 | o7tizw4 | false | /r/LocalLLaMA/comments/1rgn4ki/president_trump_orders_all_federal_agencies_in/o7tizw4/ | false | 0 |
t1_o7tiy50 | [removed] | 1 | 0 | 2026-02-28T03:25:42 | [deleted] | true | null | 0 | o7tiy50 | false | /r/LocalLLaMA/comments/1rgqpn2/im_looking_for_local_spanishspeaking_communities/o7tiy50/ | false | 1 |
t1_o7tiy2n | It's not just about having just the interface with pre-built skills. It's about what it makes possible. People are automating tasks that used to take hours. They're saving time, saving money, and getting results that are sometimes better than what they'd get doing by themselves. A whole ecosystem of startups, tools and... | 1 | 0 | 2026-02-28T03:25:41 | stosssik | false | null | 0 | o7tiy2n | false | /r/LocalLLaMA/comments/1rfp6bk/why_is_openclaw_even_this_popular/o7tiy2n/ | false | 1 |
t1_o7tiula | It took me a long time to dial in Mistral settings for temperature, repetition, frequency, presence, dry multiplier, and all to top k, etc. I now have it pretty good and also use magidonka, cydonia fine tunes to make it better. I really like that ministral has vision. I can show it some ingredients and it will (with he... | 6 | 0 | 2026-02-28T03:25:03 | Helpful_Jelly5486 | false | null | 0 | o7tiula | false | /r/LocalLLaMA/comments/1rgokw1/a_monthly_update_to_my_where_are_openweight/o7tiula/ | false | 6 |
t1_o7tishf | When the smaller models come out next week they will be great for this.<br>As others have said, 27B actually has more active parameters than 122B-A10B and so is not suitable. You’d want a larger multiple of size gap anyway for a decent speed-up. | 3 | 0 | 2026-02-28T03:24:41 | Elusive_Spoon | false | null | 0 | o7tishf | false | /r/LocalLLaMA/comments/1rgp2nu/anyone_doing_speculative_decoding_with_the_new/o7tishf/ | false | 3 |
t1_o7tiq5b | [deleted] | 1 | 0 | 2026-02-28T03:24:17 | [deleted] | true | null | 0 | o7tiq5b | false | /r/LocalLLaMA/comments/1rgrlzv/is_hosting_a_local_llm_really_as_crappy_of_an/o7tiq5b/ | false | 1 |
t1_o7tiof7 | You can use Anthropic models through Amazon bedrock and google services. I guess this will need to stop? That will be quite annoying for a lot of businesses. | 2 | 0 | 2026-02-28T03:23:58 | Serprotease | false | null | 0 | o7tiof7 | false | /r/LocalLLaMA/comments/1rgn4ki/president_trump_orders_all_federal_agencies_in/o7tiof7/ | false | 2 |
t1_o7timm2 | That's going to come from your tools in webui. I've been having good luck with this option. Somebody posted this a while back here and it's the best easy option I've found to set up on my modest rig similar in capacity to yours.<br>https://github.com/Shelpuk-AI-Technology-Consulting/kindly-web-search-mcp-server | 5 | 0 | 2026-02-28T03:23:38 | Xp_12 | false | null | 0 | o7timm2 | false | /r/LocalLLaMA/comments/1rgrlzv/is_hosting_a_local_llm_really_as_crappy_of_an/o7timm2/ | false | 5 |
t1_o7tilzb | Is this something good? | 1 | 0 | 2026-02-28T03:23:31 | vvolxey | false | null | 0 | o7tilzb | false | /r/LocalLLaMA/comments/1rgn4ki/president_trump_orders_all_federal_agencies_in/o7tilzb/ | false | 1 |
t1_o7tiivj | Awesome. Pain in the ass tho! | 1 | 0 | 2026-02-28T03:22:58 | StardockEngineer | false | null | 0 | o7tiivj | false | /r/LocalLLaMA/comments/1oy7ane/i_just_discovered_something_about_lm_studio_i_had/o7tiivj/ | false | 1 |
t1_o7tigcb | I still keep Gemma 3 27B and Mistral 3.2 24B for questions like these. The MOEs are good at coding or RAG but they don't embody knowledge as well or as eloquently as these dense models. | 24 | 0 | 2026-02-28T03:22:30 | SkyFeistyLlama8 | false | null | 0 | o7tigcb | false | /r/LocalLLaMA/comments/1rgpwn5/qwen_3527b_punches_waaaaay_above_its_weight_with/o7tigcb/ | false | 24 |
t1_o7tid24 | My friend how do you not know what an instruct model is. | 0 | 1 | 2026-02-28T03:21:56 | StardockEngineer | false | null | 0 | o7tid24 | false | /r/LocalLLaMA/comments/1rgoygs/does_qwen35_35b_outperform_qwen3_coder_next_80b/o7tid24/ | false | 0 |
t1_o7ti6n0 | the issue is these guys call everything woke so it's hard to tell when they mean it. i feel like in this case the poor bastard just couldn't think of any other way to sell the taxpayer on the idea that mass domestic surveillance is something they should want him to do. | 7 | 0 | 2026-02-28T03:20:46 | StewedAngelSkins | false | null | 0 | o7ti6n0 | false | /r/LocalLLaMA/comments/1rgn4ki/president_trump_orders_all_federal_agencies_in/o7ti6n0/ | false | 7 |
t1_o7ti5mc | [removed] | 1 | 0 | 2026-02-28T03:20:35 | [deleted] | true | null | 0 | o7ti5mc | false | /r/LocalLLaMA/comments/1rd1tj9/exclusive_chinas_deepseek_trained_ai_model_on/o7ti5mc/ | false | 1 |
t1_o7ti2gf | Exactly that, also great for getting translations or small explanations, or getting things in context of something else without having to switch tabs. | 4 | 0 | 2026-02-28T03:20:01 | Technical-Earth-3254 | false | null | 0 | o7ti2gf | false | /r/LocalLLaMA/comments/1rgokw1/a_monthly_update_to_my_where_are_openweight/o7ti2gf/ | false | 4 |
t1_o7ti0ei | This is why local is addictive: you stop treating the model like magic and start treating it like a system. KV cache is the big one: it should grow with tokens kept in context, so if qwen3.5 looks stable, that screams “KV pre-allocated to `n_ctx`” or “sliding window/ring buffer,” not that KV disappeared. MoE changes com... | 3 | 0 | 2026-02-28T03:19:38 | CalvinBuild | false | null | 0 | o7ti0ei | false | /r/LocalLLaMA/comments/1rg0ir2/after_using_local_models_for_one_month_i_learned/o7ti0ei/ | false | 3 |
t1_o7thzrq | As much as I agree it's terrible, if you want to directly source something Trump has posted, that's where he posted it. | 6 | 0 | 2026-02-28T03:19:32 | t3h | false | null | 0 | o7thzrq | false | /r/LocalLLaMA/comments/1rgn4ki/president_trump_orders_all_federal_agencies_in/o7thzrq/ | false | 6 |
t1_o7thxl2 | I'd say there is almost zero chance that Google employees stop using Claude or integrating it with their services. | 1 | 1 | 2026-02-28T03:19:08 | Similar_Director6322 | false | null | 0 | o7thxl2 | false | /r/LocalLLaMA/comments/1rgn4ki/president_trump_orders_all_federal_agencies_in/o7thxl2/ | false | 1 |
t1_o7thwe7 | Sadly my Mistral 7B model isn't working under the new llama.cpp version.<br>It just starts talking with itself and prints `</user><user>`. It works fine under version b6327. | 1 | 0 | 2026-02-28T03:18:55 | Zealousideal_Nail288 | false | null | 0 | o7thwe7 | false | /r/LocalLLaMA/comments/1rgn5m0/is_anything_worth_to_do_with_a_7b_model/o7thwe7/ | false | 1 |
t1_o7thpcy | Boeing relies on suppliers for computer hardware and software, and for many of these components, Boeing is not a large enough customer to have any ability to influence what tools those suppliers are utilizing. Boeing cannot influence the open source community on what tools they will use, and I doubt they will start wr... | 5 | 0 | 2026-02-28T03:17:40 | Similar_Director6322 | false | null | 0 | o7thpcy | false | /r/LocalLLaMA/comments/1rgn4ki/president_trump_orders_all_federal_agencies_in/o7thpcy/ | false | 5 |
t1_o7thn2o | I agree with you, and as Claude Code and Codex have improved it feels pretty good. It's definitely nowhere near Anthropic but it's good enough for daily driving. I can have Claude make a plan, either review it myself or have codex review it, then have one execute it and the other review its execution. Just Claude code ... | 1 | 0 | 2026-02-28T03:17:16 | ImaginaryBluejay0 | false | null | 0 | o7thn2o | false | /r/LocalLLaMA/comments/1qrsy4q/how_close_are_openweight_models_to_sota_my_honest/o7thn2o/ | false | 1 |
t1_o7thmry | no, qwen 3.5s are qwq-level thinkslop | -2 | 1 | 2026-02-28T03:17:13 | llama-impersonator | false | null | 0 | o7thmry | false | /r/LocalLLaMA/comments/1rgoygs/does_qwen35_35b_outperform_qwen3_coder_next_80b/o7thmry/ | false | -2 |
t1_o7thb7z | They’re all instruct models. | 6 | 0 | 2026-02-28T03:15:10 | StardockEngineer | false | null | 0 | o7thb7z | false | /r/LocalLLaMA/comments/1rgoygs/does_qwen35_35b_outperform_qwen3_coder_next_80b/o7thb7z/ | false | 6 |
t1_o7thb4h | Nah. Anthropic decided to get in bed with the government with lobbying and contracts. They deserve everything they get for their stupidity. | 2 | 1 | 2026-02-28T03:15:09 | BusRevolutionary9893 | false | null | 0 | o7thb4h | false | /r/LocalLLaMA/comments/1rgn4ki/president_trump_orders_all_federal_agencies_in/o7thb4h/ | false | 2 |
t1_o7tha6q | Haha, Python library versions are a big headache; lots of open-source projects struggle with this. | 1 | 0 | 2026-02-28T03:15:00 | zxlzr | false | null | 0 | o7tha6q | false | /r/LocalLLaMA/comments/1rfg53c/lightmem_iclr_2026_lightweight_and_efficient/o7tha6q/ | false | 1 |
t1_o7th6us | It doesn't go that far. If Redhat or IBM has government contracts, then it can't have any contracts with Anthropic, which won't be that hard because there are plenty of other coding LLM providers. Azure will audit IBM if required and IBM will comply to keep its own government contracts safe.
I think Microsoft, Google ... | 3 | 0 | 2026-02-28T03:14:25 | SkyFeistyLlama8 | false | null | 0 | o7th6us | false | /r/LocalLLaMA/comments/1rgn4ki/president_trump_orders_all_federal_agencies_in/o7th6us/ | false | 3 |
t1_o7th4o7 | 8bit KV Cache? | 1 | 0 | 2026-02-28T03:14:02 | hatlessman | false | null | 0 | o7th4o7 | false | /r/LocalLLaMA/comments/1rg94wu/llmfit_one_command_to_find_what_model_runs_on/o7th4o7/ | false | 1 |
t1_o7th4h0 | I love that I have to know what the logos for all of these are just to begin to read this. Glad we didn't develop some kind of system of writing to convey information. | 15 | 0 | 2026-02-28T03:14:00 | JamesTiberiusCrunk | false | null | 0 | o7th4h0 | false | /r/LocalLLaMA/comments/1rgokw1/a_monthly_update_to_my_where_are_openweight/o7th4h0/ | false | 15 |
t1_o7tgwf2 | No radical left but radical right is ok?? | 4 | 0 | 2026-02-28T03:12:37 | Quetxolotle | false | null | 0 | o7tgwf2 | false | /r/LocalLLaMA/comments/1rgn4ki/president_trump_orders_all_federal_agencies_in/o7tgwf2/ | false | 4 |
t1_o7tgqri | [https://github.com/p-e-w/heretic](https://github.com/p-e-w/heretic) | 1 | 0 | 2026-02-28T03:11:39 | de4dee | false | null | 0 | o7tgqri | false | /r/LocalLLaMA/comments/1cerqd8/refusal_in_llms_is_mediated_by_a_single_direction/o7tgqri/ | false | 1 |
t1_o7tgofh | DAMN but I can’t believe it’s been over two years since we got Mixtral MoE. That was cutting edge at the time and now I’ve literally forgotten it existed. | 9 | 0 | 2026-02-28T03:11:14 | met_MY_verse | false | null | 0 | o7tgofh | false | /r/LocalLLaMA/comments/1rgkc1b/back_in_my_day_localllama_were_the_pioneers/o7tgofh/ | false | 9 |
t1_o7tgjbf | No human ever talks about it | 1 | 0 | 2026-02-28T03:10:20 | jikilan_ | false | null | 0 | o7tgjbf | false | /r/LocalLLaMA/comments/1rgelk1/the_supply_chain_problem_nobody_talks_about_agent/o7tgjbf/ | false | 1 |