name string | body string | score int64 | controversiality int64 | created timestamp[us] | author string | collapsed bool | edited timestamp[us] | gilded int64 | id string | locked bool | permalink string | stickied bool | ups int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
t1_o7t03og | It depends on the context window you set with an ollama Modelfile. I'm at about 18GB with a 60k context window. Still running into openclaw gateway api issues working with ollama though. | 1 | 0 | 2026-02-28T01:27:20 | Lukasjw | false | null | 0 | o7t03og | false | /r/LocalLLaMA/comments/1rawfwt/openclaw_and_ollama/o7t03og/ | false | 1 |
t1_o7t01nb | thank you so much!<br>the speed difference was due to two things:<br>- the context window, mine was 128k, OP's was 64k (`-c 65536`)<br>- OP probably has a stronger CPU than mine, and was using 20 threads (`-t 20`), mine was only 8 threads :D | 1 | 0 | 2026-02-28T01:26:59 | bobaburger | false | null | 0 | o7t01nb | false | /r/LocalLLaMA/comments/1rg4zqv/followup_qwen3535ba3b_7_communityrequested/o7t01nb/ | false | 1 |
t1_o7szzw1 | I use nvidia nemotron 30B exclusively to test my homegrown agentic stuffs. It's surprisingly capable and I know that when I don't use Nvidia's server, I still have the same model on my GPU, just slower, but not unusably slow. | 15 | 0 | 2026-02-28T01:26:41 | o0genesis0o | false | null | 0 | o7szzw1 | false | /r/LocalLLaMA/comments/1rgokw1/a_monthly_update_to_my_where_are_openweight/o7szzw1/ | false | 15 |
t1_o7szze0 | Is it running 85 tokens/sec tg and 7,500-15,000 tokens/sec pp speed?<br>That's how fast the MXFP4 quant is running on vLLM with a single RTX 6000 Max-Q.<br>I have not been able to get full performance out of llama.cpp with any of the hybrid attention Qwen models | 3 | 0 | 2026-02-28T01:26:36 | TokenRingAI | false | null | 0 | o7szze0 | false | /r/LocalLLaMA/comments/1rgiait/switched_to_qwen35122ba10bi1gguf/o7szze0/ | false | 3 |
t1_o7szy4v | All the major tech companies are consuming and supplying Anthropic, and utilizing their technology is much more valuable than any US government contracts. Abandoning the AI market is not anything a major tech company is going to do, and there is no feasible way to utilize or develop AI in a market-leading fashion with... | 7 | 1 | 2026-02-28T01:26:23 | Similar_Director6322 | false | null | 0 | o7szy4v | false | /r/LocalLLaMA/comments/1rgn4ki/president_trump_orders_all_federal_agencies_in/o7szy4v/ | false | 7 |
t1_o7szps0 | "Get a hobby" you say shitting on someone who is sharing their hobby. God you're really fitting the Reddit stereotype, huh | 1 | 0 | 2026-02-28T01:24:55 | TheBurkMeister | false | null | 0 | o7szps0 | false | /r/LocalLLaMA/comments/1r9gve8/i_feel_left_behind_what_is_special_about_openclaw/o7szps0/ | false | 1 |
t1_o7szke0 | Nah, I posted it cuz the refusal is funny | 2 | 0 | 2026-02-28T01:23:58 | Witty_Mycologist_995 | false | null | 0 | o7szke0 | false | /r/LocalLLaMA/comments/1rg8dex/pewdiepie_finetuned_qwen25coder32b_to_beat/o7szke0/ | false | 2 |
t1_o7szjuo | I would disable all GUI stuff to save RAM. | 1 | 0 | 2026-02-28T01:23:52 | foldl-li | false | null | 0 | o7szjuo | false | /r/LocalLLaMA/comments/1rg87bj/qwen3535ba3b_running_on_a_raspberry_pi_5_16gb_and/o7szjuo/ | false | 1 |
t1_o7szcnu | Could you elaborate on the web browser assistant use case, please? | 14 | 0 | 2026-02-28T01:22:39 | MaverickPT | false | null | 0 | o7szcnu | false | /r/LocalLLaMA/comments/1rgokw1/a_monthly_update_to_my_where_are_openweight/o7szcnu/ | false | 14 |
t1_o7szcip | Ty | 8 | 0 | 2026-02-28T01:22:38 | triynizzles1 | false | null | 0 | o7szcip | false | /r/LocalLLaMA/comments/1rgn4ki/president_trump_orders_all_federal_agencies_in/o7szcip/ | false | 8 |
t1_o7szcbk | >Altman stated, "AI should not be used for mass surveillance or autonomous lethal weapons, and humans must remain involved in high-risk automated decision-making; these are our primary red lines."<br>Unless of course it's his company doing it, in which case I'm sure he'd be happy to "figure out a solution" so that maybe OpenA... | 60 | 0 | 2026-02-28T01:22:36 | Toooooool | false | null | 0 | o7szcbk | false | /r/LocalLLaMA/comments/1rgn4ki/president_trump_orders_all_federal_agencies_in/o7szcbk/ | false | 60 |
t1_o7sz905 | Using cmd prompt | 1 | 0 | 2026-02-28T01:22:01 | EbbNorth7735 | false | null | 0 | o7sz905 | false | /r/LocalLLaMA/comments/1rglgma/qwen_35_llamacpp_turn_of_reasoning_and_performance/o7sz905/ | false | 1 |
t1_o7sz0nu | They were released within weeks of one another; 3.5 is based on the Next architecture, and 80B > 35B. Plus "coder" indicates it was specifically trained on code. I would assume this is the case. | 7 | 0 | 2026-02-28T01:20:37 | EbbNorth7735 | false | null | 0 | o7sz0nu | false | /r/LocalLLaMA/comments/1rgoygs/does_qwen35_35b_outperform_qwen3_coder_next_80b/o7sz0nu/ | false | 7 |
t1_o7syy0q | “Effective immediately, no **contractor, supplier, or partner** that does business with the United States military **may conduct any commercial activity with Anthropic** “<br>Note the phrasing. | 43 | 0 | 2026-02-28T01:20:09 | DistanceSolar1449 | false | null | 0 | o7syy0q | false | /r/LocalLLaMA/comments/1rgn4ki/president_trump_orders_all_federal_agencies_in/o7syy0q/ | false | 43 |
t1_o7syvg3 | I stopped reading after Q4_0.<br>It's 2026, bro. | -8 | 0 | 2026-02-28T01:19:43 | Ok-Measurement-1575 | false | null | 0 | o7syvg3 | false | /r/LocalLLaMA/comments/1rgl42y/qwen_35_122b_hallucinates_horribly/o7syvg3/ | false | -8 |
t1_o7sysmr | Too many variables at play without my hands on equipment to offer any further advice, but the next direction I'd go to play with is --cache-ram starting with 0, -1, and values <=8. also the --mlock parameter. Sorry if the last comment came off odd. It looks like you have constant drive activity, but the sample size is ... | 1 | 0 | 2026-02-28T01:19:14 | Xp_12 | false | null | 0 | o7sysmr | false | /r/LocalLLaMA/comments/1rgaw5c/gpu_shared_vram_makes_qwen3535b_prompt_processing/o7sysmr/ | false | 1 |
t1_o7sys5k | You guys are such pieces of shit lol someone built something they find cool and helpful and your first instinct is to shit on them because you hate AI | 0 | 0 | 2026-02-28T01:19:09 | TheBurkMeister | false | null | 0 | o7sys5k | false | /r/LocalLLaMA/comments/1r9gve8/i_feel_left_behind_what_is_special_about_openclaw/o7sys5k/ | false | 0 |
t1_o7syigc | Thank you! It only supports Nvidia cards at the moment though (sorry), I only own Nvidia cards right now. Offloading some layers is something I want to experiment with for decode for speed reasons but it could also help with system RAM constraints which is an interesting idea. I think you would probably need a bit mo... | 2 | 0 | 2026-02-28T01:17:30 | mrstoatey | false | null | 0 | o7syigc | false | /r/LocalLLaMA/comments/1rgfm00/i_built_a_hybrid_moe_runtime_that_does_3324_toks/o7syigc/ | false | 2 |
t1_o7syhb3 | [removed] | 1 | 0 | 2026-02-28T01:17:18 | [deleted] | true | null | 0 | o7syhb3 | false | /r/LocalLLaMA/comments/1rgkc1b/back_in_my_day_localllama_were_the_pioneers/o7syhb3/ | false | 1 |
t1_o7syg5k | This post was reported for being off-topic - I have approved it.<br>This is not local model news but is a development in the ecosystem that has had and will have broad repercussions. As such, it is worthy of conversation in this sub. | 202 | 0 | 2026-02-28T01:17:05 | rm-rf-rm | false | null | 0 | o7syg5k | false | /r/LocalLLaMA/comments/1rgn4ki/president_trump_orders_all_federal_agencies_in/o7syg5k/ | false | 202 |
t1_o7syazr | It's sadly not local, and my system couldn't run it even if I wanted to, but Qwen 3 Max Thinking is the best Linux problem solver I have ever used. Something doesn't work, and I can't make it work, even after doing research? I just throw it together with logs or whatever I have in the WebUI, and it gets it done.<br>Very ... | 11 | 0 | 2026-02-28T01:16:12 | Technical-Earth-3254 | false | null | 0 | o7syazr | false | /r/LocalLLaMA/comments/1rgokw1/a_monthly_update_to_my_where_are_openweight/o7syazr/ | false | 11 |
t1_o7sy8fr | They do online searches. | 1 | 0 | 2026-02-28T01:15:46 | TopTippityTop | false | null | 0 | o7sy8fr | false | /r/LocalLLaMA/comments/1rgkyt5/qwen35_27b_scores_42_on_intelligence_index_and_is/o7sy8fr/ | false | 1 |
t1_o7sy7o6 | Imho Qwen 3 Coder Next 80B is better than Qwen 3.5 35B A3B. | 31 | 0 | 2026-02-28T01:15:38 | Cool-Chemical-5629 | false | null | 0 | o7sy7o6 | false | /r/LocalLLaMA/comments/1rgoygs/does_qwen35_35b_outperform_qwen3_coder_next_80b/o7sy7o6/ | false | 31 |
t1_o7sy6bq | this 27b and 35b-a3b release is the first time i've felt like i could enjoy using local with opencode on a single 3090. getting ~30t/s and ~100t/s respectively and they've been working great. | 1 | 0 | 2026-02-28T01:15:24 | ydnar | false | null | 0 | o7sy6bq | false | /r/LocalLLaMA/comments/1rgkyt5/qwen35_27b_scores_42_on_intelligence_index_and_is/o7sy6bq/ | false | 1 |
t1_o7sy312 | [removed] | 1 | 0 | 2026-02-28T01:14:50 | [deleted] | true | null | 0 | o7sy312 | false | /r/LocalLLaMA/comments/1rgomsq/ubuntu_or_debian_speed_difference_on_llamacpp/o7sy312/ | false | 1 |
t1_o7sy21g | the distro doesn't matter, pick your fav<br>(i use debian btw) | 2 | 0 | 2026-02-28T01:14:40 | llama-impersonator | false | null | 0 | o7sy21g | false | /r/LocalLLaMA/comments/1rgomsq/ubuntu_or_debian_speed_difference_on_llamacpp/o7sy21g/ | false | 2 |
t1_o7sy08f | Rule 1 - Duplicate post. Lost count of the number of Artificial Analysis index screengrab posts for Qwen3.5. | 1 | 0 | 2026-02-28T01:14:22 | LocalLLaMA-ModTeam | false | null | 0 | o7sy08f | true | /r/LocalLLaMA/comments/1rgkyt5/qwen35_27b_scores_42_on_intelligence_index_and_is/o7sy08f/ | true | 1 |
t1_o7sxzet | 100% marketing. | -6 | 0 | 2026-02-28T01:14:13 | Ok-Measurement-1575 | false | null | 0 | o7sxzet | false | /r/LocalLLaMA/comments/1rgn4ki/president_trump_orders_all_federal_agencies_in/o7sxzet/ | false | -6 |
t1_o7sxvdx | You got it to work with SGLang? I got all sorts of errors, guess I need to try again. What GPU? | 2 | 0 | 2026-02-28T01:13:33 | MLWillRuleTheWorld | false | null | 0 | o7sxvdx | false | /r/LocalLLaMA/comments/1re9xbi/qwen35_on_vllm/o7sxvdx/ | false | 2 |
t1_o7sxths | Why no love for step 3.5 flash? | 1 | 0 | 2026-02-28T01:13:13 | Available-Craft-5795 | false | null | 0 | o7sxths | false | /r/LocalLLaMA/comments/1rgixxr/what_models_do_you_think_owned_february/o7sxths/ | false | 1 |
t1_o7sxs8v | I can highly recommend Ministral 14B Instruct in whatever quant your system can run with >20t/s as a web-browser assistant. I am using it with Brave's "Leo" and it's great! For sure not SOTA, but sometimes you just need something that can see, understand and "work". | 23 | 0 | 2026-02-28T01:13:00 | Technical-Earth-3254 | false | null | 0 | o7sxs8v | false | /r/LocalLLaMA/comments/1rgokw1/a_monthly_update_to_my_where_are_openweight/o7sxs8v/ | false | 23 |
t1_o7sxrwl | Wth happened to Grok? | 1 | 0 | 2026-02-28T01:12:57 | No_Conversation9561 | false | null | 0 | o7sxrwl | false | /r/LocalLLaMA/comments/1rgkyt5/qwen35_27b_scores_42_on_intelligence_index_and_is/o7sxrwl/ | false | 1 |
t1_o7sxexw | thanks for your tests. i'm wondering why mxfp4_moe is slower than q4_k_s for similar file size. | 1 | 0 | 2026-02-28T01:10:43 | Conscious_Chef_3233 | false | null | 0 | o7sxexw | false | /r/LocalLLaMA/comments/1rgel19/new_qwen3535ba3b_unsloth_dynamic_ggufs_benchmarks/o7sxexw/ | false | 1 |
t1_o7sxdoq | I'm not going to make this sound like it's the worst thing in the world, or that you aren't entitled to your own decisions. But as someone whose neurons fire intensely with all sorts of activity when reading or hearing language, I find the use of all lowercase letters to be.. let's just say unpleasant. In the same way ... | 1 | 0 | 2026-02-28T01:10:30 | _-_David | false | null | 0 | o7sxdoq | false | /r/LocalLLaMA/comments/1rgkzlo/realtime_speech_to_speech_engine_runs_fully_local/o7sxdoq/ | false | 1 |
t1_o7sxakb | Anthropic only had _two_ guardrails:<br>1. No mass surveillance of United States citizens in the United States.<br>2. No pure-AI autonomous weapon systems without a human somewhere in the loop.<br>They didn't even rule out (2) forever, saying that it might be necessary someday if other countries went down that route. But th... | 52 | 0 | 2026-02-28T01:09:58 | vtkayaker | false | null | 0 | o7sxakb | false | /r/LocalLLaMA/comments/1rgn4ki/president_trump_orders_all_federal_agencies_in/o7sxakb/ | false | 52 |
t1_o7sxa6z | Anything 250B and under I'll run locally. If I quantize beyond Q4 I try and use OpenRouter to get a better feel for the model and see how much quantization impacts it. | 13 | 0 | 2026-02-28T01:09:55 | ForsookComparison | false | null | 0 | o7sxa6z | false | /r/LocalLLaMA/comments/1rgokw1/a_monthly_update_to_my_where_are_openweight/o7sxa6z/ | false | 13 |
t1_o7sx7qg | Please create a proper GitHub issue, this is not the appropriate place for it. Thank you! | 1 | 0 | 2026-02-28T01:09:29 | RIP26770 | false | null | 0 | o7sx7qg | false | /r/LocalLLaMA/comments/1rabo34/local_tts_server_with_voice_cloning_nearrealtime/o7sx7qg/ | false | 1 |
t1_o7sx72m | My understanding is that ngl (permanent layer offloading) is the default in llama, that’s certainly what I’ve seen people using. Your link, if I understand it correctly, is more along the lines of making decisions live whether to offload batches to the GPU for processing whereas krasis is designed and optimised to alw... | 1 | 0 | 2026-02-28T01:09:22 | mrstoatey | false | null | 0 | o7sx72m | false | /r/LocalLLaMA/comments/1rgfm00/i_built_a_hybrid_moe_runtime_that_does_3324_toks/o7sx72m/ | false | 1 |
t1_o7sx5ts | the reason ubuntu has newer drivers is because they're non-free. The search bar thing doesn't do online searches unless you go in the privacy settings and manually enable it. Also you don't even need a desktop for llama so it's irrelevant if you don't install it | 1 | 0 | 2026-02-28T01:09:09 | h4ck3r_n4m3 | false | null | 0 | o7sx5ts | false | /r/LocalLLaMA/comments/1rgomsq/ubuntu_or_debian_speed_difference_on_llamacpp/o7sx5ts/ | false | 1 |
t1_o7swxsw | If you think about it, it's absolutely stupid to give Anthropic, and by extension, Amazon Cloud, the data of all security forces of a country. Hundreds of humans have access to all the prompts/responses and every single one of them can be hacked. | 0 | 1 | 2026-02-28T01:07:46 | ortegaalfredo | false | null | 0 | o7swxsw | false | /r/LocalLLaMA/comments/1rgn4ki/president_trump_orders_all_federal_agencies_in/o7swxsw/ | false | 0 |
t1_o7swxha | You don't run anything locally, though, right? | 6 | 0 | 2026-02-28T01:07:43 | Ok-Measurement-1575 | false | null | 0 | o7swxha | false | /r/LocalLLaMA/comments/1rgokw1/a_monthly_update_to_my_where_are_openweight/o7swxha/ | false | 6 |
t1_o7swqvr | How does this compare with llama.cpp's -ngl 0 option with a sufficiently high ubatch?<br>Now if only we could use the dGPU for prefill while also using the iGPU for better decode throughput than CPU alone. | 2 | 0 | 2026-02-28T01:06:35 | EugenePopcorn | false | null | 0 | o7swqvr | false | /r/LocalLLaMA/comments/1rgfm00/i_built_a_hybrid_moe_runtime_that_does_3324_toks/o7swqvr/ | false | 2 |
t1_o7swpe1 | [https://www.reddit.com/r/LocalLLaMA/comments/1rgkyt5/comment/o7sw7kw/?utm\_source=share&utm\_medium=web3x&utm\_name=web3xcss&utm\_term=1&utm\_content=share\_button](https://www.reddit.com/r/LocalLLaMA/comments/1rgkyt5/comment/o7sw7kw/?utm_source=share&utm_medium=web3x&utm_name=web3xcss&utm_term=1&utm_content=share_but... | 1 | 0 | 2026-02-28T01:06:20 | Cool-Chemical-5629 | false | null | 0 | o7swpe1 | false | /r/LocalLLaMA/comments/1rgkyt5/qwen35_27b_scores_42_on_intelligence_index_and_is/o7swpe1/ | false | 1 |
t1_o7swotz | No, it just means they won't do business with the US government if this is enforced. | 11 | 1 | 2026-02-28T01:06:14 | Similar_Director6322 | false | null | 0 | o7swotz | false | /r/LocalLLaMA/comments/1rgn4ki/president_trump_orders_all_federal_agencies_in/o7swotz/ | false | 11 |
t1_o7swnq0 | There is so much fake stuff on github with hundreds or even thousands of stars. It's really a shame because it's such a useful platform but it's really difficult to trust anything if you don't 100% understand how it functions. | 1 | 0 | 2026-02-28T01:06:03 | What_Do_It | false | null | 0 | o7swnq0 | false | /r/LocalLLaMA/comments/1rfxi64/is_microsoft_going_to_train_llm_on_this_github_is/o7swnq0/ | false | 1 |
t1_o7swezo | I do not believe this. | -9 | 0 | 2026-02-28T01:04:35 | eredhuin | false | null | 0 | o7swezo | false | /r/LocalLLaMA/comments/1rg8dex/pewdiepie_finetuned_qwen25coder32b_to_beat/o7swezo/ | false | -9 |
t1_o7swdfo | Honestly I think that game is absurdly boring, and yet the Helldiver's America parody is one of the most accurate portrayals of the current government. | 9 | 0 | 2026-02-28T01:04:19 | jazir555 | false | null | 0 | o7swdfo | false | /r/LocalLLaMA/comments/1rgn4ki/president_trump_orders_all_federal_agencies_in/o7swdfo/ | false | 9 |
t1_o7swd1w | Yeah, but model by model. | -1 | 0 | 2026-02-28T01:04:16 | ReasonablePossum_ | false | null | 0 | o7swd1w | false | /r/LocalLLaMA/comments/1rg94wu/llmfit_one_command_to_find_what_model_runs_on/o7swd1w/ | false | -1 |
t1_o7sw9a7 | Interesting that 122B and 27B are close to the same. The geometric mean of 122 and 10 is roughly 35. | 2 | 0 | 2026-02-28T01:03:38 | EbbNorth7735 | false | null | 0 | o7sw9a7 | false | /r/LocalLLaMA/comments/1rgkyt5/qwen35_27b_scores_42_on_intelligence_index_and_is/o7sw9a7/ | false | 2 |
t1_o7sw7kw | If you think the models don't need that knowledge, you don't really understand this problem.<br>I already tried to use the small models with tools to mimic the knowledge of the big models. Just like you, I was naive to think that it's going to work if the model could just search the web. Guess what. It didn't work and th... | 1 | 0 | 2026-02-28T01:03:20 | Cool-Chemical-5629 | false | null | 0 | o7sw7kw | false | /r/LocalLLaMA/comments/1rgkyt5/qwen35_27b_scores_42_on_intelligence_index_and_is/o7sw7kw/ | false | 1 |
t1_o7sw5p7 | instruction following has noticeably tightened up. the older Qwen2.5 series would occasionally go rogue on complex multi-step prompts, Qwen3.5 is much more reliable there. the 35B A3B hitting production-grade quality at MoE efficiency is kind of a big deal for self-hosted deployments | 3 | 0 | 2026-02-28T01:03:01 | theagentledger | false | null | 0 | o7sw5p7 | false | /r/LocalLLaMA/comments/1rg6ph3/qwen35_feels_ready_for_production_use_never_been/o7sw5p7/ | false | 3 |
t1_o7sw50o | This is amazing!! I have a 6800xt and 7700x gaming PC running idle at the moment with 32gb system ram and 16gb VRAM, do you think we could fit a Q4_K_M Qwen3.5 35b a3b model by shifting more of the layers onto the unused ~8gb VRAM shown in the screenshots? Or do I just not have enough DDR5 to take advantage of this fra... | 2 | 0 | 2026-02-28T01:02:54 | Qwen30bEnjoyer | false | null | 0 | o7sw50o | false | /r/LocalLLaMA/comments/1rgfm00/i_built_a_hybrid_moe_runtime_that_does_3324_toks/o7sw50o/ | false | 2 |
t1_o7sw3pu | this is exactly the tool the community needed. half the questions in this sub are "will X model fit in Y GB" - having a one-liner for this should cut those down a lot. would love to see it account for kv cache overhead at different context lengths too | 0 | 0 | 2026-02-28T01:02:42 | theagentledger | false | null | 0 | o7sw3pu | false | /r/LocalLLaMA/comments/1rg94wu/llmfit_one_command_to_find_what_model_runs_on/o7sw3pu/ | false | 0 |
t1_o7sw3ln | Have you tried the dense 27B Qwen? I had some good creative writing results with it. | 3 | 0 | 2026-02-28T01:02:40 | metigue | false | null | 0 | o7sw3ln | false | /r/LocalLLaMA/comments/1rghfqj/february_is_almost_over_are_you_satisfied/o7sw3ln/ | false | 3 |
t1_o7sw3bd | Ok but what are you doing with it | 1 | 0 | 2026-02-28T01:02:38 | amacgregor | false | null | 0 | o7sw3bd | false | /r/LocalLLaMA/comments/1rfp6bk/why_is_openclaw_even_this_popular/o7sw3bd/ | false | 1 |
t1_o7sw2zw | okay cause Canonical uploads whatever you type in Ubuntu Desktop search bar | 0 | 0 | 2026-02-28T01:02:34 | ClimateBoss | false | null | 0 | o7sw2zw | false | /r/LocalLLaMA/comments/1rgomsq/ubuntu_or_debian_speed_difference_on_llamacpp/o7sw2zw/ | false | 0 |
t1_o7sw1uc | 3k+ tok/s prefill on a single 5080 is wild. the hybrid CPU/GPU approach for MoE makes total sense - why load experts you might not use. curious what the decode speed looks like at longer contexts though, that's usually where things get spicy | 2 | 0 | 2026-02-28T01:02:22 | theagentledger | false | null | 0 | o7sw1uc | false | /r/LocalLLaMA/comments/1rgfm00/i_built_a_hybrid_moe_runtime_that_does_3324_toks/o7sw1uc/ | false | 2 |
t1_o7svzh3 | ## Original Rankings<br>[This post I made 1 month ago](https://reddit.com/r/LocalLLaMA/comments/1qrsy4q/how_close_are_openweight_models_to_sota_my_honest/)<br>## What changed in this last month<br>- Revisited some Mistral/Magistral models.. was very meh'd. They write decently but at every size/weight there's a model I'd rath... | 40 | 0 | 2026-02-28T01:01:58 | ForsookComparison | false | null | 0 | o7svzh3 | false | /r/LocalLLaMA/comments/1rgokw1/a_monthly_update_to_my_where_are_openweight/o7svzh3/ | false | 40 |
t1_o7svxnj | what do you do in your free time? | 1 | 0 | 2026-02-28T01:01:39 | Zilch274 | false | null | 0 | o7svxnj | false | /r/LocalLLaMA/comments/1r9gve8/i_feel_left_behind_what_is_special_about_openclaw/o7svxnj/ | false | 1 |
t1_o7svxiu | the dynamic quant approach is genuinely smart - preserving precision on attention layers while compressing FFN layers harder is exactly the right tradeoff. been watching the perplexity gap between Q4_K_M and UD equivalents shrink with each release. unsloth keeps delivering | 1 | 0 | 2026-02-28T01:01:38 | theagentledger | false | null | 0 | o7svxiu | false | /r/LocalLLaMA/comments/1rgel19/new_qwen3535ba3b_unsloth_dynamic_ggufs_benchmarks/o7svxiu/ | false | 1 |
t1_o7svx7v | honestly 7b models are underrated for specific tasks. i use them for:<br>- commit message generation (works great, fast, stays local)<br>- quick text cleanup and reformatting<br>- simple code explanation when im reading unfamiliar repos<br>where they fall apart is anything multi-step or where you need it to hold context across a... | 4 | 0 | 2026-02-28T01:01:35 | Pitiful-Impression70 | false | null | 0 | o7svx7v | false | /r/LocalLLaMA/comments/1rgn5m0/is_anything_worth_to_do_with_a_7b_model/o7svx7v/ | false | 4 |
t1_o7svta6 | the difference will be negligible. this isn't what you should be caring about.<br>i just realized op put "less malware" as a pro for debian. | 2 | 0 | 2026-02-28T01:00:55 | HealthyCommunicat | false | null | 0 | o7svta6 | false | /r/LocalLLaMA/comments/1rgomsq/ubuntu_or_debian_speed_difference_on_llamacpp/o7svta6/ | false | 2 |
t1_o7svr9z | | 68 | 0 | 2026-02-28T01:00:34 | Fault23 | false | null | 0 | o7svr9z | false | /r/LocalLLaMA/comments/1rgn4ki/president_trump_orders_all_federal_agencies_in/o7svr9z/ | false | 68 |
t1_o7svhtz | There are those who have money for contracts and services and there are those who have hardware to run local models. And the U.S. grown crop of local models isn’t very competitive when you consider the alternative. | 0 | 0 | 2026-02-28T00:58:58 | aaronr_90 | false | null | 0 | o7svhtz | false | /r/LocalLLaMA/comments/1rgn4ki/president_trump_orders_all_federal_agencies_in/o7svhtz/ | false | 0 |
t1_o7svgj9 | its arch or nuthin | 1 | 0 | 2026-02-28T00:58:45 | jwpbe | false | null | 0 | o7svgj9 | false | /r/LocalLLaMA/comments/1rgomsq/ubuntu_or_debian_speed_difference_on_llamacpp/o7svgj9/ | false | 1 |
t1_o7svf7v | unfortunately its not working. I was really excited to have this as a backend for a project im working on. | 1 | 0 | 2026-02-28T00:58:32 | Street-Buyer-2428 | false | null | 0 | o7svf7v | false | /r/LocalLLaMA/comments/1rg94wu/llmfit_one_command_to_find_what_model_runs_on/o7svf7v/ | false | 1 |
t1_o7svckz | 2023 - The good old days.<br>https://old.reddit.com/r/LocalLLaMA/comments/18h2vdj/if_you_have_issue_running/ | 13 | 0 | 2026-02-28T00:58:05 | mantafloppy | false | null | 0 | o7svckz | false | /r/LocalLLaMA/comments/1rgkc1b/back_in_my_day_localllama_were_the_pioneers/o7svckz/ | false | 13 |
t1_o7sv9vl | I'm going to laugh if they end up going with a rebranded Qwen or something, as their "more patriotic" platform. | 2 | 0 | 2026-02-28T00:57:39 | Sambojin1 | false | null | 0 | o7sv9vl | false | /r/LocalLLaMA/comments/1rgn4ki/president_trump_orders_all_federal_agencies_in/o7sv9vl/ | false | 2 |
t1_o7sv9av | They don't have the depth of knowledge - at 27B parameters it just can't hold the amount of information in its neural net that a larger model like Kimi K2.5 or GLM 5 can.<br>However, what the benchmarks show (and my real world usage) is that the model is highly "intelligent" in the way it is consistent with data it's gi... | 2 | 0 | 2026-02-28T00:57:33 | metigue | false | null | 0 | o7sv9av | false | /r/LocalLLaMA/comments/1rgkyt5/qwen35_27b_scores_42_on_intelligence_index_and_is/o7sv9av/ | false | 2 |
t1_o7sv6k3 | Are you leaking classified material? Because you can't possibly know that otherwise. | 1 | 0 | 2026-02-28T00:57:05 | postitnote | false | null | 0 | o7sv6k3 | false | /r/LocalLLaMA/comments/1rgn4ki/president_trump_orders_all_federal_agencies_in/o7sv6k3/ | false | 1 |
t1_o7sv5lt | Yeah man I was around when GPT-J got quantized and I could finally fit it on my GPU. That was a 6B and was basically the first open source LLM that consistently gave coherent output. Then it was finetuned into Pygmalion and it was incredible,<br>Then Llama leaked and everybody thought it was a hoax for like 3 days until ... | 33 | 0 | 2026-02-28T00:56:55 | SuchAGoodGirlsDaddy | false | null | 0 | o7sv5lt | false | /r/LocalLLaMA/comments/1rgkc1b/back_in_my_day_localllama_were_the_pioneers/o7sv5lt/ | false | 33 |
t1_o7sv341 | true | 1 | 0 | 2026-02-28T00:56:30 | Fault23 | false | null | 0 | o7sv341 | false | /r/LocalLLaMA/comments/1rgek4m/what_are_your_expectations_for_the_small_series/o7sv341/ | false | 1 |
t1_o7sv26e | yep...funny true story. I asked chatgpt about this and it told me to turn the damn 'thinking' off. Worked like a charm (of course chat doesn't want to be replaced!) | 1 | 0 | 2026-02-28T00:56:21 | yuhjulio | false | null | 0 | o7sv26e | false | /r/LocalLLaMA/comments/1rec6bs/qwen35_thinking_for_too_long/o7sv26e/ | false | 1 |
t1_o7suy3f | [removed] | 1 | 0 | 2026-02-28T00:55:39 | [deleted] | true | null | 0 | o7suy3f | false | /r/LocalLLaMA/comments/1rgo5re/mcp_that_issues_virtual_justintime_visa_cards_for/o7suy3f/ | false | 1 |
t1_o7sup2u | “In conjunction with the President's directive for the Federal Government to cease all use of Anthropic's technology, I am directing the Department of War to designate Anthropic a Supply-Chain Risk to National Security. Effective immediately, no contractor, supplier, or partner that does business with the United States... | 51 | 0 | 2026-02-28T00:54:10 | DistanceSolar1449 | false | null | 0 | o7sup2u | false | /r/LocalLLaMA/comments/1rgn4ki/president_trump_orders_all_federal_agencies_in/o7sup2u/ | false | 51 |
t1_o7suob5 | Qwen3-VL-32B was finally able to convince me that knowledge-depth and instruction-following had been matched at smaller sizes. | 19 | 0 | 2026-02-28T00:54:02 | ForsookComparison | false | null | 0 | o7suob5 | false | /r/LocalLLaMA/comments/1rgkc1b/back_in_my_day_localllama_were_the_pioneers/o7suob5/ | false | 19 |
t1_o7sulxk | I can assure you that's not the case. I have at least 2 gigs of RAM available until the crash moment. Furthermore, if I disable shared GPU memory this does not happen with or without MMAP and I have a happy 100k context with no stability issues, just 3 times slower PP.<br>With --no-mmap and reduced shared GPU memory, it ... | 1 | 0 | 2026-02-28T00:53:37 | Xantrk | false | null | 0 | o7sulxk | false | /r/LocalLLaMA/comments/1rgaw5c/gpu_shared_vram_makes_qwen3535b_prompt_processing/o7sulxk/ | false | 1 |
t1_o7sugs4 | What replaced 3.3 70B in your workflows? | 8 | 0 | 2026-02-28T00:52:46 | SuchAGoodGirlsDaddy | false | null | 0 | o7sugs4 | false | /r/LocalLLaMA/comments/1rgkc1b/back_in_my_day_localllama_were_the_pioneers/o7sugs4/ | false | 8 |
t1_o7sud40 | [removed] | 1 | 0 | 2026-02-28T00:52:10 | [deleted] | true | null | 0 | o7sud40 | false | /r/LocalLLaMA/comments/1rgn4ki/president_trump_orders_all_federal_agencies_in/o7sud40/ | false | 1 |
t1_o7sublz | If I was the enemy, I would love it if US used an autoregressive cloud transformer model to try to shoot me. By the time it gets through its reasoning block, I will have run to the next town over, plus I just have to jam radio signals to be safe. Cherry on top if the company behind the tech never wanted to do it and w... | 12 | 0 | 2026-02-28T00:51:54 | catplusplusok | false | null | 0 | o7sublz | false | /r/LocalLLaMA/comments/1rgn4ki/president_trump_orders_all_federal_agencies_in/o7sublz/ | false | 12 |
t1_o7su6f5 | I’m developing a direction-free approach based on matrix optimization (article coming soon), and SVD on the results seems to indicate that the “true” dimension of the refusal manifold (position of the last major spectral gap) is typically between 10 and 30 for the models I’ve tested. | 1 | 0 | 2026-02-28T00:51:03 | -p-e-w- | false | null | 0 | o7su6f5 | false | /r/LocalLLaMA/comments/1rf6s0d/qwen3527bhereticgguf/o7su6f5/ | false | 1 |
t1_o7su4h1 | >I feel like I'm missing something obvious,<br>It's an inference engine wrapper, it's not an IDE to code in nor is it an LLM harness to orchestrate this. You need an IDE like VScode with some harness plugin like Roo, or just a CLI sort of harness like OpenCode if you want that same gemini-CLI experience. | 1 | 0 | 2026-02-28T00:50:43 | Marksta | false | null | 0 | o7su4h1 | false | /r/LocalLLaMA/comments/1rgns5u/lm_studio_can_it_load_a_small_local_folder_of_code/o7su4h1/ | false | 1 |
t1_o7su40o | 2024? *_USE ROPE_*?!<br>In 2023 RoPE scaling was literally being invented here. | 10 | 0 | 2026-02-28T00:50:39 | SuchAGoodGirlsDaddy | false | null | 0 | o7su40o | false | /r/LocalLLaMA/comments/1rgkc1b/back_in_my_day_localllama_were_the_pioneers/o7su40o/ | false | 10 |
t1_o7stskj | “In conjunction with the President's directive for the Federal Government to cease all use of Anthropic's technology, I am directing the Department of War to designate Anthropic a Supply-Chain Risk to National Security. Effective immediately, no contractor, supplier, or partner that does business with the United States... | 22 | 0 | 2026-02-28T00:48:44 | DistanceSolar1449 | false | null | 0 | o7stskj | false | /r/LocalLLaMA/comments/1rgn4ki/president_trump_orders_all_federal_agencies_in/o7stskj/ | false | 22 |
t1_o7stnfh | In theory this is why all the old Mac studio gpus are still forever valuable, low compute doesn't matter if you can separate the phases, so if the idea came to RPC where models are streamed once into your strongest GPU(s) via fastest pcie in quantized form, we'd have two different on-demand pre-fill processes, 1 for sm... | 1 | 0 | 2026-02-28T00:47:54 | Aaaaaaaaaeeeee | false | null | 0 | o7stnfh | false | /r/LocalLLaMA/comments/1rgfm00/i_built_a_hybrid_moe_runtime_that_does_3324_toks/o7stnfh/ | false | 1 |
t1_o7sti8r | This is already how it works in llama.cpp and ik_llama.cpp, first in https://github.com/ggml-org/llama.cpp/pull/6083, then further improved for MoE in https://github.com/ikawrakow/ik_llama.cpp/pull/520<br>And in these implementation, the RAM usage remains the same, while the VRAM usage increases by a few GB to have a la... | 3 | 0 | 2026-02-28T00:47:02 | notdba | false | null | 0 | o7sti8r | false | /r/LocalLLaMA/comments/1rgfm00/i_built_a_hybrid_moe_runtime_that_does_3324_toks/o7sti8r/ | false | 3 |
t1_o7steuh | We put the models through a series of tests, convert those tests to numeric values, and then chart them. I think I'll call it benchmarks. Nothing could go wrong.<br>/Jk | 4 | 0 | 2026-02-28T00:46:28 | mindwip | false | null | 0 | o7steuh | false | /r/LocalLLaMA/comments/1rghfqj/february_is_almost_over_are_you_satisfied/o7steuh/ | false | 4 |
t1_o7stcca | Yeah, as others mentioned, it's just ReBAR / Above 4G Decoding related. Even some server boards don't want to boot multiple GPUs; a cheap consumer board might just have no chance at it. | 1 | 0 | 2026-02-28T00:46:04 | Marksta | false | null | 0 | o7stcca | false | /r/LocalLLaMA/comments/1rgfude/computer_wont_boot_with_2_tesla_v100s/o7stcca/ | false | 1 |
t1_o7st9ea | GLM 5 torches Sonnet. Sonnet is just an ad for Opus now. This isn’t the Sonnet 3.5 days anymore | 7 | 0 | 2026-02-28T00:45:35 | Remolten11 | false | null | 0 | o7st9ea | false | /r/LocalLLaMA/comments/1rghfqj/february_is_almost_over_are_you_satisfied/o7st9ea/ | false | 7 |
t1_o7st6vv | No. Running a 1000B+ model such as Kimi K2.5 still does not match expectations of Opus, and it literally never will.
The most important thing here is that you need to have a real need, not just some "oh, this would look cool", because without an actual need or use case that can be filled with LLMs, you will simply neve... | -1 | 0 | 2026-02-28T00:45:10 | HealthyCommunicat | false | null | 0 | o7st6vv | false | /r/LocalLLaMA/comments/1rgn5m0/is_anything_worth_to_do_with_a_7b_model/o7st6vv/ | false | -1 |
t1_o7st2c4 | All OpenClaw really adds over the existing LLM CLIs for Claude/Gemini/Codex/OpenCode/whatever is the messaging and cron job capabilities. If your tasks don't require those, you can probably accomplish them manually in the CLIs. | 1 | 0 | 2026-02-28T00:44:25 | NoahFect | false | null | 0 | o7st2c4 | false | /r/LocalLLaMA/comments/1rfp6bk/why_is_openclaw_even_this_popular/o7st2c4/ | false | 1 |
t1_o7st0v0 | this is super helpful! totally make it another post. all these quant posts (especially for 16gb, cuz selfishly i also have 16gb vram on my 4060ti) have been super enlightening and save a lot of people a lot of testing!
i wonder why your t/s was closer to 40 vs OP’s 70, cuz that’s what i’m seeing too on my end | 2 | 0 | 2026-02-28T00:44:10 | KeldenL | false | null | 0 | o7st0v0 | false | /r/LocalLLaMA/comments/1rg4zqv/followup_qwen3535ba3b_7_communityrequested/o7st0v0/ | false | 2 |
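For comparing t/s numbers like these apples-to-apples, a minimal wall-clock timing helper is enough; here `generate` is a stand-in callable for whatever runtime is being driven, not any real API:

```python
import time

def tokens_per_second(generate, prompt, n_tokens):
    # Wall-clock decode throughput; assumes generate() blocks until
    # n_tokens have been produced.
    start = time.perf_counter()
    generate(prompt, n_tokens)
    return n_tokens / (time.perf_counter() - start)

# With a no-op stand-in generator, the rate just measures timer overhead.
rate = tokens_per_second(lambda prompt, n: None, "hello", 128)
```

Differences in context size, thread count, and quant all show up in this one number, which is why it's worth pinning those settings down before comparing runs.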
t1_o7ssymg | This is incredibly helpful—it's practically a perfect checklist for my future paper! You are absolutely right about the baselines. Since I'm modifying the underlying architecture, I plan to start by training from scratch using nanoGPT. This will make it much easier to track the router's internal states in a small-scale... | 1 | 0 | 2026-02-28T00:43:48 | Public_Bill_2618 | false | null | 0 | o7ssymg | false | /r/LocalLLaMA/comments/1rgdrpg/choosing_llm_baselines_for_academic_research_with/o7ssymg/ | false | 1 |
t1_o7ssvab | [removed] | 1 | 0 | 2026-02-28T00:43:15 | [deleted] | true | null | 0 | o7ssvab | false | /r/LocalLLaMA/comments/1rgn4ki/president_trump_orders_all_federal_agencies_in/o7ssvab/ | false | 1 |
t1_o7ssuav | they should bring old ai video generation back mainstream, i fw the hell videos | 2 | 0 | 2026-02-28T00:43:05 | Verdugie | false | null | 0 | o7ssuav | false | /r/LocalLLaMA/comments/1rgkc1b/back_in_my_day_localllama_were_the_pioneers/o7ssuav/ | false | 2 |
t1_o7ssu8t | [deleted] | 1 | 0 | 2026-02-28T00:43:04 | [deleted] | true | null | 0 | o7ssu8t | false | /r/LocalLLaMA/comments/1rgdrpg/choosing_llm_baselines_for_academic_research_with/o7ssu8t/ | false | 1 |
t1_o7ssnh1 | You can’t have everything with a small model. The ideal (not yet achievable) would be to give up most knowledge but keep general abilities and reasoning skills. Your Stargate example requires no prior knowledge and small context, if the model could properly consider the possibilities of what you’re asking and perform a... | 1 | 0 | 2026-02-28T00:41:59 | didroe | false | null | 0 | o7ssnh1 | false | /r/LocalLLaMA/comments/1rgkyt5/qwen35_27b_scores_42_on_intelligence_index_and_is/o7ssnh1/ | false | 1 |
t1_o7ssli8 | You love to see it. Glory to Alibaba. | 1 | 0 | 2026-02-28T00:41:39 | Select_Elephant_8808 | false | null | 0 | o7ssli8 | false | /r/LocalLLaMA/comments/1rgkyt5/qwen35_27b_scores_42_on_intelligence_index_and_is/o7ssli8/ | false | 1 |
t1_o7ssjf8 | to be fair that would be us if the Chinese didn't release Open models | 3 | 0 | 2026-02-28T00:41:18 | redditorialy_retard | false | null | 0 | o7ssjf8 | false | /r/LocalLLaMA/comments/1rgkc1b/back_in_my_day_localllama_were_the_pioneers/o7ssjf8/ | false | 3 |
t1_o7ssh3h | what's your inference speed or tokens per second? | 2 | 0 | 2026-02-28T00:40:56 | sir_creamy | false | null | 0 | o7ssh3h | false | /r/LocalLLaMA/comments/1rgiait/switched_to_qwen35122ba10bi1gguf/o7ssh3h/ | false | 2 |