name string | body string | score int64 | controversiality int64 | created timestamp[us] | author string | collapsed bool | edited timestamp[us] | gilded int64 | id string | locked bool | permalink string | stickied bool | ups int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
t1_o7s5b74 | I let the LLMs write their prompts themselves and correct them.<br>Your words have really strong power (the fewer words the better); each LLM may have its own words that trigger different things. | 12 | 0 | 2026-02-27T22:28:56 | No_Afternoon_4260 | false | null | 0 | o7s5b74 | false | /r/LocalLLaMA/comments/1rgkc1b/back_in_my_day_localllama_were_the_pioneers/o7s5b74/ | false | 12 |
t1_o7s55a8 | What gen speed do you get? I would love to replace my OSS120B, but man, it has crazy speeds! | 3 | 0 | 2026-02-27T22:28:03 | nunodonato | false | null | 0 | o7s55a8 | false | /r/LocalLLaMA/comments/1rgiait/switched_to_qwen35122ba10bi1gguf/o7s55a8/ | false | 3 |
t1_o7s4yab | Please go chat with MechaHitler about this instead. I have no interest in discussing Elon's horse tranquilizer addiction anymore. | 1 | 0 | 2026-02-27T22:27:00 | p13t3rm | false | null | 0 | o7s4yab | false | /r/LocalLLaMA/comments/1rctg3y/we_cant_upvote_elon_musk_this_is_reddit/o7s4yab/ | false | 1 |
t1_o7s4u1d | The thing with MiniMax is that for agentic work it is incredibly strong, but as soon as you ask for creativity it falls on its face. | 3 | 0 | 2026-02-27T22:26:22 | Zc5Gwu | false | null | 0 | o7s4u1d | false | /r/LocalLLaMA/comments/1rgixxr/what_models_do_you_think_owned_february/o7s4u1d/ | false | 3 |
t1_o7s4mdz | At first I found the schizo-posts about “innovative” LLM architectures that pop up every week or so entertaining. The authors typically have a very vague idea of the math and how it gets optimized in kernels. But now even that brings me no joy. I miss the feeling of reading my first schizo-posts. It was something physics-inspire... | 43 | 0 | 2026-02-27T22:25:15 | No_You3985 | false | null | 0 | o7s4mdz | false | /r/LocalLLaMA/comments/1rgkc1b/back_in_my_day_localllama_were_the_pioneers/o7s4mdz/ | false | 43 |
t1_o7s4m6v | What do we mean by “web-scale” retrieval here? | 1 | 0 | 2026-02-27T22:25:13 | Entuni | false | null | 0 | o7s4m6v | false | /r/LocalLLaMA/comments/1rfkdjk/pplxembed_stateoftheart_embedding_models_for/o7s4m6v/ | false | 1 |
t1_o7s4kqk | I ran llama.cpp and KTransformers some time ago on the Epyc system and couldn’t get them to perform well, especially under use cases like Opencode. I haven’t run with those params, but to my understanding llama.cpp's -ngl offloads layers to the GPU. With Q3CN at Q4 the model is around 46GB on disk, so with CUDA overhe... | 2 | 0 | 2026-02-27T22:25:01 | mrstoatey | false | null | 0 | o7s4kqk | false | /r/LocalLLaMA/comments/1rgfm00/i_built_a_hybrid_moe_runtime_that_does_3324_toks/o7s4kqk/ | false | 2 |
t1_o7s4ikz | Darn, they are probably losing their moat. This is the typical reaction of companies that are losing their edge: blame the failure on "external" things.<br>They apparently can train their models on "whatever we want because it's fair use" and keep it secret. But no, you can't train your model on theirs. That's just ab... | 1 | 0 | 2026-02-27T22:24:41 | FPham | false | null | 0 | o7s4ikz | false | /r/LocalLLaMA/comments/1rd8cfw/anthropics_recent_distillation_blog_should_make/o7s4ikz/ | false | 1 |
t1_o7s4gt2 | Or join with other students to make the 2x RTX Pro more economical.<br>Moreover, the hardware can be used for more than two years. | 1 | 0 | 2026-02-27T22:24:26 | Awkward-Candle-4977 | false | null | 0 | o7s4gt2 | false | /r/LocalLLaMA/comments/1rg68e6/starting_a_phd_in_ml_what_is_the_best_infra_i_can/o7s4gt2/ | false | 1 |
t1_o7s4e9j | I get the feeling you might not understand that long-term use of an addictive medication, for some reason, always results in the person forming an addiction. | 0 | 0 | 2026-02-27T22:24:03 | BusRevolutionary9893 | false | null | 0 | o7s4e9j | false | /r/LocalLLaMA/comments/1rctg3y/we_cant_upvote_elon_musk_this_is_reddit/o7s4e9j/ | false | 0 |
t1_o7s4ctx | This is amazing, thank you!<br>Is it worth using `--no-kv-offload` to offload the KV cache into RAM? *(see the first sketch below the table)* | 1 | 0 | 2026-02-27T22:23:50 | CATLLM | false | null | 0 | o7s4ctx | false | /r/LocalLLaMA/comments/1rg4zqv/followup_qwen3535ba3b_7_communityrequested/o7s4ctx/ | false | 1 |
t1_o7s4a0i | Thanks! Indeed, CPU works (and also runs at 5 t/s vs 2 t/s with SYCL). | 1 | 0 | 2026-02-27T22:23:25 | inphaser | false | null | 0 | o7s4a0i | false | /r/LocalLLaMA/comments/1qz5uww/qwen3_coder_next_as_first_usable_coding_model_60/o7s4a0i/ | false | 1 |
t1_o7s44ns | u/Digger412, your Qwen 35B-3A is 3x faster on my machine than the regular quants (32 tokens/s vs 100 tokens/s). Any chance you could generate a Qwen 27B quant as well? | 4 | 0 | 2026-02-27T22:22:38 | Substantial_Swan_144 | false | null | 0 | o7s44ns | false | /r/LocalLLaMA/comments/1rgel19/new_qwen3535ba3b_unsloth_dynamic_ggufs_benchmarks/o7s44ns/ | false | 4 |
t1_o7s435o | Those were the golden days. That was 20 years ago in LLM time.<br>Sometimes I still can't believe the size of prompts and context we are complaining about today.<br>An MoE that runs on a 2-3k rig is a LOT better than what ChatGPT 3.5 was (but that's not necessarily a good thing, imo).<br>One thing keeps being true: writing g... | 143 | 0 | 2026-02-27T22:22:24 | cosimoiaia | false | null | 0 | o7s435o | false | /r/LocalLLaMA/comments/1rgkc1b/back_in_my_day_localllama_were_the_pioneers/o7s435o/ | false | 143 |
t1_o7s3sqk | I remember people saying we wouldn't have a local model as good as GPT4 for at least 10 years. Good times | 334 | 0 | 2026-02-27T22:20:52 | AcornTear | false | null | 0 | o7s3sqk | false | /r/LocalLLaMA/comments/1rgkc1b/back_in_my_day_localllama_were_the_pioneers/o7s3sqk/ | false | 334 |
t1_o7s3s1a | So we are soon approaching GPT o3's output cost ($8). Not hating, but I'm getting curious about where this will lead. | 12 | 0 | 2026-02-27T22:20:45 | Technical-Earth-3254 | false | null | 0 | o7s3s1a | false | /r/LocalLLaMA/comments/1rggpu9/glm5code/o7s3s1a/ | false | 12 |
t1_o7s3nh8 | I pray that I find 1% of OpenAI's gullible customers and sell them stuff. | 1 | 0 | 2026-02-27T22:20:06 | Dry_Yam_4597 | false | null | 0 | o7s3nh8 | false | /r/LocalLLaMA/comments/1rgi6ky/openai_raises_110_billion_in_the_largest_private/o7s3nh8/ | false | 1 |
t1_o7s3bgd | VRAM on an x86 platform without RAM is kind of a bummer, though. | 0 | 0 | 2026-02-27T22:18:20 | No_Afternoon_4260 | false | null | 0 | o7s3bgd | false | /r/LocalLLaMA/comments/1rgj6e9/psa_dgx_spark_rdimm_go_rtx_pro_only_4x_the_price/o7s3bgd/ | false | 0 |
t1_o7s3bba | Nope. I didn't say 3.5 either. But the guy only forked this 5 days ago - do give him a chance to catch up... :-) | 1 | 0 | 2026-02-27T22:18:18 | Protopia | false | null | 0 | o7s3bba | false | /r/LocalLLaMA/comments/1rg8gkx/qwen35_35b_a3b_45_ts_128k_ctx_on_single_16gb_5060/o7s3bba/ | false | 1 |
t1_o7s37zn | No, it's not vastly behind; OP is configuring it wrong. I get 18 tok/s on a 3050 mobile with 6 GB VRAM.... | 1 | 0 | 2026-02-27T22:17:50 | nasone32 | false | null | 0 | o7s37zn | false | /r/LocalLLaMA/comments/1rg4apu/qwen_35_architecture_analysis_parameter/o7s37zn/ | false | 1 |
t1_o7s37j8 | Could you elaborate on that? My interpretation of what you're saying is that the FP32 is an upcast of the BF16 (or whatever's native) and using it will give better performance. Am I correct? And is there a downside? | 2 | 0 | 2026-02-27T22:17:46 | TomatoCo | false | null | 0 | o7s37j8 | false | /r/LocalLLaMA/comments/1rgel19/new_qwen3535ba3b_unsloth_dynamic_ggufs_benchmarks/o7s37j8/ | false | 2 |
t1_o7s2xee | It’s better to disable thinking when you want JSON output or a tool call. | 0 | 0 | 2026-02-27T22:16:17 | Alarmed-Ad-6201 | false | null | 0 | o7s2xee | false | /r/LocalLLaMA/comments/1rg0487/system_prompt_for_qwen35_27b35ba3b_to_reduce/o7s2xee/ | false | 0 |
t1_o7s2sk3 | You need to offload all the layers to the GPU (yes, LM Studio will say it requires more memory than your GPU has) and then offload the minimum number of MoE experts possible to the CPU with the separate slider *(see the first sketch below the table)*.<br>I get 18 tok/s on a 3050 mobile with 6 GB VRAM.... | 2 | 0 | 2026-02-27T22:15:35 | nasone32 | false | null | 0 | o7s2sk3 | false | /r/LocalLLaMA/comments/1rg4apu/qwen_35_architecture_analysis_parameter/o7s2sk3/ | false | 2 |
t1_o7s2mtz | Great work! We also need better datasets for evaluation of real-world performance with no label errors and other issues that were found in the current ones. I'm sure a lot of companies already have datasets for internal testing, I would be grateful if they open sourced them for the community. | 1 | 0 | 2026-02-27T22:14:45 | Majesticeuphoria | false | null | 0 | o7s2mtz | false | /r/LocalLLaMA/comments/1rgel19/new_qwen3535ba3b_unsloth_dynamic_ggufs_benchmarks/o7s2mtz/ | false | 1 |
t1_o7s2mqv | So did everyone else, and everyone else stole from each other and public too. | 1 | 0 | 2026-02-27T22:14:44 | Navhkrin | false | null | 0 | o7s2mqv | false | /r/LocalLLaMA/comments/1rcpmwn/anthropic_weve_identified_industrialscale/o7s2mqv/ | false | 1 |
t1_o7s2mej | Honestly, this is one of the biggest gaps right now. Most harnesses treat the LLM call as a black box - you send a prompt in, you get a response out, and good luck figuring out what actually happened in between.<br>The tracing problem gets worse with skills/tools because now you have multiple layers: the system prompt, t... | 2 | 0 | 2026-02-27T22:14:41 | RickClaw_Dev | false | null | 0 | o7s2mej | false | /r/LocalLLaMA/comments/1rgelk1/the_supply_chain_problem_nobody_talks_about_agent/o7s2mej/ | false | 2 |
t1_o7s2m7f | I'm not talking about the absolute score, I'm talking about the relative score. One day Opus 4.5 is better than any other model, dozens of models ahead of Kimi K2, and another day Opus 4.5 is a loser that is worse than all these dozen models and in the same position as Kimi K2. Does it mean the Opus 4.5 becomes more stupid tha... | 1 | 0 | 2026-02-27T22:14:39 | Exciting_Garden2535 | false | null | 0 | o7s2m7f | false | /r/LocalLLaMA/comments/1r3weq3/swerebench_jan_2026_glm5_minimax_m25/o7s2m7f/ | false | 1 |
t1_o7s2jvn | Nowadays it is better to buy VRAM. RTX PRO 6000 is a good option if you can afford it, or a classic 3090 if not. MI50 32 GB could be an alternative, even though it went up in price recently.<br>Cannot recommend buying too much RAM now. About a year ago was the time to buy RAM... you could get 1 TB of 3200 MHz DDR4 for $1600 or 768 G... | 3 | 0 | 2026-02-27T22:14:19 | Lissanro | false | null | 0 | o7s2jvn | false | /r/LocalLLaMA/comments/1rgj6e9/psa_dgx_spark_rdimm_go_rtx_pro_only_4x_the_price/o7s2jvn/ | false | 3 |
t1_o7s29v9 | Thank you so much for all your help, I will check all of this. To be honest, I am super early on this, so I cannot even reply to these questions; anything is very helpful! | 1 | 0 | 2026-02-27T22:12:50 | SpellGlittering1901 | false | null | 0 | o7s29v9 | false | /r/LocalLLaMA/comments/1rgf12v/how_to_chose_the_right_model/o7s29v9/ | false | 1 |
t1_o7s26z2 | Oh, this one doesn't do speech-to-speech :( Is that on the roadmap at all? | 1 | 0 | 2026-02-27T22:12:26 | woahdudee2a | false | null | 0 | o7s26z2 | false | /r/LocalLLaMA/comments/1rabo34/local_tts_server_with_voice_cloning_nearrealtime/o7s26z2/ | false | 1 |
t1_o7s26ti | Actually, though: if you're not going to try it, don't have an opinion on it. I rarely try any of the new LLMs that come out, but I try not to have an opinion on any of them until I try them myself someday. | 37 | 0 | 2026-02-27T22:12:25 | KaroYadgar | false | null | 0 | o7s26ti | false | /r/LocalLLaMA/comments/1rgkc1b/back_in_my_day_localllama_were_the_pioneers/o7s26ti/ | false | 37 |
t1_o7s20nw | I wish but I don’t even know enough models to make a decision | 0 | 0 | 2026-02-27T22:11:29 | SpellGlittering1901 | false | null | 0 | o7s20nw | false | /r/LocalLLaMA/comments/1rgf12v/how_to_chose_the_right_model/o7s20nw/ | false | 0 |
t1_o7s1xzt | Extremely dumb question, but I am completely new to this:<br>What is the difference between RAM and VRAM, and why do we talk more about VRAM when it comes to LLMs? VRAM is linked to GPUs, no? Not RAM.<br>And so if I have 10 GB of VRAM and a model takes 3, it might still fill up during training because it multiplies? | 1 | 0 | 2026-02-27T22:11:05 | SpellGlittering1901 | false | null | 0 | o7s1xzt | false | /r/LocalLLaMA/comments/1rgf12v/how_to_chose_the_right_model/o7s1xzt/ | false | 1 |
t1_o7s1vb8 | “LLM keeps forgetting required parameters on complex tools, especially when the schema has nested objects with many fields”<br>You may have to find a way to simplify the message history. Not everything needs to be kept in memory. If the messages list grows with every new request and return, forgetting will happen more fre... | 1 | 0 | 2026-02-27T22:10:42 | Alarmed-Ad-6201 | false | null | 0 | o7s1vb8 | false | /r/LocalLLaMA/comments/1rg4ahx/react_pattern_hitting_a_wall_for_domainspecific/o7s1vb8/ | false | 1 |
t1_o7s1uyk | Not exactly. You can say how much VRAM you have OR how much RAM, and HF just checks whether the model fits within those parameters. It says 'no' to me most of the time, but these models (mostly MoE) work fine. | 3 | 0 | 2026-02-27T22:10:39 | Responsible-Stock462 | false | null | 0 | o7s1uyk | false | /r/LocalLLaMA/comments/1rg94wu/llmfit_one_command_to_find_what_model_runs_on/o7s1uyk/ | false | 3 |
t1_o7s1u3f | Well then, put a guardrail in the tooling. It's good practice anyway. | 1 | 0 | 2026-02-27T22:10:30 | juandann | false | null | 0 | o7s1u3f | false | /r/LocalLLaMA/comments/1rfg3kx/american_closed_models_vs_chinese_open_models_is/o7s1u3f/ | false | 1 |
t1_o7s1tyr | How does this compare to [https://github.com/kyutai-labs/unmute](https://github.com/kyutai-labs/unmute) in terms of latency? I know Kyutai removed voice cloning, but there is still a way to do it. | 1 | 0 | 2026-02-27T22:10:29 | woahdudee2a | false | null | 0 | o7s1tyr | false | /r/LocalLLaMA/comments/1rabo34/local_tts_server_with_voice_cloning_nearrealtime/o7s1tyr/ | false | 1 |
t1_o7s1pem | He probably has the money to afford multiple beefier GPUs, but Qwen 2.5 had some sizes that were ideal for mid/high-tier consumer GPUs, where you can actually fit the whole dense model into VRAM on a single GPU.<br>I really wish we'd get more models like that, not having to rely on post-hoc quants, but models spe... | 4 | 0 | 2026-02-27T22:09:49 | Bakoro | false | null | 0 | o7s1pem | false | /r/LocalLLaMA/comments/1rg8dex/pewdiepie_finetuned_qwen25coder32b_to_beat/o7s1pem/ | false | 4 |
t1_o7s1kqv | Nice, thanks for sharing | 1 | 0 | 2026-02-27T22:09:09 | danuser8 | false | null | 0 | o7s1kqv | false | /r/LocalLLaMA/comments/1qe1dec/is_5060ti_16gb_and_32gb_ddr5_system_ram_enough_to/o7s1kqv/ | false | 1 |
t1_o7s1cns | I've actually made a few distilled LoRAs using my Claude chats from CC and Web, all compiled; they performed better all around, and in some smaller benchmark tests I got up to 30% better coding scores. I did this for 3.5 27B and 3 30B, and I'm currently in the process of making a GLM 4.7 Flash version.<br>Probably won't release them due ... | 3 | 0 | 2026-02-27T22:07:59 | ghgi_ | false | null | 0 | o7s1cns | false | /r/LocalLLaMA/comments/1rgexmk/qwen3527bclaude46opusreasoningdistilledgguf_is_out/o7s1cns/ | false | 3 |
t1_o7s16ul | AesSedai congratulations on being below everybody else (better than everybody else) in terms of KL Div / Disk Space in this [chart from OP](https://www.reddit.com/media?url=https%3A%2F%2Fpreview.redd.it%2Fnew-qwen3-5-35b-a3b-unsloth-dynamic-ggufs-benchmarks-v0-5hmdthgyp2mg1.png%3Fwidth%3D2320%26format%3Dpng%26auto%3Dwe... | 10 | 0 | 2026-02-27T22:07:09 | Ueberlord | false | null | 0 | o7s16ul | false | /r/LocalLLaMA/comments/1rgel19/new_qwen3535ba3b_unsloth_dynamic_ggufs_benchmarks/o7s16ul/ | false | 10 |
t1_o7s162h | After a lot of benchmarking the best I can get is 3.4 t/s for generation. That's for a 12K context. | 2 | 0 | 2026-02-27T22:07:02 | OrbMan99 | false | null | 0 | o7s162h | false | /r/LocalLLaMA/comments/1rgexmk/qwen3527bclaude46opusreasoningdistilledgguf_is_out/o7s162h/ | false | 2 |
t1_o7s15dt | Hi! Thanks for your work. What about the 27B model? | 1 | 0 | 2026-02-27T22:06:56 | Roubbes | false | null | 0 | o7s15dt | false | /r/LocalLLaMA/comments/1rgel19/new_qwen3535ba3b_unsloth_dynamic_ggufs_benchmarks/o7s15dt/ | false | 1 |
t1_o7s0wgf | Yeah I guess especially if we combine with 3D mesh generation models, or even just go full Google Genie then it can happen a lot sooner. I was thinking more like "ten or twenty years until you can give a model a prompt and no other resources, and it can generate high quality everything from scratch" | 2 | 0 | 2026-02-27T22:05:39 | -dysangel- | false | null | 0 | o7s0wgf | false | /r/LocalLLaMA/comments/1rgixxr/what_models_do_you_think_owned_february/o7s0wgf/ | false | 2 |
t1_o7s0w5x | | 21 | 0 | 2026-02-27T22:05:37 | NaymmmYT | false | null | 0 | o7s0w5x | false | /r/LocalLLaMA/comments/1rg94wu/llmfit_one_command_to_find_what_model_runs_on/o7s0w5x/ | false | 21 |
t1_o7s0utm | Qwen 3.5-35b-a3b is running in Q6 on my home computer. It can solve the logic benchmarks I use. It is vision enabled. I have a single button (in LMStudio) to turn thinking on and off without doing anything else. It correctly answered my literature benchmark questions.<br>38.5 tokens/sec. It's faster than some of the infe... | 2 | 0 | 2026-02-27T22:05:25 | Morphon | false | null | 0 | o7s0utm | false | /r/LocalLLaMA/comments/1rgixxr/what_models_do_you_think_owned_february/o7s0utm/ | false | 2 |
t1_o7s0tjt | Your post is getting popular and we just featured it on our Discord! [Come check it out!](https://discord.gg/PgFhZ8cnWW)<br>You've also been given a special flair for your contribution. We appreciate your post!<br>*I am a bot and this action was performed automatically.* | 1 | 0 | 2026-02-27T22:05:15 | WithoutReason1729 | false | null | 0 | o7s0tjt | false | /r/LocalLLaMA/comments/1rgel19/new_qwen3535ba3b_unsloth_dynamic_ggufs_benchmarks/o7s0tjt/ | true | 1 |
t1_o7s0rz8 | Hmmm, you could download or create a CPU-only build of llama.cpp, without any SYCL functionality integrated *(see the second sketch below the table)*. If it works with that version then it's a SYCL bug that you could create an issue for on GitHub. If it still doesn't work, then download another quant from some other repo to see if your existing file might be co... | 1 | 0 | 2026-02-27T22:05:01 | Chromix_ | false | null | 0 | o7s0rz8 | false | /r/LocalLLaMA/comments/1qz5uww/qwen3_coder_next_as_first_usable_coding_model_60/o7s0rz8/ | false | 1 |
t1_o7s0nfl | Here's the direct link to the video instead of that embedded crap:<br>[https://www.youtube.com/watch?v=aV4j5pXLP-I](https://www.youtube.com/watch?v=aV4j5pXLP-I) | 6 | 0 | 2026-02-27T22:04:21 | tpwn3r | false | null | 0 | o7s0nfl | false | /r/LocalLLaMA/comments/1rg8dex/pewdiepie_finetuned_qwen25coder32b_to_beat/o7s0nfl/ | false | 6 |
t1_o7s0f5g | Fingers crossed.<br>It does appear to exist:<br>>{"error":{"code":"1220","message":"You do not have permission to access glm-5-code"}}<br>Whereas if you send a gibberish model name to the endpoint:<br>>{"error":{"code":"1211","message":"Unknown Model, please check the model code."}} | 24 | 0 | 2026-02-27T22:03:11 | AnomalyNexus | false | null | 0 | o7s0f5g | false | /r/LocalLLaMA/comments/1rggpu9/glm5code/o7s0f5g/ | false | 24 |
t1_o7s09jr | Thank you! I haven’t really focused on the decode much at all yet; I plan to look at it next. You’re right that 64 tokens is short for a proper test; that’s partly because in the early days I was getting 1 token per second decode, so waiting for the decode benchmark was painful! I also don’t know about the decode ... | 1 | 0 | 2026-02-27T22:02:22 | mrstoatey | false | null | 0 | o7s09jr | false | /r/LocalLLaMA/comments/1rgfm00/i_built_a_hybrid_moe_runtime_that_does_3324_toks/o7s09jr/ | false | 1 |
t1_o7s0740 | Usually, using a tool call for JSON output (define the JSON schema as the tool input and ask the model to call that tool) results in better accuracy than describing the JSON in prompts. Newer models are heavily optimized for that *(see the third sketch below the table)*. | 1 | 0 | 2026-02-27T22:02:01 | Alarmed-Ad-6201 | false | null | 0 | o7s0740 | false | /r/LocalLLaMA/comments/1rgcipc/what_small_models_30b_do_you_actually_use_for/o7s0740/ | false | 1 |
t1_o7s061c | IMO Qwen3.5 122B is best overall at the moment in terms of speed/context and the amount of VRAM required to run AI on-premises. | 6 | 0 | 2026-02-27T22:01:51 | Its_Powerful_Bonus | false | null | 0 | o7s061c | false | /r/LocalLLaMA/comments/1rgiait/switched_to_qwen35122ba10bi1gguf/o7s061c/ | false | 6 |
t1_o7s04c7 | I love that prompt processing bar. I wish that existed in KoboldCPP and Text Gen by Oobabooga. | 2 | 0 | 2026-02-27T22:01:36 | silenceimpaired | false | null | 0 | o7s04c7 | false | /r/LocalLLaMA/comments/1rg87bj/qwen3535ba3b_running_on_a_raspberry_pi_5_16gb_and/o7s04c7/ | false | 2 |
t1_o7rz8hq | For CPU inference these could be cool:<br>1B dense<br>9B-A1B MoE<br>For the ComfyUI image-gen flow we have gone through Qwen2.5 7B // Qwen3 4B // so … Qwen3.5 3B???<br>Nanbeige 4.1 3B is extremely capable for its size, so I expect super strong micro models. | 3 | 0 | 2026-02-27T21:57:05 | Current-Interest-369 | false | null | 0 | o7rz8hq | false | /r/LocalLLaMA/comments/1rgek4m/what_are_your_expectations_for_the_small_series/o7rz8hq/ | false | 3 |
t1_o7rz70k | A decade or two? One maximum. AI will improve itself. | -2 | 0 | 2026-02-27T21:56:53 | Far_Note6719 | false | null | 0 | o7rz70k | false | /r/LocalLLaMA/comments/1rgixxr/what_models_do_you_think_owned_february/o7rz70k/ | false | -2 |
t1_o7rz3ve | Thanks! Gemma 4, please. | 1 | 0 | 2026-02-27T21:56:26 | IrisColt | false | null | 0 | o7rz3ve | false | /r/LocalLLaMA/comments/1rghfqj/february_is_almost_over_are_you_satisfied/o7rz3ve/ | false | 1 |
t1_o7rz22k | The 27B model is capable of very complex tool calls, similar to GPT-5-mini level. I guess a 4-8B model can be suitable for most local agents. | 2 | 0 | 2026-02-27T21:56:11 | Alarmed-Ad-6201 | false | null | 0 | o7rz22k | false | /r/LocalLLaMA/comments/1rgek4m/what_are_your_expectations_for_the_small_series/o7rz22k/ | false | 2 |
t1_o7ryo2h | Hi Daniel,<br>Any chance of a follow-up on the very weird artefacts (plural vs singular mismatches) in MiniMax M2.5? I mean, there is something terribly wrong happening during some quantization (probably the sigmoid gates getting trashed), and sadly it affects all quantizations, including those for vLLM and SGLang. B... | 1 | 0 | 2026-02-27T21:54:14 | One-Macaron6752 | false | null | 0 | o7ryo2h | false | /r/LocalLLaMA/comments/1rgel19/new_qwen3535ba3b_unsloth_dynamic_ggufs_benchmarks/o7ryo2h/ | false | 1 |
t1_o7ryjk8 | If this is going to be run locally, why even bother with a "national security risk"? It's literally under your control. The model won't even connect to a Chinese server when you give it web search capability. | 1 | 0 | 2026-02-27T21:53:35 | juandann | false | null | 0 | o7ryjk8 | false | /r/LocalLLaMA/comments/1rfg3kx/american_closed_models_vs_chinese_open_models_is/o7ryjk8/ | false | 1 |
t1_o7ryi0z | heh, that's a stretch | 1 | 0 | 2026-02-27T21:53:22 | IrisColt | false | null | 0 | o7ryi0z | false | /r/LocalLLaMA/comments/1rg8dex/pewdiepie_finetuned_qwen25coder32b_to_beat/o7ryi0z/ | false | 1 |
t1_o7rydwj | "I see a tiny line of silver" | 2 | 0 | 2026-02-27T21:52:48 | KAkshat | false | null | 0 | o7rydwj | false | /r/LocalLLaMA/comments/1rg8dex/pewdiepie_finetuned_qwen25coder32b_to_beat/o7rydwj/ | false | 2 |
t1_o7rydne | The prefill numbers are genuinely impressive for a single 5080. Curious about one thing though — the 64-token decode benchmark is pretty short. In practice with agentic coding loops you're generating 500-2000 tokens per turn, and CPU decode throughput tends to degrade as KV cache pressure builds. Does the 14.9 tok/s ho... | 1 | 0 | 2026-02-27T21:52:46 | tom_mathews | false | null | 0 | o7rydne | false | /r/LocalLLaMA/comments/1rgfm00/i_built_a_hybrid_moe_runtime_that_does_3324_toks/o7rydne/ | false | 1 |
t1_o7ry8oo | [removed] | 1 | 0 | 2026-02-27T21:52:04 | [deleted] | true | null | 0 | o7ry8oo | false | /r/LocalLLaMA/comments/1rg8dex/pewdiepie_finetuned_qwen25coder32b_to_beat/o7ry8oo/ | false | 1 |
t1_o7ry8ky | Thanks! I think if the prompt is such that it will benefit from the GPU then Krasis’ approach will be faster, and if not (eg small prompt / decode heavy) then it could make sense to switch strategies. I think there are more gains to be had at decode though, I haven’t spent much time on it yet. Krasis holds a model i... | 2 | 0 | 2026-02-27T21:52:03 | mrstoatey | false | null | 0 | o7ry8ky | false | /r/LocalLLaMA/comments/1rgfm00/i_built_a_hybrid_moe_runtime_that_does_3324_toks/o7ry8ky/ | false | 2 |
t1_o7rxrui | I'm definitely satisfied with Qwen 3.5 for general purpose, programming, and agentic use cases. However, there's just one thing that hasn't improved in small models in years: creative writing. Though Qwen has tried to benchmaxx EQ bench creative writing, in reality, the best we have right now are still Mistral Nemo 12B... | 8 | 0 | 2026-02-27T21:49:43 | ArsNeph | false | null | 0 | o7rxrui | false | /r/LocalLLaMA/comments/1rghfqj/february_is_almost_over_are_you_satisfied/o7rxrui/ | false | 8 |
t1_o7rxk95 | Care to share more?<br>I saw I could finally purchase it on AliExpress, but at this point I think they were too late and there are better options. | 1 | 0 | 2026-02-27T21:48:39 | ProfessionalSpend589 | false | null | 0 | o7rxk95 | false | /r/LocalLLaMA/comments/1rg87bj/qwen3535ba3b_running_on_a_raspberry_pi_5_16gb_and/o7rxk95/ | false | 1 |
t1_o7rxjyf | The mistakes he mentions in the video are not something that would even pop up on the radar if he paid someone to do it. Every one of them sounds like something someone just starting out would do, since they, well, just started. | 5 | 0 | 2026-02-27T21:48:36 | larrytheevilbunnie | false | null | 0 | o7rxjyf | false | /r/LocalLLaMA/comments/1rg8dex/pewdiepie_finetuned_qwen25coder32b_to_beat/o7rxjyf/ | false | 5 |
t1_o7rxewp | Thanks! Almost all my focus has been on prefill so far to prove the concept to myself. Krasis does have an optimised CPU model in system RAM though (currently optimised for AVX2, AVX512 isn’t specifically supported yet) so in theory it could get up to a comparable speed to any other pure CPU decode and the prefill st... | 3 | 0 | 2026-02-27T21:47:54 | mrstoatey | false | null | 0 | o7rxewp | false | /r/LocalLLaMA/comments/1rgfm00/i_built_a_hybrid_moe_runtime_that_does_3324_toks/o7rxewp/ | false | 3 |
t1_o7rxbfd | Ryzen 7 isn't a component. | 2 | 0 | 2026-02-27T21:47:25 | Alpacaaea | false | null | 0 | o7rxbfd | false | /r/LocalLLaMA/comments/1rgixk7/accuracy_vs_speed_my_top_5/o7rxbfd/ | false | 2 |
t1_o7rx3hu | Thanks, I just tried; it looks the same:<br>`$ docker run -it --rm --name llama.cpp --network=host --device /dev/dri -v $MODEL_DIR:/models ghcr.io/ggml-org/llama.cpp:light-intel -m /models/Qwen3-Coder-Next-IQ4_XS.gguf -ngl 99 -np 1 -c 32768`<br>`load_backend: loaded SYCL ... | 1 | 0 | 2026-02-27T21:46:19 | inphaser | false | null | 0 | o7rx3hu | false | /r/LocalLLaMA/comments/1qz5uww/qwen3_coder_next_as_first_usable_coding_model_60/o7rx3hu/ | false | 1 |
t1_o7rx0vd | He's ragebaiting; don't waste your time. | 5 | 0 | 2026-02-27T21:45:58 | Mayion | false | null | 0 | o7rx0vd | false | /r/LocalLLaMA/comments/1rg8dex/pewdiepie_finetuned_qwen25coder32b_to_beat/o7rx0vd/ | false | 5 |
t1_o7rwxfp | It’s using 82GB with Q8 KV cache. 100 or so tps out and decently quick at pre processing a full context window. | 1 | 0 | 2026-02-27T21:45:29 | NaiRogers | false | null | 0 | o7rwxfp | false | /r/LocalLLaMA/comments/1rgiait/switched_to_qwen35122ba10bi1gguf/o7rwxfp/ | false | 1 |
t1_o7rwx55 | I saw a similar issue reported there so it’s probably to blame. I’ll try exllama exl3 | 1 | 0 | 2026-02-27T21:45:27 | silenceimpaired | false | null | 0 | o7rwx55 | false | /r/LocalLLaMA/comments/1rg6ph3/qwen35_feels_ready_for_production_use_never_been/o7rwx55/ | false | 1 |
t1_o7rwqv7 | Pewdiepie is a very smart guy. I have been impressed by his tech projects | 0 | 1 | 2026-02-27T21:44:35 | TurnUpThe4D3D3D3 | false | null | 0 | o7rwqv7 | false | /r/LocalLLaMA/comments/1rg8dex/pewdiepie_finetuned_qwen25coder32b_to_beat/o7rwqv7/ | false | 0 |
t1_o7rwoof | These new Qwen models are genuine steps forward. | 10 | 0 | 2026-02-27T21:44:18 | LoveMind_AI | false | null | 0 | o7rwoof | false | /r/LocalLLaMA/comments/1rgixxr/what_models_do_you_think_owned_february/o7rwoof/ | false | 10 |
t1_o7rwoci | > *but I'm sure it will outperform ChatGPT And Grok*<br>**Narrator:** *"The small model did not, in fact, outperform ChatGPT and Grok"* | 10 | 0 | 2026-02-27T21:44:15 | ForsookComparison | false | null | 0 | o7rwoci | false | /r/LocalLLaMA/comments/1rgexmk/qwen3527bclaude46opusreasoningdistilledgguf_is_out/o7rwoci/ | false | 10 |
t1_o7rwniu | Qwen 3.5. The only one of these I can run locally lol | 5 | 0 | 2026-02-27T21:44:08 | dampflokfreund | false | null | 0 | o7rwniu | false | /r/LocalLLaMA/comments/1rgixxr/what_models_do_you_think_owned_february/o7rwniu/ | false | 5 |
t1_o7rwmja | Is it good? | 0 | 0 | 2026-02-27T21:44:00 | Witty_Mycologist_995 | false | null | 0 | o7rwmja | false | /r/LocalLLaMA/comments/1rghfqj/february_is_almost_over_are_you_satisfied/o7rwmja/ | false | 0 |
t1_o7rwkrk | Definitely dig in; especially interesting will be how I'm using it for my workflows across my org.<br>CLIO can do everything that you've mentioned, and it does it all very well. :)<br>Maybe contributing there would be a good option instead of creating something new? Just a thought, ofc. | 1 | 0 | 2026-02-27T21:43:46 | Total-Context64 | false | null | 0 | o7rwkrk | false | /r/LocalLLaMA/comments/1rgj2ol/architect_an_opensource_cli_to_orchestrate/o7rwkrk/ | false | 1 |
t1_o7rwg65 | LFM 1.2B on my potato laptop does wonders. I get 45 tps on an 8th-gen i7 with a 1050 4 GB Nvidia GPU. For summarization it's perfect and fast. It does tool calling well too.<br>Granite 4B was my previous favorite, but on several occasions I would find it super stubborn, to the point that I could point it to a website with documenta... | 1 | 0 | 2026-02-27T21:43:07 | -Akos- | false | null | 0 | o7rwg65 | false | /r/LocalLLaMA/comments/1rfxtfz/eagerly_waiting_for_qwen_35_17b/o7rwg65/ | false | 1 |
t1_o7rwg39 | Did you manage to fix it? My SGLang gets stuck in an infinite loop; sometimes it's just a think-flag loop, sometimes it just repeats my prompt. | 2 | 0 | 2026-02-27T21:43:06 | Medium_Question8837 | false | null | 0 | o7rwg39 | false | /r/LocalLLaMA/comments/1re9xbi/qwen35_on_vllm/o7rwg39/ | false | 2 |
t1_o7rwfur | I have two Strix Halos in a pipeline with llama.cpp and get decent speeds. Network bandwidth is not the bottleneck due to the setup: one computer works while the other waits its turn. The speed of the cluster is a bit less than one computer with the same VRAM capacity.<br>I fully believe 2, 3 or 4 Pi 5s would offer decent cap... | 1 | 0 | 2026-02-27T21:43:04 | ProfessionalSpend589 | false | null | 0 | o7rwfur | false | /r/LocalLLaMA/comments/1rg87bj/qwen3535ba3b_running_on_a_raspberry_pi_5_16gb_and/o7rwfur/ | false | 1 |
t1_o7rw8q4 | "Task: create a GTA-like 3D game where you can walk around, get in and drive cars"<br>Give the tech a decade or two to mature, and add in some XR/VR, and we've pretty much got a Star Trek holodeck where you just say what you want, and it does it.<br>https://i.redd.it/rqf6pj1vw3mg1.gif | 5 | 0 | 2026-02-27T21:42:04 | -dysangel- | false | null | 0 | o7rw8q4 | false | /r/LocalLLaMA/comments/1rgixxr/what_models_do_you_think_owned_february/o7rw8q4/ | false | 5 |
t1_o7rw6ho | It's slower because you're using your disk as RAM... | 1 | 0 | 2026-02-27T21:41:46 | Xp_12 | false | null | 0 | o7rw6ho | false | /r/LocalLLaMA/comments/1rgaw5c/gpu_shared_vram_makes_qwen3535b_prompt_processing/o7rw6ho/ | false | 1 |
t1_o7rw5kv | And here I am running qwen3.5-35B on my potato RTX2070 + 16GB RAM.. | 2 | 0 | 2026-02-27T21:41:39 | Manamultus | false | null | 0 | o7rw5kv | false | /r/LocalLLaMA/comments/1rg94wu/llmfit_one_command_to_find_what_model_runs_on/o7rw5kv/ | false | 2 |
t1_o7rw5aa | [removed] | 1 | 0 | 2026-02-27T21:41:36 | [deleted] | true | null | 0 | o7rw5aa | false | /r/LocalLLaMA/comments/1rgel19/new_qwen3535ba3b_unsloth_dynamic_ggufs_benchmarks/o7rw5aa/ | false | 1 |
t1_o7rw123 | Cool, I'll check out CLIO, looks interesting!<br>I think they solve different things though. CLIO is interactive (you're at the terminal); architect is headless (nobody watching, CI/CD, cron jobs, overnight). The focus is on verification: retry loops against real tests, deterministic guards the LLM can't skip, budget... | 2 | 0 | 2026-02-27T21:41:01 | RiskRain303 | false | null | 0 | o7rw123 | false | /r/LocalLLaMA/comments/1rgj2ol/architect_an_opensource_cli_to_orchestrate/o7rw123/ | false | 2 |
t1_o7rvtcd | MiniMax 2.5 disappointed but is pretty achievable for self-hosting.<br>GLM 5 made it into some of my flows. Cheap and sometimes gets the job done right, but it's slow as molasses.<br>Qwen3.5 won February for me. So many options that fit in so many workflows. | 3 | 0 | 2026-02-27T21:39:55 | ForsookComparison | false | null | 0 | o7rvtcd | false | /r/LocalLLaMA/comments/1rgixxr/what_models_do_you_think_owned_february/o7rvtcd/ | false | 3 |
t1_o7rvsqk | Weird. Default Vulkan llama.cpp settings on a rig with an RX 6600, a 5600X, and 16 gigs of RAM net me around 15 t/s with reasoning and close to 50 t/s without. | 1 | 0 | 2026-02-27T21:39:51 | LewdKantian | false | null | 0 | o7rvsqk | false | /r/LocalLLaMA/comments/1rg4fb7/should_qwen3535ba3b_be_this_much_slower_than/o7rvsqk/ | false | 1 |
t1_o7rvs9m | I’ve focused heavily on prefill speed for now but plan to look at decode next. I think there are more gains to be had there, both with further optimisation and with a draft model. My experience with llama.cpp and KTransformers where the VRAM is limited (1x16GB) has been that when using tools like Opencode where the pro... | 6 | 0 | 2026-02-27T21:39:47 | mrstoatey | false | null | 0 | o7rvs9m | false | /r/LocalLLaMA/comments/1rgfm00/i_built_a_hybrid_moe_runtime_that_does_3324_toks/o7rvs9m/ | false | 6 |
t1_o7rvrww | Fixed btw! You can paste the raw JSONs in now ^^ | 2 | 0 | 2026-02-27T21:39:44 | ENT_Alam | false | null | 0 | o7rvrww | false | /r/LocalLLaMA/comments/1r6h3ha/difference_between_qwen_3_maxthinking_and_qwen_35/o7rvrww/ | false | 2 |
t1_o7rvm5c | I'd gone back to MiniMax M2.5 at Q3, but I want to redownload unsloth's new quants for 122B and also test the 27B one | 1 | 0 | 2026-02-27T21:38:56 | LastAd7195 | false | null | 0 | o7rvm5c | false | /r/LocalLLaMA/comments/1rg05k7/qwen_35_122b_a10b_3584_score_on_natint_ugi/o7rvm5c/ | false | 1 |
t1_o7rv918 | Holy shit. A [US company](https://www.arcee.ai/about) with a [400B A13B MoE](https://huggingface.co/arcee-ai/Trinity-Large-Preview) instruct-tuned from a 17T base model??<br>They should immediately fire everyone on their marketing team; it’s a travesty that this hasn’t splashed all over LocalLlama.<br>…unless it’s shit. T... | 1 | 0 | 2026-02-27T21:37:07 | __JockY__ | false | null | 0 | o7rv918 | false | /r/LocalLLaMA/comments/1rfg3kx/american_closed_models_vs_chinese_open_models_is/o7rv918/ | false | 1 |
t1_o7rv82t | Love this so much 💙 | 2 | 0 | 2026-02-27T21:36:59 | RelicDerelict | false | null | 0 | o7rv82t | false | /r/LocalLLaMA/comments/1rg87bj/qwen3535ba3b_running_on_a_raspberry_pi_5_16gb_and/o7rv82t/ | false | 2 |
t1_o7rv70v | Great work! I was thinking of making a separate post, but since this is also in the 16 GB VRAM category, I'm adding my findings for anyone using a 5060 Ti here. My setup is also using 32 GB of DDR5 RAM.<br>All tests were done with q8_0 KV cache, context window 128k, pp 18k, tg 768, depth 0. Why? Because this is closest to a cold st... | 3 | 0 | 2026-02-27T21:36:49 | bobaburger | false | null | 0 | o7rv70v | false | /r/LocalLLaMA/comments/1rg4zqv/followup_qwen3535ba3b_7_communityrequested/o7rv70v/ | false | 3 |
t1_o7rv0qg | Thanks… for sure they are… inevitable, that's the future… I am doing my part… | 0 | 0 | 2026-02-27T21:35:57 | fourwheels2512 | false | null | 0 | o7rv0qg | false | /r/LocalLLaMA/comments/1rgd851/catastrophic_forgetting_by_language_models/o7rv0qg/ | false | 0 |
t1_o7ruzo6 | The Huntarr situation is just garden variety incompetence, overconfidence and arrogance, all understandably human traits. The SSH Software I mentioned above? Someone went to some effort to conceal their history so that automatically makes me distrust it. | 2 | 0 | 2026-02-27T21:35:49 | ArthurStevensNZ | false | null | 0 | o7ruzo6 | false | /r/LocalLLaMA/comments/1rfp6bk/why_is_openclaw_even_this_popular/o7ruzo6/ | false | 2 |
t1_o7rupsj | The supported architecture is up to 3, not 3.5? Am I missing something? | 1 | 0 | 2026-02-27T21:34:26 | RelicDerelict | false | null | 0 | o7rupsj | false | /r/LocalLLaMA/comments/1rg8gkx/qwen35_35b_a3b_45_ts_128k_ctx_on_single_16gb_5060/o7rupsj/ | false | 1 |
t1_o7ruoiq | > HATE SPEECH *is* our (US) government<br>Actually, I’m deeply grateful to Trump for that. Finally the taboo on a strong European army is broken, and countries are increasing their real spending on armies and headcount.<br>It’ll be good for us Europeans to be strong again. | 1 | 0 | 2026-02-27T21:34:15 | ProfessionalSpend589 | false | null | 0 | o7ruoiq | false | /r/LocalLLaMA/comments/1rfg3kx/american_closed_models_vs_chinese_open_models_is/o7ruoiq/ | false | 1 |
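A minimal sketch of the setup described in t1_o7s2sk3 and t1_o7s4ctx above, translated to llama.cpp flags. The model path is a placeholder, and the flags assume a recent llama.cpp build; LM Studio exposes the same idea through its offload sliders:

```sh
# Offload all layers to the GPU, then push the MoE expert weights of the
# first 20 layers back to the CPU; tune the count until the rest fits in VRAM.
llama-server -m /models/Qwen3.5-35B-A3B-Q4_K_M.gguf \
  -ngl 99 \
  --n-cpu-moe 20 \
  -c 32768
# On builds without --n-cpu-moe, an override-tensor regex does the same job:
#   -ot "exps=CPU"   # keep all expert tensors on the CPU
# To keep the KV cache in system RAM instead of VRAM, add: --no-kv-offload
```

The attention layers stay on the GPU (they run every token), while the sparse expert weights, which are large but only partially active per token, sit in system RAM.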
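For the SYCL-isolation test suggested in t1_o7s0rz8, a CPU-only llama.cpp build is simply the default CMake build with no SYCL flag enabled. A sketch, with the model path taken from the docker command in t1_o7rx3hu:

```sh
git clone https://github.com/ggml-org/llama.cpp
cd llama.cpp
# No -DGGML_SYCL=ON, so this produces a pure CPU build.
cmake -B build
cmake --build build --config Release -j
# If this runs cleanly where the SYCL container fails, the bug is in the SYCL backend.
./build/bin/llama-cli -m /models/Qwen3-Coder-Next-IQ4_XS.gguf -p "Hello" -n 32
```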
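The tool-call-for-JSON pattern from t1_o7s0740, sketched against an OpenAI-compatible endpoint; the URL, model name, and `save_person` schema are all illustrative. Forcing the call via `tool_choice` makes the model emit arguments conforming to the schema instead of free-form JSON:

```sh
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "local",
    "messages": [{"role": "user", "content": "Extract: Alice, 30, Berlin"}],
    "tools": [{
      "type": "function",
      "function": {
        "name": "save_person",
        "description": "Store one extracted person record",
        "parameters": {
          "type": "object",
          "properties": {
            "name": {"type": "string"},
            "age":  {"type": "integer"},
            "city": {"type": "string"}
          },
          "required": ["name", "age", "city"]
        }
      }
    }],
    "tool_choice": {"type": "function", "function": {"name": "save_person"}}
  }'
```

The structured arguments come back in `choices[0].message.tool_calls[0].function.arguments` as a JSON string, ready to parse.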