name string | body string | score int64 | controversiality int64 | created timestamp[us] | author string | collapsed bool | edited timestamp[us] | gilded int64 | id string | locked bool | permalink string | stickied bool | ups int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
t1_o7xhxo3 | Looks cool! | 3 | 0 | 2026-02-28T19:35:25 | lundrog | false | null | 0 | o7xhxo3 | false | /r/LocalLLaMA/comments/1rhbfya/shunyanet_sentinel_a_selfhosted_rss_aggregator/o7xhxo3/ | false | 3 |
t1_o7xhw05 | Just don't type "hello" | 1 | 0 | 2026-02-28T19:35:10 | zipzag | false | null | 0 | o7xhw05 | false | /r/LocalLLaMA/comments/1rh43za/qwen_3535ba3b_is_beyond_expectations_its_replaced/o7xhw05/ | false | 1 |
t1_o7xhqa5 | And it is actually punching above its weight (but not usable for me due to the insane thinking times!, would just tune a bigger model that would take less time I guess!) | 5 | 0 | 2026-02-28T19:34:20 | Potential_Block4598 | false | null | 0 | o7xhqa5 | false | /r/LocalLLaMA/comments/1rh6pru/google_found_that_longer_chain_of_thought/o7xhqa5/ | false | 5 |
t1_o7xhmz3 | No. In fact v4 is coming soon according to Hassabis | 1 | 0 | 2026-02-28T19:33:51 | alexx_kidd | false | null | 0 | o7xhmz3 | false | /r/LocalLLaMA/comments/1rh3thm/rip_gemma_leave_your_memories_here/o7xhmz3/ | false | 1 |
t1_o7xhmrk | Have you tried nanbeige?<br>It is a 4B model that thinks A LOT (one question might take 3k tokens of thinking!) | 7 | 0 | 2026-02-28T19:33:49 | Potential_Block4598 | false | null | 0 | o7xhmrk | false | /r/LocalLLaMA/comments/1rh6pru/google_found_that_longer_chain_of_thought/o7xhmrk/ | false | 7 |
t1_o7xhlnn | Which is dumb because GPTOSS is also a MoE, you are comparing Apples to Apples already, no formula needed gpt-oss-120B has 5.1B active parameters on the MoE Layers, and the MoE layers are trained from the ground up in MXFP4 format.<br>That formula is for comparing dense and MoE models, but is kinda outdated because arc... | 4 | 0 | 2026-02-28T19:33:39 | guesdo | false | null | 0 | o7xhlnn | false | /r/LocalLLaMA/comments/1rh43za/qwen_3535ba3b_is_beyond_expectations_its_replaced/o7xhlnn/ | false | 4 |
t1_o7xhkqo | Traces is the fix | 1 | 0 | 2026-02-28T19:33:32 | mikkel1156 | false | null | 0 | o7xhkqo | false | /r/LocalLLaMA/comments/1rgwyqi/agent_debugging_is_a_mess_am_i_the_only_one/o7xhkqo/ | false | 1 |
t1_o7xhkm1 | ngl, this sub is my favorite place on the internet lately. | 8 | 0 | 2026-02-28T19:33:31 | jeremyckahn | false | null | 0 | o7xhkm1 | false | /r/LocalLLaMA/comments/1rh9u4r/this_sub_is_incredible/o7xhkm1/ | false | 8 |
t1_o7xhk7t | what model are you using for coding? | 1 | 0 | 2026-02-28T19:33:27 | msbeaute00000001 | false | null | 0 | o7xhk7t | false | /r/LocalLLaMA/comments/1rh46g2/why_some_still_playing_with_old_models_nostalgia/o7xhk7t/ | false | 1 |
t1_o7xhixs | I was referring to the overall concept that overthinking things can lead to worse accuracy | 1 | 0 | 2026-02-28T19:33:15 | ThatRandomJew7 | false | null | 0 | o7xhixs | false | /r/LocalLLaMA/comments/1rh6pru/google_found_that_longer_chain_of_thought/o7xhixs/ | false | 1 |
t1_o7xhh5l | Qwen is just as good as Gemma in Greek | 1 | 0 | 2026-02-28T19:32:59 | alexx_kidd | false | null | 0 | o7xhh5l | false | /r/LocalLLaMA/comments/1rh3thm/rip_gemma_leave_your_memories_here/o7xhh5l/ | false | 1 |
t1_o7xhecf | I've asked Gemini that question. The model is meant to be different, to my surprise, at least some of the time. I would suggest you pay for one month after planning a systematic way to test both modes, save the output, and get a different cloud model to analyse. You can ask free Claude. Then depending on the outcome yo... | -2 | 0 | 2026-02-28T19:32:34 | Hector_Rvkp | false | null | 0 | o7xhecf | false | /r/LocalLLaMA/comments/1rhbbq1/gemini_ultra_vs_pro_actually_different_or_just_a/o7xhecf/ | false | -2 |
t1_o7xhckt | 9 what nine ?!<br>Qwen3.5 the big one<br>122b<br>27b<br>And the 35b<br>That is not nine is it ?! | 1 | 0 | 2026-02-28T19:32:19 | Potential_Block4598 | false | null | 0 | o7xhckt | false | /r/LocalLLaMA/comments/1rgzul5/are_you_ready_for_small_qwens/o7xhckt/ | false | 1 |
t1_o7xhbtu | We need wrappers like LM Studio to add the presence penalty that helps mitigate that. | 1 | 0 | 2026-02-28T19:32:12 | Iory1998 | false | null | 0 | o7xhbtu | false | /r/LocalLLaMA/comments/1rhaoty/anyone_noticing_qwen35_27b_getting_stuck_in/o7xhbtu/ | false | 1 |
t1_o7xh7h8 | Not quite – prefix caching helps when multiple requests share the same prompt prefix (like a system prompt). But in a multi agent chain, each agent’s prompt is different, it includes the previous agent’s output. So there’s no shared prefix to cache between hops.<br>AVP skips that entirely. Instead of pasting text output ... | 5 | 0 | 2026-02-28T19:31:34 | proggmouse | false | null | 0 | o7xh7h8 | false | /r/LocalLLaMA/comments/1rh802w/what_if_llm_agents_passed_kvcache_to_each_other/o7xh7h8/ | false | 5 |
t1_o7xh7iz | This is not what the paper states. | 0 | 0 | 2026-02-28T19:31:34 | Thomas-Lore | false | null | 0 | o7xh7iz | false | /r/LocalLLaMA/comments/1rh6pru/google_found_that_longer_chain_of_thought/o7xh7iz/ | false | 0 |
t1_o7xh5b5 | > I came across this article on 𝕏 where they used Clawdbot with polymarket to make money.<br>The ratio of red flags to words in this sentence is amazing. | 5 | 0 | 2026-02-28T19:31:15 | TastesLikeOwlbear | false | null | 0 | o7xh5b5 | false | /r/LocalLLaMA/comments/1rhazbc/13_m1_mbp_instead_of_m4_mac_mini/o7xh5b5/ | false | 5 |
t1_o7xh2qt | Haven’t tried Claude code but I know from vllm install and trying Ollama there are a lot of stability issues with dependency versions so it may need some time still to be stabilized. Once o got it working was impressive for local | 2 | 0 | 2026-02-28T19:30:52 | 2BucChuck | false | null | 0 | o7xh2qt | false | /r/LocalLLaMA/comments/1rh6455/anybody_able_to_get_qwen3535ba3b_working_with/o7xh2qt/ | false | 2 |
t1_o7xgxbs | This is not what the paper states. | 12 | 0 | 2026-02-28T19:30:05 | Thomas-Lore | false | null | 0 | o7xgxbs | false | /r/LocalLLaMA/comments/1rh6pru/google_found_that_longer_chain_of_thought/o7xgxbs/ | false | 12 |
t1_o7xgws6 | did anyone try it with the ai hat yet? | 1 | 0 | 2026-02-28T19:30:01 | overflow74 | false | null | 0 | o7xgws6 | false | /r/LocalLLaMA/comments/1rg87bj/qwen3535ba3b_running_on_a_raspberry_pi_5_16gb_and/o7xgws6/ | false | 1 |
t1_o7xgwmm | They are not on the same level.<br>Qwen3.5-35B-A3B-GGUF Q5_K_M is 26.2GB. | 0 | 0 | 2026-02-28T19:29:59 | moahmo88 | false | null | 0 | o7xgwmm | false | /r/LocalLLaMA/comments/1rh0msv/qwen3535ba3b_q5_k_mbest_model_for_nvidia_16gb_gpus/o7xgwmm/ | false | 0 |
t1_o7xgugq | At only 50 tokens? I doubt it. | 1 | 0 | 2026-02-28T19:29:40 | Thomas-Lore | false | null | 0 | o7xgugq | false | /r/LocalLLaMA/comments/1rh6pru/google_found_that_longer_chain_of_thought/o7xgugq/ | false | 1 |
t1_o7xgtzg | I have a disk of x teras, which models should I download? They are recurrent but at the same time very useful. The problem comes later with a new model that I eliminate? If I delete the version before this, what am I losing? | 1 | 0 | 2026-02-28T19:29:36 | Macestudios32 | false | null | 0 | o7xgtzg | false | /r/LocalLLaMA/comments/1rguty0/get_your_local_models_in_order_anthropic_just_got/o7xgtzg/ | false | 1 |
t1_o7xgsi6 | Who gives a shit if it was prompt engineered or not? Automation doesn't negate anything. You're a dipshit and will be left in the past. Have fun with that | 1 | 0 | 2026-02-28T19:29:23 | TheBurkMeister | false | null | 0 | o7xgsi6 | false | /r/LocalLLaMA/comments/1r9gve8/i_feel_left_behind_what_is_special_about_openclaw/o7xgsi6/ | false | 1 |
t1_o7xgsc3 | ? Many MLX quants are already available. I'm running it using oMLX just fine | 7 | 0 | 2026-02-28T19:29:22 | dwkdnvr | false | null | 0 | o7xgsc3 | false | /r/LocalLLaMA/comments/1rh9k63/qwen35_35ba3b_replaced_my_2model_agentic_setup_on/o7xgsc3/ | false | 7 |
t1_o7xgp6t | Bots are here to stay | 9 | 0 | 2026-02-28T19:28:55 | DinoAmino | false | null | 0 | o7xgp6t | false | /r/LocalLLaMA/comments/1rhb1xb/fine_tuning_on_proprietary_data_is_way_harder_to/o7xgp6t/ | false | 9 |
t1_o7xglrp | They can think what ever they want. And not every thought needs to be hurled at someone else to make them feel like shit | 1 | 0 | 2026-02-28T19:28:25 | TheBurkMeister | false | null | 0 | o7xglrp | false | /r/LocalLLaMA/comments/1r9gve8/i_feel_left_behind_what_is_special_about_openclaw/o7xglrp/ | false | 1 |
t1_o7xglls | Local models are the future. The consolidation is happening soon for the megacorps. Many will be burned. | 1 | 0 | 2026-02-28T19:28:24 | Taki_Minase | false | null | 0 | o7xglls | false | /r/LocalLLaMA/comments/1rguty0/get_your_local_models_in_order_anthropic_just_got/o7xglls/ | false | 1 |
t1_o7xgkdi | that’s true. apologies for the inconvenience.<br>is there a way you can run the following command on your terminal. it’s a HF command :<br># delete any partial blob and force fresh download<br>rm -rf ~/.cache/huggingface/hub/models--srswti--bodega-raptor-0.9b/blobs/*<br># force download with explicit file<br>huggingface-cli downlo... | 1 | 0 | 2026-02-28T19:28:14 | EmbarrassedAsk2887 | false | null | 0 | o7xgkdi | false | /r/LocalLLaMA/comments/1rgkzlo/realtime_speech_to_speech_engine_runs_fully_local/o7xgkdi/ | false | 1 |
t1_o7xghy7 | Precompiled Llama.cpp is great as it comes with it's own webui. './llama-server -m /path/to/model' open up browser, input 'localhost:8080' and you're golden. | 1 | 0 | 2026-02-28T19:27:53 | hackiv | false | null | 0 | o7xghy7 | false | /r/LocalLLaMA/comments/1rh1q8j/i919400f_rtx_4070_super_12gb_32gb_ddr5_ram/o7xghy7/ | false | 1 |
t1_o7xgcsz | Still using it for conversation loops | 1 | 0 | 2026-02-28T19:27:09 | mikkel1156 | false | null | 0 | o7xgcsz | false | /r/LocalLLaMA/comments/1rh3thm/rip_gemma_leave_your_memories_here/o7xgcsz/ | false | 1 |
t1_o7xg9cn | yeah, just seems like extra steps and more links in the 'ol security chain. | 2 | 0 | 2026-02-28T19:26:38 | Complainer_Official | false | null | 0 | o7xg9cn | false | /r/LocalLLaMA/comments/1rhb99s/agents_are_here_but_ecommerce_stuck_in_the_past/o7xg9cn/ | false | 2 |
t1_o7xg8fk | This is not what the paper proposes. They run several branches but only allow some of them to finish thinking, the rest is thrown out. It speeds up the multi-reasoning approach that Deep Thinking version of Gemini or GPT 5 Pro models use. | 1 | 0 | 2026-02-28T19:26:30 | Thomas-Lore | false | null | 0 | o7xg8fk | false | /r/LocalLLaMA/comments/1rh6pru/google_found_that_longer_chain_of_thought/o7xg8fk/ | false | 1 |
t1_o7xg7vf | Yeah Gemini 3 loves to reason in comments above code it writes, haha. | 13 | 0 | 2026-02-28T19:26:25 | dsanft | false | null | 0 | o7xg7vf | false | /r/LocalLLaMA/comments/1rh5luv/qwen35_35ba3b_evaded_the_zeroreasoning_budget_by/o7xg7vf/ | false | 13 |
t1_o7xg6w5 | Even at Q3 quants it beats it for me... | 1 | 0 | 2026-02-28T19:26:17 | Significant_Fig_7581 | false | null | 0 | o7xg6w5 | false | /r/LocalLLaMA/comments/1rgoygs/does_qwen35_35b_outperform_qwen3_coder_next_80b/o7xg6w5/ | false | 1 |
t1_o7xg62k | Windows version? And llama.cpp support? | 3 | 0 | 2026-02-28T19:26:09 | pmttyji | false | null | 0 | o7xg62k | false | /r/LocalLLaMA/comments/1rhalir/local_llms_are_slow_i_have_too_many_things_to_try/o7xg62k/ | false | 3 |
t1_o7xg4ay | I think it’s VS only? | 1 | 0 | 2026-02-28T19:25:54 | BahnMe | false | null | 0 | o7xg4ay | false | /r/LocalLLaMA/comments/1rgtxry/is_qwen35_a_coding_game_changer_for_anyone_else/o7xg4ay/ | false | 1 |
t1_o7xg3vc | TLDR; You need to set your CTX-Size to 2048, to have true Apples to Apples. 2048 is the base height to ride the PPL/KLD ride when comparing models.<br>I did not see this before, but in your information above you have your context set for PPL at 512. This is not apples to apples. I recommend reading up on how Turbo... | 2 | 0 | 2026-02-28T19:25:50 | Phaelon74 | false | null | 0 | o7xg3vc | false | /r/LocalLLaMA/comments/1rgel19/new_qwen3535ba3b_unsloth_dynamic_ggufs_benchmarks/o7xg3vc/ | false | 2 |
t1_o7xg0k6 | You forgot to specify your parameters, your model quant, your KV cache quant.<br>Please give more context. | 1 | 0 | 2026-02-28T19:25:21 | PhilippeEiffel | false | null | 0 | o7xg0k6 | false | /r/LocalLLaMA/comments/1rhaoty/anyone_noticing_qwen35_27b_getting_stuck_in/o7xg0k6/ | false | 1 |
t1_o7xfyhj | So this means that this is useless if I'm using an inference engine that has prefix caching? I feel like all of them do nowadays. | 8 | 0 | 2026-02-28T19:25:03 | No-Refrigerator-1672 | false | null | 0 | o7xfyhj | false | /r/LocalLLaMA/comments/1rh802w/what_if_llm_agents_passed_kvcache_to_each_other/o7xfyhj/ | false | 8 |
t1_o7xfum9 | What’s a good one to start with? | 1 | 0 | 2026-02-28T19:24:30 | BahnMe | false | null | 0 | o7xfum9 | false | /r/LocalLLaMA/comments/1rgtxry/is_qwen35_a_coding_game_changer_for_anyone_else/o7xfum9/ | false | 1 |
t1_o7xfsgz | This is in llama serve? How is the command will it run on gpu and as well cpu if low vram, just a look on the command. | 1 | 0 | 2026-02-28T19:24:11 | abubakkar_s | false | null | 0 | o7xfsgz | false | /r/LocalLLaMA/comments/1rgzul5/are_you_ready_for_small_qwens/o7xfsgz/ | false | 1 |
t1_o7xfrhg | I don't think anything on the iPhone is private so what's the use case for the local LLM? | 2 | 0 | 2026-02-28T19:24:02 | jacek2023 | false | null | 0 | o7xfrhg | false | /r/LocalLLaMA/comments/1rgzul5/are_you_ready_for_small_qwens/o7xfrhg/ | false | 2 |
t1_o7xfodx | Oh I actually like the look of this. I've implemented half of this in slightly hacky ways for my own use but this looks much cleaner. I'll give it a good shot | 2 | 0 | 2026-02-28T19:23:35 | Front_Eagle739 | false | null | 0 | o7xfodx | false | /r/LocalLLaMA/comments/1rhalir/local_llms_are_slow_i_have_too_many_things_to_try/o7xfodx/ | false | 2 |
t1_o7xfnzi | isn't it all just Gemini 3.1 Pro? | 4 | 0 | 2026-02-28T19:23:31 | atape_1 | false | null | 0 | o7xfnzi | false | /r/LocalLLaMA/comments/1rhbbq1/gemini_ultra_vs_pro_actually_different_or_just_a/o7xfnzi/ | false | 4 |
t1_o7xfnd8 | PayPal = your real credentials stored, merchant has to integrate. PayClaw = fresh card per task, anywhere Visa works, expires in 15 min. Closer to Ramp for agents than PayPal. | 1 | 0 | 2026-02-28T19:23:26 | Opposite-Exam3541 | false | null | 0 | o7xfnd8 | false | /r/LocalLLaMA/comments/1rhb99s/agents_are_here_but_ecommerce_stuck_in_the_past/o7xfnd8/ | false | 1 |
t1_o7xfja9 | vLLM has some interesting capabilities:<br>\- robust when concurrency is required<br>\- ability to change gpt-oss reasoning level on the fly (llama.cpp need restart)<br>\- recommended default parameters are in the model's files<br>I probably forgot some additional advantages. | 1 | 0 | 2026-02-28T19:22:51 | PhilippeEiffel | false | null | 0 | o7xfja9 | false | /r/LocalLLaMA/comments/1r7l7q5/the_strix_halo_feels_like_an_amazing_super_power/o7xfja9/ | false | 1 |
t1_o7xfhdx | My school give free access to the HPU which contains many 3090s, H200s, RTX 6000, A90s ect. Its been fun | 4 | 0 | 2026-02-28T19:22:34 | jovn1234567890 | false | null | 0 | o7xfhdx | false | /r/LocalLLaMA/comments/1rh9u4r/this_sub_is_incredible/o7xfhdx/ | false | 4 |
t1_o7xfegu | Maybe frees up gpus at claude, more free tokens coming! | 1 | 0 | 2026-02-28T19:22:09 | Comfortable_Camp9744 | false | null | 0 | o7xfegu | false | /r/LocalLLaMA/comments/1rguty0/get_your_local_models_in_order_anthropic_just_got/o7xfegu/ | false | 1 |
t1_o7xfclv | The Chinese are laughing. | 1 | 0 | 2026-02-28T19:21:53 | Own_Respond_9189 | false | null | 0 | o7xfclv | false | /r/LocalLLaMA/comments/1rgn4ki/president_trump_orders_all_federal_agencies_in/o7xfclv/ | false | 1 |
t1_o7xfb55 | Smart. | 1 | 0 | 2026-02-28T19:21:41 | Pantoffel86 | false | null | 0 | o7xfb55 | false | /r/LocalLLaMA/comments/1rh5luv/qwen35_35ba3b_evaded_the_zeroreasoning_budget_by/o7xfb55/ | false | 1 |
t1_o7xfar2 | The only AI sub not on my mute list! Love it here! | 32 | 0 | 2026-02-28T19:21:38 | AbheekG | false | null | 0 | o7xfar2 | false | /r/LocalLLaMA/comments/1rh9u4r/this_sub_is_incredible/o7xfar2/ | false | 32 |
t1_o7xf3v6 | Hahahaha I laughed a lot harder than expected. | 2 | 0 | 2026-02-28T19:20:37 | Hector_Rvkp | false | null | 0 | o7xf3v6 | false | /r/LocalLLaMA/comments/1rhazbc/13_m1_mbp_instead_of_m4_mac_mini/o7xf3v6/ | false | 2 |
t1_o7xf2y7 | so... its paypal? | 1 | 0 | 2026-02-28T19:20:29 | Complainer_Official | false | null | 0 | o7xf2y7 | false | /r/LocalLLaMA/comments/1rhb99s/agents_are_here_but_ecommerce_stuck_in_the_past/o7xf2y7/ | false | 1 |
t1_o7xf2j1 | If it fits an iphone it will be an instant favorite | 1 | 0 | 2026-02-28T19:20:26 | Traditional-Card6096 | false | null | 0 | o7xf2j1 | false | /r/LocalLLaMA/comments/1rgzul5/are_you_ready_for_small_qwens/o7xf2j1/ | false | 1 |
t1_o7xex8p | Yeah, the market is not so welcoming for now, so i decided to be loyal at work now :D | 12 | 0 | 2026-02-28T19:19:40 | bobaburger | false | null | 0 | o7xex8p | false | /r/LocalLLaMA/comments/1rh9u4r/this_sub_is_incredible/o7xex8p/ | false | 12 |
t1_o7xew3r | The problem in my mind is you need to have a good underlying simulation. If humans can have fun doing the activities you will learn more from the models. | 1 | 0 | 2026-02-28T19:19:31 | vox-deorum | false | null | 0 | o7xew3r | false | /r/LocalLLaMA/comments/1rf7wy0/rant_post_genuinely_losing_my_mind_over_a_llm/o7xew3r/ | false | 1 |
t1_o7xerws | I use it for a baseline for 2 of my Apps. (They are MacOS only - sorry!)<br>[https://github.com/scouzi1966/maclocal-api](https://github.com/scouzi1966/maclocal-api) (OpenSource)<br>[https://github.com/scouzi1966/vesta-mac-dist](https://github.com/scouzi1966/vesta-mac-dist) (Closed source for now -- want to shape it). Th... | 1 | 0 | 2026-02-28T19:18:55 | scousi | false | null | 0 | o7xerws | false | /r/LocalLLaMA/comments/1rh1haa/benchmarks_report_optimized_cosmosreason2_qwen3vl/o7xerws/ | false | 1 |
t1_o7xepbk | I built one for them to play Civilization. Immense fun. Hopefully to be able to share the detailed data with you soon.. | 1 | 0 | 2026-02-28T19:18:33 | vox-deorum | false | null | 0 | o7xepbk | false | /r/LocalLLaMA/comments/1rf7wy0/rant_post_genuinely_losing_my_mind_over_a_llm/o7xepbk/ | false | 1 |
t1_o7xep7f | my PC is slow for local LLMs so I'd kick off a task and just... wait - so im not saying local llms are slow in general sorry | 1 | 0 | 2026-02-28T19:18:32 | BadBoy17Ge | false | null | 0 | o7xep7f | false | /r/LocalLLaMA/comments/1rhalir/local_llms_are_slow_i_have_too_many_things_to_try/o7xep7f/ | false | 1 |
t1_o7xen7r | FWIW LMCache solves a different problem. It caches KV for previously seen text so you don’t re-prefill the same prompt across requests. AVP transfers KV-cache between agents with different prompts as a communication channel.<br>One is “I’ve seen this text before, skip prefill.”<br>The other is “here’s my reasoning, don’t ma... | 7 | 0 | 2026-02-28T19:18:15 | proggmouse | false | null | 0 | o7xen7r | false | /r/LocalLLaMA/comments/1rh802w/what_if_llm_agents_passed_kvcache_to_each_other/o7xen7r/ | false | 7 |
t1_o7xec98 | If anything the price of non flagship cards will go up due to increased on premises llm demand lol | 3 | 0 | 2026-02-28T19:16:42 | gradient8 | false | null | 0 | o7xec98 | false | /r/LocalLLaMA/comments/1rh095c/deepseek_v4_will_be_released_next_week_and_will/o7xec98/ | false | 3 |
t1_o7xe56y | Haha | 0 | 0 | 2026-02-28T19:15:41 | Iory1998 | false | null | 0 | o7xe56y | false | /r/LocalLLaMA/comments/1rgzul5/are_you_ready_for_small_qwens/o7xe56y/ | false | 0 |
t1_o7xe3j0 | I thought that's how tech worked, myself 😂 | 1 | 0 | 2026-02-28T19:15:27 | ctanna5 | false | null | 0 | o7xe3j0 | false | /r/LocalLLaMA/comments/1rcpmwn/anthropic_weve_identified_industrialscale/o7xe3j0/ | false | 1 |
t1_o7xe3dm | 4B is good for my purposes actually.<br>I'm writing my own inferencing engine and small models are great to test new architectures with. | 1 | 0 | 2026-02-28T19:15:25 | dsanft | false | null | 0 | o7xe3dm | false | /r/LocalLLaMA/comments/1rgzul5/are_you_ready_for_small_qwens/o7xe3dm/ | false | 1 |
t1_o7xdx37 | backup: https://files.catbox.moe/dhumvv.png | 1 | 0 | 2026-02-28T19:14:30 | MelodicRecognition7 | false | null | 0 | o7xdx37 | false | /r/LocalLLaMA/comments/1rh4rsf/i_compiled_every_confirmed_rubin_vs_blackwell/o7xdx37/ | false | 1 |
t1_o7xdun0 | My surname is Stalker, and LLMs don't like it because the common noun appears far more often in their training data. | 1 | 0 | 2026-02-28T19:14:09 | ross_st | false | null | 0 | o7xdun0 | false | /r/LocalLLaMA/comments/1rfg3kx/american_closed_models_vs_chinese_open_models_is/o7xdun0/ | false | 1 |
t1_o7xdtoy | There is/was a NY court order that prohibited OpenAI from actually deleting the chats that the users "deleted". OpenAI may actually want to be ethical but in the end the US government and US courts can just take the data. And that will cause massive issues in the EU where companies actually have to follow the law. | 3 | 0 | 2026-02-28T19:14:01 | dingo_xd | false | null | 0 | o7xdtoy | false | /r/LocalLLaMA/comments/1rh43za/qwen_3535ba3b_is_beyond_expectations_its_replaced/o7xdtoy/ | false | 3 |
t1_o7xdrc5 | Consider big PC as your next computer because you can always replace the future GPU with another future GPU | 1 | 0 | 2026-02-28T19:13:41 | jacek2023 | false | null | 0 | o7xdrc5 | false | /r/LocalLLaMA/comments/1rgzul5/are_you_ready_for_small_qwens/o7xdrc5/ | false | 1 |
t1_o7xdoh3 | Never tried it on Windows unfortunately. First try to install llama-server<br>Ask Gemini, it's been really helpful with all kind of installation advice and tuning adive, it's very good at that: [https://gemini.google.com/app](https://gemini.google.com/app) | 2 | 0 | 2026-02-28T19:13:17 | AppealSame4367 | false | null | 0 | o7xdoh3 | false | /r/LocalLLaMA/comments/1rgzul5/are_you_ready_for_small_qwens/o7xdoh3/ | false | 2 |
t1_o7xdljz | Qwen3.5 27B runs ok on 16VRAM at iQ3XS. It can be fully offloaded to GPU and its fast and vison does not cause OOM. But don't expect huge content length.<br>For 35B MOE, it's too much for 16GB🫠 Perhaps IQ2 may fit...not with only 3B active params at Q2...<br>My PC is still DDR4, so medium size MOE models are generally slow ... | 2 | 0 | 2026-02-28T19:12:52 | FancyImagination880 | false | null | 0 | o7xdljz | false | /r/LocalLLaMA/comments/1rgzul5/are_you_ready_for_small_qwens/o7xdljz/ | false | 2 |
t1_o7xdjab | Sure, but point is to squeeze the hardware that somebody already have. In future we'll have more ad-hoc hardware | 1 | 0 | 2026-02-28T19:12:32 | Deep_Traffic_7873 | false | null | 0 | o7xdjab | false | /r/LocalLLaMA/comments/1rgixk7/accuracy_vs_speed_my_top_5/o7xdjab/ | false | 1 |
t1_o7xdj02 | To me it's quite the opposite haha it's funny how personal experience differs from person to person. I have used AesSedai's Minimax 2.5 Q5 to perform incremental tasks in an existing project, and it has been great | 6 | 0 | 2026-02-28T19:12:29 | Septerium | false | null | 0 | o7xdj02 | false | /r/LocalLLaMA/comments/1rgokw1/a_monthly_update_to_my_where_are_openweight/o7xdj02/ | false | 6 |
t1_o7xdgb7 | look up "GRPO done right" | 0 | 0 | 2026-02-28T19:12:06 | Necessary-Wasabi-619 | false | null | 0 | o7xdgb7 | false | /r/LocalLLaMA/comments/1rh6pru/google_found_that_longer_chain_of_thought/o7xdgb7/ | false | 0 |
t1_o7xdge2 | Often, the only way to get that raise is to move firms. | 25 | 0 | 2026-02-28T19:12:06 | Veastli | false | null | 0 | o7xdge2 | false | /r/LocalLLaMA/comments/1rh9u4r/this_sub_is_incredible/o7xdge2/ | false | 25 |
t1_o7xdfxh | What's the news? Is Gemma officially dead ? | 1 | 0 | 2026-02-28T19:12:02 | tomakorea | false | null | 0 | o7xdfxh | false | /r/LocalLLaMA/comments/1rh3thm/rip_gemma_leave_your_memories_here/o7xdfxh/ | false | 1 |
t1_o7xdel4 | It's a laptop, i would have to buy a new one. And if I would do that, I'd buy an HP Elite Book with 128GB unified ram..<br>Maybe in summer | 1 | 0 | 2026-02-28T19:11:51 | AppealSame4367 | false | null | 0 | o7xdel4 | false | /r/LocalLLaMA/comments/1rgzul5/are_you_ready_for_small_qwens/o7xdel4/ | false | 1 |
t1_o7xdb0z | I noticed it too - in both qwen3.5-27b and qwen3.5-35b-a3b. It's pretty rare - happened only a few times during my few hundred inferences. I used recommended sampling parameters for precise tasks (temp 0.6, top\_p 0.95, top\_k 20) and official OpenRouter Alibaba endpoint. | 1 | 0 | 2026-02-28T19:11:21 | fairydreaming | false | null | 0 | o7xdb0z | false | /r/LocalLLaMA/comments/1rhaoty/anyone_noticing_qwen35_27b_getting_stuck_in/o7xdb0z/ | false | 1 |
t1_o7xd0tc | the thinking disabled tip is criminally underrated in this post<br>thinking mode is a trap for agentic tasks — you're paying 2-3x latency for marginal gains on steps where the model already knows what to do. the planning overhead kills you in multi-step loops<br>also ditching 2 specialized models removes all the routin... | 23 | 0 | 2026-02-28T19:09:54 | salmenus | false | null | 0 | o7xd0tc | false | /r/LocalLLaMA/comments/1rh9k63/qwen35_35ba3b_replaced_my_2model_agentic_setup_on/o7xd0tc/ | false | 23 |
t1_o7xd0bk | The cleaner path I've seen work is front-loading the data classification conversation before anyone writes a line of code. Get legal to define the data sensitivity tiers and what each tier allows (local inference only, private cloud, managed API with DPA, etc.) in week one. Then your architecture choices fall out of th... | 1 | 0 | 2026-02-28T19:09:50 | ruibranco | false | null | 0 | o7xd0bk | false | /r/LocalLLaMA/comments/1rhb1xb/fine_tuning_on_proprietary_data_is_way_harder_to/o7xd0bk/ | false | 1 |
t1_o7xczxl | thats because google ran trillions of tokens through it, Ive asked gemma 3 27b about a ps1 game called eternal eyes, and it made comments that directly linked to that game, no model at that size can pull as much obscure lore as googles models, because they just dont throw as many tokens at the training as google does | 2 | 0 | 2026-02-28T19:09:47 | Background-Ad-5398 | false | null | 0 | o7xczxl | false | /r/LocalLLaMA/comments/1rh3thm/rip_gemma_leave_your_memories_here/o7xczxl/ | false | 2 |
t1_o7xczzu | Is the account selling a course for a crazy price? Then it's a scam | 1 | 0 | 2026-02-28T19:09:47 | Realistic_Muscles | false | null | 0 | o7xczzu | false | /r/LocalLLaMA/comments/1rhazbc/13_m1_mbp_instead_of_m4_mac_mini/o7xczzu/ | false | 1 |
t1_o7xcye6 | Disagree and that's fine | 1 | 0 | 2026-02-28T19:09:33 | ForsookComparison | false | null | 0 | o7xcye6 | false | /r/LocalLLaMA/comments/1rgokw1/a_monthly_update_to_my_where_are_openweight/o7xcye6/ | false | 1 |
t1_o7xcwp6 | sustained performance is lower on a macbook, its for mobility. Also people prefer mini because they already have monitors for their computer setup and mini has less footprint to sit on the desk.<br>Also M4 is just a much higher performing chip?<br>Btw, Clawdbot is not a local LLM, its an API interface basically, so you're ... | 1 | 0 | 2026-02-28T19:09:18 | Amazing_Trace | false | null | 0 | o7xcwp6 | false | /r/LocalLLaMA/comments/1rhazbc/13_m1_mbp_instead_of_m4_mac_mini/o7xcwp6/ | false | 1 |
t1_o7xckw6 | the article is using an advanced technique known as "lying" | 8 | 0 | 2026-02-28T19:07:34 | HopePupal | false | null | 0 | o7xckw6 | false | /r/LocalLLaMA/comments/1rhazbc/13_m1_mbp_instead_of_m4_mac_mini/o7xckw6/ | false | 8 |
t1_o7xckh2 | Super will be sort of OP size range. 100B but trained in NVFP4. Ultra is supposed to be 500B. | 2 | 0 | 2026-02-28T19:07:31 | Middle_Bullfrog_6173 | false | null | 0 | o7xckh2 | false | /r/LocalLLaMA/comments/1rh9ygz/is_anyone_else_waiting_for_a_6070b_moe_with_810b/o7xckh2/ | false | 2 |
t1_o7xciej | have you tried more modern ocr? PaddleVL-OCR? etc? | 1 | 0 | 2026-02-28T19:07:13 | Budget-Juggernaut-68 | false | null | 0 | o7xciej | false | /r/LocalLLaMA/comments/1rh3xey/seeking_help_improving_ocr_in_my_rag_pipeline/o7xciej/ | false | 1 |
t1_o7xcglj | Way Worse. Here's KLD on Llama3.1-8B-Instruct. NVFP4 is W4A4 or Weights 4bit Activations 4bit. INT4 for LLM\_Compressor is W4A16 and outperforms NVFP4 by a MILE.<br>TLDR; an NVFP4 HAS to be QAD'd (Quantization Aware Distillation) to do what Nvidia says it can do, which is \~1% of FP8 Quality. NVFP4 and INT4 on Blackw... | 3 | 0 | 2026-02-28T19:06:58 | Phaelon74 | false | null | 0 | o7xcglj | false | /r/LocalLLaMA/comments/1rgel19/new_qwen3535ba3b_unsloth_dynamic_ggufs_benchmarks/o7xcglj/ | false | 3 |
t1_o7xcemx | The default llama.cpp batch settings (b=2048, ub=512) are good for TG but pretty bad for PP speeds. The "fit-nobatch" (which means using --fit and not touching the batch setting defaults) may be good if you are only concerned about TG speeds. Personally PP is more important to me so I set ub=2048 even if it hurts TG a ... | 1 | 0 | 2026-02-28T19:06:42 | OsmanthusBloom | false | null | 0 | o7xcemx | false | /r/LocalLLaMA/comments/1rh9983/qwen_35b_a3b_aessedai_finetune_on_8gb_vram_and/o7xcemx/ | false | 1 |
t1_o7xcc7w | 16GB but it's HBM, so it has more memory bandwidth than a 3080. | 10 | 0 | 2026-02-28T19:06:21 | FullstackSensei | false | null | 0 | o7xcc7w | false | /r/LocalLLaMA/comments/1rh9u4r/this_sub_is_incredible/o7xcc7w/ | false | 10 |
t1_o7xcbfu | The others are right about 64GB being tight... You'll be comfortable with 7B/13B but anything above 30B Q4 will be slow or won't fit cleanly once the OS takes its cut. On the M1 Ultra at £2500 it's a capable machine but you're paying for 2021 silicon. For the same money (or a bit more) the M4 Mac Studio 64GB gets you r... | 1 | 0 | 2026-02-28T19:06:15 | KneeTop2597 | false | null | 0 | o7xcbfu | false | /r/LocalLLaMA/comments/1rha4g1/advice_on_hardware_purchase_and_selling_old/o7xcbfu/ | false | 1 |
t1_o7xc5c2 | Obligatory [https://xkcd.com/505/](https://xkcd.com/505/) | 20 | 0 | 2026-02-28T19:05:23 | jacobpederson | false | null | 0 | o7xc5c2 | false | /r/LocalLLaMA/comments/1rh9u4r/this_sub_is_incredible/o7xc5c2/ | false | 20 |
t1_o7xc4yo | Yup just needa make sure experts are co-located with the compute that routes to them... It's in an awkward zone of needing 2-3 GPUs. Lots of teams jump up to 8... | 1 | 0 | 2026-02-28T19:05:20 | paulahjort | false | null | 0 | o7xc4yo | false | /r/LocalLLaMA/comments/1rh9ygz/is_anyone_else_waiting_for_a_6070b_moe_with_810b/o7xc4yo/ | false | 1 |
t1_o7xbvu5 | FTW = LMCache + vLLM<br>https://docs.lmcache.ai/index.html | 3 | 0 | 2026-02-28T19:04:04 | DinoAmino | false | null | 0 | o7xbvu5 | false | /r/LocalLLaMA/comments/1rh802w/what_if_llm_agents_passed_kvcache_to_each_other/o7xbvu5/ | false | 3 |
t1_o7xbudn | the biggest win is definitely how well it handles those custom mcps compared to older open source models.. getting it to trigger browser scripts to pull live data instead of just blindly hallucinating an answer makes it actually usable for complex real world workflows.. | 3 | 0 | 2026-02-28T19:03:51 | Olivia_Davis_09 | false | null | 0 | o7xbudn | false | /r/LocalLLaMA/comments/1rh43za/qwen_3535ba3b_is_beyond_expectations_its_replaced/o7xbudn/ | false | 3 |
t1_o7xbnjh | I will be messaging you in 7 days on [**2026-03-07 19:01:59 UTC**](http://www.wolframalpha.com/input/?i=2026-03-07%2019:01:59%20UTC%20To%20Local%20Time) to remind you of [**this link**](https://www.reddit.com/r/LocalLLaMA/comments/1rh095c/deepseek_v4_will_be_released_next_week_and_will/o7xbgye/?context=3)<br>[**CLICK THI... | 1 | 0 | 2026-02-28T19:02:55 | RemindMeBot | false | null | 0 | o7xbnjh | false | /r/LocalLLaMA/comments/1rh095c/deepseek_v4_will_be_released_next_week_and_will/o7xbnjh/ | false | 1 |
t1_o7xbkaa | i love the attitude but that's not how the corporate world works. they'll accept massive risk in order to save money in the short term, provided there's a legal framework for blaming someone else for hallucinations and data breaches. a previous employer was happily shoveling hundreds of gigabytes of customer images and... | 14 | 0 | 2026-02-28T19:02:28 | HopePupal | false | null | 0 | o7xbkaa | false | /r/LocalLLaMA/comments/1rh43za/qwen_3535ba3b_is_beyond_expectations_its_replaced/o7xbkaa/ | false | 14 |
t1_o7xbi1g | You can allocate as much memory to the GPU as you want with the Strix Halo if you run Linux. Look up 'Strix Halo toolboxes' for the exact instructions.<br>If you buy the motherboard from framework, you can also mount it in a standard case and use an adapter to run your 3090 with it. This is a bit more complex, but, once... | 1 | 0 | 2026-02-28T19:02:08 | jreddit6969 | false | null | 0 | o7xbi1g | false | /r/LocalLLaMA/comments/1rha4g1/advice_on_hardware_purchase_and_selling_old/o7xbi1g/ | false | 1 |
t1_o7xbgye | !RemindMe 7 days | 1 | 0 | 2026-02-28T19:01:59 | Stahlboden | false | null | 0 | o7xbgye | false | /r/LocalLLaMA/comments/1rh095c/deepseek_v4_will_be_released_next_week_and_will/o7xbgye/ | false | 1 |
t1_o7xbfwa | The P40s are solid but yeah, 24GB per card starts to hurt once you're running anything above 30B Q4. Multi-GPU offloading helps but the TPS hit is real. If you're thinking about what to move to next, the calculus has shifted a lot in the past year. The Mac Studio M4 Max (64GB unified) hits \~500 GB/s bandwidth and hand... | 1 | 0 | 2026-02-28T19:01:50 | KneeTop2597 | false | null | 0 | o7xbfwa | false | /r/LocalLLaMA/comments/1rha4g1/advice_on_hardware_purchase_and_selling_old/o7xbfwa/ | false | 1 |