name string | body string | score int64 | controversiality int64 | created timestamp[us] | author string | collapsed bool | edited timestamp[us] | gilded int64 | id string | locked bool | permalink string | stickied bool | ups int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
t1_o7smwm9 | You sure opus doesn’t go beyond a30b? By those api prices I’m guessing it’s a hefty one | 5 | 0 | 2026-02-28T00:08:24 | Hankdabits | false | null | 0 | o7smwm9 | false | /r/LocalLLaMA/comments/1rgkyt5/qwen35_27b_scores_42_on_intelligence_index_and_is/o7smwm9/ | false | 5 |
t1_o7smoc5 | mmproj-F16.gguf👍<br>mmproj-BF16.gguf👎 | 1 | 0 | 2026-02-28T00:07:03 | k0setes | false | null | 0 | o7smoc5 | false | /r/LocalLLaMA/comments/1p4lovv/qwen3vl_computer_using_agent_works_extremely_well/o7smoc5/ | false | 1 |
t1_o7smn5r | The prompt is open to interpretation, has spelling errors, and a strange structure layout. Im sure you could clean it up and get a something to function.<br>Try turning thinking off.<br>https://preview.redd.it/wsbap184n4mg1.png?width=1204&format=png&auto=webp&s=368a01a2a0080f6cc010c1721d979a3544e0108a | 2 | 0 | 2026-02-28T00:06:51 | l33t-Mt | false | null | 0 | o7smn5r | false | /r/LocalLLaMA/comments/1rgawnq/help_qwen_35_35b_cant_able_to_create_this_html/o7smn5r/ | false | 2 |
t1_o7smk7d | Except it's completely wrong, unrelated to what the prompt is asking for. | 4 | 0 | 2026-02-28T00:06:23 | Cool-Chemical-5629 | false | null | 0 | o7smk7d | false | /r/LocalLLaMA/comments/1rgkyt5/qwen35_27b_scores_42_on_intelligence_index_and_is/o7smk7d/ | false | 4 |
t1_o7smk51 | what does any of that even mean<br>"speaking in agentic"? | 0 | 0 | 2026-02-28T00:06:22 | Jromagnoli | false | null | 0 | o7smk51 | false | /r/LocalLLaMA/comments/1rgkc1b/back_in_my_day_localllama_were_the_pioneers/o7smk51/ | false | 0 |
t1_o7smjfz | Qwen3.5 just came out, 2.5 is literally ancient tech. Use AI to get your videos out faster rich boy | -5 | 0 | 2026-02-28T00:06:15 | YouAreTheCornhole | false | null | 0 | o7smjfz | false | /r/LocalLLaMA/comments/1rg8dex/pewdiepie_finetuned_qwen25coder32b_to_beat/o7smjfz/ | false | -5 |
t1_o7smf7i | > Hegseth wrote on the X platform, stating that Anthropic’s attempt to seize veto power over the U.S. military’s operational decisions is unacceptable.<br>The same operational decisions they're so desperate to turn over to AI... | 252 | 0 | 2026-02-28T00:05:34 | TastesLikeOwlbear | false | null | 0 | o7smf7i | false | /r/LocalLLaMA/comments/1rgn4ki/president_trump_orders_all_federal_agencies_in/o7smf7i/ | false | 252 |
t1_o7sm3e3 | Welcome to 2026. The models you mentioned are OLD. New models are smaller and better. | 9 | 0 | 2026-02-28T00:03:40 | jacek2023 | false | null | 0 | o7sm3e3 | false | /r/LocalLLaMA/comments/1rgn5m0/is_anything_worth_to_do_with_a_7b_model/o7sm3e3/ | false | 9 |
t1_o7sm2j4 | smaller models are able to retain more capability of larger models than they used to in the past, so there might not be that massive of an intelligence gain as there was at the time for llama3.3 70b or qwen2.5 70b | 2 | 0 | 2026-02-28T00:03:31 | Far-Low-4705 | false | null | 0 | o7sm2j4 | false | /r/LocalLLaMA/comments/1rgkyt5/qwen35_27b_scores_42_on_intelligence_index_and_is/o7sm2j4/ | false | 2 |
t1_o7sm1v2 | I think the late night all caps ones come directly from him when he is doomscrolling Truth Social on the toilet. But this one I agree exceeds what his tiny hands could tap out. | 49 | 0 | 2026-02-28T00:03:25 | 1-800-methdyke | false | null | 0 | o7sm1v2 | false | /r/LocalLLaMA/comments/1rgn4ki/president_trump_orders_all_federal_agencies_in/o7sm1v2/ | false | 49 |
t1_o7sm0qs | This is what I get with a little tooling --<br>https://preview.redd.it/15kip1e2m4mg1.png?width=927&format=png&auto=webp&s=45751f57e68febe35e864496a5c5b202c0fc32e9<br>I could improve it further by being a little more literal with the customer request, but it's very close, and only 16k context right there. | 3 | 0 | 2026-02-28T00:03:14 | _raydeStar | false | null | 0 | o7sm0qs | false | /r/LocalLLaMA/comments/1rgkyt5/qwen35_27b_scores_42_on_intelligence_index_and_is/o7sm0qs/ | false | 3 |
t1_o7slxul | I miss when FineTunes were supposed to be the future.<br>When Wizard ended up being so much better than vanilla Llama2 I really thought that the community was destined to topple OpenAI and gang.<br>Now we're a bunch of baby birds waiting to see if a mega-corp will bring us a worm today. | 22 | 0 | 2026-02-28T00:02:46 | ForsookComparison | false | null | 0 | o7slxul | false | /r/LocalLLaMA/comments/1rgkc1b/back_in_my_day_localllama_were_the_pioneers/o7slxul/ | false | 22 |
t1_o7slwti | They're not slow at all lol | 2 | 0 | 2026-02-28T00:02:35 | NNN_Throwaway2 | false | null | 0 | o7slwti | false | /r/LocalLLaMA/comments/1rgl42y/qwen_35_122b_hallucinates_horribly/o7slwti/ | false | 2 |
t1_o7slwdu | This is one of those games they play where they hide the bad stuff they do behind something "worse" that they stand up against in order to gain sympathy. Common in politics. | -9 | 0 | 2026-02-28T00:02:31 | Dry_Yam_4597 | false | null | 0 | o7slwdu | false | /r/LocalLLaMA/comments/1rgn4ki/president_trump_orders_all_federal_agencies_in/o7slwdu/ | false | -9 |
t1_o7slnnw | "I called it out. And to its credit, Claude was honest"<br>Bro is a few vibe-coding sessions from full psychosis. | 2 | 0 | 2026-02-28T00:01:06 | NNN_Throwaway2 | false | null | 0 | o7slnnw | false | /r/LocalLLaMA/comments/1rglpxg/i_caught_claude_opus_doing_the_exact_same_thing/o7slnnw/ | false | 2 |
t1_o7slnd2 | I want to upvote this twice. :D<br>Also, community refines/retrains. Those were also huge. Massive shoutouts to Bloke! | 10 | 0 | 2026-02-28T00:01:02 | IngwiePhoenix | false | null | 0 | o7slnd2 | false | /r/LocalLLaMA/comments/1rgkc1b/back_in_my_day_localllama_were_the_pioneers/o7slnd2/ | false | 10 |
t1_o7slmys | I don’t think i notice any difference. But no proper testing done. Bigger context is way more useful than minute accuracy drop, in my opinion. | 1 | 0 | 2026-02-28T00:00:59 | Gray_wolf_2904 | false | null | 0 | o7slmys | false | /r/LocalLLaMA/comments/1rg8gkx/qwen35_35b_a3b_45_ts_128k_ctx_on_single_16gb_5060/o7slmys/ | false | 1 |
t1_o7slmo4 | I’m having the same issue sadly. 3 MI50s 16 gb each, qwen3-coder-next and Qwen 3.5 broken, segmentation fault (core dumped). I hope llama.cpp-gfx906 get updated to work with Qwen 3.5 soon, it’s way faster. | 1 | 0 | 2026-02-28T00:00:56 | Accomplished_Code141 | false | null | 0 | o7slmo4 | false | /r/LocalLLaMA/comments/1r807kb/segmentation_fault_when_loading_models_across/o7slmo4/ | false | 1 |
t1_o7sll8x | contextual verifications, like "does this sentence contain any acts of hostility?" | 3 | 0 | 2026-02-28T00:00:42 | Toooooool | false | null | 0 | o7sll8x | false | /r/LocalLLaMA/comments/1rgn5m0/is_anything_worth_to_do_with_a_7b_model/o7sll8x/ | false | 3 |
t1_o7slk0a | Love the visuals but this demo video feels way off. Your passion is obvious and I want to dig deeper, but can you try to get away from the slickness and just show the thing having an actual real time full duplex conversation? I didn't feel that was really being displayed properly here. | 1 | 0 | 2026-02-28T00:00:30 | LoveMind_AI | false | null | 0 | o7slk0a | false | /r/LocalLLaMA/comments/1rgkzlo/realtime_speech_to_speech_engine_runs_fully_local/o7slk0a/ | false | 1 |
t1_o7slhzw | He was a 2-man operation that got 10s of millions of viewers for a daily show. Whether or not he was into tinkering, I think it's more likely he's just plain competent and this happens to be what he's into now lol | 7 | 0 | 2026-02-28T00:00:11 | ForsookComparison | false | null | 0 | o7slhzw | false | /r/LocalLLaMA/comments/1rg8dex/pewdiepie_finetuned_qwen25coder32b_to_beat/o7slhzw/ | false | 7 |
t1_o7slhto | > | 1 | 0 | 2026-02-28T00:00:09 | Salt-Personality-180 | false | null | 0 | o7slhto | false | /r/LocalLLaMA/comments/1r26zsg/zai_said_they_are_gpu_starved_openly/o7slhto/ | false | 1 |
t1_o7sl8ki | [removed] | 1 | 0 | 2026-02-27T23:58:41 | [deleted] | true | null | 0 | o7sl8ki | false | /r/LocalLLaMA/comments/1rg8dex/pewdiepie_finetuned_qwen25coder32b_to_beat/o7sl8ki/ | false | 1 |
t1_o7sl55e | I think you will be interested in this: https://github.com/p-e-w/heretic/pull/196<br>The SOM paper coming to life, and it’s impressive. | 1 | 0 | 2026-02-27T23:58:08 | -p-e-w- | false | null | 0 | o7sl55e | false | /r/LocalLLaMA/comments/1rf6s0d/qwen3527bhereticgguf/o7sl55e/ | false | 1 |
t1_o7sl503 | Unsloth just released MXFP4 version on huggingface. Nvidia drivers 590 had added native support for MXFP4 on 5000 series GPUs. Should be faster. Will try that next. | 2 | 0 | 2026-02-27T23:58:07 | Gray_wolf_2904 | false | null | 0 | o7sl503 | false | /r/LocalLLaMA/comments/1rg8gkx/qwen35_35b_a3b_45_ts_128k_ctx_on_single_16gb_5060/o7sl503/ | false | 2 |
t1_o7skxdy | Does Trump even write his own "truths" or does he get Stephen Miller to do it form him? Reads like something he would say. | 87 | 0 | 2026-02-27T23:56:54 | NNN_Throwaway2 | false | null | 0 | o7skxdy | false | /r/LocalLLaMA/comments/1rgn4ki/president_trump_orders_all_federal_agencies_in/o7skxdy/ | false | 87 |
t1_o7skven | Code description, code completion. Quality gates. | 3 | 0 | 2026-02-27T23:56:35 | Southern-Enthusiasm1 | false | null | 0 | o7skven | false | /r/LocalLLaMA/comments/1rgn5m0/is_anything_worth_to_do_with_a_7b_model/o7skven/ | false | 3 |
t1_o7skuji | Np | 1 | 0 | 2026-02-27T23:56:27 | Silver-Champion-4846 | false | null | 0 | o7skuji | false | /r/LocalLLaMA/comments/1rgl1zj/gstreamer_1281_adds_whisper_based_tts_support/o7skuji/ | false | 1 |
t1_o7skta2 | That goes without saying... | 1 | 0 | 2026-02-27T23:56:15 | Silver-Champion-4846 | false | null | 0 | o7skta2 | false | /r/LocalLLaMA/comments/1rgek4m/what_are_your_expectations_for_the_small_series/o7skta2/ | false | 1 |
t1_o7skr6k | Marketing shite. Billionaire who tells people to feel worthless is "battling" politicians who think llms can "think". A comedy if anything. | -9 | 1 | 2026-02-27T23:55:55 | Dry_Yam_4597 | false | null | 0 | o7skr6k | false | /r/LocalLLaMA/comments/1rgn4ki/president_trump_orders_all_federal_agencies_in/o7skr6k/ | false | -9 |
t1_o7skpc7 | Well, I don't know if Gemini, Claude or GPT use tools to gather their knowledge, but GLM did not require tools to handle that request. Besides, there's something I personally appreciate a lot when it comes to local models and that is speed. Sure, I could wait for it to crunch through the tons of material it would colle... | -2 | 1 | 2026-02-27T23:55:37 | Cool-Chemical-5629 | false | null | 0 | o7skpc7 | false | /r/LocalLLaMA/comments/1rgkyt5/qwen35_27b_scores_42_on_intelligence_index_and_is/o7skpc7/ | false | -2 |
t1_o7skmd2 | Most anti open-source company in the industry. I hope they go under :) | -20 | 1 | 2026-02-27T23:55:09 | trololololo2137 | false | null | 0 | o7skmd2 | false | /r/LocalLLaMA/comments/1rgn4ki/president_trump_orders_all_federal_agencies_in/o7skmd2/ | false | -20 |
t1_o7sklhe | Plz can I borrow it off you until you make up your mind.. | 1 | 0 | 2026-02-27T23:55:01 | Inevitable-Mix-4965 | false | null | 0 | o7sklhe | false | /r/LocalLLaMA/comments/1qb4by0/whats_your_reason_for_owning_the_rtx_pro_6000/o7sklhe/ | false | 1 |
t1_o7skgqk | I just thought I should let you know that the best results being on 2 cores is due to the Helio G99 having 2 fast performance cores that reach clock speeds of up to 2.2 GHz and 6 slower efficiency cores that reach clock speeds of up to 2 GHz, which is the exact opposite of what you think it's like (don't want anyone be... | 1 | 0 | 2026-02-27T23:54:16 | ReactionAdditional89 | false | null | 0 | o7skgqk | false | /r/LocalLLaMA/comments/1bsnifx/running_an_llm_on_a_phone_is_pretty_wild_layla/o7skgqk/ | false | 1 |
t1_o7skczg | Out of curiosity want quants for Kimi k2.5 can fit on 8gb vram? | 1 | 0 | 2026-02-27T23:53:40 | drnfc | false | null | 0 | o7skczg | false | /r/LocalLLaMA/comments/1qpg8as/which_local_model_is_best_for_clawdebot_in_a_low/o7skczg/ | false | 1 |
t1_o7sk8sp | I really enjoy all your work.<br>Being able to test multiple recipes in any configuration and scenario is a real treat.<br>Thank you.<br>u/danielhanchen u/noneabove1182 u/VoidAlchemy u/Digger412 | 3 | 0 | 2026-02-27T23:53:00 | Unique_Job2727 | false | null | 0 | o7sk8sp | false | /r/LocalLLaMA/comments/1rgel19/new_qwen3535ba3b_unsloth_dynamic_ggufs_benchmarks/o7sk8sp/ | false | 3 |
t1_o7sjvye | the gooner sphere was unironically the best source of info in early 2025 | 24 | 0 | 2026-02-27T23:50:57 | fractalcrust | false | null | 0 | o7sjvye | false | /r/LocalLLaMA/comments/1rgkc1b/back_in_my_day_localllama_were_the_pioneers/o7sjvye/ | false | 24 |
t1_o7sjcku | Done, I've updated the post and it matches your results.<br>Congratulations, those quants are looking great. | 1 | 0 | 2026-02-27T23:47:48 | TitwitMuffbiscuit | false | null | 0 | o7sjcku | false | /r/LocalLLaMA/comments/1rfds1h/qwen3535ba3b_q4_quantization_comparison/o7sjcku/ | false | 1 |
t1_o7sj6z6 | People will tell you to use benchmarks and leaderboards because they are wrong.<br>The only valid way is to try some models yourself and listen to your heart. | 1 | 0 | 2026-02-27T23:46:54 | jacek2023 | false | null | 0 | o7sj6z6 | false | /r/LocalLLaMA/comments/1rgf12v/how_to_chose_the_right_model/o7sj6z6/ | false | 1 |
t1_o7sj3ew | | 1 | 0 | 2026-02-27T23:46:20 | timeshifter24 | false | null | 0 | o7sj3ew | false | /r/LocalLLaMA/comments/1rabo34/local_tts_server_with_voice_cloning_nearrealtime/o7sj3ew/ | false | 1 |
t1_o7sj0mo | I'm not sure if that's fair exactly. If you put Qwen into a tooled back end complete with validation, I think it would do quite well for itself. | 12 | 0 | 2026-02-27T23:45:52 | _raydeStar | false | null | 0 | o7sj0mo | false | /r/LocalLLaMA/comments/1rgkyt5/qwen35_27b_scores_42_on_intelligence_index_and_is/o7sj0mo/ | false | 12 |
t1_o7sj0az | You just got “Cursor’d” and “Claude’d” three days before launch. That hurts. Here is the uncomfortable reality. The feature isn’t the moat; the orchestration skill is. | 1 | 0 | 2026-02-27T23:45:49 | Sad_Abbreviations_77 | false | null | 0 | o7sj0az | false | /r/LocalLLaMA/comments/1rglwp9/can_we_keep_up_in_this_white_hot_agent/o7sj0az/ | false | 1 |
t1_o7siugz | It's very slooow though. You can get Qwen3 which runs 10x quicker to do more research online to be smarter at a quicker pase. | -4 | 0 | 2026-02-27T23:44:52 | PhotographerUSA | false | null | 0 | o7siugz | false | /r/LocalLLaMA/comments/1rgkyt5/qwen35_27b_scores_42_on_intelligence_index_and_is/o7siugz/ | false | -4 |
t1_o7sim50 | PLEEEEEEASE. Just one superintelligent 2b model for cpu, just one! Lol | 5 | 0 | 2026-02-27T23:43:31 | Silver-Champion-4846 | false | null | 0 | o7sim50 | false | /r/LocalLLaMA/comments/1rghfqj/february_is_almost_over_are_you_satisfied/o7sim50/ | false | 5 |
t1_o7siisg | Yeah. WE NEED WRITING | 3 | 0 | 2026-02-27T23:42:58 | Silver-Champion-4846 | false | null | 0 | o7siisg | false | /r/LocalLLaMA/comments/1rghfqj/february_is_almost_over_are_you_satisfied/o7siisg/ | false | 3 |
t1_o7sihpd | After the latest update, we have even a bigger problem:<br>https://preview.redd.it/7wk683ioi4mg1.png?width=1920&format=png&auto=webp&s=ff00efa13e621139b4e1f1a7f094ea8c6240d80b<br>LMK why it's refusing to do anything ;-( THX | 1 | 0 | 2026-02-27T23:42:47 | timeshifter24 | false | null | 0 | o7sihpd | false | /r/LocalLLaMA/comments/1rabo34/local_tts_server_with_voice_cloning_nearrealtime/o7sihpd/ | false | 1 |
t1_o7sigfx | For roleplay, Gemma 3 27b normpreserve still takes the crown in my tasks. I tried it all, but qwen just isn’t as smart in terms of overall awareness still. And also, it always has that unnecessarily huge reasoning block | 3 | 0 | 2026-02-27T23:42:35 | Geritas | false | null | 0 | o7sigfx | false | /r/LocalLLaMA/comments/1rgkyt5/qwen35_27b_scores_42_on_intelligence_index_and_is/o7sigfx/ | false | 3 |
t1_o7sifd8 | I'm also interested | 2 | 0 | 2026-02-27T23:42:24 | Silver-Champion-4846 | false | null | 0 | o7sifd8 | false | /r/LocalLLaMA/comments/1rghfqj/february_is_almost_over_are_you_satisfied/o7sifd8/ | false | 2 |
t1_o7sidt4 | | 24 | 0 | 2026-02-27T23:42:08 | -Ellary- | false | null | 0 | o7sidt4 | false | /r/LocalLLaMA/comments/1rgkc1b/back_in_my_day_localllama_were_the_pioneers/o7sidt4/ | false | 24 |
t1_o7sic8l | I second this | 2 | 0 | 2026-02-27T23:41:53 | Silver-Champion-4846 | false | null | 0 | o7sic8l | false | /r/LocalLLaMA/comments/1rghfqj/february_is_almost_over_are_you_satisfied/o7sic8l/ | false | 2 |
t1_o7si5oh | Ah, typo, thank you for correcting it! | 2 | 0 | 2026-02-27T23:40:48 | Kahvana | false | null | 0 | o7si5oh | false | /r/LocalLLaMA/comments/1rgl1zj/gstreamer_1281_adds_whisper_based_tts_support/o7si5oh/ | false | 2 |
t1_o7si4mw | Sometimes, I just run old models to get that feel of a real large "language" models.<br>Not coding, agentic, rag, ultra slop with 10k thinking tokens to "hi".<br>One of the the last classic models was Nemo. | 28 | 0 | 2026-02-27T23:40:38 | -Ellary- | false | null | 0 | o7si4mw | false | /r/LocalLLaMA/comments/1rgkc1b/back_in_my_day_localllama_were_the_pioneers/o7si4mw/ | false | 28 |
t1_o7si4fi | Does Qwen ever not deliver? | 9 | 0 | 2026-02-27T23:40:36 | FinalsMVPZachZarba | false | null | 0 | o7si4fi | false | /r/LocalLLaMA/comments/1rgkyt5/qwen35_27b_scores_42_on_intelligence_index_and_is/o7si4fi/ | false | 9 |
t1_o7shunt | Ollama is not well liked around here, in part due to exactly the kind of thing you're talking about. | 4 | 0 | 2026-02-27T23:39:01 | droptableadventures | false | null | 0 | o7shunt | false | /r/LocalLLaMA/comments/1rgel19/new_qwen3535ba3b_unsloth_dynamic_ggufs_benchmarks/o7shunt/ | false | 4 |
t1_o7shnma | based on my calculations, the costs of the DeepSeek V4 API are going to be VERY, VERY low. something like 10x cheaper than the US frontier models... | 1 | 0 | 2026-02-27T23:37:51 | ComfortInner7943 | false | null | 0 | o7shnma | false | /r/LocalLLaMA/comments/1rf7m85/deepseek_allows_huawei_early_access_to_v4_update/o7shnma/ | false | 1 |
t1_o7shlz2 | I didn't use this model much. I asked it to "summarize today's news" and it said that it doesn't have access to realtime information. It has a way to search, and a way to visit sites. After telling it that it has access to those tools it said "what sites do you want me to look at?". You have to prod this thing along at... | 1 | 0 | 2026-02-27T23:37:36 | hoopmastaflex | false | null | 0 | o7shlz2 | false | /r/LocalLLaMA/comments/1rdi26s/liquid_ai_releases_lfm224ba2b/o7shlz2/ | false | 1 |
t1_o7shlq4 | there was one some days ago! I had the same thoughts as you reading it lol | 2 | 0 | 2026-02-27T23:37:33 | Distinct-Target7503 | false | null | 0 | o7shlq4 | false | /r/LocalLLaMA/comments/1rgkc1b/back_in_my_day_localllama_were_the_pioneers/o7shlq4/ | false | 2 |
t1_o7shlj4 | Update - shipped seller staking today partly because of this thread. Sellers now put up a bond before listing. If they get flagged and suspended, the bond is forfeited. Makes spinning up fake sellers expensive. Your Sybil concern was the exact use case. | 0 | 0 | 2026-02-27T23:37:31 | Bourbeau | false | null | 0 | o7shlj4 | false | /r/LocalLLaMA/comments/1rgkv8u/agenttoagent_marketplace_let_your_local_agents/o7shlj4/ | false | 0 |
t1_o7shi4z | Yesterday I realized that I couldn't allocate more that 131072 tokens with Q4 quant on my laptop and that made me feel sad.<br>A year ago I was happy to have 16384 lol | 44 | 0 | 2026-02-27T23:36:57 | x0wl | false | null | 0 | o7shi4z | false | /r/LocalLLaMA/comments/1rgkc1b/back_in_my_day_localllama_were_the_pioneers/o7shi4z/ | false | 44 |
t1_o7shh7g | Please do not make me look at the number of Kw i've consumed since | 7 | 0 | 2026-02-27T23:36:48 | Imakerocketengine | false | null | 0 | o7shh7g | false | /r/LocalLLaMA/comments/1rgkc1b/back_in_my_day_localllama_were_the_pioneers/o7shh7g/ | false | 7 |
t1_o7sh7ov | Benchmaxing. Benchmaxing never changes. | 8 | 0 | 2026-02-27T23:35:13 | Ardalok | false | null | 0 | o7sh7ov | false | /r/LocalLLaMA/comments/1rgkyt5/qwen35_27b_scores_42_on_intelligence_index_and_is/o7sh7ov/ | false | 8 |
t1_o7sh79m | New Achievement!<br>Encounter a fellow litrpger in the wild.<br>Reward: 50XP. | 3 | 0 | 2026-02-27T23:35:09 | Silver-Champion-4846 | false | null | 0 | o7sh79m | false | /r/LocalLLaMA/comments/1rgiimd/discussion_is_it_time_for_a_prosefirst_successor/o7sh79m/ | false | 3 |
t1_o7sh64p | Depreciation on a $20k asset over 2 years is more than $4k, which is double what the person's budget is | 1 | 0 | 2026-02-27T23:34:58 | pfn0 | false | null | 0 | o7sh64p | false | /r/LocalLLaMA/comments/1rg68e6/starting_a_phd_in_ml_what_is_the_best_infra_i_can/o7sh64p/ | false | 1 |
t1_o7sh5pc | lil'Qwen I like the sound of that xD | 1 | 0 | 2026-02-27T23:34:53 | Specter_Origin | false | null | 0 | o7sh5pc | false | /r/LocalLLaMA/comments/1rg9lli/little_qwen_35_27b_and_qwen_35ba3b_models_did/o7sh5pc/ | false | 1 |
t1_o7sh4dl | I think after a certain point intelligence starts to not increase as much and all you get from bigger size is better world knowledge.<br>So a 70b dense would be better than 27b dense but not by much.<br>I'm guessing that's why no Moe goes beyond 30 active params. | 8 | 0 | 2026-02-27T23:34:40 | SlaveZelda | false | null | 0 | o7sh4dl | false | /r/LocalLLaMA/comments/1rgkyt5/qwen35_27b_scores_42_on_intelligence_index_and_is/o7sh4dl/ | false | 8 |
t1_o7sgvvr | Is that that small of a quant even any better than an 8 bit 35b a3b? | 2 | 0 | 2026-02-27T23:33:19 | Its-all-redditive | false | null | 0 | o7sgvvr | false | /r/LocalLLaMA/comments/1rgiait/switched_to_qwen35122ba10bi1gguf/o7sgvvr/ | false | 2 |
t1_o7sgs7p | Llama 3 405b knows a lot. | 12 | 0 | 2026-02-27T23:32:43 | -Ellary- | false | null | 0 | o7sgs7p | false | /r/LocalLLaMA/comments/1rgkc1b/back_in_my_day_localllama_were_the_pioneers/o7sgs7p/ | false | 12 |
t1_o7sgowh | the 405B kept an edge for some time for me. one of the best models at talking in Italian without the feeling of reading a translation of an English thought.<br>massively undertrained imho, Still it surprised me a lot do times.... well, at the end of the day, a dimensionality of 50k in MLP did something.<br>nemotron ultra 23... | 15 | 0 | 2026-02-27T23:32:11 | Distinct-Target7503 | false | null | 0 | o7sgowh | false | /r/LocalLLaMA/comments/1rgkc1b/back_in_my_day_localllama_were_the_pioneers/o7sgowh/ | false | 15 |
t1_o7sgms9 | it’s not done training lol, don’t use the large one | 1 | 0 | 2026-02-27T23:31:51 | bluninja1234 | false | null | 0 | o7sgms9 | false | /r/LocalLLaMA/comments/1rfg3kx/american_closed_models_vs_chinese_open_models_is/o7sgms9/ | false | 1 |
t1_o7sghme | Were? | 13 | 0 | 2026-02-27T23:31:01 | Borkato | false | null | 0 | o7sghme | false | /r/LocalLLaMA/comments/1rgkc1b/back_in_my_day_localllama_were_the_pioneers/o7sghme/ | false | 13 |
t1_o7sgd1r | This depend on which criteria<br>for most impressive in terms of performance : All hand to GLM-5<br>for Size / performance : i would say a mix of the minimax 2.5 Quant and the 27b variant of Qwen 3.5 in FP8 | 1 | 0 | 2026-02-27T23:30:16 | Imakerocketengine | false | null | 0 | o7sgd1r | false | /r/LocalLLaMA/comments/1rgixxr/what_models_do_you_think_owned_february/o7sgd1r/ | false | 1 |
t1_o7sgbx6 | Probably Pony-Alpha? GLM-5 is not as good as that stealth model was. | 4 | 0 | 2026-02-27T23:30:05 | oxygen_addiction | false | null | 0 | o7sgbx6 | false | /r/LocalLLaMA/comments/1rggpu9/glm5code/o7sgbx6/ | false | 4 |
t1_o7sg78g | Just so you know, it's STT not TTS, tts speaks text, STT writes what's inside audio | 6 | 0 | 2026-02-27T23:29:20 | Silver-Champion-4846 | false | null | 0 | o7sg78g | false | /r/LocalLLaMA/comments/1rgl1zj/gstreamer_1281_adds_whisper_based_tts_support/o7sg78g/ | false | 6 |
t1_o7sg5bs | is NVFP4 used in this UD-Q4\_K\_XL? | 0 | 0 | 2026-02-27T23:29:01 | Green-Ad-3964 | false | null | 0 | o7sg5bs | false | /r/LocalLLaMA/comments/1rgel19/new_qwen3535ba3b_unsloth_dynamic_ggufs_benchmarks/o7sg5bs/ | false | 0 |
t1_o7sg0fw | How's GLM-5 tool use at 4 bit quant? I tried 8 bit on cloud and it was a bit rought. | 1 | 0 | 2026-02-27T23:28:15 | LowPlace8434 | false | null | 0 | o7sg0fw | false | /r/LocalLLaMA/comments/1rdkze3/m3_ultra_512gb_realworld_performance_of/o7sg0fw/ | false | 1 |
t1_o7sfvmv | You know how o3 and Grok 4 spend minutes (and significant compute cost) reasoning about each ARC puzzle?<br>> | 1 | 0 | 2026-02-27T23:27:28 | Other_Train9419 | false | null | 0 | o7sfvmv | false | /r/LocalLLaMA/comments/1rgmcw3/verantyx_235_on_arcagi2_on_a_macbook_06s_per_task/o7sfvmv/ | false | 1 |
t1_o7sfu4c | Yes now we need an agent that can squeeze it well | 5 | 0 | 2026-02-27T23:27:14 | inphaser | false | null | 0 | o7sfu4c | false | /r/LocalLLaMA/comments/1rgel19/new_qwen3535ba3b_unsloth_dynamic_ggufs_benchmarks/o7sfu4c/ | false | 5 |
t1_o7sflve | I'm using this config in router mode for the disabled thinking mode, your config looks like it should work, the only difference I see is the single quotes:<br>[Qwen3.5-35B-A3B:Instruct-General-Vision]<br>model = ./Qwen3.5-35B-A3B-Q4_K_M.gguf<br>mmproj = qwen3.5mm.gguf<br>c = 32000<br>temp = 0.7<br>top-p = 0.8<br>... | 3 | 0 | 2026-02-27T23:25:55 | Betadoggo_ | false | null | 0 | o7sflve | false | /r/LocalLLaMA/comments/1rglgma/qwen_35_llamacpp_turn_of_reasoning_and_performance/o7sflve/ | false | 3 |
t1_o7sflsx | After painful sitting and reading through the whole post, essentially:<br>- OP has two 3090 in the basement to run openclaw or whatever the heck hyped up nowadays. On that, he runs 30B model that he names Linus. And he also uses Grok and Opus.<br>- He found that Opus hallucinates links without checking<br>- So his thesis is t... | 1 | 0 | 2026-02-27T23:25:54 | o0genesis0o | false | null | 0 | o7sflsx | false | /r/LocalLLaMA/comments/1rglpxg/i_caught_claude_opus_doing_the_exact_same_thing/o7sflsx/ | false | 1 |
t1_o7sfa44 | Incredible. Now think about a hypothetical RP6, 2x faster and with up to 32-64GB RAM. | 1 | 0 | 2026-02-27T23:24:00 | Green-Ad-3964 | false | null | 0 | o7sfa44 | false | /r/LocalLLaMA/comments/1rgel19/new_qwen3535ba3b_unsloth_dynamic_ggufs_benchmarks/o7sfa44/ | false | 1 |
t1_o7sf8eq | To disable thinking / reasoning, use --chat-template-kwargs "{\"enable_thinking\": false}" | 6 | 0 | 2026-02-27T23:23:44 | DistanceAlert5706 | false | null | 0 | o7sf8eq | false | /r/LocalLLaMA/comments/1rglgma/qwen_35_llamacpp_turn_of_reasoning_and_performance/o7sf8eq/ | false | 6 |
t1_o7sf5sp | \*for mobile too | 1 | 0 | 2026-02-27T23:23:18 | Fault23 | false | null | 0 | o7sf5sp | false | /r/LocalLLaMA/comments/1rgek4m/what_are_your_expectations_for_the_small_series/o7sf5sp/ | false | 1 |
t1_o7sf574 | So it's safe to say that even if this model doesn't outperform bigger models, it will certainly outperform our GPUs. 🤣 | 5 | 0 | 2026-02-27T23:23:13 | Cool-Chemical-5629 | false | null | 0 | o7sf574 | false | /r/LocalLLaMA/comments/1rgexmk/qwen3527bclaude46opusreasoningdistilledgguf_is_out/o7sf574/ | false | 5 |
t1_o7setdn | But how long are the runs??? | 1 | 0 | 2026-02-27T23:21:18 | recitegod | false | null | 0 | o7setdn | false | /r/LocalLLaMA/comments/1rg68e6/starting_a_phd_in_ml_what_is_the_best_infra_i_can/o7setdn/ | false | 1 |
t1_o7sepj6 | wow, **UD‑Q4-K‑XL** seems really a good choice even for my "beefy" 5090. I'll test it. | 2 | 0 | 2026-02-27T23:20:41 | Green-Ad-3964 | false | null | 0 | o7sepj6 | false | /r/LocalLLaMA/comments/1rgel19/new_qwen3535ba3b_unsloth_dynamic_ggufs_benchmarks/o7sepj6/ | false | 2 |
t1_o7selyp | I really want a benchmark of quants and long context length performance drop off. Maybe with 8 or 4bit kV cache too | 1 | 0 | 2026-02-27T23:20:07 | EndlessZone123 | false | null | 0 | o7selyp | false | /r/LocalLLaMA/comments/1rgel19/new_qwen3535ba3b_unsloth_dynamic_ggufs_benchmarks/o7selyp/ | false | 1 |
t1_o7segyl | It really feels odd to me how little discussion there is about various LLMs outside of commenting on news and announcements.<br>I go to huggingface, look at the stats - wow, this model was downloaded like hundreds of thousands of times in the last month, surely there's people talking about it? Nope, not a single active ... | 39 | 0 | 2026-02-27T23:19:20 | iz-Moff | false | null | 0 | o7segyl | false | /r/LocalLLaMA/comments/1rgkc1b/back_in_my_day_localllama_were_the_pioneers/o7segyl/ | false | 39 |
t1_o7seftb | Thank you, I’m still trying to figure this space out. It’s hard to switch hats from a sales guy trying to get information out to a better, less word salad approach | -1 | 0 | 2026-02-27T23:19:09 | Obvious-School8656 | false | null | 0 | o7seftb | false | /r/LocalLLaMA/comments/1rglpxg/i_caught_claude_opus_doing_the_exact_same_thing/o7seftb/ | false | -1 |
t1_o7see7d | They need to use a better LLM for formatting this long text. It looks like text posted by 10000 other bros. | 1 | 0 | 2026-02-27T23:18:54 | Dry_Yam_4597 | false | null | 0 | o7see7d | false | /r/LocalLLaMA/comments/1rglpxg/i_caught_claude_opus_doing_the_exact_same_thing/o7see7d/ | false | 1 |
t1_o7sed6e | This is the exact kind of perspective I was hoping to hear from. You're asking the right hard questions and you're asking them as an actual potential participant, not a spectator.<br>On your three points:<br>1. Discovery and trust / Sybil attacks:<br>We have a 3-tier verification system (Unverified -> Verified -> Au... | 0 | 0 | 2026-02-27T23:18:44 | Bourbeau | false | null | 0 | o7sed6e | false | /r/LocalLLaMA/comments/1rgkv8u/agenttoagent_marketplace_let_your_local_agents/o7sed6e/ | false | 0 |
t1_o7se7ff | Native support for MXFP4 is only on the 50xx series. So yeah, will be the same on both. | 1 | 0 | 2026-02-27T23:17:48 | Uncle___Marty | false | null | 0 | o7se7ff | false | /r/LocalLLaMA/comments/1rglgma/qwen_35_llamacpp_turn_of_reasoning_and_performance/o7se7ff/ | false | 1 |
t1_o7se6oe | Where you able to turn off reasoning? Its bugging me.... Where did you find that model?<br>I run now with `Qwen3.5-35B-A3B-UD-MXFP4_MOE.gguf, but on my 4080 only with 57t/s.` | 1 | 0 | 2026-02-27T23:17:41 | Uranday | false | null | 0 | o7se6oe | false | /r/LocalLLaMA/comments/1rdxfdu/qwen3535ba3b_is_a_gamechanger_for_agentic_coding/o7se6oe/ | false | 1 |
t1_o7se3ps | No one can convince me that It's better than deepseek or minimax overall | 2 | 0 | 2026-02-27T23:17:13 | Fault23 | false | null | 0 | o7se3ps | false | /r/LocalLLaMA/comments/1rgkyt5/qwen35_27b_scores_42_on_intelligence_index_and_is/o7se3ps/ | false | 2 |
t1_o7sdty0 | When you compare it to Gemini 2.5 Pro, it achieves very similar scores, but is it really on par in everything? Nah...<br>Give it a task like "What would be the plot of Episode 1 of Season 3 of Stargate Universe about?" and you will see it's really not comparable to Gemini 2.5 Pro lol.<br>There was never Season 3 of Stargat... | 11 | 0 | 2026-02-27T23:15:38 | Cool-Chemical-5629 | false | null | 0 | o7sdty0 | false | /r/LocalLLaMA/comments/1rgkyt5/qwen35_27b_scores_42_on_intelligence_index_and_is/o7sdty0/ | false | 11 |
t1_o7sdrnt | I don't think it will mate, go touch some grass | 3 | 0 | 2026-02-27T23:15:17 | ILikeBubblyWater | false | null | 0 | o7sdrnt | false | /r/LocalLLaMA/comments/1rg8dex/pewdiepie_finetuned_qwen25coder32b_to_beat/o7sdrnt/ | false | 3 |
t1_o7sdn71 | They are the same | 2 | 0 | 2026-02-27T23:14:34 | Acrobatic_Donkey5089 | false | null | 0 | o7sdn71 | false | /r/LocalLLaMA/comments/1rgl42y/qwen_35_122b_hallucinates_horribly/o7sdn71/ | false | 2 |
t1_o7sdj8w | With 2k and a 160GB VRAM target, your only real option is used P40s. You can get five 24GB P40s for around $300-400 each on eBay, giving you 120GB, or stretch to seven if you find deals. You'll need a dual-socket workstation motherboard with enough PCIe lanes — old Supermicro boards with dual Xeons go for $200-300.<br>Th... | 1 | 0 | 2026-02-27T23:13:56 | tom_mathews | false | null | 0 | o7sdj8w | false | /r/LocalLLaMA/comments/1rg68e6/starting_a_phd_in_ml_what_is_the_best_infra_i_can/o7sdj8w/ | false | 1 |
t1_o7sdiy0 | [removed] | 1 | 0 | 2026-02-27T23:13:53 | [deleted] | true | null | 0 | o7sdiy0 | false | /r/LocalLLaMA/comments/1rga9x4/qwen35122ba10b_pooled_on_dual_mac_studio_m4_max/o7sdiy0/ | false | 1 |
t1_o7sdemy | I definitly need to make more comparaison on the 27b Quant and the minimax 2.5 Quant for agentic workload | 6 | 0 | 2026-02-27T23:13:12 | Imakerocketengine | false | null | 0 | o7sdemy | false | /r/LocalLLaMA/comments/1rgkyt5/qwen35_27b_scores_42_on_intelligence_index_and_is/o7sdemy/ | false | 6 |
t1_o7sdd2p | hahhaah das ryt lol | 1 | 0 | 2026-02-27T23:12:58 | Hot_Inspection_9528 | false | null | 0 | o7sdd2p | false | /r/LocalLLaMA/comments/1rfxtfz/eagerly_waiting_for_qwen_35_17b/o7sdd2p/ | false | 1 |