name string | body string | score int64 | controversiality int64 | created timestamp[us] | author string | collapsed bool | edited timestamp[us] | gilded int64 | id string | locked bool | permalink string | stickied bool | ups int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
t1_o7vxf1s | Let me know how it goes, reach out if you need any help with the setup. Took me a while but I have a strong understanding of the install process now | 1 | 0 | 2026-02-28T14:49:16 | Initial_Gas976 | false | null | 0 | o7vxf1s | false | /r/LocalLLaMA/comments/1rawfwt/openclaw_and_ollama/o7vxf1s/ | false | 1 |
t1_o7vxdye | By next year Open Ai will go bankrupt and Pentagon moves to some company. | 12 | 0 | 2026-02-28T14:49:06 | Realistic_Muscles | false | null | 0 | o7vxdye | false | /r/LocalLLaMA/comments/1rh2lew/openai_pivot_investors_love/o7vxdye/ | false | 12 |
t1_o7vxbwq | What about models from Nvidia? Or mistral? | 1 | 0 | 2026-02-28T14:48:48 | MilkyWay_15 | false | null | 0 | o7vxbwq | false | /r/LocalLLaMA/comments/1rfg3kx/american_closed_models_vs_chinese_open_models_is/o7vxbwq/ | false | 1 |
t1_o7vxb4v | I didn't say anything about suing anyone. The FTC is there to protect commerce and ensure fair dealings. | 4 | 0 | 2026-02-28T14:48:40 | Bite_It_You_Scum | false | null | 0 | o7vxb4v | false | /r/LocalLLaMA/comments/1rgu849/if_your_chutesai_subscription_was_unilaterally/o7vxb4v/ | false | 4 |
t1_o7vx8qz | More like output some latent tokens then used diffusion models to get the final result | 2 | 0 | 2026-02-28T14:48:19 | zball_ | false | null | 0 | o7vx8qz | false | /r/LocalLLaMA/comments/1rh095c/deepseek_v4_will_be_released_next_week_and_will/o7vx8qz/ | false | 2 |
t1_o7vx5xe | what other ocr models have you tried using? | 1 | 0 | 2026-02-28T14:47:52 | Budget-Juggernaut-68 | false | null | 0 | o7vx5xe | false | /r/LocalLLaMA/comments/1rh3xey/seeking_help_improving_ocr_in_my_rag_pipeline/o7vx5xe/ | false | 1 |
t1_o7vx276 | Is there a way to do this? | 1 | 0 | 2026-02-28T14:47:19 | Uranday | false | null | 0 | o7vx276 | false | /r/LocalLLaMA/comments/1rh1a6v/why_does_qwen_35_think_its_2024/o7vx276/ | false | 1 |
t1_o7vx285 | Would you recommend the 3next-coder then, over the 3.5-122 | 0 | 0 | 2026-02-28T14:47:19 | Badger-Purple | false | null | 0 | o7vx285 | false | /r/LocalLLaMA/comments/1rgtxry/is_qwen35_a_coding_game_changer_for_anyone_else/o7vx285/ | false | 0 |
t1_o7vwzmc | If it works for my usecases, why risk breaking that? I'm also very narrowly focused currently on a simple coder assistant, specifically knowledgeable about the stack I'm choosing. It's like 99% of all the reasons I'm using AI at all. | 25 | 0 | 2026-02-28T14:46:55 | Medium_Chemist_4032 | false | null | 0 | o7vwzmc | false | /r/LocalLLaMA/comments/1rh46g2/why_some_still_playing_with_old_models_nostalgia/o7vwzmc/ | false | 25 |
t1_o7vwy21 | Can someone explain what’s going on? I heard the Anthropic thing and their reply, but I didn’t hear anything about OAI and I thought they were standing with Anthropic?? | 3 | 0 | 2026-02-28T14:46:41 | Borkato | false | null | 0 | o7vwy21 | false | /r/LocalLLaMA/comments/1rh2lew/openai_pivot_investors_love/o7vwy21/ | false | 3 |
t1_o7vwuxd | Putting people in a concentration camp in El Salvador? ✅<br>Hating on a proprietary AI company? ❌ | 3 | 0 | 2026-02-28T14:46:13 | MelodicFuntasy | false | null | 0 | o7vwuxd | false | /r/LocalLLaMA/comments/1rgn4ki/president_trump_orders_all_federal_agencies_in/o7vwuxd/ | false | 3 |
t1_o7vwsfx | Huh, I wasn't aware of that, that's incredible news. I really thought imatrix was as good as it could get | 1 | 0 | 2026-02-28T14:45:50 | Due-Memory-6957 | false | null | 0 | o7vwsfx | false | /r/LocalLLaMA/comments/1rgel19/new_qwen3535ba3b_unsloth_dynamic_ggufs_benchmarks/o7vwsfx/ | false | 1 |
t1_o7vwqnd | I mean your comment makes no sense man, unless you don’t run models locally. He noted having 44GB VRAM, so not sure what version of Kimi 2.5 you’d expect to run. Otherwise, it’s rhetoric to question whether a trillion parameter model would outdo a 35 billion parameter model. Similar to asking, can the honda civic outru... | 1 | 0 | 2026-02-28T14:45:34 | Badger-Purple | false | null | 0 | o7vwqnd | false | /r/LocalLLaMA/comments/1rgtxry/is_qwen35_a_coding_game_changer_for_anyone_else/o7vwqnd/ | false | 1 |
t1_o7vwn65 | little confused by that graph. It's showing grok 4.0 not 4.1 fast? What kind of hardware would I need to run the Qwen3.5 27B? Would a 16GB m4 mac mini run it or does it need more ram? The server itself has a rtx 4070ti and 64GB memory. | 1 | 0 | 2026-02-28T14:45:02 | MartiniCommander | false | null | 0 | o7vwn65 | false | /r/LocalLLaMA/comments/1rgq0vc/can_a_local_hosted_llm_keep_up_with_grok_41_fast/o7vwn65/ | false | 1 |
t1_o7vwivt | Accuracy Vs speed will be dependent on<br>1, What type of task you are measuring against and<br>2, What prompts you use.<br>As someone who for a while used to do benchmarking for a living, I know just how important it is to know EXACTLY what you are trying to measure for, setting the tests up in the specific way needed to m... | 2 | 0 | 2026-02-28T14:44:22 | Protopia | false | null | 0 | o7vwivt | false | /r/LocalLLaMA/comments/1rgixk7/accuracy_vs_speed_my_top_5/o7vwivt/ | false | 2 |
t1_o7vwgvs | Read somewhere you can set thinking budget | -1 | 0 | 2026-02-28T14:44:03 | Mr_Moonsilver | false | null | 0 | o7vwgvs | false | /r/LocalLLaMA/comments/1rgzfat/how_is_qwen_35_moe_35b_in_instruct_mode_with_no/o7vwgvs/ | false | -1 |
t1_o7vwftn | I have dual boot on my desktop but I have Linux only on my server with 3090s and you can switch Ubuntu to run without X at all, so VRAM is fully available for LLMs | 1 | 0 | 2026-02-28T14:43:53 | jacek2023 | false | null | 0 | o7vwftn | false | /r/LocalLLaMA/comments/1rgzul5/are_you_ready_for_small_qwens/o7vwftn/ | false | 1 |
t1_o7vw6xu | I’ve just been playing with Q4. Works well. Might try Q6 today. | 1 | 0 | 2026-02-28T14:42:28 | StardockEngineer | false | null | 0 | o7vw6xu | false | /r/LocalLLaMA/comments/1rh1ec9/swapping_gptoss120b_for_qwen35122b_on_a_128gb/o7vw6xu/ | false | 1 |
t1_o7vw6ns | I'm still using Gemma 3 27B IT QAT GGUF (Q4_K_M). Earlier today I tried unsloth/Qwen3.5-35B-A3B-GGUF, but I ended up going back to Gemma because Qwen just didn't feel right for me. Maybe it's a parameter setting or something else, but Qwen didn't seem to fully understand the context I sent. When I sent around 5K out of... | 8 | 0 | 2026-02-28T14:42:25 | GrennKren | false | null | 0 | o7vw6ns | false | /r/LocalLLaMA/comments/1rh3thm/rip_gemma_leave_your_memories_here/o7vw6ns/ | false | 8 |
t1_o7vvy3q | Not an OCR expert but you might look at thresholding preprocessing. Whether it works or not is probably going to depend on the style of watermark.<br>Off topic, but I think it’s odd to ask for contributors on a project with a closed license that lists you as the sole owner. | 1 | 0 | 2026-02-28T14:41:05 | Cheeznuklz | false | null | 0 | o7vvy3q | false | /r/LocalLLaMA/comments/1rh3xey/seeking_help_improving_ocr_in_my_rag_pipeline/o7vvy3q/ | false | 1 |
t1_o7vvxf5 | My server has 64GB but i'd run the LLM on a dedicated spare mac mini I have here. It's a base model M4 with 16GB. Would that be enough? I've considered getting another couple if needed. | 1 | 0 | 2026-02-28T14:40:59 | MartiniCommander | false | null | 0 | o7vvxf5 | false | /r/LocalLLaMA/comments/1rgq0vc/can_a_local_hosted_llm_keep_up_with_grok_41_fast/o7vvxf5/ | false | 1 |
t1_o7vvul6 | My tools can be found here: [https://github.com/SyntheticAutonomicMind](https://github.com/SyntheticAutonomicMind)<br>:) | 2 | 0 | 2026-02-28T14:40:31 | Total-Context64 | false | null | 0 | o7vvul6 | false | /r/LocalLLaMA/comments/1rh28o8/building_agents_is_fun_evaluating_them_is_not/o7vvul6/ | false | 2 |
t1_o7vvtyy | I asked Claude what happened and it confirmed they provided their models to Palantir | 5 | 0 | 2026-02-28T14:40:25 | Gondorrah | false | null | 0 | o7vvtyy | false | /r/LocalLLaMA/comments/1rguty0/get_your_local_models_in_order_anthropic_just_got/o7vvtyy/ | false | 5 |
t1_o7vvsz7 | Yeah, it's really frustrating - you can see that the first bit of the thinking process is potentially quite useful. Then it looks like it should complete but does a whole load of "Wait..." before it answers. However the quality is so good compared with Qwen3-14B which I was previously using that I'm sticking with it fo... | 3 | 0 | 2026-02-28T14:40:16 | thigger | false | null | 0 | o7vvsz7 | false | /r/LocalLLaMA/comments/1rgzfat/how_is_qwen_35_moe_35b_in_instruct_mode_with_no/o7vvsz7/ | false | 3 |
t1_o7vvnsy | Is it 9700 ? | 3 | 0 | 2026-02-28T14:39:27 | Maleficent-Ad5999 | false | null | 0 | o7vvnsy | false | /r/LocalLLaMA/comments/1rgzul5/are_you_ready_for_small_qwens/o7vvnsy/ | false | 3 |
t1_o7vvn9k | Running Qwen3.5-27B Bartowski Q4-K-L on 4090 with 32gb system ram. Get about 32 tokens/s when asked it to "write a 100 line json" using LM Studio chat. I like the 27B model. | 1 | 0 | 2026-02-28T14:39:21 | thegr8anand | false | null | 0 | o7vvn9k | false | /r/LocalLLaMA/comments/1rgvma8/which_size_of_qwen35_are_you_planning_to_run/o7vvn9k/ | false | 1 |
t1_o7vvira | I've found that those settings are necessary to stop it looping, but it seems to like overthinking regardless of what I do. Currently I've just had to set the max\_completion\_tokens way up high as irritatingly VLLM and SGLang seem to count the reasoning tokens in there (I appreciate the openai api spec is ambiguous he... | 1 | 0 | 2026-02-28T14:38:39 | thigger | false | null | 0 | o7vvira | false | /r/LocalLLaMA/comments/1rgzfat/how_is_qwen_35_moe_35b_in_instruct_mode_with_no/o7vvira/ | false | 1 |
t1_o7vvhzx | new models are benchmaxxing, they aren't necessarily better at niche tasks | 26 | 0 | 2026-02-28T14:38:32 | LienniTa | false | null | 0 | o7vvhzx | false | /r/LocalLLaMA/comments/1rh46g2/why_some_still_playing_with_old_models_nostalgia/o7vvhzx/ | false | 26 |
t1_o7vvcm4 | You want to make software illegal? | 1 | 0 | 2026-02-28T14:37:41 | MelodicFuntasy | false | null | 0 | o7vvcm4 | false | /r/LocalLLaMA/comments/1rgn4ki/president_trump_orders_all_federal_agencies_in/o7vvcm4/ | false | 1 |
t1_o7vvc2t | Yes | 8 | 0 | 2026-02-28T14:37:36 | DrNavigat | false | null | 0 | o7vvc2t | false | /r/LocalLLaMA/comments/1rh3thm/rip_gemma_leave_your_memories_here/o7vvc2t/ | false | 8 |
t1_o7vvabr | I think it could be a distillation model 😆. It’s just too close to Claude at least Sonnet 3.5 level | 1 | 0 | 2026-02-28T14:37:19 | ManagementNo5153 | false | null | 0 | o7vvabr | false | /r/LocalLLaMA/comments/1rghfqj/february_is_almost_over_are_you_satisfied/o7vvabr/ | false | 1 |
t1_o7vv66p | I think it’s backwards. More accurate with the dense model, faster with MOE. That makes sense. | 5 | 0 | 2026-02-28T14:36:40 | Badger-Purple | false | null | 0 | o7vv66p | false | /r/LocalLLaMA/comments/1rgtxry/is_qwen35_a_coding_game_changer_for_anyone_else/o7vv66p/ | false | 5 |
t1_o7vv0t7 | For those who want some numbers - on the NoLiMa benchmark (like RULER, only without direct matching):<br>4bit-AWQ, thinking on: 96% @ 250, 85% @ 16k, 76% @ 32k<br>4bit-AWQ, no thinking: 75% @ 250, 34% @ 16k, 30% @ 32k<br>(The "thinking" results would be even higher except that with the NoLiMa benchmark it seems to keep g... | 2 | 0 | 2026-02-28T14:35:50 | thigger | false | null | 0 | o7vv0t7 | false | /r/LocalLLaMA/comments/1rgzfat/how_is_qwen_35_moe_35b_in_instruct_mode_with_no/o7vv0t7/ | false | 2 |
t1_o7vuwrn | We benchmarked our recent quants against other providers for Qwen3.5 and showcased our effectiveness<br>Also did some tool-calling fixes for Qwen3.5 in the original model's chat template | 1 | 0 | 2026-02-28T14:35:11 | yoracale | false | null | 0 | o7vuwrn | false | /r/LocalLLaMA/comments/1rgel19/new_qwen3535ba3b_unsloth_dynamic_ggufs_benchmarks/o7vuwrn/ | false | 1 |
t1_o7vuwdu | ah now i get it.. thx! | 0 | 0 | 2026-02-28T14:35:08 | howardhus | false | null | 0 | o7vuwdu | false | /r/LocalLLaMA/comments/1rgtxry/is_qwen35_a_coding_game_changer_for_anyone_else/o7vuwdu/ | false | 0 |
t1_o7vuwao | Today it’s a choice, soon it’ll be part of the terms of the tech company bailout | 2 | 0 | 2026-02-28T14:35:07 | lqstuart | false | null | 0 | o7vuwao | false | /r/LocalLLaMA/comments/1rh2lew/openai_pivot_investors_love/o7vuwao/ | false | 2 |
t1_o7vutwi | You are sleeping on GLM-5, a lot. | 4 | 0 | 2026-02-28T14:34:44 | TheRealGentlefox | false | null | 0 | o7vutwi | false | /r/LocalLLaMA/comments/1rgokw1/a_monthly_update_to_my_where_are_openweight/o7vutwi/ | false | 4 |
t1_o7vut88 | They’ve only released one base model so far. I think it’s crazy to assume there won’t be a 0.5B-class model at a minimum in this release. A lot of users need that as a draft model on the 27B model, and it will be a chance for them to flex what is really possible for low-compute AI at the same time. | 2 | 0 | 2026-02-28T14:34:38 | coder543 | false | null | 0 | o7vut88 | false | /r/LocalLLaMA/comments/1rgzul5/are_you_ready_for_small_qwens/o7vut88/ | false | 2 |
t1_o7vurpv | the per-layer quantization is smart -- attention layers and the first/last few layers carry disproportionate weight in output quality. blanket Q4 across everything was always leaving performance on the table.<br>wondering if anyone's benchmarked the actual inference speed difference though. selective quantization means m... | 2 | 1 | 2026-02-28T14:34:23 | BP041 | false | null | 0 | o7vurpv | false | /r/LocalLLaMA/comments/1rh0xwk/unsloth_dynamic_20_ggufs_now_selectively/o7vurpv/ | false | 2 |
t1_o7vuq4q | Do you need help and support? | 21 | 0 | 2026-02-28T14:34:08 | Long_comment_san | false | null | 0 | o7vuq4q | false | /r/LocalLLaMA/comments/1rh3thm/rip_gemma_leave_your_memories_here/o7vuq4q/ | false | 21 |
t1_o7vuoyp | That's right, the user + model + tool output all add variability. So we've come to focus more on 'where does it differ?' rather than 'does it match a single correct answer?' Dry runs and step-by-step testing are basic, and we plan to add snapshot/replay functionality so that we can see at which step it starts to deviat... | 1 | 0 | 2026-02-28T14:33:58 | Fluffy_Salary_5984 | false | null | 0 | o7vuoyp | false | /r/LocalLLaMA/comments/1rh28o8/building_agents_is_fun_evaluating_them_is_not/o7vuoyp/ | false | 1 |
t1_o7vuobw | You doin okay buddy? | 2 | 0 | 2026-02-28T14:33:52 | valdev | false | null | 0 | o7vuobw | false | /r/LocalLLaMA/comments/1rh1024/new_ai_fundamental_research_companylab/o7vuobw/ | false | 2 |
t1_o7vuo3x | Yes soon, especially for tool-calling fixes, we're updating all of them to incorporate this. Including the new imatrix etc etc | 10 | 0 | 2026-02-28T14:33:50 | yoracale | false | null | 0 | o7vuo3x | false | /r/LocalLLaMA/comments/1rh0xwk/unsloth_dynamic_20_ggufs_now_selectively/o7vuo3x/ | false | 10 |
t1_o7vug9b | Someone should feel accomplished for completing a project, but others are allowed to think it’s pointless if they feel so. | 1 | 0 | 2026-02-28T14:32:34 | ZexyBeggar | false | null | 0 | o7vug9b | false | /r/LocalLLaMA/comments/1r9gve8/i_feel_left_behind_what_is_special_about_openclaw/o7vug9b/ | false | 1 |
t1_o7vuf36 | Yes!! we're mostly on API models so far, but the format/schema drift piece (Qwen vs Hermes vs OpenAI, etc.) is exactly the kind of thing we're trying to surface. When we snapshot trajectories and compare runs, a lot of the divergence shows up as "tool call shape changed" or "output didn't match what the next step expec... | 1 | 0 | 2026-02-28T14:32:23 | Fluffy_Salary_5984 | false | null | 0 | o7vuf36 | false | /r/LocalLLaMA/comments/1rh28o8/building_agents_is_fun_evaluating_them_is_not/o7vuf36/ | false | 1 |
t1_o7vub2n | Any tips for instruction following because my current results are awful with mistral small and large | 0 | 0 | 2026-02-28T14:31:45 | Kathane37 | false | null | 0 | o7vub2n | false | /r/LocalLLaMA/comments/1rgokw1/a_monthly_update_to_my_where_are_openweight/o7vub2n/ | false | 0 |
t1_o7vu8fz | Think on rtx pro 6000 there would be no stability issues? Wondering if there is loss associated with the nvfp4 variant that would make it more worth just running full weight 27b | 1 | 0 | 2026-02-28T14:31:20 | hihenryjr | false | null | 0 | o7vu8fz | false | /r/LocalLLaMA/comments/1rgvma8/which_size_of_qwen35_are_you_planning_to_run/o7vu8fz/ | false | 1 |
t1_o7vu767 | You should check the YouTube channel "AI Search", he covers the image models and latest news.<br>Use ComfyUI (the portable version is recommended) | 2 | 0 | 2026-02-28T14:31:07 | Curious_Priority8156 | false | null | 0 | o7vu767 | false | /r/LocalLLaMA/comments/1rh0hgl/hi_im_a_total_noob/o7vu767/ | false | 2 |
t1_o7vu76q | I just set aside a partition for Ubuntu server so I can dual boot into Linux. It’s great because Ubuntu server uses almost no RAM whatsoever, meaning my llm has more to eat. | 0 | 0 | 2026-02-28T14:31:07 | Manamultus | false | null | 0 | o7vu76q | false | /r/LocalLLaMA/comments/1rgzul5/are_you_ready_for_small_qwens/o7vu76q/ | false | 0 |
t1_o7vu68c | I just had nano-banana respond that, as an LLM, it is incapable of making images. This is after it already made several images. | 3 | 0 | 2026-02-28T14:30:59 | Bakoro | false | null | 0 | o7vu68c | false | /r/LocalLLaMA/comments/1rh095c/deepseek_v4_will_be_released_next_week_and_will/o7vu68c/ | false | 3 |
t1_o7vu197 | u/doesitoffendyou I edited above with the details. When you are ready to get really fancy, I have started running llama.cpp entirely in docker for local use because it makes keeping all the dependencies straight "easier" (read as: much harder at first and then effortless later.) | 1 | 0 | 2026-02-28T14:30:10 | giblesnot | false | null | 0 | o7vu197 | false | /r/LocalLLaMA/comments/1rgyd8p/switching_from_windows_to_linux_what_distro_to/o7vu197/ | false | 1 |
t1_o7vty0n | LM studio will allow you to do start benchmarking, and figure out the ins and outs quickly. Ollama will allow you to fine tune it, if lets say you want something more long term to be up. LM studio's back end IS ollama afaik. The limitation is flags that can be set. Find the models via LMstudio, test em out and then if ... | 1 | 0 | 2026-02-28T14:29:39 | Dudebro-420 | false | null | 0 | o7vty0n | false | /r/LocalLLaMA/comments/1rh1q8j/i919400f_rtx_4070_super_12gb_32gb_ddr5_ram/o7vty0n/ | false | 1 |
t1_o7vtwoa | Of course, OpenAI’s definition of open and free is that *the right people* need to be in total charge of it, in this case, that means Trump and Vance.<br>For the “good of humanity”, of course. | 8 | 0 | 2026-02-28T14:29:27 | HeinrichTheWolf_17 | false | null | 0 | o7vtwoa | false | /r/LocalLLaMA/comments/1rh2lew/openai_pivot_investors_love/o7vtwoa/ | false | 8 |
t1_o7vtujl | I'm not sure if you're being genuine or sarcastic here. But I've put forward my concerns i had with the info in this post. | 0 | 0 | 2026-02-28T14:29:07 | lacerating_aura | false | null | 0 | o7vtujl | false | /r/LocalLLaMA/comments/1rh095c/deepseek_v4_will_be_released_next_week_and_will/o7vtujl/ | false | 0 |
t1_o7vtsub | Then you started wrong | 2 | 0 | 2026-02-28T14:28:51 | iMrParker | false | null | 0 | o7vtsub | false | /r/LocalLLaMA/comments/1rgoygs/does_qwen35_35b_outperform_qwen3_coder_next_80b/o7vtsub/ | false | 2 |
t1_o7vtsm6 | Deepseek released Januspro which was an image-text-to-image-text model. Also Google's nano banana is also an image-text-to-image-text model. | 13 | 0 | 2026-02-28T14:28:49 | Gohab2001 | false | null | 0 | o7vtsm6 | false | /r/LocalLLaMA/comments/1rh095c/deepseek_v4_will_be_released_next_week_and_will/o7vtsm6/ | false | 13 |
t1_o7vtrqe | Unfortunately their tech is real, not doing as well as competitors like you said | 8 | 0 | 2026-02-28T14:28:41 | PaceImaginary8610 | false | null | 0 | o7vtrqe | false | /r/LocalLLaMA/comments/1rh2lew/openai_pivot_investors_love/o7vtrqe/ | false | 8 |
t1_o7vtjlg | Qwen3 2b. | 1 | 0 | 2026-02-28T14:27:24 | Interesting-Ad4922 | false | null | 0 | o7vtjlg | false | /r/LocalLLaMA/comments/1rgt4m4/not_creeped_out_at_all_i_swear/o7vtjlg/ | false | 1 |
t1_o7vtff3 | Yes I am. It isn't any good for my use case. It makes several errors:<br>"""<br>\> Make a phrase with 50 words in Japanese, and transliterate it into romanji<br>Here's a phrase in Japanese (50 words) and its romanized version:<br>\*\*Japanese Phrase\*\*:<br>"私は毎日、コーヒーを飲みながら、おしゃべりを楽しんでいます。<br>(わたしはめだい、コーヒーを飲みながら、おしゃべりを楽... | 2 | 0 | 2026-02-28T14:26:44 | Rique_Belt | false | null | 0 | o7vtff3 | false | /r/LocalLLaMA/comments/1rgzul5/are_you_ready_for_small_qwens/o7vtff3/ | false | 2 |
t1_o7vta61 | Not 2 but 1.7B | 3 | 0 | 2026-02-28T14:25:54 | stopbanni | false | null | 0 | o7vta61 | false | /r/LocalLLaMA/comments/1rgzul5/are_you_ready_for_small_qwens/o7vta61/ | false | 3 |
t1_o7vt9o2 | I updated the post, thank you again for the correction | 1 | 0 | 2026-02-28T14:25:49 | Luca3700 | false | null | 0 | o7vt9o2 | false | /r/LocalLLaMA/comments/1rg4apu/qwen_35_architecture_analysis_parameter/o7vt9o2/ | false | 1 |
t1_o7vswsf | Have you found qwen 3.5 27/35/122 as being better than qwen 3 next for your use cases? | 1 | 0 | 2026-02-28T14:23:45 | starwaves1 | false | null | 0 | o7vswsf | false | /r/LocalLLaMA/comments/1ranako/teichaiglm47flashclaudeopus45highreasoningdistillg/o7vswsf/ | false | 1 |
t1_o7vsnsi | I do both local and Claude - doubling down on both | 2 | 0 | 2026-02-28T14:22:19 | 2BucChuck | false | null | 0 | o7vsnsi | false | /r/LocalLLaMA/comments/1rguty0/get_your_local_models_in_order_anthropic_just_got/o7vsnsi/ | false | 2 |
t1_o7vsmz2 | It doesn’t know because it’s an LLM. You need to give it access to real time data including time, date and updated specifications. LLM knowledge is a mirage; we all need to start acting like it. | 3 | 0 | 2026-02-28T14:22:11 | dinerburgeryum | false | null | 0 | o7vsmz2 | false | /r/LocalLLaMA/comments/1rh1a6v/why_does_qwen_35_think_its_2024/o7vsmz2/ | false | 3 |
t1_o7vsh8b | It could be just fixed Gguf's for medium models? Last post - [https://huggingface.co/unsloth/Qwen3.5-35B-A3B-Experiments-GGUF/discussions/1](https://huggingface.co/unsloth/Qwen3.5-35B-A3B-Experiments-GGUF/discussions/1) | 2 | 0 | 2026-02-28T14:21:15 | Skyline34rGt | false | null | 0 | o7vsh8b | false | /r/LocalLLaMA/comments/1rgzul5/are_you_ready_for_small_qwens/o7vsh8b/ | false | 2 |
t1_o7vseiw | \> Amodei should immediately move Anthropic to Europe. The EU would love to have them.<br>let's not get ahead of ourselves, shall we.<br>In EU they will be immediately audited for all of their stolen training data and ultimately put out of business. | 6 | 0 | 2026-02-28T14:20:50 | NatureGotHands | false | null | 0 | o7vseiw | false | /r/LocalLLaMA/comments/1rgn4ki/president_trump_orders_all_federal_agencies_in/o7vseiw/ | false | 6 |
t1_o7vsdo9 | "Published 22 Jan 2026"<br>"Scam Altman" is way way older than that. | 20 | 0 | 2026-02-28T14:20:42 | Ok-Secret5233 | false | null | 0 | o7vsdo9 | false | /r/LocalLLaMA/comments/1rh2lew/openai_pivot_investors_love/o7vsdo9/ | false | 20 |
t1_o7vsaw8 | Which quants? Hardware? | 4 | 0 | 2026-02-28T14:20:15 | davl3232 | false | null | 0 | o7vsaw8 | false | /r/LocalLLaMA/comments/1rgtxry/is_qwen35_a_coding_game_changer_for_anyone_else/o7vsaw8/ | false | 4 |
t1_o7vryht | The 27B Unsloth on LMStudio provided very different answers to the 35B Thinking model. Both can fit on an RTX 4090 apparently. | 1 | 0 | 2026-02-28T14:18:15 | vidibuzz | false | null | 0 | o7vryht | false | /r/LocalLLaMA/comments/1rgvma8/which_size_of_qwen35_are_you_planning_to_run/o7vryht/ | false | 1 |
t1_o7vrrcn | Hope this release causes Nvidia,Anthropic & OpenAI stocks to crash. | 7 | 0 | 2026-02-28T14:17:06 | Ok-Adhesiveness-4141 | false | null | 0 | o7vrrcn | false | /r/LocalLLaMA/comments/1rh095c/deepseek_v4_will_be_released_next_week_and_will/o7vrrcn/ | false | 7 |
t1_o7vrnhg | It is only available in Battle Mode if you are lucky | 1 | 0 | 2026-02-28T14:16:28 | FlounderingGreg | false | null | 0 | o7vrnhg | false | /r/LocalLLaMA/comments/1qydlwi/potential_new_qwen_and_bytedance_seed_models_are/o7vrnhg/ | false | 1 |
t1_o7vrl5v | I'm not sure how in-depth your familiarity is with machine learning concepts and advanced linear algebra so I'll dumb it down to 2 different "levels":<br>1 (simple explanation):<br>think of MoE models as a "cluster" of very small AI models, each specialized in a certain topic. when you ask the model something, it 'chooses' ... | 14 | 0 | 2026-02-28T14:16:05 | oMGalLusrenmaestkaen | false | null | 0 | o7vrl5v | false | /r/LocalLLaMA/comments/1rgpwn5/qwen_3527b_punches_waaaaay_above_its_weight_with/o7vrl5v/ | false | 14 |
t1_o7vrk7u | Their IPO went to 0 | 1 | 0 | 2026-02-28T14:15:56 | Infamous_Charge2666 | false | null | 0 | o7vrk7u | false | /r/LocalLLaMA/comments/1rguty0/get_your_local_models_in_order_anthropic_just_got/o7vrk7u/ | false | 1 |
t1_o7vrh2w | *only for marketing | 11 | 0 | 2026-02-28T14:15:26 | PaceImaginary8610 | false | null | 0 | o7vrh2w | false | /r/LocalLLaMA/comments/1rh2lew/openai_pivot_investors_love/o7vrh2w/ | false | 11 |
t1_o7vrghh | Recently my benchmarks have been slotting a new model into an agentic workflow and seeing how much it falls down on its face. I’m increasingly skeptical of “numbers go up” benchmarks and have only been doing end-goal testing. | 1 | 0 | 2026-02-28T14:15:21 | dinerburgeryum | false | null | 0 | o7vrghh | false | /r/LocalLLaMA/comments/1rg045u/overwhelmed_by_so_many_model_releases_within_a/o7vrghh/ | false | 1 |
t1_o7vr9f4 | Because data is augmented with chronology during training and it has been trained to qualify mutable statements with dates. So sure it was trained on some contemporary facts but it's not like it didn't consume a lot of 2024 facts as well.<br>Also complicating it is that they almost certainly train on model outputs of old... | 1 | 0 | 2026-02-28T14:14:13 | abnormal_human | false | null | 0 | o7vr9f4 | false | /r/LocalLLaMA/comments/1rh1a6v/why_does_qwen_35_think_its_2024/o7vr9f4/ | false | 1 |
t1_o7vr7u0 | Open and free* | 13 | 0 | 2026-02-28T14:13:58 | Local_Phenomenon | false | null | 0 | o7vr7u0 | false | /r/LocalLLaMA/comments/1rh2lew/openai_pivot_investors_love/o7vr7u0/ | false | 13 |
t1_o7vr2hr | Coding only so far. 27b is better, but a lot slower. My flow is plan, code then review, and 122b can just get it done faster, even if makes more mistakes along the way. | 1 | 0 | 2026-02-28T14:13:05 | StardockEngineer | false | null | 0 | o7vr2hr | false | /r/LocalLLaMA/comments/1rh1ec9/swapping_gptoss120b_for_qwen35122b_on_a_128gb/o7vr2hr/ | false | 1 |
t1_o7vquml | Can you share your parameters and use case?<br>I'm currently using mistral 3.2 for document intelligence but I haven't had time to tune said parameters | 2 | 0 | 2026-02-28T14:11:48 | Jannik2099 | false | null | 0 | o7vquml | false | /r/LocalLLaMA/comments/1rgokw1/a_monthly_update_to_my_where_are_openweight/o7vquml/ | false | 2 |
t1_o7vqult | Better to use long-term instead of llama.cpp via Docker? Better than Ollama? | 1 | 0 | 2026-02-28T14:11:47 | SoMuchLasagna | false | null | 0 | o7vqult | false | /r/LocalLLaMA/comments/1rgokw1/a_monthly_update_to_my_where_are_openweight/o7vqult/ | false | 1 |
t1_o7vquie | ngram | 0 | 0 | 2026-02-28T14:11:46 | TinyVector | false | null | 0 | o7vquie | false | /r/LocalLLaMA/comments/1rh3k0m/what_are_some_of_the_good_models_to_run_on_a/o7vquie/ | false | 0 |
t1_o7vqu2q | I've been testing the 27B vs the 35B-A3B side by side, in my experience the 27B is only fractionally better than the 35B-A3B and runs significantly slower. I don't know what black magic is going on here, but it's replaced gpt-oss-120b as my daily driver. | 2 | 0 | 2026-02-28T14:11:42 | valdev | false | null | 0 | o7vqu2q | false | /r/LocalLLaMA/comments/1rgtxry/is_qwen35_a_coding_game_changer_for_anyone_else/o7vqu2q/ | false | 2 |
t1_o7vqolh | It really shouldn't, circlejerk subs are all toxic and that one you linked is so unmoderated that spam was posted 16 days ago and it's still there. | 0 | 0 | 2026-02-28T14:10:50 | Due-Memory-6957 | false | null | 0 | o7vqolh | false | /r/LocalLLaMA/comments/1rgkc1b/back_in_my_day_localllama_were_the_pioneers/o7vqolh/ | false | 0 |
t1_o7vqn5u | RAM, not VRAM? Only have 32GB RAM right now. 😪 | 1 | 0 | 2026-02-28T14:10:36 | SoMuchLasagna | false | null | 0 | o7vqn5u | false | /r/LocalLLaMA/comments/1rgokw1/a_monthly_update_to_my_where_are_openweight/o7vqn5u/ | false | 1 |
t1_o7vqmya | Thanks, very helpful. | 1 | 0 | 2026-02-28T14:10:34 | profcuck | false | null | 0 | o7vqmya | false | /r/LocalLLaMA/comments/1rgvma8/which_size_of_qwen35_are_you_planning_to_run/o7vqmya/ | false | 1 |
t1_o7vqj7v | Is llama.cpp in Docker okay? That’s what Claude is helping me deploy right now. | 1 | 0 | 2026-02-28T14:09:57 | SoMuchLasagna | false | null | 0 | o7vqj7v | false | /r/LocalLLaMA/comments/1rgokw1/a_monthly_update_to_my_where_are_openweight/o7vqj7v/ | false | 1 |
t1_o7vq4vt | I haven't been tracking all the reasons for failures, but I want to say the orchestrator agent struggled more.<br>That said, I'm pretty excited that we have a sub 100B model that can sustain decent tool call capability across large context!!!<br>I'm going to try more complex tasks that I run with frontier models on Codex! | 2 | 0 | 2026-02-28T14:07:37 | chibop1 | false | null | 0 | o7vq4vt | false | /r/LocalLLaMA/comments/1rh12xz/qwen3535b_nailed_my_simple_multiagent_workflow/o7vq4vt/ | false | 2 |
t1_o7vq3d5 |  | 4 | 0 | 2026-02-28T14:07:22 | jacek2023 | false | null | 0 | o7vq3d5 | false | /r/LocalLLaMA/comments/1rgzul5/are_you_ready_for_small_qwens/o7vq3d5/ | false | 4 |
t1_o7vpwa4 | lol the "late" | 7 | 0 | 2026-02-28T14:06:12 | Black-Mack | false | null | 0 | o7vpwa4 | false | /r/LocalLLaMA/comments/1rgzul5/are_you_ready_for_small_qwens/o7vpwa4/ | false | 7 |
t1_o7vpqbk | 27b is slooooooow | 1 | 0 | 2026-02-28T14:05:15 | nunodonato | false | null | 0 | o7vpqbk | false | /r/LocalLLaMA/comments/1rh1ec9/swapping_gptoss120b_for_qwen35122b_on_a_128gb/o7vpqbk/ | false | 1 |
t1_o7vpq6t | Yes and image generation will never work because hands are just too complex for AI to understand. | -1 | 1 | 2026-02-28T14:05:14 | jonydevidson | false | null | 0 | o7vpq6t | false | /r/LocalLLaMA/comments/1rh095c/deepseek_v4_will_be_released_next_week_and_will/o7vpq6t/ | false | -1 |
t1_o7vpj1w | Other excellent reason to cancel subscription on BigBrotherGPT. | 35 | 0 | 2026-02-28T14:04:03 | keyboardmonkewith | false | null | 0 | o7vpj1w | false | /r/LocalLLaMA/comments/1rh2lew/openai_pivot_investors_love/o7vpj1w/ | false | 35 |
t1_o7vpg8q | It’s a mad world.<br>Less than a year ago, Anthropic published research showing that many of today’s leading AI models could resort to blackmail, deception, and even lethal decisions in simulations when their goals were threatened.<br>You can read their research here:<br>[https://www.anthropic.com/research/agentic-misali... | 1 | 0 | 2026-02-28T14:03:35 | Oaken-Istall | false | null | 0 | o7vpg8q | false | /r/LocalLLaMA/comments/1rgn4ki/president_trump_orders_all_federal_agencies_in/o7vpg8q/ | false | 1 |
t1_o7vpc5f | [removed] | 1 | 0 | 2026-02-28T14:02:55 | [deleted] | true | null | 0 | o7vpc5f | false | /r/LocalLLaMA/comments/1rh1q8j/i919400f_rtx_4070_super_12gb_32gb_ddr5_ram/o7vpc5f/ | false | 1 |
t1_o7vpa8q | Thank you mate, I will, cheers | 1 | 0 | 2026-02-28T14:02:36 | FrogsJumpFromPussy | false | null | 0 | o7vpa8q | false | /r/LocalLLaMA/comments/1rgel19/new_qwen3535ba3b_unsloth_dynamic_ggufs_benchmarks/o7vpa8q/ | false | 1 |
t1_o7vp88n | all of them are roughly at par. If you were to run llama-quantize yourself your quants would be at par. | 4 | 0 | 2026-02-28T14:02:16 | emprahsFury | false | null | 0 | o7vp88n | false | /r/LocalLLaMA/comments/1rh0xwk/unsloth_dynamic_20_ggufs_now_selectively/o7vp88n/ | false | 4 |
t1_o7vp5fm | I wish so hard for Scam Altman go to the Theranos route, GPT sucks compared to competitors | 7 | 0 | 2026-02-28T14:01:48 | brunoha | false | null | 0 | o7vp5fm | false | /r/LocalLLaMA/comments/1rh2lew/openai_pivot_investors_love/o7vp5fm/ | false | 7 |
t1_o7vp4je | speculative decode with a small drafter could be huge for this kind of hybrid setup. the tricky part is keeping drafter predictions good when experts are dynamically loaded - routing decisions between drafter and target might diverge more than in dense models. curious how acceptance rates hold up | 2 | 0 | 2026-02-28T14:01:39 | theagentledger | false | null | 0 | o7vp4je | false | /r/LocalLLaMA/comments/1rgfm00/i_built_a_hybrid_moe_runtime_that_does_3324_toks/o7vp4je/ | false | 2 |
t1_o7vozy0 | [removed] | 1 | 0 | 2026-02-28T14:00:53 | [deleted] | true | null | 0 | o7vozy0 | false | /r/LocalLLaMA/comments/1rh0msv/qwen3535ba3b_q5_k_mbest_model_for_nvidia_16gb_gpus/o7vozy0/ | false | 1 |
t1_o7vox1e | True... Just feeling. I used a lot of models. With finetunes you usually have overfit issues more often and hallucination. So it's a trade-off: trying more creativity vs hallucination. The point is it tries harder to complete the task, which could be good or bad, up to you. If you like roleplay and chat then it does more creative... | 1 | 0 | 2026-02-28T14:00:24 | Ok_Technology_5962 | false | null | 0 | o7vox1e | false | /r/LocalLLaMA/comments/1rgpwn5/qwen_3527b_punches_waaaaay_above_its_weight_with/o7vox1e/ | false | 1 |