name string | body string | score int64 | controversiality int64 | created timestamp[us] | author string | collapsed bool | edited timestamp[us] | gilded int64 | id string | locked bool | permalink string | stickied bool | ups int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
t1_o7r82r8 | Yes that's correct. You should read: [https://unsloth.ai/docs/basics/unsloth-dynamic-2.0-ggufs](https://unsloth.ai/docs/basics/unsloth-dynamic-2.0-ggufs) | 5 | 0 | 2026-02-27T19:40:57 | yoracale | false | null | 0 | o7r82r8 | false | /r/LocalLLaMA/comments/1rgel19/new_qwen3535ba3b_unsloth_dynamic_ggufs_benchmarks/o7r82r8/ | false | 5 |
t1_o7r82p9 | It's a perfect tool to create hype and make money out of it. Grifters hype the sh!t out of it for engagement farming , some useless AI tools hyping it so that they can promote their sh!t, providers hyping/promoting it so that they can sell their plans.
I like the idea of it but as someone said it became like NFTs now... | 2 | 0 | 2026-02-27T19:40:56 | Codemonkeyzz | false | null | 0 | o7r82p9 | false | /r/LocalLLaMA/comments/1rfp6bk/why_is_openclaw_even_this_popular/o7r82p9/ | false | 2 |
t1_o7r8172 | Thanks so much for testing them out and glad it's better :) | 2 | 0 | 2026-02-27T19:40:44 | yoracale | false | null | 0 | o7r8172 | false | /r/LocalLLaMA/comments/1rgel19/new_qwen3535ba3b_unsloth_dynamic_ggufs_benchmarks/o7r8172/ | false | 2 |
t1_o7r7z2q | I only have 8GB VRAM and I'm still going to give it a shot. A very long slow ass shot, but still a shot. | 2 | 0 | 2026-02-27T19:40:26 | Cool-Chemical-5629 | false | null | 0 | o7r7z2q | false | /r/LocalLLaMA/comments/1rgexmk/qwen3527bclaude46opusreasoningdistilledgguf_is_out/o7r7z2q/ | false | 2 |
t1_o7r7xi4 | We did not compare some quants as we only selected the most important ones and also because other quanters did not upload them.
The purple dots? What do you mean, they're together in the graph. | 1 | 0 | 2026-02-27T19:40:13 | yoracale | false | null | 0 | o7r7xi4 | false | /r/LocalLLaMA/comments/1rgel19/new_qwen3535ba3b_unsloth_dynamic_ggufs_benchmarks/o7r7xi4/ | false | 1 |
t1_o7r7w68 | I think companies are really scared to look foolish, and having their llm admit a mistake is omitted from their training. After all, why would they let us (the consumer) know that the product we are using is "garbage" when it could make us feel better about our decision of using the model in the first place. | 1 | 0 | 2026-02-27T19:40:02 | gamblingapocalypse | false | null | 0 | o7r7w68 | false | /r/LocalLLaMA/comments/1rgaccz/dishonesty_in_thinking_block/o7r7w68/ | false | 1 |
t1_o7r7tao | Ok, i've always been a bit unsure of what UNSLOTH is for? What's the difference between the official models that come out, and the Unsloth versions? | 2 | 0 | 2026-02-27T19:39:38 | Savantskie1 | false | null | 0 | o7r7tao | false | /r/LocalLLaMA/comments/1rgel19/new_qwen3535ba3b_unsloth_dynamic_ggufs_benchmarks/o7r7tao/ | false | 2 |
t1_o7r7swr | RAM modules soldered to a GPU card aren’t going to be cheaper. The value proposition is still there. | 7 | 0 | 2026-02-27T19:39:34 | zadiraines | false | null | 0 | o7r7swr | false | /r/LocalLLaMA/comments/1rgfm00/i_built_a_hybrid_moe_runtime_that_does_3324_toks/o7r7swr/ | false | 7 |
t1_o7r7rhq | [Liked Ling-mini for similar size](https://www.reddit.com/r/LocalLLaMA/comments/1qp7so2/bailingmoe_ling17b_models_speed_is_better_now/). | 1 | 0 | 2026-02-27T19:39:22 | pmttyji | false | null | 0 | o7r7rhq | false | /r/LocalLLaMA/comments/1rgek4m/what_are_your_expectations_for_the_small_series/o7r7rhq/ | false | 1 |
t1_o7r7po6 | I think a really fun benchmark would be to compare small quants of modern large models and a small quant of medium or small models. Just some food for thoughts. Personally I would bet on Q8 of qwen 35 to mop the floor with Q1 of qwen 110 just because of the brain damage | 1 | 0 | 2026-02-27T19:39:07 | Long_comment_san | false | null | 0 | o7r7po6 | false | /r/LocalLLaMA/comments/1rgel19/new_qwen3535ba3b_unsloth_dynamic_ggufs_benchmarks/o7r7po6/ | false | 1 |
t1_o7r7nfg | He never left. His vid on using Linux was a pretty big hit. Also he's enjoying family life in Japan. | 92 | 0 | 2026-02-27T19:38:49 | DraconPern | false | null | 0 | o7r7nfg | false | /r/LocalLLaMA/comments/1rg8dex/pewdiepie_finetuned_qwen25coder32b_to_beat/o7r7nfg/ | false | 92 |
t1_o7r7lh4 | are you sure your power supply is enough to take all of that? The 2060 draws less power than a V100 | 1 | 0 | 2026-02-27T19:38:33 | -dysangel- | false | null | 0 | o7r7lh4 | false | /r/LocalLLaMA/comments/1rgfude/computer_wont_boot_with_2_tesla_v100s/o7r7lh4/ | false | 1 |
t1_o7r7eth | Obviously the lack of x16 slots is not a problem, because the system boots with a V100 and a 2060 Super. I don't need the GPUs to run at x16. I'm fine with x1, I don't need the throughput.
Yes, I can physically install two V100s on the motherboard. I chose it specifically because it has 5 full width PCIe slots.
The C... | 3 | 0 | 2026-02-27T19:37:38 | MackThax | false | null | 0 | o7r7eth | false | /r/LocalLLaMA/comments/1rgfude/computer_wont_boot_with_2_tesla_v100s/o7r7eth/ | false | 3 |
t1_o7r7e5x | EXO maintainer here.
Are you running with RDMA / Tensor Parallelism? You should get a speedup compared to 1 node. | 1 | 0 | 2026-02-27T19:37:32 | Longjumping_Crow_597 | false | null | 0 | o7r7e5x | false | /r/LocalLLaMA/comments/1rga9x4/qwen35122ba10b_pooled_on_dual_mac_studio_m4_max/o7r7e5x/ | false | 1 |
t1_o7r7drv | u/saeedzou did you find any good models that are good for Persian? | 1 | 0 | 2026-02-27T19:37:29 | Js-2075 | false | null | 0 | o7r7drv | false | /r/LocalLLaMA/comments/1m7rwgo/best_tts_model_with_new_language_support/o7r7drv/ | false | 1 |
t1_o7r7cbw | This is so awesome and you can use it right away with the testing LLM out of the box! I installed the browser add-on immediately because I want to be able to automate repetitive tasks in the browser and the extension allows the agent to keep going even when the agent clicks the link to load a different page. Some time ... | 1 | 0 | 2026-02-27T19:37:17 | Cool-Chemical-5629 | false | null | 0 | o7r7cbw | false | /r/LocalLLaMA/comments/1rgfrxp/pageagent_browser_ai_agent_that_runs_inside_the/o7r7cbw/ | false | 1 |
t1_o7r78jc | Tried to run this on NVIDIA Spark / DGX? | 1 | 0 | 2026-02-27T19:36:45 | Silver_Patient_7253 | false | null | 0 | o7r78jc | false | /r/LocalLLaMA/comments/1rdxfdu/qwen3535ba3b_is_a_gamechanger_for_agentic_coding/o7r78jc/ | false | 1 |
t1_o7r75sg | Commenting here so I don't lose this | 1 | 0 | 2026-02-27T19:36:22 | cmdr-William-Riker | false | null | 0 | o7r75sg | false | /r/LocalLLaMA/comments/1rg8gkx/qwen35_35b_a3b_45_ts_128k_ctx_on_single_16gb_5060/o7r75sg/ | false | 1 |
t1_o7r751l | 15B MOE model, Q4/Q8 will fit 8/16 GB VRAM so it would be faster. (Q4 of 30B MOE gives me 35-40 t/s for 8GB VRAM+32GB RAM)
5-10B Dense model to beat famous outlier superior Qwen3-4B! | 2 | 0 | 2026-02-27T19:36:16 | pmttyji | false | null | 0 | o7r751l | false | /r/LocalLLaMA/comments/1rgek4m/what_are_your_expectations_for_the_small_series/o7r751l/ | false | 2 |
t1_o7r71tv | It requires less space, it is in 2 files and I want to develop it in the direction of code more than the chat itself | -2 | 0 | 2026-02-27T19:35:49 | KRZYZYK33 | false | null | 0 | o7r71tv | false | /r/LocalLLaMA/comments/1rgfoji/cmdai_a_simple_tool_for_loading_models/o7r71tv/ | false | -2 |
t1_o7r6t6y | 2 | 0 | 2026-02-27T19:34:37 | -dysangel- | false | null | 0 | o7r6t6y | false | /r/LocalLLaMA/comments/1rg8dex/pewdiepie_finetuned_qwen25coder32b_to_beat/o7r6t6y/ | false | 2 | |
t1_o7r6np0 | Is there a way you can not have this be triggered by like other qr's or other extensions generating? I have some extensions where it also auto generates but it keeps triggering the echo chamber repeatedly. | 1 | 0 | 2026-02-27T19:33:52 | Thekittymixy | false | null | 0 | o7r6np0 | false | /r/LocalLLaMA/comments/1q4tken/release_echochamber_add_aigenerated_audience/o7r6np0/ | false | 1 |
t1_o7r6ldt | LM Studio user here, is there an easy way to use vllm like there is for llama.cpp? | 2 | 0 | 2026-02-27T19:33:32 | Significant_Fig_7581 | false | null | 0 | o7r6ldt | false | /r/LocalLLaMA/comments/1rgek4m/what_are_your_expectations_for_the_small_series/o7r6ldt/ | false | 2 |
t1_o7r6edj | It's awful (and ironic) that this sub is being taken over by bots. | 1 | 0 | 2026-02-27T19:32:33 | LickMyTicker | false | null | 0 | o7r6edj | false | /r/LocalLLaMA/comments/1rg8qp0/howwhere_to_run_an_uncensored_model_using_cloud/o7r6edj/ | false | 1 |
t1_o7r6adk | ubergarm here, thanks for sharing more of your methodologies and results such that others can reproduce and analyze the data too! (the AesSedai KLD logs are missing at the moment tho, probably forgot to upload them into the HF repo?).
As most folks know quantizing is all trade-offs. Thanks for including my mainline co... | 44 | 0 | 2026-02-27T19:31:59 | VoidAlchemy | false | null | 0 | o7r6adk | false | /r/LocalLLaMA/comments/1rgel19/new_qwen3535ba3b_unsloth_dynamic_ggufs_benchmarks/o7r6adk/ | false | 44 |
t1_o7r69j8 | I suspect he "started" when he was building circuits in Minecraft. He seemed pretty taken with that. That may have been what led to wanting to tinker with things more. | 10 | 0 | 2026-02-27T19:31:52 | -dysangel- | false | null | 0 | o7r69j8 | false | /r/LocalLLaMA/comments/1rg8dex/pewdiepie_finetuned_qwen25coder32b_to_beat/o7r69j8/ | false | 10 |
t1_o7r68x6 | I think OP probably already asked and that isnt even a good answer as many people in this sub have run multiple gpus off of cheap mobos. | 2 | 0 | 2026-02-27T19:31:48 | lemondrops9 | false | null | 0 | o7r68x6 | false | /r/LocalLLaMA/comments/1rgfude/computer_wont_boot_with_2_tesla_v100s/o7r68x6/ | false | 2 |
t1_o7r66ju | I assume the reason is that you never go outside | 10 | 0 | 2026-02-27T19:31:28 | ILikeBubblyWater | false | null | 0 | o7r66ju | false | /r/LocalLLaMA/comments/1rg8dex/pewdiepie_finetuned_qwen25coder32b_to_beat/o7r66ju/ | false | 10 |
t1_o7r65dx | Thank u for the tip. I created this app for that reason: for business users it's very unsafe to send data on the cloud and could be not compliant to companies rules. So I think this is the opportunity | 1 | 0 | 2026-02-27T19:31:19 | dai_app | false | null | 0 | o7r65dx | false | /r/LocalLLaMA/comments/1rg3b5v/what_do_you_think_if_you_have_the_possibility_to/o7r65dx/ | false | 1 |
t1_o7r652c | That's fair. It's allot to cover. Too much for this message. Here's what I can do. I've already put the app installer on HF free to use. I will also open SRC the code on git hub. I will put the engine that drives it on HF. Most of learning is building. I've read allot of books, watched horrendous amounts of YouTube. Bu... | 2 | 0 | 2026-02-27T19:31:16 | melanov85 | false | null | 0 | o7r652c | false | /r/LocalLLaMA/comments/1rgcosw/trained_and_quantized_an_llm_on_a_gtx_1650_4gb/o7r652c/ | false | 2 |
t1_o7r649w | ty | 0 | 0 | 2026-02-27T19:31:09 | ShortFishing6749 | false | null | 0 | o7r649w | false | /r/LocalLLaMA/comments/1rg3gka/llm_terminology_explained_simply_weights/o7r649w/ | false | 0 |
t1_o7r5y9v | Are you able to achieve real time generation using conventional hardware ? I would prefer if the voice is sufficiently natural, stable and ofcourse low latency. A lot of low latency models exist one of my favorites is supertonic very clear, has cpu inference but slightly lacking that emotional range and ofcourse is not... | 1 | 0 | 2026-02-27T19:30:20 | intptr64 | false | null | 0 | o7r5y9v | false | /r/LocalLLaMA/comments/1rgb8tj/discussion_local_contextaware_tts_what_do_you/o7r5y9v/ | false | 1 |
t1_o7r5uib | But it will work, if you don't need the bandwidth. It should still boot. | 2 | 0 | 2026-02-27T19:29:49 | Nota_ReAlperson | false | null | 0 | o7r5uib | false | /r/LocalLLaMA/comments/1rgfude/computer_wont_boot_with_2_tesla_v100s/o7r5uib/ | false | 2 |
t1_o7r5sw7 | If you are using it with llama.cpp, try vLLM with whatever quantization supports your compute, probably AWQ. Prompt processing difference is mindblowing in my case, like 1200tps vs 100tps | 1 | 0 | 2026-02-27T19:29:36 | catplusplusok | false | null | 0 | o7r5sw7 | false | /r/LocalLLaMA/comments/1rgek4m/what_are_your_expectations_for_the_small_series/o7r5sw7/ | false | 1 |
t1_o7r5rgh | Can you explain the advantages of this program over for example llama-cli? | 2 | 0 | 2026-02-27T19:29:24 | jacek2023 | false | null | 0 | o7r5rgh | false | /r/LocalLLaMA/comments/1rgfoji/cmdai_a_simple_tool_for_loading_models/o7r5rgh/ | false | 2 |
t1_o7r5rce | They're around 10k now in Newegg we are stuck with the battlemage I guess | 1 | 0 | 2026-02-27T19:29:23 | Ok-Ad-8976 | false | null | 0 | o7r5rce | false | /r/LocalLLaMA/comments/1qx5i2g/rtx6000_pro_price_is_very_volatile/o7r5rce/ | false | 1 |
t1_o7r5qac | The first graph is a bit hard to read. Like missing labels for many quants like the 2 purple dots. | 1 | 0 | 2026-02-27T19:29:14 | klop2031 | false | null | 0 | o7r5qac | false | /r/LocalLLaMA/comments/1rgel19/new_qwen3535ba3b_unsloth_dynamic_ggufs_benchmarks/o7r5qac/ | false | 1 |
t1_o7r5fov | What do you mean when you say that the Dynamic 2.0 method uses a conversational format? Is it somehow better adapted to do conversations instead of some other uses? | 3 | 0 | 2026-02-27T19:27:48 | Hot_Strawberry1999 | false | null | 0 | o7r5fov | false | /r/LocalLLaMA/comments/1rgel19/new_qwen3535ba3b_unsloth_dynamic_ggufs_benchmarks/o7r5fov/ | false | 3 |
t1_o7r516l | 10 | 0 | 2026-02-27T19:25:49 | jslominski | false | null | 0 | o7r516l | false | /r/LocalLLaMA/comments/1rgel19/new_qwen3535ba3b_unsloth_dynamic_ggufs_benchmarks/o7r516l/ | false | 10 | |
t1_o7r4yql | \>10K and approacing 11K on newegg | 1 | 0 | 2026-02-27T19:25:29 | Ok-Ad-8976 | false | null | 0 | o7r4yql | false | /r/LocalLLaMA/comments/1qx5i2g/rtx6000_pro_price_is_very_volatile/o7r4yql/ | false | 1 |
t1_o7r4x10 | Awesome. I've been assuming that if I'm not going to fit in VRAM anyway, I might as well bump the quant up to Q6. Seems like for the size, there's really not a huge difference between Q6 and Q4. Even IQ3\_XXS, which fits in VRAM for 16GB cards with enough Q8\_KV cache to be useful, doesn't have huge divergence. I ki... | 6 | 0 | 2026-02-27T19:25:15 | sine120 | false | null | 0 | o7r4x10 | false | /r/LocalLLaMA/comments/1rgel19/new_qwen3535ba3b_unsloth_dynamic_ggufs_benchmarks/o7r4x10/ | false | 6 |
t1_o7r4rjv | Are you sure you are not just... running out of memory and hitting a page/cache issue? I tested q4 just now on a 5060ti 16gb with 64gb ram @ 28gb allocated in RAM 85k context before doing anything. After writing this far, I realized you're also using the laptop 12gb further constraining you. I wouldn't expect this mod... | 2 | 0 | 2026-02-27T19:24:31 | Xp_12 | false | null | 0 | o7r4rjv | false | /r/LocalLLaMA/comments/1rgaw5c/gpu_shared_vram_makes_qwen3535b_prompt_processing/o7r4rjv/ | false | 2 |
t1_o7r4osf | One other thing, I was looking at [https://huggingface.co/unsloth/Qwen3.5-35B-A3B-Experiments-GGUF/tree/main/KLD\_Logs](https://huggingface.co/unsloth/Qwen3.5-35B-A3B-Experiments-GGUF/tree/main/KLD_Logs) and noticed that the measurements for my quants weren't there? Will they be uploaded later or something? | 13 | 0 | 2026-02-27T19:24:08 | Digger412 | false | null | 0 | o7r4osf | false | /r/LocalLLaMA/comments/1rgel19/new_qwen3535ba3b_unsloth_dynamic_ggufs_benchmarks/o7r4osf/ | false | 13 |
t1_o7r4o7d | 4o has been completely removed, so *any* model is better. | -3 | 0 | 2026-02-27T19:24:04 | roselan | false | null | 0 | o7r4o7d | false | /r/LocalLLaMA/comments/1rg8dex/pewdiepie_finetuned_qwen25coder32b_to_beat/o7r4o7d/ | false | -3 |
t1_o7r4jwr | Did you find any solution or you gave up? | 1 | 0 | 2026-02-27T19:23:28 | ANTONBORODA | false | null | 0 | o7r4jwr | false | /r/LocalLLaMA/comments/1rg05k7/qwen_35_122b_a10b_3584_score_on_natint_ugi/o7r4jwr/ | false | 1 |
t1_o7r4g62 | Very cool! Unfortunately, RAM is not that cheap anymore.... | 5 | 0 | 2026-02-27T19:22:57 | Tempstudio | false | null | 0 | o7r4g62 | false | /r/LocalLLaMA/comments/1rgfm00/i_built_a_hybrid_moe_runtime_that_does_3324_toks/o7r4g62/ | false | 5 |
t1_o7r4cy5 | Yeah that too, I hate this new qwen architecture but maybe thats the reason they can fit so much performance on such a small size | 1 | 0 | 2026-02-27T19:22:31 | Significant_Fig_7581 | false | null | 0 | o7r4cy5 | false | /r/LocalLLaMA/comments/1rgek4m/what_are_your_expectations_for_the_small_series/o7r4cy5/ | false | 1 |
t1_o7r4bvt | Heck yes, so glad you targeted the SSM layers! They feel so sensitive, I never touch them with the possible exception of ssm\_out at q8\_0 for sub-4 bit quant types. | 2 | 0 | 2026-02-27T19:22:22 | dinerburgeryum | false | null | 0 | o7r4bvt | false | /r/LocalLLaMA/comments/1rgel19/new_qwen3535ba3b_unsloth_dynamic_ggufs_benchmarks/o7r4bvt/ | false | 2 |
t1_o7r4aux | I mean, it's definitely something about the setup, because each V100 works on its own, or along with a 2060 Super. | 1 | 0 | 2026-02-27T19:22:14 | MackThax | false | null | 0 | o7r4aux | false | /r/LocalLLaMA/comments/1rgfude/computer_wont_boot_with_2_tesla_v100s/o7r4aux/ | false | 1 |
t1_o7r4am7 | I had the same experience, but with OpenCode. | 1 | 0 | 2026-02-27T19:22:12 | LastAd7195 | false | null | 0 | o7r4am7 | false | /r/LocalLLaMA/comments/1rg05k7/qwen_35_122b_a10b_3584_score_on_natint_ugi/o7r4am7/ | false | 1 |
t1_o7r4ab2 | There is no contribution, it's just made up jargon that doesn't mean anything. | 2 | 0 | 2026-02-27T19:22:10 | MelodicFuntasy | false | null | 0 | o7r4ab2 | false | /r/LocalLLaMA/comments/1rfrnus/academic_plagiarism_and_the_misappropriation_of/o7r4ab2/ | false | 2 |
t1_o7r4a1r | -4 | 0 | 2026-02-27T19:22:08 | false79 | false | null | 0 | o7r4a1r | false | /r/LocalLLaMA/comments/1rgfude/computer_wont_boot_with_2_tesla_v100s/o7r4a1r/ | false | -4 | |
t1_o7r45vw | 15B A3B !!! | 1 | 0 | 2026-02-27T19:21:33 | Adventurous-Paper566 | false | null | 0 | o7r45vw | false | /r/LocalLLaMA/comments/1rgek4m/what_are_your_expectations_for_the_small_series/o7r45vw/ | false | 1 |
t1_o7r45a8 | Yup. Both V100s work alone, or in conjunction with the 2060 Super. | 1 | 0 | 2026-02-27T19:21:28 | MackThax | false | null | 0 | o7r45a8 | false | /r/LocalLLaMA/comments/1rgfude/computer_wont_boot_with_2_tesla_v100s/o7r45a8/ | false | 1 |
t1_o7r42d5 | So it’s absolutely useless | -8 | 0 | 2026-02-27T19:21:05 | mindondrugs | false | null | 0 | o7r42d5 | false | /r/LocalLLaMA/comments/1rg8dex/pewdiepie_finetuned_qwen25coder32b_to_beat/o7r42d5/ | false | -8 |
t1_o7r3zqd | Ok.
SCIFs are generally for Top Secret info | 1 | 0 | 2026-02-27T19:20:43 | thrownawaymane | false | null | 0 | o7r3zqd | false | /r/LocalLLaMA/comments/1rfg3kx/american_closed_models_vs_chinese_open_models_is/o7r3zqd/ | false | 1 |
t1_o7r3zij | Now THIS is what I call a useful project. Thank you! | 2 | 0 | 2026-02-27T19:20:42 | Cool-Chemical-5629 | false | null | 0 | o7r3zij | false | /r/LocalLLaMA/comments/1rgfrxp/pageagent_browser_ai_agent_that_runs_inside_the/o7r3zij/ | false | 2 |
t1_o7r3zka | This is great work! But I wonder about the effect of dropping the batch size adjustments. Normally you increase the ubatch size to improve prompt processing speed. It can increase drastically (eg 3x) when you raise ubatch from, say, 512 to 2048. But generation speed will suffer due to VRAM pressure. You didn't seem to ... | 3 | 0 | 2026-02-27T19:20:42 | OsmanthusBloom | false | null | 0 | o7r3zka | false | /r/LocalLLaMA/comments/1rg4zqv/followup_qwen3535ba3b_7_communityrequested/o7r3zka/ | false | 3 |
t1_o7r3yct | going to refine this one 1 more time. | 1 | 0 | 2026-02-27T19:20:32 | jaigouk | false | null | 0 | o7r3yct | false | /r/LocalLLaMA/comments/1renq5y/qwen35_model_comparison_27b_vs_35b_on_rtx_4090/o7r3yct/ | false | 1 |
t1_o7r3y19 | Daniel I'm a bit lost, which one is best for a strix halo with 128gb ram? Should I go for ud-4Q? Or 6Q? I cannot find much info on AMD | 1 | 0 | 2026-02-27T19:20:30 | oliveoilcheff | false | null | 0 | o7r3y19 | false | /r/LocalLLaMA/comments/1rgel19/new_qwen3535ba3b_unsloth_dynamic_ggufs_benchmarks/o7r3y19/ | false | 1 |
t1_o7r3xgv | Don't jump on it so soon. I wrote my expereince with some prompt injection testing
[https://jranjan.destinjidee.com/blogs/ai/openclaw-your-agent-their-commands](https://jranjan.destinjidee.com/blogs/ai/openclaw-your-agent-their-commands) | 1 | 0 | 2026-02-27T19:20:25 | Alternative-Hippo207 | false | null | 0 | o7r3xgv | false | /r/LocalLLaMA/comments/1rbjxpv/i_think_openclaw_is_overhyped_just_use_skills/o7r3xgv/ | false | 1 |
t1_o7r3vlx | Does this mean qwen 3 32b beats gpt 4o? I currently use gpt 5.2 on subscription for coding, but I started out using 4o last year. Can I really run a quant of qwen 3 on my 3090 and get equivalent performance? | 1 | 0 | 2026-02-27T19:20:10 | dr_lm | false | null | 0 | o7r3vlx | false | /r/LocalLLaMA/comments/1rg8dex/pewdiepie_finetuned_qwen25coder32b_to_beat/o7r3vlx/ | false | 1 |
t1_o7r3vet | Great work - your quants are actually very good! Yes more different quant types in llama.cpp would be much appreciated :)
Thank you as always for the hard work! | 31 | 0 | 2026-02-27T19:20:08 | danielhanchen | false | null | 0 | o7r3vet | false | /r/LocalLLaMA/comments/1rgel19/new_qwen3535ba3b_unsloth_dynamic_ggufs_benchmarks/o7r3vet/ | false | 31 |
t1_o7r3v8p | Le problème de 35B c'est le temps de traitement des prompts, même complètement déchargé sur 2x4060ti 16Go la latence est insupportable pour une utilisation TTS/STT, alors qu'avec 30B c'était instantané. Je suppose que c'est négligeable pour ceux qui ont des GPU puissants. | 4 | 0 | 2026-02-27T19:20:07 | Adventurous-Paper566 | false | null | 0 | o7r3v8p | false | /r/LocalLLaMA/comments/1rgek4m/what_are_your_expectations_for_the_small_series/o7r3v8p/ | false | 4 |
t1_o7r3nsa | so a popular vibecoded crap? | 16 | 0 | 2026-02-27T19:19:04 | 4baobao | false | null | 0 | o7r3nsa | false | /r/LocalLLaMA/comments/1rg94wu/llmfit_one_command_to_find_what_model_runs_on/o7r3nsa/ | false | 16 |
t1_o7r3lzt | Idk what i'm doing wrong, i just managed to try it in llama.cpp after a SyCL bug has been fixed and made it into the docker image.
But the results are just unusable. I mean, what are we supposed to do with these results like below?
https://preview.redd.it/v5lvr8xn73mg1.png?width=618&format=png&auto=webp&s=afb188b7ba... | 2 | 0 | 2026-02-27T19:18:50 | inphaser | false | null | 0 | o7r3lzt | false | /r/LocalLLaMA/comments/1qz5uww/qwen3_coder_next_as_first_usable_coding_model_60/o7r3lzt/ | false | 2 |
t1_o7r3lr2 | They used the same made up words, but there is nothing there to plagiarize or steal. You haven't invented a conscious computer or programmed morality. It's all fake. | 2 | 0 | 2026-02-27T19:18:48 | MelodicFuntasy | false | null | 0 | o7r3lr2 | false | /r/LocalLLaMA/comments/1rfrnus/academic_plagiarism_and_the_misappropriation_of/o7r3lr2/ | false | 2 |
t1_o7r3jzz | It might be equal footing but it's not a very useful result, as you yourself said.
I'd be much more interested in KLD results for 100k tokens of the base model implementing an application in the flavor of the month coding harness. That still disregards a wide variety of use cases but I'd expect it to be much more repr... | 3 | 0 | 2026-02-27T19:18:33 | GroundbreakingLlama | false | null | 0 | o7r3jzz | false | /r/LocalLLaMA/comments/1rgel19/new_qwen3535ba3b_unsloth_dynamic_ggufs_benchmarks/o7r3jzz/ | false | 3 |
t1_o7r3j2i | Well if 35B is the successor of 30B then we should expect at least 15B | 3 | 0 | 2026-02-27T19:18:25 | jacek2023 | false | null | 0 | o7r3j2i | false | /r/LocalLLaMA/comments/1rgek4m/what_are_your_expectations_for_the_small_series/o7r3j2i/ | false | 3 |
t1_o7r3gcx | Unfortunately I'm not sure - our quants used to work well in Ollama, but now their chat template system is much more complicated to support :( | 5 | 0 | 2026-02-27T19:18:03 | danielhanchen | false | null | 0 | o7r3gcx | false | /r/LocalLLaMA/comments/1rgel19/new_qwen3535ba3b_unsloth_dynamic_ggufs_benchmarks/o7r3gcx/ | false | 5 |
t1_o7r3g3n | Hi Daniel, AesSedai here - thanks for publishing this research! KLD and PPL don't tell the entire story but they are good starting points when deciding what quantization (both uploader and which quant level) to use! I'm happy to see more investigation being done here as it benefits the entire community.
I think it hel... | 92 | 0 | 2026-02-27T19:18:01 | Digger412 | false | null | 0 | o7r3g3n | false | /r/LocalLLaMA/comments/1rgel19/new_qwen3535ba3b_unsloth_dynamic_ggufs_benchmarks/o7r3g3n/ | false | 92 |
t1_o7r3cqw | Thanks! Will post them soon hopefully! | 3 | 0 | 2026-02-27T19:17:33 | danielhanchen | false | null | 0 | o7r3cqw | false | /r/LocalLLaMA/comments/1rgel19/new_qwen3535ba3b_unsloth_dynamic_ggufs_benchmarks/o7r3cqw/ | false | 3 |
t1_o7r3a0m | [https://www.reddit.com/r/LocalLLaMA/comments/1rgel19/new\_qwen3535ba3b\_unsloth\_dynamic\_ggufs\_benchmarks/](https://www.reddit.com/r/LocalLLaMA/comments/1rgel19/new_qwen3535ba3b_unsloth_dynamic_ggufs_benchmarks/)
More results and brand new Unsloth UD quants. | 2 | 0 | 2026-02-27T19:17:11 | chris_0611 | false | null | 0 | o7r3a0m | false | /r/LocalLLaMA/comments/1rg4fb7/should_qwen3535ba3b_be_this_much_slower_than/o7r3a0m/ | false | 2 |
t1_o7r36ov | Do you see any quality drop when using q4 for kv cache? | 1 | 0 | 2026-02-27T19:16:44 | bobaburger | false | null | 0 | o7r36ov | false | /r/LocalLLaMA/comments/1rg8gkx/qwen35_35b_a3b_45_ts_128k_ctx_on_single_16gb_5060/o7r36ov/ | false | 1 |
t1_o7r35zp | I dunno if you got a friend who will led you pop in your 2 gpus into their system. But they need to have PSU that can handle it. swapping another PSU is such a pain in the. | 1 | 0 | 2026-02-27T19:16:38 | false79 | false | null | 0 | o7r35zp | false | /r/LocalLLaMA/comments/1rgfude/computer_wont_boot_with_2_tesla_v100s/o7r35zp/ | false | 1 |
t1_o7r32rx | Imagine reading this headline a year ago | 6 | 0 | 2026-02-27T19:16:12 | Pro-editor-1105 | false | null | 0 | o7r32rx | false | /r/LocalLLaMA/comments/1rg8dex/pewdiepie_finetuned_qwen25coder32b_to_beat/o7r32rx/ | false | 6 |
t1_o7r30n5 | I was actually trying to find some of your great quants, but sadly could only find that 1 :( | 1 | 0 | 2026-02-27T19:15:55 | danielhanchen | false | null | 0 | o7r30n5 | false | /r/LocalLLaMA/comments/1rgel19/new_qwen3535ba3b_unsloth_dynamic_ggufs_benchmarks/o7r30n5/ | false | 1 |
t1_o7r2z6c | If you’re already thinking embeddings + retrieval, you might not need the sequential LLM calls at all.
For something hierarchical like this, I’d train small classifiers per layer and run them locally. Much faster and cheaper than 10s per ticket with an LLM.
There are lightweight tools that sit on top of sentence embe... | 1 | 0 | 2026-02-27T19:15:43 | Individual_Round7690 | false | null | 0 | o7r2z6c | false | /r/LocalLLaMA/comments/1nvre5c/ticket_categorization_classifying_tickets_into/o7r2z6c/ | false | 1 |
t1_o7r2x1c | Yeah 45 minutes is brutal for that many pages. I've had good luck with Marker too it's noticeably faster than running everything through a full vision model, especially if your PDFs have selectable text already. The markdown output is pretty clean for things like research papers.
Docling's another solid option if you ... | 1 | 0 | 2026-02-27T19:15:25 | Traditional-Taro383 | false | null | 0 | o7r2x1c | false | /r/LocalLLaMA/comments/1r0ser2/any_latest_ocr_model_i_can_run_locally_in_18gb_ram/o7r2x1c/ | false | 1 |
t1_o7r2uvz | [removed] | 1 | 0 | 2026-02-27T19:15:07 | [deleted] | true | null | 0 | o7r2uvz | false | /r/LocalLLaMA/comments/1rgfrxp/pageagent_browser_ai_agent_that_runs_inside_the/o7r2uvz/ | false | 1 |
t1_o7r2sjn | Thanks for your hard work! Can't wait for your updated Qwen3.5 122b version! | 3 | 0 | 2026-02-27T19:14:48 | Admirable-Star7088 | false | null | 0 | o7r2sjn | false | /r/LocalLLaMA/comments/1rgel19/new_qwen3535ba3b_unsloth_dynamic_ggufs_benchmarks/o7r2sjn/ | false | 3 |
t1_o7r2pgt | Be interested to test on my 12900k 32gb ram, 4080 super. But I run nixos.. hmmm 🤔 | 1 | 0 | 2026-02-27T19:14:23 | lundrog | false | null | 0 | o7r2pgt | false | /r/LocalLLaMA/comments/1rgfm00/i_built_a_hybrid_moe_runtime_that_does_3324_toks/o7r2pgt/ | false | 1 |
t1_o7r2nnm | Right, the ubergarm quant compared here is a Vulkan backend optimized quant using only legacy mainline compatible quantization types.
So none of ik\_llama.cpp's SOTA quantization types were compared. | 5 | 0 | 2026-02-27T19:14:08 | VoidAlchemy | false | null | 0 | o7r2nnm | false | /r/LocalLLaMA/comments/1rgel19/new_qwen3535ba3b_unsloth_dynamic_ggufs_benchmarks/o7r2nnm/ | false | 5 |
t1_o7r2mn4 | I use it all the time actually, but a 20b would be a great addition, the model has a very poor performance on cpu so in the mean time they could just do a 20B that'd be nice. | 3 | 0 | 2026-02-27T19:14:00 | Significant_Fig_7581 | false | null | 0 | o7r2mn4 | false | /r/LocalLLaMA/comments/1rgek4m/what_are_your_expectations_for_the_small_series/o7r2mn4/ | false | 3 |
t1_o7r2mgr | Did you try the other v100 alone? Is it dead? | 1 | 0 | 2026-02-27T19:13:58 | olnickyboy | false | null | 0 | o7r2mgr | false | /r/LocalLLaMA/comments/1rgfude/computer_wont_boot_with_2_tesla_v100s/o7r2mgr/ | false | 1 |
t1_o7r2lyk | yup, makes a big difference. | 1 | 0 | 2026-02-27T19:13:54 | darkdeepths | false | null | 0 | o7r2lyk | false | /r/LocalLLaMA/comments/1rg0487/system_prompt_for_qwen35_27b35ba3b_to_reduce/o7r2lyk/ | false | 1 |
t1_o7r2lah | Thanks for answering:
This machine is my home server and is running around 60 containers including Emby and Frigate. Emby and Frigate use the GPUs for reencoding video streams. Also gpu accelerated TTS and STT for homeassistant as well as llama.cpp/ollama for my local only smart voice assistant pipeline. I would love... | 1 | 0 | 2026-02-27T19:13:49 | koriwi | false | null | 0 | o7r2lah | false | /r/LocalLLaMA/comments/1rgb8tj/discussion_local_contextaware_tts_what_do_you/o7r2lah/ | false | 1 |
t1_o7r2i6r | Ngl since tool use came around I’ve just been waiting for the first death we can attribute to its usage going horribly off the rails. Imagining things up to and including a game of Thermonuclear War | 1 | 0 | 2026-02-27T19:13:23 | thrownawaymane | false | null | 0 | o7r2i6r | false | /r/LocalLLaMA/comments/1rfg3kx/american_closed_models_vs_chinese_open_models_is/o7r2i6r/ | false | 1 |
t1_o7r2h77 | Yes agreed! Q4_K_XL is the best :) | 12 | 0 | 2026-02-27T19:13:16 | danielhanchen | false | null | 0 | o7r2h77 | false | /r/LocalLLaMA/comments/1rgel19/new_qwen3535ba3b_unsloth_dynamic_ggufs_benchmarks/o7r2h77/ | false | 12 |
t1_o7r2gt5 | No? Why would I be joking. Openclaw was super buggy and not very easy to set up with a local model (my use case). Even Kimi K2.5 and Grok 4.2 and chatGPT 5.2 couldnt figure out why my telegram kept breaking and why my mission control wasnt set up properly. So I just gave up on it.
A simple approach to automated, daily... | 0 | 0 | 2026-02-27T19:13:13 | Vibraniumguy | false | null | 0 | o7r2gt5 | false | /r/LocalLLaMA/comments/1rfp6bk/why_is_openclaw_even_this_popular/o7r2gt5/ | false | 0 |
t1_o7r2frw | For speed on a MacBook Pro, check out Marker (VikParuchuri/marker on GitHub). It's optimized for PDF to markdown and handles layouts well. If you need high accuracy OCR for trickier scans, I've used Qoest's API for batch processing documents and it's been solid for converting PDFs to structured text quickly | 2 | 0 | 2026-02-27T19:13:04 | Medical-Road-5690 | false | null | 0 | o7r2frw | false | /r/LocalLLaMA/comments/1r0ser2/any_latest_ocr_model_i_can_run_locally_in_18gb_ram/o7r2frw/ | false | 2 |
t1_o7r2f5o | I fixed it sorry - use the UD-MXFP4 one! | 2 | 0 | 2026-02-27T19:12:59 | danielhanchen | false | null | 0 | o7r2f5o | false | /r/LocalLLaMA/comments/1rgel19/new_qwen3535ba3b_unsloth_dynamic_ggufs_benchmarks/o7r2f5o/ | false | 2 |
t1_o7r2dtk | Has anyone got this running on ollama? | -1 | 0 | 2026-02-27T19:12:48 | _twrecks_ | false | null | 0 | o7r2dtk | false | /r/LocalLLaMA/comments/1rgel19/new_qwen3535ba3b_unsloth_dynamic_ggufs_benchmarks/o7r2dtk/ | false | -1 |
t1_o7r2al6 | Actually, that's likely the same for me, I only have 12GB of VRAM, but I'll give it a shot. | 2 | 0 | 2026-02-27T19:12:22 | OrbMan99 | false | null | 0 | o7r2al6 | false | /r/LocalLLaMA/comments/1rgexmk/qwen3527bclaude46opusreasoningdistilledgguf_is_out/o7r2al6/ | false | 2 |
t1_o7r28pa | The extra 16gb VRAM will absolutely allow more context faster than it overflowing to system RAM, but it's still going to drastically reduce the performance of your 5090 as it has less than 1/3 the memory bandwidth
As for getting the modest VRAM usage of the desktop off your 5090, your Ryzen has an iGPU
Have you also ... | 1 | 0 | 2026-02-27T19:12:07 | BigYoSpeck | false | null | 0 | o7r28pa | false | /r/LocalLLaMA/comments/1rebq2x/adding_a_5060ti_16gb_to_a_5090_32gb_192gb_ddr5/o7r28pa/ | false | 1 |
t1_o7r26ua | Creator here.
The reason it works with text DOM instead of screenshots is that most web UIs are already well-structured HTML — feeding that directly to the LLM is way cheaper and faster than vision-based approaches. Also make it possible to run entirely in a web page. Trade-off is it won't work on canvas-heavy apps, b... | 2 | 0 | 2026-02-27T19:11:53 | Alarmed-Ad-6201 | false | null | 0 | o7r26ua | false | /r/LocalLLaMA/comments/1rgfrxp/pageagent_browser_ai_agent_that_runs_inside_the/o7r26ua/ | false | 2 |
t1_o7r24zm | I'm using CPU + GPU (CUDA) offload. | 2 | 0 | 2026-02-27T19:11:35 | Admirable-Star7088 | false | null | 0 | o7r24zm | false | /r/LocalLLaMA/comments/1rfzfgf/minimax_m25_gguf_perform_poorly_overall/o7r24zm/ | false | 2 |
t1_o7r24a8 | Why? Just use a 3.5 35b quant. It's great. | 4 | 0 | 2026-02-27T19:11:30 | Long_comment_san | false | null | 0 | o7r24a8 | false | /r/LocalLLaMA/comments/1rgek4m/what_are_your_expectations_for_the_small_series/o7r24a8/ | false | 4 |
t1_o7r227h | So not fully retired (changed wording sorry), just only a verified select few layers will use MXFP4 - majority will use Q4_K.
Q4_K_* anything will always use Q4_K only.
Q3_K_* might use MXFP4 for some layers, but unlikely. | 4 | 0 | 2026-02-27T19:11:12 | danielhanchen | false | null | 0 | o7r227h | false | /r/LocalLLaMA/comments/1rgel19/new_qwen3535ba3b_unsloth_dynamic_ggufs_benchmarks/o7r227h/ | false | 4 |
t1_o7r1yd7 | I'm making my own SLM that has MAMBA-like architecture (meaning it uses a latent vector to predict next tokens) and I'm impressed by this model, my model has split architecture with 2 smaller networks - a reader and a generator, in short, reader reads all previous tokens while updating a latent state and generator pred... | 1 | 0 | 2026-02-27T19:10:41 | ominotomi | false | null | 0 | o7r1yd7 | false | /r/LocalLLaMA/comments/1rfddpi/training_a_144m_spiking_neural_network_for_text/o7r1yd7/ | false | 1 |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.