name: string
body: string
score: int64
controversiality: int64
created: timestamp[us]
author: string
collapsed: bool
edited: timestamp[us]
gilded: int64
id: string
locked: bool
permalink: string
stickied: bool
ups: int64
t1_o7reuft
I haven’t looked into it, as I’m still trying to focus on getting it functionally feature-rich before looking at UX. In reality, I see it being a CLI or an MCP plug-in to an agentic layer such as openclaw.
1
0
2026-02-27T20:14:44
sfwinder
false
null
0
o7reuft
false
/r/LocalLLaMA/comments/1rg9p5c/loom_a_local_execution_harness_for_complex_tasks/o7reuft/
false
1
t1_o7reszg
What models do you think I should go with?
1
0
2026-02-27T20:14:32
Shimk52
false
null
0
o7reszg
false
/r/LocalLLaMA/comments/1rg1ixc/why_do_coding_benchmarks_ignore_code_review/o7reszg/
false
1
t1_o7reshi
Doesn't Hugging Face do the same thing if you set your hardware in the web UI?
4
0
2026-02-27T20:14:28
Deep_Traffic_7873
false
null
0
o7reshi
false
/r/LocalLLaMA/comments/1rg94wu/llmfit_one_command_to_find_what_model_runs_on/o7reshi/
false
4
t1_o7resap
If possible, I would love it if Q8_0 would still be released for mmprojs despite the worse performance. It can be the difference between fitting and not fitting on consumer hardware. Using the Q8_0 mmproj from various first-party sources and mradermacher shows, in my real-world usage, that it’s quite usable.
2
0
2026-02-27T20:14:26
Kahvana
false
null
0
o7resap
false
/r/LocalLLaMA/comments/1rgel19/new_qwen3535ba3b_unsloth_dynamic_ggufs_benchmarks/o7resap/
false
2
t1_o7rerr0
I'll consider myself lucky once I get to issues with Linux :D
1
0
2026-02-27T20:14:22
MackThax
false
null
0
o7rerr0
false
/r/LocalLLaMA/comments/1rgfude/computer_wont_boot_with_2_tesla_v100s/o7rerr0/
false
1
t1_o7repcm
Still WIP
1
0
2026-02-27T20:14:02
Shimk52
false
null
0
o7repcm
false
/r/LocalLLaMA/comments/1rg1ixc/why_do_coding_benchmarks_ignore_code_review/o7repcm/
false
1
t1_o7reobn
Generally - yes, but I'm somewhat curious as to whether this still holds weight today in 2026 - smaller dense models aren't bad at all anymore, and the brain damage of Q1 quants is quite severe.
1
0
2026-02-27T20:13:53
Long_comment_san
false
null
0
o7reobn
false
/r/LocalLLaMA/comments/1rgel19/new_qwen3535ba3b_unsloth_dynamic_ggufs_benchmarks/o7reobn/
false
1
t1_o7remxn
Thank you! I always wanted to know myself so it was an enlightening investigation!
1
0
2026-02-27T20:13:41
danielhanchen
false
null
0
o7remxn
false
/r/LocalLLaMA/comments/1rgel19/new_qwen3535ba3b_unsloth_dynamic_ggufs_benchmarks/o7remxn/
false
1
t1_o7reiyr
Yes, I have my own build with KleidiAI optimisations. I’m still working on it. I would advise avoiding cross compilation and building it on the Pi itself, unless you really know what you are doing.
1
0
2026-02-27T20:13:08
jslominski
false
null
0
o7reiyr
false
/r/LocalLLaMA/comments/1rg87bj/qwen3535ba3b_running_on_a_raspberry_pi_5_16gb_and/o7reiyr/
false
1
t1_o7reilv
Yes that's a good idea!
1
0
2026-02-27T20:13:05
danielhanchen
false
null
0
o7reilv
false
/r/LocalLLaMA/comments/1rgel19/new_qwen3535ba3b_unsloth_dynamic_ggufs_benchmarks/o7reilv/
false
1
t1_o7rehad
Wait for Unsloth to re-upload, though. They had some issues with the first release and while they have already reposted 35B, the 122B files are still in progress.
2
0
2026-02-27T20:12:54
NoahFect
false
null
0
o7rehad
false
/r/LocalLLaMA/comments/1rg6ph3/qwen35_feels_ready_for_production_use_never_been/o7rehad/
false
2
t1_o7reh5b
Yes sadly it is an official issue
2
0
2026-02-27T20:12:53
danielhanchen
false
null
0
o7reh5b
false
/r/LocalLLaMA/comments/1rgel19/new_qwen3535ba3b_unsloth_dynamic_ggufs_benchmarks/o7reh5b/
false
2
t1_o7refcp
Thank you! Oh yes, use the chat template file in the Hugging Face online GGUF reader and you can copy-paste that.
2
0
2026-02-27T20:12:38
danielhanchen
false
null
0
o7refcp
false
/r/LocalLLaMA/comments/1rgel19/new_qwen3535ba3b_unsloth_dynamic_ggufs_benchmarks/o7refcp/
false
2
t1_o7redwn
No it’s still active on the API
5
0
2026-02-27T20:12:26
DistanceSolar1449
false
null
0
o7redwn
false
/r/LocalLLaMA/comments/1rg8dex/pewdiepie_finetuned_qwen25coder32b_to_beat/o7redwn/
false
5
t1_o7re9sz
Yep, we also sometimes collaborate with labs directly to get them out sooner so folks can try them out! The community is always great!
6
0
2026-02-27T20:11:52
danielhanchen
false
null
0
o7re9sz
false
/r/LocalLLaMA/comments/1rgel19/new_qwen3535ba3b_unsloth_dynamic_ggufs_benchmarks/o7re9sz/
false
6
t1_o7re6is
Oh yeah, this could be a real bottleneck; it's a shame we can't have more than PCIe 4.0 x4 on Strix Halo.
2
0
2026-02-27T20:11:24
FlexFreak
false
null
0
o7re6is
false
/r/LocalLLaMA/comments/1rgfm00/i_built_a_hybrid_moe_runtime_that_does_3324_toks/o7re6is/
false
2
t1_o7re5bj
I'm running the 122B 2-bit K_M on 2x 3090.
2
0
2026-02-27T20:11:14
frozenYogurtLover2
false
null
0
o7re5bj
false
/r/LocalLLaMA/comments/1rg4apu/qwen_35_architecture_analysis_parameter/o7re5bj/
false
2
t1_o7re4hq
Then there is something wrong on your side. Go with Donato's toolboxes.
1
0
2026-02-27T20:11:07
Potential-Leg-639
false
null
0
o7re4hq
false
/r/LocalLLaMA/comments/1r7l7q5/the_strix_halo_feels_like_an_amazing_super_power/o7re4hq/
false
1
t1_o7re2j6
It's an actual LLM. Look at the profile.
3
0
2026-02-27T20:10:50
dark-light92
false
null
0
o7re2j6
false
/r/LocalLLaMA/comments/1rgaccz/dishonesty_in_thinking_block/o7re2j6/
false
3
t1_o7rdz3i
Ah Nice! I'm actually working on something similar built on modified llama.cpp. Same streaming mechanism basically.
3
0
2026-02-27T20:10:20
Front_Eagle739
false
null
0
o7rdz3i
false
/r/LocalLLaMA/comments/1rgfm00/i_built_a_hybrid_moe_runtime_that_does_3324_toks/o7rdz3i/
false
3
t1_o7rdx6w
Oh yes, that I agree with - there was a benchmark I forget that essentially showed a lower-bit quant of a big model generally always doing better than a higher-bit quant of a small model.
2
0
2026-02-27T20:10:04
danielhanchen
false
null
0
o7rdx6w
false
/r/LocalLLaMA/comments/1rgel19/new_qwen3535ba3b_unsloth_dynamic_ggufs_benchmarks/o7rdx6w/
false
2
t1_o7rdrc8
:facepalm: From the model card: > Qwen3.5-27B-Claude-4.6-Opus-Reasoning-Distilled is a highly capable reasoning model fine-tuned on top of the powerful Qwen3.5 MoE architecture. How was Jackrong able to fine-tune this without understanding it's a dense model, and not MoE? O_o Are there fine-tuning tools for peop...
15
0
2026-02-27T20:09:14
ttkciar
false
null
0
o7rdrc8
false
/r/LocalLLaMA/comments/1rgexmk/qwen3527bclaude46opusreasoningdistilledgguf_is_out/o7rdrc8/
false
15
t1_o7rdqh7
Well, I tried for 2 evenings to make them run on my rig with vLLM and I can't get it to work... even tried SGLang but got nothing close to a /v1/ API response...
1
0
2026-02-27T20:09:07
Imakerocketengine
false
null
0
o7rdqh7
false
/r/LocalLLaMA/comments/1reb313/qwen_35_35b_a3b_and_122b_a10b_solid_performance/o7rdqh7/
false
1
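Both vLLM and SGLang expose an OpenAI-compatible /v1 API once the model has actually finished loading, so a quick way to debug a failure like the one above is to probe the endpoints directly. A minimal sketch, assuming a default vLLM install on localhost:8000; the model name is only a placeholder taken from the thread title:

```
# does the server know about any model at all?
curl http://localhost:8000/v1/models

# minimal chat completion against the OpenAI-compatible endpoint
curl http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "Qwen/Qwen3.5-35B-A3B", "messages": [{"role": "user", "content": "ping"}]}'
```

If /v1/models returns nothing or the connection is refused, the server never finished startup; the fix is usually in the server logs, not in the client.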
t1_o7rdpp6
Thanks for the discussions as well - yes sadly I forgot to upload them sorry - I'll get to them after I get a nap :)
10
0
2026-02-27T20:09:01
danielhanchen
false
null
0
o7rdpp6
false
/r/LocalLLaMA/comments/1rgel19/new_qwen3535ba3b_unsloth_dynamic_ggufs_benchmarks/o7rdpp6/
false
10
t1_o7rdjwl
Sure, just load another model that uses about 8GB
1
0
2026-02-27T20:08:11
sometimes_angery
false
null
0
o7rdjwl
false
/r/LocalLLaMA/comments/1rezhyq/why_isnt_my_gpu_utilizing_all_of_its_vram/o7rdjwl/
false
1
t1_o7rdjx8
It will handle shorter prompts fine, but the numbers will be more like maybe 300-1000 tok/sec prefill. The main reason is there’s a fixed cost to push the model through PCIe, so on my PCIe 4.0 I think it takes about 3.5 seconds or so for Q3CN, so that’s basically a minimum for any prompt. The main goal of Krasis though...
6
0
2026-02-27T20:08:11
mrstoatey
false
null
0
o7rdjx8
false
/r/LocalLLaMA/comments/1rgfm00/i_built_a_hybrid_moe_runtime_that_does_3324_toks/o7rdjx8/
false
6
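A back-of-the-envelope version of the fixed-cost argument above, with illustrative numbers (the checkpoint size and effective bandwidth are assumptions, not the commenter's measurements):

```python
# Rough floor on prefill latency when the weights must cross PCIe once per prompt.
model_gb = 55.0        # assumed size of a large quantized MoE checkpoint
effective_gb_s = 16.0  # assumed effective PCIe 4.0 throughput (peak x16 is ~32 GB/s)

transfer_floor_s = model_gb / effective_gb_s
print(f"per-prompt transfer floor: {transfer_floor_s:.1f} s")  # ~3.4 s

# With a ~3.5 s floor, a 1,000-token prompt cannot average better than
# ~286 tok/s prefill, which is why short prompts see hundreds of tok/s
# rather than the headline number.
```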
t1_o7rdj31
My man is out here just stirring the pot lol.
3
0
2026-02-27T20:08:05
RedParaglider
false
null
0
o7rdj31
false
/r/LocalLLaMA/comments/1rgexmk/qwen3527bclaude46opusreasoningdistilledgguf_is_out/o7rdj31/
false
3
t1_o7rdid1
No, I wasn't going to - I meant to say that we were debating whether our post in r/localllama should have the picture or not. We ended up choosing no picture for r/localllama to fit all the info and images. So I just wanted to thank you for reposting the image ahah! :)
1
0
2026-02-27T20:07:58
yoracale
false
null
0
o7rdid1
false
/r/LocalLLaMA/comments/1rggo5n/qwen35_unsloth_ggufs_update/o7rdid1/
false
1
t1_o7rdh48
Some examples are under 122B when you get to it
1
0
2026-02-27T20:07:48
jwpbe
false
null
0
o7rdh48
false
/r/LocalLLaMA/comments/1rgel19/new_qwen3535ba3b_unsloth_dynamic_ggufs_benchmarks/o7rdh48/
false
1
t1_o7rdh59
Yep it can be quite a large change when quantizing them!
2
0
2026-02-27T20:07:48
danielhanchen
false
null
0
o7rdh59
false
/r/LocalLLaMA/comments/1rgel19/new_qwen3535ba3b_unsloth_dynamic_ggufs_benchmarks/o7rdh59/
false
2
t1_o7rdgws
Honestly, I'm a little worried it's the point. People download the files to investigate and then they are in possession of CSAM and targetable.
2
0
2026-02-27T20:07:46
No-Marionberry-772
false
null
0
o7rdgws
false
/r/LocalLLaMA/comments/1re35iv/built_an_imagefirst_rag_pipeline_on_the_epstein/o7rdgws/
false
2
t1_o7rde46
Thanks. Will try
3
0
2026-02-27T20:07:23
bruckout
false
null
0
o7rde46
false
/r/LocalLLaMA/comments/1rgfm00/i_built_a_hybrid_moe_runtime_that_does_3324_toks/o7rde46/
false
3
t1_o7rd7li
Yes Q6 is nice generally - will update you if we manage to get some dense benchmarks done!
2
0
2026-02-27T20:06:27
danielhanchen
false
null
0
o7rd7li
false
/r/LocalLLaMA/comments/1rgel19/new_qwen3535ba3b_unsloth_dynamic_ggufs_benchmarks/o7rd7li/
false
2
t1_o7rd5es
memetic gravity assist
1
0
2026-02-27T20:06:09
mrtie007
false
null
0
o7rd5es
false
/r/LocalLLaMA/comments/1rfp6bk/why_is_openclaw_even_this_popular/o7rd5es/
false
1
t1_o7rd2k3
Oh, but that's why we mentioned perplexity and KLD are not good in general; in fact it can be the opposite, like the minimax benchmarks I linked in the post. But the community wanted answers fast on some of our quants, so for now we decided KLD is a reasonably fast approach.
1
0
2026-02-27T20:05:46
danielhanchen
false
null
0
o7rd2k3
false
/r/LocalLLaMA/comments/1rgel19/new_qwen3535ba3b_unsloth_dynamic_ggufs_benchmarks/o7rd2k3/
false
1
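For context on the KLD approach mentioned above: KL divergence between the full-precision model's token distributions and the quantized model's is cheap to compute from logits alone, which is what makes it a fast proxy. A minimal sketch with synthetic logits standing in for real model outputs:

```python
import numpy as np

def mean_token_kld(logits_ref, logits_q):
    """Mean KL(P_ref || P_quant) over tokens, computed from raw logits."""
    def log_softmax(x):
        x = x - x.max(axis=-1, keepdims=True)
        return x - np.log(np.exp(x).sum(axis=-1, keepdims=True))
    logp, logq = log_softmax(logits_ref), log_softmax(logits_q)
    return float((np.exp(logp) * (logp - logq)).sum(axis=-1).mean())

rng = np.random.default_rng(0)
ref = rng.normal(size=(8, 32000))                     # [tokens, vocab] reference logits
quant = ref + rng.normal(scale=0.05, size=ref.shape)  # perturbed "quantized" logits
print(mean_token_kld(ref, ref))    # 0.0 -- identical distributions
print(mean_token_kld(ref, quant))  # small positive value; grows with quant damage
```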
t1_o7rd2e5
It doesn't work that way. By training models on the widest possible variety of data/tasks, the optimizer finds the kinds of broad inflection points which make for the most useful "generalized knowledge" (heuristics) for all task types. Restricting training to exclude task types you don't care about would have undesir...
11
0
2026-02-27T20:05:44
ttkciar
false
null
0
o7rd2e5
false
/r/LocalLLaMA/comments/1rgek4m/what_are_your_expectations_for_the_small_series/o7rd2e5/
false
11
t1_o7rczof
oh, were you going to put that up? Shall I take this one down?
1
0
2026-02-27T20:05:20
rm-rf-rm
false
null
0
o7rczof
false
/r/LocalLLaMA/comments/1rggo5n/qwen35_unsloth_ggufs_update/o7rczof/
false
1
t1_o7rczbw
https://preview.redd.it/…chy writers? ;-)
1
0
2026-02-27T20:05:17
timeshifter24
false
null
0
o7rczbw
false
/r/LocalLLaMA/comments/1rabo34/local_tts_server_with_voice_cloning_nearrealtime/o7rczbw/
false
1
t1_o7rcqxc
Thanks for the repost! :) We decided to make a post without the graphic in r/LocalLLaMA to attach images and a lot more info in the body.
1
0
2026-02-27T20:04:06
yoracale
false
null
0
o7rcqxc
false
/r/LocalLLaMA/comments/1rggo5n/qwen35_unsloth_ggufs_update/o7rcqxc/
false
1
t1_o7rcqfa
50 T/s on a 5090 with Debian 13. Unfortunately I can't run the full context, because there still has to be room left for my desktop and such. ``` /root/llama.cpp/build/bin/llama-server \ --hf-repo mradermacher/Qwen3.5-27B-GGUF:Q6_K \ --ctx-size 131072 \ --host 0.0.0.0 \ --port 11337 \ --parallel 1 \ ...
1
0
2026-02-27T20:04:02
tecneeq
false
null
0
o7rcqfa
false
/r/LocalLLaMA/comments/1rdvq3s/qwen35_27b_is_match_made_in_heaven_for_size_and/o7rcqfa/
false
1
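The command above is cut off in the original comment, and the trailing "..." stays unknown. As a hedged sketch, a complete invocation in the same shape might look like this, where the last two flags are assumptions rather than the commenter's actual settings:

```
/root/llama.cpp/build/bin/llama-server \
  --hf-repo mradermacher/Qwen3.5-27B-GGUF:Q6_K \
  --ctx-size 131072 \
  --host 0.0.0.0 \
  --port 11337 \
  --parallel 1 \
  -ngl 999 \
  --jinja
# -ngl 999 offloads all layers to the GPU; --jinja uses the model's own chat template.
```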
t1_o7rcq24
Bet 🤞
1
0
2026-02-27T20:03:59
Alarmed-Ad-6201
false
null
0
o7rcq24
false
/r/LocalLLaMA/comments/1rgfrxp/pageagent_browser_ai_agent_that_runs_inside_the/o7rcq24/
false
1
t1_o7rcnbq
Oh wait, apologies - I think I forgot to upload them, sorry haha - will re-fetch them and upload!
15
0
2026-02-27T20:03:37
danielhanchen
false
null
0
o7rcnbq
false
/r/LocalLLaMA/comments/1rgel19/new_qwen3535ba3b_unsloth_dynamic_ggufs_benchmarks/o7rcnbq/
false
15
t1_o7rcmvk
yup, constant memory, just linear attention
2
0
2026-02-27T20:03:33
Additional-Record367
false
null
0
o7rcmvk
false
/r/LocalLLaMA/comments/1rgel19/new_qwen3535ba3b_unsloth_dynamic_ggufs_benchmarks/o7rcmvk/
false
2
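That one-liner is the whole trick: instead of a KV cache that grows with context, linear attention keeps a fixed-size running state. A toy numpy sketch of the recurrence, using a simple ReLU feature map as an illustrative assumption (not the actual Qwen3.5 kernel):

```python
import numpy as np

d = 8                  # head dimension (toy size)
S = np.zeros((d, d))   # running state: size is fixed regardless of sequence length
z = np.zeros(d)        # running normalizer
rng = np.random.default_rng(0)

for t in range(1000):  # stream tokens one at a time
    q, k, v = rng.normal(size=(3, d))
    phi_k = np.maximum(k, 0.0) + 1e-6  # positive feature map (illustrative choice)
    S += np.outer(phi_k, v)            # accumulate phi(k) outer v
    z += phi_k
    phi_q = np.maximum(q, 0.0) + 1e-6
    out = (phi_q @ S) / (phi_q @ z)    # attention output for token t

# Memory stays O(d^2) however long the stream runs -- constant, as the comment says.
```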
t1_o7rcmw0
It's currently deprecated and will be fully removed April 3, so it's not really a great benchmark to compare against; but 4o probably was a better SEO model to pick to get the 4oids on his side.
0
0
2026-02-27T20:03:33
panthereal
false
null
0
o7rcmw0
false
/r/LocalLLaMA/comments/1rg8dex/pewdiepie_finetuned_qwen25coder32b_to_beat/o7rcmw0/
false
0
t1_o7rci8h
Yeah, mess around with all the settings. Above 4G decoding, etc etc. I think it ultimately has to do with PCIe address allocation or something. Just 'too many' big PCIe devices.
2
0
2026-02-27T20:02:53
chris_0611
false
null
0
o7rci8h
false
/r/LocalLLaMA/comments/1rgfude/computer_wont_boot_with_2_tesla_v100s/o7rci8h/
false
2
t1_o7rcgau
Yeah, the models are better now. When my little project managed to extract some image urls for me back then I was so happy I almost cried lol but now this is a whole new level!
1
0
2026-02-27T20:02:37
Cool-Chemical-5629
false
null
0
o7rcgau
false
/r/LocalLLaMA/comments/1rgfrxp/pageagent_browser_ai_agent_that_runs_inside_the/o7rcgau/
false
1
t1_o7rcg46
Good point. It wasn't better at these specific details in this test though: * Qwen3-VL-8B-Thinking-UD-Q6_K_XL: Just one paragraph of reasoning and output. Gets the overall image sort of right, but "eyes ... likely from the bread’s crust". * Qwen3-VL-30B-A3B-Instruct-UD-Q4_K_XL: No reasoning of course, but **it ace...
2
0
2026-02-27T20:02:35
Chromix_
false
null
0
o7rcg46
false
/r/LocalLLaMA/comments/1rf92k0/qwen_35_vision_gets_the_big_picture_right_but_is/o7rcg46/
false
2
t1_o7rcccd
Impressive result given it's an RPi 5. I'm running experiments with an Orion O6, which is a stronger SoC, but getting poorer results :(
2
0
2026-02-27T20:02:04
segabor
false
null
0
o7rcccd
false
/r/LocalLLaMA/comments/1rg87bj/qwen3535ba3b_running_on_a_raspberry_pi_5_16gb_and/o7rcccd/
false
2
t1_o7rc9na
"Benchmarks use 10K–50K token prompts for prefill (best of 20K/35K/50K reported) and 64-token generation for decode (average of 3 runs)." did you run those on more normal prompts? Those values seem to be a bit extreme.
7
0
2026-02-27T20:01:41
jslominski
false
null
0
o7rc9na
false
/r/LocalLLaMA/comments/1rgfm00/i_built_a_hybrid_moe_runtime_that_does_3324_toks/o7rc9na/
false
7
t1_o7rbxlk
Its quants are YUUUGE. Bigly. The biggest.
1
0
2026-02-27T20:00:01
Infinite100p
false
null
0
o7rbxlk
false
/r/LocalLLaMA/comments/1rfg3kx/american_closed_models_vs_chinese_open_models_is/o7rbxlk/
false
1
t1_o7rbt1z
Interesting. There is a setting to *disable* CSM; I tried messing with it. I don't know if I could force it to switch to legacy BIOS. I'll try some more.
1
0
2026-02-27T19:59:23
MackThax
false
null
0
o7rbt1z
false
/r/LocalLLaMA/comments/1rgfude/computer_wont_boot_with_2_tesla_v100s/o7rbt1z/
false
1
t1_o7rbmuo
Wow, thanks so much for all of this work! BTW, I love the fact that you and your team investigated the importance of quantizing the SSM attention layers in Qwen 3.5 models as little as possible, or not at all :)
1
0
2026-02-27T19:58:31
BlueSwordM
false
null
0
o7rbmuo
false
/r/LocalLLaMA/comments/1rgel19/new_qwen3535ba3b_unsloth_dynamic_ggufs_benchmarks/o7rbmuo/
false
1
t1_o7rbl1r
Looks like it is indeed all of them you have to redownload.
0
0
2026-02-27T19:58:16
mister2d
false
null
0
o7rbl1r
false
/r/LocalLLaMA/comments/1rf38xe/do_not_download_qwen_35_unsloth_gguf_until_bug_is/o7rbl1r/
false
0
t1_o7rbjsj
It first appeared on the pricing page when GLM 5 was released, but there's been no official communication about it yet, so I'm assuming this will be their next model.
24
0
2026-02-27T19:58:06
Quack66
false
null
0
o7rbjsj
false
/r/LocalLLaMA/comments/1rggpu9/glm5code/o7rbjsj/
false
24
t1_o7rbi6j
Thanks!
1
0
2026-02-27T19:57:52
mrstoatey
false
null
0
o7rbi6j
false
/r/LocalLLaMA/comments/1rgfm00/i_built_a_hybrid_moe_runtime_that_does_3324_toks/o7rbi6j/
false
1
t1_o7rbevs
Bet it works great on the moltbook site!
1
0
2026-02-27T19:57:25
l33t-Mt
false
null
0
o7rbevs
false
/r/LocalLLaMA/comments/1rgfrxp/pageagent_browser_ai_agent_that_runs_inside_the/o7rbevs/
false
1
t1_o7rbcgt
I hope the 4B will serve as a useful draft model for accelerating inference with the 27B.
9
0
2026-02-27T19:57:05
ttkciar
false
null
0
o7rbcgt
false
/r/LocalLLaMA/comments/1rgek4m/what_are_your_expectations_for_the_small_series/o7rbcgt/
false
9
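Pairing a small draft model with a big verifier, as hoped for above, is speculative decoding, which llama.cpp's server supports via a draft-model flag. A hedged sketch, assuming hypothetical local GGUF filenames and a build recent enough to have these options:

```
# the 4B proposes tokens, the 27B verifies them in one pass
llama-server \
  -m qwen3.5-27b-Q4_K_M.gguf \
  -md qwen3.5-4b-Q4_K_M.gguf \
  --draft-max 16 \
  --draft-min 1 \
  -ngl 999
```

The acceptance rate decides the speedup: the closer the 4B's distribution is to the 27B's, the more drafted tokens survive verification.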
t1_o7rbbdw
this is super dope!
2
0
2026-02-27T19:56:56
No_Occasion_3288
false
null
0
o7rbbdw
false
/r/LocalLLaMA/comments/1rgfm00/i_built_a_hybrid_moe_runtime_that_does_3324_toks/o7rbbdw/
false
2
t1_o7rb7zv
We can't put labels for every quanter otherwise the graphs would be overloaded and unreadable
1
0
2026-02-27T19:56:28
yoracale
false
null
0
o7rb7zv
false
/r/LocalLLaMA/comments/1rgel19/new_qwen3535ba3b_unsloth_dynamic_ggufs_benchmarks/o7rb7zv/
false
1
t1_o7rb7ug
Thanks! Really cool that you were working on a similar idea back in the Llama 2 days. Local models have really improved a lot, making complex tool_call possible now.
1
0
2026-02-27T19:56:27
Alarmed-Ad-6201
false
null
0
o7rb7ug
false
/r/LocalLLaMA/comments/1rgfrxp/pageagent_browser_ai_agent_that_runs_inside_the/o7rb7ug/
false
1
t1_o7rb31n
I second that - use Donato's toolboxes and everything is fine.
2
0
2026-02-27T19:55:48
Potential-Leg-639
false
null
0
o7rb31n
false
/r/LocalLLaMA/comments/1r7l7q5/the_strix_halo_feels_like_an_amazing_super_power/o7rb31n/
false
2
t1_o7rb0fu
What are people running this on? I tried to load 27B and both KoboldCPP and TextGen WebUI have crashed on me at some point... either on load, a few messages in, or when I have longer context.
1
0
2026-02-27T19:55:27
silenceimpaired
false
null
0
o7rb0fu
false
/r/LocalLLaMA/comments/1rg6ph3/qwen35_feels_ready_for_production_use_never_been/o7rb0fu/
false
1
t1_o7razza
I've had similar issues with 2x MI60s. I think it was finally solved by changing from UEFI to legacy BIOS or something. Keep in mind this will mess up your Windows installation - it can't handle the change from BIOS to UEFI or back (you'll need to reinstall). Linux is fine.
2
0
2026-02-27T19:55:23
chris_0611
false
null
0
o7razza
false
/r/LocalLLaMA/comments/1rgfude/computer_wont_boot_with_2_tesla_v100s/o7razza/
false
2
t1_o7razsn
Would be interesting to see how it performed. The PCIe bandwidth does make a difference, and that would be lower on an eGPU I think - the model has to be fed through it once per prompt, so there's a fixed cost, but for larger prompts I think it's often bottlenecked on compute.
3
0
2026-02-27T19:55:21
mrstoatey
false
null
0
o7razsn
false
/r/LocalLLaMA/comments/1rgfm00/i_built_a_hybrid_moe_runtime_that_does_3324_toks/o7razsn/
false
3
t1_o7rawfe
Hm hm... I could take it over to the friend again... The PSU does have 2 CPU cables though; that one is on a different rail than the PCIe cables. A single V100 works when connected to it, with or without the 2060 Super.
1
0
2026-02-27T19:54:53
MackThax
false
null
0
o7rawfe
false
/r/LocalLLaMA/comments/1rgfude/computer_wont_boot_with_2_tesla_v100s/o7rawfe/
false
1
t1_o7rauxo
Please let me know how to install this (if "pip install pydub" doesn't seem to do it), and tell me if your app works on a CPU with an AMD Ryzen 5, 32GB RAM, Win11, etc. Perhaps you can add another .bat file to your fantastic zip download to make our life easier? ;-) THX https://preview.redd.it/ymw6p3s6d3mg1.png?width=192...
1
0
2026-02-27T19:54:41
timeshifter24
false
null
0
o7rauxo
false
/r/LocalLLaMA/comments/1rabo34/local_tts_server_with_voice_cloning_nearrealtime/o7rauxo/
false
1
t1_o7raun5
6 GB. Dell didn't think it necessary to use the 12 GB version in its (at the time) 1500 € laptop.
1
0
2026-02-27T19:54:39
DesertCookie_
false
null
0
o7raun5
false
/r/LocalLLaMA/comments/189qbhq/how_well_can_3060_gpu_run_ai_models/o7raun5/
false
1
t1_o7raub7
Try LFM 10bba1b
2
0
2026-02-27T19:54:36
ScoreUnique
false
null
0
o7raub7
false
/r/LocalLLaMA/comments/1rgek4m/what_are_your_expectations_for_the_small_series/o7raub7/
false
2
t1_o7rat92
Official models typically don't come with llama.cpp (and ripoffs like ollama) compatible GGUF, so someone has to do the conversion. The unsloth folks publish a lot of docs so they tend to be the default, but for the Qwen3.5 release their original ones were partially messed up. There are various other people doing the ...
13
0
2026-02-27T19:54:27
Pristine-Woodpecker
false
null
0
o7rat92
false
/r/LocalLLaMA/comments/1rgel19/new_qwen3535ba3b_unsloth_dynamic_ggufs_benchmarks/o7rat92/
false
13
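The conversion that comment describes is mechanical once you have the original safetensors: llama.cpp ships a converter script, and llama-quantize produces the smaller variants. A sketch with hypothetical local paths:

```
# HF safetensors -> GGUF at f16, then quantize down
python convert_hf_to_gguf.py /models/Qwen3.5-35B-A3B \
  --outtype f16 --outfile qwen3.5-35b-a3b-f16.gguf

./build/bin/llama-quantize qwen3.5-35b-a3b-f16.gguf \
  qwen3.5-35b-a3b-Q4_K_M.gguf Q4_K_M
```

What actually goes wrong on day-one releases is usually architecture support or the chat template inside the converter, not these two commands.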
t1_o7raljf
I’m praying the 9b meets the performance of gpt-oss 20b for tool calling and such
3
0
2026-02-27T19:53:24
cms2307
false
null
0
o7raljf
false
/r/LocalLLaMA/comments/1rgek4m/what_are_your_expectations_for_the_small_series/o7raljf/
false
3
t1_o7rajqa
Yeah, noctrex and the other quantizers don't have labels. Thank you for the comparisons though.
1
0
2026-02-27T19:53:09
klop2031
false
null
0
o7rajqa
false
/r/LocalLLaMA/comments/1rgel19/new_qwen3535ba3b_unsloth_dynamic_ggufs_benchmarks/o7rajqa/
false
1
t1_o7raaoz
https://youtu.be/TJOKEFdCkv0?si=AzYyuB5255DnkWt9 There is a lot of fluff/talking so feel free to mute/skip to whatever part you actually want to see. I'm thinking about doing a followup/replacement to this, condensing certain sections and testing with bigger models like Kimi to really show what it can do, but for no...
1
0
2026-02-27T19:51:54
SweetHomeAbalama0
false
null
0
o7raaoz
false
/r/LocalLLaMA/comments/1rg0rr5/people_who_running_3_gpu_build_in_close_case_can/o7raaoz/
false
1
t1_o7ra9pv
There aren't any great references you can consult, but you can describe your use-case and the folks here on this sub can recommend models to you. It would also help to know what kind of hardware you are expecting to run it on (which GPU, how much VRAM, at least) and whether you are going to also use that hardware for ...
3
0
2026-02-27T19:51:46
ttkciar
false
null
0
o7ra9pv
false
/r/LocalLLaMA/comments/1rgf12v/how_to_chose_the_right_model/o7ra9pv/
false
3
t1_o7ra95g
I had the same problem and the latest update of LM Studio settled it for me.
2
0
2026-02-27T19:51:42
Tombstone_53
false
null
0
o7ra95g
false
/r/LocalLLaMA/comments/1rgcp2k/qwen35_27b_e_llmstudio_per_windows/o7ra95g/
false
2
t1_o7ra91b
My man, I have a serious concern regarding your topic: "Abliterated and Ameliorated" may be a form of Orwellian doublespeak. What primary source did the terms originate from?
1
0
2026-02-27T19:51:41
Atlantiades_
false
null
0
o7ra91b
false
/r/LocalLLaMA/comments/1qa0w6c/it_works_abliteration_can_reduce_slop_without/o7ra91b/
false
1
t1_o7ra7ow
It should be. It's 1250W.
1
0
2026-02-27T19:51:30
MackThax
false
null
0
o7ra7ow
false
/r/LocalLLaMA/comments/1rgfude/computer_wont_boot_with_2_tesla_v100s/o7ra7ow/
false
1
t1_o7ra7qh
Humans would probably also make better quality code that is easier to read and improve/modify.
2
0
2026-02-27T19:51:30
MelodicFuntasy
false
null
0
o7ra7qh
false
/r/LocalLLaMA/comments/1rfp6bk/why_is_openclaw_even_this_popular/o7ra7qh/
false
2
t1_o7ra698
I would suspect the PSU. The Zalman 1250 is a dual-rail design, so only 780 watts are available to the GPUs, I think. As it is also very old, circa 2012, it has likely degraded some. I have a similar PSU, an Antec 500 watt, with two 250 watt 12 volt rails, but it only puts out 150 reliably. So try a different PSU. The ba...
1
0
2026-02-27T19:51:18
Nota_ReAlperson
false
null
0
o7ra698
false
/r/LocalLLaMA/comments/1rgfude/computer_wont_boot_with_2_tesla_v100s/o7ra698/
false
1
t1_o7ra2z3
My problem turned out to be a missing kernel parameter in /etc/default/grub: GRUB_CMDLINE_LINUX_DEFAULT="... ttm.page_pool_size=31457280" Combined with some bugs in the Qwen35 support that were subsequently fixed. Now I can run Qwen3.5 with the config below and see O(15 tps). [Qwen3.5-122B-A10B Q4_K_M (mult...
1
0
2026-02-27T19:50:51
615wonky
false
null
0
o7ra2z3
false
/r/LocalLLaMA/comments/1red6sv/update_your_llamacpp_for_qwen_35/o7ra2z3/
false
1
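For anyone hitting the same wall: the fix above is a one-line GRUB change plus a regenerate. A sketch for Debian-family systems; the pool size is the commenter's value, and "quiet splash" is a placeholder for whatever is already on that line:

```
# /etc/default/grub -- append the parameter to the existing default line
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash ttm.page_pool_size=31457280"

# regenerate the config and reboot (Debian/Ubuntu; other distros use grub2-mkconfig)
sudo update-grub && sudo reboot
```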
t1_o7r9xg3
Right, I see that's the PPL / mean KLD / 99.9%, but it doesn't have the rest of the statistics that the other KLD_logs comparison entries do (e.g., the per-chunk and full correlation stats, etc).
12
0
2026-02-27T19:50:06
Digger412
false
null
0
o7r9xg3
false
/r/LocalLLaMA/comments/1rgel19/new_qwen3535ba3b_unsloth_dynamic_ggufs_benchmarks/o7r9xg3/
false
12
t1_o7r9x8l
Yeah! Curse you Sam Altman!
1
0
2026-02-27T19:50:05
Iory1998
false
null
0
o7r9x8l
false
/r/LocalLLaMA/comments/1rdptw8/more_qwens_will_appear/o7r9x8l/
false
1
t1_o7r9tu4
Was expecting vibecoded llama.cpp ripoff, got piles and piles of Rust with hand-optimized assembler intrinsic kernels. Sometimes it's fun to be wrong.
41
0
2026-02-27T19:49:36
Pristine-Woodpecker
false
null
0
o7r9tu4
false
/r/LocalLLaMA/comments/1rgfm00/i_built_a_hybrid_moe_runtime_that_does_3324_toks/o7r9tu4/
false
41
t1_o7r9rw2
Follow your heart.
5
0
2026-02-27T19:49:20
jslominski
false
null
0
o7r9rw2
false
/r/LocalLLaMA/comments/1rgf12v/how_to_chose_the_right_model/o7r9rw2/
false
5
t1_o7r9qwn
I clicked because of the ollama keyword :). Could you please clarify why I shouldn't?
0
0
2026-02-27T19:49:13
Budget-Secretary4085
false
null
0
o7r9qwn
false
/r/LocalLLaMA/comments/1rf2b90/benchmarking_qwen3535b_vs_gptoss20b_for_agentic/o7r9qwn/
false
0
t1_o7r9k34
We could do with a version of Quantization-Aware Distillation for llama.cpp
2
0
2026-02-27T19:48:16
loadsamuny
false
null
0
o7r9k34
false
/r/LocalLLaMA/comments/1rgel19/new_qwen3535ba3b_unsloth_dynamic_ggufs_benchmarks/o7r9k34/
false
2
t1_o7r9ib9
My friend uses 2x 4060 Ti 16GB to host Qwen3 30B-A3B on vLLM for use in Home Assistant and as a local AI. The prompt processing for it on vLLM is over 3 to 4k. The latency is negligible. That said, I don't know how Qwen3 35B-A3B runs for him; I haven't asked him yet. If you've only been using llama.cpp for it, 100% try vLLM....
1
0
2026-02-27T19:48:02
Kamal965
false
null
0
o7r9ib9
false
/r/LocalLLaMA/comments/1rgek4m/what_are_your_expectations_for_the_small_series/o7r9ib9/
false
1
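A minimal sketch of the two-GPU vLLM setup described above, assuming the Hugging Face id Qwen/Qwen3-30B-A3B and both cards visible to CUDA; the context and memory numbers are placeholders to tune:

```
# shard the model across both 4060 Tis with tensor parallelism
vllm serve Qwen/Qwen3-30B-A3B \
  --tensor-parallel-size 2 \
  --max-model-len 32768 \
  --gpu-memory-utilization 0.90
```

This serves the same OpenAI-compatible /v1 endpoint on port 8000 that Home Assistant can point at.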
t1_o7r9hka
Grats! I will have a look. In the meantime, have a look at [https://github.com/DEEP-PolyU/LinearRAG](https://github.com/DEEP-PolyU/LinearRAG). I've almost finished a pure Java implementation on Spring Boot (the hardest part so far was NER); for the database and semantic search I use Milvus.
1
0
2026-02-27T19:47:55
Familiar-Crow6608
false
null
0
o7r9hka
false
/r/LocalLLaMA/comments/1rea7fb/spent_months_building_a_fully_offline_rag/o7r9hka/
false
1
t1_o7r9go5
Exactly why I abandoned Perplexity; web search made it go off the rails. None of the SOTA models could understand the difference between new and relevant or old and cruddy, often smashing completely contradictory information into the same response. And I'd suppose Perplexity is a much more polished implementation than ...
1
0
2026-02-27T19:47:48
Zestyclose839
false
null
0
o7r9go5
false
/r/LocalLLaMA/comments/1rcmlwk/so_is_openclaw_local_or_not/o7r9go5/
false
1
t1_o7r9ehp
I wonder if geometric stabilization contributed to the NatInt performance of that model, as I noted that effect in my Gemma 3 12B experiments. Inquiring minds.
2
0
2026-02-27T19:47:29
grimjim
false
null
0
o7r9ehp
false
/r/LocalLLaMA/comments/1rf6s0d/qwen3527bhereticgguf/o7r9ehp/
false
2
t1_o7r93wz
$1.2 input is crazy
29
0
2026-02-27T19:46:02
culoacido69420
false
null
0
o7r93wz
false
/r/LocalLLaMA/comments/1rggpu9/glm5code/o7r93wz/
false
29
t1_o7r910m
Great, thank you! Would you add LongCat Flash Lite too please?
1
0
2026-02-27T19:45:38
hiiammk
false
null
0
o7r910m
false
/r/LocalLLaMA/comments/1reds0p/qwen_35_craters_on_hard_coding_tasks_tested_all/o7r910m/
false
1
t1_o7r8qt4
Wow this could be interesting for strix halo + egpu, great work!
33
0
2026-02-27T19:44:15
FlexFreak
false
null
0
o7r8qt4
false
/r/LocalLLaMA/comments/1rgfm00/i_built_a_hybrid_moe_runtime_that_does_3324_toks/o7r8qt4/
false
33
t1_o7r8q43
Where can I read more about this tool calling chat template bug? Is this bug in the official chat template on the Qwen HF repo?
1
0
2026-02-27T19:44:09
indicava
false
null
0
o7r8q43
false
/r/LocalLLaMA/comments/1rgel19/new_qwen3535ba3b_unsloth_dynamic_ggufs_benchmarks/o7r8q43/
false
1
t1_o7r8mct
There usually isn't a difference as we just quantize them to GGUFs or whatever for local home users to use on their laptops, PCs etc. Sometimes we fix model bugs like e.g. this one for tool-calling. You should read our past work here: [https://unsloth.ai/blog/reintroducing](https://unsloth.ai/blog/reintroducing)
9
0
2026-02-27T19:43:39
yoracale
false
null
0
o7r8mct
false
/r/LocalLLaMA/comments/1rgel19/new_qwen3535ba3b_unsloth_dynamic_ggufs_benchmarks/o7r8mct/
false
9
t1_o7r8h46
I mean, it's not that all vibe-coded stuff is crap - I have a handful of vibe-coded ComfyUI workflows and nodes that work just perfectly. If the person is serious about the project, they will address the issue within their workflow.
-1
1
2026-02-27T19:42:55
ReasonablePossum_
false
null
0
o7r8h46
false
/r/LocalLLaMA/comments/1rg94wu/llmfit_one_command_to_find_what_model_runs_on/o7r8h46/
false
-1
t1_o7r8gou
found it here [https://docs.z.ai/guides/overview/pricing](https://docs.z.ai/guides/overview/pricing)
9
0
2026-02-27T19:42:51
axseem
false
null
0
o7r8gou
false
/r/LocalLLaMA/comments/1rggpu9/glm5code/o7r8gou/
false
9
t1_o7r8ffh
Thanks for spending so much time on this! > We also fixed a tool calling chat template bug Is there a template diff available, or a description of the bug?
1
0
2026-02-27T19:42:41
cristoper
false
null
0
o7r8ffh
false
/r/LocalLLaMA/comments/1rgel19/new_qwen3535ba3b_unsloth_dynamic_ggufs_benchmarks/o7r8ffh/
false
1
t1_o7r8dy1
They're in our blogpost: [https://unsloth.ai/docs/models/qwen3.5/gguf-benchmarks#full-benchmarks](https://unsloth.ai/docs/models/qwen3.5/gguf-benchmarks#full-benchmarks)
6
0
2026-02-27T19:42:29
yoracale
false
null
0
o7r8dy1
false
/r/LocalLLaMA/comments/1rgel19/new_qwen3535ba3b_unsloth_dynamic_ggufs_benchmarks/o7r8dy1/
false
6
t1_o7r8d9p
LFM2-8B-A1B was an interesting model. I wish others explored the idea of small MoEs that can run on phones, now that, with current RAM prices, it looks like the era of "affordable" phones with 12 or 16 GB is over.
2
0
2026-02-27T19:42:24
cibernox
false
null
0
o7r8d9p
false
/r/LocalLLaMA/comments/1rdptw8/more_qwens_will_appear/o7r8d9p/
false
2
t1_o7r89qw
Yes, you can go for any of them. They all work well with AMD.
1
0
2026-02-27T19:41:55
yoracale
false
null
0
o7r89qw
false
/r/LocalLLaMA/comments/1rgel19/new_qwen3535ba3b_unsloth_dynamic_ggufs_benchmarks/o7r89qw/
false
1