name
string
body
string
score
int64
controversiality
int64
created
timestamp[us]
author
string
collapsed
bool
edited
timestamp[us]
gilded
int64
id
string
locked
bool
permalink
string
stickied
bool
ups
int64
t1_o7rmz8f
MXFP4 isn't always bad. It sometimes makes layers better. We are still going to update them, yes, especially with the tool-calling fixes.
3
0
2026-02-27T20:55:48
yoracale
false
null
0
o7rmz8f
false
/r/LocalLLaMA/comments/1rgel19/new_qwen3535ba3b_unsloth_dynamic_ggufs_benchmarks/o7rmz8f/
false
3
t1_o7rmudo
Doesn't work in Ollama. GGUFs don't work in Ollama anymore due to chat template incompatibility
3
0
2026-02-27T20:55:08
yoracale
false
null
0
o7rmudo
false
/r/LocalLLaMA/comments/1rgel19/new_qwen3535ba3b_unsloth_dynamic_ggufs_benchmarks/o7rmudo/
false
3
t1_o7rmofd
From what I understand, it's 300 watts for the 16GB version, 350 for the 32GB SXM version. Which specific V100s do you have?
2
0
2026-02-27T20:54:18
Nota_ReAlperson
false
null
0
o7rmofd
false
/r/LocalLLaMA/comments/1rgfude/computer_wont_boot_with_2_tesla_v100s/o7rmofd/
false
2
t1_o7rmk64
I don't get why all these damn LLM harnesses don't make it a priority to let users hook in and view what content is being sent, to help us trace how things are working and what went wrong when things go wrong.
1
0
2026-02-27T20:53:44
michaelsoft__binbows
false
null
0
o7rmk64
false
/r/LocalLLaMA/comments/1rgelk1/the_supply_chain_problem_nobody_talks_about_agent/o7rmk64/
false
1
t1_o7rmgzk
That was another thing I was thinking a mind needs in order to exist. It needs urges, whether embedded from the beginning (like a biological urge to find food or a mate) or something it decides to be interested in later (like some girls liking horses).
1
0
2026-02-27T20:53:17
chuckaholic
false
null
0
o7rmgzk
false
/r/LocalLLaMA/comments/1rewz9p/we_build_sleep_for_local_llms_model_learns_facts/o7rmgzk/
false
1
t1_o7rmgfg
Is quantizing KV to q8_0 worth it like the other [experiments](https://www.reddit.com/r/LocalLLaMA/comments/1rg4zqv/followup_qwen3535ba3b_7_communityrequested/) thread said? It's not clear to me (sorry if it should be from your results post).
4
0
2026-02-27T20:53:12
oxygen_addiction
false
null
0
o7rmgfg
false
/r/LocalLLaMA/comments/1rgel19/new_qwen3535ba3b_unsloth_dynamic_ggufs_benchmarks/o7rmgfg/
false
4
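For anyone wanting to try that themselves, a minimal sketch using llama.cpp's standard cache-type flags (the model filename here is a placeholder):

```
# serve with the KV cache quantized to q8_0 instead of the default f16;
# flash attention (-fa) is required for a quantized V cache in llama.cpp
llama-server -m Qwen3.5-35B-A3B-UD-Q4_K_XL.gguf -fa on \
  --cache-type-k q8_0 --cache-type-v q8_0
```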
t1_o7rmcqz
Absolute cheapest? First, find $100k for upfront expenses, add thousands more for electrical/cooling infrastructure, set aside 5-10% extra for wiggle room, then expect hundreds in monthly opex for as long as the unit is in operation. If you cannot administer it yourself, add additional budget ...
2
0
2026-02-27T20:52:42
SweetHomeAbalama0
false
null
0
o7rmcqz
false
/r/LocalLLaMA/comments/1rg47i3/say_i_want_my_own_claude/o7rmcqz/
false
2
t1_o7rmahn
I use them with the apps I build. Chatting, writing, custom knowledge. I find LLMs are more useful trained on specific knowledge. MoE does this but routes between the models under the hood. I just make a small quant for a specific job.
1
0
2026-02-27T20:52:23
melanov85
false
null
0
o7rmahn
false
/r/LocalLLaMA/comments/1rgcosw/trained_and_quantized_an_llm_on_a_gtx_1650_4gb/o7rmahn/
false
1
t1_o7rm6ny
https://github.com/mistralai/mistral-vibe
1
0
2026-02-27T20:51:51
ComeOnIWantUsername
false
null
0
o7rm6ny
false
/r/LocalLLaMA/comments/1q7zywf/devstral_small_2_q4_k_m_on_5060_ti_16gb_and_zed/o7rm6ny/
false
1
t1_o7rm40t
I knew before even reading the post that Devstral would be better. Qwen 3.5 is a good model, just not good at coding. We need to wait for a dedicated coding model from Qwen, I think.
1
0
2026-02-27T20:51:29
lemon07r
false
null
0
o7rm40t
false
/r/LocalLLaMA/comments/1rg41ss/qwen35_27b_vs_devstral_small_2_nextjs_solidity/o7rm40t/
false
1
t1_o7rly7r
It's really handy to allow an LLM to search the internet for info, but allowing it to do that while preventing it from exfiltrating sensitive data seems like it'd be a really hard problem.
1
0
2026-02-27T20:50:41
capnspacehook
false
null
0
o7rly7r
false
/r/LocalLLaMA/comments/1rgelk1/the_supply_chain_problem_nobody_talks_about_agent/o7rly7r/
false
1
t1_o7rluce
Are you also planning to update the Qwen3.5-397B-A17B GGUFs? I noticed that the UD-Q4_K_XL version still has some MXFP4 layers, and the template is slightly different from the new ones uploaded for Qwen3.5-35B-A3B. Thank you for all your efforts!
1
0
2026-02-27T20:50:09
Doomslayer606
false
null
0
o7rluce
false
/r/LocalLLaMA/comments/1rgel19/new_qwen3535ba3b_unsloth_dynamic_ggufs_benchmarks/o7rluce/
false
1
t1_o7rlqfh
According to this comment in that PR, llama.cpp is already doing that fix. "Until it's landed you can still compile with -DCMAKE_HIP_FLAGS="-mllvm --amdgpu-unroll-threshold-local=600" That's what llama.cpp is doing for example."
3
0
2026-02-27T20:49:37
fallingdowndizzyvr
false
null
0
o7rlqfh
false
/r/LocalLLaMA/comments/1rgdo3s/fix_for_rocm_performance_regression_for_strix/o7rlqfh/
false
3
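Until the fix lands upstream, a sketch of applying that quoted workaround to a ROCm build of llama.cpp (assuming the current GGML_HIP CMake option; adjust targets for your GPU):

```
# rebuild llama.cpp for HIP/ROCm with the unroll-threshold workaround
cmake -B build -DGGML_HIP=ON \
      -DCMAKE_HIP_FLAGS="-mllvm --amdgpu-unroll-threshold-local=600"
cmake --build build --config Release -j
```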
t1_o7rlqa2
Your last post just said "it's still active", which is a very different statement from "the API endpoint is not deprecated". Incredibly rude to harass someone when you said something completely different. Maybe use your whole words to explain things the first time instead of half-assing it so you can harass them in a second p...
-2
0
2026-02-27T20:49:36
panthereal
false
null
0
o7rlqa2
false
/r/LocalLLaMA/comments/1rg8dex/pewdiepie_finetuned_qwen25coder32b_to_beat/o7rlqa2/
false
-2
t1_o7rlmh2
And it was broken, who knew... https://old.reddit.com/r/LocalLLaMA/comments/1rgel19/new_qwen3535ba3b_unsloth_dynamic_ggufs_benchmarks/
0
0
2026-02-27T20:49:04
mantafloppy
false
null
0
o7rlmh2
false
/r/LocalLLaMA/comments/1rfds1h/qwen3535ba3b_q4_quantization_comparison/o7rlmh2/
false
0
t1_o7rlkzd
Yeah, even so, PCIe 4.0 x4 is 8 GB/s, so pushing Q3CN at Q4 (around 38 GB) would take at least ~4.75 seconds. Could still potentially work for opencode, I think; it might just be a slightly longer fixed wait for a response.
2
0
2026-02-27T20:48:52
mrstoatey
false
null
0
o7rlkzd
false
/r/LocalLLaMA/comments/1rgfm00/i_built_a_hybrid_moe_runtime_that_does_3324_toks/o7rlkzd/
false
2
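The arithmetic behind that estimate, for anyone checking (numbers taken from the comment above; real PCIe transfers add protocol overhead, so treat it as a lower bound):

```
# ~38 GB of Q4 weights over PCIe 4.0 x4 at ~8 GB/s
echo "scale=2; 38 / 8" | bc   # ≈ 4.75 s per full weight push
```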
t1_o7rlkfp
I'll note that it's been surprisingly good for me, even at IQ2_M with reasoning forced off. This gives absolutely wild speeds that you really need to experience; it makes agentic stuff really fun. It takes about three back-and-forths for me to fix up perplexing problems, but the iteration speed still makes it fun, ...
3
0
2026-02-27T20:48:48
ethereal_intellect
false
null
0
o7rlkfp
false
/r/LocalLLaMA/comments/1rg8gkx/qwen35_35b_a3b_45_ts_128k_ctx_on_single_16gb_5060/o7rlkfp/
false
3
t1_o7rlibt
Yeah, and I think it's worth it; Qwen3.5 is a very, very good model. Doing it once for the community is well worth it. Qwen3.5 will probably be SOTA for a while. Shout out to the Unsloth team.
28
0
2026-02-27T20:48:31
Round_Document6821
false
null
0
o7rlibt
false
/r/LocalLLaMA/comments/1rgel19/new_qwen3535ba3b_unsloth_dynamic_ggufs_benchmarks/o7rlibt/
false
28
t1_o7rlgqa
Yes, I've seen reports of ReBAR issues when the RAM amount is lower than VRAM. Try disabling ReBAR in the BIOS if it is enabled, or enabling it if it is disabled.
2
0
2026-02-27T20:48:17
MelodicRecognition7
false
null
0
o7rlgqa
false
/r/LocalLLaMA/comments/1rgfude/computer_wont_boot_with_2_tesla_v100s/o7rlgqa/
false
2
t1_o7rlfqs
'70s tools are underrated. grep is fast and dope.
3
0
2026-02-27T20:48:09
_supert_
false
null
0
o7rlfqs
false
/r/LocalLLaMA/comments/1rg7oj1/bash_commands_outperform_vector_search_for/o7rlfqs/
false
3
t1_o7rlc30
Wonder if it could generate music... Have you seen this? https://www.reddit.com/r/Bard/comments/1rg9n1n/gemini_31_can_oneshot_compose_jrpg_music_a_43/
-1
0
2026-02-27T20:47:39
Medium_Chemist_4032
false
null
0
o7rlc30
false
/r/LocalLLaMA/comments/1rgiait/switched_to_qwen35122ba10bi1gguf/o7rlc30/
false
-1
t1_o7rl8qn
we need to go deeper
3
0
2026-02-27T20:47:11
MelodicRecognition7
false
null
0
o7rl8qn
false
/r/LocalLLaMA/comments/1rgcosw/trained_and_quantized_an_llm_on_a_gtx_1650_4gb/o7rl8qn/
false
3
t1_o7rl8q2
I'm still getting "Error: unable to load model" in Ollama on my Mac Studio 64GB. Works in LM Studio. Any ideas?
-1
0
2026-02-27T20:47:10
ConspicuousSomething
false
null
0
o7rl8q2
false
/r/LocalLLaMA/comments/1rgel19/new_qwen3535ba3b_unsloth_dynamic_ggufs_benchmarks/o7rl8q2/
false
-1
t1_o7rl6hi
That's a very interesting concept. Although the RAM+disk trade-offs are brutal and the tg seems a little low, it's good to see a different angle. Very well done!
2
0
2026-02-27T20:46:52
cosimoiaia
false
null
0
o7rl6hi
false
/r/LocalLLaMA/comments/1rgfm00/i_built_a_hybrid_moe_runtime_that_does_3324_toks/o7rl6hi/
false
2
t1_o7rky2d
If MXFP4 is not a good idea, what about NVFP4 or AWQ? I always wonder about these two.
1
0
2026-02-27T20:45:41
lemon07r
false
null
0
o7rky2d
false
/r/LocalLLaMA/comments/1rgel19/new_qwen3535ba3b_unsloth_dynamic_ggufs_benchmarks/o7rky2d/
false
1
t1_o7rkxpn
Yes. I'm trying with 8GB of RAM. A single V100 boots fine though. Am I supposed to have more RAM than VRAM?
1
0
2026-02-27T20:45:38
MackThax
false
null
0
o7rkxpn
false
/r/LocalLLaMA/comments/1rgfude/computer_wont_boot_with_2_tesla_v100s/o7rkxpn/
false
1
t1_o7rkre8
😭 I'll try updating the motherboard BIOS. I really hope it's not a hard memory limit on the board.
1
0
2026-02-27T20:44:44
MackThax
false
null
0
o7rkre8
false
/r/LocalLLaMA/comments/1rgfude/computer_wont_boot_with_2_tesla_v100s/o7rkre8/
false
1
t1_o7rkms0
Well, if all you're doing is benchmarking, almost anyone could train a small model sufficient to beat a generalist at one specific benchmark it was trained to excel at, probably at the cost of its other capabilities. Also, nowhere does it prove it's better *at coding*; it simply gets a higher benchmark score.
3
0
2026-02-27T20:44:06
Fit-Produce420
false
null
0
o7rkms0
false
/r/LocalLLaMA/comments/1rg8dex/pewdiepie_finetuned_qwen25coder32b_to_beat/o7rkms0/
false
3
t1_o7rkk6p
I haven't focused on handling multiple concurrent requests for multi-user setups so far, so it's essentially serialising requests for now. The prompt will get chunked if it's long (>5000 tokens), but the largest component of the memory is the weights (particularly for system RAM, as it has two copies). Because of the layer...
2
0
2026-02-27T20:43:43
mrstoatey
false
null
0
o7rkk6p
false
/r/LocalLLaMA/comments/1rgfm00/i_built_a_hybrid_moe_runtime_that_does_3324_toks/o7rkk6p/
false
2
t1_o7rkhad
I am super impressed with Qwen3.5 397B A17B. It writes fantastic prose in multiple languages. It beats my previous favorite (which was a cloud-only model), or at least comes very, very close (but I still feel it beats everything). It gives me hope we could have a fantastic "model at home".
13
0
2026-02-27T20:43:19
uti24
false
null
0
o7rkhad
false
/r/LocalLLaMA/comments/1rghfqj/february_is_almost_over_are_you_satisfied/o7rkhad/
false
13
t1_o7rkh5g
This is a really cool approach. Generating the MCP server from the spec instead of hand-writing it removes a ton of boilerplate. One thing I would think about early: what happens when the upstream OpenAPI spec changes? If someone renames a field, removes an endpoint, or changes a parameter from optional to required, t...
1
0
2026-02-27T20:43:18
Extra-Pomegranate-50
false
null
0
o7rkh5g
false
/r/LocalLLaMA/comments/1rgf9zb/mcpforge_generate_mcp_servers_from_openapi_specs/o7rkh5g/
false
1
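One hedged sketch of catching that kind of upstream drift in CI, assuming the third-party oasdiff tool and hypothetical spec filenames:

```
# exit non-zero when the new upstream spec introduces breaking changes
# (renamed fields, removed endpoints, optional-to-required parameters)
oasdiff breaking openapi-old.yaml openapi-new.yaml --fail-on ERR
```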
t1_o7rkgmg
From a high level, I am very satisfied. The last 2 years have seen amazing strides in open models of all sizes. I am very thankful to the community and am always excited with each release, especially the tiny releases like 8B or lower. It's just so much fun to see tiny models get better and better by the day.
7
0
2026-02-27T20:43:14
DelayedPot
false
null
0
o7rkgmg
false
/r/LocalLLaMA/comments/1rghfqj/february_is_almost_over_are_you_satisfied/o7rkgmg/
false
7
t1_o7rkg4s
You hid some unexpected free lunch in the middle of your posting.

> Unsloth Dynamic IQ2_XXS **performs better** than AesSedai's IQ3_S on real world evals (LiveCodeBench v6, MMLU Pro) despite being 11GB smaller. Yet, AesSedai's perplexity and KLD benchmarks suggest the **opposite**. (PPL: 0.3552 vs 0.2441; KLD: 9.0338...
5
0
2026-02-27T20:43:10
Chromix_
false
null
0
o7rkg4s
false
/r/LocalLLaMA/comments/1rgel19/new_qwen3535ba3b_unsloth_dynamic_ggufs_benchmarks/o7rkg4s/
false
5
t1_o7rkejm
it’s bench maxing bro
3
0
2026-02-27T20:42:56
dbzunicorn
false
null
0
o7rkejm
false
/r/LocalLLaMA/comments/1rg8dex/pewdiepie_finetuned_qwen25coder32b_to_beat/o7rkejm/
false
3
t1_o7rkd64
This is nice work! For many local use cases, you might actually want to actively track and manage state between two approaches:

1. PP on GPU, token gen on CPU
2. Traditional llama.cpp approach

Assuming no parallelism (i.e. often the typical local use case), you can look at the next prompt and quickly decide if it will ...
6
0
2026-02-27T20:42:45
Leopold_Boom
false
null
0
o7rkd64
false
/r/LocalLLaMA/comments/1rgfm00/i_built_a_hybrid_moe_runtime_that_does_3324_toks/o7rkd64/
false
6
t1_o7rkbhk
Bojan's been busy.
1
0
2026-02-27T20:42:30
YoelFievelBenAvram
false
null
0
o7rkbhk
false
/r/LocalLLaMA/comments/1rghfqj/february_is_almost_over_are_you_satisfied/o7rkbhk/
false
1
t1_o7rka6r
If you have $2000K I would spend it on the paid version of Google Colab or Anthropic's AI-enabled IDE.
2
0
2026-02-27T20:42:19
imtourist
false
null
0
o7rka6r
false
/r/LocalLLaMA/comments/1rg68e6/starting_a_phd_in_ml_what_is_the_best_infra_i_can/o7rka6r/
false
2
t1_o7rk9wt
Only one V100 can go on the CPU rail. The V100 is supposed to max out at 250W (by official specs). I haven't run any load on the GPUs aside from getting them to boot.
1
0
2026-02-27T20:42:17
MackThax
false
null
0
o7rk9wt
false
/r/LocalLLaMA/comments/1rgfude/computer_wont_boot_with_2_tesla_v100s/o7rk9wt/
false
1
t1_o7rjwy1
Hey, I've been having a lot of trouble with opencode failing when doing file writes and edits. I get `~ Preparing write...` `Tool execution aborted` `"Invalid diff: now finding less tool calls!"` This has occurred with lots of different models: MiniMax-M2.5, GLM-4.7-Flash, Qwen3-Next-Coder-80B, and now all versio...
1
0
2026-02-27T20:40:27
rema1000fan
false
null
0
o7rjwy1
false
/r/LocalLLaMA/comments/1rg6ph3/qwen35_feels_ready_for_production_use_never_been/o7rjwy1/
false
1
t1_o7rjqj3
And an easy target for a political hit. "Leftists are all pedophiles" confirmed through planting.
1
0
2026-02-27T20:39:32
_raydeStar
false
null
0
o7rjqj3
false
/r/LocalLLaMA/comments/1re35iv/built_an_imagefirst_rag_pipeline_on_the_epstein/o7rjqj3/
false
1
t1_o7rjoad
They open sourced 9TB of their quants! Literally quantized every part of the model slightly differently, tested it, and then uploaded their results. This is 1800s inventor style science LOL. “Testing 1000 different materials for lightbulb filaments” type shit.
35
0
2026-02-27T20:39:13
DistanceSolar1449
false
null
0
o7rjoad
false
/r/LocalLLaMA/comments/1rgel19/new_qwen3535ba3b_unsloth_dynamic_ggufs_benchmarks/o7rjoad/
false
35
t1_o7rjntm
> hand-optimized

not sure about that

    $ git log | grep -i claude
    Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
    Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
    Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
    Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
    Co-Auth...
35
0
2026-02-27T20:39:09
MelodicRecognition7
false
null
0
o7rjntm
false
/r/LocalLLaMA/comments/1rgfm00/i_built_a_hybrid_moe_runtime_that_does_3324_toks/o7rjntm/
false
35
t1_o7rjmij
Thanks for the support, we hope to deliver benchmarks more often (even if they're not the best measurement)
2
0
2026-02-27T20:38:57
yoracale
false
null
0
o7rjmij
false
/r/LocalLLaMA/comments/1rgel19/new_qwen3535ba3b_unsloth_dynamic_ggufs_benchmarks/o7rjmij/
false
2
t1_o7rjlvn
Well, I would not do that in your place; it looks like some bot response that feeds you with 2-year-old data. All these models are retired history.
1
0
2026-02-27T20:38:52
Exciting_Garden2535
false
null
0
o7rjlvn
false
/r/LocalLLaMA/comments/1rg3da6/are_there_any_particular_offline_models_i_could/o7rjlvn/
false
1
t1_o7rjk5r
Don't expect cheap RAM anytime soon :(
6
0
2026-02-27T20:38:37
HumanDrone8721
false
null
0
o7rjk5r
false
/r/LocalLLaMA/comments/1rgi6ky/openai_raises_110_billion_in_the_largest_private/o7rjk5r/
false
6
t1_o7rjjp8
We'll need to investigate more; the answer might not be no, as MXFP4 isn't always bad.
1
0
2026-02-27T20:38:33
yoracale
false
null
0
o7rjjp8
false
/r/LocalLLaMA/comments/1rgel19/new_qwen3535ba3b_unsloth_dynamic_ggufs_benchmarks/o7rjjp8/
false
1
t1_o7rjjq4
No one is actually shipping those yet, though, and I think they only have 384GB of VRAM.
1
0
2026-02-27T20:38:33
NoahFect
false
null
0
o7rjjq4
false
/r/LocalLLaMA/comments/1rg47i3/say_i_want_my_own_claude/o7rjjq4/
false
1
t1_o7rjbu9
Half a wafer of N7 is much cheaper than a full 5nm wafer ($5,000 vs $20,000). Yield would be much higher as well. It might not be commodity hardware, but it would cost less than a single B100 ($30,000).
1
0
2026-02-27T20:37:26
Nota_ReAlperson
false
null
0
o7rjbu9
false
/r/LocalLLaMA/comments/1r9e27i/free_asic_llama_31_8b_inference_at_16000_toks_no/o7rjbu9/
false
1
t1_o7rj8tm
MSI also has one that's $1,000 cheaper than the DGX Spark, with 4TB. Same spec, but better cooling, I think.
1
0
2026-02-27T20:37:00
flink33
false
null
0
o7rj8tm
false
/r/LocalLLaMA/comments/1ptdtmz/dgx_spark_an_unpopular_opinion/o7rj8tm/
false
1
t1_o7rj3x5
Hey, been down this rabbit hole. Your actual problem isn’t retrieval - it’s that facts are stored as flat text with no update mechanism. “Alice moved to SF” and “Alice works at Stripe” are just strings. You can’t update Alice’s location, you can only append new chunks on top of old ones. Graph doesn’t fix this, it just...
1
0
2026-02-27T20:36:18
loookashow
false
null
0
o7rj3x5
false
/r/LocalLLaMA/comments/1r3w2jp/are_vector_databases_fundamentally_insufficient/o7rj3x5/
false
1
t1_o7rj1y1
isn't 4o pre-dating the dinosaur age?
1
1
2026-02-27T20:36:01
louis3195
false
null
0
o7rj1y1
false
/r/LocalLLaMA/comments/1rg8dex/pewdiepie_finetuned_qwen25coder32b_to_beat/o7rj1y1/
false
1
t1_o7riugh
Are you stupid? The 4o api endpoint is not deprecated. It’ll be working on April 4th.
5
0
2026-02-27T20:34:57
DistanceSolar1449
false
null
0
o7riugh
false
/r/LocalLLaMA/comments/1rg8dex/pewdiepie_finetuned_qwen25coder32b_to_beat/o7riugh/
false
5
t1_o7rij6f
> o3
> 2024
> 70B model

kthxbye
6
0
2026-02-27T20:33:22
MelodicRecognition7
false
null
0
o7rij6f
false
/r/LocalLLaMA/comments/1rghuaq/pure_llms_score_0_on_arcagi2_humans_60_meanwhile/o7rij6f/
false
6
t1_o7ri9ms
Yup, but in this case Node/bun/deno would have thousands of open issues... wait, actually you're spot on. Parity achieved.
2
0
2026-02-27T20:32:00
txmail
false
null
0
o7ri9ms
false
/r/LocalLLaMA/comments/1rfp6bk/why_is_openclaw_even_this_popular/o7ri9ms/
false
2
t1_o7ri9f1
Is the total VRAM amount larger than RAM? Might be a resizable BAR problem.
1
0
2026-02-27T20:31:58
MelodicRecognition7
false
null
0
o7ri9f1
false
/r/LocalLLaMA/comments/1rgfude/computer_wont_boot_with_2_tesla_v100s/o7ri9f1/
false
1
t1_o7ri7jb
[Pants wash test](https://ps.reddit.com/r/LocalLLaMA/comments/1rf2ulo/qwen35_122b_in_72gb_vram_3x3090_is_the_best_model/o7he759/) is the new car wash test. Worth trying out on everybody's favorite models, IMO.
2
0
2026-02-27T20:31:42
NoahFect
false
null
0
o7ri7jb
false
/r/LocalLLaMA/comments/1rg2yl7/qwen35_27b_at_q3_k_m_passes_the_car_wash_test/o7ri7jb/
false
2
t1_o7ri4bx
They open sourced 9TB of their quants! Literally quantized every part of the model slightly differently, tested it, and then uploaded their results. This is 1800s inventor style science LOL. “Testing 1000 different materials for lightbulb filaments” type shit.
0
0
2026-02-27T20:31:15
DistanceSolar1449
false
null
0
o7ri4bx
false
/r/LocalLLaMA/comments/1rgel19/new_qwen3535ba3b_unsloth_dynamic_ggufs_benchmarks/o7ri4bx/
false
0
t1_o7ri0l0
I’ll be honest: the token generation in your tests is so slow that I personally wouldn’t bother with it. At 15 t/s on the latest A3B Qwen, it doesn’t seem very practical. And as others have pointed out, with current DDR prices, it makes even less sense vs buying a bigger gpu/accelerator.
2
0
2026-02-27T20:30:43
jslominski
false
null
0
o7ri0l0
false
/r/LocalLLaMA/comments/1rgfm00/i_built_a_hybrid_moe_runtime_that_does_3324_toks/o7ri0l0/
false
2
t1_o7rhzk2
ok but where are they getting this data
13
0
2026-02-27T20:30:34
ongrabbits
false
null
0
o7rhzk2
false
/r/LocalLLaMA/comments/1rg94wu/llmfit_one_command_to_find_what_model_runs_on/o7rhzk2/
false
13
t1_o7rhy76
I think that KLD is good in general as long as the dataset matches real world model output. Some models (gpt-oss, minimax) have had their chat template baked into them so hard that they are incapable of acting as simple text completion models anymore. With no chat template they quickly start generating nonsense. Those...
1
0
2026-02-27T20:30:23
GroundbreakingLlama
false
null
0
o7rhy76
false
/r/LocalLLaMA/comments/1rgel19/new_qwen3535ba3b_unsloth_dynamic_ggufs_benchmarks/o7rhy76/
false
1
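For reference (a standard definition, not anything specific to these posts): the per-token KLD these quant comparisons report is the KL divergence between the full-precision model's next-token distribution $P$ and the quant's distribution $Q$, averaged over the eval corpus:

$$D_{\mathrm{KL}}(P \parallel Q) = \sum_{t \in \mathcal{V}} P(t)\,\log\frac{P(t)}{Q(t)}$$

A dataset whose text doesn't match real model output skews $P$, which is exactly the failure mode the comment above describes for models with baked-in chat templates.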
t1_o7rhxkt
56.25% if you have it generate output.
6
0
2026-02-27T20:30:17
bambamlol
false
null
0
o7rhxkt
false
/r/LocalLLaMA/comments/1rggpu9/glm5code/o7rhxkt/
false
6
t1_o7rhwq3
LLMs are large enough that they REALLY need to fix the "snaps can't access other disks" stupidity.
1
0
2026-02-27T20:30:10
makegeneve
false
null
0
o7rhwq3
false
/r/LocalLLaMA/comments/1rfmzfp/new_upcoming_ubuntu_2604_lts_will_be_optimized/o7rhwq3/
false
1
t1_o7rhp5m
You didn't ask for the SOTA, which meant the model wasn't inclined to check for up-to-date info. As ever, the quality of the prompt dictates the quality of the response.
1
0
2026-02-27T20:29:06
MerePotato
false
null
0
o7rhp5m
false
/r/LocalLLaMA/comments/1rg8dex/pewdiepie_finetuned_qwen25coder32b_to_beat/o7rhp5m/
false
1
t1_o7rhozr
I didn't look through the repo, but honestly they should just expose a CSV and let people make corrections via PR. Could be a great community project that way.
1
0
2026-02-27T20:29:05
Much-Researcher6135
false
null
0
o7rhozr
false
/r/LocalLLaMA/comments/1rg94wu/llmfit_one_command_to_find_what_model_runs_on/o7rhozr/
false
1
t1_o7rhf28
The --optimize flag using Claude to trim endpoints is the right call; most OpenAPI specs are bloated for LLM use. One thing to think about: once you generate servers at scale, you will want policy controls over which tools the agent can actually call. peta.io is tackling that side if you hit it.
0
0
2026-02-27T20:27:43
BC_MARO
false
null
0
o7rhf28
false
/r/LocalLLaMA/comments/1rgf9zb/mcpforge_generate_mcp_servers_from_openapi_specs/o7rhf28/
false
0
t1_o7rhaxw
Prompt processing / prefill speed increases with batch size, and so do the memory requirements. What batch size do you use by default?

> you need ~2.5x the quantised model weight in system RAM
1
0
2026-02-27T20:27:09
Chromix_
false
null
0
o7rhaxw
false
/r/LocalLLaMA/comments/1rgfm00/i_built_a_hybrid_moe_runtime_that_does_3324_toks/o7rhaxw/
false
1
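For context on the batch-size question, these are the usual knobs in llama.cpp (example values only; the runtime discussed in the post may expose different ones):

```
# -b sets the logical batch and -ub the physical micro-batch for prefill;
# larger values speed up prompt processing but raise peak memory use
llama-bench -m model.gguf -p 4096 -b 2048 -ub 512
```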
t1_o7rhadt
Yea, I was fiddling with settings and then just mostly stopped fiddling. I probably wouldn't have the issue if I undervolted, but they're just at defaults now. What I discovered trying to train is that I can't seem to go past 7B models, because once you start training, more and more loads into VRAM. Even with 48gb...
2
0
2026-02-27T20:27:04
alphatrad
false
null
0
o7rhadt
false
/r/LocalLLaMA/comments/1rg6ph3/qwen35_feels_ready_for_production_use_never_been/o7rhadt/
false
2
t1_o7rh9t3
I wish I had money to fucking donate, because you guys are amazing.
6
0
2026-02-27T20:26:59
Borkato
false
null
0
o7rh9t3
false
/r/LocalLLaMA/comments/1rgel19/new_qwen3535ba3b_unsloth_dynamic_ggufs_benchmarks/o7rh9t3/
false
6
t1_o7rh8kw
I would recommend starting with QLoRA, then moving to LoRA, then to FFT if needed. Starting with FFT is always a big trap. LoRA can replicate FFT if you follow the correct hyperparameters, which this guide shows you how to do: https://unsloth.ai/docs/get-started/fine-tuning-llms-guide/lora-hyperparameters-guide
2
0
2026-02-27T20:26:49
yoracale
false
null
0
o7rh8kw
false
/r/LocalLLaMA/comments/1rgbwwh/lora_training_vs_fft_what_do_i_need_to_know/o7rh8kw/
false
2
t1_o7rh67w
Quick summary of what I think matters most for this community:

1. The "neural hypothesis + symbolic verification" pattern works at any scale. AlphaProof proves it at the top end, but a local 7B + linter/tests is the same architectural principle.
2. The energy argument is real: GPT-3 training used 1,287 MWh. ...
1
0
2026-02-27T20:26:29
Sensitive-Two9732
false
null
0
o7rh67w
false
/r/LocalLLaMA/comments/1rghuaq/pure_llms_score_0_on_arcagi2_humans_60_meanwhile/o7rh67w/
false
1
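A minimal sketch of that local-scale version of the pattern, with hypothetical file names (the model proposes a hypothesis, the symbolic tools verify it):

```
# hypothesis: a local model drafts a candidate implementation
llama-cli -m qwen2.5-7b-instruct-q4_k_m.gguf --no-display-prompt \
  -p "Write a Python function slugify(s: str) -> str with type hints." \
  > candidate.py

# verification: keep the candidate only if static checks and tests pass
python -m pyflakes candidate.py && pytest tests/ -q
```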
t1_o7rh4ox
I'll run the same benchmark when I have time. I built the thing for a tutoring app for my kids, so I've been benchmarking that.

- 122b-single-vs-pooled-compute.md
- 122b-single-vs-pooled-compute.json

Result:

- Recommendation: KEEP_SINGLE
- Hoot-like workload (...
1
0
2026-02-27T20:26:16
Imaginary_Abies_9176
false
null
0
o7rh4ox
false
/r/LocalLLaMA/comments/1rga9x4/qwen35122ba10b_pooled_on_dual_mac_studio_m4_max/o7rh4ox/
false
1
t1_o7rh42z
50% if you cache
13
0
2026-02-27T20:26:10
tomt610
false
null
0
o7rh42z
false
/r/LocalLLaMA/comments/1rggpu9/glm5code/o7rh42z/
false
13
t1_o7rh2qt
Nice work guys! Can't wait to try the updated ggufs. Is qwen3-coder-next affected by this too?
1
0
2026-02-27T20:25:59
alhinai_03
false
null
0
o7rh2qt
false
/r/LocalLLaMA/comments/1rgel19/new_qwen3535ba3b_unsloth_dynamic_ggufs_benchmarks/o7rh2qt/
false
1
t1_o7rgucl
Dual RX 7900 XTXs. But if you didn't mean hardware: I could not get it loaded in Ollama, but got it loaded and running inside llama.cpp.
2
0
2026-02-27T20:24:50
alphatrad
false
null
0
o7rgucl
false
/r/LocalLLaMA/comments/1rg6ph3/qwen35_feels_ready_for_production_use_never_been/o7rgucl/
false
2
t1_o7rgt1a
I'll be satisfied when I can have a world dominating superintelligence in a $200 box
28
0
2026-02-27T20:24:39
digitaltransmutation
false
null
0
o7rgt1a
false
/r/LocalLLaMA/comments/1rghfqj/february_is_almost_over_are_you_satisfied/o7rgt1a/
false
28
t1_o7rgsjp
I'll keep that need in mind
2
0
2026-02-27T20:24:34
OmarBessa
false
null
0
o7rgsjp
false
/r/LocalLLaMA/comments/1reqdpb/overwhelmed_by_so_many_quantization_variants/o7rgsjp/
false
2
t1_o7rghzx
Looking forward to deepseek v4 (lite)! Hope they will release a model runnable on a single RTX 5090 this year.
2
0
2026-02-27T20:23:05
Kahvana
false
null
0
o7rghzx
false
/r/LocalLLaMA/comments/1rghfqj/february_is_almost_over_are_you_satisfied/o7rghzx/
false
2
t1_o7rggyd
Without the first sentence, the challenge is harder, but it still gets it. Produces some hilarious checks ("any chance the user is a robot?") and dithers for a long, long time, but it gets it!
1
0
2026-02-27T20:22:56
boutell
false
null
0
o7rggyd
false
/r/LocalLLaMA/comments/1rf5y13/qwen3535ba3b_thinks_less_if_tools_available/o7rggyd/
false
1
t1_o7rgg3i
Oh ok, I noticed that the 122B still said "3 days ago". Makes sense. Thanks for all your hard work!
4
0
2026-02-27T20:22:49
Zc5Gwu
false
null
0
o7rgg3i
false
/r/LocalLLaMA/comments/1rgel19/new_qwen3535ba3b_unsloth_dynamic_ggufs_benchmarks/o7rgg3i/
false
4
t1_o7rgg5u
The CPU rail is specced for 540 watts, and a V100 draws 300, so 240 is left for the CPU. But assuming degradation, it could be a lot less; that might explain why RAM speed has an impact. Also, a GPU can draw up to 75 watts from the PCIe slot, which would be supplied by the CPU rail. So when you add the second V100, you only hav...
2
0
2026-02-27T20:22:49
Nota_ReAlperson
false
null
0
o7rgg5u
false
/r/LocalLLaMA/comments/1rgfude/computer_wont_boot_with_2_tesla_v100s/o7rgg5u/
false
2
t1_o7rgdk3
This sounds like another case of the awfulness of ReBAR + Above 4G Decoding PCIe mapping. When you plug a big GPU into a mobo, you need to turn on ReBAR & Above 4G Decoding support. Mobos need these two functions to let GPU drivers access large amounts of GPU VRAM (usually above 24GB). Older, cheaper, crappier mobos may simply not ...
4
0
2026-02-27T20:22:27
__E8__
false
null
0
o7rgdk3
false
/r/LocalLLaMA/comments/1rgfude/computer_wont_boot_with_2_tesla_v100s/o7rgdk3/
false
4
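If you want to see what the BIOS actually exposed, a sketch from Linux (10de is NVIDIA's PCI vendor ID; capability names vary by lspci version):

```
# show BAR mappings and the Resizable BAR capability for NVIDIA devices
sudo lspci -vv -d 10de: | grep -i -E 'memory at|resizable bar'
```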
t1_o7rg721
You can take my code and do whatever you want with it. It's my idea; let it live. Maybe you'll take something from it for yourself.
1
0
2026-02-27T20:21:33
zemondza
false
null
0
o7rg721
false
/r/LocalLLaMA/comments/1rfddpi/training_a_144m_spiking_neural_network_for_text/o7rg721/
false
1
t1_o7rg6tu
This start of the year is incredible; it's a full-time job keeping up with everything!
1
0
2026-02-27T20:21:31
Adventurous-Paper566
false
null
0
o7rg6tu
false
/r/LocalLLaMA/comments/1rghfqj/february_is_almost_over_are_you_satisfied/o7rg6tu/
false
1
t1_o7rg64m
I was posting (perhaps too obnoxiously) about this earlier today — it gets the car wash problem right in thinking mode... even if you forget to include "I want to wash my car" (my mistake). Which is quite impressive really. But, it thinks about it a LOT. Especially if you're running this on an M4 Macbook Pro with 32GB...
1
0
2026-02-27T20:21:25
boutell
false
null
0
o7rg64m
false
/r/LocalLLaMA/comments/1rf5y13/qwen3535ba3b_thinks_less_if_tools_available/o7rg64m/
false
1
t1_o7rg5k5
This is Ex Machina starting. People are just a bit braindead, ig.
0
0
2026-02-27T20:21:21
Dudebro-420
false
null
0
o7rg5k5
false
/r/LocalLLaMA/comments/1reoiqj/local_llm_tool_calling_anyone_heard_of_this/o7rg5k5/
false
0
t1_o7rftez
> been using that less since their weird privacy stances.

What's the weird privacy stance? I use FF for privacy, since FF has so many addons that let you change so many aspects, like hash signatures and reported agent. IMO, it's the most private browser.
1
0
2026-02-27T20:19:37
fallingdowndizzyvr
false
null
0
o7rftez
false
/r/LocalLLaMA/comments/1reqdpb/overwhelmed_by_so_many_quantization_variants/o7rftez/
false
1
t1_o7rft7y
Right, I went by the New Model flair of this sub. Let me add that model to this list.
2
0
2026-02-27T20:19:36
pmttyji
false
null
0
o7rft7y
false
/r/LocalLLaMA/comments/1rghfqj/february_is_almost_over_are_you_satisfied/o7rft7y/
false
2
t1_o7rfs35
How about the RTX 5000 series, are those faster with MXFP4?
0
0
2026-02-27T20:19:27
Pentium95
false
null
0
o7rfs35
false
/r/LocalLLaMA/comments/1rgel19/new_qwen3535ba3b_unsloth_dynamic_ggufs_benchmarks/o7rfs35/
false
0
t1_o7rfkqa
😂 I feel you! Extracting HTML is honestly the worst part of the job. I actually got a huge head start by borrowing **browser-use**'s logic for that.
1
0
2026-02-27T20:18:25
Alarmed-Ad-6201
false
null
0
o7rfkqa
false
/r/LocalLLaMA/comments/1rgfrxp/pageagent_browser_ai_agent_that_runs_inside_the/o7rfkqa/
false
1
t1_o7rfigh
Only 20% crazier than $1.
30
0
2026-02-27T20:18:06
bambamlol
false
null
0
o7rfigh
false
/r/LocalLLaMA/comments/1rggpu9/glm5code/o7rfigh/
false
30
t1_o7rfhyz
Thank you very much for this! I really don't mind waiting a few weeks for your quant releases if it comes with this level of research. Stability and performance over FOMO.
2
0
2026-02-27T20:18:02
Kahvana
false
null
0
o7rfhyz
false
/r/LocalLLaMA/comments/1rgel19/new_qwen3535ba3b_unsloth_dynamic_ggufs_benchmarks/o7rfhyz/
false
2
t1_o7rff71
We’re eating good this year so far
37
0
2026-02-27T20:17:38
o5mfiHTNsH748KVq
false
null
0
o7rff71
false
/r/LocalLLaMA/comments/1rghfqj/february_is_almost_over_are_you_satisfied/o7rff71/
false
37
t1_o7rfdwz
Couple of hopefully quick questions for you, likely born out of ignorance, but I've been trying to resolve long response times (between 12-40s) with Qwen3-TTS on a local setup with a 5090. I ported my project to WSL to enable FlashAttention, and despite getting Flash installed and injected, Qwen3-TTS just never seemed to ut...
1
0
2026-02-27T20:17:28
ImMichaelB
false
null
0
o7rfdwz
false
/r/LocalLLaMA/comments/1rfc3ic/introducing_fasterqwentts/o7rfdwz/
false
1
t1_o7rfcfx
That looks broken, but in a special way. It looks like your prompt isn't being sent to the model. These "free form" results are what you get when you run inference without specifying a prompt. Try it via CLI to see if you get better results: `llama-cli -m Qwen3-Coder-Next-IQ4_XS.gguf -fa on -c 4096 --temp 0 -p "hi"`
1
0
2026-02-27T20:17:15
Chromix_
false
null
0
o7rfcfx
false
/r/LocalLLaMA/comments/1qz5uww/qwen3_coder_next_as_first_usable_coding_model_60/o7rfcfx/
false
1
t1_o7rf85x
Deprecated is the status in which something is discouraged for use and planned to be removed but is not yet removed. Given it's not yet April 3rd, it is obviously still active until then.
-1
1
2026-02-27T20:16:40
panthereal
false
null
0
o7rf85x
false
/r/LocalLLaMA/comments/1rg8dex/pewdiepie_finetuned_qwen25coder32b_to_beat/o7rf85x/
false
-1
t1_o7rf6ix
Slightly off topic, but what's the general consensus surrounding the new Qwen3.5 models? Are most people using the 35B-A3B model or the 27B dense model for coding?
1
0
2026-02-27T20:16:26
MainFunctions
false
null
0
o7rf6ix
false
/r/LocalLLaMA/comments/1rg8dex/pewdiepie_finetuned_qwen25coder32b_to_beat/o7rf6ix/
false
1
t1_o7rf60m
Depends what you mean by "beat", in my eyes. Purely knowledge-wise, GPT-4o will be superior as it's simply a much larger model. But for like a year now, we've had local models performing better than 4o intelligence-wise, like significantly so. Even Qwen3-4B-2507 & Qwen3-VL-4B beat it.
8
0
2026-02-27T20:16:22
ayylmaonade
false
null
0
o7rf60m
false
/r/LocalLLaMA/comments/1rg8dex/pewdiepie_finetuned_qwen25coder32b_to_beat/o7rf60m/
false
8
t1_o7rf2x2
So you are connecting one gpu to the cpu rail, and the other to the pcie? Or are both v100s on the cpu rail?
1
0
2026-02-27T20:15:56
Nota_ReAlperson
false
null
0
o7rf2x2
false
/r/LocalLLaMA/comments/1rgfude/computer_wont_boot_with_2_tesla_v100s/o7rf2x2/
false
1
t1_o7rf2lu
The LFM2 24B A2B MoE is missing from the list!
4
0
2026-02-27T20:15:53
Adventurous-Paper566
false
null
0
o7rf2lu
false
/r/LocalLLaMA/comments/1rghfqj/february_is_almost_over_are_you_satisfied/o7rf2lu/
false
4
t1_o7reyvf
Thank you :) We wrote up our investigation just a few hours ago as well :)
1
0
2026-02-27T20:15:21
danielhanchen
false
null
0
o7reyvf
false
/r/LocalLLaMA/comments/1rg4zqv/followup_qwen3535ba3b_7_communityrequested/o7reyvf/
false
1