name: string
body: string
score: int64
controversiality: int64
created: timestamp[us]
author: string
collapsed: bool
edited: timestamp[us]
gilded: int64
id: string
locked: bool
permalink: string
stickied: bool
ups: int64
t1_o8iv58c
Qwen instruct models have always been oddballs. Even the earlier Qwen bases, with the exception of maybe Qwen 7B from waaay back in the day, very clearly had some instruct in their data mix.
1
0
2026-03-04T02:17:21
Electroboots
false
null
0
o8iv58c
false
/r/LocalLLaMA/comments/1rjyngn/are_true_base_models_dead/o8iv58c/
false
1
t1_o8iv4n3
Hey, I came across this and was curious about trying it out. Do you still have this up?
1
0
2026-03-04T02:17:15
Meowliketh
false
null
0
o8iv4n3
false
/r/LocalLLaMA/comments/1gjfajq/i_got_laid_off_so_i_have_to_start_applying_to_as/o8iv4n3/
false
1
t1_o8iv1kw
Why? I don't get it. It seems to me the first table is evidence that the naive strategy really works well: just get the biggest Unsloth quant that fits (they keep getting better and seem the most reliable quants). But what would you do with the efficiency score? It is likely dataset-specific, so OP did well comparing wikitext and a closed custom set.
1
0
2026-03-04T02:16:45
erubim
false
null
0
o8iv1kw
false
/r/LocalLLaMA/comments/1rk5qmr/qwen3527b_q4_quantization_comparison/o8iv1kw/
false
1
t1_o8iv1g4
I can run this machine headless 99% of the time.
1
0
2026-03-04T02:16:44
AdCreative8703
false
null
0
o8iv1g4
false
/r/LocalLLaMA/comments/1rk90zw/what_to_pair_with_3080ti_for_qwen_35_27b/o8iv1g4/
false
1
t1_o8iuzwl
For sure; I forget exactly, but the Mac M4 Max has ~500 GB/s memory bandwidth; I believe the 3090 is something like twice that. So MoE makes the most sense for Macs with unified memory, while a smaller dense model makes more sense for discrete graphics. Curious what t/s you get with the 27B on the 3090? An M4 Max 128GB gets ~15 t/s for the 27B and ~30 t/s for the 122B.
1
0
2026-03-04T02:16:29
slypheed
false
null
0
o8iuzwl
false
/r/LocalLLaMA/comments/1rivckt/visualizing_all_qwen_35_vs_qwen_3_benchmarks/o8iuzwl/
false
1
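As a rough sanity check on the bandwidth reasoning above: decode speed is approximately memory bandwidth divided by the bytes read per token. A napkin-math sketch in Python; the ~500 GB/s figure is from the comment, while the ~15 GB weight footprint is my assumption for a ~27B dense model at ~4.5 bits per weight:

```python
# Napkin math: decode t/s is roughly memory bandwidth / bytes read per token.
bw_gb_s = 500.0      # M4 Max memory bandwidth, per the comment above
weights_gb = 15.0    # assumption: ~27B params * ~4.5 bits per weight / 8
ceiling_tps = bw_gb_s / weights_gb
print(f"theoretical ceiling ~{ceiling_tps:.0f} t/s")  # ~33 t/s
```

The observed ~15 t/s sitting well below that ceiling is expected; KV-cache reads and kernel overhead eat into the theoretical figure.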
t1_o8iul35
I switched over to llama-server on Linux, and I'm now getting 75 TPS. I guess AMD just keeps improving the Linux drivers. https://preview.redd.it/41bfyyxctxmg1.png?width=762&format=png&auto=webp&s=1d176d2f0b71a1be79325afe0b5d3fa485923ee1
1
0
2026-03-04T02:14:03
JackTheif52
false
null
0
o8iul35
false
/r/LocalLLaMA/comments/1rfrsr6/rx_7900_xtx_24g_rocm_72_with_r1_32b_awq_vs_gptq/o8iul35/
false
1
t1_o8iub0x
Yeah dude. Local llms are the future. Fuck the Anthropic and OpenAI techno feudalism!
1
0
2026-03-04T02:12:23
CalvaoDaMassa
false
null
0
o8iub0x
false
/r/LocalLLaMA/comments/1rk45ko/is_anyone_else_just_blown_away_that_this_local/o8iub0x/
false
1
t1_o8iu952
Every dang day. The new Qwen3-Coder-Next beats Sonnet 3.5 and Sonnet 3.7 in my personal benchmarks (just bug fixing my code, developing new features). I'm about to dive into Qwen3.5-122B-A10B this week to see if I can just use one model for both coding & chat...
1
0
2026-03-04T02:12:04
txgsync
false
null
0
o8iu952
false
/r/LocalLLaMA/comments/1rk45ko/is_anyone_else_just_blown_away_that_this_local/o8iu952/
false
1
t1_o8iu5s8
Bro bro, in 2026 no human is reading your cover letters and savoring your flow and actual voice.
1
0
2026-03-04T02:11:30
1-800-methdyke
false
null
0
o8iu5s8
false
/r/LocalLLaMA/comments/1rjsf7f/i_spent_6_hours_last_night_failing_to_finetune/o8iu5s8/
false
1
t1_o8iu2oq
Unlikely; manufacturers of complete systems like Apple should have years of parts warehoused.
1
0
2026-03-04T02:10:59
J0kooo
false
null
0
o8iu2oq
false
/r/LocalLLaMA/comments/1rk7n3u/apple_m5_pro_m5_max_just_announced_heres_what_it/o8iu2oq/
false
1
t1_o8ittsz
Would love to chat more. Using Claude for PE and other related work, but trying to migrate more to local.
1
0
2026-03-04T02:09:33
trailsman
false
null
0
o8ittsz
false
/r/LocalLLaMA/comments/1rk45ko/is_anyone_else_just_blown_away_that_this_local/o8ittsz/
false
1
t1_o8itpny
I just deployed both 35B and 122B to some production servers this week, and for the folks who use LLMs for recall on stored information, there is a large difference between the two. I guess if you are just using it for agentic loops, calling tools, etc, the difference may not be worth it.
1
0
2026-03-04T02:08:53
xfalcox
false
null
0
o8itpny
false
/r/LocalLLaMA/comments/1rk01ea/qwen35122b_basically_has_no_advantage_over_35b/o8itpny/
false
1
t1_o8itdvz
Are these models fully uncensored? I'm trying to figure out their use case.
1
0
2026-03-04T02:06:58
PromiseMePls
false
null
0
o8itdvz
false
/r/LocalLLaMA/comments/1rk45ko/is_anyone_else_just_blown_away_that_this_local/o8itdvz/
false
1
t1_o8it9xx
GGUF?
1
0
2026-03-04T02:06:19
sunshinecheung
false
null
0
o8it9xx
false
/r/LocalLLaMA/comments/1rk8knf/qwen3518breapa3bcoding_50_expertpruned/o8it9xx/
false
1
t1_o8it89f
RemindMe! 11 days 13 hours 46 minutes
1
0
2026-03-04T02:06:03
pot_sniffer
false
null
0
o8it89f
false
/r/LocalLLaMA/comments/1rk74ap/qwen359b_uncensored_aggressive_release_gguf/o8it89f/
false
1
t1_o8it78k
Yes
1
0
2026-03-04T02:05:53
PromiseMePls
false
null
0
o8it78k
false
/r/LocalLLaMA/comments/1rk45ko/is_anyone_else_just_blown_away_that_this_local/o8it78k/
false
1
t1_o8isv09
my meat brain can not do this advanced meth stuff.
1
0
2026-03-04T02:03:54
Succubus-Empress
false
null
0
o8isv09
false
/r/LocalLLaMA/comments/1rk5qmr/qwen3527b_q4_quantization_comparison/o8isv09/
false
1
t1_o8islr8
I can't wait to release the software I'm working on that will put those nice LLMs like Llama 70B and Mixtral 8x7B right on your simple GPU. I can't say anything. It's not vaporware. I'm working on it now. I'll post to this forum when I can spill all the beans and offer you the freedom from frontiers..... https://preview.redd.it/fx8oeg6drxmg1.jpeg?width=2268&format=pjpg&auto=webp&s=c9129497c3327f9d8a4f1505a38638b7e5797022
1
0
2026-03-04T02:02:24
Tough_Frame4022
false
null
0
o8islr8
false
/r/LocalLLaMA/comments/1rk45ko/is_anyone_else_just_blown_away_that_this_local/o8islr8/
false
1
t1_o8isjt1
[deleted]
1
0
2026-03-04T02:02:05
[deleted]
true
null
0
o8isjt1
false
/r/LocalLLaMA/comments/1rjqsv6/apple_unveils_m5_pro_and_m5_max_citing_up_to_4/o8isjt1/
false
1
t1_o8isgyh
How much improvement did it make in code generation quality tho?
1
0
2026-03-04T02:01:37
segmond
false
null
0
o8isgyh
false
/r/LocalLLaMA/comments/1rjt4hh/mcp_server_that_indexes_codebases_into_a/o8isgyh/
false
1
t1_o8isgtl
Except ROCm doesn't support my 7600. RIP.
1
0
2026-03-04T02:01:36
National_Meeting_749
false
null
0
o8isgtl
false
/r/LocalLLaMA/comments/1rjuccw/would_you_be_interested_in_a_fully_local_ai_3d/o8isgtl/
false
1
t1_o8isfgb
On that note, then, I stand corrected; my difficulty may have been caused by an older version. ROCm (to my knowledge) was a project started by AMD before AI became a big thing, but it never got much traction, so it got open-sourced, and only recently, due to the AI boom, did it start getting serious updates.
1
0
2026-03-04T02:01:23
Ill-Oil-2027
false
null
0
o8isfgb
false
/r/LocalLLaMA/comments/1rjuccw/would_you_be_interested_in_a_fully_local_ai_3d/o8isfgb/
false
1
t1_o8isbc2
$3,599 is the starting price for the Max, not the full configuration; the post did originally say "starting price." Anyway, I just edited it to make that clearer.
1
0
2026-03-04T02:00:42
luke_pacman
false
null
0
o8isbc2
false
/r/LocalLLaMA/comments/1rk7n3u/apple_m5_pro_m5_max_just_announced_heres_what_it/o8isbc2/
false
1
t1_o8is49d
I have LM Studio and I'm trying to connect it via localhost to OpenCode, but the API calls mismatch for some reason?
1
0
2026-03-04T01:59:33
16GB_of_ram
false
null
0
o8is49d
false
/r/LocalLLaMA/comments/1rjve9e/possible_to_run_local_model_for_opencode_with_m3/o8is49d/
false
1
t1_o8irw4c
Thanks for all the help. I managed to install it in the desktop folder, but yeah, I installed two models and my disk space is almost fully occupied. Let's see if I can make it work like that.
1
0
2026-03-04T01:58:14
lubezki
false
null
0
o8irw4c
false
/r/LocalLLaMA/comments/1mitsok/which_ai_image_generators_are_good_for/o8irw4c/
false
1
t1_o8iruwq
ComfyUI supports ROCm for image generation and MMAudio for both Linux and Windows. It is very stable and fast.
1
0
2026-03-04T01:58:02
AbsolutelyStateless
false
null
0
o8iruwq
false
/r/LocalLLaMA/comments/1rjuccw/would_you_be_interested_in_a_fully_local_ai_3d/o8iruwq/
false
1
t1_o8irogy
Neither polyphonic, nor does it play 😅 but a cool clickdummy.
1
0
2026-03-04T01:57:00
kyr0x0
false
null
0
o8irogy
false
/r/LocalLLaMA/comments/1rjd4pv/qwen_25_3_35_smallest_models_incredible/o8irogy/
false
1
t1_o8irk5x
That does not surprise me at all, lol! I've given up on trying to convince colleagues to use AI in ministry. I think I understand the risks (allowing the AI to dictate/decide the bottom line of your message/application being one of the most dangerous), but if you take responsibility for prayerful study and knowing where you want to get to, AI just becomes a means of tapping into the human knowledge of the ages...

I use it to connect the dots between psychology/sociology studies, classical and modern Bible commentaries (Protestant, Jewish, Messianic, Fathers of the Church, etc.) and a Bible passage. It also helps me immensely with creating outlines for preaching, as well as a 3-paragraph summary of my sermon, and a 1-paragraph version too, so I can reflect on "what is truly my bottom line/main argument here?". Another obvious use is with expanded materials/games for families, children, teenagers, handouts, social media posts, etc. based on a sermon. This used to take me days, literally. Now I've collected skills and prompts based on my preferences; I press play (once I have the final manuscript) and I have all of it freshly baked in the morning.

To be fair, it is not 100%. I often ask for 3-5 different versions when I don't even know what I want (as far as supportive materials). I can often get 2 or 3 very great results to choose from. This past weekend, I got a little carried away and had 10+ HTML presentations/slides to choose from. It almost took me more time to choose/modify than to do it from scratch. So that's one of the dangers... the sheer amount of output can be maddening to sort/choose from (it depletes your "judging" power from being used on more important things). Overall I find it extremely useful.

Here's a kicker -> You'll see it in the attached photo: I created a "COUNCILS" area... I just create different personas, agreeable and not, experts in different fields, with tools to search the web and read from local files for context, and let them talk freely for 30+ minutes. 60% of it is garbage, or nothing new. But the other 40% has provided me with genuinely new, deep insight that, upon reflection, has also been useful to other people, as it distills and applies powerful life lessons based on research + basic human experience across the ages. How do I know for sure that it is useful? (AI delusion of grandeur is a thing...) I've gotten great feedback from people...

In the end, collecting your own skills/prompts/practices/experiences into actionable directions, tools, sequences, etc. is the most valuable thing to do. Everything else will be (or already is) a widely available commodity when it comes to AI. Do share your experiences as well. Best.
1
0
2026-03-04T01:56:18
FigZestyclose7787
false
null
0
o8irk5x
false
/r/LocalLLaMA/comments/1rk7un4/im_running_a_graph_workflow_with_multiple/o8irk5x/
false
1
t1_o8iriqr
You see this character: . The . is used to end sentences. It’s called a full stop or a period. See how I ended that last sentence with one? It makes your paragraphs easier to understand for the reader. Please try using them. Thank you, People reading your shit.
1
0
2026-03-04T01:56:04
__JockY__
false
null
0
o8iriqr
false
/r/LocalLLaMA/comments/1rk2pll/step_flash_35_toolcall_and_thinking_godforsaken/o8iriqr/
false
1
t1_o8irdjd
I also uploaded a 25B (30% pruned) version which I have not tested yet: https://huggingface.co/Flagstone8878/Qwen3.5-25B-REAP-A3B-Coding
1
0
2026-03-04T01:55:13
17hoehbr
false
null
0
o8irdjd
false
/r/LocalLLaMA/comments/1rk8knf/qwen3518breapa3bcoding_50_expertpruned/o8irdjd/
false
1
t1_o8irbz9
OK, you mentioned the -nkvo flag; first time I've heard of it. What does it do and how do you use it? One last question: someone said to use headless mode to save 1-2 GB. Are you talking about VRAM or normal RAM savings?
1
0
2026-03-04T01:54:58
wisepal_app
false
null
0
o8irbz9
false
/r/LocalLLaMA/comments/1rk5qmr/qwen3527b_q4_quantization_comparison/o8irbz9/
false
1
t1_o8irbf1
> ROCm imo is to much of a hassle to get functioning

I've been using ROCm for both LLMs and diffusion, and the only time I had any difficulty was when the 9000-series models were brand new. It works great.
1
0
2026-03-04T01:54:53
AbsolutelyStateless
false
null
0
o8irbf1
false
/r/LocalLLaMA/comments/1rjuccw/would_you_be_interested_in_a_fully_local_ai_3d/o8irbf1/
false
1
t1_o8ir6dc
Yeah, I'm excited to throw some of these slimmer quants at my current task set. Hopefully ik will fix the current mmproj issues with 3.5. I wanna come home dude haha.
1
0
2026-03-04T01:54:03
dinerburgeryum
false
null
0
o8ir6dc
false
/r/LocalLLaMA/comments/1rk5qmr/qwen3527b_q4_quantization_comparison/o8ir6dc/
false
1
t1_o8ir15d
No 🤣 And never ask for permission
1
0
2026-03-04T01:53:12
kyr0x0
false
null
0
o8ir15d
false
/r/LocalLLaMA/comments/1rjd4pv/qwen_25_3_35_smallest_models_incredible/o8ir15d/
false
1
t1_o8iqveg
They're focused on telling others how good they are instead of actually building anything at all. Check social media and you'll see 🙈.
1
0
2026-03-04T01:52:14
kbderrr
false
null
0
o8iqveg
false
/r/LocalLLaMA/comments/1pfad2a/why_india_is_far_behind_in_ai_research/o8iqveg/
false
1
t1_o8iqtl3
My understanding is Macs have been limited by TOPS for prompt processing, so long prompts can take a very long time.
1
0
2026-03-04T01:51:56
BrianJThomas
false
null
0
o8iqtl3
false
/r/LocalLLaMA/comments/1rjqsv6/apple_unveils_m5_pro_and_m5_max_citing_up_to_4/o8iqtl3/
false
1
t1_o8iqrki
I have an M1 Max 64GB that I bought end of 2021 so this is the first model that feels upgrade-worthy, but I'd need to get the 128GB machine to be worth bothering with the upgrade at all.
1
0
2026-03-04T01:51:37
iansltx_
false
null
0
o8iqrki
false
/r/LocalLLaMA/comments/1rk7n3u/apple_m5_pro_m5_max_just_announced_heres_what_it/o8iqrki/
false
1
t1_o8iqr0b
Can the 122B run on a 96GB unified-RAM M3 Ultra?
1
0
2026-03-04T01:51:31
beau_pi
false
null
0
o8iqr0b
false
/r/LocalLLaMA/comments/1rk01ea/qwen35122b_basically_has_no_advantage_over_35b/o8iqr0b/
false
1
t1_o8iqn1o
Thank you, this is definitely helpful. Gives some clear pointers on when this might be a good fit. Well... the price point definitely makes me pause... a lot 😉
1
0
2026-03-04T01:50:52
Aprocastrinator
false
null
0
o8iqn1o
false
/r/LocalLLaMA/comments/1ria14c/dario_amodei_on_open_source_thoughts/o8iqn1o/
false
1
t1_o8iqm65
Hallucinated.
1
0
2026-03-04T01:50:43
iansltx_
false
null
0
o8iqm65
false
/r/LocalLLaMA/comments/1rk7n3u/apple_m5_pro_m5_max_just_announced_heres_what_it/o8iqm65/
false
1
t1_o8iqjg5
Interesting. I've mainly just used free cloud compute because what I'm doing is child's play for those systems. I've got a bunch more inputs and outputs than that, but I'm desperately trying to avoid making it vision-based. But I've got a much more modern processor than that, so I'm gonna at least see how much performance I can get.
1
0
2026-03-04T01:50:16
National_Meeting_749
false
null
0
o8iqjg5
false
/r/LocalLLaMA/comments/1rjuccw/would_you_be_interested_in_a_fully_local_ai_3d/o8iqjg5/
false
1
t1_o8iq9q5
1) fully textured
2) cleaned
3) rigged

If you can get the first two, you'd have something good. Get the third, it's great.
1
0
2026-03-04T01:48:40
youre__
false
null
0
o8iq9q5
false
/r/LocalLLaMA/comments/1rjuccw/would_you_be_interested_in_a_fully_local_ai_3d/o8iq9q5/
false
1
t1_o8iq6dw
😂
1
0
2026-03-04T01:48:07
jeff_actuate
false
null
0
o8iq6dw
false
/r/LocalLLaMA/comments/1rjqsv6/apple_unveils_m5_pro_and_m5_max_citing_up_to_4/o8iq6dw/
false
1
t1_o8iq5f5
.edu pricing is where it's at with Macs. 14" MBP:

- $2,409 for binned M5 Pro/48GB/1TB
- $2,599 for M5 Pro/48GB/1TB
- $2,779 for M5 Pro/64GB/1TB
- $3,929 for M5 Max/64GB/2TB
- $4,649 for M5 Max/128GB/2TB

It used to be a bigger premium on the Pro + RAM upgrades. The sweet spot now appears to be the non-binned M5 Pro with 64GB.

I've had a binned M4 Pro/48GB/1TB since they first came out, and it's a great machine for ~30B models @ 8-bit and smaller. I'm slightly tempted to upgrade, but all I'd be getting is a lot faster prefill and slightly faster token generation for my two grand. I'm not tossing that much context around most of the time to justify it. Going from 48GB to 64GB doesn't allow me access to many more models, and anything that big should really be running on a Max anyway.

The Max might make a dent in the prefill naysayers that pop up in this sub every time someone says "Apple". 😆
1
0
2026-03-04T01:47:58
MrPecunius
false
null
0
o8iq5f5
false
/r/LocalLLaMA/comments/1rk7n3u/apple_m5_pro_m5_max_just_announced_heres_what_it/o8iq5f5/
false
1
t1_o8iq3rj
I think it's just there as a frame of reference for their harness. Typically they use their own harness but if you are skeptical about that you would want to compare it against Claude models using baseline Claude Code harness.
1
0
2026-03-04T01:47:42
nullmove
false
null
0
o8iq3rj
false
/r/LocalLLaMA/comments/1rk5qzz/qwen3codernext_scored_40_on_latest_swerebench/o8iq3rj/
false
1
t1_o8iq3pf
Will the recent DRAM shortage also hit Mac's product line?
1
0
2026-03-04T01:47:41
Ralph_mao
false
null
0
o8iq3pf
false
/r/LocalLLaMA/comments/1rk7n3u/apple_m5_pro_m5_max_just_announced_heres_what_it/o8iq3pf/
false
1
t1_o8iq08g
You know what they say, when one door closes, another door opens... on the plane to your next job at 30000 feet with you on board...
1
0
2026-03-04T01:47:07
Cool-Chemical-5629
false
null
0
o8iq08g
false
/r/LocalLLaMA/comments/1rjtzyn/junyang_lin_has_left_qwen/o8iq08g/
false
1
t1_o8iplun
A MacBook Air? And it is effective? What are you running on it? I mean this sincerely. I just didn't think they could handle it....
1
0
2026-03-04T01:44:43
QuantumFrothLatte
false
null
0
o8iplun
false
/r/LocalLLaMA/comments/1ap1l81/how_to_host_your_llms_for_free/o8iplun/
false
1
t1_o8ipjh7
Does this support FIM? I tried Qwen3.5 4B and it seems to break on FIM and only provides empty responses.
1
0
2026-03-04T01:44:19
PANIC_EXCEPTION
false
null
0
o8ipjh7
false
/r/LocalLLaMA/comments/1rim0b3/jancode4b_a_small_codetuned_model_of_janv3/o8ipjh7/
false
1
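A quick way to probe FIM behavior like this is a raw completion request with the fill-in-the-middle special tokens. A minimal sketch against a local llama-server; the token names follow the Qwen2.5-Coder convention and are an assumption here, since whether this model keeps them is exactly the open question:

```python
# Hypothetical FIM probe against a local llama-server /completion endpoint.
# Special-token names are assumed (Qwen2.5-Coder convention), port too.
import requests

prompt = "<|fim_prefix|>def add(a, b):\n    <|fim_suffix|>\n<|fim_middle|>"
resp = requests.post(
    "http://localhost:8080/completion",
    json={"prompt": prompt, "n_predict": 32},
)
print(repr(resp.json()["content"]))  # an empty string reproduces the report
```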
t1_o8ipfr8
The smallest Q4 I guess. Idk if Q3 is viable considering the number of parameters (27B).
1
0
2026-03-04T01:43:42
TitwitMuffbiscuit
false
null
0
o8ipfr8
false
/r/LocalLLaMA/comments/1rk5qmr/qwen3527b_q4_quantization_comparison/o8ipfr8/
false
1
t1_o8ip578
Not that I know of.
1
0
2026-03-04T01:41:57
TitwitMuffbiscuit
false
null
0
o8ip578
false
/r/LocalLLaMA/comments/1rk5qmr/qwen3527b_q4_quantization_comparison/o8ip578/
false
1
t1_o8ip1vt
Alright so this week then?
1
0
2026-03-04T01:41:25
No_Afternoon_4260
false
null
0
o8ip1vt
false
/r/LocalLLaMA/comments/1rh095c/deepseek_v4_will_be_released_next_week_and_will/o8ip1vt/
false
1
t1_o8ip0qh
Alrighty then.
0
0
2026-03-04T01:41:13
3spky5u-oss
false
null
0
o8ip0qh
false
/r/LocalLLaMA/comments/1rk6rro/super_35_4b/o8ip0qh/
false
0
t1_o8ioxf8
Update: I managed to run a Qwen3.5 0.8B model on a V100 with vLLM using the `--skip-mm-profiling`, `--enforce-eager`, and `--gpu-memory-utilization 0.8` arguments, but weird things happen:

1. The memory consumption is **absurd**. A very simple prompt of "who are you" eats up 6GB of free memory for pp; it should actually be a few hundred megabytes.
2. The tg throughput is **ridiculous**, 1 tok/s at best.
3. If I deliberately choke the memory for the model with `--gpu-memory-utilization 0.9`, I can see a Triton OOM error inside this function: vllm/model_executor/layers/fla/ops/chunk_scaled_dot_kkt.py", line 141, in chunk_scaled_dot_kkt_fwd
1
0
2026-03-04T01:40:40
Substantial_Log_1707
false
null
0
o8ioxf8
false
/r/LocalLLaMA/comments/1rjjvqo/vllm_on_v100_for_qwen_newer_models/o8ioxf8/
false
1
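For anyone trying to reproduce the setup above, a minimal launch sketch; the model id and the use of the `vllm serve` CLI are assumptions, while the three flags are the ones named in the comment:

```python
# Sketch of the vLLM launch described above. Model id is a placeholder.
import subprocess

subprocess.run([
    "vllm", "serve", "Qwen/Qwen3.5-0.8B",   # placeholder model id
    "--skip-mm-profiling",                   # skip multimodal memory profiling
    "--enforce-eager",                       # no CUDA graphs (older GPU)
    "--gpu-memory-utilization", "0.8",       # leave headroom on the V100
])
```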
t1_o8iowka
Oh crap. But awesome.
1
0
2026-03-04T01:40:32
J_GUMBAINIA
false
null
0
o8iowka
false
/r/LocalLLaMA/comments/1rk74ap/qwen359b_uncensored_aggressive_release_gguf/o8iowka/
false
1
t1_o8iotja
I just re-read it. Sounds like some hippy witchcraft incantations. Welcome to a world where a website called Hugging Face hosts large language models.
1
0
2026-03-04T01:40:01
Badger-Purple
false
null
0
o8iotja
false
/r/LocalLLaMA/comments/1rjh5wg/unsloth_fixed_version_of_qwen3535ba3b_is/o8iotja/
false
1
t1_o8iot2j
Every time someone says they use qwen2.5, either they are a bot or took advice from one with absolutely 0 research or brain power put in.
1
0
2026-03-04T01:39:56
Emotional-Baker-490
false
null
0
o8iot2j
false
/r/LocalLLaMA/comments/1rk17h6/i_stopped_vibechecking_my_llms_and_started_using/o8iot2j/
false
1
t1_o8ionnv
It is a rabbit hole, and it's worse with benchmarks. Like: which one is not completely saturated by recent models and representative of the type of tasks I run? Is it qualitative, or are there bad/vague questions in the dataset? What's the latest, the quickest to run? Eval is hard.
1
0
2026-03-04T01:39:03
TitwitMuffbiscuit
false
null
0
o8ionnv
false
/r/LocalLLaMA/comments/1rk5qmr/qwen3527b_q4_quantization_comparison/o8ionnv/
false
1
t1_o8iomh1
A few feature wish-list items:

1. Native quad-remeshing. Triangle meshes are a nightmare to sculpt or animate.
2. No baking shadows into the texture.
3. Model generation with a basic skeleton and decent skin weights.
1
0
2026-03-04T01:38:51
aiyakisoba
false
null
0
o8iomh1
false
/r/LocalLLaMA/comments/1rjuccw/would_you_be_interested_in_a_fully_local_ai_3d/o8iomh1/
false
1
t1_o8ioh6p
I have 16 GB VRAM and 96 GB DDR5 RAM. Which quant do you suggest, and with which flags?
1
0
2026-03-04T01:38:00
wisepal_app
false
null
0
o8ioh6p
false
/r/LocalLLaMA/comments/1rk5qmr/qwen3527b_q4_quantization_comparison/o8ioh6p/
false
1
t1_o8ioex4
Thanks, downloading to check
1
0
2026-03-04T01:37:36
Legitimate-ChosenOne
false
null
0
o8ioex4
false
/r/LocalLLaMA/comments/1rk74ap/qwen359b_uncensored_aggressive_release_gguf/o8ioex4/
false
1
t1_o8inu96
Significantly more parameters = gooder.
1
0
2026-03-04T01:34:13
TheRealMasonMac
false
null
0
o8inu96
false
/r/LocalLLaMA/comments/1rjnm7z/9b_or_35b_a3b_moe_for_16gb_vram_and_64gb_ram/o8inu96/
false
1
t1_o8inr5v
!remindme 48h
1
0
2026-03-04T01:33:42
FusionCow
false
null
0
o8inr5v
false
/r/LocalLLaMA/comments/1rk74ap/qwen359b_uncensored_aggressive_release_gguf/o8inr5v/
false
1
t1_o8inqa4
WTH?! This is ridiculous...
1
0
2026-03-04T01:33:34
Fun_Smoke4792
false
null
0
o8inqa4
false
/r/LocalLLaMA/comments/1rjtzyn/junyang_lin_has_left_qwen/o8inqa4/
false
1
t1_o8ino5o
Is there a list of what topics these 0/100 refusals cover?
1
0
2026-03-04T01:33:13
Complex-Maybe3123
false
null
0
o8ino5o
false
/r/LocalLLaMA/comments/1rjwm8i/qwen359b_abliterated_0_refusals_vision/o8ino5o/
false
1
t1_o8innm1
I was like, wait a minute... Anyway, thanks for experimenting.
1
0
2026-03-04T01:33:08
TitwitMuffbiscuit
false
null
0
o8innm1
false
/r/LocalLLaMA/comments/1rk5qmr/qwen3527b_q4_quantization_comparison/o8innm1/
false
1
t1_o8innif
I have almost weekly debates with a friend who is in ministry about the use of AI. I can confidently say he would hate this. I am really hopeful the tech can be put to good use in that area, so I'd love to hear more about what you are doing.
1
0
2026-03-04T01:33:07
ryanp102694
false
null
0
o8innif
false
/r/LocalLLaMA/comments/1rk7un4/im_running_a_graph_workflow_with_multiple/o8innif/
false
1
t1_o8inmwo
I will be messaging you in 7 days on [**2026-03-11 01:32:05 UTC**](http://www.wolframalpha.com/input/?i=2026-03-11%2001:32:05%20UTC%20To%20Local%20Time) to remind you of [**this link**](https://www.reddit.com/r/LocalLLaMA/comments/1rjuccw/would_you_be_interested_in_a_fully_local_ai_3d/o8inh6v/?context=3) [**CLICK THIS LINK**](https://www.reddit.com/message/compose/?to=RemindMeBot&subject=Reminder&message=%5Bhttps%3A%2F%2Fwww.reddit.com%2Fr%2FLocalLLaMA%2Fcomments%2F1rjuccw%2Fwould_you_be_interested_in_a_fully_local_ai_3d%2Fo8inh6v%2F%5D%0A%0ARemindMe%21%202026-03-11%2001%3A32%3A05%20UTC) to send a PM to also be reminded and to reduce spam. ^(Parent commenter can ) [^(delete this message to hide from others.)](https://www.reddit.com/message/compose/?to=RemindMeBot&subject=Delete%20Comment&message=Delete%21%201rjuccw) ***** |[^(Info)](https://www.reddit.com/r/RemindMeBot/comments/e1bko7/remindmebot_info_v21/)|[^(Custom)](https://www.reddit.com/message/compose/?to=RemindMeBot&subject=Reminder&message=%5BLink%20or%20message%20inside%20square%20brackets%5D%0A%0ARemindMe%21%20Time%20period%20here)|[^(Your Reminders)](https://www.reddit.com/message/compose/?to=RemindMeBot&subject=List%20Of%20Reminders&message=MyReminders%21)|[^(Feedback)](https://www.reddit.com/message/compose/?to=Watchful1&subject=RemindMeBot%20Feedback)| |-|-|-|-|
1
0
2026-03-04T01:33:01
RemindMeBot
false
null
0
o8inmwo
false
/r/LocalLLaMA/comments/1rjuccw/would_you_be_interested_in_a_fully_local_ai_3d/o8inmwo/
false
1
t1_o8inkf2
[removed]
1
0
2026-03-04T01:32:37
[deleted]
true
null
0
o8inkf2
false
/r/LocalLLaMA/comments/1rjtzyn/junyang_lin_has_left_qwen/o8inkf2/
false
1
t1_o8injeg
Or, if launched, just suck, unlike Qwen3.5, which has pushed the frontiers of small/medium LLMs.
1
0
2026-03-04T01:32:27
MrRandom04
false
null
0
o8injeg
false
/r/LocalLLaMA/comments/1rjtzyn/junyang_lin_has_left_qwen/o8injeg/
false
1
t1_o8inh6v
RemindMe! 1 Week "Check the repo"
1
0
2026-03-04T01:32:05
shikima
false
null
0
o8inh6v
false
/r/LocalLLaMA/comments/1rjuccw/would_you_be_interested_in_a_fully_local_ai_3d/o8inh6v/
false
1
t1_o8ingjf
What are your use-cases?
1
0
2026-03-04T01:31:59
overand
false
null
0
o8ingjf
false
/r/LocalLLaMA/comments/1rk6rro/super_35_4b/o8ingjf/
false
1
t1_o8in7dj
Love these analyses. Did AesSedai not quant a 27B? I recall his IQ4 being the best for the 35B model.
1
0
2026-03-04T01:30:28
metigue
false
null
0
o8in7dj
false
/r/LocalLLaMA/comments/1rk5qmr/qwen3527b_q4_quantization_comparison/o8in7dj/
false
1
t1_o8in6tn
> 30gb if im not mistaken

That's not nearly enough. As soon as you start downloading models, disk space will evaporate fast. SD1.5 models are about 4 GB in size, per model file. SDXL and Pony models can be 6 GB to 12 GB in size. Flux.1 and Z-Image models can be anywhere between 8 GB and 24+ GB per file. My own installation of "Invoke" consumes 320+ GB as of now:

    du -hs invokeai
    323G    invokeai

> I will create a folder on my desktop

As their help pages suggest: try installing in a different path, or even on a different disk drive.
1
0
2026-03-04T01:30:23
scorp123_CH
false
null
0
o8in6tn
false
/r/LocalLLaMA/comments/1mitsok/which_ai_image_generators_are_good_for/o8in6tn/
false
1
t1_o8in09j
Any luck at getting all layers to GPU for an RTX5080?
1
0
2026-03-04T01:29:17
InternationalNebula7
false
null
0
o8in09j
false
/r/LocalLLaMA/comments/1rk5qmr/qwen3527b_q4_quantization_comparison/o8in09j/
false
1
t1_o8imzmp
No, I think Anthropic, OpenAI, and Google will go that way in the future.
1
0
2026-03-04T01:29:11
Guilty_Nothing_2858
false
null
0
o8imzmp
false
/r/LocalLLaMA/comments/1rk6ulw/prediction_nextgen_frontier_llms_will_be/o8imzmp/
false
1
t1_o8imta3
I'm running headless, thankfully.
1
0
2026-03-04T01:28:08
InternationalNebula7
false
null
0
o8imta3
false
/r/LocalLLaMA/comments/1rk5qmr/qwen3527b_q4_quantization_comparison/o8imta3/
false
1
t1_o8imsjj
My args:

    ./llama-server \
      --model "$MODEL" \
      --fit on \
      --tensor-split 0.45,0.55 \
      --no-mmap \
      --flash-attn on \
      --checkpoint-every-nb 3 \
      --ctx-size 253952 \
      --parallel 1 \
      --threads 12 \
      --threads-batch 12 \
      --cache-ram -1 \
      --batch-size 2048 \
      --ubatch-size 1024 \
      --chat-template-kwargs '{"enable_thinking":true}' \
      --temp "$USER_TEMP" \
      --top-p 0.95 \
      --top-k 20 \
      --min-p 0.00 \
      --host 0.0.0.0 \
      --port 8081

Error persists.

    eval time = 13055.63 ms / 842 tokens ( 15.51 ms per token, 64.49 tokens per second)
    total time = 27547.58 ms / 31489 tokens
    slot release: id 0 | task 88 | stop processing: n_tokens = 34516, truncated = 0
    srv update_slots: all slots are idle
    srv params_from_: Chat format: peg-constructed
    slot get_availabl: id 0 | task -1 | selected slot by LRU, t_last = 23710558623
    srv get_availabl: updating prompt cache
    srv prompt_save: - saving prompt with length 34516, total state size = 737.349 MiB
    srv alloc: - removing obsolete cached prompt with length 3028
    srv load: - looking for better prompt, base f_keep = 0.000, sim = 0.001
    srv update: - cache state: 1 prompts, 1177.042 MiB (limits: 0.000 MiB, 253952 tokens, 253952 est)
    srv update: - prompt 0x650b45a66ba0: 34516 tokens, checkpoints: 7, 1177.042 MiB
    srv get_availabl: prompt cache update took 503.56 ms
    slot launch_slot_: id 0 | task -1 | sampler chain: logits -> ?penalties -> ?dry -> ?top-n-sigma -> top-k -> ?typical -> top-p -> ?min-p -> ?xtc -> ?temp-ext -> dist
    slot launch_slot_: id 0 | task 946 | processing task, is_child = 0
    slot update_slots: id 0 | task 946 | new prompt, n_ctx_slot = 253952, n_keep = 0, task.n_tokens = 1053
    slot update_slots: id 0 | task 946 | n_past = 1, slot.prompt.tokens.size() = 34516, seq_id = 0, pos_min = 34515, n_swa = 1
    slot update_slots: id 0 | task 946 | forcing full prompt re-processing due to lack of cache data (likely due to SWA or hybrid/recurrent memory, see https://github.com/ggml-org/llama.cpp/pull/13194#issuecomment-2868343055)
    slot update_slots: id 0 | task 946 | erased invalidated context checkpoint (pos_min = 2047, pos_max = 2047, n_tokens = 2048, n_swa = 1, size = 62.813 MiB)
    slot update_slots: id 0 | task 946 | erased invalidated context checkpoint (pos_min = 2431, pos_max = 2431, n_tokens = 2432, n_swa = 1, size = 62.813 MiB)
    slot update_slots: id 0 | task 946 | erased invalidated context checkpoint (pos_min = 9171, pos_max = 9171, n_tokens = 9172, n_swa = 1, size = 62.813 MiB)
    slot update_slots: id 0 | task 946 | erased invalidated context checkpoint (pos_min = 15315, pos_max = 15315, n_tokens = 15316, n_swa = 1, size = 62.813 MiB)
    slot update_slots: id 0 | task 946 | erased invalidated context checkpoint (pos_min = 21459, pos_max = 21459, n_tokens = 21460, n_swa = 1, size = 62.813 MiB)
    slot update_slots: id 0 | task 946 | erased invalidated context checkpoint (pos_min = 27603, pos_max = 27603, n_tokens = 27604, n_swa = 1, size = 62.813 MiB)
    slot update_slots: id 0 | task 946 | erased invalidated context checkpoint (pos_min = 33162, pos_max = 33162, n_tokens = 33163, n_swa = 1, size = 62.813 MiB)
    slot update_slots: id 0 | task 946 | n_tokens = 0, memory_seq_rm [0, end)
    slot update_slots: id 0 | task 946 | prompt processing progress, n_tokens = 541, batch.n_tokens = 541, progress = 0.513770
    slot update_slots: id 0 | task 946 | n_tokens = 541, memory_seq_rm [541, end)
    slot init_sampler: id 0 | task 946 | init sampler, took 0.08 ms, tokens: text = 1053, total = 1053
    slot update_slots: id 0 | task 946 | prompt processing done, n_tokens = 1053, batch.n_tokens = 512
    slot update_slots: id 0 | task 946 | created context checkpoint 1 of 8 (pos_min = 540, pos_max = 540, n_tokens = 541, size = 62.813 MiB)
    srv params_from_: Chat format: peg-constructed
    slot print_timing: id 0 | task 946 | prompt eval time = 519.16 ms / 1053 tokens ( 0.49 ms per token, 2028.29 tokens per second)
    eval time = 67027.27 ms / 4942 tokens ( 13.56 ms per token, 73.73 tokens per second)
    total time = 67546.43 ms / 5995 tokens
    slot release: id 0 | task 946 | stop processing: n_tokens = 5994, truncated = 0
    slot get_availabl: id 0 | task -1 | selected slot by LRU, t_last = 23778688609
    srv get_availabl: updating prompt cache
    srv prompt_save: - saving prompt with length 5994, total state size = 179.952 MiB
    srv log_server_r: done request: POST /chat/completions 127.0.0.1 200
    srv params_from_: Chat format: peg-constructed
    srv load: - looking for better prompt, base f_keep = 0.000, sim = 0.000
    srv update: - cache state: 2 prompts, 1419.807 MiB (limits: 0.000 MiB, 253952 tokens, 253952 est)
    srv update: - prompt 0x650b45a66ba0: 34516 tokens, checkpoints: 7, 1177.042 MiB
    srv update: - prompt 0x650b4ef487a0: 5994 tokens, checkpoints: 1, 242.766 MiB
    srv get_availabl: prompt cache update took 114.18 ms
    slot launch_slot_: id 0 | task -1 | sampler chain: logits -> ?penalties -> ?dry -> ?top-n-sigma -> top-k -> ?typical -> top-p -> ?min-p -> ?xtc -> ?temp-ext -> dist
    slot launch_slot_: id 0 | task 1826 | processing task, is_child = 0
    slot update_slots: id 0 | task 1826 | new prompt, n_ctx_slot = 253952, n_keep = 0, task.n_tokens = 34469
    slot update_slots: id 0 | task 1826 | n_past = 1, slot.prompt.tokens.size() = 5994, seq_id = 0, pos_min = 5993, n_swa = 1
    slot update_slots: id 0 | task 1826 | forcing full prompt re-processing due to lack of cache data (likely due to SWA or hybrid/recurrent memory, see https://github.com/ggml-org/llama.cpp/pull/13194#issuecomment-2868343055)
    slot update_slots: id 0 | task 1826 | erased invalidated context checkpoint (pos_min = 540, pos_max = 540, n_tokens = 541, n_swa = 1, size = 62.813 MiB)

Willing to take further testing steps, if you'd like.
1
0
2026-03-04T01:28:01
Xp_12
false
null
0
o8imsjj
false
/r/LocalLLaMA/comments/1rjxmvo/qwen35_checkpointing_fix_pr_testing/o8imsjj/
false
1
t1_o8imrc7
The 27B was able to one-shot a Breakout game in HTML in my testing; in fact it managed to one-shot Tetris and Snake as well… Doom, well, it tried. I think the MoE model is great for something like a chatbot where fast responses trump precision, but for coding or structured outputs the dense models (or a large-parameter MoE model) seem to be the way to go. Still, I am completely blown away by the quality of these models for their size. Even the 9B parameter model is very capable, and with an MLX version it runs very nicely on my 24GB M4 MacBook Pro.
1
0
2026-03-04T01:27:49
Idarubicin
false
null
0
o8imrc7
false
/r/LocalLLaMA/comments/1rk01ea/qwen35122b_basically_has_no_advantage_over_35b/o8imrc7/
false
1
t1_o8imnxr
Yea guilty. I kept the attention, output and embedding tensors in Q8 (and ssm_out in bf16) since I’m on a 24+16G build and often do long horizon work. Still, I’ll experiment with mradermacher’s Q4 based on your efficiency chart. Thanks as always for putting this together!
1
0
2026-03-04T01:27:16
dinerburgeryum
false
null
0
o8imnxr
false
/r/LocalLLaMA/comments/1rk5qmr/qwen3527b_q4_quantization_comparison/o8imnxr/
false
1
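For reference, a sketch of how a mixed quant like the one described can be produced with llama.cpp's `llama-quantize`. The embedding/output overrides shown are real flags in recent llama.cpp builds, but verify availability in your version; keeping attention tensors at Q8 as well would need the newer per-tensor override option, and all paths here are placeholders:

```python
# Sketch: Q4_K_M body with Q8_0 embedding and output tensors, as described.
import subprocess

subprocess.run([
    "./llama-quantize",
    "--token-embedding-type", "q8_0",   # keep token embeddings at 8-bit
    "--output-tensor-type", "q8_0",     # keep the output head at 8-bit
    "model-f16.gguf",                   # placeholder input path
    "model-Q4_K_M-mix.gguf",            # placeholder output path
    "Q4_K_M",
])
```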
t1_o8iml0c
I download model. I copy paste vllm command from model card, everything works. 
1
0
2026-03-04T01:26:48
laterbreh
false
null
0
o8iml0c
false
/r/LocalLLaMA/comments/1rjb7yk/psa_if_you_want_to_test_new_models_use/o8iml0c/
false
1
t1_o8imihg
How can you guys run 4B models on your Android phone? I know Google has this AI Edge Gallery app, which can run Gemma 3 4B very well, but it doesn't support GGUF models, and the app is in such a beta state that it doesn't even have chat history. Qwen3.5 4B in our pockets would be a game changer.
1
0
2026-03-04T01:26:23
JinPing89
false
null
0
o8imihg
false
/r/LocalLLaMA/comments/1rjp08s/qwen354b_uncensored_aggressive_release_gguf/o8imihg/
false
1
t1_o8imihx
Apple should now focus on building their machines for AI/LLMs. Everyone knows you can run Premiere Pro on a MacBook Air; that's not rocket science by today's standards.
1
0
2026-03-04T01:26:23
Whyme-__-
false
null
0
o8imihx
false
/r/LocalLLaMA/comments/1rjqsv6/apple_unveils_m5_pro_and_m5_max_citing_up_to_4/o8imihx/
false
1
t1_o8im8mk
He can get a job at any place. Zuck is probably sending out a small army of recruiters.
1
0
2026-03-04T01:24:47
sunflowerapp
false
null
0
o8im8mk
false
/r/LocalLLaMA/comments/1rjtzyn/junyang_lin_has_left_qwen/o8im8mk/
false
1
t1_o8im7ss
This is a bit of slop. For one, hitting the max RAM bandwidth requires getting the full-fat Max chip, which will run you $4199. The standard Max chip is slower and doesn't let you hit 128GB of RAM. Still a great machine. But basically you're buying something at 2.5x the cost of a Strix Halo for 2.5x the speed of a Strix Halo.
1
0
2026-03-04T01:24:39
iansltx_
false
null
0
o8im7ss
false
/r/LocalLLaMA/comments/1rk7n3u/apple_m5_pro_m5_max_just_announced_heres_what_it/o8im7ss/
false
1
t1_o8im739
For an RTX 5080, is there any reason to go full NVFP4?
1
0
2026-03-04T01:24:32
InternationalNebula7
false
null
0
o8im739
false
/r/LocalLLaMA/comments/1rjff88/how_do_i_get_the_best_speed_out_of_qwen_35_9b_in/o8im739/
false
1
t1_o8im51y
[removed]
1
0
2026-03-04T01:24:11
[deleted]
true
null
0
o8im51y
false
/r/LocalLLaMA/comments/1rjuccw/would_you_be_interested_in_a_fully_local_ai_3d/o8im51y/
false
1
t1_o8im2bh
I am very interested in this, though not as a game developer but as a player. It would be interesting for me if the model could fully generate the environment and basic physics. In general: give a large language model the task of creating a world and the rules of this world, and of filling the virtual world with content. Perhaps make this process step-by-step and modular (choosing each individual element of the world from several generated variants). I'm sorry if my English is bad. I use Offline Translation (android) for translation, it's a great app!
1
0
2026-03-04T01:23:43
Penis-Thicc-9586
false
null
0
o8im2bh
false
/r/LocalLLaMA/comments/1rjuccw/would_you_be_interested_in_a_fully_local_ai_3d/o8im2bh/
false
1
t1_o8im1pw
You publish your distilled intent for llms to train on? lol
1
0
2026-03-04T01:23:37
numberwitch
false
null
0
o8im1pw
false
/r/LocalLLaMA/comments/1rk6ulw/prediction_nextgen_frontier_llms_will_be/o8im1pw/
false
1
t1_o8ilx0y
I will be messaging you in 12 hours on [**2026-03-04 13:21:58 UTC**](http://www.wolframalpha.com/input/?i=2026-03-04%2013:21:58%20UTC%20To%20Local%20Time) to remind you of [**this link**](https://www.reddit.com/r/LocalLLaMA/comments/1rk74ap/qwen359b_uncensored_aggressive_release_gguf/o8ilrhz/?context=3) [**CLICK THIS LINK**](https://www.reddit.com/message/compose/?to=RemindMeBot&subject=Reminder&message=%5Bhttps%3A%2F%2Fwww.reddit.com%2Fr%2FLocalLLaMA%2Fcomments%2F1rk74ap%2Fqwen359b_uncensored_aggressive_release_gguf%2Fo8ilrhz%2F%5D%0A%0ARemindMe%21%202026-03-04%2013%3A21%3A58%20UTC) to send a PM to also be reminded and to reduce spam. ^(Parent commenter can ) [^(delete this message to hide from others.)](https://www.reddit.com/message/compose/?to=RemindMeBot&subject=Delete%20Comment&message=Delete%21%201rk74ap) ***** |[^(Info)](https://www.reddit.com/r/RemindMeBot/comments/e1bko7/remindmebot_info_v21/)|[^(Custom)](https://www.reddit.com/message/compose/?to=RemindMeBot&subject=Reminder&message=%5BLink%20or%20message%20inside%20square%20brackets%5D%0A%0ARemindMe%21%20Time%20period%20here)|[^(Your Reminders)](https://www.reddit.com/message/compose/?to=RemindMeBot&subject=List%20Of%20Reminders&message=MyReminders%21)|[^(Feedback)](https://www.reddit.com/message/compose/?to=Watchful1&subject=RemindMeBot%20Feedback)| |-|-|-|-|
1
0
2026-03-04T01:22:52
RemindMeBot
false
null
0
o8ilx0y
false
/r/LocalLLaMA/comments/1rk74ap/qwen359b_uncensored_aggressive_release_gguf/o8ilx0y/
false
1
t1_o8ilrhz
!remindme 12h
1
0
2026-03-04T01:21:58
Charming_Skirt3363
false
null
0
o8ilrhz
false
/r/LocalLLaMA/comments/1rk74ap/qwen359b_uncensored_aggressive_release_gguf/o8ilrhz/
false
1
t1_o8ilnk8
$3599 for a machine with a state-of-the-art CPU and 128 GB of RAM: is that a real price it's going on sale at somewhere, or just hallucinated by the LLM you used to write this post?
1
0
2026-03-04T01:21:20
Marshall_Lawson
false
null
0
o8ilnk8
false
/r/LocalLLaMA/comments/1rk7n3u/apple_m5_pro_m5_max_just_announced_heres_what_it/o8ilnk8/
false
1
t1_o8ilgr5
Stefanie
1
0
2026-03-04T01:20:13
somatt
false
null
0
o8ilgr5
false
/r/LocalLLaMA/comments/1rk4gh5/sad_day_for_open_source_gwens_boss_has_left/o8ilgr5/
false
1
t1_o8il0xd
As an engine developer who wants to create some proof-of-concept games to validate my work but isn't really a creative type, YES I am absolutely interested in this. I would probably use this for adding custom background assets and such, especially if it could handle texturing as well. I'd love to see it be able to take some reference data and generate assets in a similar style.
1
0
2026-03-04T01:17:39
nullptr777
false
null
0
o8il0xd
false
/r/LocalLLaMA/comments/1rjuccw/would_you_be_interested_in_a_fully_local_ai_3d/o8il0xd/
false
1
t1_o8il0tg
It's a chat template kwarg. `chat_template_kwargs = { "enable_thinking": false }` needs to make it to your LLM server, either as a launch argument or via the API. Personally, I use the API; here's how you set that up in open-webui: https://preview.redd.it/0dszvuhgjxmg1.png?width=1080&format=png&auto=webp&s=e7d344550af5ae0b5826ab08356c6aabd329ec27 It's under `Admin Panel -> Models -> [Model Name] -> Advanced Parameters`
1
0
2026-03-04T01:17:38
Thunderstarer
false
null
0
o8il0tg
false
/r/LocalLLaMA/comments/1rk2jnj/has_anyone_found_a_way_to_stop_qwen_35_35b_3b/o8il0tg/
false
1
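For the API route, a minimal sketch of that request against an OpenAI-compatible endpoint; the URL, port, and model name are placeholders, and per-request `chat_template_kwargs` support depends on the server build:

```python
# Hypothetical per-request toggle of thinking via chat_template_kwargs.
import requests

resp = requests.post(
    "http://localhost:8080/v1/chat/completions",   # placeholder endpoint
    json={
        "model": "qwen3.5",                        # placeholder model name
        "messages": [{"role": "user", "content": "Hi"}],
        "chat_template_kwargs": {"enable_thinking": False},
    },
)
print(resp.json()["choices"][0]["message"]["content"])
```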
t1_o8iky27
It should be possible to run it via CPU processing as long as the game doesn't use too much CPU on its own. I tried a training test using the CPU-only branch of PyTorch (only ~500MB of packages instead of 3-4GB). I was training a fresh model to play 2048: 16 inputs and 4 outputs (model parameters are set as 16, 32, 16, 4), using color matching for the 4x4 grid of numbers, or Tesseract OCR if the color doesn't match the list (though OCR does mix up 32 and think it's 8... so I use it as a last-resort option; it's also fairly slow). And I have this running off a 2-core, 4-thread, 2.4GHz 9th-gen Pentium Gold processor. Without OCR, and running a simulated 2048 via code instead of the actual program to train it, the model reaches ~150-300 moves in less than 0.1 seconds. All on a CPU that has neither the AVX nor AVX2 instruction sets and uses 15W with all 4 threads running at the max clock speed of 2.4GHz.
1
0
2026-03-04T01:17:11
Ill-Oil-2027
false
null
0
o8iky27
false
/r/LocalLLaMA/comments/1rjuccw/would_you_be_interested_in_a_fully_local_ai_3d/o8iky27/
false
1
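The network described above is tiny. A sketch of it in CPU-only PyTorch, using the stated layer sizes (16, 32, 16, 4); everything else, including the input encoding, is assumed:

```python
# Sketch of the 2048 policy net described above: 16 board cells in,
# hidden layers of 32 and 16, 4 move logits out. Runs fine on CPU.
import torch
import torch.nn as nn

policy = nn.Sequential(
    nn.Linear(16, 32), nn.ReLU(),   # 4x4 board, one value per cell
    nn.Linear(32, 16), nn.ReLU(),
    nn.Linear(16, 4),               # logits for up/down/left/right
)

board = torch.rand(1, 16)           # placeholder encoded board state
move = policy(board).argmax(dim=1)  # greedy move selection
print(move.item())
```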
t1_o8ikmtf
I'd rather buy 4 Blackwell 6000 Pros, put them in a system with DDR5, and keep the rest of the change.
1
0
2026-03-04T01:15:21
segmond
false
null
0
o8ikmtf
false
/r/LocalLLaMA/comments/1rjwjf3/hardware_usaca_8gpu_a100_40gb_sxm4_cluster_2x/o8ikmtf/
false
1
t1_o8ikfzz
Building now. Will report back.
1
0
2026-03-04T01:14:16
Xp_12
false
null
0
o8ikfzz
false
/r/LocalLLaMA/comments/1rjxmvo/qwen35_checkpointing_fix_pr_testing/o8ikfzz/
false
1
t1_o8ikf4e
30GB, if I'm not mistaken. Yeah, I am supposed to be able to install there. I will create a folder on my desktop and try installing there.
1
0
2026-03-04T01:14:08
lubezki
false
null
0
o8ikf4e
false
/r/LocalLLaMA/comments/1mitsok/which_ai_image_generators_are_good_for/o8ikf4e/
false
1
t1_o8ikddw
You're a gem, mate. Some of us really need to see stuff like this. Thanks. This might be just the post I needed to jump-start me back into figuring out how to run similar comparative tests; I started looking into this casually several months back but got distracted and never went back to it.

What I'd love to be able to do is get qualitative comparisons across a range of different parameter counts at different quantisation levels. Unfortunately you often find tests for the specific model you are interested in, but it's only pp/tg reported; or if it is a more qualitative comparison of model vs model, it's never the model variant you can fit, it's always the full or 'wrong' weights.

Though it looks like I need to immerse myself a bit more in the academia of LLMs first to get a handle on some of the principles you were talking about. For example, I have come to acknowledge that I am looking for lower KL divergence, but what does that **actually** mean? I couldn't explain it properly to someone, because I still can't really explain it to myself. I'm still only at 'number bigger or smaller' comprehension.
1
0
2026-03-04T01:13:51
munkiemagik
false
null
0
o8ikddw
false
/r/LocalLLaMA/comments/1rk5qmr/qwen3527b_q4_quantization_comparison/o8ikddw/
false
1
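On the KL divergence question above: for quant comparisons it measures, averaged over token positions, how far the quantized model's next-token distribution q drifts from the full-precision model's p; zero means identical behavior, and "lower KL" means the quant behaves more like the original. A toy sketch with invented numbers:

```python
# KL(p || q) = sum_i p_i * log(p_i / q_i), in nats.
import math

p = [0.70, 0.20, 0.10]   # toy full-precision next-token probabilities
q = [0.60, 0.30, 0.10]   # toy quantized next-token probabilities
kl = sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))
print(f"KL(p||q) = {kl:.4f} nats")  # ~0.0268 here; smaller is better
```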