name: string
body: string
score: int64
controversiality: int64
created: timestamp[us]
author: string
collapsed: bool
edited: timestamp[us]
gilded: int64
id: string
locked: bool
permalink: string
stickied: bool
ups: int64
t1_o83hwpa
They might just be on the spectrum and not really get why the rest of the family doesn’t care
7
0
2026-03-01T18:43:03
bittabet
false
null
0
o83hwpa
false
/r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o83hwpa/
false
7
t1_o83hlce
At full context (260k) it slows down considerably, but still around 30 tg and between 200-300 pp at worst, 700 or so if mostly cached. Still workable, but I tend to start new sessions regularly now. 1800 pp and 60 tg when fresh
1
0
2026-03-01T18:41:36
stormy1one
false
null
0
o83hlce
false
/r/LocalLLaMA/comments/1rhchvi/qwen35_family_running_notes/o83hlce/
false
1
t1_o83hkcx
Smart ones
-3
1
2026-03-01T18:41:28
kosdfjhgi0ser09gniod
false
null
0
o83hkcx
false
/r/LocalLLaMA/comments/1ri2irg/breaking_today_qwen_35_small/o83hkcx/
false
-3
t1_o83hgvg
could you describe in a few words what that AI hallucination is about and why we might need it?
1
0
2026-03-01T18:41:01
MelodicRecognition7
false
null
0
o83hgvg
false
/r/LocalLLaMA/comments/1ri51y0/p_aurastate_formally_verified_llm_state_machine/o83hgvg/
false
1
t1_o83hfr7
https://preview.redd.it/…aaf21d943142e5df
1
1
2026-03-01T18:40:52
Velocita84
false
null
0
o83hfr7
false
/r/LocalLLaMA/comments/1ri2irg/breaking_today_qwen_35_small/o83hfr7/
false
1
t1_o83h9pw
Why? What I'm saying is how it's going to play out. Not how I want it to play out, not how I'm rooting for this to go: what's going to happen.
1
0
2026-03-01T18:40:05
FrostyParking
false
null
0
o83h9pw
false
/r/LocalLLaMA/comments/1rguty0/get_your_local_models_in_order_anthropic_just_got/o83h9pw/
false
1
t1_o83h484
I also looked at these going dirt cheap, but the cost of running and cooling will just add up to the cost of a 1-3 year old GPU anyway. I have some 1080 Tis that aren't worth even firing up; the cost and efficiency loss don't equal any gain.
1
0
2026-03-01T18:39:23
Ztoxed
false
null
0
o83h484
false
/r/LocalLLaMA/comments/1ri232z/worth_it_to_buy_tesla_p40s/o83h484/
false
1
t1_o83h2n0
Any model trained on the Internet should be public domain
1
0
2026-03-01T18:39:11
821835fc62e974a375e5
false
null
0
o83h2n0
false
/r/LocalLLaMA/comments/1rcpmwn/anthropic_weve_identified_industrialscale/o83h2n0/
false
1
t1_o83gw30
the alexa replacement angle is cool tho, thats probably the easiest sell since its a direct habit swap.
1
0
2026-03-01T18:38:20
eibrahim
false
null
0
o83gw30
false
/r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o83gw30/
false
1
t1_o83gv8g
Why the overhead of both when u can use llama.cpp?
1
0
2026-03-01T18:38:14
sagiroth
false
null
0
o83gv8g
false
/r/LocalLLaMA/comments/1ri0iep/ollama_or_openvino/o83gv8g/
false
1
t1_o83gpgv
No, and that's mostly for privacy reasons. DuckDuckGo, Startpage, and Brave are all privacy-focused search engines. The only one that isn't is Bing, but that's why it's last. WebSearch AI is a privacy-focused AI application, so it wouldn't make sense to use Google.
1
0
2026-03-01T18:37:31
DrinkingPants74
false
null
0
o83gpgv
false
/r/LocalLLaMA/comments/1q6zslx/websearch_ai_let_local_models_use_the_interwebs/o83gpgv/
false
1
t1_o83glzb
I have to get stuff running on cpu only for my work project
1
0
2026-03-01T18:37:04
piexil
false
null
0
o83glzb
false
/r/LocalLLaMA/comments/1rhykhm/qwen_35_small_soon/o83glzb/
false
1
t1_o83gjed
been messing around with this exact use case for a while. tried rolling my own setup with a local model and some custom scripts but keeping the rules updated got tedious once i had more than a handful of conditions. ended up switching to PinchClaw AI for most of my email stuff now. it runs an agent in the cloud conn...
1
0
2026-03-01T18:36:44
subropho
false
null
0
o83gjed
false
/r/LocalLLaMA/comments/1ri1l4o/who_is_doing_useful_things_with_local_ai_and_email/o83gjed/
false
1
t1_o83gg0k
would be amazing for my CPU only project
1
0
2026-03-01T18:36:19
piexil
false
null
0
o83gg0k
false
/r/LocalLLaMA/comments/1rhykhm/qwen_35_small_soon/o83gg0k/
false
1
t1_o83gfre
I use searxng daily, and use it for the web search for my bots on open-webui. works better than google in the browser.
2
0
2026-03-01T18:36:17
Complainer_Official
false
null
0
o83gfre
false
/r/LocalLLaMA/comments/1rhj0l9/mcp_server_for_searxngnonapi_local_search/o83gfre/
false
2
t1_o83ga6c
Maybe you are missing --jinja?
1
0
2026-03-01T18:35:33
Equivalent_Time1724
false
null
0
o83ga6c
false
/r/LocalLLaMA/comments/1ri1h5n/ik_llamacpp_reasoning_not_working_with_glm_models/o83ga6c/
false
1
t1_o83g73g
O MA GA
1
0
2026-03-01T18:35:10
hyxon4
false
null
0
o83g73g
false
/r/LocalLLaMA/comments/1ri2irg/breaking_today_qwen_35_small/o83g73g/
false
1
t1_o83g6a0
Same experience. I flattened my codebase to a text file and maxed out 64k context with a task to audit it (8GB VRAM, 32GB RAM), and it found legit issues and future considerations perfectly
4
0
2026-03-01T18:35:04
sagiroth
false
null
0
o83g6a0
false
/r/LocalLLaMA/comments/1ri39a4/qwen35_35b_a3b_first_small_model_to_not/o83g6a0/
false
4
t1_o83g0p4
I do not know if this is reasonable. I mean, the R9700 has 1/3 of the RAM and is 1/8 of the price. I know it's a totally different manufacturer, but come on! Four times the price would be reasonable. Not eight.
1
0
2026-03-01T18:34:23
komio
false
null
0
o83g0p4
false
/r/LocalLLaMA/comments/1q8fagh/rtx_blackwell_pro_6000_wholesale_pricing_has/o83g0p4/
false
1
t1_o83ftds
Most probably are, but we never really know. Chinese models from GLM to DeepSeek mostly use Huawei's Ascend GPUs (Huawei being Honor's parent company) to host their models; GLM 5 was specifically optimised for Huawei's GPUs, so they might know something we don't. But for the most part this looks like slop marketing and t...
1
0
2026-03-01T18:33:28
Acceptable_Home_
false
null
0
o83ftds
false
/r/LocalLLaMA/comments/1ri0puh/honor_would_use_deepseek/o83ftds/
false
1
t1_o83filz
Llama cpp still doesn't have support yet though, does it?
1
0
2026-03-01T18:32:06
piexil
false
null
0
o83filz
false
/r/LocalLLaMA/comments/1ri2irg/breaking_today_qwen_35_small/o83filz/
false
1
t1_o83fi3g
because AI IDEs use about double the ram of normal IDEs
1
0
2026-03-01T18:32:02
Scutoidzz
false
null
0
o83fi3g
false
/r/LocalLLaMA/comments/1qt76qs/mistral_vibe_20/o83fi3g/
false
1
t1_o83fhg2
fair point. the node standup and the legal loop kind of compounded each other which made everything feel slower than it was
1
0
2026-03-01T18:31:57
Olivia_Davis_09
false
null
0
o83fhg2
false
/r/LocalLLaMA/comments/1rhb1xb/fine_tuning_on_proprietary_data_is_way_harder_to/o83fhg2/
false
1
t1_o83fg7l
draft model incoming!
1
0
2026-03-01T18:31:48
PANIC_EXCEPTION
false
null
0
o83fg7l
false
/r/LocalLLaMA/comments/1ri2irg/breaking_today_qwen_35_small/o83fg7l/
false
1
t1_o83fe5g
Buy a gpu? Tbh the 4b and below should be viable on a CPU based on previous models
2
0
2026-03-01T18:31:33
piexil
false
null
0
o83fe5g
false
/r/LocalLLaMA/comments/1ri2irg/breaking_today_qwen_35_small/o83fe5g/
false
2
t1_o83fcec
……what if I used all the models to speculatively decode for all the models?
1
0
2026-03-01T18:31:20
Guinness
false
null
0
o83fcec
false
/r/LocalLLaMA/comments/1ri2irg/breaking_today_qwen_35_small/o83fcec/
false
1
t1_o83fane
💀GTX 1080 Ti 11G💀
17
0
2026-03-01T18:31:06
NegotiationNo1504
false
null
0
o83fane
false
/r/LocalLLaMA/comments/1ri2irg/breaking_today_qwen_35_small/o83fane/
false
17
t1_o83f7d5
vendor timelines are basically where fine-tuning projects go to die - good luck on round two
1
0
2026-03-01T18:30:42
theagentledger
false
null
0
o83f7d5
false
/r/LocalLLaMA/comments/1rhb1xb/fine_tuning_on_proprietary_data_is_way_harder_to/o83f7d5/
false
1
t1_o83f5vf
I've got the 397b one running, I'm going to give it a shot.
3
0
2026-03-01T18:30:31
Head_Bananana
false
null
0
o83f5vf
false
/r/LocalLLaMA/comments/1ri48pj/qwen35122ba10bggufq4_k_xlpipesscreensaver_oneshot/o83f5vf/
false
3
t1_o83f16h
I did - I tried both Qwen3.5-27B-GGUF and Qwen3.5-35B-A3B-GGUF (the Q8_0 versions). They both really struggled with this prompt, even after I added a bunch of reminders to the prompt like so: > 6. CRITICAL FEEDBACK FROM PREVIOUS ATTEMPTS (DO NOT MAKE THESE MISTAKES): > - FATAL ERROR: In previous attempts, the `spa...
3
0
2026-03-01T18:29:55
jacobpederson
false
null
0
o83f16h
false
/r/LocalLLaMA/comments/1ri48pj/qwen35122ba10bggufq4_k_xlpipesscreensaver_oneshot/o83f16h/
false
3
t1_o83ew5z
bro i wanna use the unsloth Qwen 3.5 27B UD Q4_K_XL version but i have a few constraints: RTX 4050 on my laptop, 6GB VRAM and 16GB RAM. Should i go for it, or do you have a better suggestion for this particular setup?
1
0
2026-03-01T18:29:17
Giyuforlife
false
null
0
o83ew5z
false
/r/LocalLLaMA/comments/1refvmr/qwen_3_27b_is_impressive/o83ew5z/
false
1
t1_o83euy6
They're good for ollama and vs-code
1
0
2026-03-01T18:29:08
Mashic
false
null
0
o83euy6
false
/r/LocalLLaMA/comments/1ri2irg/breaking_today_qwen_35_small/o83euy6/
false
1
t1_o83eu3v
Have been playing around with one of the medium models over the weekend. They are great! It's a good thing they provide this many different sizes.
1
0
2026-03-01T18:29:02
Beautiful-Honeydew10
false
null
0
o83eu3v
false
/r/LocalLLaMA/comments/1ri2irg/breaking_today_qwen_35_small/o83eu3v/
false
1
t1_o83emd1
Qwen 3.5 9B thinking might be able to pull off the same. Or try Nanbeige 4.1 3B, the best thing after Qwen 3.5
3
0
2026-03-01T18:28:05
Acceptable_Home_
false
null
0
o83emd1
false
/r/LocalLLaMA/comments/1ri39a4/qwen35_35b_a3b_first_small_model_to_not/o83emd1/
false
3
t1_o83ekvh
this is a uefi program that runs directly on top of the processor. ring 0. i have not built in any sort of custom filesystem. what you are seeing is the uefi firmware from the dell connecting to the fat32 file system on the usb.
1
0
2026-03-01T18:27:53
Electrical_Ninja3805
false
null
0
o83ekvh
false
/r/LocalLLaMA/comments/1rhg3p4/baremetal_ai_booting_directly_into_llm_inference/o83ekvh/
false
1
t1_o83ehwo
If you like grep you gonna love ripgrep (10x faster)
1
0
2026-03-01T18:27:31
Irrationalender
false
null
0
o83ehwo
false
/r/LocalLLaMA/comments/1rg7oj1/bash_commands_outperform_vector_search_for/o83ehwo/
false
1
t1_o83ebcp
I'm just surprised by all the stuff Qwen 3.5 35B can pull off. No shi, it is the first model I can daily drive with a massive amount of trust and at 25 tp/s+ speed, and it always stands above GLM 4.7 Flash in every use case of mine. Though it does overthink sometimes, even at just a hi or good morning. Really happy ...
4
0
2026-03-01T18:26:40
Acceptable_Home_
false
null
0
o83ebcp
false
/r/LocalLLaMA/comments/1ri39a4/qwen35_35b_a3b_first_small_model_to_not/o83ebcp/
false
4
t1_o83e3wm
And I'm the weirdo that upgraded my GTX 1080 to a modest 5060 Ti 16GB, then went crazy with local AI models and bought a 5090. But hey, 48GB VRAM fits qwen3-coder-next reasonably in Q4, so champagne problems I guess.
1
0
2026-03-01T18:25:42
_-_David
false
null
0
o83e3wm
false
/r/LocalLLaMA/comments/1rh9ygz/is_anyone_else_waiting_for_a_6070b_moe_with_810b/o83e3wm/
false
1
t1_o83dzei
i cant, i need the vision, too useful for engineering problems.
2
0
2026-03-01T18:25:08
Far-Low-4705
false
null
0
o83dzei
false
/r/LocalLLaMA/comments/1ri2irg/breaking_today_qwen_35_small/o83dzei/
false
2
t1_o83dw1p
the 'waiting on vendor x' loop was exactly what killed our timeline.. might revisit it for the next round..
1
0
2026-03-01T18:24:42
Olivia_Davis_09
false
null
0
o83dw1p
false
/r/LocalLLaMA/comments/1rhb1xb/fine_tuning_on_proprietary_data_is_way_harder_to/o83dw1p/
false
1
t1_o83ds75
Excellent! Thx for the childhood reminder. I'm gonna start Simon the Sorcerer now
4
0
2026-03-01T18:24:13
AppealSame4367
false
null
0
o83ds75
false
/r/LocalLLaMA/comments/1ri48pj/qwen35122ba10bggufq4_k_xlpipesscreensaver_oneshot/o83ds75/
false
4
t1_o83dlgo
maybe you should try qwen3.5 27b, it performs on par with the 122b and might run faster than the 122b if you can fit it into VRAM.
5
0
2026-03-01T18:23:23
Far-Low-4705
false
null
0
o83dlgo
false
/r/LocalLLaMA/comments/1ri48pj/qwen35122ba10bggufq4_k_xlpipesscreensaver_oneshot/o83dlgo/
false
5
t1_o83deq8
But by then current Opus will feel dumb… the circle of life
2
0
2026-03-01T18:22:30
MasterScrat
false
null
0
o83deq8
false
/r/LocalLLaMA/comments/1rgkc1b/back_in_my_day_localllama_were_the_pioneers/o83deq8/
false
2
t1_o83d4j1
yeah we did it completely backwards.. vendor first, legal second.. learned that one the hard way..
1
0
2026-03-01T18:21:13
Olivia_Davis_09
false
null
0
o83d4j1
false
/r/LocalLLaMA/comments/1rhb1xb/fine_tuning_on_proprietary_data_is_way_harder_to/o83d4j1/
false
1
t1_o83d1v2
Wondering if this 9B is enough for basic/medium-level agentic coding
1
0
2026-03-01T18:20:52
pmttyji
false
null
0
o83d1v2
false
/r/LocalLLaMA/comments/1ri2irg/breaking_today_qwen_35_small/o83d1v2/
false
1
t1_o83cyq3
I use them for web search. But there isn’t really just one thing you’re supposed to do with them.
1
0
2026-03-01T18:20:28
Ok_Cow_8213
false
null
0
o83cyq3
false
/r/LocalLLaMA/comments/1rhykhm/qwen_35_small_soon/o83cyq3/
false
1
t1_o83cnoe
For time testing, sure, but most of your questions are related to RAM limits, context, kv cache, etc. You can answer all of those questions on a local system before moving to a VPS.
1
0
2026-03-01T18:19:04
suicidaleggroll
false
null
0
o83cnoe
false
/r/LocalLLaMA/comments/1ri1rit/running_qwen314b_93gb_on_a_cpuonly_kvm_vps_what/o83cnoe/
false
1
t1_o83cn91
Cool!
3
0
2026-03-01T18:19:01
GuiltyBookkeeper4849
false
null
0
o83cn91
false
/r/LocalLLaMA/comments/1rhx5pc/reverse_engineered_apple_neural_engineane_to/o83cn91/
false
3
t1_o83cldm
Grok 5; Grok 6 doesn't exist yet
1
0
2026-03-01T18:18:46
Background-Fig-3967
false
null
0
o83cldm
false
/r/LocalLLaMA/comments/1rgq0vc/can_a_local_hosted_llm_keep_up_with_grok_41_fast/o83cldm/
false
1
t1_o83cl7m
Damn. I'd love something around the 14b space. 9b and less is usually unusable. 27b dense is too much for me.
6
0
2026-03-01T18:18:45
Icy-Degree6161
false
null
0
o83cl7m
false
/r/LocalLLaMA/comments/1ri2irg/breaking_today_qwen_35_small/o83cl7m/
false
6
t1_o83cjbw
This is why true open source AI is needed
2
0
2026-03-01T18:18:31
GuiltyBookkeeper4849
false
null
0
o83cjbw
false
/r/LocalLLaMA/comments/1rh2lew/openai_pivot_investors_love/o83cjbw/
false
2
t1_o83cdor
Remember it's your hobby and not theirs ;)  Also, show, don't tell. Show them what cool stuff you can do with it and they'll be more inclined to use it. My GF doesn't use the awesome streamlined recipe app I host for her either. That's fine, I still enjoy the food and happily track my own recipes in the app. I occasi...
8
0
2026-03-01T18:17:48
BERLAUR
false
null
0
o83cdor
false
/r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o83cdor/
false
8
t1_o83ccfe
It’s extra stupid
-2
0
2026-03-01T18:17:39
cockachu
false
null
0
o83ccfe
false
/r/LocalLLaMA/comments/1rhwo08/qwen35_small_dense_model_release_seems_imminent/o83ccfe/
false
-2
t1_o83ca9f
Would it work if we disable the vision part?
1
0
2026-03-01T18:17:22
spaceman_
false
null
0
o83ca9f
false
/r/LocalLLaMA/comments/1rhwo08/qwen35_small_dense_model_release_seems_imminent/o83ca9f/
false
1
t1_o83c82d
Am I reading that right - the Mac Studio would provide a snappy AI receptionist, but the reasoning/token use may run slower for document review? If so, that actually works for me.
1
0
2026-03-01T18:17:05
IndianaAttorneyGuy
false
null
0
o83c82d
false
/r/LocalLLaMA/comments/1ri0k7b/hardware_advice_llama_for_small_firm_intake/o83c82d/
false
1
t1_o83c7xi
How do you know?
6
0
2026-03-01T18:17:04
AppealSame4367
false
null
0
o83c7xi
false
/r/LocalLLaMA/comments/1ri2irg/breaking_today_qwen_35_small/o83c7xi/
false
6
t1_o83c5cj
Very useful!
1
0
2026-03-01T18:16:44
GuiltyBookkeeper4849
false
null
0
o83c5cj
false
/r/LocalLLaMA/comments/1rhuvyc/benchmarking_88_smol_gguf_models_quickly_on_a/o83c5cj/
false
1
t1_o83c4op
You've got a ~960 GB/s bandwidth GPU?
2
0
2026-03-01T18:16:39
soyalemujica
false
null
0
o83c4op
false
/r/LocalLLaMA/comments/1ri39a4/qwen35_35b_a3b_first_small_model_to_not/o83c4op/
false
2
t1_o83c293
Q8 possibly 9-10GB. Q4 - 4-5GB
2
0
2026-03-01T18:16:20
pmttyji
false
null
0
o83c293
false
/r/LocalLLaMA/comments/1ri2irg/breaking_today_qwen_35_small/o83c293/
false
2
t1_o83c1p2
Cool
1
0
2026-03-01T18:16:15
GuiltyBookkeeper4849
false
null
0
o83c1p2
false
/r/LocalLLaMA/comments/1rhw16v/dense_nonthinking_moe_qwen3527b_is_blowing_me/o83c1p2/
false
1
t1_o83bvw3
Are any of these coding models, or are we still at -coder-next?
1
0
2026-03-01T18:15:32
Legitimate-Pumpkin
false
null
0
o83bvw3
false
/r/LocalLLaMA/comments/1ri2irg/breaking_today_qwen_35_small/o83bvw3/
false
1
t1_o83buaq
With how incompetent this government is, it wouldn't surprise me if they literally are asking a chatbot to identify targets. They don't care to have expertise.  Maybe there is a legitimate use for Anthropic tech in missile strikes or other military planning, as a first pass to narrow information before the experts op...
2
0
2026-03-01T18:15:19
one-wandering-mind
false
null
0
o83buaq
false
/r/LocalLLaMA/comments/1rhogov/the_us_used_anthropic_ai_tools_during_airstrikes/o83buaq/
false
2
t1_o83bu7l
Finally
1
0
2026-03-01T18:15:18
GuiltyBookkeeper4849
false
null
0
o83bu7l
false
/r/LocalLLaMA/comments/1ri2irg/breaking_today_qwen_35_small/o83bu7l/
false
1
t1_o83bqkf
My requirement is textual analysis, textual sentiment, categorization, etc. I understand that the latest flash model of qwen3.5 gives better results due to its thinking and the MOE.
1
0
2026-03-01T18:14:50
MohaMBS
false
null
0
o83bqkf
false
/r/LocalLLaMA/comments/1nu7neu/seeking_advice_best_model_framework_for_max/o83bqkf/
false
1
t1_o83bhw4
did you ever get to testing it with larger prompts? 2k, 5k, 10k, 20k?
1
0
2026-03-01T18:13:44
MotokoAGI
false
null
0
o83bhw4
false
/r/LocalLLaMA/comments/1n70v8v/rtx_6000_pro_workstation_to_run_deepseek/o83bhw4/
false
1
t1_o83bgik
6 or 7
4
0
2026-03-01T18:13:33
lun4r
false
null
0
o83bgik
false
/r/LocalLLaMA/comments/1ri2irg/breaking_today_qwen_35_small/o83bgik/
false
4
t1_o83bb68
Drafts are not supported at all for VL models
2
0
2026-03-01T18:12:52
xanduonc
false
null
0
o83bb68
false
/r/LocalLLaMA/comments/1rhwo08/qwen35_small_dense_model_release_seems_imminent/o83bb68/
false
2
t1_o83b6aj
Qwen3 8B is already a surprisingly good computer-use model for its size, and it can run reasonably fast on 16GB cards. If this is better than that, you won't really need anything else for that use case.
1
0
2026-03-01T18:12:13
sonicnerd14
false
null
0
o83b6aj
false
/r/LocalLLaMA/comments/1rgzul5/are_you_ready_for_small_qwens/o83b6aj/
false
1
t1_o83axtl
I see 👍
1
0
2026-03-01T18:11:07
Long_comment_san
false
null
0
o83axtl
false
/r/LocalLLaMA/comments/1rg9lli/little_qwen_35_27b_and_qwen_35ba3b_models_did/o83axtl/
false
1
t1_o83axbl
I don't see how this video is "educational" in any sense. He shared very little technical detail about it. It's a 20-minute victory speech
2
0
2026-03-01T18:11:03
TimeLimitExceeeeded
false
null
0
o83axbl
false
/r/LocalLLaMA/comments/1rg8dex/pewdiepie_finetuned_qwen25coder32b_to_beat/o83axbl/
false
2
t1_o83aoc6
Hive mind here. In general, I ignore strongly negative *and* positive posts that don't bother to specify their model size, quant, and ideally their CLI parameters as well. There is just too much variance among all the possible configurations to allow any sweeping conclusions to be drawn.
3
0
2026-03-01T18:09:54
NoahFect
false
null
0
o83aoc6
false
/r/LocalLLaMA/comments/1ri1hgv/a_bit_of_a_psa_i_get_that_qwen35_is_all_the_rage/o83aoc6/
false
3
t1_o83ams4
Cool to see! What configs did you set for this?
1
0
2026-03-01T18:09:41
MeditateBreathe
false
null
0
o83ams4
false
/r/LocalLLaMA/comments/1rg87bj/qwen3535ba3b_running_on_a_raspberry_pi_5_16gb_and/o83ams4/
false
1
t1_o83aez4
I hope the instruct models are next.
1
0
2026-03-01T18:08:40
beedunc
false
null
0
o83aez4
false
/r/LocalLLaMA/comments/1ri2irg/breaking_today_qwen_35_small/o83aez4/
false
1
t1_o83a59l
when it says "initialising filesystem" etc.. that's your OS. I guess you meant no GUI
2
0
2026-03-01T18:07:25
-dysangel-
false
null
0
o83a59l
false
/r/LocalLLaMA/comments/1rhg3p4/baremetal_ai_booting_directly_into_llm_inference/o83a59l/
false
2
t1_o839zua
What command line parameters are you running?
1
0
2026-03-01T18:06:43
NoahFect
false
null
0
o839zua
false
/r/LocalLLaMA/comments/1ri1hgv/a_bit_of_a_psa_i_get_that_qwen35_is_all_the_rage/o839zua/
false
1
t1_o839ulk
4_K_L. There probably is. The more you quant a model the worse the PPL numbers get on the quantized cache.
2
0
2026-03-01T18:06:04
a_beautiful_rhind
false
null
0
o839ulk
false
/r/LocalLLaMA/comments/1rhvi09/psa_if_your_local_coding_agent_feels_dumb_at_30k/o839ulk/
false
2
t1_o839qit
Great project! The push-to-talk approach is smart. For simpler desktop use cases, local STT solutions that work out of the box exist too.
1
0
2026-03-01T18:05:33
Weesper75
false
null
0
o839qit
false
/r/LocalLLaMA/comments/1rhjavd/aipi_local_voice_assistant_bridge_esp32s3/o839qit/
false
1
t1_o839pz0
I'm not really knowledgeable enough to answer your question. Better to post this comment as a new thread for faster and more responses.
1
0
2026-03-01T18:05:28
pmttyji
false
null
0
o839pz0
false
/r/LocalLLaMA/comments/1rhvabz/we_need_to_go_deeper/o839pz0/
false
1
t1_o839ol3
Qwen3.5 is out though
-4
1
2026-03-01T18:05:17
Emotional-Baker-490
false
null
0
o839ol3
false
/r/LocalLLaMA/comments/1rhvi09/psa_if_your_local_coding_agent_feels_dumb_at_30k/o839ol3/
false
-4
t1_o839n3f
Is this satire? Your video is just you basically saying HI to the LLM… wtf is this
5
0
2026-03-01T18:05:06
cmndr_spanky
false
null
0
o839n3f
false
/r/LocalLLaMA/comments/1rhw16v/dense_nonthinking_moe_qwen3527b_is_blowing_me/o839n3f/
false
5
t1_o839j4h
[chat\_template.jinja · zai-org/GLM-5 at main](https://huggingface.co/zai-org/GLM-5/blob/main/chat_template.jinja)
1
0
2026-03-01T18:04:35
Expensive-Paint-9490
false
null
0
o839j4h
false
/r/LocalLLaMA/comments/1ri1h5n/ik_llamacpp_reasoning_not_working_with_glm_models/o839j4h/
false
1
t1_o839g3e
my concern was mainly how this translates to KVM/shared environments....
1
0
2026-03-01T18:04:13
Fine_Factor_456
false
null
0
o839g3e
false
/r/LocalLLaMA/comments/1ri1rit/running_qwen314b_93gb_on_a_cpuonly_kvm_vps_what/o839g3e/
false
1
t1_o839c66
[ubergarm/GLM-5-GGUF · Hugging Face](https://huggingface.co/ubergarm/GLM-5-GGUF) Maybe try to follow ubergarm's suggested settings; if that doesn't work, then download ubergarm's quant.
1
0
2026-03-01T18:03:42
kironlau
false
null
0
o839c66
false
/r/LocalLLaMA/comments/1ri1h5n/ik_llamacpp_reasoning_not_working_with_glm_models/o839c66/
false
1
t1_o839aew
Claude absolutely loves reverse engineering. Much more eager than Codex.
7
0
2026-03-01T18:03:30
lxe
false
null
0
o839aew
false
/r/LocalLLaMA/comments/1rhx5pc/reverse_engineered_apple_neural_engineane_to/o839aew/
false
7
t1_o8397xj
so the constraint is really just total parameters, not the model family?
1
0
2026-03-01T18:03:11
Fine_Factor_456
false
null
0
o8397xj
false
/r/LocalLLaMA/comments/1ri1rit/running_qwen314b_93gb_on_a_cpuonly_kvm_vps_what/o8397xj/
false
1
t1_o8396v9
Finally found a model I actually use for real work stuff. Setup: 16GB VRAM + 64GB DDR5. Pushing ~68-73 t/s on 65k context. Quality is solid. Tried the 27B version, but it crawled at 20-30 t/s; quantization was too heavy, and I suspect a loss in reasoning quality.
5
0
2026-03-01T18:03:03
BORIS3443
false
null
0
o8396v9
false
/r/LocalLLaMA/comments/1ri39a4/qwen35_35b_a3b_first_small_model_to_not/o8396v9/
false
5
t1_o838wup
my concern was mainly how this translates to KVM/shared environments
1
0
2026-03-01T18:01:49
Fine_Factor_456
false
null
0
o838wup
false
/r/LocalLLaMA/comments/1ri1rit/running_qwen314b_93gb_on_a_cpuonly_kvm_vps_what/o838wup/
false
1
t1_o838utg
4b is real!?
0
0
2026-03-01T18:01:34
Emotional-Baker-490
false
null
0
o838utg
false
/r/LocalLLaMA/comments/1ri2irg/breaking_today_qwen_35_small/o838utg/
false
0
t1_o838tph
Except their own morality... their own mind.
5
0
2026-03-01T18:01:26
Cute_Obligation2944
false
null
0
o838tph
false
/r/LocalLLaMA/comments/1rhogov/the_us_used_anthropic_ai_tools_during_airstrikes/o838tph/
false
5
t1_o838mbf
Can't believe I have to do this because I'm probably half your age. Sorry in advance for being so on the nose. --- It's both/neither, and you're literally rotting mentally the way you think. Yes/No = 1/0 = binary/boolean. Reality is almost entirely **gradients**. You aren't even in the right subspace to be comment...
3
0
2026-03-01T18:00:31
brownman19
false
null
0
o838mbf
false
/r/LocalLLaMA/comments/1rhogov/the_us_used_anthropic_ai_tools_during_airstrikes/o838mbf/
false
3
t1_o838lt5
Yeah... This also was my thinking of the overhype.
0
0
2026-03-01T18:00:27
mkMoSs
false
null
0
o838lt5
false
/r/LocalLLaMA/comments/1ri1hgv/a_bit_of_a_psa_i_get_that_qwen35_is_all_the_rage/o838lt5/
false
0
t1_o838l9r
Check this recent thread, which is filled with many experiments & comparisons using llama.cpp commands. [Follow-up: Qwen3.5-35B-A3B — 7 community-requested experiments on RTX 5080 16GB](https://www.reddit.com/r/LocalLLaMA/comments/1rg4zqv/followup_qwen3535ba3b_7_communityrequested/?utm_source=share&utm_medium=web3x&utm_n...
2
0
2026-03-01T18:00:23
pmttyji
false
null
0
o838l9r
false
/r/LocalLLaMA/comments/1ri3mxa/ideal_llamacpp_settings_for_12gb_vram_and_64gb/o838l9r/
false
2
t1_o838gxs
I use OpenVINO (on a Lunar Lake GPU) to generate subtitles for TV shows. Better than whisper.cpp.
1
0
2026-03-01T17:59:51
giant3
false
null
0
o838gxs
false
/r/LocalLLaMA/comments/1ri0iep/ollama_or_openvino/o838gxs/
false
1
t1_o838fui
If it can summarize your private text without hallucinating, that’s a legit breakthrough 💯
2
0
2026-03-01T17:59:42
loxotbf
false
null
0
o838fui
false
/r/LocalLLaMA/comments/1ri39a4/qwen35_35b_a3b_first_small_model_to_not/o838fui/
false
2
t1_o838c0e
Wonder if there is a correlation to the bit size of the model's quant? What quant is your Devstral running?
2
0
2026-03-01T17:59:13
DinoAmino
false
null
0
o838c0e
false
/r/LocalLLaMA/comments/1rhvi09/psa_if_your_local_coding_agent_feels_dumb_at_30k/o838c0e/
false
2
t1_o838bxa
Welcome to product management.
1
0
2026-03-01T17:59:12
Any_Protection_8
false
null
0
o838bxa
false
/r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o838bxa/
false
1
t1_o8388t8
Can you mention what tokens-per-second output it was giving?
1
0
2026-03-01T17:58:49
Easy_Improvement754
false
null
0
o8388t8
false
/r/LocalLLaMA/comments/1rh9k63/qwen35_35ba3b_replaced_my_2model_agentic_setup_on/o8388t8/
false
1
t1_o8388ho
Right now Gemma has a problem called Qwen3.5 27B; I think it's going to take a bit of time 🤣
-4
0
2026-03-01T17:58:47
Adventurous-Paper566
false
null
0
o8388ho
false
/r/LocalLLaMA/comments/1ri2irg/breaking_today_qwen_35_small/o8388ho/
false
-4
t1_o8386z5
I came
1
0
2026-03-01T17:58:36
larrytheevilbunnie
false
null
0
o8386z5
false
/r/LocalLLaMA/comments/1ri2irg/breaking_today_qwen_35_small/o8386z5/
false
1
t1_o837wf5
Brother! I found you! I am going to use it!
1
0
2026-03-01T17:57:16
Pitiful_Astronaut_93
false
null
0
o837wf5
false
/r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o837wf5/
false
1