name: string
body: string
score: int64
controversiality: int64
created: timestamp[us]
author: string
collapsed: bool
edited: timestamp[us]
gilded: int64
id: string
locked: bool
permalink: string
stickied: bool
ups: int64
t1_o7xuti9
Weirdly, it does not think that much on Qwen's official site, and only does so when it's needed
2
0
2026-02-28T20:43:24
Fault23
false
null
0
o7xuti9
false
/r/LocalLLaMA/comments/1rh9k63/qwen35_35ba3b_replaced_my_2model_agentic_setup_on/o7xuti9/
false
2
t1_o7xujjj
There exists some competition now, but capitalism will lead to corporate consolidation, monopoly or near-monopoly, and then massive enshittification of multimodal AI. Open source is the only hope for AI long term.
5
0
2026-02-28T20:41:54
bform2
false
null
0
o7xujjj
false
/r/LocalLLaMA/comments/1rh9u4r/this_sub_is_incredible/o7xujjj/
false
5
t1_o7xug4u
:D Time to post a survey thread on this
1
0
2026-02-28T20:41:23
pmttyji
false
null
0
o7xug4u
false
/r/LocalLLaMA/comments/1rh9u4r/this_sub_is_incredible/o7xug4u/
false
1
t1_o7xufyz
I mean duh
2
0
2026-02-28T20:41:22
aivi_mask
false
null
0
o7xufyz
false
/r/LocalLLaMA/comments/1rh2lew/openai_pivot_investors_love/o7xufyz/
false
2
t1_o7xubis
You won't find a good model for only 12gb vram, including context. I suggest the new Qwen3.5 35B-A3B model with cpu offload. I remember with Qwen3 you could offload entire experts instead of layers to the CPU, and that made it a lot faster. Expect somewhere from ~10-20 tok/sec and ~40k-64k tokens depending on how man...
1
0
2026-02-28T20:40:42
Presstabstart
false
null
0
o7xubis
false
/r/LocalLLaMA/comments/1rhcnbt/best_coding_model_to_run_entirely_on_12gb_vram/o7xubis/
false
1
t1_o7xu9gf
Can it be run on a 2080 Ti 11GB with 32GB RAM? What approximate tokens/s would I get if it can?
1
0
2026-02-28T20:40:23
azngaming63
false
null
0
o7xu9gf
false
/r/LocalLLaMA/comments/1rh43za/qwen_3535ba3b_is_beyond_expectations_its_replaced/o7xu9gf/
false
1
t1_o7xttij
I get what you're saying. But for that old man, those are too much. I'm just trying to suggest possible tiny/small models with enough world knowledge, runnable CPU-only on his current laptop. Just like offline Google search or ChatGPT. It's just for his hobby (along with his book reading); he's not doing any research. He j...
-1
0
2026-02-28T20:37:57
pmttyji
false
null
0
o7xttij
false
/r/LocalLLaMA/comments/1rhcs8p/tiny_small_faster_models_for_13_year_old_laptop/o7xttij/
false
-1
t1_o7xts9z
Even without a system prompt, under any conditions it cannot be that bad at generating simple text/code
1
0
2026-02-28T20:37:46
Acrobatic_Donkey5089
false
null
0
o7xts9z
false
/r/LocalLLaMA/comments/1rgl42y/qwen_35_122b_hallucinates_horribly/o7xts9z/
false
1
t1_o7xtnmm
Honestly it does make some mistakes in German and Italian but if you use it as input for LLMs (or use them for correction) it really doesn't matter and it is very consistent and fast.
1
0
2026-02-28T20:37:04
cosimoiaia
false
null
0
o7xtnmm
false
/r/LocalLLaMA/comments/1qvvcd6/new_voxtralminirealtime_from_mistral_stt_in_under/o7xtnmm/
false
1
t1_o7xtgnm
There are dozens of us :)
4
0
2026-02-28T20:36:01
DreamingInManhattan
false
null
0
o7xtgnm
false
/r/LocalLLaMA/comments/1rh9u4r/this_sub_is_incredible/o7xtgnm/
false
4
t1_o7xt9fp
Yeah. My assumption was that all the MoE felt the same way 😂 I gave it one more expert and it stopped.
1
0
2026-02-28T20:34:57
Joscar_5422
false
null
0
o7xt9fp
false
/r/LocalLLaMA/comments/1rhaoty/anyone_noticing_qwen35_27b_getting_stuck_in/o7xt9fp/
false
1
t1_o7xt5b4
IIRC he's right: if you have a Blackwell card, it can run FP4 natively without unpacking to FP8 or FP16.
4
0
2026-02-28T20:34:20
spaceman_
false
null
0
o7xt5b4
false
/r/LocalLLaMA/comments/1rh43za/qwen_3535ba3b_is_beyond_expectations_its_replaced/o7xt5b4/
false
4
t1_o7xt1fr
Hot take: no government agency responsible for its actions should have those actions subject to the decisions of a private company or its appointees. DoD should have total control over everything they rely on. They should also be held accountable for every action they take. Anthropic agreeing to terms to get into...
1
0
2026-02-28T20:33:45
ieatrox
false
null
0
o7xt1fr
false
/r/LocalLLaMA/comments/1rgn4ki/president_trump_orders_all_federal_agencies_in/o7xt1fr/
false
1
t1_o7xsylj
Replying to my own comment so the conversation is visible. u/No-Refrigerator-1672 you're right that each token's KV is conditioned on everything before it. But AVP doesn't splice a slice of one agent's cache into another agent's existing cache. It transfers the entire KV-cache. Agent A processes its prompt, runs 20...
2
0
2026-02-28T20:33:19
proggmouse
false
null
0
o7xsylj
false
/r/LocalLLaMA/comments/1rh802w/what_if_llm_agents_passed_kvcache_to_each_other/o7xsylj/
false
2
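A minimal sketch of the whole-cache handoff described above, using llama-cpp-python's state save/load. This illustrates the general idea, not the AVP implementation itself; the model path and prompts are hypothetical.

```python
from llama_cpp import Llama

MODEL = "qwen3.5-35b-a3b.gguf"  # hypothetical path
prefix = "Agent A's prompt, tool calls, and results..."

agent_a = Llama(model_path=MODEL, n_ctx=8192)
agent_a(prefix, max_tokens=16)    # A prefills its prompt; KV cache is now hot

state = agent_a.save_state()      # snapshot the entire state (KV cache + tokens)

agent_b = Llama(model_path=MODEL, n_ctx=8192)
agent_b.load_state(state)         # B adopts A's whole cache, not a spliced slice

# llama-cpp-python reuses the longest matching token prefix, so B skips
# re-prefilling `prefix` and only evaluates its own new suffix.
out = agent_b(prefix + "\nAgent B continues:", max_tokens=64)
print(out["choices"][0]["text"])
```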
t1_o7xsu2x
It's definitely not nearly as fast as a 3090, but it does great for internal projects where I don't want to worry about making API calls. I have it run Stable Diffusion 3.0 and gpt-oss 20b; it's pretty great for entry-level stuff.
5
0
2026-02-28T20:32:38
Pretty_Challenge_634
false
null
0
o7xsu2x
false
/r/LocalLLaMA/comments/1rh9u4r/this_sub_is_incredible/o7xsu2x/
false
5
t1_o7xsj4m
I'm glad you managed to get it working on your own! Enjoy your unlimited free 6x real-time fast cloning voice studio!! Also, don't forget to connect llama.cpp, Ollama, vllm, or the OpenAI API to the voice chat for real-time conversations with your LLM, it feels really different!
1
0
2026-02-28T20:30:58
RIP26770
false
null
0
o7xsj4m
false
/r/LocalLLaMA/comments/1rabo34/local_tts_server_with_voice_cloning_nearrealtime/o7xsj4m/
false
1
t1_o7xs99w
If you want world facts and history without hilariously inaccurate hallucinations, you need an agentic model that looks up data from a wiki clone or something. I would not trust a small local model to get it right (I've tried some models on my phone through pocketpal, asking them to list facts when I'm offline or...
6
0
2026-02-28T20:29:30
Equal_Passenger9791
false
null
0
o7xs99w
false
/r/LocalLLaMA/comments/1rhcs8p/tiny_small_faster_models_for_13_year_old_laptop/o7xs99w/
false
6
t1_o7xs5rg
I’ve got 256gb ddr5-6000 and a 9950x3d and was getting about 15T/s on cpu using ik-llama. I had to switch to the mainline llamacpp to get vision working and speed dropped to 8 T/s. The model uses about 20gb ram and the kv cache will eat up another 1-5 depending on your context window.
2
0
2026-02-28T20:28:58
someone383726
false
null
0
o7xs5rg
false
/r/LocalLLaMA/comments/1rh43za/qwen_3535ba3b_is_beyond_expectations_its_replaced/o7xs5rg/
false
2
t1_o7xs26y
There are 50 DeepSeek v4 posts per week for 52 weeks.
1
0
2026-02-28T20:28:27
Danny_Davitoe
false
null
0
o7xs26y
false
/r/LocalLLaMA/comments/1rh095c/deepseek_v4_will_be_released_next_week_and_will/o7xs26y/
false
1
t1_o7xrx82
> The only thing left is getting a raise

Best we can do is replacing you with AI. The same AI you’re excited about fine tuning.
15
0
2026-02-28T20:27:43
GoFigYourself
false
null
0
o7xrx82
false
/r/LocalLLaMA/comments/1rh9u4r/this_sub_is_incredible/o7xrx82/
false
15
t1_o7xrvd1
> I saw llm models more as inference networks with insanely well trained connection recognition by very complex combinations of parameters which generate the most logically deducible output based on extremely complex training on very large datasets.

That describes humans to some extent as well.
1
0
2026-02-28T20:27:27
ScuffedBalata
false
null
0
o7xrvd1
false
/r/LocalLLaMA/comments/1msvs0i/what_happened_to_the_uncensored_models_like/o7xrvd1/
false
1
t1_o7xrnf7
front-loading a data tier classification conversation with legal in week 0 saves weeks of back-and-forth - get them to define what tier this data is and what contracts each tier requires before anyone picks a provider. running on-prem or choosing a vendor that already has a GDPR DPA / BAA in place cuts most of the cont...
2
0
2026-02-28T20:26:16
BC_MARO
false
null
0
o7xrnf7
false
/r/LocalLLaMA/comments/1rhb1xb/fine_tuning_on_proprietary_data_is_way_harder_to/o7xrnf7/
false
2
t1_o7xrnbm
> I have been using gpt-oss

Didn't you have to filter the gpt-oss reasoning? I've been filtering the `<think>...</think>` with [this bit of code](https://github.com/Jay4242/llm-scripts/blob/95de1ddf2781dd658094b787b33917208f5915fd/llm-funnyornot.py#L65). I need to update some of my other scripts there to include that...
1
0
2026-02-28T20:26:15
SM8085
false
null
0
o7xrnbm
false
/r/LocalLLaMA/comments/1rhcj7b/qwen35_with_lm_studio_api_without_thinking_output/o7xrnbm/
false
1
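The linked script isn't reproduced here; a minimal sketch of the same idea, stripping a `<think>...</think>` block from a model reply before downstream use, could look like this:

```python
import re

def strip_think(text: str) -> str:
    # Drop <think>...</think> reasoning blocks (non-greedy, spans newlines).
    return re.sub(r"<think>.*?</think>\s*", "", text, flags=re.DOTALL)

print(strip_think("<think>let me count the letters...</think>Four."))  # -> Four.
```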
t1_o7xrgno
No, there are. Check the 2nd section of the thread (after that line).
1
0
2026-02-28T20:25:15
pmttyji
false
null
0
o7xrgno
false
/r/LocalLLaMA/comments/1rhcs8p/tiny_small_faster_models_for_13_year_old_laptop/o7xrgno/
false
1
t1_o7xrdrf
try a light presence_penalty (0.1-0.2) - reasoning models are more susceptible to looping on visual grounding tasks where all candidates look plausible. also helps to tell it explicitly in the system prompt to commit to its best guess after one pass rather than keep second-guessing.
1
0
2026-02-28T20:24:49
BC_MARO
false
null
0
o7xrdrf
false
/r/LocalLLaMA/comments/1rhaoty/anyone_noticing_qwen35_27b_getting_stuck_in/o7xrdrf/
false
1
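A presence penalty like that can be set per-request on any OpenAI-compatible endpoint. A sketch, assuming a local server on port 8080 and an illustrative model name:

```python
from openai import OpenAI

# Assumes a local OpenAI-compatible server (llama-server, LM Studio, etc.).
client = OpenAI(base_url="http://localhost:8080/v1", api_key="none")

resp = client.chat.completions.create(
    model="qwen3.5-27b",      # hypothetical model id on your server
    messages=[
        {"role": "system",
         "content": "Commit to your best guess after one pass; do not re-verify."},
        {"role": "user", "content": "Which bounding box contains the cat?"},
    ],
    presence_penalty=0.15,    # the light 0.1-0.2 nudge suggested above
)
print(resp.choices[0].message.content)
```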
t1_o7xrbiz
really like the idea behind this. half the battle with local LLMs is just figuring out what fits in RAM/VRAM without crashing
2
0
2026-02-28T20:24:30
re-vox
false
null
0
o7xrbiz
false
/r/LocalLLaMA/comments/1rg94wu/llmfit_one_command_to_find_what_model_runs_on/o7xrbiz/
false
2
t1_o7xr8of
Your dense vs MoE speed part is severely flawed. You do mention it needs to fit fully in VRAM, but you test one that doesn't fit fully in VRAM. You also don't mention prompt processing speed. I get 2000 t/s pp and 28 t/s tg at q8. I do like your other tests. If you want to do more I would love to see quality differences be...
2
0
2026-02-28T20:24:04
Gringe8
false
null
0
o7xr8of
false
/r/LocalLLaMA/comments/1rg4zqv/followup_qwen3535ba3b_7_communityrequested/o7xr8of/
false
2
t1_o7xr0of
Try a Qwen3 next REAP version?
2
0
2026-02-28T20:22:51
catplusplusok
false
null
0
o7xr0of
false
/r/LocalLLaMA/comments/1rh9ygz/is_anyone_else_waiting_for_a_6070b_moe_with_810b/o7xr0of/
false
2
t1_o7xqp54
CPU-only, I don't think you'll get a good user experience in terms of generated tokens. Try the lightest model first, and then maybe move up to the heavier ones.
1
0
2026-02-28T20:21:09
tamerlanOne
false
null
0
o7xqp54
false
/r/LocalLLaMA/comments/1rhcs8p/tiny_small_faster_models_for_13_year_old_laptop/o7xqp54/
false
1
t1_o7xqkur
Thank you for the writeup! So I take it this is why Nemotron 3 Super is taking forever. Nvidia is training natively in FP4 to get past this obstacle... and thereby validate the whole point of paying extra for Blackwell.
1
0
2026-02-28T20:20:31
sir_creamy
false
null
0
o7xqkur
false
/r/LocalLLaMA/comments/1rgel19/new_qwen3535ba3b_unsloth_dynamic_ggufs_benchmarks/o7xqkur/
false
1
t1_o7xqge6
I would love it if the EU tried to relocate them.
4
0
2026-02-28T20:19:51
Intrepid_Card8950
false
null
0
o7xqge6
false
/r/LocalLLaMA/comments/1rguty0/get_your_local_models_in_order_anthropic_just_got/o7xqge6/
false
4
t1_o7xqg3x
I have zero intention of actually running local models but this is one of the highest quality subs and actually grounded in experience and reality
15
0
2026-02-28T20:19:48
CondiMesmer
false
null
0
o7xqg3x
false
/r/LocalLLaMA/comments/1rh9u4r/this_sub_is_incredible/o7xqg3x/
false
15
t1_o7xqdup
Should rename to WhoreAI
1
0
2026-02-28T20:19:28
One-Employment3759
false
null
0
o7xqdup
false
/r/LocalLLaMA/comments/1rh2lew/openai_pivot_investors_love/o7xqdup/
false
1
t1_o7xqaiv
I didn't know that there was another latest Qwen model. I was only trying to compare it with the latest Gemma model. Thanks for letting me know
2
0
2026-02-28T20:18:59
GrennKren
false
null
0
o7xqaiv
false
/r/LocalLLaMA/comments/1rh3thm/rip_gemma_leave_your_memories_here/o7xqaiv/
false
2
t1_o7xq2xv
Thanks for the questions and the feedback! To answer your points:

1. **Setup/Harness/Dataset:** I didn't share all the technical details or the repo link initially because I didn't want the post to get auto-removed by Reddit's self-promotion filters. But everything I did is open-source! You can see the full setup, the...
2
0
2026-02-28T20:17:52
dumbelco
false
null
0
o7xq2xv
false
/r/LocalLLaMA/comments/1rh2tmu/benchmarking_opensource_llms_for_security/o7xq2xv/
false
2
t1_o7xpyn1
It felt like a weird effective-altruist coup, and it wasn't long after all the SBF kerfuffle.
6
0
2026-02-28T20:17:13
One-Employment3759
false
null
0
o7xpyn1
false
/r/LocalLLaMA/comments/1rh2lew/openai_pivot_investors_love/o7xpyn1/
false
6
t1_o7xpuyp
On device? I am not sure how reliable it is. If you have a desktop you can host a llama.cpp server + open-webui instance and tailscale yourself into the network; open-webui has a very polished iOS PWA you can add to the home screen. Could be considered overkill, as you have everything there: MCPs, tools, functions, image ...
2
0
2026-02-28T20:16:41
iChrist
false
null
0
o7xpuyp
false
/r/LocalLLaMA/comments/1rhca31/ios_apps_with_toolcalling_web_search/o7xpuyp/
false
2
t1_o7xprpb
Desperately needs a thinking budget. When I first set it up I prompted “test”. It spent ~1000 tokens thinking about how to respond 🤣
6
0
2026-02-28T20:16:12
noob10
false
null
0
o7xprpb
false
/r/LocalLLaMA/comments/1rh43za/qwen_3535ba3b_is_beyond_expectations_its_replaced/o7xprpb/
false
6
t1_o7xpq1p
I love your readme ^^
2
0
2026-02-28T20:15:57
Extraaltodeus
false
null
0
o7xpq1p
false
/r/LocalLLaMA/comments/1rg9lli/little_qwen_35_27b_and_qwen_35ba3b_models_did/o7xpq1p/
false
2
t1_o7xpp8m
I faced an issue loading a GGUF embedding model with the llama-cpp-python module. The LLM loads and runs easily; the issue is with the embedding model. Can you help me? The embedding code is:

```python
import llama_cpp

llm = llama_cpp.Llama(model_path="path/to/model.gguf", embedding=True)
embeddings = llm.create_embedding("Hello, world!")
# or cr...
```
1
0
2026-02-28T20:15:50
MahabharataHindi
false
null
0
o7xpp8m
false
/r/LocalLLaMA/comments/1nqyi1x/embedding_with_llamacpp_server/o7xpp8m/
false
1
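For comparison, a self-contained version of that snippet, under the assumption that the GGUF is a dedicated embedding model; the path is illustrative:

```python
from llama_cpp import Llama

emb = Llama(
    model_path="path/to/embedding-model.gguf",  # hypothetical embedding GGUF
    embedding=True,   # build the context in embedding mode
    n_ctx=2048,
)

result = emb.create_embedding("Hello, world!")
vector = result["data"][0]["embedding"]   # OpenAI-style response shape
print(len(vector))                        # embedding dimensionality
```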
t1_o7xpmqi
Yes I set it up to work with Claude Code on my M1 Max Macbook 64GB via llama-server, the exact settings and performance notes are here: [https://pchalasani.github.io/claude-code-tools/integrations/local-llms/#qwen35-35b-a3b--smart-general-purpose-moe](https://pchalasani.github.io/claude-code-tools/integrations/loc...
1
0
2026-02-28T20:15:28
SatoshiNotMe
false
null
0
o7xpmqi
false
/r/LocalLLaMA/comments/1rh6455/anybody_able_to_get_qwen3535ba3b_working_with/o7xpmqi/
false
1
t1_o7xp11x
I saw on Reddit today specific settings to make Qwen 3.5 work at its best. It was said that the model is very sensitive to those settings, but once they're applied it is impressively good. Please do a quick search for it.
2
0
2026-02-28T20:12:17
Hour_Cartoonist5239
false
null
0
o7xp11x
false
/r/LocalLLaMA/comments/1rh9dt3/do_you_find_qwen314bq8_0_15gb_smarter_than/o7xp11x/
false
2
t1_o7xozl1
Your wrapper is exactly what I wanted to have, but I didn't have the time to implement it so far. Is it open-source? Thank you.
9
0
2026-02-28T20:12:04
ayu-wraith
false
null
0
o7xozl1
false
/r/LocalLLaMA/comments/1rh6pru/google_found_that_longer_chain_of_thought/o7xozl1/
false
9
t1_o7xoygc
If you've invested a lot of time into building software around a certain model, it's not always as easy as just dropping in the newest model.
2
0
2026-02-28T20:11:54
Lesser-than
false
null
0
o7xoygc
false
/r/LocalLLaMA/comments/1rh46g2/why_some_still_playing_with_old_models_nostalgia/o7xoygc/
false
2
t1_o7xosez
Maybe, maybe in like 20 years or something those 6000s will become dirt cheap. I am hoping for that because I'll never have enough money to buy them at their current price.
5
0
2026-02-28T20:11:02
Maleficent_Celery_55
false
null
0
o7xosez
false
/r/LocalLLaMA/comments/1rh9u4r/this_sub_is_incredible/o7xosez/
false
5
t1_o7xorkx
I don't think the problem is dense vs MoE, but that no one is tuning smaller models for creative tasks. Modern models are better at long context than any dense model ever was.
1
0
2026-02-28T20:10:55
nomorebuttsplz
false
null
0
o7xorkx
false
/r/LocalLLaMA/comments/1rh9ygz/is_anyone_else_waiting_for_a_6070b_moe_with_810b/o7xorkx/
false
1
t1_o7xoad4
Yes it's very annoying
1
0
2026-02-28T20:08:21
tomakorea
false
null
0
o7xoad4
false
/r/LocalLLaMA/comments/1rhaoty/anyone_noticing_qwen35_27b_getting_stuck_in/o7xoad4/
false
1
t1_o7xo8f7
Well, it's kind of a pun, as he wrote it in small letters, meaning "small qwen". But regardless, I don't disagree that there are many bots.
6
0
2026-02-28T20:08:04
Black-Mack
false
null
0
o7xo8f7
false
/r/LocalLLaMA/comments/1rgzul5/are_you_ready_for_small_qwens/o7xo8f7/
false
6
t1_o7xo6q4
It’s weird that AI models have some of the same thinking problems as people, like spiralling. “But wait, am I really right about this? Is my wording off? Maybe this is the wrong message to send.”
11
0
2026-02-28T20:07:50
Zomunieo
false
null
0
o7xo6q4
false
/r/LocalLLaMA/comments/1rh6pru/google_found_that_longer_chain_of_thought/o7xo6q4/
false
11
t1_o7xnxn3
I don't use Xcode so I'm not sure if you're looking for that specific IDE. For VSCode, Claude Code is actually pretty good; you can configure your own model via ENV vars in the settings. I have tried Kilo Code, Roo Code, Cline, Continue, and Aider with varying success too. I personally use the CLI, so I use Claude Code connecte...
1
0
2026-02-28T20:06:29
Djagatahel
false
null
0
o7xnxn3
false
/r/LocalLLaMA/comments/1rgtxry/is_qwen35_a_coding_game_changer_for_anyone_else/o7xnxn3/
false
1
t1_o7xnwi5
1. See here how I used llama-bench for 35B-A3B: https://www.reddit.com/r/LocalLLaMA/comments/1rg4zqv/comment/o7rszuj/?context=3&utm_source=share&utm_medium=mweb3x&utm_name=mweb3xcss&utm_term=1&utm_content=share_button I recommend b=2048 ub=2048. But depends on your setup.
2. Yes, it increases PP speed a lot. TG may...
1
0
2026-02-28T20:06:19
OsmanthusBloom
false
null
0
o7xnwi5
false
/r/LocalLLaMA/comments/1rhbqoq/how_do_i_figure_out_b_batch_size_to_increase/o7xnwi5/
false
1
t1_o7xnrx2
Pen and paper? Fancy. I use an abacus.
4
0
2026-02-28T20:05:39
fallingdowndizzyvr
false
null
0
o7xnrx2
false
/r/LocalLLaMA/comments/1rh9u4r/this_sub_is_incredible/o7xnrx2/
false
4
t1_o7xner5
The gap exists because MoE scaling laws push you toward either more experts with smaller activation (Mixtral-style 8x7B) or fewer larger experts. A 60-70B total with 8-10B active is awkward architecturally — you need enough experts to justify the routing overhead but each expert needs enough capacity to be useful. At 8...
8
0
2026-02-28T20:03:45
tom_mathews
false
null
0
o7xner5
false
/r/LocalLLaMA/comments/1rh9ygz/is_anyone_else_waiting_for_a_6070b_moe_with_810b/o7xner5/
false
8
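A rough back-of-envelope for why that middle size is awkward, assuming ~0.5 bytes per parameter at Q4; the sizes are illustrative, not benchmarks:

```python
# Hypothetical 65B-total / 9B-active MoE at Q4 (~0.5 bytes per parameter).
total_params, active_params = 65e9, 9e9

weights_gb = total_params * 0.5 / 1e9   # memory just to hold the weights
active_gb  = active_params * 0.5 / 1e9  # weight bytes read per decoded token

print(f"Q4 weights: ~{weights_gb:.1f} GB")        # ~32.5 GB: past 24 GB cards
print(f"~{active_gb:.1f} GB touched per token")   # decode speed tracks active size
```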
t1_o7xncwi
Ok name one
1
0
2026-02-28T20:03:29
numberwitch
false
null
0
o7xncwi
false
/r/LocalLLaMA/comments/1rhca31/ios_apps_with_toolcalling_web_search/o7xncwi/
false
1
t1_o7xn8c3
Just write to their support team.
1
0
2026-02-28T20:02:49
archadigi
false
null
0
o7xn8c3
false
/r/LocalLLaMA/comments/1rabo34/local_tts_server_with_voice_cloning_nearrealtime/o7xn8c3/
false
1
t1_o7xmxix
Go with the 27B. The 35B only has 3B active, so if you can fit it it's very fast, but also dumb compared to a dense model.
6
0
2026-02-28T20:01:18
GoldPanther
false
null
0
o7xmxix
false
/r/LocalLLaMA/comments/1rgtxry/is_qwen35_a_coding_game_changer_for_anyone_else/o7xmxix/
false
6
t1_o7xmwdy
I have the same setup as OP, and 27B spends so long thinking! It’s practically neurotic. Good output though… eventually.
4
0
2026-02-28T20:01:08
ConspicuousSomething
false
null
0
o7xmwdy
false
/r/LocalLLaMA/comments/1rh9k63/qwen35_35ba3b_replaced_my_2model_agentic_setup_on/o7xmwdy/
false
4
t1_o7xmvbv
From everything that I have read about LLMs, I've always seen that each new token's K and V values depend on the previous tokens' processing results. Therefore, if you replace the entire KV cache, then it's functionally the same as replacing the entire prompt; while if you replace a slice of KV cache from a different prompt...
9
0
2026-02-28T20:00:59
No-Refrigerator-1672
false
null
0
o7xmvbv
false
/r/LocalLLaMA/comments/1rh802w/what_if_llm_agents_passed_kvcache_to_each_other/o7xmvbv/
false
9
t1_o7xmukp
Ran DeepSeek-R1 locally on a 3090 last week — inference speed was surprisingly decent for a 70B quant. The reasoning traces are genuinely useful for debugging prompts. Anyone else using it for code generation workflows?
-4
0
2026-02-28T20:00:52
GillesCode
false
null
0
o7xmukp
false
/r/LocalLLaMA/comments/1rh2lew/openai_pivot_investors_love/o7xmukp/
false
-4
t1_o7xmuky
I like to call this phenomenon semantic spiraling. Propagation through latent space with each thinking token can lead to the thread getting trapped in an errant region. Like making a wrong turn and just getting more lost as you go. Eventually you start going in circles. If only a model could ask for directions.
2
0
2026-02-28T20:00:52
fervoredweb
false
null
0
o7xmuky
false
/r/LocalLLaMA/comments/1rh6pru/google_found_that_longer_chain_of_thought/o7xmuky/
false
2
t1_o7xmtsi
It's an M4 MacBook Air with 32GB, on 35B currently doing around 18 tps - just feels a bit slow. The 4-bit MLX version is much faster but the quality is much worse
2
0
2026-02-28T20:00:45
ChickenShieeeeeet
false
null
0
o7xmtsi
false
/r/LocalLLaMA/comments/1rh43za/qwen_3535ba3b_is_beyond_expectations_its_replaced/o7xmtsi/
false
2
t1_o7xmqag
The average user has 12-16gb vram.
12
0
2026-02-28T20:00:14
Birdinhandandbush
false
null
0
o7xmqag
false
/r/LocalLLaMA/comments/1rh9ygz/is_anyone_else_waiting_for_a_6070b_moe_with_810b/o7xmqag/
false
12
t1_o7xmme8
Where have you been comrade?
1
0
2026-02-28T19:59:42
joosefm9
false
null
0
o7xmme8
false
/r/LocalLLaMA/comments/1rgzul5/are_you_ready_for_small_qwens/o7xmme8/
false
1
t1_o7xmilp
Increasing batch (-b) and microbatch (-ub) makes a huge difference for me. With a 4090, usually 4096 for both options is optimal. You can try different batch sizes with llama-bench. I've also found --no-mmap to be critical to improve pp
2
0
2026-02-28T19:59:09
kevin_1994
false
null
0
o7xmilp
false
/r/LocalLLaMA/comments/1rhbqoq/how_do_i_figure_out_b_batch_size_to_increase/o7xmilp/
false
2
t1_o7xmhkh
I can't be the only one that couldn't give a fuck less about image processing? I want a model that can hold an interactive voice conversation with me in real-time.
4
0
2026-02-28T19:59:00
nullptr777
false
null
0
o7xmhkh
false
/r/LocalLLaMA/comments/1rh095c/deepseek_v4_will_be_released_next_week_and_will/o7xmhkh/
false
4
t1_o7xmev4
All LLM apps have internet searching functionality
-1
0
2026-02-28T19:58:37
Realistic_Muscles
false
null
0
o7xmev4
false
/r/LocalLLaMA/comments/1rhca31/ios_apps_with_toolcalling_web_search/o7xmev4/
false
-1
t1_o7xme7l
It supports OpenAI API compatibility, and since it's dockerized you can basically run it on Windows; Ollama and LM Studio will be auto-detected upon installation, and llama.cpp can be added in providers
1
0
2026-02-28T19:58:31
BadBoy17Ge
false
null
0
o7xme7l
false
/r/LocalLLaMA/comments/1rhalir/local_llms_are_slow_i_have_too_many_things_to_try/o7xme7l/
false
1
t1_o7xmdkx
How are you serving up the model?
1
0
2026-02-28T19:58:26
Rollingsound514
false
null
0
o7xmdkx
false
/r/LocalLLaMA/comments/1rh43za/qwen_3535ba3b_is_beyond_expectations_its_replaced/o7xmdkx/
false
1
t1_o7xm2t0
Please add llama.cpp too as some of us don't use any wrappers.
4
0
2026-02-28T19:56:54
pmttyji
false
null
0
o7xm2t0
false
/r/LocalLLaMA/comments/1rhbfya/shunyanet_sentinel_a_selfhosted_rss_aggregator/o7xm2t0/
false
4
t1_o7xlyjv
These are totally acceptable numbers for most single user use.
7
0
2026-02-28T19:56:17
schnauzergambit
false
null
0
o7xlyjv
false
/r/LocalLLaMA/comments/1rhbtnw/the_state_of_openweights_llms_performance_on/o7xlyjv/
false
7
t1_o7xlqjx
Toight, I did miss that! Interesting! I love how everything is endlessly confusing and never makes sense for more than 8 minutes.
2
0
2026-02-28T19:55:09
Hector_Rvkp
false
null
0
o7xlqjx
false
/r/LocalLLaMA/comments/1rh43za/qwen_3535ba3b_is_beyond_expectations_its_replaced/o7xlqjx/
false
2
t1_o7xlp1r
It's all fun and games until the AIs conclude that we human people are useless and a threat and decide to terminate us.
2
0
2026-02-28T19:54:56
taoyx
false
null
0
o7xlp1r
false
/r/LocalLLaMA/comments/1rh2lew/openai_pivot_investors_love/o7xlp1r/
false
2
t1_o7xlg0r
You know you can turn that off right? You should be turning off a bunch of things in firefox or any other browser if you want privacy. At least in firefox it's easy to turn things off. Not so much in Chrome.
1
0
2026-02-28T19:53:38
fallingdowndizzyvr
false
null
0
o7xlg0r
false
/r/LocalLLaMA/comments/1reqdpb/overwhelmed_by_so_many_quantization_variants/o7xlg0r/
false
1
t1_o7xlb1d
Good for Dario. I know where my money will be going.
1
0
2026-02-28T19:52:57
Own_Version_5081
false
null
0
o7xlb1d
false
/r/LocalLLaMA/comments/1rgn4ki/president_trump_orders_all_federal_agencies_in/o7xlb1d/
false
1
t1_o7xl7ev
Yes, just by following the unsloth instructions - but I am seeing the stopping-mid-task problem you describe. I’ve not had a chance to see if it happens with thinking turned off.
2
0
2026-02-28T19:52:25
cromagnone
false
null
0
o7xl7ev
false
/r/LocalLLaMA/comments/1rh6455/anybody_able_to_get_qwen3535ba3b_working_with/o7xl7ev/
false
2
t1_o7xkump
I am using llama.cpp with the unsloth Q8-K-XL quant. Limited context to 128K to fit it in 48GB VRAM
2
0
2026-02-28T19:50:36
eribob
false
null
0
o7xkump
false
/r/LocalLLaMA/comments/1rgvma8/which_size_of_qwen35_are_you_planning_to_run/o7xkump/
false
2
t1_o7xkixc
Upgrade your version of llama.cpp. I benchmarked Qwen3 Coder Next a couple days ago just fine with llama-bench. In my testing, larger batch and ubatch sizes only increased speed up to 2048 for each. That was on Strix Halo with Vulkan, so your experience may be different depending on your hardware.
3
0
2026-02-28T19:48:55
isugimpy
false
null
0
o7xkixc
false
/r/LocalLLaMA/comments/1rhbqoq/how_do_i_figure_out_b_batch_size_to_increase/o7xkixc/
false
3
t1_o7xkh0t
Can it run openclaw?
-4
0
2026-02-28T19:48:39
cfipilot715
false
null
0
o7xkh0t
false
/r/LocalLLaMA/comments/1rh43za/qwen_3535ba3b_is_beyond_expectations_its_replaced/o7xkh0t/
false
-4
t1_o7xkfqd
Anthropic will not fall. Too much Amazon money tied up there. The lawsuits will start on Monday. Claude as a lawyer agent will beat this case
1
0
2026-02-28T19:48:28
cvandyke01
false
null
0
o7xkfqd
false
/r/LocalLLaMA/comments/1rguty0/get_your_local_models_in_order_anthropic_just_got/o7xkfqd/
false
1
t1_o7xkeqe
I am really looking forward to the 1.7B version.
2
0
2026-02-28T19:48:19
charles25565
false
null
0
o7xkeqe
false
/r/LocalLLaMA/comments/1rgzul5/are_you_ready_for_small_qwens/o7xkeqe/
false
2
t1_o7xk8o0
You missed out on a lot of discussion around MXFP4 regarding Qwen 3.5 in the past days: [https://www.reddit.com/r/LocalLLaMA/comments/1rgel19/new_qwen3535ba3b_unsloth_dynamic_ggufs_benchmarks/](https://www.reddit.com/r/LocalLLaMA/comments/1rgel19/new_qwen3535ba3b_unsloth_dynamic_ggufs_benchmarks/) There was a...
6
0
2026-02-28T19:47:27
Maximum_Use_8404
false
null
0
o7xk8o0
false
/r/LocalLLaMA/comments/1rh43za/qwen_3535ba3b_is_beyond_expectations_its_replaced/o7xk8o0/
false
6
t1_o7xk6ca
The sad part about what companies like Anthropic say is that people believe it and then parrot it. "Distillation attack". Heh.
2
0
2026-02-28T19:47:07
Dry_Yam_4597
false
null
0
o7xk6ca
false
/r/LocalLLaMA/comments/1rgips0/how_does_training_an_ai_on_another_ai_actually/o7xk6ca/
false
2
t1_o7xjxov
M4 what? I have an M4 mini 16gb that only runs embeddings. I have an M2 Pro 32GB that runs 35B at 21tps. I have an M3 Ultra that runs 122B at 50tps. But with unified memory systems like Macs, and especially with these Qwen models, the preload is the big potential bottleneck.
4
0
2026-02-28T19:45:53
zipzag
false
null
0
o7xjxov
false
/r/LocalLLaMA/comments/1rh43za/qwen_3535ba3b_is_beyond_expectations_its_replaced/o7xjxov/
false
4
t1_o7xjolb
> Isn't mxfp4 supposed to be super optimized for the hardware?

No. Why do you think that?
3
0
2026-02-28T19:44:35
fallingdowndizzyvr
false
null
0
o7xjolb
false
/r/LocalLLaMA/comments/1rh43za/qwen_3535ba3b_is_beyond_expectations_its_replaced/o7xjolb/
false
3
t1_o7xjgl6
Some extremely skilled people here - and people are polite and show respect. I value that A LOT
15
0
2026-02-28T19:43:25
leonbollerup
false
null
0
o7xjgl6
false
/r/LocalLLaMA/comments/1rh9u4r/this_sub_is_incredible/o7xjgl6/
false
15
t1_o7xjg5h
Yep, it's looking like I will make the switch to Qwen after swearing by Devstral Small 2 24B for the past few months. Although for any model it's a good idea to wait for the early adopters to find all the llamacpp issues, and for faster/better IQ quants to come out...
4
0
2026-02-28T19:43:22
kiwibonga
false
null
0
o7xjg5h
false
/r/LocalLLaMA/comments/1rh9k63/qwen35_35ba3b_replaced_my_2model_agentic_setup_on/o7xjg5h/
false
4
t1_o7xjel0
Yes. And there are several ways to get more out of a CPU / normal RAM environment. For example, I read recently here on Reddit that the vast majority of DDR RAM (other than Samsung RAM) has an inherent and very good inference performance capability as a byproduct of its internal electronics design. Of course you can ...
1
0
2026-02-28T19:43:08
Protopia
false
null
0
o7xjel0
false
/r/LocalLLaMA/comments/1rgixk7/accuracy_vs_speed_my_top_5/o7xjel0/
false
1
t1_o7xjdi4
Afaik every token is conditioned on prior tokens in KV cache. So if the prefix is different and you copy the KV tensors after the prefix, this won't give the same result. What about RoPE? The agent prompts are different, do you undo and reapply the RoPE as well?
11
0
2026-02-28T19:42:59
audioen
false
null
0
o7xjdi4
false
/r/LocalLLaMA/comments/1rh802w/what_if_llm_agents_passed_kvcache_to_each_other/o7xjdi4/
false
11
t1_o7xjcrf
I haven't tried DeepSeek V3 much yet. Qwen 3.5 is good for roleplay. The 3.5 flash version has a huge context window. But Qwen 3.5 also has a much higher hallucination rate. Good for maintaining character... weaker for memory extraction and accuracy. That said, Qwen 3.5, I think, writes more beautifully. They're ...
1
0
2026-02-28T19:42:52
Alerak1984
false
null
0
o7xjcrf
false
/r/LocalLLaMA/comments/1r8ogab/qwen35_vs_deepseekv3_the_openweight_battle/o7xjcrf/
false
1
t1_o7xj4lf
I will share the links after I post it on GitHub. I've only gotten as far as putting the apps themselves on Hugging Face. Between working full time, developing, and genuinely trying to help people when I can, you can imagine there's not a lot of time. https://www.melanovproducts.com/ go to: see the apps > diget lite, cod...
1
0
2026-02-28T19:41:42
melanov85
false
null
0
o7xj4lf
false
/r/LocalLLaMA/comments/1rgcosw/trained_and_quantized_an_llm_on_a_gtx_1650_4gb/o7xj4lf/
false
1
t1_o7xiyzj
We suggest using the following sets of sampling parameters depending on the mode and task type:

* **Thinking mode for general tasks**: `temperature=1.0`, `top_p=0.95`, `top_k=20`, `min_p=0.0`, `presence_penalty=1.5`, `repetition_penalty=1.0`
* **Thinking mode for precise coding tasks (e.g., WebDev)**: `temperat...
13
0
2026-02-28T19:40:52
kironlau
false
null
0
o7xiyzj
false
/r/LocalLLaMA/comments/1rh43za/qwen_3535ba3b_is_beyond_expectations_its_replaced/o7xiyzj/
false
13
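Those settings map directly onto llama-cpp-python's sampler arguments (the library calls the repetition penalty `repeat_penalty`). A sketch for the general thinking mode; the model path is hypothetical:

```python
from llama_cpp import Llama

llm = Llama(model_path="qwen3.5-35b-a3b.gguf", n_ctx=32768)  # hypothetical path

# Thinking mode, general tasks, per the recommended settings above.
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Plan a 3-step refactor."}],
    temperature=1.0,
    top_p=0.95,
    top_k=20,
    min_p=0.0,
    presence_penalty=1.5,
    repeat_penalty=1.0,   # == repetition_penalty=1.0
)
print(out["choices"][0]["message"]["content"])
```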
t1_o7xis3h
Hadn't thought of that. Fuck me, right?! 
1
0
2026-02-28T19:39:52
richardbaxter
false
null
0
o7xis3h
false
/r/LocalLLaMA/comments/1rg8dex/pewdiepie_finetuned_qwen25coder32b_to_beat/o7xis3h/
false
1
t1_o7xir0z
1. article
2. 𝕏
3. Clawdbot
4. polymarket
5. make money
6. I came
3
0
2026-02-28T19:39:43
valdev
false
null
0
o7xir0z
false
/r/LocalLLaMA/comments/1rhazbc/13_m1_mbp_instead_of_m4_mac_mini/o7xir0z/
false
3
t1_o7xiol0
I am sure he’d be open to feedback; he doesn't claim to be an AI expert, but is more like a Java or software dev in a normal day-to-day job. Like, give him a more realistic protocol. But I like watching him learn about local AI solutions and trying out new rigs to know what is happening at the hobbyist or small-business level of e...
1
0
2026-02-28T19:39:22
Responsible-Taste772
false
null
0
o7xiol0
false
/r/LocalLLaMA/comments/1rcbm66/8_dgx_cluster_by_alex_ziskind_easily_the_most/o7xiol0/
false
1
t1_o7xilvr
My previous employer was also in the US, and so are the Amazon and Microsoft cloud services they were running on, so if the feds really wanted the data of a US customer we wouldn't have been able to stop them either. We actually did have our own older in-house vision models for EU customers because of EU data handling co...
2
0
2026-02-28T19:38:59
HopePupal
false
null
0
o7xilvr
false
/r/LocalLLaMA/comments/1rh43za/qwen_3535ba3b_is_beyond_expectations_its_replaced/o7xilvr/
false
2
t1_o7xieih
It's 9
1
0
2026-02-28T19:37:54
jacek2023
false
null
0
o7xieih
false
/r/LocalLLaMA/comments/1rgzul5/are_you_ready_for_small_qwens/o7xieih/
false
1
t1_o7xibh2
What are people using / recommending for 3x 3090 and 256gb DDR4 2400? I haven't tried the 397b at q4, but I think I could fit it. Wondering if anyone has similar hardware and can make a recommendation... my internet connection is slowwww it'll take a day to download that model
1
0
2026-02-28T19:37:27
Judtoff
false
null
0
o7xibh2
false
/r/LocalLLaMA/comments/1rgvma8/which_size_of_qwen35_are_you_planning_to_run/o7xibh2/
false
1
t1_o7xiaao
For complex instruction-following things, for example a graph algorithm where you tell the system how to do the algorithm, the one that has worked best for me is Qwen 3 Next.
1
0
2026-02-28T19:37:17
RareRecommendation94
false
null
0
o7xiaao
false
/r/LocalLLaMA/comments/1reds0p/qwen_35_craters_on_hard_coding_tasks_tested_all/o7xiaao/
false
1
t1_o7xi9ws
Yes, this is with llama-server. I run the model with these configs:

[Qwen3.5 35B-A3B General]
model = /models/Qwen3.5-35B-A3B-UD-Q4_K_XL.gguf
temp = 1.0
top-p = 0.95
top-k = 20
min-p = 0.0
ctk = q8_0
ctv = q8_0
cpu-moe = on
c = 262144

[Qwen3.5 35B-A3B Coding]
model = /models/Qwen3.5-35B-A3B-UD-Q4...
1
0
2026-02-28T19:37:13
Hammer-Evader-5624
false
null
0
o7xi9ws
false
/r/LocalLLaMA/comments/1rgzul5/are_you_ready_for_small_qwens/o7xi9ws/
false
1
t1_o7xi7ot
Web search must be tough. Now that an AI can hire meat puppets, perhaps it can send one down to the local library
0
0
2026-02-28T19:36:53
zipzag
false
null
0
o7xi7ot
false
/r/LocalLLaMA/comments/1rh43za/qwen_3535ba3b_is_beyond_expectations_its_replaced/o7xi7ot/
false
0