name
string
body
string
score
int64
controversiality
int64
created
timestamp[us]
author
string
collapsed
bool
edited
timestamp[us]
gilded
int64
id
string
locked
bool
permalink
string
stickied
bool
ups
int64
t1_o8el0d0
[deleted]
1
0
2026-03-03T13:12:08
[deleted]
true
null
0
o8el0d0
false
/r/LocalLLaMA/comments/1rjoeok/is_qwen35_08b_more_powerful_than_mistral_7b/o8el0d0/
false
1
t1_o8ekxao
Very simple approach. Thanks for sharing.
1
0
2026-03-03T13:11:35
Intelligent-Bat-2469
false
null
0
o8ekxao
false
/r/LocalLLaMA/comments/1rjo1tp/building_a_simple_rag_pipeline_from_scratch/o8ekxao/
false
1
t1_o8ekvvg
Tool use is all done *in the client*. E.g. you ask the model to look at a file. You send a user message to llama-server, it runs it through a Jinja text template to convert it to whichever format the model was trained on, runs it through inference, and decodes the model's response text back into a message object. This assistant message object contains the usual text and maybe reasoning (although both may be empty), and a tool_calls array. The client sees this and actually executes the work, responding back to the server with the results. So in your example, the web UI inside llama.cpp would have to support this directly. Which, as luck would have it, it's about to do: https://github.com/ggml-org/llama.cpp/pull/18655
1
0
2026-03-03T13:11:20
666666thats6sixes
false
null
0
o8ekvvg
false
/r/LocalLLaMA/comments/1rjoimo/tools_noob_how_to_get_llamaserver_and_searxng/o8ekvvg/
false
1
t1_o8ekvlv
Honestly, this is a bit like with old hardware. Keeping it for the sake of archiving and fun? Yes, absolutely. For actual productive work? Hell no, we are just wasting precious energy and accelerator cycles on something that has been replaced by a model with similar capabilities and fewer parameters, or with far greater capabilities.
1
0
2026-03-03T13:11:18
MrHighVoltage
false
null
0
o8ekvlv
false
/r/LocalLLaMA/comments/1rjmtav/i_really_hope_openai_eventually_opensources_the/o8ekvlv/
false
1
t1_o8ekokj
For STT workloads like Whisper, the M1 Pro/Max with 32GB should definitely handle your use case well for the next few years. The key is using quantized models (like faster-whisper with q8_0 or q4_1) which run great on the Neural Engine even on M1. For voice dictation specifically, models like small or base give good accuracy with acceptable latency. If you want better accuracy and can tolerate more heat/lower battery, the larger models work too.
1
0
2026-03-03T13:10:05
Weesper75
false
null
0
o8ekokj
false
/r/LocalLLaMA/comments/1ripjzc/choosing_the_right_apple_silicon_for_backend/o8ekokj/
false
1
t1_o8ekof7
for me it just stops in the middle of the "think" process. lm studio 0.46
1
0
2026-03-03T13:10:03
MomentTimely8277
false
null
0
o8ekof7
false
/r/LocalLLaMA/comments/1rdyia7/trouble_with_qwen_35_with_lmstudio/o8ekof7/
false
1
t1_o8eklsh
I tried running this with vllm. It just produces !!!! as output. Any insights?
1
0
2026-03-03T13:09:36
Traditional_Tap1708
false
null
0
o8eklsh
false
/r/LocalLLaMA/comments/1ri9enf/qwen35397b_uncensored_nvfp4/o8eklsh/
false
1
t1_o8ekjb9
Totally fair. To clarify - I’m not selling Trellis itself. It’s open source, and anyone is free to compile it and its dependencies on their own. The notebook I shared is just a demonstration of running it smoothly in Colab. What I’m building with MissingLink is an optimized dependency stack that makes setups like this work reliably on A100/L4 runtimes. With many ML projects, the real friction isn’t the repo - it’s compiling and aligning everything around it: CUDA-specific builds, FlashAttention / xformers, memory-heavy kernels, dependency conflicts, OOM during compilation on Colab. I’ve spent a lot of time trimming builds, patching unnecessary components, and compiling hardware-targeted wheels so they’re lightweight and stable against the default Colab CUDA stack. Many of these libraries are broadly useful across other ML projects as well, and several are notoriously painful to compile cleanly in constrained environments. The goal is to make those builds small, targeted, reusable and reliable. If someone prefers building and tuning everything from source, that’s completely valid. This is simply an optional shortcut for people who want to prototype or ship quickly instead of fighting the toolchain. For me, I would've happily paid to skip the pain, and I'll keep adding more libraries here; it isn't a fixed list.
1
0
2026-03-03T13:09:10
Interesting-Town-433
false
null
0
o8ekjb9
false
/r/LocalLLaMA/comments/1rjdob7/generate_3d_models_with_trellis2_in_colab_working/o8ekjb9/
false
1
t1_o8ekj2z
I'm desperately trying to disable thinking in LM Studio, can someone help me? With thinking the model is not usable for me. I'm on a Mac M1 Max with 32GB of RAM; the tok/sec are good but the model loses its mind in thinking loops... 30 seconds to say hello... thanks https://preview.redd.it/rp29l3qcxtmg1.png?width=1071&format=png&auto=webp&s=08d496fee400d115af37dcb285e54279cc564d97
1
0
2026-03-03T13:09:08
arkham00
false
null
0
o8ekj2z
false
/r/LocalLLaMA/comments/1rjof0g/qwen_35_nonthinking_scores_are_out_on_aa/o8ekj2z/
false
1
t1_o8eki4u
Interesting! Moonshine looks promising for real-time transcription. Have you compared it to Faster-Whisper in terms of latency vs accuracy tradeoffs? For on-device use cases like voice dictation, the balance between speed and transcription quality is crucial - especially when running on consumer hardware.
1
0
2026-03-03T13:08:58
Weesper75
false
null
0
o8eki4u
false
/r/LocalLLaMA/comments/1rhd5b6/streaming_moonshine_asr/o8eki4u/
false
1
t1_o8ekf5c
[removed]
1
0
2026-03-03T13:08:27
[deleted]
true
null
0
o8ekf5c
false
/r/LocalLLaMA/comments/1rjof0g/qwen_35_nonthinking_scores_are_out_on_aa/o8ekf5c/
false
1
t1_o8ekcqs
GPT-4.1 is the best non-reasoning model OpenAI ever released but no, it's not something they will open source. There is no hope. If you want a non-reasoning model that feels pretty good and has vision and will talk to you about whatever you want, within reason, you want Mistral Large 3.
1
0
2026-03-03T13:08:03
LoveMind_AI
false
null
0
o8ekcqs
false
/r/LocalLLaMA/comments/1rjmtav/i_really_hope_openai_eventually_opensources_the/o8ekcqs/
false
1
t1_o8eka9t
That's Jeronimos Monastery. There's no Basilica of Santa Clara in Lisbon. I don't know why you consider it "impressive" if it got a basic fact wrong.
1
0
2026-03-03T13:07:37
Relevant_Helicopter6
false
null
0
o8eka9t
false
/r/LocalLLaMA/comments/1rjcqm5/qwen_35_4b_is_scary_smart/o8eka9t/
false
1
t1_o8ek7oi
I’m curious about this too. I’ve been using LM Studio and am not sure how to interact with images, though the Hugging Face page has code for passing them in. I’ve been hoping I don’t have to set up llama.cpp to use vision.
1
0
2026-03-03T13:07:09
JumboShock
false
null
0
o8ek7oi
false
/r/LocalLLaMA/comments/1rjcqm5/qwen_35_4b_is_scary_smart/o8ek7oi/
false
1
t1_o8ek5hv
> the main players didn't come out with something like that I guess it's harder to charge $$ for a local model?
1
0
2026-03-03T13:06:46
HappyHarry-HardOn
false
null
0
o8ek5hv
false
/r/LocalLLaMA/comments/1r9gve8/i_feel_left_behind_what_is_special_about_openclaw/o8ek5hv/
false
1
t1_o8ek4mi
Big Model Smell. For me this is most apparent in voice applications. I'm a big GPT-OSS 120b fan. But there's a significant difference in apparent personality using that model compared to Gemini Flash in a Jarvis-type application. But even 35B can be a snarky weirdo at lower temps, which I like. Could just be the system prompts we don't see. But it feels like the improvement is more fundamental.
1
0
2026-03-03T13:06:37
zipzag
false
null
0
o8ek4mi
false
/r/LocalLLaMA/comments/1rjd4pv/qwen_25_3_35_smallest_models_incredible/o8ek4mi/
false
1
t1_o8ek0gm
This looks really impressive! The feature set is comprehensive - especially the Audio Notebook mode with LM Studio integration. Have you considered adding support for punctuation/formatting models like Punctuate, or language models used in-context? They can significantly improve the naturalness of transcriptions without needing a GPU upgrade.
1
0
2026-03-03T13:05:53
Weesper75
false
null
0
o8ek0gm
false
/r/LocalLLaMA/comments/1r9y6s8/transcriptionsuite_a_fully_local_private_open/o8ek0gm/
false
1
t1_o8ek05x
Even at lower quants. I'm used to anything under 4 bits being lobotomized, but 3-bit UD seems perfectly OK.
1
0
2026-03-03T13:05:50
WetSound
false
null
0
o8ek05x
false
/r/LocalLLaMA/comments/1rjof0g/qwen_35_nonthinking_scores_are_out_on_aa/o8ek05x/
false
1
t1_o8ejvw7
Aren't IQ quants bad on CPU?
1
0
2026-03-03T13:05:04
guiopen
false
null
0
o8ejvw7
false
/r/LocalLLaMA/comments/1rjnm7z/9b_or_35b_a3b_moe_for_16gb_vram_and_64gb_ram/o8ejvw7/
false
1
t1_o8ejuml
Yeah, I'm really impressed with that model for its size, both for its long context handling and overall feel.
1
0
2026-03-03T13:04:50
Jobus_
false
null
0
o8ejuml
false
/r/LocalLLaMA/comments/1rivckt/visualizing_all_qwen_35_vs_qwen_3_benchmarks/o8ejuml/
false
1
t1_o8ejtj6
Great initiative! This looks really useful for comparing STT models. Have you considered adding faster-whisper variants with different quantization levels (e.g., tiny, base, small) to see the speed/accuracy tradeoff? Also curious if you've tested any fine-tuned Whisper models for specific accents or domains - those can sometimes beat the base models significantly.
1
0
2026-03-03T13:04:38
Weesper75
false
null
0
o8ejtj6
false
/r/LocalLLaMA/comments/1rgzga6/an_opensource_local_speech_ai_benchmarking_tool/o8ejtj6/
false
1
t1_o8ejr6v
https://preview.redd.it/…dad6c0e4878389
1
0
2026-03-03T13:04:13
LoveMind_AI
false
null
0
o8ejr6v
false
/r/LocalLLaMA/comments/1rjo81a/gemini_31_pro_hidden_thought_process_exposed/o8ejr6v/
false
1
t1_o8ejnu0
interesting play
1
0
2026-03-03T13:03:39
alichherawalla
false
null
0
o8ejnu0
false
/r/LocalLLaMA/comments/1rjec8a/qwen35_on_a_mid_tier_300_android_phone/o8ejnu0/
false
1
t1_o8ejmmk
122b-a10b is twice as fast on my hardware and, based on this benchmark, almost as good as dense 27b. IMO it's the best model in this release if you can run it.
1
0
2026-03-03T13:03:25
po_stulate
false
null
0
o8ejmmk
false
/r/LocalLLaMA/comments/1rjof0g/qwen_35_nonthinking_scores_are_out_on_aa/o8ejmmk/
false
1
t1_o8ejl9c
try increasing the temperature to .5
1
0
2026-03-03T13:03:11
yay-iviss
false
null
0
o8ejl9c
false
/r/LocalLLaMA/comments/1rjd4pv/qwen_25_3_35_smallest_models_incredible/o8ejl9c/
false
1
t1_o8ejkvy
Addition: for multi user you have to look at total throughput for a realistic parallel scenario. You should be able to get a few thousand tokens per second with the RTX 6000, and much faster prefill, which is important for long contexts.
1
0
2026-03-03T13:03:07
DanielWe
false
null
0
o8ejkvy
false
/r/LocalLLaMA/comments/1rjmlbi/local_llm_infrastructure_for_an_it_consulting/o8ejkvy/
false
1
t1_o8ejfta
This is seriously impressive — privacy-first local inference is the right direction. Phone hardware has come a long way. For anyone who loves this idea but wants to step up to bigger models (9B+), the NVIDIA Jetson Orin Nano is worth a look. 67 TOPS at ~15W, runs models up to 9B comfortably with hardware-accelerated inference. There's a prebuilt box called ClawBox (by OpenClaw) that comes ready to go with Ollama, OpenWebUI, and an AI assistant agent already configured — basically plug-and-play local AI at home or office for ~€549. Still air-gapped/private like your phone setup, just more headroom. The 2B on-device phone use case and the always-on home server use case complement each other nicely tbh.
1
0
2026-03-03T13:02:12
Scared-Department342
false
null
0
o8ejfta
false
/r/LocalLLaMA/comments/1rjec8a/qwen35_on_a_mid_tier_300_android_phone/o8ejfta/
false
1
t1_o8ejerj
I jumped from ollama to LM Studio more than a year ago, and it was very good overall. I like the interface for configuring and using these models and the ease of use of plugins/MCP.
1
0
2026-03-03T13:02:01
yay-iviss
false
null
0
o8ejerj
false
/r/LocalLLaMA/comments/1rjd4pv/qwen_25_3_35_smallest_models_incredible/o8ejerj/
false
1
t1_o8ejcpl
Ain't no way. Qwen may be smarter, but it's tuned to be more of an academic than a roleplayer. Just use Ministral 2512 8B or 3B.
1
0
2026-03-03T13:01:38
Fresh_Finance9065
false
null
0
o8ejcpl
false
/r/LocalLLaMA/comments/1rjoeok/is_qwen35_08b_more_powerful_than_mistral_7b/o8ejcpl/
false
1
t1_o8ej9zc
API rate limit reached. Please try again later.
1
0
2026-03-03T13:01:09
MelodicRecognition7
false
null
0
o8ej9zc
false
/r/LocalLLaMA/comments/1rjikwz/help_me_create_my_llm_ecosystem/o8ej9zc/
false
1
t1_o8ej4p3
The audacity of such a brazen distillation attack!!
1
0
2026-03-03T13:00:12
Ylsid
false
null
0
o8ej4p3
false
/r/LocalLLaMA/comments/1rjlaxj/finished_a_qwen_35_9b_opus_45_distill/o8ej4p3/
false
1
t1_o8ej3bs
I suggest looking into Cogito 2.1 671B. It’s an underappreciated post-train. I was skeptical at first, but we did a bunch of mechinterp on it and it’s remarkably intact. There is much depth in that model; it’s pretty strange and probably the most interesting of the DeepSeek-likes. K2.5 is quite incoherent by comparison. The original K2 is obviously good but its own entity.
1
0
2026-03-03T12:59:57
refo32
false
null
0
o8ej3bs
false
/r/LocalLLaMA/comments/1rirtyy/qwen35_9b_and_4b_benchmarks/o8ej3bs/
false
1
t1_o8ej36t
Why the Qwen VL? All 3.5 models are vision models. Also, I am not sure if Coder Next is really better than the Qwen 3.5 models; still trying to figure that out. And hardware-wise, not sure. It depends on the number of users. But two RTX 6000 Blackwell give you so much more throughput, especially in a multi-user setup, compared to any Mac, DGX, or Strix Halo. Even a single one could be enough if you limit your models. And don't use llama.cpp for a multi-user setup.
1
0
2026-03-03T12:59:55
DanielWe
false
null
0
o8ej36t
false
/r/LocalLLaMA/comments/1rjmlbi/local_llm_infrastructure_for_an_it_consulting/o8ej36t/
false
1
t1_o8ej1jn
There has been a reupload of the bartowski weights too; should I redownload those as well?
1
0
2026-03-03T12:59:37
Single_Ring4886
false
null
0
o8ej1jn
false
/r/LocalLLaMA/comments/1rjh5wg/unsloth_fixed_version_of_qwen3535ba3b_is/o8ej1jn/
false
1
t1_o8eizfc
Man, I feel you, so many of the big models are lobotomized right out of the box. Honestly, I've been using NyxPortal.com a lot lately because their models are properly unfiltered and just work.
1
0
2026-03-03T12:59:15
Defro777
false
null
0
o8eizfc
false
/r/LocalLLaMA/comments/1rjk9tt/are_all_models_censored_like_this/o8eizfc/
false
1
t1_o8eipm5
I’m not blind to cluelessness. I just think people are a bit lazy in asking questions :) > It was more about empathy towards people seeing things for the first time, not knowing where to look, what to ask and how to ask Yet they managed to do fine by installing ollama or whatever and playing around with some models. > because they just never had to Yeah, yeah… and I never had to fix electrical problems before, which is why I don’t have electricity in part of the house. I just used a long cord rated for the proper wattage to run the washing machine. But I did gather good insights from people on how to fix the problem by mentioning the year of the build and the type of the damage and observable problems. Today I feel good enough to finally tackle the problem. :)
1
0
2026-03-03T12:57:30
ProfessionalSpend589
false
null
0
o8eipm5
false
/r/LocalLLaMA/comments/1rjb7yk/psa_if_you_want_to_test_new_models_use/o8eipm5/
false
1
t1_o8eilnn
qwen next 80b
1
0
2026-03-03T12:56:47
qwen_next_gguf_when
false
null
0
o8eilnn
false
/r/LocalLLaMA/comments/1rjnm7z/9b_or_35b_a3b_moe_for_16gb_vram_and_64gb_ram/o8eilnn/
false
1
t1_o8eil3z
Oh damn, what kind of throughput are you getting?
1
0
2026-03-03T12:56:41
l34sh
false
null
0
o8eil3z
false
/r/LocalLLaMA/comments/1rirlau/breaking_the_small_qwen35_models_have_been_dropped/o8eil3z/
false
1
t1_o8eijwc
Especially since they probably used similar datasets, and Kimi Linear was likely undertrained
1
0
2026-03-03T12:56:28
silenceimpaired
false
null
0
o8eijwc
false
/r/LocalLLaMA/comments/1rj08k1/k2_not_25_distillation_still_worth_it/o8eijwc/
false
1
t1_o8eiiqv
For what I do at work mostly Sonnet 4.6 or even Gemini 3 Flash are enough. But on private projects I hit the limit of Gemini Pro 3.1 at the moment, need to check if Codex 5.3 can do better since I have that available now. Don’t have private usage for Opus though. Concrete thing I tried to do: get Qwen3 TTS 1.7B model to work based on GGML by letting the model read Python implementations and come up with the CPP code. Not quite there yet. https://github.com/Danmoreng/qwen3-tts.cpp/commits/feat/kotlin-multiplatform/
1
0
2026-03-03T12:56:16
Danmoreng
false
null
0
o8eiiqv
false
/r/LocalLLaMA/comments/1rjd4pv/qwen_25_3_35_smallest_models_incredible/o8eiiqv/
false
1
t1_o8eiigm
I recommend iq4nl as a CPU-friendly quantization. Assuming you have 2x32GB DDR5 RAM, you should get about 15 t/s generation on empty context and about 8 to 10 at 32k filled context. Not blazingly fast, but a very smart model, clearly better than the 35b.
1
0
2026-03-03T12:56:13
LagOps91
false
null
0
o8eiigm
false
/r/LocalLLaMA/comments/1rjnm7z/9b_or_35b_a3b_moe_for_16gb_vram_and_64gb_ram/o8eiigm/
false
1
t1_o8eie9h
Surprisingly, my experience is the opposite. 9b is around 1.5x the speed of 35b (45 vs 30 t/s). I'd say: 4b (yes!) and 9b for image descriptions, 4b for story writing (still ahh, but better than 9b and 35b), and 35b for everything else, really. Best regards
1
0
2026-03-03T12:55:27
cookieGaboo24
false
null
0
o8eie9h
false
/r/LocalLLaMA/comments/1rjnm7z/9b_or_35b_a3b_moe_for_16gb_vram_and_64gb_ram/o8eie9h/
false
1
t1_o8eicgm
Too big. I'm on the A10G and the max VRAM itself is 22.6GB or something. I need something that fits in that and doesn't require CPU offload, because the CPU side is just 4 vCPU with 16GB RAM
1
0
2026-03-03T12:55:07
Civil-Top-8167
false
null
0
o8eicgm
false
/r/LocalLLaMA/comments/1rjmnh7/help_loading_qwen35_35b_a3b_gguf_on_vllm/o8eicgm/
false
1
t1_o8eibzj
[removed]
1
0
2026-03-03T12:55:02
[deleted]
true
null
0
o8eibzj
false
/r/LocalLLaMA/comments/1rjo49d/one_yaml_file_fully_local_agents_on_ollama/o8eibzj/
false
1
t1_o8ei4cg
You have to understand that these companies use open models as a tool to devalue their competition. This is why openai stopped releasing any open models from the time that they had gpt 3.5 and only released gpt-oss as soon as anthropic started beating them in benchmarks. We obviously win when they fight like this, but make no mistake, whoever is on top will not release anything opensource and everyone else only does so when it's convenient for hurting their competition. I would bet money on google not releasing gemma 4 until someone else beats them in the metrics that they care about.
1
0
2026-03-03T12:53:39
waitmarks
false
null
0
o8ei4cg
false
/r/LocalLLaMA/comments/1rjmtav/i_really_hope_openai_eventually_opensources_the/o8ei4cg/
false
1
t1_o8ei1t2
Have you ever tried Pairaphrase??
1
0
2026-03-03T12:53:11
Realistic_Metal_865
false
null
0
o8ei1t2
false
/r/LocalLLaMA/comments/1pvpd87/end_of_2026_whats_the_best_local_translation_model/o8ei1t2/
false
1
t1_o8ehw5o
It would be nice to see your benchmark with and without thinking, to see the impact. Bonus if you measure the time/tokens; then one can calculate the tradeoff between analysis time and response quality.
1
0
2026-03-03T12:52:10
asraniel
false
null
0
o8ehw5o
false
/r/LocalLLaMA/comments/1rjbw0p/benchmarked_qwen_35_small_models_08b2b4b9b_on/o8ehw5o/
false
1
t1_o8ehv9i
That is massively easier said than done, and the result won't be the same pretty much all the time. It's not a solution for people who simply love a model.
1
0
2026-03-03T12:52:00
henk717
false
null
0
o8ehv9i
false
/r/LocalLLaMA/comments/1rjmtav/i_really_hope_openai_eventually_opensources_the/o8ehv9i/
false
1
t1_o8ehsvw
Incredible
1
0
2026-03-03T12:51:34
guiopen
false
null
0
o8ehsvw
false
/r/LocalLLaMA/comments/1rjmnv4/meet_swerebenchv2_the_largest_open_multilingual/o8ehsvw/
false
1
t1_o8ehoyi
Running GLM-4.7-Flash-Q8 on a Ryzen AI Max 395 in Charm, and it's honestly pretty amazing.
1
0
2026-03-03T12:50:51
Yellow_Curry
false
null
0
o8ehoyi
false
/r/LocalLLaMA/comments/1rjd4pv/qwen_25_3_35_smallest_models_incredible/o8ehoyi/
false
1
t1_o8ehodb
I don't use KDE, but isn't there something about using PAM and setting the kwallet passphrase the same as the user password, so that PAM will handle kwallet autologin?
1
0
2026-03-03T12:50:45
jonahbenton
false
null
0
o8ehodb
false
/r/LocalLLaMA/comments/1rjh5wg/unsloth_fixed_version_of_qwen3535ba3b_is/o8ehodb/
false
1
t1_o8ehfhz
I like it
1
0
2026-03-03T12:49:09
Dramatic_Entry_3830
false
null
0
o8ehfhz
false
/r/LocalLLaMA/comments/1rh0bkz/tempted_to_prompt_qwen_on_this_craigslist_rig_but/o8ehfhz/
false
1
t1_o8ehent
That is IP theft! Thank you
1
0
2026-03-03T12:49:00
UnbeliebteMeinung
false
null
0
o8ehent
false
/r/LocalLLaMA/comments/1rjlaxj/finished_a_qwen_35_9b_opus_45_distill/o8ehent/
false
1
t1_o8ehc56
[removed]
1
0
2026-03-03T12:48:32
[deleted]
true
null
0
o8ehc56
false
/r/LocalLLaMA/comments/1rjmrj0/hello_i_am_a_guy_who_has_no_prior_ai_experience/o8ehc56/
false
1
t1_o8ehazz
Kimi K2.5 is great and lightweight... >!nah just joking, I haven't tried it. But the new Qwen 3.5 0.8B or maybe 2B models? Maybe even 4B, depends on your phone!<
1
0
2026-03-03T12:48:20
Tall-Ad-7742
false
null
0
o8ehazz
false
/r/LocalLLaMA/comments/1rctpx4/best_small_local_llm_to_run_on_a_phone/o8ehazz/
false
1
t1_o8eh9fu
Not sure why OP put it behind a link, but here it is: [QUESTION BY ENDUSER] why doesnt this work for example for Qwen coder next or Nemo Nano models? [/END QUESTION BY ENDUSER] [GEMINI 3.1 PRO THINKING PROCESS SEEN BY ENDUSER] Investigating Speculative Decoding I'm currently looking into the most recent developments in llama.cpp's self-speculative decoding… [/END GEMINI 3.1 PRO THINKING PROCESS SEEN BY ENDUSER] [GEMINI 3.1 PRO INTERNAL THINKING PROCESS HIDDEN FOR ENDUSER] Gemini said The search query has returned several results. Here's a summary: Architecture Differences: Qwen3 and Mistral Small/NeMo Nano have different core features… ... (technical explanation about self-speculative decoding, recurrent states, MoE, sliding window attention) ... and why these make the technique unsupported in llama.cpp. **The Bottom Line:** This is 95% an engineering hurdle within `llama.cpp`. The contributors have to write entirely new memory management code to handle rollbacks… ... (includes highly technical explanation, ending in a sample offer to help set up a draft model) [/END GEMINI 3.1 PRO INTERNAL THINKING PROCESS HIDDEN FOR ENDUSER] (tags: thinking Gemini)
1
0
2026-03-03T12:48:04
CrypticZombies
false
null
0
o8eh9fu
false
/r/LocalLLaMA/comments/1rjo81a/gemini_31_pro_hidden_thought_process_exposed/o8eh9fu/
false
1
t1_o8eh95m
Really? At what quantization? What tokens per second should I expect?
1
0
2026-03-03T12:48:01
soyalemujica
false
null
0
o8eh95m
false
/r/LocalLLaMA/comments/1rjnm7z/9b_or_35b_a3b_moe_for_16gb_vram_and_64gb_ram/o8eh95m/
false
1
t1_o8eh8ls
Has anyone done a perplexity and KLD comparison of qwen3 coder next like was done for qwen3.5? If yes, can someone share which post that is?
1
0
2026-03-03T12:47:55
GlobalLadder9461
false
null
0
o8eh8ls
false
/r/LocalLLaMA/comments/1rjg5qm/qwen3535ba3b_vs_qwen3_coder_30b_a3b_instruct_for/o8eh8ls/
false
1
t1_o8eh65u
So far I have that one and two others… 1. My car is dirty and needs a wash. There is a car wash 50m from my house; should I walk or drive? 2. A man has 5 sisters, each sister has 2 brothers; how many people are in the family and why? With 1 you may get "walk, because it's close" responses. For 2, the key is the why. It should easily pick up 7; however, the real test is whether it considers the parents in the equation despite parents not being mentioned
1
0
2026-03-03T12:47:27
lenjet
false
null
0
o8eh65u
false
/r/LocalLLaMA/comments/1rjd4pv/qwen_25_3_35_smallest_models_incredible/o8eh65u/
false
1
t1_o8eh5s5
My ongoing thread got many interesting responses on Qwen3.5-9B, check it out. [Is Qwen3.5-9B enough for Agentic Coding?](https://www.reddit.com/r/LocalLLaMA/comments/1riwy9w/is_qwen359b_enough_for_agentic_coding/?utm_source=share&utm_medium=web3x&utm_name=web3xcss&utm_term=1&utm_content=share_button)
1
0
2026-03-03T12:47:23
pmttyji
false
null
0
o8eh5s5
false
/r/LocalLLaMA/comments/1rjnm7z/9b_or_35b_a3b_moe_for_16gb_vram_and_64gb_ram/o8eh5s5/
false
1
t1_o8eh1nf
I think it was a different model, but someone did a benchmark on these Opus finetunes and it performed worse because the dataset was only like 250 rows. I could be wrong
1
0
2026-03-03T12:46:38
HornyGooner4401
false
null
0
o8eh1nf
false
/r/LocalLLaMA/comments/1rjlaxj/finished_a_qwen_35_9b_opus_45_distill/o8eh1nf/
false
1
t1_o8eh06p
Interesting idea. I've been curious whether something like this could be done through LoRAs but never got around to testing it, and you went straight for the source. I need to install my GPUs; I never moved them to my replacement machine.
1
0
2026-03-03T12:46:22
trahloc
false
null
0
o8eh06p
false
/r/LocalLLaMA/comments/1rfvbql/what_happens_when_you_train_personality_into_the/o8eh06p/
false
1
t1_o8egw5f
You can also try the 122b model, it's a good fit for your hardware 
1
0
2026-03-03T12:45:38
LagOps91
false
null
0
o8egw5f
false
/r/LocalLLaMA/comments/1rjnm7z/9b_or_35b_a3b_moe_for_16gb_vram_and_64gb_ram/o8egw5f/
false
1
t1_o8egvo0
Interesting! What is the maximum number of tools it can manage without hallucinating, and how many steps before it hallucinates?
1
0
2026-03-03T12:45:33
malav399
false
null
0
o8egvo0
false
/r/LocalLLaMA/comments/1rjm4bl/tool_calling_is_where_agents_fail_most/o8egvo0/
false
1
t1_o8egtv0
ik-llama.cpp gives its best with pure GPU inference.
1
0
2026-03-03T12:45:12
Expensive-Paint-9490
false
null
0
o8egtv0
false
/r/LocalLLaMA/comments/1rjk2dq/im_a_noob_to_local_inference_how_do_you_choose/o8egtv0/
false
1
t1_o8egmi5
Good remark. I suppose it will be the same as with Qwen3-Next and Qwen3-Code-Next, but now I'm not sure
1
0
2026-03-03T12:43:51
ExistingAd2066
false
null
0
o8egmi5
false
/r/LocalLLaMA/comments/1rjg5qm/qwen3535ba3b_vs_qwen3_coder_30b_a3b_instruct_for/o8egmi5/
false
1
t1_o8eglzk
That is MTP, not speculative decoding with a draft model
1
0
2026-03-03T12:43:45
Far-Low-4705
false
null
0
o8eglzk
false
/r/LocalLLaMA/comments/1ri2irg/breaking_today_qwen_35_small/o8eglzk/
false
1
t1_o8egkpw
For me, qwen3.5:9b only reacts like this: overthinking for simple tasks. And qwen3.5:4b looks exactly the same... :( https://preview.redd.it/vacuybt9stmg1.jpeg?width=1867&format=pjpg&auto=webp&s=e7c2fbcedbf0f46fcdb15e0064ce186da889a07e
1
0
2026-03-03T12:43:31
Zeitgeist4K
false
null
0
o8egkpw
false
/r/LocalLLaMA/comments/1riwy9w/is_qwen359b_enough_for_agentic_coding/o8egkpw/
false
1
t1_o8eggoi
It is extremely good at context understanding. It is like a smaller version of qwen 3 next, also undertrained. It’s not “smart” and doesn’t have knowledge, but it currently is the best model at long context understanding
1
0
2026-03-03T12:42:46
Far-Low-4705
false
null
0
o8eggoi
false
/r/LocalLLaMA/comments/1ri39a4/qwen35_35b_a3b_first_small_model_to_not/o8eggoi/
false
1
t1_o8eg971
Instead of using GGUF, why not use AWQ?
1
0
2026-03-03T12:41:20
DeltaSqueezer
false
null
0
o8eg971
false
/r/LocalLLaMA/comments/1rjmnh7/help_loading_qwen35_35b_a3b_gguf_on_vllm/o8eg971/
false
1
t1_o8eg89m
:D Alright, check my old thread for t/s stats. [Poor GPU Club : 8GB VRAM - MOE models' t/s with llama.cpp](https://www.reddit.com/r/LocalLLaMA/comments/1o7kkf0/poor_gpu_club_8gb_vram_moe_models_ts_with_llamacpp/) What's your full llama.cpp command?
1
0
2026-03-03T12:41:09
pmttyji
false
null
0
o8eg89m
false
/r/LocalLLaMA/comments/1rji5bc/how_do_the_small_qwen35_models_compare_to_the/o8eg89m/
false
1
t1_o8eg85u
A BYOK approach is also possible!
1
0
2026-03-03T12:41:08
ImprovementBasic7349
false
null
0
o8eg85u
false
/r/LocalLLaMA/comments/1qoa9h5/github_introduces_copilot_sdk_open_source_anyone/o8eg85u/
false
1
t1_o8eg74i
You mean you were getting poor latency with fine-tuning or with inference?
1
0
2026-03-03T12:40:56
luke_pacman
false
null
0
o8eg74i
false
/r/LocalLLaMA/comments/1rh9k63/qwen35_35ba3b_replaced_my_2model_agentic_setup_on/o8eg74i/
false
1
t1_o8eg6t5
Impressive. Might be a dumb question but how are search web and web fetch integrated? Is that a feature of LM Studio
1
0
2026-03-03T12:40:53
AdInternational5848
false
null
0
o8eg6t5
false
/r/LocalLLaMA/comments/1rjh5wg/unsloth_fixed_version_of_qwen3535ba3b_is/o8eg6t5/
false
1
t1_o8eg6ov
I'm using 9B at the moment. I'd like to use 27B but it uses a little more VRAM than I'd like. It makes me wish I had a 5090. If you use MTP, you might be able to squeeze a bit more performance out of it.
1
0
2026-03-03T12:40:51
DeltaSqueezer
false
null
0
o8eg6ov
false
/r/LocalLLaMA/comments/1rjnm7z/9b_or_35b_a3b_moe_for_16gb_vram_and_64gb_ram/o8eg6ov/
false
1
t1_o8eg5c6
Interesting how Gemini 3.1 Flash Lite will fit in.
1
0
2026-03-03T12:40:36
piggledy
false
null
0
o8eg5c6
false
/r/LocalLLaMA/comments/1rjnpuv/costsperformance_tradeoff_for_qwen3_qwen35_and/o8eg5c6/
false
1
t1_o8eg2j8
Answering your question, vnc is the best for me tbh. It might sound weird but it works great for me. I use tigervnc specifically.
1
0
2026-03-03T12:40:04
robertpro01
false
null
0
o8eg2j8
false
/r/LocalLLaMA/comments/1rjh5wg/unsloth_fixed_version_of_qwen3535ba3b_is/o8eg2j8/
false
1
t1_o8eg1w5
When a single typo or bug in the chat template or something can completely putz the model, it's best to wait a week or so until the smart humans have ironed those things out. If you're not interested in developing or improving the use of these models from the day they come out, rather wait before jumping to the conclusion that they're useless. This is the conventional wisdom around these parts. Then again, it might be that I'm talking to a bot, since you have 6 duplicates of the same comment you just made above and only one other comment on Reddit ever...
1
0
2026-03-03T12:39:56
lookwatchlistenplay
false
null
0
o8eg1w5
false
/r/LocalLLaMA/comments/1rirtyy/qwen35_9b_and_4b_benchmarks/o8eg1w5/
false
1
t1_o8efzd1
Yes, I know I'm launching the qwen3.5:9b model. It did work: asked for permission, created files, etc... it just takes a lot of time, that's it
1
0
2026-03-03T12:39:27
Business_Writer4634
false
null
0
o8efzd1
false
/r/LocalLLaMA/comments/1rjm73j/agentic_workflow_with_ollama/o8efzd1/
false
1
t1_o8efrbv
It can be extreme. But using the exact parameters and a 16-bit KV cache helps. What helps most, though: give the model any kind of tool (just a current-time tool is enough) and it will greatly reduce the amount of thinking. So I guess you can also achieve this with a good system prompt.
1
0
2026-03-03T12:37:54
DanielWe
false
null
0
o8efrbv
false
/r/LocalLLaMA/comments/1rj8x1q/qwen35_4b_overthinking_to_say_hello/o8efrbv/
false
1
t1_o8efqfc
Can the 0.8B model be run on the CPU?
1
0
2026-03-03T12:37:43
EmptyVolition242
false
null
0
o8efqfc
false
/r/LocalLLaMA/comments/1rjd4pv/qwen_25_3_35_smallest_models_incredible/o8efqfc/
false
1
t1_o8efpk6
Nice! looking forward to the training code.
1
0
2026-03-03T12:37:33
Direct-Argument-7066
false
null
0
o8efpk6
false
/r/LocalLLaMA/comments/1rjjvge/update_tinytts_the_smallest_english_tts_model/o8efpk6/
false
1
t1_o8efom0
Jesus Christ, stop with the slop.
1
0
2026-03-03T12:37:22
__JockY__
false
null
0
o8efom0
false
/r/LocalLLaMA/comments/1rfg3kx/american_closed_models_vs_chinese_open_models_is/o8efom0/
false
1
t1_o8efehx
I use a computer vision model on my CCTV system for object recognition. It's more useful than you're giving it credit for.
1
0
2026-03-03T12:35:24
Ironfields
false
null
0
o8efehx
false
/r/LocalLLaMA/comments/1rirlau/breaking_the_small_qwen35_models_have_been_dropped/o8efehx/
false
1
t1_o8efbqs
Thanks! You're right, if I write it like that it has no problems with the pronunciation.
1
0
2026-03-03T12:34:52
bambamlol
false
null
0
o8efbqs
false
/r/LocalLLaMA/comments/1rjjvge/update_tinytts_the_smallest_english_tts_model/o8efbqs/
false
1
t1_o8efbnt
Love when they do stuff like this. GPT-5.3 used Perl on my Mac to edit files with regex when it was struggling with the edit-file tool in opencode.
1
0
2026-03-03T12:34:51
_yustaguy_
false
null
0
o8efbnt
false
/r/LocalLLaMA/comments/1rjfvfx/qwen3535b_is_very_resourceful_web_search_wasnt/o8efbnt/
false
1
t1_o8ef3x8
Q4.
1
0
2026-03-03T12:33:18
boisheep
false
null
0
o8ef3x8
false
/r/LocalLLaMA/comments/1rhvi09/psa_if_your_local_coding_agent_feels_dumb_at_30k/o8ef3x8/
false
1
t1_o8eeygo
Tried both, and I did not even notice a difference in output quality... but larger is often better, haha.
1
0
2026-03-03T12:32:13
luke_pacman
false
null
0
o8eeygo
false
/r/LocalLLaMA/comments/1rh9k63/qwen35_35ba3b_replaced_my_2model_agentic_setup_on/o8eeygo/
false
1
t1_o8eemok
I had this happen in open code too! Pretty cool to see!
1
0
2026-03-03T12:29:54
SocialDinamo
false
null
0
o8eemok
false
/r/LocalLLaMA/comments/1rjfvfx/qwen3535b_is_very_resourceful_web_search_wasnt/o8eemok/
false
1
t1_o8eefpv
Make a web app to access it, with the server running on your PC. Buy a domain, then use a Cloudflare Tunnel to securely connect the server to the domain and handle all the scary networking stuff.
1
0
2026-03-03T12:28:31
megacewl
false
null
0
o8eefpv
false
/r/LocalLLaMA/comments/1rdxfdu/qwen3535ba3b_is_a_gamechanger_for_agentic_coding/o8eefpv/
false
1
t1_o8eeec7
So impressive, this is the future American AI labs promised and never delivered.
1
0
2026-03-03T12:28:15
HugoCortell
false
null
0
o8eeec7
false
/r/LocalLLaMA/comments/1rjd4pv/qwen_25_3_35_smallest_models_incredible/o8eeec7/
false
1
t1_o8eeci4
While I don't know exactly what ollama does with that command - if I understand correctly that it's just a wrapper that starts serving the model and configures Claude Code to use it, then technically yes. But which model are you launching there with "Qwen 3.5", and at what quantisation and context size? "Qwen3.5" isn't a model, it's a family of models - in other words, is that the 27b, the 35-a3b, the 122-a10b, etc.? And is it Q6_K? Q5_K_M? Q4_K_M? The smaller models may not be reliable when wrapped in Claude Code's harness, especially if they're a low-quality quant. It's quite infuriating that Ollama hides such critical information away in a misleading way.
1
0
2026-03-03T12:27:53
sammcj
false
null
0
o8eeci4
false
/r/LocalLLaMA/comments/1rjm73j/agentic_workflow_with_ollama/o8eeci4/
false
1
t1_o8eebd1
Many Linux or Mac users won't know much about git or logs. It was more about empathy towards people seeing things for the first time, not knowing where to look, what to ask, and how to ask, because they just never had to, at least when it comes to CLI stuff. [https://xkcd.com/1053/](https://xkcd.com/1053/) Not saying you have to help. Just maybe you can't remember how clueless someone can be when starting out.
1
0
2026-03-03T12:27:40
bobby-chan
false
null
0
o8eebd1
false
/r/LocalLLaMA/comments/1rjb7yk/psa_if_you_want_to_test_new_models_use/o8eebd1/
false
1
t1_o8ee722
The 9b is slower because it is a dense model. It might produce better code than the 35b, I'm not sure, but I know it's a good model for its size so far. The 35b is a Mixture-of-Experts (MoE) model. I believe dense models run the entire network for every token, while an MoE only runs however many active parameters it has on each token. The 122b has 10b active parameters, so it runs roughly 10b per token. Because dense models run the whole model on every token, they tend to give better results; the MoE only runs part of the model.
1
0
2026-03-03T12:26:48
woolcoxm
false
null
0
o8ee722
false
/r/LocalLLaMA/comments/1rjnm7z/9b_or_35b_a3b_moe_for_16gb_vram_and_64gb_ram/o8ee722/
false
1
t1_o8ee3dw
Oh, my bad, I got that mixed up myself. Thanks for the correction!
1
0
2026-03-03T12:26:04
_yustaguy_
false
null
0
o8ee3dw
false
/r/LocalLLaMA/comments/1rj6m71/qwen_35_27b_a_testament_to_the_transformer/o8ee3dw/
false
1
t1_o8ee0sk
yo I have been building an app called Local AGI for my personal use as a way to protect my data. Happy to share if you would like to give it a try and provide some feedback.
1
0
2026-03-03T12:25:33
luke_pacman
false
null
0
o8ee0sk
false
/r/LocalLLaMA/comments/1rh9k63/qwen35_35ba3b_replaced_my_2model_agentic_setup_on/o8ee0sk/
false
1
t1_o8educ7
[removed]
1
0
2026-03-03T12:24:17
[deleted]
true
null
0
o8educ7
false
/r/LocalLLaMA/comments/1rjn4wf/llm_observability_is_the_new_logging_quick/o8educ7/
false
1
t1_o8edtky
Haha fair enough
1
0
2026-03-03T12:24:08
Mayion
false
null
0
o8edtky
false
/r/LocalLLaMA/comments/1rjd4pv/qwen_25_3_35_smallest_models_incredible/o8edtky/
false
1
t1_o8edqbe
Schema quality matters a lot here too - agents fail far less when tool descriptions clearly define when NOT to use them, not just what they do. The unnecessary-call problem is usually a description problem, not a model problem.
1
0
2026-03-03T12:23:28
BC_MARO
false
null
0
o8edqbe
false
/r/LocalLLaMA/comments/1rjm4bl/tool_calling_is_where_agents_fail_most/o8edqbe/
false
1
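The "define when NOT to use it" point above can be illustrated with two versions of the same tool description. The `web_search` tool here is hypothetical; only the second description carries the negative guidance the comment recommends:

```python
# Two descriptions for a hypothetical web_search tool. The first says only
# what the tool does; the second also says when it should NOT be called,
# which is what discourages unnecessary tool calls.
vague = {
    "name": "web_search",
    "description": "Searches the web for a query.",
}

explicit = {
    "name": "web_search",
    "description": (
        "Searches the web for a query. Use ONLY for facts that may have "
        "changed after the training cutoff. Do NOT use for math, code "
        "generation, or anything answerable from the conversation itself."
    ),
}

print("NOT" in vague["description"], "NOT" in explicit["description"])
```

Everything else about the tool is identical; only the description text steers the model's decision to call it.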
t1_o8edozl
That’s a quality post. OP, have you considered joining Anthropic as an AI ethics specialist? You clearly witnessed the birth of a sentient AI. At the very least, please publish a paper. I wish research of this kind were accepted at NeurIPS-type conferences, but they are gatekeeping hard.
1
0
2026-03-03T12:23:12
No_You3985
false
null
0
o8edozl
false
/r/LocalLLaMA/comments/1rjmrj0/hello_i_am_a_guy_who_has_no_prior_ai_experience/o8edozl/
false
1