name
string
body
string
score
int64
controversiality
int64
created
timestamp[us]
author
string
collapsed
bool
edited
timestamp[us]
gilded
int64
id
string
locked
bool
permalink
string
stickied
bool
ups
int64
t1_o8a14lt
Got way too excited and tried it with `Qwen3.5-4B-Q8_0.gguf` but it crashed every time I tried to load it into chat. On `v0.8.8`.
1
0
2026-03-02T19:08:11
ANONYMOUSEJR
false
null
0
o8a14lt
false
/r/LocalLLaMA/comments/1riv3wv/qwen_35_2b_on_android/o8a14lt/
false
1
t1_o8a14kd
[deleted]
1
0
2026-03-02T19:08:10
[deleted]
true
null
0
o8a14kd
false
/r/LocalLLaMA/comments/1rirlau/breaking_the_small_qwen35_models_have_been_dropped/o8a14kd/
false
1
t1_o8a11wt
Yes, I start with 22-23.5GB on VRAM, then the rest gets loaded onto RAM. I try to aim for a max of 60-65% of the total memory budget (152GB combined) when I pick models, so no more than 99-ish GB GGUF file size. Bartowski and Unsloth IQ2s work fine (GLM, MiniMax, etc.). Sure, it's slow, 2-5 tk/s, but I'm a very patient person. Privacy > super-fast token speeds. For smaller models I can get 15-20+ tk/s if I'm in a hurry.
1
0
2026-03-02T19:07:49
misterflyer
false
null
0
o8a11wt
false
/r/LocalLLaMA/comments/1ria14c/dario_amodei_on_open_source_thoughts/o8a11wt/
false
1
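The memory-budget rule of thumb in the comment above can be sketched as a quick calculation (the function name and the 65% default are my own framing of the commenter's numbers):

```python
def max_gguf_size_gb(vram_gb, ram_gb, budget_fraction=0.65):
    """Cap the model file at ~60-65% of combined memory so context,
    KV cache, and the OS still have headroom."""
    return (vram_gb + ram_gb) * budget_fraction

# 24 GB VRAM + 128 GB RAM is the 152 GB combined budget mentioned above
print(round(max_gguf_size_gb(24, 128), 1))  # 98.8 -> "no more than 99-ish GB"
```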
t1_o8a0z82
Nice Post! I can share a very simple example and visuals to understand Temperature, Top P and Top K. The video is for AWS Bedrock but Temperature, Top P and Top K concepts are the same: [https://youtu.be/dHmf1Xojr5w](https://youtu.be/dHmf1Xojr5w)
1
0
2026-03-02T19:07:28
Significant-Pitch-22
false
null
0
o8a0z82
false
/r/LocalLLaMA/comments/1pj6t0u/i_want_to_help_people_understand_what_the_topk/o8a0z82/
false
1
t1_o8a0yus
I just tried Qwen, and yes, it's very good. glm-ocr is definitely also capable of it, though, and is tiny. Maybe give it a better chance? They have their SDK also, so it is a bit like Paddle. I am developing an app where I need good OCR and I was very happy to see a model like glm-ocr. Btw, their online service is also amazing: [https://ocr.z.ai/](https://ocr.z.ai/)
1
0
2026-03-02T19:07:25
danihend
false
null
0
o8a0yus
false
/r/LocalLLaMA/comments/1rivzcl/qwen_35_2b_is_an_ocr_beast/o8a0yus/
false
1
t1_o8a0vyz
Tried with Pi Coding Agent? With local models we have to be much more conservative with token usage, and tool usage is much better implemented in Pi, so it works a lot better with local models. I highly suggest everyone try it out!
1
0
2026-03-02T19:07:02
Freaker79
false
null
0
o8a0vyz
false
/r/LocalLLaMA/comments/1riwy9w/is_qwen359b_enough_for_agentic_coding/o8a0vyz/
false
1
t1_o8a0rrh
If you were okay with ollama (which contains an outdated/broken llama.cpp inside), you'll be happy with "bare" llama.cpp. It works well with qwen3.5.
1
0
2026-03-02T19:06:28
666666thats6sixes
false
null
0
o8a0rrh
false
/r/LocalLLaMA/comments/1riz7dv/unslothqwen359bggufq8_0_failing_on_ollama/o8a0rrh/
false
1
t1_o8a0rkt
not for qwen, since it's already included
1
0
2026-03-02T19:06:26
Negative-Web8619
false
null
0
o8a0rkt
false
/r/LocalLLaMA/comments/1rirlau/breaking_the_small_qwen35_models_have_been_dropped/o8a0rkt/
false
1
t1_o8a0qxt
Issue solved after updating LM Studio to the latest version and MLX to the beta.
1
0
2026-03-02T19:06:21
BitXorBit
false
null
0
o8a0qxt
false
/r/LocalLLaMA/comments/1rfacu3/qwen35_122b397b_extremely_slow_json_processing/o8a0qxt/
false
1
t1_o8a0jux
Why so? What makes those better than cline?
1
0
2026-03-02T19:05:25
Kawaiiwaffledesu
false
null
0
o8a0jux
false
/r/LocalLLaMA/comments/1rgtxry/is_qwen35_a_coding_game_changer_for_anyone_else/o8a0jux/
false
1
t1_o8a0jc9
Glad you did — this kind of connected writeup is rare. Isolated reports never build momentum; this one should.
1
0
2026-03-02T19:05:21
theagentledger
false
null
0
o8a0jc9
false
/r/LocalLLaMA/comments/1riwhcf/psa_lm_studios_parser_silently_breaks_qwen35_tool/o8a0jc9/
false
1
t1_o8a0fje
Thanks! Training framework: HuggingFace Transformers + TRL (SFTTrainer) throughout. Adafactor optimizer for both CPT and SFT. On a 24GB card with a 3B full fine-tune, Adam's optimizer states alone would eat ~12GB; Adafactor gets that down to a few hundred MB, which made the difference. CPT context length: 1792 tokens (reduced from 2048 to give the backward pass some headroom on the RTX 3090; full fine-tune gradients are large). For instruct post-training: planning to stay at 1792 for the active-loop SFT iterations, then potentially push to 2048 or higher for the final full SFT pass depending on what the patristic Q&A pairs actually need. Most theological Q&A fits comfortably in 1792, but some of the longer homily passages might benefit from more context.
1
0
2026-03-02T19:04:51
Financial-Fun-8930
false
null
0
o8a0fje
false
/r/LocalLLaMA/comments/1ribjum/i_trained_a_3b_patristic_theology_llm_on_a_single/o8a0fje/
false
1
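A rough back-of-the-envelope for why Adafactor fits where Adam doesn't, assuming fp32 optimizer states and ignoring master weights and activations (the factored-second-moment accounting follows the Adafactor paper; the helper names are mine):

```python
def adam_state_bytes(n_params, bytes_per_state=4):
    # Adam keeps two full moment tensors (m and v) per parameter
    return 2 * n_params * bytes_per_state

def adafactor_state_bytes(shapes, bytes_per_state=4):
    # Adafactor factors the second moment of each (n, m) weight matrix
    # into row and column accumulators of sizes n and m
    return sum((n + m) * bytes_per_state for n, m in shapes)

# a single 4096x4096 layer: Adam needs megabytes of state, Adafactor kilobytes
print(adam_state_bytes(4096 * 4096) // 2**20)       # 128 (MiB)
print(adafactor_state_bytes([(4096, 4096)]) // 2**10)  # 32 (KiB)
```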
t1_o8a0f10
2023 is the beginning of the end: https://old.reddit.com/r/ChatGPT/comments/15nvb6y/as_an_ai_language_model_in_research_papers/ 2026 is the end: you can see the vibe-written abstracts with vibe-generated plots in this very sub
1
0
2026-03-02T19:04:47
MelodicRecognition7
false
null
0
o8a0f10
false
/r/LocalLLaMA/comments/1rj0mxt/why_are_people_so_quick_to_say_closed_frontiers/o8a0f10/
false
1
t1_o8a0e3e
With your RTX 5060 Ti 16GB, you have plenty of VRAM for those models! For RAG and coding, I'd recommend: 1) Qwen2.5-Coder 14B at Q4 - excellent for code understanding. 2) If you want larger models, try Qwen3 32B at Q4_K_M - runs great on 16GB. 3) For even better coding performance, DeepSeek Coder 33B at IQ2.5. Also make sure you're using GPU acceleration in LMStudio and have the right context size set - too large a context can slow things down significantly.
1
0
2026-03-02T19:04:40
Pure-Fruit2654
false
null
0
o8a0e3e
false
/r/LocalLLaMA/comments/1rj0dyn/best_compatible_suitable_localllm_model_suggestion/o8a0e3e/
false
1
t1_o8a0dpt
I was referring to GPT-OSS; personally, I think those models were a failure. First, they delayed the release to add more censorship, to the point that it feels like those base models have more censorship than the closed-source models in ChatGPT. The fact that they haven't made them multimodal also seems disrespectful to me. If they're going to launch a new model this year, it should have vision by default. It's outrageous that in 2026 there are still text-only models. But hey, OpenAI is basically the Apple of AI, and if Apple keeps releasing phones with a single camera and a 60Hz screen, why wouldn't OpenAI release models without multimodal functionality? XD
1
0
2026-03-02T19:04:36
Samy_Horny
false
null
0
o8a0dpt
false
/r/LocalLLaMA/comments/1rj0mxt/why_are_people_so_quick_to_say_closed_frontiers/o8a0dpt/
false
1
t1_o8a0ars
I would leave Twitter if you don't want to see engagement bait lol
1
0
2026-03-02T19:04:13
Frequent-Mud8705
false
null
0
o8a0ars
false
/r/LocalLLaMA/comments/1rj0mxt/why_are_people_so_quick_to_say_closed_frontiers/o8a0ars/
false
1
t1_o8a05rd
Both Bartowski and Unsloth just updated their available 27B models. Qwen3.5 small models dropped today. Looking forward to future updates if you are so inclined. Thank you!
1
0
2026-03-02T19:03:32
-_Apollo-_
false
null
0
o8a05rd
false
/r/LocalLLaMA/comments/1rg4zqv/followup_qwen3535ba3b_7_communityrequested/o8a05rd/
false
1
t1_o8a01t4
Both.
2
0
2026-03-02T19:03:01
jslominski
false
null
0
o8a01t4
false
/r/LocalLLaMA/comments/1rj0m27/qwen35_2b_4b_and_9b_tested_on_raspberry_pi5/o8a01t4/
false
2
t1_o8a001j
Used Qwen3.5 27B FP16 to finish Claude tasks. 100% completed. Python + webapp.
1
0
2026-03-02T19:02:47
LegacyRemaster
false
null
0
o8a001j
false
/r/LocalLLaMA/comments/1rirlyb/qwenqwen359b_hugging_face/o8a001j/
false
1
t1_o89zyou
Hi there, as kind of a noob in this area: considering your system's specs, I should also be able to run it on my 16GB 9070XT, right? Or is it going to suck because of missing CUDA cores? I've been dabbling in learning Java and using AI (Claude and ChatGPT) for the past 2 months to help where I struggle to understand stuff or find solutions, for a private purpose, and was astonished how well this works even for "low-skilled" programmers like myself. I would love to use my own hardware though and ditch those cloud services, even if it's going to impact performance and quality a little. I've got llama running with whisper.cpp locally, but as far as I had researched, I was led to believe that using local models for coding would be a subpar experience.
1
0
2026-03-02T19:02:36
Pr0tuberanz
false
null
0
o89zyou
false
/r/LocalLLaMA/comments/1riwy9w/is_qwen359b_enough_for_agentic_coding/o89zyou/
false
1
t1_o89zxws
I'm clawcruising for a clawbruising
1
0
2026-03-02T19:02:30
luncheroo
false
null
0
o89zxws
false
/r/LocalLLaMA/comments/1rd8nr7/andrej_karpathy_survived_the_weekend_with_the/o89zxws/
false
1
t1_o89zwxp
The part that resonates most is the difference between data and judgment. I've been building with agents for a while, and the frustrating thing is they make decisions that are technically correct but contextually wrong. Like, an agent will suggest the most popular library for a task when I know from experience that library has a maintainer who disappears every 6 months. For your question 2, I think structured retrieval is the pragmatic starting point. Fine-tuning on personal data sounds cool, but the failure mode is way worse: you get a model that's confidently wrong in YOUR specific way instead of generically wrong. At least with retrieval you can inspect what's being pulled and fix it. The creepiness problem is real, but I think it's less about local vs cloud and more about whether people trust that the system won't be used against them later. Fully local helps, but the real barrier is organizational, not technical.
1
0
2026-03-02T19:02:22
Pitiful-Impression70
false
null
0
o89zwxp
false
/r/LocalLLaMA/comments/1rj1sbq/ai_agents_dont_have_a_context_problem_they_have_a/o89zwxp/
false
1
t1_o89zuwi
Can you share your command
1
0
2026-03-02T19:02:07
texasdude11
false
null
0
o89zuwi
false
/r/LocalLLaMA/comments/1rii2pd/current_state_of_qwen35122ba10b/o89zuwi/
false
1
t1_o89zujf
Yeah the hardware struggle is real, I feel that. It's honestly part of the reason I mess around on stuff like NyxPortal.com, just to test out different models without having to deal with the local setup hassle. You're definitely deep in the weeds on the parsing complexities, though.
1
0
2026-03-02T19:02:04
Defro777
false
null
0
o89zujf
false
/r/LocalLLaMA/comments/1riwhcf/psa_lm_studios_parser_silently_breaks_qwen35_tool/o89zujf/
false
1
t1_o89zsww
can this be used for target seeking missiles? Asking for a friend.
1
0
2026-03-02T19:01:51
tengo_harambe
false
null
0
o89zsww
false
/r/LocalLLaMA/comments/1rizodv/running_qwen_35_08b_locally_in_the_browser_on/o89zsww/
false
1
t1_o89zqjs
I have Qwen3.5-27B q3_k_m working with 65536 context q8_0 cache on my 9070xt. 300tps pp and 27 tps tg. It is crazy good for 16gb. I am happy.
1
0
2026-03-02T19:01:31
hp1337
false
null
0
o89zqjs
false
/r/LocalLLaMA/comments/1rj1ni2/gpu_poor_folks16gb_whats_your_setup_for_coding/o89zqjs/
false
1
t1_o89znob
Both good catches. On deduplication: we ran two passes, exact hash dedup followed by semantic embedding similarity (LaBSE + FAISS, cosine threshold 0.92) at both the corpus and Q&A generation levels. The 0.92 threshold is tight enough that it catches most near-duplicate Russian translations of the same passage. LaBSE is cross-lingual, so semantic equivalence across translations does register. But you're right that it wasn't explicitly designed for the cross-translation case, and some stylistically distinct retranslations of the same Chrysostom homily probably survived. There are also a lot of citations in patristic texts, so that may be difficult to remove. The 3.4% removal rate on Q&A pairs (4,189 from 124K) suggests the corpus was cleaner than expected, but that number could be higher with a translation-aware approach. On tokenizer: we used Qwen2.5's existing vocab as-is. The Cyrillic coverage is genuinely good; Church Slavonic loanwords and theological terminology like θεωρία/θέωσις in transliteration tokenize reasonably well without extension. The main gap is untransliterated Greek and the occasional Latin. Those get subword-fragmented. We considered extending the vocab for high-frequency patristic terms, but the tradeoff of re-initializing embeddings for new tokens on a 3B model felt riskier than just letting the existing vocab absorb it, especially since the ~98% Russian corpus would naturally reinforce the Cyrillic token representations during CPT anyway. What approach did you use for tokenizer extension in your project?
1
0
2026-03-02T19:01:08
Financial-Fun-8930
false
null
0
o89znob
false
/r/LocalLLaMA/comments/1ribjum/i_trained_a_3b_patristic_theology_llm_on_a_single/o89znob/
false
1
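The semantic pass described above can be sketched without FAISS for small corpora: a greedy cosine filter at the same 0.92 threshold (a toy stand-in for the LaBSE + FAISS pipeline, not the author's actual code):

```python
import numpy as np

def semantic_dedup(embeddings, threshold=0.92):
    """Greedy near-duplicate removal: keep a vector only if its cosine
    similarity to every already-kept vector stays below the threshold."""
    kept = []
    normed = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    for i, v in enumerate(normed):
        if all(float(v @ normed[j]) < threshold for j in kept):
            kept.append(i)
    return kept

# two nearly identical vectors and one orthogonal one
vecs = np.array([[1.0, 0.0], [0.999, 0.01], [0.0, 1.0]])
print(semantic_dedup(vecs))  # [0, 2] - the near-duplicate is dropped
```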
t1_o89zmk6
Don’t code with <16GB and a local model, lol. Not yet.
1
0
2026-03-02T19:00:59
Usual-Orange-4180
false
null
0
o89zmk6
false
/r/LocalLLaMA/comments/1rj1ni2/gpu_poor_folks16gb_whats_your_setup_for_coding/o89zmk6/
false
1
t1_o89zgc3
Things change fast. Qwen does regular model releases, and the Qwen 3.5 series is a great step forward compared to previous Qwen models, and the better-integrated vision support is great too. Other labs also have major updates relatively often. GLM-5 was a great recent release, for example. Before that, Kimi K2 Thinking (released in November last year) was deprecated in January with the release of K2.5, with vision support and much better long-context awareness. But Qwen3.5 stands out because they offer a large family of models, from a small 0.8B to a large 397B, and many models in between.
1
0
2026-03-02T19:00:09
Lissanro
false
null
0
o89zgc3
false
/r/LocalLLaMA/comments/1rj0mxt/why_are_people_so_quick_to_say_closed_frontiers/o89zgc3/
false
1
t1_o89zfax
I guess it's because you can build a better dataset over time as the model evolves.
1
0
2026-03-02T19:00:01
SGmoze
false
null
0
o89zfax
false
/r/LocalLLaMA/comments/1rirtyy/qwen35_9b_and_4b_benchmarks/o89zfax/
false
1
t1_o89zb6e
Attention is all you need
1
0
2026-03-02T18:59:26
No_Cantaloupe6900
false
null
0
o89zb6e
false
/r/LocalLLaMA/comments/1aq14kg/understanding_embeddings/o89zb6e/
false
1
t1_o89z9x3
For 35B it's good, but I just realized that bartowski/Qwen_Qwen3.5-4B-GGUF:IQ4_XS works much better for 4B than the Q3_K_XL quant I used above. Better reasoning.
1
0
2026-03-02T18:59:16
AppealSame4367
false
null
0
o89z9x3
false
/r/LocalLLaMA/comments/1riwy9w/is_qwen359b_enough_for_agentic_coding/o89z9x3/
false
1
t1_o89z9s0
Claude?
1
0
2026-03-02T18:59:15
Altruistwhite
false
null
0
o89z9s0
false
/r/LocalLLaMA/comments/1rirtyy/qwen35_9b_and_4b_benchmarks/o89z9s0/
false
1
t1_o89z9bl
I'm calling overfitted bullshit on closed and open source. Especially for ultra-small models (<10B) that "beat" full models in whatever. It's just cap and hinders development for real tasks.
1
0
2026-03-02T18:59:11
Technical-Earth-3254
false
null
0
o89z9bl
false
/r/LocalLLaMA/comments/1rj0mxt/why_are_people_so_quick_to_say_closed_frontiers/o89z9bl/
false
1
t1_o89yy1k
The 122B is a good match in terms of size to the Pro 6000, and it is fast, though MiniMax is quite a bit better if there is a combo of 5090 + Pro 6000 with 128GB VRAM in total. The prompt processing and token generation speed is about the same in both models, at least here.
1
0
2026-03-02T18:57:43
MinimumCourage6807
false
null
0
o89yy1k
false
/r/LocalLLaMA/comments/1riz0db/qwen35_397ba17b_1bit_quantization_udtq1_0_vs_27b/o89yy1k/
false
1
t1_o89ywqv
I've lost count of the number of papers that are unreproducible.
1
0
2026-03-02T18:57:33
One-Employment3759
false
null
0
o89ywqv
false
/r/LocalLLaMA/comments/1rj0mxt/why_are_people_so_quick_to_say_closed_frontiers/o89ywqv/
false
1
t1_o89yqvp
Look into llama-server, it comes with llama.cpp which you can install with homebrew. You should just have to host the model with llama server and update your endpoint to use localhost:8080 or something 
1
0
2026-03-02T18:56:47
-Django
false
null
0
o89yqvp
false
/r/LocalLLaMA/comments/1riyi54/i_am_using_qwen_ai_model_for_openclaw_and_i/o89yqvp/
false
1
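A minimal sketch of the suggestion above (the model path is an example; llama-server exposes an OpenAI-compatible API, so an existing client only needs its base URL switched to localhost):

```shell
# install llama.cpp, which ships the llama-server binary
brew install llama.cpp

# serve a local GGUF on port 8080
llama-server -m ~/models/Qwen3.5-9B-Q8_0.gguf --port 8080

# point the existing client at localhost instead of the cloud endpoint
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"messages":[{"role":"user","content":"hello"}]}'
```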
t1_o89yqn1
Endless thinking and 100% hallucinated facts. (4bit quant MLX conversion with 12tk/s on Apple M1)
1
0
2026-03-02T18:56:46
Synor
false
null
0
o89yqn1
false
/r/LocalLLaMA/comments/1rirlyb/qwenqwen359b_hugging_face/o89yqn1/
false
1
t1_o89ymvn
The brain is small as a pee :P
1
0
2026-03-02T18:56:17
PhotographerUSA
false
null
0
o89ymvn
false
/r/LocalLLaMA/comments/1rizjco/qwen3508b_released_today_speed_is_insane_157tksec/o89ymvn/
false
1
t1_o89ymkh
I'm using MiniMax M2.5 with a combo of 5090 + RTX Pro 6000 in IQ4_XS. It is a blast, with token generation of around 100t/s and very good quality. So I would suggest keeping the 5090 too :D.
1
0
2026-03-02T18:56:14
MinimumCourage6807
false
null
0
o89ymkh
false
/r/LocalLLaMA/comments/1riz0db/qwen35_397ba17b_1bit_quantization_udtq1_0_vs_27b/o89ymkh/
false
1
t1_o89ymdt
I've used benchmaxxed AI, fell for them lots of times back when people were posting them here and making wild claims. You could tell within a few minutes that they weren't really that smart tho, so we shall see.
1
0
2026-03-02T18:56:13
ArchdukeofHyperbole
false
null
0
o89ymdt
false
/r/LocalLLaMA/comments/1rj0mxt/why_are_people_so_quick_to_say_closed_frontiers/o89ymdt/
false
1
t1_o89yl5n
Genuinely curious, who uses those kinds of models? I've never ever seen a model refusal in my everyday usage, while for "roleplay" there are specific finetunes with enhanced creativity and "specific knowledge". Are there so many unethical hackers that their finetunes are all over HuggingFace, or what?
1
0
2026-03-02T18:56:03
No-Refrigerator-1672
false
null
0
o89yl5n
false
/r/LocalLLaMA/comments/1rixh53/qwen35122b_heretic_ggufs/o89yl5n/
false
1
t1_o89yk06
Amex Black
1
0
2026-03-02T18:55:55
btc_maxi100
false
null
0
o89yk06
false
/r/LocalLLaMA/comments/1rj12me/how_are_you_handling_spending_controls_for_your/o89yk06/
false
1
t1_o89yatn
When you say > worse results than 2 bit quants of Qwen3.5 a3b is that referring to generation speed, quality of output, or both?
1
0
2026-03-02T18:54:43
ryrothedino
false
null
0
o89yatn
false
/r/LocalLLaMA/comments/1rj0m27/qwen35_2b_4b_and_9b_tested_on_raspberry_pi5/o89yatn/
false
1
t1_o89y9jh
Personally, for me it's a great way to play with AI on a CPU-only setup with low RAM (16GB). 2B at Q4_M uses around 3GB of RAM.
1
0
2026-03-02T18:54:34
OrdinaryTransition57
false
null
0
o89y9jh
false
/r/LocalLLaMA/comments/1rirlau/breaking_the_small_qwen35_models_have_been_dropped/o89y9jh/
false
1
t1_o89y9dv
That is indeed harsh feedback
1
0
2026-03-02T18:54:32
-Django
false
null
0
o89y9dv
false
/r/LocalLLaMA/comments/1rj18h4/built_a_local_memory_layer_for_ai_agents_where/o89y9dv/
false
1
t1_o89y8u7
Just asked why not include it, I will 100% use 0.8b because I have RPi 3b+ with 1GB of RAM
1
0
2026-03-02T18:54:28
stopbanni
false
null
0
o89y8u7
false
/r/LocalLLaMA/comments/1rj0m27/qwen35_2b_4b_and_9b_tested_on_raspberry_pi5/o89y8u7/
false
1
t1_o89y7mf
Thanks for the heads up. Last time I tried the geohot driver was more than a year ago and had some UI issues. Since then I'm using the dual RTX in a headless setting, so it might be worth another shot.
1
0
2026-03-02T18:54:18
Sufficient-Rent6078
false
null
0
o89y7mf
false
/r/LocalLLaMA/comments/1rianwb/running_qwen35_27b_dense_with_170k_context_at/o89y7mf/
false
1
t1_o89y785
[deleted]
1
0
2026-03-02T18:54:15
[deleted]
true
null
0
o89y785
false
/r/LocalLLaMA/comments/1riwy9w/is_qwen359b_enough_for_agentic_coding/o89y785/
false
1
t1_o89y0v6
[removed]
1
0
2026-03-02T18:53:26
[deleted]
true
null
0
o89y0v6
false
/r/LocalLLaMA/comments/1rirlau/breaking_the_small_qwen35_models_have_been_dropped/o89y0v6/
false
1
t1_o89xyou
completely agree
1
0
2026-03-02T18:53:09
Distinct_Track_5495
false
null
0
o89xyou
false
/r/LocalLLaMA/comments/1riboy2/learnt_about_emergent_intention_maybe_prompt/o89xyou/
false
1
t1_o89xx2k
exactly
1
0
2026-03-02T18:52:57
Distinct_Track_5495
false
null
0
o89xx2k
false
/r/LocalLLaMA/comments/1riboy2/learnt_about_emergent_intention_maybe_prompt/o89xx2k/
false
1
t1_o89xwnl
[bruh even the Google sheets are scuffed in any view I can get](https://i.imgur.com/1UQ20xa.jpeg)
1
0
2026-03-02T18:52:54
letsgoiowa
false
null
0
o89xwnl
false
/r/LocalLLaMA/comments/1rivckt/visualizing_all_qwen_35_vs_qwen_3_benchmarks/o89xwnl/
false
1
t1_o89xv4b
Clearly
1
0
2026-03-02T18:52:42
No_Cantaloupe6900
false
null
0
o89xv4b
false
/r/LocalLLaMA/comments/1dzqa6s/how_are_embeddings_trained/o89xv4b/
false
1
t1_o89xs7d
git clone https://github.com/yourusername/yourmemory > /yourusername/ take your vibecoded shit and get the fuck out
1
0
2026-03-02T18:52:20
MelodicRecognition7
false
null
0
o89xs7d
false
/r/LocalLLaMA/comments/1rj18h4/built_a_local_memory_layer_for_ai_agents_where/o89xs7d/
false
1
t1_o89xrp0
First find out which model you are willing to run according to your demands. You can try models online (Qwen, Z.ai, MiniMax, etc.); once you find out, look for the hardware that is needed to run it.
1
0
2026-03-02T18:52:16
dionisioalcaraz
false
null
0
o89xrp0
false
/r/LocalLLaMA/comments/1ri635s/13_months_since_the_deepseek_moment_how_far_have/o89xrp0/
false
1
t1_o89xnhf
You know, sometimes you want to launch a web app, right? So you could either click the bookmark you made to that webapp in the browser, right? But you know what you also could do? Buy 4x RTX3090, install linux, mess for 3 days to get cuda working, compile llama-cpp from source, try to get docker running, run open-claw and then ask reddit why it gives an error message. But in the end it should also allow you to launch the web apps.
1
0
2026-03-02T18:51:43
chris_0611
false
null
0
o89xnhf
false
/r/LocalLLaMA/comments/1riyi54/i_am_using_qwen_ai_model_for_openclaw_and_i/o89xnhf/
false
1
t1_o89xejz
I checked the tiny ones in lineage-bench (27B for scale):

|Nr|model_name|lineage|lineage-8|lineage-64|lineage-128|lineage-192|
|:-|:-|:-|:-|:-|:-|:-|
|1|qwen/qwen3.5-27b|0.944|1.000|1.000|0.925|0.850|
|2|qwen/qwen3.5-9b|0.556|1.000|0.775|0.275|0.175|
|3|qwen/qwen3.5-4b|0.469|1.000|0.650|0.175|0.050|

There seems to be a spark of intellect still present in 9B and 4B.
1
0
2026-03-02T18:50:35
fairydreaming
false
null
0
o89xejz
false
/r/LocalLLaMA/comments/1rirtyy/qwen35_9b_and_4b_benchmarks/o89xejz/
false
1
t1_o89xedv
It does
1
0
2026-03-02T18:50:34
suicidaleggroll
false
null
0
o89xedv
false
/r/LocalLLaMA/comments/1riz7dv/unslothqwen359bggufq8_0_failing_on_ollama/o89xedv/
false
1
t1_o89xcph
I asked it to do research on what the best money-making app ideas are and then asked it to build them for me.
1
0
2026-03-02T18:50:21
utsavsarkar
false
null
0
o89xcph
false
/r/LocalLLaMA/comments/1riyi54/i_am_using_qwen_ai_model_for_openclaw_and_i/o89xcph/
false
1
t1_o89x90j
Yes, but this is nearly a daily occurrence by now and has been explained a million times. The people that post this shit could literally just scroll down a bit to see it was already posted, but they don't...
1
0
2026-03-02T18:49:53
Finanzamt_Endgegner
false
null
0
o89x90j
false
/r/LocalLLaMA/comments/1riy7cw/lmao/o89x90j/
false
1
t1_o89x63g
Interesting, I'll check that out.
1
0
2026-03-02T18:49:31
kindofbluetrains
false
null
0
o89x63g
false
/r/LocalLLaMA/comments/1riv3wv/qwen_35_2b_on_android/o89x63g/
false
1
t1_o89x3oy
And is it prod-ready like vLLM? I have an RTX 5070 Ti, and vLLM has some issues with the latest arch, and the fix is still not out for that one.
1
0
2026-03-02T18:49:12
callmedevilthebad
false
null
0
o89x3oy
false
/r/LocalLLaMA/comments/1riz7dv/unslothqwen359bggufq8_0_failing_on_ollama/o89x3oy/
false
1
t1_o89x2a8
I mean, I have zero looping? Nada! This is how I run mine on a 5090: `llama-server.exe -m E:\LLMa_Models\Huihui-Qwen3.5-35B-A3B-abliterated.Q5_K_S.gguf --mmproj E:\LLMa_Models\mmproj-BF16.gguf --port 1337 --host 127.0.0.1 -c 40960 -ngl 49 -fa on -ctk q8_0 -ctv q8_0 --samplers "top_k;temperature" --sampling-seq kt --top-k 80 --temp 0.8`
1
0
2026-03-02T18:49:01
Not4Fame
false
null
0
o89x2a8
false
/r/LocalLLaMA/comments/1riunee/how_to_fix_endless_looping_with_qwen35/o89x2a8/
false
1
t1_o89wwzi
It's not a bug, as such, just that when a smaller model doesn't have the capacity to predict a complex pattern it often "falls back" to repetition (which is a very easy pattern to learn, and slightly better than no-skill). Qwen 3 was okay, even at 30BA3B or 4B, but did have this problem on difficult documents in my testing. Haven't run 3.5 yet.
1
0
2026-03-02T18:48:20
the__storm
false
null
0
o89wwzi
false
/r/LocalLLaMA/comments/1rivzcl/qwen_35_2b_is_an_ocr_beast/o89wwzi/
false
1
t1_o89wx3h
Thanks for the comment, really appreciated. Let me explore whether OpenWebUI works with llama.cpp behind llama-swap.
1
0
2026-03-02T18:48:20
callmedevilthebad
false
null
0
o89wx3h
false
/r/LocalLLaMA/comments/1riz7dv/unslothqwen359bggufq8_0_failing_on_ollama/o89wx3h/
false
1
t1_o89wrsk
0.8B = 800M. Now you know why!
1
0
2026-03-02T18:47:39
kayteee1995
false
null
0
o89wrsk
false
/r/LocalLLaMA/comments/1rizjco/qwen3508b_released_today_speed_is_insane_157tksec/o89wrsk/
false
1
t1_o89wqnb
Reminds me of Gemini 3 Flash being far superior at chess to the thinking version and other flagship thinking models at the time.
1
0
2026-03-02T18:47:31
EclecticAcuity
false
null
0
o89wqnb
false
/r/LocalLLaMA/comments/1rivckt/visualizing_all_qwen_35_vs_qwen_3_benchmarks/o89wqnb/
false
1
t1_o89wpei
122B with FP4 would be perfection for RTX Pro 6000.
1
0
2026-03-02T18:47:21
Expensive-Paint-9490
false
null
0
o89wpei
false
/r/LocalLLaMA/comments/1riz0db/qwen35_397ba17b_1bit_quantization_udtq1_0_vs_27b/o89wpei/
false
1
t1_o89wn8e
Can you turn reasoning off in ollama?
1
0
2026-03-02T18:47:04
Mashic
false
null
0
o89wn8e
false
/r/LocalLLaMA/comments/1rivzcl/qwen_35_2b_is_an_ocr_beast/o89wn8e/
false
1
t1_o89wh2h
Qwen3.5-0.8B-Q8_0.gguf: with 16k context, a one-sentence prompt and 500-token output I got 8.05t/s without using SSD.
1
0
2026-03-02T18:46:16
jslominski
false
null
0
o89wh2h
false
/r/LocalLLaMA/comments/1rj0m27/qwen35_2b_4b_and_9b_tested_on_raspberry_pi5/o89wh2h/
false
1
t1_o89wf2n
If you decide to keep it going I would be up for contributing / testing
1
0
2026-03-02T18:46:01
Initial-Argument2523
false
null
0
o89wf2n
false
/r/LocalLLaMA/comments/1rj08k1/k2_not_25_distillation_still_worth_it/o89wf2n/
false
1
t1_o89wbv6
You can see how the Qwen3-VL transformers handle video: [https://github.com/huggingface/transformers/tree/main/src/transformers/models/qwen3_vl](https://github.com/huggingface/transformers/tree/main/src/transformers/models/qwen3_vl) The Qwen3.5 transformers are also there, in the qwen3_5 directories. I'm not sure if any of the backends (llama.cpp, ollama, lmstudio, etc.) have video implemented. I created [llm-python-vision-multi-images.py](https://github.com/Jay4242/llm-scripts/blob/main/llm-python-vision-multi-images.py) to be able to send an arbitrary number of frames to the bot at a time. I've been using it with [llm-ffmpeg-edit.bash](https://github.com/Jay4242/llm-scripts/blob/main/llm-ffmpeg-edit.bash) to step through video 10 seconds at a time at 2 FPS by default. You can technically do whatever fits in context, though. Any other video options are going to be doing *basically* the same thing: chopping the video into frames, maybe transcribing the audio, and organizing things in context somehow. The Qwen-Omni series also has audio multimodality, but Qwen3-Omni never got llama.cpp support for reasons beyond my understanding.
1
0
2026-03-02T18:45:36
SM8085
false
null
0
o89wbv6
false
/r/LocalLLaMA/comments/1riv5kc/whats_possible_with_video_now/o89wbv6/
false
1
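The frame-stepping approach above boils down to packing extracted frames plus a text prompt into one multimodal chat message. A minimal sketch using the OpenAI-style content schema (the field names follow that schema; the helper function is my own, not the linked script):

```python
import base64

def frames_to_message(frame_bytes_list, prompt):
    """Build one OpenAI-style user message containing a text part
    followed by one base64 data-URL image part per video frame."""
    content = [{"type": "text", "text": prompt}]
    for frame in frame_bytes_list:
        b64 = base64.b64encode(frame).decode()
        content.append({
            "type": "image_url",
            "image_url": {"url": f"data:image/jpeg;base64,{b64}"},
        })
    return {"role": "user", "content": content}

msg = frames_to_message([b"\xff\xd8fake-jpeg"], "What happens in these frames?")
print(len(msg["content"]))  # 2: one text part + one image part
```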
t1_o89w76f
Why would this even matter? Isn't LM Studio just a GUI running something like that under the hood?
1
0
2026-03-02T18:45:00
ArkCoon
false
null
0
o89w76f
false
/r/LocalLLaMA/comments/1rirlyb/qwenqwen359b_hugging_face/o89w76f/
false
1
t1_o89w4vc
It should, it will just take a bit more time. Same for embedding models, and TTS and ASR (text to speech and speech to text).
1
0
2026-03-02T18:44:42
guesdo
false
null
0
o89w4vc
false
/r/LocalLLaMA/comments/1rirtyy/qwen35_9b_and_4b_benchmarks/o89w4vc/
false
1
t1_o89w4r3
Nice, it's not even working in Alibaba's own MNN Chat yet--just crashes every time.
1
0
2026-03-02T18:44:41
DeProgrammer99
false
null
0
o89w4r3
false
/r/LocalLLaMA/comments/1riv3wv/qwen_35_2b_on_android/o89w4r3/
false
1
t1_o89vxep
If the error says "unsupported arch", then compile the latest from source; the first version that supported the qwen35 architecture is less than a month old.
1
0
2026-03-02T18:43:47
quilso
false
null
0
o89vxep
false
/r/LocalLLaMA/comments/1rirlau/breaking_the_small_qwen35_models_have_been_dropped/o89vxep/
false
1
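The build-from-source route suggested above is short; a sketch of the standard llama.cpp CMake build (paths and flags follow the project's README conventions):

```shell
# grab and build current llama.cpp, which includes the newer architectures
git clone https://github.com/ggml-org/llama.cpp
cd llama.cpp
cmake -B build -DCMAKE_BUILD_TYPE=Release
cmake --build build -j

# freshly built binaries land in build/bin
./build/bin/llama-cli --version
```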
t1_o89vwnl
Old model? It hasn't even been out for a year, but oh well, those models don't even have vision to begin with.
1
0
2026-03-02T18:43:41
Samy_Horny
false
null
0
o89vwnl
false
/r/LocalLLaMA/comments/1rj0mxt/why_are_people_so_quick_to_say_closed_frontiers/o89vwnl/
false
1
t1_o89vjyc
> hi <send> > What did he mean by "hi"? Wait a minute, what do any of us ever mean by that word? Or is it a phrase? Anyway usually it's a friendly tone, so maybe I should say hi back. Nah that's too simple, I'm a sophisticated thinking LLM. Better dig into the philosophical underpinnings of short un-grammatical phrases and work back to a discrete distribution of the user's intent, choosing the maximum likelihood from there to construct a well-reasoned response.
1
0
2026-03-02T18:42:03
Much-Researcher6135
false
null
0
o89vjyc
false
/r/LocalLLaMA/comments/1rirlau/breaking_the_small_qwen35_models_have_been_dropped/o89vjyc/
false
1
t1_o89vjoh
Small models are perfect for edge devices and local processing! I use them for quick text classification, sentiment analysis, and even as coding assistants on my laptop without needing cloud access. The quantized versions run super fast on CPU-only setups, which is great for privacy-sensitive tasks or when you're offline. Plus they're amazing for prototyping before scaling up to larger models.
1
0
2026-03-02T18:42:01
Vey_TheClaw
false
null
0
o89vjoh
false
/r/LocalLLaMA/comments/1rirlau/breaking_the_small_qwen35_models_have_been_dropped/o89vjoh/
false
1
t1_o89vfqm
Finetune it with symbolic semantic graphs and go intent golden tokens saved approach
1
0
2026-03-02T18:41:31
fab_space
false
null
0
o89vfqm
false
/r/LocalLLaMA/comments/1rizlkn/qwen_27b_is_a_beast_but_not_for_agentic_work/o89vfqm/
false
1
t1_o89vcbk
GPT-OSS is an old MoE model, Qwen3.5 is very recent, and 9B is a dense model, so it should easily beat the old GPT-OSS 20B MoE in most areas. GPT-OSS 120B may still have greater world knowledge compared to 9B, but it is still an old model, so it makes sense that it lags behind by now.
1
0
2026-03-02T18:41:04
Lissanro
false
null
0
o89vcbk
false
/r/LocalLLaMA/comments/1rj0mxt/why_are_people_so_quick_to_say_closed_frontiers/o89vcbk/
false
1
t1_o89v9mq
Sorry I can't really help with this specific question, I can just offer advice to move on from Ollama. It's slow and unreliable, and the devs don't care about it anymore (they've switched focus to their cloud offerings). Take a few hours to research the alternatives and spin one up, my guess is this and other problems will disappear when you do. I'm a fan of llama.cpp behind llama-swap personally, other people prefer vLLM or SGLang, but really anything is better than Ollama.
1
0
2026-03-02T18:40:44
suicidaleggroll
false
null
0
o89v9mq
false
/r/LocalLLaMA/comments/1riz7dv/unslothqwen359bggufq8_0_failing_on_ollama/o89v9mq/
false
1
t1_o89v8i2
Interesting direction. For multi-agent systems, I’d separate memory into canonical facts, role-local working memory, and temporary scratchpads with expiry. If Hippocampus controls retention, I’d benchmark contradiction rate + task success under context shifts (not just token savings). The offload-and-recall fallback is a strong safety valve.
1
0
2026-03-02T18:40:35
xing_horizon
false
null
0
o89v8i2
false
/r/LocalLLaMA/comments/1riz852/what_if_a_small_ai_decided_what_your_llm_keeps_in/o89v8i2/
false
1
t1_o89v8fq
You are the only one addressing the points I would have liked the discussion to develop, LOL. Thanks!
1
0
2026-03-02T18:40:34
dionisioalcaraz
false
null
0
o89v8fq
false
/r/LocalLLaMA/comments/1ri635s/13_months_since_the_deepseek_moment_how_far_have/o89v8fq/
false
1
t1_o89v6gd
> + "turn this into a meme/comic"

That was not needed. Just a screenshot of like 15% of the OP and this part of the comments, including long-comment-san's "some sort of retirement meme would fit amazingly here".
1
0
2026-03-02T18:40:19
themoregames
false
null
0
o89v6gd
false
/r/LocalLLaMA/comments/1rirlau/breaking_the_small_qwen35_models_have_been_dropped/o89v6gd/
false
1
t1_o89uy63
Oh that's easy, just add this as an argument: `--chat-template-kwargs "{\"enable_thinking\": false}"`
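For anyone copy-pasting: with llama.cpp's llama-server that goes on the launch line, something like this (model path is a placeholder):

```shell
# Launch llama-server with Qwen3.5 thinking disabled via the chat template
llama-server \
  -m /path/to/Qwen3.5.gguf \
  --chat-template-kwargs "{\"enable_thinking\": false}"
```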
1
0
2026-03-02T18:39:16
cultoftheilluminati
false
null
0
o89uy63
false
/r/LocalLLaMA/comments/1rirlau/breaking_the_small_qwen35_models_have_been_dropped/o89uy63/
false
1
t1_o89uu3a
Does this mean that the 27B model is best for coding?
1
0
2026-03-02T18:38:45
fernando782
false
null
0
o89uu3a
false
/r/LocalLLaMA/comments/1rivckt/visualizing_all_qwen_35_vs_qwen_3_benchmarks/o89uu3a/
false
1
t1_o89utwu
VALIDATION_STATUS.md:

> **ascend-compat is simulation-validated, not hardware-validated.** The architecture, test suite, and patching machinery work correctly in CPU-fallback mode. The CUDA-to-NPU argument mappings are based on Huawei's documentation, not empirical NPU execution.

More AI-hallucinated crap.
1
0
2026-03-02T18:38:44
MelodicRecognition7
false
null
0
o89utwu
false
/r/LocalLLaMA/comments/1rj0dsf/running_llms_on_huawei_ascend_without_rewriting/o89utwu/
false
1
t1_o89uovx
The context is coding. Which instruct variant are you suggesting is better than qwen3-next at coding?
1
0
2026-03-02T18:38:06
Terminator857
false
null
0
o89uovx
false
/r/LocalLLaMA/comments/1riwy9w/is_qwen359b_enough_for_agentic_coding/o89uovx/
false
1
t1_o89uly3
Actually, that table is just a rounded version of the same raw data I used for the chart (from my [Google Sheet](https://docs.google.com/spreadsheets/d/1A5jmS7rDJe114qhRXo8CLEB3csKaFnNKsUdeCkbx_gM/edit?usp=sharing)). To keep the chart readable, I averaged the scores into the general categories Qwen uses (Knowledge, Math, Coding, etc.) rather than listing out 25 individual benchmarks. It's not a copy-paste from Artificial Analysis; it's pulled directly from the official Qwen3.5 model cards.
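For anyone curious, the averaging step is nothing fancy, just a per-category mean. A toy sketch with made-up numbers (not the actual benchmark scores):

```python
from collections import defaultdict
from statistics import mean

# Hypothetical (benchmark, category, score) rows -- not real Qwen numbers
rows = [
    ("GSM8K",     "Math",      92.0),
    ("AIME",      "Math",      60.0),
    ("HumanEval", "Coding",    85.0),
    ("MBPP",      "Coding",    79.0),
    ("MMLU",      "Knowledge", 81.0),
]

by_category = defaultdict(list)
for name, category, score in rows:
    by_category[category].append(score)

# One averaged score per category, which is what gets plotted
category_avg = {cat: round(mean(scores), 1) for cat, scores in by_category.items()}
```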
1
0
2026-03-02T18:37:43
Jobus_
false
null
0
o89uly3
false
/r/LocalLLaMA/comments/1rivckt/visualizing_all_qwen_35_vs_qwen_3_benchmarks/o89uly3/
false
1
t1_o89ugbj
It seems so close, and not much further from perfect than other solutions I've tried. I commend their work on the front end, though, and that the work was balanced between the front and back, offering a lot to both in 0.4.x. Add mcp/youtube-transcript and mcp/webfetch and Qwen3.5 (with thinking turned off), and with the semi-persistent KV cache it's an amazing deep researcher at 64K-token+ context windows, even on an old RTX 3090.
1
0
2026-03-02T18:36:59
One-Cheesecake389
false
null
0
o89ugbj
false
/r/LocalLLaMA/comments/1riwhcf/psa_lm_studios_parser_silently_breaks_qwen35_tool/o89ugbj/
false
1
t1_o89ucd5
Highly autonomous potatoes!
1
0
2026-03-02T18:36:28
Much-Researcher6135
false
null
0
o89ucd5
false
/r/LocalLLaMA/comments/1rirlau/breaking_the_small_qwen35_models_have_been_dropped/o89ucd5/
false
1
t1_o89ubbw
Hi writers 👋 I’m one of the people behind **Blocwrite.com**. It started because I kept breaking my own stories. Once my drafts got long, things would fall apart. A character would show up before being introduced. I’d accidentally repeat the same scene beat in a different chapter. My timeline would drift. My “story bible” was scattered across docs and notes. Blocwrite isn’t just an AI text generator. It’s a **structured writing studio** built around story control: * You build your novel **scene by scene (“blocs”)**, so you can actually see the structure. * It maintains a **living canon/story bible** for characters, locations, lore, and timeline. * It flags **continuity issues** — like early character appearances, contradictions, or duplicated scenes. * You can draft with AI if you want, or write completely on your own inside the system. It’s not credit-based — no watching a token counter while you think. The goal is to reduce plot sprawl and blank-page paralysis so you can focus on telling the story. If you’ve ever gotten 40k words in and realized your own plot doesn’t make sense anymore, I’d genuinely love to know if this sounds useful — or what you’d want something like this to do better. You can check it out at **Blocwrite.com**.
1
0
2026-03-02T18:36:19
Afternoon-Doodles
false
null
0
o89ubbw
false
/r/LocalLLaMA/comments/1qt2po4/a_list_of_creative_writing_benchmarks/o89ubbw/
false
1
t1_o89ub4n
At Q2_K it's weak, lol
1
0
2026-03-02T18:36:18
PhotographerUSA
false
null
0
o89ub4n
false
/r/LocalLLaMA/comments/1ritcfr/imrpove_qwen35_performance_on_weak_gpu/o89ub4n/
false
1
t1_o89uaev
GLM-OCR is amazing for text, but I have lots of documents with tables, etc. The Qwen models are great at reproducing tables.
1
0
2026-03-02T18:36:12
Pjotrs
false
null
0
o89uaev
false
/r/LocalLLaMA/comments/1rivzcl/qwen_35_2b_is_an_ocr_beast/o89uaev/
false
1
t1_o89u6w3
You can also easily load them inside a web application using WebLLM!
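For example, with the `@mlc-ai/web-llm` package in the browser (the model id below is illustrative; check the package's prebuilt model list for the actual Qwen builds):

```typescript
import { CreateMLCEngine } from "@mlc-ai/web-llm";

async function main() {
  // Downloads and compiles the model for WebGPU on first run
  const engine = await CreateMLCEngine("Qwen3-4B-q4f16_1-MLC", {
    initProgressCallback: (p) => console.log(p.text),
  });
  const reply = await engine.chat.completions.create({
    messages: [{ role: "user", content: "Hello!" }],
  });
  console.log(reply.choices[0].message.content);
}

main();
```

Note this only runs in a WebGPU-capable browser, not in Node.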
1
0
2026-03-02T18:35:44
brandon-i
false
null
0
o89u6w3
false
/r/LocalLLaMA/comments/1rirlau/breaking_the_small_qwen35_models_have_been_dropped/o89u6w3/
false
1
t1_o89u5n2
Benchmarks aside, I'm not entirely convinced 110B beats gpt-oss-120b yet, though it could just be that I can run GPT at native quant vs. the Qwen quant I had being flawed. 27B fails a lot of my own benchmarks that GPT handles as well. So I'm sure a 14B Qwen3.5 will benchmark great, will be fast, and may outperform in some areas, but I wouldn't pin my hopes on it being the solid all-rounder GPT is.
1
0
2026-03-02T18:35:34
BigYoSpeck
false
null
0
o89u5n2
false
/r/LocalLLaMA/comments/1riwy9w/is_qwen359b_enough_for_agentic_coding/o89u5n2/
false
1
t1_o89u4yv
Yes, if you are looking for hints for what to do. No, if you expect the agent to write clean code and not deceive you.
1
0
2026-03-02T18:35:28
Terminator857
false
null
0
o89u4yv
false
/r/LocalLLaMA/comments/1riwy9w/is_qwen359b_enough_for_agentic_coding/o89u4yv/
false
1
t1_o89twnv
For simple agentic tasks (single-file edits, basic scaffolding), 9B works surprisingly well - I've been using it with Roo Code for quick prototyping. But for multi-step workflows that require maintaining context across 10+ tool calls, it starts to lose coherence around step 5-6. The sweet spot I found: use 9B for initial exploration and small tasks, then switch to 27B-35B A3B for the actual implementation phase. The MoE models handle long-horizon planning way better while still being runnable on consumer hardware. Also depends heavily on your quant - Q6_K or higher makes a noticeable difference for tool calling accuracy vs Q4. If you're stuck at 8GB VRAM, try running 35B-A3B with heavy CPU offload. Slower (8-12 t/s) but more reliable than pushing 9B beyond its limits.
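A sketch of that offload setup with llama.cpp (model path and counts are illustrative; tune `--n-cpu-moe` until the model fits your VRAM):

```shell
# Load all layers on GPU with -ngl, then push MoE expert tensors
# back to CPU RAM with --n-cpu-moe until it fits in 8 GB
llama-server -m /models/Qwen3.5-35B-A3B-Q4_K_M.gguf \
  -ngl 99 --n-cpu-moe 24 -c 16384
```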
1
0
2026-03-02T18:34:24
IulianHI
false
null
0
o89twnv
false
/r/LocalLLaMA/comments/1riwy9w/is_qwen359b_enough_for_agentic_coding/o89twnv/
false
1