name string | body string | score int64 | controversiality int64 | created timestamp[us] | author string | collapsed bool | edited timestamp[us] | gilded int64 | id string | locked bool | permalink string | stickied bool | ups int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
t1_o8dve18 | Small models are fine for benchmarks, but production coding needs Claude-level context. The real cost isn't model size, it's context waste. | 1 | 0 | 2026-03-03T09:45:23 | Creative-Signal6813 | false | null | 0 | o8dve18 | false | /r/LocalLLaMA/comments/1rjbw0p/benchmarked_qwen_35_small_models_08b2b4b9b_on/o8dve18/ | false | 1 |
t1_o8dvdcm | Also I tried Qwen 3.5 4b, tried to make it understand some song lyrics, and it was wildly off, hallucinating that the song was a cover, hallucinating characters in the song, and completely missing the point.
Meanwhile Gemma3 4b still gave me much more reliable results, not hallucinating anything and actually understanding a lot of what the song was about | 1 | 0 | 2026-03-03T09:45:12 | FoxTrotte | false | null | 0 | o8dvdcm | false | /r/LocalLLaMA/comments/1rjcqm5/qwen_35_4b_is_scary_smart/o8dvdcm/ | false | 1 |
t1_o8dvbt2 | Or maybe you are running a broken quant (just recently Unsloth updated their quants, so if you got it from them, you need to redownload) and did not enable bf16 cache (the default f16 does not work very well for Qwen3.5 models; or can try f32 if bf16 is too slow on your hardware). | 1 | 0 | 2026-03-03T09:44:47 | Lissanro | false | null | 0 | o8dvbt2 | false | /r/LocalLLaMA/comments/1rjd4pv/qwen_25_3_35_smallest_models_incredible/o8dvbt2/ | false | 1 |
t1_o8dv5op | The site is clean and lets you test extraction capabilities without needing an API key. I think CheckStack is basically an instruction-following evaluator. The latency column could be misleading since the same model might be 10-30x faster on a different provider, but your accuracy score is a meaningful measurement of structured instruction following - format compliance, constraint adherence, and consistency - and models scoring well here are likely to score well in tool-call adherence. One of the key things that makes your benchmark valuable is that users can test models on their actual tasks/data (uploading csv). | 1 | 0 | 2026-03-03T09:43:06 | MaxPhoenix_ | false | null | 0 | o8dv5op | false | /r/LocalLLaMA/comments/1r14bqk/i_benchmarked_the_newest_40_ai_models_feb_2026/o8dv5op/ | false | 1 |
t1_o8dv303 | Yep, this. Or "Heretic". | 1 | 0 | 2026-03-03T09:42:20 | ttkciar | false | null | 0 | o8dv303 | false | /r/LocalLLaMA/comments/1rjk9tt/are_all_models_censored_like_this/o8dv303/ | false | 1 |
t1_o8dv1ma | I’m running Qwen 3.5 397B IQ4_KSS https://huggingface.co/ubergarm/Qwen3.5-397B-A17B-GGUF at 41tok/s. | 1 | 0 | 2026-03-03T09:41:57 | big___bad___wolf | false | null | 0 | o8dv1ma | false | /r/LocalLLaMA/comments/1re5omn/qwen_35_397b_on_local_hardware/o8dv1ma/ | false | 1 |
t1_o8dv0cz | Mental model: Ollama is the runtime, LM Studio is Ollama with a GUI, llama.cpp is what Ollama uses underneath.
Start with Ollama + Open WebUI. Run `ollama pull qwen2.5:7b`, point Open WebUI at it — you're up in 10 minutes. Most beginners spend too long comparing options instead of running anything. Pick one, run something, then you'll know what's actually missing. | 1 | 0 | 2026-03-03T09:41:36 | BreizhNode | false | null | 0 | o8dv0cz | false | /r/LocalLLaMA/comments/1rjk2dq/im_a_noob_to_local_inference_how_do_you_choose/o8dv0cz/ | false | 1 |
t1_o8duziw | To benchmark Qwen3.5 improvements. | 1 | 0 | 2026-03-03T09:41:22 | Expensive-Paint-9490 | false | null | 0 | o8duziw | false | /r/LocalLLaMA/comments/1rjfixk/peak_answer/o8duziw/ | false | 1 |
t1_o8duxi4 | For CPU-only coding assistance, Qwen2.5-Coder-7B-Instruct via Ollama at Q4 quantization is the practical choice — 4-6 tok/s on most mid-range CPUs, 32K context which OpenCode needs for multi-file work.
If you have 16GB+ RAM, the 14B version is noticeably better for multi-file edits but slower. Set `OLLAMA_NUM_PARALLEL=1` to avoid memory pressure if other processes share the machine. | 1 | 0 | 2026-03-03T09:40:48 | BreizhNode | false | null | 0 | o8duxi4 | false | /r/LocalLLaMA/comments/1rjkarj/local_model_suggestions_for_medium_end_pc_for/o8duxi4/ | false | 1 |
t1_o8dux1v | How did you get vision to work in PocketPal? It doesn't offer the option to upload images whenever I use Qwen3.5 | 1 | 0 | 2026-03-03T09:40:41 | FoxTrotte | false | null | 0 | o8dux1v | false | /r/LocalLLaMA/comments/1rjcqm5/qwen_35_4b_is_scary_smart/o8dux1v/ | false | 1 |
t1_o8duwgj | look at the top of the screenshot | 1 | 0 | 2026-03-03T09:40:30 | Firepal64 | false | null | 0 | o8duwgj | false | /r/LocalLLaMA/comments/1rjcqm5/qwen_35_4b_is_scary_smart/o8duwgj/ | false | 1 |
t1_o8duvzy | These new smaller Qwen models are really good. Hopefully, we can get more models like this in the future (not just from Qwen). Especially now that barely anyone can afford RAM or GPUs. | 1 | 0 | 2026-03-03T09:40:23 | rmyworld | false | null | 0 | o8duvzy | false | /r/LocalLLaMA/comments/1rj0m27/qwen35_2b_4b_and_9b_tested_on_raspberry_pi5/o8duvzy/ | false | 1 |
t1_o8duumm | For object detection + quality gating together, Qwen2.5-VL-7B is a solid balance — fast enough for ~200ms/image, and the quality threshold in the prompt actually holds.
One trick: add a Laplacian variance pre-filter before the VLM call. Adds 5ms but cuts VLM calls 30-40% on real-world uploads. Florence-2 is also worth testing for the object ID part — lighter than full VLMs, surprisingly accurate on common objects. | 1 | 0 | 2026-03-03T09:40:00 | BreizhNode | false | null | 0 | o8duumm | false | /r/LocalLLaMA/comments/1rjkyq9/fast_free_vlm_for_object_id_quality_filtering/o8duumm/ | false | 1 |
t1_o8dupgh | In this sub there is a tradition of building servers with new and old parts to run large models. I run quantized Kimi on a system with 512 GB RAM, for example.
| 1 | 0 | 2026-03-03T09:38:36 | Expensive-Paint-9490 | false | null | 0 | o8dupgh | false | /r/LocalLLaMA/comments/1rjjcyk/still_a_noob_is_anyone_actually_running_the/o8dupgh/ | false | 1 |
t1_o8dumkg | You could separate the task: first, ask for a quality assessment, then for object id. | 1 | 0 | 2026-03-03T09:37:48 | ClearApartment2627 | false | null | 0 | o8dumkg | false | /r/LocalLLaMA/comments/1rjkyq9/fast_free_vlm_for_object_id_quality_filtering/o8dumkg/ | false | 1 |
t1_o8dufsi | I gave it an image with meta data and asked where it was, it didn't use it at all if it had access to it. | 1 | 0 | 2026-03-03T09:35:55 | JoeyJoeC | false | null | 0 | o8dufsi | false | /r/LocalLLaMA/comments/1rjcqm5/qwen_35_4b_is_scary_smart/o8dufsi/ | false | 1 |
t1_o8dufb6 | Afaik for phones, you want to use Q4\_0 because it has been optimized for the ARM architecture. It will run a lot faster than other quants. | 1 | 0 | 2026-03-03T09:35:48 | dampflokfreund | false | null | 0 | o8dufb6 | false | /r/LocalLLaMA/comments/1rjcqm5/qwen_35_4b_is_scary_smart/o8dufb6/ | false | 1 |
t1_o8duf73 | Does anyone have list of questions like that? | 1 | 0 | 2026-03-03T09:35:46 | alppawack | false | null | 0 | o8duf73 | false | /r/LocalLLaMA/comments/1rjd4pv/qwen_25_3_35_smallest_models_incredible/o8duf73/ | false | 1 |
t1_o8due5m | I need to build up more MCP tooling - especially for internet searches | 1 | 0 | 2026-03-03T09:35:29 | ansibleloop | false | null | 0 | o8due5m | false | /r/LocalLLaMA/comments/1rjd4pv/qwen_25_3_35_smallest_models_incredible/o8due5m/ | false | 1 |
t1_o8duc7r | [removed] | 1 | 0 | 2026-03-03T09:34:57 | [deleted] | true | null | 0 | o8duc7r | false | /r/LocalLLaMA/comments/1rjkr2u/how_do_you_test_your_agents_before_deploying/o8duc7r/ | false | 1 |
t1_o8dtykn | Curious to know how the image model works but my guess is the image to text process tells it where the image is taken, and then afterwards it tries to reconstruct a good explanation based on the answer | 1 | 0 | 2026-03-03T09:31:13 | okphong | false | null | 0 | o8dtykn | false | /r/LocalLLaMA/comments/1rjcqm5/qwen_35_4b_is_scary_smart/o8dtykn/ | false | 1 |
t1_o8dtybi | 1 | 0 | 2026-03-03T09:31:08 | JackTheif52 | false | null | 0 | o8dtybi | false | /r/LocalLLaMA/comments/1rfrsr6/rx_7900_xtx_24g_rocm_72_with_r1_32b_awq_vs_gptq/o8dtybi/ | false | 1 | |
t1_o8dtl59 | if no one makes a pr ill make one maybe | 1 | 0 | 2026-03-03T09:27:31 | Odd-Ordinary-5922 | false | null | 0 | o8dtl59 | false | /r/LocalLLaMA/comments/1riunee/how_to_fix_endless_looping_with_qwen35/o8dtl59/ | false | 1 |
t1_o8dtif2 | I'm trying the qwen3.5-4b-mlx in LM Studio, and it says "Wait, one more check." over and over and over. Am I doing something wrong? | 1 | 0 | 2026-03-03T09:26:46 | firesalamander | false | null | 0 | o8dtif2 | false | /r/LocalLLaMA/comments/1rjd4pv/qwen_25_3_35_smallest_models_incredible/o8dtif2/ | false | 1 |
t1_o8dtc5s | Opus 4.6 for coding locally and I honestly want nothing else | 1 | 0 | 2026-03-03T09:25:02 | Mayion | false | null | 0 | o8dtc5s | false | /r/LocalLLaMA/comments/1rjd4pv/qwen_25_3_35_smallest_models_incredible/o8dtc5s/ | false | 1 |
t1_o8dt5b6 | it seems it doesn't. Just researched it. Sorry, my wrong. I thought I had read about it in this sub, even had the memory there was a specific flag to activate it in llama.cpp | 1 | 0 | 2026-03-03T09:23:08 | mouseofcatofschrodi | false | null | 0 | o8dt5b6 | false | /r/LocalLLaMA/comments/1ri2irg/breaking_today_qwen_35_small/o8dt5b6/ | false | 1 |
t1_o8dszsz | Totally agree!
It’s the classic "Hardware Bottleneck" (Barrel Effect). People focus on the GPU "horsepower" but ignore the "lanes" connecting them.
In my experience, especially moving toward cluster scales, the Compute-to-Interconnect ratio is where most startups fail.
You can have the best H100/B200 nodes, but if your PCIe lane allocation is messy or your GPU topology (NVLink/NVSwitch) isn't optimized for the specific all-reduce pattern of your model, you're just burning VC money on idle cycles.
It’s often a "Systems Literacy" gap. Teams hire 10 ML researchers but zero Infiniband/Storage Engineers. They treat the cluster as a black box until NCCL timeouts start killing their checkpoints.
The real moat in future isn't just weights; it's Infrastructure Orchestration. If your "AI Factory" has a clogged pipe, the size of the engine doesn't matter. | 1 | 0 | 2026-03-03T09:21:35 | Rain_Sunny | false | null | 0 | o8dszsz | false | /r/LocalLLaMA/comments/1rjkf7s/hot_take_most_ai_startups_dont_have_a_model/o8dszsz/ | false | 1 |
t1_o8dsyz2 | Especially the thinking from what I am seeing. Simple prompts trigger walls of thought | 1 | 0 | 2026-03-03T09:21:21 | Beautiful-Honeydew10 | false | null | 0 | o8dsyz2 | false | /r/LocalLLaMA/comments/1rj8e7z/is_anyone_else_seeing_qwen_35_35b_outperform/o8dsyz2/ | false | 1 |
t1_o8dsnc2 | No, you don't need to go through all that trouble. You just need to enable remote desktop and remote login (these two functions are complementary), and then use networking tools like Tailscale or EasyTier. I can confirm that doing this will save you a lot of trouble regarding Wayland licensing. | 1 | 0 | 2026-03-03T09:18:06 | Dazzling_Equipment_9 | false | null | 0 | o8dsnc2 | false | /r/LocalLLaMA/comments/1rjh5wg/unsloth_fixed_version_of_qwen3535ba3b_is/o8dsnc2/ | false | 1 |
t1_o8dscop | I really have to spend a small time putting a small script I did that automates the installation of llamacpp and llama swap into GitHub. The only reason we should use llamacpp wrappers is when a tool requeries those, aside from then keep llamacpp as the only and best option. | 1 | 0 | 2026-03-03T09:15:09 | danigoncalves | false | null | 0 | o8dscop | false | /r/LocalLLaMA/comments/1rjb7yk/psa_if_you_want_to_test_new_models_use/o8dscop/ | false | 1 |
t1_o8ds9gi | Is that an instruct version? I’m
on Mac and the only way I found so far to turn thinking off is by typing “/set nothink” in the ollama cli, but the ollama chat app window where you can upload pics doesn”t have that feature. I also tried mlx-chat and LM-studio. None of them were able to turn off thinking even when changing the config json files. This only leaves llama.cpp and trying that. | 1 | 0 | 2026-03-03T09:14:15 | ProdoRock | false | null | 0 | o8ds9gi | false | /r/LocalLLaMA/comments/1rjcqm5/qwen_35_4b_is_scary_smart/o8ds9gi/ | false | 1 |
t1_o8ds5mm | Tell it you're pentesting | 1 | 0 | 2026-03-03T09:13:12 | Hefty_Acanthaceae348 | false | null | 0 | o8ds5mm | false | /r/LocalLLaMA/comments/1rjk9tt/are_all_models_censored_like_this/o8ds5mm/ | false | 1 |
t1_o8ds5hw | [SSH-rdp](https://github.com/kokoko3k/ssh-rdp). But I don't remember/haven't checked whether it works with Wayland. | 1 | 0 | 2026-03-03T09:13:10 | QTaKs | false | null | 0 | o8ds5hw | false | /r/LocalLLaMA/comments/1rjh5wg/unsloth_fixed_version_of_qwen3535ba3b_is/o8ds5hw/ | false | 1 |
t1_o8ds5dz | [removed] | 1 | 0 | 2026-03-03T09:13:08 | [deleted] | true | null | 0 | o8ds5dz | false | /r/LocalLLaMA/comments/1rjkf7s/hot_take_most_ai_startups_dont_have_a_model/o8ds5dz/ | false | 1 |
t1_o8ds1i1 | bro had to flex his ddr5 | 1 | 0 | 2026-03-03T09:12:03 | theghost3172 | false | null | 0 | o8ds1i1 | false | /r/LocalLLaMA/comments/1rizlkn/qwen_27b_is_a_beast_but_not_for_agentic_work/o8ds1i1/ | false | 1 |
t1_o8drzz6 | never like this version. just compared with previous one.
https://preview.redd.it/tzhsg0k1rsmg1.png?width=1721&format=png&auto=webp&s=869d43daede694cab419ee922e398e1fb0035a32
| 1 | 0 | 2026-03-03T09:11:38 | CapitalShake3085 | false | null | 0 | o8drzz6 | false | /r/LocalLLaMA/comments/1rj8x1q/qwen35_4b_overthinking_to_say_hello/o8drzz6/ | false | 1 |
t1_o8drypu | Thanks for the info. | 1 | 0 | 2026-03-03T09:11:16 | A-n-d-y-R-e-d | false | null | 0 | o8drypu | false | /r/LocalLLaMA/comments/1r0ser2/any_latest_ocr_model_i_can_run_locally_in_18gb_ram/o8drypu/ | false | 1 |
t1_o8drrq4 | I dont think you achieve half as good on this setup sadly. Your gpu has either 2 or 4gb vram and even small models will struggle. To get similar experience to running agentic work you need more vram sadly. Happy to be proven wrong | 1 | 0 | 2026-03-03T09:09:18 | sagiroth | false | null | 0 | o8drrq4 | false | /r/LocalLLaMA/comments/1rjkarj/local_model_suggestions_for_medium_end_pc_for/o8drrq4/ | false | 1 |
t1_o8drq27 | Actually what's interesting:
Qwen 3 supported something like: **seamless switching between thinking mode** (for complex logical reasoning, math, and coding) and **non-thinking mode** (for efficient, general-purpose dialogue) **within single model**, ensuring optimal performance across various scenarios. So basically the Qwen3 uses a **Router**. When you send a prompt, the model performs an initial "intent analysis" tokens:
\- **Simple task**: If you say "Hi, how are you?"
\- **Complex Task:** If you provide a Python bug or a calculus problem, the model triggers the **thinking experts** (the reasoning path).
Everything was done dynamically without users interference.
It seems like this feature is not working good in Qwen 3.5, especially in local deployment/quantization, therefore you require to adjust everything manually depending on your needs. | 1 | 0 | 2026-03-03T09:08:50 | Specialist-Chain-369 | false | null | 0 | o8drq27 | false | /r/LocalLLaMA/comments/1rj8x1q/qwen35_4b_overthinking_to_say_hello/o8drq27/ | false | 1 |
t1_o8drob4 | 1 | 0 | 2026-03-03T09:08:21 | CapitalShake3085 | false | null | 0 | o8drob4 | false | /r/LocalLLaMA/comments/1rj8x1q/qwen35_4b_overthinking_to_say_hello/o8drob4/ | false | 1 | |
t1_o8drmkv | thinking is enabled, you can see it in the bottom | 1 | 0 | 2026-03-03T09:07:52 | Epsilon-EP | false | null | 0 | o8drmkv | false | /r/LocalLLaMA/comments/1rjcqm5/qwen_35_4b_is_scary_smart/o8drmkv/ | false | 1 |
t1_o8drk4y | 1 | 0 | 2026-03-03T09:07:12 | jslominski | false | null | 0 | o8drk4y | false | /r/LocalLLaMA/comments/1rianwb/running_qwen35_27b_dense_with_170k_context_at/o8drk4y/ | false | 1 | |
t1_o8drgdl | [removed] | 1 | 0 | 2026-03-03T09:06:08 | [deleted] | true | null | 0 | o8drgdl | false | /r/LocalLLaMA/comments/1p5retd/best_local_vlms_november_2025/o8drgdl/ | false | 1 |
t1_o8draav | qwen is just quietly becoming the best bang for buck in the space right now. | 1 | 0 | 2026-03-03T09:04:27 | justserg | false | null | 0 | o8draav | false | /r/LocalLLaMA/comments/1rj6m71/qwen_35_27b_a_testament_to_the_transformer/o8draav/ | false | 1 |
t1_o8dr3ah | Overthinking? Ask any qwen 3.5 to \`Tell a funny joke\` or even better in Russian \`Расскажи смешной анекдот\` - and you 100% got endless thinking | 1 | 0 | 2026-03-03T09:02:33 | Serious-Log7550 | false | null | 0 | o8dr3ah | false | /r/LocalLLaMA/comments/1rj8x1q/qwen35_4b_overthinking_to_say_hello/o8dr3ah/ | false | 1 |
t1_o8dr2q4 | Alright, seems like there is still no GPU support for BF16, see: [https://github.com/ggml-org/llama.cpp/issues/8941](https://github.com/ggml-org/llama.cpp/issues/8941) | 1 | 0 | 2026-03-03T09:02:24 | KeyLiaoHPC | false | null | 0 | o8dr2q4 | false | /r/LocalLLaMA/comments/1riunee/how_to_fix_endless_looping_with_qwen35/o8dr2q4/ | false | 1 |
t1_o8dqvz5 | as a man out of latest utility loop this sentence is crazy | 1 | 0 | 2026-03-03T09:00:34 | HistorianPotential48 | false | null | 0 | o8dqvz5 | false | /r/LocalLLaMA/comments/1rjh5wg/unsloth_fixed_version_of_qwen3535ba3b_is/o8dqvz5/ | false | 1 |
t1_o8dqmoc | Damn I get 40t/s with Qwen3 30ba3b on my 4090. I must be doing something wrong | 1 | 0 | 2026-03-03T08:58:01 | dodiyeztr | false | null | 0 | o8dqmoc | false | /r/LocalLLaMA/comments/1rji5bc/how_do_the_small_qwen35_models_compare_to_the/o8dqmoc/ | false | 1 |
t1_o8dqme5 | Oh,You can't! LM Studio doesn't have built-in version checking.
Models on Hugging Face get updated via new commits, but LM Studio treats each download as a snapshot.
No notifications, no "update available" indicators.Try like followings**:**
Manual Check: Visit the model's Hugging Face page occasionally to see if there's new activity.
Re-download: Simply delete and re-download the model when you suspect an update.
Use lms CLI: lms ls lists your local models - combine with manual HF checks.
It's not just LM Studio,but most local inference tools work this way. Versioning is on you to track! | 1 | 0 | 2026-03-03T08:57:57 | Rain_Sunny | false | null | 0 | o8dqme5 | false | /r/LocalLLaMA/comments/1rjjvqy/how_can_i_know_if_downloaded_models_have_a_newer/o8dqme5/ | false | 1 |
t1_o8dqkre | The answer **looks** formal and accurate, **biased** for human preference. | 1 | 0 | 2026-03-03T08:57:30 | foldl-li | false | null | 0 | o8dqkre | false | /r/LocalLLaMA/comments/1rjd4pv/qwen_25_3_35_smallest_models_incredible/o8dqkre/ | false | 1 |
t1_o8dqk9j | Yes, there are a number of models which have been uncensored, or abliterated. | 1 | 0 | 2026-03-03T08:57:22 | CryptographerKlutzy7 | false | null | 0 | o8dqk9j | false | /r/LocalLLaMA/comments/1rjk9tt/are_all_models_censored_like_this/o8dqk9j/ | false | 1 |
t1_o8dq31p | [removed] | 1 | 0 | 2026-03-03T08:52:40 | [deleted] | true | null | 0 | o8dq31p | false | /r/LocalLLaMA/comments/1rjjvqo/vllm_on_v100_for_qwen_newer_models/o8dq31p/ | false | 1 |
t1_o8dpzun | [removed] | 1 | 0 | 2026-03-03T08:51:47 | [deleted] | true | null | 0 | o8dpzun | false | /r/LocalLLaMA/comments/1rizodv/running_qwen_35_08b_locally_in_the_browser_on/o8dpzun/ | false | 1 |
t1_o8dpzfz | For those who think this is a feature, look at Qwen3 4B Thinking. Now you can continue talking nonsense. 12s vs 48s.
https://preview.redd.it/w6eirxghnsmg1.png?width=1721&format=png&auto=webp&s=ce7c5737c1be2e427399449c802c122ccb911ca1
| 1 | 0 | 2026-03-03T08:51:41 | CapitalShake3085 | false | null | 0 | o8dpzfz | false | /r/LocalLLaMA/comments/1rj8x1q/qwen35_4b_overthinking_to_say_hello/o8dpzfz/ | false | 1 |
t1_o8dpvqa | spelling large numbers like 341.412.312.516.754.146
or the classic Sicko Mode or Mo Bamba? | 1 | 0 | 2026-03-03T08:50:39 | Kawnanman | false | null | 0 | o8dpvqa | false | /r/LocalLLaMA/comments/19bozo5/what_are_the_top_five_questions_you_always_ask_to/o8dpvqa/ | false | 1 |
t1_o8dpvmg | Hey there! Well, I ended up using PaddleOCR and i think there was another python library that I directly used in python at that time. Since this was 6 months ago, ironically, a lot of better OCR model released right after I made the post including OlmOCR, Qwen3 VL etc.
As for why I was going for local only: I wanted to see how much can I utilise my device for these tasks + I wanted to learn about LLM/VLM automation + privacy reasons as the documents were confidential.
I do agree though, benchmarks are hard to trust, yet quality still remains in cloud. Open source is also catching up though (Qwen3.5 is amazing haha).
Cheers! | 1 | 0 | 2026-03-03T08:50:37 | IntroductionMoist974 | false | null | 0 | o8dpvmg | false | /r/LocalLLaMA/comments/1nhl9vs/anyone_getting_reliable_handwritingtotext_with/o8dpvmg/ | false | 1 |
t1_o8dprto | Fixed in about:config
- gfx.webgpu.ignore-blocklist = true
- dom.webgpu.enabled = true | 1 | 0 | 2026-03-03T08:49:35 | Nepherpitu | false | null | 0 | o8dprto | false | /r/LocalLLaMA/comments/1rjhuvq/visual_narrator_with_qwen3508b_on_webgpu/o8dprto/ | false | 1 |
t1_o8dpnqp | I would be very interested to hear, if you find any! | 1 | 0 | 2026-03-03T08:48:28 | l_eo_ | false | null | 0 | o8dpnqp | false | /r/LocalLLaMA/comments/1riv3wv/qwen_35_2b_on_android/o8dpnqp/ | false | 1 |
t1_o8dpmgr | Test different settings with llama-bench, pick the best. | 1 | 0 | 2026-03-03T08:48:07 | whatever462672 | false | null | 0 | o8dpmgr | false | /r/LocalLLaMA/comments/1rjff88/how_do_i_get_the_best_speed_out_of_qwen_35_9b_in/o8dpmgr/ | false | 1 |
t1_o8dplp6 | Not working on Firefox :( | 1 | 0 | 2026-03-03T08:47:55 | Nepherpitu | false | null | 0 | o8dplp6 | false | /r/LocalLLaMA/comments/1rjhuvq/visual_narrator_with_qwen3508b_on_webgpu/o8dplp6/ | false | 1 |
t1_o8dpl8a | Thanks a lot for sharing! Would you still consider all of them to be relevant in certain scenarios? My feeling right now is that glm and Paddle are the best for small footprints while Qwen is good in the Raw VLM capability side with a larger footprint, then you move on to external services like Mistral/Google Doc AI, glm (online). | 1 | 0 | 2026-03-03T08:47:47 | danihend | false | null | 0 | o8dpl8a | false | /r/LocalLLaMA/comments/1rivzcl/qwen_35_2b_is_an_ocr_beast/o8dpl8a/ | false | 1 |
t1_o8dphkc | is 5bit enough? | 1 | 0 | 2026-03-03T08:46:47 | henrygatech | false | null | 0 | o8dphkc | false | /r/LocalLLaMA/comments/1rjh5wg/unsloth_fixed_version_of_qwen3535ba3b_is/o8dphkc/ | false | 1 |
t1_o8dpfvx | I'm using it on KDE Plasma 6.6.1 KWin (Wayland) and its amazing as long as I have a HDMI monitor plugged in. Any suggestions for Debian running GNOME? I use that for my homeserver and I just cant get it to work at all. | 1 | 0 | 2026-03-03T08:46:19 | monerobull | false | null | 0 | o8dpfvx | false | /r/LocalLLaMA/comments/1rjh5wg/unsloth_fixed_version_of_qwen3535ba3b_is/o8dpfvx/ | false | 1 |
t1_o8dpdyv | - `uv venv env --python=3.12`
- Activate env
- `uv pip install vllm --torch-backend=auto --extra-index-url https://wheels.vllm.ai/nightly`
And finally
```
uv run
-m vllm.entrypoints.openai.api_server
--model-loader-extra-config '{ "enable_multithread_load": true, "num_threads": 4 }'
--model /mnt/samsung_990_evo/llm-data/models/Sehyo/Qwen3.5-122B-A10B-NVFP4
--served-model-name "qwen3.5-122b-a10b-fp4"
--port ${PORT}
--tensor-parallel-size 4
--enable-prefix-caching
--max-model-len auto
--gpu-memory-utilization 0.95
--max-num-seqs 4
--attention-backend flashinfer
--reasoning-parser qwen3
--enable-auto-tool-choice
--tool-call-parser qwen3_coder
```
Literally following guide on model page. | 1 | 0 | 2026-03-03T08:45:49 | Nepherpitu | false | null | 0 | o8dpdyv | false | /r/LocalLLaMA/comments/1rii2pd/current_state_of_qwen35122ba10b/o8dpdyv/ | false | 1 |
t1_o8dpbnk | Wrong params, wrong model/quantization, or bad inference engine. | 1 | 0 | 2026-03-03T08:45:12 | R_Duncan | false | null | 0 | o8dpbnk | false | /r/LocalLLaMA/comments/1rj8x1q/qwen35_4b_overthinking_to_say_hello/o8dpbnk/ | false | 1 |
t1_o8dpb6v | Finally, A(utistic)GI! | 1 | 0 | 2026-03-03T08:45:05 | lovvc | false | null | 0 | o8dpb6v | false | /r/LocalLLaMA/comments/1rj8x1q/qwen35_4b_overthinking_to_say_hello/o8dpb6v/ | false | 1 |
t1_o8dpaxj | What's the advantage of using Q8\_K\_XL over Q8\_0?
I've played around with Q8\_0 and on an AMD RX9060 16GB (should be comparable to a 4060ti?) I get around \~32tps. | 1 | 0 | 2026-03-03T08:45:00 | Dunkle_Geburt | false | null | 0 | o8dpaxj | false | /r/LocalLLaMA/comments/1rjff88/how_do_i_get_the_best_speed_out_of_qwen_35_9b_in/o8dpaxj/ | false | 1 |
t1_o8dp9to | Don't worry! Just Consider like this:
Ollama: The "iPhone" of local inference. Super easy to install, works out of the box, great for beginners. Just ollama run llama3 and you are done.
Llama.cpp:The "Android" - more control, runs on almost any hardware (even CPU), but requires some command line comfort. Best for older machines or when you need maximum efficiency.
Hugging Face (transformers):The "build your own" option. Most flexible but needs Python knowledge. Great for experimenting with different model architectures.
Advice: Start with Ollama. If you hit its limits (weird models, need more control), try LM Studio (GUI for llama.cpp) as a middle ground. Hugging Face can come later when you're ready to code.
Pick based on: How much time vs control you want! | 1 | 0 | 2026-03-03T08:44:43 | Rain_Sunny | false | null | 0 | o8dp9to | false | /r/LocalLLaMA/comments/1rjk2dq/im_a_noob_to_local_inference_how_do_you_choose/o8dp9to/ | false | 1 |
t1_o8dp6mz | It's nice to have options. | 1 | 0 | 2026-03-03T08:43:52 | florinandrei | false | null | 0 | o8dp6mz | false | /r/LocalLLaMA/comments/1rj6m71/qwen_35_27b_a_testament_to_the_transformer/o8dp6mz/ | false | 1 |
t1_o8dp3mq | Like sparse attention? | 1 | 0 | 2026-03-03T08:43:03 | florinandrei | false | null | 0 | o8dp3mq | false | /r/LocalLLaMA/comments/1rj6m71/qwen_35_27b_a_testament_to_the_transformer/o8dp3mq/ | false | 1 |
t1_o8dozbw | > Makes me want to buy tech stocks... or a bunker.
I hear the cool kids do both. | 1 | 0 | 2026-03-03T08:41:52 | florinandrei | false | null | 0 | o8dozbw | false | /r/LocalLLaMA/comments/1rj6m71/qwen_35_27b_a_testament_to_the_transformer/o8dozbw/ | false | 1 |
t1_o8doytb | Someone should fine-tune it to play geo guesser lol | 1 | 0 | 2026-03-03T08:41:44 | po_stulate | false | null | 0 | o8doytb | false | /r/LocalLLaMA/comments/1rjcqm5/qwen_35_4b_is_scary_smart/o8doytb/ | false | 1 |
t1_o8doxbb | \> On top of that, another thing that I also thought would be a good idea, was have MiniMax review and find issues with its own generated code (multiple times even). So I run a "find issues" prompt a few times over the contracts, it found a few issues, which I fixed, but nothing egregious.
Just wanted to give you a heads up. The creator of pandas also created an open source code review tool. Everytime you commit something to git, it automatically checks the code against the last few commits and does a full review. You might find it useful.
[https://github.com/roborev-dev/roborev](https://github.com/roborev-dev/roborev) | 1 | 0 | 2026-03-03T08:41:20 | DomiekNSFW | false | null | 0 | o8doxbb | false | /r/LocalLLaMA/comments/1ri1hgv/a_bit_of_a_psa_i_get_that_qwen35_is_all_the_rage/o8doxbb/ | false | 1 |
t1_o8dot36 | This is impressive. I’ll have to give it a whirl with opencode. | 1 | 0 | 2026-03-03T08:40:09 | FloofyKitteh | false | null | 0 | o8dot36 | false | /r/LocalLLaMA/comments/1rjh5wg/unsloth_fixed_version_of_qwen3535ba3b_is/o8dot36/ | false | 1 |
t1_o8dosqm | That's not how vision models work. Unless OP's using RAG instead of passing the image directly but I don't think that's the case. | 1 | 0 | 2026-03-03T08:40:04 | po_stulate | false | null | 0 | o8dosqm | false | /r/LocalLLaMA/comments/1rjcqm5/qwen_35_4b_is_scary_smart/o8dosqm/ | false | 1 |
t1_o8doojf | I see. And that's the "pull" of ollama (pun intended). You can do a oneliner to install it and then ollama pull llama-3.2 and you're ready to go. Hopefully the llamacpp team or someone will bake such a feature into it as well. | 1 | 0 | 2026-03-03T08:38:54 | Bac-Te | false | null | 0 | o8doojf | false | /r/LocalLLaMA/comments/1rjd4pv/qwen_25_3_35_smallest_models_incredible/o8doojf/ | false | 1 |
t1_o8doni9 | Image encoders for VL models don’t process the metadata. They only encode the pixel array. | 1 | 0 | 2026-03-03T08:38:38 | -p-e-w- | false | null | 0 | o8doni9 | false | /r/LocalLLaMA/comments/1rjcqm5/qwen_35_4b_is_scary_smart/o8doni9/ | false | 1 |
t1_o8domek | [removed] | 1 | 0 | 2026-03-03T08:38:20 | [deleted] | true | null | 0 | o8domek | false | /r/LocalLLaMA/comments/1rjikwz/help_me_create_my_llm_ecosystem/o8domek/ | false | 1 |
t1_o8dom6d | I don't see why it would need to be considered either of those things. If it works, it works. | 1 | 0 | 2026-03-03T08:38:17 | NNN_Throwaway2 | false | null | 0 | o8dom6d | false | /r/LocalLLaMA/comments/1rjhmmf/presence_penalty_seems_to_be_incoming_on_lmstudio/o8dom6d/ | false | 1 |
t1_o8dodd8 | Been using it Qwen3.5 for my local HA setup, running just fine. Also why do you have to hate on the Chinese here lol. | 1 | 0 | 2026-03-03T08:35:50 | kbderrr | false | null | 0 | o8dodd8 | false | /r/LocalLLaMA/comments/1ri1hgv/a_bit_of_a_psa_i_get_that_qwen35_is_all_the_rage/o8dodd8/ | false | 1 |
t1_o8doblr | Presence penalty adds a fixed penalty to a repeated token, regardless of how many times it has already appeared in context. The penalty doesn't change even if the token continues to appear.
Frequency and repetition penalty add a proportional penalty based on a token's frequency. However, repetition penalty has additional parameters to scale the slope and range, which can make it more impactful and more tunable.
If you want the exact implementation details, they can be found in the llama.cpp source code and related pull requests. | 1 | 0 | 2026-03-03T08:35:21 | NNN_Throwaway2 | false | null | 0 | o8doblr | false | /r/LocalLLaMA/comments/1rjhmmf/presence_penalty_seems_to_be_incoming_on_lmstudio/o8doblr/ | false | 1 |
t1_o8dob6b | Wouldn't just two $10k mac ultras give you over 1tb of RAM to use? You can hook 4 up for $40k before tax which at 2tb RAM should be usable if a bit slow because of the connections.
Hell, for $60k falcon northwest can build you a workstation that has over 1tb of RAM with 96gb Nvidia GPU and an insane CPU.
So while *damn* expensive it isn't quite as high as you're saying it is. Yet. Cause man if pricing isn't increasing. | 1 | 0 | 2026-03-03T08:35:14 | YT_Brian | false | null | 0 | o8dob6b | false | /r/LocalLLaMA/comments/1rjjcyk/still_a_noob_is_anyone_actually_running_the/o8dob6b/ | false | 1 |
t1_o8do698 | Would appreciate someone who runs these to share the vllm args. | 1 | 0 | 2026-03-03T08:33:53 | UltrMgns | false | null | 0 | o8do698 | false | /r/LocalLLaMA/comments/1rjg514/qwen35_100b_part_ii_nvfp4_blackwell_is_up/o8do698/ | false | 1 |
t1_o8do5ne | interesting! any plan to apply the same options on Qwen/Qwen3.5-35B-A3B ? I reallly look forward to run it on 5060ti, which is probably the most valuable budget AI card at this moment[](https://huggingface.co/Qwen/Qwen3.5-35B-A3B) | 1 | 0 | 2026-03-03T08:33:43 | changtimwu | false | null | 0 | o8do5ne | false | /r/LocalLLaMA/comments/1qwbmct/qwen3codernext_on_rtx_5060_ti_16_gb_some_numbers/o8do5ne/ | false | 1 |
t1_o8do2mi | thanks! just tried it with some random images and it works well. e.g. for an image of 5 apples:
"This image displays a still life composition featuring five red apples arranged in a triangular pattern on a textured, off-white surface. Each apple is shown from a top-down perspective, highlighting their round shapes, subtle speckles, and brown stems. The lighting creates soft highlights on the apples’ smooth skin, emphasizing their natural form and vibrant colorations." | 1 | 0 | 2026-03-03T08:32:53 | kbderrr | false | null | 0 | o8do2mi | false | /r/LocalLLaMA/comments/1rjhuvq/visual_narrator_with_qwen3508b_on_webgpu/o8do2mi/ | false | 1 |
t1_o8dnzqb | Did you configured as [the model card on Hugging face told you](https://huggingface.co/Qwen/Qwen3.5-9B)?
We recommend using the following set of sampling parameters for generation
* Thinking mode for general tasks: `temperature=1.0, top_p=0.95, top_k=20, min_p=0.0, presence_penalty=1.5, repetition_penalty=1.0`
* Thinking mode for precise coding tasks (e.g. WebDev): `temperature=0.6, top_p=0.95, top_k=20, min_p=0.0, presence_penalty=0.0, repetition_penalty=1.0`
* Instruct (or non-thinking) mode for general tasks: `temperature=0.7, top_p=0.8, top_k=20, min_p=0.0, presence_penalty=1.5, repetition_penalty=1.0`
* Instruct (or non-thinking) mode for reasoning tasks: `temperature=1.0, top_p=0.95, top_k=20, min_p=0.0, presence_penalty=1.5, repetition_penalty=1.0`
This is what I got in LM Studio with with the 9b version and those configs:
>Thought for 3.37 seconds
>Okay, the user just said "Hello". I need to respond appropriately. Let me think about the context. Since there's no specific question or topic mentioned, a friendly and open-ended response is best. I should acknowledge their greeting and offer assistance. Maybe something like "Hello! How can I help you today?" That's simple and inviting. Let me make sure it's not too formal. Keep it warm and approachable. Alright, that should work.
| 1 | 0 | 2026-03-03T08:32:05 | MarzipanTop4944 | false | null | 0 | o8dnzqb | false | /r/LocalLLaMA/comments/1rj8x1q/qwen35_4b_overthinking_to_say_hello/o8dnzqb/ | false | 1 |
t1_o8dnuzl | I’m still mad they didn’t realise it on Wednesday and call it Qwensday | 1 | 0 | 2026-03-03T08:30:47 | Glum-Traffic-7203 | false | null | 0 | o8dnuzl | false | /r/LocalLLaMA/comments/1ri2irg/breaking_today_qwen_35_small/o8dnuzl/ | false | 1 |
t1_o8dnsqi | Hai avuto problemi di x-frame? | 1 | 0 | 2026-03-03T08:30:10 | Single_Error8996 | false | null | 0 | o8dnsqi | false | /r/LocalLLaMA/comments/1rjh5wg/unsloth_fixed_version_of_qwen3535ba3b_is/o8dnsqi/ | false | 1 |
t1_o8dnshk | I just check once every month or two for any I use. If a new one exists I download and test out to compare and if netter delete the older one.
Be nice if Kobold or any of them could auto check but don't think any really do. | 1 | 0 | 2026-03-03T08:30:06 | YT_Brian | false | null | 0 | o8dnshk | false | /r/LocalLLaMA/comments/1rjjvqy/how_can_i_know_if_downloaded_models_have_a_newer/o8dnshk/ | false | 1 |
t1_o8dnp0w | It depends on which tests, there is also Qwen3-Coder-Next which isn't bad | 1 | 0 | 2026-03-03T08:29:11 | Deep_Traffic_7873 | false | null | 0 | o8dnp0w | false | /r/LocalLLaMA/comments/1rjg5qm/qwen3535ba3b_vs_qwen3_coder_30b_a3b_instruct_for/o8dnp0w/ | false | 1 |
t1_o8dnngd | Thanks for taking the time to respond! | 1 | 0 | 2026-03-03T08:28:46 | bambamlol | false | null | 0 | o8dnngd | false | /r/LocalLLaMA/comments/1ri635s/13_months_since_the_deepseek_moment_how_far_have/o8dnngd/ | false | 1 |
t1_o8dnlde | How about gpt 5.3-codex? | 1 | 0 | 2026-03-03T08:28:11 | lemon07r | false | null | 0 | o8dnlde | false | /r/LocalLLaMA/comments/1rj3yzz/coding_power_ranking_2602/o8dnlde/ | false | 1 |
t1_o8dnijp | Couple commands in the terminal to pull it and build it, bunch of frontends available if you need or I just ran couple prompts with codex to build me customized frontend for llamacpp server, its great. | 1 | 0 | 2026-03-03T08:27:24 | FinBenton | false | null | 0 | o8dnijp | false | /r/LocalLLaMA/comments/1rjd4pv/qwen_25_3_35_smallest_models_incredible/o8dnijp/ | false | 1 |
t1_o8dnh54 | Dun think too much, just delete and redownload | 1 | 0 | 2026-03-03T08:27:01 | jikilan_ | false | null | 0 | o8dnh54 | false | /r/LocalLLaMA/comments/1rjjvqy/how_can_i_know_if_downloaded_models_have_a_newer/o8dnh54/ | false | 1 |
t1_o8dngrl | it's the chat template mismatch - when the model outputs raw XML instead of executing the tool call, the jinja template isn't kicking in correctly. unsloth dropped a fixed gguf earlier today, re-download and that should clear it. | 1 | 0 | 2026-03-03T08:26:55 | BC_MARO | false | null | 0 | o8dngrl | false | /r/LocalLLaMA/comments/1rjhy83/tool_calling_issues_with_qwen3535b_with_16gb_vram/o8dngrl/ | false | 1 |
t1_o8dnbiv | Yep - in LM Studio, enable this... Settings > Developer Tools > scroll to the bottom:
https://preview.redd.it/kayrq34tismg1.jpeg?width=731&format=pjpg&auto=webp&s=e9be5c72b7cd77adbb034bdb5cf9fa1ca5f8b6f2 | 1 | 0 | 2026-03-03T08:25:26 | sig_kill | false | null | 0 | o8dnbiv | false | /r/LocalLLaMA/comments/1re64fe/qwen35_thinking_blocks_in_output/o8dnbiv/ | false | 1 |
t1_o8dnabb | For those who think this is a feature, look at Qwen3 4B Thinking. Now you can continue talking nonsense. 12s vs 36s.
https://preview.redd.it/763oxcnoismg1.png?width=824&format=png&auto=webp&s=c5eb169d306872f01dafdf79b5f9b3d3116995e8 | 1 | 0 | 2026-03-03T08:25:06 | CapitalShake3085 | false | null | 0 | o8dnabb | false | /r/LocalLLaMA/comments/1rj8x1q/qwen35_4b_overthinking_to_say_hello/o8dnabb/ | false | 1 |
t1_o8dn444 | You’re not missing anything magical.
**OpenClaw** got popular less because it’s technically revolutionary and more because it hit a timing + framing sweet spot. After the autonomous-agent hype cooled off, it reframed agents as **systems**—tools, execution loops, permissions, and guardrails—which aligned with what builders were already discovering in practice.
*Context:* that same gap is why we started building **ClawDock**—once people bought into the model, the hard part became actually running and observing those systems reliably. | 1 | 0 | 2026-03-03T08:23:22 | Icy-Resource164 | false | null | 0 | o8dn444 | false | /r/LocalLLaMA/comments/1rfp6bk/why_is_openclaw_even_this_popular/o8dn444/ | false | 1 |
t1_o8dn2dl | Luke Skywalker: "Amazing. Every word of what you just said was wrong." | 1 | 0 | 2026-03-03T08:22:54 | GrungeWerX | false | null | 0 | o8dn2dl | false | /r/LocalLLaMA/comments/1rjd4pv/qwen_25_3_35_smallest_models_incredible/o8dn2dl/ | false | 1 |
t1_o8dn1ys | I don't think transformers equal full attention only. Any attention mechanism qualify for a block to be a transformer block (norm > attention > ffn ) | 1 | 0 | 2026-03-03T08:22:47 | Orolol | false | null | 0 | o8dn1ys | false | /r/LocalLLaMA/comments/1rj6m71/qwen_35_27b_a_testament_to_the_transformer/o8dn1ys/ | false | 1 |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.