name string | body string | score int64 | controversiality int64 | created timestamp[us] | author string | collapsed bool | edited timestamp[us] | gilded int64 | id string | locked bool | permalink string | stickied bool | ups int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
t1_o8e7tml | I will fix that error; it's probably due to an incorrect text norm. Please write TTS as T T S | 1 | 0 | 2026-03-03T11:38:58 | Forsaken_Shopping481 | false | null | 0 | o8e7tml | false | /r/LocalLLaMA/comments/1rjjvge/update_tinytts_the_smallest_english_tts_model/o8e7tml/ | false | 1 |
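A minimal sketch of that text-normalization workaround on the client side, spelling out abbreviations before the audio call (the `ABBREVIATIONS` set and `normalize_abbreviations` helper are illustrative assumptions, not part of the TinyTTS API):

```
import re

# Hypothetical pre-processing step: spell out known abbreviations
# ("TTS" -> "T T S") before handing text to the TTS model, as suggested above.
ABBREVIATIONS = {"TTS", "GPU", "ONNX", "API", "LLM"}

def normalize_abbreviations(text: str) -> str:
    def spell_out(match: re.Match) -> str:
        word = match.group(0)
        return " ".join(word) if word in ABBREVIATIONS else word
    # Treat runs of 2+ uppercase letters as abbreviation candidates.
    return re.sub(r"\b[A-Z]{2,}\b", spell_out, text)

print(normalize_abbreviations("TTS models run on the GPU via ONNX."))
# -> "T T S models run on the G P U via O N N X."
```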
t1_o8e7svf | According to market reports, companies frequently use local models, aware of the potential risks associated with cloud APIs. Therefore, they deploy their own local models as tools for processing critical information. Among the top ten most commonly used open-source models, Chinese AI models occupy six spots, with DeepSeek ranking first due to its wide multi-language compatibility. Current open-source models are sufficient for everyday work. Therefore, if your clients and company distrust open-source models, the optimal solution is to create and train your own AI model. Risk assessments show that all models carry inherent risks, as no one knows whether the initial information an AI model encounters has been contaminated. If this isn't feasible, it's better to hire more staff. | 1 | 0 | 2026-03-03T11:38:48 | keroro7128 | false | null | 0 | o8e7svf | false | /r/LocalLLaMA/comments/1rfg3kx/american_closed_models_vs_chinese_open_models_is/o8e7svf/ | false | 1 |
t1_o8e7qvr | Some people just use "trash", "unusable", and expressions like that when it's clearly not the case. You've just been arguing that you both have your personal opinions and tried to convince the other that your own opinion is more correct. | 1 | 0 | 2026-03-03T11:38:21 | meTomi | false | null | 0 | o8e7qvr | false | /r/LocalLLaMA/comments/1rjb7yk/psa_if_you_want_to_test_new_models_use/o8e7qvr/ | false | 1 |
t1_o8e7q35 | It's wishful thinking. They have no reason to do it. | 1 | 0 | 2026-03-03T11:38:10 | jacek2023 | false | null | 0 | o8e7q35 | false | /r/LocalLLaMA/comments/1rjmtav/i_really_hope_openai_eventually_opensources_the/o8e7q35/ | false | 1 |
t1_o8e7mzj | kilo code works, too. | 1 | 0 | 2026-03-03T11:37:28 | kayteee1995 | false | null | 0 | o8e7mzj | false | /r/LocalLLaMA/comments/1rjfijf/cline_not_playing_well_with_the_freshly_dropped/o8e7mzj/ | false | 1 |
t1_o8e7jw1 | 6. The entry price for the Mac mini is half (or less) of your cheapest build. Now, the current market price may not reflect this very well; as you could imagine, it was an attractive turnkey solution with decent capability at half the price of DIY GPU setups, and so on | 1 | 0 | 2026-03-03T11:36:45 | titpetric | false | null | 0 | o8e7jw1 | false | /r/LocalLLaMA/comments/1rjmlbi/local_llm_infrastructure_for_an_it_consulting/o8e7jw1/ | false | 1 |
t1_o8e7jem | You asked it to write curl? Or some part after curl to dissect the file? Maybe I’m missing the point why it would refuse. | 1 | 0 | 2026-03-03T11:36:39 | e38383 | false | null | 0 | o8e7jem | false | /r/LocalLLaMA/comments/1rjk9tt/are_all_models_censored_like_this/o8e7jem/ | false | 1 |
t1_o8e7hg9 | Is there any benchmark for the different parameter counts and quantized versions? I privately tested 35B-A3B and 27B and can say that the 35B version isn't just better, it's faster too, lol | 1 | 0 | 2026-03-03T11:36:13 | The-KTC | false | null | 0 | o8e7hg9 | false | /r/LocalLLaMA/comments/1rirlau/breaking_the_small_qwen35_models_have_been_dropped/o8e7hg9/ | false | 1 |
t1_o8e7hcd | The full GPT models are almost certainly monstrosities with hundreds of billions of parameters, if not 1T+.
GPT-4.1 wouldn’t be easier to run locally than Kimi K2.5 or GLM-5, and already gets its ass handed to it by both of them, so there wouldn’t be much value. | 1 | 0 | 2026-03-03T11:36:11 | -p-e-w- | false | null | 0 | o8e7hcd | false | /r/LocalLLaMA/comments/1rjmtav/i_really_hope_openai_eventually_opensources_the/o8e7hcd/ | false | 1 |
t1_o8e7etf | It's not able to process it that fast; it's not subsecond processing.
Maybe on extremely high end devices.
But appreciate you and the inputs! | 1 | 0 | 2026-03-03T11:35:37 | alichherawalla | false | null | 0 | o8e7etf | false | /r/LocalLLaMA/comments/1rjec8a/qwen35_on_a_mid_tier_300_android_phone/o8e7etf/ | false | 1 |
t1_o8e7dkz | been running qwen 3.5 on mobile too, the jump from 3 to 3.5 at 4B is real. what quant are you using? Q4_K_M has been the sweet spot for me between quality and memory on phone | 1 | 0 | 2026-03-03T11:35:19 | angelin1978 | false | null | 0 | o8e7dkz | false | /r/LocalLLaMA/comments/1rjcqm5/qwen_35_4b_is_scary_smart/o8e7dkz/ | false | 1 |
t1_o8e7aav | Fair point 🙂 AI can absolutely produce garbage if you let it run unchecked. That’s exactly why I built the system the way I did. In Cognithor the model does not directly execute anything. It goes through a deterministic Gatekeeper first. No LLM decides permissions. The reasoning can be creative, the execution cannot. If something looks like rubbish, it’s either bad prompting, missing context, or a model limitation. That’s part of the experiment. I’m not claiming perfection. I’m building infrastructure to reduce that risk. | 1 | 0 | 2026-03-03T11:34:35 | Competitive_Book4151 | false | null | 0 | o8e7aav | false | /r/LocalLLaMA/comments/1rjmq6m/i_asked_chat_gpt_52_pro_to_scan_my_repo_here_is/o8e7aav/ | false | 1 |
t1_o8e7927 | “Almost real-time camera analysis” here means your app can repeatedly analyze fresh frames quickly enough to feel live (e.g., sampling 1–5 FPS or keyframes), not necessarily full 30 FPS continuous vision.
The point is that Qwen3.5-0.8B/2B are described as small/fast enough for edge devices and “real-time perception and decision-making,” which unlocks practical live camera modes on phones | 1 | 0 | 2026-03-03T11:34:17 | RIP26770 | false | null | 0 | o8e7927 | false | /r/LocalLLaMA/comments/1rjec8a/qwen35_on_a_mid_tier_300_android_phone/o8e7927/ | false | 1 |
t1_o8e77cm | How much VRAM do you need to run the model? | 1 | 0 | 2026-03-03T11:33:53 | FerLuisxd | false | null | 0 | o8e77cm | false | /r/LocalLLaMA/comments/1rjh5wg/unsloth_fixed_version_of_qwen3535ba3b_is/o8e77cm/ | false | 1 |
t1_o8e76ux | Benchmarks? At this small of a sample size, it merely captures the output style of Opus. | 1 | 0 | 2026-03-03T11:33:46 | KvAk_AKPlaysYT | false | null | 0 | o8e76ux | false | /r/LocalLLaMA/comments/1rjlaxj/finished_a_qwen_35_9b_opus_45_distill/o8e76ux/ | false | 1 |
t1_o8e72nr | 122B surely can't fit on Strix Halo at Q8_0. It can barely fit Q6. | 1 | 0 | 2026-03-03T11:32:49 | spaceman_ | false | null | 0 | o8e72nr | false | /r/LocalLLaMA/comments/1rj9bl7/api_price_for_the_27b_qwen_35_is_just_outrageous/o8e72nr/ | false | 1 |
t1_o8e6xca | Looks like it gave you complete rubbish. | 1 | 0 | 2026-03-03T11:31:35 | Ok-Adhesiveness-4141 | false | null | 0 | o8e6xca | false | /r/LocalLLaMA/comments/1rjmq6m/i_asked_chat_gpt_52_pro_to_scan_my_repo_here_is/o8e6xca/ | false | 1 |
t1_o8e6tez | Not the corpos fighting | 1 | 0 | 2026-03-03T11:30:40 | m-freak | false | null | 0 | o8e6tez | false | /r/LocalLLaMA/comments/1rcpmwn/anthropic_weve_identified_industrialscale/o8e6tez/ | false | 1 |
t1_o8e6t7j | That is true, but also, looking at it from my perspective (I cannot afford a data centre to run big models), perhaps the way we are doing things is too limited for what it could do | 1 | 0 | 2026-03-03T11:30:37 | Last-Shake-9874 | false | null | 0 | o8e6t7j | false | /r/LocalLLaMA/comments/1rjldjb/question_on_running_qwen35_397b_q4_k_m/o8e6t7j/ | false | 1 |
t1_o8e6s4i | Depends on what you're inquiring it about. I asked it about some anime and while it did get the popular ones right, it didn't get the more obscure ones | 1 | 0 | 2026-03-03T11:30:21 | Leather_Flan5071 | false | null | 0 | o8e6s4i | false | /r/LocalLLaMA/comments/1rjcqm5/qwen_35_4b_is_scary_smart/o8e6s4i/ | false | 1 |
t1_o8e6p62 | Publishing a blog on Hugging Face does not make this model open or local, so LocalLLaMA is not the right place to advertise it. | 1 | 0 | 2026-03-03T11:29:40 | R_Duncan | false | null | 0 | o8e6p62 | false | /r/LocalLLaMA/comments/1rjmgdt/introducing_kanon_2_enricher_the_worlds_first/o8e6p62/ | false | 1 |
t1_o8e6p1g | [removed] | 1 | 0 | 2026-03-03T11:29:38 | [deleted] | true | null | 0 | o8e6p1g | false | /r/LocalLLaMA/comments/1rjmczv/low_vram_qwen35_4b_and_2b/o8e6p1g/ | false | 1 |
t1_o8e6na4 | Use MoE models around 70B parameters and offload weights to the CPU. I'm assuming you are using Qwen3.5 27B, which is dense. Dense models start slow and slow down faster. | 1 | 0 | 2026-03-03T11:29:14 | Fresh_Finance9065 | false | null | 0 | o8e6na4 | false | /r/LocalLLaMA/comments/1rbkeea/which_one_are_you_waiting_for_more_9b_or_35b/o8e6na4/ | false | 1 |
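A rough sketch of that MoE-offload setup, assuming a recent llama.cpp build; the `--n-cpu-moe` convenience flag and the GGUF file name are assumptions and may differ on your version:

```
import subprocess

# Sketch only: start llama-server with MoE expert tensors kept on the CPU
# while attention and shared weights stay on the GPU. The model path is a
# placeholder; adjust the number of CPU-hosted expert layers to your VRAM.
cmd = [
    "./llama-server",
    "-m", "models/some-70b-moe-Q4_K_M.gguf",  # hypothetical file name
    "-ngl", "99",          # offload all layers that fit to the GPU
    "--n-cpu-moe", "20",   # keep expert (MoE) tensors of 20 layers on the CPU
    "-c", "16384",
]
subprocess.run(cmd, check=True)
```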
t1_o8e6mwd | But at least it works fine for me.
ROCm, I suspect, was deliberately broken, as if to say "buy our new cards or GTFO". | 1 | 0 | 2026-03-03T11:29:08 | No_Needleworker_6881 | false | null | 0 | o8e6mwd | false | /r/LocalLLaMA/comments/1o99s2u/rocm_70_install_for_mi50_32gb_ubuntu_2404_lts/o8e6mwd/ | false | 1 |
t1_o8e6mon | So how do you plug SearXNG into the equation? searxng-mcp? | 1 | 0 | 2026-03-03T11:29:05 | ParaboloidalCrest | false | null | 0 | o8e6mon | false | /r/LocalLLaMA/comments/1rjh5wg/unsloth_fixed_version_of_qwen3535ba3b_is/o8e6mon/ | false | 1 |
t1_o8e6l6f | You make a good point. I’m definitely trying to avoid compute waste by throwing everything at a VLM.
If I go the classifier route first, do you have a recommendation for a lightweight one that handles general object labels well and can keep the latency down?
My worry is false negatives: a classifier being too strict and killing a valid photo before the VLM gets a chance to see it. | 1 | 0 | 2026-03-03T11:28:45 | Born-Mastodon443 | false | null | 0 | o8e6l6f | false | /r/LocalLLaMA/comments/1rjkyq9/fast_free_vlm_for_object_id_quality_filtering/o8e6l6f/ | false | 1 |
t1_o8e6icr | Please edit the setup info to say who produced that quant; an HF link is ideal if you did not create it locally.
| 1 | 0 | 2026-03-03T11:28:05 | uhuge | false | null | 0 | o8e6icr | false | /r/LocalLLaMA/comments/1rbio4h/has_anyone_else_tried_iq2_quantization_im/o8e6icr/ | false | 1 |
t1_o8e6i5j | [removed] | 1 | 0 | 2026-03-03T11:28:03 | [deleted] | true | null | 0 | o8e6i5j | false | /r/LocalLLaMA/comments/1plz1gb/best_solution_for_building_a_realtime/o8e6i5j/ | false | 1 |
t1_o8e6hoh | Your poor nvme won’t last long running it like this. | 1 | 0 | 2026-03-03T11:27:56 | StardockEngineer | false | null | 0 | o8e6hoh | false | /r/LocalLLaMA/comments/1rjldjb/question_on_running_qwen35_397b_q4_k_m/o8e6hoh/ | false | 1 |
t1_o8e6fsr | I've always considered your method a legitimate approach for a "SHTF" model. Having a massive model running on low-power hardware could be handy in a collapse/apocalypse scenario. | 1 | 0 | 2026-03-03T11:27:30 | RG_Fusion | false | null | 0 | o8e6fsr | false | /r/LocalLLaMA/comments/1rjldjb/question_on_running_qwen35_397b_q4_k_m/o8e6fsr/ | false | 1 |
t1_o8e6ebf | | 1 | 0 | 2026-03-03T11:27:09 | willnfld | false | null | 0 | o8e6ebf | false | /r/LocalLLaMA/comments/1rjmrj0/hello_i_am_a_guy_who_has_no_prior_ai_experience/o8e6ebf/ | false | 1 |
t1_o8e6btg | You must be running llama.cpp over rpc | 1 | 0 | 2026-03-03T11:26:34 | StardockEngineer | false | null | 0 | o8e6btg | false | /r/LocalLLaMA/comments/1rjldjb/question_on_running_qwen35_397b_q4_k_m/o8e6btg/ | false | 1 |
t1_o8e68as | Truly, and people wonder why they're suffering | 1 | 0 | 2026-03-03T11:25:43 | Leading-Research2653 | false | null | 0 | o8e68as | false | /r/LocalLLaMA/comments/1rcmlwk/so_is_openclaw_local_or_not/o8e68as/ | false | 1 |
t1_o8e68a2 | I think you must be talking about 122b and the guy you’re replying to is 397b | 1 | 0 | 2026-03-03T11:25:43 | StardockEngineer | false | null | 0 | o8e68a2 | false | /r/LocalLLaMA/comments/1rjldjb/question_on_running_qwen35_397b_q4_k_m/o8e68a2/ | false | 1 |
t1_o8e65x7 | GPT-3 had 175 billion parameters, and given that a 1B model today seems to outmatch it, that means roughly 99.4% of them were redundant. | 1 | 0 | 2026-03-03T11:25:09 | Mickenfox | false | null | 0 | o8e65x7 | false | /r/LocalLLaMA/comments/1rj5ngc/running_qwen3508b_on_my_7yearold_samsung_s10e/o8e65x7/ | false | 1 |
t1_o8e5zqr | I asked Qwen to add a comment to this thread. I will update you ... | 1 | 0 | 2026-03-03T11:23:41 | Abject-Kitchen3198 | false | null | 0 | o8e5zqr | false | /r/LocalLLaMA/comments/1rj8x1q/qwen35_4b_overthinking_to_say_hello/o8e5zqr/ | false | 1 |
t1_o8e5yyc | This is gold, thanks!
I’m still getting my head around the 'pre-VLM' stack.
When you talk about the Laplacian filter, is that something I can just drop into a Python script before the VLM call? I’d love to save that 30% on VLM processing by kicking out the blurry stuff early.
I hadn’t considered Florence-2; I was so focused on full VLMs that I forgot about the specialised models. Is that easy to run through something like LM Studio? It seems it might be a bit more 'manual' to get going. | 1 | 0 | 2026-03-03T11:23:30 | Born-Mastodon443 | false | null | 0 | o8e5yyc | false | /r/LocalLLaMA/comments/1rjkyq9/fast_free_vlm_for_object_id_quality_filtering/o8e5yyc/ | false | 1 |
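For reference, a minimal sketch of that Laplacian pre-check in Python with OpenCV; the variance-of-Laplacian score is a common sharpness proxy, and the threshold of 100 is an assumption that needs tuning per camera and resolution:

```
import cv2

def is_sharp_enough(image_path: str, threshold: float = 100.0) -> bool:
    # Variance of the Laplacian: low values usually indicate a blurry image.
    image = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    if image is None:
        raise FileNotFoundError(image_path)
    variance = cv2.Laplacian(image, cv2.CV_64F).var()
    return variance >= threshold

for path in ["photo_001.jpg", "photo_002.jpg"]:  # placeholder file names
    print(path, "keep for VLM" if is_sharp_enough(path) else "skip (too blurry)")
```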
t1_o8e5sv6 | „everyone is saying i am wrong… this must mean EVERYONE is wrong!“
great logic.. not a thinking model? | 1 | 0 | 2026-03-03T11:22:03 | howardhus | false | null | 0 | o8e5sv6 | false | /r/LocalLLaMA/comments/1rj8x1q/qwen35_4b_overthinking_to_say_hello/o8e5sv6/ | false | 1 |
t1_o8e5qox | I mean, my Mac Studio runs it at 18 tokens/sec for ten grand. Prefill is too slow for quick agentic work with large amounts of file reading, yes, but it does it. | 1 | 0 | 2026-03-03T11:21:32 | Front_Eagle739 | false | null | 0 | o8e5qox | false | /r/LocalLLaMA/comments/1rjjcyk/still_a_noob_is_anyone_actually_running_the/o8e5qox/ | false | 1 |
t1_o8e5qfj | so 3.5 4b is worse than older 3 4b? | 1 | 0 | 2026-03-03T11:21:28 | Gold_Ad_2201 | false | null | 0 | o8e5qfj | false | /r/LocalLLaMA/comments/1rivckt/visualizing_all_qwen_35_vs_qwen_3_benchmarks/o8e5qfj/ | false | 1 |
t1_o8e5nhv | I was pretty disillusioned with local models until qwen3-30b-a3b-2507 came out. It was the first model that felt like more than a toy to me. I just didn't know what to use it for. Then the next month OpenAI dropped the GPT-OSS models. A 20b MoE and 120b MoE. And despite their extreme safety-first stance, and the general "mOaR liKe ClOsED Ai" attitude of many, this sub was regularly swimming with, "Does anything beat GPT-OSS all-around yet?" posts as recently as 2 weeks ago.
People ran larger, and people ran smaller. But at their sizes nothing compared very well until recently. Nvidia dropped their own 30b MoE with 1m context length. It was met with mostly indifference. GLM-4.7-Flash made waves in January for being agentic-AF and a step up in coding at this size, but with looping issues. Qwen3-Coder-Next came out recently as an 80b-a3b Instruct model with solid coding that took the local crown for a little while, at least up until you got to 200b+ contenders. But I am skittish around quants under q4, so I didn't try it until I got a 5060ti 16gb added to squeeze it into VRAM.
And that's sort of where the 5090 range has been for the last year. As far as I experienced it anyway. | 1 | 0 | 2026-03-03T11:20:46 | _-_David | false | null | 0 | o8e5nhv | false | /r/LocalLLaMA/comments/1rirlyb/qwenqwen359b_hugging_face/o8e5nhv/ | false | 1 |
t1_o8e5ml3 | Have my old Pixel 6, do you think it will work fine on it? | 1 | 0 | 2026-03-03T11:20:33 | gamerboy12555 | false | null | 0 | o8e5ml3 | false | /r/LocalLLaMA/comments/1riv3wv/qwen_35_2b_on_android/o8e5ml3/ | false | 1 |
t1_o8e5lua | Refusals aren’t identified based on the first token but based on 100 output tokens. The refusal count is not a differentiable function of the model parameters (it’s not even continuous, not even approximately), so I don’t see how backpropagation could possibly work. | 1 | 0 | 2026-03-03T11:20:23 | -p-e-w- | false | null | 0 | o8e5lua | false | /r/LocalLLaMA/comments/1qa0w6c/it_works_abliteration_can_reduce_slop_without/o8e5lua/ | false | 1 |
t1_o8e5kdg | Perhaps ANDI | 1 | 0 | 2026-03-03T11:20:02 | Abject-Kitchen3198 | false | null | 0 | o8e5kdg | false | /r/LocalLLaMA/comments/1rj8x1q/qwen35_4b_overthinking_to_say_hello/o8e5kdg/ | false | 1 |
t1_o8e5jwo | True, but this is exactly how AIs end up wiping your entire hard drive on accident trying to delete a file. You probably don't want them to try *too* hard. | 1 | 0 | 2026-03-03T11:19:56 | Mickenfox | false | null | 0 | o8e5jwo | false | /r/LocalLLaMA/comments/1rjfvfx/qwen3535b_is_very_resourceful_web_search_wasnt/o8e5jwo/ | false | 1 |
t1_o8e5jxg | Using it on my mac studio 512 yes | 1 | 0 | 2026-03-03T11:19:56 | Front_Eagle739 | false | null | 0 | o8e5jxg | false | /r/LocalLLaMA/comments/1rjjcyk/still_a_noob_is_anyone_actually_running_the/o8e5jxg/ | false | 1 |
t1_o8e5jkc | I LOVE these types of projects but either there's something wrong with the demo implementation, or the model simply isn't good enough yet.
It really seems to struggle with ALL sorts of abbreviations like "TTS", "GPU", "ONNX", etc.
Seems like it completely hallucinates their pronunciation. Sometimes the output doesn't even sound like an abbreviation, sometimes it just sounds like a completely different abbreviation. For example, "TTS" always seems to get pronounced as "TNE". | 1 | 0 | 2026-03-03T11:19:51 | bambamlol | false | null | 0 | o8e5jkc | false | /r/LocalLLaMA/comments/1rjjvge/update_tinytts_the_smallest_english_tts_model/o8e5jkc/ | false | 1 |
t1_o8e5igv | Don't quantize the KV cache, in case you did. | 1 | 0 | 2026-03-03T11:19:35 | AppealSame4367 | false | null | 0 | o8e5igv | false | /r/LocalLLaMA/comments/1rjmnh7/help_loading_qwen35_35b_a3b_gguf_on_vllm/o8e5igv/ | false | 1 |
t1_o8e5b93 | [https://huggingface.co/noctrex/Qwen3.5-122B-A10B-MXFP4_MOE-GGUF](https://huggingface.co/noctrex/Qwen3.5-122B-A10B-MXFP4_MOE-GGUF)
I'm running this quant, 14.81 tokens/second on RTX 4090 in LMStudio | 1 | 0 | 2026-03-03T11:17:49 | Agreeable-Market-692 | false | null | 0 | o8e5b93 | false | /r/LocalLLaMA/comments/1rjd4pv/qwen_25_3_35_smallest_models_incredible/o8e5b93/ | false | 1 |
t1_o8e59mb | llama.cpp is the whole suite of apps; you can then use some frontends like openwebui, sillytavern, or opencode, but as the engine llama.cpp is the best because it always has the newest features, other apps just follow | 1 | 0 | 2026-03-03T11:17:26 | jacek2023 | false | null | 0 | o8e59mb | false | /r/LocalLLaMA/comments/1rjk2dq/im_a_noob_to_local_inference_how_do_you_choose/o8e59mb/ | false | 1 |
t1_o8e5989 | Ok, that's around what I pay for electricity alone running it at IQ3_XXS… But the tinkering… Priceless 😜 | 1 | 0 | 2026-03-03T11:17:20 | Haeppchen2010 | false | null | 0 | o8e5989 | false | /r/LocalLLaMA/comments/1rj9bl7/api_price_for_the_27b_qwen_35_is_just_outrageous/o8e5989/ | false | 1 |
t1_o8e58lr | The current smaller MoE models are incredible. | 1 | 0 | 2026-03-03T11:17:11 | Appropriate-Lie-8812 | false | null | 0 | o8e58lr | false | /r/LocalLLaMA/comments/1rjd4pv/qwen_25_3_35_smallest_models_incredible/o8e58lr/ | false | 1 |
t1_o8e555y | To me, where they fail A LOT is in doing unnecessary tool calls. Basically, whatever I prompt, if there is a tool available, they will try to use it, even if it makes 0 sense and is not necessary | 1 | 0 | 2026-03-03T11:16:19 | mouseofcatofschrodi | false | null | 0 | o8e555y | false | /r/LocalLLaMA/comments/1rjm4bl/tool_calling_is_where_agents_fail_most/o8e555y/ | false | 1 |
t1_o8e540r | Yes, that's one of the main points of open source models, they can be finetuned or just brain-surgered to work differently. Search for variants of this model, you will find "abliterated", "uncensored", "derestricted", "heretic", etc, etc. Then you can try them and see which one works better. | 1 | 0 | 2026-03-03T11:16:02 | jacek2023 | false | null | 0 | o8e540r | false | /r/LocalLLaMA/comments/1rjk9tt/are_all_models_censored_like_this/o8e540r/ | false | 1 |
t1_o8e53c6 | | 1 | 0 | 2026-03-03T11:15:52 | Capable_Degree_1998 | false | null | 0 | o8e53c6 | false | /r/LocalLLaMA/comments/1r9p1zu/what_are_the_rate_limits_for_arena_lmarena/o8e53c6/ | false | 1 |
t1_o8e536t | I knew I was going to get a comment like this. That's why I said I know ollama performance isn't the best. Can I still be impressed? Am I allowed that? | 1 | 0 | 2026-03-03T11:15:50 | outtokill7 | false | null | 0 | o8e536t | false | /r/LocalLLaMA/comments/1rjd4pv/qwen_25_3_35_smallest_models_incredible/o8e536t/ | false | 1 |
t1_o8e533z | [removed] | 1 | 0 | 2026-03-03T11:15:49 | [deleted] | true | null | 0 | o8e533z | false | /r/LocalLLaMA/comments/1rit2fy/reverted_from_qwen35_27b_back_to_qwen3_8b/o8e533z/ | false | 1 |
t1_o8e52th | If you know the specific repo and model, sure, but often you aren’t pulling the OG model you’re pulling someone’s quants, so you’ve already been looking on HF for a good gguf version (or more likely you’re using an hf reference you read somewhere that someone claims works great). And are you running this model forever or do you want it to unload it when not in use? So yes, on the surface it looks the same but in a month you can’t hit a known endpoint with that hf repo in a request payload and expect that model to auto load with llama-server alone.
I know there are tool diehards here, I’m not that. I’ve been compiling llama.cpp as long as this sub has been a thing, but I understand why some people use docker over podman or containerd, and I don’t give a shit to tell people who do so they are wrong for doing it the way that’s working for them. | 1 | 0 | 2026-03-03T11:15:45 | The_frozen_one | false | null | 0 | o8e52th | false | /r/LocalLLaMA/comments/1rjb7yk/psa_if_you_want_to_test_new_models_use/o8e52th/ | false | 1 |
t1_o8e50qg | Your post is getting popular and we just featured it on our Discord! [Come check it out!](https://discord.gg/PgFhZ8cnWW)
You've also been given a special flair for your contribution. We appreciate your post!
*I am a bot and this action was performed automatically.* | 1 | 0 | 2026-03-03T11:15:15 | WithoutReason1729 | false | null | 0 | o8e50qg | false | /r/LocalLLaMA/comments/1rjd4pv/qwen_25_3_35_smallest_models_incredible/o8e50qg/ | true | 1 |
t1_o8e4z07 | Very | 1 | 0 | 2026-03-03T11:14:49 | StardockEngineer | false | null | 0 | o8e4z07 | false | /r/LocalLLaMA/comments/1rji5bc/how_do_the_small_qwen35_models_compare_to_the/o8e4z07/ | false | 1 |
t1_o8e4yp1 | Let me write down that last part, because I'm building my own "openclaw" and I'm going to need to apply it. If you have more recommendations, leave them for me uwu | 1 | 0 | 2026-03-03T11:14:44 | madkoding | false | null | 0 | o8e4yp1 | false | /r/LocalLLaMA/comments/1rin3ea/alibaba_team_opensources_copaw_a_highperformance/o8e4yp1/ | false | 1 |
t1_o8e4vrv | Congrats on your work! It's a good idea to put more info in the Reddit post (the graphics look nice, too) to tell people why they should try your model. | 1 | 0 | 2026-03-03T11:14:01 | jacek2023 | false | null | 0 | o8e4vrv | false | /r/LocalLLaMA/comments/1rjlaxj/finished_a_qwen_35_9b_opus_45_distill/o8e4vrv/ | false | 1 |
t1_o8e4rjh | [deleted] | 1 | 0 | 2026-03-03T11:13:00 | [deleted] | true | null | 0 | o8e4rjh | false | /r/LocalLLaMA/comments/1rjlaxj/finished_a_qwen_35_9b_opus_45_distill/o8e4rjh/ | false | 1 |
t1_o8e4pa1 | | 1 | 0 | 2026-03-03T11:12:27 | lenjet | false | null | 0 | o8e4pa1 | false | /r/LocalLLaMA/comments/1rjd4pv/qwen_25_3_35_smallest_models_incredible/o8e4pa1/ | false | 1 |
t1_o8e4p3g | Please explain what exactly is wrong with you. | 1 | 0 | 2026-03-03T11:12:24 | jacek2023 | false | null | 0 | o8e4p3g | false | /r/LocalLLaMA/comments/1rjlwu4/unlimited_openclaw_ai_agent_free_premium_api/o8e4p3g/ | false | 1 |
t1_o8e4mez | The Mac route is a dead end. LLMs in 2026 are used for agentic tasks. A Mac Ultra is 10 to 30 times slower at processing prompts, and an agentic task can involve very many prompts. | 1 | 0 | 2026-03-03T11:11:44 | Valuable-Run2129 | false | null | 0 | o8e4mez | false | /r/LocalLLaMA/comments/1rjjcyk/still_a_noob_is_anyone_actually_running_the/o8e4mez/ | false | 1 |
t1_o8e4k85 | That's very hard to answer without benchmarks for the oss claude tune you linked | 1 | 0 | 2026-03-03T11:11:12 | AppealSame4367 | false | null | 0 | o8e4k85 | false | /r/LocalLLaMA/comments/1rjmczv/low_vram_qwen35_4b_and_2b/o8e4k85/ | false | 1 |
t1_o8e4i8h | And this is the valid answer. | 1 | 0 | 2026-03-03T11:10:42 | jacek2023 | false | null | 0 | o8e4i8h | false | /r/LocalLLaMA/comments/1rji5bc/how_do_the_small_qwen35_models_compare_to_the/o8e4i8h/ | false | 1 |
t1_o8e4gu1 | meaning? | 1 | 0 | 2026-03-03T11:10:22 | alichherawalla | false | null | 0 | o8e4gu1 | false | /r/LocalLLaMA/comments/1rjec8a/qwen35_on_a_mid_tier_300_android_phone/o8e4gu1/ | false | 1 |
t1_o8e4dgz | I think 95-99% of people on this sub are focused on benchmarks and leaderboards, so asking them is pointless (you can ask the leaderboard instead and cut out the middleman). Different models have different behaviours, different knowledge, different levels of censorship. For example, Mistral models are for sure lower than Qwen on leaderboards, but for some reason they are extremely popular. As for Granite - IBM promised bigger versions but somehow forgot about it, so we need to remind them ;) | 1 | 0 | 2026-03-03T11:09:32 | jacek2023 | false | null | 0 | o8e4dgz | false | /r/LocalLLaMA/comments/1rji5bc/how_do_the_small_qwen35_models_compare_to_the/o8e4dgz/ | false | 1 |
t1_o8e4c92 | Also add that with the new Qwen3.5, 0.8B enables near-real-time camera analysis.
Additionally, the new Qwen3.5 small models, 0.8B/2B, are designed for fast edge deployments on phones/tablets, focusing on “real-time perception and decision-making.”
This makes on-device near-real-time camera analysis feasible (e.g., sampling 1–2 FPS + short prompts, streaming partial responses).
The release also emphasizes 0.8B/2B as low-latency, low-footprint edge device models, allowing for camera-first flows (spot text, classify objects/scenes, quick “what am I looking at?” assist) without needing cloud access. | 1 | 0 | 2026-03-03T11:09:14 | RIP26770 | false | null | 0 | o8e4c92 | false | /r/LocalLLaMA/comments/1rjec8a/qwen35_on_a_mid_tier_300_android_phone/o8e4c92/ | false | 1 |
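A hedged sketch of what such a 1–2 FPS camera loop could look like against a local OpenAI-compatible server; the endpoint URL, model id, and frame size are placeholders, not an official Qwen3.5 example:

```
import base64
import time

import cv2
import requests

ENDPOINT = "http://localhost:8080/v1/chat/completions"  # assumed local server

def describe_frame(frame) -> str:
    # Downscale and JPEG-encode the frame, then send it as a data URI.
    frame = cv2.resize(frame, (448, 448))
    ok, jpg = cv2.imencode(".jpg", frame)
    b64 = base64.b64encode(jpg.tobytes()).decode()
    payload = {
        "model": "qwen3.5-2b-vl",  # placeholder model id
        "messages": [{
            "role": "user",
            "content": [
                {"type": "text", "text": "What am I looking at? One sentence."},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/jpeg;base64,{b64}"}},
            ],
        }],
        "max_tokens": 60,
    }
    r = requests.post(ENDPOINT, json=payload, timeout=30)
    return r.json()["choices"][0]["message"]["content"]

cap = cv2.VideoCapture(0)
try:
    while True:
        ok, frame = cap.read()
        if ok:
            print(describe_frame(frame))
        time.sleep(0.5)  # ~1-2 FPS sampling, not full 30 FPS vision
finally:
    cap.release()
```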
t1_o8e4c2n | What actually happened to Paddle? I remember back in November/December it was lauded as by far the best locally hosted OCR model, and it was supposed to be supported by llama.cpp in the future, but then I never read about it again and it's still not supported. | 1 | 0 | 2026-03-03T11:09:12 | cyberdork | false | null | 0 | o8e4c2n | false | /r/LocalLLaMA/comments/1rivzcl/qwen_35_2b_is_an_ocr_beast/o8e4c2n/ | false | 1 |
t1_o8e4acu | Why did you mark it as a quant? It's a finetune | 1 | 0 | 2026-03-03T11:08:46 | stopbanni | false | null | 0 | o8e4acu | false | /r/LocalLLaMA/comments/1rjlaxj/finished_a_qwen_35_9b_opus_45_distill/o8e4acu/ | false | 1 |
t1_o8e49td | Can you add Qwen3.5 9B? | 1 | 0 | 2026-03-03T11:08:38 | DeltaSqueezer | false | null | 0 | o8e49td | false | /r/LocalLLaMA/comments/1r7shtv/i_built_a_benchmark_that_tests_coding_llms_on/o8e49td/ | false | 1 |
t1_o8e49p2 | Has there been any official communication indicating that there will be any such model? (given that the 3.5 series is already trained on agentic coding, I would have guessed not). I wouldn't complain if we got at least a FIM trained variant of one of the smaller models. | 1 | 0 | 2026-03-03T11:08:36 | bjodah | false | null | 0 | o8e49p2 | false | /r/LocalLLaMA/comments/1rjg5qm/qwen3535ba3b_vs_qwen3_coder_30b_a3b_instruct_for/o8e49p2/ | false | 1 |
t1_o8e49gc | I've been seeing Qwen3.5 mentioned a lot here.
Is it better than this model, https://ollama.com/slekrem/gpt-oss-claude-code-32k, which I came across a few moments ago? I want to know from you guys why Qwen is better than this Claude OSS model | 1 | 0 | 2026-03-03T11:08:33 | bad_detectiv3 | false | null | 0 | o8e49gc | false | /r/LocalLLaMA/comments/1rjmczv/low_vram_qwen35_4b_and_2b/o8e49gc/ | false | 1 |
t1_o8e47pf | I'm pretty sure this is how all vision models work? | 1 | 0 | 2026-03-03T11:08:07 | MythOfDarkness | false | null | 0 | o8e47pf | false | /r/LocalLLaMA/comments/1rizodv/running_qwen_35_08b_locally_in_the_browser_on/o8e47pf/ | false | 1 |
t1_o8e46xs | what on earth is that terrible ui | 1 | 0 | 2026-03-03T11:07:56 | CATLLM | false | null | 0 | o8e46xs | false | /r/LocalLLaMA/comments/1rjcqm5/qwen_35_4b_is_scary_smart/o8e46xs/ | false | 1 |
t1_o8e41y5 | How do you run it on 8 if it doesn't fit in the VRAM?
Are you using llama.cpp? | 1 | 0 | 2026-03-03T11:06:42 | arm2armreddit | false | null | 0 | o8e41y5 | false | /r/LocalLLaMA/comments/1rjjcyk/still_a_noob_is_anyone_actually_running_the/o8e41y5/ | false | 1 |
t1_o8e3wzv | My system: Bosgame M5, 128 GB shared memory | 1 | 0 | 2026-03-03T11:05:27 | Septa105 | false | null | 0 | o8e3wzv | false | /r/LocalLLaMA/comments/1rds9nm/strix_halo_models_loading_on_memory_but_plenty_of/o8e3wzv/ | false | 1 |
t1_o8e3uzt | Can you give me the exact model you use and how you get to 160k? I use the llama.cpp ROCm build with ROCm 7.2, but my max is around 60k ctx no matter which quant I use; the system also breaks with OOM at 60k. | 1 | 0 | 2026-03-03T11:04:56 | Septa105 | false | null | 0 | o8e3uzt | false | /r/LocalLLaMA/comments/1rds9nm/strix_halo_models_loading_on_memory_but_plenty_of/o8e3uzt/ | false | 1 |
t1_o8e3tfp | Hey, I was hoping to abliterate the Qwen 3.5 35B-A3B BF16 model. Can someone tell me how I can do that? I have an RTX PRO 6000 WS. | 1 | 0 | 2026-03-03T11:04:33 | Unhappy_Advantage_66 | false | null | 0 | o8e3tfp | false | /r/LocalLLaMA/comments/1ri9enf/qwen35397b_uncensored_nvfp4/o8e3tfp/ | false | 1 |
t1_o8e3sk9 | Yes distributed. Q6 doesn't fit in memory because the size is 340gb - two dgx spark is 256gb. How are you able to fit the q6 on your strix halo with 128gb ram? | 1 | 0 | 2026-03-03T11:04:20 | CATLLM | false | null | 0 | o8e3sk9 | false | /r/LocalLLaMA/comments/1rjldjb/question_on_running_qwen35_397b_q4_k_m/o8e3sk9/ | false | 1 |
t1_o8e3rv2 | >Still running: Qwen 3.5 122B only has 3/70 tasks done so take that ranking with a grain of salt. Also planning BF16 and Q8_K_XL runs for the Qwen3.5 models to show the real quantization tax — should have those up in a day or two.
Will you update your opening post for the quantization results/conclusions? | 1 | 0 | 2026-03-03T11:04:10 | DeltaSqueezer | false | null | 0 | o8e3rv2 | false | /r/LocalLLaMA/comments/1reds0p/qwen_35_craters_on_hard_coding_tasks_tested_all/o8e3rv2/ | false | 1 |
t1_o8e3rip | I am greedy like that. It isn't crazy though, we are already hitting GPT levels on moderate local hardware, and look how image generation advanced from fever dreams to what we have today in a couple of years. We will get there, hopefully before WW3 | 1 | 0 | 2026-03-03T11:04:05 | Mayion | false | null | 0 | o8e3rip | false | /r/LocalLLaMA/comments/1rjd4pv/qwen_25_3_35_smallest_models_incredible/o8e3rip/ | false | 1 |
t1_o8e3q4g | I did some tests; here are my findings. (AI written)
Key Findings
### Thinking Mode Behavior
- **Default Behavior**: The GGUF model in `llama-server` defaults to **Non-Thinking Mode** (or minimal thinking).
- **Observation**: When using a neutral template (ending in `<|im_start|>assistant\n`), the model *does* generate `<think>` and `</think>` tags, but for simple prompts (e.g., "derivative of x^2") it often generates an **empty thinking block** (`<think>\n\n</think>`) and proceeds directly to the answer.
- **Conclusion**: The capability is enabled by default, but the model requires "forcing" or more complex prompts to actually use the thinking budget effectively.
- **Enabling Deep Thinking (Forced)**:
  - **Method**: Use a custom template that ends the prompt with `<|im_start|>assistant\n<think>\n`.
  - **Result**: By pre-filling the opening `<think>` tag, we force the model to generate the *content* of the thought process.
  - **Note on Tags**: With this method, the opening `<think>` tag is part of the **prompt**, not the generated **completion**. The completion will start with the thinking content (e.g., "Thinking Process:...") and end with `</think>`.
  - **Command**: `--chat-template-file qwen_think.jinja` (file provided in this folder).
- **Raw Output Verification**:
  - **Standard Prompt**: `<|im_start|>assistant\n` -> Output: `<think>\n\n</think>...` (empty think)
  - **Forced Prompt**: `<|im_start|>assistant\n<think>\n` -> Output: `Thinking Process: ... </think>...` (deep think)
- **API Note**: Passing `chat_template_kwargs` via the OpenAI API is currently ineffective with this server version.
### Parsing Thinking Output
Since we are using a forced custom template, the response parsing logic must handle cases where the thinking token `<think>` is part of the prompt (and therefore implicitly opened) vs. explicitly generated.
| 1 | 0 | 2026-03-03T11:03:44 | DaleCooperHS | false | null | 0 | o8e3q4g | false | /r/LocalLLaMA/comments/1rirlyb/qwenqwen359b_hugging_face/o8e3q4g/ | false | 1 |
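A small sketch of that forced-thinking flow hitting llama-server's raw /completion endpoint instead of a template file; the port, sampling settings, and ChatML tags are assumptions for a default Qwen-style setup:

```
import requests

# End the prompt with the assistant header plus an opened <think> tag, so the
# completion starts with the thinking content and closes it with </think>.
PROMPT = (
    "<|im_start|>user\n"
    "What is the derivative of x^2?<|im_end|>\n"
    "<|im_start|>assistant\n"
    "<think>\n"  # pre-filled: part of the prompt, not the completion
)

resp = requests.post(
    "http://localhost:8080/completion",
    json={"prompt": PROMPT, "n_predict": 512, "temperature": 0.6},
    timeout=120,
)
text = resp.json()["content"]

# The opening <think> was ours, so split on the closing tag only.
thinking, _, answer = text.partition("</think>")
print("THINKING:", thinking.strip())
print("ANSWER:", answer.strip())
```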
t1_o8e3l8o | Yes. Some people are running it with 8x rtx6000s | 1 | 0 | 2026-03-03T11:02:31 | chisleu | false | null | 0 | o8e3l8o | false | /r/LocalLLaMA/comments/1rjjcyk/still_a_noob_is_anyone_actually_running_the/o8e3l8o/ | false | 1 |
t1_o8e3hi3 | We will always want the better version of anything, but Opus 4.6 right now is, to me, the gold standard for what I work with. It will turn into a diminishing-returns sort of situation. No matter what, it will be as good as I need it to be for my workflow. | 1 | 0 | 2026-03-03T11:01:35 | Mayion | false | null | 0 | o8e3hi3 | false | /r/LocalLLaMA/comments/1rjd4pv/qwen_25_3_35_smallest_models_incredible/o8e3hi3/ | false | 1 |
t1_o8e3h2c | You don't serve thousands of concurrent users with a consumer GPU. Both models need big clusters and a whole architecture to lower the cost. Both models are close in terms of active parameters, so the raw cost of compute is nearly the same. The VRAM cost is a non-factor when you run your models on an 8xH200 anyway, to be able to handle a lot of concurrent requests and to store a lot of KV caches. | 1 | 0 | 2026-03-03T11:01:28 | Orolol | false | null | 0 | o8e3h2c | false | /r/LocalLLaMA/comments/1rj9bl7/api_price_for_the_27b_qwen_35_is_just_outrageous/o8e3h2c/ | false | 1 |
t1_o8e3eia | >is 397B models still worth trying to get run local?
Dunno. Which of my numbers are you chasing?
https://preview.redd.it/jyrhnf0fatmg1.jpeg?width=1170&format=pjpg&auto=webp&s=3dd7e69214ef286fc39fa89440139708ac66b5c3 | 1 | 0 | 2026-03-03T11:00:49 | ProfessionalSpend589 | false | null | 0 | o8e3eia | false | /r/LocalLLaMA/comments/1rjldjb/question_on_running_qwen35_397b_q4_k_m/o8e3eia/ | false | 1 |
t1_o8e3dh9 | Angriest upvote I've had all day | 1 | 0 | 2026-03-03T11:00:35 | bityard | false | null | 0 | o8e3dh9 | false | /r/LocalLLaMA/comments/1rjd4pv/qwen_25_3_35_smallest_models_incredible/o8e3dh9/ | false | 1 |
t1_o8e3djk | [removed] | 1 | 0 | 2026-03-03T11:00:35 | [deleted] | true | null | 0 | o8e3djk | false | /r/LocalLLaMA/comments/1nmd3ia/qwen3next_is_so_easy_to_jailbreak_just_had_to/o8e3djk/ | false | 1 |
t1_o8e3dfc | Badly | 1 | 0 | 2026-03-03T11:00:34 | Pogsquog | false | null | 0 | o8e3dfc | false | /r/LocalLLaMA/comments/1rj5ngc/running_qwen3508b_on_my_7yearold_samsung_s10e/o8e3dfc/ | false | 1 |
t1_o8e3c32 | [removed] | 1 | 0 | 2026-03-03T11:00:14 | [deleted] | true | null | 0 | o8e3c32 | false | /r/LocalLLaMA/comments/1rjmasx/if_youre_an_operator_pls_dont_wire_gptclaude_in/o8e3c32/ | false | 1 |
t1_o8e3bwo | I am not the guy but you can turn off thinking in LMStudio this way for all qwen3.5 (including this one, I've tested it)
- My Models
- Edit model config (gear icon on this model in the list)
- Inference tab
- Prompt Template
- to the top add this line:
```
{% set enable_thinking = false %}
```
- load the model
This works because at the end it checks if the enable_thinking variable is set and defaults to thinking mode if undefined. In the template it is not set, LMStudio does not provide it, so we just initialize it in the template itself. | 1 | 0 | 2026-03-03T11:00:12 | jax_cooper | false | null | 0 | o8e3bwo | false | /r/LocalLLaMA/comments/1rjlaxj/finished_a_qwen_35_9b_opus_45_distill/o8e3bwo/ | false | 1 |
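To sanity-check that the override took effect, one option is a quick call through LM Studio's OpenAI-compatible server (default port 1234 assumed; the model id is a placeholder): with thinking disabled, the reply should contain no populated `<think>` block.

```
from openai import OpenAI

client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")

resp = client.chat.completions.create(
    model="qwen3.5-9b-opus-distill",  # placeholder model id
    messages=[{"role": "user", "content": "Say hello in one short sentence."}],
    max_tokens=64,
)
reply = resp.choices[0].message.content
print(reply)
print("thinking block present:", "<think>" in reply)
```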
t1_o8e36uk | I ran it on 4 RTX 6000 pros but replaced it with qwen 3.5 397B. | 1 | 0 | 2026-03-03T10:58:56 | TaiMaiShu-71 | false | null | 0 | o8e36uk | false | /r/LocalLLaMA/comments/1rjjcyk/still_a_noob_is_anyone_actually_running_the/o8e36uk/ | false | 1 |
t1_o8e35kb | JSON parsing, classification, labeling and summarization; I have tried gpt-oss, LiquidAI Extract, and youtu [54 GB worth of txt files].
Out of these, I found only Qwen {3.5, 9B} and Llama 3.1 [yes, older gen] to be accurate for my use case. The actual truth is that you are not using them properly: even LiquidAI Extract, which is meant for JSON extraction, flawlessly produced proper JSON syntax, but the content it summarized or captured was mostly inaccurate.
I've wasted so much time on many models and learned that you can't judge a model by its benchmark; apply it to your damn application before judging | 1 | 0 | 2026-03-03T10:58:37 | idkwhattochoo | false | null | 0 | o8e35kb | false | /r/LocalLLaMA/comments/1rj8x1q/qwen35_4b_overthinking_to_say_hello/o8e35kb/ | false | 1 |
t1_o8e33c4 | Latest ministral 3 8b instruct is very decent and almost a direct replacement. | 1 | 0 | 2026-03-03T10:58:05 | Kahvana | false | null | 0 | o8e33c4 | false | /r/LocalLLaMA/comments/1rj7p2h/i_need_an_uncensored_llm_for_8gb_vram/o8e33c4/ | false | 1 |
t1_o8e2uzr | I use `--reasoning-budget 0` in llama.cpp; not sure if this is helpful | 1 | 0 | 2026-03-03T10:55:58 | Gold_Sugar_4098 | false | null | 0 | o8e2uzr | false | /r/LocalLLaMA/comments/1rjlaxj/finished_a_qwen_35_9b_opus_45_distill/o8e2uzr/ | false | 1 |
t1_o8e2ths | Have you considered running the fp8 model (AWQ) in SGLang? If you are serious about performance, that's something a geek should look into | 1 | 0 | 2026-03-03T10:55:35 | Pentium95 | false | null | 0 | o8e2ths | false | /r/LocalLLaMA/comments/1rjff88/how_do_i_get_the_best_speed_out_of_qwen_35_9b_in/o8e2ths/ | false | 1 |