name string | body string | score int64 | controversiality int64 | created timestamp[us] | author string | collapsed bool | edited timestamp[us] | gilded int64 | id string | locked bool | permalink string | stickied bool | ups int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
t1_o8fw751 | Awesome, then totally interested! | 1 | 0 | 2026-03-03T17:13:12 | vertigo235 | false | null | 0 | o8fw751 | false | /r/LocalLLaMA/comments/1rjuccw/would_you_be_interested_in_a_fully_local_ai_3d/o8fw751/ | false | 1 |
t1_o8fw36d | > I think it was due to stock/production issues
and you think they don't still have those? in this "chip/memory" economy? | 1 | 0 | 2026-03-03T17:12:39 | Caffdy | false | null | 0 | o8fw36d | false | /r/LocalLLaMA/comments/1rjqsv6/apple_unveils_m5_pro_and_m5_max_citing_up_to_4/o8fw36d/ | false | 1 |
t1_o8fw1e6 | I've recently been looking for something and have been trying out perplexica (https://github.com/ItzCrazyKns/Perplexica). It's easy to use and install (docker), has a clean interface, and works okay (better than depending on search function calling/mcp in my experience).
It still uses searxng for gathering search results.
Two problems I have with it: one is that sometimes it tells me it can't find anything, and I just reply with "search again" and then it gives me good results.
The other is that if the searxng results exceed your model's context length, it just stops; it doesn't even show an error in the UI, much less try to compact the context. But there is a GitHub issue open for this, so hopefully it keeps improving! | 1 | 0 | 2026-03-03T17:12:26 | cristoper | false | null | 0 | o8fw1e6 | false | /r/LocalLLaMA/comments/1rjh5wg/unsloth_fixed_version_of_qwen3535ba3b_is/o8fw1e6/ | false | 1 |
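Since Perplexica delegates result gathering to SearXNG, as the comment above notes, here is a minimal sketch of what that layer looks like: querying a SearXNG instance's JSON API directly. The port is a placeholder (Perplexica's compose setup may expose it elsewhere), and the instance must have the `json` format enabled in its `settings.yml`, or it returns 403.

```python
import requests

# Hypothetical local SearXNG endpoint; adjust port to your compose setup.
SEARXNG_URL = "http://localhost:4000/search"

def searx_search(query: str, num: int = 5) -> list[dict]:
    """Fetch top web results from SearXNG as structured JSON."""
    resp = requests.get(SEARXNG_URL, params={"q": query, "format": "json"}, timeout=30)
    resp.raise_for_status()
    return resp.json().get("results", [])[:num]

for r in searx_search("llama.cpp vulkan backend benchmarks"):
    print(r.get("title"), "-", r.get("url"))
```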
t1_o8fvyu5 | Post your llama-bench results. Here is mine on a tiny 8GB RX 7600.
| model | size | params | backend | ngl | type_k | type_v | fa | test | t/s |
| ------------------------------ | ---------: | ---------: | ---------- | --: | -----: | -----: | -: | --------------: | -------------------: |
| qwen35 ?B Q4_K - Medium | 5.45 GiB | 8.95 B | Vulkan | 99 | q8_0 | q8_0 | 1 | pp512 | 664.08 ± 0.00 |
| qwen35 ?B Q4_K - Medium | 5.45 GiB | 8.95 B | Vulkan | 99 | q8_0 | q8_0 | 1 | tg128 | 31.11 ± 0.00 | | 1 | 0 | 2026-03-03T17:12:06 | giant3 | false | null | 0 | o8fvyu5 | false | /r/LocalLLaMA/comments/1rjrp3v/qwen_35_9b_is_slow/o8fvyu5/ | false | 1 |
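For reference, a run like the table above can be reproduced with llama.cpp's `llama-bench` tool; the flags below mirror the table's columns, though flag syntax can shift between llama.cpp versions and the model path is a placeholder.

```python
import subprocess

# Mirrors the table's columns: -ngl (layers offloaded to GPU), -ctk/-ctv
# (KV cache types), -fa (flash attention). pp512/tg128 correspond to
# -p 512 -n 128, which are llama-bench's defaults anyway.
subprocess.run([
    "llama-bench",
    "-m", "qwen3.5-9b-Q4_K_M.gguf",  # placeholder path
    "-ngl", "99",
    "-ctk", "q8_0",
    "-ctv", "q8_0",
    "-fa", "1",
    "-p", "512",
    "-n", "128",
], check=True)
```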
t1_o8fvwwp | Gaza and the 'drug boats' were the test runs for autonomous strikes. That is what i meant when I said they got the data collection down. | 1 | 0 | 2026-03-03T17:11:51 | snoodoodlesrevived | false | null | 0 | o8fvwwp | false | /r/LocalLLaMA/comments/1rhogov/the_us_used_anthropic_ai_tools_during_airstrikes/o8fvwwp/ | false | 1 |
t1_o8fvv5v | qwen3.5:0.8b:
> Thinking Process:
>
> Analyze the Request:
>
> Input: "hi" (highly informal, greeting).
> Intent: Greet the user.
> Goal: Acknowledge the greeting and respond appropriately (polite but warm).
> Determine the appropriate tone:
>
> Since it's a casual "hi", I can be friendly.
> Keep it concise.
> Optionally offer assistance or conversation options.
> Drafting responses:
>
> Option 1 (Simple): Hello! How are you?
> Option 2 (Warm/Enthusiastic): Hi there! It's nice to see you. How about we chat?
> Option 3 (Short & Sweet): Hello! I'm ready to help.
> Selecting the best option:
>
> A natural greeting + a brief question or offer is usually best for an initial "hi".
> "Hello there! 😊 How can I assist you today?" feels friendly and helpful.
> Or just: "Hi! Nice to meet you! 👋"
> Refining the final output:
>
> Keep it natural. "Hi there!" is good.
>
> Add a touch of warmth. "Hello! How's it going?" or similar.
>
> Let's go with something welcoming and open-ended.
>
> Draft: Hi there! 😊 How can I help you today?
>
> Wait, actually, let's keep it simple but engaging.
>
> "Hi there! 🙌 What's on your mind?" or "I'm here to help."
>
> Actually, the most common response to "hi" is just a greeting. Let's say: "Hello there! 👋 How can I assist you today?"
>
> Final Polish:
>
> Ensure emojis add personality (though standard text + emoji).
> Keep it friendly.
> (Self-Correction during thought process): Since the system instructions sometimes imply I should be helpful and concise, but this is a simple greeting. I should respond warmly.
>
> Let's go with: "Hello! 👋 How can I assist you today?" or simply say "Hi there! 😊"
>
> Wait, looking at typical behavior for LLMs in a conversational setting when "hi" is asked. Response: "Hi there! 🙌 How are you doing? Or what would you like to talk about?"
>
> Actually, the user just said "hi". I should answer directly. "Hello! It's nice to say hi. How can I help you today?"
>
> Let's keep it conversational but simple.
>
> Final decision: "Hello! 🙌 How can I assist you today? 😊" or just "Hi there!" + a question. Let's go with: "Hi there! 👋 How are you feeling? What can I do for you?"
>
> Wait, simpler is better. "Hi! 👋 It's great to see you. How's your day going?"
>
> Okay, let's craft the final response. "Hi there! 😊 How can I help you today?"
Hello there! 😊 What's on your mind? | 1 | 0 | 2026-03-03T17:11:36 | SufficientPie | false | null | 0 | o8fvv5v | false | /r/LocalLLaMA/comments/1rj8x1q/qwen35_4b_overthinking_to_say_hello/o8fvv5v/ | false | 1 |
t1_o8fvuxg | How is it performing for coding tasks? | 1 | 0 | 2026-03-03T17:11:35 | callmedevilthebad | false | null | 0 | o8fvuxg | false | /r/LocalLLaMA/comments/1rjff88/how_do_i_get_the_best_speed_out_of_qwen_35_9b_in/o8fvuxg/ | false | 1 |
t1_o8fvusa | Q8 mlx, apple has different models | 1 | 0 | 2026-03-03T17:11:34 | BitXorBit | false | null | 0 | o8fvusa | false | /r/LocalLLaMA/comments/1rjg5qm/qwen3535ba3b_vs_qwen3_coder_30b_a3b_instruct_for/o8fvusa/ | false | 1 |
t1_o8fvuqz | Easily the goat. His contributions to open source will never be forgotten. His decision to open source carried many PhD academics career | 1 | 0 | 2026-03-03T17:11:33 | KeikakuAccelerator | false | null | 0 | o8fvuqz | false | /r/LocalLLaMA/comments/1rjtzyn/junyang_lin_has_left_qwen/o8fvuqz/ | false | 1 |
t1_o8fvuko | I would be delighted, since I already do this with some Chinese tools to generate figurines and 3D-print them for my son. | 1 | 0 | 2026-03-03T17:11:32 | NullKalahar | false | null | 0 | o8fvuko | false | /r/LocalLLaMA/comments/1rjuccw/would_you_be_interested_in_a_fully_local_ai_3d/o8fvuko/ | false | 1 |
t1_o8fvsvn | Candidly, I don't feel qualified to answer these questions for you. | 1 | 0 | 2026-03-03T17:11:19 | Recoil42 | false | null | 0 | o8fvsvn | false | /r/LocalLLaMA/comments/1rbvbzt/best_opensource_coder_model_for_replacing_claude/o8fvsvn/ | false | 1 |
t1_o8fvqya | Thanks for your feedback. It’s an app that will be completely free and open source. | 1 | 0 | 2026-03-03T17:11:04 | Lightnig125 | false | null | 0 | o8fvqya | false | /r/LocalLLaMA/comments/1rjuccw/would_you_be_interested_in_a_fully_local_ai_3d/o8fvqya/ | false | 1 |
t1_o8fvoje | Performance metrics would be appreciated. How does it differ from this other one? (KDL 0.0916, refusals 9/100) https://huggingface.co/trohrbaugh/Qwen3.5-122B-A10B-heretic | 1 | 0 | 2026-03-03T17:10:45 | beneath_steel_sky | false | null | 0 | o8fvoje | false | /r/LocalLLaMA/comments/1rjqff6/sabomakoqwen35122ba10bhereticgguf_hugging_face/o8fvoje/ | false | 1 |
t1_o8fvhyb | Admittedly, I don't look at the size of TTS models because they all fall into either "KittenTTS" or "Not actually usable for real time applications (waiting 5 seconds on a raspberry pi does not count)" | 1 | 0 | 2026-03-03T17:09:53 | HugoCortell | false | null | 0 | o8fvhyb | false | /r/LocalLLaMA/comments/1rjrjg3/kokoro_tts_but_it_clones_voices_now_introducing/o8fvhyb/ | false | 1 |
t1_o8fvh4r | I am using vulkan, have been told it is more stable, but slower.
Maybe switch and test again.
Check if there are updates coming the next days. The series just has been released | 1 | 0 | 2026-03-03T17:09:47 | Impossible_Art9151 | false | null | 0 | o8fvh4r | false | /r/LocalLLaMA/comments/1rjrp3v/qwen_35_9b_is_slow/o8fvh4r/ | false | 1 |
t1_o8fveeu | It doesn't imply anything. | 1 | 0 | 2026-03-03T17:09:25 | popiazaza | false | null | 0 | o8fveeu | false | /r/LocalLLaMA/comments/1rjtzyn/junyang_lin_has_left_qwen/o8fveeu/ | false | 1 |
t1_o8fvbnu | Well GLM under Claude Code gets a lot done for me using bash, grep, awk, sed, sqlite3, duckdb, python, etc, so I certainly believe it. Obviously it is better at things it knows about, so it's not gonna be great at using some new application with lots of flags unless you provide it an explanation of what it is and how to use it. And I suppose once you've done that you're back up to the token count for MCP for that tool. :-)
So yeah, know when to use each, but if CLI can do it don't waste the tokens. | 1 | 0 | 2026-03-03T17:09:03 | temperature_5 | false | null | 0 | o8fvbnu | false | /r/LocalLLaMA/comments/1rjtt01/the_truth_about_mcp_vs_cli/o8fvbnu/ | false | 1 |
t1_o8fv2qe | Sell to the government so the government can save a strategically important company when the infinite money source runs out | 1 | 0 | 2026-03-03T17:07:53 | VoiceApprehensive893 | false | null | 0 | o8fv2qe | false | /r/LocalLLaMA/comments/1rh2lew/openai_pivot_investors_love/o8fv2qe/ | false | 1 |
t1_o8fuyfe | As I wrote in another comment below this: it depends on what you want to do with it. While I love cloud models like Gemini3 flash and sonnet 4.6, even Gemini3.1 Pro and Codex5.3 are not good enough for complex C++ tasks at the moment. Complexity is where all models struggle; it's only a matter of when, not if. And there is so much room for improvement.
Would you still use an abacus if you could have a calculator?
Or a typewriter if you had a computer and printer?
That’s the kind of gap I - sadly - still see between what’s possible with local small models and frontier ones. | 1 | 0 | 2026-03-03T17:07:19 | Danmoreng | false | null | 0 | o8fuyfe | false | /r/LocalLLaMA/comments/1rjd4pv/qwen_25_3_35_smallest_models_incredible/o8fuyfe/ | false | 1 |
t1_o8fux36 | Thanks for the reply. I thought a good planner gives granular, smaller tasks to smaller models => sufficient quality.
A 2nd question: what speed makes agents usable?
For most of my use cases >10t/s is the minimum needed; >20t/s makes working quite comfortable.
Are there some rules of thumb for the primary model and the agents?
I can imagine that a slow primary model cannot handle lots of agents simultaneously? | 1 | 0 | 2026-03-03T17:07:08 | Impossible_Art9151 | false | null | 0 | o8fux36 | false | /r/LocalLLaMA/comments/1rbvbzt/best_opensource_coder_model_for_replacing_claude/o8fux36/ | false | 1 |
t1_o8furw7 | Yeah, like this one from another thread here:
[https://www.reddit.com/r/LocalLLaMA/comments/1rivckt/comment/o8dx3t8/](https://www.reddit.com/r/LocalLLaMA/comments/1rivckt/comment/o8dx3t8/)
I opted not to engage, stuff like that is just embarrassing. | 1 | 0 | 2026-03-03T17:06:27 | tmvr | false | null | 0 | o8furw7 | false | /r/LocalLLaMA/comments/1rjcqm5/qwen_35_4b_is_scary_smart/o8furw7/ | false | 1 |
t1_o8fuham | I only recognize him because of all the "we're about to announce an announcement" twitter screenshot spam that people like to post here | 1 | 0 | 2026-03-03T17:05:02 | cristoper | false | null | 0 | o8fuham | false | /r/LocalLLaMA/comments/1rjtzyn/junyang_lin_has_left_qwen/o8fuham/ | false | 1 |
t1_o8fuc8v | Good Luck and Fortune Junyang, please let us know where you land. | 1 | 0 | 2026-03-03T17:04:23 | JimJava | false | null | 0 | o8fuc8v | false | /r/LocalLLaMA/comments/1rjtzyn/junyang_lin_has_left_qwen/o8fuc8v/ | false | 1 |
t1_o8fuc1z | "Qwen 3.5B-A3B (30B)"
It's 35B and you are just decreasing your VRAM from 48 to 24 if I am correct | 1 | 0 | 2026-03-03T17:04:22 | jacek2023 | false | null | 0 | o8fuc1z | false | /r/LocalLLaMA/comments/1rjrvku/is_it_worth_the_candle_2x_tesla_p40_24gb_to_12/o8fuc1z/ | false | 1 |
t1_o8fu8dp | Heretic works fine at 122B and below | 1 | 0 | 2026-03-03T17:03:53 | vpyno | false | null | 0 | o8fu8dp | false | /r/LocalLLaMA/comments/1ri9enf/qwen35397b_uncensored_nvfp4/o8fu8dp/ | false | 1 |
t1_o8fu6cj | Went out with a bang, at least. The entire 3.5 family is awesome. I wonder what happened though? Hopefully he just found a new opportunity. | 1 | 0 | 2026-03-03T17:03:36 | ayylmaonade | false | null | 0 | o8fu6cj | false | /r/LocalLLaMA/comments/1rjtzyn/junyang_lin_has_left_qwen/o8fu6cj/ | false | 1 |
t1_o8fu5ow | Man, the size comparison is way off, don't think about it | 1 | 0 | 2026-03-03T17:03:31 | Billysm23 | false | null | 0 | o8fu5ow | false | /r/LocalLLaMA/comments/1rjrjg3/kokoro_tts_but_it_clones_voices_now_introducing/o8fu5ow/ | false | 1 |
t1_o8fu316 | Just did a comparison between my method and heretic at 122B and heretic seems equivalent. Slightly lower MMLU score but it was within margin of error. I think heretic is fine for 122B | 1 | 0 | 2026-03-03T17:03:10 | vpyno | false | null | 0 | o8fu316 | false | /r/LocalLLaMA/comments/1ri9enf/qwen35397b_uncensored_nvfp4/o8fu316/ | false | 1 |
t1_o8fu0f5 | Afaik the whole model still needs to be distributed over RAM and VRAM.
You have 24GB combined system memory, so depending on the quants used it might just fit, but nothing else.
Better stick to the 9B models; they may fit fully on the GPU. | 1 | 0 | 2026-03-03T17:02:49 | coloredgreyscale | false | null | 0 | o8fu0f5 | false | /r/LocalLLaMA/comments/1rjd4pv/qwen_25_3_35_smallest_models_incredible/o8fu0f5/ | false | 1 |
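A rough back-of-envelope for the "might just fit" claim above, assuming a hypothetical 35B-class model and typical GGUF bits-per-weight figures; real file sizes differ somewhat because quant mixes aren't uniform across tensors, and KV cache and runtime overhead come on top.

```python
# Weight footprint vs. the 24 GB combined memory from the comment above.
PARAMS_B = 35e9   # hypothetical 35B-class model
BUDGET_GB = 24

for name, bits in [("Q8_0", 8.5), ("Q4_K_M", 4.8), ("Q3_K_M", 3.9)]:
    gb = PARAMS_B * bits / 8 / 1e9
    verdict = "fits, barely, with nothing left over" if gb < BUDGET_GB else "does not fit"
    print(f"{name}: ~{gb:.1f} GB -> {verdict}")
```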
t1_o8ftr0z | Interested in having it, yes; interested in paying for it? No | 1 | 0 | 2026-03-03T17:01:34 | vertigo235 | false | null | 0 | o8ftr0z | false | /r/LocalLLaMA/comments/1rjuccw/would_you_be_interested_in_a_fully_local_ai_3d/o8ftr0z/ | false | 1 |
t1_o8ftqpq | And $1B is still a reasonable compensation for him, crazy times! | 1 | 0 | 2026-03-03T17:01:32 | WideWorry | false | null | 0 | o8ftqpq | false | /r/LocalLLaMA/comments/1rjtzyn/junyang_lin_has_left_qwen/o8ftqpq/ | false | 1 |
t1_o8ftq9a | Well, maybe he received a nice offer after those amazing releases | 1 | 0 | 2026-03-03T17:01:28 | BumblebeeParty6389 | false | null | 0 | o8ftq9a | false | /r/LocalLLaMA/comments/1rjtzyn/junyang_lin_has_left_qwen/o8ftq9a/ | false | 1 |
t1_o8ftq3p | > only takes like 2 or 3 more Gbs over 100k tokens
wait, not literally twice the space of fp16? | 1 | 0 | 2026-03-03T17:01:27 | Hammer-Evader-5624 | false | null | 0 | o8ftq3p | false | /r/LocalLLaMA/comments/1rik253/psa_qwen_35_requires_bf16_kv_cache_not_f16/o8ftq3p/ | false | 1 |
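To the question above: bf16 and f16 are both 2 bytes per element, so switching between them costs nothing; the "2 or 3 more GBs" only makes sense relative to a quantized (e.g. q8_0, roughly 1 byte per element) cache. A sketch of the standard KV-cache size formula, with hypothetical model dimensions rather than Qwen 3.5's actual config:

```python
def kv_cache_gb(n_layers, n_kv_heads, head_dim, n_tokens, bytes_per_elt):
    # K and V each store n_layers * n_kv_heads * head_dim values per token.
    return 2 * n_layers * n_kv_heads * head_dim * n_tokens * bytes_per_elt / 1e9

# Hypothetical GQA dimensions, for illustration only.
L, H, D, T = 32, 4, 128, 100_000
print(f"bf16/f16: {kv_cache_gb(L, H, D, T, 2):.1f} GB")  # both are 2 bytes/elt
print(f"q8_0-ish: {kv_cache_gb(L, H, D, T, 1):.1f} GB")  # ~1 byte/elt
# The gap between the two lines (~3 GB here) is the scale the parent comment
# describes; going from f16 to bf16 itself changes nothing.
```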
t1_o8ftq4u | This seems to have fixed the missing pydub issue for me. Thanks! | 1 | 0 | 2026-03-03T17:01:27 | Illustrious_Field134 | false | null | 0 | o8ftq4u | false | /r/LocalLLaMA/comments/1rabo34/local_tts_server_with_voice_cloning_nearrealtime/o8ftq4u/ | false | 1 |
t1_o8ftmna | I don't understand how you were expecting something else. The configurations were always going to be the same as the M4 ones, just with higher MT/s for the memory. | 1 | 0 | 2026-03-03T17:00:59 | tmvr | false | null | 0 | o8ftmna | false | /r/LocalLLaMA/comments/1rjrj0e/the_new_macbooks_airpromax_are_dissapointing/o8ftmna/ | false | 1 |
t1_o8ftl7t | KL divergence might not be a good metric here | 1 | 0 | 2026-03-03T17:00:48 | nonerequired_ | false | null | 0 | o8ftl7t | false | /r/LocalLLaMA/comments/1rjp08s/qwen354b_uncensored_aggressive_release_gguf/o8ftl7t/ | false | 1 |
t1_o8fti64 | A quick Google for "mcio retimer" found only "PCIe splitter" cards, which are plugged into an x16 port on a motherboard and provide multiple x4/x8 links, not "PCIe merger" cards which are plugged into multiple x4/x8 ports on a motherboard and provide a single x16 port, which is what I need and wrote about in this post. Are the retimers required for any kind of long cables, either splitting or merging, or only for splitting the PCIe port? | 1 | 0 | 2026-03-03T17:00:24 | MelodicRecognition7 | false | null | 0 | o8fti64 | false | /r/LocalLLaMA/comments/1rjptl1/totally_not_an_ad_combine_2x_mcio_into_1x_pcie/o8fti64/ | false | 1 |
t1_o8fthz6 | Based on those numbers, the dense Qwen 3.5 - 27B performs as well as the 122B-A10B, is that real? Which of the two would run faster on my Strix Halo mini PC? | 1 | 0 | 2026-03-03T17:00:23 | Dull-Breadfruit-3241 | false | null | 0 | o8fthz6 | false | /r/LocalLLaMA/comments/1rivckt/visualizing_all_qwen_35_vs_qwen_3_benchmarks/o8fthz6/ | false | 1 |
t1_o8ftgxk | That would land him in gulag. | 1 | 0 | 2026-03-03T17:00:14 | roxoholic | false | null | 0 | o8ftgxk | false | /r/LocalLLaMA/comments/1rjtzyn/junyang_lin_has_left_qwen/o8ftgxk/ | false | 1 |
t1_o8ftb0s | That jerry-rigged solution is for testing model capability. We are in talks for a 600k NVL8 system from Supermicro. Testing model capabilities and prompts till then. | 1 | 0 | 2026-03-03T16:59:28 | mxforest | false | null | 0 | o8ftb0s | false | /r/LocalLLaMA/comments/1rjqsv6/apple_unveils_m5_pro_and_m5_max_citing_up_to_4/o8ftb0s/ | false | 1 |
t1_o8ftapv | Very sad news :( He was great | 1 | 0 | 2026-03-03T16:59:25 | jacek2023 | false | null | 0 | o8ftapv | false | /r/LocalLLaMA/comments/1rjtzyn/junyang_lin_has_left_qwen/o8ftapv/ | false | 1 |
t1_o8ft7mw | > different token sequences being run in parallel will activate different experts. So memory bandwidth available to each expert takes a hit and things slow down
For a single-GPU setup serving multiple agents at once I don't get how this can be. The limiting factor is mem bandwith, and MoE uses less mem bandwith per request vs dense. If the N agents are hitting somewhat random / different parts of the model, that seems in the worst case equivalent to dense.
Can you suggest follow up topics and/or sources on this? | 1 | 0 | 2026-03-03T16:59:01 | PentagonUnpadded | false | null | 0 | o8ft7mw | false | /r/LocalLLaMA/comments/1rjd4pv/qwen_25_3_35_smallest_models_incredible/o8ft7mw/ | false | 1 |
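A back-of-envelope model of the effect being debated above, under the simplifying assumptions that weight reads dominate memory traffic (ignoring KV cache and attention) and that concurrent requests activate disjoint experts in the worst case; parameter counts are illustrative, not any model's actual config.

```python
BPW = 0.5                              # bytes per weight at a Q4-ish quant
total_moe, active_moe = 35e9, 3e9      # illustrative "35B-A3B"-style MoE
total_dense = 9e9                      # illustrative dense 9B

for n_agents in [1, 2, 4, 8, 16]:
    # Worst case: every agent hits different experts, so reads add up,
    # but can never exceed reading the whole model once per decode step.
    moe_bytes = min(n_agents * active_moe, total_moe) * BPW
    dense_bytes = total_dense * BPW    # dense reads everything regardless of batch
    print(f"{n_agents:2d} agents: MoE ~{moe_bytes/1e9:.1f} GB/step, "
          f"dense ~{dense_bytes/1e9:.1f} GB/step")
# Per-step traffic grows with agent count for the MoE until it saturates at
# the full model size. That matches the commenter's "worst case equivalent
# to dense" intuition, except the ceiling is a dense model of the MoE's
# *total* size, which here is larger than the dense 9B being compared.
```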
t1_o8ft4mi | Duplicate post | 1 | 0 | 2026-03-03T16:58:37 | LocalLLaMA-ModTeam | false | null | 0 | o8ft4mi | true | /r/LocalLLaMA/comments/1rjsxh0/neat_detail_qwen3coder_running_in_lm_studio_front/o8ft4mi/ | true | 1 |
t1_o8ft1zc | Rule 1 - Duplicate post | 1 | 0 | 2026-03-03T16:58:16 | LocalLLaMA-ModTeam | false | null | 0 | o8ft1zc | true | /r/LocalLLaMA/comments/1rjtzfv/end_of_an_era/o8ft1zc/ | true | 1 |
t1_o8ft04n | >cost is not an issue
>continues to explain how they save money by using a jerry-rigged solution of multiple Macs instead of a more expensive Nvidia system.
hhhmmmmm | 1 | 0 | 2026-03-03T16:58:01 | Allseeing_Argos | false | null | 0 | o8ft04n | false | /r/LocalLLaMA/comments/1rjqsv6/apple_unveils_m5_pro_and_m5_max_citing_up_to_4/o8ft04n/ | false | 1 |
t1_o8fszcy | Dude probably snuck out of China and requested asylum in the US, Zuck probably paid him $1b | 1 | 0 | 2026-03-03T16:57:55 | vertigo235 | false | null | 0 | o8fszcy | false | /r/LocalLLaMA/comments/1rjtzyn/junyang_lin_has_left_qwen/o8fszcy/ | false | 1 |
t1_o8fswlc | Does this imply a shift in Qwen team's model release strategy, such as no longer releasing small size models or even open-sourcing any models at all!? | 1 | 0 | 2026-03-03T16:57:33 | Apprehensive_Bed7502 | false | null | 0 | o8fswlc | false | /r/LocalLLaMA/comments/1rjtzyn/junyang_lin_has_left_qwen/o8fswlc/ | false | 1 |
t1_o8fstvz | As a user wrote in a previous comment, I too would be interested especially with the lowpoly function and I add: I'm very interested in the OBJ format! | 1 | 0 | 2026-03-03T16:57:12 | Temporary-Roof2867 | false | null | 0 | o8fstvz | false | /r/LocalLLaMA/comments/1rjuccw/would_you_be_interested_in_a_fully_local_ai_3d/o8fstvz/ | false | 1 |
t1_o8fsssq | Absolutely would be interested. | 1 | 0 | 2026-03-03T16:57:03 | realtag2025 | false | null | 0 | o8fsssq | false | /r/LocalLLaMA/comments/1rjuccw/would_you_be_interested_in_a_fully_local_ai_3d/o8fsssq/ | false | 1 |
t1_o8fsrpv | You have to get the fully maxed spec to get the full 614GB/s btw | 1 | 0 | 2026-03-03T16:56:54 | spacefarers | false | null | 0 | o8fsrpv | false | /r/LocalLLaMA/comments/1rjqsv6/apple_unveils_m5_pro_and_m5_max_citing_up_to_4/o8fsrpv/ | false | 1 |
t1_o8fsrin | GGUF itself is a pretty old quant method. At any rate, try this quant for running on vLLM (Redhat owns/maintains vLLM): [https://huggingface.co/RedHatAI/Qwen3-4B-FP8-dynamic](https://huggingface.co/RedHatAI/Qwen3-4B-FP8-dynamic)
Def tweak those params as needed to find the right balance. I know it sounds counter-intuitive but might also try reducing `gpu_memory_utilization` to 0.85. Might also try `enable-chunked-prefill` | 1 | 0 | 2026-03-03T16:56:52 | DinoAmino | false | null | 0 | o8fsrin | false | /r/LocalLLaMA/comments/1rjm6lf/better_vllm_setup_or_different_inference_software/o8fsrin/ | false | 1 |
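A sketch of a launch line combining the suggestions above. The flag names match recent vLLM releases, but defaults shift between versions (chunked prefill is enabled by default in newer ones), so treat this as a starting point rather than a definitive config; the context cap is illustrative.

```python
import subprocess

# Combines the parent comment's suggestions into one vLLM launch.
subprocess.run([
    "vllm", "serve", "RedHatAI/Qwen3-4B-FP8-dynamic",
    "--gpu-memory-utilization", "0.85",  # leave headroom, as suggested above
    "--enable-chunked-prefill",
    "--max-model-len", "32768",          # illustrative context cap
], check=True)
```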
t1_o8fsnwx | Despite this decision, remember his contribution, guys. I get that it's easy to pile on when people cash out (I don't know the circumstances), but as the name of this subreddit indicates, while Llama is dead it pushed the field forward. JL and the Qwen team did tremendous pushing too! | 1 | 0 | 2026-03-03T16:56:24 | waiting_for_zban | false | null | 0 | o8fsnwx | false | /r/LocalLLaMA/comments/1rjtzfv/end_of_an_era/o8fsnwx/ | false | 1 |
t1_o8fsn14 | And without a long letter about how you’re ideologically opposed to whatever they’re doing?? How will we ever function without my drama??? People can’t just decide to stop doing something for personal reasons!! 😡 | 1 | 0 | 2026-03-03T16:56:17 | Borkato | false | null | 0 | o8fsn14 | false | /r/LocalLLaMA/comments/1rjtzyn/junyang_lin_has_left_qwen/o8fsn14/ | false | 1 |
t1_o8fsk9k | Thanks for your feedback. That would actually be the whole point of making it open source. | 1 | 0 | 2026-03-03T16:55:56 | Lightnig125 | false | null | 0 | o8fsk9k | false | /r/LocalLLaMA/comments/1rjuccw/would_you_be_interested_in_a_fully_local_ai_3d/o8fsk9k/ | false | 1 |
t1_o8fsf0i | It’s usually step depth, not raw tool count: with strict schemas + validation we’re fine with dozens of tools, but drift starts showing up after ~10–20 steps if you don’t gate every call. | 1 | 0 | 2026-03-03T16:55:14 | BC_MARO | false | null | 0 | o8fsf0i | false | /r/LocalLLaMA/comments/1rjm4bl/tool_calling_is_where_agents_fail_most/o8fsf0i/ | false | 1 |
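A minimal sketch of the "strict schemas + validation, gate every call" pattern the comment above describes, using the `jsonschema` package; the tool and its schema are hypothetical.

```python
import json
from jsonschema import validate, ValidationError  # pip install jsonschema

# Hypothetical tool schema: the model's args must match this exactly.
SEARCH_TOOL_SCHEMA = {
    "type": "object",
    "properties": {
        "query": {"type": "string", "minLength": 1},
        "max_results": {"type": "integer", "minimum": 1, "maximum": 20},
    },
    "required": ["query"],
    "additionalProperties": False,  # strict: reject unexpected keys
}

def gate_tool_call(raw_args: str):
    """Validate model-proposed arguments before executing anything.
    Returns parsed args, or None so the caller can ask the model to retry."""
    try:
        args = json.loads(raw_args)
        validate(instance=args, schema=SEARCH_TOOL_SCHEMA)
        return args
    except (json.JSONDecodeError, ValidationError) as e:
        print(f"rejected tool call: {e}")  # feed this back to the model
        return None

print(gate_tool_call('{"query": "qwen 3.5 benchmarks", "max_results": 5}'))
print(gate_tool_call('{"query": "", "results": "all"}'))  # fails both checks
```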
t1_o8fs88z | Running Qwen3.5 397B A17B Q4 MLX on this M5 Ultra 512GB variant might be the first time it's possible for consumers to have their own Claude Sonnet 4(.5)-class models on their own hardware for around $10k. Crazy. I suspect this is the big hardware drop I've been waiting for, in conjunction with the right model to run on it dropping a few weeks ago.
Buckle up boys and girls it’s about to get wild
| 1 | 0 | 2026-03-03T16:54:22 | BumbleSlob | false | null | 0 | o8fs88z | false | /r/LocalLLaMA/comments/1rjqsv6/apple_unveils_m5_pro_and_m5_max_citing_up_to_4/o8fs88z/ | false | 1 |
t1_o8fs7cb | >the base version of one of the Qwen3 models has been so deterministic when I was testing it for synthetic data generation you could barely call it "Base"
Yeah, the only 3.5 base model I've played around with was 35b. But I seriously wouldn't have known it wasn't the full instruct tune if I hadn't seen that one first. Probably the most baked I've ever seen a base model. | 1 | 0 | 2026-03-03T16:54:15 | toothpastespiders | false | null | 0 | o8fs7cb | false | /r/LocalLLaMA/comments/1rjpesa/qwen_35_what_is_base_version/o8fs7cb/ | false | 1 |
t1_o8fs76m | Why stepping down? | 1 | 0 | 2026-03-03T16:54:14 | mark_haas | false | null | 0 | o8fs76m | false | /r/LocalLLaMA/comments/1rjtzyn/junyang_lin_has_left_qwen/o8fs76m/ | false | 1 |
t1_o8fs3xo | You are on AMD; which backend do you use? Maybe this backend is optimized for gpt-oss but not for the new Qwen | 1 | 0 | 2026-03-03T16:53:49 | jacek2023 | false | null | 0 | o8fs3xo | false | /r/LocalLLaMA/comments/1rjrp3v/qwen_35_9b_is_slow/o8fs3xo/ | false | 1 |
t1_o8fs381 | It's currently in production environment (for a handful of people). Will test it in depth whenever they are available for maintenance. Our use case was running full Deepseek and not necessarily speedups. | 1 | 0 | 2026-03-03T16:53:43 | mxforest | false | null | 0 | o8fs381 | false | /r/LocalLLaMA/comments/1rjqsv6/apple_unveils_m5_pro_and_m5_max_citing_up_to_4/o8fs381/ | false | 1 |
t1_o8fs2dp | [deleted] | 1 | 0 | 2026-03-03T16:53:36 | [deleted] | true | null | 0 | o8fs2dp | false | /r/LocalLLaMA/comments/1rjsgy6/how_to_fix_qwen35_overthink/o8fs2dp/ | false | 1 |
t1_o8fs090 | I'm only getting a very weak influence from the voice sample on the final output. | 1 | 0 | 2026-03-03T16:53:19 | Stepfunction | false | null | 0 | o8fs090 | false | /r/LocalLLaMA/comments/1rjrjg3/kokoro_tts_but_it_clones_voices_now_introducing/o8fs090/ | false | 1 |
t1_o8frxyo | Make sure you’re actually getting the new GGUF (check the file hash/size) and clear any local cache before re-downloading; a lot of people end up reusing the old blob. | 1 | 0 | 2026-03-03T16:53:02 | BC_MARO | false | null | 0 | o8frxyo | false | /r/LocalLLaMA/comments/1rjhy83/tool_calling_issues_with_qwen3535b_with_16gb_vram/o8frxyo/ | false | 1 |
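A sketch of the hash/size check suggested above; Hugging Face lists the SHA256 of LFS-tracked files on each file's page, so there is a reference value to compare against. The path is a placeholder.

```python
import hashlib
from pathlib import Path

# Placeholder path; point this at the downloaded blob (including LM Studio /
# llama.cpp cache directories, where stale blobs often hide).
path = Path("qwen3.5-35b-a3b-Q4_K_M.gguf")

h = hashlib.sha256()
with path.open("rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):  # 1 MiB chunks
        h.update(chunk)

# Compare both values against what Hugging Face shows on the file's page.
print(f"size:   {path.stat().st_size:,} bytes")
print(f"sha256: {h.hexdigest()}")
```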
t1_o8frtc2 | GPT-OSS 20B has 3.6B active parameters, so while generating tokens it is as fast as a 3.6B model. It also has a bunch of speed optimizations and native 4-bit quantization. | 1 | 0 | 2026-03-03T16:52:25 | BumblebeeParty6389 | false | null | 0 | o8frtc2 | false | /r/LocalLLaMA/comments/1rjrp3v/qwen_35_9b_is_slow/o8frtc2/ | false | 1 |
t1_o8frsgr | That draft-min/max is tuned for MoE models, which Devstral Small 2 24B is not. You can use lower min/max and get a higher acceptance rate in real world tasks... which whatever you tested is *not*.
I don't think anyone here cares about the biggest number you can get in a contrived test. What was your average TPS over the course of an entire coding session? Probably only marginally higher than without ngram-mod, but any boost is still nice. | 1 | 0 | 2026-03-03T16:52:18 | coder543 | false | null | 0 | o8frsgr | false | /r/LocalLLaMA/comments/1rjpvdd/600tks_speed_on_local_hardware_with_self/o8frsgr/ | false | 1 |
t1_o8frqzs | Bro retired at the top | 1 | 0 | 2026-03-03T16:52:06 | No_Swimming6548 | false | null | 0 | o8frqzs | false | /r/LocalLLaMA/comments/1rjtzyn/junyang_lin_has_left_qwen/o8frqzs/ | false | 1 |
t1_o8frppg | Just after these amazing releases?? | 1 | 0 | 2026-03-03T16:51:57 | Leflakk | false | null | 0 | o8frppg | false | /r/LocalLLaMA/comments/1rjtzyn/junyang_lin_has_left_qwen/o8frppg/ | false | 1 |
t1_o8frn3q | F
Thank you and hope it's not the end for Qwens and Junyang | 1 | 0 | 2026-03-03T16:51:35 | sergeysi | false | null | 0 | o8frn3q | false | /r/LocalLLaMA/comments/1rjtzyn/junyang_lin_has_left_qwen/o8frn3q/ | false | 1 |
t1_o8frfcq | Ha, fun test. I threw this at the 35B-A3B just for some fun and got this: https://i.imgur.com/ixjTKqc.png | 1 | 0 | 2026-03-03T16:50:34 | ayylmaonade | false | null | 0 | o8frfcq | false | /r/LocalLLaMA/comments/1riwy9w/is_qwen359b_enough_for_agentic_coding/o8frfcq/ | false | 1 |
t1_o8frexf | 614 GB/s??? That's faster than a 5060ti. I know it will be magnitudes more expensive but it'll crush dgx spark and strix halo. | 1 | 0 | 2026-03-03T16:50:31 | Maleficent_Celery_55 | false | null | 0 | o8frexf | false | /r/LocalLLaMA/comments/1rjqsv6/apple_unveils_m5_pro_and_m5_max_citing_up_to_4/o8frexf/ | false | 1 |
t1_o8frczm | Which will be better for general use: the 35B 2-bit quantized or the 9B 8-bit quantized? Has anyone compared these already? | 1 | 0 | 2026-03-03T16:50:16 | PoemPrestigious3834 | false | null | 0 | o8frczm | false | /r/LocalLLaMA/comments/1rh43za/qwen_3535ba3b_is_beyond_expectations_its_replaced/o8frczm/ | false | 1 |
t1_o8frcj7 | I'd be interested if it was open source (so I could tweak it)
Most important features would be lowpoly and animation support (rigging) | 1 | 0 | 2026-03-03T16:50:12 | metigue | false | null | 0 | o8frcj7 | false | /r/LocalLLaMA/comments/1rjuccw/would_you_be_interested_in_a_fully_local_ai_3d/o8frcj7/ | false | 1 |
t1_o8fr87j | I’m down with OPP | 1 | 0 | 2026-03-03T16:49:38 | chicky-poo-pee-paw | false | null | 0 | o8fr87j | false | /r/LocalLLaMA/comments/1rjtt01/the_truth_about_mcp_vs_cli/o8fr87j/ | false | 1 |
t1_o8fr52r | Bro left with a bang. | 1 | 0 | 2026-03-03T16:49:14 | kulchacop | false | null | 0 | o8fr52r | false | /r/LocalLLaMA/comments/1rjtzyn/junyang_lin_has_left_qwen/o8fr52r/ | false | 1 |
t1_o8fr2tz | MCP (Model Context Protocol) is essentially a standardized way for apps like VS Code, Claude Desktop, and others to connect AI models to external tools and data sources. It's basically a bridge that lets your local app make API calls to remote services (LLMs, databases, web APIs, etc.) behind the scenes, so it feels like the AI is running locally and has access to your files, calendar, Slack, etc., when really it's just shuttling requests back and forth over the internet. | 1 | 0 | 2026-03-03T16:48:56 | melanov85 | false | null | 0 | o8fr2tz | false | /r/LocalLLaMA/comments/1rjtt01/the_truth_about_mcp_vs_cli/o8fr2tz/ | false | 1 |
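Concretely, hooking an MCP server into a host app is a small JSON entry; this sketches the `mcpServers` shape used by Claude Desktop's `claude_desktop_config.json`. The filesystem server is a real published package, but the directory path is a placeholder.

```python
import json

# The "mcpServers" shape used by e.g. Claude Desktop's
# claude_desktop_config.json. The host app spawns the command and speaks
# the MCP protocol to it; the path argument is a placeholder.
config = {
    "mcpServers": {
        "filesystem": {
            "command": "npx",
            "args": ["-y", "@modelcontextprotocol/server-filesystem", "/home/me/docs"],
        }
    }
}
print(json.dumps(config, indent=2))
```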
t1_o8fqxfg | I wonder if he is retiring. | 1 | 0 | 2026-03-03T16:48:13 | inaem | false | null | 0 | o8fqxfg | false | /r/LocalLLaMA/comments/1rjtzfv/end_of_an_era/o8fqxfg/ | false | 1 |
t1_o8fqqej | I'm just starting to learn, but I was wondering: for an SLM (small language model), what would be considered enough RAM? | 1 | 0 | 2026-03-03T16:47:18 | Sad_Resolve_4888 | false | null | 0 | o8fqqej | false | /r/LocalLLaMA/comments/1rjqsv6/apple_unveils_m5_pro_and_m5_max_citing_up_to_4/o8fqqej/ | false | 1 |
t1_o8fqp1x | Good. What will happen if we compare the older q8 to the newer q4 or even q2? | 1 | 0 | 2026-03-03T16:47:07 | Billysm23 | false | null | 0 | o8fqp1x | false | /r/LocalLLaMA/comments/1rjd4pv/qwen_25_3_35_smallest_models_incredible/o8fqp1x/ | false | 1 |
t1_o8fqowj | his llms no more | 1 | 0 | 2026-03-03T16:47:06 | AlpY24upsal | false | null | 0 | o8fqowj | false | /r/LocalLLaMA/comments/1rjtzyn/junyang_lin_has_left_qwen/o8fqowj/ | false | 1 |
t1_o8fqlz3 | [removed] | 1 | 0 | 2026-03-03T16:46:42 | [deleted] | true | null | 0 | o8fqlz3 | false | /r/LocalLLaMA/comments/1rgwyqi/agent_debugging_is_a_mess_am_i_the_only_one/o8fqlz3/ | false | 1 |
t1_o8fqjey | Solid setup for local LLMs! For OCR on handwritten medical docs, you might want a dedicated OCR tool first; something like Tesseract with a custom-trained model for medical handwriting could help before feeding text to an LLM.
Ollama + AnythingLLM is a great starting point for managing models and building a local chat interface. Qwen2.5 32B might be better than the 27B variant for your needs since it has stronger reasoning, but try starting with a 7B model first to test your workflow; your GPU should handle it well.
For understanding how LLMs work, I found Andrej Karpathy's YouTube series "Neural Networks: Zero to Hero" really helpful; he explains things in a very accessible way | 1 | 0 | 2026-03-03T16:46:22 | Accomplished-Tap916 | false | null | 0 | o8fqjey | false | /r/LocalLLaMA/comments/1rjikwz/help_me_create_my_llm_ecosystem/o8fqjey/ | false | 1 |
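A minimal sketch of the pipeline suggested above: a Tesseract OCR pass, then the extracted text handed to a local model via Ollama's REST API. The model name and image path are placeholders, and a custom-trained Tesseract handwriting model would be selected via the `lang` argument.

```python
import requests
from PIL import Image
import pytesseract  # pip install pytesseract pillow; needs tesseract installed

# 1) OCR pass. The default English model is a weak baseline for handwriting;
#    a custom-trained model would be selected with lang="...".
text = pytesseract.image_to_string(Image.open("scan.png"))  # placeholder path

# 2) Hand the extracted text to a local model via Ollama's REST API.
resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "qwen2.5:7b",  # placeholder; start small as suggested above
        "prompt": f"Summarize this medical note:\n\n{text}",
        "stream": False,
    },
    timeout=120,
)
print(resp.json()["response"])
```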
t1_o8fqgfu | I mean, some of these people pushing for these cheap models are marketing/sales people, so it makes sense that they love overshooting unfounded confidence, lmao. | 1 | 0 | 2026-03-03T16:45:59 | Tank_Gloomy | false | null | 0 | o8fqgfu | false | /r/LocalLLaMA/comments/1rjcqm5/qwen_35_4b_is_scary_smart/o8fqgfu/ | false | 1 |
t1_o8fqfr6 | What's impressive is that other than the name, the rest seems correct. | 1 | 0 | 2026-03-03T16:45:53 | 0xfeel | false | null | 0 | o8fqfr6 | false | /r/LocalLLaMA/comments/1rjcqm5/qwen_35_4b_is_scary_smart/o8fqfr6/ | false | 1 |
t1_o8fqfrg | Which qwen coder exactly are they using? Is this visible anywhere? | 1 | 0 | 2026-03-03T16:45:53 | Evening_Ad6637 | false | null | 0 | o8fqfrg | false | /r/LocalLLaMA/comments/1rjsxh0/neat_detail_qwen3coder_running_in_lm_studio_front/o8fqfrg/ | false | 1 |
t1_o8fqepo | Does it go faster than lone-Ultra speed for some models and not for others? Because when all those YouTubers were trying out the RDMA Thunderbolt 5 tests of clustering the 512GB Ultras together, using that newly released RDMA technique, it showed them running the models faster than a lone one could. From what they showed, it only went slower when using the older, non-RDMA method. | 1 | 0 | 2026-03-03T16:45:45 | DeepOrangeSky | false | null | 0 | o8fqepo | false | /r/LocalLLaMA/comments/1rjqsv6/apple_unveils_m5_pro_and_m5_max_citing_up_to_4/o8fqepo/ | false | 1 |
t1_o8fqe8m | Sweet! | 1 | 0 | 2026-03-03T16:45:42 | cnqso | false | null | 0 | o8fqe8m | false | /r/LocalLLaMA/comments/1rjr5uq/bloonsbench_evaluate_llm_agent_performance_on/o8fqe8m/ | false | 1 |
t1_o8fqb27 | Nooooooo | 1 | 0 | 2026-03-03T16:45:17 | mlon_eusk-_- | false | null | 0 | o8fqb27 | false | /r/LocalLLaMA/comments/1rjtzyn/junyang_lin_has_left_qwen/o8fqb27/ | false | 1 |
t1_o8fq9n3 | I'm not sure what to elaborate on??? | 1 | 0 | 2026-03-03T16:45:06 | FullstackSensei | false | null | 0 | o8fq9n3 | false | /r/LocalLLaMA/comments/1rjptl1/totally_not_an_ad_combine_2x_mcio_into_1x_pcie/o8fq9n3/ | false | 1 |
t1_o8fq6e0 | You don't need to install Docker at all. Copy the command from the OC website and you're done. | 1 | 0 | 2026-03-03T16:44:41 | guidoadam | false | null | 0 | o8fq6e0 | false | /r/LocalLLaMA/comments/1rdew81/finally_got_openclaw_working_on_windows_after_way/o8fq6e0/ | false | 1 |
t1_o8fq0gl | I wonder why they didn’t mention that specific word in the title… | 1 | 0 | 2026-03-03T16:43:56 | Borkato | false | null | 0 | o8fq0gl | false | /r/LocalLLaMA/comments/1rjq8w1/catching_an_ai_red_teamer_in_the_wild_using/o8fq0gl/ | false | 1 |
t1_o8fq0e2 | [removed] | 1 | 0 | 2026-03-03T16:43:55 | [deleted] | true | null | 0 | o8fq0e2 | false | /r/LocalLLaMA/comments/1rjodma/cloakllm_uses_local_ollama_to_detect_pii_before/o8fq0e2/ | false | 1 |
t1_o8fpwbo | I’m sorry you can’t speak properly in English and are blaming me for not being able to do it.
Touch some grass man, I was only showing interest in what OP shared. | 1 | 0 | 2026-03-03T16:43:23 | Airscripts | false | null | 0 | o8fpwbo | false | /r/LocalLLaMA/comments/1rjq8w1/catching_an_ai_red_teamer_in_the_wild_using/o8fpwbo/ | false | 1 |
t1_o8fpuu5 | I understand that, I'm more so just shocked at how deep the gap in speed is while the difference in results is almost negligible for my use case.
I'm truly blown away that even the 4B model is slower by such a margin. | 1 | 0 | 2026-03-03T16:43:12 | spacecad_t | false | null | 0 | o8fpuu5 | false | /r/LocalLLaMA/comments/1rjrp3v/qwen_35_9b_is_slow/o8fpuu5/ | false | 1 |
t1_o8fpri6 | I appreciate the honesty. I’m excited to try this one. | 1 | 0 | 2026-03-03T16:42:46 | ClayToTheMax | false | null | 0 | o8fpri6 | false | /r/LocalLLaMA/comments/1rjp6zq/what_ai_models_should_i_run/o8fpri6/ | false | 1 |
t1_o8fpons | I didn't understand anything until someone else mentioned it... I have no idea what's going on. | 1 | 0 | 2026-03-03T16:42:24 | Samy_Horny | false | null | 0 | o8fpons | false | /r/LocalLLaMA/comments/1rjtzyn/junyang_lin_has_left_qwen/o8fpons/ | false | 1 |
t1_o8fpmfn | I download the llama.cpp binaries for Ubuntu on llama.cpp releases page, then I run llama-server | 1 | 0 | 2026-03-03T16:42:06 | Effective_Head_5020 | false | null | 0 | o8fpmfn | false | /r/LocalLLaMA/comments/1rekedh/bad_local_performance_for_qwen_35_27b/o8fpmfn/ | false | 1 |
t1_o8fpizn | Yeah, Kimi Linear was severely undertrained by modern standards. Something like <6T tokens for a 42B. Compare that to the 15-20T tokens for mamba-hybrid MoEs in the 30-32B space, like Granite 4h Small and Nemotron 3 Nano. | 1 | 0 | 2026-03-03T16:41:39 | ramendik | false | null | 0 | o8fpizn | false | /r/LocalLLaMA/comments/1rj08k1/k2_not_25_distillation_still_worth_it/o8fpizn/ | false | 1 |
t1_o8fpiq5 | [deleted] | 1 | 0 | 2026-03-03T16:41:36 | [deleted] | true | null | 0 | o8fpiq5 | false | /r/LocalLLaMA/comments/1rjtt01/the_truth_about_mcp_vs_cli/o8fpiq5/ | false | 1 |
t1_o8fpf5j | I still need to personally use it, though; just haven't had the time. It seems to be performing better than gpt-oss 120b in the benchmarks, and I hear that despite its lower parameter count, its good use of tool calling makes it formidable at coding. I also have 64 GB of RAM in addition to an RTX Pro 6000, so I might be eyeing the larger MoE model for any local coding. | 1 | 0 | 2026-03-03T16:41:07 | hihenryjr | false | null | 0 | o8fpf5j | false | /r/LocalLLaMA/comments/1rjp6zq/what_ai_models_should_i_run/o8fpf5j/ | false | 1 |
t1_o8fpehn | That's concerning; Qwen seems to be the only team that actually listens to feedback and user input online | 1 | 0 | 2026-03-03T16:41:02 | ps5cfw | false | null | 0 | o8fpehn | false | /r/LocalLLaMA/comments/1rjtzfv/end_of_an_era/o8fpehn/ | false | 1 |