name | body | score | controversiality | created | author | collapsed | edited | gilded | id | locked | permalink | stickied | ups |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
t1_o8g9bh8 | How do you know it doesn't use retimers? Do you know what retimers are? | 1 | 0 | 2026-03-03T18:14:39 | FullstackSensei | false | null | 0 | o8g9bh8 | false | /r/LocalLLaMA/comments/1rjptl1/totally_not_an_ad_combine_2x_mcio_into_1x_pcie/o8g9bh8/ | false | 1 |
t1_o8g9am9 | You can do that without an SSD, just an SD card: ~3 t/s on a Pi5 16GB. | 1 | 0 | 2026-03-03T18:14:32 | jslominski | false | null | 0 | o8g9am9 | false | /r/LocalLLaMA/comments/1rj0m27/qwen35_2b_4b_and_9b_tested_on_raspberry_pi5/o8g9am9/ | false | 1 |
t1_o8g98ny | What is dau? | 1 | 0 | 2026-03-03T18:14:17 | iamapizza | false | null | 0 | o8g98ny | false | /r/LocalLLaMA/comments/1rjtzyn/junyang_lin_has_left_qwen/o8g98ny/ | false | 1 |
t1_o8g96s1 | Ah great, we're screwed. First Meta, now this. | 1 | 0 | 2026-03-03T18:14:03 | MoffKalast | false | null | 0 | o8g96s1 | false | /r/LocalLLaMA/comments/1rjtzyn/junyang_lin_has_left_qwen/o8g96s1/ | false | 1 |
t1_o8g94hy | Noo....
Right when they were looking so amazing too. | 1 | 0 | 2026-03-03T18:13:45 | National_Meeting_749 | false | null | 0 | o8g94hy | false | /r/LocalLLaMA/comments/1rjtzyn/junyang_lin_has_left_qwen/o8g94hy/ | false | 1 |
t1_o8g93zd | One-liner for llama.cpp build + pull Llama 3.2:<br>`git clone https://github.com/ggml-org/llama.cpp && cd llama.cpp && cmake -B build && cmake --build build -j"$(nproc)" && ./build/bin/llama-cli -hf bartowski/Llama-3.2-3B-Instruct-GGUF:Q4_K_M -cnv` | 1 | 0 | 2026-03-03T18:13:41 | ToothConstant5500 | false | null | 0 | o8g93zd | false | /r/LocalLLaMA/comments/1rjd4pv/qwen_25_3_35_smallest_models_incredible/o8g93zd/ | false | 1 |
t1_o8g8zit | /r/homelabsales/ | 1 | 0 | 2026-03-03T18:13:07 | MelodicRecognition7 | false | null | 0 | o8g8zit | false | /r/LocalLLaMA/comments/1rjwjf3/hardware_usaca_8gpu_a100_40gb_sxm4_cluster_2x/o8g8zit/ | false | 1 |
t1_o8g8yeo | | 1 | 0 | 2026-03-03T18:12:59 | jacek2023 | false | null | 0 | o8g8yeo | false | /r/LocalLLaMA/comments/1rjtzyn/junyang_lin_has_left_qwen/o8g8yeo/ | false | 1 |
t1_o8g8unm | What a great man. I wish him and the Qwen team all the best, but why would they kick him out when Qwen is dominating open source right now? | 1 | 0 | 2026-03-03T18:12:29 | Certain-Cod-1404 | false | null | 0 | o8g8unm | false | /r/LocalLLaMA/comments/1rjtzyn/junyang_lin_has_left_qwen/o8g8unm/ | false | 1 |
t1_o8g8u7q | I estimated that the tokens you can generate from it are less expensive to rent from an API than to generate on your own laptop. | 1 | 0 | 2026-03-03T18:12:26 | visarga | false | null | 0 | o8g8u7q | false | /r/LocalLLaMA/comments/1rjqsv6/apple_unveils_m5_pro_and_m5_max_citing_up_to_4/o8g8u7q/ | false | 1 |
t1_o8g8qv3 | This is interesting, thanks for sharing!<br>What are you using to interact with them? I think one of the limitations I might be hitting is the agent runtime itself. | 1 | 0 | 2026-03-03T18:12:00 | Di_Vante | false | null | 0 | o8g8qv3 | false | /r/LocalLLaMA/comments/1rjv92p/whats_your_strategy_for_long_conversations_with/o8g8qv3/ | false | 1 |
t1_o8g8pq7 | Just responding in regards to general viability of using MCIO for connecting PCIe Gen 5 GPUs and providing an example of a different mobo that does not appear to use retimers (?) for its 20x MCIO ports but is confirmed to work at PCIe Gen 5. I don't have any experience with the H13SSL though, so perhaps the MCIO ports on that board aren't up to the task as you say.<br>For the OP, if you do run into issues you can add a retimer for MCIO through a PCIe adapter like this: https://c-payne.com/products/mcio-pcie-gen5-host-adapter-x16-retimer?variant=44787447595275 (or similar cheaper products on ebay) | 1 | 0 | 2026-03-03T18:11:52 | kersk | false | null | 0 | o8g8pq7 | false | /r/LocalLLaMA/comments/1rjptl1/totally_not_an_ad_combine_2x_mcio_into_1x_pcie/o8g8pq7/ | false | 1 |
t1_o8g8pe8 | No, video collection | 1 | 0 | 2026-03-03T18:11:48 | Investolas | false | null | 0 | o8g8pe8 | false | /r/LocalLLaMA/comments/1rjqsv6/apple_unveils_m5_pro_and_m5_max_citing_up_to_4/o8g8pe8/ | false | 1 |
t1_o8g8mme | I am | 1 | 0 | 2026-03-03T18:11:27 | Positivevibe06 | false | null | 0 | o8g8mme | false | /r/LocalLLaMA/comments/1ns2x8j/how_is_the_website_like_lm_arena_free_with_all/o8g8mme/ | false | 1 |
t1_o8g8kzk | Respect for surviving that 6-hour fine-tune gauntlet 😅<br>The tokenizer patch + preloading config workaround is clutch. Qwen3.5 + Transformers version mismatches are catching a lot of people right now.<br>Also good call ditching Unsloth for this use case — raw Transformers + PEFT is more predictable when things get weird. | 1 | 0 | 2026-03-03T18:11:14 | qubridInc | false | null | 0 | o8g8kzk | false | /r/LocalLLaMA/comments/1rjsf7f/i_spent_6_hours_last_night_failing_to_finetune/o8g8kzk/ | false | 1 |
t1_o8g8kp6 | I wouldn't be so sure about that, they may be smaller than you think. If they are not then OpenAI doesn't stand a chance. | 1 | 0 | 2026-03-03T18:11:12 | vertigo235 | false | null | 0 | o8g8kp6 | false | /r/LocalLLaMA/comments/1rjmtav/i_really_hope_openai_eventually_opensources_the/o8g8kp6/ | false | 1 |
t1_o8g8gc0 | why? | 1 | 0 | 2026-03-03T18:10:39 | murkomarko | false | null | 0 | o8g8gc0 | false | /r/LocalLLaMA/comments/1rg5uee/best_way_to_run_qwen3535ba3b_on_mac/o8g8gc0/ | false | 1 |
t1_o8g8a2y | No, I use UD Q3_K_XL for 35B because it's faster and works at that quant. I have 8GB VRAM and 32GB DRAM, so I downloaded it at Q4_K_M. I go one step below until it breaks apart. Problem is, the model was rubbish to begin with. | 1 | 0 | 2026-03-03T18:09:51 | Windowsideplant | false | null | 0 | o8g8a2y | false | /r/LocalLLaMA/comments/1ri39a4/qwen35_35b_a3b_first_small_model_to_not/o8g8a2y/ | false | 1 |
t1_o8g89m5 | Pretty lightweight. The SQLite database for a ~700k LOC repo like Django is around 5-15 MB on disk. Smaller repos are usually under 1 MB. RAM-wise, SQLite uses a page cache but doesn't need to hold the whole graph in memory, so it scales well even on larger repos. The binary itself is ~30 MB. So total footprint is the binary + a small .db file per indexed project :) | 1 | 0 | 2026-03-03T18:09:47 | OkDragonfruit4138 | false | null | 0 | o8g89m5 | false | /r/LocalLLaMA/comments/1rjt4hh/mcp_server_that_indexes_codebases_into_a/o8g89m5/ | false | 1 |
t1_o8g89jm | For sure. I've been a dev since I was 14 (40 now) and I'm trying to make my first game, but those 3D things, even with Codex, are a pain in the ass. | 1 | 0 | 2026-03-03T18:09:46 | celsowm | false | null | 0 | o8g89jm | false | /r/LocalLLaMA/comments/1rjuccw/would_you_be_interested_in_a_fully_local_ai_3d/o8g89jm/ | false | 1 |
t1_o8g893z | And ComfyUI? Any benchmark on that, or is visual AI not interesting to Apple? | 1 | 0 | 2026-03-03T18:09:43 | OldTwo6751 | false | null | 0 | o8g893z | false | /r/LocalLLaMA/comments/1rjqsv6/apple_unveils_m5_pro_and_m5_max_citing_up_to_4/o8g893z/ | false | 1 |
t1_o8g83su | Isn't there already Trellis 2? And Hunyuan3D? You can already do it, and it even has wide ComfyUI support. | 1 | 0 | 2026-03-03T18:09:02 | j_osb | false | null | 0 | o8g83su | false | /r/LocalLLaMA/comments/1rjuccw/would_you_be_interested_in_a_fully_local_ai_3d/o8g83su/ | false | 1 |
t1_o8g80qr | Likely to be significantly bad news for OSS models going forward. Sad. | 1 | 0 | 2026-03-03T18:08:38 | Worldly-Cod-2303 | false | null | 0 | o8g80qr | false | /r/LocalLLaMA/comments/1rjtzyn/junyang_lin_has_left_qwen/o8g80qr/ | false | 1 |
t1_o8g80l2 | Shipping Qwen 3.5 as a parting gift to the open source community is a hell of a way to sign off. Respect. | 1 | 0 | 2026-03-03T18:08:37 | theagentledger | false | null | 0 | o8g80l2 | false | /r/LocalLLaMA/comments/1rjtzyn/junyang_lin_has_left_qwen/o8g80l2/ | false | 1 |
t1_o8g7zjk | It is an Intel machine (Core Ultra 9). It is still slow (15 tps) but already much better than other models of the same size (gpt-oss 20b, even qwen3-coder 30b), and it does really well with prompt processing for coding, especially with opencode, which already uses a lot of the context. The other models take more than 5 minutes to process a prompt with opencode, if they even succeed (most can't finish within the 5-minute timeout), but this one consistently finishes within 3 minutes and is the first model that can really do agentic coding for me. | 1 | 0 | 2026-03-03T18:08:29 | octopus_limbs | false | null | 0 | o8g7zjk | false | /r/LocalLLaMA/comments/1rj6m71/qwen_35_27b_a_testament_to_the_transformer/o8g7zjk/ | false | 1 |
t1_o8g7yyq | Thanks :) Happy for you to try it out and leave some feedback :) I want to improve it wherever I can, so if something doesn't feel right, let me know :) | 1 | 0 | 2026-03-03T18:08:25 | OkDragonfruit4138 | false | null | 0 | o8g7yyq | false | /r/LocalLLaMA/comments/1rjt4hh/mcp_server_that_indexes_codebases_into_a/o8g7yyq/ | false | 1 |
t1_o8g7wcj | Yep - any tool in the request destroys the CoT behavior. I explicitly wrote think/act middleware to bring it back for some cases. It makes a huge difference for small models. | 1 | 0 | 2026-03-03T18:08:05 | Tartarus116 | false | null | 0 | o8g7wcj | false | /r/LocalLLaMA/comments/1rf2ulo/qwen35_122b_in_72gb_vram_3x3090_is_the_best_model/o8g7wcj/ | false | 1 |
t1_o8g7snj | Intel/Qwen3.5-122B-A10B-int4-AutoRound: this one runs for me, but it's a pain in the ass and was a little slower than what I was expecting. I'm new to vLLM and I need some guidance. Seems like Claude is hallucinating today also. | 1 | 0 | 2026-03-03T18:07:36 | validsyntax1210 | false | null | 0 | o8g7snj | false | /r/LocalLLaMA/comments/1rgdrgz/any_one_able_to_run_qwen_35_awq_q4_with_vllm/o8g7snj/ | false | 1 |
t1_o8g7qwu | Historically, being optimistic about Apple's timeline for high-end products leads to disappointment. The high-end Studio has no strategic importance to Apple. | 1 | 0 | 2026-03-03T18:07:22 | zipzag | false | null | 0 | o8g7qwu | false | /r/LocalLLaMA/comments/1rjqsv6/apple_unveils_m5_pro_and_m5_max_citing_up_to_4/o8g7qwu/ | false | 1 |
t1_o8g7o4o | Speculation obviously, but while Qwen small models are absolutely goated to us, if their open-weight lineup continues to mog their closed 1T max models, that's not going to make him popular to the suits in a corporate behemoth like Alibaba. | 1 | 0 | 2026-03-03T18:07:01 | nullmove | false | null | 0 | o8g7o4o | false | /r/LocalLLaMA/comments/1rjtzyn/junyang_lin_has_left_qwen/o8g7o4o/ | false | 1 |
t1_o8g7lce | It's interesting that the M5 Pro has slightly more memory bandwidth than the M3 Max (30 GPU) at 307 vs 300. I wonder how those will compare on local LLMs. | 1 | 0 | 2026-03-03T18:06:39 | jkiley | false | null | 0 | o8g7lce | false | /r/LocalLLaMA/comments/1rjqsv6/apple_unveils_m5_pro_and_m5_max_citing_up_to_4/o8g7lce/ | false | 1 |
t1_o8g7k3y | thanks, noted. | 1 | 0 | 2026-03-03T18:06:29 | MelodicRecognition7 | false | null | 0 | o8g7k3y | false | /r/LocalLLaMA/comments/1rjptl1/totally_not_an_ad_combine_2x_mcio_into_1x_pcie/o8g7k3y/ | false | 1 |
t1_o8g7dds | good idea | 1 | 0 | 2026-03-03T18:05:37 | LegacyRemaster | false | null | 0 | o8g7dds | false | /r/LocalLLaMA/comments/1rjmnv4/meet_swerebenchv2_the_largest_open_multilingual/o8g7dds/ | false | 1 |
t1_o8g7czd | Didn't you read cmerrifeld's comment? Their Mac has 96 **TB** of RAM. u/cmerrifield isn't running some AI models, they're running ALL AI models. Simultaneously.<br>Jokes aside, I agree if it's the 96GB one for 4k. For 3.5k in the US you can get a Spark, which has quite a bit of RAM. Probably slower for single-user inference though, so if the Studio serves you, keep it and be happy. | 1 | 0 | 2026-03-03T18:05:34 | PentagonUnpadded | false | null | 0 | o8g7czd | false | /r/LocalLLaMA/comments/1rjqsv6/apple_unveils_m5_pro_and_m5_max_citing_up_to_4/o8g7czd/ | false | 1 |
t1_o8g7ccb | is this related?<br>https://preview.redd.it/edvmt5gaevmg1.png?width=1196&format=png&auto=webp&s=77065d44befc8b827e40e095c26e389f52be1289 | 1 | 0 | 2026-03-03T18:05:29 | fruesome | false | null | 0 | o8g7ccb | false | /r/LocalLLaMA/comments/1rjtzyn/junyang_lin_has_left_qwen/o8g7ccb/ | false | 1 |
t1_o8g7c2q | Your spec is too old in the LLM age lol. You'll have to spend some effort to get something out of it. However, nothing will be close to Claude Code.<br>My suggestion is to first test whether the CUDA build or the Vulkan build of llama.cpp runs better on this spec. Then check for small models <3B, like Qwen2.5 Coder 1.5B/3B or Qwen3.5 2B. I think you can still have usable auto-completion using llama-vim or llama-vscode with Qwen2.5 Coder 1.5B. | 1 | 0 | 2026-03-03T18:05:27 | Ill-Fishing-1451 | false | null | 0 | o8g7c2q | false | /r/LocalLLaMA/comments/1rjkarj/local_model_suggestions_for_medium_end_pc_for/o8g7c2q/ | false | 1 |
t1_o8g73rm | How much storage space or RAM does the index consume? Just curious | 1 | 0 | 2026-03-03T18:04:22 | Gohan472 | false | null | 0 | o8g73rm | false | /r/LocalLLaMA/comments/1rjt4hh/mcp_server_that_indexes_codebases_into_a/o8g73rm/ | false | 1 |
t1_o8g6wh5 | Nice work! This has been on my to-do list for months. | 1 | 0 | 2026-03-03T18:03:25 | throwaway292929227 | false | null | 0 | o8g6wh5 | false | /r/LocalLLaMA/comments/1rjt4hh/mcp_server_that_indexes_codebases_into_a/o8g6wh5/ | false | 1 |
t1_o8g6v6i | Some suggestions from my experience running AI models on a 3050 Ti with 4GB of VRAM.<br>1: Don't require the usage of CUDA. Yes, I realize this is a big ask, but Vulkan performance has been getting better for things like this, and it would make the tool system-agnostic if you could get the model running via Vulkan. (Also add a device selector for those of us with multiple GPUs that Vulkan can see.)<br>2: Add RAM offloading for those of us that don't have supercomputers nor a 128GB RAM MacBook.<br>3: Quantized models: provide 4-6 levels of .GGUF quantization for the model used, for faster but potentially lower-quality outputs, so we don't have to run the full model if we want a draft model or to draft settings changes.<br>4: If the model has multiple parts and runs in stages (like running a text encoder first, then the model, then a VAE), don't run the entire generation pipeline again if a setting is changed in, say, the VAE. Cache the outputs from each step; this avoids waiting for deterministic things like the text encoder/CLIP to encode the same text over again when we change something unrelated to the encoder (a minimal caching sketch follows this table).<br>This is just what I've been able to think of right now. Yes, these things may seem like a "nah duh" for optimization, but I've seen most of the AI runners add 2 and 3, and only a small handful implement 1 and 4. | 1 | 0 | 2026-03-03T18:03:15 | Ill-Oil-2027 | false | null | 0 | o8g6v6i | false | /r/LocalLLaMA/comments/1rjuccw/would_you_be_interested_in_a_fully_local_ai_3d/o8g6v6i/ | false | 1 |
t1_o8g6t2l | Reminds me of this XKCD comic:<br>https://xkcd.com/937/ | 1 | 0 | 2026-03-03T18:02:58 | infearia | false | null | 0 | o8g6t2l | false | /r/LocalLLaMA/comments/1rjcqm5/qwen_35_4b_is_scary_smart/o8g6t2l/ | false | 1 |
t1_o8g6rfp | Maybe the amount of laziness you ascribe to people makes you blind to their cluelessness :D.<br>Seeing Google Chrome's market share, it would suggest that a lot of people know at least how to install things. And seeing how popular Ollama seems to be, most YouTube tutorials about how to get started probably suggest it.<br>> But I did gather good insights from people on how to fix the problem by mentioning the year of the build<br>Never thought about it. Next time I need advice I'll definitely mention the year; I see how it could have helped me before. Thanks for the tip! | 1 | 0 | 2026-03-03T18:02:45 | bobby-chan | false | null | 0 | o8g6rfp | false | /r/LocalLLaMA/comments/1rjb7yk/psa_if_you_want_to_test_new_models_use/o8g6rfp/ | false | 1 |
t1_o8g6qs5 | | 1 | 0 | 2026-03-03T18:02:40 | mecshades | false | null | 0 | o8g6qs5 | false | /r/LocalLLaMA/comments/1rjcqm5/qwen_35_4b_is_scary_smart/o8g6qs5/ | false | 1 |
t1_o8g6nys | Please let us know when 35B is done. A Q5 of 35B should be doable on a 5090, right? | 1 | 0 | 2026-03-03T18:02:18 | Fulminareverus | false | null | 0 | o8g6nys | false | /r/LocalLLaMA/comments/1rjp08s/qwen354b_uncensored_aggressive_release_gguf/o8g6nys/ | false | 1 |
t1_o8g6l0m | Yeah share it please. | 1 | 0 | 2026-03-03T18:01:54 | klop2031 | false | null | 0 | o8g6l0m | false | /r/LocalLLaMA/comments/1rjuccw/would_you_be_interested_in_a_fully_local_ai_3d/o8g6l0m/ | false | 1 |
t1_o8g6koa | Yes, 9B totals 5.95GB for Q2_K_XL. I suggest sticking to anything with K_XL in the name; they are generally better than the rest. | 1 | 0 | 2026-03-03T18:01:52 | Express_Quail_1493 | false | null | 0 | o8g6koa | false | /r/LocalLLaMA/comments/1rjvacw/possible_to_run_on_8gb_cards/o8g6koa/ | false | 1 |
t1_o8g6k9l | I ran a small scale lora test with Qwen 2B immediately upon release and it didn't require any tweaks with TRL in a fresh venv. | 1 | 0 | 2026-03-03T18:01:49 | Middle_Bullfrog_6173 | false | null | 0 | o8g6k9l | false | /r/LocalLLaMA/comments/1rjsf7f/i_spent_6_hours_last_night_failing_to_finetune/o8g6k9l/ | false | 1 |
t1_o8g6ccz | They work so well I expect RAM and GPU prices to go up even more. | 1 | 0 | 2026-03-03T18:00:47 | Invader-Faye | false | null | 0 | o8g6ccz | false | /r/LocalLLaMA/comments/1rj0m27/qwen35_2b_4b_and_9b_tested_on_raspberry_pi5/o8g6ccz/ | false | 1 |
t1_o8g6awb | What motherboard are you using? | 1 | 0 | 2026-03-03T18:00:36 | amp804 | false | null | 0 | o8g6awb | false | /r/LocalLLaMA/comments/1rianwb/running_qwen35_27b_dense_with_170k_context_at/o8g6awb/ | false | 1 |
t1_o8g67zi | Unless you're doing any kind of agentic coding or document ingestion or RAG, in which case you need prompt processing speed. The old M chips just didn't have enough GPU power. Cool to see Apple addressing this, even if late. How funny would it be if we eventually have (very stylish) Apple servers inferencing in corporate data centers everywhere, because they can run the large MoEs more affordably? | 1 | 0 | 2026-03-03T18:00:14 | temperature_5 | false | null | 0 | o8g67zi | false | /r/LocalLLaMA/comments/1rjqsv6/apple_unveils_m5_pro_and_m5_max_citing_up_to_4/o8g67zi/ | false | 1 |
t1_o8g66yb | Memory bandwidth is only half the picture. Prompt processing is compute bound. If you handle large models with large contexts, each prompt can have you waiting 10+ minutes before any tokens get generated. Such a long prompt processing wait is a non-starter for a lot of users, especially with coding which requires context windows 200k or larger | 1 | 0 | 2026-03-03T18:00:06 | iMrParker | false | null | 0 | o8g66yb | false | /r/LocalLLaMA/comments/1rjqsv6/apple_unveils_m5_pro_and_m5_max_citing_up_to_4/o8g66yb/ | false | 1 |
t1_o8g65et | Interesting. Does the idea of a personal AI assistant on the phone appeal to you? Running completely offline? | 1 | 0 | 2026-03-03T17:59:54 | alichherawalla | false | null | 0 | o8g65et | false | /r/LocalLLaMA/comments/1rjec8a/qwen35_on_a_mid_tier_300_android_phone/o8g65et/ | false | 1 |
t1_o8g5zyy | I don't know how much you paid for it, but on paper you might be better off with an M5 Max. You'd lose a bit of memory bandwidth (800GB/s vs 640GB/s) but you'd be gaining faster prompt processing, which is arguably more important now that most SOTA models are MoEs.<br>Of course an M5 Ultra is even better, but I don't know when it will come. | 1 | 0 | 2026-03-03T17:59:12 | cibernox | false | null | 0 | o8g5zyy | false | /r/LocalLLaMA/comments/1rjqsv6/apple_unveils_m5_pro_and_m5_max_citing_up_to_4/o8g5zyy/ | false | 1 |
t1_o8g5wms | Yup. It's very sad. The Qwen3.5 models, especially the smaller ones, are the first models at their respective sizes to do what I needed them to do. | 1 | 0 | 2026-03-03T17:58:46 | j_osb | false | null | 0 | o8g5wms | false | /r/LocalLLaMA/comments/1rjtzyn/junyang_lin_has_left_qwen/o8g5wms/ | false | 1 |
t1_o8g5w30 | I'm confident it will not be an M4 Ultra. The reasons not to make an M4 Ultra still exist. | 1 | 0 | 2026-03-03T17:58:42 | zipzag | false | null | 0 | o8g5w30 | false | /r/LocalLLaMA/comments/1rjqsv6/apple_unveils_m5_pro_and_m5_max_citing_up_to_4/o8g5w30/ | false | 1 |
t1_o8g5v23 | Would that be for the 9b version? | 1 | 0 | 2026-03-03T17:58:34 | cyberkiller6 | false | null | 0 | o8g5v23 | false | /r/LocalLLaMA/comments/1rjvacw/possible_to_run_on_8gb_cards/o8g5v23/ | false | 1 |
t1_o8g5v40 | Yes, 9B is actually surprisingly great for its size. I managed to get decent-looking HTML code from it. | 1 | 0 | 2026-03-03T17:58:34 | BarisSayit | false | null | 0 | o8g5v40 | false | /r/LocalLLaMA/comments/1rjve9e/possible_to_run_local_model_for_opencode_with_m3/o8g5v40/ | false | 1 |
t1_o8g5to1 | I agree with this. Part of it, in fairness, is probably me liking the tool of local LLMs and now searching for an application for said tool though.<br>I’m fine-tuning a local tool that has read-only access to my work email for this purpose. It basically serves as my digital administrative assistant, summarizing emails, creating to-do lists, generating a weekly and daily digest for me, managing my calendar so I’m prepping for deadlines ahead of time, etc. It doesn’t need Opus 4.6 or GPT 5.2 to do that, especially since I still need to go read the actual email itself a lot of the time (which is hyperlinked in the summary it generates), so small-ish local models are good enough. I’d previously used a Mistral model, but I’m going to try one of the new Qwen 3.5 models with it too now! | 1 | 0 | 2026-03-03T17:58:23 | TripleSecretSquirrel | false | null | 0 | o8g5to1 | false | /r/LocalLLaMA/comments/1rjec8a/qwen35_on_a_mid_tier_300_android_phone/o8g5to1/ | false | 1 |
t1_o8g5sft | There might've been something wrong with your environment?<br>`uv pip install unsloth`<br>`uv pip install transformers==5.2.0`<br>Works no problem. It's also what they use in the notebooks. | 1 | 0 | 2026-03-03T17:58:14 | TheRealMasonMac | false | null | 0 | o8g5sft | false | /r/LocalLLaMA/comments/1rjsf7f/i_spent_6_hours_last_night_failing_to_finetune/o8g5sft/ | false | 1 |
t1_o8g5q5n | That's fine if you like to use Google products, I guess. I don't want the fabric of my life to be bought and sold, though, personally. Claude is the only good one in the game atm. Would be cool if they made a mobile OS though. | 1 | 0 | 2026-03-03T17:57:57 | perpetuallydying | false | null | 0 | o8g5q5n | false | /r/LocalLLaMA/comments/1r9gve8/i_feel_left_behind_what_is_special_about_openclaw/o8g5q5n/ | false | 1 |
t1_o8g5ht9 | I'm here to remind you before the bot does. | 1 | 0 | 2026-03-03T17:56:53 | mecshades | false | null | 0 | o8g5ht9 | false | /r/LocalLLaMA/comments/1rizodv/running_qwen_35_08b_locally_in_the_browser_on/o8g5ht9/ | false | 1 |
t1_o8g5fcn | To store your music collection in ram? | 1 | 0 | 2026-03-03T17:56:34 | zipzag | false | null | 0 | o8g5fcn | false | /r/LocalLLaMA/comments/1rjqsv6/apple_unveils_m5_pro_and_m5_max_citing_up_to_4/o8g5fcn/ | false | 1 |
t1_o8g5e5k | I use Qwen 3.5 122B-A10B. I think it stays coherent to at least 200k tokens, or at least I've seen it correctly perform agentic tasks with 200k tokens in the context. My own chats with the model have been nowhere near as long, though I've given it image-padded inputs and long passages to read, and it seems like context length makes no difference to it. I have no expectation that there are issues with long conversations with this type of model; 200k tokens is a huge novel for a conversation. | 1 | 0 | 2026-03-03T17:56:25 | audioen | false | null | 0 | o8g5e5k | false | /r/LocalLLaMA/comments/1rjv92p/whats_your_strategy_for_long_conversations_with/o8g5e5k/ | false | 1 |
t1_o8g5bqv | It's an AI model. It's from inclusiveAI. I'm using it in LM Studio, uploaded by noctrex. | 1 | 0 | 2026-03-03T17:56:07 | AppealThink1733 | false | null | 0 | o8g5bqv | false | /r/LocalLLaMA/comments/1rju3q7/how_can_the_zwz_model_be_as_fast_as_smaller/o8g5bqv/ | false | 1 |
t1_o8g5baz | Damn, that's really sad. Wonder what happened. Why would they let him go after such a good release? | 1 | 0 | 2026-03-03T17:56:04 | dampflokfreund | false | null | 0 | o8g5baz | false | /r/LocalLLaMA/comments/1rjtzyn/junyang_lin_has_left_qwen/o8g5baz/ | false | 1 |
t1_o8g59b5 | Not available anymore? | 1 | 0 | 2026-03-03T17:55:48 | BreakinLiberty | false | null | 0 | o8g59b5 | false | /r/LocalLLaMA/comments/1ftbrw5/ai_file_organizer_update_now_with_dry_run_mode/o8g59b5/ | false | 1 |
t1_o8g58sb | There's not much use for even 512GB when it comes to inference, even on an M5 Ultra.<br>96GB is too small, and 512GB isn't usable in most real-world use cases. | 1 | 0 | 2026-03-03T17:55:44 | zipzag | false | null | 0 | o8g58sb | false | /r/LocalLLaMA/comments/1rjqsv6/apple_unveils_m5_pro_and_m5_max_citing_up_to_4/o8g58sb/ | false | 1 |
t1_o8g58lx | thanks, added "math symbols" to the list of signs of AI-hallucinated text | 1 | 0 | 2026-03-03T17:55:43 | MelodicRecognition7 | false | null | 0 | o8g58lx | false | /r/LocalLLaMA/comments/1rjuslh/gradience_in_10_minutes/o8g58lx/ | false | 1 |
t1_o8g52l3 | They couldn't handle having Wenfeng's team mogging them (jk, this is just me anticipating DeepSeek's next release). | 1 | 0 | 2026-03-03T17:54:56 | HaAtidChai | false | null | 0 | o8g52l3 | false | /r/LocalLLaMA/comments/1rjtzyn/junyang_lin_has_left_qwen/o8g52l3/ | false | 1 |
t1_o8g4yfw | Maxed-out M3 Ultra 512GB with 4TB SSD is $10,499 before tax, I just checked. | 1 | 0 | 2026-03-03T17:54:25 | BumbleSlob | false | null | 0 | o8g4yfw | false | /r/LocalLLaMA/comments/1rjqsv6/apple_unveils_m5_pro_and_m5_max_citing_up_to_4/o8g4yfw/ | false | 1 |
t1_o8g4tc6 | I'm talking about the H13SSL, which is the board OP is using. This specific board doesn't have retimers on the MCIO ports, AFAIK. So I don't know what the point is of comparing it with something else. | 1 | 0 | 2026-03-03T17:53:46 | FullstackSensei | false | null | 0 | o8g4tc6 | false | /r/LocalLLaMA/comments/1rjptl1/totally_not_an_ad_combine_2x_mcio_into_1x_pcie/o8g4tc6/ | false | 1 |
t1_o8g4np3 | I believe FP4 does run on Ampere. 9B loads just fine, but yeah, upgrading my GPU in my region is definitely not an option either without spending like 3 or 4x. Need to keep it in the UAE for compliance. | 1 | 0 | 2026-03-03T17:53:03 | Civil-Top-8167 | false | null | 0 | o8g4np3 | false | /r/LocalLLaMA/comments/1rjmnh7/help_loading_qwen35_35b_a3b_gguf_on_vllm/o8g4np3/ | false | 1 |
t1_o8g4luv | It's got the entire suite, so image gen, vision, transcription, etc. | 1 | 0 | 2026-03-03T17:52:48 | alichherawalla | false | null | 0 | o8g4luv | false | /r/LocalLLaMA/comments/1rjec8a/qwen35_on_a_mid_tier_300_android_phone/o8g4luv/ | false | 1 |
t1_o8g4gsn | xD | 1 | 0 | 2026-03-03T17:52:09 | M4r10_h4ck | false | null | 0 | o8g4gsn | false | /r/LocalLLaMA/comments/1rjq8w1/catching_an_ai_red_teamer_in_the_wild_using/o8g4gsn/ | false | 1 |
t1_o8g4exn | yes, but you get bazillions | 1 | 0 | 2026-03-03T17:51:55 | zipzag | false | null | 0 | o8g4exn | false | /r/LocalLLaMA/comments/1rjqsv6/apple_unveils_m5_pro_and_m5_max_citing_up_to_4/o8g4exn/ | false | 1 |
t1_o8g4dy8 | “Qwen is nothing without its people” coming? | 1 | 0 | 2026-03-03T17:51:48 | Independent-Ruin-376 | false | null | 0 | o8g4dy8 | false | /r/LocalLLaMA/comments/1rjtzyn/junyang_lin_has_left_qwen/o8g4dy8/ | false | 1 |
t1_o8g4dc9 | 🤣 | 1 | 0 | 2026-03-03T17:51:43 | M4r10_h4ck | false | null | 0 | o8g4dc9 | false | /r/LocalLLaMA/comments/1rjq8w1/catching_an_ai_red_teamer_in_the_wild_using/o8g4dc9/ | false | 1 |
t1_o8g4ddc | I agree. It helped a lot, and one wrong setting or quant can destroy speed or intelligence. I am still experimenting with the best settings for agentic coding.<br>Seems like tvall43's heretic quants are very smart and fast, but I haven't finished testing yet: https://huggingface.co/tvall43/Qwen3.5-2B-heretic-gguf<br>What should be added for any Qwen 3.5 model, as far as I know (a minimal sketch applying these follows this table): `--temp 0.6 --top-p 0.95 --top-k 20 --min-p 0.0 --repeat-penalty 1.05` | 1 | 0 | 2026-03-03T17:51:43 | AppealSame4367 | false | null | 0 | o8g4ddc | false | /r/LocalLLaMA/comments/1riwy9w/is_qwen359b_enough_for_agentic_coding/o8g4ddc/ | false | 1 |
t1_o8g4d0z | Are they base models? Could you link something from that line which is not a chat/instruct model? | 1 | 0 | 2026-03-03T17:51:40 | SuddenWerewolf7041 | false | null | 0 | o8g4d0z | false | /r/LocalLLaMA/comments/1rjvr81/best_base_model_not_chat_finetuned_in_modern/o8g4d0z/ | false | 1 |
t1_o8g4817 | I was wondering how it is possible for such a small model to recognize a building. Now it makes sense. | 1 | 0 | 2026-03-03T17:51:02 | eXl5eQ | false | null | 0 | o8g4817 | false | /r/LocalLLaMA/comments/1rjcqm5/qwen_35_4b_is_scary_smart/o8g4817/ | false | 1 |
t1_o8g42rv | The issue with MCP, from what I can tell, is mostly that all the tools get explained at once. If you provide 100 tools, maybe you can describe their purpose in an average of 10 tokens each, so 100 tools would only cost 1000 tokens, and a good LLM ought to still know when to invoke what tool if these descriptions make sense. When the LLM wants to invoke a tool, it naturally needs to know more, so in that case the inference framework should provide a way to dump information only about the tool the LLM is considering, with a possibility to back out if it doesn't fit. Easier said than done, but still pretty obvious basic engineering that someone ought to do (a rough sketch of this lazy-expansion pattern follows this table).<br>MCPs could also be standardized and AIs trained to know them out of the box. The most popular are probably already in the training data. But this would require large-scale cooperation and some kind of intelligent leadership in the AI space, and typically vendors of all new tech act more like a bunch of headless chickens running about, until eventually the weakest get culled, the space gets serious about competing with the remaining players, and it all gets standardized to a single victor's approach; then the world makes sense again. | 1 | 0 | 2026-03-03T17:50:23 | audioen | false | null | 0 | o8g42rv | false | /r/LocalLLaMA/comments/1rjtt01/the_truth_about_mcp_vs_cli/o8g42rv/ | false | 1 |
t1_o8g40wx | Qwen3.5 Unsloth quantization (Q2_K_XL would total 5.95GB) with plenty of room to spare for attention mechanisms and the context window. If you are new to all of this, I recommend LM Studio: it has dynamic calculations as you change the toggle settings to let you know if your model will spill over into normal RAM, which I think might be happening with you here given the size of your card. | 1 | 0 | 2026-03-03T17:50:08 | Express_Quail_1493 | false | null | 0 | o8g40wx | false | /r/LocalLLaMA/comments/1rjvacw/possible_to_run_on_8gb_cards/o8g40wx/ | false | 1 |
t1_o8g409l | "Qwen3.5: overthinking to say hello." | 1 | 0 | 2026-03-03T17:50:03 | SufficientPie | false | null | 0 | o8g409l | false | /r/LocalLLaMA/comments/1rj8x1q/qwen35_4b_overthinking_to_say_hello/o8g409l/ | false | 1 |
t1_o8g3zrf | 10k is 256GB M3 ultra. 512GB will be way higher | 1 | 0 | 2026-03-03T17:49:59 | spaceman3000 | false | null | 0 | o8g3zrf | false | /r/LocalLLaMA/comments/1rjqsv6/apple_unveils_m5_pro_and_m5_max_citing_up_to_4/o8g3zrf/ | false | 1 |
t1_o8g3xyg | I'm joking of course; Junyang Lin doesn't seem like the type that would sell out to Zuck. I hope this doesn't mean that Qwen is going to stop with their releases. | 1 | 0 | 2026-03-03T17:49:45 | vertigo235 | false | null | 0 | o8g3xyg | false | /r/LocalLLaMA/comments/1rjtzyn/junyang_lin_has_left_qwen/o8g3xyg/ | false | 1 |
t1_o8g3xza | Just use https://huggingface.co/unsloth/Qwen3.5-9B-GGUF<br>LM Studio is more UI/UX friendly than Ollama. | 1 | 0 | 2026-03-03T17:49:45 | Prudent_Way5827 | false | null | 0 | o8g3xza | false | /r/LocalLLaMA/comments/1rjve9e/possible_to_run_local_model_for_opencode_with_m3/o8g3xza/ | false | 1 |
t1_o8g3vcv | The Genoa and Turin variants of this ASRock motherboard have been reported to work with 8x Blackwell GPUs at PCIe Gen 5 through MCIO directly off the board, without additional retimers:<br>https://www.asrockrack.com/general/productdetail.asp?Model=TURIN2D24G-2L%2b/500W#Specifications<br>I'm working on putting together a build with it right now, just waiting for the custom power cables to come in so I can do some tests. | 1 | 0 | 2026-03-03T17:49:25 | kersk | false | null | 0 | o8g3vcv | false | /r/LocalLLaMA/comments/1rjptl1/totally_not_an_ad_combine_2x_mcio_into_1x_pcie/o8g3vcv/ | false | 1 |
t1_o8g3ty8 | I want to open a Discord server. Would you be interested in joining to follow the development progress, take part in beta testing, or share your ideas? | 1 | 0 | 2026-03-03T17:49:14 | Lightnig125 | false | null | 0 | o8g3ty8 | false | /r/LocalLLaMA/comments/1rjuccw/would_you_be_interested_in_a_fully_local_ai_3d/o8g3ty8/ | false | 1 |
t1_o8g3r74 | No, and it has a powerful GPU. | 1 | 0 | 2026-03-03T17:48:53 | spaceman3000 | false | null | 0 | o8g3r74 | false | /r/LocalLLaMA/comments/1rjqsv6/apple_unveils_m5_pro_and_m5_max_citing_up_to_4/o8g3r74/ | false | 1 |
t1_o8g3qlk | What a terrible idea. Count me out.<br>/s | 1 | 0 | 2026-03-03T17:48:49 | mana_hoarder | false | null | 0 | o8g3qlk | false | /r/LocalLLaMA/comments/1rjuccw/would_you_be_interested_in_a_fully_local_ai_3d/o8g3qlk/ | false | 1 |
t1_o8g3pqp | I want to say any length of cable might be problematic if the board doesn't have retimers. If you Google H13SSL MCIO gen 5, you'll see people having issues even with SSDs. Gen 5 is tough. | 1 | 0 | 2026-03-03T17:48:42 | FullstackSensei | false | null | 0 | o8g3pqp | false | /r/LocalLLaMA/comments/1rjptl1/totally_not_an_ad_combine_2x_mcio_into_1x_pcie/o8g3pqp/ | false | 1 |
t1_o8g3n2j | It depends on who you are of course. | 1 | 0 | 2026-03-03T17:48:21 | vertigo235 | false | null | 0 | o8g3n2j | false | /r/LocalLLaMA/comments/1rjtzyn/junyang_lin_has_left_qwen/o8g3n2j/ | false | 1 |
t1_o8g3lvj | Letta has been the one I've had the most success with so far, but I don't like how they cap your context to 30k. Plus I find the tool handling a bit off: either you accept you'll have a short context, because the tools themselves take about 12k of context, or you need an agent that essentially manages its own memory and calls other agents to do the things for me. | 1 | 0 | 2026-03-03T17:48:12 | Di_Vante | false | null | 0 | o8g3lvj | false | /r/LocalLLaMA/comments/1rjv92p/whats_your_strategy_for_long_conversations_with/o8g3lvj/ | false | 1 |
t1_o8g3fx1 | Just as Carl Pei left OnePlus to found Nothing, Junyang Lin might have another organization to start that comes out better than the one he left. One can hope... | 1 | 0 | 2026-03-03T17:47:26 | lqvz | false | null | 0 | o8g3fx1 | false | /r/LocalLLaMA/comments/1rjtzyn/junyang_lin_has_left_qwen/o8g3fx1/ | false | 1 |
t1_o8g3a8n | This is not good. | 1 | 0 | 2026-03-03T17:46:41 | MikeLPU | false | null | 0 | o8g3a8n | false | /r/LocalLLaMA/comments/1rjtzyn/junyang_lin_has_left_qwen/o8g3a8n/ | false | 1 |
t1_o8g39kk | I am toying with a moving window, but I feel like it's a hard balance between keeping the conversation quality good vs retaining good memory.<br>Is there any lib or MCP or whatever you'd recommend? | 1 | 0 | 2026-03-03T17:46:36 | Di_Vante | false | null | 0 | o8g39kk | false | /r/LocalLLaMA/comments/1rjv92p/whats_your_strategy_for_long_conversations_with/o8g39kk/ | false | 1 |
t1_o8g364i | Thanks for clarifying. So you mean that only the 20cm model might provide the full PCIe v5 speed, and even 50cm could be long enough to downgrade the speed to PCIe v4?<br>I do not want to mess with that server in the near future, so I will not test the 50cm cable with a PCIe v5 device. PCIe v4 speed is definitely achievable. | 1 | 0 | 2026-03-03T17:46:09 | MelodicRecognition7 | false | null | 0 | o8g364i | false | /r/LocalLLaMA/comments/1rjptl1/totally_not_an_ad_combine_2x_mcio_into_1x_pcie/o8g364i/ | false | 1 |
t1_o8g2xmn | I hope they still release the planned song model he mentioned last year; it would have been amazing. | 1 | 0 | 2026-03-03T17:45:03 | Zulfiqaar | false | null | 0 | o8g2xmn | false | /r/LocalLLaMA/comments/1rjtzyn/junyang_lin_has_left_qwen/o8g2xmn/ | false | 1 |
t1_o8g2xif | It just occurred to me that Qwen is pronounced as "queen" and not as "kwen". | 1 | 0 | 2026-03-03T17:45:02 | Express_Grocery_4707 | false | null | 0 | o8g2xif | false | /r/LocalLLaMA/comments/1rjtzyn/junyang_lin_has_left_qwen/o8g2xif/ | false | 1 |
t1_o8g2vsn | Aren't the new qwen models specifically designed for Q4? | 1 | 0 | 2026-03-03T17:44:48 | Expensive-Cry-8313 | false | null | 0 | o8g2vsn | false | /r/LocalLLaMA/comments/1rianwb/running_qwen35_27b_dense_with_170k_context_at/o8g2vsn/ | false | 1 |
t1_o8g2q0s | Qwen coder next is the other smallish model I’ve been impressed by. | 1 | 0 | 2026-03-03T17:44:03 | nomorebuttsplz | false | null | 0 | o8g2q0s | false | /r/LocalLLaMA/comments/1rj6m71/qwen_35_27b_a_testament_to_the_transformer/o8g2q0s/ | false | 1 |
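
A minimal sketch applying the Qwen 3.5 sampling settings recommended in comment t1_o8g4ddc above, using llama-cpp-python. The model path is hypothetical; substitute whatever Qwen 3.5 GGUF quant you actually downloaded, and treat this as an illustration rather than an official recipe.

```python
from llama_cpp import Llama  # pip install llama-cpp-python

# Hypothetical local path; any Qwen 3.5 GGUF quant works the same way.
llm = Llama(model_path="./Qwen3.5-2B-heretic-Q4_K_M.gguf", n_ctx=8192)

# Sampling settings from the comment:
# temp 0.6, top-p 0.95, top-k 20, min-p 0.0, repeat penalty 1.05.
out = llm(
    "Write a one-line greeting.",
    temperature=0.6,
    top_p=0.95,
    top_k=20,
    min_p=0.0,
    repeat_penalty=1.05,
    max_tokens=64,
)
print(out["choices"][0]["text"])
```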
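Point 4 of comment t1_o8g6v6i (cache each pipeline stage so a VAE-only settings change doesn't rerun the text encoder) can be sketched as below. This is a minimal illustration, not any tool's actual implementation; `encode_text`, `run_model`, and `decode_vae` are hypothetical stand-ins for a real encoder/model/VAE pipeline.

```python
import hashlib
from typing import Any, Callable

_cache: dict[str, Any] = {}

def cached(stage: str, key_parts: tuple, compute: Callable[[], Any]) -> Any:
    """Memoize one pipeline stage, keyed by the inputs that affect it."""
    key = stage + ":" + hashlib.sha256(repr(key_parts).encode()).hexdigest()
    if key not in _cache:
        _cache[key] = compute()
    return _cache[key]

# Hypothetical stand-ins for a real text-encoder -> model -> VAE pipeline.
def encode_text(prompt: str) -> str:
    return f"emb({prompt})"

def run_model(emb: str, steps: int) -> str:
    return f"latents({emb}, {steps})"

def decode_vae(latents: str, tiling: bool) -> str:
    return f"mesh({latents}, {tiling})"

def generate(prompt: str, steps: int, tiling: bool) -> str:
    emb = cached("encode", (prompt,), lambda: encode_text(prompt))
    latents = cached("model", (prompt, steps), lambda: run_model(emb, steps))
    # Only the VAE stage reruns when just `tiling` changes.
    return decode_vae(latents, tiling)

print(generate("a wooden chair", steps=20, tiling=False))
print(generate("a wooden chair", steps=20, tiling=True))  # encoder and model hit the cache
```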
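The lazy-expansion idea from comment t1_o8g42rv (advertise each tool with a roughly 10-token summary, and inject the full schema only for the tool the model is considering) might look like this rough sketch. The tool names and schemas are invented for illustration; this is not part of the MCP spec.

```python
# Invented example tools; names and schemas are illustrative only.
TOOLS = {
    "search_code": {
        "summary": "Find symbols in the indexed repo.",
        "schema": {"query": "string", "limit": "int"},
    },
    "read_file": {
        "summary": "Return a file's contents by path.",
        "schema": {"path": "string"},
    },
}

def tool_menu() -> str:
    """Cheap up-front listing: roughly 10 tokens per tool (name + one-line purpose)."""
    return "\n".join(f"{name}: {t['summary']}" for name, t in TOOLS.items())

def expand_tool(name: str) -> dict:
    """Full schema is injected into context only when the model considers this tool."""
    return TOOLS[name]["schema"]

print(tool_menu())               # 100 tools * ~10 tokens each gives ~1000 tokens up front
print(expand_tool("read_file"))  # detail on demand
```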