name
string
body
string
score
int64
controversiality
int64
created
timestamp[us]
author
string
collapsed
bool
edited
timestamp[us]
gilded
int64
id
string
locked
bool
permalink
string
stickied
bool
ups
int64
t1_o8f4ihb
November??? What is it, a Kickstarter campaign?
1
0
2026-03-03T15:00:25
Oren_Lester
false
null
0
o8f4ihb
false
/r/LocalLLaMA/comments/1rjqsv6/apple_unveils_m5_pro_and_m5_max_citing_up_to_4/o8f4ihb/
false
1
t1_o8f4gys
Platypus and Echidna.
1
0
2026-03-03T15:00:12
Jonodonozym
false
null
0
o8f4gys
false
/r/LocalLLaMA/comments/1rjd4pv/qwen_25_3_35_smallest_models_incredible/o8f4gys/
false
1
t1_o8f4fu9
Can you help me understand the changed license? I cannot find any difference.
1
0
2026-03-03T15:00:02
comodore6564
false
null
0
o8f4fu9
false
/r/LocalLLaMA/comments/1mij7fh/list_of_openweight_models_with_unmodified/o8f4fu9/
false
1
t1_o8f4dpq
Thanks, this is super helpful.
1
0
2026-03-03T14:59:44
queequegscoffin
false
null
0
o8f4dpq
false
/r/LocalLLaMA/comments/1rjqaci/new_to_local_coder_what_would_be_your_choice_for/o8f4dpq/
false
1
t1_o8f4ctk
Just use Open WebUI's # shortcut, no agents needed. Don't overcomplicate it with autonomous agents. Just paste the URL with a # prefix (e.g., #https://github.com/xxx) directly in your prompt. It scrapes the site on the fly, dumps the raw text into the context, and lets the LLM do the heavy lifting. If you're tired of copy-pasting manually, use it.
1
0
2026-03-03T14:59:36
Rain_Sunny
false
null
0
o8f4ctk
false
/r/LocalLLaMA/comments/1rjrbzc/allowing_llms_to_reference_from_websites/o8f4ctk/
false
1
t1_o8f46e6
Tried it, models are super unreliable for me, stop mid-task, completely censored... probably gonna stick to my Minimax coding plan subscription.
1
0
2026-03-03T14:58:42
Xhatz
false
null
0
o8f46e6
false
/r/LocalLLaMA/comments/1rjr60d/alibaba_can_i_buy_any_suggestions/o8f46e6/
false
1
t1_o8f42w6
That would be funny. But I doubt a 730B company made such a big decision based on the whining of a few redditors lol
1
0
2026-03-03T14:58:12
gradient8
false
null
0
o8f42w6
false
/r/LocalLLaMA/comments/1rjmtav/i_really_hope_openai_eventually_opensources_the/o8f42w6/
false
1
t1_o8f41vn
Is this still maintained? We really need to be able to use models not in your catalogue. For instance Qwen3.5...
1
0
2026-03-03T14:58:03
Latt
false
null
0
o8f41vn
false
/r/LocalLLaMA/comments/1qh5yvm/llamabarn_023_tiny_macos_app_for_running_local/o8f41vn/
false
1
t1_o8f41lu
How much VRAM & context window are you using?
1
0
2026-03-03T14:58:00
tomByrer
false
null
0
o8f41lu
false
/r/LocalLLaMA/comments/1rjr9ze/did_anyone_replace_old_qwen25coder7b_with/o8f41lu/
false
1
t1_o8f40rr
I agree with your disagreement, there's a ton of value in the older models, including (or, to some, primarily) sentimental value! Having the older models like the first chat-tuned text-davinci-003, or that "conscious" LaMDA snapshot that managed to trick a professional engineer, would be infinitely better than letting them rest in deep archives. They're genuine artifacts.
1
0
2026-03-03T14:57:53
FriskyFennecFox
false
null
0
o8f40rr
false
/r/LocalLLaMA/comments/1rjmtav/i_really_hope_openai_eventually_opensources_the/o8f40rr/
false
1
t1_o8f3vwg
Sort of. I think it's like generating a superior prompt. Or generating 3 answers and converging on 1. As far as a model knowing what "quality" to stop at... I think it's quite arbitrary.
1
0
2026-03-03T14:57:12
Adventurous-Lead99
false
null
0
o8f3vwg
false
/r/LocalLLaMA/comments/1rj8x1q/qwen35_4b_overthinking_to_say_hello/o8f3vwg/
false
1
t1_o8f3v2f
how can anybody know that :D
1
0
2026-03-03T14:57:05
madsheepPL
false
null
0
o8f3v2f
false
/r/LocalLLaMA/comments/1rjqsv6/apple_unveils_m5_pro_and_m5_max_citing_up_to_4/o8f3v2f/
false
1
t1_o8f3tkm
What are you doing with all that compute?
1
0
2026-03-03T14:56:52
JamesEvoAI
false
null
0
o8f3tkm
false
/r/LocalLLaMA/comments/1rj8zuh/manage_qwen_35_model_settings_with_litellm_proxy/o8f3tkm/
false
1
t1_o8f3skl
Yeah, that's the thing worth waiting for. Although I have a feeling they're going to just wait and go straight to the M6 Ultra. It's already expected that there will be new M6 powered MacBook Pros towards the end of the year.
1
0
2026-03-03T14:56:44
Spanky2k
false
null
0
o8f3skl
false
/r/LocalLLaMA/comments/1rjqsv6/apple_unveils_m5_pro_and_m5_max_citing_up_to_4/o8f3skl/
false
1
t1_o8f3sau
Realistically I’d say x2.5-3+ on Pro
1
0
2026-03-03T14:56:42
alexx_kidd
false
null
0
o8f3sau
false
/r/LocalLLaMA/comments/1rjqsv6/apple_unveils_m5_pro_and_m5_max_citing_up_to_4/o8f3sau/
false
1
t1_o8f3ous
Bruh! I know, I was asking whether anybody bought it, or if there are any suggestions, because I didn't know about that. I mean, is it worth it or not? And mainly, I don't have enough hardware to run any model locally!
1
0
2026-03-03T14:56:13
Less_Strain7577
false
null
0
o8f3ous
false
/r/LocalLLaMA/comments/1rjr60d/alibaba_can_i_buy_any_suggestions/o8f3ous/
false
1
t1_o8f3o6l
Yes
1
0
2026-03-03T14:56:07
alexx_kidd
false
null
0
o8f3o6l
false
/r/LocalLLaMA/comments/1rjqsv6/apple_unveils_m5_pro_and_m5_max_citing_up_to_4/o8f3o6l/
false
1
t1_o8f3m6r
haha. thank you for making my day.
1
0
2026-03-03T14:55:50
Darejk
false
null
0
o8f3m6r
false
/r/LocalLLaMA/comments/1rjr60d/alibaba_can_i_buy_any_suggestions/o8f3m6r/
false
1
t1_o8f3jug
[https://www.reddit.com/r/LocalLLaMA/comments/1rhr5ko/is_there_a_way_to_disable_thinking_on_qwen_35_27b/](https://www.reddit.com/r/LocalLLaMA/comments/1rhr5ko/is_there_a_way_to_disable_thinking_on_qwen_35_27b/) This post might help.
1
0
2026-03-03T14:55:29
nymical23
false
null
0
o8f3jug
false
/r/LocalLLaMA/comments/1rjof0g/qwen_35_nonthinking_scores_are_out_on_aa/o8f3jug/
false
1
t1_o8f3jhx
What's the KL divergence and PPL compared to the original?
1
0
2026-03-03T14:55:27
metigue
false
null
0
o8f3jhx
false
/r/LocalLLaMA/comments/1rjp08s/qwen354b_uncensored_aggressive_release_gguf/o8f3jhx/
false
1
t1_o8f3jce
Nice. Now give me a Mac Studio with that chip
1
0
2026-03-03T14:55:25
-paul-
false
null
0
o8f3jce
false
/r/LocalLLaMA/comments/1rjqsv6/apple_unveils_m5_pro_and_m5_max_citing_up_to_4/o8f3jce/
false
1
t1_o8f3j8a
FAST AS FUCK BOI
1
0
2026-03-03T14:55:24
AppleBottmBeans
false
null
0
o8f3j8a
false
/r/LocalLLaMA/comments/1rjqsv6/apple_unveils_m5_pro_and_m5_max_citing_up_to_4/o8f3j8a/
false
1
t1_o8f3ejc
Tell Gemini your hardware and ask it to give you the commands to build it with all possible accelerations for your hardware. I had Gemini build a script that does that, checks for an update, builds it, and finally offers a benchmark at the end. Then, if you want to switch models instantly, add in llama-swap, which lets you set up a config for each model and auto-swap as desired. Use AI tools to help you solve this and it's a piece of cake.
1
0
2026-03-03T14:54:44
JMowery
false
null
0
o8f3ejc
false
/r/LocalLLaMA/comments/1rjd4pv/qwen_25_3_35_smallest_models_incredible/o8f3ejc/
false
1
t1_o8f3eed
hopefully it's m5 ultra with neural accelerator and not m4 ultra
1
0
2026-03-03T14:54:42
lolwutdo
false
null
0
o8f3eed
false
/r/LocalLLaMA/comments/1rjqsv6/apple_unveils_m5_pro_and_m5_max_citing_up_to_4/o8f3eed/
false
1
t1_o8f3d73
Why does it matter? I do not care.
1
0
2026-03-03T14:54:32
raiffuvar
false
null
0
o8f3d73
false
/r/LocalLLaMA/comments/1rjqsv6/apple_unveils_m5_pro_and_m5_max_citing_up_to_4/o8f3d73/
false
1
t1_o8f3bji
Q8_K is a new quant variant of Q8_0. K_XL means more bits per weight, so a bigger file and higher quality. You can also use Q_K_M; I think the _K quants are better than the _0 ones.
1
0
2026-03-03T14:54:19
Chemical_Pollution82
false
null
0
o8f3bji
false
/r/LocalLLaMA/comments/1rjff88/how_do_i_get_the_best_speed_out_of_qwen_35_9b_in/o8f3bji/
false
1
t1_o8f3a53
https://old.reddit.com/r/LocalLLaMA/comments/1rjkarj/local_model_suggestions_for_medium_end_pc_for/o8f2zir/
1
0
2026-03-03T14:54:07
MelodicRecognition7
false
null
0
o8f3a53
false
/r/LocalLLaMA/comments/1rjkarj/local_model_suggestions_for_medium_end_pc_for/o8f3a53/
false
1
t1_o8f39l5
Those numbers are great, but for LLM performance, Apple's own sources are vague and potentially misleading as usual (see [sources at the bottom](https://www.apple.com/newsroom/2026/03/apple-introduces-macbook-pro-with-all-new-m5-pro-and-m5-max/)). They mention things like "4x faster AI performance", then you check the source they cite and it says "Testing was conducted by Apple in January and February 2026. See [apple.com/macbook-pro](https://www.apple.com/macbook-pro/) for more information." which just sends you to the Macbook Pro product page lol. They don't mention what they were doing, what task, which model, model size, etc. I think we'll have to wait to see what the community can tell us. I am expecting good improvements, though
1
0
2026-03-03T14:54:02
iMrParker
false
null
0
o8f39l5
false
/r/LocalLLaMA/comments/1rjqsv6/apple_unveils_m5_pro_and_m5_max_citing_up_to_4/o8f39l5/
false
1
t1_o8f35o4
Nothing fancy yet, just the default tools that come with zeroclaw, plus a custom skill for SearXNG and for reading long resources (it kept trying to insert 40k+ context during a testing period where KV cache was preallocated to 49152)
1
0
2026-03-03T14:53:29
fulgencio_batista
false
null
0
o8f35o4
false
/r/LocalLLaMA/comments/1rjfvfx/qwen3535b_is_very_resourceful_web_search_wasnt/o8f35o4/
false
1
t1_o8f34xy
"Loving a TV show is a mental illness" would be the exact same logic as your argument. I'd say if there is nothing you love in your life that would be the mental illness, not finding something really enjoyable to do. I'm not talking about people who are literally in love with a model, I am talking about people who really enjoy the outputs and enjoy using that model. Tuning outputs won't get you there, if that were true GPT4-X-Alpaca should be sufficient to replace GPT4 because it was tuned on GPT4 outputs.
1
0
2026-03-03T14:53:22
henk717
false
null
0
o8f34xy
false
/r/LocalLLaMA/comments/1rjmtav/i_really_hope_openai_eventually_opensources_the/o8f34xy/
false
1
t1_o8f34e8
I’ll give it a try! Thanks
1
0
2026-03-03T14:53:17
ClayToTheMax
false
null
0
o8f34e8
false
/r/LocalLLaMA/comments/1rjp6zq/what_ai_models_should_i_run/o8f34e8/
false
1
t1_o8f347j
The M5 Ultra on Mac Studio, should it become a real thing, will be wild.
1
0
2026-03-03T14:53:16
newcantonrunner5
false
null
0
o8f347j
false
/r/LocalLLaMA/comments/1rjqsv6/apple_unveils_m5_pro_and_m5_max_citing_up_to_4/o8f347j/
false
1
t1_o8f33tf
Here is a graph that will help you make this decision: https://preview.redd.it/vah8vpfzfumg1.png?width=2224&format=png&auto=webp&s=26feaf24feb608fa74b7b3eb076b5bc4954f34fb
2
0
2026-03-03T14:53:12
madsheepPL
false
null
0
o8f33tf
false
/r/LocalLLaMA/comments/1rjr60d/alibaba_can_i_buy_any_suggestions/o8f33tf/
false
2
t1_o8f33jq
Ok, so it's a joke.
1
0
2026-03-03T14:53:10
LoveMind_AI
false
null
0
o8f33jq
false
/r/LocalLLaMA/comments/1rjo81a/gemini_31_pro_hidden_thought_process_exposed/o8f33jq/
false
1
t1_o8f305d
Cool project! If you're looking for something similar that's ready to deploy, check out KinBot. It's an open-source self-hosted AI agent platform with persistent memory (hybrid search + LLM re-ranking), works with Ollama for fully local setups, supports 6 messaging channels (Telegram, Discord, WhatsApp, etc.), and agents can even build their own mini-apps with a built-in React SDK. SQLite-based, runs on a Raspberry Pi. https://marlburrow.github.io/kinbot/
1
0
2026-03-03T14:52:41
OpinionSimilar4445
false
null
0
o8f305d
false
/r/LocalLLaMA/comments/1r75i9t/built_a_multiagent_ai_butler_on_a_dgx_spark/o8f305d/
false
1
t1_o8f2zir
https://old.reddit.com/r/LocalLLaMA/comments/1rg0pv6/how_can_i_determine_how_much_vram_each_model_uses/o7o1lpp/ https://old.reddit.com/r/LocalLLaMA/comments/1ri1rit/running_qwen314b_93gb_on_a_cpuonly_kvm_vps_what/o82wms6/ https://old.reddit.com/r/LocalLLaMA/comments/1ri42ee/help_finding_best_for_my_specs/o83kpzr/
1
0
2026-03-03T14:52:36
MelodicRecognition7
false
null
0
o8f2zir
false
/r/LocalLLaMA/comments/1rjkarj/local_model_suggestions_for_medium_end_pc_for/o8f2zir/
false
1
t1_o8f2wc4
So M5 Ultra would be up to 512GB unified memory and 1200GBps memory bandwidth?
1
0
2026-03-03T14:52:08
LowPlace8434
false
null
0
o8f2wc4
false
/r/LocalLLaMA/comments/1rjqsv6/apple_unveils_m5_pro_and_m5_max_citing_up_to_4/o8f2wc4/
false
1
t1_o8f2txa
What would you recommend that's better?
1
0
2026-03-03T14:51:48
Daniel_H212
false
null
0
o8f2txa
false
/r/LocalLLaMA/comments/1rjh5wg/unsloth_fixed_version_of_qwen3535ba3b_is/o8f2txa/
false
1
t1_o8f2smr
Did you try this model? Mine followed 50+ steps, pulled several git repos, and used Gemini CLI as its coding agent. It is not perfect, of course, but it is better than what we had before.
1
0
2026-03-03T14:51:36
Suitable_Currency440
false
null
0
o8f2smr
false
/r/LocalLLaMA/comments/1riwy9w/is_qwen359b_enough_for_agentic_coding/o8f2smr/
false
1
t1_o8f2p6j
While I realise not everyone has the opportunity, the Qwen 122B/10B heretic MXFP4 quant is the best I've used since gpt-oss-120b heretic. And it reads and understands images in the same ~65GB of VRAM. The heretic version makes it objectively better; I can't have it second-guessing me. I'll be putting it through its paces over the next few weeks. The capability of these things is crazy.
1
0
2026-03-03T14:51:07
AlwaysLateToThaParty
false
null
0
o8f2p6j
false
/r/LocalLLaMA/comments/1rjd4pv/qwen_25_3_35_smallest_models_incredible/o8f2p6j/
false
1
t1_o8f2nne
docker run -d --name qwen35-vllm --gpus all --ipc=host --shm-size=32g \
  -p 8003:8000 -v /mnt/models/huggingface:/root/.cache/huggingface \
  -e CUDA_VISIBLE_DEVICES=2,3,4,5 -e NCCL_P2P_DISABLE=1 \
  --entrypoint python3 vllm/vllm-openai:cu130-nightly \
  -m vllm.entrypoints.openai.api_server --host 0.0.0.0 --port 8000 \
  --model nvidia/Qwen3.5-397B-A17B-NVFP4 --served-model-name Qwen3.5-397B-A17B-NVFP4 \
  --tensor-parallel-size 4 --dtype auto --max-model-len 200000 --trust-remote-code \
  --mm-encoder-tp-mode data --mm-processor-cache-type shm \
  --reasoning-parser qwen3 --enable-auto-tool-choice --tool-call-parser qwen3_coder \
  --compilation_config.cudagraph_mode=PIECEWISE

I know the model card pushed sglang and I did try it, but I ran into issues. It was actually much easier to get it running well in vLLM.
1
0
2026-03-03T14:50:54
TaiMaiShu-71
false
null
0
o8f2nne
false
/r/LocalLLaMA/comments/1rjjcyk/still_a_noob_is_anyone_actually_running_the/o8f2nne/
false
1
t1_o8f2nqw
If we are talking about a real war, with your average civilian in one of the affected countries you probably can't. Infrastructure tends to get taken out first, and by the time information about what is happening is made public it's not overly useful to people on the ground. Wars are not like natural disasters. What would likely happen is the government would just advise people near the front lines to pull back, and you'd have warning advisories when air attacks are going on.
1
0
2026-03-03T14:50:54
mustafar0111
false
null
0
o8f2nqw
false
/r/LocalLLaMA/comments/1rjqo97/how_can_we_use_ai_modern_tech_stacks_to_help/o8f2nqw/
false
1
t1_o8f2l91
Even if there's a benchmark, I think there's no way a small model can keep its coherence. Even 7B to 16B models start to talk nonsense and contradict themselves a lot after a while.
1
0
2026-03-03T14:50:32
fungnoth
false
null
0
o8f2l91
false
/r/LocalLLaMA/comments/1rjoeok/is_qwen35_08b_more_powerful_than_mistral_7b/o8f2l91/
false
1
t1_o8f2k4l
I am a noob -- how do you create an uncensored model?
1
0
2026-03-03T14:50:22
seymores
false
null
0
o8f2k4l
false
/r/LocalLLaMA/comments/1rjp08s/qwen354b_uncensored_aggressive_release_gguf/o8f2k4l/
false
1
t1_o8f2ido
Hey! With a 940MX (2GB VRAM) your GPU will be the bottleneck for most models, but you're not out of options. For coding specifically, look at Qwen2.5 Coder 1.5B Q4 (tiny and fast) and DeepSeek-Coder 1.3B (built for coding tasks). Since you have 16GB RAM, you can actually run larger models on CPU using llama.cpp; it'll be slow but workable for coding suggestions where you don't need instant responses. I built a free tool that can show you exactly what runs on your hardware, [localops.tech](http://localops.tech): just plug in your GPU and it'll calculate VRAM fit and speed estimates across hundreds of models. Might save you a lot of trial and error!
1
0
2026-03-03T14:50:08
mhd2002
false
null
0
o8f2ido
false
/r/LocalLLaMA/comments/1rjkarj/local_model_suggestions_for_medium_end_pc_for/o8f2ido/
false
1
t1_o8f2hj0
It does! It's not unlimited like cloud models, for sure, and when nearing my 262k context it does struggle, but for simple everyday tasks? More than enough.
1
0
2026-03-03T14:50:01
Suitable_Currency440
false
null
0
o8f2hj0
false
/r/LocalLLaMA/comments/1riwy9w/is_qwen359b_enough_for_agentic_coding/o8f2hj0/
false
1
t1_o8f2g00
I redownloaded but am having the same issue. Another commenter said that the files weren’t actually updated
1
0
2026-03-03T14:49:48
mzinz
false
null
0
o8f2g00
false
/r/LocalLLaMA/comments/1rjhy83/tool_calling_issues_with_qwen3535b_with_16gb_vram/o8f2g00/
false
1
t1_o8f2dbx
I'd recommend checking out VSCode and their Copilot Chat extension - it handles hundreds of tools in very long tool calling sessions amazingly. I use their impl as inspo for some of our internal tools.
1
0
2026-03-03T14:49:25
Swimming-Chip9582
false
null
0
o8f2dbx
false
/r/LocalLLaMA/comments/1rjm4bl/tool_calling_is_where_agents_fail_most/o8f2dbx/
false
1
t1_o8f2d66
It's almost like when Claude gets better they all magically get better 😂
1
0
2026-03-03T14:49:23
Torodaddy
false
null
0
o8f2d66
false
/r/LocalLLaMA/comments/1rjd4pv/qwen_25_3_35_smallest_models_incredible/o8f2d66/
false
1
t1_o8f2cnl
Oh I see. I'm not using Ollama but LM Studio; their implementations might differ a little bit, and they might fix it in the coming days. I suggest you try switching to LM Studio, point to its server, and see if it works!
1
0
2026-03-03T14:49:19
Suitable_Currency440
false
null
0
o8f2cnl
false
/r/LocalLLaMA/comments/1riwy9w/is_qwen359b_enough_for_agentic_coding/o8f2cnl/
false
1
t1_o8f29ux
I do have them in a dedicated directory. Thanks!
1
0
2026-03-03T14:48:55
mzinz
false
null
0
o8f29ux
false
/r/LocalLLaMA/comments/1rjhy83/tool_calling_issues_with_qwen3535b_with_16gb_vram/o8f29ux/
false
1
t1_o8f27f7
I did, no change
1
0
2026-03-03T14:48:34
mzinz
false
null
0
o8f27f7
false
/r/LocalLLaMA/comments/1rjhy83/tool_calling_issues_with_qwen3535b_with_16gb_vram/o8f27f7/
false
1
t1_o8f21mf
For your 8GB VRAM + 32GB RAM setup: Q8 of a 9B model needs ~10GB, so it'll likely spill into RAM; still runnable but slower. You can verify exact VRAM needs at localops.tech. On the agentic coding question: 9B models can handle simple tasks with Cline/Roocode, but for larger codebases you'll hit context/reasoning limits. A 14B or 32B would be noticeably better for multi-file projects.
1
0
2026-03-03T14:47:44
mhd2002
false
null
0
o8f21mf
false
/r/LocalLLaMA/comments/1riwy9w/is_qwen359b_enough_for_agentic_coding/o8f21mf/
false
1
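For readers wondering where the "~10GB for Q8 of a 9B model" figure above comes from, here is a minimal back-of-envelope sketch; the bytes-per-weight values, KV-cache size, and runtime overhead are illustrative assumptions, not measurements.

```python
# Rough memory estimate for running a quantized model: weights + KV cache + overhead.
# Bytes-per-weight values are approximations for common quant formats (assumed, not exact).
BYTES_PER_WEIGHT = {"Q4_K_M": 0.56, "Q8_0": 1.06, "F16": 2.0}

def estimate_total_gb(params_billion: float, quant: str,
                      kv_cache_gb: float = 0.5, overhead_gb: float = 0.3) -> float:
    """Very rough total memory (GB) needed to load and run the model."""
    weights_gb = params_billion * BYTES_PER_WEIGHT[quant]
    return weights_gb + kv_cache_gb + overhead_gb

# A 9B model at Q8_0 comes out near the ~10 GB mentioned above,
# which is why it spills out of an 8 GB GPU into system RAM.
print(f"{estimate_total_gb(9, 'Q8_0'):.1f} GB")
```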
t1_o8f1zyt
There is a base frontier model released just today: [stepfun-ai/Step-3.5-Flash-Base · Hugging Face](https://huggingface.co/stepfun-ai/Step-3.5-Flash-Base)
1
0
2026-03-03T14:47:30
Expensive-Paint-9490
false
null
0
o8f1zyt
false
/r/LocalLLaMA/comments/1rj6hga/qwen35_base_models_for_122b_and_27b/o8f1zyt/
false
1
t1_o8f1vgw
Or use PasteBin, or Github/Gists, etc...
1
0
2026-03-03T14:46:51
tomByrer
false
null
0
o8f1vgw
false
/r/LocalLLaMA/comments/1rjqfzc/skillmd_files_are_amazing_but_makingcreating_them/o8f1vgw/
false
1
t1_o8f1qwp
which model should I run on my 4GB/6GB/8GB card?
1
0
2026-03-03T14:46:11
Cultural-Ordinary282
false
null
0
o8f1qwp
false
/r/LocalLLaMA/comments/1rjmczv/low_vram_qwen35_4b_and_2b/o8f1qwp/
false
1
t1_o8f1oiw
#!/bin/bash
# AES SEDAI OPTIMIZED
# Model: Qwen3.5-35B-A3B-Q4_K_M
# Hardware: Ryzen 5600 (6 Core), 32GB RAM (3000MHz), RTX 2070 (8GB VRAM)
export GGML_CUDA_GRAPH_OPT=1
llama-server -m Qwen3.5-35B-A3B-Q4_K_M-00001-of-00002.gguf \
  -ngl 999 -fa on -c 65536 -b 4096 -ub 2048 -t 6 -np 1 -ncmoe 36 \
  -ctk q8_0 -ctv q8_0 --port 8080 --api-key "opencode-local" --jinja --perf \
  --temp 1.0 --top-p 0.95 --top-k 40 --min-p 0.01 --repeat-penalty 1.0 \
  --host 0.0.0.0 --numa distribute --prio 2
1
0
2026-03-03T14:45:50
sagiroth
false
null
0
o8f1oiw
false
/r/LocalLLaMA/comments/1riwy9w/is_qwen359b_enough_for_agentic_coding/o8f1oiw/
false
1
t1_o8f1nmi
v0.10 maybe? https://docs.vllm.ai/en/v0.10.0/getting_started/installation/gpu.html
1
0
2026-03-03T14:45:43
MelodicRecognition7
false
null
0
o8f1nmi
false
/r/LocalLLaMA/comments/1rjjvqo/vllm_on_v100_for_qwen_newer_models/o8f1nmi/
false
1
t1_o8f1jt5
M5 Pro supports up to 64GB of unified memory with up to 307GB/s of memory bandwidth, while M5 Max supports up to 128GB of unified memory with up to 614GB/s of memory bandwidth.
1
0
2026-03-03T14:45:10
sunshinecheung
false
null
0
o8f1jt5
false
/r/LocalLLaMA/comments/1rjqsv6/apple_unveils_m5_pro_and_m5_max_citing_up_to_4/o8f1jt5/
false
1
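As a rough way to translate the bandwidth figures above into expected decode speed: single-stream token generation is usually memory-bandwidth bound, so an upper bound is bandwidth divided by the bytes read per token (roughly the size of the active weights). A minimal sketch with illustrative model sizes; real throughput will be lower.

```python
# Rule-of-thumb ceiling on token generation when decode is memory-bandwidth bound:
# tokens/s <= memory_bandwidth / bytes_read_per_token, and for a dense model the
# bytes read per token are roughly the size of the quantized weights.
def decode_ceiling_tps(bandwidth_gb_s: float, active_weights_gb: float) -> float:
    return bandwidth_gb_s / active_weights_gb

# Illustrative example: a dense model whose quantized weights take ~10 GB.
for name, bw in [("M5 Pro (307 GB/s)", 307.0), ("M5 Max (614 GB/s)", 614.0)]:
    print(f"{name}: <= {decode_ceiling_tps(bw, 10.0):.0f} tokens/s")
```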
t1_o8f1jvu
https://www.google.com/search?channel=entpr&q=how+to+ask+technical+questions+about+when+program+does+not+work
1
0
2026-03-03T14:45:10
MelodicRecognition7
false
null
0
o8f1jvu
false
/r/LocalLLaMA/comments/1rjjvqo/vllm_on_v100_for_qwen_newer_models/o8f1jvu/
false
1
t1_o8f1ia4
So the inference was done locally, no network connection needed?
1
0
2026-03-03T14:44:57
hejj
false
null
0
o8f1ia4
false
/r/LocalLLaMA/comments/1rjcqm5/qwen_35_4b_is_scary_smart/o8f1ia4/
false
1
t1_o8f1dvw
It's always better to keep everything in VRAM. But VRAM is not infinite, unlike our ambition. That's another reason to use MoE models: their generation speed does not suffer that much from RAM offloading. Some even offload all experts to RAM and use them like that.
1
0
2026-03-03T14:44:20
catlilface69
false
null
0
o8f1dvw
false
/r/LocalLLaMA/comments/1rjqaci/new_to_local_coder_what_would_be_your_choice_for/o8f1dvw/
false
1
t1_o8f1a5y
[removed]
1
0
2026-03-03T14:43:46
[deleted]
true
null
0
o8f1a5y
false
/r/LocalLLaMA/comments/127eb6v/trainingfinetuning_and_other_basic_questions/o8f1a5y/
false
1
t1_o8f190q
No idea about video or audio models, but for images I also feel like you: I find more use cases where local use may be preferable to cloud:
- you can play a lot with sizes, editing, upscaling etc., with great control.
- there are small models that are amazing, like Z Image Turbo (for many images I prefer its output to ChatGPT's). That model in particular is super fast and mostly non-censored. Flux Klein also seems pretty good.
Also, image generation takes some minutes per picture. It doesn't need a huge context window and doesn't have super slow prompt processing, which are big limitations for local LLMs.
1
0
2026-03-03T14:43:36
mouseofcatofschrodi
false
null
0
o8f190q
false
/r/LocalLLaMA/comments/1rjqdkk/open_vs_closed_models_for_image_video_whats/o8f190q/
false
1
t1_o8f18qu
I find the 27B to be far more reliable in real world use (this is with the "precise" sampler parameter preset from the model card)
1
0
2026-03-03T14:43:34
MerePotato
false
null
0
o8f18qu
false
/r/LocalLLaMA/comments/1rjd4pv/qwen_25_3_35_smallest_models_incredible/o8f18qu/
false
1
t1_o8f13s0
We were all expecting more/better AI-specific silicon (AKA the Neural Accelerator), but you failed to mention:
* features up to **2× faster SSD speeds (14.5GB/s)**
* higher unified memory bandwidth
* the Apple N1 wireless chip for Wi-Fi 7 (faster downloads if your router can handle it)
& bro, at least link to SOMETHING: [https://search.brave.com/search?q=Apple+unveils+M5+Pro+and+M5+Max%2C+citing+up+to+4%C3%97+faster+LLM+prompt+processing+than+M4+Pro+and+M4+Max&source=desktop&summary=1&conversation=08cd81a2a51ecd9e211a599994b04083112d](https://search.brave.com/search?q=Apple+unveils+M5+Pro+and+M5+Max%2C+citing+up+to+4%C3%97+faster+LLM+prompt+processing+than+M4+Pro+and+M4+Max&source=desktop&summary=1&conversation=08cd81a2a51ecd9e211a599994b04083112d)
1
0
2026-03-03T14:42:51
tomByrer
false
null
0
o8f13s0
false
/r/LocalLLaMA/comments/1rjqsv6/apple_unveils_m5_pro_and_m5_max_citing_up_to_4/o8f13s0/
false
1
t1_o8f10a2
vLLM.
1
0
2026-03-03T14:42:21
clothopos
false
null
0
o8f10a2
false
/r/LocalLLaMA/comments/1rjnm7z/9b_or_35b_a3b_moe_for_16gb_vram_and_64gb_ram/o8f10a2/
false
1
t1_o8f0ykz
[removed]
1
0
2026-03-03T14:42:06
[deleted]
true
null
0
o8f0ykz
false
/r/LocalLLaMA/comments/1nrnkji/how_much_memory_do_you_need_for_gptoss20b/o8f0ykz/
false
1
t1_o8f0ydp
This summer would be a great time for OpenAI to release some open models based on the GPT5 architecture, especially since these new Qwen models definitely seem more intelligent.
1
0
2026-03-03T14:42:04
fulgencio_batista
false
null
0
o8f0ydp
false
/r/LocalLLaMA/comments/1rjmtav/i_really_hope_openai_eventually_opensources_the/o8f0ydp/
false
1
t1_o8f0vxf
Hey! Instead of guessing, I actually built a free tool for exactly this localops.tech. It covers 144 GPUs and 492 models with VRAM calculations and speed estimates. Just select your GPU and the model you want to run. Hope it helps! [localops.tech](http://localops.tech)
1
0
2026-03-03T14:41:42
mhd2002
false
null
0
o8f0vxf
false
/r/LocalLLaMA/comments/1r6jqot/smaller_model_in_vram_vs_larger_model_mostly_in/o8f0vxf/
false
1
t1_o8f0sdb
I actually built a free tool for exactly this localops.tech. It covers 144 GPUs and 492 models with VRAM calculations and speed estimates. Just select your GPU and the model you want to run. Hope it helps! [localops.tech](http://localops.tech)
1
0
2026-03-03T14:41:11
mhd2002
false
null
0
o8f0sdb
false
/r/LocalLLaMA/comments/1kjlq7g/i_am_gpu_poor/o8f0sdb/
false
1
t1_o8f0myq
Hey, also thinking of this mobo, did you end up building the rig?
1
0
2026-03-03T14:40:24
beefgroin
false
null
0
o8f0myq
false
/r/LocalLLaMA/comments/1qsxpa3/mc62g40_mainboard_for_multigpu_setup/o8f0myq/
false
1
t1_o8f0ig1
The benchmark is https://swe-rebench.com/ This work is about training tasks, but we now use the same pipeline to collect tasks for ReBench as well. We can collect better tasks in more languages for the benchmark too; if you have specific requests, please write.
1
0
2026-03-03T14:39:44
Fabulous_Pollution10
false
null
0
o8f0ig1
false
/r/LocalLLaMA/comments/1rjmnv4/meet_swerebenchv2_the_largest_open_multilingual/o8f0ig1/
false
1
t1_o8f0hi7
"Loving a model" is basically mental illness (ai pyschosis) and a big reason why people should run local / open source to begin with. Tuning based on your previous chats is not that hard with Unsloth notebooks.
1
0
2026-03-03T14:39:35
Adventurous-Lead99
false
null
0
o8f0hi7
false
/r/LocalLLaMA/comments/1rjmtav/i_really_hope_openai_eventually_opensources_the/o8f0hi7/
false
1
t1_o8f0gwo
Yeah, I bet it definitely would. This thing is a little beast.
1
0
2026-03-03T14:39:30
teachersecret
false
null
0
o8f0gwo
false
/r/LocalLLaMA/comments/1rirlau/breaking_the_small_qwen35_models_have_been_dropped/o8f0gwo/
false
1
t1_o8f0g85
You can use this tool I built to see what models you can run [localops.tech](http://localops.tech)
1
0
2026-03-03T14:39:25
mhd2002
false
null
0
o8f0g85
false
/r/LocalLLaMA/comments/1rezhyq/why_isnt_my_gpu_utilizing_all_of_its_vram/o8f0g85/
false
1
t1_o8f0cxf
Just paste the link to claude code, or @ the file. You don't really have to do anything. If the backend rejects an upload, it will install and use local tools to split the pdf or convert to md. Same with codex and others.
1
0
2026-03-03T14:38:55
666666thats6sixes
false
null
0
o8f0cxf
false
/r/LocalLLaMA/comments/1rjqfzc/skillmd_files_are_amazing_but_makingcreating_them/o8f0cxf/
false
1
t1_o8f078a
[removed]
1
0
2026-03-03T14:38:04
[deleted]
true
null
0
o8f078a
false
/r/LocalLLaMA/comments/1qk5tyx/48gb_vram_worth_attempting_local_coding_model/o8f078a/
false
1
t1_o8ezlno
I tried to use that quant with sglang. Man it was frustrating. Do you have the command you use/repo/specific quant?
1
0
2026-03-03T14:34:49
funding__secured
false
null
0
o8ezlno
false
/r/LocalLLaMA/comments/1rjjcyk/still_a_noob_is_anyone_actually_running_the/o8ezlno/
false
1
t1_o8ezl3k
Well, when I say "manual" I don't mean typing it all out. I meant you'd have to grab the YT transcript yourself, copy-paste text, or use OCR for a large PDF that's over 30MB (because that's the upload limit, for Claude at least). And then, if you have a large index of info you want turned into a skill, you'd have to create one skill at a time, like if you had a textbook or a huge expert guide. The point of "manual" is that you still have to gather all the pieces yourself.
1
0
2026-03-03T14:34:44
junianwoo
false
null
0
o8ezl3k
false
/r/LocalLLaMA/comments/1rjqfzc/skillmd_files_are_amazing_but_makingcreating_them/o8ezl3k/
false
1
t1_o8ezizq
Do you know which provider backend supports MTP now?
1
0
2026-03-03T14:34:25
foggyghosty
false
null
0
o8ezizq
false
/r/LocalLLaMA/comments/1rjnm7z/9b_or_35b_a3b_moe_for_16gb_vram_and_64gb_ram/o8ezizq/
false
1
t1_o8ezi3n
How fast is the token generation?
1
0
2026-03-03T14:34:17
baseketball
false
null
0
o8ezi3n
false
/r/LocalLLaMA/comments/1rjqsv6/apple_unveils_m5_pro_and_m5_max_citing_up_to_4/o8ezi3n/
false
1
t1_o8ezftw
If you are manually creating skills, you are doing it from an existing knowledge base of some kind, and even in that case it helps to have an AI (Claude is good at this) consume that knowledge base within a context window and identify how to restructure it into scripts, resources, assets, and the content of SKILL.md. Other commenters here are correct: the most reliable way I have found is, nearing the end of a context window, to ask Claude to capture a named skill incorporating all the learnings, false positives, dead ends, bad assumptions, and eventual decisions you have made within that session. Next time you are operating and learn new things, have it update the skill and even refactor it by moving content from the main SKILL.md into resources that are referenced. I applaud the general thesis, but the skill is a distillation, not a recitation of a YouTube video or other static resource; it's a reference to the static resource plus best-practice behavior from the user's interaction. This is to say that skills should certainly be distributed by services and software to make them easily accessible, but the moment they are pulled into a project or repo, I have seen the greatest success in evolving them immediately to conform with your local project expectations and use cases.
1
0
2026-03-03T14:33:57
flavordrake
false
null
0
o8ezftw
false
/r/LocalLLaMA/comments/1rjqfzc/skillmd_files_are_amazing_but_makingcreating_them/o8ezftw/
false
1
t1_o8ezfko
You're asking for two 5090s. One 5080... best offer.
1
0
2026-03-03T14:33:54
Revolutionary_Loan13
false
null
0
o8ezfko
false
/r/LocalLLaMA/comments/1rianwb/running_qwen35_27b_dense_with_170k_context_at/o8ezfko/
false
1
t1_o8ezarr
I don't know the proper terminology, but they do all of it: if you just paste raw text, they run completion. If you use a chat template, you get instruct (with tools and all). If you use a FIM template, they work as great fill-in-the-middle copilots.
1
0
2026-03-03T14:33:11
666666thats6sixes
false
null
0
o8ezarr
false
/r/LocalLLaMA/comments/1rjpesa/qwen_35_what_is_base_version/o8ezarr/
false
1
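A minimal sketch of the three prompting modes described in the comment above, against a local OpenAI-compatible server (llama-server style); the port, endpoints, and especially the FIM sentinel tokens are assumptions that vary by model family, so check the model card before relying on them.

```python
import requests

BASE = "http://localhost:8080/v1"  # assumed local OpenAI-compatible server (e.g. llama-server)

# 1. Raw completion: a base model simply continues the text.
raw = requests.post(f"{BASE}/completions", json={
    "prompt": "The capital of France is",
    "max_tokens": 8,
}).json()

# 2. Chat/instruct: the server applies the model's chat template (tools and all).
chat = requests.post(f"{BASE}/chat/completions", json={
    "messages": [{"role": "user", "content": "What is the capital of France?"}],
}).json()

# 3. Fill-in-the-middle: sentinel tokens below are the Qwen-coder style ones
#    (an assumption; other model families use different FIM tokens).
fim_prompt = "<|fim_prefix|>def add(a, b):\n    <|fim_suffix|>\n<|fim_middle|>"
fim = requests.post(f"{BASE}/completions", json={"prompt": fim_prompt, "max_tokens": 16}).json()

print(raw, chat, fim, sep="\n")
```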
t1_o8ez7ge
Agree for production coding. Curious though — do you see small models fitting anywhere in a production pipeline, like edge inference or preprocessing? Or strictly local/prototyping use?
1
0
2026-03-03T14:32:41
Rough-Heart-7623
false
null
0
o8ez7ge
false
/r/LocalLLaMA/comments/1rjbw0p/benchmarked_qwen_35_small_models_08b2b4b9b_on/o8ez7ge/
false
1
t1_o8ez4fb
LLMs can't do reliable internet research using just SearXNG. There are many open source tools and many paid services that address this fact.
1
0
2026-03-03T14:32:14
zipzag
false
null
0
o8ez4fb
false
/r/LocalLLaMA/comments/1rjh5wg/unsloth_fixed_version_of_qwen3535ba3b_is/o8ez4fb/
false
1
t1_o8ez1k6
Or open the anus
0
0
2026-03-03T14:31:49
Lucky-Necessary-8382
false
null
0
o8ez1k6
false
/r/LocalLLaMA/comments/1rjmtav/i_really_hope_openai_eventually_opensources_the/o8ez1k6/
false
0
t1_o8eyz5q
This benchmark should include the coding variants; the 30B-A3B is not designed for coding. I'm wondering how this stacks up against the /coder variants of the 30B-A3B, and I think the 9B is still far from that.
1
0
2026-03-03T14:31:27
Ok-Internal9317
false
null
0
o8eyz5q
false
/r/LocalLLaMA/comments/1riwy9w/is_qwen359b_enough_for_agentic_coding/o8eyz5q/
false
1
t1_o8eywj9
[removed]
1
0
2026-03-03T14:31:03
[deleted]
true
null
0
o8eywj9
false
/r/LocalLLaMA/comments/1rjdeat/dual_rtx_3090_on_b550_70b_models_produce_garbage/o8eywj9/
false
1
t1_o8eysi4
I think base models are interesting. I talked to Grok about them yesterday for a refresher. Assistant models are trained to respond to your prompt. Base models continue text from the prompt. That's text completion, which is kinda cool imo. With text completion, if you wanted to know something, you wouldn't directly ask the model as you would with an assistant. So instead of "who won the 1984 world series?", you'd say "the team that won the 1984 world series was" and the model would finish from there. It would most likely name a team, but depending on how you word the prompt, the completion may or may not go where you want.
1
0
2026-03-03T14:30:27
ArchdukeofHyperbole
false
null
0
o8eysi4
false
/r/LocalLLaMA/comments/1rjpesa/qwen_35_what_is_base_version/o8eysi4/
false
1
t1_o8eysdb
[removed]
1
0
2026-03-03T14:30:26
[deleted]
true
null
0
o8eysdb
false
/r/LocalLLaMA/comments/1rjnm7z/9b_or_35b_a3b_moe_for_16gb_vram_and_64gb_ram/o8eysdb/
false
1
t1_o8eys8w
I run them on a 128GB M4 Max Mac. I just checked, not exactly twice as fast, but it feels like it. It's 15 tps for 27b and 25 tps for 122b-a10b for the same prompt.
1
0
2026-03-03T14:30:25
po_stulate
false
null
0
o8eys8w
false
/r/LocalLLaMA/comments/1rjof0g/qwen_35_nonthinking_scores_are_out_on_aa/o8eys8w/
false
1
t1_o8eyowp
but it was so confident! Qwen posts on this sub are hilarious
1
0
2026-03-03T14:29:55
WPBaka
false
null
0
o8eyowp
false
/r/LocalLLaMA/comments/1rjcqm5/qwen_35_4b_is_scary_smart/o8eyowp/
false
1
t1_o8eygbm
Or maybe OpenAI was tired of the constant "when are you finally going to give us an open model" questions and finally made gpt-oss as a "here you go, leave us alone", and by chance gpt-oss-120b turned out very well.
1
0
2026-03-03T14:28:40
jacek2023
false
null
0
o8eygbm
false
/r/LocalLLaMA/comments/1rjmtav/i_really_hope_openai_eventually_opensources_the/o8eygbm/
false
1
t1_o8eyffg
Awesome, thanks. Would it be better to keep the context window all in VRAM, or allow some spillover into system RAM, trading off MoE speed for more knowledge?
1
0
2026-03-03T14:28:31
queequegscoffin
false
null
0
o8eyffg
false
/r/LocalLLaMA/comments/1rjqaci/new_to_local_coder_what_would_be_your_choice_for/o8eyffg/
false
1
t1_o8eyf64
Based on some assumptions, I think that 4.1 should be around 1-1.1 trillion parameters. Not to mention that I imagine OpenAI designed other custom infrastructure to run it; I hardly think it would even be usable. At the end of the day it's the infrastructure that adapts to the model, not the other way around. BUT I'd love another gpt-oss series, possibly a multimodal one.
1
0
2026-03-03T14:28:29
SAPPHIR3ROS3
false
null
0
o8eyf64
false
/r/LocalLLaMA/comments/1rjmtav/i_really_hope_openai_eventually_opensources_the/o8eyf64/
false
1
t1_o8eyeeu
[removed]
1
0
2026-03-03T14:28:22
[deleted]
true
null
0
o8eyeeu
false
/r/LocalLLaMA/comments/1qd2qwo/any_uncensored_unfiltered_ai_that_has_a_good/o8eyeeu/
false
1
t1_o8ey9dt
[https://undressme.ai?ref=tp4dnuu1](https://undressme.ai?ref=tp4dnuu1) is best!!!
1
0
2026-03-03T14:27:37
Bulky_Coast_1004
false
null
0
o8ey9dt
false
/r/LocalLLaMA/comments/1r6z1os/what_is_the_best_uncensored_ai/o8ey9dt/
false
1
t1_o8ey7ro
The tool switching, semantic payloads, and sawtooth timing clearly show non-human behavior. Using prompt injections for detection could be a reliable way to spot LLM-based attacks. Interested in seeing more on this, thanks OP.
1
0
2026-03-03T14:27:23
Airscripts
false
null
0
o8ey7ro
false
/r/LocalLLaMA/comments/1rjq8w1/catching_an_ai_red_teamer_in_the_wild_using/o8ey7ro/
false
1