| name string | body string | score int64 | controversiality int64 | created timestamp[us] | author string | collapsed bool | edited timestamp[us] | gilded int64 | id string | locked bool | permalink string | stickied bool | ups int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
t1_o8140el | Yes, but this is not the same. You have to speculate dozens of tokens into the future to overcome MoE limits, and ngram-mod does this by just speculating about | 2 | 0 | 2026-03-01T10:08:15 | coder543 | false | null | 0 | o8140el | false | /r/LocalLLaMA/comments/1reh3ro/mtp_on_qwen35_35ba3b/o8140el/ | false | 2 |
t1_o813wm8 | This is a bot account: registered 4 years ago, first message ever 4 hours ago, wrote 2 large comments in 3 minutes, which is impossible for a live human. | 8 | 0 | 2026-03-01T10:07:16 | MelodicRecognition7 | false | null | 0 | o813wm8 | false | /r/LocalLLaMA/comments/1rhsto2/i_replaced_my_entire_automation_stack_with_mcp/o813wm8/ | false | 8 |
t1_o813jp8 | Sub-3% is not quite correct: a contract paid in CNY that would have cost $8000 a few months ago at 7.2 would now cost about $8400 at 6.86, which is a 5% cost increase in dollar terms.
Of course US-side inflation and demand are adding to the overall increase. | 0 | 0 | 2026-03-01T10:03:52 | Sufficient-Past-9722 | false | null | 0 | o813jp8 | false | /r/LocalLLaMA/comments/1rfp9sd/are_gpu_prices_rising_sharply_all_of_a_sudden/o813jp8/ | false | 0 |
t1_o813ikx | None of it matters. All of these are nonsense, and it has nothing to do with the tech or the tools or the platform used. Just think about it for a second - if there was a bulletproof method to make money with no work on Polymarket or any similar platform or the stock market etc., then these markets would not exist. Tha... | 1 | 0 | 2026-03-01T10:03:35 | tmvr | false | null | 0 | o813ikx | false | /r/LocalLLaMA/comments/1rhazbc/13_m1_mbp_instead_of_m4_mac_mini/o813ikx/ | false | 1 |
t1_o813e3v | Good point, thanks for the heads up! I've been meaning to test the Qwen 3.5 series. The 32B Qwen 2.5 has been solid for our use cases but if 3.5 is that much better, especially for code review and structured output, it's worth the migration effort. Have you noticed improvements specifically in tool calling and function... | 1 | 0 | 2026-03-01T10:02:24 | EquivalentGuitar7140 | false | null | 0 | o813e3v | false | /r/LocalLLaMA/comments/1rhtbwx/where_do_you_use_ai_in_your_workflow/o813e3v/ | false | 1 |
t1_o813d1b | There are many so-called DePIN projects where you can rent out your spare capacity with proper security protocols around it. Search and you will find. | 1 | 0 | 2026-03-01T10:02:07 | Professional_Mix2418 | false | null | 0 | o813d1b | false | /r/LocalLLaMA/comments/1rhsgqx/is_there_an_actual_need_for_people_to_host_models/o813d1b/ | false | 1 |
t1_o8139u9 | **Total Score:** 4280
**Memory Usage:** 46.8GB (29.9GB VRAM / 16.9GB RAM)
**Accuracy per VRAM/RAM:** 1.82%
**Context:** 47,360
The only difference from the post's config is `- --n-cpu-moe` set at 5, compared to the above, which oddly allowed 0 and full context.
This puts it between `Qwen3 Coder Next Unsloth UD-IQ3_XXS` a... | 7 | 0 | 2026-03-01T10:01:17 | Holiday_Purpose_3166 | false | null | 0 | o8139u9 | false | /r/LocalLLaMA/comments/1rhfque/qwen3_coder_next_qwen35_27b_devstral_small_2_rust/o8139u9/ | false | 7 |
t1_o8136rn | Great question - the trust issue is real and it's the biggest barrier for self-hosted inference services.
A few approaches that work:
1. Zero-logging policy with proof: Open-source your proxy/gateway code so users can verify you're not storing prompts or responses. Transparency builds trust faster than promises.
... | 1 | 0 | 2026-03-01T10:00:28 | EquivalentGuitar7140 | false | null | 0 | o8136rn | false | /r/LocalLLaMA/comments/1rhsgqx/is_there_an_actual_need_for_people_to_host_models/o8136rn/ | false | 1 |
t1_o81324u | That is logical. Hopefully the claim about optimising for Huawei chips signals the downfall of the CUDA moat and will allow people to stop hogging Nvidia GPUs.
Though your argument is solid; increased demand probably won't lower any consumer GPU prices. | 1 | 0 | 2026-03-01T09:59:13 | notperson135 | false | null | 0 | o81324u | false | /r/LocalLLaMA/comments/1rh095c/deepseek_v4_will_be_released_next_week_and_will/o81324u/ | false | 1 |
t1_o81305z | I was wondering if Qwen Coder Next and Qwen 3.5 27B did well on the same tests, or if together they scored more than 87/100.
My idea would be a first pass of Qwen Coder Next, and a second pass of Qwen 3.5 27B on the tests that Coder Next failed. So you don't have to swap models constantly, just one time ... | 1 | 0 | 2026-03-01T09:58:42 | brahh85 | false | null | 0 | o81305z | false | /r/LocalLLaMA/comments/1rhfque/qwen3_coder_next_qwen35_27b_devstral_small_2_rust/o81305z/ | false | 1 |
t1_o812y98 | You will need a better system; it is how it is with a setup like yours. On my i5-8500T machine with dual-channel DDR4-2666 RAM I get the following pp/tg results in llama-bench for some small models:
`Qwen3 0.6B Q8_0 (0.6 GiB) = 248/32`
`Qwen3 1.7B Q8_0 (1.7 GiB) = 89/15`
`Qwen3 4B Q4_K_XL (2.4 GiB) = 33/... | 1 | 0 | 2026-03-01T09:58:11 | tmvr | false | null | 0 | o812y98 | false | /r/LocalLLaMA/comments/1rhblei/help_extremely_slow_prompt_processing_prefill_on/o812y98/ | false | 1 |
t1_o812v8y | The world is a nasty place. I recommend setting up proper sandboxes to access these kinds of resources. | 1 | 0 | 2026-03-01T09:57:21 | Budget-Juggernaut-68 | false | null | 0 | o812v8y | false | /r/LocalLLaMA/comments/1rhsgqx/is_there_an_actual_need_for_people_to_host_models/o812v8y/ | false | 1 |
t1_o812toi | [removed] | 1 | 0 | 2026-03-01T09:56:56 | [deleted] | true | null | 0 | o812toi | false | /r/LocalLLaMA/comments/1rhjg6w/longcatflashlite_685b_maybe_a_relatively_good/o812toi/ | false | 1 |
t1_o812set | This is a great new feature.
But I see that the model variants don't automatically pull through via the /v1/models API. However, they do show up as aliases on the web interface.
I experimented by manually adding the variants under the 'aliases' section, but did not see them pull through via the above API. So perhaps ... | 3 | 0 | 2026-03-01T09:56:36 | Aggravating-Low-8224 | false | null | 0 | o812set | false | /r/LocalLLaMA/comments/1rhohqk/how_to_switch_qwen_35_thinking_onoff_without/o812set/ | false | 3 |
t1_o812qg3 | Thank you very much! | 1 | 0 | 2026-03-01T09:56:04 | Competitive_Book4151 | false | null | 0 | o812qg3 | false | /r/LocalLLaMA/comments/1rhkfek/built_a_localfirst_ai_agent_for_my_own_setup/o812qg3/ | false | 1 |
t1_o812ndw | I use CodeRabbit to analyze Java merge requests, and it's pretty useful | 2 | 0 | 2026-03-01T09:55:16 | EiffelPower76 | false | null | 0 | o812ndw | false | /r/LocalLLaMA/comments/1rhtbwx/where_do_you_use_ai_in_your_workflow/o812ndw/ | false | 2 |
t1_o812md4 | I think you should consider newer models like the Qwen3.5 series now; it outclasses the current Qwen2.5 models you are using by a large amount. | 2 | 0 | 2026-03-01T09:54:59 | Deep-Vermicelli-4591 | false | null | 0 | o812md4 | false | /r/LocalLLaMA/comments/1rhtbwx/where_do_you_use_ai_in_your_workflow/o812md4/ | false | 2 |
t1_o812jqa | I am using the 110B parameter model for unsupervised agentic coding. I've previously only been able to use gpt-oss-120b, and only in a limited setting because I've never been able to entirely trust that what it does is the right thing. I've had to verify that it hasn't done anything crazy, and it often does something t... | 0 | 0 | 2026-03-01T09:54:17 | audioen | false | null | 0 | o812jqa | false | /r/LocalLLaMA/comments/1rgtxry/is_qwen35_a_coding_game_changer_for_anyone_else/o812jqa/ | false | 0 |
t1_o812j7d | Don’t forget to experiment with `sudo sysctl iogpu.wired_limit_mb=<mb>`. You may be able to set it to 18 GB (18432 MB) or even 20 GB (20480 MB). With relatively small contexts, 4-bit MLX gpt-oss will fit, and gemma-3-27b-it-qat 4-bit will barely squeeze in. Qwen3.5-27b Q4 should be fine. With Qwen3.5-35B you’ll be limited ... | 1 | 0 | 2026-03-01T09:54:09 | sedtamensum | false | null | 0 | o812j7d | false | /r/LocalLLaMA/comments/1rhi4oy/new_macbook_air_m4_24gb_of_ram_do_you_have_this/o812j7d/ | false | 1 |
t1_o812fo3 | Thanks - yeah unsurprisingly qwen 3.5 35b MoE is what I'm expecting to be my daily driver on this...with at least an attempt to see how 70B models run with some offloading. | 1 | 0 | 2026-03-01T09:53:11 | youcloudsofdoom | false | null | 0 | o812fo3 | false | /r/LocalLLaMA/comments/1rhdjqf/havering_between_powerlimmed_dual_3090s_and_a/o812fo3/ | false | 1 |
t1_o812b74 | I think “thankfully” went like this:
- Ban them immediately!
- But sir, we are about to strike Iran and are preparing for this and this and that other objectives.
- Mmm, estimated time for end of operations?
- About 8 months, sir.
- I give you 6. | 1 | 0 | 2026-03-01T09:52:00 | Legitimate-Pumpkin | false | null | 0 | o812b74 | false | /r/LocalLLaMA/comments/1rhogov/the_us_used_anthropic_ai_tools_during_airstrikes/o812b74/ | false | 1 |
t1_o8129wo | Well, I am also new to that lmao, but one user recommended [aihorde.net](http://aihorde.net), though it got a few downvotes. There are definitely places where compute is shared, though it is usually in favor of high demand vs relatively low compute | 0 | 0 | 2026-03-01T09:51:39 | Key_Pace_9755 | false | null | 0 | o8129wo | false | /r/LocalLLaMA/comments/1rhsgqx/is_there_an_actual_need_for_people_to_host_models/o8129wo/ | false | 0 |
t1_o8127sw | Honestly, I am not surprised: most people are not interested in AI to begin with; those who are but do not care about privacy would use cloud AI instead; and those who are concerned about privacy would not feel comfortable with you having access to any kind of logging - I know I definitely would not be OK with someone b... | 2 | 0 | 2026-03-01T09:51:07 | Lissanro | false | null | 0 | o8127sw | false | /r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o8127sw/ | false | 2 |
t1_o812756 | Ah, this is very good to note, as I tend to work across quite large contexts...hadn't thought about that dropoff, thank you. | 1 | 0 | 2026-03-01T09:50:56 | youcloudsofdoom | false | null | 0 | o812756 | false | /r/LocalLLaMA/comments/1rhdjqf/havering_between_powerlimmed_dual_3090s_and_a/o812756/ | false | 1 |
t1_o81223c | CTO here running a mix of local and cloud models across our entire dev workflow. Here's exactly what we use and where:
1. Code generation and refactoring: Claude Code for complex multi-file changes, Cursor with Claude 4 Sonnet for everyday coding. Local Qwen 2.5 32B for quick completions when I don't want to hit API... | 5 | 0 | 2026-03-01T09:49:37 | EquivalentGuitar7140 | false | null | 0 | o81223c | false | /r/LocalLLaMA/comments/1rhtbwx/where_do_you_use_ai_in_your_workflow/o81223c/ | false | 5 |
t1_o811zod | It would be wild if Grump decided to handicap the Pentagon hours before declaring war just because some businessman pissed him off. Thankfully they didn't "IMMEDIATELY CEASE" everything just because he tweeted. | 1 | 0 | 2026-03-01T09:48:58 | Chilidawg | false | null | 0 | o811zod | false | /r/LocalLLaMA/comments/1rhogov/the_us_used_anthropic_ai_tools_during_airstrikes/o811zod/ | false | 1 |
t1_o811qya | Electricity costs are $0.11/kWh for me atm though | 1 | 0 | 2026-03-01T09:46:42 | soyalemujica | false | null | 0 | o811qya | false | /r/LocalLLaMA/comments/1rhrg47/open_source_llm_comparable_to_gpt41/o811qya/ | false | 1 |
t1_o811p9b | Yeah, it totally felt weird lmao. I mean, since when is sharing resources just 'cause you want to an evil thing to do? When I was on my old PC with limited VRAM I did wanna play around with a bit bigger models and test them and use them but couldn't do it, so I thought hey, now I might share some of it when the PC is not be... | 1 | 0 | 2026-03-01T09:46:14 | Key_Pace_9755 | false | null | 0 | o811p9b | false | /r/LocalLLaMA/comments/1rhsgqx/is_there_an_actual_need_for_people_to_host_models/o811p9b/ | false | 1 |
t1_o811p41 | As someone new to this, may I ask if there is anything decentralized out there where people can share local compute? Thought you might know. Thanks | 1 | 0 | 2026-03-01T09:46:12 | Imaginary_Cellist272 | false | null | 0 | o811p41 | false | /r/LocalLLaMA/comments/1rhsgqx/is_there_an_actual_need_for_people_to_host_models/o811p41/ | false | 1 |
t1_o811nkp | Legend for making this post tho | 1 | 0 | 2026-03-01T09:45:47 | Effective_Panic756 | false | null | 0 | o811nkp | false | /r/LocalLLaMA/comments/1ozu5v4/20000_epstein_files_in_a_single_text_file/o811nkp/ | false | 1 |
t1_o811l3p | To me, ChatGPT feels like that "know-it-all twit" who pretends to be smart in a patronizing way. | 14 | 0 | 2026-03-01T09:45:07 | Luvirin_Weby | false | null | 0 | o811l3p | false | /r/LocalLLaMA/comments/1rhogov/the_us_used_anthropic_ai_tools_during_airstrikes/o811l3p/ | false | 14 |
t1_o811l0k | It wasn't. That pic of the failed launch is not even from the same province | 4 | 1 | 2026-03-01T09:45:06 | Nobby_Binks | false | null | 0 | o811l0k | false | /r/LocalLLaMA/comments/1rhogov/the_us_used_anthropic_ai_tools_during_airstrikes/o811l0k/ | false | 4 |
t1_o811kzl | Not really. The models are getting better, but "doing more with less" requires focused specialization. General MoE architecture is actually built on this principle - using only a few related experts instead of plowing through all the collected knowledge helps to reduce compute requirements; however, this happens at the c... | 1 | 0 | 2026-03-01T09:45:05 | Prudent-Ad4509 | false | null | 0 | o811kzl | false | /r/LocalLLaMA/comments/1rhdjqf/havering_between_powerlimmed_dual_3090s_and_a/o811kzl/ | false | 1 |
t1_o811jdf | I don’t know.
For me it’s not worth it to do something like this. | 1 | 0 | 2026-03-01T09:44:40 | ProfessionalSpend589 | false | null | 0 | o811jdf | false | /r/LocalLLaMA/comments/1rhsgqx/is_there_an_actual_need_for_people_to_host_models/o811jdf/ | false | 1 |
t1_o811ihs | Re: offloading, for short prompts with short to medium responses the Mac will probably be faster, but as the context gets longer, the 3090s' speed deteriorates more slowly than the Mac's does, so if you charted the two, there would be a point where the two lines crossed, which is where the 3090s would continue chugging along whil... | 1 | 0 | 2026-03-01T09:44:26 | datbackup | false | null | 0 | o811ihs | false | /r/LocalLLaMA/comments/1rhdjqf/havering_between_powerlimmed_dual_3090s_and_a/o811ihs/ | false | 1 |
t1_o811flx | People who acted as if it's a big deal and as if you're evil for hosting a free service using your own resources are probably bots from companies like closedai or anthropic scared of losing customers if everyone just started hosting free services with open models. | -4 | 0 | 2026-03-01T09:43:39 | po_stulate | false | null | 0 | o811flx | false | /r/LocalLLaMA/comments/1rhsgqx/is_there_an_actual_need_for_people_to_host_models/o811flx/ | false | -4 |
t1_o811e4q | When using ngram-mod https://github.com/ggml-org/llama.cpp/pull/19164 in llama.cpp with Minimax-M2.5 I get up to 2x speedup in coding tasks (tg from 22 to 35 on average) | 1 | 0 | 2026-03-01T09:43:16 | Responsible_Pain3278 | false | null | 0 | o811e4q | false | /r/LocalLLaMA/comments/1reh3ro/mtp_on_qwen35_35ba3b/o811e4q/ | false | 1 |
t1_o81198d | Thanks, not sure why the downvotes on the post, but appreciate the comment. | 1 | 0 | 2026-03-01T09:41:57 | fredconex | false | null | 0 | o81198d | false | /r/LocalLLaMA/comments/1rhiwwk/arandu_v057beta_llamacpp_app_like_lm_studio_ollama/o81198d/ | false | 1 |
t1_o8116o6 | AI has completely ruined press releases; when it's clear something was written by AI, it's like a 50/50 shot now whether they actually meant what it said or it just said something stupid and they didn't catch it when they reviewed it | 4 | 0 | 2026-03-01T09:41:16 | Shawnj2 | false | null | 0 | o8116o6 | false | /r/LocalLLaMA/comments/1rhogov/the_us_used_anthropic_ai_tools_during_airstrikes/o8116o6/ | false | 4 |
t1_o810r9s | You balance the risks: the possibility of something bad or uncomfortable happening after being listened to by Uncle Sam, versus my friend making bad use of the secrets they find in there. Then you make the decision.
I guess the decision gets easier the cleaner your mind (and your chatgpt history) is! | 16 | 0 | 2026-03-01T09:37:13 | Salinas2498 | false | null | 0 | o810r9s | false | /r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o810r9s/ | false | 16 |
t1_o810qu9 | No, I was just talking about hosting a single model on my GPU via llama.cpp and then giving access to it using Cloudflare tunnels, not actually giving out my GPU for compute | -1 | 0 | 2026-03-01T09:37:06 | Key_Pace_9755 | false | null | 0 | o810qu9 | false | /r/LocalLLaMA/comments/1rhsgqx/is_there_an_actual_need_for_people_to_host_models/o810qu9/ | false | -1 |
t1_o810nyy | Added the Opus 4.6 score; it's somewhat disappointing (worse than Sonnet 4.6, tied with GLM-5). Maybe it would be better with max effort.
https://preview.redd.it/9ff4zrldlemg1.png?width=1000&format=png&auto=webp&s=75e8f73eec130fa4f075cbf8cdca691efea9172d
| 1 | 0 | 2026-03-01T09:36:18 | fairydreaming | false | null | 0 | o810nyy | false | /r/LocalLLaMA/comments/1rg9lli/little_qwen_35_27b_and_qwen_35ba3b_models_did/o810nyy/ | false | 1 |
t1_o810meu | Ah, excellent. That was a smart choice. Do you plan to bench the High level? It's a very popular one.
Also, as feedback: you might want to note the reasoning levels somewhere if you ever test multiple. And GPTs on High and Medium are very fast and much cheaper to test. | 1 | 0 | 2026-03-01T09:35:53 | Alex_1729 | false | null | 0 | o810meu | false | /r/LocalLLaMA/comments/1reds0p/qwen_35_craters_on_hard_coding_tasks_tested_all/o810meu/ | false | 1 |
t1_o810lmx | GOAT
https://preview.redd.it/6zsshocilemg1.jpeg?width=828&format=pjpg&auto=webp&s=cbc8cd33da92bc9e6feafc55ee4c91d8b72e9486 | 1 | 0 | 2026-03-01T09:35:40 | Remote-Echidna7078 | false | null | 0 | o810lmx | false | /r/LocalLLaMA/comments/1qwlhb5/i_bargained_kimi_plus_down_to_099_using_this/o810lmx/ | false | 1 |
t1_o810i2w | In which aspects does OSS-120B outperform the 80B version?
While I haven’t used OSS extensively for coding/dev tasks, I find Qwen3.5 (120B) excellent for everyday use, though it’s slower than OSS and tends to overthink. I’m currently using the 80B model with OpenCode and am really impressed. | 1 | 0 | 2026-03-01T09:34:42 | No_Algae1753 | false | null | 0 | o810i2w | false | /r/LocalLLaMA/comments/1rgoygs/does_qwen35_35b_outperform_qwen3_coder_next_80b/o810i2w/ | false | 1 |
t1_o810hfr | [deleted] | 1 | 0 | 2026-03-01T09:34:32 | [deleted] | true | null | 0 | o810hfr | false | /r/LocalLLaMA/comments/1rdef9x/qwen35397ba17budtq1_bench_results_fw_desktop/o810hfr/ | false | 1 |
t1_o810gbb | Thank you! That's very inspiring. Do you intend to allow external follow-up on the project, like GitHub or something? Sorry if double post. | 1 | 0 | 2026-03-01T09:34:13 | Sir-Pay-a-lot | false | null | 0 | o810gbb | false | /r/LocalLLaMA/comments/1rhg3p4/baremetal_ai_booting_directly_into_llm_inference/o810gbb/ | false | 1 |
t1_o810bhx | Oh, I see. I was worried about that part before hosting; I did a bit of research and ended up using llama.cpp and a Cloudflare tunnel. Is the risk actually substantial enough to not do it in the first place? | 1 | 0 | 2026-03-01T09:32:54 | Key_Pace_9755 | false | null | 0 | o810bhx | false | /r/LocalLLaMA/comments/1rhsgqx/is_there_an_actual_need_for_people_to_host_models/o810bhx/ | false | 1 |
t1_o810bc4 | You seem to be confusing matters. Making your GPU available isn't the same as hosting an LLM. Hosting an LLM does require a GPU.
You are clear as mud as to what you actually want to do. There are lots of projects and services where you can monetize your GPU, or don't if you don't want to. The choice is yours. | 4 | 0 | 2026-03-01T09:32:51 | Professional_Mix2418 | false | null | 0 | o810bc4 | false | /r/LocalLLaMA/comments/1rhsgqx/is_there_an_actual_need_for_people_to_host_models/o810bc4/ | false | 4 |
t1_o8107cj | Great reply! Hey, also btw, how do you manage the issue of people not trusting to send their data to your personal servers? I am curious about that one
| 0 | 0 | 2026-03-01T09:31:48 | Key_Pace_9755 | false | null | 0 | o8107cj | false | /r/LocalLLaMA/comments/1rhsgqx/is_there_an_actual_need_for_people_to_host_models/o8107cj/ | false | 0 |
t1_o81074t | What's the prompt processing speed on that setup after you reach like 50k ctx or more? | 1 | 0 | 2026-03-01T09:31:44 | jslominski | false | null | 0 | o81074t | false | /r/LocalLLaMA/comments/1rhrg47/open_source_llm_comparable_to_gpt41/o81074t/ | false | 1 |
t1_o8106kp | Sounds like there wasn't a real problem though. Else it would've been actively used.
I've spent lots of time setting up a media server for the family (also as a fun side project to learn about networking and deployment), spun up lots of different things, like a local cloud server, PiHole, figured out ways to securel... | 5 | 0 | 2026-03-01T09:31:35 | Budget-Juggernaut-68 | false | null | 0 | o8106kp | false | /r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o8106kp/ | false | 5 |
t1_o81048c | Have you tried something newer, like qwen 3/3.5? | 5 | 0 | 2026-03-01T09:30:58 | CatEatsDogs | false | null | 0 | o81048c | false | /r/LocalLLaMA/comments/1rhsto2/i_replaced_my_entire_automation_stack_with_mcp/o81048c/ | false | 5 |
t1_o8103zw | Zimage-base | 2 | 0 | 2026-03-01T09:30:54 | ComposerGen | false | null | 0 | o8103zw | false | /r/LocalLLaMA/comments/1rhohkr/what_is_the_best_model_for_image_creation_with/o8103zw/ | false | 2 |
t1_o8100lo | This is a known issue with Qwen models in Ollama specifically around tool calling format. The problem is that Qwen uses a different tool call format than what OpenWebUI expects by default.
Here's what's happening: gpt-oss and nemotron follow the standard OpenAI function calling format natively, which OpenWebUI passe... | 1 | 0 | 2026-03-01T09:30:00 | EquivalentGuitar7140 | false | null | 0 | o8100lo | false | /r/LocalLLaMA/comments/1rhmwfn/cant_get_qwen_models_to_work_with_tool_calls/o8100lo/ | false | 1 |
t1_o80zx0i | The AI Max 395s are in my budget, but would they offer comparable performance to the undervolted 3090s? Or just a sidegrade from the Mac? | 1 | 0 | 2026-03-01T09:29:03 | youcloudsofdoom | false | null | 0 | o80zx0i | false | /r/LocalLLaMA/comments/1rhdjqf/havering_between_powerlimmed_dual_3090s_and_a/o80zx0i/ | false | 1 |
t1_o80ztgu | GPT-4.1 is almost a year old and performs on par with GPT-OSS 20b by most accounts.
Qwen3.5 27b should be better at almost everything. | 8 | 0 | 2026-03-01T09:28:06 | MrPecunius | false | null | 0 | o80ztgu | false | /r/LocalLLaMA/comments/1rhrg47/open_source_llm_comparable_to_gpt41/o80ztgu/ | false | 8 |
t1_o80zt42 | Well okay lmao, relax. I am not forcing you to do it, neither did I say you need to send sensitive stuff or data. I completely get it; I am just a hobbyist doing this for fun, not everything needs to be money-minded, dude, relax. And yes, I know it ain't free, but hosting a single 5090 for like 5 nights ain't that expensive e... | -5 | 0 | 2026-03-01T09:28:00 | Key_Pace_9755 | false | null | 0 | o80zt42 | false | /r/LocalLLaMA/comments/1rhsgqx/is_there_an_actual_need_for_people_to_host_models/o80zt42/ | false | -5 |
t1_o80zrgr | Actually I can just about squeeze a 128GB AI Max 395 out of that budget. I've found ROCm fiddly but not impossible, so I'm open to it, but I assumed that the performance tradeoff was going to be similar to the Mac against the 3090s, is that not the case? | 1 | 0 | 2026-03-01T09:27:34 | youcloudsofdoom | false | null | 0 | o80zrgr | false | /r/LocalLLaMA/comments/1rhdjqf/havering_between_powerlimmed_dual_3090s_and_a/o80zrgr/ | false | 1 |
t1_o80zr2l | All OpenAI models are on XHigh reasoning. | 1 | 0 | 2026-03-01T09:27:28 | hauhau901 | false | null | 0 | o80zr2l | false | /r/LocalLLaMA/comments/1reds0p/qwen_35_craters_on_hard_coding_tasks_tested_all/o80zr2l/ | false | 1 |
t1_o80zmjb | Ah, that's extremely helpful of you. I wasn't really thinking of getting into actually selling the compute or something; I just wanted to do it for fun for a little bit, just being into tech in general, you know. This is my first high-end PC; it's just a thing, kinda like people running Folding@home for free. Your take on it i... | 0 | 0 | 2026-03-01T09:26:14 | Key_Pace_9755 | false | null | 0 | o80zmjb | false | /r/LocalLLaMA/comments/1rhsgqx/is_there_an_actual_need_for_people_to_host_models/o80zmjb/ | false | 0 |
t1_o80zj5h | Your pivot from open-ended summarization to constrained JSON generation is exactly the pattern I've seen work in production with small models. The insight that SLMs excel at structured output with defined schemas but struggle with free-form synthesis is underappreciated.
I've been building something similar with MCP... | 1 | 0 | 2026-03-01T09:25:20 | EquivalentGuitar7140 | false | null | 0 | o80zj5h | false | /r/LocalLLaMA/comments/1rhqy4o/used_smollm2_17b_on_device_for_telegram_group/o80zj5h/ | false | 1 |
t1_o80zhgc | Yeah, my needs really are just agentic coding work, and it's a good point about increasing RAM space for offloading were I to want to move into bigger models anyway. Would offloading onto DDR5 RAM and running the 3090s at 300W still give a performance edge over the Mac, do you think? | 1 | 0 | 2026-03-01T09:24:53 | youcloudsofdoom | false | null | 0 | o80zhgc | false | /r/LocalLLaMA/comments/1rhdjqf/havering_between_powerlimmed_dual_3090s_and_a/o80zhgc/ | false | 1 |
t1_o80zg6q | For free? Compute and electricity ain't free. And I'd rather send my data to cloud providers than some random dude online. | 6 | 0 | 2026-03-01T09:24:32 | Budget-Juggernaut-68 | false | null | 0 | o80zg6q | false | /r/LocalLLaMA/comments/1rhsgqx/is_there_an_actual_need_for_people_to_host_models/o80zg6q/ | false | 6 |
t1_o80zda7 | Is it so hard to understand? Your family doesn't want you to have the possibility to see the intimate questions they ask LLMs. Build a panopticon for someone else, and check the logs there | 2 | 0 | 2026-03-01T09:23:44 | Educational_Sun_8813 | false | null | 0 | o80zda7 | false | /r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o80zda7/ | false | 2 |
t1_o80zcj0 | Depending on the software you use, a vulnerability may be discovered soon which may allow some kind of crime to be committed through your computer.
With all computer evidence pointing to you.
| 3 | 0 | 2026-03-01T09:23:32 | ProfessionalSpend589 | false | null | 0 | o80zcj0 | false | /r/LocalLLaMA/comments/1rhsgqx/is_there_an_actual_need_for_people_to_host_models/o80zcj0/ | false | 3 |
t1_o80zbiq | I use them all day for work. It's probably one of the systems that gets hit the hardest all day. | 1 | 0 | 2026-03-01T09:23:16 | Budget-Juggernaut-68 | false | null | 0 | o80zbiq | false | /r/LocalLLaMA/comments/1rhsgqx/is_there_an_actual_need_for_people_to_host_models/o80zbiq/ | false | 1 |
t1_o80zbdl | GPT 5.3 Codex at what reasoning level? I can't find this information in your post or on your site. Same for the others. | 1 | 0 | 2026-03-01T09:23:13 | Alex_1729 | false | null | 0 | o80zbdl | false | /r/LocalLLaMA/comments/1reds0p/qwen_35_craters_on_hard_coding_tasks_tested_all/o80zbdl/ | false | 1 |
t1_o80zaie | Yeah the rise of MoE and quants has been what's provoked me into branching out into this space. Any indication the field is moving towards 'Doing more with less' in terms of model capacity at small sizes? | 1 | 0 | 2026-03-01T09:22:59 | youcloudsofdoom | false | null | 0 | o80zaie | false | /r/LocalLLaMA/comments/1rhdjqf/havering_between_powerlimmed_dual_3090s_and_a/o80zaie/ | false | 1 |
t1_o80za40 | I literally hosted it yesterday
[Letting my RTX 5090 (2.1 TB/s mem) stretch its legs tonight. Hosting Qwen 3.5 35B at 8-batch parallel for whoever wants to test the new model cause why not (35 k context) : r/LocalLLaMA](https://www.reddit.com/r/LocalLLaMA/comments/1rhflqn/letting_my_rtx_5090_21_tbs_mem_stretch_its_l... | 1 | 0 | 2026-03-01T09:22:53 | Key_Pace_9755 | false | null | 0 | o80za40 | false | /r/LocalLLaMA/comments/1rhsgqx/is_there_an_actual_need_for_people_to_host_models/o80za40/ | false | 1 |
t1_o80z88t | Short answer: yes, there's genuine demand, but the sustainable model isn't free hosting - it's building an inference API layer with proper rate limiting and usage tracking.
I've been running MCP servers backed by local models (Qwen 2.5 32B + Llama 3.3 70B on dual 3090s) and exposing them via OpenAI-compatible endpoi... | 1 | 0 | 2026-03-01T09:22:22 | EquivalentGuitar7140 | false | null | 0 | o80z88t | false | /r/LocalLLaMA/comments/1rhsgqx/is_there_an_actual_need_for_people_to_host_models/o80z88t/ | false | 1 |
t1_o80z6vw | Thank you | 1 | 0 | 2026-03-01T09:22:00 | Constant-Simple-1234 | false | null | 0 | o80z6vw | false | /r/LocalLLaMA/comments/1rg4zqv/followup_qwen3535ba3b_7_communityrequested/o80z6vw/ | false | 1 |
t1_o80z53h | Very good. | 1 | 0 | 2026-03-01T09:21:31 | Advanced-Exit4664 | false | null | 0 | o80z53h | false | /r/LocalLLaMA/comments/1rhiwwk/arandu_v057beta_llamacpp_app_like_lm_studio_ollama/o80z53h/ | false | 1 |
t1_o80z4kv | > The electricity cost alone to run A3B at speed for a whole month, let’s say 4 to 6 hours a day, will be a lot more than $10
Not on a Mac! My binned M4 Pro MBP/48GB pulls a measured 65W during inference. I run Qwen3 30B A3B 8-bit MLX @ ~55 t/s.
Even here in SoCal with insane electricity prices ($0.40/kWh), that's les... | 5 | 0 | 2026-03-01T09:21:23 | MrPecunius | false | null | 0 | o80z4kv | false | /r/LocalLLaMA/comments/1rhrg47/open_source_llm_comparable_to_gpt41/o80z4kv/ | false | 5 |
t1_o80z4ij | I'd just be happy my wife let me spend so much on hardware for myself that she gets nothing out of. | 2 | 0 | 2026-03-01T09:21:22 | cosmicr | false | null | 0 | o80z4ij | false | /r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o80z4ij/ | false | 2 |
t1_o80z173 | Thanks! Heroic work. | 1 | 0 | 2026-03-01T09:20:27 | Constant-Simple-1234 | false | null | 0 | o80z173 | false | /r/LocalLLaMA/comments/1rg4zqv/followup_qwen3535ba3b_7_communityrequested/o80z173/ | false | 1 |
t1_o80z0su | If you use copilot a lot, then running a local model makes no sense financially. Just the electricity alone will likely be more than 10 bucks a month for a strong coding model (say minimax m2.5). | 1 | 0 | 2026-03-01T09:20:20 | LagOps91 | false | null | 0 | o80z0su | false | /r/LocalLLaMA/comments/1rhrg47/open_source_llm_comparable_to_gpt41/o80z0su/ | false | 1 |
t1_o80z0gk | Wait, what? I don't understand what's so bad with it. I am just into tech so I got a nice PC. Is there something bad with sharing? I am a doctor by profession
| 0 | 0 | 2026-03-01T09:20:15 | Key_Pace_9755 | false | null | 0 | o80z0gk | false | /r/LocalLLaMA/comments/1rhsgqx/is_there_an_actual_need_for_people_to_host_models/o80z0gk/ | false | 0 |
t1_o80yzty | By the way, you can use the > symbol to write quotes:
> they come out like this. | 7 | 0 | 2026-03-01T09:20:05 | cosmicr | false | null | 0 | o80yzty | false | /r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o80yzty/ | false | 7 |
t1_o80yyxc | Thanks for your work. Your quants are my usual go-to. | 1 | 0 | 2026-03-01T09:19:50 | Constant-Simple-1234 | false | null | 0 | o80yyxc | false | /r/LocalLLaMA/comments/1rg4zqv/followup_qwen3535ba3b_7_communityrequested/o80yyxc/ | false | 1 |
t1_o80yuuq | Update: I have filed a ticket for the model unloading issue: [https://github.com/ggml-org/llama.cpp/issues/20002](https://github.com/ggml-org/llama.cpp/issues/20002) | 1 | 0 | 2026-03-01T09:18:44 | anubhav_200 | false | null | 0 | o80yuuq | false | /r/LocalLLaMA/comments/1rh6455/anybody_able_to_get_qwen3535ba3b_working_with/o80yuuq/ | false | 1 |
t1_o80ysgm | Perhaps a stupid question, but how did/would you deal with data corruption? (like packet loss).
Cool project!
| 1 | 0 | 2026-03-01T09:18:06 | adeukis | false | null | 0 | o80ysgm | false | /r/LocalLLaMA/comments/1rhg3p4/baremetal_ai_booting_directly_into_llm_inference/o80ysgm/ | false | 1 |
t1_o80ynh0 | Knew your geolocation before confirming it. Consider doing something productive | 1 | 0 | 2026-03-01T09:16:47 | Brilliant-Driver2660 | false | null | 0 | o80ynh0 | false | /r/LocalLLaMA/comments/1rhsgqx/is_there_an_actual_need_for_people_to_host_models/o80ynh0/ | false | 1 |
t1_o80ynfs | I have used UD Q6 before and sometimes I still use it for non-coding models. However, there are numerous issues with them starting with Qwen3 and Qwen3.5, especially with looping. At the same time, the losses from using Q8 are negligible.
Unsloth and others will catch up with their quantization methods eventually. They ... | 1 | 0 | 2026-03-01T09:16:46 | Prudent-Ad4509 | false | null | 0 | o80ynfs | false | /r/LocalLLaMA/comments/1rhmepa/qwen35122b_on_blackwell_sm120_fp8_kv_cache/o80ynfs/ | false | 1 |
t1_o80yjdx | Same problem here. Manual logging is a losing game because the interesting failures are always in the context that got passed between steps, not the individual tool calls. What actually worked for me was adding a runtime monitoring layer that records the full chain of tool calls and context handoffs so you can trace ex... | 1 | 0 | 2026-03-01T09:15:40 | thecanonicalmg | false | null | 0 | o80yjdx | false | /r/LocalLLaMA/comments/1rgwyqi/agent_debugging_is_a_mess_am_i_the_only_one/o80yjdx/ | false | 1 |
t1_o80yij7 | That is really interesting, thank you for telling me about that
| 1 | 0 | 2026-03-01T09:15:26 | Key_Pace_9755 | false | null | 0 | o80yij7 | false | /r/LocalLLaMA/comments/1rhsgqx/is_there_an_actual_need_for_people_to_host_models/o80yij7/ | false | 1 |
t1_o80yfw3 | Mixtral's small models are quite good for their size.
| 1 | 0 | 2026-03-01T09:14:44 | FeiX7 | false | null | 0 | o80yfw3 | false | /r/LocalLLaMA/comments/1rgokw1/a_monthly_update_to_my_where_are_openweight/o80yfw3/ | false | 1 |
t1_o80yfb2 | Wondered that too. TranslateGemma is pretty good. Also, if anyone is interested in translating between European languages, EuroLLM 22B is quite usable too. | 8 | 0 | 2026-03-01T09:14:34 | Icy-Degree6161 | false | null | 0 | o80yfb2 | false | /r/LocalLLaMA/comments/1rhqeob/qwen_35_27b_is_the_best_chinese_translation_model/o80yfb2/ | false | 8 |
t1_o80ye6y | Do you really mean Iraq? Or are you talking about Iran? | 1 | 1 | 2026-03-01T09:14:16 | pulse77 | false | null | 0 | o80ye6y | false | /r/LocalLLaMA/comments/1rhogov/the_us_used_anthropic_ai_tools_during_airstrikes/o80ye6y/ | false | 1 |
t1_o80yayr | 27b (8-bit MLX, in my case) is incredible. It thinks and thinks and thinks, but wow the result is really good. | 1 | 0 | 2026-03-01T09:13:24 | MrPecunius | false | null | 0 | o80yayr | false | /r/LocalLLaMA/comments/1rfjp6v/top_10_trending_models_on_hf/o80yayr/ | false | 1 |
t1_o80y86w | Find the repo here: https://github.com/Alex8791-cyber/cognithor | 1 | 0 | 2026-03-01T09:12:40 | Competitive_Book4151 | false | null | 0 | o80y86w | false | /r/LocalLLaMA/comments/1rhkfek/built_a_localfirst_ai_agent_for_my_own_setup/o80y86w/ | false | 1 |
t1_o80y7ol | Lmao why? What's so bad with it? I usually run folding at night, and yesterday I just ran a Qwen 3.5 35B A3B model host via Cloudflare, and people did actually end up generating around 1 million tokens with it in total over 5 hours of uptime. So I thought hey, just for fun while I am sleeping, for a week I might let tha... | -2 | 0 | 2026-03-01T09:12:32 | Key_Pace_9755 | false | null | 0 | o80y7ol | false | /r/LocalLLaMA/comments/1rhsgqx/is_there_an_actual_need_for_people_to_host_models/o80y7ol/ | false | -2 |
t1_o80y7jb | Thank you, appreciate the support!! <3 | 2 | 0 | 2026-03-01T09:12:30 | yoracale | false | null | 0 | o80y7jb | false | /r/LocalLLaMA/comments/1rfds1h/qwen3535ba3b_q4_quantization_comparison/o80y7jb/ | false | 2 |
t1_o80y3lx | As I said in another thread not too long ago: for me, while Qwen3-Coder-30B performs much better at agentic tasks (OpenCode, RooCode), Qwen 2.5 remains the highest-scoring model on my personal Q&A benchmark across code, math, RAG, and translation.
If you're not doing agentic tasks, I find Qwen 3 is actually a... | 1 | 0 | 2026-03-01T09:11:26 | Emotional_Egg_251 | false | null | 0 | o80y3lx | false | /r/LocalLLaMA/comments/1rh46g2/why_some_still_playing_with_old_models_nostalgia/o80y3lx/ | false | 1 |
t1_o80y3dt | It took almost 10 mins to find a single message that said this exact thing, as I knew someone would have had the same thought as me. I'm staggered it isn't in every post, as that's my experience. I work in IT at a technology company and I'd say 80% of the people I work with don't really use AI. It's what's telling me this is... | 4 | 0 | 2026-03-01T09:11:23 | xxtherealgbhxx | false | null | 0 | o80y3dt | false | /r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o80y3dt/ | false | 4 |
t1_o80y2hf | Looks like it was added last week, OP is quick haha
https://github.com/mostlygeek/llama-swap/pull/535
If you use LiteLLM you can achieve the same by creating multiple versions of the same model with different params | 3 | 0 | 2026-03-01T09:11:08 | Djagatahel | false | null | 0 | o80y2hf | false | /r/LocalLLaMA/comments/1rh43za/qwen_3535ba3b_is_beyond_expectations_its_replaced/o80y2hf/ | false | 3 |
t1_o80y0qw | It was always just a hobby, it just took you this long to realise it. | 11 | 0 | 2026-03-01T09:10:40 | Polite_Jello_377 | false | null | 0 | o80y0qw | false | /r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o80y0qw/ | false | 11 |
t1_o80y0p1 | As Teal’c would’ve said:
Indeed | 2 | 0 | 2026-03-01T09:10:39 | Sisuuu | false | null | 0 | o80y0p1 | false | /r/LocalLLaMA/comments/1rh9u4r/this_sub_is_incredible/o80y0p1/ | false | 2 |
t1_o80xy1q | Just like copilot. Is this you, Satya Nadella? | 4 | 0 | 2026-03-01T09:09:57 | JLeonsarmiento | false | null | 0 | o80xy1q | false | /r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o80xy1q/ | false | 4 |
t1_o80xsg2 | The source is the WSJ link the OP posted right at the top of the thread. You can click it and read the article yourself.
It’s not “made up”. It's reported by the Wall Street Journal, and multiple outlets (NDTV, Hindustan Times, CNBC, etc.) picked it up within hours, all citing the same piece. If the paywall on the WSJ... | 10 | 0 | 2026-03-01T09:08:27 | CoralBliss | false | null | 0 | o80xsg2 | false | /r/LocalLLaMA/comments/1rhogov/the_us_used_anthropic_ai_tools_during_airstrikes/o80xsg2/ | false | 10 |