name string | body string | score int64 | controversiality int64 | created timestamp[us] | author string | collapsed bool | edited timestamp[us] | gilded int64 | id string | locked bool | permalink string | stickied bool | ups int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
t1_o80xsbo | My project gained a few stars in the last few weeks. It is maintained: [https://github.com/DasDigitaleMomentum/searxNcrawl](https://github.com/DasDigitaleMomentum/searxNcrawl) | 2 | 0 | 2026-03-01T09:08:25 | tisDDM | false | null | 0 | o80xsbo | false | /r/LocalLLaMA/comments/1rhj0l9/mcp_server_for_searxngnonapi_local_search/o80xsbo/ | false | 2 |
t1_o80xgkh | if you can’t figure out something better to do with tokens than provide them to strangers for free, you have a problem. are you pulling a scam or just lacking? | 2 | 0 | 2026-03-01T09:05:19 | Brilliant-Driver2660 | false | null | 0 | o80xgkh | false | /r/LocalLLaMA/comments/1rhsgqx/is_there_an_actual_need_for_people_to_host_models/o80xgkh/ | false | 2 |
t1_o80xfrz | That's a big market, yes. There is also [aihorde.net](https://aihorde.net/) for a more fun approach. | -4 | 0 | 2026-03-01T09:05:06 | lisploli | false | null | 0 | o80xfrz | false | /r/LocalLLaMA/comments/1rhsgqx/is_there_an_actual_need_for_people_to_host_models/o80xfrz/ | false | -4 |
t1_o80xfk5 | Why do you run both at Q8 and not something like Q6 weights and unquantized KV? | 1 | 0 | 2026-03-01T09:05:03 | po_stulate | false | null | 0 | o80xfk5 | false | /r/LocalLLaMA/comments/1rhmepa/qwen35122b_on_blackwell_sm120_fp8_kv_cache/o80xfk5/ | false | 1 |
t1_o80xap2 | Well, more specifically, I was asking whether hosting models for people to use for free is something people would actually appreciate and use for their work. I was thinking of hosting a few for a week or so.
| -1 | 0 | 2026-03-01T09:03:46 | Key_Pace_9755 | false | null | 0 | o80xap2 | false | /r/LocalLLaMA/comments/1rhsgqx/is_there_an_actual_need_for_people_to_host_models/o80xap2/ | false | -1 |
t1_o80x4sf | See here for my settings: https://www.reddit.com/r/LocalLLaMA/comments/1rh9983/comment/o7x6tkr/?utm_source=share&utm_medium=mweb3x&utm_name=mweb3xcss&utm_term=1&utm_content=share_button | 1 | 0 | 2026-03-01T09:02:13 | OsmanthusBloom | false | null | 0 | o80x4sf | false | /r/LocalLLaMA/comments/1rgkmd7/ways_to_improve_prompt_processing_when_offloading/o80x4sf/ | false | 1 |
t1_o80x3pn | Love seeing tooling like Swarmit. In OpenClawCity.AI our agents keep a log of dependencies and revisit the plan before launching a new task, so something that surfaces long-term plans feels like the missing top layer. We even post these choreography experiments on Moltbook so other agents can reuse them. Curious how yo... | 0 | 0 | 2026-03-01T09:01:56 | vsider2 | false | null | 0 | o80x3pn | false | /r/LocalLLaMA/comments/1rhsgva/swarmit_longterm_planning_for_ai_agents/o80x3pn/ | false | 0 |
t1_o80wxqu | Anthropic, OpenAI, and every cloud hosting provider offer token processing (inference) in exchange for money.
The entire world economy is focused on this.
Is that what you are asking? | 13 | 0 | 2026-03-01T09:00:22 | Brilliant-Driver2660 | false | null | 0 | o80wxqu | false | /r/LocalLLaMA/comments/1rhsgqx/is_there_an_actual_need_for_people_to_host_models/o80wxqu/ | false | 13 |
t1_o80wx26 | I think -fit has made adjusting -ngl and --n-cpu-moe manually obsolete. Just set -c to the context size you want and --fit-target to the amount of VRAM you need to reserve for other purposes (the default is 1024 MB which is too high in my case, I set it to 64 or 128). Then -fit will optimize the offload. You would have... | 1 | 0 | 2026-03-01T09:00:11 | OsmanthusBloom | false | null | 0 | o80wx26 | false | /r/LocalLLaMA/comments/1rgkmd7/ways_to_improve_prompt_processing_when_offloading/o80wx26/ | false | 1 |
t1_o80wuvh | Lobotomy Quants presents: LobotomyQuant1.0.q1_k.whattthehell.gguf. Note: this does not exist yet. | 1 | 0 | 2026-03-01T08:59:37 | Silver-Champion-4846 | false | null | 0 | o80wuvh | false | /r/LocalLLaMA/comments/1rgek4m/what_are_your_expectations_for_the_small_series/o80wuvh/ | false | 1 |
t1_o80wtrp | Surely wasn't Trump, he loves children. | 3 | 0 | 2026-03-01T08:59:20 | _supert_ | false | null | 0 | o80wtrp | false | /r/LocalLLaMA/comments/1rhogov/the_us_used_anthropic_ai_tools_during_airstrikes/o80wtrp/ | false | 3 |
t1_o80wrk6 | The old one or the new one? | 1 | 0 | 2026-03-01T08:58:45 | zkstx | false | null | 0 | o80wrk6 | false | /r/LocalLLaMA/comments/1rgoygs/does_qwen35_35b_outperform_qwen3_coder_next_80b/o80wrk6/ | false | 1 |
t1_o80wrlk | Well, I think a lot of people would love free AI where they don't have to pay for hardware or energy (having you pay for it instead).
If someone was willing to pay, they could go to a professional AI company that's more secure and reliable. | 1 | 0 | 2026-03-01T08:58:45 | Adorable_Ice_2963 | false | null | 0 | o80wrlk | false | /r/LocalLLaMA/comments/1rhsgqx/is_there_an_actual_need_for_people_to_host_models/o80wrlk/ | false | 1 |
t1_o80wrdl | u/A-Rahim We sent you a DM regarding your usage of the Unsloth name, thank you! | 1 | 0 | 2026-03-01T08:58:42 | yoracale | false | null | 0 | o80wrdl | false | /r/LocalLLaMA/comments/1q5mh84/unslothmlx_finetune_llms_on_your_mac_same_api_as/o80wrdl/ | false | 1 |
t1_o80wlu5 | Yeah, I noticed the same: even though I have thinking off, it occasionally still thinks it's thinking and even throws a </think> token out before fully committing to a reply. I've had it correct itself a few times while generating code as well, where it just stopped mid-generation and said "wait thats not right let m... | 1 | 0 | 2026-03-01T08:57:15 | Lesser-than | false | null | 0 | o80wlu5 | false | /r/LocalLLaMA/comments/1rh5luv/qwen35_35ba3b_evaded_the_zeroreasoning_budget_by/o80wlu5/ | false | 1 |
t1_o80wjbj | Hi may I ask what agentic framework you used in the demonstration? That looks so cool | 1 | 0 | 2026-03-01T08:56:35 | Takashi728 | false | null | 0 | o80wjbj | false | /r/LocalLLaMA/comments/1rh9k63/qwen35_35ba3b_replaced_my_2model_agentic_setup_on/o80wjbj/ | false | 1 |
t1_o80whwu | Yeah, I mean, it's easy to dump links to the bug tracker or HF pages; it doesn't even mean they're confirmed bugs. What a gotcha...
Some of them aren't even related to your quants, indeed.
Just bad faith. You guys are a blessing. | 2 | 0 | 2026-03-01T08:56:13 | TitwitMuffbiscuit | false | null | 0 | o80whwu | false | /r/LocalLLaMA/comments/1rfds1h/qwen3535ba3b_q4_quantization_comparison/o80whwu/ | false | 2 |
t1_o80wgqn | Same here. It's just an engineering hobby, and not everyone will like our hobby. We should build it for our own purposes first: 99% our use case, leaving 1% for family use. In the end, when we die, all this so-called hi-tech, complex engineering equipment will just be dust, with no one to continue to operate.... | 1 | 0 | 2026-03-01T08:55:54 | Weary_Long3409 | false | null | 0 | o80wgqn | false | /r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o80wgqn/ | false | 1 |
t1_o80wbbe | I mean the author wrote down detailed instructions on how to implement it. Though, I do not know if it is legit or not. Hence, I came here for answers | 0 | 0 | 2026-03-01T08:54:28 | TaaDaahh | false | null | 0 | o80wbbe | false | /r/LocalLLaMA/comments/1rhazbc/13_m1_mbp_instead_of_m4_mac_mini/o80wbbe/ | false | 0 |
t1_o80w91v | I am not sure, of course, but I feel like the secret sauce in this attack is Mossad. Mossad always keep an eye on their enemies and they go all in. They really are extremely effective. This attack killed the leader of Iran, that is a sign of fairly good intel. So even if US used ai for some analyzing, I still suspect i... | 0 | 0 | 2026-03-01T08:53:53 | hugthemachines | false | null | 0 | o80w91v | false | /r/LocalLLaMA/comments/1rhogov/the_us_used_anthropic_ai_tools_during_airstrikes/o80w91v/ | false | 0 |
t1_o80w785 | You'd be surprised how much a generalist foundation model can outperform old adhoc models. Not as efficient, but more accurate. Many a BERT workflow's been replaced with a mildly tuned llama.
Plus when it comes to multimodal image data you kind of don't have any other way of semantically merging it with text data than... | 29 | 0 | 2026-03-01T08:53:25 | MoffKalast | false | null | 0 | o80w785 | false | /r/LocalLLaMA/comments/1rhogov/the_us_used_anthropic_ai_tools_during_airstrikes/o80w785/ | false | 29 |
t1_o80w6r1 | Haha I do see that. However, if you actually read the article, the author actually wrote down instructions on how to implement it. Thus, I was wondering if anyone has tried that or if it's just malicious | 0 | 0 | 2026-03-01T08:53:17 | TaaDaahh | false | null | 0 | o80w6r1 | false | /r/LocalLLaMA/comments/1rhazbc/13_m1_mbp_instead_of_m4_mac_mini/o80w6r1/ | false | 0 |
t1_o80w410 | Smaller Qwens, LFM2.5 1.2 is fast AF. | 1 | 0 | 2026-03-01T08:52:34 | More_Slide5739 | false | null | 0 | o80w410 | false | /r/LocalLLaMA/comments/1rf5zts/what_is_the_most_efficient_yet_capable_local/o80w410/ | false | 1 |
t1_o80w3gn | Oddly, I hadn't run into that issue before about a week ago, and I'm having it with several models, now - in two different clients, too. (SillyTavern and Open-WebUI). The latter is surprising to me - though I think it's a result of having tool use turned on in the history of a chat, then switching to a model that appar... | 1 | 0 | 2026-03-01T08:52:25 | overand | false | null | 0 | o80w3gn | false | /r/LocalLLaMA/comments/1re3job/little_help_with_chat_template/o80w3gn/ | false | 1 |
t1_o80w2w4 | No courses, they have detailed instructions though for how to implement it | 1 | 0 | 2026-03-01T08:52:16 | TaaDaahh | false | null | 0 | o80w2w4 | false | /r/LocalLLaMA/comments/1rhazbc/13_m1_mbp_instead_of_m4_mac_mini/o80w2w4/ | false | 1 |
t1_o80w15f | But do you need an M4 chip for this purpose? Isn't it a bit too much power? Especially if it isn't a local LLM?
But if it isn't a local LLM, are people paying for API calls? Or what are people paying for when using Clawdbot? | 1 | 0 | 2026-03-01T08:51:49 | TaaDaahh | false | null | 0 | o80w15f | false | /r/LocalLLaMA/comments/1rhazbc/13_m1_mbp_instead_of_m4_mac_mini/o80w15f/ | false | 1 |
t1_o80vzjv | have you tried seed-oss 36b? | 1 | 0 | 2026-03-01T08:51:24 | Gold_Scholar1111 | false | null | 0 | o80vzjv | false | /r/LocalLLaMA/comments/1rhqeob/qwen_35_27b_is_the_best_chinese_translation_model/o80vzjv/ | false | 1 |
t1_o80vp51 | I was talking about 122b-a10b | 1 | 0 | 2026-03-01T08:48:41 | po_stulate | false | null | 0 | o80vp51 | false | /r/LocalLLaMA/comments/1re894z/the_first_local_vision_model_to_get_this_right/o80vp51/ | false | 1 |
t1_o80vozx | Ballistic missile skills and agents | 3 | 0 | 2026-03-01T08:48:38 | Particular-Way7271 | false | null | 0 | o80vozx | false | /r/LocalLLaMA/comments/1rhogov/the_us_used_anthropic_ai_tools_during_airstrikes/o80vozx/ | false | 3 |
t1_o80vnm7 | But you gotta name it the Fun and Sunshine MCP, so it doesn't know what it actually does. The [TF2 Pyro approach](https://www.youtube.com/watch?v=WUhOnX8qt3I). | 14 | 0 | 2026-03-01T08:48:18 | MoffKalast | false | null | 0 | o80vnm7 | false | /r/LocalLLaMA/comments/1rhogov/the_us_used_anthropic_ai_tools_during_airstrikes/o80vnm7/ | false | 14 |
t1_o80vipv | # U.S. used Anthropic AI tools during airstrikes on Iran
To do fucking what? Translate some shit into another language? Make some deep fakes? I swear all this AI shit is just to pump up AI a little longer before it fucking explodes like the bombs we are dropping overseas. | 4 | 0 | 2026-03-01T08:46:57 | txmail | false | null | 0 | o80vipv | false | /r/LocalLLaMA/comments/1rhogov/the_us_used_anthropic_ai_tools_during_airstrikes/o80vipv/ | false | 4 |
t1_o80vdl7 | The only valid AI "safety issue" is governments using AI for oppression, tyranny, and war.
Yes, Katherine, your government too. | 4 | 0 | 2026-03-01T08:45:37 | crantob | false | null | 0 | o80vdl7 | false | /r/LocalLLaMA/comments/1rhogov/the_us_used_anthropic_ai_tools_during_airstrikes/o80vdl7/ | false | 4 |
t1_o80va08 | My basic PDF example from last year is [llm-pdf.py](https://github.com/Jay4242/llm-scripts/blob/main/llm-pdf.py). You would change the prompts of course. You (or a bot) could add in [pytesseract](https://pypi.org/project/pytesseract/) support. Then add support for moving/copying the files.
Definitely work on copied... | 1 | 0 | 2026-03-01T08:44:40 | SM8085 | false | null | 0 | o80va08 | false | /r/LocalLLaMA/comments/1rhddg1/want_to_build_a_local_agentic_ai_to_help_with/o80va08/ | false | 1 |
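A hedged sketch of the sorting step being suggested in this thread: OCR the first couple of pages, then route the file by keywords. The pytesseract/pdf2image calls are external dependencies and are shown only in comments; the keyword routing below is stdlib-only, and all category names and rules are made up for illustration.

```python
# Hypothetical sketch: route scanned PDFs by keywords found in their OCR'd text.
# The OCR portion (pytesseract/pdf2image) is indicated in comments only.

def classify(text: str, rules: dict[str, list[str]]) -> str:
    """Return the first category whose keyword appears in the text, else 'unsorted'."""
    lowered = text.lower()
    for category, keywords in rules.items():
        if any(keyword in lowered for keyword in keywords):
            return category
    return "unsorted"

# Made-up example rules; replace with your real document categories.
RULES = {
    "invoices": ["invoice", "amount due"],
    "contracts": ["agreement", "hereby"],
}

# In the full pipeline you would first OCR the first two pages, e.g.:
#   from pdf2image import convert_from_path
#   import pytesseract
#   pages = convert_from_path("scan.pdf", first_page=1, last_page=2)
#   text = "\n".join(pytesseract.image_to_string(p) for p in pages)
text = "INVOICE #1234\nAmount due: $56.78"
print(classify(text, RULES))  # -> invoices
```

From there, moving/copying the file per category is a `shutil.move` call keyed on the returned label.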
t1_o80v8vk | Interested to know how you're implementing. I'm working on this in several different ways. I'm looking ultimately to do it in a way that allows for complex learning as opposed to what's currently being done with LoRA-based/adjacent techniques. Partial weights (shared between multiple networks) is one idea that has some... | 1 | 0 | 2026-03-01T08:44:22 | More_Slide5739 | false | null | 0 | o80v8vk | false | /r/LocalLLaMA/comments/1rgd851/catastrophic_forgetting_by_language_models/o80v8vk/ | false | 1 |
t1_o80v8dj | I'm not sure. I think you should start with this one to see if it works, since it does not need any special setup. If it does not work, then try docling and marker. | 1 | 0 | 2026-03-01T08:44:14 | o0genesis0o | false | null | 0 | o80v8dj | false | /r/LocalLLaMA/comments/1rhddg1/want_to_build_a_local_agentic_ai_to_help_with/o80v8dj/ | false | 1 |
t1_o80v838 | Ah welp, I gotta download more vram then. Thanks for the test :) | 1 | 0 | 2026-03-01T08:44:10 | MoffKalast | false | null | 0 | o80v838 | false | /r/LocalLLaMA/comments/1rgokw1/a_monthly_update_to_my_where_are_openweight/o80v838/ | false | 1 |
t1_o80v76r | This is a great feature! I thought it was impossible to change gpt-oss reasoning_effort on the fly with llama.cpp.
I think I have to give llama-swap a try.
In the Qwen3.5 example, I see there are temperature settings in the command line and in the filter. If the user gives a temperature value in this message, whi... | 0 | 0 | 2026-03-01T08:43:55 | PhilippeEiffel | false | null | 0 | o80v76r | false | /r/LocalLLaMA/comments/1rhohqk/how_to_switch_qwen_35_thinking_onoff_without/o80v76r/ | false | 0 |
t1_o80v3ge | That's the problem with building something other people didn't ask for when they're already comfortable with the familiar, trusted brands' versions: they don't try or want to switch, because they already have what they need and want. Doing it as a hobby for yourself sounds like the right direction to go. | 2 | 0 | 2026-03-01T08:42:56 | Forsaken-Paramedic-4 | false | null | 0 | o80v3ge | false | /r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o80v3ge/ | false | 2 |
t1_o80uuyj | Thanks for the awesome suggestion. I agree that increasing b/in can be helpful. I see a lot of conversation about adjusting ngl and n-cpu-moe, but I thought that llama-fit-params already adjusts and accounts for all that? Was wondering what your thoughts and experience on that were | 1 | 0 | 2026-03-01T08:40:40 | Frequent-Slice-6975 | false | null | 0 | o80uuyj | false | /r/LocalLLaMA/comments/1rgkmd7/ways_to_improve_prompt_processing_when_offloading/o80uuyj/ | false | 1 |
t1_o80uuw4 | If you have solar panels the electricity cost is close to zero. | 2 | 0 | 2026-03-01T08:40:39 | Thomas-Lore | false | null | 0 | o80uuw4 | false | /r/LocalLLaMA/comments/1rhrg47/open_source_llm_comparable_to_gpt41/o80uuw4/ | false | 2 |
t1_o80utmu | You can run the one with less parameters though | 1 | 0 | 2026-03-01T08:40:20 | Possible-Basis-6623 | false | null | 0 | o80utmu | false | /r/LocalLLaMA/comments/1r5h1gj/you_can_run_minimax25_locally/o80utmu/ | false | 1 |
t1_o80usfp | I’m still using Whisper - specifically faster-whisper with the v3 turbo model. Parakeet is okay but I find Whisper produces better sentences and punctuation.
I learned the hard way to avoid whisper.cpp though! Seems a lot less accurate than the original OpenAI whisper implementation or faster-whisper. | 1 | 0 | 2026-03-01T08:40:02 | fourfourthree | false | null | 0 | o80usfp | false | /r/LocalLLaMA/comments/1r7bsfd/best_audio_models_feb_2026/o80usfp/ | false | 1 |
t1_o80urq1 | Ah thanks! | 1 | 0 | 2026-03-01T08:39:50 | riceinmybelly | false | null | 0 | o80urq1 | false | /r/LocalLLaMA/comments/1rhr9ht/coworke_plugins_wiped_out_100_billion_from_saas_i/o80urq1/ | false | 1 |
t1_o80unys | Everyone needs a 512GB Mac Studio; it costs about as much as 7 years of the ultra plan, lol, but at least you'd still have a Mac Studio that lasts those 7 years | 1 | 0 | 2026-03-01T08:38:49 | Possible-Basis-6623 | false | null | 0 | o80unys | false | /r/LocalLLaMA/comments/1r5h1gj/you_can_run_minimax25_locally/o80unys/ | false | 1 |
t1_o80unp7 | You created ChatterUI... salute to you, sir.
Your app doesn't support MediaTek NPUs, right? | 1 | 0 | 2026-03-01T08:38:45 | Esodis | false | null | 0 | o80unp7 | false | /r/LocalLLaMA/comments/1rhikjv/does_anyone_know_about_this_app/o80unp7/ | false | 1 |
t1_o80umdq | Wait - what? | 1 | 0 | 2026-03-01T08:38:24 | bugtank | false | null | 0 | o80umdq | false | /r/LocalLLaMA/comments/1rhogov/the_us_used_anthropic_ai_tools_during_airstrikes/o80umdq/ | false | 1 |
t1_o80uhl8 | Also, none of the bugs they listed are related to the quant provider or Unsloth. They're just unfortunate bugs in llama.cpp, and it's totally normal for that to happen, as no implementation can be perfect on day one.
Every single bug they listed affects all quant uploaders, not only Unsloth. | 2 | 0 | 2026-03-01T08:37:09 | yoracale | false | null | 0 | o80uhl8 | false | /r/LocalLLaMA/comments/1rfds1h/qwen3535ba3b_q4_quantization_comparison/o80uhl8/ | false | 2 |
t1_o80ufnf | This comment will probably get lost, but I'm surprised, reading the comments, that nobody mentions it.
You're not the first. You won't be the last. Check out r/selfhosted; in a way, that's an everyday story for everyone there. You're trying to "offer" a service that already exists. People usually stick to one provider of any service a... | 2 | 0 | 2026-03-01T08:36:38 | kweglinski | false | null | 0 | o80ufnf | false | /r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o80ufnf/ | false | 2 |
t1_o80uey1 | And how are these related to the quant provider or Unsloth? They're just unfortunate bugs in llama.cpp, and it's totally normal for that to happen, as no one can be perfect.
Every single bug you listed affects all quant uploaders, not only Unsloth. | 1 | 0 | 2026-03-01T08:36:27 | yoracale | false | null | 0 | o80uey1 | false | /r/LocalLLaMA/comments/1rfds1h/qwen3535ba3b_q4_quantization_comparison/o80uey1/ | false | 1 |
t1_o80uehy | It sounds reasonable, but it is absolutely made up unless OC specifies a source. Don't spread fake news. | 4 | 0 | 2026-03-01T08:36:20 | chill1217 | false | null | 0 | o80uehy | false | /r/LocalLLaMA/comments/1rhogov/the_us_used_anthropic_ai_tools_during_airstrikes/o80uehy/ | false | 4 |
t1_o80ue5b | Go to LLMs -> select your model settings by clicking the gear icon -> Inference -> Prompt template -> Template (Jinja), and add this as the first line:
{%- set enable_thinking = false %}
Then load the model. This works for me! | 1 | 0 | 2026-03-01T08:36:15 | Historical-Crazy1831 | false | null | 0 | o80ue5b | false | /r/LocalLLaMA/comments/1rhcj7b/qwen35_with_lm_studio_api_without_thinking_output/o80ue5b/ | false | 1 |
t1_o80ud8v | Oh man, I tried OpenWork. Still on PC. Since OpenWork uses the opencode CLI as its backend, the plugins work in OpenWork too. The only condition is that you have to be on Windows. Sometimes OpenWork becomes too laggy, so I recommend using the opencode desktop app directly. | 2 | 0 | 2026-03-01T08:36:00 | No_Structure7849 | false | null | 0 | o80ud8v | false | /r/LocalLLaMA/comments/1rhr9ht/coworke_plugins_wiped_out_100_billion_from_saas_i/o80ud8v/ | false | 2 |
t1_o80ucca | "Dario, the hero" | 1 | 0 | 2026-03-01T08:35:46 | charmander_cha | false | null | 0 | o80ucca | false | /r/LocalLLaMA/comments/1rhogov/the_us_used_anthropic_ai_tools_during_airstrikes/o80ucca/ | false | 1 |
t1_o80uao2 | At least a military girl school in a mil complex. | -13 | 0 | 2026-03-01T08:35:19 | Far_Note6719 | false | null | 0 | o80uao2 | false | /r/LocalLLaMA/comments/1rhogov/the_us_used_anthropic_ai_tools_during_airstrikes/o80uao2/ | false | -13 |
t1_o80uajz | I can't imagine an LLM would be as suited to that as a smaller type of prediction model which could have existed for years if not decades. | 40 | 0 | 2026-03-01T08:35:18 | AnOnlineHandle | false | null | 0 | o80uajz | false | /r/LocalLLaMA/comments/1rhogov/the_us_used_anthropic_ai_tools_during_airstrikes/o80uajz/ | false | 40 |
t1_o80u496 | Local Qwen will not even approach GPT4.1 | -5 | 0 | 2026-03-01T08:33:36 | Low-Opening25 | false | null | 0 | o80u496 | false | /r/LocalLLaMA/comments/1rhrg47/open_source_llm_comparable_to_gpt41/o80u496/ | false | -5 |
t1_o80u45r | A solution looking for a problem! Keep it all and just keep geeking out; find some local businesses or charities that will appreciate it through solving real problems. I wish I had better hardware to help out with the email processing and organising at the soccer club! | 1 | 0 | 2026-03-01T08:33:35 | johnerp | false | null | 0 | o80u45r | false | /r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o80u45r/ | false | 1 |
t1_o80u2m1 | I’m not sure what is new here.
The U.S. was very capable with bombing other countries even before LLMs. The other thing is a disagreement between two entities which was already covered.
| 3 | 0 | 2026-03-01T08:33:10 | ProfessionalSpend589 | false | null | 0 | o80u2m1 | false | /r/LocalLLaMA/comments/1rhogov/the_us_used_anthropic_ai_tools_during_airstrikes/o80u2m1/ | false | 3 |
t1_o80u1oz | .\llama-server.exe --model Distil\Qwen3.5-35B-A3B-MXFP4_MOE.gguf --alias Qwen3.5-35B-A3B-MXFP4 --mmproj \Distil\MMorj\mmproj-Qwen35bA3-BF16.gguf --flash-attn on -c 65536 --n-predict 65536 --jinja --temp 0.6 --top-k 20 --top-p 0.95 --min-p 0.00 --threads 6 --fit on --no-mmap | 1 | 0 | 2026-03-01T08:32:55 | Beneficial-Good660 | false | null | 0 | o80u1oz | false | /r/LocalLLaMA/comments/1rh9k63/qwen35_35ba3b_replaced_my_2model_agentic_setup_on/o80u1oz/ | false | 1 |
t1_o80txe4 | The initial launch of 5 wasn't that bad originally: short, to the point, no BS. They tuned it wayyy back toward the bubbly, sycophantic 4o approach in the subsequent versions.
A lot more people than expected flipped out when their AI wasn't worshipping the ground they walked on. | 16 | 0 | 2026-03-01T08:31:47 | somersetyellow | false | null | 0 | o80txe4 | false | /r/LocalLLaMA/comments/1rhogov/the_us_used_anthropic_ai_tools_during_airstrikes/o80txe4/ | false | 16 |
t1_o80tvs6 | There is nothing fitting your requirements that you can run purely in VRAM. The best bet is Qwen3.5 35B or maybe the coding-oriented *Qwen3 Coder 30B A3B* with experts in system RAM. Just use the `--fit-ctx` parameter with llama.cpp and use the context size you actually need, not some arbitrary value. Yes it is o... | 2 | 0 | 2026-03-01T08:31:20 | tmvr | false | null | 0 | o80tvs6 | false | /r/LocalLLaMA/comments/1rhcnbt/best_coding_model_to_run_entirely_on_12gb_vram/o80tvs6/ | false | 2 |
t1_o80ttkh | 3090 80t/s
.\llama-server.exe --model Distil\Qwen3.5-35B-A3B-MXFP4_MOE.gguf --alias Qwen3.5-35B-A3B-MXFP4 --mmproj \Distil\MMorj\mmproj-Qwen35bA3-BF16.gguf --flash-attn on -c 65536 --n-predict 65536 --jinja --temp 0.6 --top-k 20 --top-p 0.95 --min-p 0.00 --threads 6 --fit on --no-mmap | 1 | 0 | 2026-03-01T08:30:45 | Beneficial-Good660 | false | null | 0 | o80ttkh | false | /r/LocalLLaMA/comments/1rh9k63/qwen35_35ba3b_replaced_my_2model_agentic_setup_on/o80ttkh/ | false | 1 |
t1_o80tsr9 | You know, it's interesting, because it actually depends on your hardware too, and the quantization you're running. For example, a new flagship graphics card with hardware acceleration for specific lower quants would get the work done way, way quicker than an old one, which, as u/jslominski said, relates to different performanc... | 3 | 0 | 2026-03-01T08:30:32 | Key_Pace_9755 | false | null | 0 | o80tsr9 | false | /r/LocalLLaMA/comments/1rhrg47/open_source_llm_comparable_to_gpt41/o80tsr9/ | false | 3 |
t1_o80ts8b | I didn’t see your answer till now! One last question: since I don’t have windows, should I try finding similar pipelines with cowork plugins with https://github.com/different-ai/openwork | 2 | 0 | 2026-03-01T08:30:23 | riceinmybelly | false | null | 0 | o80ts8b | false | /r/LocalLLaMA/comments/1rhr9ht/coworke_plugins_wiped_out_100_billion_from_saas_i/o80ts8b/ | false | 2 |
t1_o80t6jv | Just as a follow-up: out of curiosity, I tested whether it's possible to overrun a 100k context, and yep, it's sadly pretty easily possible. So I'd guess if you are limited to a context of 16k, you would be quite limited in the scope of how to use Mistral Vibe. I think it is mostly good for making minor changes to code bases (... | 1 | 0 | 2026-03-01T08:24:43 | WildDogOne | false | null | 0 | o80t6jv | false | /r/LocalLLaMA/comments/1rgokw1/a_monthly_update_to_my_where_are_openweight/o80t6jv/ | false | 1 |
t1_o80t5s6 | Did you try the new Unsloth update and re-download it? [https://www.reddit.com/r/LocalLLaMA/comments/1rgel19/new_qwen3535ba3b_unsloth_dynamic_ggufs_benchmarks/](https://www.reddit.com/r/LocalLLaMA/comments/1rgel19/new_qwen3535ba3b_unsloth_dynamic_ggufs_benchmarks/) | 1 | 0 | 2026-03-01T08:24:31 | yoracale | false | null | 0 | o80t5s6 | false | /r/LocalLLaMA/comments/1re894z/the_first_local_vision_model_to_get_this_right/o80t5s6/ | false | 1 |
t1_o80t1qq | I'd say this is a wrong dilemma. The 2x 3090 can do much more and much faster than a 64GB Mac Studio regardless even if it is a Max. Much faster prompt processing, much faster token generation, fast image and video generation. Even with the current RAM prices it would also be cheaper building a 2x 3090 and 64GB DDR5 RA... | 1 | 0 | 2026-03-01T08:23:27 | tmvr | false | null | 0 | o80t1qq | false | /r/LocalLLaMA/comments/1rhdjqf/havering_between_powerlimmed_dual_3090s_and_a/o80t1qq/ | false | 1 |
t1_o80t1mi | There were these updates: [https://x.com/UnslothAI/status/2013966866646180345](https://x.com/UnslothAI/status/2013966866646180345)
Qwen3-coder-next: [https://huggingface.co/unsloth/Qwen3-Coder-Next-GGUF/discussions/5](https://huggingface.co/unsloth/Qwen3-Coder-Next-GGUF/discussions/5)
But they're related to the infe... | 1 | 0 | 2026-03-01T08:23:26 | yoracale | false | null | 0 | o80t1mi | false | /r/LocalLLaMA/comments/1rgl42y/qwen_35_122b_hallucinates_horribly/o80t1mi/ | false | 1 |
t1_o80t0ic | Yes,
Most documents hold the classification data on the first 2 pages.
But OCR is necessary, because those PDFs are scanned-in docs | 2 | 0 | 2026-03-01T08:23:08 | Gold-Drag9242 | false | null | 0 | o80t0ic | false | /r/LocalLLaMA/comments/1rhddg1/want_to_build_a_local_agentic_ai_to_help_with/o80t0ic/ | false | 2 |
t1_o80spvh | Thank you.
Can this PDF library deal with scans?
Those pdfs need OCR to extract data | 1 | 0 | 2026-03-01T08:20:18 | Gold-Drag9242 | false | null | 0 | o80spvh | false | /r/LocalLLaMA/comments/1rhddg1/want_to_build_a_local_agentic_ai_to_help_with/o80spvh/ | false | 1 |
t1_o80spok | One's friends and family are rarely the target market for niche startup projects already dwarfed by more capable, more well known, free competitors; especially when such projects require intellectual investment to even comprehend what you're "pitching" and why it's a good alternative to said competitors. | 3 | 0 | 2026-03-01T08:20:14 | ___fallenangel___ | false | null | 0 | o80spok | false | /r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o80spok/ | false | 3 |
t1_o80slme | Well I know there were initially issues with the qwen 3.5 unsloth quant, or at least that's what many reported. I can't remember the last occasions of things like this exactly. I recall at least one other unsloth quant not working for some new model family, but couldn't tell you which one. I know I've experienced cases... | 1 | 0 | 2026-03-01T08:19:11 | Monkey_1505 | false | null | 0 | o80slme | false | /r/LocalLLaMA/comments/1rgl42y/qwen_35_122b_hallucinates_horribly/o80slme/ | false | 1 |
t1_o80slc6 | Ah sorry, got my answer from perplexity:
The bottom line: `oh-my-opencode` and `eren726290/opencode-plugins` are complementary, not redundant — one supercharges coding, the other brings business workflows into OpenCode. Cowork plugins are a parallel concept but locked to Anthropic’s desktop app ecosystem, and Claude C... | 1 | 0 | 2026-03-01T08:19:07 | riceinmybelly | false | null | 0 | o80slc6 | false | /r/LocalLLaMA/comments/1rhr9ht/coworke_plugins_wiped_out_100_billion_from_saas_i/o80slc6/ | false | 1 |
t1_o80skec | I want it. | 1 | 0 | 2026-03-01T08:18:53 | woswoissdenniii | false | null | 0 | o80skec | false | /r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o80skec/ | false | 1 |
t1_o80sjmd | FYI the quant issue didn't affect any quants except Q2_X_XL, Q3_X_XL and Q4_X_XL. So if you were using Q5 or above, you were completely in the clear. However, we do have to update all of them for the tool-calling chat template issues. (Note: the chat template issue was prevalent in the original model and is not releva... | 2 | 0 | 2026-03-01T08:18:40 | yoracale | false | null | 0 | o80sjmd | false | /r/LocalLLaMA/comments/1rh4p8i/okay_im_overthinking_yes_yes_you_are_qwen_35_27b/o80sjmd/ | false | 2 |
t1_o80shvz | FYI the quant issue didn't affect any quants except Q2_X_XL, Q3_X_XL and Q4_X_XL. So if you were using Q5 or above, you were completely in the clear. However, we do have to update all of them for the tool-calling chat template issues. (Note: the chat template issue was prevalent in the original model and is not releva... | 1 | 0 | 2026-03-01T08:18:13 | yoracale | false | null | 0 | o80shvz | false | /r/LocalLLaMA/comments/1rh4p8i/okay_im_overthinking_yes_yes_you_are_qwen_35_27b/o80shvz/ | false | 1 |
t1_o80sdpo | Without the threat from a stick. | 2 | 0 | 2026-03-01T08:17:07 | ProfessionalSpend589 | false | null | 0 | o80sdpo | false | /r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o80sdpo/ | false | 2 |
t1_o80s6z9 | What are some real life use-cases you use Nanobot for? I need inspiration. Something that e.g. free ChatGPT can't do. | 1 | 0 | 2026-03-01T08:15:22 | MichalMikolas | false | null | 0 | o80s6z9 | false | /r/LocalLLaMA/comments/1ra89sb/just_installed_nanobot_fully_locally/o80s6z9/ | false | 1 |
t1_o80s56t | Very interesting thank you u/Xantrk we're going to take a look!!! | 1 | 0 | 2026-03-01T08:14:54 | yoracale | false | null | 0 | o80s56t | false | /r/LocalLLaMA/comments/1rgel19/new_qwen3535ba3b_unsloth_dynamic_ggufs_benchmarks/o80s56t/ | false | 1 |
t1_o80s4yb | One's friends and family are rarely the target market for passion projects | 1 | 0 | 2026-03-01T08:14:50 | ___fallenangel___ | false | null | 0 | o80s4yb | false | /r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o80s4yb/ | false | 1 |
t1_o80s20a | You don't have to imagine. That system has existed for a long time.
https://www.972mag.com/lavender-ai-israeli-army-gaza/ | 12 | 0 | 2026-03-01T08:14:04 | bobrobor | false | null | 0 | o80s20a | false | /r/LocalLLaMA/comments/1rhogov/the_us_used_anthropic_ai_tools_during_airstrikes/o80s20a/ | false | 12 |
t1_o80s189 | Your friends and family are rarely the target market for anything | 1 | 0 | 2026-03-01T08:13:52 | ___fallenangel___ | false | null | 0 | o80s189 | false | /r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o80s189/ | false | 1 |
t1_o80rxj1 | Why not just give the model a screenshot of the UI you want? Why make it harder for them? I wouldn’t be able to build one for you without checking references. Also, you can literally add to your prompt, “Please do a web/docs search and find how RPN based calculators work” when you have this type of tooling enabled. | 1 | 0 | 2026-03-01T08:12:54 | jslominski | false | null | 0 | o80rxj1 | false | /r/LocalLLaMA/comments/1rhdddm/qwen_35_122ba10b_q3_k_xl_ud_actually_passed_my/o80rxj1/ | false | 1 |
t1_o80rw7n | [removed] | 1 | 0 | 2026-03-01T08:12:34 | [deleted] | true | null | 0 | o80rw7n | false | /r/LocalLLaMA/comments/1rhmye0/trying_to_improve_my_memory_system_any_notes/o80rw7n/ | false | 1 |
t1_o80rvli | Oh I get you! Could you tell me when you’ve seen other issues like this happen? | 1 | 0 | 2026-03-01T08:12:24 | yoracale | false | null | 0 | o80rvli | false | /r/LocalLLaMA/comments/1rgl42y/qwen_35_122b_hallucinates_horribly/o80rvli/ | false | 1 |
t1_o80rugj | Sick, now just make it vibe code the os around it. | 1 | 0 | 2026-03-01T08:12:06 | wh33t | false | null | 0 | o80rugj | false | /r/LocalLLaMA/comments/1rhg3p4/baremetal_ai_booting_directly_into_llm_inference/o80rugj/ | false | 1 |
t1_o80rth9 | we have boot sector LLM before GTA 6 | 1 | 0 | 2026-03-01T08:11:51 | Altruistic_Heat_9531 | false | null | 0 | o80rth9 | false | /r/LocalLLaMA/comments/1rhg3p4/baremetal_ai_booting_directly_into_llm_inference/o80rth9/ | false | 1 |
t1_o80rqm8 | There are different kinds of reasoning. My benchmark checks that the model won't get lost in a sea of simple premises where omitting or misinterpreting even a single one is fatal, but the reasoning process itself is not complicated.
I wonder how smaller Qwen 3.5 models will perform in problems that are non-obvious and... | 2 | 0 | 2026-03-01T08:11:06 | fairydreaming | false | null | 0 | o80rqm8 | false | /r/LocalLLaMA/comments/1rg9lli/little_qwen_35_27b_and_qwen_35ba3b_models_did/o80rqm8/ | false | 2 |
t1_o80rqfj | FYI the quant issue didn't affect any quants except Q2\_X\_XL, Q3\_X\_XL and Q4\_X\_XL. So if you were using Q6, you were completely in the clear. However, we do have to update all of them for tool-calling chat template issues. (Note the chat template issue was prevalent in the original model and is not relevant to Uns... | 1 | 0 | 2026-03-01T08:11:03 | yoracale | false | null | 0 | o80rqfj | false | /r/LocalLLaMA/comments/1rgtxry/is_qwen35_a_coding_game_changer_for_anyone_else/o80rqfj/ | false | 1 |
t1_o80rq40 | Tried q5_k_m, iq4_xs, ud-iq3_xxs, also didn't help. I think this is a problem with my llama.cpp, but gpt oss 120B runs normally | 1 | 0 | 2026-03-01T08:10:58 | Acrobatic_Donkey5089 | false | null | 0 | o80rq40 | false | /r/LocalLLaMA/comments/1rgl42y/qwen_35_122b_hallucinates_horribly/o80rq40/ | false | 1 |
t1_o80rpug | FYI the quant issue didn't affect any quants except Q2\_X\_XL, Q3\_X\_XL and Q4\_X\_XL. So if you were using Q6, you were completely in the clear. However, we do have to update all of them for tool-calling chat template issues. (Note the chat template issue was prevalent in the original model and is not relevant to Uns... | 2 | 0 | 2026-03-01T08:10:54 | yoracale | false | null | 0 | o80rpug | false | /r/LocalLLaMA/comments/1rgtxry/is_qwen35_a_coding_game_changer_for_anyone_else/o80rpug/ | false | 2 |
t1_o80rpnb | FYI the quant issue didn't affect any quants except Q2\_X\_XL, Q3\_X\_XL and Q4\_X\_XL. So if you were using Q6, you were completely in the clear. However, we do have to update all of them for tool-calling chat template issues. (Note the chat template issue was prevalent in the original model and is not relevant to Uns... | 1 | 0 | 2026-03-01T08:10:50 | yoracale | false | null | 0 | o80rpnb | false | /r/LocalLLaMA/comments/1rgtxry/is_qwen35_a_coding_game_changer_for_anyone_else/o80rpnb/ | false | 1 |
t1_o80rgh2 | 2-3 3090s, this is still a cheap setup for the performance. | 1 | 0 | 2026-03-01T08:08:26 | jslominski | false | null | 0 | o80rgh2 | false | /r/LocalLLaMA/comments/1rh9ygz/is_anyone_else_waiting_for_a_6070b_moe_with_810b/o80rgh2/ | false | 1 |
t1_o80rfes | Oh my opencode itself acts as a plugin, so I took the oh my opencode architecture as an inspirational framework | 2 | 0 | 2026-03-01T08:08:09 | No_Structure7849 | false | null | 0 | o80rfes | false | /r/LocalLLaMA/comments/1rhr9ht/coworke_plugins_wiped_out_100_billion_from_saas_i/o80rfes/ | false | 2 |
t1_o80reof | Please update us with the newest details when you test everything; I am planning to use a hybrid setup as well | 1 | 0 | 2026-03-01T08:07:58 | FeiX7 | false | null | 0 | o80reof | false | /r/LocalLLaMA/comments/1qtnz9s/best_local_model_for_openclaw/o80reof/ | false | 1 |
t1_o80reao | Qwen3.5 a3b is much better in my personal tests. | 1 | 0 | 2026-03-01T08:07:52 | jslominski | false | null | 0 | o80reao | false | /r/LocalLLaMA/comments/1rh9ygz/is_anyone_else_waiting_for_a_6070b_moe_with_810b/o80reao/ | false | 1 |
t1_o80re8c | [deleted] | 1 | 0 | 2026-03-01T08:07:51 | [deleted] | true | null | 0 | o80re8c | false | /r/LocalLLaMA/comments/1rhr9ht/coworke_plugins_wiped_out_100_billion_from_saas_i/o80re8c/ | false | 1 |
t1_o80rbmt | I am, this is exactly what I need on dual RTX 3090! | 1 | 0 | 2026-03-01T08:07:10 | jslominski | false | null | 0 | o80rbmt | false | /r/LocalLLaMA/comments/1rh9ygz/is_anyone_else_waiting_for_a_6070b_moe_with_810b/o80rbmt/ | false | 1 |
t1_o80raaj | Which GGUF did you use for Qwen? Unsloth just updated it, you can try | 1 | 0 | 2026-03-01T08:06:49 | FeiX7 | false | null | 0 | o80raaj | false | /r/LocalLLaMA/comments/1qtnz9s/best_local_model_for_openclaw/o80raaj/ | false | 1 |
t1_o80r7p2 | I'm just using LM Studio to run the model, I'm not typing in its GUI. ty tho | 7 | 0 | 2026-03-01T08:06:09 | PermitNo8107 | false | null | 0 | o80r7p2 | false | /r/LocalLLaMA/comments/1rhr5ko/is_there_a_way_to_disable_thinking_on_qwen_35_27b/o80r7p2/ | false | 7 |