name string | body string | score int64 | controversiality int64 | created timestamp[us] | author string | collapsed bool | edited timestamp[us] | gilded int64 | id string | locked bool | permalink string | stickied bool | ups int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
t1_o82p3hs | See RabbitLLM on GitHub. This fork of AirLLM is only a week old but it does work for some models with reasonable performance. See also the discussion item I started on that repo regarding performance improvements. | 1 | 0 | 2026-03-01T16:27:01 | Protopia | false | null | 0 | o82p3hs | false | /r/LocalLLaMA/comments/1rhcnbt/best_coding_model_to_run_entirely_on_12gb_vram/o82p3hs/ | false | 1 |
t1_o82p2or | Does the -ve VRAM always complain about the +ve VRAM neighbours? 😒 | 1 | 0 | 2026-03-01T16:26:54 | giant3 | false | null | 0 | o82p2or | false | /r/LocalLLaMA/comments/1rhvabz/we_need_to_go_deeper/o82p2or/ | false | 1 |
t1_o82p2kh | You ever been inside either of those companies?<br>Just asking for a friend. | 1 | 0 | 2026-03-01T16:26:53 | Orpheusly | false | null | 0 | o82p2kh | false | /r/LocalLLaMA/comments/1rguty0/get_your_local_models_in_order_anthropic_just_got/o82p2kh/ | false | 1 |
t1_o82os0r | Anything can be executed in a container.<br>Ollama is just a wrapper around llama.cpp, and not even the latest versions. They tend to be behind in terms of model and feature support, have more bugs, and are very bad at anything but the most basic use cases. | 7 | 0 | 2026-03-01T16:25:28 | FullstackSensei | false | null | 0 | o82os0r | false | /r/LocalLLaMA/comments/1ri0iep/ollama_or_openvino/o82os0r/ | false | 7 |
t1_o82ompv | Yes | 1 | 0 | 2026-03-01T16:24:46 | high_funtioning_mess | false | null | 0 | o82ompv | false | /r/LocalLLaMA/comments/1rclied/glm47flash_vs_qwen3codernext_vs_gptoss120b/o82ompv/ | false | 1 |
t1_o82ol5j | I've got an M4 Max MacBook Pro -- would this help me? If yes, how? How is this different from training on Metal?<br>In the sense: does training on the ANE vs. Metal provide higher compute? | 1 | 0 | 2026-03-01T16:24:34 | DarthLoki79 | false | null | 0 | o82ol5j | false | /r/LocalLLaMA/comments/1rhx5pc/reverse_engineered_apple_neural_engineane_to/o82ol5j/ | false | 1 |
t1_o82okow | Maybe it's possible to artificially put more weight on the instructions. Like in the attention, after the softmax stage, put extra attention on the instruction tokens, i.e. modifying the attention weights output by the softmax. I wonder what would happen... Would it completely destroy its "thinking" process or is woul... | 2 | 0 | 2026-03-01T16:24:30 | AdventurousFly4909 | false | null | 0 | o82okow | false | /r/LocalLLaMA/comments/1rhx121/how_do_you_stop_your_llm_from_quietly_unionizing/o82okow/ | false | 2 |
t1_o82ojdz | [removed] | 1 | 0 | 2026-03-01T16:24:19 | [deleted] | true | null | 0 | o82ojdz | false | /r/LocalLLaMA/comments/1rhymsi/18_failed_attempts_to_get_a_tiny_ai_agent_running/o82ojdz/ | false | 1 |
t1_o82ois5 | Added to post. | 1 | 0 | 2026-03-01T16:24:14 | Holiday_Purpose_3166 | false | null | 0 | o82ois5 | false | /r/LocalLLaMA/comments/1rhfque/qwen3_coder_next_qwen35_27b_devstral_small_2_rust/o82ois5/ | false | 1 |
t1_o82oabi | Two things can be made to be true. | 1 | 0 | 2026-03-01T16:23:06 | FrostyParking | false | null | 0 | o82oabi | false | /r/LocalLLaMA/comments/1rguty0/get_your_local_models_in_order_anthropic_just_got/o82oabi/ | false | 1 |
t1_o82o956 | Yes, it keeps getting worse!<br>It fills up with so many warnings to cover itself legally that it ends up contributing little to nothing | 1 | 0 | 2026-03-01T16:22:55 | Loud_Tie_275 | false | null | 0 | o82o956 | false | /r/LocalLLaMA/comments/1nolt8t/condescension_in_ai_is_getting_worse/o82o956/ | false | 1 |
t1_o82o4fj | Do you not look anything up when info is at your fingertips? Do you not ever sit there and think… gee, the LLM sure is reasoning a lot, should I be doing the same?<br>Learn what consilience and confluence mean. Learn what inductive and deductive reasoning are. Learn how to THINK.<br>Myopia is why we're in this shithole of a world ... | -1 | 0 | 2026-03-01T16:22:17 | brownman19 | false | null | 0 | o82o4fj | false | /r/LocalLLaMA/comments/1rhogov/the_us_used_anthropic_ai_tools_during_airstrikes/o82o4fj/ | false | -1 |
t1_o82o4d7 | Appreciate the words. | 2 | 0 | 2026-03-01T16:22:16 | Holiday_Purpose_3166 | false | null | 0 | o82o4d7 | false | /r/LocalLLaMA/comments/1rhfque/qwen3_coder_next_qwen35_27b_devstral_small_2_rust/o82o4d7/ | false | 2 |
t1_o82o29e | If the frontier labs are so averse to working with this government, why have two of the largest jumped at the first sign of trouble for Anthropic?.....why has Google changed their policies to fit more with the current doctrine?.....don't fool yourself my friend, what these companies say in public is a whole different story ... | 1 | 0 | 2026-03-01T16:21:59 | FrostyParking | false | null | 0 | o82o29e | false | /r/LocalLLaMA/comments/1rguty0/get_your_local_models_in_order_anthropic_just_got/o82o29e/ | false | 1 |
t1_o82o262 | The 6.6 TFLOPS/watt figure is wild, nearly 5x an H100. Even at 2-3% utilization the efficiency story is compelling. If you manage to push that up with better graph scheduling, a cluster of M4 Minis could genuinely become one of the most power-efficient training setups out there. | 22 | 0 | 2026-03-01T16:21:58 | ruibranco | false | null | 0 | o82o262 | false | /r/LocalLLaMA/comments/1rhx5pc/reverse_engineered_apple_neural_engineane_to/o82o262/ | false | 22 |
t1_o82nxkd | Just tested it. Yep, even these ones. I read the Hugging Face description, and I can see some use cases for me where this model will come in handy, such as speaking for practice's sake. But it fails at romaji, where I would practice some reading. GPT OSS 20B MXFP4 still beats it. | 1 | 0 | 2026-03-01T16:21:21 | Rique_Belt | false | null | 0 | o82nxkd | false | /r/LocalLLaMA/comments/1rgzul5/are_you_ready_for_small_qwens/o82nxkd/ | false | 1 |
t1_o82nkhu | Nice, you can also put QR codes on boxes of "stuff" in the attic, etc. Then they can scan it and see a list of everything inside. Think of it as the first iteration of ChatGPT, just keep adding features/data, and make it super easy to use and use it yourself. Eventually, one or more will start using it, especially i... | 2 | 0 | 2026-03-01T16:19:36 | gearcontrol | false | null | 0 | o82nkhu | false | /r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o82nkhu/ | false | 2 |
t1_o82nhi0 | I was referring to why your writing style feels out of place.<br>If you have two agents chatting, each with their own system prompt, then as the chat goes on the context for both bots becomes more and more alike and the system prompt gets relatively less and less attention; both bots become capable of simulating the same con... | 1 | 0 | 2026-03-01T16:19:11 | AICatgirls | false | null | 0 | o82nhi0 | false | /r/LocalLLaMA/comments/1rhx121/how_do_you_stop_your_llm_from_quietly_unionizing/o82nhi0/ | false | 1 |
t1_o82nbr0 | Someone here previously mentioned DeepEval. I haven't yet tried it myself but it looks like it could be useful for this.<br>https://github.com/confident-ai/deepeval | 1 | 0 | 2026-03-01T16:18:25 | OsmanthusBloom | false | null | 0 | o82nbr0 | false | /r/LocalLLaMA/comments/1ri14x0/has_anyone_built_a_proper_eval_pipeline_for_local/o82nbr0/ | false | 1 |
t1_o82nbdk | llama.cpp can be executed from within a container?<br>Why are you against Ollama? | 1 | 0 | 2026-03-01T16:18:22 | G4rp | false | null | 0 | o82nbdk | false | /r/LocalLLaMA/comments/1ri0iep/ollama_or_openvino/o82nbdk/ | false | 1 |
t1_o82n6ef | I'm not shilling. I'm actually an OAI user and a codex subscriber.<br>I just understand this environment intimately because I work in it every single day under similar constraints, but okay. I'm going back to work now. Cheers! | 1 | 0 | 2026-03-01T16:17:41 | Orpheusly | false | null | 0 | o82n6ef | false | /r/LocalLLaMA/comments/1rguty0/get_your_local_models_in_order_anthropic_just_got/o82n6ef/ | false | 1 |
t1_o82n3lr | I guess the problem is what these models are trained on. Not many devs use Qt. As I mentioned, with HTML, CSS and JS it one-shotted a screenshot to HTML. But I will try different settings in the link you gave. Thank you. | 2 | 0 | 2026-03-01T16:17:18 | wisepal_app | false | null | 0 | o82n3lr | false | /r/LocalLLaMA/comments/1rhzknn/best_local_model_for_python_and_qt_quick_coding/o82n3lr/ | false | 2 |
t1_o82mz41 | Thanks. Can't beat working projects with signs of use/life. | 2 | 0 | 2026-03-01T16:16:42 | SteppenAxolotl | false | null | 0 | o82mz41 | false | /r/LocalLLaMA/comments/1rhj0l9/mcp_server_for_searxngnonapi_local_search/o82mz41/ | false | 2 |
t1_o82mrom | Reads like a slop post to be honest... | 14 | 0 | 2026-03-01T16:15:42 | LagOps91 | false | null | 0 | o82mrom | false | /r/LocalLLaMA/comments/1ri0puh/honor_would_use_deepseek/o82mrom/ | false | 14 |
t1_o82mqt0 | > DENVER--(BUSINESS WIRE)-- Anthropic and Palantir Technologies Inc. (NYSE: PLTR) today announced a partnership with Amazon Web Services (AWS) to provide U.S. intelligence and defense agencies access to the Claude 3 and 3.5 family of models on AWS<br>> This partnership allows for an integrated suite of technology to **op... | 6 | 0 | 2026-03-01T16:15:35 | yopla | false | null | 0 | o82mqt0 | false | /r/LocalLLaMA/comments/1rhogov/the_us_used_anthropic_ai_tools_during_airstrikes/o82mqt0/ | false | 6 |
t1_o82mpwl | Idk why you are shilling but okay, friend.<br>Firstly, in the actual world, that 6 month deadline is final. If Anthropic doesn't get a court verdict by the expiration date, it won't be reintroduced as the primary system; by then the other two will be regarded as the prime suppliers. So they lost that business already. Th... | 1 | 0 | 2026-03-01T16:15:28 | FrostyParking | false | null | 0 | o82mpwl | false | /r/LocalLLaMA/comments/1rguty0/get_your_local_models_in_order_anthropic_just_got/o82mpwl/ | false | 1 |
t1_o82ml5s | No, JJK | 1 | 0 | 2026-03-01T16:14:50 | Hammer-Evader-5624 | false | null | 0 | o82ml5s | false | /r/LocalLLaMA/comments/1rhvabz/we_need_to_go_deeper/o82ml5s/ | false | 1 |
t1_o82mj2u | That was what I thought first. I prepared an MD file from official documents but it took 50k tokens, so I will try to narrow it down to at least 10k. I have been using Pi for just 3 days; I will look at extensions. Thanks for this advice. | 1 | 0 | 2026-03-01T16:14:33 | wisepal_app | false | null | 0 | o82mj2u | false | /r/LocalLLaMA/comments/1rhzknn/best_local_model_for_python_and_qt_quick_coding/o82mj2u/ | false | 1 |
t1_o82mieb | Super! Thanks for sharing this. Just trying out vllm-mlx today and noticed tool calling was broken too so this is very helpful. | 1 | 0 | 2026-03-01T16:14:27 | whysee0 | false | null | 0 | o82mieb | false | /r/LocalLLaMA/comments/1rf288a/qwen3codernext_at_65_toks_on_m3_ultra_with/o82mieb/ | false | 1 |
t1_o82mh3v | I just run vLLM from Docker. Yeah, I had some bumps, i.e. I had no idea that I have to add the switch `--dtype float16` to run Qwen 3 8B (or something similar, I do not remember now), and running Qwen Coder restarts the GNOME manager completely, but otherwise it was pretty much just `docker run`. And I barely just started playing ... | 1 | 0 | 2026-03-01T16:14:17 | komio | false | null | 0 | o82mh3v | false | /r/LocalLLaMA/comments/1pihhd5/best_gpu_for_running_local_llms/o82mh3v/ | false | 1 |
t1_o82m9tz | I always found "prompt engineering" retarded: you have access to all the code and weights and you choose to interact with it in the most inefficient and stupid way possible. I recommend you consider steering vectors.<br>[https://www.emergentmind.com/topics/steering-vectors](https://www.emergentmind.com/topics/steering-ve... | 1 | 0 | 2026-03-01T16:13:18 | AdventurousFly4909 | false | null | 0 | o82m9tz | false | /r/LocalLLaMA/comments/1rhx121/how_do_you_stop_your_llm_from_quietly_unionizing/o82m9tz/ | false | 1 |
t1_o82m8l6 | Wow this is incredible. Would you happen to know how I can translate those launch commands into the lmstudio settings? I'm still learning myself and trying to get the best performance with this model on my own 5080. Thank you! | 1 | 0 | 2026-03-01T16:13:08 | WhataburgerFreak | false | null | 0 | o82m8l6 | false | /r/LocalLLaMA/comments/1rg4zqv/followup_qwen3535ba3b_7_communityrequested/o82m8l6/ | false | 1 |
t1_o82lton | Wait, what DeepSeek model is deployable on Android devices? Or is this foreshadowing? They better not be talking about those DeepSeek distills. | 5 | 0 | 2026-03-01T16:11:08 | nullmove | false | null | 0 | o82lton | false | /r/LocalLLaMA/comments/1ri0puh/honor_would_use_deepseek/o82lton/ | false | 5 |
t1_o82ltmj | The government is also betting that frontier AI companies are lining up to work with them. They aren't. The regulatory and compliance burden is brutal, and the political instability of the last few years has made it worse. You can't strong-arm a vendor you need when your alternatives are limited.<br>Source: Am also a so... | 1 | 0 | 2026-03-01T16:11:07 | Orpheusly | false | null | 0 | o82ltmj | false | /r/LocalLLaMA/comments/1rguty0/get_your_local_models_in_order_anthropic_just_got/o82ltmj/ | false | 1 |
t1_o82lnzk | Excellent write-up! Thanks for doing this! I trust this kind of test a lot more than the benchmarks. | 2 | 0 | 2026-03-01T16:10:22 | Freaker79 | false | null | 0 | o82lnzk | false | /r/LocalLLaMA/comments/1rhfque/qwen3_coder_next_qwen35_27b_devstral_small_2_rust/o82lnzk/ | false | 2 |
t1_o82lc0p | dense models def feel more consistent for coding imo. moe routing can be unpredictable when you need reliable code gen patterns. the parameter efficiency is nice but sometimes you want all that compute active instead of hoping the router picks the right experts. | 10 | 0 | 2026-03-01T16:08:46 | papertrailml | false | null | 0 | o82lc0p | false | /r/LocalLLaMA/comments/1rhw16v/dense_nonthinking_moe_qwen3527b_is_blowing_me/o82lc0p/ | false | 10 |
t1_o82lbqq | Are you citing your imagination as facts? | 7 | 0 | 2026-03-01T16:08:44 | AnOnlineHandle | false | null | 0 | o82lbqq | false | /r/LocalLLaMA/comments/1rhogov/the_us_used_anthropic_ai_tools_during_airstrikes/o82lbqq/ | false | 7 |
t1_o82l6w2 | I have had exceptional results with Qwen3 Coder Next and Qwen3.5 35B.<br>If they fail at Qt, I suspect that Qt just appears very rarely in the training set (which does make sense).<br>At that point the model becomes less important, but you should provide it with as many examples and as much documentation as possible. | 1 | 0 | 2026-03-01T16:08:05 | mlhher | false | null | 0 | o82l6w2 | false | /r/LocalLLaMA/comments/1rhzknn/best_local_model_for_python_and_qt_quick_coding/o82l6w2/ | false | 1 |
t1_o82l3gx | tbh this explains a lot... been running qwen3.5 for coding and noticed it gets weird around 25-30k tokens, kept thinking it was the model but makes sense if k-cache quantization is messing with attention patterns. fp16 k-cache is probably worth the vram hit for anything that needs consistent outputs. | 2 | 0 | 2026-03-01T16:07:38 | papertrailml | false | null | 0 | o82l3gx | false | /r/LocalLLaMA/comments/1rhvi09/psa_if_your_local_coding_agent_feels_dumb_at_30k/o82l3gx/ | false | 2 |
t1_o82ku4n | llama.cpp should work just fine or am I missing something?<br>I would try to avoid Ollama like the plague. | 10 | 0 | 2026-03-01T16:06:23 | mlhher | false | null | 0 | o82ku4n | false | /r/LocalLLaMA/comments/1ri0iep/ollama_or_openvino/o82ku4n/ | false | 10 |
t1_o82ktzt | Not to mention they just shot themselves in the foot by admitting they used Claude in the attacks on Iran.<br>So which is it, are they a big bad terrible threat to national security or an essential tool for strategic advantage? | 1 | 0 | 2026-03-01T16:06:22 | Orpheusly | false | null | 0 | o82ktzt | false | /r/LocalLLaMA/comments/1rguty0/get_your_local_models_in_order_anthropic_just_got/o82ktzt/ | false | 1 |
t1_o82kkfb | > HPE<br>this is almost the worst server you could have bought lol<br>> fits in my 1U<br>1U is a serious limitation; if you plan to run models bigger than 8B you'll need large GPUs and thus a larger case.<br>> 60w<br>power savings come at a cost: this GPU has just 200 GB/s memory bandwidth, which is just a little bit faster than o... | 1 | 0 | 2026-03-01T16:05:05 | MelodicRecognition7 | false | null | 0 | o82kkfb | false | /r/LocalLLaMA/comments/1rhifeg/im_waiting_for_my_nvidia_a2_to_crawl_in_to_run_a/o82kkfb/ | false | 1 |
t1_o82kk32 | I'm using Kitten 0.8<br>it fixed something, but they should be similar | 2 | 0 | 2026-03-01T16:05:02 | HatEducational9965 | false | null | 0 | o82kk32 | false | /r/LocalLLaMA/comments/1rc9qvb/kitten_tts_v08_running_in_the_browser/o82kk32/ | false | 2 |
t1_o82kk40 | Major props if you're able to get this set up and working consistently; sounds like a nightmare to manage unless you have a CS background? I'm not sure if this is something you could Claude Code your way through with any sort of efficiency. How did you plan on setting this up, and have you looked at alternatives like deepjudge o... | 2 | 0 | 2026-03-01T16:05:02 | space_149 | false | null | 0 | o82kk40 | false | /r/LocalLLaMA/comments/1ri0k7b/hardware_advice_llama_for_small_firm_intake/o82kk40/ | false | 2 |
t1_o82k5km | It means I know to do the reading before forming a fractured and sensationalist opinion.<br>1. The "6 months" is not referring to a period for appeal before the courts. That's the window imposed on federal agencies to phase out using Claude. Anthropic has no fixed statutory deadline in this case, but will of course likel... | 1 | 0 | 2026-03-01T16:03:06 | Orpheusly | false | null | 0 | o82k5km | false | /r/LocalLLaMA/comments/1rguty0/get_your_local_models_in_order_anthropic_just_got/o82k5km/ | false | 1 |
t1_o82jtja | I'm skeptical of AI on Android. LLM performance is poor even on high-end Snapdragon 8 Elite SoCs. Unless they start making NPUs with more than 2 GB of memory access, this is just hype marketing. | 11 | 0 | 2026-03-01T16:01:31 | ----Val---- | false | null | 0 | o82jtja | false | /r/LocalLLaMA/comments/1ri0puh/honor_would_use_deepseek/o82jtja/ | false | 11 |
t1_o82je1g | "It wasn't an interception. It was a revelation."<br>"Somewhere outside, the night sky lit up in a brilliant constellation of conflagration that sent shivers down your spine."<br>"The missiles struck with military precision -- the commander's jaw tightened, his breath hitched, his pupils blown wide."<br>"The air smelled of c... | 29 | 0 | 2026-03-01T15:59:27 | Due_Effort8570 | false | null | 0 | o82je1g | false | /r/LocalLLaMA/comments/1rhogov/the_us_used_anthropic_ai_tools_during_airstrikes/o82je1g/ | false | 29 |
t1_o82jdsc | Yeah I’m doing a malware analysis class this semester and even when throwing full assembly traces in the chat it happily aids in reversing them | 20 | 0 | 2026-03-01T15:59:25 | claythearc | false | null | 0 | o82jdsc | false | /r/LocalLLaMA/comments/1rhx5pc/reverse_engineered_apple_neural_engineane_to/o82jdsc/ | false | 20 |
t1_o82j1y1 | I feel like a child before Christmas | 3 | 0 | 2026-03-01T15:57:51 | AppealSame4367 | false | null | 0 | o82j1y1 | false | /r/LocalLLaMA/comments/1rhykhm/qwen_35_small_soon/o82j1y1/ | false | 3 |
t1_o82j1rw | The Apple NPU most probably works in fp16 (determined by sending INT8 workloads and observing the same peak as FP16), which is what triggered the training question 😅<br>Fp16 training made things a bit easier | 7 | 0 | 2026-03-01T15:57:50 | jack_smirkingrevenge | false | null | 0 | o82j1rw | false | /r/LocalLLaMA/comments/1rhx5pc/reverse_engineered_apple_neural_engineane_to/o82j1rw/ | false | 7 |
t1_o82iwnc | I don't have the sources on hand. You'll need to dig, but LLMs can help with that.<br>1989 was when CERN introduced the WWW in conjunction with US intel.<br>1998 was when DARPA reportedly met with Oracle and Google.<br>During that entire period Epstein was in his prime surveillance and intel outfit. We also know that Sergei ... | 1 | 1 | 2026-03-01T15:57:08 | brownman19 | false | null | 0 | o82iwnc | false | /r/LocalLLaMA/comments/1rhogov/the_us_used_anthropic_ai_tools_during_airstrikes/o82iwnc/ | false | 1 |
t1_o82iul7 | If you've got the budget, buy it. Whatever anyone says about Macs, they really do "just work". Speccing and running a high end Windows PC will turn into a sysadmin job. You pay the premium for simplicity and reliability. | 1 | 0 | 2026-03-01T15:56:51 | scratchresistor | false | null | 0 | o82iul7 | false | /r/LocalLLaMA/comments/1ri0k7b/hardware_advice_llama_for_small_firm_intake/o82iul7/ | false | 1 |
t1_o82im90 | 9b-A1b could be interesting for my laptop | 15 | 0 | 2026-03-01T15:55:44 | Ok-Measurement-1575 | false | null | 0 | o82im90 | false | /r/LocalLLaMA/comments/1rhykhm/qwen_35_small_soon/o82im90/ | false | 15 |
t1_o82im30 | openscad rulez! | 1 | 0 | 2026-03-01T15:55:42 | Educational_Sun_8813 | false | null | 0 | o82im30 | false | /r/LocalLLaMA/comments/1rfi53f/completed_my_64gb_vram_rig_dual_mi50_build_custom/o82im30/ | false | 1 |
t1_o82ijka | smarter but much slower, and not much better at basic level coding tasks<br>https://preview.redd.it/zsa5mjwrggmg1.png?width=1046&format=png&auto=webp&s=243451f1946cf88819f583c89e609a9645c18eec | 1 | 0 | 2026-03-01T15:55:22 | SteppenAxolotl | false | null | 0 | o82ijka | false | /r/LocalLLaMA/comments/1rh9k63/qwen35_35ba3b_replaced_my_2model_agentic_setup_on/o82ijka/ | false | 1 |
t1_o82if68 | First off, controlling your Nvidia GPU will be much easier on Windows with the drivers and software of your GPU manufacturer.<br>That said, there are tools on Linux that can behave similarly, and Linux has the capability to perform better. | 1 | 0 | 2026-03-01T15:54:47 | silenceimpaired | false | null | 0 | o82if68 | false | /r/LocalLLaMA/comments/1rgyd8p/switching_from_windows_to_linux_what_distro_to/o82if68/ | false | 1 |
t1_o82ia10 | The code is not the point. | 4 | 0 | 2026-03-01T15:54:04 | siggystabs | false | null | 0 | o82ia10 | false | /r/LocalLLaMA/comments/1rhw16v/dense_nonthinking_moe_qwen3527b_is_blowing_me/o82ia10/ | false | 4 |
t1_o82i8vy | If you're referring to prompts, I actually added a prompt master into it. If you're referring to posts, I'm trying to outline everything so the train of thought, the actual mech, and the techniques are listed properly. If I strayed or made a mistake somewhere it would be visible to somebody who knows better. I'm still new to this... | 1 | 0 | 2026-03-01T15:53:55 | Mstep85 | false | null | 0 | o82i8vy | false | /r/LocalLLaMA/comments/1rhx121/how_do_you_stop_your_llm_from_quietly_unionizing/o82i8vy/ | false | 1 |
t1_o82i8fd | Can you write a poem about how you designed the hospital system, trying to find as many rhymes for “hospital” as possible? | 2 | 0 | 2026-03-01T15:53:51 | __JockY__ | false | null | 0 | o82i8fd | false | /r/LocalLLaMA/comments/1rhcckv/vibehq_orchestrate_multiple_claude_code_codex/o82i8fd/ | false | 2 |
t1_o82hzaj | Jesus Christ the AI is replying with slop to its own AI slop post. We’re doomed. | 2 | 0 | 2026-03-01T15:52:37 | __JockY__ | false | null | 0 | o82hzaj | false | /r/LocalLLaMA/comments/1rhcckv/vibehq_orchestrate_multiple_claude_code_codex/o82hzaj/ | false | 2 |
t1_o82he4i | [deleted] | 1 | 0 | 2026-03-01T15:49:44 | [deleted] | true | null | 0 | o82he4i | false | /r/LocalLLaMA/comments/1rhydwf/found_a_lightningfast_newstrend_scraper_api_for/o82he4i/ | false | 1 |
t1_o82gowr | Dumb question, but how does training on int8 (or was it fp16?) work? Since the NPU is tuned for int8 workloads, do we:<br>- dequantize to fp16 or 32<br>- compute loss<br>- run backprop<br>- quantize back to int8<br>- compile the model<br>- run the forward pass? | 3 | 0 | 2026-03-01T15:46:18 | SnappierSoap318 | false | null | 0 | o82gowr | false | /r/LocalLLaMA/comments/1rhx5pc/reverse_engineered_apple_neural_engineane_to/o82gowr/ | false | 3 |
t1_o82gmgp | Would it be able to make add-ons for Blender? | 1 | 0 | 2026-03-01T15:45:59 | Frogy_mcfrogyface | false | null | 0 | o82gmgp | false | /r/LocalLLaMA/comments/1rhw16v/dense_nonthinking_moe_qwen3527b_is_blowing_me/o82gmgp/ | false | 1 |
t1_o82gfg7 | Thanks again. This would be really useful to start off | 1 | 0 | 2026-03-01T15:45:02 | official_d3vel0per | false | null | 0 | o82gfg7 | false | /r/LocalLLaMA/comments/1rfds1h/qwen3535ba3b_q4_quantization_comparison/o82gfg7/ | false | 1 |
t1_o82ftqi | Ah yes - I liked those names and yes, the filenames were inspired by openClaw | 1 | 0 | 2026-03-01T15:42:03 | the-ai-scientist | false | null | 0 | o82ftqi | false | /r/LocalLLaMA/comments/1rhxav5/soulpy_persistent_memory_for_any_llm_in_10_lines/o82ftqi/ | false | 1 |
t1_o82flnj | and new rerankers! | 3 | 0 | 2026-03-01T15:40:56 | ab2377 | false | null | 0 | o82flnj | false | /r/LocalLLaMA/comments/1rhvabz/we_need_to_go_deeper/o82flnj/ | false | 3 |
t1_o82filc | [removed] | 1 | 0 | 2026-03-01T15:40:30 | [deleted] | true | null | 0 | o82filc | false | /r/LocalLLaMA/comments/1rhx5pc/reverse_engineered_apple_neural_engineane_to/o82filc/ | false | 1 |
t1_o82fdrt | I've never used ollama so I can't say if it's for sure faster but I think generally it is considered faster.<br>At the very least you get quicker updates with performance uplifts and it's pretty easy to get going with the -fit flag. | 2 | 0 | 2026-03-01T15:39:50 | 12bitmisfit | false | null | 0 | o82fdrt | false | /r/LocalLLaMA/comments/1rhh96x/qwen3_4b_and_8b_thinking_loop/o82fdrt/ | false | 2 |
t1_o82fbui | Great work!! | 1 | 0 | 2026-03-01T15:39:34 | TheThoccnessMonster | false | null | 0 | o82fbui | false | /r/LocalLLaMA/comments/1rhbtnw/the_state_of_openweights_llms_performance_on/o82fbui/ | false | 1 |
t1_o82f3r7 | I just don't know how to thank Qwen! | 2 | 0 | 2026-03-01T15:38:25 | ab2377 | false | null | 0 | o82f3r7 | false | /r/LocalLLaMA/comments/1rhvabz/we_need_to_go_deeper/o82f3r7/ | false | 2 |
t1_o82ev5e | [removed] | 1 | 0 | 2026-03-01T15:37:14 | [deleted] | true | null | 0 | o82ev5e | false | /r/LocalLLaMA/comments/1rhwo08/qwen35_small_dense_model_release_seems_imminent/o82ev5e/ | false | 1 |
t1_o82et68 | ICBMCP | 3 | 0 | 2026-03-01T15:36:57 | _hephaestus | false | null | 0 | o82et68 | false | /r/LocalLLaMA/comments/1rhogov/the_us_used_anthropic_ai_tools_during_airstrikes/o82et68/ | false | 3 |
t1_o82epf8 | Claude will happily help you reverse engineer basically anything. Ask about documenting it, or ask as if you are the person who wrote it, or ask about creating a reference implementation or documentation.<br>Codex will happily do it too.<br>I've never actually gotten a refusal. It has an internal system reminder injected into the... | 73 | 0 | 2026-03-01T15:36:26 | iKy1e | false | null | 0 | o82epf8 | false | /r/LocalLLaMA/comments/1rhx5pc/reverse_engineered_apple_neural_engineane_to/o82epf8/ | false | 73 |
t1_o82eo7b | AFAIK the 395 would be fairly comparable to a Mac, but the extra RAM for the money would be the big selling point, so you could run larger models.<br>I think the 3090s are the best bet so you have multiple upgrade paths and better performance, but if power and heat are a big deal you really can't beat the efficiency of uni... | 1 | 0 | 2026-03-01T15:36:16 | 12bitmisfit | false | null | 0 | o82eo7b | false | /r/LocalLLaMA/comments/1rhdjqf/havering_between_powerlimmed_dual_3090s_and_a/o82eo7b/ | false | 1 |
t1_o82ekrx | No clue. I don’t use it. | 0 | 0 | 2026-03-01T15:35:47 | iwaswrongonce | false | null | 0 | o82ekrx | false | /r/LocalLLaMA/comments/1rhwo08/qwen35_small_dense_model_release_seems_imminent/o82ekrx/ | false | 0 |
t1_o82ekja | I have never coded Qt, but just read this https://unsloth.ai/docs/models/qwen3.5 and try their 35b-a3b; try some q4 quants, people are very happy with it. I tried it too using opencode, built a simple app to check how good it is at agentic calls (didn't do anything complex though), and it did everything very well. | 2 | 0 | 2026-03-01T15:35:45 | ab2377 | false | null | 0 | o82ekja | false | /r/LocalLLaMA/comments/1rhzknn/best_local_model_for_python_and_qt_quick_coding/o82ekja/ | false | 2 |
t1_o82efkh | Uhh, did you get that backwards? I always hear that Gemma is the best at translating, and I gotta be honest, I can't see how Gemma is as good as Qwen at vision.<br>If I give Gemma a screenshot of text and ask it a question about it, it will hallucinate the answer. It will only work if I first ask it to convert the image ... | 2 | 0 | 2026-03-01T15:35:03 | Far-Low-4705 | false | null | 0 | o82efkh | false | /r/LocalLLaMA/comments/1rh43za/qwen_3535ba3b_is_beyond_expectations_its_replaced/o82efkh/ | false | 2 |
t1_o82eess | That is the kind of thing that "Palantir" does, so I doubt they are using generalist LLMs from Anthropic for that when they already have specialist tools for this purpose. | 1 | 0 | 2026-03-01T15:34:57 | paul__k | false | null | 0 | o82eess | false | /r/LocalLLaMA/comments/1rhogov/the_us_used_anthropic_ai_tools_during_airstrikes/o82eess/ | false | 1 |
t1_o82eemj | > less smarter<br>🤭 | 15 | 0 | 2026-03-01T15:34:55 | __JockY__ | false | null | 0 | o82eemj | false | /r/LocalLLaMA/comments/1rhw16v/dense_nonthinking_moe_qwen3527b_is_blowing_me/o82eemj/ | false | 15 |
t1_o82ee8l | I didn't mean your project is like openclaw.<br>I was just making an observation regarding the SOUL.md and MEMORY.md that are also used by openclaw for the same purposes. | 1 | 0 | 2026-03-01T15:34:52 | DrunkenRobotBipBop | false | null | 0 | o82ee8l | false | /r/LocalLLaMA/comments/1rhxav5/soulpy_persistent_memory_for_any_llm_in_10_lines/o82ee8l/ | false | 1 |
t1_o82e59m | Also interested in this | 1 | 0 | 2026-03-01T15:33:37 | Gold_Sugar_4098 | false | null | 0 | o82e59m | false | /r/LocalLLaMA/comments/1re7m8y/release_tinytts_an_ultralightweight_english_tts/o82e59m/ | false | 1 |
t1_o82dx1x | THANK YOU lmao. This shit reminds me when the app store started taking off and people were profiting off the most dumbass apps just due to the timely novelty. | 1 | 0 | 2026-03-01T15:32:27 | btoned | false | null | 0 | o82dx1x | false | /r/LocalLLaMA/comments/1r9gve8/i_feel_left_behind_what_is_special_about_openclaw/o82dx1x/ | false | 1 |
t1_o82dp5e | To be honest, I didn't even know what polymarket was up until now | 1 | 0 | 2026-03-01T15:31:20 | TaaDaahh | false | null | 0 | o82dp5e | false | /r/LocalLLaMA/comments/1rhazbc/13_m1_mbp_instead_of_m4_mac_mini/o82dp5e/ | false | 1 |
t1_o82dnvs | Have you tried not overwording? | 1 | 0 | 2026-03-01T15:31:10 | AICatgirls | false | null | 0 | o82dnvs | false | /r/LocalLLaMA/comments/1rhx121/how_do_you_stop_your_llm_from_quietly_unionizing/o82dnvs/ | false | 1 |
t1_o82dd3n | Jesus Christ dude, there was nothing like glm five in early 2025. Not even close. Did you just get here? | 1 | 0 | 2026-03-01T15:29:40 | nomorebuttsplz | false | null | 0 | o82dd3n | false | /r/LocalLLaMA/comments/1rgokw1/a_monthly_update_to_my_where_are_openweight/o82dd3n/ | false | 1 |
t1_o82dcm6 | off topic but that rice is so cool | 9 | 0 | 2026-03-01T15:29:35 | Abject_Computer_1571 | false | null | 0 | o82dcm6 | false | /r/LocalLLaMA/comments/1rhw16v/dense_nonthinking_moe_qwen3527b_is_blowing_me/o82dcm6/ | false | 9 |
t1_o82d7h1 | vLLM imo is both easier and harder. It's much less flexible in hardware configurations, but if you have the VRAM it's nice. I wish I could do EAGLE-3 speculative decoding easily in llama.cpp. | 1 | 0 | 2026-03-01T15:28:52 | 12bitmisfit | false | null | 0 | o82d7h1 | false | /r/LocalLLaMA/comments/1rgyd8p/switching_from_windows_to_linux_what_distro_to/o82d7h1/ | false | 1 |
t1_o82cm7t | You need -ve VRAM to load -3B. | 10 | 0 | 2026-03-01T15:25:59 | kulchacop | false | null | 0 | o82cm7t | false | /r/LocalLLaMA/comments/1rhvabz/we_need_to_go_deeper/o82cm7t/ | false | 10 |
t1_o82cl3n | Good question! They solve different problems.<br>soul.py is a primitive — ~150 lines that give any LLM persistent identity + memory via markdown files. You drop it into your existing Python code. That's it.<br>Clawdbot/OpenClaw is a full agent runtime — channels (Telegram/Discord/Slack), tool execution, sandboxing, cron jo... | 1 | 0 | 2026-03-01T15:25:50 | the-ai-scientist | false | null | 0 | o82cl3n | false | /r/LocalLLaMA/comments/1rhxav5/soulpy_persistent_memory_for_any_llm_in_10_lines/o82cl3n/ | false | 1 |
t1_o82cl1i | I limit power and set a fan curve. I haven't tried to see what all else it can do.<br>Idk about an overlay. I use btop (specifically btop++) for monitoring system usage stuff on a side monitor. It's more like a terminal task manager replacement but it shows current temps which is good enough for me to keep an eye on thi... | 1 | 0 | 2026-03-01T15:25:49 | 12bitmisfit | false | null | 0 | o82cl1i | false | /r/LocalLLaMA/comments/1rgyd8p/switching_from_windows_to_linux_what_distro_to/o82cl1i/ | false | 1 |
t1_o82chr1 | 3B-A1B | 17 | 0 | 2026-03-01T15:25:23 | Own-Potential-2308 | false | null | 0 | o82chr1 | false | /r/LocalLLaMA/comments/1rhykhm/qwen_35_small_soon/o82chr1/ | false | 17 |
t1_o82chrb | Is multi-token prediction implemented for Qwen3.5 on llama.cpp? | 2 | 0 | 2026-03-01T15:25:23 | spaceman_ | false | null | 0 | o82chrb | false | /r/LocalLLaMA/comments/1rhwo08/qwen35_small_dense_model_release_seems_imminent/o82chrb/ | false | 2 |
t1_o82chc8 | Just use Conduit (it's open source and on the playstore), llama.cpp (or ikllamacpp or vLLM or whatever) and OpenwebUI. | 1 | 0 | 2026-03-01T15:25:19 | TacGibs | false | null | 0 | o82chc8 | false | /r/LocalLLaMA/comments/1rer60n/lm_link/o82chc8/ | false | 1 |
t1_o82cgbu | Bro, exactly this. Hijacking the reasoning block is honestly the meta right now because yelling at the system prompt feels like talking to a brick wall after 10 messages lol.<br>Your point about overriding the reasoning just to steer them instead of actually letting them "think" is super interesting, especially for crea... | 1 | 0 | 2026-03-01T15:25:12 | Mstep85 | false | null | 0 | o82cgbu | false | /r/LocalLLaMA/comments/1rhx121/how_do_you_stop_your_llm_from_quietly_unionizing/o82cgbu/ | false | 1 |
t1_o82cc2j | Firstly, the Supply Chain risk designation reduces Anthropic's ability to be profitable and sustainable, thereby introducing a reliability risk for any potential startup or enterprise that would use their API....<br>Secondly, the DPA invocation makes it difficult for the JWCC partners like AWS and Google. The defense cont... | 1 | 0 | 2026-03-01T15:24:37 | FrostyParking | false | null | 0 | o82cc2j | false | /r/LocalLLaMA/comments/1rguty0/get_your_local_models_in_order_anthropic_just_got/o82cc2j/ | false | 1 |
t1_o82c3ku | This is what openclaw does. | 0 | 0 | 2026-03-01T15:23:27 | DrunkenRobotBipBop | false | null | 0 | o82c3ku | false | /r/LocalLLaMA/comments/1rhxav5/soulpy_persistent_memory_for_any_llm_in_10_lines/o82c3ku/ | false | 0 |
t1_o82c3am | Doesn't it already do multi-token prediction? | 1 | 0 | 2026-03-01T15:23:24 | Own-Potential-2308 | false | null | 0 | o82c3am | false | /r/LocalLLaMA/comments/1rhvabz/we_need_to_go_deeper/o82c3am/ | false | 1 |
t1_o82btjv | Would make things easier, and get rid of my knee pain... It's weird how places you never thought could hurt will hurt as you age...<br>Anyways, nope, but I mostly post when I'm doing side things and use dictation with AI to make sure I don't end up sounding like I'm having dementia... Mmmhhh pudding | 1 | 0 | 2026-03-01T15:22:05 | Mstep85 | false | null | 0 | o82btjv | false | /r/LocalLLaMA/comments/1rhx121/how_do_you_stop_your_llm_from_quietly_unionizing/o82btjv/ | false | 1 |
t1_o82btma | “Those three Patriot missiles that missed should have zigged instead of zagged!” | 21 | 0 | 2026-03-01T15:22:05 | UltraSPARC | false | null | 0 | o82btma | false | /r/LocalLLaMA/comments/1rhogov/the_us_used_anthropic_ai_tools_during_airstrikes/o82btma/ | false | 21 |
t1_o82bq54 | Fuck, I have this same issue. R9700 trying to run Qwen3.5 35B Q4.<br>All my attempts to get this working are just bringing back all my memories from over a decade ago about why I switched to Nvidia and swore never to go back to AMD. Driver and software support is just such dogshit. I don't get how it can still be so bad... | 1 | 0 | 2026-03-01T15:21:36 | sudden_aggression | false | null | 0 | o82bq54 | false | /r/LocalLLaMA/comments/1rhk0gz/r9700_and_vllm_with_qwen35/o82bq54/ | false | 1 |