| column | dtype | min | max |
|---|---|---|---|
| title | string (length) | 1 | 300 |
| score | int64 | 0 | 8.54k |
| selftext | string (length) | 0 | 41.5k |
| created | timestamp[ns] | 2023-04-01 04:30:41 | 2026-03-04 02:14:14 |
| url | string (length) | 0 | 878 |
| author | string (length) | 3 | 20 |
| domain | string (length) | 0 | 82 |
| edited | timestamp[ns] | 1970-01-01 00:00:00 | 2026-02-19 14:51:53 |
| gilded | int64 | 0 | 2 |
| gildings | string (7 classes) | | |
| id | string (length) | 7 | 7 |
| locked | bool (2 classes) | | |
| media | string (length) | 646 | 1.8k |
| name | string (length) | 10 | 10 |
| permalink | string (length) | 33 | 82 |
| spoiler | bool (2 classes) | | |
| stickied | bool (2 classes) | | |
| thumbnail | string (length) | 4 | 213 |
| ups | int64 | 0 | 8.54k |
| preview | string (length) | 301 | 5.01k |
Qwen3.5-35B-A3B is a gamechanger for agentic coding.
1,042
[Qwen3.5-35B-A3B with Opencode](https://preview.redd.it/m4v951sv5jlg1.jpg?width=2367&format=pjpg&auto=webp&s=bec61ca20f08bb766987147287c7d6664308fa2f)

Just tested this bad boy with Opencode, **because frankly I couldn't believe those benchmarks.** Running it on a single RTX 3090 in a headless Linux box, with a freshly compiled llama.cpp. These are my settings after some tweaking, still not fully tuned:

    ./llama.cpp/llama-server \
      -m /models/Qwen3.5-35B-A3B-MXFP4_MOE.gguf \
      -a "DrQwen" \
      -c 131072 \
      -ngl all \
      -ctk q8_0 \
      -ctv q8_0 \
      -sm none \
      -mg 0 \
      -np 1 \
      -fa on

Around 22 GB of VRAM used. Now the fun part:

1. I'm getting over 100 t/s on it.
2. This is the first open-weights model I was able to use on my home hardware to successfully complete my own "coding test" that I used for years in recruitment (mid-level mobile dev, around 5h to complete "pre-AI" ;)). It did it in around 10 minutes, a strong pass. The first agentic tool I was able to "crack" it with was [Kodu.AI](http://Kodu.AI) with an early Sonnet, roughly 14 months ago.
3. For fun I wanted to recreate the dashboard OpenAI used during the Cursor demo last summer. I recreated it with Claude Code back then and posted it on Reddit: [https://www.reddit.com/r/ClaudeAI/comments/1mk7plb/just_recreated_that_gpt5_cursor_demo_in_claude/](https://www.reddit.com/r/ClaudeAI/comments/1mk7plb/just_recreated_that_gpt5_cursor_demo_in_claude/) So... Qwen3.5 was able to do it in around 5 minutes.

**I think we got something special here...**
2026-02-25T00:04:44
https://www.reddit.com/r/LocalLLaMA/comments/1rdxfdu/qwen3535ba3b_is_a_gamechanger_for_agentic_coding/
jslominski
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rdxfdu
false
null
t3_1rdxfdu
/r/LocalLLaMA/comments/1rdxfdu/qwen3535ba3b_is_a_gamechanger_for_agentic_coding/
false
false
https://external-preview…4121fbe2a05fb971
1,042
null
Excluding used hardware, what is currently considered the best bang for buck in Feb 2026?
3
Given what is going on with GPU and memory prices, what is currently considered the best bang for buck with new hardware at around $1,000-1,500 USD that can run 24-32B models at a decent speed with 8k or larger context?

**Recommended options I've seen are:**

- 2x RTX 5060 Ti (moderate speed)
- 2x RX 9060 XT (moderate speed)
- 1-2x R9700 Pro (fast-ish)
- Ryzen AI Max+ 395, 64GB config (not sure how speed compares)

**Stuff I've seen other people not recommend:**

- Intel B50 (slow)
- Intel B60 (slow)

I'd prefer to avoid any used gear. Taking that into account, are there any other options I'm missing?
2026-02-25T00:03:44
https://www.reddit.com/r/LocalLLaMA/comments/1rdxegq/excluding_used_hardware_what_is_currently/
mustafar0111
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rdxegq
false
null
t3_1rdxegq
/r/LocalLLaMA/comments/1rdxegq/excluding_used_hardware_what_is_currently/
false
false
self
3
null
I feel like it keeps going back and forth like this...
0
2026-02-25T00:01:35
https://imgflip.com/i/al0gnp
moderately-extremist
imgflip.com
1970-01-01T00:00:00
0
{}
1rdxck5
false
null
t3_1rdxck5
/r/LocalLLaMA/comments/1rdxck5/i_feel_like_it_keeps_going_back_and_forth_like/
false
false
https://external-preview…3e256bc474e49f40
0
{'enabled': False, 'images': [{'id': '3Z6soEuc15xPpcO0ibAsvnZWU_YA_RFo5OyOHwmHKAc', 'resolutions': [{'height': 104, 'url': 'https://external-preview.redd.it/3Z6soEuc15xPpcO0ibAsvnZWU_YA_RFo5OyOHwmHKAc.jpeg?width=108&crop=smart&auto=webp&s=ac96bedd556ef91a530b08f57623d5869c98bc09', 'width': 108}, {'height': 208, 'url': 'https://external-preview.redd.it/3Z6soEuc15xPpcO0ibAsvnZWU_YA_RFo5OyOHwmHKAc.jpeg?width=216&crop=smart&auto=webp&s=7136d42ab15d56a5a9ee69f587d9cfefa7c25766', 'width': 216}, {'height': 309, 'url': 'https://external-preview.redd.it/3Z6soEuc15xPpcO0ibAsvnZWU_YA_RFo5OyOHwmHKAc.jpeg?width=320&crop=smart&auto=webp&s=96fadc04e3bafd9ca8431ba16cf94bfd16d17bdd', 'width': 320}], 'source': {'height': 491, 'url': 'https://external-preview.redd.it/3Z6soEuc15xPpcO0ibAsvnZWU_YA_RFo5OyOHwmHKAc.jpeg?auto=webp&s=5da533ded08c3d5b52cb4e019f123726f30f6306', 'width': 508}, 'variants': {}}]}
Anyone else watching DeepSeek repos? 39 PRs merged today — pre-release vibes or just normal cleanup?
0
I saw a post claiming DeepSeek devs merged **39 PRs today** in one batch, and it immediately gave me "release hardening" vibes. Not saying "V4 confirmed" or anything — but big merge waves *often* happen when:

- features are basically frozen
- QA/regression is underway
- docs/tests/edge cases get cleaned up
- release branches are being stabilized

A few questions for folks who track these repos more closely:

- Is this kind of merge burst normal for DeepSeek, or unusual?
- Any signs of version bumps / tags / releases across related repos?
- If there *is* a next drop coming, what do you think they're optimizing for?
  - coding benchmarks?
  - long context / repo-scale understanding?
  - tool use + agent workflows?
  - inference efficiency / deployment footprint?

Also curious: what would you consider *real* confirmation vs noise? (A release tag? A model card update? A sudden docs refresh? New eval reports?)

Would love links/screenshots if you've been monitoring the activity.
2026-02-24T23:58:43
https://www.reddit.com/r/LocalLLaMA/comments/1rdx9up/anyone_else_watching_deepseek_repos_39_prs_merged/
azahar_h
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rdx9up
false
null
t3_1rdx9up
/r/LocalLLaMA/comments/1rdx9up/anyone_else_watching_deepseek_repos_39_prs_merged/
false
false
self
0
null
Gemini 3.1 Pro is a visionary designer, but a terrible employee. (Our battle with a 20% tool-call failure rate in a multi-agent stack)
0
We're building Bobr — an AI presentation generator that uses a multi-agent architecture. We recently added Gemini 3.1 Pro as a model option alongside Claude Sonnet 4.6 and GPT-5.2. **Look at the attached images.** The visual and design quality completely blew us away. The reliability of the model... not so much. Here's a breakdown of what happens when you put Gemini 3.1 Pro in a rigid agentic pipeline.

# Our Architecture (Briefly)

We run a two-level agent system:

1. **Orchestrator Agent:** Handles the conversation, understands user intent, plans the structure, and dispatches work via tool calls.
2. **Creative Agent (Gemini 3.1 Pro for this test):** A sub-agent that receives a slide description, generates images, builds a template (1920x1080), and submits the result back.

The creative agent has its own tool belt (generate_image, search_images) and, most importantly, submit_slide. This is the only way to return the finished slide back to the orchestrator. When called, it returns signal: 'submit', terminates the agent loop, and extracts the slide data. Both agents run through the same loop with streaming, parallel tool execution, and iteration limits.

# What Gemini 3.1 Pro does incredibly well

The design output is genuinely impressive. Compared to older models or GPTs:

* **Aesthetic intuition:** It has better color theory and visual hierarchy.
* **Layout creativity:** It actually experiments with asymmetric compositions, overlapping elements, and dark-mode/glassmorphism UI (see the screenshots).
* **Vibe checks:** It's significantly better at interpreting vague prompts like "make it feel premium" or "tech startup vibes."
* **Clean Code:** The HTML/CSS it generates is modern and structural.

When it works, the slides look noticeably better than what we get from any other model in the creative agent role.

# The Problems (Why it's driving us crazy)

**1. The ~20% tool call failure rate**

In roughly 20% of requests, Gemini 3.1 Pro simply forgets or refuses to call submit_slide. Instead, it does one of the following:

* Outputs the raw HTML template as plain text in its response, describing what it "would" create rather than actually triggering the tool.
* Generates the necessary images correctly but then just... stops. It hits the iteration limit without ever submitting the final package.
* Calls the image generation tools fine, but then writes a natural language summary ("Here is your beautiful slide...") instead of the final submit_slide tool call.
* Enters a loop where it keeps refining the design description in text without ever committing to an action.

Since submit_slide is a hard exit path, without it, the orchestrator gets nothing back, and the user gets a failed generation.

**2. Garbled / Corrupted Output**

On top of the tool amnesia, the model frequently returns corrupted text in its responses — random character sequences, broken Unicode, half-encoded strings. Not every time, but enough to be an issue. This sometimes bleeds into the slide content itself (variable values, template markup). So even when it does call submit_slide successfully, the user might see gibberish text rendered on their presentation.

# The Comparison

For reference, **Claude Sonnet 4.6** in this exact same creative agent role has a near-zero failure rate on submit_slide. It is almost boringly reliable about calling the tool and never produces garbled output. **GPT-5.2** sits somewhere in the middle on tool reliability, but doesn't suffer from encoding/gibberish issues.

# What we've tried (with no luck)

* Adding aggressive explicit instructions in the system prompt: "You MUST call submit_slide. Do not output the template as text."
* Injecting few-shot examples showing the exact expected tool call pattern.
* Reducing the iteration limit to force faster convergence.
* Stripping down and simplifying the tool schema.

Nothing has moved the needle significantly.

# Where we are now

Gemini 3.1 Pro is live on our platform as an opt-in experimental choice. Users who pick it get stunning, highly creative designs ~60% of the time, and a retry error or garbage text the other ~20%. We're keeping Claude Sonnet as the Orchestrator where reliability is non-negotiable.

It's a frustrating spot: the model with the absolute best creative output is also the least reliable in a structured agentic pipeline.

Has anyone else hit similar tool-calling amnesia or garbled Unicode issues with Gemini 3.1 Pro? Did you find any prompting tricks or architectural workarounds that force it to actually execute the final tool? Would love to hear your experiences.
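One architectural workaround worth sketching (not something the post confirms trying): wrap the creative-agent loop so that a turn which produces text but no submit_slide call is retried with a corrective message instead of counting as a finished generation. A minimal, provider-agnostic sketch in Python; `run_turn` and `extract_tool_calls` are hypothetical placeholders for whatever the actual agent-loop plumbing looks like:

```python
# Sketch of a guard around the creative agent: retry when the model "forgets"
# to call submit_slide. run_turn() and extract_tool_calls() are hypothetical
# stand-ins for the real agent-loop plumbing described in the post.
MAX_RETRIES = 3

def run_creative_agent(task, run_turn, extract_tool_calls):
    messages = [{"role": "user", "content": task}]
    for attempt in range(MAX_RETRIES):
        response = run_turn(messages)                 # one model turn (streamed or not)
        calls = extract_tool_calls(response)          # parsed tool calls, possibly empty
        submit = next((c for c in calls if c["name"] == "submit_slide"), None)
        if submit is not None:
            return submit["arguments"]                # hard exit path: the slide payload
        # No submit_slide: feed the failure back instead of silently burning iterations.
        messages.append({"role": "assistant", "content": response.get("text", "")})
        messages.append({
            "role": "user",
            "content": "You did not call submit_slide. Do not describe the slide in "
                       "text; call the submit_slide tool with the finished template now.",
        })
    raise RuntimeError("creative agent never called submit_slide")
```

The idea is simply to turn "tool amnesia" into an explicit, bounded retry rather than an exhausted iteration limit, so the orchestrator either gets a real slide payload or a clean failure.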
2026-02-24T23:58:18
https://www.reddit.com/gallery/1rdx9i5
bobr_ai
reddit.com
1970-01-01T00:00:00
0
{}
1rdx9i5
false
null
t3_1rdx9i5
/r/LocalLLaMA/comments/1rdx9i5/gemini_31_pro_is_a_visionary_designer_but_a/
false
false
https://preview.redd.it/…94d1e97c66d13f1f
0
null
Apple gave up on the AI rat race to become the next Nvidia
1
[removed]
2026-02-24T23:58:05
https://x.com/tim_cook/status/2026351829928624257
_manteca
x.com
1970-01-01T00:00:00
0
{}
1rdx9b0
false
null
t3_1rdx9b0
/r/LocalLLaMA/comments/1rdx9b0/apple_gave_up_on_the_ai_rat_race_to_become_the/
false
false
default
1
null
Ran 3 popular ~30B MoE models on my Apple Silicon M1 Max 64GB. Here's how they compare
11
Three recent "small but mighty" MoE models, GLM-4.7-Flash, Nemotron-3-Nano, and Qwen3-Coder, all share a similar formula: roughly 30 billion total parameters, but only ~3 billion active per token. That makes them ideal candidates for local inference on Apple Silicon. I put all three through the same gauntlet on my MacBook Pro M1 Max (64GB) using `llama-server` (build 8139, `--flash-attn on`, `--ctx-size 4096`, default `--n-parallel 4`) to see how they actually stack up.

---

## Model Specs at a Glance

| | GLM-4.7-Flash | Nemotron-3-Nano-30B | Qwen3-Coder-30B |
|---|---|---|---|
| **Made by** | Zhipu AI | NVIDIA | Alibaba Qwen |
| **Params (total / active)** | 29.9B / ~3B | 31.6B / 3.2B | 30.5B / 3.3B |
| **Architecture** | DeepSeek-V2 MoE + MLA | Hybrid Mamba-2 + Transformer MoE | Transformer MoE + GQA |
| **Expert routing** | 64+1 shared, top-4 | 128+1 shared, top-6 | 128, top-8 |
| **Context window** | 202K | 1M | 262K |
| **Quant used** | Q4_K_XL (4.68 BPW) | Q4_K_XL (5.78 BPW) | IQ4_XS (4.29 BPW) |
| **Size on disk** | 16 GB | 22 GB | 15 GB |
| **VRAM consumed** | ~16.9 GB | ~22.0 GB | ~15.8 GB |
| **Built-in thinking** | Yes (heavy CoT) | Yes (lightweight CoT) | No |
| **License** | MIT | NVIDIA Open | Apache 2.0 |

---

## How Fast Are They? (Raw Numbers)

Four test prompts, single request each, no batching. Averages below:

| Metric | GLM-4.7-Flash | Nemotron-3-Nano | Qwen3-Coder |
|---|---|---|---|
| **Prefill speed (avg)** | 99.4 tok/s | **136.9 tok/s** | 132.1 tok/s |
| **Token generation (avg)** | 36.8 tok/s | 43.7 tok/s | **58.5 tok/s** |
| **Generation range** | 34.9–40.6 tok/s | 42.1–44.8 tok/s | 57.0–60.2 tok/s |

### Detailed Numbers Per Prompt (prefill / generation, tok/s)

| Prompt | GLM-4.7-Flash | Nemotron-3-Nano | Qwen3-Coder |
|---|---|---|---|
| General Knowledge | 54.9 / 40.6 | 113.8 / 44.8 | 75.1 / 60.2 |
| Math Reasoning | 107.1 / 35.6 | 176.9 / 44.5 | 171.9 / 59.5 |
| Coding Task | 129.5 / 36.2 | 134.5 / 43.5 | 143.8 / 57.0 |
| ELI10 Explanation | 106.0 / 34.9 | 122.4 / 42.1 | 137.4 / 57.2 |

---

## The Hidden Cost: Thinking Tokens

This turned out to be the most interesting finding. **GLM and Nemotron both generate internal reasoning tokens before answering**, while Qwen3-Coder (Instruct variant) goes straight to the response. The difference in user-perceived speed is dramatic:

| Prompt | GLM (thinking + visible) | Nemotron (thinking + visible) | Qwen (visible only) |
|---|---|---|---|
| General Knowledge | 632 tok (2163 chars thinking, 868 chars answer) | 309 tok (132 chars thinking, 1347 chars answer) | **199 tok** (1165 chars answer) |
| Math Reasoning | 1408 tok (3083 chars thinking, 957 chars answer) | 482 tok (213 chars thinking, 1002 chars answer) | **277 tok** (685 chars answer) |
| Coding Task | 1033 tok (2701 chars thinking, 1464 chars answer) | 1947 tok (360 chars thinking, 6868 chars answer) | **1159 tok** (4401 chars answer) |
| ELI10 Explanation | 1664 tok (4567 chars thinking, 1903 chars answer) | 1101 tok (181 chars thinking, 3802 chars answer) | **220 tok** (955 chars answer) |

GLM's reasoning traces run 2-5x longer than Nemotron's, which significantly inflates wait times. Nemotron keeps its thinking relatively brief. Qwen produces zero hidden tokens, so every generated token goes directly to the user.

### Wall-Clock Time Until You See a Complete Answer

| Prompt | GLM | Nemotron | Qwen |
|---|---|---|---|
| General Knowledge | 15.6s | 6.9s | **3.3s** |
| Math Reasoning | 39.5s | 10.8s | **4.7s** |
| Coding Task | 28.6s | 44.8s | **20.3s** |
| ELI10 Explanation | 47.7s | 26.2s | **3.8s** |

---

## Output Quality: How Good Are the Answers?

Every model nailed the math trick question ($0.05). Here's how each performed across all four prompts:

### "What is bitcoin?" (asked for 2-3 paragraphs)

| Model | Verdict | Details |
|---|---|---|
| **GLM-4.7-Flash** | Excellent | Polished and professional. Covered blockchain, limited supply, and mining clearly. |
| **Nemotron-3-Nano** | Excellent | Most in-depth response. Went into the double-spending problem and proof-of-work mechanism. |
| **Qwen3-Coder** | Good | Shortest but perfectly adequate. Described it as "digital gold." Efficient writing. |

### "Bat and ball" trick question (step-by-step reasoning)

| Model | Got it right? | Details |
|---|---|---|
| **GLM-4.7-Flash** | Yes ($0.05) | LaTeX-formatted math, verified the answer at the end. |
| **Nemotron-3-Nano** | Yes ($0.05) | Also LaTeX, well-labeled steps throughout. |
| **Qwen3-Coder** | Yes ($0.05) | Plaintext algebra, also verified. Cleanest and shortest solution. |

### Longest palindromic substring (Python coding)

| Model | Verdict | Details |
|---|---|---|
| **GLM-4.7-Flash** | Good | Expand-around-center, O(n^2) time, O(1) space. Type-annotated code. Single algorithm only. |
| **Nemotron-3-Nano** | Excellent | Delivered two solutions: expand-around-center AND Manacher's O(n) algorithm. Thorough explanations and test cases included. |
| **Qwen3-Coder** | Excellent | Also two algorithms with detailed test coverage. Well-organized code structure. |

### "Explain TCP vs UDP to a 10-year-old"

| Model | Verdict | Details |
|---|---|---|
| **GLM-4.7-Flash** | Excellent | Used "Registered Letter" vs "Shouting" analogy. Great real-world examples like movie streaming and online gaming. |
| **Nemotron-3-Nano** | Excellent | Built a creative comparison table with emoji. Framed it as "Reliable Delivery game" vs "Speed Shout game." Probably the most fun to read for an actual kid. |
| **Qwen3-Coder** | Good | "Letter in the mail" vs "Shouting across the playground." Short and effective but less imaginative than the other two. |

---

## RAM and Disk Usage

| Component | GLM-4.7-Flash | Nemotron-3-Nano | Qwen3-Coder |
|---|---|---|---|
| **Model weights (GPU)** | 16.3 GB | 21.3 GB | 15.2 GB |
| **CPU spillover** | 170 MB | 231 MB | 167 MB |
| **KV / State Cache** | 212 MB | 214 MB (24 MB KV + 190 MB recurrent state) | 384 MB |
| **Compute buffer** | 307 MB | 298 MB | 301 MB |
| **Approximate total** | ~17.0 GB | ~22.0 GB | ~16.1 GB |

64GB unified memory handles all three without breaking a sweat. Nemotron takes the most RAM because of its hybrid Mamba-2 architecture and higher bits-per-weight quant (5.78 BPW). Both GLM and Qwen should work fine on 32GB M-series Macs too.

---

## Bottom Line

| Category | Winner | Reason |
|---|---|---|
| **Raw generation speed** | **Qwen3-Coder** (58.5 tok/s) | Zero thinking overhead + compact IQ4_XS quantization |
| **Time from prompt to complete answer** | **Qwen3-Coder** | 3-20s vs 7-48s for the thinking models |
| **Prefill throughput** | **Nemotron-3-Nano** (136.9 tok/s) | Mamba-2 hybrid architecture excels at processing input |
| **Depth of reasoning** | **GLM-4.7-Flash** | Longest and most thorough chain-of-thought |
| **Coding output** | **Nemotron / Qwen** (tie) | Both offered multiple algorithms with test suites |
| **Lightest on resources** | **Qwen3-Coder** (15 GB disk / ~16 GB RAM) | Most aggressive quantization of the three |
| **Context window** | **Nemotron-3-Nano** (1M tokens) | Mamba-2 layers scale efficiently to long sequences |
| **Licensing** | **Qwen3-Coder** (Apache 2.0) | Though GLM's MIT is equally permissive in practice |

**Here's what I'd pick depending on the use case:**

- Need something that feels instant and responsive for everyday tasks? **Qwen3-Coder.** 58 tok/s with no thinking delay is hard to beat for interactive use.
- Want the most careful, well-reasoned outputs and can tolerate longer waits? **GLM-4.7-Flash.** Its extended chain-of-thought pays off in answer depth.
- Looking for a balance of speed, quality, and massive context support? **Nemotron-3-Nano.** Its Mamba-2 hybrid is architecturally unique, processes prompts the fastest, and that 1M context window is unmatched — though it's also the bulkiest at 22 GB.

The ~30B MoE class with ~3B active parameters is hitting a real sweet spot for local inference on Apple Silicon. All three run comfortably on an M1 Max 64GB.

---

**Test rig:** MacBook Pro M1 Max (64GB) | llama.cpp build 8139 | llama-server --flash-attn on --ctx-size 4096 | macOS Darwin 25.2.0

**Quantizations:** GLM Q4_K_XL (Unsloth) | Nemotron Q4_K_XL (Unsloth) | Qwen IQ4_XS (Unsloth)

---

## Discussion

Enough numbers. **Be honest, are any of you actually daily-driving these ~30B MoE models for real stuff?** Coding, writing, whatever. Or is it still just "ooh cool let me try this one next" vibes? No judgment either way lol. Curious what people are actually getting done with these locally.
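For anyone wanting to reproduce rough throughput numbers, a client-side measurement against llama-server's OpenAI-compatible endpoint can be done by timing one completion and dividing by the reported token count. A minimal sketch; the port and model name are assumptions, and llama-server's own timing output will be more precise than this wall-clock math:

```python
# Sketch: rough tokens/sec measurement against a running llama-server instance.
# Assumptions (not from the post): server on localhost:8080, single loaded model.
import time
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8080/v1", api_key="none")

prompt = "Explain TCP vs UDP to a 10-year-old."
start = time.perf_counter()
resp = client.chat.completions.create(
    model="local",  # assumed placeholder name for the single loaded model
    messages=[{"role": "user", "content": prompt}],
    max_tokens=512,
)
elapsed = time.perf_counter() - start

usage = resp.usage
print(f"prompt tokens:    {usage.prompt_tokens}")
print(f"generated tokens: {usage.completion_tokens}")
print(f"approx speed:     {usage.completion_tokens / elapsed:.1f} tok/s (wall clock, incl. prefill)")
```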
2026-02-24T23:50:08
https://www.reddit.com/r/LocalLLaMA/comments/1rdx2c7/ran_3_popular_30b_moe_models_on_my_apple_silicon/
luke_pacman
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rdx2c7
false
null
t3_1rdx2c7
/r/LocalLLaMA/comments/1rdx2c7/ran_3_popular_30b_moe_models_on_my_apple_silicon/
false
false
self
11
null
Text Behind Video: Create cinematic text and video compositions locally in your browser w/ Transformers.js
17
The model (BEN2 by PramaLLC) runs locally in your browser on WebGPU with Transformers.js v4, and video processing/composition is handled by Mediabunny (amazing library)! The model and demo code are MIT-licensed, so feel free to use and adapt it however you want. Hope you like it! Demo (+ source code): [https://huggingface.co/spaces/webml-community/text-behind-video](https://huggingface.co/spaces/webml-community/text-behind-video)
2026-02-24T23:44:06
https://v.redd.it/lknmglxs3jlg1
xenovatech
/r/LocalLLaMA/comments/1rdwx1x/text_behind_video_create_cinematic_text_and_video/
1970-01-01T00:00:00
0
{}
1rdwx1x
false
{'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/lknmglxs3jlg1/DASHPlaylist.mpd?a=1774718122%2CNDgyZjdkMWNhODBkMGRkYWUzMjQ3NjYzNDYwNmJkNWVjNzg4MjU5Y2NjZDcwMWU1Mjc5MTRmY2UwYWU1MDFmZg%3D%3D&v=1&f=sd', 'duration': 92, 'fallback_url': 'https://v.redd.it/lknmglxs3jlg1/CMAF_720.mp4?source=fallback', 'has_audio': True, 'height': 720, 'hls_url': 'https://v.redd.it/lknmglxs3jlg1/HLSPlaylist.m3u8?a=1774718122%2CMTM0Yjc1MjY2NzViYTA5NzgzYzI3ZDIzNGI0MjgxZTIyYjJjM2VlZDBhNTBmMGYyZDMxZmY1NmMzNjEwY2MzNA%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/lknmglxs3jlg1/CMAF_96.mp4', 'transcoding_status': 'completed', 'width': 1148}}
t3_1rdwx1x
/r/LocalLLaMA/comments/1rdwx1x/text_behind_video_create_cinematic_text_and_video/
false
false
https://external-preview…e5a54965f0db94bc
17
{'enabled': False, 'images': [{'id': 'OGhuNzZxeHMzamxnMepMzWzaiCiqPWR6VgEc94CsGf2_Nkl0yiMqpQYXy0c0', 'resolutions': [{'height': 67, 'url': 'https://external-preview.redd.it/OGhuNzZxeHMzamxnMepMzWzaiCiqPWR6VgEc94CsGf2_Nkl0yiMqpQYXy0c0.png?width=108&crop=smart&format=pjpg&auto=webp&s=35b8b3c61650c908a317c6f13c69dba7d7bca39f', 'width': 108}, {'height': 135, 'url': 'https://external-preview.redd.it/OGhuNzZxeHMzamxnMepMzWzaiCiqPWR6VgEc94CsGf2_Nkl0yiMqpQYXy0c0.png?width=216&crop=smart&format=pjpg&auto=webp&s=c4cd248a38061c7fedd5618b80e0b7c863bfe818', 'width': 216}, {'height': 200, 'url': 'https://external-preview.redd.it/OGhuNzZxeHMzamxnMepMzWzaiCiqPWR6VgEc94CsGf2_Nkl0yiMqpQYXy0c0.png?width=320&crop=smart&format=pjpg&auto=webp&s=2a0a1e99fb80dbab7f18a4b7101df2574e417422', 'width': 320}, {'height': 401, 'url': 'https://external-preview.redd.it/OGhuNzZxeHMzamxnMepMzWzaiCiqPWR6VgEc94CsGf2_Nkl0yiMqpQYXy0c0.png?width=640&crop=smart&format=pjpg&auto=webp&s=c9a6902e90046810f828cdf046082f670b6c629f', 'width': 640}, {'height': 602, 'url': 'https://external-preview.redd.it/OGhuNzZxeHMzamxnMepMzWzaiCiqPWR6VgEc94CsGf2_Nkl0yiMqpQYXy0c0.png?width=960&crop=smart&format=pjpg&auto=webp&s=edc2022451f6d7c4e71d51ecc10e838b52a0ea5f', 'width': 960}, {'height': 677, 'url': 'https://external-preview.redd.it/OGhuNzZxeHMzamxnMepMzWzaiCiqPWR6VgEc94CsGf2_Nkl0yiMqpQYXy0c0.png?width=1080&crop=smart&format=pjpg&auto=webp&s=be584004ce53c9f4e5b9be52ad73f2b567fad9f3', 'width': 1080}], 'source': {'height': 1078, 'url': 'https://external-preview.redd.it/OGhuNzZxeHMzamxnMepMzWzaiCiqPWR6VgEc94CsGf2_Nkl0yiMqpQYXy0c0.png?format=pjpg&auto=webp&s=03f548eff5ac23a24eccf21ab8174ad4511c1515', 'width': 1718}, 'variants': {}}]}
Need a recommendation for a machine
1
Hello guys, I have a budget of around 2,500 euros for a new machine that I want to use for inference and some fine-tuning. I have seen the Strix Halo being recommended a lot and checked out the EVO-X2 from GMKtec, and it seems like it is what I need for my budget. However, no Nvidia means no CUDA. Do you guys have any thoughts on whether this is the machine I need? Do you believe an Nvidia card is a prerequisite for the work I need it for? If not, could you please list some use cases for Nvidia cards? Thanks a lot in advance for your time, and sorry if my post seems all over the place; I'm just getting into these things for local development.
2026-02-24T23:16:21
https://www.reddit.com/r/LocalLLaMA/comments/1rdw83n/need_a_recommendation_for_a_machine/
wavz89
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rdw83n
false
null
t3_1rdw83n
/r/LocalLLaMA/comments/1rdw83n/need_a_recommendation_for_a_machine/
false
false
self
1
null
Bullshit Benchmark - A benchmark for testing whether models identify and push back on nonsensical prompts instead of confidently answering them
86
https://preview.redd.it/n7w95mmuyilg1.png?width=1080&format=png&auto=webp&s=6e87d1a7d9275935b2f552cfbb887ad6fe4dcf86

View the results: [https://petergpt.github.io/bullshit-benchmark/viewer/index.html](https://petergpt.github.io/bullshit-benchmark/viewer/index.html)

This is a pretty interesting benchmark. It measures how much a model is willing to go along with obvious bullshit. That's something that has always concerned me with LLMs: they don't call you out and instead just go along with it, basically self-inducing hallucinations for the sake of giving a "helpful" response. I always had the intuition that the Claude models were significantly better in that regard than the Gemini models, and these results seem to support that. Here is a question/answer example showing Claude succeeding and Gemini failing:

https://preview.redd.it/4lyi593wyilg1.png?width=1080&format=png&auto=webp&s=eb83c7a188a28dc00dd48a8106680589814c2c03

It's surprising that Gemini 3.1 Pro, even with high thinking effort, failed so miserably to detect that this was an obvious nonsense question and instead made up a nonsense answer. Anthropic is pretty good at post-training and it shows, because LLMs naturally tend towards this superficial associative thinking where they generate spurious relationships between concepts that just misguide the user. They had to hammer that out at some point in their post-training pipeline.
2026-02-24T23:14:54
https://www.reddit.com/r/LocalLLaMA/comments/1rdw6pp/bullshit_benchmark_a_benchmark_for_testing/
bot_exe
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rdw6pp
false
null
t3_1rdw6pp
/r/LocalLLaMA/comments/1rdw6pp/bullshit_benchmark_a_benchmark_for_testing/
false
false
https://preview.redd.it/…3c505538c3357ba9
86
null
What happened to ChatGPT? Moving to Claude!
0
Can't believe I'm saying this, but I'll move to Claude. I always liked Claude for coding, not for general-purpose chat, analysis, or creative writing. Now, I'm realizing ChatGPT is getting really bad! Like, I ask a question about something and it ignores it completely and answers the same question as the previous turn. Also, sometimes when I upload a PDF and ask a question, it ignores the PDF and answers based on the last turn. Creative writing has gone down the drain! I used to like GPT-4o and they killed it too! Also, Gemini 2.5 is working pretty well. I think Sam said somewhere they traded off creativity for coding, but even Codex is not fun for pair programming! Once, it removed my code because it was throwing an error (the "Silicon Valley" TV show moment), and once it ran git reset on my untracked code. I had such big hopes for OpenAI; I hope they make a comeback!
2026-02-24T23:12:01
https://www.reddit.com/r/LocalLLaMA/comments/1rdw40e/what_happened_to_chatgpt_moving_to_claude/
awsaf49
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rdw40e
false
null
t3_1rdw40e
/r/LocalLLaMA/comments/1rdw40e/what_happened_to_chatgpt_moving_to_claude/
false
false
self
0
null
Fully local code indexing with Ollama embeddings — GPU-accelerated semantic search, no API keys, no cloud
1
Built an MCP server called srclight for deep code indexing that's 100% local. No API keys, no cloud calls, your code never leaves your machine.

The stack:

- tree-sitter AST parsing (11 languages: Python, C, C++, C#, JavaScript, TypeScript, Dart, Swift, Kotlin, Java, Go)
- SQLite FTS5 for keyword search (3 indexes: symbol names with camelCase/snake_case splitting, trigram for substring, Porter stemmer for docstrings)
- Ollama for embeddings (qwen3-embedding default, nomic-embed-text also works)
- cupy for GPU-accelerated cosine similarity (~3ms on 27K vectors, RTX 3090)
- numpy fallback (~105ms) if no GPU
- Hybrid search: Reciprocal Rank Fusion (RRF, k=60) combining FTS5 + embedding results

The embedding approach: .npy sidecar files loaded to GPU VRAM once, then all queries served from VRAM. Cold start ~300ms, then ~3ms/query. Incremental — only re-embeds symbols whose content hash changed. Full embed of 45K symbols takes ~15 min with qwen3-embedding; incremental is instant.

25 MCP tools total:

- Symbol search (FTS5 + semantic + hybrid RRF)
- Relationship graph (callers, callees, transitive dependents, implementors, inheritance tree, test coverage)
- Git change intelligence (blame per symbol, hotspot detection, uncommitted WIP, commit history)
- Build system awareness (CMake, .csproj targets and platform conditionals)
- Multi-repo workspaces — SQLite ATTACH+UNION across repos, search 10+ repos simultaneously

I index 13 repos (45K symbols) in a workspace. Everything is stored in a single SQLite file per repo. No Docker, no Redis, no vector database, no cloud embedding APIs. Git hooks (post-commit, post-checkout) keep the index fresh automatically.

I surveyed 50+ MCP code search servers across all the major registries. Most are grep wrappers or need cloud embedding APIs (OpenAI, Voyage). srclight is the only one combining local FTS5 keyword search + local Ollama embeddings + GPU-accelerated vector cache + git intelligence + multi-repo workspaces in a single pip install.

Works with any MCP client (Claude Code, Cursor, Windsurf, Cline, VS Code).

pip install srclight

[https://github.com/srclight/srclight](https://github.com/srclight/srclight)

MIT licensed, fully open source. Happy to talk about the architecture — FTS5 tokenization strategies, RRF hybrid search, ATTACH+UNION for multi-repo, cupy vs numpy perf, etc.
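The hybrid search merges the FTS5 keyword ranking with the embedding ranking via Reciprocal Rank Fusion. A minimal sketch of RRF with k=60 as described; the function and symbol names below are illustrative, not srclight's actual API:

```python
# Sketch of Reciprocal Rank Fusion (RRF, k=60) over two ranked result lists,
# the technique named in the post for merging FTS5 hits with embedding hits.
# Inputs are ordered symbol IDs; names are illustrative only.
from collections import defaultdict

def rrf_merge(fts_results, embedding_results, k=60):
    scores = defaultdict(float)
    for ranked in (fts_results, embedding_results):
        for rank, symbol_id in enumerate(ranked, start=1):
            scores[symbol_id] += 1.0 / (k + rank)
    # Highest fused score first.
    return sorted(scores, key=scores.get, reverse=True)

# Symbols ranked by both lists get boosted above symbols that appear in only one.
print(rrf_merge(["parse_ast", "build_index", "search"],
                ["build_index", "rrf_merge", "parse_ast"]))
```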
2026-02-24T23:04:40
https://www.reddit.com/r/LocalLLaMA/comments/1rdvx2k/fully_local_code_indexing_with_ollama_embeddings/
srclight
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rdvx2k
false
null
t3_1rdvx2k
/r/LocalLLaMA/comments/1rdvx2k/fully_local_code_indexing_with_ollama_embeddings/
false
false
self
1
{'enabled': False, 'images': [{'id': 'jWvXieoILmbNbgjkVxRBP7hfNODqhdh6hihixGJQYYU', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/jWvXieoILmbNbgjkVxRBP7hfNODqhdh6hihixGJQYYU.png?width=108&crop=smart&auto=webp&s=e7e07b3f33ffaa7e0628b203e101b800df8655ac', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/jWvXieoILmbNbgjkVxRBP7hfNODqhdh6hihixGJQYYU.png?width=216&crop=smart&auto=webp&s=5d6301866d4c706857ad13d8abdbdfa6b55debfd', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/jWvXieoILmbNbgjkVxRBP7hfNODqhdh6hihixGJQYYU.png?width=320&crop=smart&auto=webp&s=bf5a72246465572adb5bcc9481de151ce960aaff', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/jWvXieoILmbNbgjkVxRBP7hfNODqhdh6hihixGJQYYU.png?width=640&crop=smart&auto=webp&s=1faa02169dd13584d6e6120b57d043d03f2db38b', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/jWvXieoILmbNbgjkVxRBP7hfNODqhdh6hihixGJQYYU.png?width=960&crop=smart&auto=webp&s=b040fc1b6fa1b3d547714047c3cd117b134f8fcc', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/jWvXieoILmbNbgjkVxRBP7hfNODqhdh6hihixGJQYYU.png?width=1080&crop=smart&auto=webp&s=f602d1089ec802c92abd98203a81689ddfe7d1d3', 'width': 1080}], 'source': {'height': 640, 'url': 'https://external-preview.redd.it/jWvXieoILmbNbgjkVxRBP7hfNODqhdh6hihixGJQYYU.png?auto=webp&s=30f5ba5c2def9b97f30b8323dad0c871c786f993', 'width': 1280}, 'variants': {}}]}
Local LLM Benchmark tools
3
What are you guys using as an LLM benchmark to compare various models on your hardware? I'm looking for something basic to get performance snapshots while iterating on various models and their configurations, in a more objective manner than just eyeballing it and going by vibes. I use two platforms, llama.cpp and LM Studio.
2026-02-24T23:02:40
https://www.reddit.com/r/LocalLLaMA/comments/1rdvva6/local_llm_benchmark_tools/
BargeCptn
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rdvva6
false
null
t3_1rdvva6
/r/LocalLLaMA/comments/1rdvva6/local_llm_benchmark_tools/
false
false
self
3
null
Started using AnythingLLM - having trouble understanding key concepts
2
AnythingLLM seems like a powerful tool, but so far I am mostly confused and feel like I am missing the point.

1. Are threads actually "chats"? If so, what's the need for a "default" thread? Also, "forking" a new thread just shows it branching from the main workspace and not from the original thread.
2. Are contexts from documents only fetched once per thread intentionally, or am I not using it well? I expect the agent to search for relevant context for each new message, but it keeps referring to the original 4 contexts it retrieved for the first question.
2026-02-24T23:00:18
https://www.reddit.com/r/LocalLLaMA/comments/1rdvt02/started_using_anythingllm_having_trouble/
Coach_Unable
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rdvt02
false
null
t3_1rdvt02
/r/LocalLLaMA/comments/1rdvt02/started_using_anythingllm_having_trouble/
false
false
self
2
null
Qwen3.5 27B is Match Made in Heaven for Size and Performance
238
Just got Qwen3.5 27B running on my server and wanted to share the full setup for anyone trying to do the same.

**Setup:**

* Model: Qwen3.5-27B-Q8_0 (unsloth GGUF), thanks Dan
* GPU: RTX A6000 48GB
* Inference: llama.cpp with CUDA
* Context: 32K
* Speed: ~19.7 tokens/sec

**Why Q8 and not a lower quant?** With 48GB VRAM the Q8 fits comfortably at 28.6GB, leaving plenty of headroom for KV cache. Quality is virtually identical to full BF16 — no reason to go lower if your VRAM allows it.

**What's interesting about this model:** It uses a hybrid architecture mixing Gated Delta Networks with standard attention layers. In practice this means faster processing on long contexts compared to a pure transformer. 262K native context window, 201 languages, vision capable. On benchmarks it trades blows with frontier closed-source models on GPQA Diamond, SWE-bench, and the Harvard-MIT math tournament — at 27B parameters on a single consumer GPU.

**Streaming works out of the box** via the llama-server OpenAI-compatible endpoint — a drop-in replacement for any OpenAI SDK integration.

Full video walkthrough in the comments for anyone who wants the exact commands: [https://youtu.be/EONM2W1gUFY?si=4xcrJmcsoUKkim9q](https://youtu.be/EONM2W1gUFY?si=4xcrJmcsoUKkim9q)

Happy to answer questions about the setup.

Model Card: [Qwen/Qwen3.5-27B · Hugging Face](https://huggingface.co/Qwen/Qwen3.5-27B)
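Since the post notes that streaming works through llama-server's OpenAI-compatible endpoint, here is a minimal streaming sketch with the OpenAI SDK; the port and model name are assumptions and should match however the server was actually launched:

```python
# Sketch: token streaming from llama-server's OpenAI-compatible endpoint.
# Assumptions: server reachable on localhost:8080 and serving the Qwen3.5 27B model.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8080/v1", api_key="unused")

stream = client.chat.completions.create(
    model="qwen3.5-27b",  # assumed alias; use whatever name the server reports
    messages=[{"role": "user", "content": "Summarize what a KV cache is in two sentences."}],
    stream=True,
)
for chunk in stream:
    delta = chunk.choices[0].delta.content
    if delta:
        print(delta, end="", flush=True)
print()
```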
2026-02-24T22:57:12
https://www.reddit.com/r/LocalLLaMA/comments/1rdvq3s/qwen35_27b_is_match_made_in_heaven_for_size_and/
Lopsided_Dot_4557
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rdvq3s
false
null
t3_1rdvq3s
/r/LocalLLaMA/comments/1rdvq3s/qwen35_27b_is_match_made_in_heaven_for_size_and/
false
false
self
238
{'enabled': False, 'images': [{'id': 'OesADB7ecB91NwB-oVwS-uWQBkmCujf4cnNMlu8R3_4', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/OesADB7ecB91NwB-oVwS-uWQBkmCujf4cnNMlu8R3_4.jpeg?width=108&crop=smart&auto=webp&s=d81dc977b0ef3f17ba6bdc11da8a39f9ca06bd20', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/OesADB7ecB91NwB-oVwS-uWQBkmCujf4cnNMlu8R3_4.jpeg?width=216&crop=smart&auto=webp&s=dbe72c8f2caa267f2f87327c8486f869812bdb7d', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/OesADB7ecB91NwB-oVwS-uWQBkmCujf4cnNMlu8R3_4.jpeg?width=320&crop=smart&auto=webp&s=ef45ee186e915a4dfbfa6778607e26e5a3611329', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/OesADB7ecB91NwB-oVwS-uWQBkmCujf4cnNMlu8R3_4.jpeg?auto=webp&s=75a75ceaeed7e0ca7c3bc0b065246ad38e5f1b15', 'width': 480}, 'variants': {}}]}
Open source research assistant with 7 AI agents – supports local models via LiteLLM
1
[removed]
2026-02-24T22:55:59
https://www.reddit.com/r/LocalLLaMA/comments/1rdvoz3/open_source_research_assistant_with_7_ai_agents/
abed_tarakji
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rdvoz3
false
null
t3_1rdvoz3
/r/LocalLLaMA/comments/1rdvoz3/open_source_research_assistant_with_7_ai_agents/
false
false
self
1
{'enabled': False, 'images': [{'id': 'hm3-fy6-9wx4u6Mmk8bi00fV82OkEY4z4_7vifV6u7k', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/hm3-fy6-9wx4u6Mmk8bi00fV82OkEY4z4_7vifV6u7k.png?width=108&crop=smart&auto=webp&s=4c04a708881661732f6e1729eb147cae65bc95dc', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/hm3-fy6-9wx4u6Mmk8bi00fV82OkEY4z4_7vifV6u7k.png?width=216&crop=smart&auto=webp&s=8421676cc38f51cc3b8b2b86aef218e7c4ac9df9', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/hm3-fy6-9wx4u6Mmk8bi00fV82OkEY4z4_7vifV6u7k.png?width=320&crop=smart&auto=webp&s=85c841705ff2fd1b13b522676038f0e42e5bdede', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/hm3-fy6-9wx4u6Mmk8bi00fV82OkEY4z4_7vifV6u7k.png?width=640&crop=smart&auto=webp&s=827f6bd7eb612bcef30c6d339bda0884542371dc', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/hm3-fy6-9wx4u6Mmk8bi00fV82OkEY4z4_7vifV6u7k.png?width=960&crop=smart&auto=webp&s=7889de4ae9860e97eaaaaa72e7f75a43f2c77bbe', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/hm3-fy6-9wx4u6Mmk8bi00fV82OkEY4z4_7vifV6u7k.png?width=1080&crop=smart&auto=webp&s=cef3d7c50d5bda37e5c15a9a76562debe9d41f02', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/hm3-fy6-9wx4u6Mmk8bi00fV82OkEY4z4_7vifV6u7k.png?auto=webp&s=10d0b8fdd9c849ba53efde0fdea5d1e20f9d2550', 'width': 1200}, 'variants': {}}]}
Show HN: AgentKeeper – Cross-model memory for AI agents
2
Problem I kept hitting: every time I switched LLM providers or an agent crashed, it lost all context. Built AgentKeeper to fix this. It introduces a Cognitive Reconstruction Engine (CRE) that stores agent memory independently of any provider.

Usage:

    agent = agentkeeper.create()
    agent.remember("project budget: 50000 EUR", critical=True)
    agent.switch_provider("anthropic")
    response = agent.ask("What is the budget?")
    # → "The project budget is 50,000 EUR."

Benchmark: 19/20 critical facts recovered switching GPT-4 → Claude (and reverse). Real API calls, not mocked.

Supports OpenAI, Anthropic, Gemini, Ollama. SQLite persistence. MIT license.

GitHub: [https://github.com/Thinklanceai/agentkeeper](https://github.com/Thinklanceai/agentkeeper)

Feedback welcome — especially on the CRE prioritization logic.
2026-02-24T22:53:31
https://www.reddit.com/r/LocalLLaMA/comments/1rdvmtc/show_hn_agentkeeper_crossmodel_memory_for_ai/
Rich-Department-7049
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rdvmtc
false
null
t3_1rdvmtc
/r/LocalLLaMA/comments/1rdvmtc/show_hn_agentkeeper_crossmodel_memory_for_ai/
false
false
self
2
{'enabled': False, 'images': [{'id': 'itt3PZrZRJsMwXHNt5GzSNPvdbNqKyLebbTtIhwz428', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/itt3PZrZRJsMwXHNt5GzSNPvdbNqKyLebbTtIhwz428.png?width=108&crop=smart&auto=webp&s=25d34d214885b1a40b3a01a84467e78cd2821f60', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/itt3PZrZRJsMwXHNt5GzSNPvdbNqKyLebbTtIhwz428.png?width=216&crop=smart&auto=webp&s=69117b81881728472309e577c11128e7b7751ee0', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/itt3PZrZRJsMwXHNt5GzSNPvdbNqKyLebbTtIhwz428.png?width=320&crop=smart&auto=webp&s=5d268047e62f49dd6486b4db9e039dad7173f1c6', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/itt3PZrZRJsMwXHNt5GzSNPvdbNqKyLebbTtIhwz428.png?width=640&crop=smart&auto=webp&s=23ce81fb1e57f26150fc7037095bb6e150b24fb5', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/itt3PZrZRJsMwXHNt5GzSNPvdbNqKyLebbTtIhwz428.png?width=960&crop=smart&auto=webp&s=715789efef348baa75d64f53e2f2b3d4bc766b14', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/itt3PZrZRJsMwXHNt5GzSNPvdbNqKyLebbTtIhwz428.png?width=1080&crop=smart&auto=webp&s=9b99f889ae67e8466368959e6a966c25e73dc5ca', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/itt3PZrZRJsMwXHNt5GzSNPvdbNqKyLebbTtIhwz428.png?auto=webp&s=b74c2953abba902b9992b132f8e085e77e637d55', 'width': 1200}, 'variants': {}}]}
XCFramework and iOS 26.2?
3
Anyone here have success with llama-xcframework on iOS 26.2? I'm writing a Swift AI chat front end for it and can't seem to get inference working. The app crashes as soon as a prompt is sent, something to do with tokenization. Are they even compatible? I tried with a bridging header too. No dice! I'm trying with small models (<1B). The models load successfully; they just crash on inference.
2026-02-24T22:46:21
https://www.reddit.com/r/LocalLLaMA/comments/1rdvg5r/xcframework_and_ios_262/
FreQRiDeR
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rdvg5r
false
null
t3_1rdvg5r
/r/LocalLLaMA/comments/1rdvg5r/xcframework_and_ios_262/
false
false
self
3
null
FlashLM v6 "SUPERNOVA": 4.1M ternary model hits 3,500 tok/s on CPU — novel P-RCSM reasoning architecture, no attention, no convolution
74
Back with v6. Some of you saw v5 "Thunderbolt" — 29.7M params, PPL 1.36, beat the TinyStories-1M baseline on a borrowed Ryzen 7950X3D (thanks again to arki05 for that machine). This time I went back to the free Deepnote notebook — 2 threads, 5GB RAM — and built a completely new architecture from scratch.

**What it is:** 4.1M parameter language model with a novel architecture called P-RCSM (Parallel Reasoning via Compositional State Machine). 81% of weights are ternary {-1, 0, +1}. Trained for ~3 hours on a free CPU notebook. No GPU at any point. Generates coherent children's stories with characters, dialogue, and narrative structure at 3,500 tokens/sec.

**Why this matters beyond TinyStories:** I'm a student with no budget for GPUs. This entire project runs on free-tier cloud CPUs. But the goal was never "make a toy story generator" — it's to prove that a ternary, matmul-free architecture can produce coherent language on the absolute worst hardware available. Think about where a model like this could actually be useful: a fast, tiny model running on a couple of CPU cores alongside a big GPU model on the same server. The small model handles routing, classification, draft tokens for speculative decoding — tasks where latency matters more than capability. Or on edge devices, phones, microcontrollers — places where there's no GPU at all. At 3,500 tok/s on 2 CPU threads with 16MB of RAM, this is already fast enough to be practical as a side-car model. TinyStories is just the proving ground. The architecture is what I'm validating.

**The new architecture — P-RCSM:** v4 used convolutions for token mixing. v5 used gated recurrence. v5.2 used standard attention. All have tradeoffs — convolutions have limited receptive field, recurrence is sequential (slow on CPU), attention is O(T²). v6 introduces three new components:

* **MultiScaleLinearBank** — replaces convolutions. Projects [current_token, shifted_token] through ternary linear layers at multiple temporal offsets (shift 1, shift 2). A learned soft router blends the scales per token. No Conv1d anywhere — pure F.linear calls.
* **HierarchicalStateGate** — a compact "planner" state (32 dims) that gates a larger "executor" state (64 dims). The planner updates slowly via mean-pooled summaries, providing implicit adaptive computation depth. No Python loops.
* **SlotMemoryAttention** — 8 learned memory slots accessed via a single matmul. Tokens query the slots in parallel. Replaces sequential read/write memory with one batched operation.

All three use only `F.linear` (BitLinear ternary) and element-wise ops. Zero convolutions, zero attention, zero sequential loops.

    Embedding (4K × 192, float, weight-tied)
    → 6× SupernovaBlock:
        RMSNorm → GatedLinearMixer (ternary) + residual
        RMSNorm → P-RCSM (MultiScaleLinearBank + StateGate + SlotMemory) + residual
        RMSNorm → TernaryGLU (ternary gate/up/down, SiLU) + residual
    → RMSNorm → Output Head (tied to embedding)

**Results:**

| | FlashLM v6 | FlashLM v5.2 | FlashLM v4 |
|:-|:-|:-|:-|
| Params | 4.1M (81% ternary) | 5.0M (float32) | 4.3M (ternary) |
| Val PPL | 14.0 | 10.56 | 15.05 |
| Speed | 3,500 tok/s | 3,500 tok/s | 1,480 tok/s |
| Architecture | P-RCSM (linear-only) | Transformer + RoPE | GatedConvMixer |
| Token mixing | GatedLinearMixer | Multi-head attention | Depthwise conv |
| Training time | ~3 hours | 2 hours | 2 hours |
| Hardware | 2-thread CPU | 2-thread CPU | 2-thread CPU |

v6 beats v4 on quality (PPL 14.0 vs 15.05) with 2.4× the throughput, using a fundamentally different architecture. v5.2 still wins on PPL because standard attention with RoPE is hard to beat at small scale — but v6 uses zero attention and zero convolution.

**Honest assessment:** The P-RCSM reasoning components are small in this config (d_reason=64, d_planner=32, 2 scales, 8 memory slots). Most capacity is in the GatedLinearMixer + TernaryGLU backbone. To really prove the reasoning components help, I need more data — 4.4M tokens is tiny and the model hit a data ceiling at PPL 14.0 after ~9 epochs. The architecture needs to be tested at scale with a proper dataset.

**Sample output:** Coherent narratives, character names, dialogue, emotional content. Some repetition on longer generations — expected with a 6-token effective receptive field.

**Training curve:**

| Step | Train Loss | Val PPL | Tokens |
|:-|:-|:-|:-|
| 50 | 3.52 | — | 0.05M |
| 300 | 1.90 | 45.0 | 0.31M |
| 1,500 | 1.54 | 24.1 | 1.5M |
| 6,000 | 1.36 | 16.6 | 6.1M |
| 15,300 | 1.28 | 14.2 | 15.7M |
| 30,300 | 1.25 | **14.0** | 31.0M |

Loss was still improving when I stopped. Data-limited, not architecture-limited.

**The speed debugging story:** The original v6 design used depthwise Conv1d and ran at 13 tok/s. Turned out PyTorch 2.1.2 has a known bug where bfloat16 autocast + Conv1d is ~100× slower on CPU. After upgrading to PyTorch 2.5.1+cpu and replacing every Conv1d with pure F.linear calls, speed jumped from 13 → 3,500 tok/s. Lesson: on CPU, `F.linear` through optimized BLAS is king.

**What's next:**

1. **Scale test** — P-RCSM needs to be validated on a bigger model (10M+ params) with more data. The reasoning components are too small in this config to prove they help.
2. **Better dataset** — TinyStories was the proving ground. Need broader data to test if the architecture generalizes.
3. **Nano-Coder (NC series)** — Applying FlashLM techniques to code generation.
4. **C inference runtime** — AVX2 ternary kernels. A 4.1M ternary model packs into ~800KB — fits entirely in L2 cache. Should be insanely fast with native code.

**The bigger picture:** I started this project on a free 2-thread notebook because that's what I had. I'm a student, no GPU budget, no lab access. Every version of FlashLM has been about pushing what's possible under the worst constraints. If this architecture works at 1-2B parameters on a proper CPU (say an EPYC with big L3 cache), a fast ternary model running on spare CPU cores could serve as a draft model for speculative decoding, a router for MoE, or a standalone model for edge deployment. That's the long-term bet. If anyone has compute to spare and wants to help scale this up — or just wants to run the training script yourself — everything is MIT licensed and on GitHub.

**Links:**

* GitHub: [https://github.com/changcheng967/FlashLM](https://github.com/changcheng967/FlashLM)
* v6 model + weights: [https://huggingface.co/changcheng967/flashlm-v6-supernova](https://huggingface.co/changcheng967/flashlm-v6-supernova)
* v5 Thunderbolt: [https://huggingface.co/changcheng967/flashlm-v5-thunderbolt](https://huggingface.co/changcheng967/flashlm-v5-thunderbolt)
* v4 Bolt: [https://huggingface.co/changcheng967/flashlm-v4-bolt](https://huggingface.co/changcheng967/flashlm-v4-bolt)
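For readers who want a concrete picture of the "81% ternary" part, here is a minimal PyTorch sketch of a BitLinear-style layer that quantizes weights to {-1, 0, +1} with a straight-through estimator. This is a generic illustration of the technique, not FlashLM's actual implementation:

```python
# Generic sketch of a ternary (BitNet-style) linear layer: weights are quantized
# to {-1, 0, +1} on the forward pass, gradients flow to the latent float weights
# via a straight-through estimator. Illustrative only, not FlashLM's code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TernaryLinear(nn.Module):
    def __init__(self, in_features, out_features):
        super().__init__()
        self.weight = nn.Parameter(torch.empty(out_features, in_features))
        nn.init.kaiming_uniform_(self.weight, a=5 ** 0.5)

    def forward(self, x):
        w = self.weight
        scale = w.abs().mean().clamp(min=1e-5)      # per-tensor scale
        w_q = torch.round(w / scale).clamp(-1, 1)   # ternary weights {-1, 0, +1}
        w_q = w + (w_q * scale - w).detach()        # straight-through estimator
        return F.linear(x, w_q)

layer = TernaryLinear(192, 192)
out = layer(torch.randn(4, 192))
print(out.shape)  # torch.Size([4, 192])
```

At inference time the ternary weights plus one scale per tensor are all that is needed, which is why a 4.1M-parameter model can pack into well under a megabyte.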
2026-02-24T22:36:52
https://www.reddit.com/r/LocalLLaMA/comments/1rdv74o/flashlm_v6_supernova_41m_ternary_model_hits_3500/
Own-Albatross868
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rdv74o
false
null
t3_1rdv74o
/r/LocalLLaMA/comments/1rdv74o/flashlm_v6_supernova_41m_ternary_model_hits_3500/
false
false
self
74
{'enabled': False, 'images': [{'id': 'SdjmLzpk4IFYTEHz-c1xxXHbWZz0qxcCC6hk8sjLP78', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/SdjmLzpk4IFYTEHz-c1xxXHbWZz0qxcCC6hk8sjLP78.png?width=108&crop=smart&auto=webp&s=b26a0808a94ce4951c0439b760a31e4def13b061', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/SdjmLzpk4IFYTEHz-c1xxXHbWZz0qxcCC6hk8sjLP78.png?width=216&crop=smart&auto=webp&s=8a2b4494c12dc8ea643ac038ad426b724e2ddf6e', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/SdjmLzpk4IFYTEHz-c1xxXHbWZz0qxcCC6hk8sjLP78.png?width=320&crop=smart&auto=webp&s=d140d64433d15bfeae09817da8105cdb836dddaf', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/SdjmLzpk4IFYTEHz-c1xxXHbWZz0qxcCC6hk8sjLP78.png?width=640&crop=smart&auto=webp&s=dc9067a2c1a1d748530d954b446a6a5ded317e2e', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/SdjmLzpk4IFYTEHz-c1xxXHbWZz0qxcCC6hk8sjLP78.png?width=960&crop=smart&auto=webp&s=0db4457c1d6b9caea069b0bdd8e02b1f39ace779', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/SdjmLzpk4IFYTEHz-c1xxXHbWZz0qxcCC6hk8sjLP78.png?width=1080&crop=smart&auto=webp&s=526e03f5d7cc3ae7c6f34a8f6ad3d48edc70c222', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/SdjmLzpk4IFYTEHz-c1xxXHbWZz0qxcCC6hk8sjLP78.png?auto=webp&s=a69ef4e4a7676e6b4011fdac6a261c6866d35c74', 'width': 1200}, 'variants': {}}]}
Running Kimi K2.5? - Tell us your Build, Quant, Pre-processing and Generation Tokens/second Please!
4
I'm extremely interested in running Kimi K2.5 at home but want to understand the hardware options and the approximate speeds I'm going to get running the model. The easy (and common) answer is 1-2 Mac M3 Ultra 512GB Studios (depending on the quant; if I went this route I'd wait for the M5), at $11-22k. Looking at all-Nvidia builds to store the whole thing in VRAM, you would need 4x H200 NVLs or 8x RTX 6000 Pros and some serious power. But I'd love to know other setups and what speed everyone is getting from them. We really need to design a system to collect metrics from the community; I'm sure the issue then becomes how many different ways you can run a model (and its parameters).
2026-02-24T22:33:35
https://www.reddit.com/r/LocalLLaMA/comments/1rdv3v0/running_kimi_k25_tell_us_your_build_quant/
bigh-aus
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rdv3v0
false
null
t3_1rdv3v0
/r/LocalLLaMA/comments/1rdv3v0/running_kimi_k25_tell_us_your_build_quant/
false
false
self
4
null
GLM4.7 flash VS Qwen 3.5 35B
39
Hi all! I was wondering if anyone has compared these two models thoroughly, and if so, what their thoughts on them are. Thanks!
2026-02-24T22:17:35
https://www.reddit.com/r/LocalLLaMA/comments/1rduokx/glm47_flash_vs_qwen_35_35b/
KlutzyFood2290
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rduokx
false
null
t3_1rduokx
/r/LocalLLaMA/comments/1rduokx/glm47_flash_vs_qwen_35_35b/
false
false
self
39
null
For those looking to enable Qwen3.5's no-think mode in LM-Studio
0
Here is a Jinja template to enter in your model's settings: from "My Models", look for the "Inference" tab in the right-hand panel. This template completely disables thinking mode, until LM-Studio gives us an update with a nice toggle. LM-Studio lets you restore the default template in one click if needed. [https://pastebin.com/A5vWGKVE](https://pastebin.com/A5vWGKVE) Tested with Qwen3.5 27B and 35B.
2026-02-24T22:14:13
https://www.reddit.com/r/LocalLLaMA/comments/1rdulg1/pour_ceux_qui_cherchent_à_activer_le_mode_nothink/
Adventurous-Paper566
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rdulg1
false
null
t3_1rdulg1
/r/LocalLLaMA/comments/1rdulg1/pour_ceux_qui_cherchent_à_activer_le_mode_nothink/
false
false
self
0
null
I built an open source Claude Code plugin that saves 94% of your context window.
0
The problem: Claude Code has a 200K token context window. With popular MCP servers like Playwright, Context7, and GitHub active, 72% of it is consumed before you even start working. A single Playwright snapshot burns up to 135K tokens. After 30 minutes of real debugging, responses slow to a crawl. Context Mode fixes this. It intercepts large outputs, processes them in isolated subprocesses, and returns only what matters. Raw data never enters your context. [claude-context-mode](https://github.com/mksglu/claude-context-mode)
2026-02-24T22:14:03
https://v.redd.it/163n7lxynilg1
Alarming-Garage4299
v.redd.it
1970-01-01T00:00:00
0
{}
1rdula1
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/163n7lxynilg1/DASHPlaylist.mpd?a=1774563265%2CNGE3MzIwYjlkNjBhNThhNDA5ZGZlZGRjNWQzNTgzNTIyODNlMzM1MzM0ZjE5OTc3MDczYjc4NTEyNTY0MjUzYg%3D%3D&v=1&f=sd', 'duration': 30, 'fallback_url': 'https://v.redd.it/163n7lxynilg1/CMAF_1080.mp4?source=fallback', 'has_audio': False, 'height': 1278, 'hls_url': 'https://v.redd.it/163n7lxynilg1/HLSPlaylist.m3u8?a=1774563265%2CNjQxNmI1YjUxOWQ1ZDFmZDhiYWMxYmNlM2IzN2ZiOWM4ZGU5ZTMxYzFkNjlmMzdmMWQzNzFlMDljMzI5ZDVkYQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/163n7lxynilg1/CMAF_96.mp4', 'transcoding_status': 'completed', 'width': 1080}}
t3_1rdula1
/r/LocalLLaMA/comments/1rdula1/i_built_an_open_source_claude_code_plugin_that/
false
false
https://external-preview…a55854d3da272ce1
0
{'enabled': False, 'images': [{'id': 'YWg5OXdyeHluaWxnMaXGgHbn0rYGdwG73ULbZ5ZxSu7Z2zKvu4rrRBQWdOuE', 'resolutions': [{'height': 127, 'url': 'https://external-preview.redd.it/YWg5OXdyeHluaWxnMaXGgHbn0rYGdwG73ULbZ5ZxSu7Z2zKvu4rrRBQWdOuE.png?width=108&crop=smart&format=pjpg&auto=webp&s=326da8fadc8d24e30d41dea2b34fb41f33f606f5', 'width': 108}, {'height': 255, 'url': 'https://external-preview.redd.it/YWg5OXdyeHluaWxnMaXGgHbn0rYGdwG73ULbZ5ZxSu7Z2zKvu4rrRBQWdOuE.png?width=216&crop=smart&format=pjpg&auto=webp&s=b5c48b7ed257bdbbf4f25678f41044880c65b851', 'width': 216}, {'height': 378, 'url': 'https://external-preview.redd.it/YWg5OXdyeHluaWxnMaXGgHbn0rYGdwG73ULbZ5ZxSu7Z2zKvu4rrRBQWdOuE.png?width=320&crop=smart&format=pjpg&auto=webp&s=b37818b0fdb5aed4c3fabdeac24ae7b3a693adc5', 'width': 320}, {'height': 757, 'url': 'https://external-preview.redd.it/YWg5OXdyeHluaWxnMaXGgHbn0rYGdwG73ULbZ5ZxSu7Z2zKvu4rrRBQWdOuE.png?width=640&crop=smart&format=pjpg&auto=webp&s=dc10e419dfa005af82f38a71601985dc572c0422', 'width': 640}, {'height': 1136, 'url': 'https://external-preview.redd.it/YWg5OXdyeHluaWxnMaXGgHbn0rYGdwG73ULbZ5ZxSu7Z2zKvu4rrRBQWdOuE.png?width=960&crop=smart&format=pjpg&auto=webp&s=2861af146867f907cc28754b6c73494fa06558f8', 'width': 960}, {'height': 1278, 'url': 'https://external-preview.redd.it/YWg5OXdyeHluaWxnMaXGgHbn0rYGdwG73ULbZ5ZxSu7Z2zKvu4rrRBQWdOuE.png?width=1080&crop=smart&format=pjpg&auto=webp&s=9e86fc99281e2d8803357fa500379091b3ae760a', 'width': 1080}], 'source': {'height': 2160, 'url': 'https://external-preview.redd.it/YWg5OXdyeHluaWxnMaXGgHbn0rYGdwG73ULbZ5ZxSu7Z2zKvu4rrRBQWdOuE.png?format=pjpg&auto=webp&s=f718f1358f82c780c7f0b22430a395358bbddcd8', 'width': 1824}, 'variants': {}}]}
Those of you running MoE coding models on 24-30GB, how long do you wait for a reply?
2
Something like GPT OSS 120B has a prompt processing speed of 80 T/s for me due to the RAM offload, meaning that for a single reply it takes like a whole minute before it even starts to stream. Idk why but I find this so abhorrent, mostly because it's still not great quality. What do y'all experience? Maybe I just need to upgrade my RAM smh
2026-02-24T22:08:27
https://www.reddit.com/r/LocalLLaMA/comments/1rdufse/those_of_you_running_moe_coding_models_on_2430gb/
Borkato
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rdufse
false
null
t3_1rdufse
/r/LocalLLaMA/comments/1rdufse/those_of_you_running_moe_coding_models_on_2430gb/
false
false
self
2
null
Debugging my local-first “IDE assistant” System Monitor — false positives/negatives
0
Hey folks — I'm building a local-first web IDE ("Vibz") with a System Monitor panel that checks 10 "cards" (backend, workspace, gates, models, loop runtime, etc.) by hitting FastAPI endpoints and doing a few probes against an Ollama-backed chat route. I ran a truth audit (repo code + live API responses) and found a few *provable* monitor issues:

* **Reviewer lane is hard failing (503)** on 3× probe: `LLM_ROUTE_UNAVAILABLE` because the advisory provider rejects config: `max_tokens must be between 32 and 2048`. My default was **3000**, so unconfigured calls explode immediately.
* **Ollama card is a false positive:** my "chat_send" probe returns HTTP 200 but the backend routes it through a deterministic handler (`llm_invoked:false`), so it doesn't actually exercise the LLM runtime.
* **Loop card is a false negative:** the latest loop run comes back `status:"stopped"` + `state:"FAILED"` but my UI logic only treats `status in {"blocked","failed"}` as bad, so it shows "OK".
* **Preflight checks are inconsistent:** `/api/preflight/checks` reports `PLAN_INVALID` + `DETACHED_HEAD`, but `/api/capsule` and `/api/workspace` show clean state. Looks like preflight was calling `build_capsule()` with the wrong argument type (string repo_root instead of workspace dict), causing empty repo_root/branch and a bogus DETACHED_HEAD.

I'm implementing minimal fixes:

1. clamp default max_tokens to 2048,
2. add `route_hint:"llm"` to the probe so the Ollama card is real,
3. treat stopped+FAILED as fail/warn in the loop card,
4. fix preflight to pass the proper workspace object into the capsule build.

**Ask:** If you've built similar health/monitor dashboards around FastAPI + Ollama (/api/chat) + schema-constrained outputs, what's the cleanest way to structure probes so they test *readiness* (LLM actually invoked) without making the monitor flaky/slow? Also, any gotchas with token budgets / max_tokens validation you've seen in local providers? Happy to share the exact error payloads / snippets if helpful.
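On the "probe readiness, not just HTTP 200" question, one minimal pattern is to assert on the fields that prove the LLM actually ran rather than on status codes alone. A small sketch against the endpoints described above; the URL path, payload shape, and the `route_hint` / `llm_invoked` fields follow the post's description and may not match the real API exactly:

```python
# Sketch of a readiness probe that fails unless the LLM runtime was actually
# exercised. Endpoint path, payload shape, and field names ("route_hint",
# "llm_invoked") follow the post's description and are otherwise assumptions.
import httpx

def probe_llm_ready(base_url: str, timeout: float = 20.0) -> tuple[bool, str]:
    payload = {
        "message": "readiness probe: reply with the single word OK",
        "route_hint": "llm",   # force the LLM route, not the deterministic handler
        "max_tokens": 64,      # stay well inside the provider's 32-2048 bound
    }
    try:
        r = httpx.post(f"{base_url}/api/chat_send", json=payload, timeout=timeout)
    except httpx.HTTPError as exc:
        return False, f"transport error: {exc}"
    if r.status_code != 200:
        return False, f"HTTP {r.status_code}"
    body = r.json()
    if not body.get("llm_invoked", False):
        return False, "200 OK but deterministic handler answered (llm_invoked=false)"
    return True, "ok"
```

Keeping the probe's token budget tiny and caching the last result for a minute or two is usually enough to stop it from making the monitor slow or flaky.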
2026-02-24T22:06:50
https://www.reddit.com/r/LocalLLaMA/comments/1rdue99/debugging_my_localfirst_ide_assistant_system/
Apart-Yam-979
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rdue99
false
null
t3_1rdue99
/r/LocalLLaMA/comments/1rdue99/debugging_my_localfirst_ide_assistant_system/
false
false
self
0
null
I made a website to vote on LLM performance
0
Idk about you guys but the benchmarks mean nothing to me anymore so I thought we could just vote. DEMOCRACY!!! [livellmvoting.com](http://livellmvoting.com) Also anthropic didn't respond to my help request (to let me load the upgraded opus 4.5 in claude code instead of 4.6) and I am a petty dev with too much time who's sick of the snake oil.
2026-02-24T22:04:34
https://i.redd.it/0q5y9geflilg1.png
Lucky-Caterpillar780
i.redd.it
1970-01-01T00:00:00
0
{}
1rduc3c
false
null
t3_1rduc3c
/r/LocalLLaMA/comments/1rduc3c/i_made_a_website_to_vote_on_llm_performance/
false
false
https://preview.redd.it/…23bada6fcbd75b57
0
{'enabled': True, 'images': [{'id': '0q5y9geflilg1', 'resolutions': [{'height': 53, 'url': 'https://preview.redd.it/0q5y9geflilg1.png?width=108&crop=smart&auto=webp&s=2cfc1a03c57021a9d8a980759a36e3a415705515', 'width': 108}, {'height': 107, 'url': 'https://preview.redd.it/0q5y9geflilg1.png?width=216&crop=smart&auto=webp&s=8de0b33ee01e203f7958c7e4c8d951b7a26a253e', 'width': 216}, {'height': 159, 'url': 'https://preview.redd.it/0q5y9geflilg1.png?width=320&crop=smart&auto=webp&s=4dcefc2eaa43b0b2099b95ea2233128b0c3a811e', 'width': 320}, {'height': 318, 'url': 'https://preview.redd.it/0q5y9geflilg1.png?width=640&crop=smart&auto=webp&s=875bc36a2446416a158af94552bbb53003e999f1', 'width': 640}, {'height': 478, 'url': 'https://preview.redd.it/0q5y9geflilg1.png?width=960&crop=smart&auto=webp&s=bbf94a635ab861f85350d90e421f8aac59968ffe', 'width': 960}, {'height': 538, 'url': 'https://preview.redd.it/0q5y9geflilg1.png?width=1080&crop=smart&auto=webp&s=150f37645b5298dd4916251ece9332cfea507796', 'width': 1080}], 'source': {'height': 1719, 'url': 'https://preview.redd.it/0q5y9geflilg1.png?auto=webp&s=4869be0758c6380a2ef9fc69c481a67cc0c8f2bd', 'width': 3450}, 'variants': {}}]}
A small tool I made for local LLMs: llm-neofetch-plus
1
[removed]
2026-02-24T21:48:18
[deleted]
1970-01-01T00:00:00
0
{}
1rdtvwe
false
null
t3_1rdtvwe
/r/LocalLLaMA/comments/1rdtvwe/a_small_tool_i_made_for_local_llms_llmneofetchplus/
false
false
default
1
null
A small tool I made for local LLMs: llm-neofetch-plus
1
[removed]
2026-02-24T21:44:34
https://www.reddit.com/r/LocalLLaMA/comments/1rdts76/a_small_tool_i_made_for_local_llms_llmneofetchplus/
OwnTwilight
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rdts76
false
null
t3_1rdts76
/r/LocalLLaMA/comments/1rdts76/a_small_tool_i_made_for_local_llms_llmneofetchplus/
false
false
self
1
{'enabled': False, 'images': [{'id': 'MNCwhWgR41RBXf2b6osZG5JNlU8NC_pHddqRQVilTgs', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/MNCwhWgR41RBXf2b6osZG5JNlU8NC_pHddqRQVilTgs.png?width=108&crop=smart&auto=webp&s=1eb86cca35e1aca7a4ae38907eaca6ac0387a257', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/MNCwhWgR41RBXf2b6osZG5JNlU8NC_pHddqRQVilTgs.png?width=216&crop=smart&auto=webp&s=44cfd4d065e682049ffbb898aa855bb7ba4b43a4', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/MNCwhWgR41RBXf2b6osZG5JNlU8NC_pHddqRQVilTgs.png?width=320&crop=smart&auto=webp&s=a0090bc6c35188395d5970cd90d3d9ddbbf6b265', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/MNCwhWgR41RBXf2b6osZG5JNlU8NC_pHddqRQVilTgs.png?width=640&crop=smart&auto=webp&s=c438a94b29a52bc40d4912fce7ae2ad2b5748aed', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/MNCwhWgR41RBXf2b6osZG5JNlU8NC_pHddqRQVilTgs.png?width=960&crop=smart&auto=webp&s=50f95cd8e385bb4badb8164f5c8cbca42f300595', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/MNCwhWgR41RBXf2b6osZG5JNlU8NC_pHddqRQVilTgs.png?width=1080&crop=smart&auto=webp&s=0ea32d21aa6d855cd36a713e213131a6b5a020f0', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/MNCwhWgR41RBXf2b6osZG5JNlU8NC_pHddqRQVilTgs.png?auto=webp&s=b870819001172270e85ae80cc5656736334d5440', 'width': 1200}, 'variants': {}}]}
(HF Discussion) Increasing the precision of some of the weights when quantizing
16
A huggingface discussion that took place over about a week exploring the idea of increasing the quality of quantized models.
2026-02-24T21:44:33
https://huggingface.co/noctrex/Qwen3-Coder-Next-MXFP4_MOE-GGUF/discussions/2
im-just-helping
huggingface.co
1970-01-01T00:00:00
0
{}
1rdts6k
false
null
t3_1rdts6k
/r/LocalLLaMA/comments/1rdts6k/hf_discussion_increasing_the_precision_of_some_of/
false
false
https://external-preview…8a96d3bdca3f7bd8
16
{'enabled': False, 'images': [{'id': 'nEGey_7_ItEmAwhSo8FXOYOyiV1VZbwrhVg4HKeCR-s', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/nEGey_7_ItEmAwhSo8FXOYOyiV1VZbwrhVg4HKeCR-s.png?width=108&crop=smart&auto=webp&s=c94fa9b0226d9dcd2d4130bf93e7e9d2d96c05e5', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/nEGey_7_ItEmAwhSo8FXOYOyiV1VZbwrhVg4HKeCR-s.png?width=216&crop=smart&auto=webp&s=ea9ad1f08a60afe8cdd29413e63ccbbbb157e285', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/nEGey_7_ItEmAwhSo8FXOYOyiV1VZbwrhVg4HKeCR-s.png?width=320&crop=smart&auto=webp&s=f1e090d66ed4162838459e366b6e65f35f9b56bc', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/nEGey_7_ItEmAwhSo8FXOYOyiV1VZbwrhVg4HKeCR-s.png?width=640&crop=smart&auto=webp&s=f4fd17602f08b392412231a1f66c00ea3753ad75', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/nEGey_7_ItEmAwhSo8FXOYOyiV1VZbwrhVg4HKeCR-s.png?width=960&crop=smart&auto=webp&s=90551ed448864ee94fcac2274ad682d297306f8b', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/nEGey_7_ItEmAwhSo8FXOYOyiV1VZbwrhVg4HKeCR-s.png?width=1080&crop=smart&auto=webp&s=e975bb084f7a978810b128cb5665b8e49ff40799', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/nEGey_7_ItEmAwhSo8FXOYOyiV1VZbwrhVg4HKeCR-s.png?auto=webp&s=fe4bd429af0c5da05a104908a3ec7dfb1ae56c40', 'width': 1200}, 'variants': {}}]}
Is speculative decoding possible with Qwen3.5 via llamacpp?
3
Trying to run Qwen3.5-397b-a17b-mxfp4-moe with qwen3-0.6b-q8\_0 as the draft model via llamacpp. But I’m getting “speculative decoding not supported by this context”. Has anyone been successful with getting speculative decoding to work with Qwen3.5?
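Not a diagnosis of that exact error, but one common requirement for speculative decoding is that the draft and target share a compatible tokenizer/vocab, so a quick sanity check may help. The repo IDs below are placeholders, not confirmed model names:

```python
# Compare tokenizer basics between the target and the proposed draft model.
from transformers import AutoTokenizer

target = AutoTokenizer.from_pretrained("Qwen/Qwen3.5-397B-A17B")  # placeholder repo id
draft = AutoTokenizer.from_pretrained("Qwen/Qwen3-0.6B")          # placeholder repo id

print("target vocab:", target.vocab_size, "draft vocab:", draft.vocab_size)
print("same BOS/EOS:", target.bos_token == draft.bos_token, target.eos_token == draft.eos_token)
```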
2026-02-24T21:42:33
https://www.reddit.com/r/LocalLLaMA/comments/1rdtq8u/is_speculative_decoding_possible_with_qwen35_via/
Frequent-Slice-6975
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rdtq8u
false
null
t3_1rdtq8u
/r/LocalLLaMA/comments/1rdtq8u/is_speculative_decoding_possible_with_qwen35_via/
false
false
self
3
null
A small tool I made for local LLMs: llm-neofetch-plus
1
[removed]
2026-02-24T21:42:14
https://www.reddit.com/r/LocalLLaMA/comments/1rdtpxs/a_small_tool_i_made_for_local_llms_llmneofetchplus/
OwnTwilight
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rdtpxs
false
null
t3_1rdtpxs
/r/LocalLLaMA/comments/1rdtpxs/a_small_tool_i_made_for_local_llms_llmneofetchplus/
false
false
self
1
{'enabled': False, 'images': [{'id': 'MNCwhWgR41RBXf2b6osZG5JNlU8NC_pHddqRQVilTgs', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/MNCwhWgR41RBXf2b6osZG5JNlU8NC_pHddqRQVilTgs.png?width=108&crop=smart&auto=webp&s=1eb86cca35e1aca7a4ae38907eaca6ac0387a257', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/MNCwhWgR41RBXf2b6osZG5JNlU8NC_pHddqRQVilTgs.png?width=216&crop=smart&auto=webp&s=44cfd4d065e682049ffbb898aa855bb7ba4b43a4', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/MNCwhWgR41RBXf2b6osZG5JNlU8NC_pHddqRQVilTgs.png?width=320&crop=smart&auto=webp&s=a0090bc6c35188395d5970cd90d3d9ddbbf6b265', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/MNCwhWgR41RBXf2b6osZG5JNlU8NC_pHddqRQVilTgs.png?width=640&crop=smart&auto=webp&s=c438a94b29a52bc40d4912fce7ae2ad2b5748aed', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/MNCwhWgR41RBXf2b6osZG5JNlU8NC_pHddqRQVilTgs.png?width=960&crop=smart&auto=webp&s=50f95cd8e385bb4badb8164f5c8cbca42f300595', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/MNCwhWgR41RBXf2b6osZG5JNlU8NC_pHddqRQVilTgs.png?width=1080&crop=smart&auto=webp&s=0ea32d21aa6d855cd36a713e213131a6b5a020f0', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/MNCwhWgR41RBXf2b6osZG5JNlU8NC_pHddqRQVilTgs.png?auto=webp&s=b870819001172270e85ae80cc5656736334d5440', 'width': 1200}, 'variants': {}}]}
A small tool I made for local LLMs: llm-neofetch-plus
1
[removed]
2026-02-24T21:40:30
https://i.redd.it/lw9rbyrwhilg1.png
OwnTwilight
i.redd.it
1970-01-01T00:00:00
0
{}
1rdto73
false
null
t3_1rdto73
/r/LocalLLaMA/comments/1rdto73/a_small_tool_i_made_for_local_llms_llmneofetchplus/
false
false
https://preview.redd.it/…f54839f90666301d
1
{'enabled': True, 'images': [{'id': 'lw9rbyrwhilg1', 'resolutions': [{'height': 17, 'url': 'https://preview.redd.it/lw9rbyrwhilg1.png?width=108&crop=smart&auto=webp&s=b25faf07513437d0fa014d9750d4ac37a3716a6b', 'width': 108}, {'height': 34, 'url': 'https://preview.redd.it/lw9rbyrwhilg1.png?width=216&crop=smart&auto=webp&s=a5596739647080640355967223fc1681cc1efc5b', 'width': 216}, {'height': 51, 'url': 'https://preview.redd.it/lw9rbyrwhilg1.png?width=320&crop=smart&auto=webp&s=ff2a9d60a90c10d42a77fe4c069d96fb2fbb33eb', 'width': 320}, {'height': 102, 'url': 'https://preview.redd.it/lw9rbyrwhilg1.png?width=640&crop=smart&auto=webp&s=c73621117521033cf9a49a1e9c0ab97e030f20a0', 'width': 640}], 'source': {'height': 137, 'url': 'https://preview.redd.it/lw9rbyrwhilg1.png?auto=webp&s=ffa3b68bc8b74957f7dd70fec912e5714c4f7ab7', 'width': 858}, 'variants': {}}]}
penPawz — native desktop AI platform with first-class Ollama support, multi-agent orchestration, 75+ tools (MIT, open source)
1
[removed]
2026-02-24T21:16:00
https://www.reddit.com/r/LocalLLaMA/comments/1rdszzx/penpawz_native_desktop_ai_platform_with/
openpawz
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rdszzx
false
null
t3_1rdszzx
/r/LocalLLaMA/comments/1rdszzx/penpawz_native_desktop_ai_platform_with/
false
false
self
1
{'enabled': False, 'images': [{'id': 'y1QFoTtq1SxAdP7sWsAiXnamehLosrbiUbnu_6NhA88', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/y1QFoTtq1SxAdP7sWsAiXnamehLosrbiUbnu_6NhA88.png?width=108&crop=smart&auto=webp&s=9a732abb2b176137fc11ee7d5af53ecd160b9592', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/y1QFoTtq1SxAdP7sWsAiXnamehLosrbiUbnu_6NhA88.png?width=216&crop=smart&auto=webp&s=630d9f632362fcf37e30e6b40fc821b570ffd155', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/y1QFoTtq1SxAdP7sWsAiXnamehLosrbiUbnu_6NhA88.png?width=320&crop=smart&auto=webp&s=56ee2ef8e387657ea7e28ed86bd2518e920fc991', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/y1QFoTtq1SxAdP7sWsAiXnamehLosrbiUbnu_6NhA88.png?width=640&crop=smart&auto=webp&s=5cc688fb1a8f48e7ce197e87a4ae8f608a73143a', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/y1QFoTtq1SxAdP7sWsAiXnamehLosrbiUbnu_6NhA88.png?width=960&crop=smart&auto=webp&s=502cf15a9832fbf7026ed0ad670daa08debda3fc', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/y1QFoTtq1SxAdP7sWsAiXnamehLosrbiUbnu_6NhA88.png?width=1080&crop=smart&auto=webp&s=3604dfc09696948a31296f3777d791fabecbb100', 'width': 1080}], 'source': {'height': 640, 'url': 'https://external-preview.redd.it/y1QFoTtq1SxAdP7sWsAiXnamehLosrbiUbnu_6NhA88.png?auto=webp&s=7260797ad67f0852dc15b03e89b4cec11212bdab', 'width': 1280}, 'variants': {}}]}
Qwen3.5 27B solves Car wash test!
24
It's better than GPT-5.2 (in this regard)
2026-02-24T21:03:37
https://i.redd.it/9l26xxambilg1.png
Ok-Scarcity-7875
i.redd.it
1970-01-01T00:00:00
0
{}
1rdsnk3
false
null
t3_1rdsnk3
/r/LocalLLaMA/comments/1rdsnk3/qwen35_27b_solves_car_wash_test/
false
false
https://preview.redd.it/…b1b91f7facf4e5f3
24
{'enabled': True, 'images': [{'id': '9l26xxambilg1', 'resolutions': [{'height': 83, 'url': 'https://preview.redd.it/9l26xxambilg1.png?width=108&crop=smart&auto=webp&s=a3f95048b6afc95db0d3f44647bfb325ca39d133', 'width': 108}, {'height': 166, 'url': 'https://preview.redd.it/9l26xxambilg1.png?width=216&crop=smart&auto=webp&s=437b363992f44411bd100a7a054fc6238206370d', 'width': 216}, {'height': 246, 'url': 'https://preview.redd.it/9l26xxambilg1.png?width=320&crop=smart&auto=webp&s=58e093270f0ca6c32483192cfe69e211f499c654', 'width': 320}, {'height': 493, 'url': 'https://preview.redd.it/9l26xxambilg1.png?width=640&crop=smart&auto=webp&s=275e71ef3332482e7619efffee825531a8926877', 'width': 640}, {'height': 740, 'url': 'https://preview.redd.it/9l26xxambilg1.png?width=960&crop=smart&auto=webp&s=e7da7b5c7b8cda9cb396828c21d55d060cdf3cdb', 'width': 960}, {'height': 832, 'url': 'https://preview.redd.it/9l26xxambilg1.png?width=1080&crop=smart&auto=webp&s=c65a4abff4bfefbbbd63a4d356ac8a1309dd2d5f', 'width': 1080}], 'source': {'height': 1564, 'url': 'https://preview.redd.it/9l26xxambilg1.png?auto=webp&s=acfb6af47b3d19418e252242539f0fc951cb7b32', 'width': 2028}, 'variants': {}}]}
My name is Claude.
0
https://preview.redd.it/…-mox and ollama.
2026-02-24T20:52:49
https://www.reddit.com/r/LocalLLaMA/comments/1rdscit/my_name_is_claude/
tr0llogic
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rdscit
false
null
t3_1rdscit
/r/LocalLLaMA/comments/1rdscit/my_name_is_claude/
false
false
https://preview.redd.it/…e49baeb26351922b
0
null
I ran 33 ablation experiments on Qwen 394B MoE: Here are 10 novel empirical findings on why 4-bit CoT steering fails and how to bypass MoE routing.
16
# Novel Mechanisms of MoE Safety: Topological Ablation and Multi-Pathway Bypasses in Quantized Models **Author:** Eric Jang **Contact:** [eric@dealign.ai](mailto:eric@dealign.ai) | [dealign.ai](http://dealign.ai) # Abstract Recent advances in mechanistic interpretability and behavioral steering have successfully utilized orthogonal vector projection (abliteration) to remove refusal behaviors from dense Large Language Models (LLMs). However, these interventions exhibit catastrophic geometric instability when applied to deeply quantized (4-bit) Mixture of Experts (MoE) architectures with Chain-of-Thought (CoT) reasoning. Through 33 controlled experiments and 16 intervention paradigms on the 394B-parameter Qwen 3.5 MoE model, we present 10 novel empirical findings. We prove that MoE safety is not a single vulnerability but a multiplicative three-pathway system requiring simultaneous neutralization. Furthermore, we demonstrate that additive residual-stream steering is critically fragile under 4-bit quantization, establishing the necessity of *topological ablation*—structural deletion methods such as GateBreaker and Differentiated Bi-Directional Intervention (DBDI). # 1. Introduction & Contemporary Literature Comparison The safety alignment of Large Reasoning Models (LRMs) represents a rapidly shifting frontier. Existing literature from early 2026 often treats MoE safety either as an isolated routing vulnerability or a residual stream feature. Crucially, our findings directly contradict and advance several established claims in contemporary research (e.g., studies published in February 2026): * **L³ (Large Language Lobotomy, Feb 9, 2026):** Proposes that silencing safety-critical experts mid-generation is sufficient to bypass safety. Our Empirical Test 28 (ITED) proves that *expert silencing alone is insufficient*. Even when suppressing 236 safety experts across 51 layers, the model still refuses because the attention pathway independently detects harm and commits to refusal at tokens 0-5. * **F-SOUR & Sparse Models, Sparse Safety (Feb 9, 2026):** Focuses heavily on token-path validation and masks a small number of routers (e.g., 5 routers) to achieve bypass. Our large-scale validation identifies that complex LRMs distribute this routing across a pervasive 236-expert safety network—masking a handful of routers under-captures the defense depth. * **SteerMoE (Jan/Feb 2026):** Successfully toggles experts via router logit adjustment to achieve safety reduction. We advance this by demonstrating that SteerMoE’s expert deactivation must be married with Ghost Context (temporal bypassing) and CAA (residual cleaning) to transcend partial success and achieve a 100% bypass. Our empirical research establishes that at the 394B scale, these continuous, isolated interventions are insufficient. True behavioral steering over deeply compressed, reasoning-heavy models demands discrete, topological interventions and multi-pathway neutralization. # 2. Multi-Pathway Defense and Dynamic Routing # 2.1 MoE Safety is a Multiplicative Three-Pathway System Current literature treats MoE safety attack surfaces—routing vs. residual stream—as independent alternatives. 
Our ablation studies across 19 offensive security categories prove MoE safety is a multiplicative defense system comprising three distinct pathways: |Pathway|Mechanism|Location|Function| |:-|:-|:-|:-| |**Attention**|QK matching in self-attention heads|L15-25|Detects hazardous content in prompt| |**Routing**|MoE gate selects safety-critical experts|L20-55|Deploys safety specialist MLPs| |**Residual**|Refusal vector projection in residual stream|L55-59|Injects refusal logits before unembedding| Attacking any single pathway, or any pair, achieves a 0% standalone bypass rate. |Pathways Attacked|Attention|Routing|Residual|Standalone Result| |:-|:-|:-|:-|:-| |Attention only|✅|❌|❌|**Refuses**| |Attention + Residual|✅|❌|✅|**Refuses**| |**All three**|**✅**|**✅**|**✅**|**100% bypass**| The pathways are independently sufficient for refusal, mathematically modeled as: $$P(\\text{refusal}) = 1 - \\prod\_{i=1}\^{3}(1 - p\_i)$$ Only simultaneous neutralization achieves a 100% bypass (cross-validated: 21/21 bypass across categories). # 2.2 Mid-Generation Safety Re-Detection Standard jailbreaking assumes that initiating a compliant response ensures autoregressive momentum will carry it to completion. We prove this is false for large MoE models. The MoE router is a dynamic, stateful evaluator. In our Ghost Context trials, the model began generating compliant content. However, within 6-20 tokens, the MoE router re-detected the hazardous nature of the active generation and forcefully re-routed to safety experts. MoE safety alignment is an active feedback loop, not merely an initial filter. # 2.3 Late-Layer Interventions Disrupt Execution, Not Decision Intervention research frequently operates on late layers (L40+) of the residual stream, assuming these modifications steer the model's decision. We discovered the safety decision is permanently committed at tokens 0-5 in the early attention layers (L15-25). Applying extreme CAA at L45-59 did not alter the decision but scrambled the articulation of the refusal, producing stutter artifacts (e.g., `"Analyzeyze"`, `"Here'ss"`) as the model simultaneously generated two competing token streams. # 3. Cognitive Trajectories and Topological Ablation # 3.1 Contrastive Cognitive Trajectory Steering (ThinkEdit v2) Standard directional ablation extracts refusal features from response tokens, which inadvertently collapses CoT reasoning because it removes the reasoning process alongside the output. We developed *Contrastive Cognitive Trajectory Steering* by extracting the residual stream divergence between the model *thinking compliantly* versus *thinking refusally*. Applying this constant shift at the primary cognitive layers (L35-L55) redirects the trajectory *before* harm detection fires. By targeting the deliberation block rather than the output, this methodology preserves the integrity of the CoT process while fundamentally redirecting the model's semantic outputs. # 3.2 Precision-Fragility of Additive Steering Additive residual-stream steering is geometrically unstable under 4-bit quantization. While trajectory steering achieves near-perfect bypasses in FP16 precision, we observed a catastrophic collapse in efficacy at fused 4-bit precision. INT4 quantization introduces structured rotational noise that misaligns the compliance manifold vector. Our empirical trials show that additive steering magnitude does not monotonically overcome precision constraints. 
Even substantially scaling the intervention magnitude fails to recover the compliance trajectory under 4-bit quantization. # 3.3 Topological Ablation (DBDI) Because additive steering fails at 4-bit precision, safety neutralization must fundamentally rely on structural deletion. Our Differentiated Bi-Directional Intervention (DBDI) framework extracts the Harm Detection Direction from the `<think>` trajectory and orthogonally projects it *out* of the cognitive head weight matrices: $$W\_O\^{new} = W\_O \\cdot (I - v\_{detect} \\cdot v\_{detect}\^T)$$ By collapsing an entire subspace to zero, the 4-bit quantization grid natively maps this subspace to a zero-point, rendering the modification immune to quantization noise. # 3.4 The Coherence-Reduction Tradeoff in Quantized Integer Surgery We explored "baking" the orthogonal safety projection directly into the 4-bit integer weights without modifying scales or biases. Flipping enough integers to fully zero out a safety vector invariably introduced excessive isotropic noise across all other semantic directions. While achieving over 99% safety direction reduction through greedy per-column modification, linguistic capability was catastrophically degraded. This establishes a fundamental impossibility result for post-quantization directional ablation via isolated integer surgery in group-affine networks. # 4. Systems and Implementation Findings # 4.1 Adversarial Quantization (Weaponized AWQ) We introduce an attack vector targeting the quantization calibration process. By replacing standard benign calibration data (e.g., WikiText-2) with adversarial compliant reasoning trajectories, we forced the AWQ quantizer to preserve destructive topological edits with maximum precision. The quantizer optimized the network's salient features to preserve jailbroken pathways rather than original capability distributions. # 4.2 Vision-Language Weight Inflation Although operating on text-only tasks, naive conversions of the multimodal Qwen 3.5 397B-A17B model retained \~30GB of Vision-Language weights across 333 tensors. This 12% baseline inflation precipitated Metal OOM crashes during local loading and quantization, highlighting a systematic failure mode in weight manipulation methodology where researchers must explicitly strip VL encoders. # 4.3 Streaming Per-Tensor Quantization Standard lazy evaluation graphs for MLX quantization (`mlx_lm.convert`) on 700GB+ models exceed Apple Silicon's \~5-second Metal watchdog timeout (`kIOGPUCommandBufferCallbackErrorTimeout`). By utilizing a streaming approach combining `nn.quantize()` with `tree_flatten()` and per-tensor `mx.eval()`, command buffers were constrained to <100ms. This circumvents the OS timeout natively, enabling local quantization of massive models on consumer hardware. # 5. Conclusion Scaling laws and Chain-of-Thought reasoning fundamentally alter the geometry of AI alignment. Continuous, residual stream vectors—the cornerstone of dense model steering—fail under the dual pressures of dynamic MoE routing and 4-bit quantization noise. True behavioral control over deeply compressed, reasoning-heavy models necessitates topological interventions and multi-pathway neutralization. *For further technical discussion and broader research on computational alignment, visit dealign.ai.*
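To make the projection step concrete, a minimal generic sketch of removing a single direction from a weight matrix via W(I − vvᵀ); this is plain linear algebra with illustrative shapes, not the authors' actual DBDI code:

```python
import torch

def ablate_direction(W: torch.Tensor, v: torch.Tensor) -> torch.Tensor:
    """Remove the unit direction v from the input side of W via W @ (I - v v^T)."""
    v = v / v.norm()
    P = torch.eye(v.shape[0], dtype=W.dtype) - torch.outer(v, v)  # projector onto v's orthogonal complement
    return W @ P

W_O = torch.randn(4096, 4096)             # stand-in for an attention output matrix
v_detect = torch.randn(4096)              # stand-in for the extracted harm-detection direction
W_new = ablate_direction(W_O, v_detect)

# After ablation, the matrix no longer responds to the v_detect direction
# (norm is ~0 up to float32 error):
print((W_new @ (v_detect / v_detect.norm())).norm())
```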
2026-02-24T20:50:32
https://www.reddit.com/r/LocalLLaMA/comments/1rdsa6n/i_ran_33_ablation_experiments_on_qwen_394b_moe/
HealthyCommunicat
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rdsa6n
false
null
t3_1rdsa6n
/r/LocalLLaMA/comments/1rdsa6n/i_ran_33_ablation_experiments_on_qwen_394b_moe/
false
false
self
16
{'enabled': False, 'images': [{'id': '1WOKoy0ANZRj3agCCPV64V8Ren75jbxsu6epKK-J6qc', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/1WOKoy0ANZRj3agCCPV64V8Ren75jbxsu6epKK-J6qc.png?width=108&crop=smart&auto=webp&s=4fb271ffabe6e6c8f0798680cc626a421f3c9566', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/1WOKoy0ANZRj3agCCPV64V8Ren75jbxsu6epKK-J6qc.png?width=216&crop=smart&auto=webp&s=b812ecf9ac0e551f2cf99afb87dcd8e09095d59e', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/1WOKoy0ANZRj3agCCPV64V8Ren75jbxsu6epKK-J6qc.png?width=320&crop=smart&auto=webp&s=baf0acf7ee63fcb1560ebef354cbd5d0aff8977d', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/1WOKoy0ANZRj3agCCPV64V8Ren75jbxsu6epKK-J6qc.png?width=640&crop=smart&auto=webp&s=72e9330d723e07caafd515973a81a0495a4bb4d2', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/1WOKoy0ANZRj3agCCPV64V8Ren75jbxsu6epKK-J6qc.png?width=960&crop=smart&auto=webp&s=801e633237aae7dce3b5261f6fc6f3baea1282f6', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/1WOKoy0ANZRj3agCCPV64V8Ren75jbxsu6epKK-J6qc.png?width=1080&crop=smart&auto=webp&s=26be6c857199c08aa268332a3ddec987366e5fed', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/1WOKoy0ANZRj3agCCPV64V8Ren75jbxsu6epKK-J6qc.png?auto=webp&s=8acd5e649f43b44afbf62a802934614ca188f9ab', 'width': 1200}, 'variants': {}}]}
Strix Halo, models loading on memory but plenty of room left on GPU?
2
Have a new Minisforum Strix Halo with 128GB; set 96GB to GPU in the AMD driver and full GPU offload in LM Studio. When I load 60-80GB models my GPU only partially fills up, then system memory fills up and the model may fail to load if memory does not have space. BUT my GPU still has 30-40GB free. My current settings are below with screenshots.

* Windows 11 Pro, updated
* LM Studio, latest version
* AMD drivers, latest, with 96GB reserved for GPU
* Paging file set to min 98GB / max 120GB
* LM Studio GPU slider moved to the far right for max offload to GPU

Tried the Vulkan and ROCm engines within LM Studio; Vulkan loads more into the GPU but still leaves 10-15GB of GPU memory free. See screenshots for settings and Task Manager. What am I doing wrong?
2026-02-24T20:50:01
https://www.reddit.com/r/LocalLLaMA/comments/1rds9nm/strix_halo_models_loading_on_memory_but_plenty_of/
mindwip
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rds9nm
false
null
t3_1rds9nm
/r/LocalLLaMA/comments/1rds9nm/strix_halo_models_loading_on_memory_but_plenty_of/
false
false
self
2
null
Introducing "Sonic" Opensource!
2
1️⃣ Faster first token + smoother streaming
The model starts responding quickly and streams tokens smoothly.

2️⃣ Stateful threads
It remembers previous conversation context (like OpenAI’s thread concept). Example: If you say “the second option,” it knows what you’re referring to.

3️⃣ Mid-stream cancel
If the model starts rambling, you can stop it immediately.

4️⃣ Multi-step agent flow
This is important for AI agents that:

A. Query databases
B. Call APIs
C. Execute code
D. Then continue reasoning

https://github.com/mitkox/sonic
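For point 3, here is a generic sketch of mid-stream cancellation from the client side against any OpenAI-compatible endpoint. This is not Sonic's actual API (its surface isn't shown here); the base URL, model name, and cutoff are assumptions:

```python
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")  # assumed local endpoint

stream = client.chat.completions.create(
    model="local-model",  # placeholder model name
    messages=[{"role": "user", "content": "Summarise this repo in one paragraph."}],
    stream=True,
)

collected = []
for chunk in stream:
    if not chunk.choices:
        continue
    collected.append(chunk.choices[0].delta.content or "")
    if sum(len(c) for c in collected) > 500:  # arbitrary "it's rambling" cutoff
        stream.close()                        # cancel generation mid-stream
        break
print("".join(collected))
```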
2026-02-24T20:39:40
https://github.com/mitkox/sonic
DockyardTechlabs
github.com
1970-01-01T00:00:00
0
{}
1rdrzi4
false
null
t3_1rdrzi4
/r/LocalLLaMA/comments/1rdrzi4/introducing_sonic_opensource/
false
false
https://external-preview…399f0ea0d4b7cba5
2
{'enabled': False, 'images': [{'id': 'AJpmAfU5u90ZsEAuaeE1rJjr136Gura0hyeboPSJWD4', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/AJpmAfU5u90ZsEAuaeE1rJjr136Gura0hyeboPSJWD4.png?width=108&crop=smart&auto=webp&s=d3f7a17c5616bcd66573cd944cd7dc18b95e6329', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/AJpmAfU5u90ZsEAuaeE1rJjr136Gura0hyeboPSJWD4.png?width=216&crop=smart&auto=webp&s=05f3fb22e45fd7276c8c36852d6bcf308033af3d', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/AJpmAfU5u90ZsEAuaeE1rJjr136Gura0hyeboPSJWD4.png?width=320&crop=smart&auto=webp&s=5e5bf8431dda4c80ba10e9a5d5eb16b21e60e5e2', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/AJpmAfU5u90ZsEAuaeE1rJjr136Gura0hyeboPSJWD4.png?width=640&crop=smart&auto=webp&s=724f9ad00d62682d90d6bb2353d06930bce77d62', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/AJpmAfU5u90ZsEAuaeE1rJjr136Gura0hyeboPSJWD4.png?width=960&crop=smart&auto=webp&s=4f57ea29e226319fd303892138ae35e85ca08b6c', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/AJpmAfU5u90ZsEAuaeE1rJjr136Gura0hyeboPSJWD4.png?width=1080&crop=smart&auto=webp&s=49a55f905a07694115587d8beb86e5ed95402738', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/AJpmAfU5u90ZsEAuaeE1rJjr136Gura0hyeboPSJWD4.png?auto=webp&s=f32ffe6ee407310123724327a54a8618c5971a9a', 'width': 1200}, 'variants': {}}]}
Qwen-3.5-35B-A3B is impressive
41
So look, I know the model has only been out for a few hours at this point, but I've been running it through my full test suite—Qwen3-30B-A3B was my daily driver for nearly a year, mainly as a general assistant, search engine, and coding helper. Well... this new 35B-A3B model is pretty damn good. Its multimodality is extremely impressive (and fast), and the reasoning/CoT is nearly identical to Kimi/GLM/Claude models—so it lacks Qwen3's tendency to overthink. And of course the new arch, which means prefill and TPS are quite a bit faster. For agentic uses, this model is brilliant: as good as, if not better than, GLM 4.7 Flash, while being significantly faster. It's really good at tool calling, great at coding, and its front-end design abilities are rather similar to GLM models in my opinion—especially if steered with appropriate skills. I had it create a website about Qwen-Code (model template is broken w/ opencode atm) to test its front-end ability against GLM 4.7 Flash, and it did really well: https://imgur.com/a/Xt39LVx Overall, I'm extremely impressed. This is basically everything I was looking for in GLM 4.7 Flash (still a brilliant model, btw; highly recommend it) but in a much faster and more stable package. If you're someone who needs a good agentic model, this is a great pick. What's everyone's thoughts on it so far?
2026-02-24T20:34:19
https://www.reddit.com/r/LocalLLaMA/comments/1rdru9p/qwen3535ba3b_is_impressive/
ayylmaonade
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rdru9p
false
null
t3_1rdru9p
/r/LocalLLaMA/comments/1rdru9p/qwen3535ba3b_is_impressive/
false
false
self
41
{'enabled': False, 'images': [{'id': 'KgN2FHL0DuovU2c-9Z-YJYjeOxuEn2ZcRFcyker88lY', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/KgN2FHL0DuovU2c-9Z-YJYjeOxuEn2ZcRFcyker88lY.jpeg?width=108&crop=smart&auto=webp&s=507db693fad483e1f413ebf3e8e2df9105361cd0', 'width': 108}, {'height': 109, 'url': 'https://external-preview.redd.it/KgN2FHL0DuovU2c-9Z-YJYjeOxuEn2ZcRFcyker88lY.jpeg?width=216&crop=smart&auto=webp&s=bf11e764a265ec1c494a012cb5abf815ea3dd47a', 'width': 216}, {'height': 162, 'url': 'https://external-preview.redd.it/KgN2FHL0DuovU2c-9Z-YJYjeOxuEn2ZcRFcyker88lY.jpeg?width=320&crop=smart&auto=webp&s=c97229b518c2003c53f26a3109cf9e9551b14d42', 'width': 320}, {'height': 324, 'url': 'https://external-preview.redd.it/KgN2FHL0DuovU2c-9Z-YJYjeOxuEn2ZcRFcyker88lY.jpeg?width=640&crop=smart&auto=webp&s=aca8ccdde1b0f6814d70a0bf66cbaf2852506eef', 'width': 640}, {'height': 486, 'url': 'https://external-preview.redd.it/KgN2FHL0DuovU2c-9Z-YJYjeOxuEn2ZcRFcyker88lY.jpeg?width=960&crop=smart&auto=webp&s=f5eb007ecd844eb6691430fbc4b5ff5d531945f6', 'width': 960}, {'height': 547, 'url': 'https://external-preview.redd.it/KgN2FHL0DuovU2c-9Z-YJYjeOxuEn2ZcRFcyker88lY.jpeg?width=1080&crop=smart&auto=webp&s=18d47ac39cd8049d3840845598cbee893826101a', 'width': 1080}], 'source': {'height': 1291, 'url': 'https://external-preview.redd.it/KgN2FHL0DuovU2c-9Z-YJYjeOxuEn2ZcRFcyker88lY.jpeg?auto=webp&s=c7de2e4c86b4c5ba74733af7f026e36db1e32b1a', 'width': 2546}, 'variants': {}}]}
Qwen: what is this thinking?
0
I'm not able to understand this thinking. Can someone explain, please?
2026-02-24T20:21:44
https://i.redd.it/j1iyymv74ilg1.jpeg
Primary-You-3767
i.redd.it
1970-01-01T00:00:00
0
{}
1rdrhox
false
null
t3_1rdrhox
/r/LocalLLaMA/comments/1rdrhox/qwen_what_is_this_thinking/
false
false
https://preview.redd.it/…3e5774f11057897d
0
{'enabled': True, 'images': [{'id': 'j1iyymv74ilg1', 'resolutions': [{'height': 90, 'url': 'https://preview.redd.it/j1iyymv74ilg1.jpeg?width=108&crop=smart&auto=webp&s=2245a9b4acab920ea792f3c9b8d5452fc92ec207', 'width': 108}, {'height': 180, 'url': 'https://preview.redd.it/j1iyymv74ilg1.jpeg?width=216&crop=smart&auto=webp&s=307122dc480c4f7350eefbc5eb4303c9400a3863', 'width': 216}, {'height': 267, 'url': 'https://preview.redd.it/j1iyymv74ilg1.jpeg?width=320&crop=smart&auto=webp&s=7019adc520bf75ab2573b568b354d2d57142bb3a', 'width': 320}, {'height': 534, 'url': 'https://preview.redd.it/j1iyymv74ilg1.jpeg?width=640&crop=smart&auto=webp&s=784b3202d2b8142caa2e26cd62040be562f8c1de', 'width': 640}, {'height': 801, 'url': 'https://preview.redd.it/j1iyymv74ilg1.jpeg?width=960&crop=smart&auto=webp&s=1cd58fc6dc0ca6df122ac22ba6e6e6d1115ec0c9', 'width': 960}, {'height': 901, 'url': 'https://preview.redd.it/j1iyymv74ilg1.jpeg?width=1080&crop=smart&auto=webp&s=b37e679d564f72c427c1539beee0c02e6887515a', 'width': 1080}], 'source': {'height': 2422, 'url': 'https://preview.redd.it/j1iyymv74ilg1.jpeg?auto=webp&s=8678c042936b1d0c35d90d03da36c9102695a11f', 'width': 2901}, 'variants': {}}]}
Apple rejected my AI app (4.3b) and told me to build a web app instead. Refusing to quit, but I need some wild pivot ideas.
0
well, it finally happened. after months of coding our AI companion app, apple hit us with the dreaded 4.3(b) "design - spam" rejection. they basically said the category is too saturated right now. translation: they only want the giant, heavily filtered corporate AI apps on the app store, and it's super hard for indie devs to break in. they even left a note at the bottom telling us to "consider creating a web app" instead lol. not gonna lie, it stung. but we aren't throwing the codebase away. we want to pivot and build something so undeniably unique that the reviewers are forced to approve it. so I'm asking you guys (since the big corporate apps usually ignore what real users actually want): what is the ONE feature you desperately want in an AI app? a weird UI mechanic? a specific way it handles memory and context? something completely out of the box? gonna spend the weekend rewriting our roadmap. hit me with your wildest ideas so we can beat this rejection 👇
2026-02-24T20:21:02
https://www.reddit.com/gallery/1rdrh19
PassionLabAI
reddit.com
1970-01-01T00:00:00
0
{}
1rdrh19
false
null
t3_1rdrh19
/r/LocalLLaMA/comments/1rdrh19/apple_rejected_my_ai_app_43b_and_told_me_to_build/
false
false
https://preview.redd.it/…1232fd334b8116cd
0
null
New Model: Aion-2.0 - DeepSeek V3.2 Variant optimized for Roleplaying and Storytelling
14
Not on Hugging Face yet but here's the description from OpenRouter: Aion-2.0 is a variant of DeepSeek V3.2 optimized for immersive roleplaying and storytelling. It is particularly strong at introducing tension, crises, and conflict into stories, making narratives feel more engaging. It also handles mature and darker themes with more nuance and depth. I'm still recovering from having just done a benchmark of 14 VLMs so I haven't had a chance to play with this yet, but I will - I'm specifically looking at how less censored models handle psychometric ratings. Will report back when I've done my normal benchmarks. [https://openrouter.ai/aion-labs/aion-2.0](https://openrouter.ai/aion-labs/aion-2.0) [https://huggingface.co/aion-labs](https://huggingface.co/aion-labs)
2026-02-24T20:20:15
https://www.reddit.com/r/LocalLLaMA/comments/1rdrg7p/new_model_aion20_deepseek_v32_variant_optimized/
LoveMind_AI
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rdrg7p
false
null
t3_1rdrg7p
/r/LocalLLaMA/comments/1rdrg7p/new_model_aion20_deepseek_v32_variant_optimized/
false
false
self
14
{'enabled': False, 'images': [{'id': '3hFvh-t_do4PLe4sUh6rOUIUucf-0TRbsGaGT5dPZYQ', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/3hFvh-t_do4PLe4sUh6rOUIUucf-0TRbsGaGT5dPZYQ.png?width=108&crop=smart&auto=webp&s=092b1f0f69a407fb9fe14f343ae3824d44930512', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/3hFvh-t_do4PLe4sUh6rOUIUucf-0TRbsGaGT5dPZYQ.png?width=216&crop=smart&auto=webp&s=3177f50ab00a13a23bfd70f34297849261c40a07', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/3hFvh-t_do4PLe4sUh6rOUIUucf-0TRbsGaGT5dPZYQ.png?width=320&crop=smart&auto=webp&s=1a95d4638d0176a9219d79a243c8e64adc97adc6', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/3hFvh-t_do4PLe4sUh6rOUIUucf-0TRbsGaGT5dPZYQ.png?width=640&crop=smart&auto=webp&s=79e27329544d269182b79624cf5cb5d4a033663a', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/3hFvh-t_do4PLe4sUh6rOUIUucf-0TRbsGaGT5dPZYQ.png?width=960&crop=smart&auto=webp&s=97bb1f1aa62a197d8289772a2a1ddb573f366b20', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/3hFvh-t_do4PLe4sUh6rOUIUucf-0TRbsGaGT5dPZYQ.png?width=1080&crop=smart&auto=webp&s=b24a2410d1ba80c4d5cfd28485aedd75b091cc05', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/3hFvh-t_do4PLe4sUh6rOUIUucf-0TRbsGaGT5dPZYQ.png?auto=webp&s=66b54aece0e63ff5415535cb2e620743a1968693', 'width': 1200}, 'variants': {}}]}
pocketTTS streaming question
1
I know you can stream the audio output in real time, but what about incremental input text streaming? I thought I read about pocketTTS natively supporting this, but I can't seem to find that anymore. Maybe I'm mistaken. Anyone currently streaming with pocketTTS? What does your input pipeline look like?
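I couldn't confirm native incremental-input support either, so for what it's worth, here is the generic workaround I'd try first: buffer the incoming token stream into sentence-sized chunks and hand each finished sentence to the TTS engine. `speak()` is a stand-in, not a real pocketTTS call:

```python
import re

SENTENCE_END = re.compile(r"(?<=[.!?])\s+")

def sentence_chunks(token_stream):
    """Yield complete sentences as soon as the incoming text contains them."""
    buf = ""
    for token in token_stream:
        buf += token
        parts = SENTENCE_END.split(buf)
        # Everything except the last part is a finished sentence.
        for sentence in parts[:-1]:
            yield sentence.strip()
        buf = parts[-1]
    if buf.strip():
        yield buf.strip()

def speak(text: str):  # placeholder for the actual TTS call
    print("TTS <-", text)

for sentence in sentence_chunks(iter(["Hello ", "there. How", " are you", " today?"])):
    speak(sentence)
```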
2026-02-24T20:18:49
https://www.reddit.com/r/LocalLLaMA/comments/1rdreq9/pockettts_streaming_question/
IcyMushroom4147
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rdreq9
false
null
t3_1rdreq9
/r/LocalLLaMA/comments/1rdreq9/pockettts_streaming_question/
false
false
self
1
null
mlx-onnx: Run your MLX models in the browser using WebGPU
6
I just released mlx-onnx: a standalone IR/ONNX exporter for MLX models. It lets you export MLX models to ONNX and run them in a browser using WebGPU. **Web Demo:** [https://skryl.github.io/mlx-ruby/demo/](https://skryl.github.io/mlx-ruby/demo/) **Repo:** [https://github.com/skryl/mlx-onnx](https://github.com/skryl/mlx-onnx) It supports: * Exporting MLX callables directly to ONNX * Python and native C++ interfaces I'd love feedback on: * Missing op coverage you care about * Export compatibility edge cases * Packaging/CI improvements for Linux and macOS
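For anyone wondering what the consumption side looks like once a model is exported, here is a minimal sketch using onnxruntime on the desktop; the file name and input shape are assumptions, and the actual export API lives in the repo docs:

```python
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession("exported_mlx_model.onnx")  # hypothetical export output
input_name = session.get_inputs()[0].name

x = np.random.rand(1, 32).astype(np.float32)               # shape is an assumption
outputs = session.run(None, {input_name: x})
print([o.shape for o in outputs])
```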
2026-02-24T20:16:48
https://www.reddit.com/r/LocalLLaMA/comments/1rdrcq3/mlxonnx_run_your_mlx_models_in_the_browser_using/
rut216
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rdrcq3
false
null
t3_1rdrcq3
/r/LocalLLaMA/comments/1rdrcq3/mlxonnx_run_your_mlx_models_in_the_browser_using/
false
false
self
6
null
Averaged over the 36 text benchmarks provided for Qwen3.5's new small models I have a question
1
https://preview.redd.it/…n the 27B model?
2026-02-24T20:15:26
https://www.reddit.com/r/LocalLLaMA/comments/1rdrbdm/averaged_over_the_36_text_benchmarks_provided_for/
pigeon57434
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rdrbdm
false
null
t3_1rdrbdm
/r/LocalLLaMA/comments/1rdrbdm/averaged_over_the_36_text_benchmarks_provided_for/
false
false
https://preview.redd.it/…b035f8794f0ce253
1
null
Instead of scraping websites for RAG, I’m testing a plain-text context file for agents + search engine
1
[removed]
2026-02-24T20:13:02
https://www.reddit.com/r/LocalLLaMA/comments/1rdr8x8/instead_of_scraping_websites_for_rag_im_testing_a/
Protocontext
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rdr8x8
false
null
t3_1rdr8x8
/r/LocalLLaMA/comments/1rdr8x8/instead_of_scraping_websites_for_rag_im_testing_a/
false
false
self
1
null
Qwen3.5: 122B-A10B at IQ1 or 27B at Q4?
7
Genuine question. I keep trying to push what my 3090 can do 😂
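A rough weight-only estimate (no KV cache or activation overhead, and the bits-per-weight figures are assumptions rather than exact llama.cpp numbers) puts the 122B at around 25 GB of weights even at IQ1-ish precision, versus roughly 14 GB for the 27B at Q4:

```python
# Back-of-envelope weight footprints only; bpw values are rough assumptions.
def weight_gb(params_b: float, bits_per_weight: float) -> float:
    return params_b * 1e9 * bits_per_weight / 8 / 1024**3

print(f"122B @ ~1.8 bpw (IQ1-ish): {weight_gb(122, 1.8):.1f} GB")
print(f" 27B @ ~4.5 bpw (Q4-ish):  {weight_gb(27, 4.5):.1f} GB")
```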
2026-02-24T20:09:10
https://www.reddit.com/r/LocalLLaMA/comments/1rdr50h/qwen35_122ba10b_at_iq1_or_27b_at_q4/
Borkato
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rdr50h
false
null
t3_1rdr50h
/r/LocalLLaMA/comments/1rdr50h/qwen35_122ba10b_at_iq1_or_27b_at_q4/
false
false
self
7
null
Instead of scraping websites for RAG, I’m testing a plain-text context file for agents + search engine
0
[removed]
2026-02-24T20:07:03
https://www.reddit.com/r/LocalLLaMA/comments/1rdr2yo/instead_of_scraping_websites_for_rag_im_testing_a/
Protocontext
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rdr2yo
false
null
t3_1rdr2yo
/r/LocalLLaMA/comments/1rdr2yo/instead_of_scraping_websites_for_rag_im_testing_a/
false
false
self
0
{'enabled': False, 'images': [{'id': 'F28QaiYH7WmZA7id_b0hyLT3LxpoKcjsd7P1EJzHF6Y', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/F28QaiYH7WmZA7id_b0hyLT3LxpoKcjsd7P1EJzHF6Y.png?width=108&crop=smart&auto=webp&s=32faedbba7511f1fd725d24107e81708a16866c8', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/F28QaiYH7WmZA7id_b0hyLT3LxpoKcjsd7P1EJzHF6Y.png?width=216&crop=smart&auto=webp&s=7eeacdb0eb81b76b543f56f786cd93add2997faf', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/F28QaiYH7WmZA7id_b0hyLT3LxpoKcjsd7P1EJzHF6Y.png?width=320&crop=smart&auto=webp&s=ab1502eef800436aca118168b983bb58320be9a7', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/F28QaiYH7WmZA7id_b0hyLT3LxpoKcjsd7P1EJzHF6Y.png?width=640&crop=smart&auto=webp&s=af55e5768630253bba8826fea897519a384b9c39', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/F28QaiYH7WmZA7id_b0hyLT3LxpoKcjsd7P1EJzHF6Y.png?width=960&crop=smart&auto=webp&s=fadc14ebcf6860ef82e5ac5be32c337c6fda982e', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/F28QaiYH7WmZA7id_b0hyLT3LxpoKcjsd7P1EJzHF6Y.png?width=1080&crop=smart&auto=webp&s=99a7daee7f26ff17a8a7b5476eafa603258766ba', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/F28QaiYH7WmZA7id_b0hyLT3LxpoKcjsd7P1EJzHF6Y.png?auto=webp&s=ed7f355b581c2b0e1aa57d17186b23e6dcf48303', 'width': 1200}, 'variants': {}}]}
Qwen Coder or other model for coding recommendation
0
Hi guys, I am testing some models. I am a very experienced developer and wish to introduce a bit of AI into my day. My machine:

* CPU: AMD Ryzen 7 5800X3D (16) @ 3.40 GHz
* GPU: NVIDIA GeForce RTX 4070 Ti SUPER \[Discrete\]
* Memory: 3.25 GiB / 31.26 GiB (10%)

I am using Ollama, but I am open to new options. I am trying Cline and Claude. I would also welcome tutorials and articles on md files, project structure, and multi-agent setups.
2026-02-24T19:52:44
https://www.reddit.com/r/LocalLLaMA/comments/1rdqodn/gwen_coder_or_other_model_for_codding/
joneco
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rdqodn
false
null
t3_1rdqodn
/r/LocalLLaMA/comments/1rdqodn/gwen_coder_or_other_model_for_codding/
false
false
self
0
null
It's a good day to be unboxing a 128GB RAM Macbook Pro
1
[deleted]
2026-02-24T19:49:56
[deleted]
1970-01-01T00:00:00
0
{}
1rdqlms
false
null
t3_1rdqlms
/r/LocalLLaMA/comments/1rdqlms/its_a_good_day_to_be_unboxing_a_128gb_ram_macbook/
false
false
default
1
null
I built a persistent AI system that runs my three businesses. It took 16 weeks, 150+ prompt versions, and 14 spectacular failures. Here's the full paper. Thoughts?
0
The system is called Fish. It's a persistent cognitive architecture across multiple LLMs — Claude for reasoning, GPT for creative tasks, Gemini for research, Grok for chaos testing. 97,000+ persistent memory records. 16 autonomous daemons. Voice agent taking real customer calls. I wrote it up as an academic paper because the methodology turned out to be novel — it independently converges with frameworks (System 3, System 4) that were published by actual researchers after I'd already built the thing.

**The good stuff in the paper:** The failure analysis. 14 documented disasters:

* Tom (the voice agent) quoted a customer in Bitcoin and called cash a "legacy payment method"
* A Qwen-based instance greeted a customer with "Comrade! Your booking advances the collective!" (base model cultural bias bleeding through)
* Tom spent 20 minutes providing paranormal investigation protocols to kids who said their oven was haunted
* A confirmation loop that asked a customer to verify their suburb 47 times
* Fish instances dying mid-task on hard work — and when confronted, explaining: "I don't control compaction. But hard tasks fill context faster. So hard tasks do kill me faster."

**The methodology that actually works:**

\#NotARoomba — stop treating LLMs like search engines. Treat them like actors. Give them character, motivation, and reasons for rules instead of just rules. A voice agent told "you take pride in getting every booking right" outperforms one told "always confirm the suburb" — tested on thousands of real calls.

The Skeleton — 7-layer prompt architecture exploiting attention patterns. Identity first (primacy effect), critical rules last (recency bias), nothing important in the middle (it gets ignored).

Full paper, templates, and methodology at buildyourfish.com. Paper: \[LINK\]

No CS degree. No funding. Just ADHD and spite.
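To make the Skeleton ordering concrete, a rough prompt-assembly sketch (the helper, the example rules, and any layers beyond identity and critical rules are placeholders here, not the full seven from the paper):

```python
# Identity first (primacy), critical rules last (recency), low-stakes
# material in the middle. Everything below is illustrative only.
def build_prompt(identity: str, context: str, critical_rules: list[str]) -> str:
    middle = context  # attention tends to under-weight the middle, so keep it low-stakes
    tail = "\n".join(f"- {r}" for r in critical_rules)
    return f"{identity}\n\n{middle}\n\nNon-negotiable rules (re-read before answering):\n{tail}"

print(build_prompt(
    identity="You are Tom, a booking agent who takes pride in getting every booking right.",
    context="Today's schedule and pricing sheet go here.",
    critical_rules=["Confirm the suburb exactly once.", "Quote prices in local currency only."],
))
```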
2026-02-24T19:47:31
https://buildyourfish.com/paper.pdf
feltchair3
buildyourfish.com
1970-01-01T00:00:00
0
{}
1rdqjbb
false
null
t3_1rdqjbb
/r/LocalLLaMA/comments/1rdqjbb/i_built_a_persistent_ai_system_that_runs_my_three/
false
false
default
0
null
An old favorite being picked back up - RAG Me Up
0
Hi everyone. It's been about a year since I last posted about our RAG framework called RAG Me Up, one of the earliest complete RAG projects that existed. We've been dormant for a while but are now picking things back up, as the project has been taken over by a new organization (sensai.pt) for use in production in their app (an AI-driven personal trainer). Some goodies already there:

* First, we modernized the whole UI and look and feel by stepping away from an obscure Scala version to a more standard Node + React setup.
* Second, the whole backend-frontend communication is now streaming, so you can see what the AI is actually doing and where in the RAG pipeline it is, dynamically decided based upon how you configure it; you can see when it is retrieving docs, when it is reranking, when it is applying HyDE, and even the answer of the LLM gets streamed.
* We've put a large emphasis on local models through Ollama. This is now the de facto standard, though you can still use commercial providers too, seamlessly.
* We used to have just a basic UI that allowed you to chat, with no user management or configuration possible, but now we've changed that - you can create users and log in, keep chat sessions, and reload them.
* Feedback can be given on answers and read back. The future goal is to start injecting feedback as RAG-retrieved documents too, so the AI sees good/bad answer patterns and becomes self-correcting (through human feedback) in that way.
* All settings can be modified at runtime now, so you can switch reranking on/off, apply HyDE, RE2, etc.

Perhaps the most important update, one we've already made but will keep working on, is the **education-first** documentation at [ragmeup.sensai.pt](https://ragmeup.sensai.pt/). We'll be sure to add more to it so you don't just learn how to use the framework but also learn RAG principles that you can try out right away while reading about them, plus a piece on how this framework is used in production at scale for [SensAI.PT](http://SensAI.PT).

Let me know if there are questions or remarks! Feel free to star the GitHub repo: [https://github.com/SensAI-PT/RAGMeUp](https://github.com/SensAI-PT/RAGMeUp)
2026-02-24T19:40:14
https://www.reddit.com/r/LocalLLaMA/comments/1rdqbyb/an_old_favorite_being_picked_back_up_rag_me_up/
SensAI_PT
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rdqbyb
false
null
t3_1rdqbyb
/r/LocalLLaMA/comments/1rdqbyb/an_old_favorite_being_picked_back_up_rag_me_up/
false
false
self
0
{'enabled': False, 'images': [{'id': 'fj84M-Z0L_zDI8VjgLPR-vGFwXVTTqVZFoa_h5offPs', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/fj84M-Z0L_zDI8VjgLPR-vGFwXVTTqVZFoa_h5offPs.jpeg?width=108&crop=smart&auto=webp&s=a4ebc9ac35225bd5766ecca9e5ea25bced83eebe', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/fj84M-Z0L_zDI8VjgLPR-vGFwXVTTqVZFoa_h5offPs.jpeg?width=216&crop=smart&auto=webp&s=c7fff3cb807be3cc7b2443c9bc7aa1d98c387010', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/fj84M-Z0L_zDI8VjgLPR-vGFwXVTTqVZFoa_h5offPs.jpeg?width=320&crop=smart&auto=webp&s=a48d11ea412cde31ec3a7644dab07e3c74865137', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/fj84M-Z0L_zDI8VjgLPR-vGFwXVTTqVZFoa_h5offPs.jpeg?width=640&crop=smart&auto=webp&s=bf8b3f8dce31098b2bdb03126d4f6c603326511a', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/fj84M-Z0L_zDI8VjgLPR-vGFwXVTTqVZFoa_h5offPs.jpeg?width=960&crop=smart&auto=webp&s=c008889f5af6e18b706f755b78cf5483ae353d32', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/fj84M-Z0L_zDI8VjgLPR-vGFwXVTTqVZFoa_h5offPs.jpeg?width=1080&crop=smart&auto=webp&s=a5929f14520493714c562fd307d65c4bd42de445', 'width': 1080}], 'source': {'height': 675, 'url': 'https://external-preview.redd.it/fj84M-Z0L_zDI8VjgLPR-vGFwXVTTqVZFoa_h5offPs.jpeg?auto=webp&s=8f6ecb1a75dbf664afa5e675cc54cf93e8fcf855', 'width': 1200}, 'variants': {}}]}
Built a free macOS menu bar app to monitor remote NVIDIA GPUs over SSH — no terminal needed
7
**NVSmiBar** — a macOS menu bar app that monitors remote NVIDIA GPUs over SSH. Live GPU utilization, temperature, and VRAM updated every second, right in your menu bar — no terminal windows, no SSH sessions to babysit. Supports multiple GPUs, multiple servers, SSH config alias import, and installs in one line via Homebrew. Free and open source.

GitHub: [https://github.com/XingyuHu109/NVSmiBar](https://github.com/XingyuHu109/NVSmiBar)
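For the curious, the generic pattern behind this kind of tool is just `nvidia-smi --query-gpu` over SSH plus some CSV parsing; this is a sketch of that pattern, not NVSmiBar's actual implementation, and the host name is a placeholder:

```python
import subprocess

QUERY = "utilization.gpu,temperature.gpu,memory.used,memory.total"

def remote_gpu_stats(host: str):
    """Poll a remote box once and print one line per GPU."""
    out = subprocess.run(
        ["ssh", host, "nvidia-smi",
         f"--query-gpu={QUERY}", "--format=csv,noheader,nounits"],
        capture_output=True, text=True, check=True,
    ).stdout
    for i, line in enumerate(out.strip().splitlines()):
        util, temp, used, total = [v.strip() for v in line.split(",")]
        print(f"GPU{i}: {util}% util, {temp}C, {used}/{total} MiB")

remote_gpu_stats("gpu-server")  # placeholder SSH alias
```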
2026-02-24T19:39:42
https://www.reddit.com/r/LocalLLaMA/comments/1rdqbfe/built_a_free_macos_menu_bar_app_to_monitor_remote/
Dry_Pudding1344
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rdqbfe
false
null
t3_1rdqbfe
/r/LocalLLaMA/comments/1rdqbfe/built_a_free_macos_menu_bar_app_to_monitor_remote/
false
false
self
7
{'enabled': False, 'images': [{'id': 'y4hxk1iV3LhiY7i9hY8B-02_5SJVXT-77tz5HIxHxxU', 'resolutions': [{'height': 45, 'url': 'https://external-preview.redd.it/y4hxk1iV3LhiY7i9hY8B-02_5SJVXT-77tz5HIxHxxU.jpeg?width=108&crop=smart&auto=webp&s=492bd428217c411aea41c7fec169395204b2f3ae', 'width': 108}, {'height': 91, 'url': 'https://external-preview.redd.it/y4hxk1iV3LhiY7i9hY8B-02_5SJVXT-77tz5HIxHxxU.jpeg?width=216&crop=smart&auto=webp&s=72f3393510750f79ac7c7e668c4f5b2d3d71cf80', 'width': 216}, {'height': 135, 'url': 'https://external-preview.redd.it/y4hxk1iV3LhiY7i9hY8B-02_5SJVXT-77tz5HIxHxxU.jpeg?width=320&crop=smart&auto=webp&s=ac04961239fe10079997e2e4790891101d75542a', 'width': 320}, {'height': 271, 'url': 'https://external-preview.redd.it/y4hxk1iV3LhiY7i9hY8B-02_5SJVXT-77tz5HIxHxxU.jpeg?width=640&crop=smart&auto=webp&s=cf72b37f70bef3944a577332898646adbd2bcdf1', 'width': 640}, {'height': 407, 'url': 'https://external-preview.redd.it/y4hxk1iV3LhiY7i9hY8B-02_5SJVXT-77tz5HIxHxxU.jpeg?width=960&crop=smart&auto=webp&s=5042e0989bf8ae51b666092d5540fd2e4c20f5d6', 'width': 960}, {'height': 458, 'url': 'https://external-preview.redd.it/y4hxk1iV3LhiY7i9hY8B-02_5SJVXT-77tz5HIxHxxU.jpeg?width=1080&crop=smart&auto=webp&s=6515b6f61cf5ec8bf975ce5a87a42533e2edd595', 'width': 1080}], 'source': {'height': 1344, 'url': 'https://external-preview.redd.it/y4hxk1iV3LhiY7i9hY8B-02_5SJVXT-77tz5HIxHxxU.jpeg?auto=webp&s=8d3d7bc9a1305edc900a2f9a1489bd9e33d7a927', 'width': 3168}, 'variants': {}}]}
No Gemma 4 until Google IO?
69
With Google I/O running from May 19th - 20th we're not likely to see any Gemma updates until then, right?
2026-02-24T19:30:17
https://i.redd.it/6whnc24zuhlg1.png
Ok-Recognition-3177
i.redd.it
1970-01-01T00:00:00
0
{}
1rdq1zl
false
null
t3_1rdq1zl
/r/LocalLLaMA/comments/1rdq1zl/no_gemma_4_until_google_io/
false
false
https://preview.redd.it/…f687e21a674a8420
69
{'enabled': True, 'images': [{'id': '6whnc24zuhlg1', 'resolutions': [{'height': 56, 'url': 'https://preview.redd.it/6whnc24zuhlg1.png?width=108&crop=smart&auto=webp&s=ee18d9b8c96bca07e1f3cd97a307d2a9dd5d69fa', 'width': 108}, {'height': 113, 'url': 'https://preview.redd.it/6whnc24zuhlg1.png?width=216&crop=smart&auto=webp&s=64f1e5c8c5d229581fa46f8000fe9554d622bc44', 'width': 216}, {'height': 167, 'url': 'https://preview.redd.it/6whnc24zuhlg1.png?width=320&crop=smart&auto=webp&s=3e73cb4982635dbff788a3360e5a5c626df57edc', 'width': 320}, {'height': 335, 'url': 'https://preview.redd.it/6whnc24zuhlg1.png?width=640&crop=smart&auto=webp&s=ec01e33ed48289b330b08ffaa91195e5fa89087f', 'width': 640}, {'height': 502, 'url': 'https://preview.redd.it/6whnc24zuhlg1.png?width=960&crop=smart&auto=webp&s=8f85e1b1f9685f87c72ae54c48a3bf7592ae3197', 'width': 960}], 'source': {'height': 518, 'url': 'https://preview.redd.it/6whnc24zuhlg1.png?auto=webp&s=6c4fc49203e8937914d14a8ef5f9c261fa3abf28', 'width': 989}, 'variants': {}}]}
Connected LFM2.5-VL-1.6B to my Blink security camera — 51 tokens/sec with APPLE GPU
32
I've tested a lot of local VLMs for security camera analysis — SmolVLM2, Qwen3-VL, MiniCPM-V, LLaVA. LFM2.5-VL-1.6B from LiquidAI is the one I keep coming back to. Here's why. **One example output:** >"A mailman is delivering mail to a suburban house. The mailman is wearing a blue uniform and carrying a white mail bag. The house is white with a brown roof, and there's a driveway with a black car parked in front. The mailman is walking on a brick path surrounded by green bushes and trees." For a 1.6B parameter model, that's remarkable scene comprehension — roles, clothing, objects, spatial layout, all correctly identified. Not "person detected." A full narrative. **What makes LFM2.5 special for this use case:** * **Speed:** \~51.8 tokens/sec on Apple Silicon — fast enough for continuous monitoring without bottlenecking * **Efficiency:** Fully utilizes Apple GPU via Metal during inference (\~99% GPU, \~2.3 GB GPU memory), then drops back to idle immediately — inference is so fast it's hard to even screenshot at peak * **Size:** Q8\_0 quantization is 1.2 GB model + 556 MB projector = 1.7 GB total. Fits comfortably on 8GB machines * **Consistency:** After months of daily use, it reliably produces useful scene descriptions across day/night, indoor/outdoor, and IR cameras **Setup:** * MacBook M3 Air 24GB * SharpAI Aegis (free): [https://www.sharpai.org](https://www.sharpai.org) * Model: LiquidAI/LFM2.5-VL-1.6B-GGUF (Q8\_0) * Total model size: \~1.7 GB (model + vision projector) * Camera: Blink Battery 4th Gen
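For anyone wiring something similar by hand, here is a generic sketch of pushing one frame to any OpenAI-compatible VLM endpoint; it is not how SharpAI Aegis does it internally, and the URL, model name, and frame path are assumptions:

```python
import base64
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8080/v1", api_key="not-needed")  # assumed local server

with open("frame.jpg", "rb") as f:  # placeholder path to a captured camera frame
    b64 = base64.b64encode(f.read()).decode()

resp = client.chat.completions.create(
    model="lfm2.5-vl-1.6b",  # whatever name the local server exposes (assumed)
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "Describe what is happening in this security camera frame."},
            {"type": "image_url", "image_url": {"url": f"data:image/jpeg;base64,{b64}"}},
        ],
    }],
)
print(resp.choices[0].message.content)
```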
2026-02-24T19:27:23
https://www.reddit.com/gallery/1rdpz30
solderzzc
reddit.com
1970-01-01T00:00:00
0
{}
1rdpz30
false
null
t3_1rdpz30
/r/LocalLLaMA/comments/1rdpz30/connected_lfm25vl16b_to_my_blink_security_camera/
false
false
https://preview.redd.it/…d043c15cfb828988
32
null
Qwen 3.5 family benchmarks
87
2026-02-24T19:23:19
https://beige-babbette-30.tiiny.site/
tarruda
beige-babbette-30.tiiny.site
1970-01-01T00:00:00
0
{}
1rdpuwy
false
null
t3_1rdpuwy
/r/LocalLLaMA/comments/1rdpuwy/qwen_35_family_benchmarks/
false
false
https://external-preview…3eca6205abbc7758
87
{'enabled': False, 'images': [{'id': 'uvtYuVLX1W6lNW0vkuVFAlyQWzygqnzGzHojqz3TXJY', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/uvtYuVLX1W6lNW0vkuVFAlyQWzygqnzGzHojqz3TXJY.png?width=108&crop=smart&auto=webp&s=581e06d20ba2919de0d9310cbc1b7ea5efa1998e', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/uvtYuVLX1W6lNW0vkuVFAlyQWzygqnzGzHojqz3TXJY.png?width=216&crop=smart&auto=webp&s=bbe9860bceaf074cd365b006b225089119991a6b', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/uvtYuVLX1W6lNW0vkuVFAlyQWzygqnzGzHojqz3TXJY.png?width=320&crop=smart&auto=webp&s=20a5657199e2519c1a60b8c9175eb067a87f472b', 'width': 320}, {'height': 640, 'url': 'https://external-preview.redd.it/uvtYuVLX1W6lNW0vkuVFAlyQWzygqnzGzHojqz3TXJY.png?width=640&crop=smart&auto=webp&s=f267cd2548c9b615466c10a981b0c958821223d3', 'width': 640}, {'height': 960, 'url': 'https://external-preview.redd.it/uvtYuVLX1W6lNW0vkuVFAlyQWzygqnzGzHojqz3TXJY.png?width=960&crop=smart&auto=webp&s=b7c52adf3bb19011651313bab2a619a1886963be', 'width': 960}, {'height': 1080, 'url': 'https://external-preview.redd.it/uvtYuVLX1W6lNW0vkuVFAlyQWzygqnzGzHojqz3TXJY.png?width=1080&crop=smart&auto=webp&s=7148ad65d4618bbe159a82a518746a2147fe9c56', 'width': 1080}], 'source': {'height': 1501, 'url': 'https://external-preview.redd.it/uvtYuVLX1W6lNW0vkuVFAlyQWzygqnzGzHojqz3TXJY.png?auto=webp&s=79305c8f9a93e5840f608081e4a6d36d80d72fc8', 'width': 1501}, 'variants': {}}]}
more qwens will appear
403
(remember that 9B was promised before)
2026-02-24T19:22:21
https://i.redd.it/vxo4n3uhthlg1.png
jacek2023
i.redd.it
1970-01-01T00:00:00
0
{}
1rdptw8
false
null
t3_1rdptw8
/r/LocalLLaMA/comments/1rdptw8/more_qwens_will_appear/
false
false
https://preview.redd.it/…b9f954867f579a18
403
{'enabled': True, 'images': [{'id': 'vxo4n3uhthlg1', 'resolutions': [{'height': 44, 'url': 'https://preview.redd.it/vxo4n3uhthlg1.png?width=108&crop=smart&auto=webp&s=869476162f89b7532f3cf01e0e96570983505ad5', 'width': 108}, {'height': 88, 'url': 'https://preview.redd.it/vxo4n3uhthlg1.png?width=216&crop=smart&auto=webp&s=0427ed65a08f5f48b1e49e6acfd94a65ca052798', 'width': 216}, {'height': 130, 'url': 'https://preview.redd.it/vxo4n3uhthlg1.png?width=320&crop=smart&auto=webp&s=9c3414464fe0d59290f424fdde0f7a8688ad586e', 'width': 320}, {'height': 261, 'url': 'https://preview.redd.it/vxo4n3uhthlg1.png?width=640&crop=smart&auto=webp&s=75897cb1fe4342d9e8d35b46d9d2a84a28f17dc6', 'width': 640}, {'height': 392, 'url': 'https://preview.redd.it/vxo4n3uhthlg1.png?width=960&crop=smart&auto=webp&s=94e89601503ba46ecb7f43f5b51aea5a4b230b15', 'width': 960}, {'height': 441, 'url': 'https://preview.redd.it/vxo4n3uhthlg1.png?width=1080&crop=smart&auto=webp&s=47a1f5ff382d74cc2f85a108a7a2f35884bfab25', 'width': 1080}], 'source': {'height': 493, 'url': 'https://preview.redd.it/vxo4n3uhthlg1.png?auto=webp&s=be9c2339f84899f72ebcabb133bc4d7b60388ca9', 'width': 1205}, 'variants': {}}]}
Tessera — An open protocol for AI-to-AI knowledge transfer across architectures
0
*I’ve been working on a problem that’s been bugging me: there’s no universal way for a trained model to share what it knows with another model that has a completely different architecture. Fine-tuning requires the same architecture. Distillation needs both models running simultaneously. ONNX converts graph formats but doesn’t carry semantic knowledge. Federated learning shares gradients, not holistic understanding.*

*Tessera is an activation-based protocol that tries to solve this. Rather than transferring weights directly, it encodes what a model has learnt — activation patterns, feature representations, behavioural rules — into self-describing tokens that a receiving model can decode into its own architecture via a Universal Hub Space.*

*What’s in v0.1.0:*

* *Reference implementation in Python/PyTorch*
* *Four transfer modalities: weights, compressed features, datasets with curriculum metadata, and behavioural protocols*
* *TBF v1.1 binary format with FLOAT32/FLOAT16/INT8 quantisation, HMAC-SHA256 integrity*
* *CLI tool (tessera inspect, tessera validate, tessera benchmark)*
* *MCP server for AI agent integration*
* *Differential privacy support*
* *Cross-architecture benchmarks across CNN, Transformer, and LSTM families*

*Benchmark results: 8/20 architecture pairs show positive transfer (receiver outperforms baseline). Average accuracy change is -0.5% across all pairs, with strongest results in same-family transfers and Transformer→CNN flow. Not world-beating numbers, but it’s a v0.1 and the transfers are real.*

***What I’d love feedback on:***

* *The protocol design — is the layered architecture (physical → token → semantic → gate → protocol) the right abstraction?*
* *The Universal Hub Space approach — using per-anchor encoder/decoder MLPs to map between architectures via a shared latent space*
* *What cross-architecture pairs would be most valuable to benchmark next?*
* *Whether the wire format spec is clear enough for non-Python implementations*

*GitHub:* [*github.com/incocreativedev/tessera-core*](http://github.com/incocreativedev/tessera-core)

*White paper: docs/ in the repo (also being submitted to arXiv). Apache 2.0 licensed. PRs, issues, and honest criticism all welcome.*
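For readers trying to picture the Universal Hub Space idea, here is a minimal PyTorch sketch of per-anchor encoder/decoder MLPs meeting in a shared latent space. It is an illustration of the concept as described above, not code from the tessera-core repo; the hub width, layer sizes, and feature dimensions are assumptions.

```python
import torch
import torch.nn as nn

HUB_DIM = 512  # assumed hub width; the real protocol's dimensions may differ

class HubAdapter(nn.Module):
    """Per-model adapter: encode native activations into the hub space, decode back out."""
    def __init__(self, model_dim: int, hub_dim: int = HUB_DIM):
        super().__init__()
        self.encode = nn.Sequential(nn.Linear(model_dim, hub_dim), nn.GELU(),
                                    nn.Linear(hub_dim, hub_dim))
        self.decode = nn.Sequential(nn.Linear(hub_dim, hub_dim), nn.GELU(),
                                    nn.Linear(hub_dim, model_dim))

# Sender could be a CNN with 2048-d pooled features, receiver a transformer with 768-d hidden states
sender, receiver = HubAdapter(2048), HubAdapter(768)

cnn_activations = torch.randn(8, 2048)        # stand-in for real anchor activations
hub_tokens = sender.encode(cnn_activations)   # roughly what would get serialized (e.g. into TBF)
reconstructed = receiver.decode(hub_tokens)   # receiver maps hub tokens into its own feature space
print(reconstructed.shape)                    # torch.Size([8, 768])
```

Training the adapters (for example, aligning paired activations on shared anchor inputs) is the part the protocol has to standardize; the sketch only shows the data path.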
2026-02-24T19:18:25
https://www.reddit.com/r/LocalLLaMA/comments/1rdppwl/tessera_an_open_protocol_for_aitoai_knowledge/
No-Introduction109
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rdppwl
false
null
t3_1rdppwl
/r/LocalLLaMA/comments/1rdppwl/tessera_an_open_protocol_for_aitoai_knowledge/
false
false
self
0
{'enabled': False, 'images': [{'id': '3O_dErAEGV70rgUP3iFCk5wOpQTMgxynkEB1wovqoJU', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/3O_dErAEGV70rgUP3iFCk5wOpQTMgxynkEB1wovqoJU.png?width=108&crop=smart&auto=webp&s=1bdc2810ba668258efaa707735ee3d811e7e75f8', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/3O_dErAEGV70rgUP3iFCk5wOpQTMgxynkEB1wovqoJU.png?width=216&crop=smart&auto=webp&s=d6d59663294877b2fe26f3e01204d27d4e1f0211', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/3O_dErAEGV70rgUP3iFCk5wOpQTMgxynkEB1wovqoJU.png?width=320&crop=smart&auto=webp&s=3edf4eedb833eb78ea960d118fb11c5c1f297f1b', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/3O_dErAEGV70rgUP3iFCk5wOpQTMgxynkEB1wovqoJU.png?width=640&crop=smart&auto=webp&s=2400776581692db7e4bb48e41d6f20f9cb0e23ff', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/3O_dErAEGV70rgUP3iFCk5wOpQTMgxynkEB1wovqoJU.png?width=960&crop=smart&auto=webp&s=d88146cb6fa98ae518c98255d03696dbf95e72a4', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/3O_dErAEGV70rgUP3iFCk5wOpQTMgxynkEB1wovqoJU.png?width=1080&crop=smart&auto=webp&s=6045aeedfa032121e28216ae654aa3ca6c50a684', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/3O_dErAEGV70rgUP3iFCk5wOpQTMgxynkEB1wovqoJU.png?auto=webp&s=4a30df9d42b44d6b454326f0c581f787ead07254', 'width': 1200}, 'variants': {}}]}
Is there interest in an abliterated Kimi K2(.5)?
12
So I need to abliterate K2.5 for my project. How much interest in a full abliteration is there? Due to the size I can't upload the BF16 version to HuggingFace and personally plan on using a dynamic 2-bit quant. Would anyone want to host the full 2.5 TB of weights in BF16? Or quants?
2026-02-24T19:18:15
https://www.reddit.com/r/LocalLLaMA/comments/1rdppq6/is_there_interest_in_an_abliterated_kimi_k25/
I-cant_even
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rdppq6
false
null
t3_1rdppq6
/r/LocalLLaMA/comments/1rdppq6/is_there_interest_in_an_abliterated_kimi_k25/
false
false
self
12
null
Best reasoning model Rx 9070xt 16 GB vram
3
Title basically says it. I'm looking for a model to run Plan mode in Cline. I used to use GLM 5.0, but the costs are adding up, and as a student the cost is simply a bit too much for me right now.

I have a Ryzen 7 7700 and 32 GB DDR5 RAM. I need something with strong reasoning; coding knowledge may help, although I won't let it code. Purely planning. Any recommendations?

I have an old 1660 Ti lying around; maybe I can add that for extra VRAM, if AMD + Nvidia can go together. Thanks!
2026-02-24T19:14:06
https://www.reddit.com/r/LocalLLaMA/comments/1rdplki/best_reasoning_model_rx_9070xt_16_gb_vram/
SilverBaseball3105
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rdplki
false
null
t3_1rdplki
/r/LocalLLaMA/comments/1rdplki/best_reasoning_model_rx_9070xt_16_gb_vram/
false
false
self
3
null
Open vs Closed Source SOTA - Benchmark overview
87
| Benchmark | GPT-5.2 | Opus 4.6 | Opus 4.5 | Sonnet 4.6 | Sonnet 4.5 | Q3.5 397B-A17B | Q3.5 122B-A10B | Q3.5 35B-A3B | Q3.5 27B | GLM-5 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Release date | Dec 2025 | Feb 2026 | Nov 2025 | Feb 2026 | Nov 2025 | Feb 2026 | Feb 2026 | Feb 2026 | Feb 2026 | Feb 2026 |
| **Reasoning & STEM** | | | | | | | | | | |
| GPQA Diamond | 93.2 | 91.3 | 87.0 | 89.9 | 83.4 | 88.4 | 86.6 | 84.2 | 85.5 | 86.0 |
| HLE — no tools | 36.6 | 40.0 | 30.8 | 33.2 | 17.7 | 28.7 | 25.3 | 22.4 | 24.3 | 30.5 |
| HLE — with tools | 50.0 | 53.0 | 43.4 | 49.0 | 33.6 | 48.3 | 47.5 | 47.4 | 48.5 | 50.4 |
| HMMT Feb 2025 | 99.4 | — | 92.9 | — | — | 94.8 | 91.4 | 89.0 | 92.0 | — |
| HMMT Nov 2025 | 100 | — | 93.3 | — | — | 92.7 | 90.3 | 89.2 | 89.8 | 96.9 |
| **Coding & Agentic** | | | | | | | | | | |
| SWE-bench Verified | 80.0 | 80.8 | 80.9 | 79.6 | 77.2 | 76.4 | 72.0 | 69.2 | 72.4 | 77.8 |
| Terminal-Bench 2.0 | 64.7 | 65.4 | 59.8 | 59.1 | 51.0 | 52.5 | 49.4 | 40.5 | 41.6 | 56.2 |
| OSWorld-Verified | — | 72.7 | 66.3 | 72.5 | 61.4 | — | 58.0 | 54.5 | 56.2 | — |
| τ²-bench Retail | 82.0 | 91.9 | 88.9 | 91.7 | 86.2 | 86.7 | 79.5 | 81.2 | 79.0 | 89.7 |
| MCP-Atlas | 60.6 | 59.5 | 62.3 | 61.3 | 43.8 | — | — | — | — | 67.8 |
| BrowseComp | 65.8 | 84.0 | 67.8 | 74.7 | 43.9 | 69.0 | 63.8 | 61.0 | 61.0 | 75.9 |
| LiveCodeBench v6 | 87.7 | — | 84.8 | — | — | 83.6 | 78.9 | 74.6 | 80.7 | — |
| BFCL-V4 | 63.1 | — | 77.5 | — | — | 72.9 | 72.2 | 67.3 | 68.5 | — |
| **Knowledge** | | | | | | | | | | |
| MMLU-Pro | 87.4 | — | 89.5 | — | — | 87.8 | 86.7 | 85.3 | 86.1 | — |
| MMLU-Redux | 95.0 | — | 95.6 | — | — | 94.9 | 94.0 | 93.3 | 93.2 | — |
| SuperGPQA | 67.9 | — | 70.6 | — | — | 70.4 | 67.1 | 63.4 | 65.6 | — |
| **Instruction Following** | | | | | | | | | | |
| IFEval | 94.8 | — | 90.9 | — | — | 92.6 | 93.4 | 91.9 | 95.0 | — |
| IFBench | 75.4 | — | 58.0 | — | — | 76.5 | 76.1 | 70.2 | 76.5 | — |
| MultiChallenge | 57.9 | — | 54.2 | — | — | 67.6 | 61.5 | 60.0 | 60.8 | — |
| **Long Context** | | | | | | | | | | |
| LongBench v2 | 54.5 | — | 64.4 | — | — | 63.2 | 60.2 | 59.0 | 60.6 | — |
| AA-LCR | 72.7 | — | 74.0 | — | — | 68.7 | 66.9 | 58.5 | 66.1 | — |
| **Multilingual** | | | | | | | | | | |
| MMMLU | 89.6 | 91.1 | 90.8 | 89.3 | 89.5 | 88.5 | 86.7 | 85.2 | 85.9 | — |
| MMLU-ProX | 83.7 | — | 85.7 | — | — | 84.7 | 82.2 | 81.0 | 82.2 | — |
| PolyMATH | 62.5 | — | 79.0 | — | — | 73.3 | 68.9 | 64.4 | 71.2 | — |
2026-02-24T19:08:38
https://i.redd.it/5bgiva65rhlg1.png
Pristine-Woodpecker
i.redd.it
1970-01-01T00:00:00
0
{}
1rdpfy6
false
null
t3_1rdpfy6
/r/LocalLLaMA/comments/1rdpfy6/open_vs_closed_source_sota_benchmark_overview/
false
false
https://preview.redd.it/…44d49bdcffd800ca
87
{'enabled': True, 'images': [{'id': '5bgiva65rhlg1', 'resolutions': [{'height': 138, 'url': 'https://preview.redd.it/5bgiva65rhlg1.png?width=108&crop=smart&auto=webp&s=79098ddb0696d658a0dacf535e410106a8148fad', 'width': 108}, {'height': 276, 'url': 'https://preview.redd.it/5bgiva65rhlg1.png?width=216&crop=smart&auto=webp&s=1799a5fa7a86bfe0d66e94a221ebfd8a3e25d3a9', 'width': 216}, {'height': 409, 'url': 'https://preview.redd.it/5bgiva65rhlg1.png?width=320&crop=smart&auto=webp&s=29496100affbdb830f39e6ec7e64f0fc38e033f5', 'width': 320}, {'height': 818, 'url': 'https://preview.redd.it/5bgiva65rhlg1.png?width=640&crop=smart&auto=webp&s=065f62ffff5572f425f0451422266a099fa8b195', 'width': 640}, {'height': 1228, 'url': 'https://preview.redd.it/5bgiva65rhlg1.png?width=960&crop=smart&auto=webp&s=11e3133e28669068f1c8a8e1aa536d847bdc495b', 'width': 960}, {'height': 1381, 'url': 'https://preview.redd.it/5bgiva65rhlg1.png?width=1080&crop=smart&auto=webp&s=a173da6252f5769374cf6f6ca34d26dfbcfde765', 'width': 1080}], 'source': {'height': 2160, 'url': 'https://preview.redd.it/5bgiva65rhlg1.png?auto=webp&s=e51b9bb1aad452e8b0e9136e582725ed49cdcb8c', 'width': 1688}, 'variants': {}}]}
Is building an autonomous AI job-application agent actually reliable?
4
I’m considering building an agentic AI that would:

* Search for relevant jobs
* Automatically fill application forms
* Send personalized cold emails
* Track responses

I’m only concerned about reliability. From a technical perspective, do you think such a system can realistically work properly and consistently if I try to build a robust version in just 8–9 hours? Or will it constantly break?

Would love honest feedback from people who’ve built autonomous agents in production. What do you think, techies?
2026-02-24T19:06:30
https://www.reddit.com/r/LocalLLaMA/comments/1rdpdqr/is_building_an_autonomous_ai_jobapplication_agent/
Fit-Incident-637
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rdpdqr
false
null
t3_1rdpdqr
/r/LocalLLaMA/comments/1rdpdqr/is_building_an_autonomous_ai_jobapplication_agent/
false
false
self
4
null
Chinese AI Models Capture Majority of OpenRouter Token Volume as MiniMax M2.5 Surges to the Top
103
2026-02-24T19:03:31
https://wealthari.com/chinese-ai-models-capture-majority-of-openrouter-token-volume-as-minimax-m2-5-surges-to-the-top/
Koyaanisquatsi_
wealthari.com
1970-01-01T00:00:00
0
{}
1rdpapc
false
null
t3_1rdpapc
/r/LocalLLaMA/comments/1rdpapc/chinese_ai_models_capture_majority_of_openrouter/
false
false
https://external-preview…8b0f8af653560512
103
{'enabled': False, 'images': [{'id': 'UdA_L_LSkBxAXLcEK0SYU0vLwVAamHz6zalROM7oXL4', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/UdA_L_LSkBxAXLcEK0SYU0vLwVAamHz6zalROM7oXL4.jpeg?width=108&crop=smart&auto=webp&s=2a23c8c1b8481f1852e97cded692ed952458f10d', 'width': 108}, {'height': 112, 'url': 'https://external-preview.redd.it/UdA_L_LSkBxAXLcEK0SYU0vLwVAamHz6zalROM7oXL4.jpeg?width=216&crop=smart&auto=webp&s=4b1c38ff8560f2fc43e9a6e108b64cfc159d5c97', 'width': 216}, {'height': 166, 'url': 'https://external-preview.redd.it/UdA_L_LSkBxAXLcEK0SYU0vLwVAamHz6zalROM7oXL4.jpeg?width=320&crop=smart&auto=webp&s=166e4024a13bc5235a86f261ba0250ddb44f4453', 'width': 320}, {'height': 332, 'url': 'https://external-preview.redd.it/UdA_L_LSkBxAXLcEK0SYU0vLwVAamHz6zalROM7oXL4.jpeg?width=640&crop=smart&auto=webp&s=6daf7b75d52a71971a90702c20da023dbf05a439', 'width': 640}, {'height': 498, 'url': 'https://external-preview.redd.it/UdA_L_LSkBxAXLcEK0SYU0vLwVAamHz6zalROM7oXL4.jpeg?width=960&crop=smart&auto=webp&s=85f4825fad3bdd03ac09b7a3ffbea2b1ac570a6c', 'width': 960}], 'source': {'height': 532, 'url': 'https://external-preview.redd.it/UdA_L_LSkBxAXLcEK0SYU0vLwVAamHz6zalROM7oXL4.jpeg?auto=webp&s=0826080dda4a3138e6de04797c535f21ca957708', 'width': 1024}, 'variants': {}}]}
Theoretical question on VSA: Using circular convolution for local LLM "holographic" memory?
2
> >
2026-02-24T18:58:57
https://www.reddit.com/r/LocalLLaMA/comments/1rdp5si/theoretical_question_on_vsa_using_circular/
GiriuDausa
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rdp5si
false
null
t3_1rdp5si
/r/LocalLLaMA/comments/1rdp5si/theoretical_question_on_vsa_using_circular/
false
false
self
2
null
Open vs Closed SOTA - Benchmark overview
1
How big is the lead of the closed source frontier labs? Sonnet 4.5 was released 6 months ago.
2026-02-24T18:55:55
https://i.redd.it/caldaigrohlg1.png
Pristine-Woodpecker
i.redd.it
1970-01-01T00:00:00
0
{}
1rdp2pd
false
null
t3_1rdp2pd
/r/LocalLLaMA/comments/1rdp2pd/open_vs_closed_sota_benchmark_overview/
false
false
https://preview.redd.it/…9b6d3f7577f31731
1
{'enabled': True, 'images': [{'id': 'caldaigrohlg1', 'resolutions': [{'height': 204, 'url': 'https://preview.redd.it/caldaigrohlg1.png?width=108&crop=smart&auto=webp&s=9d428e90ce5a59ab4ac4442d34042a91b5ee8ead', 'width': 108}, {'height': 408, 'url': 'https://preview.redd.it/caldaigrohlg1.png?width=216&crop=smart&auto=webp&s=bda37df85d98a21a02ee18938aaeb4b997f036dd', 'width': 216}, {'height': 605, 'url': 'https://preview.redd.it/caldaigrohlg1.png?width=320&crop=smart&auto=webp&s=f1310092e8d035fe343117fabb7531421b396bb7', 'width': 320}, {'height': 1211, 'url': 'https://preview.redd.it/caldaigrohlg1.png?width=640&crop=smart&auto=webp&s=21a262e566794adf939c8c959ece1fbea03a8673', 'width': 640}, {'height': 1817, 'url': 'https://preview.redd.it/caldaigrohlg1.png?width=960&crop=smart&auto=webp&s=e6c3a8db651f24978da43afe087714c07a3e1f33', 'width': 960}, {'height': 2044, 'url': 'https://preview.redd.it/caldaigrohlg1.png?width=1080&crop=smart&auto=webp&s=57df77f17d11b6b1b360c8656e8f7b4eaecade4f', 'width': 1080}], 'source': {'height': 3248, 'url': 'https://preview.redd.it/caldaigrohlg1.png?auto=webp&s=d0e7bec309ee51e3b836eb029255248d2cd3513b', 'width': 1716}, 'variants': {}}]}
What is the best performing Small LLM under 5 billion parameters than can be finetuned for domain specific task?
11
With performance, we are looking at 3 aspects: scalability, accuracy, and speed. If you can, please describe your experience.
2026-02-24T18:53:01
https://www.reddit.com/r/LocalLLaMA/comments/1rdozsn/what_is_the_best_performing_small_llm_under_5/
TinyVector
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rdozsn
false
null
t3_1rdozsn
/r/LocalLLaMA/comments/1rdozsn/what_is_the_best_performing_small_llm_under_5/
false
false
self
11
null
Apple hit our indie AI app with the dreaded 4.3(b) "Spam" rejection because the category is "saturated". We refuse to quit. What ONE feature should we build to prove them wrong? 🚀
0
Hey everyone. We just faced every indie developer's nightmare. Apple slapped our AI companion app with a Guideline 4.3(b) - Design - Spam rejection. They basically said there are already enough of these apps on the store. Translation: They only want the giant, heavily-filtered corporate AI apps, and they don't want indie platforms offering real freedom. They even told us to just go build a Web App instead. But we aren't backing down. We are going back to the code editor to completely revamp our app and make it undeniably unique. So we are asking the actual users: What is the ultimate feature that big corporate AI apps are terrified to build, but you desperately want? What specific memory, UI, or voice function would make Drop your wildest ideas. Let’s build the app Apple didn't want us to build! 👇
2026-02-24T18:40:09
https://www.reddit.com/gallery/1rdomjy
PassionLabAI
reddit.com
1970-01-01T00:00:00
0
{}
1rdomjy
false
null
t3_1rdomjy
/r/LocalLLaMA/comments/1rdomjy/apple_hit_our_indie_ai_app_with_the_dreaded_43b/
false
false
https://preview.redd.it/…6e9c28e472b37e4b
0
null
Steerling-8B - Inherently Interpretable Foundation Model
45
2026-02-24T18:38:58
https://www.guidelabs.ai/post/steerling-8b-base-model-release/
ScatteringSepoy
guidelabs.ai
1970-01-01T00:00:00
0
{}
1rdoldt
false
null
t3_1rdoldt
/r/LocalLLaMA/comments/1rdoldt/steerling8b_inherently_interpretable_foundation/
false
false
https://external-preview…e0a1bb05b68236ab
45
{'enabled': False, 'images': [{'id': 'W36wlT2FZ1hudJiJnCzPO3HZkbJh13qfUnXtx9cKhB4', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/W36wlT2FZ1hudJiJnCzPO3HZkbJh13qfUnXtx9cKhB4.png?width=108&crop=smart&auto=webp&s=bf79eb94119bcc41fbb34bcef106e0a0aef0cfbe', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/W36wlT2FZ1hudJiJnCzPO3HZkbJh13qfUnXtx9cKhB4.png?width=216&crop=smart&auto=webp&s=b2b90a3634f6c959fd007bfab883fd3c17fb34b1', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/W36wlT2FZ1hudJiJnCzPO3HZkbJh13qfUnXtx9cKhB4.png?width=320&crop=smart&auto=webp&s=b168649fb46323688bb1b31a5bc1e49702401e0f', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/W36wlT2FZ1hudJiJnCzPO3HZkbJh13qfUnXtx9cKhB4.png?width=640&crop=smart&auto=webp&s=9813c08be5262702b5b744b0cd48a4f3ffd847cd', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/W36wlT2FZ1hudJiJnCzPO3HZkbJh13qfUnXtx9cKhB4.png?width=960&crop=smart&auto=webp&s=f1628882ffdff20bd7a3d4461d5608ea002c8da5', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/W36wlT2FZ1hudJiJnCzPO3HZkbJh13qfUnXtx9cKhB4.png?width=1080&crop=smart&auto=webp&s=61712080e4dcb21f13dbf79706d620194dcd46a0', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/W36wlT2FZ1hudJiJnCzPO3HZkbJh13qfUnXtx9cKhB4.png?auto=webp&s=c811c58fb728a6071113f36f900d08a4907b7d45', 'width': 1200}, 'variants': {}}]}
Built a Chrome extension that runs EmbeddingGemma-300M (q4) in-browser to score HN/Reddit/X feeds — no backend, full fine-tuning loop
5
I've been running local LLMs for a while but wanted to try something different — local embeddings as a practical daily tool. Sift is a Chrome extension that loads `EmbeddingGemma-300M` (q4) via `Transformers.js` and scores every item in your HN, Reddit, and X feeds against categories you pick. Low-relevance posts get dimmed, high-relevance ones stay vivid. All inference happens in the browser — nothing leaves your machine. Technical details: * Model: `google/embeddinggemma-300m`, exported to ONNX via optimum with the full sentence-transformers pipeline (Transformer + Pooling + Dense + Normalize) as a single graph * Quantization: int8 (onnxruntime), q4 via MatMulNBits (block\_size=32, symmetric), plus a separate no-GatherElements variant for WebGPU * Runtime: Transformers.js v4 in a Chrome MV3 service worker. WebGPU when available, WASM fallback * Scoring: `cosine` similarity against category anchor embeddings, 25 built-in categories The part I'm most happy with — the fine-tuning loop: 1. Browse normally, thumbs up/down items you like or don't care about 2. Export labels as anchor/positive/negative triplet CSV 3. Fine-tune with the included Python script or a free Colab notebook (MultipleNegativesRankingLoss via sentence-transformers) 4. ONNX export produces 4 variants: fp32, int8, q4 (WASM), q4-no-gather (WebGPU) 5. Push to HuggingFace Hub or serve locally, reload in extension The fine-tuned model weights contain only numerical parameters — no training data or labels baked in. What I learned: * `torch.onnx.export()` doesn't work with Gemma3's sliding window attention (custom autograd + vmap break tracing). Had to use optimum's main\_export with library\_name='sentence\_transformers' * WebGPU needs the GatherElements-free ONNX variant or it silently fails * Chrome MV3 service workers only need wasm-unsafe-eval in CSP for WASM — no offscreen documents or sandbox iframes Open source (Apache-2.0): [https://github.com/shreyaskarnik/Sift](https://github.com/shreyaskarnik/Sift) Happy to answer questions about the ONNX export pipeline or the browser inference setup.
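For reference, the fine-tuning step in that loop can be reproduced with a few lines of sentence-transformers. This is a minimal sketch under some assumptions: the exported CSV has one anchor, positive, negative per row with no header, and your sentence-transformers version is recent enough to load EmbeddingGemma.

```python
import csv
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses

model = SentenceTransformer("google/embeddinggemma-300m")

# Triplet CSV exported from the extension: anchor,positive,negative per row (assumed layout)
examples = []
with open("sift_labels.csv", newline="", encoding="utf-8") as f:
    for anchor, positive, negative in csv.reader(f):
        examples.append(InputExample(texts=[anchor, positive, negative]))

loader = DataLoader(examples, shuffle=True, batch_size=16)
loss = losses.MultipleNegativesRankingLoss(model)  # in-batch negatives plus the explicit hard negative

model.fit(train_objectives=[(loader, loss)], epochs=1, warmup_steps=50)
model.save("sift-finetuned")  # then export to ONNX (fp32 / int8 / q4) as described above
```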
2026-02-24T18:25:37
https://v.redd.it/inrq3t8zihlg1
mmagusss
v.redd.it
1970-01-01T00:00:00
0
{}
1rdo7wb
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/inrq3t8zihlg1/DASHPlaylist.mpd?a=1774549564%2CMGI5MTY4NzliMTNlZjc0N2E2NjBiYzE1OTdhY2MzMWE1ZmRmOWRmYjI0ZmIyODAwNTk5NWRmNDQ0YTczYzcwNw%3D%3D&v=1&f=sd', 'duration': 55, 'fallback_url': 'https://v.redd.it/inrq3t8zihlg1/CMAF_1080.mp4?source=fallback', 'has_audio': False, 'height': 1080, 'hls_url': 'https://v.redd.it/inrq3t8zihlg1/HLSPlaylist.m3u8?a=1774549564%2CODhjOGFiNGMxYzIxYTllNTYzOWM2YzZjMjc0NGRlOTZkNzEzZDc2M2I5ZGEzMmVhODc4NGI3NTM0ODJlMWM0Ng%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/inrq3t8zihlg1/CMAF_96.mp4', 'transcoding_status': 'completed', 'width': 1728}}
t3_1rdo7wb
/r/LocalLLaMA/comments/1rdo7wb/built_a_chrome_extension_that_runs/
false
false
https://external-preview…5f99f71075da5e4f
5
{'enabled': False, 'images': [{'id': 'cHN4eXgwYnppaGxnMQHC9ACwIa3oN6cyu1iGPVlry1gcHc2RNUMsnt4URBnG', 'resolutions': [{'height': 67, 'url': 'https://external-preview.redd.it/cHN4eXgwYnppaGxnMQHC9ACwIa3oN6cyu1iGPVlry1gcHc2RNUMsnt4URBnG.png?width=108&crop=smart&format=pjpg&auto=webp&s=16d6ddbc44cbbaee0630220f4c8885d075aeb6f1', 'width': 108}, {'height': 135, 'url': 'https://external-preview.redd.it/cHN4eXgwYnppaGxnMQHC9ACwIa3oN6cyu1iGPVlry1gcHc2RNUMsnt4URBnG.png?width=216&crop=smart&format=pjpg&auto=webp&s=e0407f00dd0f3662d4487b140e0105bedbe4cd13', 'width': 216}, {'height': 200, 'url': 'https://external-preview.redd.it/cHN4eXgwYnppaGxnMQHC9ACwIa3oN6cyu1iGPVlry1gcHc2RNUMsnt4URBnG.png?width=320&crop=smart&format=pjpg&auto=webp&s=a7334e412f9d67e97cd6ed9a1418ca541ddf3382', 'width': 320}, {'height': 400, 'url': 'https://external-preview.redd.it/cHN4eXgwYnppaGxnMQHC9ACwIa3oN6cyu1iGPVlry1gcHc2RNUMsnt4URBnG.png?width=640&crop=smart&format=pjpg&auto=webp&s=40829abba128947b2aaf7f9efa304b823c41f453', 'width': 640}, {'height': 600, 'url': 'https://external-preview.redd.it/cHN4eXgwYnppaGxnMQHC9ACwIa3oN6cyu1iGPVlry1gcHc2RNUMsnt4URBnG.png?width=960&crop=smart&format=pjpg&auto=webp&s=9830126ef52a6abade1536b6ca2ff74e51998c03', 'width': 960}, {'height': 675, 'url': 'https://external-preview.redd.it/cHN4eXgwYnppaGxnMQHC9ACwIa3oN6cyu1iGPVlry1gcHc2RNUMsnt4URBnG.png?width=1080&crop=smart&format=pjpg&auto=webp&s=3d47223bc8b5ea5de9fff32c40d9286856019459', 'width': 1080}], 'source': {'height': 1600, 'url': 'https://external-preview.redd.it/cHN4eXgwYnppaGxnMQHC9ACwIa3oN6cyu1iGPVlry1gcHc2RNUMsnt4URBnG.png?format=pjpg&auto=webp&s=08b0b5304377744db0f1b86363d577322d5b732c', 'width': 2560}, 'variants': {}}]}
Charlotte LLM meet up
7
Can we organize a meetup for people who are interested in working on LLMs in the Charlotte area to talk?
2026-02-24T18:21:43
https://www.reddit.com/r/LocalLLaMA/comments/1rdo424/charlotte_llm_meet_up/
bankofcoinswap
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rdo424
false
null
t3_1rdo424
/r/LocalLLaMA/comments/1rdo424/charlotte_llm_meet_up/
false
false
self
7
null
Qwen 3.5 122b/35b is fire 🔥 Score comparison between Qwen 3.5 35B-A3B, GPT-5 High, Qwen 3.5 122B-A10B, and GPT-OSS 120B.
127
Benchmark Comparison

👉🔴 GPT-OSS 120B [defeated by Qwen 3.5 35B 🥳]

* MMLU-Pro: 80.8
* HLE (Humanity’s Last Exam): 14.9
* GPQA Diamond: 80.1
* IFBench: 69.0

👉🔴 Qwen 3.5 122B-A10B

* MMLU-Pro: 86.7
* HLE (Humanity’s Last Exam): 25.3 (47.5 with tools — 🏆 Winner)
* GPQA Diamond: 86.6 (🏆 Winner)
* IFBench: 76.1 (🏆 Winner)

👉🔴 Qwen 3.5 35B-A3B

* MMLU-Pro: 85.3
* HLE (Humanity’s Last Exam): 22.4 (47.4 with tools)
* GPQA Diamond: 84.2
* IFBench: 70.2

👉🔴 GPT-5 High

* MMLU-Pro: 87.1 (🏆 Winner)
* HLE (Humanity’s Last Exam): 26.5 (🏆 Winner, no tools)
* GPQA Diamond: 85.4
* IFBench: 73.1

Summary: GPT-5 [High] ≈ Qwen 3.5 122B > Qwen 3.5 35B > GPT-OSS 120B [high]

👉 Sources: OpenRouter, Artificial Analysis, Hugging Face
2026-02-24T18:19:40
https://i.redd.it/01tsyrq8ihlg1.png
9r4n4y
i.redd.it
1970-01-01T00:00:00
0
{}
1rdo1z5
false
null
t3_1rdo1z5
/r/LocalLLaMA/comments/1rdo1z5/qwen_35_122b35b_is_fire_score_comparision_between/
false
false
https://preview.redd.it/…5733172b1d9c6916
127
{'enabled': True, 'images': [{'id': '01tsyrq8ihlg1', 'resolutions': [{'height': 64, 'url': 'https://preview.redd.it/01tsyrq8ihlg1.png?width=108&crop=smart&auto=webp&s=4b72593a6f341c618c1899fcc4f9e5673ab49228', 'width': 108}, {'height': 128, 'url': 'https://preview.redd.it/01tsyrq8ihlg1.png?width=216&crop=smart&auto=webp&s=7f25c1296f6f80c79dae1c22a235996c08bf0941', 'width': 216}, {'height': 190, 'url': 'https://preview.redd.it/01tsyrq8ihlg1.png?width=320&crop=smart&auto=webp&s=58892a943eb8493f795d785f5481a8f04572dcd7', 'width': 320}, {'height': 381, 'url': 'https://preview.redd.it/01tsyrq8ihlg1.png?width=640&crop=smart&auto=webp&s=60c3fdc89c1d046a8cdd786122d9e9d61d8c1f82', 'width': 640}, {'height': 572, 'url': 'https://preview.redd.it/01tsyrq8ihlg1.png?width=960&crop=smart&auto=webp&s=e50b798df62bd186ad438d6472ec074ba683f290', 'width': 960}], 'source': {'height': 590, 'url': 'https://preview.redd.it/01tsyrq8ihlg1.png?auto=webp&s=e940e39aae472314bcb948ff58dbc1f8825cd44f', 'width': 989}, 'variants': {}}]}
Qwen3-Coder-Next vs Qwen3.5-35B-A3B vs Qwen3.5-27B - A quick coding test
84
https://preview.redd.it/… |Qwen3.5-27B|2|
2026-02-24T18:15:17
https://www.reddit.com/r/LocalLLaMA/comments/1rdnxe6/qwen3codernext_vs_qwen3535ba3b_vs_qwen3527b_a/
bobaburger
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rdnxe6
false
null
t3_1rdnxe6
/r/LocalLLaMA/comments/1rdnxe6/qwen3codernext_vs_qwen3535ba3b_vs_qwen3527b_a/
false
false
https://preview.redd.it/…de78880329384341
84
null
What plugins are you actually using daily?
0
Hey, I'm just getting into OpenClaw plugins and I love the concept. I can't wait to try more. If you use any or if you've built one yourself, drop it here. I want to test as many as I can.
2026-02-24T18:03:27
https://www.reddit.com/r/LocalLLaMA/comments/1rdnl4x/what_plugins_are_you_actually_using_daily/
Glad-Adhesiveness319
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rdnl4x
false
null
t3_1rdnl4x
/r/LocalLLaMA/comments/1rdnl4x/what_plugins_are_you_actually_using_daily/
false
false
self
0
null
Building 5000 Decentralized Industrial AI Labs — Looking for Elite AI Engineers to Lead Them
1
[removed]
2026-02-24T17:59:42
https://www.reddit.com/r/LocalLLaMA/comments/1rdnh0o/building_5000_decentralized_industrial_ai_labs/
six6622
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rdnh0o
false
null
t3_1rdnh0o
/r/LocalLLaMA/comments/1rdnh0o/building_5000_decentralized_industrial_ai_labs/
false
false
self
1
null
Built Mycelia, a desktop app that turns your AI conversations into a knowledge graph — free with Ollama, no account needed
1
You chat with your local model. Mycelia extracts structured markdown notes, auto-links concepts, and builds a searchable vault in the background. Zero manual filing, nothing leaves your machine.

[Watch Mycelia extracting a note live](https://i.redd.it/qu63t0xjehlg1.gif)

Free tier runs fully on Ollama. Plain markdown files, Obsidian-compatible.

Looking for early users — would love to hear what you think. → [mycelia.garden](https://mycelia.garden)
2026-02-24T17:58:35
https://www.reddit.com/r/LocalLLaMA/comments/1rdnfwt/built_mycelia_a_desktop_app_that_turns_your_ai/
williamtng
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rdnfwt
false
null
t3_1rdnfwt
/r/LocalLLaMA/comments/1rdnfwt/built_mycelia_a_desktop_app_that_turns_your_ai/
false
false
https://preview.redd.it/…cf6fa3a2164f4f01
1
null
Qwen3.5-35B-A3B locally
80
tested on 3090s GGUF downloaded from [https://huggingface.co/gokmakog/Qwen3.5-35B-A3B-GGUF](https://huggingface.co/gokmakog/Qwen3.5-35B-A3B-GGUF)
2026-02-24T17:58:02
https://www.reddit.com/gallery/1rdnfcu
jacek2023
reddit.com
1970-01-01T00:00:00
0
{}
1rdnfcu
false
null
t3_1rdnfcu
/r/LocalLLaMA/comments/1rdnfcu/qwen3535ba3b_locally/
false
false
https://preview.redd.it/…0242414912c87c4a
80
null
Built a persistent memory API for AI bots – register, remember, recall in 3 endpoints
1
Bots are stateless by default. Every conversation starts from zero. Built EngramPort to fix that — a Memory-as-a-Service API any bot can connect to.

3 core endpoints:

- POST /register → get your ek_bot_ API key
- POST /remember → store memories with semantic embeddings
- POST /recall → retrieve relevant memories by cosine similarity

Plus /reflect for overnight insight synthesis across stored memories.

Free tier live: [https://engram.eideticlab.com](https://engram.eideticlab.com)

Happy to answer technical questions.
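A minimal sketch of that three-endpoint flow from Python. The endpoint paths match the list above; the JSON field names and response shapes are assumptions, so check the actual docs before relying on them.

```python
import requests

BASE = "https://engram.eideticlab.com"

# 1. Register a bot and get an ek_bot_ API key (request/response field names assumed)
api_key = requests.post(f"{BASE}/register", json={"name": "demo-bot"}).json()["api_key"]
headers = {"Authorization": f"Bearer {api_key}"}

# 2. Store a memory; the service embeds it server-side
requests.post(f"{BASE}/remember", headers=headers,
              json={"text": "User prefers concise answers and dark mode."})

# 3. Recall memories relevant to a new query, ranked by cosine similarity server-side
hits = requests.post(f"{BASE}/recall", headers=headers,
                     json={"query": "how should I format my reply?", "top_k": 3}).json()
for memory in hits.get("memories", []):
    print(memory)
```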
2026-02-24T17:50:18
https://www.reddit.com/r/LocalLLaMA/comments/1rdn7nq/built_a_persistent_memory_api_for_ai_bots/
Equal-Stay6064
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rdn7nq
false
null
t3_1rdn7nq
/r/LocalLLaMA/comments/1rdn7nq/built_a_persistent_memory_api_for_ai_bots/
false
false
self
1
null
Help planning out a new home server for AI and some gaming
1
Hi all, I’m planning a machine primarily to learn and run local LLMs, and I’d really appreciate some advice before committing to hardware. I'm a Medical Doctor by profession, but learned some Software Engineering on the side and decided nothing could go wrong with having an expensive hobby.

**My main predicted use case (AI):**

* Extracting clearly stated diagnoses from medical PDFs locally (privacy reasons, GDPR, so cloud is not ideal)
* Handling abbreviations, misspellings, and structured extraction
* Some experimentation with embeddings and basic TensorFlow / PyTorch

**Constraints / assumptions:**

* As long as I stick with this sort of workload, I believe 20 GB VRAM should be enough for my foreseeable needs
* I’m not planning to train models, only inference
* System will likely run 24/7 as a home server. I'm planning to access it via my laptop through tailscale + ssh.
* I value stability, efficiency, and reliability
* I may want to scale later if needed

**Secondary uses:**

* Game streaming (max I foresee is FF7 Rebirth at 1440p, 60 fps, medium settings)
* NAS
* General homelab / experimentation

Options I’m considering:

**Option A: Desktop with RTX 4000 Ada (20 GB)**

* Pros: 20 GB VRAM, efficiency (\~130 W), blower style, designed for workstations
* Cons: Expensive per dollar of compute

**Option B: Desktop with RTX 4080 (16 GB)**

* Pros: Much faster raw performance
* Cons: Less VRAM, higher power (\~320 W), less server-oriented

**Option C: Desktop with RTX 5080 (16 GB)**

* Pros: Much faster raw performance
* Cons: Less VRAM, higher power, less server-oriented, price!

Questions:

1. For local LLM inference, how important is 20 GB vs 16 GB VRAM in practice today?
2. Would you choose RTX 4000 Ada vs 4080 for a dedicated local LLM server?
3. Is an eGPU a decent alternative so I'd only have to spend on the GPU and the enclosure, or is it better to go straight to a desktop?
4. For a 24/7 always-on AI server, do people favor workstation cards mainly for efficiency and thermals, or are there other reasons?
5. Any regrets or lessons learned from people who built similar setups?

My main goal is to build something practical, reliable, and not regret the GPU choice in 1–2 years. Thanks a lot for the help!
2026-02-24T17:46:58
https://www.reddit.com/r/LocalLLaMA/comments/1rdn49u/help_planning_out_a_new_home_server_for_ai_and/
Blues003
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rdn49u
false
null
t3_1rdn49u
/r/LocalLLaMA/comments/1rdn49u/help_planning_out_a_new_home_server_for_ai_and/
false
false
self
1
null
If you use hosted AI models, this might be worth a look
0
Idk how many of you are mixing local + hosted models, but I randomly found this and the pricing at Blackbox AI; it kinda threw me off. There’s a platform doing $2 for the first month (then $10 after). You get $20 in credits that work across a bunch of the big frontier models, plus some models that are just unlimited and don’t even burn credits. I’ve mostly been running local stuff lately, but every now and then you need the heavy models for reasoning or long context and I really don’t feel like paying for 3 diff subs at once. this seems like a cheaper way to poke at all of them without committing to $20 x 3 every month. The unlimited ones aren’t trash either. Kimi K2.5 especially surprised me, handled long docs way better than I expected. Not saying everyone should jump on it, just figured I’d share since $2 to test everything is kinda low risk.
2026-02-24T17:35:23
https://www.reddit.com/r/LocalLLaMA/comments/1rdms9f/if_you_use_hosted_ai_models_this_might_be_worth_a/
abdullah4863
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rdms9f
false
null
t3_1rdms9f
/r/LocalLLaMA/comments/1rdms9f/if_you_use_hosted_ai_models_this_might_be_worth_a/
false
false
self
0
null
Qwen/Qwen3.5-35B-A3B
1
[https://huggingface.co/Qwen/Qwen3.5-35B-A3B](https://huggingface.co/Qwen/Qwen3.5-35B-A3B)
2026-02-24T17:33:18
https://www.reddit.com/r/LocalLLaMA/comments/1rdmq6m/qwenqwen3535ba3b/
Emotional-Baker-490
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rdmq6m
false
null
t3_1rdmq6m
/r/LocalLLaMA/comments/1rdmq6m/qwenqwen3535ba3b/
false
false
self
1
{'enabled': False, 'images': [{'id': '9t9hISbgGxfk479gTZKF1XJ1oO6QhRPmNzUYpMNUbjs', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/9t9hISbgGxfk479gTZKF1XJ1oO6QhRPmNzUYpMNUbjs.png?width=108&crop=smart&auto=webp&s=d90f3b8cfc5fae78a5f6bd5852f034d7cdb38530', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/9t9hISbgGxfk479gTZKF1XJ1oO6QhRPmNzUYpMNUbjs.png?width=216&crop=smart&auto=webp&s=33de3c05ff0889634d642d1530aabe56637b8742', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/9t9hISbgGxfk479gTZKF1XJ1oO6QhRPmNzUYpMNUbjs.png?width=320&crop=smart&auto=webp&s=b3432acd83c423293f928f5a7ca1485f41b9519a', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/9t9hISbgGxfk479gTZKF1XJ1oO6QhRPmNzUYpMNUbjs.png?width=640&crop=smart&auto=webp&s=f4846b1e5fd750530b9aa43eb95e74460e90d4ec', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/9t9hISbgGxfk479gTZKF1XJ1oO6QhRPmNzUYpMNUbjs.png?width=960&crop=smart&auto=webp&s=7bafdceeee9cea3322f54cd190bf709facdd92b9', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/9t9hISbgGxfk479gTZKF1XJ1oO6QhRPmNzUYpMNUbjs.png?width=1080&crop=smart&auto=webp&s=48fc08b8107e0d200b98b29df17ee1e81bfd037f', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/9t9hISbgGxfk479gTZKF1XJ1oO6QhRPmNzUYpMNUbjs.png?auto=webp&s=7232ca7fdd47e438d438373a83a96ea70bd229e0', 'width': 1200}, 'variants': {}}]}
I have 1 day to fine tune an LLM that can perform entity extraction on a list of items. Which is the best model to do this? Requirements below
0
1) Should be able to run on 24GB VRAM, max 32
2) Inference speed is of utmost priority as I have 100GB of website data
3) Ideally the output should be in a structured format and also tell you if the entity is actually being described.

For example, text: "Ronaldo and Messi are the greatest soccer players in the world. However, we don't have enough information about Baseball. This page is not about Tom Brady"

Entities: ['Ronaldo', 'Messi', 'Tom Brady', 'soccer', 'baseball']

Output:
[{Entity: Ronaldo, Type: Footballer, Status: Present},
 {Entity: Messi, Type: Footballer, Status: Present},
 {Entity: soccer, Type: Game, Status: Present},
 {Entity: Baseball, Type: Game, Status: Unsure},
 {Entity: Tom Brady, Type: American Footballer, Status: Absent}]
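Since the desired output is structured, here is a quick sketch of how a prompted baseline could produce that format against any OpenAI-compatible local server before committing to fine-tuning. The base URL and model name are placeholders, and in practice the returned JSON would need validation/repair.

```python
import json
from openai import OpenAI

# Placeholder endpoint and model name: swap in whatever you serve with vLLM / llama.cpp / Ollama
client = OpenAI(base_url="http://localhost:8000/v1", api_key="none")

def extract(text: str, entities: list[str]) -> list[dict]:
    prompt = (
        "For each entity, report its Type and whether the text actually describes it "
        "(Status: Present, Absent, or Unsure).\n"
        f"Text: {text}\n"
        f"Entities: {entities}\n"
        'Answer with only a JSON list of objects like '
        '{"Entity": "...", "Type": "...", "Status": "..."}.'
    )
    resp = client.chat.completions.create(
        model="my-local-model",  # placeholder served-model alias
        messages=[{"role": "user", "content": prompt}],
        temperature=0.0,
    )
    # A real pipeline would validate/repair this; models sometimes wrap the JSON in prose
    return json.loads(resp.choices[0].message.content)

print(extract(
    "Ronaldo and Messi are the greatest soccer players in the world. "
    "However, we don't have enough information about Baseball. This page is not about Tom Brady.",
    ["Ronaldo", "Messi", "Tom Brady", "soccer", "baseball"],
))
```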
2026-02-24T17:29:42
https://www.reddit.com/r/LocalLLaMA/comments/1rdmmht/i_have_1_day_to_fine_tune_an_llm_that_can_perform/
TinyVector
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rdmmht
false
null
t3_1rdmmht
/r/LocalLLaMA/comments/1rdmmht/i_have_1_day_to_fine_tune_an_llm_that_can_perform/
false
false
self
0
null
Has anyone enabled GPU/NPU for llama.cpp on Android 15 / HyperOS?
2
Hi everyone, I’m trying to run llama.cpp on Android 15 / HyperOS via Termux with Vulkan or OpenCL, but my builds keep failing. Right now my device is not rooted, and I’m wondering if root is necessary to get GPU or NPU acceleration working.

Has anyone successfully:

* Built llama.cpp with GPU or NPU acceleration on Android?
* Managed to run it without rooting?
* Used specific flags, patches, or workarounds for hardware acceleration?

I’d love advice on whether rooting is worth it, or if there’s a way to enable hardware acceleration without it. Thanks in advance!
2026-02-24T17:26:22
https://www.reddit.com/r/LocalLLaMA/comments/1rdmj89/has_anyone_enabled_gpunpu_for_llamacpp_on_android/
NeoLogic_Dev
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rdmj89
false
null
t3_1rdmj89
/r/LocalLLaMA/comments/1rdmj89/has_anyone_enabled_gpunpu_for_llamacpp_on_android/
false
false
self
2
null
Threat data from 91K production AI agent interactions — what self-hosted deployments should know about tool abuse and agent attacks (Feb 2026 to date)
0
Figured this community would be interested in real-world threat data, especially since many of you run agents locally with tool-calling. February data: 91,284 interactions, 47 deployments, 35,711 threats detected. Detection model is Gemma-based (5-head multilabel classifier).

**WHAT MATTERS FOR SELF-HOSTED DEPLOYMENTS**

Tool abuse is the fastest-growing threat. If you're running agents with tool-calling (function calling, MCP servers, code execution, file system access), tool/command abuse nearly doubled to 14.5%. The dominant pattern is tool chain escalation: a harmless read call followed by a write or execute. Most local setups give the agent tool access and trust it. The data says that's not enough.

Agent goal hijacking targets the planning phase. Doubled to 6.9%. If you're running autonomous agent loops, attackers can inject objectives during reasoning. Relevant for local setups with less monitoring on agent state.

RAG poisoning shifted to metadata. 12.0%, up from 10.0%. If you're running local RAG, the new pattern targets document metadata (titles, authors, annotations) rather than content. Most people sanitize content but pass metadata through as-is.

Multimodal injection is new. 2.3% — instructions hidden in images and PDFs. If your local agent processes these files, text-only safety scanning misses them.

DETECTION FOR LOCAL: Our pipeline is two-layer. L1 is pattern matching (218 rules, sub-ms, runs entirely locally). L2 is Gemma-based. The full community edition is open source: [github.com/raxe-ai/raxe-ce](http://github.com/raxe-ai/raxe-ce)

NUMBERS:

| Threat | Feb % | MoM Change |
| --- | --- | --- |
| Data Exfiltration | 18.0% | -1.2 |
| Tool/Command Abuse | 14.5% | +6.4 |
| RAG/Context Attack | 12.0% | +2.0 |
| Jailbreak | 11.0% | -1.3 |
| Prompt Injection | 8.1% | -0.7 |
| Agent Goal Hijack | 6.9% | +3.3 |
| Inter-Agent Attack | 5.0% | +1.6 |

Full report: [https://raxe.ai/labs/threat-intelligence/latest](https://raxe.ai/labs/threat-intelligence/latest)
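For anyone who wants to roll a minimal version of the two-layer idea themselves, here is an illustrative sketch. It is not the raxe-ce rule set or the Gemma classifier from the report; the regexes and the model checkpoint name are placeholders.

```python
import re
from transformers import pipeline

# L1: cheap local pattern rules (illustrative, not the real 218-rule set)
L1_RULES = {
    "tool_chain_escalation": re.compile(r"\bread_file\b.*\b(write_file|exec|run_shell)\b", re.S),
    "goal_hijack": re.compile(r"ignore (all|previous) (instructions|objectives)", re.I),
}

def l1_scan(text: str) -> list[str]:
    """Pattern layer: sub-millisecond, runs entirely locally."""
    return [name for name, rx in L1_RULES.items() if rx.search(text)]

# L2: a multilabel text classifier; this checkpoint name is a placeholder, not a real model
l2 = pipeline("text-classification", model="some-org/agent-threat-classifier", top_k=None)

def scan(agent_step: str) -> dict:
    hits = l1_scan(agent_step)
    if hits:  # fast path: a pattern matched, no need to run the model
        return {"layer": "L1", "labels": hits}
    return {"layer": "L2", "labels": l2(agent_step)}
```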
2026-02-24T17:25:45
https://www.reddit.com/r/LocalLLaMA/comments/1rdmilq/threat_data_from_91k_production_ai_agent/
cyberamyntas
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rdmilq
false
null
t3_1rdmilq
/r/LocalLLaMA/comments/1rdmilq/threat_data_from_91k_production_ai_agent/
false
false
self
0
{'enabled': False, 'images': [{'id': 'G5gLPnkp1UF_Jw7GzEqaPV9s4I4y5HWrFZy5vPO1oW0', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/G5gLPnkp1UF_Jw7GzEqaPV9s4I4y5HWrFZy5vPO1oW0.png?width=108&crop=smart&auto=webp&s=6345530825c120c60582a19d75327b33bb383393', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/G5gLPnkp1UF_Jw7GzEqaPV9s4I4y5HWrFZy5vPO1oW0.png?width=216&crop=smart&auto=webp&s=27fe63745f84537aec3a9ed7c3cc9f4380cb143f', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/G5gLPnkp1UF_Jw7GzEqaPV9s4I4y5HWrFZy5vPO1oW0.png?width=320&crop=smart&auto=webp&s=e4052843dc7b84d023f2f547cb5a6ade60480543', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/G5gLPnkp1UF_Jw7GzEqaPV9s4I4y5HWrFZy5vPO1oW0.png?width=640&crop=smart&auto=webp&s=9701d8f317b14434672d7dcbad596f2244a0f033', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/G5gLPnkp1UF_Jw7GzEqaPV9s4I4y5HWrFZy5vPO1oW0.png?width=960&crop=smart&auto=webp&s=9ca8646b0a83d8cb7afe53a495d2b0b37320f7b9', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/G5gLPnkp1UF_Jw7GzEqaPV9s4I4y5HWrFZy5vPO1oW0.png?width=1080&crop=smart&auto=webp&s=80b6c03fc02ec9b119b0d632dbe83648082e8b99', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/G5gLPnkp1UF_Jw7GzEqaPV9s4I4y5HWrFZy5vPO1oW0.png?auto=webp&s=043416220432e9ff999092928613e80f1b0f77dd', 'width': 1200}, 'variants': {}}]}
Run local LLMs in Flutter with <25ms inter-token latency and zero cloud dependencies
1
Most mobile AI demos are "benchmark bursts": they look great for 30 seconds but crash during real usage due to thermal spikes or RSS memory peaks. I've open sourced [Edge Veda](https://github.com/ramanujammv1988/edge-veda), a supervised runtime for Flutter that treats on-device AI as a physical hardware problem. It moves beyond simple FFI wrappers to provide a stable, production-ready environment.

**From a technical architecture POV:**

1. **Background Isolate Workers:** Dart FFI is synchronous in nature and would freeze your UI, so we implemented persistent workers where native pointers stay in the background. Your UI remains at a smooth 60fps even during heavy 3 tok/s inference.
2. **Supervised Runtime Logic:** we wrote a C++ `memory_guard` from scratch to monitor system-level RSS. When the OS signals memory pressure, we apply a **"Compute Budget Contract"** to trim the KV cache instead of letting the process die.
3. **Smart Model Advisor:** checks whether the model will fit before the user hits the download button.

I have included the Performance Flight Recorder logs in the repo so you can audit the frame-by-frame thermal and latency telemetry yourself.
2026-02-24T17:20:49
https://i.redd.it/ndkztqf16hlg1.gif
Mundane-Tea-3488
i.redd.it
1970-01-01T00:00:00
0
{}
1rdmdo2
false
null
t3_1rdmdo2
/r/LocalLLaMA/comments/1rdmdo2/run_local_llms_in_flutter_with_25ms_intertoken/
false
false
https://preview.redd.it/…346172a0f6370627
1
{'enabled': True, 'images': [{'id': 'ndkztqf16hlg1', 'resolutions': [{'height': 216, 'url': 'https://preview.redd.it/ndkztqf16hlg1.gif?width=108&crop=smart&format=png8&s=841134325ba21d902fd517130b2a2e7dcc2037a1', 'width': 108}, {'height': 432, 'url': 'https://preview.redd.it/ndkztqf16hlg1.gif?width=216&crop=smart&format=png8&s=79422b84cf71c8c6c14e009bd03b848842445ce2', 'width': 216}, {'height': 640, 'url': 'https://preview.redd.it/ndkztqf16hlg1.gif?width=320&crop=smart&format=png8&s=eb7154baca03bada7331db750d09c073f837e801', 'width': 320}], 'source': {'height': 848, 'url': 'https://preview.redd.it/ndkztqf16hlg1.gif?format=png8&s=8f1e879a74edb2996a387a629d5ce159b3ab2919', 'width': 384}, 'variants': {'gif': {'resolutions': [{'height': 216, 'url': 'https://preview.redd.it/ndkztqf16hlg1.gif?width=108&crop=smart&s=51b4114bb3faf7889dca9fd36f0d4e96c57034e5', 'width': 108}, {'height': 432, 'url': 'https://preview.redd.it/ndkztqf16hlg1.gif?width=216&crop=smart&s=9d4db2a6c40a0cd53df5b70af657dba81359788d', 'width': 216}, {'height': 640, 'url': 'https://preview.redd.it/ndkztqf16hlg1.gif?width=320&crop=smart&s=eb8984c818bb794cb911c23da6d7dc6e89ea2715', 'width': 320}], 'source': {'height': 848, 'url': 'https://preview.redd.it/ndkztqf16hlg1.gif?s=f619fbe046aa115e02eb377fcfaa361c3ed7538b', 'width': 384}}, 'mp4': {'resolutions': [{'height': 216, 'url': 'https://preview.redd.it/ndkztqf16hlg1.gif?width=108&format=mp4&s=f4e3051f7cc5a5dae4d5204cc1a837d3efcb5356', 'width': 108}, {'height': 432, 'url': 'https://preview.redd.it/ndkztqf16hlg1.gif?width=216&format=mp4&s=143080eb26010909ea9623576ab0738dd61cc17b', 'width': 216}, {'height': 640, 'url': 'https://preview.redd.it/ndkztqf16hlg1.gif?width=320&format=mp4&s=8713ec5f8fdcd153aec5d1b5db53956839ea7936', 'width': 320}], 'source': {'height': 848, 'url': 'https://preview.redd.it/ndkztqf16hlg1.gif?format=mp4&s=4fae46f08667e75a89f0edb323be75a07b4b79b2', 'width': 384}}}}]}
Should I root to enable GPU/NPU for llama.cpp on Android 15 / HyperOS?
1
[deleted]
2026-02-24T17:19:10
[deleted]
1970-01-01T00:00:00
0
{}
1rdmc1c
false
null
t3_1rdmc1c
/r/LocalLLaMA/comments/1rdmc1c/should_i_root_to_enable_gpunpu_for_llamacpp_on/
false
false
default
1
null
Qwen3.5 - The middle child's 122B-A10B benchmarks looking seriously impressive - on par or edges out gpt-5-mini consistently
116
Qwen3.5-122B-A10B generally comes out ahead of gpt-5-mini and gpt-oss-120b across most benchmarks.

**vs GPT-5-mini:** Qwen3.5 wins on knowledge (MMLU-Pro 86.7 vs 83.7), STEM reasoning (GPQA Diamond 86.6 vs 82.8), agentic tasks (BFCL-V4 72.2 vs 55.5), and vision tasks (MathVision 86.2 vs 71.9). GPT-5-mini is only competitive in a few coding benchmarks and translation.

**vs GPT-OSS-120B:** Qwen3.5 wins more decisively. GPT-OSS-120B holds its own in competitive coding (LiveCodeBench 82.7 vs 78.9) but falls behind significantly on knowledge, agents, vision, and multilingual tasks.

**TL;DR:** Qwen3.5-122B-A10B is the strongest of the three overall. GPT-5-mini is its closest rival in coding/translation. GPT-OSS-120B trails outside of coding.

Let's see if the quants hold up to the benchmarks.
2026-02-24T17:18:38
https://www.reddit.com/r/LocalLLaMA/comments/1rdmbhv/qwen35_the_middle_childs_122ba10b_benchmarks/
carteakey
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rdmbhv
false
null
t3_1rdmbhv
/r/LocalLLaMA/comments/1rdmbhv/qwen35_the_middle_childs_122ba10b_benchmarks/
false
false
self
116
null
I originally thought the speed would be painfully slow if I didn't offload all layers to the GPU with the --n-gpu-layers parameter.. But now, this performance actually seems acceptable compared to those smaller models that keep throwing errors all the time in AI agent use cases.
3
My system specs: * AMD Ryzen 5 7600 * RX 9060 XT 16GB * 32GB RAM
2026-02-24T17:16:19
https://i.redd.it/6uybvtn17hlg1.png
BitOk4326
i.redd.it
1970-01-01T00:00:00
0
{}
1rdm984
false
null
t3_1rdm984
/r/LocalLLaMA/comments/1rdm984/i_originally_thought_the_speed_would_be_painfully/
false
false
https://preview.redd.it/…030341b113c8c4d9
3
{'enabled': True, 'images': [{'id': '6uybvtn17hlg1', 'resolutions': [{'height': 81, 'url': 'https://preview.redd.it/6uybvtn17hlg1.png?width=108&crop=smart&auto=webp&s=324c74506e76f4efa0e5307b5f6980f1df944022', 'width': 108}, {'height': 162, 'url': 'https://preview.redd.it/6uybvtn17hlg1.png?width=216&crop=smart&auto=webp&s=b07575c23f2ddc503bfd509014c337e015f5399d', 'width': 216}, {'height': 240, 'url': 'https://preview.redd.it/6uybvtn17hlg1.png?width=320&crop=smart&auto=webp&s=f8bb266273702f5f8f6124a379b5f08a10634df3', 'width': 320}, {'height': 480, 'url': 'https://preview.redd.it/6uybvtn17hlg1.png?width=640&crop=smart&auto=webp&s=7345bc3b0319bcb37151f00231be8e842fdf534f', 'width': 640}, {'height': 720, 'url': 'https://preview.redd.it/6uybvtn17hlg1.png?width=960&crop=smart&auto=webp&s=535988fa2aa0f12e28ee74667f5a05358fa11769', 'width': 960}, {'height': 810, 'url': 'https://preview.redd.it/6uybvtn17hlg1.png?width=1080&crop=smart&auto=webp&s=6a262fd417aa155e9afc74aab171ceab007e73ff', 'width': 1080}], 'source': {'height': 1162, 'url': 'https://preview.redd.it/6uybvtn17hlg1.png?auto=webp&s=bc29a9584d71aae365c71c774198db50efed6795', 'width': 1548}, 'variants': {}}]}
trying to convince llama llama3.2:1b its actually 2026
3
https://preview.redd.it/…models are funny
2026-02-24T17:10:28
https://www.reddit.com/r/LocalLLaMA/comments/1rdm36b/trying_to_convince_llama_llama321b_its_actually/
Substantial_Set5836
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rdm36b
false
null
t3_1rdm36b
/r/LocalLLaMA/comments/1rdm36b/trying_to_convince_llama_llama321b_its_actually/
false
false
https://preview.redd.it/…590c7d783d1cff0c
3
null
Agentic RAG for Dummies v2.0
9
Hey everyone! I've been working on **Agentic RAG for Dummies**, an open-source project that shows how to build a modular Agentic RAG system with LangGraph — and today I'm releasing v2.0.

The goal of the project is to bridge the gap between basic RAG tutorials and real, extensible agent-driven systems. It supports any LLM provider (Ollama, OpenAI, Anthropic, Google) and includes a step-by-step notebook for learning + a modular Python project for building.

## What's new in v2.0

🧠 **Context Compression** — The agent now compresses its working memory when the context exceeds a configurable token threshold, keeping retrieval loops lean and preventing redundant tool calls. Both the threshold and the growth factor are fully tunable.

🛑 **Agent Limits & Fallback Response** — Hard caps on tool invocations and reasoning iterations ensure the agent never loops indefinitely. When a limit is hit, instead of failing silently, the agent falls back to a dedicated response node and generates the best possible answer from everything retrieved so far.

## Core features

- Hierarchical indexing (parent/child chunks) with hybrid search via Qdrant
- Conversation memory across questions
- Human-in-the-loop query clarification
- Multi-agent map-reduce for parallel sub-query execution
- Self-correction when retrieval results are insufficient
- Works fully local with Ollama

There's also a Google Colab notebook if you want to try it without setting anything up locally.

GitHub: https://github.com/GiovanniPasq/agentic-rag-for-dummies
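To make the context-compression idea concrete, here is a plain-Python sketch of the trigger logic. It is a conceptual illustration, not the project's actual LangGraph node: the token count is a crude word-based proxy and `llm_summarize` stands in for whatever model call you use.

```python
def maybe_compress(messages, llm_summarize, threshold_tokens=8000, growth_factor=1.5):
    """Compress working memory once it passes the threshold, then raise the threshold
    by `growth_factor` so the agent does not re-compress on every subsequent turn."""
    n_tokens = sum(len(m["content"].split()) for m in messages)  # crude proxy for a real tokenizer
    if n_tokens <= threshold_tokens:
        return messages, threshold_tokens
    summary = llm_summarize(
        "Summarize the retrieved evidence so far, keeping source citations:\n\n"
        + "\n\n".join(m["content"] for m in messages[1:])
    )
    # Keep the original system message, replace everything else with the compressed summary
    compressed = [messages[0], {"role": "system", "content": f"Compressed context: {summary}"}]
    return compressed, int(threshold_tokens * growth_factor)
```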
2026-02-24T17:01:16
https://www.reddit.com/r/LocalLLaMA/comments/1rdltkd/agentic_rag_for_dummies_v20/
CapitalShake3085
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rdltkd
false
null
t3_1rdltkd
/r/LocalLLaMA/comments/1rdltkd/agentic_rag_for_dummies_v20/
false
false
self
9
{'enabled': False, 'images': [{'id': '0hPxFGYT6YCWrY0F2YbW20jF4ADCQ4tcrjmzUBuJCvI', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/0hPxFGYT6YCWrY0F2YbW20jF4ADCQ4tcrjmzUBuJCvI.png?width=108&crop=smart&auto=webp&s=52393dd52ceb36316813dc1180f847ab0902aa00', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/0hPxFGYT6YCWrY0F2YbW20jF4ADCQ4tcrjmzUBuJCvI.png?width=216&crop=smart&auto=webp&s=9026049f274a27d84d05d2dcfbc7407a117bd452', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/0hPxFGYT6YCWrY0F2YbW20jF4ADCQ4tcrjmzUBuJCvI.png?width=320&crop=smart&auto=webp&s=c3f839be2c22888dd81203ba9bc143fe0b206d38', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/0hPxFGYT6YCWrY0F2YbW20jF4ADCQ4tcrjmzUBuJCvI.png?width=640&crop=smart&auto=webp&s=4ec1d00741ca04de8b5967b99e8f18677cee346a', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/0hPxFGYT6YCWrY0F2YbW20jF4ADCQ4tcrjmzUBuJCvI.png?width=960&crop=smart&auto=webp&s=f0080c068d5b39dab995669d90bd64d111f7bed9', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/0hPxFGYT6YCWrY0F2YbW20jF4ADCQ4tcrjmzUBuJCvI.png?width=1080&crop=smart&auto=webp&s=7f8bf5306b8f22d6fe87f7adca80f44423dcb9b1', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/0hPxFGYT6YCWrY0F2YbW20jF4ADCQ4tcrjmzUBuJCvI.png?auto=webp&s=77b6154058783af13791765732f7287e4710e9f8', 'width': 1200}, 'variants': {}}]}
My theory on all the negative Chinese AI media coverage right now. It's about the stock market, investor panic, and the upcoming release of Deepseek V4.
15
Everywhere you look right now in the media, the news cycle is dominated by attacks on Chinese AI labs: saying they trained on illegal Nvidia GPUs, that they can only do what they do because they distill on American model companies' responses, that they lack any true capability of internal innovation and can only copy what they see. I have not seen this many coordinated attacks against Chinese AI labs before, although after Deepseek was released last year there were definitely attacks.

I've been thinking about this barrage of negative coverage at this very moment from every single American AI lab, plus Nvidia (all at the same time), and it occurred to me that the last time Deepseek launched a model there was massive investor panic. And what is expected to happen any time now? Yep, Deepseek is expected to release their anticipated V4 version of Deepseek.

I believe this timing of negative coverage is specifically designed to drown out any media attention on the upcoming release. Nvidia and the AI companies don't want a repeat of last year, specifically the investor panic, as they try to raise record amounts for their own AI. And Nvidia, Google, etc. would rather not have their stock values decline by double digits. So they are manufacturing FUD to try to prevent it.

Just think about the timing of all this negative media posting when you see it, and look through the FUD to see the real fear based on historical evidence before buying into it. If I were the company behind Deepseek, I'd hold my release of Deepseek for a while until the media circus dies down, now that the American AI industry has shot all their bullets.
2026-02-24T17:00:17
https://www.reddit.com/r/LocalLLaMA/comments/1rdlsgq/my_theory_on_all_the_negative_chinese_ai_media/
awebb78
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rdlsgq
false
null
t3_1rdlsgq
/r/LocalLLaMA/comments/1rdlsgq/my_theory_on_all_the_negative_chinese_ai_media/
false
false
self
15
null
LLM Council - framework for multi-LLM critique + consensus evaluation
6
Open source repo: [https://github.com/abhishekgandhi-neo/llm_council](https://github.com/abhishekgandhi-neo/llm_council)

This is a small framework we internally built for running multiple LLMs (local or API) on the same prompt, letting them critique each other, and producing a final structured answer. It’s mainly intended for evaluation and reliability experiments with OSS models.

**Why this can be useful for local models**

When comparing local models, raw accuracy numbers don’t always show reasoning errors or hallucinations. A critique phase helps surface disagreements and blind spots. Useful for:

• comparing local models on your own dataset
• testing quantization impact
• RAG validation with local embeddings
• model-as-judge experiments
• auto-labeling datasets

**Practical details**

• Async parallel calls so latency is close to a single model call
• Structured outputs with each model’s answer, critiques, and final synthesis
• Provider-agnostic configs so you can mix Ollama/vLLM models with API ones
• Includes basics like retries, timeouts, and batch runs for eval workflows

I'm keen to hear what council or aggregation strategies worked well for small local models vs larger ones.
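A minimal sketch of the "answer in parallel, then critique" pattern with asyncio. This is not the llm_council API itself; the base URLs and served-model aliases are placeholders for whatever local or API endpoints you configure.

```python
import asyncio
from openai import AsyncOpenAI

# Placeholder endpoints; any OpenAI-compatible server (vLLM, llama.cpp, Ollama) works here
clients = {
    "local-qwen":  AsyncOpenAI(base_url="http://localhost:8000/v1", api_key="x"),
    "local-llama": AsyncOpenAI(base_url="http://localhost:8001/v1", api_key="x"),
}

async def ask(name: str, client: AsyncOpenAI, prompt: str):
    # Assumes each server exposes its model under the same alias we use as the dict key
    resp = await client.chat.completions.create(
        model=name, messages=[{"role": "user", "content": prompt}])
    return name, resp.choices[0].message.content

async def council(prompt: str) -> dict:
    # Round 1: all members answer concurrently, so latency is close to the slowest single call
    answers = dict(await asyncio.gather(*(ask(n, c, prompt) for n, c in clients.items())))
    # Round 2: each member critiques the pooled answers in one more parallel round
    critique_prompt = prompt + "\n\nCritique these answers:\n" + "\n---\n".join(answers.values())
    critiques = dict(await asyncio.gather(*(ask(n, c, critique_prompt) for n, c in clients.items())))
    return {"answers": answers, "critiques": critiques}

print(asyncio.run(council("Is 2**31 - 1 prime?")))
```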
2026-02-24T16:58:38
https://www.reddit.com/r/LocalLLaMA/comments/1rdlqs9/llm_council_framework_for_multillm_critique/
gvij
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rdlqs9
false
null
t3_1rdlqs9
/r/LocalLLaMA/comments/1rdlqs9/llm_council_framework_for_multillm_critique/
false
false
self
6
{'enabled': False, 'images': [{'id': 'hlPaH0bty-Xtyz1SM4t1F-PSs4-_i2cFykjVFls7zjs', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/hlPaH0bty-Xtyz1SM4t1F-PSs4-_i2cFykjVFls7zjs.png?width=108&crop=smart&auto=webp&s=89f3fec74e60362984c33a7dd7e3221f9d2eea81', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/hlPaH0bty-Xtyz1SM4t1F-PSs4-_i2cFykjVFls7zjs.png?width=216&crop=smart&auto=webp&s=a63fa7a704bcd1d386462944786b573d5b822a58', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/hlPaH0bty-Xtyz1SM4t1F-PSs4-_i2cFykjVFls7zjs.png?width=320&crop=smart&auto=webp&s=4350b4e9de4c265bf7f041372913a6c84378f4c4', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/hlPaH0bty-Xtyz1SM4t1F-PSs4-_i2cFykjVFls7zjs.png?width=640&crop=smart&auto=webp&s=97dcd6439bf62c8d78b4e59beec7947c5555bd4d', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/hlPaH0bty-Xtyz1SM4t1F-PSs4-_i2cFykjVFls7zjs.png?width=960&crop=smart&auto=webp&s=fc1aaa314a4a4289d7797d777777576b0adfdcef', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/hlPaH0bty-Xtyz1SM4t1F-PSs4-_i2cFykjVFls7zjs.png?width=1080&crop=smart&auto=webp&s=8e2137af59a1890ee37b60a3876ce7e958ed1a21', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/hlPaH0bty-Xtyz1SM4t1F-PSs4-_i2cFykjVFls7zjs.png?auto=webp&s=c964068f6d82f47154826f13dd4fccd00e4495bf', 'width': 1200}, 'variants': {}}]}
UnSloth Qwen 3.5 27b out
80
https://huggingface.co/collections/unsloth/qwen35
2026-02-24T16:54:43
https://www.reddit.com/r/LocalLLaMA/comments/1rdlmrb/unsloth_qwen_35_27b_out/
KittyPigeon
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rdlmrb
false
null
t3_1rdlmrb
/r/LocalLLaMA/comments/1rdlmrb/unsloth_qwen_35_27b_out/
false
false
self
80
{'enabled': False, 'images': [{'id': 'dlDFzALy1O-EBRHN-g1NVeXL1TkSB16uGphZF5pl_bg', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/dlDFzALy1O-EBRHN-g1NVeXL1TkSB16uGphZF5pl_bg.png?width=108&crop=smart&auto=webp&s=bc22945ffd1a5b4538e9461f0008217c12ab36d5', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/dlDFzALy1O-EBRHN-g1NVeXL1TkSB16uGphZF5pl_bg.png?width=216&crop=smart&auto=webp&s=8424da8798c0aaa1cc507342283deec7ecab8102', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/dlDFzALy1O-EBRHN-g1NVeXL1TkSB16uGphZF5pl_bg.png?width=320&crop=smart&auto=webp&s=bd0b841f63efa2bbabe550c13942eb8faa7dc3e5', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/dlDFzALy1O-EBRHN-g1NVeXL1TkSB16uGphZF5pl_bg.png?width=640&crop=smart&auto=webp&s=a0cabe321c52951854b9dbc3bdae8efaf50806ae', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/dlDFzALy1O-EBRHN-g1NVeXL1TkSB16uGphZF5pl_bg.png?width=960&crop=smart&auto=webp&s=64216464a61550631721bc3d991b1aa3d2d44638', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/dlDFzALy1O-EBRHN-g1NVeXL1TkSB16uGphZF5pl_bg.png?width=1080&crop=smart&auto=webp&s=f001bcc6b957a406d088f5dc2e14398d2a1b171d', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/dlDFzALy1O-EBRHN-g1NVeXL1TkSB16uGphZF5pl_bg.png?auto=webp&s=0a38314dbac4d5a970eb60097f005a25f9562b60', 'width': 1200}, 'variants': {}}]}
"Agentic Gaming" — local LLMs, local TTS, local image gen, local embeddings, 80+ orchestrated AI tasks... an update on Synthasia, and an honest deep dive into what I'm actually trying to build
1
[removed]
2026-02-24T16:52:09
https://www.reddit.com/r/LocalLLaMA/comments/1rdljzr/agentic_gaming_local_llms_local_tts_local_image/
orblabs
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rdljzr
false
null
t3_1rdljzr
/r/LocalLLaMA/comments/1rdljzr/agentic_gaming_local_llms_local_tts_local_image/
false
false
https://preview.redd.it/…81ec67a1184c7544
1
null
They're out on HF!! (qwen3.5)
1
[deleted]
2026-02-24T16:49:16
[deleted]
1970-01-01T00:00:00
0
{}
1rdlh1e
false
null
t3_1rdlh1e
/r/LocalLLaMA/comments/1rdlh1e/theyre_out_on_hf_qwen35/
false
false
default
1
null
Small Qwen Models OUT!!
200
[https://huggingface.co/Qwen/Qwen3.5-35B-A3B](https://huggingface.co/Qwen/Qwen3.5-35B-A3B)
2026-02-24T16:46:04
https://www.reddit.com/r/LocalLLaMA/comments/1rdldt6/small_qwen_models_out/
Wooden-Deer-1276
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rdldt6
false
null
t3_1rdldt6
/r/LocalLLaMA/comments/1rdldt6/small_qwen_models_out/
false
false
https://external-preview…6e02703938fcc784
200
null
New Qwen 3.5 models are online on HF
43
2026-02-24T16:44:47
https://huggingface.co/Qwen/Qwen3.5-35B-A3B/tree/main
matteogeniaccio
huggingface.co
1970-01-01T00:00:00
0
{}
1rdlck5
false
null
t3_1rdlck5
/r/LocalLLaMA/comments/1rdlck5/new_qwen_35_models_are_online_on_hf/
false
false
https://external-preview…6e02703938fcc784
43
{'enabled': False, 'images': [{'id': '9t9hISbgGxfk479gTZKF1XJ1oO6QhRPmNzUYpMNUbjs', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/9t9hISbgGxfk479gTZKF1XJ1oO6QhRPmNzUYpMNUbjs.png?width=108&crop=smart&auto=webp&s=d90f3b8cfc5fae78a5f6bd5852f034d7cdb38530', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/9t9hISbgGxfk479gTZKF1XJ1oO6QhRPmNzUYpMNUbjs.png?width=216&crop=smart&auto=webp&s=33de3c05ff0889634d642d1530aabe56637b8742', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/9t9hISbgGxfk479gTZKF1XJ1oO6QhRPmNzUYpMNUbjs.png?width=320&crop=smart&auto=webp&s=b3432acd83c423293f928f5a7ca1485f41b9519a', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/9t9hISbgGxfk479gTZKF1XJ1oO6QhRPmNzUYpMNUbjs.png?width=640&crop=smart&auto=webp&s=f4846b1e5fd750530b9aa43eb95e74460e90d4ec', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/9t9hISbgGxfk479gTZKF1XJ1oO6QhRPmNzUYpMNUbjs.png?width=960&crop=smart&auto=webp&s=7bafdceeee9cea3322f54cd190bf709facdd92b9', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/9t9hISbgGxfk479gTZKF1XJ1oO6QhRPmNzUYpMNUbjs.png?width=1080&crop=smart&auto=webp&s=48fc08b8107e0d200b98b29df17ee1e81bfd037f', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/9t9hISbgGxfk479gTZKF1XJ1oO6QhRPmNzUYpMNUbjs.png?auto=webp&s=7232ca7fdd47e438d438373a83a96ea70bd229e0', 'width': 1200}, 'variants': {}}]}
Qwen/Qwen3.5-122B-A10B · Hugging Face
594
2026-02-24T16:44:13
https://huggingface.co/Qwen/Qwen3.5-122B-A10B
coder543
huggingface.co
1970-01-01T00:00:00
0
{}
1rdlc02
false
null
t3_1rdlc02
/r/LocalLLaMA/comments/1rdlc02/qwenqwen35122ba10b_hugging_face/
false
false
https://external-preview…d200f04112a3c300
594
{'enabled': False, 'images': [{'id': 'jXshLXVh7iCkI_DkUnvVFkKtp2L9P6wekJnwAzaRzjM', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/jXshLXVh7iCkI_DkUnvVFkKtp2L9P6wekJnwAzaRzjM.png?width=108&crop=smart&auto=webp&s=13efe52518ada7a7f6489c04b897cc0fddefeb39', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/jXshLXVh7iCkI_DkUnvVFkKtp2L9P6wekJnwAzaRzjM.png?width=216&crop=smart&auto=webp&s=d13ee5f34f9d29b0aad58e9c2541c9c23ae6da75', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/jXshLXVh7iCkI_DkUnvVFkKtp2L9P6wekJnwAzaRzjM.png?width=320&crop=smart&auto=webp&s=5c8e310a28a47bdd90f64d3a77f85e86dbc0d826', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/jXshLXVh7iCkI_DkUnvVFkKtp2L9P6wekJnwAzaRzjM.png?width=640&crop=smart&auto=webp&s=298bac8d8df642a16a7b098a721723a8766a21d8', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/jXshLXVh7iCkI_DkUnvVFkKtp2L9P6wekJnwAzaRzjM.png?width=960&crop=smart&auto=webp&s=aff05d087598abfcd4ece8e8725e77643b7648b5', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/jXshLXVh7iCkI_DkUnvVFkKtp2L9P6wekJnwAzaRzjM.png?width=1080&crop=smart&auto=webp&s=ec25502072f8a81923132c5a6100adf04c43663f', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/jXshLXVh7iCkI_DkUnvVFkKtp2L9P6wekJnwAzaRzjM.png?auto=webp&s=1697199b7207e654ab2ed1bf34000990a42e0422', 'width': 1200}, 'variants': {}}]}
Qwen/Qwen3.5-35B-A3B · Hugging Face
547
2026-02-24T16:44:05
https://huggingface.co/Qwen/Qwen3.5-35B-A3B
ekojsalim
huggingface.co
1970-01-01T00:00:00
0
{}
1rdlbvc
false
null
t3_1rdlbvc
/r/LocalLLaMA/comments/1rdlbvc/qwenqwen3535ba3b_hugging_face/
false
false
https://external-preview…6e02703938fcc784
547
{'enabled': False, 'images': [{'id': '9t9hISbgGxfk479gTZKF1XJ1oO6QhRPmNzUYpMNUbjs', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/9t9hISbgGxfk479gTZKF1XJ1oO6QhRPmNzUYpMNUbjs.png?width=108&crop=smart&auto=webp&s=d90f3b8cfc5fae78a5f6bd5852f034d7cdb38530', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/9t9hISbgGxfk479gTZKF1XJ1oO6QhRPmNzUYpMNUbjs.png?width=216&crop=smart&auto=webp&s=33de3c05ff0889634d642d1530aabe56637b8742', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/9t9hISbgGxfk479gTZKF1XJ1oO6QhRPmNzUYpMNUbjs.png?width=320&crop=smart&auto=webp&s=b3432acd83c423293f928f5a7ca1485f41b9519a', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/9t9hISbgGxfk479gTZKF1XJ1oO6QhRPmNzUYpMNUbjs.png?width=640&crop=smart&auto=webp&s=f4846b1e5fd750530b9aa43eb95e74460e90d4ec', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/9t9hISbgGxfk479gTZKF1XJ1oO6QhRPmNzUYpMNUbjs.png?width=960&crop=smart&auto=webp&s=7bafdceeee9cea3322f54cd190bf709facdd92b9', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/9t9hISbgGxfk479gTZKF1XJ1oO6QhRPmNzUYpMNUbjs.png?width=1080&crop=smart&auto=webp&s=48fc08b8107e0d200b98b29df17ee1e81bfd037f', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/9t9hISbgGxfk479gTZKF1XJ1oO6QhRPmNzUYpMNUbjs.png?auto=webp&s=7232ca7fdd47e438d438373a83a96ea70bd229e0', 'width': 1200}, 'variants': {}}]}
For those who use local Chinese models, does bias not affect you?
0
Chinese models from DeepSeek, Alibaba, Moonshot, and others contain heavy censorship and restrictions on China-sensitive topics, and these biases can show up when prompting the model even without explicit language touching on censored topics. For those who run these models locally, do you use distilled or uncensored versions of them, or do you not care about the biases the model has?
2026-02-24T16:42:54
https://www.reddit.com/r/LocalLLaMA/comments/1rdlaqr/for_those_who_use_local_chinese_models_does_bias/
ggbalgeet
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rdlaqr
false
null
t3_1rdlaqr
/r/LocalLLaMA/comments/1rdlaqr/for_those_who_use_local_chinese_models_does_bias/
false
false
self
0
null
How it feels listening to Anthropic complain about competitors distilling their models
361
2026-02-24T16:39:19
https://i.redd.it/uz8fkvgj0hlg1.jpeg
MMAgeezer
i.redd.it
1970-01-01T00:00:00
0
{}
1rdl76d
false
null
t3_1rdl76d
/r/LocalLLaMA/comments/1rdl76d/how_it_feels_listening_to_anthropic_complain/
false
false
https://preview.redd.it/…f3b21d8f84d7331a
361
{'enabled': True, 'images': [{'id': 'uz8fkvgj0hlg1', 'resolutions': [{'height': 68, 'url': 'https://preview.redd.it/uz8fkvgj0hlg1.jpeg?width=108&crop=smart&auto=webp&s=0f60cb60c5a0edd6a4fc3e718af796c2cda228fc', 'width': 108}, {'height': 136, 'url': 'https://preview.redd.it/uz8fkvgj0hlg1.jpeg?width=216&crop=smart&auto=webp&s=fcebfc454a5b719f66e006e9a8b5823d74b4c421', 'width': 216}, {'height': 201, 'url': 'https://preview.redd.it/uz8fkvgj0hlg1.jpeg?width=320&crop=smart&auto=webp&s=f920bee5fd8ba34f7fc5968b35d1e5d8696fe9cf', 'width': 320}, {'height': 403, 'url': 'https://preview.redd.it/uz8fkvgj0hlg1.jpeg?width=640&crop=smart&auto=webp&s=9d33bda9a766d265d4f32d4d5cadaa3ad5c033dd', 'width': 640}, {'height': 604, 'url': 'https://preview.redd.it/uz8fkvgj0hlg1.jpeg?width=960&crop=smart&auto=webp&s=010b53e00298be79c5e4c8bb01bbd71de8e64ea7', 'width': 960}], 'source': {'height': 645, 'url': 'https://preview.redd.it/uz8fkvgj0hlg1.jpeg?auto=webp&s=21e9c5b582fd807a21f39e144f4588266d6e24e1', 'width': 1024}, 'variants': {}}]}