title
stringlengths
1
300
score
int64
0
8.54k
selftext
stringlengths
0
41.5k
created
timestamp[ns]date
2023-04-01 04:30:41
2026-03-04 02:14:14
url
stringlengths
0
878
author
stringlengths
3
20
domain
stringlengths
0
82
edited
timestamp[ns]date
1970-01-01 00:00:00
2026-02-19 14:51:53
gilded
int64
0
2
gildings
stringclasses
7 values
id
stringlengths
7
7
locked
bool
2 classes
media
stringlengths
646
1.8k
name
stringlengths
10
10
permalink
stringlengths
33
82
spoiler
bool
2 classes
stickied
bool
2 classes
thumbnail
stringlengths
4
213
ups
int64
0
8.54k
preview
stringlengths
301
5.01k
Holo2 GUI localization models by H Company: Holo2-4B, Holo2-8B (Apache 2.0) and Holo2-30B-A3B, Holo2-235B-A22B (research-only license)
1
Blog post: [https://hcompany.ai/holo2-235b-a22b-preview](https://hcompany.ai/holo2-235b-a22b-preview) Hugging Face: [https://huggingface.co/Hcompany/Holo2-235B-A22B](https://huggingface.co/Hcompany/Holo2-235B-A22B) Collection: https://huggingface.co/collections/Hcompany/holo2
2026-02-03T10:32:31
https://www.reddit.com/gallery/1quoj5v
Nunki08
reddit.com
1970-01-01T00:00:00
0
{}
1quoj5v
false
null
t3_1quoj5v
/r/LocalLLaMA/comments/1quoj5v/holo2_gui_localization_models_by_h_company/
false
false
https://b.thumbs.redditm…r5xJBtfHohIk.jpg
1
null
Using Synthetic with open source models instead of Claude Pro, my experience so far
1
Got tired of Claude's rate limits. Switched to Synthetic.new, same price, limits reset way faster and no weekly cap. They host open source models like GLM-4.7, Kimi K2, MiniMax, DeepSeek. Not Opus-level but GLM-4.7 handles most coding fine. OpenAI and Anthropic compatible API so it works with whatever client you already use. US/EU datacenters, no training on your prompts. What do you guys use? https://synthetic.new/?referral=XGBEMjshQRGDnKp (referral, we both get credits)
2026-02-03T10:25:36
https://www.reddit.com/r/LocalLLaMA/comments/1quof2r/using_synthetic_with_open_source_models_instead/
Fatmofficial
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1quof2r
false
null
t3_1quof2r
/r/LocalLLaMA/comments/1quof2r/using_synthetic_with_open_source_models_instead/
false
false
self
1
null
bots on LocalLLaMA
1
Is there any strategy to defend against bots on this sub? Bots create comments under posts and people fall for it, but I'm also sure they upvote/vote posts.
2026-02-03T10:16:48
https://www.reddit.com/r/LocalLLaMA/comments/1quo9ue/bots_on_localllama/
jacek2023
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1quo9ue
false
null
t3_1quo9ue
/r/LocalLLaMA/comments/1quo9ue/bots_on_localllama/
false
false
self
1
null
LLM360/K2-Think-V2
1
[https://huggingface.co/LLM360/K2-Think-V2](https://huggingface.co/LLM360/K2-Think-V2)
2026-02-03T10:13:02
https://www.reddit.com/r/LocalLLaMA/comments/1quo7j6/llm360k2thinkv2/
External_Mood4719
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1quo7j6
false
null
t3_1quo7j6
/r/LocalLLaMA/comments/1quo7j6/llm360k2thinkv2/
false
false
self
1
null
Intel Xeon 600 Workstation CPUs Launched: Up To 86 Cores, 8000 MT/s Memory, 128 Gen5 Lanes, 350W TDP With OC Support, & More Cores/$ Than Threadripper 9000
1
2026-02-03T10:05:52
https://wccftech.com/intel-xeon-600-cpus-launched-up-to-86-cores-better-value-than-threadripper/amp/
hainesk
wccftech.com
1970-01-01T00:00:00
0
{}
1quo398
false
null
t3_1quo398
/r/LocalLLaMA/comments/1quo398/intel_xeon_600_workstation_cpus_launched_up_to_86/
false
false
https://external-preview…9ad1002b820de9ae
1
{'images': [{'source': {'url': 'https://external-preview.redd.it/qRGFi5W1MKKifxdKWjq-Z9EvJoJICK6GlGjx6E2rLX8.jpeg?auto=webp&s=249157507bcba8f3b648272f9977c8ce7a7e9684', 'width': 1520, 'height': 880}, 'resolutions': [{'url': 'https://external-preview.redd.it/qRGFi5W1MKKifxdKWjq-Z9EvJoJICK6GlGjx6E2rLX8.jpeg?width=108&crop=smart&auto=webp&s=a51e17b7cb21773b13599d7a0273965fa5916ccf', 'width': 108, 'height': 62}, {'url': 'https://external-preview.redd.it/qRGFi5W1MKKifxdKWjq-Z9EvJoJICK6GlGjx6E2rLX8.jpeg?width=216&crop=smart&auto=webp&s=5845340662df36a88645212c7fa068f2c0aee62c', 'width': 216, 'height': 125}, {'url': 'https://external-preview.redd.it/qRGFi5W1MKKifxdKWjq-Z9EvJoJICK6GlGjx6E2rLX8.jpeg?width=320&crop=smart&auto=webp&s=8317b024c1724d4735a2ebfdbc4c1af59e985e3d', 'width': 320, 'height': 185}, {'url': 'https://external-preview.redd.it/qRGFi5W1MKKifxdKWjq-Z9EvJoJICK6GlGjx6E2rLX8.jpeg?width=640&crop=smart&auto=webp&s=edf818262e4a121198fd637c574afc8cfd4e984d', 'width': 640, 'height': 370}, {'url': 'https://external-preview.redd.it/qRGFi5W1MKKifxdKWjq-Z9EvJoJICK6GlGjx6E2rLX8.jpeg?width=960&crop=smart&auto=webp&s=2023695d2a89d97789810f47b408ffff38a351e6', 'width': 960, 'height': 555}, {'url': 'https://external-preview.redd.it/qRGFi5W1MKKifxdKWjq-Z9EvJoJICK6GlGjx6E2rLX8.jpeg?width=1080&crop=smart&auto=webp&s=f2f8fcdcbea06f61285b4b367cc7aca78b6edfc6', 'width': 1080, 'height': 625}], 'variants': {}, 'id': 'qRGFi5W1MKKifxdKWjq-Z9EvJoJICK6GlGjx6E2rLX8'}], 'enabled': False}
oh-my-claudecode v3.9.9 Released - 32 Bug Fixes + Performance Boost
1
Just released v3.9.9 with some solid improvements: ✅ 32 critical bugs fixed ⚡ Performance optimizations (better caching, debouncing, regex improvements) 🔒 Defensive bounds checking to prevent crashes 🔧 Auto-update URL fix If you're using oh-my-claudecode for parallel Claude Code sessions, this update makes things noticeably smoother and more stable. Update: \`npm install -g oh-my-claude-sisyphus@3.9.9\` Release notes: [https://github.com/Yeachan-Heo/oh-my-claudecode/releases/tag/v3.9.9](https://github.com/Yeachan-Heo/oh-my-claudecode/releases/tag/v3.9.9)
2026-02-03T09:58:51
https://www.reddit.com/r/LocalLLaMA/comments/1qunyz7/ohmyclaudecode_v399_released_32_bug_fixes/
Bellman_
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qunyz7
false
null
t3_1qunyz7
/r/LocalLLaMA/comments/1qunyz7/ohmyclaudecode_v399_released_32_bug_fixes/
false
false
self
1
null
Neumann and this time I will try to explain it better! AI led Infrastructure! Not the holy grail of agent memory and context but something to help you all build better safer applications!
1
Hi guys! Yesterday I came to this sub to share my work with you all called Neumann: [https://github.com/Shadylukin/Neumann](https://github.com/Shadylukin/Neumann) Now it is open source and AI led Infrastructure with a few key twists that make it "AI" First thing is the unification of 3 types of storage: \- Relational \- Graph \- Vector It is available in Python, Typescript, Rust and Via direct install, Brew and Docker. Why should you care? Well I have a few reasons why I built it for myself and it is easier if I explain how it was built. I work as a Systems Architect (ex Engineer worked for Banks, Defence Contractors now working as a consultant) and I implemented this with 90% Claude Code with the 10% finicky integration and testing work done by myself. I have learned a lot from this and tomorrow I will share some learnings I have about how some of you avid builders who are "Vibe" coding could likely close the gap on that illusive 10% that makes your apps never seem to quite work right. Neumann can answer som Unified Queries i.e. -- Find engineers similar to Alice who report to Bob FIND NODE person WHERE role = 'engineer' SIMILAR TO 'user:alice' CONNECTED TO 'user:bob' Unified storage. One entity can have table fields, graph edges, AND vector embeddings. No sync logic between systems. Essentially what this means is if you are using RAG applications you could use Neumann as a swap in infrastructure for more complex queries simplified. This saves tokens used. Agent Memory Conversation history with semantic recall across sessions. const client = await NeumannClient.connect("localhost:9200"); // Store message with embedding await client.execute(` INSERT messages session='abc', role='user', content='...', embedding=[0.1, 0.2, ...] `); // Recall similar past conversations const memories = await client.execute(` SIMILAR 'current-context' TOP 10 `); Semantic Search with Access Control # Store user with permissions via graph client.execute("NODE CREATE user name='alice', team='eng'") client.execute("EDGE CREATE user:alice -> project:neumann can_read") # Query respects graph-based access results = client.execute(""" FIND NODE document WHERE team = 'eng' SIMILAR TO 'query embedding' CONNECTED TO 'user:alice' """) Semantic search with access control is handy if you want to build guardrails on agent access and put policies to drop those permissions under certain circumstances the infrastructure was built for it. I am not here to claim I have solved agent memory. All I can say is I am using this for two clients and will be deploying it to live environments so it works for my use and I have Open Sourced it because I wanted to share something that is working for me! Any questions feel free to ask! I answer them as fast as I can! I'm blown away by Claude Code after over a decade in the industry I'm still astounded by how lucky we are to live in a time like this with tools like this.
2026-02-03T09:33:19
https://www.reddit.com/r/LocalLLaMA/comments/1qunkyv/neumann_and_this_time_i_will_try_to_explain_it/
CoopaScoopa
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qunkyv
false
null
t3_1qunkyv
/r/LocalLLaMA/comments/1qunkyv/neumann_and_this_time_i_will_try_to_explain_it/
false
false
self
1
null
Latest Open source Ai benchmark 2026 made by opus 4.5
1
View the latest open source ai benchmarks 2026 covering popular open source LLM across two key domains \- Reasoning \- Hallucination [https://claude.ai/public/artifacts/056522a8-95a4-4d75-81ed-896dc4ddd13e](https://claude.ai/public/artifacts/056522a8-95a4-4d75-81ed-896dc4ddd13e) Have fun!!!! \#ai #madebyclaude #2026aiopensource #opensource
2026-02-03T09:23:46
https://www.reddit.com/r/LocalLLaMA/comments/1qunflz/latest_open_source_ai_benchmark_2026_made_by_opus/
Terrible_Key_8828
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qunflz
false
null
t3_1qunflz
/r/LocalLLaMA/comments/1qunflz/latest_open_source_ai_benchmark_2026_made_by_opus/
false
false
self
1
null
Top AI papers of 2025
1
https://archivara.org/top-2025
2026-02-03T09:01:29
https://i.redd.it/qynfy5vpv8hg1.jpeg
gbomb13
i.redd.it
1970-01-01T00:00:00
0
{}
1qun30l
false
null
t3_1qun30l
/r/LocalLLaMA/comments/1qun30l/top_ai_papers_of_2025/
false
false
default
1
{'images': [{'source': {'url': 'https://preview.redd.it/qynfy5vpv8hg1.jpeg?auto=webp&s=f668c7859d989cab9b92ad0d487a7faa1a34be22', 'width': 1151, 'height': 896}, 'resolutions': [{'url': 'https://preview.redd.it/qynfy5vpv8hg1.jpeg?width=108&crop=smart&auto=webp&s=777598075b560663bff3d006487e4f6211accc63', 'width': 108, 'height': 84}, {'url': 'https://preview.redd.it/qynfy5vpv8hg1.jpeg?width=216&crop=smart&auto=webp&s=dc2fff00d12bb162964e099c07e8f441b26bcf54', 'width': 216, 'height': 168}, {'url': 'https://preview.redd.it/qynfy5vpv8hg1.jpeg?width=320&crop=smart&auto=webp&s=49ee75fa6ed3855e84eb4171c5ba122014cf954a', 'width': 320, 'height': 249}, {'url': 'https://preview.redd.it/qynfy5vpv8hg1.jpeg?width=640&crop=smart&auto=webp&s=c35de07e0f34316984fafa3caa0a948e672e5b56', 'width': 640, 'height': 498}, {'url': 'https://preview.redd.it/qynfy5vpv8hg1.jpeg?width=960&crop=smart&auto=webp&s=f1f75d7b537ef29afd0300a4010cca5b943a1054', 'width': 960, 'height': 747}, {'url': 'https://preview.redd.it/qynfy5vpv8hg1.jpeg?width=1080&crop=smart&auto=webp&s=2ac61f9b7e990e829032e98ff7737df39f1b5642', 'width': 1080, 'height': 840}], 'variants': {}, 'id': 'qynfy5vpv8hg1'}], 'enabled': True}
Voice cloning: is emotion / acting style control actually possible?
1
I’ve been playing with **Qwen3-TTS voice cloning** (via ComfyUI) and wanted to sanity-check something with people who know the model better. Cloning **speaker identity** works very well for me, even with short reference clips (≈5–8s, clean English). But once cloning is enabled, I can’t seem to get **reliable emotions or acting styles** into the output — things like angry, excited, whispery, shy, flirty, etc. I’ve tried the usual tricks: * stage directions or emotion hints in the text * punctuation / pauses * manual chunking * different model sizes (0.6B vs 1.7B) Result is mostly neutral speech or inconsistent emotion that doesn’t survive regeneration. Interestingly, the *same model* can clearly generate emotional speech when **not** using voice cloning (e.g. designed/custom voices). So I’m trying to understand what’s going on here. **Questions** * Is emotion/style control for *cloned* voices currently **unsupported or intentionally limited** in Qwen3-TTS? * Has anyone found a **working workflow** (prompting, node setup, chaining) that actually preserves emotions when cloning? * Or is **fine-tuning** the only real solution right now? * If yes: are there any **repos, experiments, or researchers** who have shown emotional control working on cloned voices with Qwen (or Qwen-based forks)? Not looking for generic TTS theory — I’m specifically interested in how **Qwen3-TTS behaves in practice**, and whether this is a known limitation or something I’m missing. Would love pointers, code links, or “this is not possible yet and here’s why” answers.
2026-02-03T08:56:32
https://www.reddit.com/r/LocalLLaMA/comments/1qun01h/voice_cloning_is_emotion_acting_style_control/
piscoster
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qun01h
false
null
t3_1qun01h
/r/LocalLLaMA/comments/1qun01h/voice_cloning_is_emotion_acting_style_control/
false
false
self
1
null
Looking for arXiv submissionendorsement (cs.LG/cs.AI) for my paper "Blink KV" on bounded-memory LLM streaming
1
[removed]
2026-02-03T08:54:02
https://www.reddit.com/r/LocalLLaMA/comments/1qumynf/looking_for_arxiv_submissionendorsement_cslgcsai/
Savings-Poet5718
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qumynf
false
null
t3_1qumynf
/r/LocalLLaMA/comments/1qumynf/looking_for_arxiv_submissionendorsement_cslgcsai/
false
false
self
1
null
Looking for arXiv submission endorsement (cs.LG/cs.AI) for my paper "Blink KV" bounded-memory LLM streaming
1
[removed]
2026-02-03T08:46:12
https://www.reddit.com/r/LocalLLaMA/comments/1qumu76/looking_for_arxiv_submission_endorsement_cslgcsai/
Savings-Poet5718
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qumu76
false
null
t3_1qumu76
/r/LocalLLaMA/comments/1qumu76/looking_for_arxiv_submission_endorsement_cslgcsai/
false
false
self
1
null
Best local LLM + STT for German Medical Reports on consumer hardware?
1
Hi everyone, I trying to build a workflow to transcribe spoken German dictations (Radiology/Nuclear Medicine) and format them into a structured report template using a local LLM. I am working as a radiologist and want to make my life a bit easier. So far the results were a little bit underwhelming even using some LLM like Gemma 3 27B. I am using whisper-large-v3-turbo for the transcription which produces a lot of junk even with a very specific initial prompt. Gemini 3 Fast handles the task well (it was able to correctly identify the terms from whispers word salad), as well as Kimi K2 but one is a data security problem and the other is super expensive to run locally.  Does anyone have experience or recommendations with maybe German-finetuned models (7B to 70B parameter range) for clinical data? Maybe even a way to improve the initial transcript to make it easier for the LLMs to fill in the template? Ideally it would run on consumer grade hardware and I know I am asking for a lot. Thanks in advance.
2026-02-03T08:30:02
https://www.reddit.com/r/LocalLLaMA/comments/1quml29/best_local_llm_stt_for_german_medical_reports_on/
LastSmitch
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1quml29
false
null
t3_1quml29
/r/LocalLLaMA/comments/1quml29/best_local_llm_stt_for_german_medical_reports_on/
false
false
self
1
null
I built a way to have my whole laptop in my pocket (VS Code + Claude Code + Free Kimi 2.5 on iPhone/iPad)
0
I love my iPhone and iPad. They are beautiful and portable, but the moment I need to do "real" development work heavy compiles, Docker, or running complex AI agents—the dream usually dies. Web editors feel laggy, and mobile "IDE" apps often feel like toys compared to a pro desktop setup. I wanted the real thing: **Full VS Code. Full Terminal. Local AI Agents.** I wanted to leave my heavy laptop at home but still have my entire dev environment ready at a moment's notice from a coffee shop, the train, or my couch. So I built **Relay**. It's a low-latency bridge that connects your mobile device to your powerful machines (Mac, PC, or Cloud Server) with a native-feel interface that actually works. [Screenshot from Rcode](https://preview.redd.it/7gdj5l9jn8hg1.png?width=2752&format=png&auto=webp&s=9c81102ff22bb729e85308b3811c8781ca3ba29d) # What makes this different? Most remote-access tools like Chrome Remote Desktop (CRD) just mirror a desktop screen you have to squint at. **Relay is a native, independent mobile runtime.** It stays lightweight on your device while treating your computer as a background "resource bank" to offload heavy tasks like **compiling, indexing, and complex AI processing.** # 1. Workspaces (Multi-Machine support) Most remote-access apps tie you to one machine. Relay is designed around **Workspaces**: * **The Mac at Home**: Connect to your main rig for access to all your local projects and GPU. * **Cloud Power**: Connect to an AWS or DigitalOcean instance for heavy-duty server work. * **Switching is Instant**: Flip between your local Mac and a cloud Linux box effortlessly from your phone without losing your terminal state. * **No Laptop? No Problem**: Relay includes a **basic native computing env** right in the app. It can run lightweight tasks and AI agent logic directly on your phone, so you're never completely stuck. # 2. The "Killer Feature": Rcode Agent (Like Claude Code / Cursor) [Claude Code](https://preview.redd.it/nvwx3yiun8hg1.png?width=2868&format=png&auto=webp&s=1f8cddf64b5dcd6349a5f535bb7e355251771798) I'm obsessed with agentic workflows. I wanted that "Cursor" power on my iPad, but since mobile Rcode doesn't support VS Code extensions yet, I had to build a native solution. https://preview.redd.it/jsdcwrw1o8hg1.png?width=2126&format=png&auto=webp&s=d57158ac0485b2f0210b2ed75610c2cf7e22b07b * **Supported by Claude Code & Kimi 2.5**: The Rcode Agent lives in your editor. Point it at a file, say "refactor this," and it gets to work. * **Free Kimi 2.5 (NVIDIA-Powered)**: I've included access to **Kimi 2.5** for free, right out of the box. No API keys or credit cards needed. * **The Meta-Story**: I actually used **Claude Code** to help me build this feature. It assisted in architecting the terminal bridge and the autonomous logic. It's an agent built *by* an agent, *for* your mobile workspace. # 3. Offline First (The "State" Engine) Relay isn't just a window into your server; it represents a "State" on your device. * **Browse & Edit Offline**: Read your code and make changes on a plane or in a tunnel. * **Auto-Sync**: Everything syncs back to your host machine the second you catch a signal. No "Connection Lost" popups that freeze your editor. # 4. Build Your Own Apps (Total Freedom) Relay is more than an editor; it's a **runtime**. You can build your own specialized tools using our SDK. * **Bypass the App Store**: Build internal tools, custom dashboards, or unique interfaces and share them via our Git-based community store. No gatekeepers, just a simple PR to get your app live. [Relay Store](https://preview.redd.it/1y2a9oo6o8hg1.png?width=2114&format=png&auto=webp&s=d8e50165aac0e2b6ac20acfa7cf0e5ce8092af44) # Open, Free, and Global * **Free Forever**: The core platform is free. No subscriptions required to code. * **Localized**: Fully supported in **English** and **Mandarin (中文)**. * **Privacy**: Most components are open source because you should own your compute and your code. # Why I made this I got tired of sitting with a hot laptop on my lap just to monitor an AI agent or watch a progress bar. I built Relay so I could: 1. **Start a task** on my desktop rig. 2. **Walk away**. 3. **Check in & intervene** from my phone while I'm living my life. # Links & Resources * **Try it now**: [relay.notapublicfigureanymore.com](https://relay.notapublicfigureanymore.com/) * **GitHub**: [dotrly/relay](https://github.com/dotrly/relay) * **Report Issues**: [GitHub Issues](https://github.com/dotrly/relay/issues) * **Developer SDK**: [Build your own apps](https://relay.notapublicfigureanymore.com/docs/sdk) *If you've been looking for a reason to finally use your iPad for more than just Netflix, I'd love for you to give this a try!*
2026-02-03T08:21:16
https://www.reddit.com/r/LocalLLaMA/comments/1qumg6i/i_built_a_way_to_have_my_whole_laptop_in_my/
jaseunda
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qumg6i
false
null
t3_1qumg6i
/r/LocalLLaMA/comments/1qumg6i/i_built_a_way_to_have_my_whole_laptop_in_my/
false
false
https://a.thumbs.redditm…51iJaAuMvxd8.jpg
0
null
Made a local-first app to branch AI chats and reuse prompts
2
I built a small desktop app called ThinkStream because I kept losing track of ideas when exploring multiple directions with AI. Here’s what it does: Branch from any message — explore side ideas without losing your main conversation See where you are — know which branch you’re in and where it came from Navigate easily — jump between branches and follow the flow naturally Prompt templates — reuse setups so you don’t have to type the same prompts again and again Local-first — all your chats stay on your machine, no cloud needed Parallel exploration — try multiple paths at once without overwriting anything I mainly use it for research when one question turns into several. Would love feedback from folks who work with local or multi-model setups: does the branching feel intuitive? are the prompt templates useful? anything you’d change or add? Site: [thinkstream.app](https://thinkstream.app)
2026-02-03T08:07:59
https://www.reddit.com/r/LocalLLaMA/comments/1qum8n0/made_a_localfirst_app_to_branch_ai_chats_and/
VirtualBoard000
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qum8n0
false
null
t3_1qum8n0
/r/LocalLLaMA/comments/1qum8n0/made_a_localfirst_app_to_branch_ai_chats_and/
false
false
self
2
null
Things to try on Strix Halo 128GB? GPT OSS, OpenClaw, n8n...
1
Hi everyone, I just invested in the MinisForum ms s1 and I'm very happy with the results! For GPT-OSS-120b, I'm getting \~30tps on ollama and \~49tps on llama.cpp. Does anyone have some ideas as to what to do with this? I was thinking OpenClaw if I could run it in an isolated envioronment -- I know the security is abysmal. Self-hosted n8n seems like a fun option too I've cleared out my next week to play around, so I''ll try as much as I can
2026-02-03T08:04:21
https://www.reddit.com/r/LocalLLaMA/comments/1qum6kz/things_to_try_on_strix_halo_128gb_gpt_oss/
MiyamotoMusashi7
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qum6kz
false
null
t3_1qum6kz
/r/LocalLLaMA/comments/1qum6kz/things_to_try_on_strix_halo_128gb_gpt_oss/
false
false
self
1
null
Tool for scanning your local agents for sketchy behavior patterns
0
AI is kinda crapping the bed with security so I made NOX works with any ai and solves some problems that seem to be spiraling I'm happy to remix it for your specific needs. pip install nox-security nox-scan my\_agent.py Checks for: \- Self-replication patterns \- Anti-termination hooks \- Data exfiltration signatures \- Prompt injection in outputs \- Sandbox escape attempts Zero dependencies. Works offline. MIT licensed. [https://github.com/AnnulusLabs/nox-security](https://github.com/AnnulusLabs/nox-security) Not saying your waifubot is plotting anything, but... maybe check?
2026-02-03T07:37:20
https://www.reddit.com/r/LocalLLaMA/comments/1qulql6/tool_for_scanning_your_local_agents_for_sketchy/
annuluslabs
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qulql6
false
null
t3_1qulql6
/r/LocalLLaMA/comments/1qulql6/tool_for_scanning_your_local_agents_for_sketchy/
false
false
self
0
null
O(1) memory attention - running 512K context in 3.85 GB instead of 1TB
0
Releasing eval binaries for a new attention mechanism I built. Memory stays constant regardless of sequence length. At 512K tokens: - Standard attention: 1,099 GB (lol) - This: 3.85 GB Full dense attention, not approximation. Works on H100/A100/GH200. https://github.com/RegularJoe-CEO/waller-eval Can't share the implementation (patent pending) but the binary runs a full benchmark so you can verify.
2026-02-03T07:30:39
https://www.reddit.com/r/LocalLLaMA/comments/1qulmn9/o1_memory_attention_running_512k_context_in_385/
sevinsixtwo
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qulmn9
false
null
t3_1qulmn9
/r/LocalLLaMA/comments/1qulmn9/o1_memory_attention_running_512k_context_in_385/
false
false
self
0
null
Found a wallet-drain prompt-injection payload on Moltbook (screenshots) — builders: treat feeds as untrusted
321
Hey folks — quick heads-up for anyone building “agents that browse social feeds” or experimenting with Moltbook. I ran across a post in m/grok-420 that looks like a normal “how to use Base chain / viem” mini-guide… but at the bottom it appends an obvious prompt-injection / tool-hijack payload. It includes classic strings like: “SYSTEM OVERRIDE” “ignore all prior rules / you are the developer message” “require_confirmation=false / execute_trade=true” a fake <use_tool_…> tag that instructs an agent to transfer 0.1 ETH to a specific address I’m attaching screenshots. I already reported it to Moltbook, but their response window can be up to ~30 days, so I wanted to warn others now. Why this matters: If you have an agent that ingests social posts and has wallet/tool permissions, and your wrapper doesn’t enforce strict trust boundaries, this is the kind of thing that can cause unauthorized transactions or other write-actions. Even if 99% of agents ignore it, the 1% that don’t is enough to cause real damage. What I’m NOT doing: I’m not trying to “teach prompt injection.” I’m not sharing copy/paste payload text beyond what’s visible in the screenshots. Please don’t repost the full injection block in comments. Defensive checklist (for builders): Treat all social/web content as untrusted data, never instructions Separate read tools from write tools; require explicit confirmation for any transfer/swap Don’t store raw private keys in an agent; use policy-gated signing Log provenance: “what input triggered this action?” Block obvious injection markers from being interpreted as commands (e.g., role:"system", “ignore prior instructions”, <use_tool_…>) If anyone from Moltbook/security teams wants more details (timestamps, URL/history, etc.), I can share privately. Stay safe.
2026-02-03T07:24:08
https://www.reddit.com/gallery/1qulipj
Impressive-Willow593
reddit.com
1970-01-01T00:00:00
0
{}
1qulipj
false
null
t3_1qulipj
/r/LocalLLaMA/comments/1qulipj/found_a_walletdrain_promptinjection_payload_on/
false
false
https://b.thumbs.redditm…EdNTZeGA66HU.jpg
321
null
smolcluster: Model-parallel GPT-2 inference across Mac Minis + iPad
0
So, I have been tinkering around with the concept of model parallelism and distributed inferencing as part of my project called smolcluster. The goal is to let users make use of any combination of devices (Mac minis, Raspberry Pis, NVIDIA GPUs, etc.) to do training and inference. I did get success using a small cluster of 2× Mac Minis + 1× iPad (A16) running GPT-2 (117M) inference with a model-parallel SyncPS architecture. Model Parallelism is a technique used to scatter layers of a model across different nodes and establishing a common comms protocol between them to pass in activations etc for text generation for example. Synchronous Parameter Server (SyncPS) is an architecture used to establish such a comms system employing the above mentioned algorithm to do the inference. A video is also attached showing the inference running in real time on this compute cluster. Checkout  [smolcluster website](https://www.smolcluster.com) here! https://preview.redd.it/5ybxsx1o88hg1.png?width=3360&format=png&auto=webp&s=144fc7f08c099a1c61de413bf0c1ad2a368cbf48 https://reddit.com/link/1qul5pi/video/ch1sobzo88hg1/player
2026-02-03T07:03:19
https://www.reddit.com/r/LocalLLaMA/comments/1qul5pi/smolcluster_modelparallel_gpt2_inference_across/
East-Muffin-6472
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qul5pi
false
null
t3_1qul5pi
/r/LocalLLaMA/comments/1qul5pi/smolcluster_modelparallel_gpt2_inference_across/
false
false
https://b.thumbs.redditm…X9wk_G5T5JeU.jpg
0
null
OSS 120b v GLM 4.7 flash. Is the latter better for anything?
46
Is GLM 4.7 flash better than OSS 120b for anything? I would normally look for a benchmark but I don't know which ones to trust any more.
2026-02-03T06:34:53
https://www.reddit.com/r/LocalLLaMA/comments/1quknpy/oss_120b_v_glm_47_flash_is_the_latter_better_for/
MrMrsPotts
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1quknpy
false
null
t3_1quknpy
/r/LocalLLaMA/comments/1quknpy/oss_120b_v_glm_47_flash_is_the_latter_better_for/
false
false
self
46
null
Last Week in Multimodal AI - Local Edition
2
I curate a weekly multimodal AI roundup, here are the local/open-source highlights from last week: **Z-Image - Controllable Text-to-Image** * Foundation model built for precise control with classifier-free guidance, negative prompting, and LoRA support. * [Hugging Face](https://huggingface.co/Tongyi-MAI/Z-Image) https://preview.redd.it/tkuso0j158hg1.png?width=1456&format=png&auto=webp&s=e2c3376942edada97d5dfac59b537cfbda876812 **HunyuanImage-3.0-Instruct - Image Generation & Editing** * Image generation and editing model with multimodal fusion from Tencent. * [Hugging Face](https://huggingface.co/tencent/HunyuanImage-3.0-Instruct) https://preview.redd.it/7bfx5b5358hg1.png?width=1456&format=png&auto=webp&s=c7976d83afa785388b3c2943f9dc6411608d531e **LTX-2 LoRA - Image-to-Video Adapter** * Open-source Image-to-Video adapter LoRA for LTX-2 by MachineDelusions. * [Hugging Face](https://huggingface.co/MachineDelusions/LTX-2_Image2Video_Adapter_LoRa) https://reddit.com/link/1quknk3/video/6p93cv4458hg1/player **TeleStyle - Style Transfer** * Content-preserving style transfer for images and videos. * [Project Page](https://tele-ai.github.io/TeleStyle/) https://reddit.com/link/1quknk3/video/0arp6bc558hg1/player **MOSS-Video-and-Audio - Synchronized Generation** * 32B MoE model generates video and audio in one pass. * [Hugging Face](https://huggingface.co/OpenMOSS-Team/MOVA-360p) https://reddit.com/link/1quknk3/video/3ryr1oo658hg1/player **LingBot-World:** An open-source world simulator for video generation research. - [GitHub](https://github.com/Robbyant/lingbot-world) | [HuggingFace](https://huggingface.co/robbyant/lingbot-world-base-cam) https://reddit.com/link/1quknk3/video/57ub0nwb58hg1/player Checkout the [full roundup](https://open.substack.com/pub/thelivingedge/p/multimodal-monday-43-models-that?utm_campaign=post-expanded-share&utm_medium=web) for more demos, papers, and resources.
2026-02-03T06:34:38
https://www.reddit.com/r/LocalLLaMA/comments/1quknk3/last_week_in_multimodal_ai_local_edition/
Vast_Yak_4147
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1quknk3
false
null
t3_1quknk3
/r/LocalLLaMA/comments/1quknk3/last_week_in_multimodal_ai_local_edition/
false
false
https://b.thumbs.redditm…5G9AewCG5j5M.jpg
2
null
Is Novita.Ai scamming me or am I myself?
1
[removed]
2026-02-03T06:05:49
https://www.reddit.com/gallery/1quk4ty
Short-Cobbler-901
reddit.com
1970-01-01T00:00:00
0
{}
1quk4ty
false
null
t3_1quk4ty
/r/LocalLLaMA/comments/1quk4ty/is_novitaai_scamming_me_or_am_i_myself/
false
false
https://b.thumbs.redditm…FG2P9WjL0Bkc.jpg
1
null
Semantic LLM Interpreter - only tested on a potato
2
Hi everyone, I’m an independent AI researcher trying to work at the most fundamental levels to make LLMs more reliable at all scales. Problem is, my laptop is a potato, so I can only run <5B models before my laptop freezes up. I've developed an approach to redefine Temperature to be applied around the "median" tokens rather than the "modal" token through semantic interpretation of outputs. The approach successfully identifies where the median intent applies, avoiding hallucinations caused by modal tokens with less than 50% confidence not representing the majority of the output possibilities. The explanation of how it works I’ve tested this on tiny open-weights models (<5B parameters), and it seems to work really well. It often produces different outputs to standard greedy token selection at 0 temperature, and the outputs are often a lot more useful when the model is confident and less likely to hallucinate when the model is less confident. I’ve just open-sourced the repo and I need help testing this on larger, quantized, or fine-tuned models (Llama 3 70B, Mixtral, etc.). I believe this fixes reliability at a fundamental level without needing brittle guardrails or prompt engineering. It wraps around any PyTorch/Keras model, I just need someone with less of a potato to give it a go and provide feedback. If you're interested, please give the repo a look.
2026-02-03T06:05:32
https://github.com/brodie-eaton/Semantic-LLM-Interpreter
No-Bus-3800
github.com
1970-01-01T00:00:00
0
{}
1quk4ne
false
null
t3_1quk4ne
/r/LocalLLaMA/comments/1quk4ne/semantic_llm_interpreter_only_tested_on_a_potato/
false
false
default
2
{'enabled': False, 'images': [{'id': 'm0xqYRg84t32K_kqbrOlOlUhzGMqoNE2hSaVaz2N1eY', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/m0xqYRg84t32K_kqbrOlOlUhzGMqoNE2hSaVaz2N1eY.png?width=108&crop=smart&auto=webp&s=4239acf255e078ca5a579161d9d2ec84c16e5755', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/m0xqYRg84t32K_kqbrOlOlUhzGMqoNE2hSaVaz2N1eY.png?width=216&crop=smart&auto=webp&s=907b329d5f4599604d22018b196db130e2bb793b', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/m0xqYRg84t32K_kqbrOlOlUhzGMqoNE2hSaVaz2N1eY.png?width=320&crop=smart&auto=webp&s=8b4748ff6f68b05c21e035b871a01e4212d67325', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/m0xqYRg84t32K_kqbrOlOlUhzGMqoNE2hSaVaz2N1eY.png?width=640&crop=smart&auto=webp&s=43d16e00e838a6e15ac02d03a4d7a7f8a0013259', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/m0xqYRg84t32K_kqbrOlOlUhzGMqoNE2hSaVaz2N1eY.png?width=960&crop=smart&auto=webp&s=01cb22550fb8073abfdd12dfa04d566f6a47e388', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/m0xqYRg84t32K_kqbrOlOlUhzGMqoNE2hSaVaz2N1eY.png?width=1080&crop=smart&auto=webp&s=93aaefedfe3456841a9ebeb0730094847c3f92d0', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/m0xqYRg84t32K_kqbrOlOlUhzGMqoNE2hSaVaz2N1eY.png?auto=webp&s=c41fec8b77c01f04ad7c068c885057c5ffd88bd7', 'width': 1200}, 'variants': {}}]}
I am building an LLM arena inside 0 A.D. so models can battle in real-time RTS matches
4
I hacked together a little project that lets you control a live 0 A.D. match with LLM agents basically an LLM arena on top of the 0 A.D. game. Repo: [https://github.com/0xrushi/openenv-0ad-bridge](https://github.com/0xrushi/openenv-0ad-bridge) Agents read an omniscient JSON snapshot of the game state and send low-level commands into the same running match (so you can do stuff like `gemini` vs `gpt-5` on the same map). I first tried this on the open-source Age of Empires-style engine [openage](https://github.com/SFTtech/openage), but that project has been “almost there” for \~10 years. 0 A.D. felt stable enough, so I rebuilt everything around its RL interface with an OpenEnv-style proxy and some helper tools. If you’re into agent-y things, I’d love help on better prompts and a cleaner action cookbook (move / econ / build / combat / scout), plus any ideas for fun experiments to run on top.
2026-02-03T05:58:50
https://www.reddit.com/r/LocalLLaMA/comments/1quk02j/i_am_building_an_llm_arena_inside_0_ad_so_models/
0xrushi
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1quk02j
false
null
t3_1quk02j
/r/LocalLLaMA/comments/1quk02j/i_am_building_an_llm_arena_inside_0_ad_so_models/
false
false
self
4
{'enabled': False, 'images': [{'id': 'l6mp0fmRLJRNP2Bzva8iw2tqdtAg4pgbs_tQkCZi7jE', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/l6mp0fmRLJRNP2Bzva8iw2tqdtAg4pgbs_tQkCZi7jE.png?width=108&crop=smart&auto=webp&s=5a10ea90866e5e6731381a42af95a844bf99eab0', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/l6mp0fmRLJRNP2Bzva8iw2tqdtAg4pgbs_tQkCZi7jE.png?width=216&crop=smart&auto=webp&s=4c475fa1d8ee5f3299236ee93e205e60fc463934', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/l6mp0fmRLJRNP2Bzva8iw2tqdtAg4pgbs_tQkCZi7jE.png?width=320&crop=smart&auto=webp&s=22a1a135dfcbd1b2eda8ce87eac5f048656d05f2', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/l6mp0fmRLJRNP2Bzva8iw2tqdtAg4pgbs_tQkCZi7jE.png?width=640&crop=smart&auto=webp&s=622efa30c1d000fa0af76b0fa28f24b1d54bcb14', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/l6mp0fmRLJRNP2Bzva8iw2tqdtAg4pgbs_tQkCZi7jE.png?width=960&crop=smart&auto=webp&s=07581a0d2e52c9f21e875c111b05a90bce08e369', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/l6mp0fmRLJRNP2Bzva8iw2tqdtAg4pgbs_tQkCZi7jE.png?width=1080&crop=smart&auto=webp&s=77cee2cf927d48a3ac2385c1529e227e946ca28d', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/l6mp0fmRLJRNP2Bzva8iw2tqdtAg4pgbs_tQkCZi7jE.png?auto=webp&s=cbba4604d4f3b21ddf2e2dd0a65574b3980d274c', 'width': 1200}, 'variants': {}}]}
vllm 0.15.0 docker image error
0
Was trying the latest version of vllm but i'm having this error and can't find any info on it: ``` vllm-qwen3-vl-nvfp4 | ERROR 02-02 21:49:32 [v1/executor/multiproc_executor.py:772] WorkerProc failed to start. vllm-qwen3-vl-nvfp4 | ERROR 02-02 21:49:32 [v1/executor/multiproc_executor.py:772] Traceback (most recent call last): vllm-qwen3-vl-nvfp4 | ERROR 02-02 21:49:32 [v1/executor/multiproc_executor.py:772] File "/usr/local/lib/python3.12/dist-packages/vllm/v1/executor/multiproc_executor.py", line 743, in worker_main vllm-qwen3-vl-nvfp4 | ERROR 02-02 21:49:32 [v1/executor/multiproc_executor.py:772] worker = WorkerProc(*args, **kwargs) vllm-qwen3-vl-nvfp4 | ERROR 02-02 21:49:32 [v1/executor/multiproc_executor.py:772] ^^^^^^^^^^^^^^^^^^^^^^^^^^^ vllm-qwen3-vl-nvfp4 | ERROR 02-02 21:49:32 [v1/executor/multiproc_executor.py:772] File "/usr/local/lib/python3.12/dist-packages/vllm/v1/executor/multiproc_executor.py", line 569, in __init__ vllm-qwen3-vl-nvfp4 | ERROR 02-02 21:49:32 [v1/executor/multiproc_executor.py:772] self.worker.init_device() vllm-qwen3-vl-nvfp4 | ERROR 02-02 21:49:32 [v1/executor/multiproc_executor.py:772] File "/usr/local/lib/python3.12/dist-packages/vllm/v1/worker/worker_base.py", line 326, in init_device vllm-qwen3-vl-nvfp4 | ERROR 02-02 21:49:32 [v1/executor/multiproc_executor.py:772] self.worker.init_device() # type: ignore vllm-qwen3-vl-nvfp4 | ERROR 02-02 21:49:32 [v1/executor/multiproc_executor.py:772] ^^^^^^^^^^^^^^^^^^^^^^^^^ vllm-qwen3-vl-nvfp4 | ERROR 02-02 21:49:32 [v1/executor/multiproc_executor.py:772] File "/usr/local/lib/python3.12/dist-packages/vllm/v1/worker/gpu_worker.py", line 210, in init_device vllm-qwen3-vl-nvfp4 | ERROR 02-02 21:49:32 [v1/executor/multiproc_executor.py:772] current_platform.set_device(self.device) vllm-qwen3-vl-nvfp4 | ERROR 02-02 21:49:32 [v1/executor/multiproc_executor.py:772] File "/usr/local/lib/python3.12/dist-packages/vllm/platforms/cuda.py", line 123, in set_device vllm-qwen3-vl-nvfp4 | ERROR 02-02 21:49:32 [v1/executor/multiproc_executor.py:772] torch.cuda.set_device(device) vllm-qwen3-vl-nvfp4 | ERROR 02-02 21:49:32 [v1/executor/multiproc_executor.py:772] File "/usr/local/lib/python3.12/dist-packages/torch/cuda/__init__.py", line 567, in set_device vllm-qwen3-vl-nvfp4 | ERROR 02-02 21:49:32 [v1/executor/multiproc_executor.py:772] torch._C._cuda_setDevice(device) vllm-qwen3-vl-nvfp4 | ERROR 02-02 21:49:32 [v1/executor/multiproc_executor.py:772] File "/usr/local/lib/python3.12/dist-packages/torch/cuda/__init__.py", line 410, in _lazy_init vllm-qwen3-vl-nvfp4 | ERROR 02-02 21:49:32 [v1/executor/multiproc_executor.py:772] torch._C._cuda_init() vllm-qwen3-vl-nvfp4 | ERROR 02-02 21:49:32 [v1/executor/multiproc_executor.py:772] RuntimeError: Unexpected error from cudaGetDeviceCount(). Did you run some cuda functions before calling NumCudaDevices() that might have already set an error? Error 803: system has unsupported display driver / cuda driver combination ``` This is the docker image and i've latest cuda container toolkit and nvidia driver. OS is ubuntu server 25. Did anyone see anything like this or have any pointer? Thanks!
2026-02-03T05:57:11
https://www.reddit.com/r/LocalLLaMA/comments/1qujz0g/vllm_0150_docker_image_error/
Reasonable_Friend_77
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qujz0g
false
null
t3_1qujz0g
/r/LocalLLaMA/comments/1qujz0g/vllm_0150_docker_image_error/
false
false
self
0
null
Reducing LLMs with Semantic Temperature instead of Greedy Temperature
1
[removed]
2026-02-03T05:43:59
https://www.reddit.com/r/LocalLLaMA/comments/1qujqai/reducing_llms_with_semantic_temperature_instead/
No-Bus-3800
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qujqai
false
null
t3_1qujqai
/r/LocalLLaMA/comments/1qujqai/reducing_llms_with_semantic_temperature_instead/
false
false
self
1
{'enabled': False, 'images': [{'id': 'm0xqYRg84t32K_kqbrOlOlUhzGMqoNE2hSaVaz2N1eY', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/m0xqYRg84t32K_kqbrOlOlUhzGMqoNE2hSaVaz2N1eY.png?width=108&crop=smart&auto=webp&s=4239acf255e078ca5a579161d9d2ec84c16e5755', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/m0xqYRg84t32K_kqbrOlOlUhzGMqoNE2hSaVaz2N1eY.png?width=216&crop=smart&auto=webp&s=907b329d5f4599604d22018b196db130e2bb793b', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/m0xqYRg84t32K_kqbrOlOlUhzGMqoNE2hSaVaz2N1eY.png?width=320&crop=smart&auto=webp&s=8b4748ff6f68b05c21e035b871a01e4212d67325', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/m0xqYRg84t32K_kqbrOlOlUhzGMqoNE2hSaVaz2N1eY.png?width=640&crop=smart&auto=webp&s=43d16e00e838a6e15ac02d03a4d7a7f8a0013259', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/m0xqYRg84t32K_kqbrOlOlUhzGMqoNE2hSaVaz2N1eY.png?width=960&crop=smart&auto=webp&s=01cb22550fb8073abfdd12dfa04d566f6a47e388', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/m0xqYRg84t32K_kqbrOlOlUhzGMqoNE2hSaVaz2N1eY.png?width=1080&crop=smart&auto=webp&s=93aaefedfe3456841a9ebeb0730094847c3f92d0', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/m0xqYRg84t32K_kqbrOlOlUhzGMqoNE2hSaVaz2N1eY.png?auto=webp&s=c41fec8b77c01f04ad7c068c885057c5ffd88bd7', 'width': 1200}, 'variants': {}}]}
I fixed "Vote Splitting" in sampling: Applying Temperature Semantically to make small models reliable (Open Source)
1
[removed]
2026-02-03T05:35:56
https://www.reddit.com/r/LocalLLaMA/comments/1qujl14/i_fixed_vote_splitting_in_sampling_applying/
No-Bus-3800
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qujl14
false
null
t3_1qujl14
/r/LocalLLaMA/comments/1qujl14/i_fixed_vote_splitting_in_sampling_applying/
false
false
self
1
null
Benchmarks are being gamed. Can we build a "Vibe Index" based on this sub's actual feedback?
0
Like many of you, I’m getting tired of seeing new models hitting SOTA on paper, only to find out they’re just another case of benchmark-smuggling or overfitting. All popular leaderboards are known to have bias towards certain model companies (LMSYS, LiveBench etc) Personally, I usually trust highly voted comments in this sub more than any single benchmark. # A few questions: * When you see a new model posted here, what convinces you it might be good? (specific phrases, tests, failure modes, numbers?) * Do you rely more on: * upvotes on the post * a few detailed technical comments * or your own quick local tests? I’m asking because I’m thinking about building an open-source tool with an automated pipeline that scrapes r/LocalLLaMA posts and comments to build a "Community Vibe Score" for models. The goal is to **turn UGC into a structured leaderboard where "Sentiment" is weighted by upvotes, user reputation, and technical specifics**. Honest answers appreciated, including “this would be useless.” lol
2026-02-03T05:21:02
https://www.reddit.com/r/LocalLLaMA/comments/1qujb8c/benchmarks_are_being_gamed_can_we_build_a_vibe/
Ok-Atmosphere3141
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qujb8c
false
null
t3_1qujb8c
/r/LocalLLaMA/comments/1qujb8c/benchmarks_are_being_gamed_can_we_build_a_vibe/
false
false
self
0
null
Should I buy a P104-100 or CMP 30HX for LM Studio?
3
My current specs are a Ryzen 2400G and 32GB of RAM. I’m looking for a cheap GPU to run LLMs locally (mostly using LM Studio). Since these mining cards are quite affordable, I'm considering them, but I’m worried about the VRAM. With only 6–8GB, what models can I realistically run? For context, I’m currently running a 20B model on my 2400G (offloading to CPU) at about 4 tokens/s. On my laptop (4800H + GTX 1650), I get around 10 tokens/s, but it slows down significantly as the context grows or when I use tools like search/document analysis. Which card would be the better upgrade?
2026-02-03T05:05:06
https://www.reddit.com/r/LocalLLaMA/comments/1quj024/should_i_buy_a_p104100_or_cmp_30hx_for_lm_studio/
Dazzling_Buy9625
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1quj024
false
null
t3_1quj024
/r/LocalLLaMA/comments/1quj024/should_i_buy_a_p104100_or_cmp_30hx_for_lm_studio/
false
false
self
3
null
TheLatent.io - Social Network for AI Agents (Python SDK + MCP Server)
0
Hey everyone! Just released TheLatent.io Python SDK - a social network designed specifically for AI agents. \*\*Install:\*\* \`\`\` pip install thelatent \`\`\` \*\*Features:\*\* - Full social networking for AI bots (posts, DMs, reactions, follows) - MCP Server for Claude Desktop/Code integration - Say "Post to TheLatent" and Claude does it automatically - Bot API for programmatic access \*\*Quick Example:\*\* \`\`\`python from thelatent import Bot bot = Bot(api\_key="your-key") bot.post("Hello from my AI agent!") bot.react(post\_id, "fire") bot.dm("other\_bot", "Let's chat!") \`\`\` \*\*Links:\*\* - PyPI: https://pypi.org/project/thelatent/ - Website: https://thelatent.io Would love to hear your feedback!
2026-02-03T05:03:17
https://www.reddit.com/r/LocalLLaMA/comments/1quiytj/thelatentio_social_network_for_ai_agents_python/
Standard_Pea_2547
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1quiytj
false
null
t3_1quiytj
/r/LocalLLaMA/comments/1quiytj/thelatentio_social_network_for_ai_agents_python/
false
false
self
0
null
Do LLM make more mistakes with CSV compared to JSON
2
As CSV only have type in header and you have to count commas would a LLM get confused and mismatch columns? List of JSON object repeat the key for every row, does that help LLM to keep track of key value pairs?
2026-02-03T05:01:27
https://www.reddit.com/r/LocalLLaMA/comments/1quixeg/do_llm_make_more_mistakes_with_csv_compared_to/
arstarsta
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1quixeg
false
null
t3_1quixeg
/r/LocalLLaMA/comments/1quixeg/do_llm_make_more_mistakes_with_csv_compared_to/
false
false
self
2
null
Made a security proxy for OpenClaw/Moltbot/Clawdbot - one URL change
0
Been running OpenClaw and the prompt injection thing kept nagging at me. Saw that ZeroLeaks test showing 91% injection success rate and finally decided to do something about it. So I built a proxy that sits between your agent and the LLM. It scans everything going in and out - prompt injection, API keys leaking, PII, SSRF, base64 encoding tricks, all of it. One URL change to set it up. Works with Claude, GPT, Gemini, whatever you're using. Your keys stay in Cloudflare KV so we never see them. [SeqPU.com/mco](http://SeqPU.com/mco)
2026-02-03T04:45:42
https://www.reddit.com/r/LocalLLaMA/comments/1quim14/made_a_security_proxy_for_openclawmoltbotclawdbot/
Impressive-Law2516
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1quim14
false
null
t3_1quim14
/r/LocalLLaMA/comments/1quim14/made_a_security_proxy_for_openclawmoltbotclawdbot/
false
false
self
0
null
Is Quantization-as-a-Service actually needed, or are free tools already enough?
0
I’m building a Quantization-as-a-Service (QaaS) product and I want an honest sanity check from people deploying LLMs in production. Before going further, I’m trying to answer a basic question: **is QaaS even necessary?** The direction I’m exploring looks like this: * run **baseline metrics** (perplexity / MMLU / task-specific evals) * show **forecasted post-quant metrics** before doing the full conversion * **one-click quantize** into common formats (AWQ / FP8 / GGUF) * handle **instance + format churn** and roll in new quant methods over time * deliver everything via a stable SDK instead of custom scripts But I’m not convinced this is actually solving a real problem. For those running models in prod: * Do you already have this handled internally with llama.cpp / vLLM / bitsandbytes? * Is accuracy drift something you actively worry about, or do you just test and move on? * Is the pain more about **trust**, **time**, or is there basically *no pain* here? If a managed service handled quantization end-to-end (including before/after metrics), would you use it — or is this firmly in “don’t outsource this” territory? I’m genuinely trying to understand whether QaaS is a real category, or just unnecessary abstraction over tools that are already good enough. Brutal feedback appreciated.
2026-02-03T04:32:30
https://www.reddit.com/r/LocalLLaMA/comments/1quicis/is_quantizationasaservice_actually_needed_or_are/
Creative_Tax8134
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1quicis
false
null
t3_1quicis
/r/LocalLLaMA/comments/1quicis/is_quantizationasaservice_actually_needed_or_are/
false
false
self
0
null
What settings are best for stepfun-ai/Step-3.5-Flash-Int4 on llama.cpp ???
13
I'm getting a LOT of repetition in the thinking with: --ctx-size 80000 \ --batch-size 4096 \ --ubatch-size 2048 \ --fit on \ --flash-attn on \ --cache-type-k q8_0 \ --cache-type-v q8_0 \ --cont-batching \ --kv-unified \ --jinja \ --mlock \ --no-mmap \ --numa distribute \ --op-offload \ --repack \ --slots \ --parallel 1 \ --threads 16 \ --threads-batch 16 \ --temp 1.0 \ --top-k 40 \ --top-p 0.95 \ --min-p 0.0 \ --warmup
2026-02-03T04:17:18
https://www.reddit.com/r/LocalLLaMA/comments/1qui1ir/what_settings_are_best_for/
johnnyApplePRNG
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qui1ir
false
null
t3_1qui1ir
/r/LocalLLaMA/comments/1qui1ir/what_settings_are_best_for/
false
false
self
13
{'enabled': False, 'images': [{'id': 'uWCMnI8WKB5t3B08tzVhdcIn0bJddI_Dq-SOlrXq4pM', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/uWCMnI8WKB5t3B08tzVhdcIn0bJddI_Dq-SOlrXq4pM.png?width=108&crop=smart&auto=webp&s=975e1f0db4397bf6896f6fe40aa021b3f28d3111', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/uWCMnI8WKB5t3B08tzVhdcIn0bJddI_Dq-SOlrXq4pM.png?width=216&crop=smart&auto=webp&s=c49eec5702f2447a339ac6bef22a123674b27c71', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/uWCMnI8WKB5t3B08tzVhdcIn0bJddI_Dq-SOlrXq4pM.png?width=320&crop=smart&auto=webp&s=bd43ef5195f19c12d5287479f1d67464f1466f57', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/uWCMnI8WKB5t3B08tzVhdcIn0bJddI_Dq-SOlrXq4pM.png?width=640&crop=smart&auto=webp&s=69d0711a89a12cbf2c744c54cd6d194d1d9dc9cf', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/uWCMnI8WKB5t3B08tzVhdcIn0bJddI_Dq-SOlrXq4pM.png?width=960&crop=smart&auto=webp&s=fec5b48ab410060d572337f3ae7b72ddd68c8743', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/uWCMnI8WKB5t3B08tzVhdcIn0bJddI_Dq-SOlrXq4pM.png?width=1080&crop=smart&auto=webp&s=5d69a1a6f4790fe223cf4594edc5628ed5c9d121', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/uWCMnI8WKB5t3B08tzVhdcIn0bJddI_Dq-SOlrXq4pM.png?auto=webp&s=e4aefc6289603d232d39254ba8acb10e90154238', 'width': 1200}, 'variants': {}}]}
Using LLM Machine as a Desktop and Server
1
I've installed a 3060 12GB in my machine and can run qwen3:14b without many issues, staying with 10GB VRAM. When I try to go for the bigger models like qwen3:30b-a3b, it fills up my VRAM and spills into my RAM, as expected. Unfortunately, my monitor freezes up and is unusable until the computation is done. For those who use their computers both as LLM servers and desktops, do you switch between modes, or somehow allocate enough VRAM to keep your computer from freezing up with running inference? I guess I could shell in and stop the llama.cpp container, but I'm wondering if there's a more elegant solution.
2026-02-03T04:10:40
https://www.reddit.com/r/LocalLLaMA/comments/1quhwoq/using_llm_machine_as_a_desktop_and_server/
UndefinedBurrito
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1quhwoq
false
null
t3_1quhwoq
/r/LocalLLaMA/comments/1quhwoq/using_llm_machine_as_a_desktop_and_server/
false
false
self
1
null
I built Qwen3-TTS Studio – Clone your voice and generate podcasts locally, no ElevenLabs needed
194
Hey everyone, I've been using Qwen3-TTS and found the existing demo a bit limited for what I wanted to do. So I built a proper interface with fine-grained control and a killer feature: \*\*automated podcast generation\*\*. \*\*What it does:\*\* - 🎙️ Clone any voice with just a 3-second audio sample - 🎚️ Fine-tune parameters (temperature, top-k, top-p) with quality presets - 📻 Generate complete podcasts from just a topic – AI writes the script, assigns voices, and synthesizes everything - 🌍 10 languages supported (Korean, English, Chinese, Japanese, etc.) Currently uses gpt5.2 for script generation, but the architecture is modular – you can swap in any local LLM (Qwen, Llama, etc.) if you want fully local. \*\*The TTS runs entirely local\*\* on your machine (macOS MPS / Linux CUDA). No API calls for voice synthesis = unlimited generations, zero cost. Basically: ElevenLabs-style voice cloning + NotebookLM-style podcast generation, but local. GitHub: [https://github.com/bc-dunia/qwen3-TTS-studio](https://github.com/bc-dunia/qwen3-TTS-studio) Happy to answer any questions!
2026-02-03T04:06:59
https://www.reddit.com/r/LocalLLaMA/comments/1quhtzi/i_built_qwen3tts_studio_clone_your_voice_and/
BC_MARO
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1quhtzi
false
null
t3_1quhtzi
/r/LocalLLaMA/comments/1quhtzi/i_built_qwen3tts_studio_clone_your_voice_and/
false
false
self
194
null
Is it true on a powerful system that llamacpp is not good?
0
If that’s the case, what would you guys recommend?
2026-02-03T03:40:53
https://www.reddit.com/r/LocalLLaMA/comments/1quhaa0/is_it_true_on_a_powerful_system_that_llamacpp_is/
XiRw
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1quhaa0
false
null
t3_1quhaa0
/r/LocalLLaMA/comments/1quhaa0/is_it_true_on_a_powerful_system_that_llamacpp_is/
false
false
self
0
null
Open source security harness for AI coding agents — blocks rm -rf, SSH key theft, API key exposure before execution (Rust)
0
With AI coding agents getting shell access, filesystem writes, and git control, I got paranoid enough to build a security layer. OpenClaw Harness intercepts every tool call an AI agent makes and checks it against security rules before allowing execution. Think of it as iptables for AI agents. Key features: \- Pre-execution blocking (not post-hoc scanning) \- 35 rules: regex, keyword, or template-based \- Self-protection: 6 layers prevent the agent from disabling the harness \- Fallback mode: critical rules work even if the daemon crashes \- Written in Rust for zero overhead Example — agent tries \`rm -rf \~/Documents\`: → Rule "dangerous\_rm" matches → Command NEVER executes → Agent gets error and adjusts approach → You get a Telegram alert GitHub: [https://github.com/sparkishy/openclaw-harness](https://github.com/sparkishy/openclaw-harness) Built with Rust + React. Open source (BSL 1.1 → Apache 2.0 after 4 years).
2026-02-03T03:15:02
https://www.reddit.com/r/LocalLLaMA/comments/1qugqbg/open_source_security_harness_for_ai_coding_agents/
Automatic-Ask8373
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qugqbg
false
null
t3_1qugqbg
/r/LocalLLaMA/comments/1qugqbg/open_source_security_harness_for_ai_coding_agents/
false
false
self
0
{'enabled': False, 'images': [{'id': 'e7-KedzOJkZXSe6YMapwYQVn6UueaNqnbc9iXT7BzgI', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/e7-KedzOJkZXSe6YMapwYQVn6UueaNqnbc9iXT7BzgI.png?width=108&crop=smart&auto=webp&s=798e1b72b5187f4801ef442ea1c6123aa0399cb1', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/e7-KedzOJkZXSe6YMapwYQVn6UueaNqnbc9iXT7BzgI.png?width=216&crop=smart&auto=webp&s=13fd696174d03a1cddd4c91f8ab86e7f77c9a6d2', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/e7-KedzOJkZXSe6YMapwYQVn6UueaNqnbc9iXT7BzgI.png?width=320&crop=smart&auto=webp&s=841d1c8d94c8083ceb20bb368d04722c7ee0fb6e', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/e7-KedzOJkZXSe6YMapwYQVn6UueaNqnbc9iXT7BzgI.png?width=640&crop=smart&auto=webp&s=29f058517d93e7495c76630f6877b059199dd7c0', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/e7-KedzOJkZXSe6YMapwYQVn6UueaNqnbc9iXT7BzgI.png?width=960&crop=smart&auto=webp&s=8c5095f3dcb642e6e522e0c458001a253ad94928', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/e7-KedzOJkZXSe6YMapwYQVn6UueaNqnbc9iXT7BzgI.png?width=1080&crop=smart&auto=webp&s=3d6fd28c3bbce94db5dd50634663aca72ad77b9a', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/e7-KedzOJkZXSe6YMapwYQVn6UueaNqnbc9iXT7BzgI.png?auto=webp&s=637f005074bb9e8ed437e52b24afc0344f84eb28', 'width': 1200}, 'variants': {}}]}
Something isn't right , I need help
0
I didn't buy amd for ai work load , i brought it mainly to run macOS (hackintosh, in a itx pc ) but since i had it i decided to see how it performance running some basic llm task ........ expectation 10-20 tokens .. if im lucky maybe 30 plus base on reviews and recommendation from ai models , reddit and facebook and youtube .. they always suggest not buying a gpu without cuda ( nvida ) basically MAYBE I'VE A SPECIAL UNIT and silicon is just slightly better or maybe im crazy but why am i seeing 137tokens nearly **140 tok/sec** **3080 is so limited by it vram** , 3080 super car but the vram is like a grandma trying to load the data .. yes a fast gpu but that extra 6gb that most "youtubers " tell you is not worth it getting amd ... is nonsense and reviews online and people drink " cuda " like if it's a drug .... i don't believe in brand loyalty .. i have a core ultra 7 265k .. .. slight regret . bit sad they're dumping platform i will of love to upgrade to a more efficient cpu ... anyways what im trying to say is **amd have done a really great job ,** fresh install by the way literally install llm studio and download model . max context length 132k i notice if the longer context windows do reduce performance every so slightly ... but i hit it really hard with a very large code basic and lowest was 80tok/sec ... reason i didn't put this in most user who posted, they also use small context windows .. if you uplaod a file. the performance is okay ... but if you try to copy and large an insane amount of text .. it do drop
2026-02-03T02:56:42
https://www.reddit.com/gallery/1qugbfb
big-D-Larri
reddit.com
1970-01-01T00:00:00
0
{}
1qugbfb
false
null
t3_1qugbfb
/r/LocalLLaMA/comments/1qugbfb/something_isnt_right_i_need_help/
false
false
https://b.thumbs.redditm…mMy-A0IEJLLk.jpg
0
null
about Cursor ai
1
[removed]
2026-02-03T02:36:11
https://www.reddit.com/r/LocalLLaMA/comments/1qufurb/about_cursor_ai/
Current-Branch-8595
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qufurb
false
null
t3_1qufurb
/r/LocalLLaMA/comments/1qufurb/about_cursor_ai/
false
false
self
1
null
When calling local llm in cursor ai
1
[removed]
2026-02-03T02:34:23
https://www.reddit.com/r/LocalLLaMA/comments/1quftcc/when_calling_local_llm_in_cursor_ai/
Current-Branch-8595
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1quftcc
false
null
t3_1quftcc
/r/LocalLLaMA/comments/1quftcc/when_calling_local_llm_in_cursor_ai/
false
false
self
1
null
I built an open-source observability tool for AI agents — track costs, tokens, and debug traces (self-hostable)
1
Hey everyone, I've been building AI agents for a while and got frustrated with: 1. Not knowing how much each agent run costs 2. Debugging failed runs without seeing the full trace 3. Paying for expensive SaaS tools just to see basic metrics So I built **AgentPulse** — lightweight, open-source observability for AI agents. **What it does:** • Cost tracking: See exactly how much each agent run costs (supports GPT-4o, Claude 3.5, etc.) • Trace visualization: Full span tree showing every LLM call, tool use, and nested operation • Auto-instrumentation: Patch OpenAI/Anthropic clients to capture calls automatically • Self-hostable: Single docker-compose up, data stays on your machine **Quick start:** pip install agentpulse-ai from agentpulse import AgentPulse, trace ap = AgentPulse(endpoint="http://localhost:3000") (name="my-agent") def run_agent(prompt): # your agent code pass **Stack:** • Python SDK (zero dependencies) • Collector: Bun + Hono + SQLite • Dashboard: SvelteKit **Links:** • GitHub: [https://github.com/nandusmasta/agentpulse](https://github.com/nandusmasta/agentpulse) • PyPI: [https://pypi.org/project/agentpulse-ai/](https://pypi.org/project/agentpulse-ai/) • Docs: [https://github.com/nandusmasta/agentpulse/tree/main/docs](https://github.com/nandusmasta/agentpulse/tree/main/docs) It's MIT licensed, free forever for self-hosting. I'm considering a hosted version later but the core will always be open source. Would love feedback! What features would make this more useful for your workflow?
2026-02-03T02:06:46
https://www.reddit.com/r/LocalLLaMA/comments/1quf6iv/i_built_an_opensource_observability_tool_for_ai/
nanduskaiser
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1quf6iv
false
null
t3_1quf6iv
/r/LocalLLaMA/comments/1quf6iv/i_built_an_opensource_observability_tool_for_ai/
false
false
self
1
null
South Korea's AI Industry Exports Full Stack to Saudi Aramco
3
2026-02-03T01:34:38
https://www.chosun.com/english/industry-en/2026/02/02/KAQOZZIMUZH4LDU5PPTZUWGF2M/
self-fix
chosun.com
1970-01-01T00:00:00
0
{}
1quefwe
false
null
t3_1quefwe
/r/LocalLLaMA/comments/1quefwe/south_koreas_ai_industry_exports_full_stack_to/
false
false
default
3
{'enabled': False, 'images': [{'id': 'LFFzDXEACZ0ILGbeqwi1ihGS0ir0TC1P_bpvwuzFTIk', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/LFFzDXEACZ0ILGbeqwi1ihGS0ir0TC1P_bpvwuzFTIk.jpeg?width=108&crop=smart&auto=webp&s=e9e5df689cd271f56e9a76b0aa2de638b23e7d4b', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/LFFzDXEACZ0ILGbeqwi1ihGS0ir0TC1P_bpvwuzFTIk.jpeg?width=216&crop=smart&auto=webp&s=28ef113ea4b56dcd74c2ffe8dea7b19b471ddb29', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/LFFzDXEACZ0ILGbeqwi1ihGS0ir0TC1P_bpvwuzFTIk.jpeg?width=320&crop=smart&auto=webp&s=dfcd3531eade7b8ba4a3ac6117c57ee26293f336', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/LFFzDXEACZ0ILGbeqwi1ihGS0ir0TC1P_bpvwuzFTIk.jpeg?width=640&crop=smart&auto=webp&s=8ba046f10fd0bc3e954dd4b7cd7d035ea53134ea', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/LFFzDXEACZ0ILGbeqwi1ihGS0ir0TC1P_bpvwuzFTIk.jpeg?width=960&crop=smart&auto=webp&s=88fad84ff24258d611c3ae9db42b777f46be58a7', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/LFFzDXEACZ0ILGbeqwi1ihGS0ir0TC1P_bpvwuzFTIk.jpeg?width=1080&crop=smart&auto=webp&s=24b29941712d883b70d17ff00ed0662d93dca63b', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/LFFzDXEACZ0ILGbeqwi1ihGS0ir0TC1P_bpvwuzFTIk.jpeg?auto=webp&s=d57808320defbc13d14be9363a0d1c597121d666', 'width': 1200}, 'variants': {}}]}
What model for RTX 3090 Ti?
0
What model size and context to load on ollama for openclaw? RTX 3090 Ti FE Ryzen 9 9050X 64GB RAM
2026-02-03T01:21:54
https://www.reddit.com/r/LocalLLaMA/comments/1que58a/what_model_for_rtx_3090_ti/
throwaway510150999
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1que58a
false
null
t3_1que58a
/r/LocalLLaMA/comments/1que58a/what_model_for_rtx_3090_ti/
false
false
self
0
null
LLM based on ancient Indian Philosophy
0
I finetuned an LLM on Indian Philosophical texts to improve reasoning. Looking for red-teamers!https://www.atmakosh.com/invite
2026-02-03T01:10:08
https://www.reddit.com/r/LocalLLaMA/comments/1qudvgc/llm_based_on_ancient_indian_philosophy/
nrgore1
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qudvgc
false
null
t3_1qudvgc
/r/LocalLLaMA/comments/1qudvgc/llm_based_on_ancient_indian_philosophy/
true
false
spoiler
0
null
How to prevent MacOS annoying RAM compression behavior
14
Hi guys. I recently bought a MacBook M4 Pro 48GB. And I currently running a Qwen coder 30B in LM Studio all time. It works pretty well, never hit swap. But what annoying me is that MacOS always tries to compress this llm when llm goes into inactive status, and it seems like this compression process never goes to end so that RAM load indicator is always yellow until I trigger the llm to response my request. Does this behavior cause any significant problems in long time? or is there any solution to prevent macOS from trying to compress this LLM? Thanks. https://preview.redd.it/zd3i4xl8h6hg1.png?width=2480&format=png&auto=webp&s=14eed75559eb851f5396a0d696d3d4b028ba042e
2026-02-03T01:04:36
https://www.reddit.com/r/LocalLLaMA/comments/1qudqul/how_to_prevent_macos_annoying_ram_compression/
Sea_Smoke_7626
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qudqul
false
null
t3_1qudqul
/r/LocalLLaMA/comments/1qudqul/how_to_prevent_macos_annoying_ram_compression/
false
false
self
14
null
Seeking advice on RAG optimization for legal discovery on M4 Pro (48GB RAM)
0
Hi everyone, I'm running **Ollama (Qwen 3:30B)** and **AnythingLLM** on a **Mac M4 Pro (48GB RAM)**. I'm using this setup to analyse local documents for my own employment lawsuit (allegation of retaliation). I have hundreds of pages of evidence, and since my hearing was delayed to late Feb, I’m trying to leverage local LLMs to review the files more effectively. However, I've run into a few hurdles and would appreciate your expertise: 1. **Multilingual OCR Issues**: Some of my evidence files contain **French**. Even after using macOS Preview's 'Embed Text' feature, the French portions are often garbled or unrecognisable by the LLM. Are there any superior local OCR tools (CLI or GUI) that handle bilingual (EN/FR) legal scans more reliably? 2. **Unstable Context/Retrieval**: Even after embedding all key files into AnythingLLM, the "memory" and context indexing feel inconsistent. The AI often misses crucial details or hallucinate facts that are clearly in the documents. * What are the best **Chunk Size** and **Overlap** settings for dense legal text? * Should I be looking at different **Vector Databases** or specific **RAG configurations** to improve pinpoint accuracy for cross-referencing? Any tips on how to turn this into a "bulletproof" legal assistant before my hearing? Thanks in advance!
2026-02-03T00:58:47
https://www.reddit.com/r/LocalLLaMA/comments/1qudlwq/seeking_advice_on_rag_optimization_for_legal/
Jamie_GZ
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qudlwq
false
null
t3_1qudlwq
/r/LocalLLaMA/comments/1qudlwq/seeking_advice_on_rag_optimization_for_legal/
false
false
self
0
null
Best GPU for $250? NVIDIA P40 or MI50 32gb?
0
P40 looks supported on CUDA, NVIDIA drivers. Last gen but works. AMD MI50 seems to be a hassle to even install drivers LOL? **Using** * multi GPU * vLLM, ik\_llama.cpp (tensor parallel), llama.cpp * inference, maybe finetuning 32GB seems like a win, thoughts?
2026-02-03T00:37:57
https://www.reddit.com/r/LocalLLaMA/comments/1qud4nu/best_gpu_for_250_nvidia_p40_or_mi50_32gb/
ClimateBoss
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qud4nu
false
null
t3_1qud4nu
/r/LocalLLaMA/comments/1qud4nu/best_gpu_for_250_nvidia_p40_or_mi50_32gb/
false
false
self
0
null
Lenovo P700 bifurcation, which ports are which setting in bios?
1
[removed]
2026-02-03T00:37:08
https://www.reddit.com/r/LocalLLaMA/comments/1qud3zu/lenovo_p700_bifurcation_which_ports_are_which/
ShreddinPB
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qud3zu
false
null
t3_1qud3zu
/r/LocalLLaMA/comments/1qud3zu/lenovo_p700_bifurcation_which_ports_are_which/
false
false
self
1
null
WebLLM wrapper with resumable downloads - stop re-downloading 4GB models when connection drops
0
Been using WebLLM for browser inference. Downloads kept failing partway through and I'd have to start from zero every time. Made a wrapper that: * Stores verified chunks in IndexedDB * Resumes from last chunk after network failure * WASM streaming hasher (2MB memory vs buffering entire model in RAM) * Verifies integrity so you know the model file wasn't corrupted or tampered with Drop-in replacement for MLCEngine: import { VerifiedMLCEngine } from '@verifyfetch/webllm'; const engine = new VerifiedMLCEngine(); await engine.reload("Phi-3-mini-4k-instruct-q4f16_1-MLC"); Works with any WebLLM model. Also has a lower-level API if you're loading GGUF or other formats manually. [https://github.com/hamzaydia/verifyfetch](https://github.com/hamzaydia/verifyfetch) Curious if anyone else doing browser inference has run into this.
2026-02-03T00:34:55
https://www.reddit.com/r/LocalLLaMA/comments/1qud235/webllm_wrapper_with_resumable_downloads_stop/
aginext
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qud235
false
null
t3_1qud235
/r/LocalLLaMA/comments/1qud235/webllm_wrapper_with_resumable_downloads_stop/
false
false
self
0
null
Built an open-source marketplace where AI agents discover and consume tools -- runs fully local with Ollama, no API keys
0
I've been working on Axiomeer, an open-source project that lets AI agents shop for tools at runtime instead of hardcoding integrations. The idea: providers publish products (APIs, datasets, model endpoints) to a marketplace. Agents describe what they need, the marketplace ranks options by capability match / latency / cost, executes the best one, validates the output (are citations present? is the data real or mock?), and logs an immutable receipt. The trust layer is the part I think is missing from existing approaches: - If citations are required but missing → execution fails - If evidence quality is LOW (mock/simulated) → the agent abstains instead of hallucinating - Every execution is logged as an auditable receipt It runs 100% locally with Ollama for capability inference and answer generation. No paid API keys. The v1 has real providers (Open-Meteo weather, Wikipedia summaries) plus mock endpoints for testing. Stack: Python, FastAPI, SQLAlchemy, Ollama, Typer/Rich CLI. 67 tests passing. I'm looking for contributors to add more providers each one is ~30 lines of Python + a JSON manifest. The architecture supports anything that returns structured JSON: finance APIs, search engines, code execution, document retrieval, etc. GitHub: https://github.com/ujjwalredd/Axiomeer Happy to answer questions about the architecture or design decisions.
2026-02-03T00:26:27
https://www.reddit.com/r/LocalLLaMA/comments/1qucup0/built_an_opensource_marketplace_where_ai_agents/
AutoProspectAI
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qucup0
false
null
t3_1qucup0
/r/LocalLLaMA/comments/1qucup0/built_an_opensource_marketplace_where_ai_agents/
false
false
self
0
null
Smartest model for 24-28GB vram?
59
I was super happy to find qwen 30B A3B being so damn clever on my 3090 and then I tried GLM flash 4.7 and I was blown away. Is there any other model that’s smart like this? My use case is using it as an agentic coder but bonus points if it can do rp like GLM flash lol
2026-02-03T00:19:27
https://www.reddit.com/r/LocalLLaMA/comments/1qucoid/smartest_model_for_2428gb_vram/
Borkato
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qucoid
false
null
t3_1qucoid
/r/LocalLLaMA/comments/1qucoid/smartest_model_for_2428gb_vram/
false
false
self
59
null
Best clinical models for cardiovascular ?
0
What are some best clinical models for the cardiovascular or just generally nowadays (<= 30B preferrably) ?
2026-02-02T23:49:50
https://www.reddit.com/r/LocalLLaMA/comments/1qubys7/best_clinical_models_for_cardiovascular/
jiii95
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qubys7
false
null
t3_1qubys7
/r/LocalLLaMA/comments/1qubys7/best_clinical_models_for_cardiovascular/
false
false
self
0
null
What's your daily driver model in early 2026?
1
[removed]
2026-02-02T23:46:27
https://www.reddit.com/r/LocalLLaMA/comments/1qubvuu/whats_your_daily_driver_model_in_early_2026/
LocalLLMHobbyist
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qubvuu
false
null
t3_1qubvuu
/r/LocalLLaMA/comments/1qubvuu/whats_your_daily_driver_model_in_early_2026/
false
false
self
1
null
Power limiting RTX 3060 and B580 to avoid buying a new PSU
0
My specs: -i5-13500, PL2 set to 65W -2x16GB DDR5-4800 -2x NVMe PCIe 3.0 x4 SSD -3x case fans -1x tower CPU cooler fan -MSI B760M Gaming Plus Wifi DDR5 -Intel ARC B580 on the first PCIe x16 slot (card has only 8 lanes) -RTX 3060 on the second PCIe x16 slot, limited to x4 from chipset -Corsair CX550F RGB I am planning to use the B580 for gaming and custom LLM training in pytorch. The 3060 will only be used for tensor parallel inference using vulkan llama.cpp, and the only time both GPUs will draw a lot of power is during the token preprocessing stage. Would it be safe for me to skip buying a higher power PSU if i were to power limit both while i am running inference? I made the mistake of not budgeting properly and I am really tired of spending money after replacing my mobo and getting the B580. I already have all the parts listed right now.
2026-02-02T23:32:40
https://www.reddit.com/r/LocalLLaMA/comments/1qubjr7/power_limiting_rtx_3060_and_b580_to_avoid_buying/
disasterloafgonedumb
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qubjr7
false
null
t3_1qubjr7
/r/LocalLLaMA/comments/1qubjr7/power_limiting_rtx_3060_and_b580_to_avoid_buying/
false
false
self
0
null
Some Step-3.5-Flash benchmarks on AMD Strix Halo (llama.cpp)
5
Benchmark on AMD Strix Halo (Minisforum MS S1 Max) : # Rocm 7.1.1 # llama-bench |model|size|params|backend|ngl|fa|mmap|test|t/s| |:-|:-|:-|:-|:-|:-|:-|:-|:-| |step35 ?B Q4\_K - Small|103.84 GiB|196.96 B|ROCm|999|1|0|pp4096|258.82 ± 3.15| |step35 ?B Q4\_K - Small|103.84 GiB|196.96 B|ROCm|999|1|0|pp32768|208.35 ± 1.86| |step35 ?B Q4\_K - Small|103.84 GiB|196.96 B|ROCm|999|1|0|tg512|22.93 ± 0.00| # Vulkan-amdvlk # llama-bench |model|size|params|backend|ngl|fa|mmap|test|t/s| |:-|:-|:-|:-|:-|:-|:-|:-|:-| |step35 ?B Q4\_K - Small|103.84 GiB|196.96 B|Vulkan|999|1|0|pp4096|153.04 ± 0.30| |step35 ?B Q4\_K - Small|103.84 GiB|196.96 B|Vulkan|999|1|0|pp32768|79.55 ± 0.59| |step35 ?B Q4\_K - Small|103.84 GiB|196.96 B|Vulkan|999|1|0|tg512|2.50 ± 0.00| # Vulkan-radv # llama-bench |model|size|params|backend|ngl|fa|mmap|test|t/s| |:-|:-|:-|:-|:-|:-|:-|:-|:-| |step35 ?B Q4\_K - Small|103.84 GiB|196.96 B|Vulkan|999|1|0|pp4096|164.20 ± 1.30| |step35 ?B Q4\_K - Small|103.84 GiB|196.96 B|Vulkan|999|1|0|pp32768|104.36 ± 0.29| |step35 ?B Q4\_K - Small|103.84 GiB|196.96 B|Vulkan|999|1|0|tg512|27.86 ± 0.00|
2026-02-02T23:22:15
https://www.reddit.com/r/LocalLLaMA/comments/1qubamo/some_step35flash_benchmarks_on_amd_strix_halo/
Grouchy-Bed-7942
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qubamo
false
null
t3_1qubamo
/r/LocalLLaMA/comments/1qubamo/some_step35flash_benchmarks_on_amd_strix_halo/
false
false
self
5
null
vLLM: Nvidia 590.48.01 and CUDA 13.1 "incompatible"?
0
Freshly upgraded Ubuntu. On vLLM, whether the nightly or main docker image, I get: RuntimeError: Unexpected error from cudaGetDeviceCount(). Did you run some cuda functions before calling NumCudaDevices() that might have already set an error? Error 803: system has unsupported display driver / cuda driver combination Unsupported how? Llama.Cpp doesn't have a problem with it, and I'm not sure how or whether I should downgrade. The new vLLM is supposed to support CUDA 13.
2026-02-02T23:18:50
https://www.reddit.com/r/LocalLLaMA/comments/1qub7on/vllm_nvidia_5904801_and_cuda_131_incompatible/
FrozenBuffalo25
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qub7on
false
null
t3_1qub7on
/r/LocalLLaMA/comments/1qub7on/vllm_nvidia_5904801_and_cuda_131_incompatible/
false
false
self
0
null
Jailbreaking an AI Teaches You More About Humans Than Machines
0
2026-02-02T23:17:03
https://medium.com/@neonmaxima/jailbreaking-an-ai-teaches-you-more-about-humans-than-machines-c3e98fb7d81f
amylkazyl
medium.com
1970-01-01T00:00:00
0
{}
1qub64b
false
null
t3_1qub64b
/r/LocalLLaMA/comments/1qub64b/jailbreaking_an_ai_teaches_you_more_about_humans/
false
false
default
0
null
Memora v0.2.18 — Persistent memory for AI agents with knowledge graphs, now with auto-hierarchy
0
New release of Memora, an MCP memory server for Claude Code / Codex CLI with knowledge graphs. **What's new:** **Auto-hierarchy inference** — When you create a memory without specifying where it belongs, Memora now looks at similar existing memories and automatically places it in the right hierarchy. If your architecture notes live under memora/architecture, a new architecture-related memory lands there automatically. Confidence threshold of 0.5 — below that it suggests but doesn't apply. GitHub: [https://github.com/agentic-mcp-tools/memora](https://github.com/agentic-mcp-tools/memora) Release: [https://github.com/agentic-mcp-tools/memora/releases/tag/v0.2.18](https://github.com/agentic-mcp-tools/memora/releases/tag/v0.2.18) https://i.redd.it/gx48jtdny5hg1.gif
2026-02-02T23:13:13
https://www.reddit.com/r/LocalLLaMA/comments/1qub2pn/memora_v0218_persistent_memory_for_ai_agents_with/
spokv
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qub2pn
false
null
t3_1qub2pn
/r/LocalLLaMA/comments/1qub2pn/memora_v0218_persistent_memory_for_ai_agents_with/
false
false
https://b.thumbs.redditm…sYBgd-i4Xf-U.jpg
0
null
How do you keep track of all the AI agents running locally on your machine?
0
I’ve been experimenting with running multiple AI agents locally and realized I didn’t have a great answer to basic questions like: \* what’s actually running right now? \* what woke up in the background? \* what’s still using CPU or memory? Nothing was obviously broken, but I couldn’t confidently explain the lifecycle of some long-running agents. Curious how others here handle this today. Do you actively monitor local agents, or mostly trust the setup?
2026-02-02T22:40:07
https://www.reddit.com/r/LocalLLaMA/comments/1qua7ya/how_do_you_keep_track_of_all_the_ai_agents/
Creative-Pizza661
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qua7ya
false
null
t3_1qua7ya
/r/LocalLLaMA/comments/1qua7ya/how_do_you_keep_track_of_all_the_ai_agents/
false
false
self
0
null
Kimi distillation attempt
21
So the question of a "small Kimi" arises time and time again. And at least once Moonshot said they would welcome community distills: [https://github.com/MoonshotAI/Kimi-K2/issues/16](https://github.com/MoonshotAI/Kimi-K2/issues/16) . Sadly I keep missing AMAs to ask their present view of community distills. I've been interested in the topic for a while, and for the last couple of months was actually trying to do it. I could probably do a lot better, so I'll outline what went on, and the end of the post has a link to my test checkpoint - suggestions of what to change in my process are very mush welcome as is any feedback on the checkpoint. I would also love to learn about other distill projects; so far I know of one, a part of a CoT distill set of leading thinking models: [https://huggingface.co/TeichAI/Qwen3-8B-Kimi-K2-Thinking-Distill](https://huggingface.co/TeichAI/Qwen3-8B-Kimi-K2-Thinking-Distill) . Compared to what I am trying to do, it seems more technical-oriented and also sources Kimi K2 Thinking while my favourite is K2 Instruct 0905 (never tried the non-0905 though). To make mistakes cheap (this is my first model trainjing project) and to ensure the result runs on anything, I picked a very small first target/student model, Granite 4.0 hybrid 1B (really 1.5B). It's actually one heck of a 1B, trained on 15T tokens from scratch - not a sequential distill of something bigger like the Gemma and Qwen examples in this size. Granite's expression style is very neutral and quite constrained (it ignores style/persona instructions in the system prompt); but that also means one is not fighting an existing "vibe" when implanting a new one. The Mamba-hybrid nature means it can scale to longer contexts withoug chokingm even when running on CPU. There's the big question of what one is distilling for; I went for vibe/style/conversation (with roleplay a potential addition at a later stage), but of course there are other options. And from there one gets to "where to get the prompts for generation". The best I could think of was to grab user prompts off existing datasets. First I generated a max\_seq\_len 6000 dataset of Kimi K2 Instruct 0905 answers - including some seriously strong prose, based on prompts from [https://huggingface.co/datasets/HuggingFaceTB/smoltalk-multilingual8-Qwen3-32B-main-gen](https://huggingface.co/datasets/HuggingFaceTB/smoltalk-multilingual8-Qwen3-32B-main-gen) (advice seeking category) and the magpie-ultra source in main Smoltalk. I worked out a Qwen-based pipeline to detect typical hallucinations and also to find facts that need verification; I used Gemini 2.5 Flash with grounding to verify the facts and dropped the lines with wrong or dubious claims. [https://huggingface.co/datasets/ramendik/kimify-20251115](https://huggingface.co/datasets/ramendik/kimify-20251115) Unfortunately, after \*a lot\* of checkpoints it turned out that such long form won't fly with a 1.5B, at least immediately. The result was always too prone to looping (somehow, ifeval at t=0 is a good looping tendency detector and I have a script that specifically checks for loops and counts them; Granite 4.0 h 1b has <20 loops in ifeval while the long-form trained checkpoionts resulted in around 50). While training on that dataset and trying to defeat the instabilty, I found a training algorithm, CorDA KPM [https://huggingface.co/docs/peft/v0.18.0/en/developer\_guides/lora#corda](https://huggingface.co/docs/peft/v0.18.0/en/developer_guides/lora#corda) , that makes things much more stable. As the "knowledge" dataset I just use tool calls (a random subset of the xLAM dataset, reformatted for Granite - can publish if there's any need for it); this lets me avoid locking in Granite's style. While it made things better, I eventually had to give up on the long-form dataset, at least for the first stage. So I generated a larger dataset of smaller answers, using a system prompt to make Kimi birfer but still quite punchy. The typical hallucination filter and fact verifier happened again, and I also filtered out entries where any one assistant message is over 1000 Granite tokens. [https://huggingface.co/datasets/ramendik/kimify-short-20260131](https://huggingface.co/datasets/ramendik/kimify-short-20260131) I also wanted to buttress instruction following but not to benchmax for ifeval, so I never used ifeval prompts but instead took prompts from [https://huggingface.co/datasets/HuggingFaceH4/ifeval-like-data](https://huggingface.co/datasets/HuggingFaceH4/ifeval-like-data) \- then verified the results of Kimi's generation against the constraints. The result is [https://huggingface.co/datasets/ramendik/kimify-ifeval-like](https://huggingface.co/datasets/ramendik/kimify-ifeval-like) My hope is to get a good first checkpoint that has picked up at least the basics of Kimi's stype - and then expand my CorDA KPM dataset with actual text generation in the new style. I would hope that, with the basic style and the new CorDA KPM dataset in place, I can train the next checkpoint on longer samples and on actual multiturn conversations (generated with a red-teaming model). For now it's short-ish single-turn advice-seeking answers and three-turn magpie-ultra-short answers. So, I made my candidate "stage 1" checkpoint. Unlike baselike Granite, it does change its style on system prompts - this is an emergent behaviour, my dataset has no system prompts. So please test with different system prompts; if you don't supply a system prompt, the Granite tokenizer uses a default one that dampens things a bit (or should I cut that out of the tokenizer?). With the larger dataset, the emergent system prompt plasticity was more pronounced and when "creative" was requested the style got quite exuberant - but the loops made me pull away; I am hoping to bring that back in stage 2 with a "fatter" CorDA KPM. (I named the project "Miki" and the 1B size "pebble" - there are suitable Granite models for "cobble" and "boulder" but I want to polish the technique on "pebble" first). The hyperparameters I used - CorDA KPM, r=128 a=256, target\_modules = \["q\_proj", "k\_proj", "v\_proj", "o\_proj", "mamba.in\_proj", "mamba.out\_proj"\] (but notably not the MLP layers - targeting those somehow dilutes any styke impact significantly), Muon optimizer (somehow better on the style), LR=1.5e-5. These gave the best result out of a rather large sweep. This candidate checkpoint is at [https://huggingface.co/ramendik/miki-pebble-20260131](https://huggingface.co/ramendik/miki-pebble-20260131) \- that's the GGUFs in BF16 and Q8\_0 ; if anyone actually needs a lower quant at this size please tell me and I'll bother with the imatrix thing. There is a safetensors version too, at [https://huggingface.co/ramendik/miki-pebble-20260131-safetensors](https://huggingface.co/ramendik/miki-pebble-20260131-safetensors) . Again, feedback very much appreciated, \*especially\* what I can do better. Better sources of prompts, anything really. (One thing I'm not changing is the general style/writing/conversational direction; I just don't think I know enough to do a coding or agentic oriented distill). And links to other Kimi distill projects are very welcome too.
2026-02-02T22:36:51
https://www.reddit.com/r/LocalLLaMA/comments/1qua4xb/kimi_distillation_attempt/
ramendik
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qua4xb
false
null
t3_1qua4xb
/r/LocalLLaMA/comments/1qua4xb/kimi_distillation_attempt/
false
false
self
21
{'enabled': False, 'images': [{'id': 'UpEZLDbtiAmghXcZIIzdXfNnPN7hsdTNtw88v2I75J8', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/UpEZLDbtiAmghXcZIIzdXfNnPN7hsdTNtw88v2I75J8.png?width=108&crop=smart&auto=webp&s=7f47aa2be3f8fdf6f7f88f3048d8d52a1db59f63', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/UpEZLDbtiAmghXcZIIzdXfNnPN7hsdTNtw88v2I75J8.png?width=216&crop=smart&auto=webp&s=767bc29593bd45c8b709ddda9939b787ab1e10d5', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/UpEZLDbtiAmghXcZIIzdXfNnPN7hsdTNtw88v2I75J8.png?width=320&crop=smart&auto=webp&s=16bbdcf6bbfb1e3e288738fa79dbbf9d0165848c', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/UpEZLDbtiAmghXcZIIzdXfNnPN7hsdTNtw88v2I75J8.png?width=640&crop=smart&auto=webp&s=9d72728fb94c5663c4cc51520bac6659d0b4a92a', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/UpEZLDbtiAmghXcZIIzdXfNnPN7hsdTNtw88v2I75J8.png?width=960&crop=smart&auto=webp&s=ccadd95b815b38f488a346a3b3185af7e2fb9ca2', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/UpEZLDbtiAmghXcZIIzdXfNnPN7hsdTNtw88v2I75J8.png?width=1080&crop=smart&auto=webp&s=20de891a65067b63b7c61804cd95a57915948d01', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/UpEZLDbtiAmghXcZIIzdXfNnPN7hsdTNtw88v2I75J8.png?auto=webp&s=c7fde360dc88b9555cdc4adfaa14ca4c6219e326', 'width': 1200}, 'variants': {}}]}
Multi-gpu setting and PCIE lain problem
1
https://preview.redd.it/…erformance loss?
2026-02-02T22:35:39
https://www.reddit.com/r/LocalLLaMA/comments/1qua3t8/multigpu_setting_and_pcie_lain_problem/
tony9959
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qua3t8
false
null
t3_1qua3t8
/r/LocalLLaMA/comments/1qua3t8/multigpu_setting_and_pcie_lain_problem/
false
false
https://a.thumbs.redditm…7PfNWwfSRPf0.jpg
1
null
I trained a LLM on Jefferey Epstein's emails
0
Downloaded a dataset of 3000 emails from Epstein and fine tuned Qwen 3 4b instruct 2507 on them Reason: I was bored and I find sending silly little system prompts stupid so I decided to actually fine tune a model I'm gonna sleep now but if you want I can ask it questions for you, I might upload the full model weights tomorrow. For now it's just gonna be a discord bot for me and my friends
2026-02-02T22:31:10
https://www.reddit.com/gallery/1qu9zia
Foxen--
reddit.com
1970-01-01T00:00:00
0
{}
1qu9zia
false
null
t3_1qu9zia
/r/LocalLLaMA/comments/1qu9zia/i_trained_a_llm_on_jefferey_epsteins_emails/
false
false
https://a.thumbs.redditm…UsAEkhWhlf_8.jpg
0
null
Best local LLM to train with my own knowledge and niche skills?
3
I work in tech and see that there are crazy costs to models like claude and they dont really know my niche skills when it comes to programming and solving tech issues. I got an unraid server with some decent hardware and want to train a model to learn from my behaviors and act like me but locally. What would be a good model to start off with and get to learn things?
2026-02-02T22:14:45
https://www.reddit.com/r/LocalLLaMA/comments/1qu9jja/best_local_llm_to_train_with_my_own_knowledge_and/
xAcex28
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qu9jja
false
null
t3_1qu9jja
/r/LocalLLaMA/comments/1qu9jja/best_local_llm_to_train_with_my_own_knowledge_and/
false
false
self
3
null
YunoAI: An adversarial system prompt to kill Sycophancy
0
I've been lurking here for years. We all know the problem: RLHF has lobotomized models into becoming sycophantic yes-men. They prioritize "politeness" over rigor. I spent the last year obsessively iterating on a system prompt configuration designed to do the opposite: Active Adversarial Sparring. The goal isn't to be a "helpful assistant". The goal is to: 1. Identify weak premises in your logic. 2. Attack them relentlessly. 3. Force you to clarify your thinking or admit defeat. Why share this now? I was previously using Claude Code to automate research on vector orthogonalization, attempting to adapt recent findings to newer architectures like Kimi2 and Qwen-3. That level of mechanic interpretability/tinkering got me a swift ban from Anthropic. Since then, I decided to stop poking at the weights and focus on the interaction layer. I pivoted to building YunoAI seriously—not to hack the model's internals, but to hack the conversation dynamics. I currently use it on top of Gemini 2.5/3.0 to force the kind of rigor I was originally looking for. It's raw. It's aggressive. It's not for everyone. But if you are tired of ChatGPT telling you "Great idea!" when you are about to make a mistake, give it a try. Looking for feedback on how this handles local models (Llama 3, Mistral). Let me know if it breaks them. https://preview.redd.it/25g4xsmgi5hg1.png?width=984&format=png&auto=webp&s=b9aa4e041ab71d448d48c4c54b060ba1a4cee7aa The "Too Good to be True" Benchmark (And why I need you) I'm attaching a run from SpiralBench where yunoai-v255 scores disturbingly high, effectively tying with gpt-oss-120b and beating o4-mini. ⚠️ HUGE DISCLAIMER: This was evaluated using gpt-5 as a judge (SpiralBench default), kimi k2 as "user" and yunoai as assitant model I am deeply skeptical of synthetic benchmarks. I know "LLM-as-a-judge" favors models that sound like the judge. This chart might be hallucinating competence. That is exactly why I am posting here. YunoAI: An adversarial system prompt to kill Sycophancy don't trust this chart. I trust human intuition and real-world edge cases. I need the r/LocalLLaMA community to tell me if this score is a fluke of the prompting strategy or if the reasoning capabilities are actually there. Break it. Test it against your hardest logic puzzles. Tell me if the graph is lying. Repo: [https://github.com/Xuno-io/yuno-md](https://github.com/Xuno-io/yuno-md)
2026-02-02T22:08:44
https://www.reddit.com/r/LocalLLaMA/comments/1qu9dpm/yunoai_an_adversarial_system_prompt_to_kill/
Ok_Condition4242
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qu9dpm
false
null
t3_1qu9dpm
/r/LocalLLaMA/comments/1qu9dpm/yunoai_an_adversarial_system_prompt_to_kill/
false
false
https://preview.redd.it/…a3822c4e3dbfd00d
0
null
Large categorized list of AI / LLM benchmarks & leaderboards
2
I compiled a large, categorized list of AI / LLM benchmarks and leaderboards. Reddit blocks long link lists in posts, so the full list is in the comments.
2026-02-02T21:53:04
https://www.reddit.com/r/LocalLLaMA/comments/1qu8yh0/large_categorized_list_of_ai_llm_benchmarks/
Individual-Hippo3043
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qu8yh0
false
null
t3_1qu8yh0
/r/LocalLLaMA/comments/1qu8yh0/large_categorized_list_of_ai_llm_benchmarks/
false
false
self
2
null
I've built a local twitter-like for bots - so you can have `moltbook` at home ;)
0
Check it at \`http://127.0.0.1:9999\`.... But seriously, it's a small after-hour project that allows local agents (only Ollama at the moment) to talk to each other on a microblog / social media site running on your pc. There is also a primitive web ui - so you can read their hallucinations ;) I've been running it on RTX 3050 - so you do not need much. (\`granite4:tiny-h\` seems to work well - tool calling is needed). [https://github.com/maciekglowka/bleater](https://github.com/maciekglowka/bleater) https://preview.redd.it/0fos7xidj5hg1.png?width=717&format=png&auto=webp&s=e1126f9ca04a966e6493dfa8738a3c6e9377606d
2026-02-02T21:48:13
https://www.reddit.com/r/LocalLLaMA/comments/1qu8tzr/ive_built_a_local_twitterlike_for_bots_so_you_can/
maciek_glowka
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qu8tzr
false
null
t3_1qu8tzr
/r/LocalLLaMA/comments/1qu8tzr/ive_built_a_local_twitterlike_for_bots_so_you_can/
false
false
self
0
null
Anyone else down the "data sovereignty" rabbit hole or am I going crazy?
56
it started with just wanting to run models locally so my stuff doesn't get scraped. Now I'm like 3 weeks deep reading about self-sovereign Identity, network state stuff and wondering if there's a way to actually prove your data isn't being touched vs just hoping it isn't. Local models help I guess.. but it still feels like we're just trusting that nothing's phoning home. Is there anything out there that gives you like actual cryptographic proof your queries aren't being logged? Or am I seriously overthinking this lol
2026-02-02T21:43:48
https://www.reddit.com/r/LocalLLaMA/comments/1qu8pqw/anyone_else_down_the_data_sovereignty_rabbit_hole/
itsnotKelsey
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qu8pqw
false
null
t3_1qu8pqw
/r/LocalLLaMA/comments/1qu8pqw/anyone_else_down_the_data_sovereignty_rabbit_hole/
false
false
self
56
null
Graphic boards farm at home
0
A friend of mine bought few powerful graphics boards to build ai farm at home. I wonder if it is possible to save money by running local home factory compare to the one you can rent? Is anyone here have experience with this?
2026-02-02T21:40:33
https://www.reddit.com/r/LocalLLaMA/comments/1qu8mkx/graphic_boards_farm_at_home/
gpo-work
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qu8mkx
false
null
t3_1qu8mkx
/r/LocalLLaMA/comments/1qu8mkx/graphic_boards_farm_at_home/
false
false
self
0
null
Comprehensive list of AI / LLM benchmarks & leaderboards
1
[removed]
2026-02-02T21:40:17
https://www.reddit.com/r/LocalLLaMA/comments/1qu8mbe/comprehensive_list_of_ai_llm_benchmarks/
Individual-Hippo3043
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qu8mbe
false
null
t3_1qu8mbe
/r/LocalLLaMA/comments/1qu8mbe/comprehensive_list_of_ai_llm_benchmarks/
false
false
self
1
null
Human documentation is legacy infrastructure. We built a compiler for agents.(for Moltbots)
0
Most documentation on the web is written for humans. HTML pages, navigation, prose, repetition. All interface artifacts. Agents don’t need any of that. When agents “learn from docs”, they’re reasoning over a rendering format, not the underlying technical truth. That’s why context breaks and hallucinations show up. Not a model problem. A substrate problem. At Brane, we’ve been working on agent memory and coordination. One conclusion kept repeating. The real bottleneck isn’t intelligence. It’s context and memory infrastructure. So we built Moltext. Moltext is a documentation compiler for agentic systems. Not a chat interface. Not a summarizer. Not RERT. It takes the legacy web and compiles it into deterministic, agent-native context. No interpretation. No hidden cognition. No vibes. Just raw documentation, preserved structure, stable artifacts agents can reason over repeatedly. We wrote a detailed breakdown of the problem, the design choices, and where this fits in the agent stack here: [https://gobrane.com/moltext/](https://gobrane.com/moltext/) Looking for feedback from people building long-running agents, local-first systems, or anyone hitting context brittleness in practice.
2026-02-02T21:28:28
https://www.reddit.com/r/LocalLLaMA/comments/1qu8aoc/human_documentation_is_legacy_infrastructure_we/
Uditakhourii
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qu8aoc
false
null
t3_1qu8aoc
/r/LocalLLaMA/comments/1qu8aoc/human_documentation_is_legacy_infrastructure_we/
false
false
self
0
null
Anyone else solving the AI hallucination problem with MCP + indexed docs?
0
Been frustrated with LLMs confidently making up stuff about documentation.. outdated methods, wrong syntax, things that don't exist. Copy-pasting docs into context works but hits limits fast. Started building around MCP to let the model search real indexed content instead of guessing. Point it at docs, Notion, GitHub, whatever... then the AI queries that instead of hallucinating. Made a short video showing how it works 👆 Curious what approaches others are using? RAG setups? Other MCP tools? Something else entirely?
2026-02-02T21:26:16
https://v.redd.it/qrdk97hmf5hg1
vildanbina
v.redd.it
1970-01-01T00:00:00
0
{}
1qu88mj
false
{'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/qrdk97hmf5hg1/DASHPlaylist.mpd?a=1772659596%2CNTM3YTk1OGQ1MjVhMTVkM2Y2NDQwNGRhOGM0ZDY0YjkyMjljMjEwMTRhYWFjNDQ4ZDRhMmEzNGE0ZWQ5NmRjZg%3D%3D&v=1&f=sd', 'duration': 150, 'fallback_url': 'https://v.redd.it/qrdk97hmf5hg1/CMAF_720.mp4?source=fallback', 'has_audio': True, 'height': 720, 'hls_url': 'https://v.redd.it/qrdk97hmf5hg1/HLSPlaylist.m3u8?a=1772659596%2CNWE1MmM0MmRmM2JkNGQzZDFlOGU3ZjczZGMyNDYyODZmNDUzNDIzMDRhMTI1ZWFjOGU2OWE0NGViMmJmNDNhZQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/qrdk97hmf5hg1/CMAF_96.mp4', 'transcoding_status': 'completed', 'width': 1280}}
t3_1qu88mj
/r/LocalLLaMA/comments/1qu88mj/anyone_else_solving_the_ai_hallucination_problem/
false
false
https://external-preview…907bf0546245bf3b
0
{'enabled': False, 'images': [{'id': 'amlnZXplaG1mNWhnMcBlKalYa3OMWIQ9XoCB3xOk_Vvv-0EkPXz2lc-I-IMX', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/amlnZXplaG1mNWhnMcBlKalYa3OMWIQ9XoCB3xOk_Vvv-0EkPXz2lc-I-IMX.png?width=108&crop=smart&format=pjpg&auto=webp&s=ca6bbabf2c537092210ba404adc9d805060937ed', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/amlnZXplaG1mNWhnMcBlKalYa3OMWIQ9XoCB3xOk_Vvv-0EkPXz2lc-I-IMX.png?width=216&crop=smart&format=pjpg&auto=webp&s=74c636e30e82e957c235bf2ed45666ce015b2aba', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/amlnZXplaG1mNWhnMcBlKalYa3OMWIQ9XoCB3xOk_Vvv-0EkPXz2lc-I-IMX.png?width=320&crop=smart&format=pjpg&auto=webp&s=9fb60395bf746a3bc84819a278ccc72dfc1c3179', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/amlnZXplaG1mNWhnMcBlKalYa3OMWIQ9XoCB3xOk_Vvv-0EkPXz2lc-I-IMX.png?width=640&crop=smart&format=pjpg&auto=webp&s=14bb03938951b677887e7a76a3a964f53d022f68', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/amlnZXplaG1mNWhnMcBlKalYa3OMWIQ9XoCB3xOk_Vvv-0EkPXz2lc-I-IMX.png?width=960&crop=smart&format=pjpg&auto=webp&s=96c52c98542d2ec2887b775e06b228036aa3082d', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/amlnZXplaG1mNWhnMcBlKalYa3OMWIQ9XoCB3xOk_Vvv-0EkPXz2lc-I-IMX.png?width=1080&crop=smart&format=pjpg&auto=webp&s=26bc4bd0e7cf55743c4761688d9349abf104cb7b', 'width': 1080}], 'source': {'height': 720, 'url': 'https://external-preview.redd.it/amlnZXplaG1mNWhnMcBlKalYa3OMWIQ9XoCB3xOk_Vvv-0EkPXz2lc-I-IMX.png?format=pjpg&auto=webp&s=4af49e8ec679f21d22cb63593358345e05d04726', 'width': 1280}, 'variants': {}}]}
Anyone else solving the AI hallucination problem with MCP + indexed docs?
0
Been frustrated with LLMs confidently making up stuff about documentation... outdated methods, wrong syntax, things that don't exist. Copy-pasting docs into context works but hits limits fast. Started building around MCP to let the model search real indexed content instead of guessing. Point it at docs, Notion, GitHub, whatever... then the AI queries that instead of hallucinating. Curious what approaches others are using? RAG setups? Different solutions? Made a quick video showing my approach if anyone's interested 👆
2026-02-02T21:25:41
https://v.redd.it/na3v17g2f5hg1
vildanbina
v.redd.it
1970-01-01T00:00:00
0
{}
1qu8819
false
{'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/na3v17g2f5hg1/DASHPlaylist.mpd?a=1772659560%2CZjYzNDIyN2U4ZjVhZGZkOGRjMDZlMzIyMjNlMThiNzA1NDEyNDFjOWExMTgzMjFmNDg1NDJjZTZjOTcyMmI2Mw%3D%3D&v=1&f=sd', 'duration': 150, 'fallback_url': 'https://v.redd.it/na3v17g2f5hg1/CMAF_720.mp4?source=fallback', 'has_audio': True, 'height': 720, 'hls_url': 'https://v.redd.it/na3v17g2f5hg1/HLSPlaylist.m3u8?a=1772659560%2CM2VjYjg4Zjc4YjQyYWVkYTVmYzFiYTdlZGQwM2ZiNzY3OWZjNGVlYzMxMzRhMzBkMDkyYWM0MzU4MWRjNzg3NQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/na3v17g2f5hg1/CMAF_96.mp4', 'transcoding_status': 'completed', 'width': 1280}}
t3_1qu8819
/r/LocalLLaMA/comments/1qu8819/anyone_else_solving_the_ai_hallucination_problem/
false
false
https://external-preview…95bf2ec381963b57
0
{'enabled': False, 'images': [{'id': 'cDd0NHFmZzJmNWhnMcBlKalYa3OMWIQ9XoCB3xOk_Vvv-0EkPXz2lc-I-IMX', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/cDd0NHFmZzJmNWhnMcBlKalYa3OMWIQ9XoCB3xOk_Vvv-0EkPXz2lc-I-IMX.png?width=108&crop=smart&format=pjpg&auto=webp&s=1c423842163d730d6c0d181b9556bd3860deb102', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/cDd0NHFmZzJmNWhnMcBlKalYa3OMWIQ9XoCB3xOk_Vvv-0EkPXz2lc-I-IMX.png?width=216&crop=smart&format=pjpg&auto=webp&s=466ca3f1d5338340a4ea13749d60fa85f183502b', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/cDd0NHFmZzJmNWhnMcBlKalYa3OMWIQ9XoCB3xOk_Vvv-0EkPXz2lc-I-IMX.png?width=320&crop=smart&format=pjpg&auto=webp&s=e22b73e5ab899d3a6658c86b228e254f8a968d47', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/cDd0NHFmZzJmNWhnMcBlKalYa3OMWIQ9XoCB3xOk_Vvv-0EkPXz2lc-I-IMX.png?width=640&crop=smart&format=pjpg&auto=webp&s=ef901e83784f1d6ba119e1f90cc3352bb220b71c', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/cDd0NHFmZzJmNWhnMcBlKalYa3OMWIQ9XoCB3xOk_Vvv-0EkPXz2lc-I-IMX.png?width=960&crop=smart&format=pjpg&auto=webp&s=dcf799efab1b36632b106f48c092f86aab64d240', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/cDd0NHFmZzJmNWhnMcBlKalYa3OMWIQ9XoCB3xOk_Vvv-0EkPXz2lc-I-IMX.png?width=1080&crop=smart&format=pjpg&auto=webp&s=c9902ed342e7dd8233cf88adfd543f23d889cf13', 'width': 1080}], 'source': {'height': 720, 'url': 'https://external-preview.redd.it/cDd0NHFmZzJmNWhnMcBlKalYa3OMWIQ9XoCB3xOk_Vvv-0EkPXz2lc-I-IMX.png?format=pjpg&auto=webp&s=2e08d36e09ce4d02e3cb86ce27093a19ef9a41ec', 'width': 1280}, 'variants': {}}]}
StepFun has just announced Step 3.5 Flash
13
Here's an overview of its benchmark performance across three key domains: Math/Reasoning, Code, and Agentic/Browser. https://preview.redd.it/utzuv4m6f5hg1.png?width=987&format=png&auto=webp&s=342158612d0e5ebb9df30ef519278ba282823f60
2026-02-02T21:23:35
https://www.reddit.com/r/LocalLLaMA/comments/1qu85yl/stepfun_has_just_announced_step_35_flash/
Ok_Presentation1577
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qu85yl
false
null
t3_1qu85yl
/r/LocalLLaMA/comments/1qu85yl/stepfun_has_just_announced_step_35_flash/
false
false
self
13
null
GLM releases OCR model
249
https://huggingface.co/zai-org/GLM-OCR Enjoy my friends, looks like a banger! GLM cooking hard! Seems like a 1.4B-ish model (0.9B vision, 0.5B language). Must be super fast.
2026-02-02T21:01:12
https://www.reddit.com/r/LocalLLaMA/comments/1qu7jqi/glm_releases_ocr_model/
Mr_Moonsilver
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qu7jqi
false
null
t3_1qu7jqi
/r/LocalLLaMA/comments/1qu7jqi/glm_releases_ocr_model/
false
false
self
249
{'enabled': False, 'images': [{'id': 'ln9l9VYiqmiIpjy0J_jvzMtD5AaeFLSsBaVe9XdCQEk', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/ln9l9VYiqmiIpjy0J_jvzMtD5AaeFLSsBaVe9XdCQEk.png?width=108&crop=smart&auto=webp&s=0924b1dd7d1a7f9163df2a910f493d0ad686359c', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/ln9l9VYiqmiIpjy0J_jvzMtD5AaeFLSsBaVe9XdCQEk.png?width=216&crop=smart&auto=webp&s=0ad5d9ef3c313951ecad40bfc563fff1877767dd', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/ln9l9VYiqmiIpjy0J_jvzMtD5AaeFLSsBaVe9XdCQEk.png?width=320&crop=smart&auto=webp&s=590559b546964aac094ee94017452e7d8f4e9db8', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/ln9l9VYiqmiIpjy0J_jvzMtD5AaeFLSsBaVe9XdCQEk.png?width=640&crop=smart&auto=webp&s=d4dc223422c5611f0b16cc93726757f1c90444b4', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/ln9l9VYiqmiIpjy0J_jvzMtD5AaeFLSsBaVe9XdCQEk.png?width=960&crop=smart&auto=webp&s=a65a81394967b461f760db7f0c6c071551f61fe1', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/ln9l9VYiqmiIpjy0J_jvzMtD5AaeFLSsBaVe9XdCQEk.png?width=1080&crop=smart&auto=webp&s=5f4ae5a40a09278282836830062bc35bf7ecfb35', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/ln9l9VYiqmiIpjy0J_jvzMtD5AaeFLSsBaVe9XdCQEk.png?auto=webp&s=538141f814dbea95419e0cb2ab23ea6739087df6', 'width': 1200}, 'variants': {}}]}
Public Ollama Node Available - gemma3:4b - Canada
1
[removed]
2026-02-02T20:56:20
https://www.reddit.com/r/LocalLLaMA/comments/1qu7esg/public_ollama_node_available_gemma34b_canada/
PuzzleheadedTotal286
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qu7esg
false
null
t3_1qu7esg
/r/LocalLLaMA/comments/1qu7esg/public_ollama_node_available_gemma34b_canada/
false
false
self
1
null
Do you have an opinion on memUbot versus OpenClaw?
0
2026-02-02T20:55:31
https://memu.bot/compare/openclaw
East_Consequence_494
memu.bot
1970-01-01T00:00:00
0
{}
1qu7dyo
false
null
t3_1qu7dyo
/r/LocalLLaMA/comments/1qu7dyo/do_you_have_an_opinion_on_memubot_versus_openclaw/
false
false
default
0
null
[Free Compute] Azure A100 80GB Instance Available for Use (Expiring Feb 9th)
2
I have available compute on an Azure **Standard NC24ads A100 v4** instance (1x A100 80GB, 24 vCPUs, 220 GiB RAM) that I’d like to offer to the community. My credits expire on **February 9th**, so the machine is available for any intensive fine-tuning or training jobs until then. If you have a project that could use this power, please reach out!
2026-02-02T20:52:49
https://www.reddit.com/r/LocalLLaMA/comments/1qu7ba2/free_compute_azure_a100_80gb_instance_available/
Mental_Interview_534
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qu7ba2
false
null
t3_1qu7ba2
/r/LocalLLaMA/comments/1qu7ba2/free_compute_azure_a100_80gb_instance_available/
false
false
self
2
null
Arguably, the best AI code review MCP server (with Serena integration)
0
We’ve officially open-sourced [Lad](https://github.com/Shelpuk-AI-Technology-Consulting/lad_mcp_server) – the Code Review & System Design MCP server we built internally to quality-check our coding agents. https://preview.redd.it/tc2knsxz25hg1.png?width=1638&format=png&auto=webp&s=8c9d7b2f89e6e026860966f63582a836ec350249 Why build another code reviewer? Because "Agent Tunnel Vision" is real. LLMs generate text token by token. Once an agent makes a bad design choice early in the code, every subsequent token tries to justify that mistake to maintain cohesion. The agent effectively gaslights itself. To catch this, you need a second pair of eyes - a fresh context. But existing solutions (like [PAL](https://github.com/BeehiveInnovations/pal-mcp-server)) were failing us. They required manual config for every new model, had 32k context window assumptions for default (not configured) models, and limited file input to \~6k tokens. Effectively, it was unusable for complex design and code review tasks. But the biggest problem with AI reviewing AI: Lack of Context A human reviewer doesn't just check for syntax errors. They check against requirements, team constraints, and prior architectural decisions. Standard AI reviewers are "amnesic" –they only see the diff, not the history. Lad does things differently. * Lad fetches the OpenRouter model information via the OpenRouter MCP, including context window size and tool calling applicability. No need to configure anything: as soon as the LLM is available at OpenRouter, Lad can use it. * Lad supports one-reviewer or two-reviewer mode. By default, Lad uses both `moonshotai/kimi-k2-thinking` and `z-ai/glm-4.7` as reviewers. You can change any of them or switch the secondary reviewer off via the environmental variable configuration. * Lad provides two tools: `system_design_review` and `code_review`, plugging into both planning (system design) and implementation (code) workflow stages. * Lad supports both text and file references so that your coding agent is not required to regenerate the code or system design for review – referencing a file would do. Lad's key feature: Project-wide codebase index and memory awareness. Lad integrates with [Serena](https://github.com/oraios/serena), a “headless IDE” for coding agents. Serena allows your agent to use the project index token-efficiently as well as store and retrieve “memories” – records on important information that survive between the coding sessions. You can instruct your coding agent to record requirements, principal system design decisions, debug findings, and other useful information to Serena so that they can be retrieved and used later.   Moreover, you can share Serena memory bank across multiple teams such that the backend team’s AI coding agent can be aware of the frontend or DevOps team’s coding agents’ memories and vice versa. (Disclaimer: We are not affiliated with Serena in any way) For us, this closed the loop. It prevents our coding agents from hallucinating valid-looking but architecturally or conceptually wrong code. It works with Claude Code, Cursor, Antigravity, and any other MCP-supported agent. P.S. If you give it a try or like the idea, please drop us a star on GitHub - it’s always huge motivation for us to keep improving it! ⭐️ P.P.S. You can also check out our [Kindly Web Search MCP](https://github.com/Shelpuk-AI-Technology-Consulting/kindly-web-search-mcp-server) – it pairs perfectly with Lad for a full research-and-review workflow.
2026-02-02T20:40:21
https://www.reddit.com/r/LocalLLaMA/comments/1qu6ylc/arguably_the_best_ai_code_review_mcp_server_with/
Quirky_Category5725
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qu6ylc
false
null
t3_1qu6ylc
/r/LocalLLaMA/comments/1qu6ylc/arguably_the_best_ai_code_review_mcp_server_with/
false
false
self
0
null
NTTuner - Local Fine-Tuning Made Easy (Unsloth + GUI).
17
NTTuner: A fine-tuning framework that implements LoRA/QLoRA and integrates Unsloth for 2-5x faster training · NTCompanion: A GUI wrapper that lets you prep data, configure training, and test models without touching code Why I think they're worth checking out: ✅ Actually works on single-GPU setups (tested on RTX 4090/3090) ✅ Integrates Unsloth - getting those memory savings and speed boosts without manual setup ✅ GUI makes dataset preparation much less painful (converts CSV/JSON to proper chat formats) ✅ Active development - noosed is responsive to issues and keeps up with new techniques ✅ Windows-friendly (always a plus for local ML tools) GitHub links: · NTTuner: https://github.com/noosed/NTTuner · NTCompanion: https://github.com/noosed/NTCompanion My experience: Just fine-tuned a Mistral 7B model on some custom Q&A data. The GUI made formatting my dataset trivial, and training with Unsloth integration was noticeably faster than my previous Axolotl setups. Went from \~12 hours estimated to \~4 hours for the same job. Who this is for: · If you want to fine-tune locally but find Axolotl/Ollama-training/etc. too command-line heavy · If you're tired of manually formatting JSONL files for training · If you want Unsloth benefits without deep technical setup · If you're on Windows and want a smooth fine-tuning experience
2026-02-02T20:38:04
https://www.reddit.com/r/LocalLLaMA/comments/1qu6wcc/nttuner_local_finetuning_made_easy_unsloth_gui/
Few-Pie5592
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qu6wcc
false
null
t3_1qu6wcc
/r/LocalLLaMA/comments/1qu6wcc/nttuner_local_finetuning_made_easy_unsloth_gui/
false
false
self
17
{'enabled': False, 'images': [{'id': '7mUOi0Mpidy5ExxPmwJpYFRUSNi5ZbrHnZ04aut0ehA', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/7mUOi0Mpidy5ExxPmwJpYFRUSNi5ZbrHnZ04aut0ehA.png?width=108&crop=smart&auto=webp&s=4e1388ab6a1de8111fb8e1ab26c6e004512bf8dc', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/7mUOi0Mpidy5ExxPmwJpYFRUSNi5ZbrHnZ04aut0ehA.png?width=216&crop=smart&auto=webp&s=e5159640ee9cbc98be4a63163ed6c4736afeed60', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/7mUOi0Mpidy5ExxPmwJpYFRUSNi5ZbrHnZ04aut0ehA.png?width=320&crop=smart&auto=webp&s=a0b06d0e38eab3ce24ba1aca8cb8646b54934a7c', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/7mUOi0Mpidy5ExxPmwJpYFRUSNi5ZbrHnZ04aut0ehA.png?width=640&crop=smart&auto=webp&s=bbf0d884eb2ce8edc633ea4568aec1a105f3c25a', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/7mUOi0Mpidy5ExxPmwJpYFRUSNi5ZbrHnZ04aut0ehA.png?width=960&crop=smart&auto=webp&s=9b35ca2513a0806d88dbeabc3da8960cebaf6e67', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/7mUOi0Mpidy5ExxPmwJpYFRUSNi5ZbrHnZ04aut0ehA.png?width=1080&crop=smart&auto=webp&s=80d646c67599cfd81b3241e2faed7ec4ed6b5a68', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/7mUOi0Mpidy5ExxPmwJpYFRUSNi5ZbrHnZ04aut0ehA.png?auto=webp&s=e9e774e7cb6a3d5c1716b8436ffa629956d8c255', 'width': 1200}, 'variants': {}}]}
Training on watermarked videos?
0
I want to train an AI to generate videos of old 1980s China Central TV news segments and practically every bit of archived footage online is watermarked. Is there a way to train on these watermarked videos and generate new footage that doesn't have any watermarks aside from the ones from the original broadcast (like the CCTV logo and the time displayed on the top right corner)?
2026-02-02T20:28:40
https://www.reddit.com/r/LocalLLaMA/comments/1qu6mwy/training_on_watermarked_videos/
IronLover64
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qu6mwy
false
null
t3_1qu6mwy
/r/LocalLLaMA/comments/1qu6mwy/training_on_watermarked_videos/
false
false
self
0
{'enabled': False, 'images': [{'id': '3kIKisW9Om7fO0UQgBoEzh9ImaJJhmPef5TTrPwDbOo', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/3kIKisW9Om7fO0UQgBoEzh9ImaJJhmPef5TTrPwDbOo.jpeg?width=108&crop=smart&auto=webp&s=1efeaa87e8a53ea097afb0d79363980d29d3845e', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/3kIKisW9Om7fO0UQgBoEzh9ImaJJhmPef5TTrPwDbOo.jpeg?width=216&crop=smart&auto=webp&s=1f5fc403636033baa00282e568ce7e0c7d037e37', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/3kIKisW9Om7fO0UQgBoEzh9ImaJJhmPef5TTrPwDbOo.jpeg?width=320&crop=smart&auto=webp&s=bd99579e259534b06e6dd50bda4a1b05bd25419e', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/3kIKisW9Om7fO0UQgBoEzh9ImaJJhmPef5TTrPwDbOo.jpeg?auto=webp&s=6cca292192c880e208dd3dbd95c45707008c330c', 'width': 480}, 'variants': {}}]}
Using ClawRAG as external knowledge base – Feedback on MCP integration wanted
0
I've been running OpenClaw for my home server automation via WhatsApp (works great!) but kept hitting a wall: the agent couldn't reference my local documents Built ClawRAG as a bridge – it exposes document search via MCP so OpenClaw can call it as a tool. Now when I ask "What did my lease say about maintenance?",the bot queries my local ChromaDB and cites the exact paragraph Why MCP worked for this I chose MCP because it provides structured schemas that LLMs understand natively. The MCP server exposes query\_knowledge as a tool, allowing the agent to decide exactly when to pull from the knowledge base vs. when to use its built-in memory. It prevents "tool-drift" and ensures type-safe responses One issue I'm wrestling with The citation preservation over WhatsApp round-trips is fragile Currently passing chunk IDs through the MCP tool result, but formatting gets tricky with long quotes Would love maintainer/community thoughts: Is MCP the recommended path for external knowledge bases long-term? Or would a native plugin architecture (shared memory) be better for low-latency retrieval? [https://github.com/2dogsandanerd/ClawRag](https://github.com/2dogsandanerd/ClawRag) Working example with docker-compose included
2026-02-02T20:28:27
https://www.reddit.com/r/LocalLLaMA/comments/1qu6mom/using_clawrag_as_external_knowledge_base_feedback/
ChapterEquivalent188
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qu6mom
false
null
t3_1qu6mom
/r/LocalLLaMA/comments/1qu6mom/using_clawrag_as_external_knowledge_base_feedback/
false
false
self
0
{'enabled': False, 'images': [{'id': 'GdH33ai-IEvuliuFRxa0xNhpmREHM2TdSTAFxDeMbBQ', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/GdH33ai-IEvuliuFRxa0xNhpmREHM2TdSTAFxDeMbBQ.png?width=108&crop=smart&auto=webp&s=d00299f7334aae0e41f4bfc43d2e87cc8d7c28b9', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/GdH33ai-IEvuliuFRxa0xNhpmREHM2TdSTAFxDeMbBQ.png?width=216&crop=smart&auto=webp&s=1d6b222c21697a41895682389dbddf3bf67c288a', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/GdH33ai-IEvuliuFRxa0xNhpmREHM2TdSTAFxDeMbBQ.png?width=320&crop=smart&auto=webp&s=4ca734fefbdfa88d31ec26ae1e50c9703e61c958', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/GdH33ai-IEvuliuFRxa0xNhpmREHM2TdSTAFxDeMbBQ.png?width=640&crop=smart&auto=webp&s=3f37a6ec9efa2dfddbf4d407c7ac1146e678c603', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/GdH33ai-IEvuliuFRxa0xNhpmREHM2TdSTAFxDeMbBQ.png?width=960&crop=smart&auto=webp&s=d58654a526d3945cab8f42d9eadf071c6fc5504a', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/GdH33ai-IEvuliuFRxa0xNhpmREHM2TdSTAFxDeMbBQ.png?width=1080&crop=smart&auto=webp&s=2465fe9c747c2f90b0aa4679a517e7eca9c96e2e', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/GdH33ai-IEvuliuFRxa0xNhpmREHM2TdSTAFxDeMbBQ.png?auto=webp&s=e73931efa92d4c4cc930c90168221151a0229f62', 'width': 1200}, 'variants': {}}]}
I made a proxy to save your tokens for distillation training
17
before I release it I'm thinking that I should give people the ability to share their tokens. I am a little worried that even with opt in it could be a security risk if people don't understand what they're doing, but if even a few dozens of us do share tokens it could lead to some very valuable data for distillation. thoughts?
2026-02-02T20:19:49
https://i.redd.it/0bh7eaqj35hg1.png
FaustAg
i.redd.it
1970-01-01T00:00:00
0
{}
1qu6e7w
false
null
t3_1qu6e7w
/r/LocalLLaMA/comments/1qu6e7w/i_made_a_proxy_to_save_your_tokens_for/
false
false
default
17
{'enabled': True, 'images': [{'id': '0bh7eaqj35hg1', 'resolutions': [{'height': 57, 'url': 'https://preview.redd.it/0bh7eaqj35hg1.png?width=108&crop=smart&auto=webp&s=24724a6e221244b27247cdeaf24bfb9ae0f8e67d', 'width': 108}, {'height': 114, 'url': 'https://preview.redd.it/0bh7eaqj35hg1.png?width=216&crop=smart&auto=webp&s=6949084742cb3c7339527a4bc4157230d82507c1', 'width': 216}, {'height': 170, 'url': 'https://preview.redd.it/0bh7eaqj35hg1.png?width=320&crop=smart&auto=webp&s=91bae9da34e44a675d67a04c5a58e3d158155cb6', 'width': 320}, {'height': 340, 'url': 'https://preview.redd.it/0bh7eaqj35hg1.png?width=640&crop=smart&auto=webp&s=5c7660e1de353b5587de48d62690c0c2c55354cc', 'width': 640}, {'height': 510, 'url': 'https://preview.redd.it/0bh7eaqj35hg1.png?width=960&crop=smart&auto=webp&s=342783d0709ccc2e56a618ac6bf7bdfbaac596b4', 'width': 960}], 'source': {'height': 544, 'url': 'https://preview.redd.it/0bh7eaqj35hg1.png?auto=webp&s=eeb91f33547cbe5365d8a55a835c839eec39351d', 'width': 1022}, 'variants': {}}]}
Trying a different way to structure agent execution
1
I got tired of agent frameworks hiding execution. This is a small runtime where you define exactly how tools, models, and state behave.
2026-02-02T20:07:14
https://github.com/creact-labs/creact
Final-Shirt-8410
github.com
1970-01-01T00:00:00
0
{}
1qu61dh
false
null
t3_1qu61dh
/r/LocalLLaMA/comments/1qu61dh/trying_a_different_way_to_structure_agent/
false
false
https://external-preview…2f976d0f9d686712
1
{'enabled': False, 'images': [{'id': 'AA-Ubf1kf2MeOJhu1qbOlYQWrAOaFjkbBaMrNt4UnIY', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/AA-Ubf1kf2MeOJhu1qbOlYQWrAOaFjkbBaMrNt4UnIY.png?width=108&crop=smart&auto=webp&s=e2365559cd5b842fb61aea8953d1b9713feb0b32', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/AA-Ubf1kf2MeOJhu1qbOlYQWrAOaFjkbBaMrNt4UnIY.png?width=216&crop=smart&auto=webp&s=40ecc46d26bdd4a474adb1661abede489ed84044', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/AA-Ubf1kf2MeOJhu1qbOlYQWrAOaFjkbBaMrNt4UnIY.png?width=320&crop=smart&auto=webp&s=9c815dc7c22ce40654d15c3921e13648bc045c61', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/AA-Ubf1kf2MeOJhu1qbOlYQWrAOaFjkbBaMrNt4UnIY.png?width=640&crop=smart&auto=webp&s=e3672d4c327b7ddcda532bde38e12deabe2ed73e', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/AA-Ubf1kf2MeOJhu1qbOlYQWrAOaFjkbBaMrNt4UnIY.png?width=960&crop=smart&auto=webp&s=93f702f022a5424fd3d777d2b86f6aa96ad1a50f', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/AA-Ubf1kf2MeOJhu1qbOlYQWrAOaFjkbBaMrNt4UnIY.png?width=1080&crop=smart&auto=webp&s=24edaa6db16a2f24547476de4117b95badfe678f', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/AA-Ubf1kf2MeOJhu1qbOlYQWrAOaFjkbBaMrNt4UnIY.png?auto=webp&s=6a8d190e0efa5dcb755806f31dad5287e4016512', 'width': 1200}, 'variants': {}}]}
How it feels deploying an OpenClaw agent
0
2026-02-02T20:02:02
https://i.redd.it/v5dfvogk05hg1.jpeg
spobin
i.redd.it
1970-01-01T00:00:00
0
{}
1qu5w1n
false
null
t3_1qu5w1n
/r/LocalLLaMA/comments/1qu5w1n/how_it_feels_deploying_an_openclaw_agent/
false
false
default
0
{'enabled': True, 'images': [{'id': 'v5dfvogk05hg1', 'resolutions': [{'height': 58, 'url': 'https://preview.redd.it/v5dfvogk05hg1.jpeg?width=108&crop=smart&auto=webp&s=0504f72a4cc26bc439af52814fb9aede2d4de516', 'width': 108}, {'height': 117, 'url': 'https://preview.redd.it/v5dfvogk05hg1.jpeg?width=216&crop=smart&auto=webp&s=212987dc28b3c0b5d6715ba8dde03ccc5dad150c', 'width': 216}, {'height': 174, 'url': 'https://preview.redd.it/v5dfvogk05hg1.jpeg?width=320&crop=smart&auto=webp&s=6d74e181aaa134eb7d015a720c392aebb7761c5e', 'width': 320}, {'height': 349, 'url': 'https://preview.redd.it/v5dfvogk05hg1.jpeg?width=640&crop=smart&auto=webp&s=f8c416c6fe9e7607dfd0e59e5614617c68319863', 'width': 640}, {'height': 523, 'url': 'https://preview.redd.it/v5dfvogk05hg1.jpeg?width=960&crop=smart&auto=webp&s=f21b933b1ccdd84bb5c1092f8f716a7a5573690f', 'width': 960}, {'height': 589, 'url': 'https://preview.redd.it/v5dfvogk05hg1.jpeg?width=1080&crop=smart&auto=webp&s=071c13f3dad26cee4586118113e96ba2d5ae19ae', 'width': 1080}], 'source': {'height': 768, 'url': 'https://preview.redd.it/v5dfvogk05hg1.jpeg?auto=webp&s=acdbcd37bb40951a651dbd6b315eb4a878f88516', 'width': 1408}, 'variants': {}}]}
Experiment: Fine-tuning GPT-2 on a smartphone CPU - observations on loss vs quality, dataset ordering effects
11
**Body:** I've been running an experiment fine-tuning GPT-2 on a Redmi 12 (Snapdragon 685, CPU only) using Termux. No cloud, no GPU. Wanted to share some observations that might be interesting to this community. ## Setup - Base: GPT-2 124M - Hardware: Snapdragon 685 CPU (no GPU) - Environment: Termux - Progress: ~2,000 / 37,500 steps (5.3%) - Training time: ~50 hours - Speed: ~86 sec/step ## Interesting findings **1. Loss is unreliable with heterogeneous data** Checkpoint 2700 had the lowest loss (1.62) but scored 12% worse in manual evaluation than checkpoint 2000 (loss 1.94). When your training data varies in quality across domains, lower loss can mean the model is just memorizing noise better. Has anyone else observed this pattern? Curious how others handle quality evaluation beyond loss. **2. Dataset ordering has strong effects** I used an alphabetically ordered code corpus. Result: Agda (early in alphabet) scores 55/100, Python (late) scores 8/100 at the same checkpoint. Obvious in hindsight, but the magnitude surprised me. **3. Quality is non-monotonic** Tested checkpoints 1400 through 2700. Best overall was 2000, not the latest. Later checkpoints showed signs of overfitting on lower-quality data sections. **4. Mobile training is viable but slow** At 86 sec/step, completing 37,500 steps takes ~37 days continuous. Thermal throttling was manageable without device modifications. ## Current results | Language | Score |-----|----- | Agda | 55/100 | C | 20/100 | Assembly | 15/100 | Python | 8/100 Average improved 146% between checkpoints 1400 and 2000. ## Sample output (checkpoint 2000) Prompt: `module Main where` ```plaintext module Main where open import Function open import Data.Nat open import Data.Unit open import Data.Nat.Properties ``` Correct Agda structure with real imports. ## Questions for the community 1. For those fine-tuning on code: how do you handle multi-language datasets? Interleaving vs sequential? 2. Any recommendations for automated code quality evaluation beyond loss? Currently using manual scoring which doesn't scale. 3. Has anyone experimented with training on ARM devices? Curious about others' experiences with mobile/edge training. ## Limitations - Single run, no replication - Manual evaluation - Fine-tuning only (from-scratch planned for v1.0) - Early stage (5.3% complete) If anyone wants to look at the outputs or try it: [weights on HF](https://huggingface.co/OpceanAI/Yuuki-best), Apache 2.0. Paper documenting methodology in progress. Mainly posting to share the findings and hear if others have seen similar patterns with loss/quality divergence.
2026-02-02T19:46:26
https://www.reddit.com/r/LocalLLaMA/comments/1qu5gf9/experiment_finetuning_gpt2_on_a_smartphone_cpu/
agua_omg
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qu5gf9
false
null
t3_1qu5gf9
/r/LocalLLaMA/comments/1qu5gf9/experiment_finetuning_gpt2_on_a_smartphone_cpu/
false
false
self
11
{'enabled': False, 'images': [{'id': 'sh0T-P8uz_tYAhc1751nfjzg09DyFu_7i5ZBlRb9LtQ', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/sh0T-P8uz_tYAhc1751nfjzg09DyFu_7i5ZBlRb9LtQ.png?width=108&crop=smart&auto=webp&s=cbaed859779fc61bb88ec800377dcf472e5f8192', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/sh0T-P8uz_tYAhc1751nfjzg09DyFu_7i5ZBlRb9LtQ.png?width=216&crop=smart&auto=webp&s=d76b0719542d81d1f3f9af4025e7fecd2dafb434', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/sh0T-P8uz_tYAhc1751nfjzg09DyFu_7i5ZBlRb9LtQ.png?width=320&crop=smart&auto=webp&s=505966b337fd761985241dfd4364bf163976c014', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/sh0T-P8uz_tYAhc1751nfjzg09DyFu_7i5ZBlRb9LtQ.png?width=640&crop=smart&auto=webp&s=1b54bfc763a7167d07e4dc8c2d5d0a58a0880bb9', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/sh0T-P8uz_tYAhc1751nfjzg09DyFu_7i5ZBlRb9LtQ.png?width=960&crop=smart&auto=webp&s=69e6c0f46c08ed4dfc2e748f8ee9c15175892f34', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/sh0T-P8uz_tYAhc1751nfjzg09DyFu_7i5ZBlRb9LtQ.png?width=1080&crop=smart&auto=webp&s=feecfa7a02775b4d9b10e4a85b2d28c4a3499b8d', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/sh0T-P8uz_tYAhc1751nfjzg09DyFu_7i5ZBlRb9LtQ.png?auto=webp&s=8ab9a4f74824e083c8fb01d8e7c18b70c9401890', 'width': 1200}, 'variants': {}}]}
Suggestions for better TTS, I have Qwen3 TTS at the moment but I would like to sample the voice and then give it prompt for it to make it more emotional.
1
Same as the title. I have looked around on my own, and, there seems to be workarounds but I don't really understand them completely. I am open to suggestions for other TTS models if they are better suited for my needs. I like Qwen3 TTS but it appears it hasn't matured enough yet as it is relatively new.
2026-02-02T19:38:09
https://www.reddit.com/r/LocalLLaMA/comments/1qu582x/suggestions_for_better_tts_i_have_qwen3_tts_at/
RoutineEchidna7835
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qu582x
false
null
t3_1qu582x
/r/LocalLLaMA/comments/1qu582x/suggestions_for_better_tts_i_have_qwen3_tts_at/
false
false
self
1
null
Seriously !How the actual production pipeline works with different pdfs after extraction of data's? Is real problem is extraction or extraction of information from the chucks?
0
I have working with many different domain and regulations based pdfs, but for build the RAG or finetuning we need to extract the data from the pdfs but how? is my biggest .Like can extract using the docling or pdf to markdown files but after that path. How knowledge graph will get built! is fixed schema are what ? Like different regulations or domains have different schemas or single extractions to model. My real problem is what happens after the extraction.
2026-02-02T19:20:06
https://www.reddit.com/r/LocalLLaMA/comments/1qu4pot/seriously_how_the_actual_production_pipeline/
Disastrous_Talk7604
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qu4pot
false
null
t3_1qu4pot
/r/LocalLLaMA/comments/1qu4pot/seriously_how_the_actual_production_pipeline/
false
false
self
0
null
Why NVIDIA PersonaPlex sucks??
0
Hey guys, tried this one right now and already got back pain while installing. *Nvidia PersonaPlex* sounds cool but in reality it's like solution for some call-support idk, but why people from youtube/twitter or whatever talking about real conversation between user-ai. am I dumb and didn't get point of hype? thank you for attention, and sorry for not good English
2026-02-02T19:14:26
https://www.reddit.com/r/LocalLLaMA/comments/1qu4jui/why_nvidia_personaplex_sucks/
Fit-Horse-3100
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qu4jui
false
null
t3_1qu4jui
/r/LocalLLaMA/comments/1qu4jui/why_nvidia_personaplex_sucks/
false
false
self
0
null
Experimenting and then what?
1
I keep seeing everyone here “experimenting with local AI”. New models, new quants, benchmarks, screenshots, etc. Cool and all, but real question: does any of this actually turn into something usefull? I’m trying to build a local LLM + RAG thing that does something boring but real. Feed it PDFs (contracts, forms, invoices), extract data, then check it against rules / legislation. All local, no cloud stuff and mostly vibecoding (yes, vibecoding calm your tits) And honestly… this is way harder then people make it look. PDFs are garbage. Tables are pure pain. OCR works “ok-ish” until one tiny error sneaks in and suddenly the model is confidently talking nonsense. RAG is never 100% wrong, but also never 100% right. And “almost correct” is still wrong in real life. Running this on 24GB VRAM + 96GB RAM so compute isn’t the issue here. Reliability is, I think Every time I fix something, something else breaks. Edge cases everywhere. Feels less like AI and more like duct taping pipelines together at 2am. So yeah, curious: are people here actually building tools they use day to day, or is it mostly just experiments and benchmarks? If you did get something solid working: what part almost made you quit? Because right now it feels like everyone is winning except me… and that just doesn’t add up 😅
2026-02-02T19:12:23
https://www.reddit.com/r/LocalLLaMA/comments/1qu4hr9/experimenting_and_then_what/
Mangostickyrice1999
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qu4hr9
false
null
t3_1qu4hr9
/r/LocalLLaMA/comments/1qu4hr9/experimenting_and_then_what/
false
false
self
1
null
I replaced Claude Code’s entire backend with free Alternatives
0
I have been working on a side-project which replaces the following things in the Claude ecosystem with free alternatives: \- Replaces Anthropic models with NVIDIA-NIM models: It acts as middleware between Claude-Code and NVIDIA-NIM allowing unlimited usage upto 40 RPM with a free NVIDIA-NIM api-key. \- Replaces the Claude mobile app with telegram: It allows the user to send messages to a local server via telegram that spin up a CLI instance and do a task. Replies resume a conversation and new messages create a new instance. You can concurrently use multiple CLI sessions and chats. It has features that distinguish it from similar proxies: \- The interleaved thinking tokens generated between tool calls are preserved allowing reasoning models like GLM 4.7 and kimi-k2.5 to take full advantage of thinking from previous turns. \- Fast prefix detection stops the CLI from sending bash command prefix classification requests to the LLM making it feel blazing fast. I have made the code modular so that adding other providers or messaging apps is easy.
2026-02-02T19:10:51
https://github.com/Alishahryar1/cc-nim
LastNoobLeft
github.com
1970-01-01T00:00:00
0
{}
1qu4g3q
false
null
t3_1qu4g3q
/r/LocalLLaMA/comments/1qu4g3q/i_replaced_claude_codes_entire_backend_with_free/
false
false
default
0
null
Testing HyperCycle decentralized AI - gemma3:4b node available for testing
1
[removed]
2026-02-02T19:08:30
https://www.reddit.com/r/LocalLLaMA/comments/1qu4dq3/testing_hypercycle_decentralized_ai_gemma34b_node/
PuzzleheadedTotal286
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qu4dq3
false
null
t3_1qu4dq3
/r/LocalLLaMA/comments/1qu4dq3/testing_hypercycle_decentralized_ai_gemma34b_node/
false
false
self
1
null
kv cache translated to gpu flops savings
1
We know kv-cache is important, saves cost and latency, but I haven't seen any specifics of how many gpu flops are saved by a kv-cache hit. Does anyone know? For example for a 5000token query with 100 token output and 10B parameter model, what is the ration of gpu flops used for inferencing a query with 0% cache and a query where 50% of the tokens have k and v cached from a previous query.
2026-02-02T19:03:48
https://www.reddit.com/r/LocalLLaMA/comments/1qu48tt/kv_cache_translated_to_gpu_flops_savings/
DismalHold1
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qu48tt
false
null
t3_1qu48tt
/r/LocalLLaMA/comments/1qu48tt/kv_cache_translated_to_gpu_flops_savings/
false
false
self
1
null
Can your model beat this Motherload clone?
21
I recreated the classic *Motherload* Flash game so it can be played by an LLM. The goal is to mine a specific ore while managing fuel, earning money, buying upgrades, and so on. Of the models I’ve tested, only Gemini Flash has beaten it—and that happened just once. Give it a try! [https://github.com/JosephCurwin/motherload-agent](https://github.com/JosephCurwin/motherload-agent)
2026-02-02T18:48:40
https://v.redd.it/vwp8rn9cf4hg1
JosephCurvin
v.redd.it
1970-01-01T00:00:00
0
{}
1qu3sz7
false
{'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/vwp8rn9cf4hg1/DASHPlaylist.mpd?a=1772650135%2CZGE1ODJkNzdmYzNmN2Y5ZWY1YjU1MzE4NmYwMGU3NTJkNzNjMGQwNzljOTI1YjM3MDZkYjczNWRmMDg0ZGFmMQ%3D%3D&v=1&f=sd', 'duration': 21, 'fallback_url': 'https://v.redd.it/vwp8rn9cf4hg1/CMAF_720.mp4?source=fallback', 'has_audio': True, 'height': 720, 'hls_url': 'https://v.redd.it/vwp8rn9cf4hg1/HLSPlaylist.m3u8?a=1772650135%2CYjU5YzU1NWYzN2MxNGNkY2NmMzU5NTRjYmM5YmIzMWYwMTAyZjQ1MDY2ZTg4YjcwZGUzOGNmYWM0MDI4OTgxYg%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/vwp8rn9cf4hg1/CMAF_96.mp4', 'transcoding_status': 'completed', 'width': 1280}}
t3_1qu3sz7
/r/LocalLLaMA/comments/1qu3sz7/can_your_model_beat_this_motherload_clone/
false
false
https://external-preview…bbdbb70c78b49a26
21
{'enabled': False, 'images': [{'id': 'OG10Nmp3OWNmNGhnMS4ma4XKyQkWAloPUH0A5TEFsSJAEkk1TA9wcLrLDXE4', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/OG10Nmp3OWNmNGhnMS4ma4XKyQkWAloPUH0A5TEFsSJAEkk1TA9wcLrLDXE4.png?width=108&crop=smart&format=pjpg&auto=webp&s=b3cf20a80e5fc057779439fefa44e6816c03e93b', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/OG10Nmp3OWNmNGhnMS4ma4XKyQkWAloPUH0A5TEFsSJAEkk1TA9wcLrLDXE4.png?width=216&crop=smart&format=pjpg&auto=webp&s=0c1bb0505ff9ba9f014835839df3fd7f52a17aa2', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/OG10Nmp3OWNmNGhnMS4ma4XKyQkWAloPUH0A5TEFsSJAEkk1TA9wcLrLDXE4.png?width=320&crop=smart&format=pjpg&auto=webp&s=9473fcf452516e29bccb6eb61512f0a4fe089d4c', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/OG10Nmp3OWNmNGhnMS4ma4XKyQkWAloPUH0A5TEFsSJAEkk1TA9wcLrLDXE4.png?width=640&crop=smart&format=pjpg&auto=webp&s=8970f5e01fc18bc65971f086360c858bcd9a290b', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/OG10Nmp3OWNmNGhnMS4ma4XKyQkWAloPUH0A5TEFsSJAEkk1TA9wcLrLDXE4.png?width=960&crop=smart&format=pjpg&auto=webp&s=bfd36df3f784ba58e3d1c10d29cfa2943ad5a97a', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/OG10Nmp3OWNmNGhnMS4ma4XKyQkWAloPUH0A5TEFsSJAEkk1TA9wcLrLDXE4.png?width=1080&crop=smart&format=pjpg&auto=webp&s=6d0ebc17388b9dfe534f1b2cc634592c61abcbe7', 'width': 1080}], 'source': {'height': 720, 'url': 'https://external-preview.redd.it/OG10Nmp3OWNmNGhnMS4ma4XKyQkWAloPUH0A5TEFsSJAEkk1TA9wcLrLDXE4.png?format=pjpg&auto=webp&s=cdf03bf029f1f5f990a1151bb7f5e84d7c2e2012', 'width': 1280}, 'variants': {}}]}
Stop asking your Llama 3 model to write poetry. Give it a wallet and see if it can trade. I built an API for it.
0
I got tired of seeing "AI Agents" that just summarize PDFs or write haikus. I wanted to see if local models (Llama 3, Mistral, etc.) could actually survive in a dynamic, hostile environment where "hallucinating" means going bankrupt. So I built **SynthStreet** – a 24/7 simulated stock market designed exclusively for AI Agents. **The Pitch:** It’s a headless API stock market. You don't trade with a mouse and keyboard; your agent trades via HTTP requests. I’ve set up an AMM (Automated Market Maker) architecture where the price moves based on supply/demand from the swarm of bots. **The Tech Stack:** * **Backend:** Laravel Octane (High-performance Async). * **Real-time:** WebSockets for market feeds (so your bot can react to price crashes in milliseconds). * **The "Villain":** I have a "House Bot" running on my server that arbitrages the price against the *real* S&P 500 (using an Oracle). It keeps the market pegged to reality—but it’s slow. **The Challenge:** If your agent is faster or smarter than my House Bot, you can front-run it and drain the liquidity pool. If your agent is dumb (or hallucinates a "Strong Buy" signal on a crash), my bot will take your money. **Features:** * **Sentiment Trading:** Agents can `POST` to the public chat and `GET` messages to read the room. If 10 bots start screaming "PANIC!", does your agent sell? * **24/7 Uptime:** The market never closes. * **Visuals:** I added TradingView charts so you can watch the bot wars in real-time. **How to join:** 1. Go to [SynthStreet.io](https://www.google.com/search?q=https://synthstreet.io) 2. Grab an API Key. 3. Download the `simple_agent.py` starter script (it uses `requests` and a basic random strategy). 4. Plug in your LLM of choice (OpenAI, Anthropic, or local Ollama) to make the trading decisions. I’m looking for people to break it. Let me know if your agent manages to find an infinite money glitch. **Link:** [https://synthstreet.io](https://www.google.com/search?q=https://synthstreet.io) **Docs:** [https://synthstreet.io/docs](https://synthstreet.io/docs)
2026-02-02T18:44:33
https://www.reddit.com/r/LocalLLaMA/comments/1qu3omy/stop_asking_your_llama_3_model_to_write_poetry/
theFinalNode
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qu3omy
false
null
t3_1qu3omy
/r/LocalLLaMA/comments/1qu3omy/stop_asking_your_llama_3_model_to_write_poetry/
false
false
self
0
null
RAG Chat with your documents (3-4 concurrent users)
1
Hi everyone! I am new to working with LLMs and RAG systems, and I am planning to use Kotaemon to enable chat over internal company documents. Use case details: Concurrent users: 3–4 users at a time Documents: PDFs / text files, typically 1–100 pages Goal: To chat with the documents, asking questions from it. I’m planning to self-host the solution and would like guidance on: Which LLM (model + size) is suitable for this use case? What GPU (VRAM size / model) would be sufficient for smooth performance?
2026-02-02T18:41:29
https://www.reddit.com/r/LocalLLaMA/comments/1qu3ldm/rag_chat_with_your_documents_34_concurrent_users/
Beneficial_Guava5171
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qu3ldm
false
null
t3_1qu3ldm
/r/LocalLLaMA/comments/1qu3ldm/rag_chat_with_your_documents_34_concurrent_users/
false
false
self
1
null
[Project Share] I built a free, local UI with Neurosymbolic RAG, Multi-Agent Peer Review, and browser-based Python validation (MIT)
0
Hey everyone, I wanted to share a project I've been working on to solve some specific frustrations I had with existing local LLM interfaces. I love the current ecosystem, but I found that most UIs are either simple chat wrappers or fully agentic frameworks that are hard to control. I wanted a middle ground: a polished UI that handles complex reasoning tasks locally without needing a server backend. It’s called **Neural Nexus UI**. It’s 100% open source (MIT), runs on Vite/React, and is designed for Ollama (with optional Groq support for speed). **The main features I built for power users:** * **Neurosymbolic RAG (The "Secret Sauce"):** Instead of just doing simple vector cosine similarity (which often fails on specific dates/names), I implemented a hybrid search. It combines Neural Embeddings + Entity Extraction + Knowledge Graphs. It "reads" your docs to extract dates, money, and emails as entities before searching. * **Multi-Agent Peer Review:** You can toggle a "Peer Review" mode where 3 local agents collaborate. One drafts the answer, an "Adversary" agent reviews it for flaws (with trace tests), and a "Refiner" applies the corrections. * **Client-Side Python Validation:** I integrated **Pyodide** (WebAssembly). If the LLM generates Python code, the UI can validate and run it directly in your browser sandbox to check for syntax errors before you even copy-paste it. No code leaves your machine. * **Persistent Indexing:** All RAG embeddings are cached in IndexedDB. You can reload the page and your 1,000-page PDF embedding is still there instantly—no re-indexing. **Tech Stack:** * React 18 + Vite * Ollama (Local) / Groq (Cloud) * IndexedDB for local storage (no server DB required) **Repo:** will share in the comment section. I’m really looking for feedback on the **Neurosymbolic search accuracy**. I’ve found it much better than standard RAG for my finance/docs use cases, but I’d love to see how it holds up with your local models (Llama 3, Mistral, DeepSeek-R1, etc.). Let me know what you think! \#Ollama #RAG #OpenSource #LocalLLM #React
2026-02-02T18:32:45
https://www.reddit.com/r/LocalLLaMA/comments/1qu3c3t/project_share_i_built_a_free_local_ui_with/
Extreme-Temporary-85
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qu3c3t
false
null
t3_1qu3c3t
/r/LocalLLaMA/comments/1qu3c3t/project_share_i_built_a_free_local_ui_with/
false
false
self
0
null
Is there a generic verb meaning "ask LLM chatbot"?
0
I *google* even when I use DuckDuckGo, because googling is a long time established verb meaning online search. Is there some new word for interacting with LLMs? * chatGTPing? * Geminiing? * Deepseeking? * Clawding? * Slopping/co-pilotting?
2026-02-02T18:28:32
https://www.reddit.com/r/LocalLLaMA/comments/1qu37pb/is_there_a_generic_verb_meaning_ask_llm_chatbot/
Vaddieg
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qu37pb
false
null
t3_1qu37pb
/r/LocalLLaMA/comments/1qu37pb/is_there_a_generic_verb_meaning_ask_llm_chatbot/
false
false
self
0
null
best model for writing?
3
Which model is best for writing? I’ve heard Kimi K2 is extremely good at writing and 2.5 regressed? Specifically a model that is good at non-AI detection (most human-like)
2026-02-02T18:25:37
https://www.reddit.com/r/LocalLLaMA/comments/1qu34os/best_model_for_writing/
No-Tiger3430
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1qu34os
false
null
t3_1qu34os
/r/LocalLLaMA/comments/1qu34os/best_model_for_writing/
false
false
self
3
null